Even the best methods can be irrelevant if used improperly. In this article, Steele, Carmel, Cross, and Wilcox discuss how failing to align criteria weights with the scales of performance indicators can skew the results of otherwise scientifically sound Multi-Criteria Decision Analysis (MCDA) methods. Written in the context of environmental decision-making, the article evaluates the sensitivity of final rankings to the scales of performance indicators and the choice of criteria weights. The authors place specific emphasis on how this relates to the Analytic Hierarchy Process (AHP), a widely used MCDA method.
The authors discuss the importance of correctly scoring the criteria. If two criteria are given the same weight, their performance indicators should be measured on commensurate scales for the criteria to carry equal importance. Much emphasis has traditionally been placed on sensitivity analysis, which shows how tolerant the final scores are to changes in each criterion's weight. The article points out, however, that changes in the indicators used to measure performance on the criteria have an overlooked but significant influence on the results. In environmental problems – the focus of this case study – it is common for an MCDA to include multiple criteria with differing measures of performance. The scaling of a criterion's indicator can easily skew the results, meaning an option's final ranking is sensitive to how widely or narrowly the analyst scales the indicator.
Yet this holds only when the weights are kept constant. For the technique to be accurate, the scaling of performance indicators must be calibrated against the weighting. According to the article, researchers must “appreciate and make explicit in their methodology the fact that criteria weights, taken on their own, are meaningless.” Rather than conducting further sensitivity analyses of weights and scales independently of each other, researchers would benefit from considering the interplay of these two factors from the outset. If they do not properly account for the relationship between weighting and scaling, researchers risk increasing the arbitrariness of their analysis. Without clearly defined indicator scales, stakeholders, especially those with limited understanding of the method, are likely to hold conflicting ideas of the scales' importance, decreasing the effectiveness of the technique.
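The effect described above can be illustrated with a toy weighted-sum example (the numbers and option names here are hypothetical, not from the article): holding the weights fixed while narrowing one indicator's scale is enough to flip the final ranking.

```python
# Toy weighted-sum MCDA: two options scored on two equally weighted
# criteria. Rescaling one indicator (same underlying data, different
# range) flips the ranking even though the weights never change.

def weighted_score(scores, weights):
    """Simple additive aggregation: sum of weight * indicator score."""
    return sum(w * s for w, s in zip(weights, scores))

weights = [0.5, 0.5]  # both criteria nominally "equally important"

# Indicator scores on a [0, 1] scale for options A and B.
a = [0.9, 0.2]
b = [0.4, 0.6]

print(weighted_score(a, weights))  # 0.55 -> A ranks first
print(weighted_score(b, weights))  # 0.50

# The analyst now maps the same raw data for criterion 1 onto a
# narrower [0, 0.5] scale, halving those scores.
a_narrow = [0.45, 0.2]
b_narrow = [0.20, 0.6]

print(weighted_score(a_narrow, weights))  # 0.325
print(weighted_score(b_narrow, weights))  # 0.40 -> B now ranks first
```

The weights say the criteria are equally important, but the effective importance of criterion 1 was silently halved by the rescaling, which is exactly the interaction the authors argue must be made explicit.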
This problem is intensified by multi-criteria methods that obscure the relationship between weighting and scaling. According to the article, this is the predominant issue with AHP. The authors offer several options for improving the process of assigning weights to criteria, including using the SMART multi-criteria model and asking decision-makers to answer pairwise comparisons to determine the relative importance of each criterion.
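As a sketch of how pairwise comparisons yield criteria weights, the snippet below uses the geometric-mean row approximation of the AHP priority vector; the comparison values are illustrative assumptions, not figures from the article.

```python
from math import prod

# Hypothetical pairwise comparison matrix for three criteria.
# Entry [i][j] states how many times more important criterion i is
# than criterion j, on a Saaty-style 1-9 scale (illustrative only).
comparisons = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

# Geometric-mean approximation of the AHP priority vector:
# take the geometric mean of each row, then normalize to sum to 1.
geo_means = [prod(row) ** (1 / len(row)) for row in comparisons]
total = sum(geo_means)
weights = [g / total for g in geo_means]

print([round(w, 3) for w in weights])  # weights sum to 1
```

Even with weights derived this carefully, the article's central caveat still applies: the resulting numbers are meaningful only relative to the scales of the indicators they multiply.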
The authors conclude that stakeholders and decision modelers can overcome these challenges by discussing the importance of calibrating performance-indicator scales and by developing a better understanding of how changes to these scales affect the relative rankings that result.
Overall, the article is useful in highlighting an issue with common applications of MCDA. However, it does not go into much detail about specific techniques for addressing the issue. Still, awareness of the problem is likely to help decision-makers and analysts assign more accurate values, thereby increasing the accuracy of the method.
Steele, K., Carmel, Y., Cross, J., & Wilcox, C. (2009). Uses and misuses of multi-criteria decision analysis (MCDA) in environmental decision making. Risk Analysis, 29(1). https://doi.org/10.1111/j.1539-6924.2008.01130.x