Publication Type

Journal Article

Publication Date

3-2008

Abstract

Across many areas of psychology, concordance is commonly used to measure (intragroup) agreement when a group of judges ranks a number of items. Sometimes, however, the judges come from multiple groups; in those situations, the interest lies in measuring the concordance between groups, under the assumption that there is some within-group concordance. In this investigation, existing methods are compared under a variety of scenarios. Permutation theory is used to calculate the error rates and the power of the methods. Missing-data situations are also studied. The results indicate that the performance of the methods depends on (a) the number of items to be ranked, (b) the level of within-group agreement, and (c) the level of between-group agreement. Overall, using the actual ranks of the items gives better results than using pairwise comparisons of rankings. Missing data lead to a loss of statistical power, and in some cases the loss is substantial. The degree of power loss depends on the missingness mechanism and the method of imputing the missing data, among other factors.
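The within-group agreement the abstract refers to is typically quantified by Kendall's coefficient of concordance (W), listed in the keywords below. As a minimal illustrative sketch (not the paper's method), W for m judges ranking n items can be computed from the column rank sums, W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of the rank sums from their mean:

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's W for a (m judges x n items) matrix of rankings 1..n per row.

    W = 12 * S / (m^2 * (n^3 - n)), where S is the sum of squared
    deviations of the per-item rank sums from their mean. W ranges
    from 0 (no agreement) to 1 (perfect agreement); ties are ignored
    in this simplified sketch.
    """
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                      # R_j, one per item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()    # deviation sum S
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three judges in perfect agreement -> W = 1.0
judges = [[1, 2, 3, 4],
          [1, 2, 3, 4],
          [1, 2, 3, 4]]
print(round(kendalls_w(judges), 3))  # -> 1.0
```

Intergroup versions of this idea, which the article compares, combine such within-group statistics across two or more groups of judges.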

Keywords

concordance, intergroup, Kendall's W, missing data, ranking experiment

Discipline

Econometrics | Psychology

Research Areas

Econometrics

Publication

Psychological Methods

Volume

13

Issue

1

First Page

58

Last Page

71

ISSN

1082-989X

Identifier

10.1037/1082-989X.13.1.58

Creative Commons License

Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

Additional URL

http://doi.org/10.1037/1082-989X.13.1.58
