Judge response theory? A call to upgrade our psychometrical account of creativity judgments
Document type:
Report and critical book review
Title:
Judge response theory? A call to upgrade our psychometrical account of creativity judgments
Author(s):
Myszkowski, Nils [Author]
Pace University
Storme, Martin [Author]
Laboratoire Adaptations Travail Individu [LATI - EA 4469]
Lille économie management - UMR 9221 [LEM]
Journal title:
Psychology of Aesthetics, Creativity, and the Arts
Pages:
167–175
Publisher:
American Psychological Association
Publication date:
2019
ISSN:
1931-3896
HAL discipline(s):
Humanities and Social Sciences/Business administration
English abstract: [en]
The Consensual Assessment Technique (CAT)—more generally, using product creativity judgments—is a central and actively debated method to assess product and individual creativity. Despite a constant interest in strategies to improve its robustness, we argue that most psychometric investigations and scoring strategies for CAT data remain constrained by a flawed psychometrical framework. We first describe how our traditional statistical account of multiple judgments, which largely revolves around Cronbach’s α and sum/average scores, poses conceptual and practical problems—such as misestimating the construct of interest, misestimating reliability and structural validity, underusing latent variable models, and reducing judge characteristics as a source of error—that are largely imputable to the influence of classical test theory. Then, we propose that the item–response theory framework, traditionally used for multi-item situations, be transposed to multiple-judge CAT situations in Judge Response Theory (JRT). After defining JRT, we present its multiple advantages, such as accounting for differences in individual judgment as a psychological process—rather than as random error—giving a more accurate account of the reliability and structural validity of CAT data and allowing the selection of complementary—not redundant—judges. The comparison of models and their availability in statistical packages are notably discussed as further directions.
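The core idea of the abstract—treating each judge like an "item" in an item–response model, so that judge severity is estimated as a parameter rather than averaged away as error—can be illustrated with a minimal simulation. The sketch below is not the authors' implementation: it assumes dichotomous creative/not-creative judgments, a hypothetical panel of 5 judges rating 200 products, and a simple Rasch-style model fitted by joint maximum likelihood with gradient ascent. Real JRT applications would typically use polytomous rating-scale models and dedicated IRT software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 products each rated creative (1) or not (0) by 5 judges.
n_products, n_judges = 200, 5
theta_true = rng.normal(0.0, 1.0, n_products)          # latent product creativity
b_true = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])         # judges differ in severity

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Rasch-style response model: P(judge j rates product i creative) = sigmoid(theta_i - b_j)
p = sigmoid(theta_true[:, None] - b_true[None, :])
X = (rng.random((n_products, n_judges)) < p).astype(float)

# Joint maximum-likelihood estimation by simultaneous gradient ascent.
theta = np.zeros(n_products)
b = np.zeros(n_judges)
lr = 1.0
for _ in range(300):
    p_hat = sigmoid(theta[:, None] - b[None, :])
    resid = X - p_hat                 # gradient of the Bernoulli log-likelihood
    theta += lr * resid.mean(axis=1)  # update product creativity estimates
    b -= lr * resid.mean(axis=0)      # update judge severity estimates
    b -= b.mean()                     # identify the scale: severities centered at 0

# Each judge now has an explicit severity estimate, rather than all judges
# being treated as interchangeable replicates of one another.
print(np.round(b, 2))
```

Under this (assumed) model, a severe judge gets a high `b`, a lenient judge a low one, and product scores `theta` are adjusted for which judges rated them—the contrast with sum/average CAT scoring that the abstract draws.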
Language:
English
Popularization:
No