Document type:
Report and critical book review
Title:
Same Test, Better Scores: Boosting the Reliability of Short Online Intelligence Recruitment Tests with Nested Logit Item Response Theory Models
Author(s):
Storme, Martin [Author]
Lille économie management - UMR 9221 [LEM]
Myszkowski, Nils [Author]
Baron, Simon [Author]
Bernard, David [Author]
Journal title:
Journal of Intelligence
Pagination:
17
Publisher:
MDPI
Publication date:
2019-09
ISSN:
2079-3200
Keyword(s) in English:
E-assessment
general mental ability
nested logit models
item-response theory
ability-based guessing
HAL discipline(s):
Humanities and Social Sciences/Business administration
Abstract in English: [en]
Assessing job applicants' general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in reasoning matrix-type tests. In the present research, we extended this result to a different context (online intelligence testing for recruitment) and a larger sample (N = 2949 job applicants). We found that the NLMs outperformed the Nominal Response Model (Bock, 1970) and provided significant reliability gains compared with their binary logistic counterparts. In line with previous research, the reliability gain was obtained especially at low ability levels. Implications and practical recommendations are discussed.
Language:
English
Popular science:
No
Collections:
Source:
Files:
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6789760/pdf
- Open access