BMBF Framework Programme for the Promotion of Empirical Educational Research

Literature database

Full record

Authors Haberkorn, Kerstin; Pohl, Steffi; Carstensen, Claus H.  
Title Incorporating different response formats of competence tests in an IRT model.  
URL http://www.psychologie-aktuell.com/fileadmin/download/ptam/2-2016_20160627/01_Haberkorn.pdf  
Year of publication 2016, Vol. 58, No. 2  
Pages pp. 223-252  
Journal Psychological Test and Assessment Modeling  
ISSN 2190-0493; 2190-0507  
Document type Journal article; print; online  
Additional material References, figures, tables  
Language English  
Research focus Educational Panel Study (NEPS)  
Keywords Competence measurement; test item; answer sheet; scaling; item response theory; item analysis; dimensionality analysis  
Abstract Competence tests within large-scale assessments usually contain various task formats to measure the participants’ knowledge. Two response formats that are frequently used are simple multiple choice (MC) items and complex multiple choice (CMC) items. Whereas simple MC items comprise a number of response options with one being correct, CMC items consist of several dichotomous true-false subtasks. When incorporating these response formats in a scaling model, they are mostly assumed to be unidimensional. In empirical studies different empirical and theoretical schemes of weighting CMC items in relation to MC items have been applied to construct the overall competence score. However, the dimensionality of the two response formats and the different weighting schemes have only rarely been evaluated. The present study, thus, addressed two questions of particular importance when implementing MC and CMC items in a scaling model: Do the different response formats form a unidimensional construct and, if so, which of the weighting schemes considered for MC and CMC items appropriately models the empirical competence data? Using data of the National Educational Panel Study, we analyzed scientific literacy tests embedding MC and CMC items. We cross-validated the findings on another competence domain and on another large-scale assessment. The analyses revealed that the different response formats form a unidimensional measure across contents and studies. Additionally, the a priori weighting scheme of one point for MC items and half points for each subtask of CMC items best modeled the response formats’ impact on the competence score and resembled the empirical competence data well. (Orig.).  
Funding reference number 01GJ0888
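
The a priori weighting scheme that the abstract reports as fitting the empirical data best (one point per correct simple MC item, half a point per correct true/false subtask of a CMC item) can be illustrated with a minimal sketch. The following Python snippet is not the authors' NEPS scaling code, which estimates competence scores within an IRT model; it only shows how the weighted raw score implied by that scheme would be computed. The function name, item identifiers, and data layout are hypothetical.

    # Minimal sketch of the a priori weighting scheme described in the abstract:
    # 1 point per correct simple multiple choice (MC) item,
    # 0.5 points per correct subtask of a complex multiple choice (CMC) item.
    # Item names and data layout are hypothetical, for illustration only.
    from typing import Dict, List

    def weighted_raw_score(mc_correct: Dict[str, bool],
                           cmc_subtasks_correct: Dict[str, List[bool]]) -> float:
        """Return one test taker's weighted raw score."""
        # Simple MC items: one full point for each correct response.
        score = sum(1.0 for correct in mc_correct.values() if correct)
        # CMC items: each correct true/false subtask contributes half a point.
        for subtasks in cmc_subtasks_correct.values():
            score += 0.5 * sum(subtasks)
        return score

    # Hypothetical example: 3 MC items and 2 CMC items with 4 subtasks each.
    mc = {"mc1": True, "mc2": False, "mc3": True}
    cmc = {"cmc1": [True, True, False, True], "cmc2": [False, True, True, True]}
    print(weighted_raw_score(mc, cmc))  # 2 MC points + 0.5 * 6 subtasks = 5.0

In the study itself, this weighting is applied to the response formats within the scaling model rather than as a simple raw-score sum; the sketch only makes the relative weights of MC items and CMC subtasks concrete.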