BMBF Framework Programme for the Promotion of Empirical Educational Research

Literature database

Full record

Authors Haberkorn, Kerstin; Pohl, Steffi; Carstensen, Claus H.  
Title Scoring of complex multiple choice items in NEPS competence tests.  
URL https://doi.org/10.1007/978-3-658-11994-2_29  
DOI 10.1007/978-3-658-11994-2_29  
Year of publication 2016  
Edited volume Blossfeld, Hans-Peter (Ed.); von Maurice, Jutta (Ed.); Bayer, Michael (Ed.); Skopek, Jan (Ed.): Methodological issues of longitudinal surveys.  
Pages 523-540  
Publisher Wiesbaden: Springer VS  
ISBN 978-3-658-11992-8; 978-3-658-11994-2  
Document type Book chapter; print; online  
Supplements References  
Language English  
Research focus National Educational Panel Study (NEPS)  
Keywords Achievement measurement; Test; Multiple-choice method; Scaling; Item response theory  
Abstract In order to precisely assess the cognitive achievement and abilities of students, different types of items are often used in competence tests. In the National Educational Panel Study (NEPS), test instruments also consist of items with different response formats, mainly simple multiple choice (MC) items in which one answer out of four is correct and complex multiple choice (CMC) items comprising several dichotomous “yes/no” subtasks. The different subtasks of CMC items are usually aggregated to a polytomous variable and analyzed via a partial credit model. When developing an appropriate scaling model for the NEPS competence tests, different questions arose concerning the response formats in the partial credit model. Two relevant issues were how the response categories of polytomous CMC variables should be scored in the scaling model and how the different item formats should be weighted. In order to examine which aggregation of item response categories and which item format weighting best models the two response formats of CMC and MC items, different procedures of aggregating response categories and weighting item formats were analyzed in the NEPS, and the appropriateness of these procedures to model the data was evaluated using certain item fit and test fit indices. Results suggest that a differentiated scoring without an aggregation of categories of CMC items best discriminates between persons. Additionally, for the NEPS competence data, an item format weighting of one point for MC items and half a point for each subtask of CMC items yields the best item fit for both MC and CMC items. In this paper, we summarize important results of the research on the implementation of different response formats conducted in the NEPS. (Orig.).  
Funding reference 01GJ0888
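
The abstract above reports a concrete scoring rule: one point per simple multiple choice (MC) item, half a point per dichotomous subtask of a complex multiple choice (CMC) item, with CMC categories kept differentiated rather than collapsed. The following is a minimal, hypothetical Python sketch of that rule for illustration only; the function names and data layout are assumptions, not taken from any NEPS material, and the actual analyses embed these weights in a partial credit model rather than computing raw sum scores.

```python
# Hypothetical sketch of the scoring rule reported in the abstract:
# 1 point for a simple MC item, 0.5 points per CMC subtask.
# Names and data layout are illustrative assumptions.

def score_mc(response: int, key: int) -> float:
    """MC item: exactly one of four options is correct; 1 point if hit."""
    return 1.0 if response == key else 0.0

def cmc_category(subtasks: list[bool], keys: list[bool]) -> int:
    """Aggregate CMC subtasks to a polytomous variable: the category is the
    number of correctly answered yes/no subtasks (differentiated scoring,
    i.e. no collapsing of adjacent categories)."""
    return sum(r == k for r, k in zip(subtasks, keys))

def score_cmc(subtasks: list[bool], keys: list[bool]) -> float:
    """CMC item: half a point per correctly answered subtask."""
    return 0.5 * cmc_category(subtasks, keys)

# A CMC item with four subtasks, three answered correctly, falls into
# category 3 and contributes 1.5 score points; a correctly answered MC
# item contributes 1.0.
print(cmc_category([True, False, True, True], [True, False, False, True]))  # 3
print(score_cmc([True, False, True, True], [True, False, False, True]))     # 1.5
print(score_mc(2, 2))                                                        # 1.0
```

Under this weighting, a four-subtask CMC item carries at most two score points, twice the weight of a single MC item, which matches the reported finding that this balance yields the best item fit for both formats.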