BMBF Framework Programme for the Promotion of Empirical Educational Research

Literature Database

Full Record

Authors Rohm, Theresa; Freund, Micha; Gnambs, Timo; Fischer, Luise
Institution LIfBi Leibniz-Institut für Bildungsverläufe  
Title NEPS technical report for listening comprehension. Scaling results of Starting Cohort 3 for Grade 9.
URL https://www.neps-data.de/Portals/0/Survey%20Papers/SP_XXI.pdf
Year of publication 2017
Pages 27 pp.
Publisher Bamberg: LIfBi Leibniz-Institut für Bildungsverläufe
Series NEPS Survey Papers. Vol. 21
Document type Monograph; online
Supplements References, figures, tables, appendix
Language English
Research focus National Educational Panel Study (NEPS)
Keywords Competence acquisition; Competence development; Evaluation; Item response theory; Listening comprehension exercise; Grade 5; Multiple-choice method; Rasch model; Rasch analysis
Abstract The National Educational Panel Study (NEPS) investigates the development of competencies from early childhood to late adulthood. To this end, tests for the assessment of different competence domains are developed. To evaluate the quality of these tests, various analyses based on item response theory (IRT) are performed. This report describes the data and scaling procedures for the listening comprehension test in Starting Cohort 3 (fifth grade) for Grade 9. The listening comprehension test contained 16 items with complex multiple-choice response formats that asked respondents about details of two spoken texts. The test was administered to 4,588 students. Their responses were scaled using the partial credit model. Item fit statistics, differential item functioning, Rasch homogeneity, the test's dimensionality, and local item independence were evaluated to ensure the quality of the test. These analyses showed that the test exhibited acceptable reliability and that the items fitted the model satisfactorily. Furthermore, test fairness could be confirmed for different subgroups. The amount of missing responses was negligible; in particular, items that were not reached by the respondents were rare. Challenges of the test included the large number of items targeted toward a lower listening comprehension ability. Further challenges arose from dimensionality analyses based on the different cognitive requirements of the items. Overall, the listening comprehension test had acceptable psychometric properties that supported the estimation of reliable listening comprehension scores. Besides the scaling results, this paper also describes the data available in the Scientific Use File and presents the ConQuest syntax for scaling the data (Orig.). A schematic sketch of such a ConQuest scaling run is given after this record.
Funding reference number 01GJ0888
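
Since the abstract notes that the paper presents the ConQuest syntax used for scaling, the following is a minimal, hypothetical sketch of what a partial credit model specification in ACER ConQuest typically looks like. The file names, column positions, and output names are placeholder assumptions, not the syntax published in the paper.

    /* Hypothetical ConQuest control file for a partial credit model. */
    /* File names and column ranges are placeholders, not taken from  */
    /* the paper; only the model statement reflects the PCM itself.   */
    datafile listening_sc3_g9.dat;      /* scored item responses      */
    format id 1-7 responses 9-24;       /* 16 response columns        */
    labels << listening_items.lab;      /* item labels (placeholder)  */
    model item + item*step;             /* partial credit model       */
    estimate;                           /* marginal ML, the default   */
    show >> listening.shw;              /* item and person estimates  */
    itanal >> listening.itn;            /* classical item statistics  */

The statement "model item + item*step;" is what distinguishes the partial credit model from the plain dichotomous Rasch model ("model item;"); the syntax actually used for Starting Cohort 3 is documented in the paper itself.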