BMBF Framework Programme for the Promotion of Empirical Educational Research

Literature database

Full record view

Authors Pohl, Steffi; Haberkorn, Kerstin; Hardt, Katinka
Institution Leibniz-Institut für Bildungsverläufe; Nationales Bildungspanel
Title NEPS technical report for reading. Scaling results of starting cohort 5 for first-year students in main study 2010/11.
URL https://www.neps-data.de/Portals/0/Working Papers/WP_XXXIV.pdf  
Year of publication 2014
Number of pages 29
Publisher Bamberg: Leibniz Institute for Educational Trajectories (LIfBi)
Series NEPS Working Paper. Vol. 34
Document type Monograph; grey literature; online
Supplementary material References, tables, figures, appendix
Language English
Research focus Educational panel (NEPS)
Keywords Reading competence; quality; test item; item response theory; scaling
Abstract The National Educational Panel Study (NEPS) aims at investigating the development of competencies across the whole life span and at developing tests for assessing different competence domains. In order to evaluate the quality of the competence tests, a wide range of analyses have been performed based on item response theory (IRT). This paper describes the data and scaling procedures for the students' reading competence data in Starting Cohort 5. The reading competence test for the students contains 29 reading items with different response formats representing different cognitive requirements and text functions. The test was administered to 7,085 students, and the data were scaled using the partial credit model. Item fit statistics, differential item functioning, Rasch homogeneity, the test's dimensionality, and local item independence were evaluated to ensure the quality of the test. The results showed that the test exhibits acceptable reliability and that the items fit the model in a satisfactory way. Furthermore, test fairness could be confirmed for different subgroups. Challenges of the test include the large number of items targeted toward lower reading ability as well as the large percentage of items at the end of the test that were not reached due to time limits. Further challenges arise from dimensionality analyses based on both text functions and cognitive requirements. Overall, the reading test had acceptable psychometric properties, and the results on the quality of the scale support the estimation of a reliable reading competence score. Besides the scaling results, this paper describes the data available in the Scientific Use File and presents the ConQuest syntax for scaling the data. (Orig.)
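Note: The abstract states that the data were scaled with the partial credit model. For reference only, the standard textbook form of that model (Masters, 1982) is sketched below; it is not reproduced from the report, and the notation (theta_n, delta_ik, m_i) is chosen here for illustration. The probability that person n obtains score x on item i is

P(X_{ni} = x \mid \theta_n) = \frac{\exp\left(\sum_{k=0}^{x} (\theta_n - \delta_{ik})\right)}{\sum_{h=0}^{m_i} \exp\left(\sum_{k=0}^{h} (\theta_n - \delta_{ik})\right)}, \qquad x = 0, 1, \ldots, m_i,

with the convention \sum_{k=0}^{0} (\theta_n - \delta_{ik}) \equiv 0, where \theta_n denotes the person's reading ability, \delta_{ik} the k-th step parameter of item i, and m_i the maximum score of item i. The exact parameterization used in the report (for example, ConQuest's item-plus-step formulation) may differ from this sketch.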
Funding reference number 01GJ0888