BMBF Framework Programme for the Promotion of Empirical Educational Research

Literature database

Full record display

Authors Krannich, Maike; Jost, Odin; Rohm, Theresa; Koller, Ingrid; Carstensen, Claus H.; Fischer, Luise; Gnambs, Timo  
Institution LIfBi Leibniz-Institut für Bildungsverläufe  
Title NEPS technical report for reading. Scaling results of starting cohort 3 for grade 7.  
URL https://www.neps-data.de/Portals/0/Survey%20Papers/SP_XIV.pdf  
Year of publication 2017  
Number of pages 37 p.  
Publisher Bamberg: LIfBi Leibniz-Institut für Bildungsverläufe  
Series NEPS Survey Papers. Volume 14  
Document type Monograph; online  
Supplements References, figures, tables, appendix  
Language English  
Research focus Educational Panel (NEPS)  
Keywords Item response theory; Scaling; Reading competence; Competence measurement; Evaluation; Grade 7; Rasch model; Rasch analysis  
Abstract The National Educational Panel Study (NEPS) investigates the development of competences across the life span and develops tests for the assessment of different competence domains. In order to evaluate the quality of the competence tests, a range of analyses based on item response theory (IRT) are performed. This paper describes the data and scaling procedure for the reading competence test in grade 7 of starting cohort 3 (fifth grade). The reading competence test contained 42 items (distributed between an easy and a difficult booklet) with different response formats representing different cognitive requirements and text functions. The test was administered to 6,194 students. Their responses were scaled using the partial credit model. Item fit statistics, differential item functioning, Rasch homogeneity, the test's dimensionality, and local item independence were evaluated to ensure the quality of the test. These analyses showed that the test exhibited acceptable reliability and that the items fitted the model satisfactorily. Furthermore, test fairness could be confirmed for different subgroups. Limitations of the test were the large percentage of items at the end of the difficult booklet that were not reached due to time limits and minor differential item functioning between the easy and difficult test versions for some items. Overall, the reading competence test had acceptable psychometric properties that allowed for the estimation of reliable reading competence scores. Besides the scaling results, this paper also describes the data in the Scientific Use File and presents the ConQuest syntax for scaling the data. (Orig.)  
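Note The partial credit model mentioned in the abstract is the standard Masters (1982) formulation, stated here generically rather than in the paper's own notation. For an item i with score categories x = 0, ..., m_i, step parameters \delta_{ik}, and person ability \theta, the category probabilities are

P(X_i = x \mid \theta) = \frac{\exp\bigl(\sum_{k=0}^{x} (\theta - \delta_{ik})\bigr)}{\sum_{h=0}^{m_i} \exp\bigl(\sum_{k=0}^{h} (\theta - \delta_{ik})\bigr)}, \qquad \text{with } \sum_{k=0}^{0} (\theta - \delta_{ik}) \equiv 0.

For dichotomous items (m_i = 1) this reduces to the Rasch model, which is why Rasch homogeneity is among the properties evaluated.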
Grant number 01GJ0888