BMBF Framework Programme for the Promotion of Empirical Educational Research

Literature Database

Full Record

Authors Fischer, Luise; Rohm, Theresa; Gnambs, Timo
Institution Leibniz-Institut für Bildungsverläufe LIfBi  
Title NEPS technical report for mathematics. Scaling results of starting cohort 4 for grade 12.
URL https://www.neps-data.de/Portals/0/Survey Papers/SP_XII.pdf  
Year of publication 2017
Pages 34 pp.
Publisher Bamberg: Leibniz-Institut für Bildungsverläufe LIfBi
Series NEPS Survey Papers, Vol. 12
Document type Monograph; online
Supplementary material References, figures, tables, appendix
Language English
Research focus National Educational Panel Study (NEPS)
Keywords Competence development; Item response theory; Mathematical competence; Grade 12; Cognitive competence; Quantitative analysis; Quantitative data; Psychometrics
Abstract The National Educational Panel Study (NEPS) investigates the development of competencies across the life span and develops tests for the assessment of different competence domains. In order to evaluate the quality of the competence tests, a range of analyses based on item response theory (IRT) were performed. This paper describes the data and scaling procedures for the mathematical competence test in grade 12 of starting cohort 4 (ninth grade). The mathematical competence test contained 31 items (distributed across an easy and a difficult booklet containing 21 items each) with different response formats representing different cognitive requirements and different content areas. The test was administered to 5,733 students. Their responses were scaled using the partial credit model. Item fit statistics, differential item functioning, Rasch homogeneity, the test's dimensionality, and local item independence were evaluated to ensure the quality of the test. These analyses showed that the test exhibited acceptable reliability and that all items but one fitted the model in a satisfactory way. Furthermore, test fairness could be confirmed for different subgroups. Limitations of the test were the number of items targeting lower and higher mathematical ability, as well as the large percentage of items in the difficult booklet at the end of the test that were not reached due to time limits. Further challenges related to the dimensionality analyses based on the four content areas. Overall, the mathematics test had acceptable psychometric properties that allowed for a reliable estimation of mathematics competence scores. Besides the scaling results, this paper also describes the data available in the scientific use file and presents the ConQuest syntax for scaling the data. (Orig.)
Funding reference number 01GJ0888
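
For orientation, a minimal sketch of the partial credit model (Masters, 1982) mentioned in the abstract; the notation used here (person ability θ, step parameters δ_ik, maximum item score m_i) is standard IRT convention and not taken from the report itself:

$$
P(X_i = x \mid \theta) \;=\; \frac{\exp\!\left(\sum_{k=0}^{x} (\theta - \delta_{ik})\right)}{\sum_{h=0}^{m_i} \exp\!\left(\sum_{k=0}^{h} (\theta - \delta_{ik})\right)}, \qquad x = 0, 1, \ldots, m_i,
$$

with the convention $\sum_{k=0}^{0} (\theta - \delta_{ik}) \equiv 0$, so that dichotomous items (m_i = 1) reduce to the Rasch model.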