BMBF framework programme for the promotion of empirical educational research

Literature database

Full record

Authors Senkbeil, Martin; Ihme, Jan Marten  
Title NEPS technical report for computer literacy - scaling results of starting cohort 4 in ninth grade.  
URL https://www.neps-data.de/Portals/0/Working Papers/WP-XVII.pdf; https://www.neps-data.de/Portals/0/Working Papers/Erratum_WP_XVII.pdf  
Year of publication 2012  
Number of pages 29 pp.  
Publisher Bamberg: Otto-Friedrich-Universität  
Series NEPS Working Paper. Volume 17  
Document type Monograph; discussion paper / working paper / conference contribution; online  
Supplements References, figures, tables, appendix  
Language English  
Research focus Educational Panel Study (NEPS)  
Keywords Educational research; Computer; Competence; Pupil; School year 09; Research design; Item response theory; Scaling; Modelling; Empirical study; Quantitative method; Germany  
Abstract The National Educational Panel Study (NEPS) aims at investigating the development of competences across the whole life span, and tests for assessing the different competence domains are being developed. In order to evaluate the quality of these competence tests, a wide range of analyses have been performed based on Item Response Theory (IRT). This paper describes the computer literacy data of starting cohort 4 in ninth grade. In addition to descriptive statistics of the data, the scaling model applied to estimate competence scores, the analyses performed to investigate the quality of the scale, and the results of these analyses are presented. The computer literacy test in ninth grade consisted of 36 items, which represented different cognitive requirements and used different response formats. The test was administered to 14,486 students. A partial credit model was used for scaling the data. Item fit statistics, differential item functioning, Rasch homogeneity, the test's dimensionality, and local item independence were evaluated to ensure the quality of the test. The results show that the items exhibited good item fit and measurement invariance across various subgroups. Moreover, the test showed a high reliability, and the different cognitive requirements form a unidimensional construct. Challenges of the test are the small number of very difficult items and the elevated number of items not reached by test takers due to time limits. In summary, the scaling procedures show that the test is a reliable instrument with satisfactory psychometric properties for assessing computer literacy. The paper describes the data available in the Scientific Use File and provides the ConQuest syntax for scaling the data. (DIPF/Orig.)  
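The scaling step summarized in the abstract (estimating competence scores with an IRT model) can be illustrated with a minimal sketch. The report itself fits a partial credit model in ConQuest; the toy code below instead fits the simpler dichotomous Rasch model by joint maximum likelihood on simulated data, so every name and number here is an illustrative assumption, not the authors' procedure.

```python
import numpy as np

# Toy IRT scaling sketch (NOT the report's ConQuest partial credit model):
# simulate dichotomous responses under a Rasch model and recover item
# difficulties by joint maximum likelihood with alternating Newton steps.
rng = np.random.default_rng(0)
n_persons, n_items = 500, 10
theta = rng.normal(0.0, 1.0, n_persons)   # simulated person abilities
beta = np.linspace(-2.0, 2.0, n_items)    # simulated item difficulties

def sigmoid(x):
    # Clip to avoid overflow for extreme ability estimates.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

# Rasch model: P(correct) depends only on ability minus difficulty.
p_true = sigmoid(theta[:, None] - beta[None, :])
X = (rng.random((n_persons, n_items)) < p_true).astype(int)

theta_hat = np.zeros(n_persons)
beta_hat = np.zeros(n_items)
for _ in range(100):
    # Newton step for persons: gradient sum(X - p), curvature sum(p(1-p)).
    pr = sigmoid(theta_hat[:, None] - beta_hat[None, :])
    theta_hat += (X - pr).sum(axis=1) / np.maximum((pr * (1 - pr)).sum(axis=1), 1e-9)
    theta_hat = np.clip(theta_hat, -6.0, 6.0)  # keep zero/perfect scorers finite
    # Newton step for items, then center difficulties to fix the scale.
    pr = sigmoid(theta_hat[:, None] - beta_hat[None, :])
    beta_hat -= (X - pr).sum(axis=0) / np.maximum((pr * (1 - pr)).sum(axis=0), 1e-9)
    beta_hat -= beta_hat.mean()

# Recovered difficulties should track the simulated ones closely.
print(np.corrcoef(beta, beta_hat)[0, 1])
```

With 500 simulated test takers the estimated difficulties correlate strongly with the generating ones; the quality checks named in the abstract (item fit, DIF, dimensionality) would then be run on such estimates.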
Funding code 01GJ0888