BMBF Framework Programme for the Promotion of Empirical Educational Research

Literature Database

Full Record

Authors Senkbeil, Martin; Ihme, Jan Marten; Adrian, Esther Dameria
Institution Leibniz-Institut für Bildungsverläufe; Nationales Bildungspanel
Title NEPS technical report for computer literacy. Scaling results of starting cohort 3 in grade 6 (Wave 2).
URL https://www.neps-data.de/Portals/0/Working Papers/WP_XXXIX.pdf
Year of publication 2014
Pages 25 pp.
Publisher Bamberg: Leibniz Institute for Educational Trajectories (LIfBi)
Series NEPS working paper. Vol. 39
Document type Monograph; grey literature; online
Supplementary material References, figures, tables, appendix
Language English
Research focus National Educational Panel Study (NEPS)
Keywords Competence measurement; Quality; Test; Item response theory; Computer literacy; Grade 6; Analysis of variance; Scaling
Abstract The National Educational Panel Study (NEPS) aims at investigating the development of competences across the whole life span and develops tests for assessing the different competence domains. To evaluate the quality of these competence tests, a wide range of analyses based on Item Response Theory (IRT) have been performed. This paper describes the data of the computer literacy test of Starting Cohort 3 in Grade 6 (Wave 2). In addition to descriptive statistics, it presents the scaling model applied to estimate competence scores, the analyses performed to investigate the quality of the scale, and the results of these analyses. The computer literacy test in Grade 6 consisted of 30 multiple-choice items representing different cognitive requirements and software applications. The test was administered to 4,872 students, and the data were scaled using a Rasch model. Item fit statistics, differential item functioning, Rasch homogeneity, the test's dimensionality, and local item independence were evaluated to ensure the quality of the test. The results show that the items exhibited good item fit and measurement invariance across various subgroups; moreover, the test showed acceptable reliability, and the different comprehension requirements form a unidimensional construct. Challenges of the test are the small number of very difficult items and its relatively low reliability. In summary, the scaling procedures show that the test is a reliable instrument with satisfactory psychometric properties for assessing computer literacy. The paper also describes the data available in the Scientific Use File and provides the ConQuest syntax for scaling the data. (Orig.). An illustrative Rasch scaling sketch follows the record below.
Funding reference number 01GJ0888
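
As a rough illustration of the Rasch scaling named in the abstract, the following Python sketch fits a Rasch model to simulated dichotomous responses by joint maximum likelihood. All names, dimensions, and the simulated data are assumptions made for illustration only; the report itself uses the actual NEPS data and provides ConQuest syntax, not this simplified estimation.

# Illustrative only: a minimal joint-maximum-likelihood Rasch fit on simulated
# data, sketching the kind of scaling model named in the abstract. It is NOT
# the ConQuest syntax or the marginal estimation used for the actual NEPS data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical dimensions: 30 dichotomous items, as in the Grade 6 test.
n_persons, n_items = 500, 30
theta_true = rng.normal(0.0, 1.0, n_persons)   # simulated person abilities
beta_true = np.linspace(-2.0, 2.0, n_items)    # simulated item difficulties

def p_correct(theta, beta):
    # Rasch model: P(X_pi = 1) = exp(theta_p - beta_i) / (1 + exp(theta_p - beta_i))
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))

# Simulate correct/incorrect responses under the true parameters.
X = (rng.random((n_persons, n_items)) < p_correct(theta_true, beta_true)).astype(float)

def nll_and_grad(params):
    # Joint negative log-likelihood of all person and item parameters,
    # plus its analytic gradient (residuals X - P summed over items/persons).
    theta, beta = params[:n_persons], params[n_persons:]
    p = p_correct(theta, beta)
    nll = -np.sum(X * np.log(p) + (1.0 - X) * np.log(1.0 - p))
    resid = X - p
    grad = np.concatenate([-resid.sum(axis=1), resid.sum(axis=0)])
    return nll, grad

# Bounds keep estimates finite for persons with all-correct/all-incorrect patterns.
bounds = [(-6.0, 6.0)] * (n_persons + n_items)
fit = minimize(nll_and_grad, np.zeros(n_persons + n_items),
               method="L-BFGS-B", jac=True, bounds=bounds)

# Center the item difficulties (the usual identification constraint) and compare.
beta_hat = fit.x[n_persons:]
beta_hat -= beta_hat.mean()
print("Correlation of true vs. estimated item difficulties:",
      round(float(np.corrcoef(beta_true, beta_hat)[0, 1]), 3))

Joint maximum likelihood is used here only to keep the sketch self-contained; a marginal estimation with a latent ability distribution, as implemented in ConQuest, would be the closer analogue to the procedure described in the report.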