BMBF Framework Programme for the Promotion of Empirical Educational Research

Literature database

Authors Senkbeil, Martin; Ihme, Jan Marten  
Institution Leibniz-Institut für Bildungsverläufe, Nationales Bildungspanel  
Title NEPS technical report for computer literacy. Scaling results of starting cohort 6-adults.  
URL https://www.lifbi.de/Portals/0/Working Papers/WP_LXI.pdf  
Year of publication 2015  
Number of pages 28  
Publisher Bamberg: Leibniz Institute for Educational Trajectories, National Educational Panel Study  
Series NEPS Working Paper, Vol. 61  
Document type Monograph; grey literature; online  
Supplements References, figures, tables, appendix  
Language English  
Research focus Educational panel (NEPS)  
Keywords Competence; competence measurement; competence development; electronic data processing; software; item response theory; data analysis; adults; Rasch model; multidimensional analysis  
Abstract The National Educational Panel Study (NEPS) aims to investigate the development of competencies across the whole life span. Furthermore, NEPS develops tests for assessing the different competence domains. In order to evaluate the quality of the competence tests, a wide range of analyses based on item response theory (IRT) have been performed. This paper describes the computer literacy data of Starting Cohort 6–Adults (Wave 4). Apart from descriptive statistics of the data, the scaling model applied to estimate competence scores, the analyses performed to investigate the quality of the scale, and the results of these analyses are presented here. The computer literacy test in Starting Cohort 6 (Adults) consisted of 29 multiple-choice items representing different cognitive requirements and software applications. The test was administered to 6,923 adults; of these, 6,138 completed the assessment in this domain. A Rasch model was used for scaling the data. Item fit statistics, differential item functioning, Rasch homogeneity, the test's dimensionality, and local item independence were evaluated to ensure the quality of the test. The results show that the items exhibited good item fit and measurement invariance across various subgroups. Moreover, the test showed good reliability, and the different cognitive requirements form a unidimensional construct. In summary, the scaling procedures show that the test is a reliable instrument with satisfactory psychometric properties for assessing computer literacy. In this paper, the data available in the Scientific Use File are described, and ConQuest syntax for scaling the data is provided. (Orig.)  
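Note on the scaling model: the abstract states that a Rasch model was used to scale the multiple-choice items (dichotomous scoring is assumed here). As a point of reference only, the standard dichotomous Rasch model, not quoted from the report itself, specifies the probability of a correct response as

    P(X_{vi} = 1 \mid \theta_v, \beta_i) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)}

where \theta_v denotes the latent computer literacy of person v and \beta_i the difficulty of item i. According to the abstract, ConQuest syntax for scaling the Scientific Use File data is provided in the report.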
Funding reference number 01GJ0888