id | property | value
70850 | Creator | c9d52ef4a11ad1277436a89c20d0a5d4
70850 | Creator | 71122b7715e55a3f53d5c14540adcbbb
70850 | Creator | ext-800aaa60919ea4a6ba0d5980ded9c80b
70850 | Date | 2020
70850 | Is Part Of | p19326203
70850 | Is Part Of | repository
70850 | abstract | For decades, self-report measures based on questionnaires have been widely used in educational research to study implicit and complex constructs such as motivation, emotion, and cognitive and metacognitive learning strategies. However, potential biases in such self-report instruments may cast doubt on the validity of the measured constructs. The emergence of trace data from digital learning environments has sparked a controversial debate on how we measure learning. On the one hand, trace data might be perceived as “objective” measures that are independent of any biases. On the other hand, there is mixed evidence on how compatible trace data are with existing learning constructs, which have traditionally been measured with self-reports. This study investigates the strengths and weaknesses of different types of data when designing predictive models of academic performance based on computer-generated trace data and survey data. We investigate two types of bias in self-report surveys: response styles (i.e., a tendency to use the rating scale in a systematic way that is unrelated to the content of the items) and overconfidence (i.e., the difference between performance predicted from survey responses and performance predicted from a prior knowledge test). We found that response style bias accounts for a modest to substantial amount of variation in the outcomes of several self-report instruments, as well as in the course performance data. Only the trace data, notably those of the process type, stand out as independent of these response style patterns. The effect of overconfidence bias is limited. Given that empirical models in education typically aim to explain the outcomes of learning processes or the relationships between antecedents of these learning outcomes, our analyses suggest that the bias present in surveys adds predictive power in explaining performance data and other questionnaire data.
70850 | authorList | authors
70850 | issue | 6
70850 | status | published
70850 | status | peerReviewed
70850 | uri | http://data.open.ac.uk/oro/document/1148856
70850 | uri | http://data.open.ac.uk/oro/document/1148861
70850 | uri | http://data.open.ac.uk/oro/document/1148862
70850 | uri | http://data.open.ac.uk/oro/document/1148863
70850 | uri | http://data.open.ac.uk/oro/document/1148864
70850 | uri | http://data.open.ac.uk/oro/document/1148865
70850 | uri | http://data.open.ac.uk/oro/document/1156807
70850 | volume | 15
70850 | type | AcademicArticle
70850 | type | Article
70850 | label | Tempelaar, Dirk; Rienties, Bart and Nguyen, Quan (2020). Subjective data, objective data and the role of bias in predictive modelling: Lessons from a dispositional learning analytics application. PLoS ONE, 15(6), article no. e0233977.
70850 | Publisher | ext-72433582b3abfd9f3b74d94a6e694560
70850 | Title | Subjective data, objective data and the role of bias in predictive modelling: Lessons from a dispositional learning analytics application
70850 | in dataset | oro
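Since each record above is a set of pipe-separated id | property | value rows, with repeated properties such as Creator, uri, status, and type, a short parser can regroup the rows into one dictionary per record id. The sketch below is illustrative only and assumes the reconstructed row layout above; the helper name parse_records and the grouping strategy are assumptions, not part of any ORO or data.open.ac.uk tooling.

```python
# Minimal sketch (assumed layout): regroup flattened "id | property | value"
# rows into one dictionary per record id. parse_records is a hypothetical
# helper, not part of any ORO tooling.
from collections import defaultdict

def parse_records(lines):
    records = defaultdict(lambda: defaultdict(list))
    for line in lines:
        line = line.strip()
        if not line or line.startswith("id |"):
            continue  # skip blank lines and the header row
        rec_id, prop, value = (part.strip() for part in line.split("|", 2))
        # Store values in lists: properties such as Creator, uri, status,
        # and type occur more than once per record.
        records[rec_id][prop].append(value)
    return records

rows = [
    "70850 | Date | 2020",
    "70850 | uri | http://data.open.ac.uk/oro/document/1148856",
    "70850 | uri | http://data.open.ac.uk/oro/document/1148861",
]
parsed = parse_records(rows)
print(parsed["70850"]["Date"])      # ['2020']
print(len(parsed["70850"]["uri"]))  # 2
```

Splitting with maxsplit=2 leaves any pipes inside a long value (such as the abstract) intact, and list-valued properties preserve multi-valued fields like the seven document URIs above.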