61445 | Creator | 210f2fe0a08b0fbfcda55d529f70583f
61445 | Creator | 30da15fb7e6d57dc7ca8f125ba056618
61445 | Date | 2015
61445 | Date | 2015-01-21
61445 | Is Part Of | repository
61445 | issuer | ext-028462f8f5f3c4cd5a73a39954833128
61445 | issuer | ext-0ef780eb1c830042a23ecf0f26d4e300
61445 | issuer | ext-10551a057688cd53a14ed02dd45d44b3
61445 | issuer | ext-1252afd3e7181ddc49ebc9efacf45c4d
61445 | issuer | ext-1ca4756cdbaf05c4f231db789da11491
61445 | issuer | ext-2a902878d4c18a4fe55d329f0ef2de81
61445 | issuer | ext-450ca90395569b0739927a189c89fda2
61445 | issuer | ext-467b2d4c5f78218f114a46b2be5ffdec
61445 | issuer | ext-4b8951d6f903280f0c90c5fa6790481a
61445 | issuer | ext-53322cf2a3949c2280dfd1a15fee67e3
61445 | issuer | ext-79a27b8045f2f8c560cbc2d928054cd5
61445 | issuer | ext-7d2bc514176c823e2b9c56c8bff5b0f1
61445 | issuer | ext-878fde1671f6436e6e53d7c2f2c264b2
61445 | issuer | ext-ab06faa625ff30f46a17b052634c3bc7
61445 | issuer | ext-aeed905dd0f0ddb92406f31c2c19e48a
61445 | issuer | ext-c52bd27ab2ff21e9b6ee0190989b9f78
61445 | issuer | ext-ceba2f2d577f100ecab910f7ead6415c
61445 | issuer | ext-d54ae7a3e455d927b0237fbd9852f1c8
61445 | issuer | ext-9149035637f12c8105ea36cfac7d58f0
61445 | issuer | ext-f920e8fe61b2efb65de8b7c84d28d4a8
61445 | issuer | ext-15f6494384ba4c3e9dc0486a79b9c48a
61445 | abstract | The current technological era has largely influenced the development of learning environments. As a result, there are new opportunities for teaching, learning and assessment. The emergence of Massive Open Online Courses (MOOCs), in particular, has attracted the attention of higher education institutions and course designers. MOOCs may give thousands of students the opportunity to learn from anywhere and at their convenience. Assessment is a component of the learning environment that drives student learning. However, only a small proportion of the existing literature on assessment investigates its use for the enhancement of educational growth, as most of the literature is concerned with how to use assessment for grading and ranking (Rowntree, 1987). Assessment plays a double role in learning: it motivates students to study in order to undertake it, and it provides the necessary feedback on their performance so that students can track their learning progress (Rowntree, 1987).
Research on MOOCs is currently growing, focusing on different aspects such as the “questionable course quality, high dropout rate, unavailable course credits, complex copyright, limited hardware and ineffective assessments” (Chen, 2014). Assessment in MOOCs has mostly been investigated from the perspective of how the grading load can be reduced by adopting automated techniques, what each technique aims to achieve and, finally, what new approaches might be able to assess high-level cognition. In short, researchers are currently testing tools that automatically score essays and give learners effective feedback (see Balfour, 2013). However, the learners’ voice and standpoint on the different assessment types in the MOOC context remain inconclusive in the current literature, and there is a need for more research.
This study explores learners’ views on assessment types in Massive Open Online Courses, whether any of these affects their enrolment in and completion of a course, and in what respects each type of assessment is effective in supporting their learning experience. Auto-assessment, peer-assessment and self-assessment are the types under investigation, as they are frequently used in MOOCs and are therefore the most commonly discussed in the literature (see Balfour, 2013; Suen, 2013; Wilkowski et al., 2014). The study draws upon literature on assessment in general and on assessment in MOOCs in particular. The concept of online communities, i.e. the learners who appear in MOOCs, will also be discussed in detail.
Online ethnographic approaches are employed to explore the issue in question, using online interviewing and observation methods. Thematic analysis is carried out on a sample of 12 MOOC participants from online interviews and 13 posts from online observations. The outcome of this qualitative study reveals that, even though participants identify benefits in peer assessment, they prefer automated assessment because it is a familiar and clear type of assessment for them. Moreover, self-assessment is not popular among participants. Learners’ comments also reveal that clear guidance on assessment helps them carry out peer assessment more effectively. Some learners also consider that combining assessment types may have a positive effect on students’ learning, as each type serves a different purpose.
61445 | authorList | authors
61445 | status | unpublished
61445 | status | peerReviewed
61445 | uri | http://data.open.ac.uk/oro/document/867210
61445 | uri | http://data.open.ac.uk/oro/document/867213
61445 | uri | http://data.open.ac.uk/oro/document/867214
61445 | uri | http://data.open.ac.uk/oro/document/867221
61445 | uri | http://data.open.ac.uk/oro/document/867222
61445 | uri | http://data.open.ac.uk/oro/document/867223
61445 | uri | http://data.open.ac.uk/oro/document/867224
61445 | type | Article
61445 | type | Thesis
61445 | label | Papathoma, Tina (2015). Investigating Different Types of Assessment in Massive Open Online Courses. MRes thesis The Open University.
61445 | Title | Investigating Different Types of Assessment in Massive Open Online Courses
61445 | in dataset | oro
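
The rows above are flat id | property | value records, with repeated properties (e.g. two Date values, several uri values). As an illustration only, a minimal sketch of grouping such rows into a multi-valued lookup; the sample rows are copied from the record above, and the variable names are illustrative, not part of the dataset:

```python
from collections import defaultdict

# Minimal sketch, assuming each row has the form "id | property | value".
rows = [
    "61445 | Date | 2015",
    "61445 | Date | 2015-01-21",
    "61445 | type | Thesis",
    "61445 | in dataset | oro",
]

record = defaultdict(list)
for row in rows:
    # Split on the first two pipes so values containing text stay intact.
    _id, prop, value = (part.strip() for part in row.split("|", 2))
    record[prop].append(value)

print(dict(record))
# {'Date': ['2015', '2015-01-21'], 'type': ['Thesis'], 'in dataset': ['oro']}
```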