Summary IDE, Blok 1 (Master Educational Sciences)

This document offers a comprehensive summary of the Instructional Design and Evaluation (IDE) lectures, making it a great preparation resource for the exam. Enjoy studying!

Document information

Uploaded on: March 14, 2024
Number of pages: 82
Written in: 2022/2023
Type: Summary


Content preview

Week 1 Web-lectures
The central question across the articles is: how can we apply theory to practice?
Instructional design & evaluation is about theories of learning, theories of instruction, and theories of evaluation; they all have to align.



Introduction 1.1 Theories – Bert Slof

Learning Pyramid (more of a Learning PyraMYTH)

There isn't any scientific literature backing this up, so be a critical reader and take a critical stance towards such infographics/literature.



Where is the theory in educational practices?

(Bertolini et al., XX) (Margaryan et al., 2015)





How can we apply theory to educational practices?

The first article was about instructional design and how to apply it to MOOCs (Margaryan et al.). They differentiated between cMOOCs and xMOOCs. cMOOCs were more or less the first version of the MOOC, usually offered by (commercial) companies. They have certain kinds of characteristics (visualised in the table; you don't have to remember the blue columns by heart).

Compared to that, you see a tendency that universities also try to develop MOOCs built around web lectures and the like (the newer version): xMOOCs. These rely more on web lectures and assessment questionnaires, and they are less focused on a broad community of learners who can more or less voluntarily join, discuss things with each other, and earn at most some kind of badge; instead they offer a formal certificate of whether or not you passed the course. (Bertolini et al.)

• cMOOCs = the original MOOCs, designed by companies to get outsiders to learn
• xMOOCs = made by universities, less focused on a learning community; traditional learning
online

Margaryan et al. → So what did they do? They said: okay, Merrill is a big name in instructional design, so we take five of his main principles (problem, activation, demonstration, application and integration).

• Merrill's first principles
o Problem-centred
o Activation
o Demonstration
o Application
o Integration

Take a problem (that is the central point of instruction) → you activate prior knowledge → you as a teacher give a demonstration → then you let the learners apply it → then you let them reflect on (integrate) how they applied it and whether or not they mastered the competence or skill.

And then they said: if you take those five principles plus five other ones, we can check whether or not the selected MOOCs meet the criteria of those instructional design principles (a minimal sketch of this kind of checklist scoring follows the findings below). Conclusion = they do NOT. Main findings:

• MOOCs are well organized
• (but content-wise) instructional design quality is low
• Findings seem to be comparable for both types of MOOCs (no substantial differences between the two types)
• Difficult to transfer theory to educational practice
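
As an illustration only (this is not the coding instrument from Margaryan et al.; the principle list is limited to Merrill's five and the scoring rule is an assumption), a checklist-style rating of a course could be tallied like this in Python:

# Minimal sketch: score a course against Merrill's five first principles
# using a simple present/absent checklist (assumed scoring rule, not the article's).
MERRILL_PRINCIPLES = ["problem-centred", "activation", "demonstration",
                      "application", "integration"]

def design_quality_score(checklist):
    """Return the fraction of principles judged present in a course (0.0 to 1.0)."""
    return sum(bool(checklist.get(p, False)) for p in MERRILL_PRINCIPLES) / len(MERRILL_PRINCIPLES)

# Hypothetical rating of one MOOC: well organized, but only 'demonstration' is judged present
example_mooc = {"demonstration": True}
print(design_quality_score(example_mooc))  # 0.2 -> low instructional design quality

The sketch only makes the critique below concrete: the verdict depends entirely on which principles go into the checklist.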

If you try to be a critical reader, what might be something you noticed while reading the article? It's quite a straightforward article: we've got this, we've got that, no difference, that's it. But I (Bert Slof), for example, find it quite striking that, in order to be a good MOOC, a course had to meet all ten principles. Why those ten principles? For example, principles 6-10 (on collaborative learning and feedback) were not really explained compared to the principles of Merrill. That is something you might raise questions about.
Another question you might raise: we have principles based on problem-based learning, and we apply those principles to MOOCs. So are those MOOCs really problem-based? Because that is quite a big assumption behind the Merrill principles. Aren't you comparing apples and oranges?
The curriculum at Maastricht University really is problem-based learning; you might easily have obtained a different outcome because of the criteria you chose to assess the instructional quality of a specific course.

Note for the long-term assignment: the criteria you choose have to match/align in some way with the course before you start assessing it. Otherwise you can already tell beforehand that the quality will be judged as bad.

Other questions were raised about the methodology: what about inter-rater reliability? How did they rate (code) it, and were there different raters? (High inter-rater reliability means that the two persons who rated the principles for each course agreed, which gives some quality check on how they applied the method.) I did not read much on how they rated it. If only one person is coding, well, perhaps it was Merrill himself, and he might have very strict opinions about applying his principles compared to when someone else is doing it.
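
To make the inter-rater reliability point concrete, here is a minimal sketch (my own illustration, not taken from the article) of Cohen's kappa for two raters who code the same courses as meeting (1) or not meeting (0) a principle; the codings below are made up:

# Minimal sketch: Cohen's kappa corrects raw agreement for agreement expected by chance.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((counts_a[label] / n) * (counts_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of ten courses on one principle (1 = present, 0 = absent)
rater_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
rater_2 = [1, 0, 0, 0, 1, 0, 1, 1, 1, 0]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.6: moderate agreement despite 80% raw agreement

Reporting a single value like this would let readers judge how consistently a coding scheme was applied.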

Is there any explanation, beyond the fact that only the Merrill-based principles were applied, for why there is no difference between the types of MOOCs? Because the two types do seem to differ.

Discussion

• Do MOOCs have to incorporate all principles?
o Why those specific principles? Do they all need to be incorporated?
• Rationale for including principles 6-10
• The principles are for problem-based learning, but are MOOCs problem-based as well?
• Applied coding scheme?
o Selective?
o Dependency?
• Inter-rater reliability
o Was the inter-rater reliability sufficient?
• Explanation why findings seem to be comparable?
o There is no explanation for why the findings on cMOOCs and xMOOCs are similar
o (Margaryan et al., 2015)

Other article:

This one is also about theories, but about how theories are used when writing scientific articles. In that article we get some kind of definition of theory: a theory should be able to describe, explain or predict phenomena (in this case: learning). It can also be prescriptive: how can it affect those processes, so that it offers principles for the actual design of learning. What is also important: theories can be refined and adapted. A theory should also be verifiable, and it should be possible to show whether it is untrue (falsifiable).

Theories

• Describe, explain and/or predict phenomena
• Offer guidelines: prescriptive and descriptive
• Can be refined and generalized to the discipline
• Can be verified / falsified

Conclusion of this article: do we use a lot of theory in scientific articles? No! 174 of the total number of articles used theory explicitly. You can then see where theory is used: in the theoretical framework, in collecting the data, in the discussion, and a little in refining theory (the use decreases down this list). Is this troublesome? There is certainly room for improvement. They also compared whether it differs per type of study. For a really descriptive study, you see the grey bar (no theoretical evidence) is quite high. If you look at correlational studies, the bars show the most positive balance for the use of theory. But if you look at, for example, comparison studies, it is less.

Discussion: Bert Slof had a hard time reading the article and understanding the distinction between explicit, vague and none. How did they make that distinction? And how can you then make the final claim that educational technology does not appear to be a mature discipline (an underdeveloped field of science)?
And if it is unclear how they rated it (explicit, vague and none), if it is unclear how they distinguished explicit, vague and none, how is it possible to have agreement between raters?
If you look at the percentages in Table 2 (only the overall percentages), the use of theory remains unclear.

Critiques

• What is the difference between explicit, vague and none? What are the cut-off points?
• How was the inter-rater reliability achieved?
• If knowledge evolves, then what is it based on? Without a solid conceptualisation, everyone
could make up their own definition of what knowledge is




There actually are theories: theories about expertise development, about learning, about instructional design, and about assessment and evaluation. We are going to talk about these theories and about aligning them.



