1ZM140: Summary Quiz 4
Article: FEDS, a Framework for Evaluation in Design Science Research
(Venable, Pries-Heje & Baskerville, 2016)
The extant DSR literature provides insufficient guidance on evaluation to enable design science researchers to effectively design and incorporate evaluation activities into a DSR project so that it achieves its goals and objectives. Much of this existing literature assumes that a single kind of evaluation will suffice to demonstrate both the artefact's utility, fitness, or usefulness and the soundness of any design principles or theory employed in the artefact's construction. A FEDS (Framework for Evaluation in Design Science) strategy considers why, when, how, and what to evaluate.
In DSR, evaluation is primarily concerned with evaluating design science outputs, including information systems (IS), design theories, and design artefacts. Together with 'build', evaluation is one of the two key activities that constitute DSR. Evaluation is 'crucial' to DSR and requires researchers to rigorously demonstrate the utility, quality, and efficacy of a design artefact using well-executed evaluation methods. Designed artefacts must be analyzed in terms of their use and performance as possible explanations for changes (and, hopefully, improvements) in the behavior of systems, people, and organizations. There is a tight linkage between evaluation and design itself, since evaluation shapes designer thinking.
Ordinary design project (relevance cycle): without scientific aims, evaluation focuses on the artefact in the context of the utility it contributes to its environment.
Design science project (rigor cycle): evaluation must also regard the design and the artefact in the context of the knowledge they contribute to the knowledge base.
Dual purpose: a build-and-evaluate cycle seeks to deliver both environmental utility and additional (new) knowledge; it must therefore address not only the quality of the artefact's utility but also the quality of its knowledge outcomes (rigorous, relevant, scientific).
Two distinctions in evaluation
1. Formative vs. summative evaluation (why to evaluate):
   a. Formative: produces empirically based interpretations that provide a basis for successful action in improving the characteristics or performance of the evaluand. The focus is on consequences: supporting the kinds of decisions that improve the evaluand.
   b. Summative: produces empirically based interpretations that provide a basis for creating shared meanings about the evaluand across different contexts. The focus is on meanings: supporting the decisions that influence the selection of the evaluand for an application.
2. Ex ante vs. ex post evaluation (when to evaluate): formative evaluation episodes are often regarded as iterative or cyclical, measuring improvement as development progresses, while summative evaluation episodes are more often used to measure the results of a completed development or to appraise a situation before development begins. These are the two extremes of the evaluation continuum. However, ex ante and ex post refer only to timing: a summative evaluation may be required on an ex ante or intermediate basis (e.g., for continuation approval), and ex post evaluations may also serve formative purposes.
   a. Ex ante evaluation: a predictive evaluation performed to estimate and evaluate the impact of future situations.
   b. Ex post evaluation: an assessment of the value of the implemented system on the basis of both financial and non-financial measures.