This week’s R541 topic, evaluation, ties together two of the
core courses of the IST program, as well as a few of the principles of the IST
field, according to the ADDIE model. It
reinforces that the steps of Analyze, Design, Develop, Implement, and
Evaluate are not strictly sequential; they can occur independently and in
conjunction with one another.
Fitzpatrick (2011), the course text for R561, provided many of
the insights below on evaluation as a field, a process, and a
product. Evaluation is the step in which
the entire instructional process and product, as well as its individual steps,
are reviewed to ensure they are effective and efficient; it also provides decision
makers with the information necessary to make decisions. Evaluation is quite a complex field, and
according to the professionals within the field, its principles and approaches
are ubiquitous in all aspects of human life.
When we smell milk close to its expiration date, we are evaluating
whether it is safe to drink.
Evaluation exists on a spectrum from formative to summative. Formative evaluation provides correction
and validation during the process, while summative evaluation focuses on
final decisions, such as whether to continue or end a program.
It is interesting that evaluations built into IST’s
instructional products not only give learners feedback on
their success or failure in learning or developing a new behavior, but also
give feedback on whether the instruction itself is meeting its
intended goals or delivering the intended information.
One core concept of evaluation, at its very roots, is the
concurrence of the stakeholders, clients, evaluators, etc. on the commonly
accepted values that the evaluated item will be compared to. This concept is present in this week’s
Thorndike (1997) reading. While it
may be relatively easy to establish a commonly accepted performance
standard for achievement tests, other tests or subjects are more nebulous. Thorndike specifically mentions aptitude
tests and personality or interest tests as examples of topics for which it is
difficult to achieve a high level of concurrence and validity. This is because these topics are extremely
broad in nature, with an infinite number of indicators of behavior that can be
classified as intelligent (aptitude) or as indicative of a personality (do all
Intuitive people display behavior X?).
But on some level, both the evaluator and the eventual user of the
evaluation must agree on the validity of the evaluative standards, or the
product of the evaluation will be generally useless or, perhaps worse, misused.
I recall from R511 last spring one example of how the ADDIE
steps can be re-ordered: conduct the analysis (A) to
determine the instructional need, then directly develop the evaluation (E),
ensuring that the standards to which students will be held and tested both
reflect the results of the analysis and are formulated into the instructional
goals that serve as the basis for the design and development (D, D). Implementation (I) follows, with recurring
evaluation to confirm that the instructional product meets the need, and
that the need itself is still accurate.