Friday, March 8, 2013

Collaboration


In this week’s readings and videos, it was fascinating to learn of not only the academic benefits of collaboration, but of the social benefits as well.  Collaboration can optimize both learning and workplace achievement.  But it also helps individuals grow into functional members of society who are able to contribute in a wider array of settings.  I can think of few areas of life that are tailored more to individual isolation than to group collaboration; aspects of group collaboration can be found almost anywhere.

Johnson & Johnson’s Social Interdependence Theory describes the five factors which impact group collaboration as:
  1. Positive Interdependence
  2. Individual Accountability
  3. Promotive Interaction
  4. Social Skills
  5. Group Processing
When the correct strategies for each are used, these five factors are said to optimize collaboration in academic undertakings.  And while the context for this week’s readings and videos is clearly academic, there are clear take-aways from this theory that apply in organizational, work, and volunteer settings as well.  One generalization I often hear about the military is “you all always work together so well.”  Looking at military group interaction through the lens of Social Interdependence Theory and other collaborative strategies, it is apparent that many of these factors are ingrained in military members and are part of the military culture.  In fact, each of the five factors listed above is vital to mission accomplishment in the military, and typically no mission can be accomplished by a single individual.  The same can be said for many civilian work environments, which is one of the underlying purposes of social constructivism at large.  In the military, I would say that while each of the five factors is present, factors 1 through 3 play a larger role in some cases because of the clear-cut chain of command and hierarchical structure.  But the hierarchy does not replace factors 4 and 5; rather, it is generally used to reduce conflict in those areas.  The lack of ambiguity here helps to enhance the benefits of factors 1 through 3.

It is interesting to see the many ways in which collaboration benefits both the individual and the team at large!

-Bryan

Thursday, February 28, 2013

Reflection


This week’s presentations and readings clearly articulate the benefits of incorporating reflection into instruction.  The age-old act of reflecting on what we have learned, are learning, have completed, or have yet to complete allows for better transfer of knowledge to the individuals involved.  And, as Davis (2003) highlights, certain reflective techniques yield enhanced learning.  Specifically, generic prompts allow the learner to make more meaningful connections as an individual, compared to directed prompts that may not fit a learner’s existing schema.  The directed prompts may result in less efficient and effective transfer, or even possibly confusion and a lack of learning.

There is a wide array of recommended methods for implementing reflection prompts in instruction, as discussed by the readings as well as Dr. Glazewski’s webcast.  Similar to visual design principles, I understood this week’s lesson to indicate that while there are best practices and preferred methods, there may not be any single, 100% correct method for instituting reflective prompts.  Rather, it is situation-based.

This is where I get philosophical, and perhaps a touch melancholy about the state of today’s education (and world in general):

The world today is one of constantly competing resources, looming deadlines and non-stop rushing.  Reflection seems to be a skill that is slowly eroding in our social fabric.  As information is available at increasingly blinding speed and we are called to produce results all the time, pausing to reflect seems to be happening less and less.  At least I know I am guilty of this, and my observations seem to indicate I am not alone. 

The same seems to be true in our educational systems.  Schools must complete delivery of curriculum in a set amount of time.  Class sizes are growing.  So, I must ask: is reflection being sacrificed in our schools too?  I am not a professional educator.  I feel I do a decent job of monitoring my child’s education, but I am largely unaware of the actual classroom environment and instructional techniques.  Is reflection mandated?  Is it used?  Davis (2003) describes how “one size fits all” techniques for reflection prompts are not as effective as ones tailored to the individual.  Are class sizes and competing priorities resulting in ineffective or suboptimal reflective prompts?  Whether for metacognitive, observational, or other purposes, the benefits of reflection are clear, and I hope they are not being brushed aside in schools today.  To reiterate, I am not an educator, and am not asking this with an agenda.  I am genuinely curious, and am hopeful that some of the individuals in the class with a more formal educational background can shed some light on this for me.

As a side note, one of the most tangible learning outcomes for me as an IST Master’s student has been the introduction to the amazing array of tools available for designing instruction.  It is nice to learn these tactical tools along with the theory behind them.  Dr. Leftwich’s presentation on the tools available online, including Screencast-O-Matic, Wordle, etc. (and that is just this week’s menu of recommended tools) is extremely helpful!  Dr. Bonk provided a variety of tools in P540 last semester as well.  Having an understanding of these capabilities will be extremely useful both for the IST program and in professional settings after the degree.  The availability of tools such as these in the hands of educators can maximize the benefits of reflection, and may help to overcome the adverse conditions I bemoan above.

Friday, February 22, 2013

Impact of Instructor Experience in OLEs


Open Learning Environments (OLEs) are dynamic, effective methods of learning outside of traditional educational approaches.  Hannafin, Land, and Oliver (1999) clearly delineated many of the benefits of implementing OLEs in certain situations.  The authors also allude to situations where OLEs may not be possible, mostly due to time constraints or standardization needs.  While the article discussed many of the strengths and weaknesses of these two approaches, as well as the techniques to be used by the instructor, it left me wondering whether there are any prescriptions or recommendations as to the ideal background of the instructor.

Does an OLE require a more experienced teacher/instructor?  OLEs, especially those within the externally induced or individually generated Enabling Contexts (Hannafin et al., 1999), can meander down an infinite number of potential solution paths as the learner pursues non-prescribed solutions.  This is due to the often realistic/authentic context of the learning, which can delve into topics that may not have been foreseen.  Does this require an instructor with more real-world experience (compared to an academic background)?  As the learner makes meaning and comes to solutions, it is possible that no list of correct answers exists.  Does this mean that an OLE instructor needs to be more adaptable within the subject matter, and more aware of minute details that may be outside the scope of standardized instruction?

Additionally, an instructor/teacher/trainer with significant experience may be necessary to evaluate to what degree learning has occurred, both formatively and summatively.  A student may not have learned strict theory, but may have developed an acute ability to apply theory to solving real-world problems.  But, given the lack of the static evaluative resources often available to teachers in traditional classrooms, such as quizzes and tests, I wonder if the burden of proving knowledge transfer requires an SME who is intimately familiar with what might make an individual proficient in a given field.

I thought the authors’ description of the challenges of OLEs was interesting, and I wonder if instructor background might be another limitation to this otherwise excellent instructional method.

Thursday, February 14, 2013

Sea-Stories


This week in R541, we are learning about the use of Storytelling/Humor/Cases in the context of Instructional Design.  As I was going through this week’s readings, a recurring thought came to mind: sea-stories.  Being in the Coast Guard, the nation’s oldest maritime service, I have heard (and probably told) my fair share of sea-stories.  Looking back on these, they really connect each of the three topics headlining this week’s readings.  A good sea-story captures and uses elements of storytelling, humor, and a case/event, and can often be found in many USCG training environments, whether by design or instructor whim.  As Dr. Glazewski pointed out in the screencast, a case is essentially a story with an instructional purpose.  Sea-stories can be just a story, or can be angled to carry a lesson learned in a formal instructional or other learning setting.

Here is perhaps a good (and short) example of a sea-story from my own background:

As a Boarding Officer on a Coast Guard Cutter in the Bering Sea, I was tasked with leading a boarding/inspection of a fishing boat.  Seas were about 6-8 feet, and it was freezing out.  The small-boat carried us (the Boarding Team of 4) from the cutter over to the fishing boat.  As our small-boat and the fishing boat pitched in the seas, we each precariously jumped over the gunnel (the raised lip around the fishing boat, like a half-wall surrounding the deck).  One of the guys on the team mis-timed the jump, and the small-boat dropped out from under him.  He barely grabbed the edge of the gunnel, and was dangling dangerously over the freezing water.  He was able to pull himself up, with the help of the Boarding Team members who had already made it over.  Close call.

There are many similarities between this example and the technician examples discussed in Jonassen and Hernandez-Serrano (2002).  It relays problem solving, develops self-efficacy through vicarious experience, enhances memory through connecting to existing schemata, and builds organizational culture. In retrospect, not every member on the cutter saw this happen, and the story was shared around the crew over the next few hours.  As it was told, it relayed a tale of caution, but with the eventual positive ending in that he made it aboard the fishing boat safely.  It was told in a certain “salty” manner that may not be appropriate for an academic forum, conveying some humor.  And it was an excellent case/example for both the technique of timing the jump, as well as why physical fitness is vital for the Boarding Team.

Stories have real-world context that pertains to the situation in which learners will apply their new knowledge.  But in rigorous academic learning, I believe that stories may not be the most efficient or effective way to convey the full scope of complex subject matter; rather, they may be better suited as attention-getters or as examples to clarify and contextualize the information.  As Jonassen and Hernandez-Serrano (2002) stated: “Rather than generating scientific and rigorously rational explanations of phenomena, the new concern of these disciplines now seems to be with “meaning-making” (Bruner, 1990).”

Ertmer, Quinn and Glazewski outline a wide array of benefits to case-based learning, with the crux of the article positing that budding designers have much to learn from the experiences of seasoned professionals in the IST field.  This is true in almost every profession; we can all benefit from learning from our “elders” to some degree.  It is also true that learning is a life-long process, and even seasoned professionals can learn from a unique experience encountered by a younger or less-experienced individual, and can certainly benefit from their own new experiences, adding to their own Case Based Reasoning ability. 

The “reflective mind-set” that Ertmer, Quinn and Glazewski proffer is a skill that can be used in almost any situation, by any individual.  I view this point as critical.  Much like evaluation, which is present in almost everything we do (e.g., smelling the milk to evaluate whether it has gone bad, or evaluating the success of a Fortune 500 company’s new product), the reflective mindset allows us to learn from the successes and failures of others (e.g., Timmy didn’t turn off the power before changing a light fixture and got badly shocked, so maybe I should turn it off when I do the same task).  Open-mindedness, adaptability, and the ability to question your own views allow for an increased likelihood of success in facing the complexities of the real world.

In addition to enhancing learner comprehension of a complex topic, storytelling and case learning both allow for sharing vital safety-related information and best practices.  In dangerous or other high-risk (financial, relational, etc.) tasks, learning from examples provides vital decision-making resources to novices.  As Jonassen and Hernandez-Serrano (2002) state:

Given the lack of previous experiences by novices, experiences available through a case library are expected to augment their repertoire of experiences by connecting with those they have experienced. Their prior experiences serve as a basis for interpreting current and future stories, forewarning us of potential problems, realizing what to avoid, and foreseeing the consequences of our decisions or actions.

In summary, this week’s readings were enlightening in that they portrayed common storytelling, which we as humans engage in on a daily or even hourly basis, as a tremendous resource that can be woven into the fabric of instruction to enrich the learning experience.  These examples can be simple stories or more focused cases, and can be used in a wide array of ways.  Personally, I am a learner who finds examples very helpful.  I think I had previously understood the role of storytelling and cases, but perhaps not in the formal way I do now.  Each of the three recommended uses of stories (examples, problems to be solved, advice) provided by Jonassen and Hernandez-Serrano (2002) is significantly beneficial in fulfilling Merrill’s First Principles of Instruction, especially in achieving the level of Integration.

Problem solving through cases and examples brings context to the abstract.

Friday, February 8, 2013

Evaluation in Instructional Design


This week’s R541 topic, evaluation, ties together two of the core courses of the IST program, as well as a few of the principles of the IST field, according to the ADDIE model.  It reinforces the concurrent nature of Analyze, Design, Develop, Implement, and Evaluate; these steps can occur both independently and in conjunction with one another.

In R561, Fitzpatrick (2011) is the course text, and it provided many of the insights below on evaluation as a field, a process, and a product.  Evaluation is the step in which the entire instructional process and product, as well as the individual steps, are reviewed to ensure they are effective and efficient and provide decision makers with the information necessary to make decisions.  Evaluation is quite a complex field, and according to the professionals within it, its principles and approaches are ubiquitous in all aspects of human life.  When we smell milk close to its expiration date, we are evaluating whether it is safe to drink.  Evaluation exists on a spectrum from formative to summative.  Formative evaluation helps provide correction and validation during the process, while summative evaluation focuses on making more final decisions about continuing or ending a program.

It is interesting that evaluations built into IST’s instructional products/outcomes not only provide the learner with feedback on their success or failure in learning/developing a new behavior, they also provide feedback to the instruction on whether it is successful in meeting its intended goals or delivering the intended information. 

One core concept of evaluation, at its very roots, is the concurrence of the stakeholders, clients, evaluators, etc. on the commonly accepted values against which the evaluated item will be compared.  This concept is present in this week’s Thorndike (1997) reading.  While it may be relatively easy to determine a commonly accepted performance standard for achievement tests, other tests or subjects may be more nebulous.  Thorndike specifically mentions aptitude tests and personality or interest tests as examples of topics where it is more difficult to achieve a high level of concurrence and validity.  This is because these topics are extremely broad in nature, with an infinite number of indicators of behavior that can be classified as intelligent (aptitude) or as indicative of a personality (do all Intuitive people display behavior X?).  But on some level, both the evaluator and the eventual user of the evaluation must agree on the validity of the evaluative standards, or the product of the evaluation will be generally useless, or perhaps worse, misused.

I recall from R511 last spring one example of how the ADDIE steps can be re-ordered: conduct the analysis (A) to determine the instructional need, then directly develop the evaluation (E) to ensure that the standards to which students will be held/tested both directly reflect the results of the analysis and are formulated into the instructional goals that serve as the basis for design and development (D, D).  Implementation (I) follows, with recurring evaluation to ensure that the instructional product meets the need, and that the need itself is still accurate.