The following questions will help guide the design of learning experiences following the 5D Framework.
Who are we trying to reach?
What are their needs in this area?
How does an online learning experience fit into their lives? How can we reach them with this opportunity?
What will it cost to offer the optimal learning experience?
What people, skills, and other resources would be needed?
What is the expected return?
What skills and concepts will have the most relevant and powerful effect on the field?
What instructional topics would, if adopted by a large group of learners, have a significant impact on the education profession?
What platforms, tools, and technologies might be leveraged for this learning experience?
What are the constraints and barriers associated with each?
What prior learning and experiences will most benefit their success in the planned course?
What will we need to provide to orient them to this course?
What readings or other materials will need to be made available to them in this course?
Affect and Values
How do we want learners to feel as they approach this experience?
How do we want them to feel throughout the experience and as a result of completing it?
What are our values and priorities for this experience regarding video production, look and feel, interactivity, community building and other elements?
Strategies and Technologies
What instructional strategies will most effectively meet the intended goals and outcomes?
What tools and technologies identified in discovery can be leveraged to enhance these strategies?
Facilitation
Who will be involved in teaching the course and what qualifications do they need/have?
What are the primary goals of the facilitation?
What are the expectations for time investment, level of feedback, and number of people/teams per facilitator?
Cadence and Scheduling
How does this course flow from beginning to end?
What are the common milestones or consistent themes or actions from session to session?
When do they each start, end, and have activities due or feedback provided?
Evaluation and Research
When, and how, will the course be evaluated?
What are the criteria for success and how will these instruments be administered?
What research goals can be served by the resulting data?
Who will be responsible for collecting, analyzing and reporting on the evaluation results?
Rather than becoming programmers themselves, LXDs should look to those with experience in web development who can build a series of efficient, screen-reader-friendly, web-optimized templates for use on pages in a learning platform.
While low-fi video production via webcams and phones has its place in the development stage, partnering with a professional media team for the creation of high-quality video and animation can lead to a richer user experience that positively impacts engagement and retention.
User Experience and Accessibility
By partnering with our experts in accessibility and usability, LXDs can create products that reflect best practices in these areas.
What kind of feedback did learners and facilitators provide for this experience?
What most supported and most hindered their learning?
What aspects of the experience were particularly effective?
Did the learners demonstrate the knowledge and skills we set out to provide?
How many completed the course and how many did not?
What contributed to these decisions?
Did learners indicate that the learning was relevant and applicable?
Can they demonstrate or explain ways in which what they have learned has impacted their practice?
How did it go?
What worked well?
What could have gone better?
What should we do the same and differently in future, similar projects?
Did we have the right people involved, in the right ways, at the right times?
In a 2016 article, Daniel Christian names two challenges to his own recommendations for embedding learning experience designers (LXDs) in course teams for the production of excellent online courses. First, he suggests that course teams can often be unwieldy and inefficient, becoming “a bottleneck to the organization” (Christian, 2016). Second, he wonders whether faculty members will accept the contributions of LXDs (or others) in the creation of learning experiences. To be sure, these challenges are real in many institutions of higher education that offer online and blended programming. Faculty are often reluctant to invite people outside their discipline to participate in their curriculum and course development efforts, and where course design teams are in place, inefficiencies can become commonplace.
In response to these and other challenges, I formulated a particularly collaborative design framework with former colleagues in the Teaching and Learning Lab at the Harvard Graduate School of Education. The 5D framework encompasses the collaborative work phases of discovery, design, development, delivery, and debriefing.
Why the 5D Framework? What About ADDIE?
When communicated and managed effectively, this framework can yield tremendous efficiencies and foster trusting partnerships between faculty, Learning Experience Designers (LXDs), and other stakeholders. And, when applied specifically to a learning experience design project, the 5Ds can enhance the well-known ADDIE model (Analysis, Design, Development, Implementation, Evaluation).
ADDIE emerged from corporate and military contexts where instruction must be developed consistently and at scale, and it is often used to explain the work of designers. Entire books have been written about how to perform and strengthen competencies around each step of the model (Bozarth, 2008; Chaplowe & Bradley-Cousins, 2015; Jonassen, Martin & Wallace, 1998). While there is value in breaking down the phases of the design process in this systematic way, the ADDIE approach is primarily effective for designers who work independently with materials from a subject matter expert (SME). Taken out of that context, this model tends to overstate the direct role of the LXD while failing to address the constraints and cross-functional nature of the work inherent in other LXD positions, such as those in academia.
By contrast, the 5Ds depend on a collaborative approach. Herein lies one of the most significant lessons for those who have made their careers in learning experience design: we need not be excellent at everything; we must leverage the expertise and contributions of stakeholders at every step of the way and seek out unexpected opportunities for collaboration. An effective, team-based approach yields efficiencies, promotes faculty buy-in, and leads to the creation of excellent learning experiences.
Discovery
Every project begins with a thoughtful, inquiry-based process of discovery in which the needs and opportunities around a project are outlined. This phase often engages media, technology, and design teams along with SMEs, learners, and other stakeholders including finance, marketing, and administration. Together, we ask our guiding questions for discovery. These questions parallel the “Analysis” step of the ADDIE framework, though they emphasize the inclusion of multiple partners and stakeholders. While the LXD’s contribution to this phase may certainly include an in-depth task and needs analysis, it extends further to include expertise in the learning sciences and emerging trends that may inform the project plans. These contributions are held in tandem with all the other aspects of determining the feasibility and priority of the project. With some collective thinking around these questions, an informed decision is made about whether and how to proceed with the project. If it is to be pursued, the team moves into the design phase.
Design
Though we have “design” in our title, LXDs are hardly the sole source of ideas when it comes to the creation of online learning experiences. The best designs depend on a tight collaborative relationship between LXDs, SMEs, learners, and others. In addition to coaching the team to craft clear and measurable goals for understanding and performance, and to align these goals with activities and assessments throughout the experience, the LXD helps the team consider our guiding questions. These questions arise from several embedded assumptions about how to create excellent online learning experiences for adults.
Malcolm Knowles’ Andragogy lays out four key principles that designers of adult learning experiences should bear in mind: they must involve the adult students in the design and evaluation phases; they must take the adult learners’ lived experiences into account in the design and delivery phases; they must create experiences that are centered around real-world problems rather than focused on the content itself; and the experience must have direct relevance to and impact on the learners’ lives.
Bloom’s Taxonomy is an indispensable tool for formulating learner-centered objectives targeting different levels of cognitive processes based on the content, context, and audience.
Important learning theories such as behaviorism (Skinner, 1964), cognitivism (Piaget, 1964), constructivism (Vygotsky, 1980) and connectivism (Siemens & Conole, 2011), allow the LXD to frame the experience in terms of specific outcomes and strategies.
Deci and Ryan’s (2002) Self-Determination Theory (SDT) posits that intrinsic motivation can be maximized when three basic human needs are met: autonomy (a sense of choice and self-agency), competence (a sense of confidence in one’s abilities), and relatedness (a sense of belonging with others). Selecting teaching strategies, tools, and technologies to enhance support for these three needs can help to increase the intrinsic motivation of adult learners throughout the experience.
Development
The development phase is where we LXDs are typically used to piling more hats onto our already crowded heads. Quite often, LXDs will set off on their own to develop a learning experience with a variety of tools, including rapid development tools like Articulate Storyline and Adobe Captivate, multimedia presentation tools like Articulate Presenter and VoiceThread, learning platforms like Canvas and Blackboard, and/or other technologies.
Alternatively, the 5Ds encourage learning experience designers to leverage contributions from others in ways that can significantly enhance the final product. See our guiding questions to use in development. In this phase, the LXD identifies and guides all contributors through the development process with an eye toward integrating these otherwise disparate contributions into a coherent whole in alignment with the vision of the course team. Once the learning experience is developed and tested, the team collectively braces itself for the big event: launch!
Delivery
Throughout my career in design roles across corporate, medical, and academic contexts, the delivery (or “implementation”) phase has often been where my colleagues check out, leaving the experience in the hands of the platform (in asynchronous or self-paced online learning) and/or the facilitation team (in synchronous or facilitated learning). The designer is then called back into the game only when some technical issue is encountered. In my experience, this approach tends to underutilize and even demotivate the LXD. Rather than serving primarily as technology support specialists in this phase, LXDs are uniquely positioned to assist the delivery team with the teaching and learning challenges that may be encountered over the life of the course.
To this end, the “Learning Loop” concept, a series of meetings in which the entire course team meets at stated intervals (often weekly) during the first-run of a new learning experience, can yield significant efficiency gains. In these meetings, the team reviews feedback from learners and facilitators, identifies opportunities to make mid-stream improvements and/or iterative changes the next time the course is offered, and resolves new challenges around the facilitation model, technologies, or course structure. The Learning Loop model has proven to be an invaluable step in the process of creating and launching learning experiences because it keeps the entire team in touch with the course as it is being experienced by learners. It has also helped to set healthy precedents around continuous course improvement and ongoing engagement between LXDs and teaching teams.
Debrief
Once the learning experience has been planned, created, and offered to real learners, it is time to debrief by gathering with the team and other stakeholders to answer our guiding questions for debrief. These questions can be answered through analysis of quantitative and qualitative data collected throughout the learning experience, as well as interviews and/or focus groups with various contributors. This is an extension of ADDIE’s Evaluation phase, which tends to focus primarily on learner evaluations in terms of satisfaction, outcomes, results, and impact, commonly referred to as Kirkpatrick’s four levels of evaluation (Kirkpatrick, 1975). A thorough debrief with all relevant parties, including learner feedback, can help to uncover opportunities for continuous improvement for this project and, often, for the rest of the portfolio as well.
Taken together, the 5Ds encompass a process of collaborative learning design in which dedicated LXDs both drive and contribute to a cross-functional team of widely varying expertise. This framework provides a structure for project planning and management that promotes efficiencies: timelines and milestones are mapped to each of the five phases, working backward from the anticipated launch date for the experience. By positioning the LXD as the bridge connecting various stakeholders throughout the entire process, by leveraging the expertise of various professionals, and by cultivating a culture of reflection and communication through the Learning Loop and debrief stages, the framework frees each team member to make their strongest contributions in the most effective ways.
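The back-planning of timelines mentioned above can be sketched in a few lines of Python. The phase names come from the 5D framing, but the durations, dates, and function name here are invented purely for illustration:

```python
from datetime import date, timedelta

# Hypothetical phase durations (in weeks). These are illustrative
# assumptions, not durations prescribed by the 5D framework itself.
PHASE_WEEKS = [
    ("Discovery", 3),
    ("Design", 4),
    ("Development", 6),
]

def back_plan(launch: date, phases=PHASE_WEEKS):
    """Work backward from the launch date to find when each
    pre-launch phase must start and end."""
    milestones = []
    end = launch
    for name, weeks in reversed(phases):
        start = end - timedelta(weeks=weeks)
        milestones.append((name, start, end))
        end = start
    return list(reversed(milestones))

# Each phase ends exactly where the next one begins.
for name, start, end in back_plan(date(2024, 9, 2)):
    print(f"{name:12s} {start} -> {end}")
```

Mapping milestones back from launch this way makes slipped deadlines visible immediately: any delay in discovery or design compresses the development window rather than silently moving the launch.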
References
Bozarth, J. (2008). From Analysis to Evaluation: Tools, Tips, and Techniques for Trainers. John Wiley & Sons.
Budd, B. (2016a). The 5Ds: A collaborative model for learning design (Part 1). The Evolllution.
Budd, B. (2016b). The 5Ds: A collaborative model for learning design (Part 2). The Evolllution.
Chaplowe, S. G., & Bradley-Cousins, J. (2015). Monitoring and Evaluation Training: A Systematic Approach. SAGE Publications.
Ask anyone who has taken an online course what they thought of the experience, and you are likely to hear words like “impersonal”, “boring”, “tedious”, “ineffective”, or “monotonous”. The modality has unfortunately earned itself a poor reputation due to a plethora of such examples. Unfortunately, the majority of online courses available today embody a passive approach to learning in that they provide lecture-style videos followed by multiple-choice quizzes with automated grading. This “talk and test” strategy is reflective of some of the least effective face-to-face classrooms, with the only potential advantage being that learners can access material in their own space and at their own pace.
Still, the cost savings, flexibility, convenience and other perceived benefits of learning online have combined to ensure that the modality is not going away anytime soon. An unprecedented number of Americans have enrolled in online learning experiences in recent years, whether formal courses for post-secondary credit, professional development courses for career growth, or non-credit experiences such as massive open online courses (MOOCs). As of 2015, key leaders at nearly 70% of American post-secondary educational institutions consider online education to be a critical component in their long-term strategies (Allen & Seaman, 2015).
While the promise of flexible, personalized and highly engaging online learning has yet to be realized at scale, the pieces are in place. New technologies with the potential to enable and enhance learning experiences are developed at an overwhelming pace. Leaders in the corporate world are beginning to recognize the impact that quality development opportunities can have on the bottom line, particularly when achieved online. Educational leaders face increased pressures to offer online learning experiences that not only generate revenue, but also stand out in an increasingly cluttered landscape of such products.
Course Kitchen is engaged in these challenges on a regular basis. One of our primary motivators is the drive to improve the reputation of online learning experiences such that organizations will more readily embrace the affordances provided by today’s technologies. We work with clients to understand their learners, identify the challenges that training and education can and cannot address, and design experiences that keep learners motivated and connected to one another, to instructors, to the content, and to the organization. For more information on how we approach our work, visit our page on Learning Design and drop us a note with any questions you may have.
Have you participated in an online course or module? What adjectives would you use to describe the experience? What would you have done to make it the best experience possible?
“[A] thermostat that automatically turns on the heat whenever the temperature in a room drops below 68°F is a good example of single-loop learning.
A thermostat that could ask, ‘why am I set to 68°F?’ and then explore whether or not some other temperature might more economically achieve the goal of heating the room would be engaged in double-loop learning.”
Innovation as a Learning Process
In his book The Lean Startup, Eric Ries describes the innovative process in three stages: Build, Measure, Learn. He encourages entrepreneurs and organizations to engage in this continuous cycle of innovation by building a minimum viable product (MVP) or quick prototype, measuring the target user’s response to and experience with the concept, and learning from this input to make incremental improvements. Babson College, the leading institution for entrepreneurship education, changes up the verbs to offer “Act, Learn, Build” as its trademark formula, known as “Entrepreneurial Thought and Action®”, for new venture creation and success.
Regardless of verb choice, the concept of testing with users and incorporating their feedback into future versions of a product or process is not new, nor is it unique to startup culture. In the early 1960s, organizational economists Richard Cyert and James March described a company’s growth in terms of changes based on feedback:
“An organization … changes its behavior in response to short-run feedback from the environment according to some fairly well-defined rules. It changes rules in response to longer-run feedback according to more general rules, and so on.”
In the late 1970s, organizational behaviorists Chris Argyris and Donald Schön gave this concept a name: the Learning Loop. They distinguish two types: in a Single Learning Loop, something is planned, implemented, measured, and adjusted; a Double Learning Loop extends this to evaluate the original assumptions that guided the planning.
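The thermostat example lends itself to a small sketch in code. This is an illustrative Python rendering of the distinction, not anything from Argyris and Schön themselves; the class names, setpoints, and review rules are all made up:

```python
class SingleLoopThermostat:
    """Single loop: adjust behavior against a fixed goal (is the room at 68?)."""
    def __init__(self, setpoint=68.0):
        self.setpoint = setpoint

    def act(self, room_temp):
        return "heat on" if room_temp < self.setpoint else "heat off"

class DoubleLoopThermostat(SingleLoopThermostat):
    """Double loop: also question the goal itself (why 68?)."""
    def review_setpoint(self, occupied, energy_price_high):
        # Revise the governing assumption, not just the behavior.
        if not occupied:
            self.setpoint = 60.0   # nobody home: why heat to 68 at all?
        elif energy_price_high:
            self.setpoint = 66.0   # trade a little comfort for cost
        return self.setpoint

t = DoubleLoopThermostat()
print(t.act(65))   # single-loop reaction to a reading
t.review_setpoint(occupied=False, energy_price_high=False)
print(t.act(65))   # same reading, but the goal itself has changed
```

The single loop only ever compares readings against the setpoint; the double loop is the extra method that can rewrite the setpoint, which is exactly the move from adjusting behavior to examining assumptions.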
The Course Design Learning Loop
At Course Kitchen, we consider the first-run of a learning experience to be our “live prototype”, though many sub-prototypes may have been built and tested leading up to that first offering. The debut of a new learning experience provides an excellent opportunity to get real-time data (both quantitative and qualitative) from actual users (including learners, instructors, and administrative and support teams) about how the experience is received. Keeping pace with these data requires that course design teams meet at least weekly to review, discuss, and answer three key questions:
What needs to be changed immediately to improve the learning experience mid-stream?
How can we leverage the lessons we are learning to improve other experiences we are currently developing?
What opportunities are we discovering for ongoing improvements we can make to this learning experience before it is offered again?
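One way to picture the three questions above is as a triage of first-run feedback into three buckets. The following Python sketch is purely illustrative; the feedback items, field names, and triage rules are invented assumptions, not a schema we prescribe:

```python
# Hypothetical first-run feedback items. "severity" and "generalizable"
# are made-up fields standing in for the team's judgment calls.
FEEDBACK = [
    {"note": "Broken link in Week 2 reading", "severity": "blocker", "generalizable": False},
    {"note": "Video intros feel too long",    "severity": "minor",   "generalizable": True},
    {"note": "Rubric unclear for peer review","severity": "minor",   "generalizable": False},
]

def triage(items):
    """Sort feedback into the three Learning Loop buckets."""
    buckets = {"fix_now": [], "apply_to_other_courses": [], "next_run": []}
    for item in items:
        if item["severity"] == "blocker":
            buckets["fix_now"].append(item["note"])              # change mid-stream
        elif item["generalizable"]:
            buckets["apply_to_other_courses"].append(item["note"])  # lessons for other builds
        else:
            buckets["next_run"].append(item["note"])             # queue for the next offering
    return buckets

for bucket, notes in triage(FEEDBACK).items():
    print(bucket, notes)
```

A weekly pass over three such buckets keeps the meeting focused: blockers get fixed now, transferable lessons move to other projects, and everything else is queued for the next run rather than lost.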
We call these conversations “Learning Loop” meetings. A typical 60-minute learning loop meeting might use something like the following agenda:
Notes: Engagement, Questions, Issues, Communication
Focus: Upcoming Session/Week (15 minutes)
Notes: What should we anticipate? What should learners expect?
Focus: General (15 minutes)
Notes: What else should be reviewed, discussed, or considered at this time?
This model allows us to examine our early assumptions about how a learner will interact with the material, with our facilitators, and with one another based on the actual results we encounter as the course unfolds in real-time.
Each learning experience we create represents a number of similar experiments to test an assumption, a theory, a new strategy, a new technology, etc. Such tests have become a core vehicle for our pursuit of continuous improvement as related to our processes, collaborative efforts, and products. With this iterative approach to our work, we embrace the notion that learning never ends – not for students, not for instructors, and not for design teams. Instead, it loops back and forth, getting richer with each pass. Such is the meaning of the Learning Loop.
One of the interview questions I ask prospective learning designers is, “If you get this job, how will you describe your new role to your friends and family?” The way a candidate answers this question can reveal much about their understanding of and approach to the work of learning design.
What does a learning designer do, exactly? What makes this role distinct from that of an instructor, program developer, learning technologist, or media producer? How does a person become a dedicated learning designer, and why might they want to do so? In this series of posts, I will explore these questions with the aim of helping both prospective designers and those who collaborate with designers to understand the work, approach, and skills of those in this role. Specifically, I will look at the most common roles and responsibilities of learning designers (also known as instructional designers), the attributes and competencies of effective learning designers, the art and science of learning design as a collaborative process, and developmental paths toward becoming a learning designer. In this first post, I will outline key responsibilities of all learning designers regardless of context (academic, corporate, startup, government, or non-profit).
Learning design is an endlessly complex and fascinating field spanning a range of organizational contexts. Within academia alone, designers may work in public, private, for-profit, and nonprofit organizations from pre-K through postsecondary and graduate institutions. To dig deeper still, within higher education, a learning designer may be involved in campus-based programming, online and blended programming, and/or professional development programming for the wider community. Within any of these scopes of work, the designer is typically responsible for contributions requiring a wide range of skill sets.
Instructional Design in Higher Education, a report by Intentional Futures and the Bill and Melinda Gates Foundation, was written to help “institutions gain a better understanding about how instructional designers are utilized and their potential impact on student success” (page 2). The report describes four key domains of a designer’s work: Design, Manage, Train, and Support.
Based on my 10+ years in the field of learning design, and with a particular focus on the creation of online and blended educational programs, I would consider the proposed breakdown to be an accurate snapshot of the work, with two caveats. First, I would add “Develop” to the Design domain, and “Evaluate” to the Support domain. Learning Designers are rarely called upon to conceptualize a learning experience without also developing it, either in prototype or final form. This may include the development of multimedia resources, interactive exercises, text-based materials, and/or, for an online experience, the course site itself. Similarly, online learning experiences benefit greatly from iterative evaluation and improvement processes, so we should not overlook Evaluation as a critical component of the work.
How can one person be a creative designer and developer of learning experiences, a detailed project manager, a helpful trainer, and a service-oriented support person? In a future post, I will explore the attributes of effective learning designers and the steps one might take to prepare for work in this field.
Call to action/comments from current learning designers: What do you think of this breakdown of the responsibilities encompassed in your work? Do they accurately capture your role? What would you add or change?
Design and Develop
Andrea and Joanna watched video of Dr. Hehir describing UDL principles and examples from a recent webinar offered by HGSE. They considered how this content might be offered as an asynchronous learning experience with opportunities for participants to dive more deeply into the material. This work involved:
Identifying learning goals, activities, and assessments for consideration and refinement by the teaching team.
Creating orientation materials to usher participants into what may be to them a new learning modality.
With an understanding of Universal Design for Learning (UDL) principles, ensuring the course is accessible to a diverse range of learners across various abilities, genders, races, cultures, and other dimensions impacting the experience of our learners.
Using graphic design skills and tools, developing a visual template and navigational structure for the experience in the learning platform (Canvas).
Using programming skills and tools, building out the content pages, discussions, quizzes, and other resources in the learning platform.
Using user experience (UX) design skills and tools, ensuring language, flow and functionality of all aspects of the experience in the learning platform are as clean, intuitive and engaging as possible.
Manage
Create a project plan and timeline for faculty, PPE, TLL, and IT to follow to produce the module by the designated launch date.
Function as project manager within the TLL to ensure coordination of media team, technologists, and IT resources based on timeline.
Develop procedures for, and ensure that reviews are conducted and changes implemented for, usability, accessibility, quality, and technical functionality.
Advocate for the learner and the learning experience as the primary drivers of all decisions.
Train identified facilitator(s) in the use of the learning platform (Canvas) as well as best practices in facilitating online learning. For this project, this work was done by PPE, though the TLL is working on guidelines for the effective facilitation of online discussions (and learning in general).
Support and Evaluate
During the run of this module and future experiences, the TLL works closely with IT and PPE to iron out any technical or teaching and learning challenges that come up.
Each short-form module benefits from the lessons learned from the prior offering. As the UDL module is the second one we have developed, the design reflects improvements identified by learners in the first module. This iterative improvement will continue with each new module.
Managing Evidence was created with Professor Marty West to equip leaders with the skills needed to evaluate studies of education interventions and their implications for system-level decisions, and to strengthen the capacity of systems to base decisions on high-quality evidence. I worked with Dr. West, a teaching fellow (the newly minted Dr. David Blazar!), Brandon Pousley, Learning Technologist in the TLL, and Danielle Thomson, Special Projects Administrator at PPE to create and launch this module. As the learning designer on the project, my work included design, management, training, and support as shown below. For brevity, I will share how our work added to or slightly differed from that of the designers described in the example above.
Design and Develop
I helped the team consider ways in which learning goals that are traditionally met in campus-based programs might also be accomplished online, and to understand the unique affordances of the online format. This project required all of the design work described in the example above, as well as:
The development of interactive exercises for use within the course. These included the use of rapid development courseware (Articulate Storyline, Engage, and Quizmaker) as well as an online tool for multimedia discussions (VoiceThread).
Working with the team (including T127 Practicum student, Dustin Hilt) to build in meaningful opportunities for learners to connect with one another and create a vibrant learning community throughout the learning experience (see connectivism, discussion strategies, and self-determination theory).
I wrote about other principles impacting the design of CAEL modules in this blog post.
Manage
This project required all of the management work described in the example above, as well as the creation of custom processes and tools such as an asset tracker, a cross-departmental timeline of responsibilities, and a unique edit decision list (EDL) mechanism for use with the TLL’s media team.
Train
Brandon and I contributed to the Online Learning Facilitator (OLF) Handbook, a guide for the people responsible for facilitating the learning experience for up to fifteen people each. Brandon’s team also created videos and documentation for using various aspects of the course site and participated in a live training for the OLFs as well as the first Live Online session for learners, orienting them to the learning platform.
Support and Evaluate
As with the example above, the TLL works closely with IT and PPE to iron out any technical or teaching and learning challenges that come up. We also collaborate with the teaching and delivery teams to design the weekly learning evaluations, the results of which are discussed in our weekly “Learning Loop” meetings. Here, emerging questions and opportunities for improvement are addressed in thoughtful and strategic ways so that our own learning does not end.
What are your examples of work you have done to Design and Develop, Manage, Train, and Support and Evaluate?
In 2006 I was four years into my Master’s program in Adult Education at the University of Rhode Island and the end was in sight – just two more courses and a thesis. I had just resigned from the job I’d held for six years as a systems administrator and user support specialist at a small, Rhode Island-based research affiliate of the University of Massachusetts Medical School. I had gotten a new job supporting online training technologies at the mother ship, the UMMS campus in Worcester. I wanted to be an instructional designer (ID), and this position was a stepping stone in that direction. My new job brought with it an hour-long commute, an opportunity to apply the skills I was gaining in graduate school, and – as I learned on day one – a whole new vocabulary.
Here are some of the terms that were new to me at the time or that have confused me as I moved along in this buzzword-filled field. What terms would you add to the list?
2 Sigma Problem
If you hear designers toss out this term, they are likely referring to Bloom’s Two-Sigma Problem: the research finding that one-on-one instruction (direct tutoring) is as much as two standard deviations (sigmas) more effective than conventional group instruction.
This raises significant challenges and opportunities related to personalized learning, particularly given the affordances of today’s educational technologies. For example, to what extent can a learning system function as a personal tutor? If a learner is engaging on their own with a self-paced course that is responsive to their needs and skill level, is this as effective as one-on-one tutoring with a human instructor? Can such systems be created and supported in a sustainable way? Some startups and design strategists are attempting to crack this nut as we speak, and the challenge holds great promise for the future of digital learning.
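A quick calculation shows why “two sigmas” is such a striking result. Assuming scores are normally distributed and standardized to the conventional group (an assumption for this sketch, not a claim from Bloom's paper), the average tutored student lands at the 98th percentile of conventionally taught peers:

```python
from statistics import NormalDist

# Conventional-instruction scores, standardized to mean 0, sd 1.
conventional = NormalDist(mu=0.0, sigma=1.0)
effect_size = 2.0   # Bloom's two sigmas

# Fraction of the conventional group that the average tutored
# student outperforms.
percentile = conventional.cdf(effect_size)
print(f"Average tutored student beats {percentile:.1%} of the group")
```

That is, under these assumptions, roughly 98% of a conventionally taught class would score below the average tutored student, which is the bar a "learning system as personal tutor" would be trying to clear.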
The theory is that only 10% of our learning comes from formal learning experiences, be it an online course, a classroom, or tutoring; 20% of what we know comes from informal learning through peers and our community; and the vast majority of our learning, 70%, comes from actual, hands-on, direct experience.
This term is usually raised when a designer is conducting a needs analysis. “Let’s plan for how people learn according to research,” they say, and proceed to draw something like the 70/20/10 pyramid.
Designers talk about many types of analyses. Audience, Needs, Task, and Gap analysis are four common categories:
An audience analysis, or learner analysis, provides insight into the intended learners’ preferences, motivations, contexts, knowledge level, and so on.
A needs analysis (aka needs assessment) reveals the learning needs of a particular audience regarding a given subject matter.
A task analysis produces a step-by-step, detailed description of how a task is performed.
A gap analysis identifies discrepancies between how something is (for example, how a task is performed) and how it should be (for example, how the task should be performed for best results).
While this term is used in a few ways, it typically refers to the abundance of data collected by today’s learning management systems and other platforms. These data can be used to determine, for example, how a learner is interacting with a course
(which content, when, and for how long?) and their peers (how often and with whom?) and whether and how such interactions impact their learning outcomes (correlating interactivity with grades, for example). Check out this infographic for a handy overview. A very exciting aspect of this work is the notion of predictive analytics. It may be possible for organizations and learning institutions to predict, for example, which individuals are most likely to succeed in certain types of learning experiences. The Predictive Analytics Reporting (PAR) Framework is doing some particularly interesting work in this area.
Submitted by Julie Riley 9/1/17
You may read or hear that pedagogy means the science of teaching children while andragogy is the science of teaching adults. While that is technically true, the word pedagogy is often used in a more generic sense to mean the art and science of teaching, period.
Andragogy as a term was popularized by Malcolm Knowles and is associated with his principles of adult learning, which emphasize that adults learn best when:
they are involved in planning the learning
can learn through experiences
are learning topics that they can immediately apply to their lives or work, and
are provided problems to solve rather than just content to learn.
The most flexible type of digital learning, asynchronous experiences are those in which the learners are neither in the same time nor the same space (usually a self-paced online course in which each person accesses the material from their own location at the time most convenient to them).
An asynchronous learning experience is likely to include some form of content delivery along with one or more opportunities to practice application of new knowledge, as well as formative and/or summative assessments of the learning. Each of these components may be presented in a wide variety of ways, as today’s technologies enable multiple approaches including videos, animations, readings, streaming audio, and simulations. The nature and degree to which these elements are combined will depend upon the content and learning objectives.
Submitted by Julie Riley 9/1/17
Authentic assessment means having learners complete tasks that mirror what they’ll need to do in the real world.
For example, if nurses will need to be able to interview patients to obtain a medical history, an authentic assessment would be having them practice under observation with patients (or people acting as such) or completing simulated conversations online. A multiple choice quiz about interviewing techniques or dragging and dropping the components of a medical history would not be considered authentic assessments, but these types of evaluations can still be important as learners build foundational skills.
Submitted by Erin Ryan Casey 7/26/17
Start with the measurable goals or learning objectives, and plan backwards:
What is the desired result? What should a learner know (or be able to do) by the end of a course or module?
How will these outcomes be measured? What assessments will indicate success?
What knowledge or skill does a user need to reach these goals? Design learning activities and experiences based on the above.
Note from Bonnie: Backwards Design comes from McTighe and Wiggins’ Understanding by Design (UbD) framework. At Course Kitchen, we add a step before the first one listed above. Before identifying the desired result, we need to understand the learner. An audience analysis resulting in a designer’s empathy for the learner can go a long way toward producing the right learning experience.
Also called hybrid learning, blended learning aligns each learning objective with the best delivery approach. Some objectives will be most efficiently achieved in real time, whether in a physical (classroom) or virtual learning space (webinar). Others will be best met in a self-paced context, allowing the learner to consume, process, and reflect on the learning in their own time. Combining, or blending, these approaches in one learning experience is what gives this method its name.
A strategy in asynchronous learning that allows the learner to impact or choose their own experience. For example, if they are presented with a decision (“What would you do in this situation?”), the choice made can determine what happens next.
This approach can be used in a simulation to test the impact of various choices, to present a menu of content options for the learner to navigate in the order they choose, to present more or less advanced knowledge based on an assessment of learner readiness, and in many more scenarios. Branching is a key aspect of interactivity and personalized learning, and it can make an asynchronous learning experience very effective.
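Under the hood, a branching scenario is just a structure that maps each decision to the next node. A minimal sketch, with hypothetical content:

```python
# A minimal branching-scenario structure: each node pairs a prompt with the
# choices that lead to the next node. (Hypothetical content for illustration.)
scenario = {
    "start": {
        "prompt": "A learner fails the readiness quiz. What would you do?",
        "choices": {"review basics": "remediate", "continue anyway": "advanced"},
    },
    "remediate": {
        "prompt": "The learner repeats the foundational module, then retakes the quiz.",
        "choices": {},
    },
    "advanced": {
        "prompt": "The learner proceeds, but struggles with the advanced material.",
        "choices": {},
    },
}

def next_node(current: str, choice: str) -> str:
    """Return the node the learner branches to for a given choice."""
    return scenario[current]["choices"][choice]

print(next_node("start", "review basics"))  # -> remediate
```

Authoring tools hide this structure behind a visual editor, but the underlying logic is the same: the learner’s choice selects the next node in the graph.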
This is the process of dividing content into manageable sections aligned with particular objectives.
For example, if a designer is working on a project to teach people how to drive, they might divide, or chunk, the material to be learned into categories such as Readiness, Operation, Traffic, and Laws. Since each category is still quite broad, they may be further chunked such that Readiness includes “Checking Mirrors”, “Seat Adjustment”, “Safety Measures”, and “Assessing the Area”. Each of these chunks might be taught and tested individually before moving on to the next. In this way, complex or dense subject matter (“content”) can be systematically taught and assessed in an accessible manner. There are several strategies for chunking according to cognitive domain or other classifications. An expert designer will choose the approach that best aligns with the content and objectives.
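The driving example above amounts to a nested structure: broad categories broken into teachable, testable chunks. A sketch of that hierarchy (only Readiness is filled in, matching the example):

```python
# The chunked driving curriculum from the example above, as a nested structure.
curriculum = {
    "Readiness": ["Checking Mirrors", "Seat Adjustment",
                  "Safety Measures", "Assessing the Area"],
    "Operation": [],  # each category would be chunked further in the same way
    "Traffic":   [],
    "Laws":      [],
}

# Each chunk can be taught and assessed before moving on to the next:
for category, chunks in curriculum.items():
    for chunk in chunks:
        print(f"Teach and test: {category} / {chunk}")
```

Representing content this way makes the sequencing explicit and keeps each assessment tied to one manageable objective.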
Designers rarely operate within a vacuum. In addition to partnering with faculty, business leaders, and other stakeholders, the role of designer increasingly requires close collaboration with technologists, media and graphic professionals, prospective learners, writers, testers, and more.
Beyond leveraging each skill set for various aspects of the process, true collaborative design solicits the regular input and validation of all involved.
One model of collaborative design builds on ADDIE to accommodate contributions from a full range of stakeholders: the 5 D’s (Discovery, Design, Development, Delivery, and Debrief). Under the skilled leadership of an expert designer, this model fosters open and trusting partnerships within a cross-functional design team of widely varying expertise.
While design thinking has been prevalent in the fields of business and software development, the approach is getting a foothold in education. Design thinking prioritizes the experience and insights of the end-user in a process that moves iteratively through five actions: Empathize, Define, Ideate, Prototype, and Test.
An expert designer of learner-centered experiences essentially exercises design thinking in their work all the time, even if following a more traditional model such as ADDIE:
Whether it is referred to as analysis or discovery, the work always starts with attempting to understand and empathize with the needs of the learner and organization.
Interpretation of, ideation around, and experimentation with the results of the analysis is the heart of the design and development process. This cycle involves iterative rounds of prototyping, reviews and user testing, and modifications.
Evolution is ultimately the result of testing and evaluation; the purpose of measuring the efficacy of a learning experience is to make continuous improvements.
Submitted by Julie Riley 9/1/17
If you wait until the end of a learning experience to evaluate participants on what they’ve learned, that’s summative assessment. And it’s valuable. But it’s also valuable to include some low-stakes formative assessments along the way so that participants can know how well they’re learning and teachers can know how well they’re teaching and both groups can adjust accordingly.
See “Blended Learning”
Submitted by Julie Riley 9/1/17
Informal learning encompasses all the ways we learn outside of a formal learning experience. It’s usually unplanned and often impromptu.
Have you ever been stuck and asked your colleague for help? Or learned a new way of doing something by watching a YouTube video? That’s all informal learning. Organizations are becoming more interested in informal learning because it accounts for so much of how we learn. However, its very nature makes it difficult to track or evaluate. For more on informal learning, see the work of Jay Cross.
In the world of digital learning design, interactivity refers to the relationship between the learner’s decisions and what happens on the screen at given points throughout the learning experience.
If the online asset is responsive to a learner’s decisions such that what they choose will determine what happens next, as in a branching scenario, the asset is said to have a moderate to high level of interactivity. On the other hand, if a learner is not asked to make decisions, or their decisions do not have an effect on the outcome of the experience, the asset bears a basic, or perhaps moderate, level of interactivity. See the table below for a high-level view of this breakdown.
The most interactive learning experiences explicitly engage the learner in a process of decision making and feedback. The primary model for such instructional activity was developed by Michael Allen and is referred to as CCAF. In this sequence, a learner is provided with a Context in which to apply learning, an urgent Challenge, and an Activity in which to respond to the challenge. Feedback is then given to reinforce the learning.
The level of interactivity selected for certain elements within a learning experience, or for the experience as a whole, should be determined by a careful needs analysis conducted by an experienced online learning designer. Higher interactivity is not necessarily better, as some learning objectives can be accomplished most effectively without interactivity, via a simple text page or video. A learning designer can help match each learning objective with a strategy to meet the goal in the most effective, efficient and engaging manner.
When a designer says this, they are following the lead of many others in using Dr. Don Kirkpatrick’s name as a noun. In the 1950s, he developed what is now the primary model used for evaluating training and development programs in the modern workplace.
The model consists of four levels: Reaction, Learning, Behavior, and Results. The following table provides an outline of the purpose and best application for each level:
1 – Reaction
Question: Did they like the experience?
Measures: Learner’s reactions to content, relevancy, format, methods, etc.
When: Near the end of the experience
Who: Representative sample of the learners

2 – Learning
Question: Did they learn what you wanted them to learn?
Measures: Learner’s acquisition of knowledge and skills at the end of the learning experience
When: Pre and/or post experience; most useful for experiences aiming to equip participants with a certain level of skill or knowledge

3 – Behavior
Question: Has their behavior or performance changed as a result of what they learned?
Measures: Learner’s sustained behavior changes outside of the learning experience
When: Post-experience for each learner; only useful in conjunction with the data above
Who: All learners and, when possible, their supervisors

4 – Results
Question: Did the experience achieve the desired results for the organization?
Measures: Cost-benefit analysis to the organization
When: Post-experience for all learners; only useful in conjunction with the data above
Learning Management System (LMS)
Submitted by Erin Ryan Casey 7/29/17
A Learning Management System (LMS) is the hub of educational activity in a software platform. It presents and stores curriculum along with records of student progress, and it may be used for online, hybrid, and face-to-face education and training environments.
For learners, the LMS is where they access their courses, submit work, track progress, and interact with other users within a course. For instructors, it is the tool for delivering course materials, administering tests, and viewing learner activity within a course, including group discussions, assessments, and even how much time students actively spend on course content.
Bonnie’s Note: There is a longstanding debate in academia about the merits of a learning management system. Does such a system focus on management over and above learning? In today’s open ed tech landscape, is a consolidated platform necessary for effective collaborative learning? While LMS providers like Blackboard and Canvas aren’t likely to disappear any time soon, thought leaders such as Clark Quinn and Tony Bates have been provoking important conversation about the current nature of learning.
Learning styles are often raised as a question during a learner analysis. “Are you a visual, aural, or kinesthetic (experiential) learner?”, a designer might ask. The theory is that individuals have preferences for how they like to consume information, and a good learning experience will provide options for each “style”.
While this may be a strong design strategy in general (see “Universal Design for Learning” below), the theory of learning styles has been widely criticized as a “neuromyth”, a misconception of neuroscience, and debunking efforts are underway.
Levels of Evaluation
See “Kirkpatrick”
Formal learning experiences can take place in a variety of contexts, or modalities, each with its own affordances and challenges. The modality through which learning is delivered will depend upon the nature of the audience, content, objectives, and, often, the teacher or facilitator.
Common modalities include asynchronous online, synchronous online, team-based, 1:1, and the physical, face-to-face classroom. A multi-modal learning experience leverages more than one modality to provide a richer experience and/or tailor to the needs of multiple learners.
See “Modality” and “Blended Learning”
Objectives are clear, measurable, action-oriented statements of the intended learning or performance outcomes of your course. The value of well-written objectives lies in designing a learning experience that effectively aligns assessment and instructional strategies.
Well-written objectives inform the design of the entire experience and assessments, tell learners what to expect, and provide a benchmark against which they can self-assess. A common tool used in the crafting of learning objectives is Bloom’s Taxonomy. A particularly helpful version of the taxonomy is the 3D model from Iowa State University’s Center for Excellence in Learning and Teaching, which maps the taxonomy with cognitive and knowledge domains.
Selling platforms, learning platforms, software platforms, cloud-based platforms, mobile platforms, and of course, platform shoes. What is a platform? At its most basic, it is a place on which to build or produce.
In instructional design contexts, it is simply the “how” and “where” of presenting information and materials: an LMS, a physical classroom, social media, or a learning app that integrates with social media.
Submitted by Amy Burns 7/25/17
When I first became interested in instructional design, SCORM intimidated me. I knew it had something to do with technical specifications, but not much else. Shareable Content Object Reference Model (SCORM) refers to technical standards that allow online educational content to work with Learning Management Systems (LMS).
Assessing the effect of learning on participants is key; too often this is limited to a quick post-experience survey. These “smile sheet” surveys are problematic
because they only tell us how participants feel about the experience (Level One on the Kirkpatrick scale). They don’t allow us to measure how much learning actually took place, and whether the experience resulted in any new behavior or performance improvements.
If asynchronous learning experiences are those in which the learners are neither in the same time nor the same space, then synchronous experiences feature learners gathered together in one place at the same time. Synchronous learning experiences may take place in a physical, face-to-face classroom, or a virtual, online classroom such as a webinar platform.
Just as with asynchronous learning, synchronous experiences can be designed to varying levels of interactivity according to the needs of the audience, content, objectives, and instructor. A basic synchronous experience featuring low-to-no interactivity is, unfortunately, the most common type people are likely to have encountered: a talking-head lecturer, perhaps filmed from the back of a room or on a low-grade webcam, clicking through a slide deck while participants multi-task on their email. A more strategically designed experience may engage participants in a lively, moderated chat session in which the primary instructor elicits feedback, questions, and insights.
Universal Design for Learning (UDL)
Submitted by Amy Burns 7/19/17
This term can be confusing since many people think it relates only to making content accessible for those with disabilities or learning challenges.
According to CAST, a nonprofit group that has supported and promoted UDL since the 1980s, it is a “research-based set of principles to guide the design of learning environments that are accessible and effective for all”. While originally developed to utilize computer technology to support students with disabilities in the K-12 classroom, the principles and guidelines are now also used in higher education and workplace training. The UDL framework consists of guidelines to provide Multiple Means of Engagement, Representation, and Action & Expression that make learning accessible to anyone, anywhere.