Learning Lingo

In 2006 I was four years into my Master’s program in Adult Education at the University of Rhode Island and the end was in sight – just two more courses and a thesis. I had just resigned from the job I’d held for six years as a systems administrator and user support specialist at a small, Rhode Island-based research affiliate of the University of Massachusetts Medical School. I had gotten a new job supporting online training technologies at the mother ship, the UMMS campus in Worcester. I wanted to be an instructional designer (ID), and this position was a stepping stone in that direction. My new job brought with it an hour-long commute, an opportunity to apply the skills I was gaining in graduate school, and – as I learned on day one – a whole new vocabulary.

Here are some of the terms that were new to me at the time or that have confused me as I moved along in this buzzword-filled field. What terms would you add to the list?

2 Sigma Problem

If you hear designers toss out this term, they are likely referring to Bloom's Two-Sigma Problem: the research finding that one-on-one instruction (direct tutoring) is as much as two standard deviations (sigmas) more effective than conventional group instruction. This raises significant challenges and opportunities related to personalized learning, particularly with the affordances of today's educational technologies. For example, to what extent can a learning system function as a personal tutor? If a learner is engaging on their own with a self-paced course that is responsive to their needs and skill level, is that as effective as one-on-one tutoring with a human instructor? Can such systems be created and supported in a sustainable way? Some startups and design strategists are attempting to crack this nut as we speak, and the challenge holds great promise for the future of digital learning.
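The size of a two-sigma effect can be made concrete with a few lines of Python. This is only an illustrative sketch, assuming normally distributed scores (not part of Bloom's original analysis): a two-standard-deviation gain moves the average tutored student to roughly the 98th percentile of the conventionally taught group.

```python
from statistics import NormalDist

# Conventional group, standardized scores (mean 0, standard deviation 1)
conventional = NormalDist(mu=0, sigma=1)

# Bloom's finding: tutored students average about two sigmas higher
tutored_mean = conventional.mean + 2 * conventional.stdev

# What fraction of the conventional group does the average tutored student beat?
percentile = conventional.cdf(tutored_mean)
print(f"Average tutored student outperforms {percentile:.1%} of the conventional group")
# prints: Average tutored student outperforms 97.7% of the conventional group
```

In other words, the "average" student with a tutor performs like a top student in a traditional classroom, which is why the two-sigma finding looms so large in personalized-learning design.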


70:20:10

The theory is that people retain only 10% of what is learned in a formal learning experience, be it an online course, a classroom, or tutoring; 20% of what we know comes from informal learning through peers and our community; and the vast majority of our learning, 70%, comes from actual, hands-on, direct experience. This term is usually raised when a designer is conducting a needs analysis. "Let's plan for how people learn according to research," they say, and proceed to draw something like the following:


While there is a lack of empirical evidence to support this breakdown, there are organizations, resources, and even an online community devoted to the model.

Image Source: http://tom.spiglanin.com/2014/12/i-believe-in-the-702010-framework/


Analysis

Designers talk about many types of analyses. Audience, Needs, Task, and Gap analysis are four common categories:

    • An audience analysis, or learner analysis, provides insight into the learners' preferences, motivations, contexts, knowledge level, and so on.
    • A needs analysis (aka needs assessment) reveals the learning needs of a particular audience regarding a given subject matter.
    • A task analysis produces a step-by-step, detailed description of how a task is performed.
    • A gap analysis identifies discrepancies between how something is (for example, how a task is performed) and how it should be (for example, how the task should be performed for best results).


Analytics

While this term is used in a few ways, it typically refers to the abundance of data collected by today's learning management systems and other platforms. These data can be used to determine, for example, how a learner is interacting with a course (which content, when, and for how long?) and their peers (how often and with whom?) and whether and how such interactions impact their learning outcomes (correlating interactivity with grades, for example). Check out this infographic for a handy overview. A very exciting aspect of this work is the notion of predictive analytics. It may be possible for organizations and learning institutions to predict, for example, which individuals are most likely to succeed in certain types of learning experiences. The Predictive Analytics Reporting (PAR) Framework is doing some particularly interesting work in this area.


Andragogy

Submitted by Julie Riley 9/1/17

You may read or hear that pedagogy means the science of teaching children while andragogy is the science of teaching adults. While that is technically true, the word pedagogy is often used in a more generic sense to mean the art and science of teaching, period. Andragogy as a term was popularized by Malcolm Knowles and is associated with his principles of adult learning, which hold that adults learn best when:

  • they are involved in planning the learning,
  • they can learn through experiences,
  • they are learning topics that they can immediately apply to their lives or work, and
  • they are provided problems to solve rather than just content to learn.


Asynchronous Learning

The most flexible type of digital learning, asynchronous experiences are those in which the learners are together in neither time nor space (usually a self-paced online course in which each person accesses the material from their own location at the time most convenient to them). An asynchronous learning experience is likely to include some form of content delivery along with one or more opportunities to practice applying new knowledge, as well as formative and/or summative assessments of the learning. Each of these components may be presented in a wide variety of ways, as today's technologies enable multiple approaches including videos, animations, readings, streaming audio, and simulations. The nature and degree to which these elements are combined will depend upon the content and learning objectives.

Authentic Assessment

Submitted by Julie Riley 9/1/17

Authentic assessment means having learners complete tasks that mirror what they’ll need to do in the real world. For example, if nurses will need to be able to interview patients to obtain a medical history, an authentic assessment would be having them practice under observation with patients (or people acting as such) or completing simulated conversations online. A multiple choice quiz about interviewing techniques or dragging and dropping the components of a medical history would not be considered authentic assessments, but these types of evaluations can still be important as learners build foundational skills. 

Backwards Design

Submitted by Erin Ryan Casey 7/26/17

Start with the measurable goals or learning objectives, and plan backwards:

  1. What is the desired result? What should a learner know (or be able to do) by the end of a course or module?
  2. How will these outcomes be measured? What assessments will indicate success?
  3. What knowledge or skill does a user need to reach these goals? Design learning activities and experiences based on the above.

Note from Bonnie: Backwards Design comes from McTighe and Wiggins' Understanding by Design (UbD) framework. At Course Kitchen, we add a step before the first one listed above. Before identifying the desired result, we need to understand the learner. An audience analysis resulting in a designer's empathy for the learner can go a long way toward producing the right learning experience.

Blended Learning

Also called hybrid learning, blended learning aligns each learning objective with the best delivery approach. Some objectives will be most efficiently achieved in real time, whether in a physical (classroom) or virtual learning space (webinar). Others will be best met in a self-paced context, allowing the learner to consume, process, and reflect on the learning in their own time. Combining, or blending, these approaches in one learning experience has been shown to be more effective than any one approach on its own.

Yes, this was a real cake presented to Bonnie and team at a real celebration for the launch of the first blended learning program developed at a global financial services company back in 2008.


Branching

A strategy in asynchronous learning that allows the learner to impact or choose their own experience. For example, if they are presented with a decision ("What would you do in this situation?"), the choice made can determine what happens next. This approach can be used in a simulation to test the impact of various choices, to present a menu of content options for the learner to navigate in the order they choose, to present more or less advanced knowledge based on an assessment of learner readiness, and many more scenarios. Branching is a key aspect of interactivity and personalized learning, and can make an asynchronous learning experience very effective.
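A branching scenario can be sketched as a small graph of nodes and choices. The node names and prompts below are hypothetical, but the structure (each choice points to the next node) is the core of any branching design.

```python
# Each node pairs a prompt with the choices that determine what the learner sees next
scenario = {
    "start": {
        "prompt": "A patient reports chest pain. What would you do?",
        "choices": {"Call for help": "escalate", "Keep interviewing": "probe"},
    },
    "escalate": {"prompt": "Good call. Help is on the way.", "choices": {}},
    "probe": {"prompt": "The pain worsens. Now what?", "choices": {"Call for help": "escalate"}},
}

def run(node="start"):
    """Walk the scenario, always taking the first listed choice (demo only)."""
    path = [node]
    while scenario[node]["choices"]:
        node = next(iter(scenario[node]["choices"].values()))
        path.append(node)
    return path

print(run())  # prints: ['start', 'escalate']
```

Authoring tools hide this graph behind a visual editor, but under the hood a branching course is exactly this kind of structure, which is why adding branches multiplies both the design effort and the richness of the experience.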


Chunking

This is the process of dividing content into manageable sections aligned with particular objectives. For example, if a designer is working on a project to teach people how to drive, they might divide, or chunk, the material to be learned into categories such as Readiness, Operation, Traffic, and Laws. Since each category is still quite broad, they may be further chunked such that Readiness includes "Checking Mirrors", "Seat Adjustment", "Safety Measures", and "Assessing the Area". Each of these chunks might be taught and tested individually before moving on to the next. In this way, complex or dense subject matter ("content") can be systematically taught and assessed in an accessible manner. There are several strategies for chunking according to cognitive domain or other classifications. An expert designer will choose the approach that best aligns with the content and objectives.

Image Source: http://theelearningcoach.com/elearning_design/chunking-information/
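The chunked driving curriculum above can be represented as a simple nested structure. This is only an illustration of the idea; the chunk names come from the example in the text.

```python
# Broad categories mapped down to individually teachable (and testable) chunks
curriculum = {
    "Readiness": ["Checking Mirrors", "Seat Adjustment", "Safety Measures", "Assessing the Area"],
    "Operation": [],  # still to be chunked further
    "Traffic": [],
    "Laws": [],
}

# Teach and assess one chunk at a time before moving on to the next
for category, chunks in curriculum.items():
    for chunk in chunks:
        print(f"Teach and assess: {category} -> {chunk}")
```

The point of the exercise is that each leaf of the structure is small enough to teach and test on its own, which keeps the cognitive load per step manageable.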

Collaborative Design

Designers rarely operate within a vacuum. In addition to partnering with faculty, business leaders, and other stakeholders, the role of designer increasingly requires close collaboration with technologists, media and graphic professionals, prospective learners, writers, testers, and more.

Beyond leveraging each skill set for various aspects of the process, true collaborative design solicits the regular input and validation of all involved.
One model of collaborative design builds on ADDIE to accommodate contributions from a full range of stakeholders: the 5 D's (Discovery, Design, Development, Delivery, and Debrief). Under the skilled leadership of an expert designer, this model fosters open and trusting partnerships within a cross-functional design team of widely varying expertise.

Design Thinking

While design thinking has been prevalent in the fields of business and software development, the approach is getting a foothold in education. Design thinking prioritizes the experience and insights of the end-user in a process that moves iteratively through five actions: Empathize, Define, Ideate, Prototype, and Test. An expert designer of learner-centered experiences essentially exercises design thinking in their work all the time, even if following a more traditional model such as ADDIE:


    • Whether it is referred to as analysis or discovery, the work always starts with attempting to understand and empathize with the needs of the learner and organization.
    • Interpretation of, ideation around, and experimentation with the results of the analysis is the heart of the design and development process. This cycle involves iterative rounds of prototyping, reviews and user testing, and modifications.
    • Evolution is ultimately the result of testing and evaluation; the purpose of measuring the efficacy of a learning experience is to make continuous improvements.

Formative/Summative Assessments

Submitted by Julie Riley 9/1/17

If you wait until the end of a learning experience to evaluate participants on what they’ve learned, that’s summative assessment. And it’s valuable. But it’s also valuable to include some low-stakes formative assessments along the way so that participants can know how well they’re learning and teachers can know how well they’re teaching and both groups can adjust accordingly.

Hybrid Learning

See “Blended Learning”

Informal Learning

Submitted by Julie Riley 9/1/17


Informal learning encompasses all the ways we learn outside of a formal learning experience. It’s usually unplanned and often impromptu. Have you ever been stuck and asked your colleague for help? Or learned a new way of doing something by watching a YouTube video? That’s all informal learning. Organizations are becoming more interested in informal learning because it accounts for so much of how we learn. However, its very nature makes it difficult to track or evaluate. For more on informal learning, see the work of Jay Cross.

Instructional Interactivity

In the world of digital learning design, interactivity refers to the relationship between the learner’s decisions and what happens on the screen at given points throughout the learning experience. If the online asset is responsive to a learner’s decisions such that what they choose will determine what happens next, as in a branching scenario, the asset is said to have a moderate to high level of interactivity. On the other hand, if a learner is not asked to make decisions, or their decisions do not have an effect on the outcome of the experience, the asset bears a basic, or perhaps moderate, level of interactivity. See the table below for a high-level view of this breakdown.


The most interactive learning experiences explicitly engage the learner in a process of decision making and feedback. The primary model for such instructional activity was developed by Michael Allen and is referred to as CCAF. In this sequence, a learner is provided with a Context in which to apply learning with an urgent Challenge and an Activity in which to respond to the challenge. Feedback is given to reinforce the learning.


The level of interactivity selected for certain elements within a learning experience, or for the experience as a whole, should be determined by a careful needs analysis conducted by an experienced online learning designer. Higher interactivity is not necessarily better, as some learning objectives can be accomplished most effectively without interactivity, via a simple text page or video. A learning designer can help match each learning objective with a strategy to meet the goal in the most effective, efficient and engaging manner.

Image Sources: https://bonlinelearning.com.au/blog/are-we-there-yet-determining-elearning-development-time/; http://www.alleninteractions.com/elearning-instructional-design-ccaf


Kirkpatrick

When a designer says this, they are following the lead of many others in using Dr. Don Kirkpatrick's name as a noun. In the 1950s, he developed what is now the primary model used for evaluating training and development programs in the modern workplace. The model consists of four levels: Reaction, Learning, Behavior, and Results. The following table provides an outline of the purpose and best application for each level:

Level 1 – Reaction
  Value: Did they like the experience? Learner's reactions to content, relevancy, format, methods, etc.
  Timing: Near the end of the experience
  Participation: Representative sample of the learners

Level 2 – Learning
  Value: Did they learn what you wanted them to learn? Learner's acquisition of knowledge and skills at the end of the learning experience.
  Timing: Pre- and/or post-experience; most useful for experiences aiming to equip participants with a certain level of skill or knowledge
  Participation: All learners

Level 3 – Behavior
  Value: Has their behavior or performance changed as a result of what they learned? Learner's sustained behavior changes outside of the learning experience.
  Timing: Post-experience for each learner; only useful in conjunction with the data above
  Participation: All learners and, when possible, their supervisors

Level 4 – Results
  Value: Did the experience achieve the desired results for the organization? Cost-benefit analysis to the organization.
  Timing: Post-experience for all learners; only useful in conjunction with the data above
  Participation: Supervisors

Learning Management System (LMS)

Submitted by Erin Ryan Casey 7/29/17

A Learning Management System (LMS) is a hub of educational activity in a software platform. It presents and stores curriculum along with records of student progress, and may be used for online, hybrid, and face-to-face education and training environments. Learners can access their courses, submit work, track progress, and interact with other users within a course. Instructors use the LMS to deliver course materials, administer tests, and view learner activity within a course, from group discussions and assessments to how much time students actively spend on course content.

Bonnie’s Note: There is a longstanding debate in academia about the merits of a learning management system. Does such a system focus on management over and above learning? In today’s open ed tech landscape, is a consolidated platform necessary for effective collaborative learning? While LMS providers like Blackboard and Canvas aren’t likely to disappear any time soon, thought leaders such as Clark Quinn and Tony Bates have been provoking important conversation about the current nature of learning.

Learning Styles

Learning styles are often raised as a question during a learner analysis. "Are you a visual, aural, or kinesthetic (experiential) learner?", a designer might ask. The theory is that individuals have preferences for how they like to consume information, and a good learning experience will provide options for each "style". While this may be a strong design strategy in general (see "Universal Design for Learning" below), the theory of learning styles has been widely criticized as a "neuromyth", a misconception of neuroscience, and debunking efforts are underway.

Levels of Evaluation

See “Kirkpatrick”


Modality

Formal learning experiences can take place in a variety of contexts, or modalities, each with its own affordances and challenges. The modality through which learning is delivered will depend upon the nature of the audience, content, objectives, and, often, the teacher or facilitator. Common modalities include asynchronous online, synchronous online, team-based, 1:1, and the physical, face-to-face classroom. A multi-modal learning experience leverages more than one modality to provide a richer experience and/or tailor to the needs of multiple learners.


Multi-Modal Learning

See “Modality” and “Blended Learning”


Objectives

Objectives are clear, measurable, action-oriented statements of the intended learning or performance outcomes of your course. The value of well-written objectives lies in designing a learning experience that effectively aligns assessment and instructional strategies. Well-written objectives inform the design of the entire experience and assessments, tell learners what to expect, and provide a benchmark against which they can self-assess. A common tool used in the crafting of learning objectives is Bloom's Taxonomy. A particularly helpful version of the taxonomy is this 3D model from Iowa State University's Center for Excellence in Learning and Teaching, which maps the taxonomy with cognitive and knowledge domains:


Image Source: http://www.celt.iastate.edu/wp-content/uploads/2015/09/RevisedBloomsHandout-1.pdf


Platform

Submitted by Erin Ryan Casey 7/26/17

Selling platforms, learning platforms, software platforms, cloud-based platforms, mobile platforms, and of course, platform shoes. What is a platform? At its most basic, it is a place on which to build or produce. In instructional design contexts, it is simply the “how” and “where” information and materials are presented: an LMS, a physical classroom, social media, a learning app that integrates with social media.


SCORM

Submitted by Amy Burns 7/25/17

When I first became interested in instructional design, SCORM intimidated me. I knew it had something to do with technical specifications, but not much else. Shareable Content Object Reference Model (SCORM) refers to technical standards that allow online educational content to work with Learning Management Systems (LMS). SCORM creates a nice smooth road for online content to travel to computer-based systems that house and make the content available. With the development of mobile learning, another set of standards is emerging called Tin Can API or Experience API (xAPI). While some say that Tin Can will replace SCORM, they don't really do the same thing.
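To make the contrast concrete: where SCORM packages content for delivery and score reporting inside an LMS, xAPI records learning activity anywhere as "actor-verb-object" statements sent to a Learning Record Store. A minimal, illustrative statement might look like the sketch below; the learner name, email, and course URL are invented examples, while the verb URI follows the ADL verb vocabulary.

```python
import json

# A minimal xAPI ("Tin Can") statement: who did what to which activity
statement = {
    "actor": {"name": "Pat Learner", "mbox": "mailto:pat@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/interviewing-101",
        "definition": {"name": {"en-US": "Interviewing 101"}},
    },
}

# Serialized as JSON, ready to POST to a Learning Record Store (LRS)
print(json.dumps(statement, indent=2))
```

Because the statement is self-describing, xAPI can capture experiences SCORM cannot, such as watching a video on a phone or completing a task on the job, which is why the two standards complement rather than replace one another.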

Self-Paced Learning

See “Asynchronous Learning”

Smile Sheets

Submitted by Julie Riley 9/1/17


Assessing the effect of learning on participants is key; too often this is limited to a quick post-experience survey. These “smile sheet” surveys are problematic because they only tell us how participants feel about the experience (Level One on the Kirkpatrick scale). They don’t allow us to measure how much learning actually took place, and whether the experience resulted in any new behavior or performance improvements.

Dr. Will Thalheimer presents a research-based approach to more effective “smile sheets” in his book, Performance-Focused Smile Sheets.

Synchronous Learning

If asynchronous learning experiences are those in which the learners are together in neither time nor space, then synchronous experiences feature learners gathered together in one place at the same time. Synchronous learning experiences may take place in a physical, face-to-face classroom or a virtual, online classroom such as a webinar platform. Just as with asynchronous learning, synchronous experiences can be designed to varying levels of interactivity according to the needs of the audience, content, objectives, and instructor. A basic synchronous experience featuring low-to-no interactivity is, unfortunately, the most common type people are likely to have encountered: a talking-head lecturer, perhaps filmed from the back of a room or on a low-grade webcam, filtering through a slide deck while participants multi-task on their email. A more strategically designed experience may engage participants in a lively, moderated chat session in which the primary instructor elicits feedback, questions, and insights.

Universal Design for Learning (UDL)

Submitted by Amy Burns 7/19/17


This term can be confusing since many people think it relates only to making content accessible for those with disabilities or learning challenges. According to CAST, a nonprofit group that has supported and promoted UDL since the 1980s, it is a "research-based set of principles to guide the design of learning environments that are accessible and effective for all". While originally developed to utilize computer technology to support students with disabilities in the K-12 classroom, the principles and guidelines are now also used in higher education and workplace training. The UDL framework consists of guidelines to provide Multiple Means of Engagement, Representation, and Action & Expression that make learning accessible to anyone, anywhere. Click on the image below for details.

Updated UDL Guidelines
UDL Guidelines graphic organizer developed by the National Center on Universal Design for Learning
