Instructional Design: an ongoing, iterative process

While instructional design (ID) matters most when creating and launching an e-learning course, it is not a one-time exercise. To get the results you expect, you need to monitor the course to ensure it is performing well. By gathering and analyzing the right data, you can understand how learners are responding, track their progress and collect their feedback on various parameters, and use those insights to improve the course continuously.

What parameters should be studied?

Through direct surveys and questionnaires, you can collect learner feedback at various points in the course on:

  • Learner engagement (or boredom)
  • Motivation

Remember that testing isn’t just a way to examine learners’ abilities and progress. It is also a test of the e-learning module itself. Analyze test results to determine whether:

  • All categories of learners – novice, intermediate and expert – are progressing similarly 
  • Most learners are able to understand and apply the concept to practical situations
  • Visual, auditory and kinesthetic learners are all progressing at a similar rate
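As an illustration, test scores segmented by learner category can be compared to check whether all groups are progressing at a similar rate. A minimal sketch in Python, using hypothetical score records (the categories, scores and the 10-point lag threshold are assumptions for illustration, not values from any particular LMS):

```python
from statistics import mean

# Hypothetical assessment results: (learner_category, score out of 100)
results = [
    ("novice", 62), ("novice", 70), ("novice", 58),
    ("intermediate", 75), ("intermediate", 81),
    ("expert", 88), ("expert", 84), ("expert", 90),
]

# Group scores by learner category
by_category = {}
for category, score in results:
    by_category.setdefault(category, []).append(score)

averages = {cat: mean(scores) for cat, scores in by_category.items()}

# Flag categories lagging well behind the overall average --
# a possible sign the module is not supporting them well.
overall = mean(score for _, score in results)
lagging = [cat for cat, avg in averages.items() if avg < overall - 10]
print(averages, lagging)
```

A lagging category is a prompt to revisit the instructional design for that group, not a verdict on the learners themselves.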

How do you gather this data?

There are, broadly, two categories of data that can be gathered:

  • Subjective user feedback 
  • Objective learner data 

Through surveys, questionnaires and conversation, you can collect feedback on a large number of parameters at different stages in the course. By segmenting the learners at the start of the module, it’s possible to correlate the data and gain fine-grained subjective feedback on how well the course is supporting each learner category.
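A sketch of that correlation step, assuming hypothetical segment labels and 1–5 survey ratings collected at the end of one sub-section (none of these names or values come from a real survey tool):

```python
from collections import defaultdict

# Hypothetical survey responses: (learner_segment, rating 1-5)
responses = [
    ("novice", 3), ("novice", 2), ("novice", 4),
    ("intermediate", 4), ("intermediate", 5),
    ("expert", 5), ("expert", 4),
]

# Aggregate ratings per segment to see which learner
# category the section supports well and which it does not.
totals = defaultdict(lambda: [0, 0])  # segment -> [sum, count]
for segment, rating in responses:
    totals[segment][0] += rating
    totals[segment][1] += 1

avg_rating = {seg: s / c for seg, (s, c) in totals.items()}
print(avg_rating)
```

The same grouping applies to any other per-segment question, so one segmentation done at enrollment can serve every survey in the course.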

Present brief multiple-choice surveys (a question or two at most) at the end of each sub-section, but take every opportunity to go beyond surveys for learner feedback. Have multiple conversations to gather subjective opinions on a wider variety of topics, going into detail on what’s working and what isn’t.

Take subjective input on everything you can – to name a few areas, this could include:

  • course navigability
  • readability of content
  • speed of loading
  • which device your users are using
  • their experience of the module on that device
  • favorite section and why
  • least favorite section

More objective metrics can be gathered from LMS data and analytics, such as:

  • completion rates over time
  • user progress (time spent on the course, any roadblocks, where they are fully engaged and involved)
  • user enrollment (for non-mandatory courses!)
  • user engagement
  • number of engaged and returning users (number of visits per user)
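Metrics like these can be derived from a raw LMS activity export. A minimal sketch, assuming a hypothetical export format of one record per visit (field layout and user IDs are invented for illustration):

```python
from collections import Counter

# Hypothetical LMS export: one record per course visit.
# (user_id, completed_course, minutes_spent)
visits = [
    ("u1", False, 12), ("u1", True, 30),
    ("u2", False, 5),
    ("u3", False, 8), ("u3", False, 10), ("u3", True, 25),
    ("u4", False, 3),
]

users = {u for u, _, _ in visits}
completed = {u for u, done, _ in visits if done}

# Completion rate: share of enrolled users who finished.
completion_rate = len(completed) / len(users)

# Returning users: anyone with more than one recorded visit.
visit_counts = Counter(u for u, _, _ in visits)
returning = [u for u, n in visit_counts.items() if n > 1]

# Average time spent per user, a simple engagement proxy.
avg_minutes = sum(m for _, _, m in visits) / len(users)
print(completion_rate, returning, avg_minutes)
```

Tracking the same numbers over successive reporting periods turns a snapshot into the trend data ("completion rates over time") listed above.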

In addition to the above two methods of data collection, assessment analysis and observation of learner behavior can yield valuable insights.

By continuously evaluating, analyzing and measuring course performance, you gain a clearer picture of where performance gaps exist. Through this iterative process, you can address those gaps and continuously improve the learning experience by updating the course’s instructional design.