If you have ever attended a training event, you have most certainly been presented with the opportunity to provide feedback to the trainer as the event concluded. More than likely, you have given your own learners an opportunity to provide you with feedback.
As a training professional, you innately understand that the post-learning survey is an important step in the instructional design process. The feedback from learners is intended to help instructional designers identify which activities the learners enjoyed, what concepts or topics they struggled with, and how much they feel they learned. Effective evaluation surveys allow you to analyze areas of the training where improvement is needed and can help you measure the overall effectiveness of your training programs. Post-learning surveys are so valuable that the ANSI/IACET 2018-1 Standard for Continuing Education and Training has an entire category covering the requirements for training providers to have processes for the evaluation of learning events.
The problem, however, is that retail and food service businesses have become aggressive about collecting feedback from their customers. In today’s environment, everywhere your learners go, after every encounter, they are asked to complete a survey and provide feedback. Requests to complete surveys have become ubiquitous, and their volume has grown exponentially, leaving your learners bored, tired, or simply uninterested in your survey!
When the course ends and you ask them to complete your post-learning event survey, the attendees sigh and attempt to leave. This “survey fatigue” results in low response rates, abandoned surveys, low data quality, and serious survey bias. So, how do you persuade your learners to complete your survey? By designing great surveys! How do you design great surveys? By following these four strategies:
Before creating the survey, make a list of the information you would like to learn from the responses. Some of these might be:

- Which activities did the learners find engaging?
- Which concepts or topics did they struggle with?
- How much do the learners feel they learned?
- How effective was the training at meeting its objectives?
These are just a quick handful of legitimate and necessary items that an instructional designer would want to understand in order to improve the course. If he asks one broad question for each goal, he may not get detailed enough information to make any adjustments. If he asks three or four detailed questions for each goal, his survey may become too long, and the learners’ survey fatigue will skew the results.
Instead of trying to collect all the information in one survey presented to all the learners, it may be better to create several surveys that focus on one or two areas each and rotate which survey is given. While no one group of learners will provide feedback on all the areas, a thorough and complete evaluation of the course will be collected from the community of learners.
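If you manage many course sessions, the rotation itself is easy to automate. Here is a minimal Python sketch; the survey names and the round-robin scheme are illustrative assumptions, not a prescribed design:

```python
from itertools import cycle

# Hypothetical focused surveys, each covering one or two evaluation areas.
FOCUSED_SURVEYS = [
    "Survey A: content relevance and difficulty",
    "Survey B: activities and engagement",
    "Survey C: pacing and delivery",
]

def assign_surveys(sessions):
    """Rotate the focused surveys across course sessions round-robin,
    so each cohort answers a short survey while, over time, the whole
    community of learners covers every evaluation area."""
    rotation = cycle(FOCUSED_SURVEYS)
    return {session: next(rotation) for session in sessions}

# Example: three sessions each receive a different focused survey.
for session, survey in assign_surveys(["Jan cohort", "Feb cohort", "Mar cohort"]).items():
    print(session, "->", survey)
```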
According to cognitive researchers[i], a learner answering a survey question treks through a five-stage process to accomplish the task:

1. Interpreting the question and deducing its intent
2. Searching memory for the relevant information
3. Integrating that information into a single judgment
4. Translating the judgment into one of the available response options
5. Editing the response for consistency, accuracy, and how it will be perceived

By developing a deep understanding of this journey, you can intentionally minimize the barriers at each of these mentally expensive stages.
As you can see, answering survey questions requires quite a bit of mental energy, and the process must be repeated for each and every question on the survey! To prevent survey fatigue, designers must deliberately prevent mental fatigue. Once you understand the thought process a learner works through to complete the survey, you can make design decisions that nudge the learner toward accurate completion rather than abandonment.
Survey questions can be either open-ended or closed-ended. Open-ended questions simply pose a question and allow a free-form response from the participant. Closed-ended questions, on the other hand, pose a question and provide a constrained set of options from which the respondent must choose. Neither question type is perfect; each has its trade-offs.
| | Open-ended Questions | Closed-ended Questions |
|---|---|---|
| Type of information | More qualitative in nature | More quantitative in nature |
| When to use | When the goal of the evaluation is more vaguely defined | When the instructional designer is looking for specific feedback |
| Burden on Survey Author | Relatively easy to write because there are no response options | Relatively difficult to write because an appropriate set of response options must be included |
| Burden on the Respondent | Takes more time and effort | Takes less time and effort |
| Burden on the Analyst | More difficult to analyze because responses must be transcribed, coded, and submitted to some form of qualitative analysis, such as content analysis | Easy to analyze because responses can be converted to numbers and entered into a spreadsheet or statistical software |
| Dependability of Results | More valid and reliable because the author introduces less bias | Less valid and reliable because the author’s choice of options can introduce bias |
Tip: Avoid scales with only numerical labels, as they leave too much room for interpretation. Instead, present respondents with verbal labels only, and convert them to numerical values during analysis.
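As an illustration, here is a minimal Python sketch of this approach, assuming a five-point agreement scale (the label set and scoring are illustrative), where the numeric values exist only in the analysis step:

```python
# Respondents see only the verbal labels; the numeric values are
# applied afterward, during analysis (this mapping is illustrative).
LIKERT_SCORES = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

responses = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"]
scores = [LIKERT_SCORES[r] for r in responses]
print(f"Mean rating: {sum(scores) / len(scores):.2f}")  # Mean rating: 4.00
```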
Once you know the goals of the survey, you can start to craft your questions. Long, complex questions force learners to expend more concentration, stretch out the completion time, and accelerate survey fatigue. Instead, questions should follow the BRUSO model[ii]; that is, they should be:

- Brief
- Relevant
- Unambiguous
- Specific
- Objective
Secondly, solicit feedback only on items you can actually adjust. For example, if the content of a course is mandated by a regulatory agency, then feedback on that content may be irrelevant because the instructional designer cannot change it. The lessons learned from the feedback should be actionable.
Example: “Describe the temperature of the room:”
Along these same lines, it is important to be clear-cut when developing the response options. If your question asks for a respondent to place themselves in a category, the category options need to be both mutually exclusive and collectively exhaustive. Mutually exclusive means that each option is distinct and does not overlap another option in the list. Collectively exhaustive means that all possibilities are covered in the options.
A common error occurs with timeframes, where one option’s range ends on the same value that begins the next option’s range. Take a look at this example:

How many years have you worked in the industry?

- 1 to 5 years
- 5 to 10 years
- 10 to 20 years
- 20 to 30 years
This list of options is not mutually exclusive; a person who has worked 5, 10, or 20 years does not know exactly where to categorize himself. A person who has worked 5 years could either categorize himself in the 1 to 5 years option or in the 5 to 10 years option.
Also, the list is not collectively exhaustive. A person who has worked in the industry for 4 months may be confused because there is no option for under one year. Likewise, a long-time industry veteran with over 30 years of experience would not be able to quickly categorize himself.
To fix this, it would be better to ask:

How many years have you worked in the industry?

- Less than 1 year
- 1 to 5 years
- 6 to 10 years
- 11 to 20 years
- 21 to 30 years
- More than 30 years
The possible confusion caused by overlapping timeframes has been removed and the missing timeframes have been added.
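If your analysis pipeline ever needs to bin respondents automatically, the same two properties can be checked in code. Below is a minimal Python sketch, assuming the corrected brackets above and treating tenure as completed whole years; the helper names are hypothetical:

```python
import math

# Hypothetical experience brackets, written as half-open intervals
# [low, high) over completed years, so every respondent falls into
# exactly one option (mutually exclusive) and no one is left out
# (collectively exhaustive).
OPTIONS = [
    ("Less than 1 year",    0,  1),
    ("1 to 5 years",        1,  6),
    ("6 to 10 years",       6, 11),
    ("11 to 20 years",     11, 21),
    ("21 to 30 years",     21, 31),
    ("More than 30 years", 31, math.inf),
]

def check_options(options):
    """Assert adjacent brackets meet exactly: no gaps, no overlaps."""
    for (_, _, prev_high), (label, low, _) in zip(options, options[1:]):
        assert low == prev_high, f"gap or overlap before {label!r}"

def categorize(completed_years, options):
    """Return the single option covering a respondent's tenure."""
    for label, low, high in options:
        if low <= completed_years < high:
            return label
    raise ValueError("options are not collectively exhaustive")

check_options(OPTIONS)
print(categorize(0, OPTIONS))   # Less than 1 year
print(categorize(5, OPTIONS))   # 1 to 5 years (no longer ambiguous)
print(categorize(35, OPTIONS))  # More than 30 years
```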
If you’ve struggled with getting learners to complete the post-learning survey, then hopefully these tips and strategies will help you design a post-learning survey with intentionality that will provide accurate feedback without over-taxing your learners.
[i] Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.
[ii] Peterson, R. A. (2000). Constructing effective questionnaires. Thousand Oaks, CA: Sage.
As the Vice President of Technology and Organizational Effectiveness, Randy is responsible for overseeing and implementing the technological solutions necessary to achieve the strategic and operational goals of the Board of Directors. He oversees the development of all web applications, like the public website, the IACET member portal, and the accreditation application submission and review modules.
With over 20 years of experience as a full-stack developer providing software and IT solutions in the non-profit and government verticals, he is an expert in the design, development, and implementation of membership management systems, focusing on tightly integrating technology platforms for associations.