In a 2016 ATD survey, 88% of respondents reported that their organizations used smile sheets, yet only 44% said their learning measurement efforts were supporting their organization’s learning goals.
In two meta-analyses—studies of many scientific studies—traditional smile sheets have been found to be virtually uncorrelated with learning results, with a correlation of about r = .09. That’s like correlating the daily number of my footsteps with the number of Patti Shank’s social-media posts. Not highly related, with the slight negative correlation being due to me being riveted by Patti’s brilliance, which keeps me from walking away from my computer screen!
Smile sheets are ubiquitous, but they are clearly not effective—in their current form—at giving us feedback about the strengths or weaknesses of our learning interventions. And aren’t we, as learning professionals, sort of charged with ensuring that what we’re doing is working? Shouldn’t we get good feedback and make improvements?
Maybe we should just throw them out… On the other hand, there are many reasons besides getting good feedback to use smile sheets. From my recently published book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form, I offer the following list, which I borrowed and modified from measurement expert Rob Brinkerhoff:
- Red-flagging training programs that are not sufficiently effective.
- Gathering ideas for ongoing updates and revision of a learning program.
- Judging strengths and weaknesses of a pilot program to enable revision.
- Providing instructors with feedback to aid their development.
- Helping learners reflect on and reinforce what they learned.
- Helping learners determine what (if anything) they plan to do with their learning.
- Capturing learner satisfaction data to understand—and make decisions that relate to—the reputation of the training and/or the instructors.
- Upholding the spirit of common courtesy by giving learners a chance for feedback.
- Enabling learner frustrations to be vented—to limit damage from negative back-channel communications.
In the book, I focus on the first four—the ones related to getting good feedback. I wrote the book because I think we can create better smile sheets. Not perfect smile sheets! There’s no such thing as a perfect measurement tool, and in the complex world of learning, this is doubly true. But organizations will still use smile sheets, so if we can make them better, we should. Also, as the list above shows, there are other reasons to use smile sheets.
To create better smile sheets—better at enabling feedback—we have two imperatives. First, we have to ask questions that give us information related to learning. Second, we have to ensure that our questions give us results that are actionable. When a course is rated at a 4.1 on a traditional smile sheet, that number causes two HUGE problems. First, it enables bias. There is no clear standard for whether a 4.1, a 4.3, etc. is acceptable or not, so we evaluate the number based on our biases. Second, these numeric responses also create paralysis within our organizations. Because we don’t know what a 4.1 means, we stick with the status quo, which too often means we stick with learning interventions that are not as effective as they might be.
For further reading before our chat, here is an article that describes what improved smile sheet questions might look like: