The Trouble with Tribbles: Traditional Smile Sheets. Lovable? Or Exponentially Dangerous?

Smile sheets, happy sheets, reaction forms, response forms, learner evaluations, level 1’s. The same thing, different names.

In a 2016 ATD survey, 88% of respondents reported that their organizations used smile sheets, yet only 44% said their learning measurement efforts were supporting their organization’s learning goals.

In two meta-analyses—studies of many scientific studies—traditional smile sheets have been found to be virtually uncorrelated with learning results, with a correlation of r = .09. That’s like correlating the daily number of my footsteps with the number of Patti Shank’s social-media posts. Not highly related, with the slight negative correlation being due to me being riveted by Patti’s brilliance, which keeps me from walking away from my computer screen!

Smile sheets are ubiquitous, but they are clearly not effective—in their current form—for giving us feedback about the success or weaknesses of our learning interventions. And, aren’t we, as learning professionals, sort of charged with ensuring that what we’re doing is working? Shouldn’t we get good feedback and make improvements?

Maybe we should just throw them out… On the other hand, there are many reasons besides getting good feedback to use smile sheets. From my recently published book, Performance-Focused Smile Sheets: A Radical Rethinking of a Dangerous Art Form, I offer the following list, which I borrowed and modified from measurement expert Rob Brinkerhoff:

  1. Red-flagging training programs that are not sufficiently effective.
  2. Gathering ideas for ongoing updates and revision of a learning program.
  3. Judging strengths and weaknesses of a pilot program to enable revision.
  4. Providing instructors with feedback to aid their development.
  5. Helping learners reflect on and reinforce what they learned.
  6. Helping learners determine what (if anything) they plan to do with their learning.
  7. Capturing learner satisfaction data to understand—and make decisions that relate to—the reputation of the training and/or the instructors.
  8. Upholding the spirit of common courtesy by giving learners a chance for feedback.
  9. Enabling learner frustrations to be vented—to limit damage from negative back-channel communications.

In the book, I focus on the first four—the ones related to getting good feedback. I wrote the book because I think we can create better smile sheets. Not perfect smile sheets! There’s no such thing as a perfect measurement tool, and in the complex world of learning, this is doubly true. But organizations will still use smile sheets, so if we can make them better, we should. Also, as the list above shows, there are other reasons to use smile sheets.

To create better smile sheets—better in enabling feedback—we have two imperatives. First, we have to ask questions that give us information related to learning. Second, we have to ensure that our questions give us results that are more actionable. When a traditional smile sheet rates a course at, say, a 4.1, it causes two HUGE problems. First, it enables bias. There is no clear standard for whether a 4.1, a 4.3, etc. is acceptable or not, so we evaluate the number based on our biases. Second, these numeric responses also create paralysis within our organizations. Because we don’t know what a 4.1 means, we stick with the status quo, which too often means we stick with learning interventions that are not as effective as they might be.

For further reading before our chat, here is an article that describes what improved smile sheet questions might look like:


What can L+D learn from product management?

Today’s post is written by Holly MacDonald, #chat2lrn crew member and Chief Spark at Spark + Co located on an island off the coast of BC in Western Canada. Holly is an instructional designer, consultant, serial dog walker and a self-confessed whale nerd. Find her on Twitter @sparkandco.

According to Wikipedia:

Product management is an organizational lifecycle function within a company dealing with the planning, forecasting, and production or marketing of a product or products at all stages of the product lifecycle.

In L+D, we often focus on what goes into our instructional product (content), but less on WHO uses it, WHY they use it, WHEN they use it, etc. We tend to think of our work in terms of “projects,” not products, but what if we changed our perspective?

What if we developed instructional products? What lessons could we learn from product management?


Product managers are guided by the following principles:

  • Products have a limited life and thus every product has a life cycle.
  • Product sales pass through distinct stages, each posing different challenges, opportunities, and problems to the seller.
  • Products require different marketing, financing, manufacturing, purchasing, and human resource strategies in each life cycle stage.

Lessons for L+D

We could develop principles for our instructional products.

How do we develop instructional products to make maintenance or sustainment easier? Do we even consider that? Do we start a “project” thinking about its lifespan and how things might be different on launch than 2 years down the road? For our audience and for ourselves? Do we consider product roadmaps?


Product planning involves a relentless focus on the customer. Using tools like Customer Discovery, the product manager is always thinking about their customers and how to deliver their product to their customer segments. They often use techniques like the “Fuzzy Front End” (FFE), the conceptual idea stage of the product. Some also use the “Minimum Viable Product” methodology to test their design.

Lessons for L+D

This is analogous to our analysis phase; however, do we ensure that we define our customer on every instructional product? Do we truly define the problem that our instructional product will solve? Are we focused on our customers? Do we understand that our customers and users are not the same? Do we do an FFE? Could we adopt a Minimum Viable Product methodology?


Product managers know their competitors—who are yours? Who vies for your customer’s attention? They also scan the competitive landscape to determine what influences are at play: political, economic, social, and technological.

Lessons for L+D

What is going on in your “market” that you need to keep tabs on? Do you do any forecasting around external forces? Do we anticipate what our business/client is going to need in the future? Are we prepared to provide that?


Product managers of course spend a lot of time on producing their product. They use techniques like “design thinking.” Consider all of the things that are designed: teapots, cars, solar panels, chainsaws, electric cars, stand-up desks (and a bazillion other things). Take the lowly door. Even doors can be designed in a way that’s right or wrong. A door that isn’t designed well, one that gives you no clue whether to push or pull, is a “Norman Door.”

Lessons for L+D

Do we approach design in the same way? Do we look at the overall process of design from all angles? Can we resist the pressure to just jump in and start building? Do we have “safeguards” in place so we don’t build a “Norman Course”?


For decades there have been variations on the “Marketing Mix,” or the “P’s” of marketing: product, price, promotion, and place (in some versions a fifth P, profit, is added), with more variations for service businesses (adding physical evidence, people, and process to the mix) and still more for “digital products.” However you think about it, product managers use a model for marketing their products.

Lessons for L+D

Could we use some of the “P’s” for our instructional products, or adapt them to instructional products?

We’d love to know what you think. What CAN we in L+D learn from product management? Come and join us on April 7th to share your ideas, insights, questions, challenges and concerns.