This week Chat2lrn are happy to welcome guest blogger Barbara Camm. Barbara is the Vice President of Client and Staffing Services at Dashe & Thomson, Inc. in Minneapolis, Minnesota. She has been in the field of instructional design and performance improvement for over 20 years and has a special interest in evaluating both formal and informal learning. You can follow Barbara on Twitter @cammbl.
My colleague Andrea May came back from the ASTD International Conference & Exposition (ICE), held in Dallas in May of this year, raving about a presentation on “Evaluating Informal Learning.” She knows that I have been blogging about learning evaluation for the past couple of years, mostly Kirkpatrick but also Jack Phillips, Scriven, and Brinkerhoff. It turned out that the presenter was Saul Carliner and that I had attended an earlier version of his talk at a monthly meeting of the Professional Association of Computer Trainers (PACT) in Minneapolis.
Carliner (“A Model for Measuring and Evaluating Informal Learning.” Academy of Human Resource Development Conference in the Americas. February 15, 2013) says that Kirkpatrick doesn’t work with informal learning. He says that Kirkpatrick’s Four-Level model (reaction, learning, behavior, and results) is more appropriate for formal training events than for an informal learning process over which the employer has no control.
When it is considered for evaluating informal learning, Carliner says, the established Kirkpatrick model falls apart:
| Kirkpatrick Level | Why It Doesn’t Work for Informal Learning |
| --- | --- |
| 1. Reaction | By nature, no objectives against which to test. Much learning occurs unintentionally. |
| 2. Learning | Much learning occurs either accidentally or from events intended for other purposes. |
| 3. Behavior | By nature, no objectives against which to assess. Informal learning processes are the ones used for transfer. |
| 4. Results | Because most informal learning is individually driven, no business objectives against which to evaluate it. |
He says that, instead, Learning and Development organizations within a company need to find out which resources employees are using to learn. This is Carliner’s framework for evaluating informal learning:
| Individual Learning | Learning across Groups of Workers |
| --- | --- |
| Identifying what workers learned | Determining the extent of use of resources for informal learning |
| Identifying how workers learned it | Assessing satisfaction with individual resources |
| Recognizing acquired competencies | Identifying the impact of individual resources |
The tools to evaluate informal learning include self-assessments, process portfolios in which individuals reflect on each item to identify strengths and weaknesses, and coaching/inventory sessions.
According to Carliner (“How to Evaluate Informal Learning.” ASTD Learning and Development Newsletter. September 20, 2012), Learning and Development organizations also need to know how employees are learning. This will ensure that employees can gain recognition and a place on the company advancement track based on skills they have developed informally. He says this can be accomplished by administering skill assessments and by entering completed training, certification exam results, and documentation of learning badges into employee education records.
Comparing these methods for assessing informal learning with the Kirkpatrick model, however, is like comparing apples to oranges. Finding out which resources individual employees are using to learn, and documenting that use for purposes of recognition and advancement, seems more like a human resources function, and it is perfectly appropriate in that realm.
Other methods have been put forward for measuring informal learning. Dan Pontefract (“Time’s Up—Learning Will Forever Be Part Formal, Part Informal and Part Social.” Chief Learning Officer Magazine. February 6, 2011) has suggested starting with an end goal of achieving overall return on performance and engagement (RPE), then building social learning metrics and a perpetual, open 360-degree feedback mechanism.
Tom Gram (“Evaluating Training and Learning circa 2011.” Performance X Design. February 17, 2011) says that when learning is integrated with work and nurtured by conversations and collaboration in social media environments, evaluation should simply be based on standard business measures of whether (team) performance goals are achieved. He says that improved performance is the best evidence of team learning.
Finally, Don Clark (“The Tools of Our Craft.” Big Dog, Little Dog. February 13, 2011 and “Using Kirkpatrick’s Four Levels to Create and Evaluate Informal and Social Learning Processes.” Big Dog, Little Dog. February 22, 2011) says Kirkpatrick’s model has evolved into a backwards planning model (ordered as Levels 4 through 1) that treats learning as a process, not an event. He says that the model does not imply strictly formal learning methods, but rather any combination of the four learning processes (social, informal, non-formal, and formal). He points out how closely Kirkpatrick’s evolved model fits in with other models, such as Cathy Moore’s.
I agree with Clark that Kirkpatrick’s model, viewed as a process model, can become a way to implement informal, social, and non-formal learning as well as formal learning. However, I think evaluating social learning is such a new and wide-open field that more evaluation models need to be explored.
Please join us to discuss Evaluating Informal Learning on Thursday, 11 July at 16:00 BST / 11:00 EDT / 08:00 PDT.