Are L&D Professionals ‘Cobbler’s Children’?

This post has been written by Lesley Price (aka @lesleywprice), Membership Services Manager for the Learning and Performance Institute. Lesley has been involved with #chat2lrn since its inception and would like to share her thoughts on L&D skills.

“The cobbler’s bairns are aye the worst shod.” I used to hear my granny say this a lot when I was growing up. In case you haven’t come across the saying before, let me explain…

In the days when shoes were all handmade, cobblers were always very busy. The story goes that one cobbler had so many orders that he worked very long hours for his paying customers and never found time to make or repair shoes for his own children. So the cobbler’s children ended up either going barefoot or wearing worn-out shoes.

But what, I hear you ask, does this old Scottish proverb have to do with Learning and Development? Last week, the Learning and Performance Institute (LPI) published the LPI Capability Map 6 Month Report. The LPI Capability Map is an online tool that allows learning professionals to self-assess their skills. It has 9 category areas containing a total of 27 skills, each with 4 levels of competence. The report is based on the self-assessments of 983 L&D professionals against the Capability Map, and it offers a unique first look at the skills of a profession that is vital to any organisation working in the 21st century.

The report certainly makes interesting reading and clearly shows that, as a profession, we feel confident in our face-to-face delivery and content design skills. These skills received the highest number of assessments (738 and 691 respectively) and also the highest average scores (3.36 and 3.08). However, there are a number of worrying skills gaps. Some of these gaps are in newer skills such as Collaborative Learning; although the average skill level there was well below the average for Live Delivery, a large number of people carried out assessments, which shows that people are at least beginning to develop these skills.

What is of more concern is the low number of assessments and low average scores in a number of other areas, such as Competency Management (349 assessments, average score 2.36), Change Management (360 assessments, average score 2.54) and Data Interpretation (319 assessments, average score 2.36). Also worrying is Communication, Marketing and Relationship Management (349 assessments, average score 2.64). Are we not communicating and building relationships with other parts of the organisation? Are we still designing courses or buying in training/elearning and expecting people to knock on our door? If learning professionals want to be agile business partners and have a positive impact on performance improvement, we need these kinds of skills; otherwise there is a danger we will get stuck in what Don Taylor describes as the ‘Training Ghetto’.

Which brings me back to the cobbler’s children. After reading the report, I started wondering why there were fewer assessments in the less traditional L&D skills, and why the average scores there are lower. Is it that we don’t think these skills are necessary? Maybe, but if we read the latest news in various industry publications and blog posts from experts in our field, we should know that is not the case. There is now a swell of feeling that L&D has to change its approach. So is it that L&D professionals simply don’t know this? It could be, given that Industry Awareness had only 379 assessments and an average score of 2.64.

Most people I know who work in L&D are passionate about learning. We spend most, if not all, of our time supporting others’ learning in one way or another. So are we so busy looking after everybody else that we don’t look after our own skills needs? Are we in fact like the cobbler’s children, and if we are, what are we going to do about it?

The summary of the LPI Capability Map 6 Month Report is available to the whole community free of charge; to get your copy, email . The full report is available to LPI members free of charge; click here to request your copy.

Beyond Kirkpatrick: Evaluating Informal Learning

This week Chat2lrn is happy to welcome guest blogger Barbara Camm. Barbara is the Vice President of Client and Staffing Services at Dashe & Thomson, Inc. in Minneapolis, Minnesota. She has been in the field of instructional design and performance improvement for over 20 years and has a special interest in evaluating both formal and informal learning. You can follow Barbara on Twitter at @cammbl.

My colleague Andrea May came back from the ASTD International Conference & Exposition (ICE), held in Dallas in May of this year, raving about a presentation on “Evaluating Informal Learning.” She knows that I have been blogging about learning evaluation for the past couple of years—mostly Kirkpatrick, but also Jack Phillips, Scriven, and Brinkerhoff. It turned out that the presenter was Saul Carliner, and that I had attended an earlier version of his talk at a monthly meeting of the Professional Association of Computer Trainers (PACT) in Minneapolis.

Carliner (“A Model for Measuring and Evaluating Informal Learning,” Academy of Human Resource Development Conference in the Americas, February 15, 2013) says that Kirkpatrick doesn’t work with informal learning. He says that the Kirkpatrick Four-Level model (reaction, learning, behavior, and results) is more appropriate for formal training events than for an informal learning process over which the employer has no control.

When considered for evaluating informal learning, Carliner says, the established Kirkpatrick model falls apart at every level:

1. Reaction: By nature, there are no objectives against which to test; much learning occurs unintentionally.
2. Learning: Much learning occurs either accidentally or from events intended for other purposes.
3. Behavior: By nature, there are no objectives against which to assess; informal learning processes are the ones used for transfer.
4. Results: Because most informal learning is individually driven, there are no business objectives against which to evaluate it.

He says that, instead, Learning and Development organizations within a company need to find out what resources are being used by employees to learn. This is Carliner’s framework for evaluating informal learning:

Individual Learning:
- Identifying what workers learned
- Identifying how workers learned it
- Recognizing acquired competencies

Learning across Groups of Workers:
- Determining the extent of use of resources for informal learning
- Assessing satisfaction with individual resources
- Identifying the impact of individual resources

The tools to evaluate informal learning include self-assessments, process portfolios in which individuals reflect on each item to identify strengths and weaknesses, and coaching/inventory sessions.

According to Carliner (“How to Evaluate Informal Learning,” ASTD Learning and Development Newsletter, September 20, 2012), Learning and Development organizations also need to know how employees are learning. This will ensure that employees can gain recognition and a place on the company advancement track based on skills they have developed informally. He says this can be accomplished by administering skill assessments and by entering completed training, certification exam results, and documentation of learning badges into employee education records.

Comparing these methods for assessing informal learning with the Kirkpatrick model, however, is like comparing apples to oranges. Finding out what resources individual employees are using to learn, and documenting it for purposes of recognition and advancement, seems instead to be a human resources function, and it is perfectly appropriate in that realm.

Other methods have been put forward for measuring informal learning. Dan Pontefract (“Time’s Up—Learning Will Forever Be Part Formal, Part Informal and Part Social,” Chief Learning Officer Magazine, February 6, 2011) has suggested starting with an end goal of achieving overall return on performance and engagement (RPE), then building social learning metrics and a perpetual, open 360-degree feedback mechanism.

Tom Gram (“Evaluating Training and Learning circa 2011,” Performance X Design, February 17, 2011) says that when learning is integrated with work and nurtured by conversations and collaboration in social media environments, evaluation should simply be based on standard business measurements of the achievement of team performance goals. He says that improved performance is the best evidence of team learning.

Finally, Don Clark (“The Tools of Our Craft,” Big Dog, Little Dog, February 13, 2011, and “Using Kirkpatrick’s Four Levels to Create and Evaluate Informal and Social Learning Processes,” Big Dog, Little Dog, February 22, 2011) says Kirkpatrick’s model has evolved into a backwards planning model (ordered from Level 4 through Level 1) that treats learning as a process, not an event. He says that the model does not imply strictly formal learning methods, but rather any combination of the four learning processes (social, informal, non-formal, and formal). He points out how closely Kirkpatrick’s evolved model fits with other models, such as Cathy Moore’s.

I agree with Clark that Kirkpatrick’s model, viewed as a process model, can become a way to implement informal, social, and non-formal learning as well as formal learning. However, I think that evaluating social learning is so new and such a wide open field that more evaluation models need to be explored.

Please join us to discuss Evaluating Informal Learning on Thursday, 11 July at 16:00 BST / 11:00 EDT / 08:00 PDT.