This week's chat2lrn is about measurement, and we are delighted to also include a 'guest' blog post from Kelly Meeker.
Measuring learning is, and always has been, a controversial and divisive issue. Senior operational managers are used to hard targets, and whilst learning professionals need and want feedback on the impact of their efforts, the "smileys" approach and even end-of-intervention testing are now widely accepted to be of little use in assessing the real impact of learning. Yet measurement is a critical aspect of every professional's work: if we don't measure, how do we know whether a learning intervention has had any impact and delivered bottom-line business benefits?
Kirkpatrick's model was devised for face-to-face training, and some would argue that it is now outdated. Its higher levels are also fraught with difficulty, because performance improvement is rarely the result of a single identifiable intervention. ROI is equally contentious: to calculate an accurate ROI for formal learning provision and prove direct cause and effect, all other workplace variables would have to stay the same. It would also require a 'control' group as well as an 'experimental' group, i.e. one group receives the formal learning provision (the 'experimental' group) and the other does not (the 'control' group). This is very often the model used during 'pilot' programmes, which, if successful, are then rolled out to a wider audience.
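To make the control/experimental comparison concrete, here is a minimal sketch of the arithmetic involved. All figures, the value-per-point conversion and the programme cost are hypothetical placeholders, not real benchmarks; the point is only to show how the control group's change is subtracted out to isolate the intervention's effect.

```python
# Hypothetical sketch: comparing a pilot ('experimental') group against a
# 'control' group to estimate the impact of a formal learning intervention.
from statistics import mean

# Performance scores before and after the pilot period (invented data).
control_before = [62, 58, 71, 65, 60]
control_after  = [63, 59, 70, 66, 62]
pilot_before   = [61, 59, 70, 64, 62]
pilot_after    = [72, 68, 78, 74, 71]

def average_change(before, after):
    """Mean per-person improvement over the pilot period."""
    return mean(a - b for a, b in zip(after, before))

# The control group's change estimates what would have happened anyway;
# subtracting it isolates the effect attributable to the intervention.
effect = average_change(pilot_before, pilot_after) \
         - average_change(control_before, control_after)

# A crude ROI: monetary value per point of improvement vs programme cost
# (both figures are placeholders).
value_per_point = 500      # currency units per point, per employee
programme_cost = 10_000
benefit = effect * value_per_point * len(pilot_after)
roi = (benefit - programme_cost) / programme_cost
print(f"Estimated effect: {effect:.1f} points; ROI: {roi:.0%}")
```

Even this toy version shows why the approach is fragile in practice: it assumes the two groups are comparable and that nothing else changed in the workplace during the pilot, which is exactly the condition the paragraph above questions.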
However, as we move so strongly towards a 70:20:10 model and recognise that most learning actually takes place 'on-the-job', does that mean pilot programmes and control groups are really only suitable for formal learning interventions? And if so, is it even possible to measure informal learning?
It may be far better to look for measurements rooted in the day-to-day workflow. What tends to work well is when line managers can clearly say that decisions made or actions performed would not have happened before the intervention. The key, therefore, is in choosing the metrics, and choosing them well. If you're going to devote the time and energy to a learning programme, is it truly solving a problem that is important to your business?
Kelly Meeker, aka @opensesame, has this to say on the subject and suggests that we need to get into the trenches!
Measurement is a challenge for learning and development professionals. Too often measuring learning outcomes falls into the pattern of sharing anecdotal evidence or only measuring production: “we’ve provided X resources” or “we’ve distributed Y widgets”.
Subconsciously, perhaps, developers like this kind of measurement because it measures only the outcomes that they can strictly control – what they do and make, day in, day out. What really matters for an organization, of course, isn’t measuring the number of courses the learning department produced, but measuring changed behaviors or outcomes.
This means L&D folks have to take a risk and start measuring their own productivity by external factors. A successful learning initiative is measured by the change in behaviour, situation or outcomes of the organization.
So what’s the challenge? First, identifying those desired outcomes – this can be harder than it sounds – and then identifying the incremental steps along the way to the desired end state. Second, assigning specific qualitative and quantitative values to both the baseline and the end state. This is probably just as hard as it sounds.
Theory of Change and Learning Measurement
The Theory of Change model is used by nonprofits and social change organizations to plan and target their programs. It also offers a helpful model for planning and measuring learning and development. This model supports productive change by forcing the developer to articulate a theory of change, or a model by which the desired outcomes can be reached.
The first step is to establish baseline data that measures the current status or situation. The next is to identify desired end outcomes, and the final and most powerful step is to create a model describing how your initiative will change that situation. This breaks huge goals into incremental, achievable steps, making the process simpler to understand and simpler to measure.
This, of course, is needs assessment. But it's needs assessment with an open mind – one that interests itself in more than just the traditional realm of L&D – and has a basis in data. Of course, reaching agreement on all phases of this process requires group decision making, and that can be the biggest challenge of all. As Joitske Hulsebosch describes in this post on "Benchlearning", it's key to have an open mind, open discussion and avoid defensiveness on all sides.
The theory of change, once articulated, provides the metrics of your success. You will know you have succeeded in generating positive change once you can demonstrate the uptick in the metrics you planned to address.
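A simple way to operationalise "the metrics of your success" is to record, for each metric, its baseline, the target your theory of change commits to, and the value actually measured afterwards. The sketch below uses entirely hypothetical metric names and figures; it only illustrates how progress can be expressed as the fraction of the planned baseline-to-target change actually achieved (which works whether the metric should rise or fall, since planned and achieved change share a sign when you are moving toward the target).

```python
# Hypothetical sketch: tracking the metrics a theory of change commits to.
# metric name: (baseline, target, value measured after the initiative)
metrics = {
    "first_call_resolution_pct": (62.0, 75.0, 74.0),
    "avg_onboarding_days": (30.0, 21.0, 24.0),
    "safety_incidents_per_month": (5.0, 2.0, 3.0),
}

def progress(baseline, target, actual):
    """Fraction of the planned baseline-to-target change actually achieved."""
    planned = target - baseline
    achieved = actual - baseline
    return achieved / planned

for name, (baseline, target, actual) in metrics.items():
    pct = progress(baseline, target, actual)
    print(f"{name}: {pct:.0%} of planned change achieved")
```

Reviewing a table like this with stakeholders keeps the conversation on the agreed outcomes rather than on activity counts, which is exactly the shift from "what we produced" to "what we changed".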
Data’s Role in Decision Making
In summary, it’s essential to shift your focus from “What can I produce?” to “What can I change?” And those changes should be based on thoughtful analysis of the organization’s needs.
That means getting out of your office and into the trenches of your organization. Doing ride-alongs, observations and “undercover L&D professional” days. Be curious about what your organization does – and you’ll soon know where the gaps are. That’s the really valuable challenge for any knowledge worker.
Kelly Meeker is the Community Manager at OpenSesame, the elearning content marketplace, where she creates, curates and shares with the learning and development community. Find her on her blog at www.OpenSesame.com/blog, on Twitter (@OpenSesame) or at firstname.lastname@example.org.
Finally, a question:
Beyond what point in time after an intervention can improvement or application still be identified and measured? For example, the airline pilot who learns an emergency drill in basic training, but whose skill only becomes evident much later, when something actually happens.
The transcript for the chat is now available: just look under Transcripts and Summaries. Kelly also curated the content using Storify; you can find her summary in our Links and Resources section.