Design Shortcuts for Surviving in the Real World

When we study design, we learn rigorous methods based upon sound research and elegant theory. Then we hit the real world and are faced with deadlines, limited resources, and unrealistic demands. How do we cope?

We generally choose design shortcuts, heuristics that give us what we believe to be suitable approximations to what we’d prefer to do in a perfect world. These heuristics, experience-based solutions that may not be optimal but are good enough to get the work completed, often go unexamined.

Our major steps in designing learning, whether ADDIE or SAM, still require (or should require) determining meaningful objectives, creating essential practice, providing conceptual guidance and supporting examples, and creating a learning experience. However, we might not do the full cognitive task analysis, simulation-based practice, model-based concepts, story-based examples, and so on. Some of our shortcuts are well-founded, yet some might preclude us from exploring more effective alternatives. Either way, we need to be conscious of the tradeoffs.

For example, rapid e-learning tools make it easy to capture knowledge, present it, and test it. Yet how often is knowledge the real barrier to successful performance? Most of the research suggests that the emphasis should instead be on the ability to apply knowledge, not just recite it. Knowledge alone isn’t sufficient to meet workplace needs. Do we find effective ways to use these tools, or are we just putting content on pages?

We need to be conscious of the shortcuts we take and the tradeoffs they entail, and reflect from time to time on where our practice is relative to where it could, and should, be. What are the shortcuts we’re taking, and what are the assumptions they encompass?

This post was written by Clark Quinn, who is directing this week’s #chat2lrn tweetchat. Thank you, Clark, for your contribution!

Success – Building Legitimate Confidence

After our very interesting and insightful chat about Failure, we look at the other side of the coin this week – Success – and what it can do to help us and our work.

There is always something to learn from failure – true, but what do we actually learn from it? How can “failure” be used to help learning? That was the subject of the last vibrant #chat2lrn – if, like me, you missed it, the transcript is worth a few minutes’ study. It is also true that there is much learning involved in recovery from failure. But for me, that is a hard road, and I have always been more excited by learning from success – it is just easier and more motivating. I need to explain why!

Confidence and energy are two of the ingredients that go together to make up motivation – and most of us, in the right environment, are motivated to succeed. Is it not true that knowing what to do and how to do it, with confidence that one has the skills, is likely to get us into flow (http://en.wikipedia.org/wiki/Flow_(psychology)) – or, more colloquially, into the “zone”? The evidence suggests that when we are in “flow” we feel good about ourselves and are therefore prepared to be innovative and put more energy and effort into our work. Positive psychology has been exploited in many ways, but there is one fundamental truth: faced with a difficult task, most people will fall back on tried and tested methods born out of successful past experience in order to accomplish it. In the absence of specific skills for a situation, we are most likely to approach it with the skills in which we have confidence.

So what has this got to do with success? Knowing exactly how a success was achieved provides a base for replicating it confidently and at will. It builds confidence and motivation. It involves getting beyond generalities (“this was good work”, “the team worked well”) and into understanding the specifics that led to a success – either one’s own or one observed in others. What exactly was said and done that led to progress? How was cooperation obtained in achieving things with others? These are the kinds of questions that, if answered properly, lead to a bank of experience that can be turned into generic practices which can be called upon at any time to tackle a similar task.

The contrast is in the analysis of failure. Analysing failure will certainly tell a person what not to do next time – pitfalls to avoid, wrong paths to step past, and so on – but in the end it only says what to avoid in a similar circumstance. That does not build confidence and energy to tackle future tasks. The emotions are negative and are substantially about inability and lack of achievement. It takes a real effort to step back from that, extract the lessons and try to move forward again. Failure analysis is an ever-tightening spiral about what does not work. Ultimately it leads to paralysis.

How do we apply the principles of success analysis to our work in technology enabled learning?

• As IDs, being able to create and repeat successful design strategies saves time, reduces negative emotional energy and gives the SMEs with whom we work confidence in our professionalism
• Design that builds upon success in learning is likely to motivate learners to engage more deeply and to pursue learning further – hence the current interest in games-based learning and the spectacular results that can be achieved through it
• Enabling students to iterate their learning experiments from a point where they last experienced success speeds up learning
• Understanding what we have done that has helped move our enterprises forward is powerful in building our self-esteem as people who add value. Compare that with viewing ourselves as a cost to the business.

The more confident we are, and the more solid the bank of success we can draw upon, the more likely we are to be adventurous, courageous and innovative in our use of learning to help our enterprises. We will be able, with heads held high, to take our places alongside business leaders to offer solutions from our expertise in learning.

Please join us on Thursday 7 June at 16.00 BST / 11.00 EDT / 08.00 PDT to see how we can make Success Analysis a powerful theme in our work.

Looking forward to seeing you there!

Failure as a Learning Tool: 0/10

This week we are delighted to have a guest post from Fiona Quigley (@FionaQuigs).

Thomas Edison famously said:

“I have not failed 1,000 times.  I have successfully discovered 1,000 ways to NOT make a light bulb.”

Abraham Lincoln, Louis Pasteur, Thomas Edison, Walt Disney and JK Rowling, to name but a few, are as famous for their failures as their successes.

If you talk to successful people, most will tell you that they “failed” many times.  History is laden with spectacular failures, stories of triumph over adversity and succeeding against the odds.  Human nature draws us to these stories.  Seeing others’ vulnerabilities and the hurdles they have overcome somehow helps to inspire us.

But what of the 21st century education and business environment – does failure still play an important role in our learning?  Does our perception of failure and its value need a re-think?  If failure is such an important part of achieving success, how can we use it better to learn?

One of the challenges with using failure as a learning tool is the meritocracy we live in: we are judged on our individual achievements. From an early age, we are taught that grades matter. Being top of the class, getting into the best schools and graduating with honours drives how we learn through the formal school system.

A recent French research study found that telling 11-year-olds that the puzzles they were working on were difficult, and that they needed to practise, improved their success rate. The intervention of telling the students that learning is difficult, mistakes are common and practice is important was also found to improve their working memory.

Harold Jarche, in a blog post entitled “Three Principles for Net Work”, speaks of the changing nature of work and our increasingly complex business environment. Harold states that, due to the complex nature of business, “failure needs to be tolerated”. The tagline of Harold’s blog, “Life in perpetual Beta”, makes a lot of sense.

The changing nature of business is also reflected in changes to organisational structures. Over the last few years, many organisations have adopted a matrix or network structure, where ambiguous roles and uncertainty are part of everyday life. In these environments, people may have two or more managers, work in both physical and virtual teams, and often have to redefine their roles on an ongoing basis.

The skills of the 21st century worker are about negotiation, influence, collaboration and, often, compromise.  We don’t live in the black and white world that many of us were educated in as school children.

The idea of looking at failures and learning from them is worth exploring. Conferences such as TED (www.ted.com) and FailCon (www.thefailcon.com) encourage people to be open and honest about their ideas, struggles and successes. Shining a light on failure actually changes it into a feedback process.

It is also worth looking at the definition of failure.  We all make mistakes, but when does making a mistake result in failure?  Is it when we make the same mistake over and over without learning from it?

Human psychology is complex, and there are many reasons why we might repeat a pattern of behaviour that is less than positive. Paradoxically, perfectionism is a trait that can lead to a fear of failure which, in turn, can result in poor performance. People can get so stressed by the thought of getting it wrong that they may never start, procrastinate, or do the task half-heartedly because they “know” they will get it wrong.

Failure is tied up with judgement.  When you call yourself a failure, you are essentially judging yourself.  The word failure closes down thinking and leaves little room for overcoming problems and learning from them.  If you add our increasingly complex and ambiguous business environment to the mix, then perhaps the world we operate in may not be as ready to tolerate and benefit from failure as much as we need.

So it seems that our ability to learn can be significantly impacted by both our attitude to success and failure.  How can we embrace failure and integrate it into our learning processes?

To read more about failure as a learning tool:

Reducing Academic Pressure May Help Children to Succeed

http://www.apa.org/news/press/releases/2012/03/academic-pressure.aspx

Strategies for Learning From Failure – Harvard Business Review

http://hbr.org/2011/04/strategies-for-learning-from-failure/ar/1

Interpreting Successes and Failures: The Influence of Perfectionism on Perspective

http://www.eric.ed.gov/ERICWebPortal/search/detailmini.jsp?_nfpb=true&_&ERICExtSearch_SearchValue_0=EJ682722&ERICExtSearch_SearchType_0=no&accno=EJ682722

Harold Jarche – Three Principles for Net Work Blog

http://www.jarche.com/2012/04/three-principles-for-net-work/

Festival of Errors

http://www.guardian.co.uk/world/2010/jul/21/france-paris-festival-of-errors

Famous Failures

http://www.psychologytoday.com/blog/creative-thinkering/201111/famous-failures

Fiona is a freelance instructional designer and trainer based in Belfast, Northern Ireland, who has been working in learning and development for 15 years. You can contact Fiona via her blog, http://fqlearning.wordpress.com/, or by following her on Twitter (@FionaQuigs), where she is an active contributor.

Learning Measurement Means Getting into the Trenches

This week’s #chat2lrn is about measurement, and we are delighted to include a guest blog post from Kelly Meeker.

Measuring learning is, and always has been, a controversial and divisive issue. Senior operational managers are used to hard targets, and whilst learning professionals need and want feedback on the impact of their efforts, the “smileys” approach and even end-of-intervention testing are now widely accepted to be of little use in assessing the real impact of learning. However, measurement is a critical aspect of every professional’s work, and if we don’t measure, how do we know whether the learning intervention has had any impact and delivered bottom-line business benefits?

Kirkpatrick’s model was devised for face-to-face training, and some would argue that it is now outdated. It is also fraught with difficulty at its higher levels, as performance improvement is rarely the result of a single identifiable intervention. ROI is also very contentious: to calculate an accurate ROI of learning from formal provision and prove direct cause and effect, all other workplace variables would have to stay the same. It would require a ‘control’ group as well as an ‘experimental’ group, i.e. one group receives the formal learning provision (the ‘experimental’ group) and the other does not (the ‘control’ group). This is very often the model used during ‘pilot’ programmes, which, if successful, are then rolled out to a wider audience.
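To make the control-group logic concrete, here is a minimal sketch in Python of how such a pilot comparison might be expressed. Every figure in it – the performance metric, group scores, headcount and programme cost – is hypothetical, invented purely for illustration, not drawn from any real programme.

```python
# A minimal sketch of the control-vs-experimental comparison described above.
# All names and figures are hypothetical, for illustration only.

def roi_percent(benefit: float, cost: float) -> float:
    """Classic ROI formula: net benefit over cost, as a percentage."""
    return (benefit - cost) / cost * 100

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

# Hypothetical per-person monthly performance figures, measured after the pilot.
experimental_group = [1150, 1210, 1080, 1190, 1250]  # received the formal learning
control_group = [1020, 1060, 990, 1040, 1010]        # did not

# Attributing the difference in means to the intervention is valid only if,
# as noted above, all other workplace variables really did stay the same.
uplift_per_person = mean(experimental_group) - mean(control_group)

annual_benefit = uplift_per_person * 12 * 200  # hypothetical: 200 staff in the rollout
programme_cost = 150_000                       # hypothetical total programme cost

print(f"Estimated ROI: {roi_percent(annual_benefit, programme_cost):.0f}%")
```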

However, as we move so strongly towards a 70:20:10 model and recognise that most learning actually takes place on the job, does it mean that pilot programmes and control groups are really only suitable for formal learning interventions? And if so, is it possible to measure informal learning at all?

It may be far better to look for measurements rooted in the day-to-day workflow. What tends to work well is when line managers can clearly say that decisions made or actions performed would not have happened before the intervention. The key is therefore in choosing the metrics, and choosing them well. If you’re going to devote the time and energy to a learning programme, is it truly solving a problem that is important to your business?

Kelly Meeker, aka @opensesame, has this to say on the subject, and suggests that we need to get into the trenches!

Measurement is a challenge for learning and development professionals. Too often measuring learning outcomes falls into the pattern of sharing anecdotal evidence or only measuring production: “we’ve provided X resources” or “we’ve distributed Y widgets”.

Subconsciously, perhaps, developers like this kind of measurement because it measures only the outcomes that they can strictly control – what they do and make, day in, day out. What really matters for an organization, of course, isn’t measuring the number of courses the learning department produced, but measuring changed behaviors or outcomes.

This means L&D folks have to take a risk, and start measuring their own productivity by external factors. A successful learning initiative is measured by the change in behavior, situation or outcomes of the organization.

So what’s the challenge? First, identifying those desired outcomes – this can be harder than it sounds – and then identifying the incremental steps along the way to the desired end state. Second, assigning specific qualitative and quantitative values to both the baseline and the end state. This is probably just as hard as it sounds.

Theory of Change and Learning Measurement

The Theory of Change model is used by nonprofits and social change organizations to plan and target their programs. It also offers a helpful framework for planning and measuring learning and development: it supports productive change by forcing the developer to articulate a theory of change, a model by which the desired outcomes can be reached.

The first step is to gather baseline data that measures the current status or situation. The next step is to identify the desired end outcomes, and the final and most powerful step is to create a model describing how your initiative will change that situation. This breaks huge goals into incremental, achievable steps, making the process simpler to understand and simpler to measure.
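As a sketch of how those steps might be made concrete enough to measure, here is a small Python example. The metric names, baselines and targets are entirely hypothetical, chosen only to show the baseline-to-end-state structure described above.

```python
# A minimal sketch of recording a theory-of-change plan so that progress
# is simple to measure. All metric names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    baseline: float  # current status, measured before the initiative
    target: float    # desired end state

    def progress(self, current: float) -> float:
        """Fraction of the baseline-to-target gap closed so far."""
        return (current - self.baseline) / (self.target - self.baseline)

# Hypothetical incremental steps toward a larger organizational goal.
plan = [
    Metric("first-call resolution rate (%)", baseline=62, target=75),
    Metric("average onboarding time (days)", baseline=30, target=21),
]

# Hypothetical mid-initiative measurements.
observed = {
    "first-call resolution rate (%)": 68,
    "average onboarding time (days)": 27,
}

for m in plan:
    print(f"{m.name}: {m.progress(observed[m.name]):.0%} of the way to target")
```

Because progress is expressed against the gap rather than the raw number, the same check works whether the metric is meant to rise or fall.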

This, of course, is needs assessment. But it’s needs assessment with an open mind – that interests itself in more than just the traditional realm of L&D – and has a basis in data. Of course reaching agreement on all phases of this process requires group decision making, and that can be the biggest challenge of all. As Joitske Hulsebosch describes in this post on “Benchlearning”, it’s key to have an open mind, open discussion and avoid defensiveness on all sides.

The theory of change, once articulated, provides the metrics of your success. You will know you have succeeded in generating positive change once you can demonstrate the uptick in the metrics you planned to address.

Data’s Role in Decision Making

In summary, it’s essential to shift your focus from “What can I produce?” to “What can I change?” And those changes should be based on thoughtful analysis of the organization’s needs.

That means getting out of your office and into the trenches of your organization. Doing ride-alongs, observations and “undercover L&D professional” days. Be curious about what your organization does – and you’ll soon know where the gaps are. That’s the really valuable challenge for any knowledge worker.

Kelly Meeker is the Community Manager at OpenSesame, the elearning content marketplace, where she creates, curates and shares with the learning and development community. Find her on her blog at www.OpenSesame.com/blog, on Twitter (@OpenSesame) or at kelly.meeker@opensesame.com.

Finally, a question:

How long after an intervention can improvement or application still be identified and measured? For example, the airline pilot who learns an emergency drill in basic training, but whose skill only becomes evident way down the line when something happens.

The transcript for the chat is now available – just look under Transcripts and Summaries. Kelly also curated the content using Storify; you can find her summary in our Links and Resources section.