Leicester City, Brexit and Pokémon Go: 2016 mid-year review

This week’s post is from #chat2lrn crew member Ross Garner, an Online Instructional Designer with GoodPractice in Edinburgh. 

2016’s been a crazy old year. First Leicester won the Premier League, then the UK voted itself out of Europe. Now, children and adults alike are walking in front of cars and crashing into lampposts as they use their phones to hunt virtual Pokémon.

If you’d put money on any of the above, you’d be very rich indeed.

But are we any wiser this July than we were back in January? Or has the unpredictability of the past six months shattered our confidence?

On this week’s #chat2lrn, we’ll be asking: how has this year been for you? How have your expectations compared to reality? How have your ideas changed? What has gone well? What failures have you learned from?

Here are three ideas to get you started:

We operate in complex systems

How did Leicester City overcome 5000-1 odds to top the Premier League? Sure, training played a part. But so too did management decisions, the culture at the club, the mistakes made by opponents, and no small amount of luck.

When you are designing learning interventions, how much do you consider the system within which you operate? Is training the answer, or are there other factors at play? Can the success of one team be replicated to another, or are other factors like environment, team dynamic or luck skewing the results?

In complex systems, where we have a big impact on some areas but less of an impact on others, do you need to nudge rather than lead?

Emotion trumps facts

Throughout the UK Brexit debate – and the US Presidential race – facts have been cast aside in favour of sweeping generalisations. Why do these generalisations stick? Because they chime with the real-world experiences of voters. Because voters have an emotional connection to the candidates and to the ideas.

When we’re developing a new learning initiative, is it enough that we think it will improve the performance of our colleagues or clients? Do our learners believe that? Does it make sense to them, in their context, without knowing what we know? How much do we consider our learners’ hopes, fears, or even their workplace happiness?

Fun matters

Pokémon Go had as many users in its first week as Uber had in seven years. It makes over $1 million in revenue every day. As we look at the seriousness of the world around us, it’s encouraging to see hundreds of people gather in one space to catch a Pikachu.

But how does this help us as learning and development professionals?

Well, it tells us that fun matters. Yes, we do a serious job. And yes, performance at work is important. But that doesn’t mean that developing a team, and striving towards a common goal, can’t be fun. What can we do to promote fun? Can fun improve productivity?

We’ll be discussing this, and your own ideas, at our #chat2lrn mid-year review. Thursday, August 28, at 8am Pacific, 11am Eastern, 4pm BST. See you there!

Virtual reality: Can it change how we learn?

This week’s post comes from #chat2lrn crew member Ross Garner. Ross is an Online Instructional Designer at GoodPractice and a member of the eLearning Network. You can reach him on Twitter @R0ssGarner

Virtual reality is back – and this time, it works

If you’ve spent any time on Twitter in the past couple of months, or have attended any Learning and Development conferences, you’ll be aware that the industry is abuzz with the news that virtual reality (VR) is about to go mainstream.

Forget the crummy graphics of the 1990s. For the first time, VR seems like it’s about to live up to its name. Realistic visuals and surround-sound audio are creating an immersive experience that can finally trick your brain into believing you are somewhere else.

Woman Using a Samsung VR Headset at SXSW. Image courtesy Nan Palmero on Flickr.

Facebook, Sony and HTC are all launching headsets later this year, and Google Cardboard has made it affordable to try VR in your home.

Meanwhile, companies like Magic Leap are raising millions in investment as they develop sophisticated augmented reality (AR) devices that combine simulated graphics with the world around you. Think Minority Report, or this YouTube demo.

But what does this have to do with L&D?

To quote blogger, speaker and #Chat2Lrn friend Donald Clark:

“In my 30+ years in technology I have never experienced a heat so intense and shocking as that I got when I first tried the Oculus Rift.

“As a learning professional, lots of applications flooded my mind. But more importantly, and this IS important, I thought of learning theory.

“The big problems in learning are:

  • attention
  • emotion
  • doing
  • context
  • retention
  • transfer

“This technology tackles these head on. We may be on the threshold of delivering educational and training experiences that are compelling and super-efficient, in terms of these positive attributes in learning.

“There’s also a bonus – this is a cool, consumer device that young people love. 2016 is only the start. VR is not a gadget, it’s a medium and a great learning medium.”

U.S. Navy personnel using a VR parachute training simulator. Image from Wikipedia.

So what’s next?

VR is already used to train soldiers, pilots and surgeons, but what applications can you think of for VR and AR?

Is this going to be a technology that L&D grabs and exploits? Or will the cost and difficulty of implementation leave us lagging behind the entertainment industry?

Join in using the hashtag #chat2lrn and discuss these and other questions on 11 February, 2016, 08.00 PST/11.00 EST/16.00 GMT.

Audience Analysis: Critical for Instructional Outcomes

Written by Patti Shank PhD, CPT
This post may not reflect the opinions of all chat2lrn moderators.

Audience analysis is part of the needs analysis process during instructional design. The purpose of audience analysis is to help us understand who we are dealing with (including the organizational system) and how to serve them most effectively.

What Happens During Audience Analysis?

Some of the things considered during audience analysis:

  • Target audience: Who is the target (primary) audience and any secondary audiences? What are their expectations and needs? What problems are they experiencing? What is their level of experience? How much will they participate? How much time do they have? How will we respond, with instructional and non-instructional interventions?
  • Environment analysis: The entire environment people operate in. Leadership, learning, performance, business, competitive, work, tools, the entire system…
  • Instructional analysis: What tasks need to be learned? Do people all know the same things? How quickly does the information change? Is this declarative or procedural information? Is this information that people need to memorize?
  • Technical analysis: Does this involve technology? Hardware/software?  Will it be changing? How does it tie into company infrastructure? Who will deal with the hardware and software? Does the audience have the ability and capacity to deal with the technology and keep up with it? Who will build and maintain it?

Why Should We Perform Audience Analysis?

In “The Science of Training and Development in Organizations: What Matters in Practice,” Eduardo Salas and his fellow authors proclaim that “decisions about what to train, how to train, and how to implement and evaluate training should be informed by the best information science has to offer.”

I write about the critical nature of needs analysis for good training outcomes, according to the research by Salas and his fellow authors, in my ATD Science of Learning Blog article, Science of Learning 101: The Latest Research on Needs Analysis and Learning Climate (https://www.td.org/Publications/Blogs/Science-of-Learning-Blog/2015/07/Science-of-Learning-101-the-Latest-Research-on-Needs-Analysis-and-Learning-Climate).

Below is Table 2 from the Salas paper (http://psi.sagepub.com/content/13/2/74.full.pdf+html?ijkey=g8tvuLmoeZfN2&keytype=ref&siteid=sppsi), which shows that needs analysis is the key factor for maximizing training outcomes before training.

Table 2

Table 3 of the paper, below, clarifies which needs analysis items are most critical.

Table 3

The very first item, Conduct training needs analysis, includes (emphasis is mine):

Determine what needs to be trained, who needs to be trained, and what type of organizational system you are dealing with.

Internal vs. External

Today’s post comes to us from #chat2lrn crew members, Meg Bertapelle & Holly MacDonald.

Meg is a Senior Instructional Designer of Clinical and Product Education at Intuitive Surgical, a medical device company which makes the da Vinci Surgical System. You can find her on twitter at @megbertapelle

Holly is the owner and principal consultant of Spark + Co, a boutique training company that provides custom training solutions to organizations for employees or customers. You can find her on twitter at @sparkandco  


Meg and Holly were chatting about the differences between internal and external L+D work, and captured some of their observations in this blog post.

What’s the biggest challenge you face as an internal L+D expert?

Meg: I would have to say that we run so lean sometimes that our team isn’t able to really do our best work under the timelines & sheer number and scope of projects assigned to us. Always having to compromise on the best solution to get an OK solution out the door eventually gets exhausting.

What’s the biggest challenge you face as an external L+D expert?

Holly: Typically the biggest challenge is communication. Working with such a range of clients – some of whom are brand new to e-learning, others who are familiar with it – means that we are constantly having to check assumptions, confirm things and keep those lines of communication open.

How do you deal with analysis as an internal/external?

Meg: Our fall-back position is always surveys and focus-groups, but sometimes the timeline of a given project just doesn’t allow for those methods, and we have to try to extrapolate information about the need from internal folks that work closely with the true audience. Our company just recently created a data analytics group that will work cross-functionally to gather what data we can directly from our products, and will advise on other ways to incorporate data gathering as learning experiences are designed and revised. I’m very excited about this because we might actually get real (not anecdotal) information about the gaps in our current materials and processes.

Holly: I think it’s easier as an external to do analysis, since you need to get information about the client and the learning need before moving ahead. I think as an external, you get more latitude to do an analysis. That being said, sometimes you find out that the problem is not a training one, and those are not conversations the client always wants to have. But if it won’t fix the problem, then they need to know.

What design challenges do you face as an internal/external?

Meg: Usually time is my biggest challenge here. I would LOVE to be able to design tons of scenario-based practice activities; link directly to resources; provide everything our learners need in an easily-accessible, SINGLE place; and provide just-in-time and performance support for a truly flexible and end-to-end solution to all of our challenges. It just ends up being impossible while also keeping up with the project load on our team.

Another big challenge for us is that in order to meet deadlines, especially for product-related training materials, we have to split up design & development work between team members, and then struggle with the lack of consistency in the end result.

Like Holly, we have to adhere somewhat to the company brand guidelines, but thankfully (!!) more of the general feeling rather than the “letter of the law.”

Holly: One is the constraints of the “brand guidelines”, where the client’s marketing team has decided to apply branding rules to elearning. This can really mean that you aren’t able to get as creative as you’d like. I usually try to find out if there’s a way to adapt the brand guidelines to elearning. To be honest, if not, then I’d actually consider walking away. If the branding overshadows the need to learn, then it can actually be an indicator of an organization that really doesn’t value learning.

The other common constraint is that the budget is not big enough to get custom design assets, so you head into your digital closet to see what you’ve collected and stockpiled over the years to use on the project. One other aspect that I’ve found challenging is to source great designers who get instructional design and/or elearning. I have found a few who kind of get it, but there is sometimes a tension around which designer knows best.

What implementation challenges exist as an internal/external?

Meg: Managing the different permutations of products released where & when – what system, what software version, where is it cleared, where is it launched, in what language… (did you hear that? it was my head exploding)

Holly: The LMS. That’s the biggest challenge we’ve faced with the implementation. Some clients engage us to work on their launch plan with them, but sometimes we hand it off to the LMS Administrator or IT department and that’s the end of it.

What do you wish you could do that you can’t as internal/external?

Meg: I think I would love to be able to say “no” to a project that I just don’t want to do. LOL 🙂 Honestly, since my biggest constraint is usually time, and I imagine that’s not that different in an external role, I’m not sure what else to wish for! Hopefully some of you in the chat will give me some good ideas that I can try to make happen internally 😉

Holly: I have been an internal before and I think the thing I miss the most is the ability to modify the program once it’s launched, or having a more flexible role to extend the program. As an external, you live and die by your scope and once the program is launched, it’s gone. We’re very lucky to have long term relationships with our clients, so we do get to do some of that with them. But, for some it’s a budget decision.

What do you think you could teach internal/external?

Meg: I have a lot of “tribal knowledge” of our business, so I think I could help an external person come up with a solution that would fit our organization, and make a business case for it. Sometimes the things that matter to the organization are not as visible to someone external.

Holly: After doing this for so long with many different clients, I think the thing I’ve really mastered is how to understand a client’s business quickly. I get to use my “ignorance card” constantly, and coming at things from a perspective where you know little or nothing gives you a unique point of view. I have one client who often says things like: “I love how you make us think about things in ways we haven’t thought before.” When you are internal, it’s much harder to maintain that perspective. You need to find ways to do that consciously, otherwise you just end up making assumptions.

What about you? What have you found to be the benefits and challenges of being either an internal or external learning expert?

Let’s discuss during #chat2lrn on Sep. 10th, 8:00 PDT/11:00 EDT/16:00 BST

Hope to see you there!

The Learning Trap: Why Satisfied Learners and Knowledge Retention Are Worthless

Ajay is a Chartered Professional Accountant and a Certified Training and Development Professional, but considers himself a Workforce Revolutionary. Ajay is a three-time published author, with John Wiley & Sons recently publishing his third book, “The Trainer’s Balanced Scorecard: A Complete Resource for Linking Learning and Growth to Organizational Strategy” (http://amzn.to/c3Qsk0). Training Magazine recognized his company CentralKnowledge (and LearningSourceOnline.com) as the 2008 Project of the Year for their work with Apple Inc. He is also a multi-award-winning writer, receiving the prestigious TrainingIndustry.com Readership and Editors’ Award in 2014 and 2015 for Editor’s Choice and the Top 10 most-read articles. Ajay regularly appears on the #1 Montreal Talk Radio morning show discussing workforce performance issues.


Learning practitioners are taught early, or dare I say brainwashed, to believe in the ‘essential’ four levels of evaluation. Many of us refer to these levels as Kirkpatrick’s Evaluation Model; it has been a cornerstone of every learning event and a foundation for many of the evaluation models that followed.

But let’s be honest: the unspoken truth is that the Kirkpatrick model is flawed. Yes, I dare say it out loud, and may the learning gods, and some of my peers, strike me down. While you pick your jaw up off the floor, the fact is that the evaluation method has some apparent issues. While the Kirkpatrick organization will not admit to this publicly (naturally, since it is the foundation of their revenue stream), they are attempting to ‘adjust’ it accordingly by repackaging it as the ‘New World Kirkpatrick’. This reminds us of an ‘All in the Family’ episode where Archie and Meathead ask what it means for a product to be new and improved: what was wrong with the original one? Was it old and lousy?

But I digress. Let’s review the four levels. Level one refers to learning satisfaction. Simply put, this is what learning practitioners refer to as the ‘smile sheet’. This learner feedback process asks everything from whether the learning met your needs to whether the lunch was adequate.

Level two speaks to learning retention or, simply put, do you remember what you are supposed to remember? Often this is assessed through some form of ‘testing’. While this is what many practitioners accept as learning success, the Kirkpatrick model assumes that if the learner remembers the knowledge, they will naturally apply it to their job. I’ll revisit this logic shortly.

Level three is about changing the learner’s behavior or, in layperson’s terms, skills application. This level is the first ‘holy grail’ for learning practitioners. The logic is that if the learner retains the knowledge from the initial learning process, their behavior will change and they will become more effective in their job. This sounds reasonable, and it correlates with level four.

Finally, achieving level four is, for learning practitioners, similar to winning the Super Bowl. This level refers to the learning effort having an impact on business and performance objectives. What the Kirkpatrick model implies is that if learning practitioners are able to connect their efforts to this level, they will gain the admiration of their business leaders. Essentially, this is the promise of demonstrating tangible results for your learning budget.

Now, the Kirkpatrick methodology sounds logical and simple enough that learning practitioners are able to buy into the process, but dig deeper and you will discover issues that undermine learning efforts.

To accept the premise of this post, you must first accept that learning in any organization is an internal business unit. Just like every other internal business activity, whether it is accounting, marketing, or HR, learning is held accountable to specific performance expectations, both for itself and for how it contributes to organizational results. You don’t have to accept this premise. But if you don’t, then you should also not question why your training budget gets reduced every year.

By accepting the reality that your learning efforts are part of the business and ultimately affect the business, hopefully positively, you begin to see learning from the perspective of your business leaders and business unit managers.

With that said, for any business, level one and level two are essentially irrelevant. Think about it. Why would leaders care whether their employees like the learning event (level one)? It has no bearing on the business or expected results. Level one smile sheets exist so learning practitioners can prove that they are actually doing something, which helps them avoid getting fired.

Every learning practitioner has done this at least once. They wave their smile sheet results at their leaders, hoping that this will validate their efforts, like a child seeking a parent’s admiration and trying to get their work put on the family refrigerator.

Don’t believe that level two is any better. Like level one, your leaders couldn’t care less whether employees can actually remember any of the skills they learned. As with the smile sheet, learning practitioners are quick to wave their successful ‘test’ results in their leaders’ faces. The problem with level two ‘learning retention’ is that, more often than not, the results are inaccurate or invalid. Why? Essentially, practitioners ‘game’ results in their favor, the knowledge tested is often irrelevant to changing learner behavior, or worse, the skills tested are not applicable to the job. Whatever the reason, the practitioner’s goal is a futile attempt to prove to leaders that their efforts are close to being effective.

Level two is as irrelevant for the business as level one. What your leaders expect is that employees actually apply the skills on the job. Their logic, which many practitioners ignore, is that if an employee is applying a new skill or knowledge that improves their performance, it will consequently improve the organization’s performance.

Fundamentally, leaders are concerned solely with levels three and four. In reality, this is all you should be concerned about as well. Regretfully for the Kirkpatrick model, there are still concerns that practitioners must be made aware of. Even Kirkpatrick found flaws and hence developed a ‘new world model’, but let’s not get into that now.

At level three, the need to change behavior is not as relevant as the need for leaders to see the actual application of knowledge and skills. As any qualified psychologist will tell you, changing human behavior is something that happens gradually over time, not something any type of training effort can accomplish successfully on its own.

Simply put, your leaders see level three evaluations as the vehicle to meet pre-established performance metrics, not necessarily to change employee behavior. The question we are asked by practitioners is, “How do we connect to level three expectations?” The answer is quite simple. First, don’t create new learning measures to prove your efforts are effective. Your leaders and business unit managers have their performance metrics already set. All you need to do is partner with the business units, learn about their performance expectations, and then proactively work with them to conduct a needs assessment to determine the required skills that will help contribute to achieving their performance metrics.

Finally, level four is what every practitioner strives to achieve. Keep in mind that while level four is what your leaders expect, they don’t expect every training effort to meet it. And for those initiatives that must achieve level four expectations, you are not alone in your effort. Your leaders don’t expect learning to be the sole hero. Recognize that when attempting to impact business results, you must take into account the involvement of other internal activities.

Your leaders will never believe that your ‘level 4’ achievement is only a result of your learning solution. It is a cross-functional effort involving many internal business processes. So take credit when it’s due, but also give credit to those who deserve it. This will build your business impact credibility and ensure sustainable leadership support for learning.

Finally, never, ever go to your leaders and refer to the Kirkpatrick four levels. They won’t understand what you are talking about and frankly don’t care about your evaluation methods. Just sayin’.

Join #chat2lrn to share your views and thoughts on “The Learning Trap” Thursday 13 August 8.00 PDT/11.00 EDT/16.00 BST

Revolutionize Learning and Development

#Chat2lrn is delighted to have a guest post from Clark Quinn. Clark Quinn, Ph.D., is a recognized leader in learning technology strategy, helping organizations take advantage of information systems to meet learning, knowledge, and performance needs. His approach is learning experience design, combining what we know about how people think and learn with comprehension of technology capabilities to meet real needs. He combines a deep background in cognitive science, a rich understanding of computer and network capabilities reinforced through practical application, considerable management experience, and a track record of strategic vision and successful innovations. He is a well-regarded speaker and author on human performance technology directions. You can follow Clark on Twitter: @Quinnovator. See more of Clark’s views on this subject in his book Revolutionize Learning & Development.

Is Learning & Development achieving what it could and should? The evidence says no. Surveys demonstrate that most L&D groups acknowledge that they are not helping their organizations achieve their goals. It’s worse than the cobbler’s children: the cobbler at least got others shod, but here we’re not getting ourselves effective enough to help anyone else. Where are we falling apart?

My short answer is that we’re trying to use industrial age methods in an information age. Back then, one person thought for many and our training was to get people able to do rote tasks. There wasn’t a real need for knowledge work, and we were happy to filter out those who couldn’t succeed under these conditions. In this day and age knowledge work is most of what contributes to organizational success, and we want to foster it across the organization.

To succeed, there are a few things we need to get into alignment. The simple fact is that much of what is known about how we think, work, and learn isn’t being accounted for in L&D practices. We use courses to put information into the head, but there’s clear evidence that our thinking is distributed across information in the world. It’s also hard to get information into the head. So we should be focusing on putting as much information into the world as we can. We also now know that the way to get the best outcomes is to get people to work together, and that silos and hierarchies interfere. If we want the best outcomes, we want to facilitate people working and playing well together. Finally, we know that learning should involve models to guide performance, be emotionally engaging, and have sufficient, appropriate, and spaced practice. All of this is antithetical to so-called rapid elearning.

Underpinning this is the fact that we’re measuring the wrong things. We’re out of alignment with what the business needs; when we’re measuring how much it costs per seat per hour, we’re worrying about efficiency, and we’re paying no attention to effectiveness. It’s taken as a matter of faith that ‘if we build it, it is good’, and that’s empirically wrong.

Quite simply, we need a revolution: a fundamental shift in what we value and what we do. It’s not a complete redefinition of what we do; courses, for example, are still a viable tool, but they’re just one part of a bigger picture. There are two things organizations need: optimal execution of the things they know they need to be able to do, and continual innovation to adapt to an increasingly complex environment. Courses are only a part of the first, and essentially irrelevant to the second. We need to incorporate performance support, for one thing, and sponsoring innovation is about facilitating communication and collaboration. That comes from using social media (all of it, not just the technology) in appropriate ways.

The upside is big. We can, and should, be the key to organizational outcomes. We should be designing and fostering a performance ecosystem where people can work in powerful ways. We should be shaping culture to get a workforce that is motivated and effective. If we do so, we’re as fundamental to organizational success as anything in the business. I suggest that this is an achievable goal and emphasize that it’s a desirable goal.

To get there, you need to ‘think different’. You need to shift from thinking about learning and training, and start thinking about performance. You need to take development to mean facilitation. L&D should be Performance & Development, or even Performance and Innovation. That’s the promise, and the opportunity. Are you ready to join the revolution? Your organization needs it.

Let’s discuss in #chat2lrn this week.  See you on Thursday, May 7th 8:00 am PDT / 11:00 am EDT / 4:00 pm BST.

eLearning Trends: Looking Back and Looking Forward

Today’s post comes to us from #chat2lrn crew members, Andrea May and Lisa Goldstein. Andrea is the Vice President of Instructional Design Services for Dashe & Thomson in Minneapolis, MN and Lisa is the founder of www.LDGlobalEvents.com and currently works for Nielsen. You can find Andrea and Lisa on twitter at @Andreamay1 and @LisaAGoldstein

Happy New Year!

It is the start of a new year and we would like to spend a little time looking back at what we learned in 2014 and looking forward to what we hope to learn in 2015.

As we first look back at the year that was, some of the most common trends we saw discussed included MOOCs, video, performance support, social learning, adaptive learning and the science of learning. Read more here:

Learning Technologies 2014: Eight Key Trends for Learning and Development

Learning Technology Trends in 2014

As we look forward to 2015, some new trends on the horizon seem to be wearable learning, school as a service (SaaS), microlearning, personalization, and minimum viable courses (MVCs). Read more here:

Top 8 eLearning and EdTech Trends for 2015

What will be big in workplace learning in 2015?

Technology-Enabled Learning: What Will 2015 Bring?

Join us on January 15th for #Chat2lrn to discuss what we learned and accomplished in 2014 and what we hope to achieve in the coming year.

Benefits of PLN, Community and Professional Organizations

Today’s post comes to us from #chat2lrn crew member, Meg Bertapelle. Meg is a Senior Instructional Designer of Clinical and Product Education at Intuitive Surgical, a medical device company which makes the da Vinci Surgical System. You can find her on twitter at @megbertapelle

I just got back from attending the DevLearn conference and I’ve been struggling to pull together my “take-aways” for the last week (while also trying to catch up at work after being gone for a week). My gut was telling me that the best part was the people – but is that really OK? I mean, my company paid a lot of money to send me to this conference, and the best part was the people?

#chat2lrn pre-chat LIVE at #DevLearn 14
Thanks to @tomspiglanin for the picture via Twitter 🙂

For me, it really is true. The sessions might have been the spark, but the conversations and connections with all of these great, smart people really were the best part. I was able to connect with people in person that I normally only communicate with over the internet. While we have become great friends and I respected and trusted them all before I met them in person, the connection was much stronger, and our communication was more efficient, in person. We’ll leave THAT distinction for another chat (maybe talk to Helen Blunden), but my point is that meeting people in person (or seeing them again in person) this time has really brought home to me that I would not be anywhere NEAR as good an instructional designer, employee, problem solver – and even thinker – without my Personal Learning Network (PLN). Whoever first said “we are smarter than me” is SO right. (btw, apparently there’s a book – I haven’t read it, but I should put it on my list!)

I have always captured some great information and ideas from attending a conference. In fact, the first conference I went to was DevLearn in 2010. The sessions I went to and the people I met (I can’t possibly name them all) are the whole reason I am here today, part of the #chat2lrn crew, writing a blog for a Twitter chat where we can discuss and debate really interesting things with really smart people. The great ideas don’t wait for a conference though – people in the L+D community, in my PLN, come up with ideas, share interesting stuff and have wonderful debates and discussions on Twitter, or Skype, or LinkedIn, or Google+, and it’s happening ALL THE TIME. Without this community (that’s you!), I might still be creating really horrible training materials and calling them good! LOL

So thank you, all of you, for being the greatest benefit of all in my career. Thank you for allowing me to tag along – and possibly contribute in some small way – with your PLN. 

What about you? What have you found to be the benefits of having a PLN, or participating in a community or professional organization?

Let’s discuss during #chat2lrn on Nov. 13th, 8:00 PST/11:00 EST/16:00 GMT. Hope to see you there!

How Do We Do It? Crafting Decision-Making Practice in eLearning

Today’s post comes to us from #chat2lrn crew member, Meg Bertapelle. Meg is a Senior Instructional Designer for Clinical & Product Education at Intuitive Surgical. You can find her on twitter at @megbertapelle.

I think most of us would be happy to build more scenarios and decision-making practice activities into our elearning projects (time permitting) – if we knew up-front how to plan and execute them. Sometimes the hardest part is knowing where to start.

This week, I’ve asked a couple of our #chat2lrn community members to share their experiences crafting decision-making practice elearning activities. Fiona Quigley (@FionaQuigs), one of our #chat2lrn crew,  is Head of Learning Innovation at Logicearth, an Irish learning services company with a global client base. They specialise in the production of modern multi-device elearning content, learning technologies and training support services. Laura Payette (@ljwp) now works at Nielsen, but is coming off a three-year stint designing and developing elearning and corresponding product/marketing communication for KPA, a dealer services and internet marketing provider for over 5,000 automotive, truck, and equipment dealerships and service companies across the US. Her DOT Hazardous Materials course won the National Excellence in Training Award from the Automotive Training Managers Council in 2013.

We’ll get some great information from them in the form of an interview, then we can all discuss the pros & cons, and crowd-source some suggestions during the chat on Aug. 7th. I hope this will help us all take the initiative to help our audiences start applying their new knowledge & skills right away.


Q1: What kinds of decision-making practice activities have you been able to incorporate in your elearning projects?

Fiona: I’ve designed a lot of content for health and social care professionals, especially in the area of communication skills and policy compliance. I have designed scenarios that present a typical patient interaction and then ask the question – what would you do or say next? That would be what I call a level 1 decision making scenario.

Higher level scenarios are more immersive and, instead of leading a learner down a pre-planned path, they include random events that differ each time you ‘run’ the scenario. I’ve used these higher level scenarios with nurses and pharmacists in areas such as medication management – reducing errors – and also for dealing with patient complaints.

Laura: I spent three years building elearning for the automotive industry, particularly in the areas of environmental safety and compliance. There was a lot of regulatory information that had to be included and I was constantly challenged to find ways to make it relevant and interactive. One of the ways I did that was to inject scenarios into the training. Keep in mind that many of them were on the smaller side. In other words, I didn’t build a course around one big scenario with a million branching options (although that would’ve been so cool!). My content simply didn’t lend itself to that. Instead, I used smaller scenarios and sprinkled them in where they had the most impact.
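
For readers who build these activities in code rather than in an authoring tool, here is a minimal, hypothetical sketch of the distinction Fiona draws above: a level 1 scenario asks a fixed “what would you do next?” question, while a higher-level scenario injects a random event on each run and shows a workplace consequence rather than a bare “correct”. The scenario text, the event list and every name in the sketch are illustrative assumptions, not taken from any of the projects described in this interview.

```python
import random

# Hypothetical sketch of a decision-making practice activity.
# The fixed decision point below is a "level 1" scenario; the random
# event injection is what makes a scenario "higher level" - it plays
# out differently each time you run it.

DECISION = {
    "prompt": "A patient interrupts you mid-way through the drugs round. "
              "What do you do next?",
    "options": {
        "a": ("Stop and respond immediately",
              "You lose your place on the medication chart."),
        "b": ("Acknowledge them, finish the current patient, then respond",
              "You stay focused and return to them promptly."),
    },
}

RANDOM_EVENTS = [
    "A family member asks about a relative's medication.",
    "A colleague calls in sick and the ward is now understaffed.",
    None,  # sometimes nothing unexpected happens
]

def run_scenario():
    # Higher-level scenarios differ on each run: maybe inject an event.
    event = random.choice(RANDOM_EVENTS)
    if event:
        print("Unexpected event:", event)

    print(DECISION["prompt"])
    for key, (label, _) in DECISION["options"].items():
        print(f"  {key}) {label}")

    choice = input("Your choice: ").strip().lower()
    label, consequence = DECISION["options"].get(
        choice, ("Invalid choice", "Nothing happens; choose a listed option.")
    )
    # Show the real workplace consequence, not "Well done, correct!"
    print("Consequence:", consequence)

if __name__ == "__main__":
    run_scenario()
```

As Fiona notes under Q5 below, showing the consequence of each choice, rather than a generic “that is correct”, is what keeps a scenario like this credible.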

Q2: What kind of planning steps did you take before beginning to write the activities?

Fiona: You must talk to real people who do the jobs. Observing people making the real decisions is the gold standard – but it is often difficult to get the opportunity to do this. You need to be careful about who you choose as the Subject Matter Expert. Often SMEs are senior ‘expert’ people who are very far removed from day-to-day practice. To help people practice real decisions, you must talk to the people who make the everyday decisions. I also like to structure conversations with SMEs into what I call a ‘DIF’ analysis:

  1. Difficult – what, if anything, do you find difficult about this decision?
  2. Important – what is most important about getting this decision right or wrong?
  3. Frequent – what frequently comes up, e.g. common myths/misunderstandings, good practice?

Often competent practitioners won’t be aware of how they make good decisions. They are unconsciously competent; so it is the job of the ID to turn this tacit knowledge into explicit learning. Once that learning has been made explicit, you can more easily share that with others.

There is also a difference between formal and informal practice. There may be formal rules in place about how someone does their job – but many competent practitioners create shortcuts as they gain experience. Being able to identify these ‘tips and tricks’ is very useful learning in itself.

Finally, I would also advise talking to people at various levels of experience. For example, talking to a novice in the area will help you see the challenges first hand, rather than relying on the recall of someone more senior who may gloss over these challenges.

Laura: Research! Obviously, reviewing content from SMEs and talking to SMEs is critical but, like Fiona said, talking to people on the frontlines — or who at least aren’t far removed from the frontlines — really helps build context for understanding the challenges that employees face in doing their jobs. Sometimes that access can be hard to get; it was for me. If that’s the case, use everything you can to tease it out. Think of yourself like an investigative reporter. In my case, I had access to a robust database with thousands upon thousands of real-life examples that had been logged. I also had access to people who could elaborate on those examples to help fill in the gaps. I relied heavily on them and went back many times to ask additional questions.

Q3: How did you determine appropriate activities that would simulate the real-life application of your learning objectives?

Fiona: Again – much like the answer to Q2, observe decisions being made, find out how people actually make the decisions and base the activities on what actually happens in the workplace – not what SHOULD happen. Too often in elearning, we are forced to idealise and formalise the learning process, which then becomes so far removed from reality that it loses credibility with the target audience! You often see this in elearning content where the scenarios are so easy that you don’t actually need to complete the course to be good at them.

For example, when we designed a Medications Management programme, quite a few of our nurses said that one of the most difficult challenges they had was doing the ward round, handing out medication and being interrupted by patients or family members. They said they needed to concentrate and focus on making sure they gave out the correct medication – often a complex range of drugs for patients with very different medical needs. Another source of concern was worrying about patients who found medications hard to swallow and not having enough time to spend with them to help them. Together, we came up with guidelines about how to resolve these challenges and built in a scenario challenge around this.

Laura: The scenarios that I wrote were usually an outgrowth of the content development process. In other words, I didn’t approach a course with a specific scenario in mind. I generated them organically as I pulled the content together. It becomes apparent in talking to SMEs and frontline employees and in reviewing existing content where the gaps in understanding and practice are. Those gaps were usually the places I chose to insert scenarios because they illustrated the performance issue and allowed employees to think through things by answering the questions. In some cases with my content, there were right and wrong answers (remember, a lot of it was compliance based), but there were also usually shades of gray — and it was in those areas that I was able to challenge employees through scenarios to think about their actions and the ramifications of them.

Q4: How did you evaluate the effectiveness of your activities?

Fiona: I normally ‘dry run’ the decision plan with a selection of the target audience in a focus group setting. It is important to have a range of different people with different levels of experience. Role-playing the scenario, trying it on for size, works well to see if it is a realistic enough representation of the actual day-to-day-job. I normally use simple post-it notes to visualise the decision and focus on:

  1. Decisions – what is the actual decision to be made?
  2. Knowledge/Skill – what knowledge or skills do you need to make the decision?
  3. Actions – what specific actions do learners take to make the decision?
  4. Consequences – what are the results of each action, for both good and poor decisions?

Laura: I ran the activities by stakeholders and SMEs, as well as a core group of what I’ll call advisors for lack of a better word. (They were internal employees who interfaced directly with the external employees I built training for.) If they responded by effectively saying, “Oh, that really made me think about things differently,” or, “That really caught my attention,” then I knew I had hit the mark. If they didn’t, or if they were confused by what to do or how to respond, then I knew the scenarios needed more work. I know that’s vague, but there’s really no set recipe for scenario building; it’s very context specific. I also evaluated the activities by looking at actual evaluation responses from employees who took the course once it was deployed.

Q5: What made certain activities more effective/impactful than others?

Fiona: The more realistic the decision and scenario – the closer it is to the learner’s actual normal workplace activities, the better. Not only does the decision need to be realistic, but so does the consequence. We don’t want to use phrases like “Well done, that is correct” – rather, we need to show actually what happens in the workplace.

We have a challenge in elearning in that we usually have to design for a very generic audience. That means we lose the nuance and subtlety that actually drive high performance. If you look at what drives and helps people to perform at a high level, it is mainly about understanding the subtlety of the communication that goes on around you. It is also about reacting to unexpected happenings – like covering for a co-worker or working when you are understaffed. We need to make sure we build in this nuance and realism. To do this well, we perhaps need different types of scenarios to suit the different types of people in our target audience. As learning designers, we just can’t go on accepting a one-size-fits-all approach to our learners.

Also – a by-product of this analysis is that you need to be open to the fact that not all the challenges you uncover will be solved by training. For example, for our nurses, we identified that adding a simple “Drugs round in progress” notice to the drugs trolley helped to reduce the interruptions staff faced. Identifying these possible environment or process problems is a great benefit of doing good decision-making analysis. If you explain this to your client upfront, it can also be a great motivator for them to really engage with you.

Laura: Fiona makes some very good points here. I think including real consequences to real situations, and writing them in the parlance your target audience speaks, is key. If you fabricate your scenarios they won’t be authentic and people will dismiss them. They also have to be contextually bound. In other words, you may see a great idea for a scenario somewhere and think, “I’ll put that in my course!” But if you don’t mold it for your audience/content and their specific performance needs, it won’t be a great scenario for what you’re building. I think sometimes the scenarios that are most impactful are those that address gray areas — the places where employees are a little uncomfortable or uncertain — and the places where the biggest performance gaps are.

Q6: Please share your top tips/tricks for crafting decision-making practice activities.

Fiona: I think I have covered most of these in answering the questions above, but to summarise:

  1. Talk to real learners of different levels of experience.
  2. Be aware of the formal way of doing something versus the informal way.
  3. Help your SMEs make their decision making practice more explicit by asking good questions.
  4. Have a range of scenarios to suit different types of people in your target audience.
  5. Dry run your scenario plan with representatives of the target audience and adjust accordingly.
  6. Find out why people are making common mistakes e.g. is it a process or environment problem rather than a training problem?

Laura: Fiona’s tips are great. The only thing I’d add is be sure to craft your scenarios in the language your target audience speaks so they sound authentic.

Thank you to Fiona and Laura for sharing their insights. What about you? If you have some experiences and insights to share, or just want to hear what others may have to say, please join us Thursday, August 7th for #chat2lrn at 8am PDT, 11am EDT, 4pm BST.

What can our hobbies teach us about learning?

Today’s post is written by Meg Bertapelle, #chat2lrn crew member, instructional designer, mother, wife, crafter, and marching band geek who wishes there was more time in a day.
knitting - rainbow pom-pom scarf

I’m a knitter, and a crafter in general.  I grew up doing “crafty” things with my mom and my grandma (who lived with us starting when I was 5, and still lives with my parents).  I paint, draw, make jewelry and cards, and have attempted sewing. Pretty much, if it’s crafty, I am into it.  Of course, this can get a bit overwhelming 😉

digital scrapbooking

I have also, more recently, gotten into digital scrapbooking to help keep up with all the memories of my daughter’s early years that I want to save from the inevitable forgetful black hole that is my mommy-brain (and I am now obsessed, by the way!).

The first (and glaringly obvious) thing that my hobbies have taught me about learning is to just DO IT! Maybe have someone show you (or find a tutorial) the first time or two, then just get your hands dirty and try something.

Ask for help, or search Google or YouTube for tutorials, when you get stuck or feel like you could do better.

Go ahead and screw it up. If you can’t live with the mistake, start over & do it again, but don’t keep yourself from jumping in because you don’t want to “do it wrong.”

Don’t wait until you can “learn everything” about the hobby before you start – you can’t absorb the finer details until you try the basics.

The really great thing about learning and hobbies is that we are already interested in the topic, and motivated to learn. We don’t have to figure out some contrived relevance to our real lives; we are seeking out the knowledge and skills required to DO the fun stuff. Hobbies make us happy, and really that’s all we usually require of them. As human beings we are happier and healthier when we are challenged, so learning is a natural and integral part of having a hobby.

And wow, if you can love what you do, do what you love and actually make a living at it, how much fun would that be?

Check out how Logan LaPlante has constructed his own education around this kind of plan:

And just for fun, 18 Important Life Lessons to Learn from Knitting [BuzzFeed]

What are your hobbies?

What have they taught you about learning?

Is it anything you think you could apply to your work?

Tell us in #chat2lrn Thurs Jan 30 8am PST/11am EST/4pm GMT. See you there!