Expertise has become increasingly advanced and increasingly essential in settings requiring skilled decision-making, particularly under time pressure and uncertainty. And yet expertise has come under greater and greater assault from a variety of communities. In this podcast, the world-renowned cognitive psychologist Gary Klein will review these attacks, explain their flaws, and describe tactics for countering the critics and for promoting expertise in organisational settings.
Nippin Anand 0:00
Welcome to another episode of Embracing Differences with me, Nippin Anand. Thank you for wanting to learn more than what you knew yesterday. Well, where do I start, because it's such a fascinating conversation. I'm just looking at the last one week and wondering how much the world has changed. In just one week: we already had a pandemic, we are witnessing climate change like never before, and now we even have a war and the political instability that it brings into our lives, at least here in Europe. I had two people cancel one of my learning sessions last Thursday, and that was on the first day of the war between Russia and Ukraine. And why? Because that was not a priority when there were other things that were more important. Of course, that is all understandable. But the interesting thing is that today we have become such an interconnected world that whatever happens in one part of the globe affects each one of us. We cannot conceive the future, we cannot predict what lies ahead, we cannot manage what we cannot see, and we don't even know what we cannot manage. And yet you speak to people, and you get a sense of this whole idea that we must manage as if we can control everything. Which really brings me to today's topic, the war on expertise, with the world-famous cognitive psychologist Gary Klein. We talk about the role of expertise in making sense of the uncertainty that we all face today, why expertise is under threat from different disciplines in science, why we need expertise, and what we can do differently to reinstate the role of expertise in our organisations. And I think it would be fair to start by making a confession, an admission, that I myself have an affinity for expertise, after being a practitioner for more than 12 years.
And also being a researcher for five years in this area of work. And one of the reasons, and this is just one reason, why I became so interested in the idea of expertise is because I think most people know much more than they can tell. That comes straight from the work of Michael Polanyi, and is often referred to as tacit knowledge. And one of the biggest problems with ignoring the role of expertise in our organisations, in our daily work, is that we rely so heavily on what people can tell, and then capture and codify that knowledge into procedures and ring-binder solutions, and miss out on so much of what people cannot articulate. And if the aim is to improve the quality of decision making in our organisations, I think we have only scratched the surface, because most decisions are not made from what people say, but from what they unconsciously believe and actually do. Which is where understanding the importance of expertise is central. So I think, and I hope, that this podcast will make sense. But like always, don't believe a word of what you hear from Gary, or from me. Keep an open mind and just enjoy the ambiguity and discomfort of not knowing enough; that is learning, if you are genuinely interested in learning something that you did not know yesterday. All right, we are live once again. I'm really sorry, we had some technical difficulties, but I'm glad that we are back again for those of you who have joined us again. We'll do a very quick introduction once again, because we really wanted to capture everything in one recording. And if some of you left earlier, please don't feel bad that you missed the beginning. So, yeah.
I am Nippin, the founder of novellus solutions, and I'm joined by Gary Klein, a very well renowned cognitive psychologist, the author of the book Sources of Power, and also the CEO of ShadowBox, a very innovative approach to decision making, I believe. And with that, I will hand it over to Gary. He will do this wonderful presentation on the war on expertise, how to prepare and how to lose. Gary, I'm sorry you have to do it once again, but here we go.
Gary Klein 5:27
Okay, thank you, thank you, Nippin, for inviting me to give this presentation, and I hope it will be useful to the audience. The title of the presentation is The War on Expertise: How to Prepare, and How to Win. Nippin said how to lose, but I'm more optimistic than Nippin might be. Okay. And the war on expertise is real. It may sound like I'm being dramatic here, but I'm not. There are five communities that are attacking expertise, and they're having an impact, including through social media, and it's a source of great concern. So that's why I need to sound this alarm: so you see what these communities are saying, understand what's wrong with their message, and then understand some strategies, because we're trying to build expertise. The agenda is: first I'll talk about these assaults on expertise; then I'll discuss very briefly the nature of expertise; and finally, some ideas for strengthening expertise. So, starting with the assaults on expertise. The assaults come from five communities. And Nippin, you're advancing my slides, correct? (I am, yes.) So this is the slide with the five arrows, called The War on Expertise, and here are the five communities that are attacking expertise. Here's the problem. The first one is evidence-based practices, most famously the area of evidence-based medicine. And the claim is that a lot of physicians, a lot of healthcare providers, could come up with ideas about treatments, which they recommended and put into practice, that weren't particularly good, weren't effective; sometimes they were counterproductive, sometimes they were dangerous. And so you needed to conduct carefully controlled studies to see if these treatments worked. And that was the basis of the evidence-based movement. And so the claim of this community is: instead of relying on anecdotes and intuition, we should rely on evidence and data. Now, I certainly agree with that.
There were plenty of inappropriate, unacceptable, ineffective remedies, and the collection of evidence to screen these out and to identify what worked and what didn't work is extremely important. I'm not denying that. But you need expertise in order to know how to handle the evidence. And the evidence-based community is way overstepping when they say: we can use evidence, and we don't have to rely on experts anymore. And I'll have a slide in a little bit showing the kinds of decisions that experts still have to make. So it is a great overstatement to say: let's rely on the evidence, and don't worry about the experts. It's completely misguided. The second community that's attacking expertise is heuristics and biases. This community catalogues the kinds of biases that people show; at the last count I saw, they were up to about 180 biases. And the claim here is that people suffer from these biases. Even experts are biased, so you can't trust experts; you have to trust careful analysis, because experts can get it wrong. And I agree experts can get it wrong. So can the analysis; there's no perfect solution. The problem I see with the heuristics-and-biases community is that they're clever enough to come up with experiments showing that if we use heuristics in certain conditions, the heuristics can get in the way and lead to poor outcomes. And they're very skilled at demonstrating that. But the implication that we therefore shouldn't use heuristics, because they can lead to bias, is simply wrong, because heuristics are rules of thumb that we learn from experience, and if you eliminated our heuristics, we would all be helpless. So heuristics are essential to performing complex tasks in ambiguous situations. Furthermore, attempts to debias people have notoriously failed, so that's not a solution either. And if we take the demonstrations out of the controlled laboratory, and we add things like context,
Gary Klein 10:16
then we find that, all of a sudden, people do much better, because they can use context as part of their expertise. And so the biases diminish, and in some cases disappear. So I think this community has overstated its claim by a great deal. Nevertheless, it gets a lot of publicity, and so that's part of the message, part of the assault on expertise, the war on expertise. The third community is the decision research community, starting probably in the 1950s with Paul Meehl and others.
Gary Klein 10:59
And they were able to show that if they studied experts making judgments, they could capture the way the experts made the judgments and pack it into an algorithm, a set of rules. And if you followed those rules, you could perform as well as the experts, and in many cases, in most cases, outperform the experts. And so the claim here is: we don't need experts; we have algorithms, and we can just follow the rules and forget about the experts, because the algorithms can do better. The problem here is: where do we get the rules? We got them from the experts. So if you never had experts to study, you wouldn't have any rules, and you wouldn't have any algorithms. The reason the algorithms can outperform the experts who fed into the rules is that the algorithms are consistent and reliable, and experts sometimes get tired, sometimes they get distracted. However, if you change the conditions, all of a sudden the set of rules becomes obsolete. You don't need to start over, but you need to quickly scramble to come up with new rules. Experts can easily adapt; experts are very resilient. And so reliance on rules and algorithms is going to make us vulnerable to changing conditions, and the world we live in is one of changing conditions. So I think this assault is misguided. The fourth community that's attacking expertise is a community of sociologists. And their claim is that expertise belongs to a community, a community of practice, and that to situate expertise in the head of one person is misguided, because it ignores the part the rest of the community plays. And certainly we can appreciate the value of teamwork, and communities of practice, and all of those kinds of supports that we get in our relationships with each other. Nevertheless, if you were going to make a decision, and you had a team that was going to advise you, would you rather it was a team of novices or a team of experts?
I would certainly prefer a team of experts to a team of novices. To say otherwise simply mischaracterizes what is going on and what is needed. So I think this community is overstating its attack. And the fifth community that's attacking expertise is the computer science community, most famously the artificial intelligence developers. And they have done wonderful things; they have made progress that I never would have expected 20 and 30 years ago. They were able to show that their AI programmes could beat humans at checkers, certainly, but then, to the surprise of many people, at chess: they could beat the world champion, Kasparov; Deep Blue was able to do that. And then, and I never expected this, they were able to become the best player in the world at Go, and that came as a big surprise, a very impressive feat. And even today, poker is a game where AI systems can now outperform humans. So the message is: AI is overtaking experts, and the future belongs to AI; we should put our investment into AI and not into experts, because AI can outperform the experts in just about any field. And that's simply wrong. Where the AI systems are doing so well is in nicely structured, well-defined games like poker and chess and Go, where the rules are clear and there's no ambiguity. But move out of those well-defined environments, into the messy and chaotic world that we live in, and all of a sudden the AI systems don't just start to flounder, they flounder very badly. And so there's a general feeling that AI has probably been oversold. And we see examples; a famous example is IBM's programme, Watson. And we remember all the commercials: Watson was able to outperform
Gary Klein 15:35
the humans at the game of Jeopardy, which was a very impressive feat. And so IBM marketed Watson as the future of the company, and as the forward-looking face of AI. But it hasn't worked out that way. Medical organisations that invested in Watson quickly became disillusioned, and in fact we hear that IBM is now selling off Watson, or has sold it off; it hasn't performed the way it was expected to. And I think this is what we're seeing again and again: some of this attack from computer science builds on a few examples, and over-generalises, and misses the difficulties of handling complex situations. The point of this slide is: the public doesn't appreciate these kinds of limitations of these five communities. It just knows that the media is captured by these five different attacks, any one of which seems serious enough; five of them together give a message: forget about expertise, we have better ways to approach things. And that's wrong, and that devalues expertise, and that creates problems for us. Next slide. We also see people claiming that expertise has a shorter half-life. I heard one financial manager saying: don't listen to the experts. He was aware of a number of cases where experts said this so-called innovation is never going to work, and in fact it did, and it was highly successful. So: don't listen to the experts. However, if you look at a medical technology, a medical innovation, you find many failures over decades until they got something to work. So if you had invested in each technology along the way, you would quickly have gone bankrupt. Only with hindsight do you say: ah, you know, the experts were wrong. In fact, they were right most of the time. COVID-19 was another attack on expertise, supposedly showing that the medical community was helpless as this new pandemic ravaged the world. So what happened to expertise there? Well, certainly expertise was very challenged by COVID-19.
And still is. But so is evidence-based medicine. If people say: here's a treatment for COVID-19, but we need to collect the data, and so it's going to take us a year to set up the experiment and get permission to run the subjects, maybe more than a year, and then another year to collect the data to see the effect, and then another half a year or a year to analyse the data; we didn't have that time. But what did we have? We had experts. And they knew that they were being overwhelmed by the challenges of COVID-19 and the coronavirus, and they were quickly scrambling to update their expertise so that they could provide useful guidelines and treatment and symptom recognition for people who were coming down with the disease. So instead of expertise becoming obsolete, we saw what happened when expertise wasn't available, and how helpless we were without expertise. And we saw, and could admire, the medical community scrambling so adroitly, and still scrambling, to update its expertise. Next slide. So the takeaway here is: as I've described the attacks on expertise, none of these attacks seems warranted. They all seem to be overstatements, and reflect mistaken ideas about what expertise is all about. So now we switch to the second part, the nature of expertise. And here, next slide. This is the one with the waves. Do you have that up?
Nippin Anand 20:03
Yes, I do. Yes.
Gary Klein 20:05
Okay, so now we're in the field of naturalistic decision making, to understand how expertise operates: not in a laboratory, not under controlled conditions, but in the messy conditions of the real world, because those are the conditions that we face. A world of high stakes, which you can't recreate in a laboratory; dynamic settings where things keep changing all the time; lots of uncertainty, where goals are vague and are being discovered; we're dealing with wicked problems here that are very hard to study in a laboratory; multiple players, teams and organisations, and constraints of that nature; time stress; and experience. And it's hard to build the experience into laboratory studies. I once had somebody tell me: no, I study people, and I give them lots of training in my laboratory experiment; I give them 10 hours of practice. And I had just finished a study of firefighters, and they had an average of more than 20 years of experience. And I'm thinking: 10 hours, 20 years; that's not even comparable. If you think you're generating expertise in 10 hours, you don't really understand what expertise is about. So, in the interest of time, Nippin, I'm going to skip this example of the rockfish; I'll just cut this example out and go to the slide about the cognitive dimension. Yes. So, when you are dealing with a domain where you think there's expertise, and people say: no, we've got rules, we've got procedures, we've got standard operating procedures, it's all been captured, you just have to follow the steps; they're so confident that they have it all captured in these steps. But in a complex world, there are never enough steps to handle all the things that could happen, all the aspects of context that could enter into the situation.
So if you get into a situation where people are trying to reassure you that they have the checklists, they have the standard operating procedures, and everything is smooth and ready to run, these are the questions you can ask. What can go wrong here? What kinds of decisions do people have to make? Where are the tough decisions, and how are people going to make them? What makes this hard? Why do some people struggle performing this task? How do people get confused? What are the sources of confusion? What kinds of mistakes do people make, especially newer people entering the domain, and how do you recover from those mistakes? How do you handle trade-offs and manage risks? Those are all parts of what I'm calling the cognitive dimension. Those are all aspects of expertise that don't get captured by standard procedures and checklists, and by all of these seemingly well-intended efforts to bottle performance requirements into something that's manageable and easy to follow; that's how misguided that is. And if you think that the standard procedures are enough to keep you safe, then you're just making yourself vulnerable. These are the facets of expertise that need to be taken into account. Next slide.
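The probe questions Gary lists lend themselves to being carried into the field as a structured interview guide for cognitive task analysis. A minimal sketch in Python; the grouping into themes is my own, and only the questions themselves come from the talk:

```python
# A minimal sketch of the "cognitive dimension" probe questions from the
# talk, organized as a reusable interview guide. The theme labels are an
# editorial assumption; the questions are taken from the presentation.
PROBE_QUESTIONS = {
    "risk": [
        "What can go wrong here?",
    ],
    "decisions": [
        "What kinds of decisions do people have to make?",
        "Where are the tough decisions, and how are people going to make them?",
    ],
    "difficulty": [
        "What makes this hard?",
        "Why do some people struggle performing this task?",
    ],
    "confusion": [
        "How do people get confused? What are the sources of confusion?",
    ],
    "mistakes": [
        "What kinds of mistakes do people make, especially newcomers?",
        "How do you recover from those mistakes?",
    ],
    "tradeoffs": [
        "How do you handle trade-offs and manage risks?",
    ],
}

def interview_script(themes=None):
    """Return a flat, ordered list of probes for the chosen themes
    (all themes by default, in the order defined above)."""
    themes = themes or list(PROBE_QUESTIONS)
    return [q for t in themes for q in PROBE_QUESTIONS[t]]

# Example: a short debrief focused on decisions and mistakes.
for question in interview_script(["decisions", "mistakes"]):
    print("-", question)
```

The point of encoding the questions this way is only that they are generic: the same probes apply whether the domain is firefighting, healthcare, or plant operations, which is exactly Gary's argument that procedures and checklists miss a whole dimension these questions expose.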
Gary Klein 24:09
Now, I promised you I would come back and talk about evidence-based medicine, so now is the slide to talk about that. This is a project that I did with several colleagues, Devorah Klein, David Woods and Shawn Perry, and we published a few papers on it; the first one was in 2016. We were looking at evidence-based medicine, and at the reliance of the healthcare community on it. And we said: fine, we certainly appreciate the value of collecting evidence to see what works and what doesn't work. There's no argument with that. That's necessary, but it's not sufficient. What are the cognitive challenges of using the evidence? We came up with six so far, and there are probably more. The first one: evidence-based medicine works on the principle that you come in, you have a complaint, I see what your problem is, and then the evidence shows me what the treatment should be. Sounds reasonable. But how do I know what your complaint is? How do I know if you're accurate in the way you've described your complaint? You may be honest, you may be sincere, but you may be misguided. So there's a skill that healthcare professionals need, to characterise what's really wrong with the patient, and not just focus on what the patient is saying. Many times people come into emergency departments saying that they have minor problems, and it turns out that there are major problems that they're not aware of, or that they're denying. So that's part of the cognitive dimension; that's a cognitive challenge. Second, how confident are you in the evidence? I'll give you an example. A person I know, a young woman, was pregnant and went in for a prenatal exam. And she checked out well, except for her platelet count, which was not great; it was actually fairly low. And the obstetrician said: you know, right before you're due to give birth, we need to have you take a blood transfusion to get your platelet count up.
Well, my friend didn't want a transfusion. So she went to a haematologist, and he looked at the data, he drew a sample of blood, confirmed that her platelet count was low, and said: yes, there's a threshold, as they told you, and your platelets are actually just below that threshold of safety. So the evidence is arguing that, under these conditions, you should get a transfusion. Except, he said, I've looked at the studies. And the women who are included in these studies are generally much older than you; you're about 10 years younger than the average woman in these studies. And many of the women in the studies are not in good health, but they're determined to have a baby. You're in great health; you're a young woman, you're extremely healthy, and your platelets are very robust, very effective. So you have better platelets than most of the people in these studies. And he said: I'm not worried; I don't think you need a transfusion. She didn't get it, and she didn't have anything go wrong. It may be that I'm cherry-picking an example; maybe she was lucky. But I'm using the example to explain that you can't simply look at the evidence and draw flat conclusions. You need to understand who the patient is: not a population of patients, but the specific patient in front of you. And you need to look at the way the evidence was collected, to see if it applies. You still have to make a judgement; that's part of the cognitive challenge. Third, what do you do when the best practice conflicts with professional expertise, as in this platelet example, and there's a compelling reason otherwise? You want physicians who can override the evidence when the evidence doesn't fully apply. Fourth,
Gary Klein 28:52
the evidence is collected on people who have one particular problem, to see whether the remedies work or not. But physicians deal with people who often have several things going on at once. These are complex situations, and sometimes the evidence says do A, but then there's another part of the patient's condition that says do B, a second approach, and maybe A and B are in some ways in conflict. So the physician has the cognitive challenge of handling that kind of conflict in a complex situation. Fifth, what happens if you're carrying out a treatment and it doesn't look like it's working, but the evidence says it's supposed to work? Do you just continue forever until the patient dies? Or do you say: this isn't working; I need to up the dose, I need to switch to a different medication, I need to do something? That's a judgement that experienced physicians make. That's why you want to have an experienced, expert physician working with you, rather than just somebody carrying out what the evidence is claiming. And then sixth, what about remedies that aren't best practices? We saw that with COVID-19, where there wasn't time, and still isn't time, to do the careful controlled studies, and a variety of remedies were being advocated. Some of those remedies make a lot of sense. Some of them seem to make sense, but actually they're not very effective, and when people start to try them out, they see the problem. And some of the remedies are just quack remedies. Nevertheless, the medical community has to consider remedies even when it doesn't have the evidence. They don't have the opportunity to say: well, let's wait for two or three years until the data come back. There's too much urgency. Okay. So, next slide, Nippin. This is the iceberg.
So when we talk about expertise, we're talking about aspects of knowledge that experts have that are mostly invisible: the ability to recognise patterns, to make discriminations between subtle cues; the mental models that we form; the way we can judge if something is typical, so that we can become surprised if an anomaly occurs, something we didn't expect; and the mindsets that we bring to bear. All of these things are hard to examine. It's much easier to examine whether people have declarative information, you know, factual information; it's easier to examine whether people are following procedures, and just watch what they're doing. And so there's a gravitation towards the aspects of explicit knowledge above the waterline, and a tendency to ignore everything that's below, because that's invisible. And that's one reason why it's so easy to attack expertise: because so much of it is invisible, unless you know how to probe, how to search for it and ferret it out. The next slide, Nippin, is the skill portfolio account of experts.
Gary Klein 32:34
For too many people, expertise is simply a single dimension: I have low expertise, or something better, up to high expertise, all on one dimension. And I think that's too simplistic. I think we need to consider a portfolio of skills that experts have. Different ones of these skills may be relevant in one situation and not another, and different experts may have more or less of these skills. So I think we need a more nuanced, richer account of expertise, rather than a unidimensional account. The types of skills that I have in mind: perceptual and motor skills, to be able to use tools. For example, take a dentist. I still can't imagine how dentists are able to work with that tiny little mirror that's stuck in our mouths, doing everything backwards, and they're so smooth doing it all. That takes a lot of practice, and it's easy to ignore because they do it so smoothly, but it's really very impressive. Chess players, by contrast, do not need perceptual-motor skills; it's not hard to move a rook from one square to another. Next is conceptual skills, mental models; just about all aspects of expertise draw on conceptual skills. Management skills are another variable, different from conceptual skills and mental models, different from perceptual-motor skills: the skills of managing a task and managing people. In some domains these are more important, in some they are less, and so you see variability there. Communication skills are another skill set that comes to bear: to be able to let people know what you want, to understand what they're concerned about, to understand an organisation's dynamics in order to get a better sense of how to make the organisation more responsive in the ways it needs to be. And finally, adaptation skills.
And some people think this is the primary, most important skill that experts have: to adapt when the situation changes, to be able to rather smoothly switch gears, try something else, and replan. So we watch someone like Nippin. We started this presentation, we started in one way, and all of a sudden the technology started to act up. Nippin stayed cool; I watched him. I mean, he's very calm. And he said: we're going to reorient, we're going to start again. And we have. And, you know, that's part of what I admire about Nippin: he exemplifies the smoothness of an expert. Okay, next slide, Nippin. Sorry to embarrass you there. Now, how do we strengthen expertise? Next slide. First of all, we have to recognise that most organisations, when they want to improve performance, have two options. There's a down arrow, which is what you want to reduce: you want to reduce errors. But there's also an up arrow, what you want to increase: you want to increase expertise, and insights, and things like that.
Gary Klein 36:26
You want to do both. Certainly we want to reduce errors, but that can't be the only focus. You don't want to come home at the end of the day and have somebody ask you how it went, and you say: I had a great day, I didn't make any mistakes. That seems like a low bar; surely you want to achieve things, you want to learn things. But most organisations, as I started to say, only think about the down arrow; they only focus on errors and how to reduce errors. And they build various techniques for reducing errors, and they give little or no thought to building expertise and improving insights. I could give you lots of examples, but we don't have time for those. So that's something that you can do: as you look at an organisation, you can expect that there's an imbalance, that the down arrow is going to predominate, and that there is an opportunity to look at ways of boosting expertise and having dramatic improvements in performance in that organisation. The next slide, Nippin, is cognitive debriefing. After tough cases, in most settings, you don't go back to the people and say: let's do a debrief, let's see what went on. The military does; especially the army and the Marines, they do after-action reviews. But the after-action reviews that I've seen are almost always about what you did: you know, you got it wrong, you had a bad outcome; you turned west here, and if you had turned east, it would have changed dramatically. And so the focus is on what you did, and not on what you thought, not on what you understood, not on what you noticed. And this brings up the opportunity for cognitive debriefing: using cognitive task analysis methods to get underneath the courses of action, to explore how people were thinking and expecting and interpreting, and to unpack their expertise; to have them give you examples and instances, you know, maybe events of interest that occurred.
And now you're doing a cognitive debriefing about it, maybe right after, maybe a week or two later. And that brings me to the next slide, if you would advance it, about stories. Most organisations, when they want to improve performance, look at standard operating procedures and checklists, and slap on more procedures. They come up with manuals, a variety of ways, mostly designed to reduce the chance of error. They don't examine expertise; that's a lower level. They don't drill that far, to get at incidents and experiences and anecdotes of things that happened, things that were surprising, and to use those as a window into the expertise. And then there's a level even deeper than that, and it's not showing up on the slide, which is not just asking people to tell us about their experiences or their incidents, but doing cognitive debriefing to unpack what happened during an incident. It's not: well, this happened, and so I thought there was a problem, and then I did that. It's: why did you think there was a problem? What were you noticing? What were you picking up? Was there anything that surprised you? If I were doing the task, what would I have missed that you picked up on? What were you noticing here that you wouldn't have noticed seven or eight years ago? That's the kind of unpacking that we can use to take advantage of incidents and experiences, and to really drill down deep enough to get at expertise. The next slide, Nippin, is the mental model matrix. Most people think of a mental model as the mental model of a system that you're operating. (Just looking at the time; I have a few more slides here.)
Gary Klein 41:15
A mental model is simply: how does that system work? That’s the upper left-hand corner — the system, its capability, and how it works. And that’s all people pay attention to. But in a project that I did with Joseph Borders, working with petrochemical plant operators, we found that that was certainly part of the mental model — how systems like a distillation tower work. But it was only part. We discovered that the mental models weren’t just about the capabilities of the system, but also its limitations — the upper right-hand corner. So they knew how the system worked, but they also knew the limitations of the system: how it could fail, how it could break down, what the boundary conditions were. And so that’s a critical additional part of their mental model. And we found that their mental model included not just the system but the user. In the lower left-hand quadrant is: if there is a breakdown, how to do the workaround. That was part of their mental model — they were prepared to run into boundary conditions, to detect anomalies, and to perform the workarounds and adaptations. So that’s a third aspect of their mental models. And the experts that we examined were also attuned to a fourth quadrant, the lower right-hand corner: looking at users and their limitations, the people operating the system. Say you’ve got somebody working underneath you, and that person is the panel operator — how could they get confused? What kinds of errors are they likely to make? Skilled supervisors are attuned to that: they’re aware of what kinds of confusions can occur, and they can anticipate, maybe prevent, or maybe step in before much damage is done and correct it. So the idea of mental models is much richer than the upper left-hand corner, when you go to work in an organisation and various kinds of settings.
Try to be aware of all of these quadrants, not just the upper left-hand corner. Now, the next slide is about the fact that so many organisations think that all they need to do is issue policies, and then things are going to go well — and they don’t. And then the people who develop the policies get angry with everybody who’s not following the policy: “Can’t they read? I posted it, it’s right up there on the wall. Why don’t they just follow what the protocol says?” And what we find is, you really can’t develop a protocol that’s complex enough to handle all the kinds of situations people might face. If you try to make your protocol more complex, it becomes less readable, and even then it’s not going to capture all the nuances. What you need to do is imagine what can go wrong — how can things fall apart, what kinds of complications can arise — and give people a chance to imagine those too, to see how they would notice it at an early point and how to adapt to it, and build scenarios to give people that kind of training. One scenario approach that we’ve developed is called ShadowBox, and that’s largely a scenario-based approach to building expertise. Next slide. So here’s the way ShadowBox works. We didn’t invent it. It was invented by a firefighter, Neil Hintze, in the New York Fire Department, who retired as a battalion chief, and
Gary Klein 45:24
he was asked to develop training for difficult conditions. Ideally you want subject matter experts as your trainers, but they cost a lot and they’re often not available to be involved in the training programme — they’re a bottleneck for providing training. So for ShadowBox, you give people challenging scenarios, and then in the middle of the scenario you stop the action: now there’s a decision point. And it could be different types of decision point. You might say, here are four different options for what you can do; rank order them, from what you are most likely to want to do to least likely, and then write down your reasons. We continue the scenario, then stop it again at another decision point: here are three different goals you might pursue; rank order their importance, and write down your reasons. We continue, and we stop it again: here are five pieces of information; rank order the value of those, and write down why. So that’s what I’m doing as a trainee. We’ve also had a small group of experts, maybe three to five, who have gone through the exact same scenario I have; they did their own ranking of the options, and they wrote down their rationale. So as I go through the scenario and I say, here’s how I ranked it at this decision point, I get immediate feedback: here’s how the experts ranked it. And I want my ranking to match the experts’, and usually it doesn’t at the beginning of the training. But more important, I get to see the rationale — what the experts wrote down as the reason for their ranking. I compare it to the reason that I put down, and I get to discover what the experts were noticing that never even occurred to me, the inferences that they were drawing. So now I have training where I’m getting the input from the experts — I’m seeing the scenario through the eyes of the experts — but the experts don’t have to be there. So we’ve removed that bottleneck, and we’ve allowed me to learn from them.
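The feedback loop Gary describes — a trainee ranks options at a decision point, then compares against an expert consensus ranking — can be sketched in a few lines of code. This is a minimal, hypothetical illustration only: the agreement metric below (fraction of option pairs ordered the same way, similar in spirit to Kendall's tau) and the scenario options are my own invention, not the actual ShadowBox scoring method or materials.

```python
# Hypothetical sketch of ShadowBox-style feedback: score how closely a
# trainee's ranking of options at a decision point matches the experts'.
# The metric (share of option pairs put in the same order) is an
# illustrative choice, not the published ShadowBox method.
from itertools import combinations

def ranking_agreement(trainee: list[str], expert: list[str]) -> float:
    """Fraction of option pairs that trainee and experts order the same way."""
    pos_t = {opt: i for i, opt in enumerate(trainee)}
    pos_e = {opt: i for i, opt in enumerate(expert)}
    pairs = list(combinations(expert, 2))
    agree = sum(
        1
        for a, b in pairs
        # same sign of position difference -> same relative order
        if (pos_t[a] - pos_t[b]) * (pos_e[a] - pos_e[b]) > 0
    )
    return agree / len(pairs)

# A decision point with four (made-up) options, ranked most to least preferred.
expert_ranking = ["ventilate roof", "attack from rear", "defensive line", "withdraw"]
trainee_ranking = ["attack from rear", "ventilate roof", "defensive line", "withdraw"]

score = ranking_agreement(trainee_ranking, expert_ranking)
print(f"agreement with experts: {score:.2f}")  # 5 of 6 pairs match -> 0.83
```

In a fuller version, the trainee would also see the experts' written rationale at each decision point, which Gary notes is the more important part of the feedback.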
And when we do evaluation studies, we show 18%, sometimes as much as 27%, better matches to the experts after just half a day of training. Next slide, Nippin. So ShadowBox can be conducted using paper-and-pencil versions; more and more we’re using desktop software versions. It can be integrated into existing technology like full-mission simulators, and it supports a variety of presentation modes: text, images, audio and video, sometimes even interactive video. Okay, last slide — almost last slide, Nippin. This is my summary. Expertise is coming under increasing criticism; that’s the war on expertise. But expertise is based on tacit knowledge, and on forming a portfolio of skills. Fortunately for us, there’s a growing set of methods for strengthening expertise — ShadowBox is just one of those methods. So an organisation doesn’t have to ignore the expertise that might walk out the door; they have to value it before they lose it, and promote it to face the challenges of the future. And then the last slide, Nippin, is simply my contact information: my Twitter handle, my email address, and the website for my company. At the bottom, www.naturalisticdecisionmaking.com is the Naturalistic Decision Making Association, an organisation that’s just been stood up. And that concludes my presentation. Over to you.
Nippin Anand 49:34
Well, well. Thank you, Gary. The last slide — can you hear me okay? Yes, yes. So on the last slide the size of the font was too small. So what I’ll do is, when we put this together on LinkedIn, we will put the contact details along with it, and the link if anyone wants to get in touch with you. But I think we had this conversation last time, and it got me thinking again, Gary. In the airline industry, as well as in the maritime world, we have this classic greybeard problem, where a lot of people who have the knowledge and experience of the profession are getting old and retiring. And one of the assumptions we make is that by having a combination of technology and non-technical skills, as we call them, we can somehow overcome this issue. And time and again, when we see accidents, we are reminded that, yes, for routine operations that might seem to be a quick fix, but for every novel situation that we face, we come to realise that this way of thinking is very superficial. And I wondered what your thoughts are on that.
Gary Klein 50:54
I completely agree. We make ourselves vulnerable if we think we just have to gain efficiency in handling routine events, and that that will be enough. It means that when we move into non-routine situations, we are sometimes helpless, because we don’t have the experts there. So you do have people with experience retiring. One thing organisations try is various knowledge management approaches to capture that expertise. I’m not all that enthusiastic about those approaches, because once you’ve captured it, I don’t often see people going into the database to try to dig it out, or even remembering that it’s there. That’s one of the things that excited me about the ShadowBox approach: with it, we can get people’s expertise in a form that directly translates into training, and so it stays vibrant.
Nippin Anand 51:59
And it’s quite powerful. Because, again, we spoke about it last time, and you got me thinking: we’re not saying that we need more or less expertise. What we are really saying is, okay, this is the level, this is the quality of decision making we need. And if we don’t have fifteen years to produce a ship captain or an aviation pilot, then here’s a way to do it much more efficiently. Because when you play these scenarios, when you get people together and create these methodologies, they can transfer expertise from those experts to the not-so-expert, or novices, as you call them. It’s a very efficient way of addressing this problem. I will go to the questions people have asked, but one thing I didn’t see, and I’m curious to hear from you about, is that you did mention the word heuristics. And a lot of heuristics actually sit in the unconscious; in fact, it’s the language of the unconscious. And again, in many high-risk industries, a lot of people with 10, 20, 30 years of experience can actually do more than they can explain — they struggle to explain that experience, because a lot of it sits in the unconscious. And one way to understand the unconscious is to listen to the language, the metaphors, the discourse and everything else. And that is something I did not see in your presentation. Is that something you feel is important or not? What are your views on that?
Gary Klein 53:44
I agree, Nippin, that a lot of expertise is not visible. I showed the iceberg diagram to indicate the various facets of expertise that aren’t visible. I could have added heuristics there; heuristics are part of our mental models. What can we do about that? You can’t, to a great extent, make tacit knowledge visible — that simply isn’t going to happen. But you can make it more visible than it is: you can identify aspects of it that people can think about. Plus you can say, “I don’t know how this is done, but it is done — here’s an example,” so that people know that it is possible. I remember how an electronic warfare coordinator told me that when he was first starting out, he knew that it was possible to distinguish the electronic warfare signature of one type of aeroplane from another, but he couldn’t do it. But he knew that the people who had been there 15 or 20 years could, and that gave him the confidence to keep trying and to keep developing his skill. So it shows you what is possible, and it identifies the areas within your subconscious that are substantial and powerful, so that we don’t ignore them. The war on expertise is largely predicated on the fact that the contents of our unconscious and our subconscious are not available, and so people ignore them and dismiss them. And even if we can’t unpack them, we need to value them — we need to value the fact that they’re there — and we need to develop the next generation to get up to that level.
Nippin Anand 55:56
Well, what’s interesting is that we don’t even recognise this in most organisations. We think that most decisions are made by the rational mind, which is absurd. But yes, it’s brilliant. I will just see if there are any questions — if anyone has any comments or questions?
There is a question here, which is: Is situational awareness considered a type of knowledge? How do you distinguish between the two, and what is its value in complex environments?
Gary Klein 56:41
Alright. So there are cognitive activities we engage in, such as decision making, problem detection, problem solving, and situation awareness — what’s going on in a situation. It’s not a type of knowledge; it’s a reflection of the knowledge we have. It’s how we take our knowledge, our mental models and our unconscious abilities and put them into play. And that’s why an expert looking at the same situation I am comes to a different conclusion, and notices things that I missed — notices changes, notices that something that should have been there isn’t. That’s important. I don’t have that experience base, so I don’t know what to expect, so I can’t tell when something that was supposed to happen didn’t happen. So situation awareness is how people put that kind of expertise into play.
Nippin Anand 58:02
Thank you. So here’s another one, which is: You mentioned firefighters with 20 years of experience. Is Ericsson’s estimate of 10,000 hours still an appropriate yardstick for expertise, or has a more up-to-date measure been identified? Is there a shortcut to improve decision making for less experienced people? I think you answered that through the ShadowBox training, isn’t it?
Gary Klein 58:37
Right. But let me go back to Ericsson, because Ericsson told me that he came up with this general rule — it seemed to hold in a number of domains — of about 10,000 hours: it might take 10,000 hours of deliberate practice to become a grandmaster. That used to be the case. But he said, now everybody’s got these chess computers, and they can use the chess computers to get lots of concentrated deliberate practice in a way that they never could before. And so you see people becoming grandmasters in much less than 10,000 hours, maybe in less than 5,000 hours. We’re seeing grandmasters at the age of 14, or even 13, which would have been impossible 20 years ago, but now we’re seeing it occur.
Nippin Anand 59:34
Excellent. If you understand your errors, and can reduce them, and learn from the context that surrounds them, could that lead to better expertise?
Gary Klein 59:54
Definitely. I mean, we learn much more from our mistakes than from our successes — but only if we reflect on our mistakes. And I know, when I’ve made mistakes putting on a workshop or something like that, I’ll be flying back home after the workshop and it’s killing me. There’s a part of me that says, don’t think about it, but there’s a part of me that says, I’ve got to think about it. And I can’t rest until I realise: here’s the way I should have structured it, here’s what I should have done. And instead of walking out thinking I never want to do another workshop again, once I come to that realisation, my reaction is: when can I do the next one? I’m ready for it, and I’m eager for it. For me, that’s one of the criteria of experts: if I ask people, “Tell me about the last mistake you made,” and someone who’s touted as having lots of capability says, “I can’t think of any,” all of a sudden I doubt that they’re a real expert. For me, the real experts are always aware of their last mistakes. Not that I’m an expert, but the people I studied — the firefighters, the petrochemical plant operators — are always aware of the mistakes they made and how they could have done it better. And that’s what allows them to continue developing their skill.
Nippin Anand 1:01:32
And even better that they have the opportunity to do it in a simulated space, like ShadowBox, isn’t it? So there’s a related question here, because in the safety-critical industries, most people are absolutely hysterical about people making mistakes. So there is this idea that if learning comes from reflection and from mistakes, then we don’t want that. Most organisations are in the doing mode; they don’t create those opportunities to reflect on mistakes. And I wonder what your thoughts are on that, Gary?
Gary Klein 1:02:12
Sure, I have a couple of reactions to that. First of all, we see in many industries, or many organisations, what they call zero tolerance for mistakes: you don’t want any mistakes. The evidence is very mixed about the value of zero tolerance for mistakes, and if anything, the evidence suggests that it leads to worse performance rather than better. Because when you announce that you have zero tolerance for mistakes, all of a sudden people aren’t going to tell you about their mistakes; they’re going to hide their mistakes. And so you’re not going to learn from mistakes, the organisation isn’t going to learn, and it won’t progress. So the zero tolerance notion is, I think, misguided. Rather than that, we certainly want to cut down on mistakes, but we also want to build a resilience capability, for people to be able to pivot when they’ve made a mistake — to notice it quickly, to recover, and to prevent it from getting very serious. And then we want to find out more than just that it was a mistake. The usual tendency is, let’s add another procedure to prevent that from happening, rather than doing a cognitive interview and asking: why did the mistake happen? What confused you? What did you not notice? Why didn’t you notice it? What was going on, so that there can be more general learning? Another error that I think organisations make is that they conflate evaluation and training. You sometimes hear that so-and-so was terminated during training for poor performance. Well, if somebody can be terminated from the programme during training, it’s not real training. People know that they can be dropped at any point; they become very guarded, very defensive, and very careful in what they do. And so the opportunity to learn, to try things out, to explore, has been compromised. You still need to evaluate people.
But you need to say: these are the trials where we are going to evaluate you, as opposed to these other trials where there’s no evaluation — do whatever you think is right. And only in that way can you learn, and can you learn from mistakes.
Nippin Anand 1:04:51
That’s such a good point. I can’t tell you how many times I’ve seen a simulated training exercise that creates so much drama that people go into absolute shock when they make a mistake. And here you are doing an after-action review, trying to find out why that person made the mistake, and that person is not even aware that he or she is in trauma. So what is that person going to tell you about what they learned and how they reflected on that learning? It’s bizarre, actually. It’s very interesting what you say.
Gary Klein 1:05:21
But there are two issues here. One, you can put the person under unnecessary pressure by saying, if you get it wrong, you can be thrown out of the programme. But in a good high-fidelity simulation, you do want some pressure; you do want people to take it seriously, and to feel wrung out if something went wrong. And so it may be that the best time to debrief them is not when they come out of the simulator, when they’re still traumatised, but to give them a little time to recover and then say: let’s look at the screens, let’s go over this, and let’s figure it out now that we can catch our breath.
Nippin Anand 1:06:08
Yes — yes, we have very little understanding of the unconscious and how it operates. But, great. Listen, I’ve taken far more time than we were intending to, so I’m very, very grateful to you for your time. I’m sorry the technology didn’t work, but eventually we sorted it, and we made it a success. So once again, thank you very much for your time. Yes.
Gary Klein 1:06:33
Thank you very much for the opportunity. I really enjoyed it. Take care.
Nippin Anand 1:06:37
Alright. See you. Bye bye. Thank you, everyone, for joining. I will post the link and also a YouTube video of this, with all the contact details for Gary and the link to ShadowBox. So yes, thanks once again for joining, and we will see you again for another event. Bye bye.
So what did you think? If you found the podcast interesting and you want to learn more about how to develop, nurture and enhance expertise in your organisation, Gary’s contact details can be found at the end of the transcript of this podcast on our website, novellus.solutions. And if you’re interested to learn more about how we can understand and improve the quality of decision making in our organisations, I’m more than happy to discuss this with you — we have been working on this for many years now. Please don’t hesitate to write back to me, and I will respond to you as soon as I can. Now for the best part. If you really enjoyed listening to this podcast and want to think, reflect and dance with different perspectives — yes, dance with different perspectives — follow me on LinkedIn, on my company page Novellus Solutions, or email me at firstname.lastname@example.org, and I will add you to our mailing list. There is a great lineup of events planned in the next few months, so I wouldn’t want you to miss them. As usual, to all you curious people: thank you for wanting to know more than what you knew yesterday. It’s both very rare and refreshing to find true learners in this world. I wish you a pleasant day and night. Goodbye.