
Nippin Anand: Hello, everyone. Welcome back once again to another episode of Embracing Differences with me, Nippin Anand. My guest tonight is Dr. Robert Smith. In fact, Professor Dr. Robert Smith. He’s a technologist, a complexity scientist, an entrepreneur, a writer, and a very sought-after public speaker. He’s an expert in artificial intelligence and he has worked with several companies and institutions around the world, in both the private and the public sectors. He is also the author of the book Rage Inside The Machine: The Prejudice of Algorithms and How to Stop the Internet Making Bigots of Us All. [00:01:00] He is a senior fellow of the computer science faculty at University College London, where he co-founded the Centre for Decision-Making Uncertainty.
I will let Rob introduce himself briefly now. Over to you, Rob.
How are you doing?
Dr Robert Smith: [00:01:15] How’s things? I’m alright. You know, same old. Lockdown is tiring, but yeah.
Nippin Anand: [00:01:21] Great. So maybe, Rob, it might be a good idea to start by giving us a brief introduction to yourself.
Dr Robert Smith: [00:01:30] Sure. I’m Rob Smith. I grew up in Alabama during desegregation and busing, and then I went to the University of Alabama, where I studied engineering. I ended up staying there for a lot longer than just an undergraduate degree.
I got my master’s and my PhD there, and then I became a professor there. And while I was there, I grew a research group on artificial intelligence, in particular evolutionary computation. Then once I got tenure, I [00:02:00] took my sabbatical in the UK and stayed here and never went back.
I led another research group in AI at a university here. Then I started a small business, which I grew into a business serving some of the biggest blue-chip companies in the world, and continued to do consulting in AI for lots of different people. I took all of that experience in AI, wrapped it up, and realised that it had a connection to some of the things I had seen in my youth: that AI was beginning to look a bit prejudiced. I put those ideas together in my book, Rage Inside The Machine, to talk about how algorithms, in my opinion, have an intrinsic tendency towards prejudice, and that book came out last year. Since then, I’ve begun dedicating myself to tech-for-good efforts and I’ve become the trustee of a new [00:03:00] charity, and we’re all trying to work on making AI have better effects on the world, both in industry and in society.
Nippin Anand: [00:03:09] Great, and that’s really what interests me. Rob, I would like to ask you a very fundamental question at the start. You talked about this book, Rage Inside The Machine, and in my view, having read the book in bits and pieces, I think the position you’re taking is that there is some internal logic to technology which leads it to work in the way it does. This is a very deterministic route, correct me if I’m wrong, if I’m completely wrong there, but this is a very determinist position that you are proposing, as against some of the other theorists who talk about the social construction of technology, which is basically to say that it’s human action that controls technology. What you’re saying is that technology [00:04:00] itself is determinist in nature? Am I right?
Dr Robert Smith: [00:04:03] Yeah, what I’m actually saying is that the foundations of AI as it exists are about simplification and generalisation. Fundamentally, quantitative models are effectively simplifying and generalising models, and there’s a history to this. When you look back on what we would now call quantitative social science, it really has its origins in the eugenics movement. That’s where the use of statistics in social science emerges, and the techniques developed for statistical analysis, and ultimately for algorithms, all have this idea of simplifying and quantifying and categorising people.
Now, that’s the scientific process. So there’s nothing intrinsically wrong with the idea of that simplification, as long as it’s mediated by human beings thinking about the fact [00:05:00] that it can be a blunt instrument that causes a lot of intolerance and judgments of people and divisions, as it has historically. If you look at the tools of quantitative social science, for every one of them you can find an instance in which it’s been used to be prejudiced against people. And there’s a reason for that. Prejudice means to prejudge, and it means to basically make a judgment without all the evidence available. Humans are extremely complex and their interactions are extremely complex.
When you take a quantitative tool, and quantitative tools by their nature simplify, you are therefore excluding some of the factors that are involved in those human decisions, often accidentally or unintentionally. And unfortunately, those things can sometimes lead to conclusions that, in the wrong hands, can be used in a prejudiced manner.
So essentially what I’m saying is that [00:06:00] the nature of quantitative social science is effectively on the borders of intolerance all the time. The thing we use to keep that from becoming intolerance is human wisdom and judgment, both as a society and as individuals. And when we lose control of that, when effectively we’re making quantitative judgments without a human mediator, we are in danger of building intolerance into systems.
Nippin Anand: [00:06:34] That makes a lot of sense, Rob. So in a way, what you’re saying is that there’s nothing wrong with using technology for the purposes of artificial intelligence, which in my world means analysing big data. You just have to be very careful how that analysis is then used or applied.
So you need to be very careful, because this idea of technology giving you [00:07:00] some sort of an answer to a problem is fundamentally naive then?
Dr Robert Smith: [00:07:04] Technology is great at giving answers to well-defined problems, things that have what, in the book, drawing on some economists, I call truth uncertainty. Where there’s truth uncertainty, you can use technology to basically reduce that uncertainty, or at least quantify it. Where you have uncertainty about what meaning is, semantic uncertainty, or about what objects exist in the universe of discourse that you’re talking about, ontological uncertainty, those things aren’t treated well by statistics and probability, and in fact aren’t treated well by quantitative methods in general. Unfortunately, many of the problems that human beings deal with are about what things actually mean, or what things can or will or do exist in the universe of discourse. And in those areas, we have great adaptive [00:08:00] tools, as societies and as individuals, to basically cope with that in an ongoing fashion.
But those aren’t rote problems that can easily be reduced to algorithms. And I try to explain this in the book, in historical detail, but effectively the quote I always use is from the statistician George E. P. Box, who said: all models are wrong, some models are useful. It’s the nature of modelling, simplification and generalisation, quantification and categorisation.
It’s the nature of modelling that makes modelling wrong. But that doesn’t mean that models aren’t useful. They can be very useful. It’s just that you have to realise they always involve assumptions about what exists, and then ultimately human beings have to make decisions about what things mean in some larger philosophical context. But where you only have uncertainty about whether a cold, hard fact is true or not, then [00:09:00] technology is appropriate for those kinds of decisions.
Unfortunately, there aren’t a lot of real-world problems that reduce to that, beyond the trivial. Usually you need a human being in there, mediating what you mean and how you apply it.
Nippin Anand: [00:09:13] Sure, that makes a lot of sense, Rob. Now, I know you’ve been looking at some really big issues in society, like racial discrimination and climate change and politics and whatnot, but let’s take the problem you have just outlined into an organisational context; my interest is in the area of safety and business performance. And one of the things I constantly see is this whole idea of reducing text into some sort of algorithm.
So, for example, somebody reports a safety issue and we quickly turn it into some sort of a recognisable category. So, I think it would be great if you talked us through this idea of simplification, decontextualisation as [00:10:00] you call it in your book, and then generalising it.
So, can you talk a little bit more about how that logic actually works?
Dr Robert Smith: [00:10:07] Yeah, that’s a great segue actually, because I talked about this idea of ontological uncertainty, uncertainty about the things that exist in the universe. When you make out a form, a database schema, or any way of basically describing things, you’re intrinsically limiting the number of different things that can exist. Oftentimes, when things like accidents occur, or innovations occur, their nature is that they don’t fit categories well, so you end up in this situation where you’ve got a device or a form that’s built to basically satisfy what you already knew existed in the past.
But the thing you’re trying to use that form for is exactly the stuff you didn’t expect to happen. So, what you can expect to happen then is that the form won’t work well, or the [00:11:00] database schema won’t work well, to categorise those things. So, what you end up doing is cramming it in and leaving lots of space where the thing that really mattered remains unfilled, because it doesn’t fit the form.
And when this happens, people who’ve worked in complicated organisations know this really well. For instance, I did some work in the past on electronic health records, and what you find in those is that there’ll be a massive, complex schema for basically check-boxing what the patient has and what the patient’s condition is.
What you find is that doctors use those check boxes and those categories and codes to categorise illnesses, but then all the meaning is in the description notes. They’ve put all the meaning in the description notes, and even then they’re not able to capture it all, [00:12:00] because they’re busy, because it’s difficult to capture something as complex as the human condition, because it involves psychological factors, it involves lots of different stuff.
So, you get basically a loss of information, and that’s why you need a doctor to have a personal relationship with a patient, because the doctor is holding information in his head that isn’t recorded in the technical system. Now, I’d imagine the same thing is true of things like accidents. If you capture them in a form, you’re going to miss information. You may fill it out in a description field, but then that description field is going to be limited in its ability to communicate. So, what you really need is a larger testimony, not only from one person but from many people, because another thing about something as complex as an organisational failure, or an analysis of a situation, or a safety report, is that multiple perspectives are actually where the reality lies.
And so you end up basically trying to cram that into forms and losing things that you really need [00:13:00] to know.
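To make that point concrete, here is a minimal, purely hypothetical sketch in Python of the kind of fixed reporting schema Rob describes. The category names, field names and example event are all invented for illustration: the pre-defined categories can only hold what was anticipated when the form was designed, so the unexpected gets crammed into a catch-all code and a free-text notes field.

```python
from dataclasses import dataclass

# Hypothetical, pre-defined categories: only events anticipated when the
# form was designed can be encoded precisely.
EVENT_CATEGORIES = {"slip_trip_fall", "dropped_object", "equipment_failure", "other"}

@dataclass
class SafetyReport:
    reporter: str
    category: str        # constrained to EVENT_CATEGORIES
    notes: str = ""      # free text, where the real meaning tends to end up

def file_report(reporter: str, observed_event: str, notes: str) -> SafetyReport:
    """Cram an observed event into the form's fixed categories."""
    if observed_event in EVENT_CATEGORIES:
        return SafetyReport(reporter, observed_event, notes)
    # The unforeseen event does not fit the form, so it is filed under the
    # catch-all category and its substance survives only in the notes field.
    return SafetyReport(reporter, "other", f"UNCLASSIFIED: {observed_event}. {notes}")

report = file_report("crew_member_7",
                     "new mooring procedure interacts badly with spring tide",
                     "Only seems to happen when two vessels berth at once.")
print(report.category)  # 'other' -- the category tells you almost nothing
print(report.notes)     # the meaning lives here, if anyone ever reads it
```

Any downstream analysis that counts categories sees only "other"; the part that mattered sits in unstructured text, which is exactly the loss of information described above.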
Nippin Anand: [00:13:02] Rob, there is so much to take from these last few minutes; it’s immensely helpful. The way I see it, what you just said is that the very act of reporting something, the very act of putting something on paper, is reductionist, because you cannot capture everything that you observed.
And that’s Polanyi’s kind of argument: how do you make tacit knowledge more explicit? The other thing I picked up from what you just said is that a lot of what we deduce from that writing is what we want to hear, to stay in a certain space.
So, we want to hear the bit that we are comfortable with and safely ignore everything else, which is the new knowledge, and that new knowledge could be very helpful.
Dr Robert Smith: [00:13:51] Sure. And essentially, on what you say about tacit knowledge, and encoding it, and how difficult it is: if you look at the history of AI, in [00:14:00] the seventies and eighties AI was dominated by what were called expert systems, effectively systems with “if-then rules” to describe knowledge. They were called expert systems because they were trying to capture the knowledge of experts. What people found out in trying to develop massive systems to capture expertise is that capturing the tacit knowledge of an expert was extraordinarily difficult and economically unfeasible, and effectively the field had a massive failure, what was called the AI winter. And the reason is that capturing human knowledge about complex things is really hard. And that can be at the level of the expert, or at the level of lay people trying to describe their basic interaction with the complex jobs that they do.
So that knowledge is very difficult to capture; that’s well known. What happened in the modern era of AI, after the AI winter, is that the internet rose up and we started looking at big data, because it was freely [00:15:00] available, so the economic problem was solved. And then what we did was try to extract knowledge from that sort of wisdom-of-crowds knowledge. That knowledge is implicitly statistical, and statistical descriptions are reductionist in really quite the same way as “if-then” systems. I try to show this in the book: there’s no real difference fundamentally between a statistical description of a problem and an “if-then” description of the problem that’s modified by some confidence factors. They’re the same thing in effect.
So, all of those things lead to this kind of tacit simplification that basically precludes the idea of encoding the complex knowledge that people have about their realities.
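As a toy illustration of that equivalence (a hypothetical sketch, not an example taken from Rob’s book, with all numbers and names invented): a hand-written expert-system rule with a confidence factor and a conditional probability counted from past data both reduce the situation to a single number attached to a single if-then condition.

```python
# A hand-written expert-system rule with a confidence factor ...
def expert_rule(has_fever: bool) -> float:
    """IF fever THEN flu, with confidence 0.7 (a number an expert wrote down)."""
    return 0.7 if has_fever else 0.1

# ... and a "statistical" rule learned by counting in (hypothetical) past data.
past_cases = [  # (had_fever, had_flu)
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

def learned_rule(has_fever: bool) -> float:
    """P(flu | fever) estimated from past records: the same if-then shape,
    with the confidence factor supplied by data instead of an expert."""
    matching = [flu for fever, flu in past_cases if fever == has_fever]
    return sum(matching) / len(matching)

print(expert_rule(True), learned_rule(True))    # 0.7  ~0.67
print(expert_rule(False), learned_rule(False))  # 0.1  ~0.33
```

Either way, everything about the patient that is not “fever yes/no” has already been simplified out before any number is produced.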
Nippin Anand: [00:15:46] Great. As you were speaking, a thought appeared in my mind, Rob, about how difficult it is to proceduralise things, as we tend to do all the time.
But what you’re saying is that it’s [00:16:00] extremely hard to put that knowledge into some sort of formal language. You have this tacit expertise in the organisation, and then you try to put it into any formal code and that becomes difficult. That becomes extremely difficult.
Dr Robert Smith: [00:16:14] Yeah, it becomes very difficult. And actually, if anybody thinks about the way they think, they’ll realise that they’re deluding themselves if they feel they’re simply following a bunch of rational procedures every day. That’s not how we operate, and people who feel they actually operate that way are probably self-deluded. I don’t know anyone who lives a purely rational life. And if you allow me a slight digression, this is the reason that Spock is such an interesting character in Star Trek, by the way. The reason Gene Roddenberry invented such a classic character is that he is the conundrum of trying to live a purely logical life in a human world.
[00:17:00] And that is a brilliant character. In fact, most AI characters in fiction that interact with human beings carry this kind of dilemma. And the reason that dilemma exists as an object of discourse is that we all know we don’t operate very logically. And if we did, we’d probably be terribly dysfunctional in the world we live in.
The world we live in is full of surprises and innovations. It’s full of massive uncertainties, not to mention the subtleties of what it means to be human and what moral obligations are. And because of that, it’s very difficult to capture. Now, we’re evolved creatures who have learned to deal with that individually and socially pretty well, but still, struggles with the meaning of life and what one should do are at the core of humanity.
And that is actually the nature of humanity, to cope with that kind of uncertainty in the world. We’re trying to do that in technical systems, but what I’m trying to point out to most people is this: in reality, [00:18:00] the technical systems we have deal best with rote problems that are well described. And when those problems border on real human decisions, it’s best for humans to be involved in that decision-making.
Nippin Anand: [00:18:15] Yes, I can see where you’re going with this. And one of my biggest challenges is that in many organisations, among many senior leaders I’ve interviewed, there is a strong belief that things happen, or things work well, because people follow a certain set of predetermined procedures and guidelines. What would you say to that, Rob?
Dr Robert Smith: [00:18:33] Oh, it’s interesting. There’s a great book, I think it’s called The Checklist Manifesto, written by a surgeon, and he’s basically writing about organisations in general and how having checklists is really valuable.
And as we know, in pre-flight on an airplane you go through checklists. If you look at the situation described in the movie Sully, where he has to land the [00:19:00] plane on water, he’s effectively following a checklist that he has in his head about how to deal with safety. And even then, there was great ambiguity that had to be investigated.
Checklists are great, but you’ve got to have that element of human judgment for the unforeseen circumstance. And Sully is kind of a perfect example, because what happened was he had to make a decision about whether to try to reach one of the many New York airports with a plane that had suffered a bird strike, or whether to take the much riskier procedure of landing the plane on the Hudson River.
And he made a very instant human decision to go with one rather than the other. Later, it was investigated to see whether that followed procedure, and it was nearly found that it didn’t, because they figured there were airports he could have reached rather than landing in the river. But the mitigating factor was that human [00:20:00] decision-making takes time when you’re struck with a completely unforeseen situation, like a double bird strike on a two-engine plane.
And that delay in making the decision is part of the problem, because it’s a situation no one has ever encountered in that exact way before. And so you needed the human being. If a machine decision had been made there, it probably could have led to a much more disastrous outcome, because the little bit of time it took to turn the plane and make the decision might have led him to run into a building.
Nippin Anand: Yes. What’s the point then, Rob? Because what you’re talking about is that checklists are helpful in certain situations.
Dr Robert Smith: Yeah, checklists are valuable. Procedures are valuable. Forms are valuable. I’m not throwing those things out.
I’m just saying that when we turn everything over to a schematised or algorithmic system, then that system has [00:21:00] brittleness inside it. Oftentimes nowadays, since we’re dealing with deep learning systems, systems that are fairly opaque black boxes where the brittleness is impossible to see, truly, deeply impossible to see, a lot of the time those brittlenesses are based on our prior assumptions about what will occur. Prior assumptions have all our existing biases built into them. So that’s the kind of chain of reasoning. It’s very useful to use technical systems. I’m an engineer; I love technical systems.
It’s just that we’ve got to realise that the complexity of the world means that technical systems always have flaws at their heart. And the way that we cope with those holes in technical systems is by using our own massive adaptive ability, not only at the level of the individual, but at the level of the organisation and ultimately at the level of society. That’s the backstop that makes everything work well.
So yeah, technology, I’m all for it. I’m against [00:22:00] the idea that we’re anywhere near ready to turn complex human decisions over to machines.
Nippin Anand: [00:22:11] Great. And then the immediate question is that, as we do turn more towards technology to find answers to most of our problems, more organisations adopt and implement databases and sophisticated tools for analysing data.
What is the way forward here, Rob? What would be a sensible solution?
Dr Robert Smith: [00:22:30] Yeah, it’s really interesting what’s going on in the AI ethics community. I just recently sat down and did a second read-through of something from the IEEE, the Institute of Electrical and Electronics Engineers, which is the largest technical society in the world.
I read through their recent 2019 guidelines on ethical technology, and if you read those, it’s very interesting. It’s geared towards the idea of ethics satisfying, I think it’s called eudaimonia, the idea of being focused on overall cultural and human good. [00:23:00] But in every instance, when they talk about privacy or security or effectiveness, effectively their recommendation is: put a human somewhere in the loop.
And so the largest technical society in the world, concentrated on electronics and electrical engineering, is basically saying: keep humans in the loop for ethics. I think similar things can be said for things like safety or human efficiency. Ultimately, use the tools, but keep the humans in the loop, because they’re the highly adaptive organisms with values that will make the technology work effectively.
Unfortunately, on the internet today, a great deal of stuff has been turned over to machines, a black box of machines that basically makes decisions on its own, with huge social impact. The one that everybody’s aware of, particularly this week, is the [00:24:00] effect of social media on human behaviour.
A way to think about social media is that it’s a newspaper where an algorithm is doing the editor’s job, and that’s a pretty accurate description. Unfortunately, the values that algorithm is programmed with are about personalisation, because personalisation is very tied to advertising.
So effectively we have an algorithm whose goal is basically to tell you exactly what you want to hear in order to keep you engaged. Without any human intervention, by the way. No one’s Facebook feed is controlled by a person. Everybody’s Facebook feed is controlled by an algorithm.
That has been done without a lot of thought. With a lot of thought about how to make a lot of money, but not a lot of thought about the implications for what it’s going to do to the system of human discourse. And what we’ve seen is a very bad outcome, which this week, and this month, has certainly become pretty apparent.
That’s a [00:25:00] place where handing a huge part of human discourse over to algorithms has caused some major societal problems.
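A purely illustrative sketch of that “algorithm doing the editor’s job” idea (the topics, scoring and data here are all invented, not how any real platform is implemented): the only editorial decision is a sort by predicted engagement, so the feed keeps serving more of whatever the user already reacted to.

```python
from typing import Dict, List

def predicted_engagement(user_history: Dict[str, float], post: Dict) -> float:
    """Hypothetical score: how strongly this user reacted to this topic before.
    No human editor anywhere in the loop."""
    return user_history.get(post["topic"], 0.0)

def build_feed(user_history: Dict[str, float], posts: List[Dict]) -> List[Dict]:
    # The "editorial" decision is just a sort by predicted engagement,
    # which amplifies whatever the user already responded to.
    return sorted(posts, key=lambda p: predicted_engagement(user_history, p), reverse=True)

posts = [{"id": 1, "topic": "politics_outrage"},
         {"id": 2, "topic": "local_news"},
         {"id": 3, "topic": "cat_videos"}]
history = {"politics_outrage": 0.9, "cat_videos": 0.4}  # past reactions
print([p["id"] for p in build_feed(history, posts)])     # [1, 3, 2]
```

Nothing in this loop asks whether the ranking is good for the reader or for public discourse; that judgment simply has no place to live in the code.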
Nippin Anand: [00:25:11] So, Rob, in a way, what you’re saying is that the combination of the commercialisation of news and the way in which the algorithms work leads us to some sort of unintended consequences, which is where we are today.
Is it really all unintended you think?
Dr Robert Smith: [00:25:28] I think it’s largely unintended. There are a few things to mention here. One: do I believe that there is a programmer ecosystem where effectively the people programming the algorithms are inherently prejudiced? In a sense I do, because the representation of women, minority groups, and people of different backgrounds in the programmer community is poor. It’s very poor. And [00:26:00] people have unconscious prejudices. So that’s one source.
Another source is that big data often draws on data from the past. For instance, in building language understanding, oftentimes that’s done from corpuses of text from the past. Those corpuses contain innate stereotypes and prejudices and biases; one of the most obvious was around gender, with regard to gender and profession. So that’s encoded into the existing bodies of text, and those things come over and get brought into technical systems.
Do I think that there are organisations out there attempting to manipulate social media in order to have political effects? Absolutely. And one of the aspects of a highly simplified system is that the ability for someone to manipulate it is much higher [00:27:00] than with a system where a human being is in the loop. So all of those things exist, but at the core of each and every one of them is the idea that algorithms intrinsically simplify. For instance, in your world, the safety world: if you reduce everything to a form, then are there people out there who want to deceive about safety issues in order to continue to make a profit? Probably. Will those people be able to do that better by exploiting the limitations of a form?
I would say that they would. That’s the reason that, ultimately, when you take it to the point of going to court, you have jurisprudence and you have examinations and you have testimony, because effectively those are efforts to pry into the real truth. A form allows you to basically obfuscate the truth because of its limitations, if that makes sense.
Nippin Anand: [00:27:58] Which makes absolute sense, and which goes [00:28:00] back to what we discussed earlier, which is that the whole idea of quantifying something into a report, reducing it, is problematic, because you will never get the meaning uncertainty you talked about earlier into the picture.
But I have a question here, because you talk about racial prejudice, you talk about many societal issues, and we talk about safety. It’s not very far from that, because as much as it is a business performance issue, it’s also a very ethical or moral issue in many ways, because if you’re not performing well as an organisation when it comes to your safety record, then society doesn’t view you in a very pleasant manner.
Now, the question then becomes this: many of the datasets that organisations analyse to understand the state of safety start with a big question. And that big question is: how safe is the organisation? So, what they’re really asking is: what are the traces of unsafety in the organisation? Hence we don’t [00:29:00] look at safety as such, which is how things go well, but we start to look at how things go badly in an organisation. So, that big question then drives everything else. So, I suppose what I’m trying to say is that there is a big question that drives all of this.
The algorithms are there to look for the things we want to find in the first instance. And this is the classic problem of: what you look for is what you find. I’m just struggling with this idea. Yes, I agree with you that there’s an internal logic to it, but that internal logic is also driven by a bigger question, isn’t it?
Dr Robert Smith: [00:29:33] Yes, and the idea of acceptable risk is intrinsic to many jobs, not all jobs, as some jobs are fairly risk-free, but with the pandemic we’re dealing now with the idea of acceptable risk. So, once you frame the idea of acceptable risk, you say: okay, we can accept people not being able to go to clothing shops, but we can’t accept people not being able to go and buy food. [00:30:00] Once you frame it, that frame becomes locked, and then what happens is that it’s like a machine for generating exceptions: society will find ways to work around the rules. Then you get into the idea of guidance, and you guide people; you basically say, this is unacceptable, this is acceptable, but be guided by these vaguer principles. In terms of safety at work, essentially there are things where you are told: you cannot do this, this is a hard rule. And then there is guidance. In all of those instances, some degree of human interpretation has to come in.
And with regard to safety, there’s another factor here that I want to bring up. It’s the idea that technology is supposed to make us safer in general, but in some cases it’s used very differently from that. For instance, Deliveroo is a system for delivering food to people, which is a great thing, particularly now, in the [00:31:00] pandemic.
However, the system of assigning people work with no human intervention basically does lead to danger. If you’ve ever driven from Notting Hill to Shepherd’s Bush on a Friday night outside the pandemic, you’ve seen how dangerous delivery drivers actually are in traffic. What’s happened is that instead of technology making the world safer, it has actually introduced a field of employment that is quite unsafe.
So, technology is supposed to free us all from the unsafe work that many of us face. But in fact, in pursuit of profit, it’s sometimes being used to actually place people in greater danger, to have higher-efficiency systems that are basically less safe. I suppose that’s a side point, but the thing I’m really trying to say is that technology is not the ultimate solution to safety, although technology obviously plays a very important role in safety.
The idea of monitoring, the idea [00:32:00] of reporting. Those are all very important things that can be enabled by technology. However, to think of technology as a panacea is to generate a situation where people will be placed in greater unsafety, I believe.
Nippin Anand: [00:32:15] Yeah, and you’re absolutely right, Rob.
You talk about the human in the loop, and my argument would be that in many organisations, when we analyse safety data, one could argue that there are many human beings in the loop, and still we end up doing what we’re doing, which is a very prejudiced way of looking at safety. So I’m just struggling to understand: how do you place that human in the loop so that you get a balanced understanding of what the artificial intelligence has to tell you?
Dr Robert Smith: [00:32:46] Yeah, and a part of this is by understanding what the limits are… I think one of the points of my book is basically to say that it would be really good if people had a fundamental [00:33:00] understanding of the differences between machine intelligence and human intelligence,
so that they can be applied in their appropriate places. And the limitations of machine intelligence are that it doesn’t deal very well with the unforeseen, and it doesn’t deal very well with the resulting complexities of humans dealing with the unforeseen. And people who are investigators understand this, I think, because what you do in your investigation of anything, an engineering problem or an organisational problem, is you basically say: okay, I’ve got this description of what has happened.
What don’t I believe about it? You question. Questioning is a fundamental aspect of investigation and it’s questioning the very structure of the knowledge you’ve been presented with. That questioning of structure is like the questioning of [00:34:00] ontologies or meaning. That’s the thing human beings are good at.
So if you put human beings in that role, the role of saying: here, I’m presented with something in a framework. Is the framework right, and where are the holes in the framework? And does the situation I’m being presented with, inside that framework, actually match that framework? That’s the role that human beings do a really excellent job in, particularly when they work in groups.
Machines are structure, so they have a hard time questioning their own structure. They don’t have the ability to deal with that kind of uncertainty. So, it’s a matter of understanding which roles humans play well and which roles machines play well, and then using those to synergistic effect.
Nippin Anand: [00:34:47] Great explanation, Rob, I must say. Because even if you go back to the Industrial Revolution, a good [00:35:00] 200 years or so, we have been designing technologies where we divide between what humans are good at and what technology is good at. And what you’re saying is that there should be some sort of integration between the two, some sort of understanding between the two. To me, it appears more like a social division of work.
Which is: listen, you are good at doing this, which is dealing with what is already known and giving us the best result within that certain zone. But when it comes to uncertainty, when it comes to knowledge that is still new, the novelty and the surprises that we encounter, let human beings do that work better, and together we can create a really good analysis of the situation.
Dr Robert Smith: [00:35:35] Yeah, it’s interesting. In the industrialisation of everything, the area where people understand this is the creative arts, which are understood to be a human activity. People talk a lot about machine creativity, and I think machines can do novel things, but the thing about human creativity is that it’s part of a social discourse, right? Fashion, art, literature are all [00:36:00] part of a social discourse. So, people create within that social discourse, people consume within that social discourse, and that’s why it has meaning. So, everyone knows that the creative arts aren’t the area where machines are going to excel most.
But what I think people fail to realise is how much human creativity is involved in many professions. If you look at surveys and economic work on which jobs can be easily replaced, jobs like waitress or receptionist often fall into the category of jobs that can be easily replaced, because what do they do?
They take people’s names at the door, or they serve people; they take people’s orders and serve them things. But if you have experience with the reality of waiting or the reality of greeting, you realise that’s a job for which you employ somebody who has a great deal of skill at creative interaction with others.
And that’s just one of [00:37:00] many examples. If you’re talking about, let’s say, an accident investigator: I’m sure an accident investigator who is a plodder, who has no creativity, who has no imagination, is probably a terrible accident investigator. A person who has diagnostic, investigative skills, someone like House on the TV series House, who has those kinds of leaps of reasoning, that’s the kind of person you want investigating tragedies, because tragedies always involve the unforeseen. And the unforeseen is very tied to the ability of human beings to be creative and to imagine. So, I guess what I’m saying is this: in the industrialisation of things, we did have a segmenting of jobs that were non-creative and jobs that were creative.
But I think that many jobs human beings do, which we assume aren’t creative, really, in a deep way, are. And when you put machines in that role, you lose something very important.
Nippin Anand: [00:38:08] What intrigued me the most was the idea of bringing creativity back into organisations, because we don’t realise how creative our roles as professionals are, even though they may look very mundane from the outside.
Dr Robert Smith: [00:38:21] Yeah, absolutely. And another point I’d make is that that’s at the individual level; at the organisational level it’s important as well. I read a really good book, which I have misplaced and can’t find again, written by a senior CIA analyst, who talked about healthy and unhealthy intelligence analysis communities. And a healthy community is one… let me first talk about an unhealthy community. An unhealthy community is one where everyone is looking at the problems the same way, where the narrative, the story being told [00:39:00] about 9/11 or an investigation, is the same.
If the story’s all the same, then that’s an unhealthy community. What you want is a diversity of perspectives. And this is where the concept of diversity comes into this. At the level of thoughts or at the level of societies, a diversity of perspectives is the fuel for creative exploration.
So, this is diversity in the broadest sense, not just racial diversity or the way that term is used in that context, although that’s important as well. Having a diversity of perspectives at the organisational level is the way that you’re able to creatively and imaginatively examine and solve your problems.
And I think that’s critically important, and a lot of the time it’s completely overlooked in organisations as an important value.
Nippin Anand: [00:39:49] Rob, I can’t agree with you more. Most of my work is around diversity of perspectives, and you’re absolutely right. This is not about gender diversity. It’s not even about [00:40:00] cognitive diversity.
It’s really about diversity of perspectives, which is welcoming thoughts that don’t align with ours. And those thoughts could differ for various reasons: gender and ethnicity being one, but also the way we are socialised, the way we are educated, and also how our goals and targets define us as professionals.
And often that kind of diversity is overlooked. And I think there is a very good reason for that, which is that organisations want efficiency, and by efficiency I mean very short-term efficiency. That kind of stifles diversity, because once you’ve made up your mind that you’re going to make a decision, you don’t want to hear otherwise, for a whole myriad of reasons. And one of the biggest constraints I find when it comes to diversity is the idea of regulation and compliance: that there is one way of doing it, and that’s the only way of doing it, because that’s how regulation and compliance will measure you.
Dr Robert Smith: [00:40:53] Yeah, exactly. And then we’re back to another formal system. Regulation and compliance is a formal [00:41:00] system, and that’s great; it’s great that it exists. But regulations tend to generate their own sort of innovations, because people will adapt around them.
That can be either positive or negative. If the regulatory system is blind to those adaptations, and if it is inflexible, if it’s never human-mediated and then adapted, effectively regulations become a way of evolving ways to get around the regulations.
It’s interesting: if you think about the coronavirus now, you can observe where its next evolution is going to be, because if you have holes in the systems that are trying to contain coronavirus, the virus is going to adapt to those holes. Where the virus is able to transmit itself, it’s going to get better at that transmission, because [00:42:00] those particles will survive and other particles will die off. So effectively, those holes in the system are where we can expect coronavirus to evolve. And this is one of the reasons that I personally feel we’ve got to be very careful about what we do with schools.
Although young people, very young people, prepubescent individuals, currently aren’t great transmitters of coronavirus, if they’re transmitters at all, it’s going to get better at it. So, holes in systems are exactly where evolution occurs, and that can be positive, in the sense of innovations filling a niche, or negative, in the sense of ways of getting around the systems of control we’ve tried to put in place. And where we need humans in regulatory processes is to be constantly adaptive in response to the adaptations to the regulations.
Nippin Anand: [00:42:54] Great. There’s a lot to learn, but I’m just conscious of the time, Rob. [00:43:00] This has been a great education, for me at least, at a very personal level. Thank you so much for your time.
Dr Robert Smith: [00:43:05] Thanks. It’s been a great conversation. I think you can see that your ideas and my ideas are very sonorous.
Nippin Anand: [00:43:13] So what did you think? The whole experience left me thinking: what an interesting perspective, so many things to think about. Let me just summarise a few of them if I can. I think Rob starts with the logic of artificial intelligence, which is basically his idea of the rage inside the machine: that there is some internal logic to the machine, which is the simplification, reduction and generalisation of very complex human experiences.
He finds this problematic. One example that made me think was this whole idea of report writing, the reports that we write: formal reports, safety reports, you name it. And he [00:44:00] says that, in fact, the very act of writing leads to a very certain kind of knowledge. It excludes a lot of nuance, and in simple words, what he’s trying to tell us is that what we experience, what we learn, what we retain in our memory is far more than what we express on that piece of paper.
I think it’s even more dangerous that we take that piece of paper as the truth and start to draw some very narrow meanings from it, by putting certain categories onto it, certain labels onto it, hashtags and whatnot. So we have to be very mindful of that. Another thing that Rob talks about is the idea of artificial intelligence and how good it is at analysing both retrospective knowledge, that is, knowledge of what has happened in the past, and very technical, cold, hard information that cannot be disputed. But when we talk about human experiences, when we talk about decision-making, I think what Rob warns us is that [00:45:00] we need to treat these kinds of analytical frameworks with a little more scepticism.
And if we really want to learn, improve and get better, we need to make human beings part of the loop to make sense of this analysis, particularly when it comes to dealing with new knowledge, surprises and novel situations, which technology apparently is not good at.
I really liked the idea of creativity, and the way he says that every job, no matter how menial we think it is, has some level of creativity, even if we are talking about somebody who does very mundane work in your company. And when we try to assign these jobs to machinery, to automation, that creativity is lost, and with it our ability to resolve some very uncertain situations.
He calls upon the need to create space for more creativity in our organisations. We may think that we can simplify some of our jobs, but the problem is that the moment you try to simplify a [00:46:00] job by putting some sort of automation or some logic onto it, you actually introduce another level of complexity into it. He gives the example of a receptionist’s job; it’s quite simple and very easy to understand.
And finally, he talks about adaptive regulation: how can we make our capacity to control and regulate more agile, more dynamic? And he says that regulation must come out of this idea of static knowledge and try to engage with new things, new situations, and learn and get better.
Otherwise we end up in the very classic situation where we think people are following everything that is prescribed in regulations and controls, when they are doing something completely outside of that. And the only time we come to know about it is when something bad happens, an accident or an incident.
I think that was a very interesting conversation. I really loved it, and I hope you enjoyed it too. I will leave you with these thoughts, and I [00:47:00] think in the next podcast I would like to introduce you to the idea of organisational failures with one of my very favourite authors and academic researchers, Lee Clark.
So, stay tuned. I will be back again.