Season #1
Tony Fish

Can We Master the Art of Questioning in an Age of Instant Answers?

Overview

Tony Fish sat down with Chris Parker to explore the critical space between AI and humanity, a conversation that challenged conventional thinking about artificial intelligence and our relationship with it. Fish brings a unique perspective shaped by 30 years navigating uncertainty as a serial entrepreneur, investor, and board advisor. His latest book, Decision Making in Uncertain Times, focuses on helping leaders surface difficult, unasked questions, and this conversation proved to be exactly that kind of deep, uncomfortable exploration we need right now.

Rather than falling into the typical camps of tech optimism or dystopian fear, Fish argued that we haven’t yet understood either side of the AI debate deeply enough. He introduced his framework of the Peak Paradox, which challenges us to balance four competing domains: personal survival, individual optimization, organizational needs, and societal good. This tension, he suggested, is what makes us uniquely human, and it’s precisely what AI implementations often ignore in the rush toward efficiency.

The conversation ranged from consciousness and heroism to organizational design and measurement systems, ultimately centering on a provocative question: What is the one meaningful KPI we should have for AI? Fish’s answer challenges us to measure not how efficiently AI can answer predetermined questions, but whether it helps us ask better ones. For leaders, thinkers, and anyone grappling with AI’s impact on work and life, this conversation offers a framework for maintaining humanity at the center of technological transformation.

Tony Fish is a serial entrepreneur, investor, and board advisor with over 30 years of experience navigating complexity and uncertainty across multiple ventures and industries. He has authored six books, including Decision Making in Uncertain Times, which challenges conventional leadership approaches by focusing on thriving amid ambiguity and incomplete data. Known for surfacing difficult, often unasked questions, Tony works extensively with boards on governance, ethics, and strategic decision-making while teaching at leading business schools. His career reflects a consistent focus on ensuring technological and financial systems remain accountable to human values.

Key Topics & Timestamps

  • [01:32] Why Wrestle with the “AI and I” Debate?
    • Tony discusses the importance of moving beyond shallow, polarized debates about new technology, drawing parallels to historical reactions like the Luddites.
  • [02:48] The Two Camps of the AI Debate
    • Chris and Tony break down the common “tech optimist” versus “dystopian” views and how science fiction films have culturally shaped our understanding of AI.
  • [08:20] AI and The Hero’s Journey
    • How does the classic “hero’s journey” narrative apply to humanity’s current relationship with Generative AI, a technology that has presented a “threshold to a new world”?
  • [12:20] The Four Paradoxes of Decision-Making
    • Tony introduces his “Peak Paradox” framework for dealing with complexity, which involves balancing four key areas:
      1. Survival
      2. Optimizing for the self
      3. Optimizing for the organization
      4. Optimizing for a better society
  • [21:09] Four Critical Questions for Implementing AI
    • Tony provides four essential questions that leaders should ask before implementing any AI system, focusing on its impact on human options, compassion, and adaptability.
  • [29:07] Reconstructing Our Systems for an AI Future
    • A discussion on how AI challenges traditional management hierarchies and reward systems, shifting the focus from seniority to value creation.
  • [40:04] A Final Question for You, the Listener
    • Tony and Chris leave the audience with a thought-provoking question to discuss with friends and family: What is the one meaningful KPI (Key Performance Indicator) we should have for AI?

Memorable Quotes

  • “With AI and I right now, I don’t think we’ve understood almost either side of the debate. And the nuances within the debates are so shallow and they become very opinionated.” – Tony Fish
  • “If you were to become a superhero yourself, what part of you would you want to maintain as truly human?” – Chris Parker
  • “Does this AI system expand or constrain the range of meaningful options available to humans?” – Tony Fish
  • “What is the one meaningful KPI we should have for AI? … It will help me ask better questions rather than efficiently answer predetermined ones.” – Tony Fish
  • “My measure of success for AI would be to help the next generation be better people than we are.” – Chris Parker

The Ebullient Growth Agency helps leaders and teams bring AI to life in their organizations. Work directly with Chris to design strategies, workshops, and transformation programs that unlock growth and innovation with humanity at the center.

The Enablers Network partners with organizations around the world to navigate leadership, culture, and change in the age of AI. Together, we help leaders and teams adopt new technologies while staying deeply human and purpose-driven. Explore more at TheEnablersNetwork.com.

The GenAI Circle is a private network for practitioners, creators, and builders shaping the frontier of generative AI. Join peers who are experimenting, sharing, and co-creating the future of work, creativity, and intelligence.

The AI Collective brings people together for in-person and virtual meetups around the world. Connect with others exploring AI’s real-world impact in your city or industry.

The AI&I Show is made possible through the creative collaboration of Ahmed Mohsen, whose production and storytelling bring these conversations to life. Reach out to Ahmed if your SaaS business needs a product marketing boost like he provided to The AI&I Show!

Chris Parker: This is Chris Parker, and this is The AI and I Show, and I’m having a conversation with Tony Fish, who is a serial entrepreneur, an investor, and a board adviser with 30 years of experience in, and I love this, navigating uncertainty. He has published six books; the most recent, Decision Making in Uncertain Times, I found so relevant to our conversation about what AI means to I in this crazy, uncertain time. He’s known for helping leaders and boards surface those difficult, unasked questions, and I’m fascinated to see what kind of unasked questions we can surface in the time we have together. So to kick us off, Tony, can you share why AI and I as a topic attracted you to engage in this debate? Why is this relevant and meaningful to you as an I in that AI and I equation?

Tony Fish: Thank you, Chris, and thank you for asking questions and opening a debate. I think at the beginning of all new technologies, and we often go back to the wonderful world of the Luddites and how people would break down the mills because they didn’t understand them, the reality is every technology sees the same thing: as technology moves ahead, there are people who try to stop it. And I look at these things by saying we should equally value the progress, but we should equally value those who resist it, and we should be able to master the ability to see both sides of their views. And with AI and I right now, I don’t think we’ve understood almost either side of the debate. And the nuances within the debates are so shallow, and they become very opinionated. We see this politically: you’re in one camp or the other camp. You can’t sit in both camps and equally value them as we wrestle with it. And that’s where I’m at. That’s why I love what you’re trying to do. It seems to be: let’s wrestle with this stuff, let’s get deep and dirty, let’s get muddy, let’s get a few bruises on ourselves, because that’s the only way.

Chris Parker: And how would you describe those two camps? I think there’s the tech optimist, Silicon Valley, you know, bro culture, maybe that’s what we called it a couple of years ago. And then there’s the dystopian: we’re all going to die, it’s going to be a disaster. How would you maybe articulate that better, based on your observation?

Tony Fish: I think you were spot on. One is almost, you go back to the 1930s, with Flash Gordon and some of those wonderful images of how we would be flying around by today, and none of it became true. And there were people even at that point who were still worried that if you went faster than a certain number of miles an hour, the brain would fall out. Just fascinating pieces, and we’ve done it in history so many times: there are the camps of the optimists and the pessimists, effectively the dystopian and the utopian. And I suppose so much of the dystopian is now what we see in the media, you know, Terminator, The Matrix, all of those massive movies. And actually, I think when people start to look back and see how early, even 2001: A Space Odyssey, “Hello Dave, I’m not going to do that,” they start to look at it and go, how did people that long ago actually have some of that thinking? We just enjoyed it, and we didn’t see it was going to be relevant, and then it’s almost caught up with us and is biting us. They’ve become so cultural to us, we refer to them so often, that I don’t even know if we know whether they’re utopian or dystopian views anymore.

Chris Parker: Well, I look at it maybe in a slightly different way: in our modern, certainly Western culture, media and Hollywood are a way of expressing and discovering cultural possibilities that we don’t otherwise have another avenue for. And so Skynet is presented in a certain way, maybe in the same way that comedy is a way of positioning a conversation around politics or things that are uncomfortable. It is a way, I think, for us to wrestle with it and maybe have conversations about it. I remember, whenever it was in the ’90s when The Matrix came out, my friend Brent and I were in Santa Clara watching it, and afterward having a conversation like, okay, what just happened? What did that mean to you? And so if it’s triggering thoughts that can maybe help us prepare, you know, my response in this time, then maybe it’s helped. I don’t know. But maybe it’s also too limiting, because if we only have Terminator and The Matrix, it’s bounded our view of the possible, because typically it’s not a rosy picture. Hey, here’s a movie about AI replacing all the mundane work and people becoming fully enlightened. I haven’t seen that movie yet.

Tony Fish: I completely agree. And so often you go into conversations with people and their views are informed purely by those media films. They really haven’t got any depth below that. Yet what, to me, they should be doing is asking: who wrote the script? How did they get that view, and when was that script written? Because to me, those are the things that show people were starting to think about this far earlier than almost the rest of society.

Chris Parker: Yeah. I think if you ask those questions around Star Wars as well, you will find an amazing cultural rabbit hole around that time. It’s the story behind the story.

Tony Fish: Yeah. And the oddity about Star Wars is I have a deep affinity to it. My next-door neighbor when I was growing up, so this was when I was 13 years old, was the trumpeter on the signature theme tune. And because he was the trumpeter, he actually got tickets to the red carpet. So on the 4th of May 1980, I went up the red carpet to the opening of The Empire Strikes Back, and I saw the film when I was 13. And all reports are, I don’t remember this conversation, but all reports are that when I came away with my dad, I said, “Well, that’s a rubbish film. It’ll never catch on.” The only thing we can conclude from that is I shouldn’t be a film critic.

Chris Parker: Can you imagine why you thought that? Was it just not for a 13-year-old at that time?

Tony Fish: I don’t know. I don’t think I saw the story. And actually, now I’ve worked out, well, I haven’t worked out, but now it’s been explained to me that that’s actually the seventh part of the story. Then you’ve got to watch this and this and this, and if you actually watch them in order, then the story hangs together. I just couldn’t work out the story, and that was my problem.

Chris Parker: Well, yeah, because you’re stepping in midstream. And mentioning story: the arc of Star Wars, and of most other myths, is the hero’s journey. And I’m wondering if we can overlay the hero’s journey onto AI and I. AI has been around for a while, but Gen AI just broke through, and two years ago the opportunity to cross the threshold into this new world was presented to all of us. It’s just been democratized, because you can literally just talk to it or type to it. You don’t need Python knowledge or any other advanced skill. You can just go for it. And along the way there are going to be betrayals, there are going to be people who help you, and there are going to be some artifacts: the whole aspect of the story. And at the end, we’re going to get to a new normal. So I’m wondering if we can unpack a little bit whether you have any suggestions or advice on what people can put in their mental backpack, if you will, as they go on that hero’s journey. Is there anything they should not do, or really should do? How can they maintain themselves and not get into this “oh, Skynet is the only definition of the future because Hollywood said so”?

Tony Fish: Yeah. Actually, for those who can get access to the BBC, it’s worth going to the podcast section of the BBC website and looking for a guy called Rory Stewart. Rory follows certain themes, and one of the themes he’s been following fairly recently is heroism. He’s done about a seven- or eight-part series on what it meant to be a hero, all the way back in Greek philosophy, all the way up to Zelensky. But on the last episode he asks, “What happens when AI becomes the hero?” And in the last part of it, he basically brings us all the way up to date with the superhero. Before, heroes went into war, and World War I and World War II changed that, because you couldn’t stand up in the trenches and not die. He uses the Laurence Olivier example: you stand up and you basically die by shrapnel, because suddenly there were these mass annihilation techniques which came to the fore. So the old hero vanished; you couldn’t be that type of hero. And out of that came this whole ideology of the poster hero, and then, effectively, the superhero. He’s carried that forward to today, how those figures were superhuman, and suddenly we’re looking at this thing which is superhuman. So how are we going to heroize it, or what are we going to do with this thing which is beyond human capability? He leaves it as a question. I leave it as a question, but I think it’s one we kind of need to wrestle with. I think it’s a great…

Chris Parker: Well, I think let’s wrestle with it. And another view on that is: AI can make you superhuman.

Tony Fish: Yes.

Chris Parker: So, you know, general AI, or when it becomes more human, cognitive, and interactive, and maybe even physical, okay, we’ll have another podcast in three years when that happens. But right now what people have said is that these tools will give me and you superpowers, extensions of ourselves. Which can also be quite concerning, because with ultimate power comes ultimate responsibility. And if you were to become a superhuman, a superhero, maybe this is an even better way of couching this question: if you were to become a superhero yourself, what part of you would you want to maintain as truly human?

Tony Fish: So, in fact, in the final chapter of my Decision Making in Uncertain Times book, I present a framework called the peak paradox, and the peak paradox is about how we manage the paradoxes we see in front of us, because that’s how we deal with complexity. I draw on the idea that there are four major paradoxes we have to deal with on a daily basis. One is survival: have we got enough food and water to survive? The second is: how am I optimizing for myself? The third is: how do I optimize for the organization, or construct, for which I work? And the last one is: how do I optimize for a better society? Now, people are paradoxes, because people are drawn to one or the other, and the paradox is introduced because you’ve got to try and balance all of them at the same time. You can’t live at peak paradox, which is where you try to balance all of those at the same time. Therefore you have to find where you, as a human, are able to wrestle with the tensions, or live with the tensions or compromises, that a particular position creates. And that, to me, is what makes you completely, uniquely human, because probably only you can consciously wrestle with those compromises and those tensions. That’s what really triggers me with this framework: it’s “okay, I am choosing the collective over me now, and therefore the consequence is X.” The problem I see with lots of humanity is we don’t do that. We just end up in these often compromised positions, which is where we start to see mental breakdown. We see work situations flare up over very simple things, because people haven’t realized they’re actually compromising, or living under tensions they didn’t realize they didn’t want, because it doesn’t feel natural to them.

Chris Parker: How can we apply this? Let’s go down the vision of AI providing the general person superpowers, and then they need to start making decisions on whether they wield these superpowers for any one of those four domains, if you will. What would that mean to someone? How would that become alive in their experience?

Tony Fish: Yeah. But then again, what is going to be your superpower? Is it going to be hope? Is it going to be love? Is it being bold? Being courageous? Being curious? What do you judge as a superpower? And I think we don’t even have that level of debate. We go down to the pub and everyone wants to be faster, quicker, stronger; that’s their one idea of what a superpower is. My one, I suppose, is curiosity. If you want a superpower, how do you become more curious? How does your curiosity never stop? Because you can never understand everything, but it’s the connectivity between things which is unique. AI can now learn everything, and it can recall everything, but that doesn’t mean it can connect random ideas to create something new. And, you know, we as a human species still do not fundamentally understand what consciousness is. There are 180 theories of what consciousness is, which sit in about 12 different ontologies, so there are 12 major schools of what they believe consciousness is, and below each of those is a good 10 to 12 further sections. So we have no idea what consciousness is. We have no idea where some of this stuff comes from. We have varying theories which, depending on which camp you’re in, may or may not add up. So many people come to some of these decisions about AI, or thinking about AI, without, I think, facing the reality of that lack of understanding. Hence the reason I always go back to: what is the question we didn’t know we had to ask? And if we’re going to come to AI and debate AI, I think the first thing we should be doing is spending time to think about the questions we should be asking that we’ve made assumptions on.

Chris Parker: Like consciousness. Is that what you mean?

Tony Fish: That’s one example. And, like Rory Stewart’s thing, what does it mean to be a hero? Because that’s fundamentally changed at least 10 times over the last 5,000 years, it’s just about to change again, and it’s very culturally sensitive as well.

Chris Parker: Incredibly culturally sensitive. Coming from my American background, everyone wants to be a hero. And from my Dutch adopted background, they don’t even know what the word means. Of course they have heroes, usually football players and such, but not nearly as many. So let me grab on to consciousness while still trying to navigate between this AI and I. Consciousness is something we are aware of, maybe not explicitly, every day, and the way consciousness expresses itself could be in the form of a superpower of curiosity, or heroism, or something like that. But it feels to me like this is an essence of humanness that maybe AI will give us the time to explore more, in kind of an optimistic view.

Tony Fish: I think what you’ve hit on is actually something really important, which is: an AI implementation could give you more time to be curious, and more time to find those qualities that resist optimization, efficiency, and effectiveness. They effectively provide time for you to work out efficacy, what is right, and therefore to ask the questions “is this right?”, “how do I know it’s right?”, and “who decides if it’s right?”, which are all the things that efficiency and effectiveness want to drive out of the system: your ability to ask a question.

Chris Parker: Yeah. Coming at this from a different angle: operational excellence, taken to the nth degree, is death, because there’s just no movement, no opportunity for creativity. And do you think, or have you seen, because our baby toe is just dipped in the water with this Gen AI thing, are you seeing any hints or clues that the automation, or the effectiveness, or the time savings are being used for the right things? And “right” is very subjective, I know…

Tony Fish: It’s incredibly so. I haven’t got a clue is the base answer. And the problem with data is everybody can use data to justify the outcome they want to show. Lots of people are saying, and Deming was one of those classics, that if you haven’t got data, you’ve only got an opinion. The only problem is, if you don’t understand who’s supervising the data that you’re now using, you don’t understand that your thinking has been supervised by somebody else. So all you’re doing is representing the opinion of the person who gave you the data, rather than having your own opinion. People don’t like things like that, because it fundamentally challenges them; sometimes they believe data is a truth or a right or a fact, and data is none of those things.

Chris Parker: Well, I think maybe they don’t like it because the conclusion of that is it’s their responsibility to ask a question, or to take a decision informed by, well, you’ve written about uncertainty. We all live in uncertainty, and there is no perfect decision context. There are always these things that come back to your four paradoxes.

Tony Fish: I want to go back to answer your question, Chris, because I think it’s a good one. Where I come from is: what are the questions I should be asking? And I’ve framed, at the moment, four questions. I don’t know if they’re good questions. I’m sure people will create better questions, and if they do, hey, drop them to you, Chris, and we’ll share them around. But these are the sorts of questions I’m trying to get people to ask. The first one: does this AI system expand or constrain the range of meaningful options available to humans? So we’re sitting in a board meeting and I’m asking the question: does the AI system we’re just about to implement expand or constrain the range of meaningful options available to our customers? They don’t like that question, because if you’re in the world of optimization, it is absolutely only designed for one thing, which is to constrain and lock people in and exploit.

Chris Parker: Well, connecting that to a different space in strategic decision-making: I’ve heard, and I’m sure there are people who have written on it, this isn’t my thinking, I have to point at Tim Ferriss, I guess, or somebody who was on his show, that strategically, if you don’t have a clear decision, then make a decision that opens up more decisions. So when you just said that, what resonated with me is: is this providing a wider spectrum of surface area for playfulness and decision-making…

Tony Fish: Yes, perfect. So the next question I take to people does exactly the same thing, which I love: does this AI system increase or decrease our collective capacity for novelty, compassion, and love? I’ll read it again. Does it increase or decrease our collective capacity for novelty, compassion, and love?

Chris Parker: I love it that you named love.

Tony Fish: Yes. I wrote a book, Lead from Love with Rumi, which was my reflections on the 13th-century Sufi mystic, and because it is a business leadership book, I wrestled with naming love. And I love it that you did, because, just as you experienced with me just now, it forces you to stop and say: well, that’s a word. If people are going to reflect on whether this increases the capacity for love and empathy and compassion, then that is a thought you should have. So I fully support that question.

Chris Parker: Yes. Because we cannot disassociate humans from business. And if we do become completely dispassionate in our business, then the reality is you are not going to decide for the collective good, or the collective capacity for novelty, compassion, or love. Yet you leave the business, i.e. you go home, and what’s the one thing you want? You want to be liked and you want to be loved. Deeply human connections. I would even go further: the management science peers and colleagues I have who are hardcore metrics people, run the machine, the people are disposable. I think that’s a short-term plan, because without human development and compassion you will not innovate, you will not find the next breakthrough, your customers will feel that your people are tortured, and it’s going to be short term anyway. So yeah, I like it, because this is actually pointing out that business growth, which of course we’re looking for financially, is dependent on human growth, and if you…

Tony Fish: The third question is: does this AI system support local adaptation while maintaining beneficial connections? Because we don’t want to be isolated. How do we become part of the network?

Chris Parker: Can you unpack that for me? Because, with my technical background, I’m going down APIs and my head is going in this whole strange direction.

Tony Fish: Yes. So, does this AI system support local adaptation? Can you do some of that cultural awareness piece while maintaining the benefits of connectivity, effectively globalization? Can you deliver something that’s unique to Chris in his local village or his local city, and the benefit he needs in the instant, and the personal context he’s got right now, which isn’t about personalization, because it’s now, not forever? But also achieving the economies of scale that we want to achieve through it, because we understand the economic benefits of it, while not treating everybody as effectively the same thing. So does it balance the local issues with the benefits that we understand come through scale?

Chris Parker: This is based on your four paradoxes, basically: hey, can you balance this, can you do this and this and this and this, as best you can…

Tony Fish: And actually, you can’t; you can’t manage them all. So where are you going to compromise, and where are you going to live with tension? Because then, when you come to interviewing, if you can articulate the tensions and compromises, the person sat in front of you goes, “I agree, I can live with that tension, I can live with that compromise,” and you’re going to build a team. If you can’t call them out, you’re going to bring team members in who basically become destructive, because they want to move where you have come to that balance. And that’s what we tend to see, and this starts now at scale. So you end up with people moving outside organizations, even though they’ve got the right skill set, because they didn’t understand that, effectively, this organization has come to a conclusion about how it balances those tensions and compromises.

Chris Parker: I see. My experience, I come at it from a little bit of a different angle, but when I’m doing strategy work, I really emphasize what we’re not going to do and what we’re not going to achieve. And that’s actually, for me, a better conversation, because if you have 10 people around a table and everyone puts in their ambition for the quarter or the year and we say, “Yeah, we’re going to do all that,” you’re not, you know, you’re just not. So have that conversation and say, “Okay, if we can do only one thing, which is the one thing we’re going to do? And if you want to do some other stuff, fine, but what is the one thing we have to do?” But anyway, the fourth question…

Tony Fish: The fourth question: does this AI system help us recognize when our management systems themselves need reconstruction? We use management systems and reward systems, or measurement systems, to provide incentives plus remuneration plus bonuses, which is how we effectively keep people focused on the task, and we say to you: this is the one task you’ve got to go and achieve. What we don’t then go back and question is whether it’s still the right thing now that we’ve moved forward six months, because it is now baked in that you will get your bonus on us achieving that thing. So then what happens is we start diverting resources away from doing the right thing that we should now be doing, because your bonus and incentive and measurement system is all geared towards this other thing. And the one thing that’s going to happen now at speed…

Chris Parker: Speed, yes, that's the dimension of speed you're bringing in, because typically you would do this once every year or two, with some sort of compensation review, and now we might determine that it's actually completely the wrong thing. The problem is that humans are very good at going, "No, I don't want to change that, because I know my bonus is going to come through because I've done X, Y, and Z."

Tony Fish: Mhm. Whereas if we can say, "Look, you're still going to get your bonus, but we're going to do this rather than that," suddenly we change the way of thinking about incentives. Now, the issue is that humans have this paradigm-attachment problem: we form an identity because of the role we're performing, the level of seniority we've got, the organization we work for, and the model that organization has. And if you start to challenge all of those things, how does the human have an identity? You see this: a person wanders into a room and is introduced as the CEO of a bank, and everybody goes, oh, that's fantastic. What you don't realize is that their identity is no longer their name or anything else; they are now the chief exec of the bank. So if you challenge the banking model and say it's going to disappear, the individual becomes incredibly defensive about their measurement systems, how they're rewarded, everything else, because they're paradigm-attached. They are identified with the model and the branding. And this is what we're starting to see: many of the human problems inside these organizations come from humans being attached to certain ways of working, certain branding. We've gone through this whole position of your brand, your identity, who you are and how you identify yourself, and suddenly we're finding that's a real problem, because it's preventing us from saying we need to change faster if we're going to be in this new world. But therefore I would ask: who are you? Sorry, just to finish off, Chris: that's why I think this podcast you're doing is so interesting. AI and I: you've kind of nailed it in the question you're asking.

Chris Parker: There's so much in that last question. It's really an organizational-dynamics and organizational-design question, and I don't think people are great at designing those systems to begin with. That's why Hay points existed, and all of these artificial mechanisms we tried to use to scientifically make everything fair, and maybe remove all the tensions from these decisions about the pay and the bonuses and the packages and titles of individual people. I think we're going to have to release some of that preciousness, and then accept that equal doesn't always mean equitable. We can be very fair and not necessarily always the same, because we're shifting and changing, so on a quarterly basis we're going to have to adapt. I think that requires an incredible level of knowing who you are. People are going to have to be very in touch with themselves, very solid beyond their title of CEO of the bank, and through that very trusting, meaning: I'm going to be okay even if I don't completely understand how the pay scale is going to work over the next couple of months.

Tony Fish: Yes, because we're creating things, we're creating value, and I am confident that this will return to me.

Chris Parker: Well, that's the really interesting part of this, because…

Tony Fish: Actually, this is why the measurement systems become really interesting, because you can start to understand, one, what value is to you and the organization, but second, you can start to reward based on value. At the moment we have a hierarchy, and pay goes up as you go up the hierarchy. Very simple, but brutally rubbish; it's a completely rubbish system, because some of the largest value creators are right down at the bottom. What the new systems allow you to say is: actually, we can change the whole communication system. There's no need for hierarchy, because hierarchy was about communications, and AI makes communication flat and instant. So all of that vanishes, and you have the ability to get what you need. Secondly, it can start to identify, and help identify, where value is created. That adds another layer: you can now start to reward based on value, not on seniority, position, job title, budgets, all of that stuff.

Chris Parker: It's a shift in the power dynamic, and the way I describe it to managers is: okay, in the past, with information scarcity, decision-making sat at the top of the pyramid. I know the answer, and everyone else needs to do the work. I think what AI is going to do is flip that whole paradigm, that actually the answers are…

Tony Fish: Oh, I love this. This connects straight to it. The answers are available, and the value is in whether you know the right question to ask. Yeah.

Chris Parker: So the people who know the right question to ask are the ones who actually have the problem. Of course there are going to be some strategic problems, and it's going to be a bit vague, but the ability to act within context in that moment, even though it's uncertain, the VUCA world and all that, the volatile, uncertain, complex, ambiguous world we live in, means the front line will be more and more empowered to make…

Tony Fish: Other than that. So, VUCA, the origin of VUCA: it comes from the military, from a lady called Gillian Stamp, who was doing some work for the military, and it was designed for situations. A soldier is standing at a door and he's about to kick the door down. How do we train him? We know the situation is volatile. We know it's uncertain, because he doesn't know what's on the other side of the door. He knows it's complex because of the multiple things going on, and we know it's ambiguous in terms of what the outcome's going to be. So the question is how to train the soldier so that, when he kicks the door down, as many things as could possibly have been trained for have been trained for. What the business world has done is convert that piece of thinking into something without really understanding what it was for. They use it as a way of describing complexity, as opposed to the way the soldiers were trained, so that no matter what happens, they know how to deal with the situation. Of course, in business we're never trained to that level, and it's just basically inappropriate at that point. But it is a nice buzzword to kick around to express the thought that things are very, very unknown.

Chris Parker: I think what business also does, and this is where AI can help, is, let's talk about a customer service agent rather than a soldier kicking down a door. Picking up the phone can also be nerve-wracking, but AI can handle, you know, 97% of the mundane in that case, because it's not VUCA, to use the word again. In fact, there are going to be only a few times when a human intervention, for whatever reason, is very appropriate. And beyond that, the permutation of that idea is that what you did yesterday I can do again tomorrow, and actually everything I did yesterday will scale tomorrow. And that requires that the processes and methodologies you put in are infinitely scalable and don't change, and I think that's fundamentally…

Tony Fish: I violently disagree with that, because it's got to be adaptive.

Chris Parker: Well, meaning the systems I have set up with self-learning AI: you can program these to take this morning's responses and automatically update, call it the SOP. You should probably have a human in the loop there, but they will become more and more adaptable, I feel. So, sorry, that was my point: this is the difference between automation…

Tony Fish: Yeah, okay. Where we're automated, you know, back to the Adam Smith ideology of the division of labour: you don't need one person who can make the whole pin, you need one person who can do one particular bespoke function, and we can automate that. I totally understand that, because we're still manufacturing pins several hundred years later in the same way, so those levels of automation really work. But that's not what humans do, or want to receive, in other contexts. So I just think we need to separate automation ideologies from processes and methods, which are actually a completely different scaling problem.

Chris Parker: Love it. I think every segment of this conversation we could unpack and go down a rabbit hole of infinite length. Tony, what I'd like to wrap up with: you presented four questions, predominantly in the context of a business conversation, and I think those questions are super relevant. I'm busy creating an AI adoption and strategy playbook with a bunch of other experts, and I'm definitely going to steal these.

Tony Fish: Yeah. Hey, do you know what I mean? Improve them, please. Please.

Chris Parker: And to have those reflective questions: Is it wise? Are we balancing the constraints? Is this good? These are the little tensions I want to build into these systems, so people don't just go, "Yeah, just go for it." Well, sometimes you do just need to go for it, get your black eye, come back, and redesign it. That's fine. But what would be the question we're inviting people to reflect on when they stop listening to this, for themselves, but ideally at dinner tonight amongst friends and family, to share what they heard here and say, "You know what, I want to ask us all this question and have this conversation"? What do you think the most beautiful, loving question could be that would trigger this conversation right now?

Tony Fish: So, I try to write about four questions a week, which I publish on opengovernance.net. The reality is, if I gave one question today, it'd be out of date by tomorrow, because I'd want to create another one. So I'm not going to do that. But I think there's one useful one, which is great to have over the glass of wine, the beer, the water, the barbecue, the meal, driving in the car and chatting to somebody, whatever situation you're in. And I'll answer my own question, so that helps people, but I think it's a question worth asking, which is: what is the one meaningful KPI we should have for AI?

Chris Parker: What is the one meaningful KPI we should have for AI? Now, I know I've argued that I don't want one KPI, and everything else.

Tony Fish: The way I’m coming at it…

Chris Parker: And just one thing for those who are maybe not in the corporate world: a KPI is a key performance indicator, which is a measure of success, for example. Yes.

Tony Fish: Yeah. How do you measure it? What's the one thing you personally would want to see as a success measure, as a KPI, for AI? And the one I've got is: it will help me ask better questions, rather than efficiently answer predetermined ones. If it can help me ask better questions, then to me that is back to where you started. That's a superpower.

Chris Parker: Yeah, that curiosity is your superpower. But I think it's part of humanity; humans are incredibly good at asking questions. I'm reflecting on my answer to that. Where my mind is going, and I think these are all imperfect and wrong, but where my mind immediately went, is that my measure of success for AI would be to help the next generation be better people than we are. "That's a lovely one. I love that. That's really good." That's where I went: there's an opportunity here for humanity to improve, if you will. I don't know, because humanity is pretty cool as it is right now, but, you know, maybe to do less stupid stuff, perhaps by asking better questions and learning faster.

Tony Fish: But this is why it's a nice question to ask, because I don't think there's a right or wrong answer. It's an individual answer, which allows you to unpack what you want, because if it could give you that one thing, and it does, then it's helped you on a path. So that's why I think it's a great question to debate, rather than one you have to come away agreeing on.

Chris Parker: Yeah, and it can be very individual as well. So, right before we close: Tony Fish, you already mentioned opengovernance.net, which is a Medium page where you're publishing a lot, and right at the top I see the book, Decision Making in Uncertain Times. We'll include all that in the show notes. Before we go, can you repeat that question one more time, as our way of saying goodbye, to trigger and hopefully inspire the people listening to take a moment, take a pause, reflect on it for themselves, and, when inspired, go share that moment and that dialogue with someone they care about?

Tony Fish: Yeah. So, the question to take away, when you sit there over the next cup of coffee: what is the one meaningful KPI, for you, that AI should be measured against?

Chris Parker: Great. Tony Fish, thank you for wrestling with what it means to play and exist in that space between AI and I.

Tony Fish: Thank you, Chris. Loved the conversation and the thinking.


Chris Parker: This is Chris Parker, and this is the AI and I show, and I'm having a conversation with Tony Fish, who is a serial entrepreneur, an investor, and a board adviser with 30 years of experience in, and I love this, navigating uncertainty. He has published six books; the most recent, Decision Making in Uncertain Times, I found so relevant to our conversation about what AI means to I in this crazy, uncertain time. He's known for helping leaders and boards surface those difficult, unasked questions, and I'm fascinated to see what kind of unasked questions we can surface in the time we have together. So, to kick us off, Tony: can you share why AI and I as a topic attracted you to engage in this debate? Why is this relevant and meaningful to you, as an I, in that AI and I equation?

Tony Fish: Thank you, Chris, and thank you for asking questions and opening a debate. At the beginning of all new technologies, and we often go back to the wonderful example of the Luddites and how they would break down the mills because they didn't understand them, the reality is every technology faces the same thing: as it moves ahead, there are people who try to stop it. I look at these things by saying we should equally value the progress, but we should equally value those who resist it, and we should master the ability to see both sides of their views. With AI and I right now, I don't think we've understood almost either side of the debate, and the nuances within the debates are so shallow that they become very opinionated. We see this politically: you're in one camp or the other; you can't sit in both camps and value them equally as we wrestle with it. And that's where I'm at. That's why I love what you're trying to do. It seems to be: let's wrestle with this stuff, let's get deep and dirty, let's get muddy, let's get a few bruises on ourselves, because that's the only way.

Chris Parker: And how would you describe those two camps? I think there's the tech-optimist Silicon Valley, you know, bro culture, maybe that's what we called it a couple of years ago. And then there's the dystopian "we're all going to die," it's going to be a disaster. How would you articulate that better, based on your observation?

Tony Fish: I think you were spot on. For one camp, you almost go back to the 1930s, with Flash Gordon and some of those wonderful images of how we'd all be flying around by today, and none of it came true. There were people who, even at that point, were still worried that if you went faster than a certain number of miles an hour, your brain would fall out. Fascinating pieces, and we've done it in history so many times: there are the camps of the optimists and the pessimists, effectively the utopian and the dystopian. And I suppose so much of the dystopian is now what we see in the media: Terminator, The Matrix, all of those massive movies. Actually, I think when people look back and see how early some of this was, even 2001: A Space Odyssey, "I'm sorry, Dave, I'm afraid I can't do that," they start asking how people that long ago had some of that thinking. We just enjoyed it, we didn't see it was going to be relevant, and then it almost caught up with us and bit us. They've become so cultural to us, and we refer to them so often, that I don't even know if we know whether they're utopian or dystopian views anymore.

Chris Parker: Well, I look at it maybe in a slightly different way: in our modern, certainly Western, culture, media, Hollywood, is a way of expressing and discovering cultural possibilities that we don't otherwise have an avenue for. So Skynet is presented in a certain way, maybe in the same way that comedy is a way of positioning a conversation around politics or things that are uncomfortable. It is a way for us to wrestle with it and maybe have conversations about it. I remember, whenever it was, the 90s, when The Matrix came out: my friend Brent and I were in Santa Clara watching it, and we had a conversation afterwards, like, okay, what just happened? What did that mean to you? So I think if it's triggering thoughts that can help us prepare our response in this time, then maybe it's helped. I don't know. But maybe it's also too limiting, because if we only have Terminator and The Matrix, it's bounded our view of the possible, because typically it's not a rosy picture. Hey, here's a movie about AI replacing all the mundane work and people becoming fully enlightened; I haven't seen that movie yet.

Tony Fish: I completely agree. So often you get into conversations with people whose views are informed purely by those films. They really haven't got any depth below that. Yet what they should be doing, to me, is asking: who wrote the script? How did they get that view, and when was that script written? Because those are the things that tell you people were starting to think about this far earlier than almost the rest of society catches up with.

Chris Parker: Yeah. I think if you ask those questions around Star Wars as well, you'll find an amazing cultural rabbit hole around that time. It's the story behind the story.

Tony Fish: Yeah. And the oddity about Star Wars is that I have a deep affinity with it. My next-door neighbor when I was growing up, when I was 13 years old, was the trumpeter on the signature theme tune. And because he was the trumpeter, he got tickets to the red carpet. So on the 4th of May 1980, I went up the red carpet to the opening of The Empire Strikes Back, and I saw the film when I was 13. All reports are, I don't remember this conversation, but all reports are that when I came away with my dad, I said, "Well, that's a rubbish film. It'll never catch on." The only thing we can conclude from that is that I shouldn't be a film critic.

Chris Parker: Can you imagine why you thought that? Was it just not for a 13-year-old at that time?

Tony Fish: I don't know. I don't think I saw the story. And actually, now it's been explained to me, I haven't worked it out myself, that that's the fifth part of the story, and you've got to watch this and this and this, and if you watch them in order, then the story hangs together. I just couldn't work out the story, and that was my problem.

Chris Parker: Well, yeah, because you're stepping in midstream. And mentioning story: the arc of Star Wars, and of most other myths, is the hero's journey. I'm wondering if we can overlay the hero's journey onto AI and I. Two years ago, AI had already been around for a while, but Gen AI broke through, and the opportunity to cross the threshold into this new world was presented to all of us. It's just been democratized, because you can literally just talk to it or type to it. You don't need Python knowledge or any other advanced skill; you can just go for it. And along that journey there are going to be betrayals, there are going to be people who help you along the way, there are going to be artifacts: the whole apparatus of the story. At the end, we're going to arrive at a new normal. So I'm wondering if we can unpack a little whether you have any suggestions or advice on what people can put in their mental backpack, if you will, as they go on that hero's journey. Is there anything they should not do, or really should do? How can they maintain themselves and not fall into "Skynet is the only definition of the future because Hollywood said so"?

Tony Fish: Yeah. Actually, for those who can get access to the BBC, it's worth going to the podcast section of the BBC website and looking for a guy called Rory Stewart. Rory follows certain themes, and one he's been following fairly recently is heroism. He's done about a seven- or eight-part series on what it meant to be a hero, all the way back to Greek philosophy and all the way up to Zelensky. But on the last episode he asks, "What happens when AI becomes the hero?" In that last part, he brings us all the way up to date with the superhero. Before, heroes went into war, and World War I and World War II changed that, because you couldn't stand up in the trenches and not die. He uses the Laurence Olivier example: you stand up and you basically die by shrapnel, because suddenly these mass-annihilation techniques had come to the fore. So the old hero vanished; you couldn't be that type of hero. Out of that came the whole ideology of the poster hero, and then, effectively, the superhero. He carries that forward to today, where the skills were superhuman, and suddenly we're looking at this thing which is superhuman. How are we going to heroize it, and what are we going to do with this thing that's beyond human capability? He leaves it as a question. I leave it as a question, but I think it's one we need to wrestle with. I think it's a great…

Chris Parker: Well, let's wrestle with it. And another view on that is that AI can make you superhuman.

Tony Fish: Yes.

Chris Parker: So, general AI, when it becomes more human, cognitive and interactive and maybe even physical, okay, we'll have another podcast in three years when that happens. But right now, what people have said is that these tools will give me and you superpowers, extensions of ourselves. Which can also be quite concerning, because with ultimate power comes ultimate responsibility. And maybe this is an even better way of couching the question: if you were to become a superhero yourself, what part of you would you want to maintain as truly human?

Tony Fish: So, the final chapter of my Decision Making in Uncertain Times book presents a framework called the Peak Paradox. The Peak Paradox is about how we manage the paradoxes we see in front of us, because that's how we deal with complexity. I draw out the idea that there are four major paradoxes we have to deal with on a daily basis. One is survival: have we got enough food and water to survive? The second is: how am I optimizing for myself? The third is: how do I optimize for the organization, or construct, I work for? And the last one is: how do I optimize for a better society? Now, people are drawn to one or another of these, and the paradox is introduced because you've got to try to balance all of them at the same time. You can't live at peak paradox, which is where you try to balance all of them simultaneously. Therefore, you have to find where you, as a human, are able to wrestle with the tensions, or live with the tensions and compromises, that a particular position creates. And that, to me, is what makes you completely, uniquely human, because only you can wrestle with those compromises and those tensions consciously. That's what really triggers me with this framework: okay, I am choosing the collective over me now, and therefore the consequence is X. The problem I see with lots of humanity is that we don't do that. We just end up in these often compromised positions, which is where we start to see mental breakdown. We see work situations flare up over very simple things, because people haven't realized they're actually compromising, or living under tensions they didn't realize they didn't want, because it doesn't feel natural to them.

Chris Parker: How can we apply this? Let's go down the vision of AI providing the general person with superpowers. Then they need to start making decisions on whether they wield those superpowers for any one of those four domains, if you will. What would that mean to someone? How would that come alive in their experience?

Tony Fish: Yeah. But then again, what is going to be your superpower? Is it going to be hope? Love? Being bold? Being courageous? Being curious? What do you judge as a superpower? I don't think we even have that level of debate. We go down to the pub and everyone wants to be faster, quicker, stronger; that's their one idea of what a superpower is. My one, I suppose, is curiosity. If you want a superpower, how do you become more curious? How do you never stop being curious? Because you can never understand everything, but it's the connectivity between things that is unique. AI can now learn everything and recall everything, but that doesn't mean it can connect random ideas to create something new. And we, as a human species, still do not fundamentally understand what consciousness is. There are 180 theories of what consciousness is, which sit within about 12 different ontologies, so there are 12 major schools of what consciousness is believed to be, and below each of those is a good 10 to 12 further sections. We have no idea what consciousness is. We have no idea where some of this stuff comes from. We have varying theories which, depending on which camp you're in, may or may not add up. So many people come to these decisions about AI, or this thinking about AI, without, I think, facing the reality of that lack of understanding. Hence the reason I always go back to: what is the question we didn't know we had to ask? If we're going to come to AI and debate AI, the first thing we should be doing is spending time thinking about the questions we should be asking, the ones we've made assumptions on.

Chris Parker: Like consciousness. Is that what you mean?

Tony Fish: That's one example. And, like Rory Stewart's thing, what does it mean to be a hero? Because that's fundamentally changed at least ten times over the last 5,000 years, it's just about to change again, and it's very culturally sensitive as well.

Chris Parker: Incredibly culturally sensitive. Coming from my American background, everyone wants to be a hero; from my adopted Dutch background, they don't even know what the word means. Of course they have heroes, usually football players, but not nearly as many. So, let me grab on to consciousness, still trying to navigate between this AI and I. Consciousness is something we are aware of, maybe not explicitly, every day, and the way consciousness expresses itself could be in the form of a superpower of curiosity, or heroism, or something like that. It feels to me like this is an essence of humanness that maybe AI will give us the time to explore more, in kind of an optimistic view.

Tony Fish: I think what you've hit on is actually something really important: if an AI implementation gives you more time to be curious, more time to find those qualities that resist optimization, efficiency, and effectiveness, then it effectively provides time for you to work out efficacy. What is right? And therefore to ask: is this right? How do I know it's right? Who decides if it's right? Which are all the things that efficiency and effectiveness want to drive out of the system: your ability to ask a question.

Chris Parker: Yeah. Coming at this from a different angle: operational excellence, taken to the nth degree, is death, because there's just no movement, no opportunity for creativity. And do you think, or have you seen, because our baby toe is only just dipped in the water with this Gen AI thing, any hints or clues that the automation, the effectiveness, the time savings are being used for the right things? And "right" is very subjective, I know…

Tony Fish: It’s incredibly subjective, so “I haven’t got a clue” is the base answer. The problem with data is that everybody can use data to justify the outcome they want to show. Lots of people quote Deming’s classic line that if you haven’t got data, you’ve only got an opinion. The only problem is, if you don’t understand who is supervising the data you’re now using, you don’t understand that your thinking has been supervised by somebody else. All you’re doing is representing the opinion of the person who gave you the data rather than having your own opinion. People don’t like that, because it fundamentally challenges them: sometimes they believe data is a truth or a right or a fact, and data is none of those things.

Chris Parker: Well, maybe they don’t like it because the conclusion is that it’s their responsibility. Yeah. To ask a question, or to take a decision informed by, well, you write about uncertainty. We all live in uncertainty, and there is no perfect decision context. There are always these tensions, based on your four paradoxes.

Tony Fish: I want to go back and answer your question, Chris, because I think it’s a good one. Where I come from is: what are the questions I should be asking? I’ve framed four questions at the moment. I don’t know if they’re good questions; I’m sure people will create better ones, and if they do, hey, drop them to you, Chris, and we’ll share them around. But these are the sorts of questions I’m trying to get people to ask. The first one: does this AI system expand or constrain the range of meaningful options available to humans? So we’re sitting in a board meeting and I’m asking: does the AI system we’re about to implement expand or constrain the range of meaningful options available to our customers? They don’t like that question, because if you’re in the world of optimization, the system is absolutely designed for only one thing, which is to constrain, lock people in, and exploit.

Chris Parker: Well, connecting that to a different space in strategic decision-making: I’ve heard, and I’m sure people have written on this, it isn’t my thinking, I have to point at Tim Ferriss, I guess, or somebody who was on his show, that strategically, if you don’t have a clear decision, then make the decision that opens up more decisions. When you just said that, what resonated with me is: is this providing a wider spectrum, more surface area, for playfulness and decision-making…

Tony Fish: Yes, perfect. So the next question I take to people, and you just did exactly the same thing, which I love: does this AI system increase or decrease our collective capacity for novelty, compassion, and love? I’ll read it again. Does it increase or decrease our collective capacity for novelty, compassion, and love?

Chris Parker: I love it that you named love.

Tony Fish: Yes. I wrote a book, Lead from Love with Rumi, my reflections on the 13th-century Sufi mystic, and because it is a business leadership book I wrestled with naming love. I love that you did, because, just as you experienced with me now, it forces you to stop and say, well, that’s a word. If people are going to reflect on whether this increases the capacity for love and empathy and compassion, then that is a thought they should have. So I fully support that question.

Chris Parker: Yes. Because we cannot disassociate humans from business. If we become completely dispassionate in our business, then the reality is you are not going to decide for the collective good, or the collective capacity for novelty, compassion, or love. Yet you leave the business, you go home, and what’s the one thing you want? You want to be liked and you want to be loved. Deeply human connections. I would even go further: the management-science peers and colleagues I have who are hardcore metrics people, run the machine, the people are disposable, I think that’s a short-term plan. Without human development and compassion you will not innovate, you will not find the next breakthrough, and your customers will feel that your people are tortured. It’s going to be short-term anyway. So yeah, I like it, because it points out that business growth, which of course financially we’re looking for, is dependent on human growth, and if you…

Tony Fish: The third question: does this AI system support local adaptation while maintaining beneficial connections? Because we don’t want to be isolated. How do we become part of the network?

Chris Parker: Can you unpack that for me? Because with my technical background, I’m going down APIs and my head is going off in a whole strange direction.

Tony Fish: Yes. So, does this AI system support local adaptation? Can it do the cultural-awareness pieces while maintaining the benefits of connectivity, effectively of globalization? Can you deliver something that’s unique to Chris in his local village or city, the benefit he needs in the instant, the personal context he’s got right now, which isn’t about personalization, because it’s for now, not forever? But can you also achieve the economies of scale we want, because we understand the economic benefits, without treating everybody as effectively the same thing? So does it balance the local issues against the benefits we understand come through from scale?

Chris Parker: This is based on your four paradoxes, basically: can you balance this, can you do this and this and this and this, as best you can…

Tony Fish: And actually you can’t manage them all. So where are you going to compromise, and where are you going to live with tension? Because then, when you come to interviewing, if you can articulate the tensions and compromises, the person sat in front of you goes, “I agree, I can live with that tension, I can live with that compromise,” and you’re going to build a team. If you can’t call them out, you’re going to bring in team members who basically become destructive, because they want to move the point where you have come to that balance. And this now starts to happen at scale. You end up with people moving out of organizations, even though they’ve got the right skill set, because they didn’t understand that the paradox is this: the organization has come to a conclusion about how it balances those tensions and compromises.

Chris Parker: I see. I come at it from a slightly different angle: when I’m doing strategy work, I really emphasize what we’re not going to do and what we’re not going to achieve. Yes. For me that’s actually the better conversation, because if you have ten people around a table and everyone puts their ambition for the quarter or the year on it and we say, “Yeah, we’re going to do that,” you’re not, you know, you’re just not. So have that conversation and say, “Okay, if we can do only one thing, which is the one thing we’re going to do? If you want to do some other stuff, fine, but what is the one thing we have to do?” But anyway, the fourth question…

Tony Fish: Fourth question: does this AI system help us recognize when our management systems themselves need reconstruction? We use management systems, reward systems, measurement systems to provide incentives plus remuneration plus bonuses, which is how we effectively keep people focused on the task. We say, this is the one task you’ve got to go and achieve. What we don’t then go back and question, six months on, is whether it’s still the right thing, because it is now baked in that you will get your bonus on our achieving that thing. So we start diverting resources away from the right thing we should now be doing, because the bonus, incentive, and measurement system is all geared towards the original thing. And the one thing that’s going to happen now, at speed…

Chris Parker: Speed, yeah, it’s speed. This is the dimension of speed you’re bringing in, because typically you would do this once every year or two with some sort of compensation review, which now, actually, we might determine is completely the wrong thing. The problem is that humans are very good at going, “No, I don’t want to change that, because I know my bonus is going to come through because I’ve done X, Y, and Zed.”

Tony Fish: Mhm. Whereas if we can say, “Look, you’re still going to get your bonus, but we’re going to do this rather than that,” suddenly we change the incentive way of thinking. Now, the issue is that humans have this paradigm-attachment problem: we form an identity from the role we’re performing, the level of seniority we’ve got, the organization we work for, and the model that organization has. If you start to challenge all of those things, how does the human keep an identity? You see this: a person wanders into a room and is introduced as the CEO of a bank, and everybody goes, “Oh, that’s fantastic.” What you don’t realize is that their identity is no longer their name or anything else; they are now the chief exec of the bank. So if you challenge the banking model and say it’s going to disappear, the individual becomes incredibly defensive about their measurement systems, how they’re rewarded, everything else, because they’re paradigm-attached. They are identified with the model and the branding. This is what we’re starting to see: many of the human problems inside these organizations come from humans being attached to certain ways of working, certain branding. We’ve gone through this whole era of your brand, your identity, who you are and how you identify yourself, and suddenly we’re finding that’s a real problem, because it’s preventing us from saying we need to change faster if we’re going to be in this new world. So I would ask: who are you? Sorry, just to finish off, Chris: this is why this podcast you’re doing is so interesting. AI and I — you’ve nailed it in the question you’re asking.

Chris Parker: There’s so much in that last question. It’s really an organizational-dynamics and organizational-design question, and I don’t think people were great at designing those systems to begin with. That’s why Hay points existed, and all these artificial mechanisms by which we tried to scientifically make everything fair, and maybe remove all the tension from decisions about the pay, bonuses, packages, and titles of individual people. I think we’re going to have to release some of that preciousness, and accept that equal doesn’t always mean identical: we can be very fair without everyone always getting the same, because we’re shifting and changing. On a quarterly basis, we’re going to have to adapt to that. I think that requires an incredible sense of who you are. People are going to have to be very in touch with themselves, very solid beyond their title of CEO of the bank, and through that very trusting, meaning: I’m going to be okay even if I don’t completely understand how the pay scale is going to work over the next couple of months.

Tony Fish: Yes, because we’re creating things, we’re creating value, and I am confident that this will return to me.

Chris Parker: Well, that’s the really interesting part of this, because…

Tony Fish: And this is why the measurement systems become really interesting, because you can start to understand, one, what value is to you and the organization, and two, you can start to reward based on value. At the moment we have a hierarchy, and pay goes up as you go up the hierarchy. Very simple, but brutally rubbish, a completely rubbish system, because some of the largest value creators are right down at the bottom. What the new systems allow you to do is change the whole communication system. There’s no need for hierarchy, because hierarchy was about communication, and AI makes communication flat and instant. So all of that vanishes, and you can have what you need. Secondly, it can help identify where value is created. That adds another layer: you can now start to reward based on value, not on seniority, position, job title, budgets, all of that stuff.

Chris Parker: It’s a shift in power dynamics. The way I describe it to managers: in the past, with information scarcity, decision-making sat at the top of the pyramid: “I know the answer and everyone else needs to do the work.” I think what AI is going to do is flip that whole paradigm, that actually the answers are…

Tony Fish: Oh, I love this, it connects straight to it. The answers are available, and the value is in whether you know the right question to ask. Yeah.

Chris Parker: So the people who know the right question to ask are the ones who actually have the problem. Of course there are going to be some strategic problems, and it’s going to be a bit vague, but the ability to act within context at that moment, even though it’s uncertain, the VUCA world and all that, the volatile, uncertain, complex, ambiguous world we live in, means the front line will be more and more empowered to make…

Tony Fish: On that: VUCA actually comes from the military, and from a lady called Gillian Stamp, who was doing some work for the military, and it was designed for situations. A soldier is standing at a door and is about to kick the door down. How do we train them? We know the situation is volatile. We know it’s uncertain, because they don’t know what’s on the other side of the door. We know it’s complex, because of the multiple things going on. And we know it’s ambiguous in terms of what the outcome is going to be. So how do we train the soldier so that when they kick the door down, as many things as could possibly have been trained for have been trained for? What the business world has done is convert that piece of thinking into something without really understanding what it was for. They use it as a way of describing complexity, as opposed to the way the soldiers were trained: so that no matter what happens, they know how to deal with the situation. Of course, in business we’re never trained to that level, and at that point the term is basically inappropriate. But it is a nice buzzword to kick around to express the thought that things are very, very unknown.

Chris Parker: I think what business also does, and this is where AI can help: take a customer service agent rather than a soldier kicking down a door. Picking up the phone can also be nerve-wracking, but AI can handle, say, 97% of the mundane in that case, because it’s not VUCA, to use the word again. In fact, there will be only a few times when human intervention, for whatever reason, is appropriate. Otherwise, the permutation of that idea is: what you did yesterday I can do again tomorrow, and everything I did yesterday will scale tomorrow. That requires that the processes and methodologies you put in are infinitely scalable and don’t change, and I think that’s fundamentally…

Tony Fish: I violently disagree with that, because it’s got to be adaptive.

Chris Parker: Well, meaning the systems I have set up with self-learning AI: you can program them to take this morning’s responses and automatically update, call it, the SOP. You should probably have a human in the loop there, but they will become more and more adaptable, I feel. So, sorry, that was my point: this is the difference between automation…

Tony Fish: Yeah, okay. Where we’re automated, back to the Adam Smith ideology of the division of labour: you don’t need one person who can make the whole pin, you need one person who can do one particular bespoke function, and we can automate that. I totally understand that, because we’re still manufacturing pins several hundred years later in the same way, so those levels of automation really work. But that’s not what humans do, or want to receive, when they’re not in that mode. So I just think we need to separate automation ideologies from processes and methods, which are actually a completely different scaling problem.

Chris Parker: Love it. I think every segment of this conversation we could unpack and go down a rabbit hole of infinite length. Tony, what I’d like to wrap up with: you presented four questions, predominantly in the context of a business conversation, and I think they’re super relevant. I’m busy creating an AI adoption and strategy playbook with a bunch of other experts, and I’m definitely going to steal these.

Tony Fish: Yeah, do. And improve them, please. Please.

Chris Parker: And to build in those reflective checks: is it wise, are we balancing the constraints, is this good? These are the little tensions I want to build into these systems, so people don’t just go, “Yeah, just go for it.” Although sometimes you do just need to go for it, get your black eye, come back, and redesign. That’s fine. But what would be the question we’re inviting people to reflect on when they stop listening, for themselves, but ideally at dinner tonight among friends and family: to share what they heard here and say, “You know what, I want to ask us all this question and have this conversation”? What do you think the most beautiful, loving question could be, to trigger this conversation right now?

Tony Fish: So, I try to write about four questions a week, which I publish on opengovernance.net. The reality is, if I gave one question today it would be out of date by tomorrow, because I’d want to create another one. So I’m not going to do that. But I think there’s one useful question, which is great to have over a glass of wine, a beer, water, at the barbecue, over a meal, driving in the car, chatting to somebody, whatever situation you’re in. I’ll answer my own question too, so that helps people, but I think it’s a question worth asking: what is the one meaningful KPI we should have for AI?

Chris Parker: What is the one meaningful KPI we should have for AI. Now, I know I’ve argued that I don’t want just one KPI, and everything else.

Tony Fish: The way I’m coming at it…

Chris Parker: And just one thing for those who are maybe not in the corporate world: a KPI is a key performance indicator, a measure of success, for example. Yes.

Tony Fish: Yeah. How do you measure it? What’s the one thing you personally would want to see as a success measure for AI? The one I’ve got is: it will help me ask better questions, rather than efficiently answer predetermined ones. I’ll say it again: it will help me ask better questions, rather than just showing how efficiently it can answer the predetermined ones. Because if it can help me ask better questions, to me that is back to where you started. That’s a superpower.

Chris Parker: Yeah. That curiosity is your superpower. I think it’s part of humanity; humans are incredibly good at asking questions. I’m reflecting on my own answer to that. Where my mind immediately went, and I think these are all imperfect and wrong, is that my measure of success for AI would be to help the next generation be better people than we are. That’s a lovely one. I love that. That’s really good. That’s where I went: there’s an opportunity here for humanity to improve, if you will. I don’t know, because humanity is pretty cool as it is right now, but maybe to do less stupid stuff, perhaps by asking better questions and learning faster.

Tony Fish: And this is why it’s a nice question to ask: I don’t think there’s a right or wrong answer. It’s an individual answer that lets you unpack what you want, because if AI could give you that one thing, and it does, then it has helped you on a path. So yeah, that’s why I think it’s a great question to have as a debate, rather than one you have to come away agreeing on.

Chris Parker: Yeah, and it can be very individual as well. So, right before we close: Tony Fish, you already mentioned opengovernance.net, which is a Medium page where you publish a lot. Right at the top there I also see the book, Decision Making in Uncertain Times. We’ll include all that in the show notes. Before we go, can you repeat that question one more time, as our way of saying goodbye, to trigger and hopefully inspire the people listening to take a pause, reflect on it for themselves, and, when inspired, go share that moment and that dialogue with someone they care about?

Tony Fish: Yeah. So take away a question for when you sit there over your next cup of coffee: what is the one meaningful KPI, for you, that AI should be measured against?

Chris Parker: Great. Tony Fish, thank you for wrestling with what it means to play and exist in that space between AI and I.

Tony Fish: Thank you Chris. Love the conversation and thinking.