Futurist Gerd Leonhard has spent decades exploring how exponential technologies will reshape our world. But in the age of AI, he sees a critical fork in the road. In this conversation, Leonhard challenges the Silicon Valley myth that machines can make us better, arguing instead that AI’s rise demands we consciously choose what remains human. He warns that while AI excels at logic, data, and efficiency, the real work of building a good future lies in collaboration, governance, and preserving human agency.
Leonhard discusses how AI is already changing his own work as a futurist, why automation is often an illusion, and why trust cannot be digitized. He critiques the business model of replacing humanity and calls for a new economic framework built on people, planet, purpose, prosperity, and peace. This isn’t a conversation about predicting the future. It’s about deciding which future we want and having the will to create it. For leaders navigating AI adoption, Leonhard offers a framework he calls CAVA, emphasizing augmentation over automation, and urges organizations to consciously decide what should never be handed to machines. The message is clear: we have all the technological cards. What we lack is the collective will to build the future we deserve.
Gerd Leonhard is one of the world’s leading futurists, widely recognized for his work on the intersections of humanity, technology, and society. He is the author of Technology vs. Humanity: The Coming Clash Between Man and Machine, a book that has become essential reading for conversations about the ethical and human implications of exponential technologies. For more than two decades, Gerd has advised business leaders, policymakers, and global organizations on how to navigate the accelerating future through his company The Futures Agency.
A former musician and digital media entrepreneur, Gerd brings both creative and strategic perspectives to his futurist work. His insights reach broad audiences through books, films, keynotes, and his platform GerdTube. He emphasizes that the role of the futurist is not to predict but to provoke reflection and ask what kind of world we want to create. Learn more at futuristgerd.com and gerdtube.com.
Key Topics & Timestamps
Memorable Quotes
The Ebullient Growth Agency helps leaders and teams bring AI to life in their organizations. Work directly with Chris to design strategies, workshops, and transformation programs that unlock growth and innovation with humanity at the center.
The Enablers Network partners with organizations around the world to navigate leadership, culture, and change in the age of AI. Together, we help leaders and teams adopt new technologies while staying deeply human and purpose-driven. Explore more at TheEnablersNetwork.com.
The GenAI Circle is a private network for practitioners, creators, and builders shaping the frontier of generative AI. Join peers who are experimenting, sharing, and co-creating the future of work, creativity, and intelligence.
The AI Collective brings people together for in-person and virtual meetups around the world. Connect with others exploring AI’s real-world impact in your city or industry.
The AI&I Show is made possible through the creative collaboration of Ahmed Mohsen, whose production and storytelling bring these conversations to life. Reach out to Ahmed if your SaaS business needs a product marketing boost like he provided to The AI&I Show!
Chris Parker: Today I’m talking with Gerd Leonhard, who’s coming in from Barcelona, but he’s typically in Switzerland and also the Canary Islands. He’s one of the world’s, I believe, most respected futurists, and he’s been talking and thinking for decades about the intersection between humanity, technology, and the future. So when framing my thinking about AI and I, I’ve been very influenced by Gerd. He wrote, I think back around 2017, Technology vs. Humanity, the book with the subtitle The Coming Clash Between Man and Machine, and I’m curious if we’ll play around with that a little bit. In those decades of work, he has worked with some of the largest organizations in the world wrestling with this. For me, he’s not just a futurist in the sense of forecasting; he’s inviting us to reflect. His movie, I think it was The Good Future, is essentially an invitation to reflect on a few dimensions of that, and maybe we’ll get into those, things like decarbonization and digitalization. But today I’m hoping to go really personal and wrestle with this dimension between AI, representing all technology, and I, me, and Gerd, you, and what this could mean. So here’s what I’d like to kick off with. You’ve been working on this for decades, and now OpenAI and ChatGPT have commoditized it, so everyone is confronted with it: what has surprised you the most in your own life about the impact in the last two years?
Gerd Leonhard: Yeah, you know, I’ve been looking at AI for over 10 years, and in fact my book Technology vs. Humanity is kind of about AI without mentioning it a lot, you know, the struggle between humans and machines, right? The convergence, and that was quite clear. What has surprised me is that there was kind of a Sputnik moment, you know, like the Russian satellite launch that triggered an arms race, in launching AI and getting it out there. And now we truly have an arms race, no longer with Russia really, but China and the US. So what has surprised me is the power and the money that has gone behind it in a very short time, as if this was, well, it is kind of the new internet. And you can safely say that the magnitude and the speed and the money behind it, and American politics now as well, have led to this huge acceleration, bringing with it a bunch of really amazing usefulness for my own work, but also significant competition in forecasting. Since ChatGPT launched, my job has changed, and I would say fewer people are asking me to forecast, because they think the machine can forecast, which it kind of can. So that has really changed my work. I’ve also been surprised at how little resistance there is to this idea of a machine doing our thinking, because it boils down to the same thing: humans are naturally lazy, we like tools that do the work for us, and we don’t collaborate very well. Paired with American politics, which has been primarily about doing whatever it takes to dominate, that has also surprised me as another kicker to this huge AGI discussion.
Chris Parker: If we take your work in forecasting, and I hadn’t thought of it from that angle, but in my role as a tech leader we rely on Gartner and Forrester and their advice around technology and where things are going. So, not typically as far into the future as you go. But indeed I’ve heard that the whole business model of Gartner and Forrester is in question, because people can go to GPT and get an analysis of which AI widget is best. If someone asked you what the benefit is, or what’s better: is an AI futurist forecast better than what a human could do, or is it just different? How do you answer that?
Gerd Leonhard: Well, I think first, of course, there are many companies in the field of consulting and advisory work, like KPMG for example, seriously struggling because of AI. And that is because a lot of that work is based on facts and numbers and projections. AI can do that work: we can take numbers from the last 40 years and project 10 years into the future. That is not as difficult as it seems, even in complex scenarios that humans have struggled with. The confluence of exponential technology, synthetic biology, nanotechnology, AI, quantum computing: that’s complicated for humans, not so much for AI. So AI is very good at complex scenarios based on data. But here’s the thing about forecasting. Forecasting is only very partly based on data. As Steve Jobs said of his own work, it’s the confluence of art and technology. It’s a design question, right? It’s a question of storytelling. I look at my work, and I’ve done so for at least 10 years, as primarily about storytelling, not about gathering facts and data. And now we’re quite simply, completely beaten by AI as far as logic and simple knowledge go, you know, explicit knowledge. And it’s not just the stuff on the internet; every book written that’s known to man can now be looked up, and very soon it’s going to be in all the languages of the world. On top of that, you can upload your own information, your own database. So if a company wants to know about the future, they can upload their own model with their own data, inside their own firewall, and they can query that. Right? So now essentially every possible variation of looking at data and analytics has been covered, and this is only the beginning, because now we’re going to see all kinds of things like synthetic data.
So basically what that means is that the hard part, the data, information, and logic part, is being taken over by AI. That doesn’t mean we won’t have it, because we do need it. But the main thing about forecasting and foresight, and I don’t call that predicting the future but observing the future, is that it’s about stories. And this is the thing we have to understand about humans: humans are about narratives. What matters more for your life is how you feel, how you look at things, what the story is, what you have experienced, your goosebumps, your feelings, your connections. That’s what matters to us, not logic and numbers. Now, the reality is we use those as an excuse. We take the stuff from Gartner and all the other analysts and say, oh yeah, that’s what I thought anyway, and now I can make my plan, right? So I think my work is much more about narrative and storytelling and touching people on a different level than the sort of intellectual computing part, which AI is doing really well, and that’s where the competition comes in. If you just want to know the future of Switzerland based on data, you can drill endlessly and it’ll be quite good.
Chris Parker: Yeah, the art of storytelling. It reminds me of something I did last year. One of the reasons for this podcast and this discovery is also my boys. They’re 15 and 16, both with dyslexia, so sometimes ingesting the data can be a challenge, particularly in history, which involves a lot of reading and memorizing. We’re in the Netherlands, so of course it’s in Dutch, but they have these really densely packed, very dry history books, where every paragraph has two facts, and those facts need to be committed to rote memory so they can be regurgitated on a test. Okay, we could talk about the whole education system, but what I did is I scanned in these books, isolated the individual facts, and then recreated them with ChatGPT as a narrative story: Jack and Jill go back in history and interact with it. I think it worked a little bit better, because what I’m trying to do is figure out how this can help them achieve their goal in a way that doesn’t just give them the answer. And I guess that’s why we’re kind of playing around.
Gerd Leonhard: And that’s a really important point here. I have two kids, you know, they’re 30 and 35, so a little bit different age, and they knew the world before the internet, which is interesting. But the way I look at it, there are certain things that humans, the way we’re set up, need, that may have no other reason apart from the fact that we need them. Like handwriting. Do we really need handwriting? Well, I’m a terrible handwriter, but every brain researcher and analyst and psychotherapist will tell you that if we don’t learn how to handwrite, our brain is not going to develop in the same way. And if we don’t learn how to speak other languages, our brain will develop in a different way. All these things come down to the fact that what we are and what technology is are just hugely different in how they work. This is why I sometimes call AI alien intelligence; I think the term was coined primarily by Yoshua Bengio and Yann LeCun. It exists outside of our own intelligence, out there like an alien, and it’s not very related to our own intelligence, because we need other things for our intelligence to work: we have a body, and all those things. This is why I think it’s really important for kids, and for anybody, to realize we have our own setup that needs nurturing, and that includes downtime, digestion time, effort, all that old-fashioned stuff that OpenAI wants to get rid of. It’s like, why bother, why make an effort, when we can just ask our assistant? Or Zuckerberg says we should use his AI assistant. So that is a tough one, I think.
Chris Parker: I’m wondering, and it’s a rhetorical, maybe stupid question: is there a risk that paper maps, like the book of maps we’ve had in our cars in the past, are now just gone because of Google Maps? Could writing, the need to write, be gone in 10 years? Is that a possible future?
Gerd Leonhard: Well, I always say that as long as we’re human, as long as we remain mostly human, it probably will not. Even paper maps. To me, for example, for a lot of people going to a strange city to investigate and be a tourist and look around, unfolding this big map on your dining room table, looking left and right, saying, “Oh, look at this over there, over here,” that is how our brain works, right? And so I buy a travel guide with the foldout map. Yes, of course, I’m 64, but still, I think even if you’re 25, you’d like to have the map on the table, though you may not want to spend the €10, because you can do that on the screen. But it just isn’t the same. So I think we have to remember, as long as we remain human in the sense of what we need, and if we acknowledge what we need and why we need it, why it’s necessary to know how to handwrite, or how to read for that matter, all the things humans have taken for granted for a long time, we shouldn’t do away with them just because they’re, let’s say, complicated or ephemeral or whatever. And the same goes for spirituality and consciousness and all these things. Because a lot of tech companies and tech players know this, but their business model is to do away with it. I always say, and this may be inadvertent, that the biggest business model today is to replace humanity, to do away with it.
Chris Parker: Can you unpack that a bit more? What does that mean?
Gerd Leonhard: Well, basically, if I take away all the things that make us human, the complicated stuff, the things that computers can’t do, then I have the perfect mold for an AI-based world, right? Because then I can do away with all of it. For example, I don’t do an interview with a prospective employee; I put them in front of an AI camera like HireVue or something, and then HireVue says, oh, this guy’s lying, or he may be a criminal. It analyzes, it does face recognition, right? So I can be lazy and just kick back in my office and do 50 interviews. I think there’s an utter stupidity in that. What happens to everything else that is important when I hire somebody? Do I really trust it? I definitely do not trust the machine to do that for me. And I certainly wouldn’t trust it in criminal justice, to scan somebody’s time in a prison cell with a video camera and then decide whether he’s going to offend again. That may be a way of getting more data, but it’s instantly dehumanizing. I think any company that scans your face with an AI camera is dehumanizing. Technology is now pushing us towards this place of saying that the more of these painstaking human moments we can remove, the easier the world becomes. And that leads to a dehumanization that can make a boatload of money.
Chris Parker: you know, like uh you know, I’m uh I’m working Thursday again with a team of other experts around an AI adoption playbook. And I know the word playbook for me is triggering because who needs another playbook? What we’re doing is is trying to figure out a recipe for companies to adopt AI that is very well balanced kind of like a checklist or something like that that um that wrestles with some of these things. So um one of the steps that that we’ve definitely built in is is to consciously decide what not to automate what what what is not AI. You know, you know, if you were to talk to an organization and they were saying, you know, they asked you, “Hey, hey, hey, Garrett, what how could we think about what we should not give to the machine?” Like, like what is what is a mental framework do you think that that we could apply to that?
Gerd Leonhard: Yeah, you know, I do this all the time as well. I mean, the majority of my work now is helping companies figure this out. I always say that basically, if we’re looking at AI, we should keep our eyes on the things that are achievable and humanly sustainable. And I call that CAVA, like the Spanish champagne, right? The C stands for cognification: making sure I have better data and can read it better, making things smart, basically smart software. A is for augmentation: I can translate, I can load up PDF documents, I can understand more quickly, I can make a podcast, I can make a funny video. That’s augmenting. V is for virtualization: I can get a digital twin, take a look at what’s happening there, get information and say, oh, it looks like that’s going to break down next week, like it’s been done in factories for a long time, right? The last one, and deliberately the last one, in CAVA is automation. Okay? And people are obsessed with automation. I think that’s utterly wrong, because it is actually very difficult to automate. And it shouldn’t be our top goal, because this is the CFO’s goal, right? To say, if we can automate this work, we don’t hire new people, or we fire existing people, or this one person can do the work of five people. And this is of course another illusion. Automation is not such a great goal.
It’s like saying your goal is efficiency, productivity, automation. That’s a financial goal, not a business goal. Of course you want to be more efficient. But take automation: self-driving cars are the best example. In principle it works, it can be done; in reality it’s very difficult, right? So if you’re looking to automate things in your company, I think partial automation is usually the solution. If you’re a legal firm, you can automate non-disclosure agreements, maybe. So that’s my approach.
Chris Parker: Let me see if I can personalize this. Yesterday I was called by someone in my network whom I’ve worked with before and very deeply respect. This person is in an executive role at a pretty large European company, and his question to me was whether I could help them come up with an automation strategy, or at least an AI adoption strategy, so that they can reduce headcount by 10%. I’m wrestling with whether I want to take this assignment if the goal is manpower reduction. But what I just heard from you, and this is why I want to dig into it, and I tend to believe you: if full-scale automation is very hard now, and maybe it’ll get easier in 5 or 10 years, who knows, but certainly it’s hard now, is it a legitimate threat that people get automated out of jobs? Or would it be the first parts of CAVA, which would be more about changing the jobs? How do you think this is going to impact people, maybe me?
Gerd Leonhard: Well, I think the primary discussion is about augmentation versus automation, right? Automation will obviously be easy if your job is 90% routine, like a call center. We have 20 million people working in call centers; they’ll be automated. That’s the message, right? Not a good message for them, but maybe there’ll be other things they’ll be doing on top of the AI. It’s like farmers: we don’t have farmers working the land by hand anymore, but we still have farmers. So there’s bad news there, but also good news, because it generates new work, it makes new things possible, it makes things more efficient for the customer, and so on. But generally speaking, automation is very difficult with anything that requires judgment, any kind of understanding, because we have to comprehend that AI does not truly understand the world. It reads the world, right? It is not a friend. It’s a service, a platform, and it does not care. It does not have any of those qualities we project into it because it seems so human. So for a lot of companies that come to me with this goal, I say I’m not interested, because this is basically a mousetrap. You’re building an AI mousetrap, either to remove your employees or to make more money from your customers. The customers will hate you, as has been evidenced by the recent discussion of airlines using AI for pricing, right?
Customers will hate you, like with Duolingo. And yes, it can be really good if it works within reason to help the customer. But either the customers will hate you, or, on the flip side, the employees will teach the AI, and then you’re going to say, well, the AI can do maybe 50%, and the rest of it we don’t care about, let’s just do away with the quality part. If you do that, your company is doomed. And yes, the technology will get much better in 10 years, so this is a temporary reprieve. Generally, I think AI brings the end of routine, any routine, whether it’s as a dental hygienist or anything else. And then the question is, what routine do we want to retain? Like, I’d go to the barber shop if I had a beard, and I’d spend a lot of money on that. We want to retain that, for other reasons. So these are complicated questions about employment. I think in 20 years we may indeed be at the moment where work is no longer a substantial part of our lives, because AI does most of it.
Chris Parker: And a lot of people, myself included, sort of define their meaning around their work. I spend so much time on my work, and okay, I might have a title, I might have a car, I have this stature in the community. When all that stuff is shifting, how do you advise that we stay human while our external reasons for being are changing so much? How can we remain calm and on point and positive, adopt this mindset of the good future, keep it in mind and keep working towards it, while there’s a whole lot of change coming?
Gerd Leonhard: Yes. Well, of course, this is not a question of technology or science, because the technology is morally neutral. It can do either one, right? It can be heaven or it can be hell. I think the potential for hell is pretty big here, given that it’s moving very quickly and nobody cares about the side effects. And this is the important part: if we want this to be the good future, then we have to use all the amazing tech, and this is not just AI but quantum, nuclear fusion, nanotechnology, and embed it into a new economic circumstance. Because you will only be happy and satisfied and find self-realization if your economic issues are taken care of to some degree, so that you don’t have to feel like, oh, I can’t find work, but I’m really great at making pictures or writing books, which doesn’t pay, so now I’m in trouble; I’d like to have kids but I can’t because I don’t have the money, and so on. This brings up the point that work is only one embodiment of self-realization, though especially for men it has been the embodiment of self-realization. That needs to change, and it is already changing. You can see a lot of millennials saying, I don’t want to work 10 or 12 hours a day like this. I think we’ll very quickly, within 10 years, get to 5 hours per day for the same money. And the money question is a policy question, right? If your company is greedy, let’s say a telecom company replaces 50% of its network-maintenance staff with AI, which you can do, then they will just keep the money and distribute it to their shareholders, or more likely to themselves, right? If that’s the policy, then it’s not going to work. We need a social policy and a political policy based on what I call the five Ps: people, planet, purpose, prosperity, peace. So we’re moving into what Al Gore called, 20 years ago, sustainable capitalism.
Some people have called it a Star Trek society. In many ways, you could say that is the mission, right? The mission cannot be to replace everything with technology while we suffer from anxiety or sit around in basically basic-income camps, subdued, watching Netflix. I think we need to think a little bit further about the design of this society and those five Ps.
Chris Parker: I’m wondering if I can personalize it a bit. As I’m walking through life, advising companies on some of this as well, more in the now, I’m not dealing so much in the future, people are asking, okay, what can I do now? Are there principles that you would share, or have perhaps already written about, that say: as you are developing these policies for your company, it could be a society as well, but let’s just say a company, these are some principles that would nudge us towards, or direct us straight towards, a good future based on those five Ps? Is there a ten commandments, or something we could keep in mind as we go through life?
Gerd Leonhard: Yeah. You know, on a personal note, when I started doing this 25 years ago, when I came out of the internet business, my primary interest was to grow my own business: profit and growth and doing well. But when I achieved at least part of that, and I never went public with my companies or anything, so I didn’t get really rich, but I’m well off doing this, I realized that at a certain point you can say your mission of profit and growth has been somewhat accomplished, and you’re not going to 100x doing this, or anything else really. So a larger objective came back into view, and that is: how can I help people? How can I help the planet recover? How can I create purpose, for example for the millennials, for my kids and people of that generation? And how do I think bigger, to create a system that makes more sense than just money? Because here’s the thing about technology. Technology can solve many problems, but it does not solve social, cultural, political, or religious issues; it makes them worse, right? And the same is true with money. Money is helpful, but at a certain point it makes things worse. Beyond a certain income, you’re not going to be any happier, right? So that’s kind of the real equation.
Chris Parker: What do you mean by “it makes it worse”? If we’re on the path to a Star Trek society, which I hope we are, I would love that, how does technology inherently make it worse?
Gerd Leonhard: Well, technology is about efficiency, right? It makes everything efficient. Not all technology is about efficiency, but a lot of it is. So things become more efficient: social media has made it efficient for people to share their opinion, but at the same time it has made very efficient surveillance, very efficient [ __ ] machines and polluting effects and manipulation engines. So you can say yes, all these things are good. We could have nuclear fusion, okay, that would be super-efficient energy, but it would also create a world where the use of energy would no longer matter, where you could do anything regardless of how much you spend, and that has other side effects. The thing is, we tend to focus on the solutions and leave the side effects to other people: we privatize the benefit and socialize the negative. That has been our thinking for a long time. So what we need to do in this environment is find a way to say, okay, we can solve all these practical problems, and then we solve our human problems, our cultural, political, economic problems. We have to get away from a very old system. The stock market is 20 years behind in its thinking: today, if you do really bad things, for example Aramco or Meta, you can make a lot of money, and that is inherently a bad thing. So when I advise companies, I say, look, if you make these five Ps the cornerstone of your operation, then to pay a dividend you have to tick all five boxes. You can’t have a 10 in all five boxes, so maybe you score a five with people, a five with planet, a 10 with profit. That’s already quite good, right? But you can’t just say, well, I’m going to take a 10 with profit, like, what’s that company called, the defense company, Halliburton or something, right?
Or Peter Thiel’s company, you know. You tick the money box and all the other boxes are unticked, and you get a nice dividend. That is not going to be good for society. So this is what we have to pursue. I think when we talk about AI in companies, one of the cardinal rules is that we have to keep the human in the loop no matter what we do, even if it’s less efficient, even if it costs more money, because that is how we create real value, personal value, not commodity value. And this is why companies like Meta are struggling: they’re putting not the human in the loop but the tech in the loop.
Chris Parker: Yeah, and people feel it, and I think they’re responding to that human in the loop. Let me grab that and take it to an extreme. One of the things I believe is that because everything can be copied, faked, and deep-faked, we will trust anything on the internet less and less, and some sort of trust certification or trust mark will come in. I’m certain that a high percentage of what I see on Instagram, when I’m snacking over there sometimes, is just fake. And it makes me want to seek out more human-in-the-loop moments. It’s like, okay, this is just fake, it’s entertainment, but I don’t trust it; my inner compass doesn’t like it. And it comes down to the point of trust. I believe trust will be a significant measure of value: how much can I trust this person, or this company, perhaps based on these five Ps, to be authentically caring? How do you see the interaction of people over time based on trust?
Gerd Leonhard: Yeah, you know, trust isn't digital, right? I mean, you can destroy trust digitally, but trust is a feeling. It's an understanding between people. It's something that we build. It can be regained, it can be repaired, and so on. It is inherently human; human agency is part of trust. And it comes from many unknown sources sometimes. Currently, technology can only be trusted to some degree. You trust the car to work or the airplane to work, after much proof, and you still have a bit of mistrust left: it could crash, that sort of thing. But to trust things with existential components, like trusting AI with nuclear weapons and weapons systems, would be just utterly foolish, right? Partial trust, you know: I trust IBM to do a good job with my CRM or ERP system, within reason. I do not trust OpenAI, period, because I think they're on a mission to essentially build digital humans, and that is untrustworthy by definition. And so in my own life, I stopped using companies, I stopped investing in companies that I don't trust, that I can't trust. I stopped working with companies like Facebook and Meta because I don't trust what they do. And a lot of that, again, is not necessarily that people are evil. It's that they inadvertently do evil things because of the money: you're going to make 500 billion, or you're going to make five trillion? Well, you'd rather make five trillion, right? Just by taking a tiny shortcut that has bad effects on humanity. Yeah, I understand why, but it's still not good. I don't want to be part of it.
Chris Parker: Well, another dimension of this, and again, I don't know if it's true because I read it on the internet, is, let me paraphrase, the prime minister of Albania, who stated that his society has some serious corruption issues, is recommending AI-driven government departments and maybe even an AI-driven prime minister. And it was fascinating for me because the conversation was couched in: because we are corrupt, because the humanness here is corrupt, that would be better for us. And that kind of broke my brain. Is that the way to fix this? I guess it could be, but it's like, okay, I don't trust myself, so I'm going to give this thing the keys to my existential reality, as you said. I don't know what to do with that.
Gerd Leonhard: Yeah. There's one assumption that we see in many places: that humans are evil and cannot be trusted. That has been an assumption for a long time, and all of the Hollywood, Netflix, Nollywood movies about the future are about that, right? That we're basically evil. And the fact is that we're not. All you have to do is read Rutger Bregman's book, Humankind, and a thousand other books on this topic. We can be evil, of course; there are evil people, as we know too well, without naming them at this moment. But generally we can collaborate. We have collaborated when we have a reason, right? And we're actually not as bad as it seems at getting things done when the [ __ ] hits the fan, so to speak. And I think this underlying belief that we can use a machine to make us good is a myth that is pure Silicon Valley, right? It's like, yeah, this AI can do it better because it's not human. It's like, what? It's like saying I can give birth with a birthing machine, so as not to inconvenience a woman, right? This is a real idea, called ectogenesis. When you hear these ideas, you can only say, okay, this is straight from the playbook of Blade Runner, right? And to me that is dehumanizing. It's an insult to our humanity. And I think transhumanism follows the same path, in this direction where it's like, yeah, of course I want to live longer. Do I want to live forever? No, you know, maybe not.
So I think these are really important issues where we need to say, okay, we're going to take it until here, and after here there is a gray zone where we'd rather not go. That is what, for example, the European AI Act stipulates: two risk levels that are okay, a third one maybe not, a fourth one definitely not, right? And we're going to have to agree on what that is for AI, so that we can take the benefits into account. It's interesting how much criticism the EU gets for that, because it's supposedly non-competitive and things like that, but I think it's quite wise.
Chris Parker: So, I was at a social activity over the weekend and we were talking about AI, and a lady mentioned the Matrix movies. She said, "Oh, I thought that story was horrible. Why were they pulling people out of these pods and bringing them into these dirty spaceships? I want to just be left in the pod. They were clearly happy in the pod. What was wrong with the pod?" I was like, "Wow, I never thought of it that way. Maybe that will be a choice for us in the future."
Gerd Leonhard: Well, you know, there are some people who prefer simulations, like gamers, and we know this is why gaming is so dangerous, right? Because you're inside the simulation. You don't even go to the bathroom because you don't want to leave. And, you know, I understand. I'm just saying that should not be the next human generation. And this whole talk about AI doing things better, better judges, better presidents: I mean, this is just utterly misguided. And this will lead us to an AI society where we are essentially the pets of the machines. Yeah.
Chris Parker: Wow. Well, Gerd, last question. For people that are listening, that are walking through life hearing all this dystopian news as well as the tech-optimism news, you know, that machines will have babies for you, life is going to be great, you'll live forever, and then there's also the dystopian, sort of Matrix and Blade Runner views: what would you recommend that people keep as a mindset, a thought, a guiding principle, to help them individually navigate this over time? If you could say one thing, carry this with you and think critically about life moving forward, what could that be?
Gerd Leonhard: Well, Antonio Gramsci, the Italian writer, said we should have optimism of the heart, no, the heart and the soul, sorry, and pessimism of the intellect.
Chris Parker: Yeah. So optimism of the heart and the soul, and pessimism of the intellect. Wow. Okay. All right.
Gerd Leonhard: So I'm going to keep asking questions. I'm going to say, well, that doesn't strike me as possible, or as good. But generally I'm extremely hopeful for our future, because I do believe that we can do the right things. We should also forgo this constant intake of bad news and bad stories about humans, because this is a story, right? And this is why I'm working on a campaign called I Love the Future, like I Love New York. New York was a really bad place in the '80s when they launched that campaign; you did not want to go there. And now people truly love New York. So it's a question of our viewpoint. I recommend that people take a positive viewpoint, not a naive, blinded, ideological viewpoint, but one that says: okay, we do have all the tools now; all we need is the will to collaborate. And that's why I think we should start a global branding campaign that says: this is the reason why the future can be good, and if you don't do the good things, then you get kicked out; you don't get to do the future, because your plan isn't good. And that goes for politicians and for companies. I think this is the place we need to get to.
Chris Parker: I love that: the future is good. The future is positive.
Gerd Leonhard: It'll take work. It's not going to happen by accident, and it'll take some time. You know, we have all the cards. That is the reality. We have all the scientific and technological cards. What we don't currently have is the will, the way to collaborate. That is what we need to work on. That is our job. My job is not to show people how great the tech is, but to ask: well, how do we actually make it into a good future? And it takes more than tech to do that. This is where governments come in, and private individuals, and organizations like the UN and others. The UN has to be completely rebooted for this purpose. It's not fit for the purpose of looking at the future in its current form, because this purpose requires very serious work to get to and build the good future, the new future.
Chris Parker: Gerd Leonhard, thank you so much for talking with me. I get excited about this good future. I really appreciate that. And for listeners, you can find a lot more if you go to gerdtube.com, which will link you through to his YouTube space, which is exploding at the moment, because these things he's been working on and thinking about for decades are so very relevant right now. And for organizations that would like some of this spice, some of this insight, some of this humanness, this human in the loop, on these sometimes too-technical conversations, and would like to get their board to have meaningful conversations about the future, you can find him at futuristgerd.com, where there's also a lot of information about his books and his movies and other media. Gerd, thank you so much for joining.
Gerd Leonhard: Thank you, Chris.
Chris Parker: Today I'm talking with Gerd Leonhard, who's coming in from Barcelona, but he's typically in Switzerland and also the Canary Islands. He's one of the world's, I believe, most respected futurists, and he's been talking and thinking for decades about the intersection between humanity, technology, and the future. In framing my own thoughts about AI, I've been very influenced by Gerd and his thinking. He wrote, I think back around 2017, Technology vs. Humanity, the book with the subtitle The Coming Clash Between Man and Machine, and I'm curious if we'll play around with that a little bit. In those decades of work, he has worked with some of the largest organizations in the world wrestling with this. And for me, he's not just a futurist doing forecasting; he is inviting us to reflect. His movie, I think it was The Good Future, is essentially an invitation, I believe, to reflect on a few dimensions of what that could be, and maybe we'll get into that, things like decommoditization and digitalization. There are a few dimensions we can talk about, but today I'm hoping to go really personal and wrestle with this dimension between, okay, AI representing all technology, and I, me, and Gerd, you, and what this could mean. So I'd like to kick off here. You've been working on this for decades, and with OpenAI and ChatGPT commoditizing this, where everyone is confronted with it: what has surprised you the most in your own life about the impact in the last two years?
Gerd Leonhard: Yeah, you know, I've been looking at AI for over 10 years, and in fact my book Technology vs. Humanity is kind of about AI without mentioning it a lot: the struggle between humans and machines, the convergence, and that was quite clear. What has surprised me is that there was this kind of Sputnik moment, like the Russian satellite launch that triggered an arms race: launching AI and getting it out there. And now we truly have an arms race, no longer with Russia really, but between China and the US. So what has surprised me is the power and the money that have gone behind it in a very short time, as if, well, it is kind of the new internet. And you can safely say that the magnitude and the speed and the money behind it, and American politics now as well, have led to this huge acceleration, bringing with it a bunch of really amazing usefulness for my own work, but also significant competition in forecasting. Since ChatGPT launched, my job has changed, and I would say fewer people are asking me to forecast, because they think the machine can forecast, which it kind of can. So that has really changed my work. And I've also been surprised how little resistance there is to this idea of a machine doing our thinking, because it boils down to the same thing: humans are naturally lazy, we like tools that do the work for us, and we don't collaborate very well. Paired with American politics, which has been primarily about doing whatever it takes to dominate, right? So that has also surprised me, as another kicker to this huge AGI discussion.
Chris Parker: If we take your work in forecasting, and I haven't thought of it from that angle, but in my role as a tech leader we rely on Gartner and Forrester and their sort of advice around technology and where things are going. So not typically as far into the future as you go. But indeed I've heard that the whole business model of Gartner and Forrester is in question, because people can go to GPT and get an analysis of which AI widget is best. If someone asked you, what is the benefit, or what's better: is an AI futurist forecast better than what a human could do, or is it different? How do you answer that?
Gerd Leonhard: Well, I think first, of course, there are many companies in the field of consulting and advisory work, like KPMG for example, seriously struggling because of AI. And that is because a lot of the work is based on facts and numbers and projections. AI can do that work: we can take numbers from the last 40 years and project 10 years into the future. That is not as difficult as it seems. And complex scenarios that humans have struggled with, you know, the confluence of exponential technology, synthetic biology, nanotechnology, AI, quantum computing: that's complicated for humans, not so much for AI. So AI is very good at complex scenarios based on data. But here's the thing about forecasting: forecasting is only very partly based on data. As Steve Jobs said of his own work, it's the confluence of art and technology. It's a design question, right? It's a question of storytelling. And I look at my work, and I've done so for at least 10 years: it's primarily about storytelling, not about gathering facts and data. Now, quite simply, we're completely beaten by AI as far as logic and simple knowledge go, explicit knowledge. And it's not just the stuff on the internet; every book ever written that's known to man can now be looked up, and very soon it's going to be in all the languages of the world. On top of that, you can upload your own information, your own database. So if a company wants to know about the future, they can upload their own model with their own data inside their own firewall, and they can query that. So now essentially every possible variation of looking at data and analytics has been covered, and this is only the beginning, because now we're going to see all kinds of things like synthetic data.
So basically what that means is that the hard part, the data, information, and logic part, is being taken over by AI. That doesn't mean we won't have it, because we do need it. But the main thing about forecasting and foresight, and I call that not predicting the future but observing the future, is that it's about stories. And this is the thing we have to understand about humans: humans are about narratives. What matters more for your life is how you feel, how you look at things, what the story is, what you have experienced, your goosebumps, your feelings, your connections. That's what matters to us. Logic and numbers, in reality, we use as an excuse. We take the stuff from Axios and Gartner and all the other guys and we say, oh yeah, that's what I thought anyway, and now I can make my plan, right? So I think my work is much more about narrative and storytelling, and touching people on a different level, than the sort of intellectual computing part, which AI is doing really well, and that's where the competition comes from. If you just want to know the future of Switzerland based on data, you can drill endlessly, and it'll be quite good.
Chris Parker: Yeah, the art of storytelling. It reminds me of something I did last year. One of the reasons for this podcast and this discovery is also my boys. They're 15 and 16, both with dyslexia, so sometimes ingesting the data can be a challenge, particularly in history, which involves a lot of reading and memorizing. We're in the Netherlands, so of course it's in Dutch, but they have these really densely packed, very dry history books, where every paragraph has two facts, and those facts of course need to be committed to rote memory so they can be regurgitated on a test. Okay, we could talk about the whole education system, but what I did is I scanned in these books, isolated the individual facts, and then recreated it with ChatGPT as a narrative story: Jack and Jill go back in history and interact with it. And I think it worked a little bit better, because what I'm trying to do is figure out how this can help them achieve their goal in a way that doesn't just give them the answer. And I guess that's why we're kind of playing around.
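[Editor's aside: the workflow Chris describes, extracted facts turned into one story-generation prompt, could be sketched roughly as below. This is a hypothetical illustration, not his actual code; the function name, the Jack-and-Jill framing, and the sample facts are all invented for the example. The resulting prompt would then be sent to a chat model such as ChatGPT.]

```python
# Hypothetical sketch: weave a list of dry history facts into a single
# prompt that asks a chat model for a narrative retelling.

def build_story_prompt(facts, characters=("Jack", "Jill")):
    """Turn a list of isolated history facts into one story-generation prompt."""
    numbered = "\n".join(f"{i + 1}. {fact}" for i, fact in enumerate(facts))
    return (
        f"Write a short adventure in which {characters[0]} and {characters[1]} "
        "travel back in time. Weave in every fact below, keeping each one "
        "historically accurate, and keep the language simple:\n" + numbered
    )

# Example facts, as might be extracted from a scanned Dutch history chapter.
facts = [
    "The Dutch East India Company was founded in 1602.",
    "Amsterdam's canal ring was built in the 17th century.",
]
prompt = build_story_prompt(facts)
# The prompt would then go to a chat model, e.g. with the OpenAI client:
#   client.chat.completions.create(model="gpt-4o",
#                                  messages=[{"role": "user", "content": prompt}])
print(prompt)
```

Keeping the facts numbered in the prompt makes it easy to check afterwards that the generated story really contains each one.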
Gerd Leonhard: And that's a really important point. I have two kids, you know, they're 30 and 35, so a little bit different in age, and they knew the world before the internet, which is interesting. But the way I look at it, there are certain things that humans, the way we're set up, need, that may have no other reason apart from the fact that we need them. Like handwriting: do we really need handwriting? Well, I'm a terrible handwriter, but every brain researcher and analyst and psychotherapist will tell you that if we don't learn to handwrite, our brain is not going to develop in the same way. And if we don't learn to speak other languages, our brain will develop in a different way. All these things come down to the fact that what we are and what technology is are just hugely different in how they work. This is why I sometimes call AI alien intelligence, a term I think was coined primarily by Yoshua Bengio and Yann LeCun: it exists outside of our own intelligence, out there, like an alien, and it's not very related to our own intelligence, because we need other things for our intelligence to work; we have a body and all these things. This is why I think it's really important for kids, and for anybody, to realize that we have our own setup that needs nurturing, and that includes downtime, it includes digestion time, it includes effort. It includes all that old-fashioned stuff that OpenAI wants to get rid of: why bother, why make an effort, when we can just ask our assistant? Or Zuckerberg says we should use his AI assistant. So that is a tough one, I think.
Chris Parker: I'm wondering, and it's a rhetorical, maybe stupid question: there's a risk that paper maps, the book of maps we've had in our cars in the past, are now just gone because of Google Maps. Could writing, the need to write, be gone in 10 years? Is that a possible future?
Gerd Leonhard: Well, I always say that as long as we're human, as long as we remain mostly human, it probably will not. Even paper maps. To me, for example, for a lot of people going to a strange city to investigate, to be a tourist and look around, unfolding a big map on your dining room table, looking left and right, saying, "Oh, look at this over there, look at that over here": this is how our brain works, right? So I buy a travel guide with the foldout map. Yes, of course, I'm 64, but still, I think even if you're 25, you'd like to have the map on the table, though you may not want to spend the ten euros, because you can do that on the screen. But it just isn't the same. So I think we have to remember that, as long as we remain human in the sense of what we need, and if we acknowledge what we need, why we need it, and why it's necessary to know how to handwrite, or how to read for that matter, all the things that humans have taken for granted for a long time, we shouldn't do away with them just because they're, let's say, complicated or ephemeral or whatever. The same goes for spirituality and consciousness and all these things, because a lot of tech companies and tech players know this, but their business model is to do away with it. I always say, and this may be inadvertent, that the biggest business model today is to replace humanity, to do away with it.
Chris Parker: Can you unpack that a bit more? What does that mean?
Gerd Leonhard: Well, basically, if I take away all the things that make us human, the complicated stuff, the things that computers can't do, then I have the perfect mold for an AI-based world, right? Because then I can do away with all of it. For example, I don't do an interview with a candidate; I put them in front of an AI camera like HireVue or something, and then HireVue says, oh, this guy's lying, or he may be a criminal, you know, it does face recognition and analysis, right? So I can be lazy and just kick back in my office and do 50 interviews. I think there's an utter stupidity in this. What happens to everything else that is important when I hire somebody? Do I really trust, and I definitely do not trust, the machine to do that for me. And I certainly wouldn't trust it in criminal justice: to scan somebody's time in a prison cell with a video camera and then decide whether he's going to offend again. That may be a way of getting more data, but it's instantly dehumanizing. So I think any company that scans your face with an AI camera is dehumanizing. And technology is now pushing us towards this place of saying: the more we can remove these painstaking human moments, the easier the world becomes. And that leads to a dehumanization that can make a boatload of money.
Chris Parker: You know, I'm working Thursday again with a team of other experts on an AI adoption playbook. And I know the word playbook, for me, is triggering, because who needs another playbook? But what we're doing is trying to figure out a recipe for companies to adopt AI that is very well balanced, kind of like a checklist, something that wrestles with some of these things. One of the steps that we've definitely built in is to consciously decide what not to automate, what is not AI. If you were talking to an organization and they asked you, "Hey, Gerd, how could we think about what we should not give to the machine?", what mental framework do you think we could apply to that?
Gerd Leonhard: Yeah, you know, I do this all the time as well. I mean, the majority of my work now is helping companies figure this out. I always say that if we're looking at AI, we should keep our eyes on the things that are achievable and humanly sustainable. And I call that CAVA, like the Spanish champagne, right? So the C stands for cognification: making sure I have better data and can read it better, making things smart, basically smart software. The A is for augmentation: I can translate, I can load up PDF documents, I can understand faster, I can make a podcast, I can make a funny video. That's augmenting. Then virtualization: I can get a digital twin and take a look at what's happening there, get information and say, oh, it looks like that's going to break down next week, as has been done for a long time in factories, right? The last one, and only the last one, in CAVA is automation. And people are obsessed with automation. I think that's utterly wrong, because it is actually very difficult to automate, and it shouldn't be our top goal, because this is the CFO's goal, right? To say, if we can automate this work, we don't hire new people, or we fire existing people, or this one person can do the work of five people. And this is of course another illusion. Automation is not such a great goal.
If you say your goal is efficiency, productivity, automation, that's a financial goal, not a business goal. Of course you want to be more efficient. But take self-driving cars, the best example of automation: in principle it works, it can be done; in reality it's very difficult, right? So if you're looking to automate things in your company, I think partial automation is usually the solution. If you're a legal firm, you can automate non-disclosure agreements, maybe. So that's my approach.
Chris Parker: Let me see if I can personalize this. Yesterday I was called by someone in my network whom I've worked with before and deeply respect. This person is in an executive role at a pretty large European company, and his question to me was whether I could help them come up with an automation strategy, or at least an AI adoption strategy, so that they can reduce headcount by 10%. And I'm wrestling with whether I want to take this assignment, if the goal is manpower reduction. But what I just heard from you, and this is why I want to dig into this, and I tend to believe you: if full-scale automation is very hard now, maybe it'll get easier in 10 years or 5 years, who knows, but certainly it is hard now, is it a legitimate threat to automate people out of jobs? Or would it be the first parts of CAVA, which would be more about changing the jobs? How do you think this is going to impact people, maybe me?
Gerd Leonhard: Well, I think the primary discussion is about augmentation versus automation, right? And automation will obviously be easy if your job is 90% routine, like a call center. We have 20 million people working in call centers; they'll be automated. Yeah, that's the message, and not a good message for them, but maybe there will be other things they'll be doing on top of the AI. Generally speaking, it's like farmers: we don't have farmers working the land by hand anymore, but we still have farmers. So there is bad news there, but also good news, because it generates new work. It makes new things possible. It makes things, how can I say, more efficient for the customer, and things like that. But generally speaking, automation is very difficult with anything that requires judgment, any kind of understanding, because we have to comprehend that AI does not truly understand the world. It reads the world. It is not a friend. It's a service, a platform, and it does not care. It does not have any of those things that we project into it because it seems so human. So for a lot of companies that come to me with this goal, I say I'm not interested, because this is basically a mousetrap. You're building an AI mousetrap, either to remove your employees or to make more money off your customers. The customers will hate you, as has been evidenced by the recent discussion of airlines using AI for pricing, right?
Customers will hate you, like with Duolingo. And yes, it can be really good if it works within reason to help the customer. But either customers will hate you, or, on the flip side, employees will teach the AI, and then you're going to say, well, the AI can do 50%, and the rest of it we don't care about; let's just do away with the quality part of this. And if you do that, your company is doomed. And yes, automation will get much better in 10 years, so this is a temporary reprieve. Generally, I think AI brings the end of routine, any routine, whether it's as a dental hygienist or whatever. And then the question is: what routine do we want to retain? Like, I'd go to the barbershop if I had a beard, and I'd spend a lot of money on that. We want to retain that, for other reasons. So these are complicated questions about employment. I think in 20 years we may indeed be at the moment where work is no longer a substantial part of our lives, because AI does most of it.
Chris Parker: And a lot of people, myself included, sort of define their meaning around their work. I spend so much time on my work, and okay, I might have a title, I might have a car, I have this stature in the community. When all that stuff is shifting, how do you advise that we stay human when our external reasons for being are changing so much? How can we remain calm and on point and positive, adopt this mindset of the good future, keep that in mind and keep working toward it, while there's a whole lot of change coming?
Gerd Leonhard: Yes. Well, of course, this is not a question of technology or science, because technology is morally neutral. It can be either one, right? It can be heaven or it can be hell. I think the potential for hell is pretty big here, given that it's moving very quickly and nobody cares about the side effects. And this is the important part: if we want this to be the good future, then we have to take all the amazing tech, and this is not just AI but quantum, nuclear fusion, nanotechnology, and embed it into a new economic framework. Because you will only be happy and satisfied and find self-realization if your economic issues are taken care of to some degree, so that you don't have to feel like: oh, I can't work, but I'm really great at making pictures or writing a book, which doesn't pay, so now I'm in trouble; or, I'd like to have kids but I can't because I don't have the money; and so on. This brings up the point that work is only one embodiment of self-realization, and especially for men it has been the embodiment of self-realization, and that needs to change. It is already changing: you can see a lot of millennials saying, I don't want to work 10 or 12 hours a day. I think we will very quickly, within 10 years, get to 5 hours per day for the same money, and the money question is a policy question. If your company is greedy, let's say a telecom company, and they replace 50% of their network-maintenance staff with AI, which you can do, then they will just keep the money and distribute it to their shareholders, or more likely to themselves, right? And if that's the policy, then it's not going to work. We need a social policy and a political policy based on what I call the five Ps: people, planet, purpose, prosperity, peace. So we're moving into what Al Gore called, 20 years ago, sustainable capitalism.
Some people call it a Star Trek society. But in many ways, you could say that is the mission, right? The mission cannot be to replace everything with technology and then have us suffering from anxiety, or sitting around in basic-income camps, subdued, watching Netflix. I think we need to think a little further about the design of this society and those five Ps.
Chris Parker: I'm wondering if I can personalize it a bit. As I walk through life advising companies on some of this as well, more in the now (I'm not dealing so much in the future), people are asking, "Okay, what can I do now?" Are there principles you would share, or have already written about, that say: as you develop these policies for your company (it could be a society as well, but let's just say a company), these are some principles that would nudge us, or direct us outright, toward a good future based on those five Ps? Is there a ten commandments, or something, that we could keep in mind as we go through life?
Gerd Leonhard: Yeah. On a personal note, when I started doing this 25 years ago, when I came out of the internet business, my primary interest was to grow my own business: profit and growth and doing well. But when I achieved at least part of that (I never went public with my companies or anything, so I didn't get really rich, but I'm well off doing this), I realized that at a certain point you can say your mission of profit and growth has been somewhat accomplished, and you're not going to 100x doing this or anything else, really. So a larger objective came back into view, and that is: how can I help people? How can I help the planet recover? How can I create purpose, for example for the millennials, for my kids and people of that generation? And how do I think larger, to create a system that makes more sense than just money? Because here's the thing about technology. Technology can solve many problems, but it does not solve social, cultural, political, or religious issues; it makes them worse, right? And the same is true with money. Money is helpful, but at a certain point it makes things worse. Beyond a certain income, you're not going to be any happier, right? So that's kind of the real equation.
Chris Parker: What do you mean by "it makes it worse"? If we're on the path to a Star Trek society, which I hope we are (I would love that), how does technology inherently make it worse?
Gerd Leonhard: Well, technology is about efficiency, right? It makes everything efficient. Not all technology is about efficiency, but a lot of it is. So when things become more efficient: social media has made it efficient for people to share their opinions, but at the same time it has made very efficient surveillance, very efficient [ __ ] machines, polluting effects, and manipulation engines. You can say all these things are good. We can have nuclear fusion, okay, that would be super-efficient energy, but it would also create a world where the use of energy would not matter anymore, no matter how much you spend, and that has other side effects. The thing is, we tend to focus on the solutions but leave the side effects to other people; we privatize the benefit and we socialize the negative. That has been our thinking for a long time. So what I think we need to do in this environment is to say: okay, we can solve all these practical problems, and then we solve our human problems, our cultural, political, economic problems. We have to get away from a very old system. The stock market is 20 years behind in its thinking: today, if you do really bad things (take Aramco or Meta, for example), you can make a lot of money, and that is inherently a bad thing. So when I advise companies, I say: look, if you make these five Ps the cornerstone of your operation, then to pay a dividend you have to tick all five boxes. You can't have a 10 in all five boxes, so maybe you tick a five with people, a five with planet, a 10 with profit. That's already quite good, right? But you can't just say, "I'm going to tick a 10 with profit," like, what's that company called? The defense company, Halliburton or something, right?
Or Peter Thiel's company, you know. You tick that money box, all the other boxes are unticked, and you get a nice dividend. That is not going to be good for society. So this is what we have to pursue. I think when we talk about AI in companies, one of the cardinal rules is that we have to keep the human in the loop no matter what we do, even if it's less efficient, even if it costs more money, because that is how we create real value, personal value, not commodity value. And this is why companies like Meta are struggling: they're putting not the human in the loop but the tech in the loop.
Chris Parker: Yeah, and people feel it, and I think they're responding to that human in the loop. Let me grab that and take it to an extreme. One of the things I believe is that because everything on the internet can be copied, faked, deep-faked, we will trust it less and less, and some sort of trust certification or trust mark will come in. I'm certain that a high percentage of what I see on Instagram, when I'm snacking over there sometimes, is just fake. And it makes me want to seek out more human-in-the-loop moments. It's like: okay, this is just fake. It's entertainment, but I just don't trust it; my inner compass doesn't like it. And it comes down to the point of trust. I believe trust will be a significant measure of value: how much can I trust this person or this company, perhaps based on those five Ps, to be authentically caring? How do you see the interaction of people over time based on trust?
Gerd Leonhard: Yeah. Trust isn't digital, right? I mean, you can destroy trust digitally, but trust is a feeling. It's an understanding between people. It's something that we build; it can be regained, it can be repaired, and so on. It is inherently human; human agency is part of trust. And it comes from many unknown sources, sometimes. Basically, I think technology can currently only be trusted to some degree: you trust the car to work, or the airplane to work, after much proof, and you still have a bit of mistrust left (it could crash, that sort of thing). But to trust technology with existential components, like trusting AI with nuclear weapons and weapons systems, would be just utterly foolish, right? Partial trust, fine: I trust IBM to do a good job with my CRM or ERP system, within reason. I do not trust OpenAI, period, because I think they're on a mission to essentially build digital humans, and that is untrustworthy by definition. And so in my own life, I stopped using companies, I stopped investing in companies that I don't trust, that I can't trust. I stopped working with companies like Facebook and Meta because I don't trust what they do with it. And again, a lot of that is not necessarily that people are evil. It's that they inadvertently do evil things because of the money. You're going to make 500 billion, or you're going to make five trillion? Well, you'd rather make five trillion, right? Just by taking a tiny shortcut that has bad effects on humanity. Yeah, I understand why, but it's still not good, and I don't want to be part of it.
Chris Parker: Well, another dimension of this (and again, I don't know if it's true, because I read it on the internet) is, let me paraphrase, the prime minister of Albania, who stated that his society has some serious corruption issues and is recommending AI-driven government departments, and maybe even an AI-driven prime minister. It was fascinating for me because the conversation was couched in: because we are corrupt, because the humanness here is corrupt, this would be better for us. And that kind of broke my brain. Is that the way to fix this? I guess it could be, but it's like: okay, I don't trust myself, so I'm going to give this thing the keys to my existential reality, as you said. I don't know what to do with that.
Gerd Leonhard: Yeah. One assumption we see in many places is that humans are evil and cannot be trusted. That has been an assumption for a long time, and all the Hollywood, Netflix, Nollywood movies about the future are about that, right? That we're basically evil. And the fact is that we're not. All you have to do is read Rutger Bregman's book Humankind, and a thousand other books on this topic. We can be evil, of course. There are evil people, as we know too well, without naming them at this moment. But generally we can collaborate. We have collaborated when we have had a reason, right? And we're actually not as bad as it seems at getting things done when the [ __ ] hits the fan, so to speak. And this underlying belief that we can use a machine to make us good? That is a myth that is pure Silicon Valley, right? "This AI can do it better because it's not human." It's like saying, "I can give birth with a birthing machine, so as not to inconvenience a woman." This is a real idea, called ectogenesis, right? When you hear these ideas, you can only say: okay, this is straight from the playbook of Blade Runner. To me, that is dehumanizing. It's an insult to our humanity. And transhumanism follows the same path in this direction. Yes, of course I want to live longer; do I want to live forever? No, maybe not.
So I think these are really important issues where we need to say: okay, we're going to take it up to here, and beyond here there's a gray zone we'd rather not enter. That is what, for example, the European AI Act stipulates: two risk levels that are okay, a third that maybe is not, and a fourth that definitely is not, right? We're going to have to agree on what that is for AI, so that we can take the benefits into account. It's interesting how much criticism the EU gets for that, because it's supposedly uncompetitive and things like that, but I think it's quite wise.
Chris Parker: So, I was at a social activity over the weekend, and we were talking about AI, and a lady mentioned the Matrix movies. She said, "Oh, I thought that story was horrible. Why were they pulling people out of those pods and bringing them into those dirty spaceships? I'd want to just be left in the pod. They were clearly happy in the pod. What was wrong with the pod?" I was like, "Wow, I never thought of it that way." Maybe that will be a choice for us in the future.
Gerd Leonhard: Well, there are some people who prefer simulations, like gamers, and we know this is why gaming is so dangerous, right? Because you're inside the simulation; you don't even go to the bathroom, because you don't want to leave. And I understand. I'm just saying that should not be the next human generation. And this whole talk about AI doing things better (better judges, better presidents) is just utterly misguided. It will lead us to an AI society where we are essentially the pets of the machines.
Chris Parker: Wow. Well, Gerd, last question. For people listening who are walking through life hearing all this dystopian news as well as the tech-optimism news (that machines will have babies for you, life is going to be great, you'll live forever), and then also the dystopian Matrix and Blade Runner views: what would you recommend people keep as a mindset, a thought, a guiding principle, to help them individually navigate this over time? If you could say one thing, "just carry this with you and think critically about life moving forward," what would that be?
Gerd Leonhard: Well, Antonio Gramsci, the Italian writer, said we should have optimism of the heart and the soul, and pessimism of the intellect.
Chris Parker: Yeah. So: optimism of the heart and the soul, and pessimism of the intellect. Wow. Okay. All right.
Gerd Leonhard: So I'm going to keep asking questions. I'm going to say, "Well, that doesn't strike me as possible, or as good." But generally I'm extremely hopeful for our future, because I do believe we can do the right things. We should also forgo this constant intake of bad news and bad stories about humans, because that is just a story, right? This is why I'm working on a campaign called "I Love the Future," like "I Love New York." New York was a really bad place in the '80s when they launched that campaign; you did not want to go there. And now people truly love New York. This is a question of our viewpoint. So I recommend people take a positive viewpoint (not a naive, blinded, or necessarily ideological viewpoint), but say: okay, we do have all the tools now; all we need is the telos, the will to collaborate, right? And that's why I think we should start a global branding campaign that says: this is the reason the future can be good, and if you don't do the good things, then you get kicked out; you don't get to do the future, because your plan isn't good. And that goes for politicians and for companies. I think this is the place we need to get to.
Chris Parker: I love that. The future is good. The future is positive.
Gerd Leonhard: It'll take work. It's not going to happen by accident, and it will take some time. We have all the cards; that is the reality. We have all the scientific and technological cards. What we don't currently have is the telos, the will to collaborate. That is what we need to work on. That is our job. My job is not to show people how great the tech is, but to ask: how do we actually make this into a good future? And it takes more than tech to do that. This is where governments come in, and private individuals, and organizations like the UN and others. The UN has to be completely rebooted for this purpose; in its current form, it is not fit for the purpose of looking at the future, because this purpose requires very, very serious work to build the good future, the new future.
Chris Parker: Gerd Leonhard, thank you so much for talking with me. I get excited about this good future; I really appreciate it. And for listeners, you can find a lot more at gerdtube.com, which links through to his YouTube space, which is exploding at the moment because the things he has been working on and thinking about for decades are so very relevant right now. And for organizations that would like some of this spice, this insight, this humanness, this human-in-the-loop perspective on these sometimes too-technical conversations, and would like their board to have meaningful conversations about the future, you can find him at futuristgerd.com, where there's also a lot of information about his books, his films, and other media. Gerd, thank you so much for joining.
Gerd Leonhard: Thank you, Chris.