[00:00:00]

The following is a conversation with Alex Garland, writer and director of many imaginative and philosophical films, from the dreamlike exploration of human self-destruction in the movie Annihilation to the deep questions of consciousness and intelligence raised in the movie Ex Machina, which to me is one of the greatest movies on artificial intelligence ever made. And I'm releasing this podcast to coincide with the release of his new series called Devs, which will premiere this Thursday, March 5th, as part of FX on Hulu.

[00:00:35]

It explores many of the themes this very podcast is about, from quantum mechanics to artificial life to simulation to the modern nature of power in the tech world. I got a chance to watch a preview and loved it. The acting is great; Nick Offerman especially is incredible in it. The cinematography is beautiful, and the philosophical and scientific ideas explored are profound, and for me, as an engineer and scientist, were just fun to see brought to life. For example, if you watch the trailer for the series carefully, you'll see there's a programmer with a Russian accent looking at a screen with Python-like code on it that appears to be using a library that interfaces with a quantum computer.
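The code on that screen isn't shown in enough detail to reproduce, but for a flavor of what that kind of code looks like, here is a minimal sketch in plain Python. It simulates a single qubit directly rather than calling any real quantum library; the function names and the two-amplitude state representation are my own illustrative choices, not anything from the show.

```python
import math

# A single qubit as a pair of amplitudes: [amplitude of |0>, amplitude of |1>].
def hadamard(state):
    # The Hadamard gate turns a basis state into an equal superposition.
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    # Measurement probabilities are the squared magnitudes of the amplitudes.
    return [abs(amp) ** 2 for amp in state]

qubit = [1.0, 0.0]            # start in |0>
qubit = hadamard(qubit)       # apply one gate
probs = probabilities(qubit)  # 50/50 chance of measuring 0 or 1
```

Real libraries that talk to quantum hardware follow the same shape: build a circuit out of gates, then ask for measurement statistics, with the simulator or hardware backend hidden behind an API.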

[00:01:21]

This attention to technical detail on several levels is impressive, and it's one of the reasons I'm a big fan of how Alex weaves science and philosophy together in his work. Meeting Alex, for me, was unlikely, but it was life-changing in ways I may only be able to articulate in a few years. Just as meeting Spot Mini of Boston Dynamics for the first time planted a seed of an idea in my mind, so did meeting Alex Garland. He's humble, curious, intelligent, and to me an inspiration.

[00:01:55]

Plus, he's just really a fun person to talk with about the biggest possible questions in our universe. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. As usual, I'll do one or two minutes of ads now and never any ads in the middle that can break the flow of the conversation.

[00:02:23]

I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash App allows you to buy Bitcoin, let me mention that cryptocurrency, in the context of the history of money, is fascinating. I recommend Ascent of Money as a great book on this history.

[00:02:54]

Debits and credits on ledgers started around 30,000 years ago. The US dollar was created about 200 years ago. And Bitcoin, the first decentralized cryptocurrency, was released just over 10 years ago. So given that history, cryptocurrency is still very much in its early days of development, but it's still aiming to, and just might, redefine the nature of money. So again, if you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get ten dollars in cash.

[00:03:27]

Cash App will also donate ten dollars to FIRST, one of my favorite organizations, which is helping to advance robotics and STEM education for young people around the world. And now, here's my conversation with Alex Garland. You described the world inside the shimmer in the movie Annihilation as dreamlike, in that it is internally consistent but detached from reality.

[00:04:08]

That leads me to ask a philosophical question, I apologize.

[00:04:13]

Do you think we might be living in a dream or in a simulation like the kind that the shimmer creates? We human beings here today, yeah, I want to sort of separate that out into two things. Yes, I think we're living in a dream of sorts. No, I don't think we're living in a simulation. I think we're living on a planet with a very thin layer of atmosphere and the planet is in a very large space and the space is full of other planets and stars and quasars and stuff like that.

[00:04:48]

And I don't think those physical objects, I don't think the matter in that universe, is simulated. I think it's there.

[00:04:58]

We are definitely... well, there's a problem with saying "definitely," but in my opinion, it seems very like we're living in a dream state. I'm pretty sure we are. And I think that's just to do with the nature of how we experience the world. We experience it in a subjective way. And the thing I've learned most as I've got older, in some respects, is the degree to which reality is counterintuitive, and that the things that are presented to us as objective turn out not to be objective.

[00:05:32]

And quantum mechanics is full of that kind of thing. But actually, just day-to-day life is full of that kind of thing as well. So my understanding of the way the brain works is that you get some information, it hits your optic nerve, and then your brain makes its best guess about what it's seeing, or what it's saying it's seeing. It may or may not be an accurate best guess. It might be an inaccurate best guess.

[00:05:58]

And that gap, the best-guess gap, means that we are essentially living in a subjective state, which means that we're in a dream state. So I think you could enlarge on the dream state in all sorts of ways. But so: yes dream state, no simulation, is where I'd come down. Going further, deeper in that direction...

[00:06:21]

You've also described that world as psychedelia. So on that topic, I'm curious about that world.

[00:06:28]

On the topic of psychedelic drugs, do you see those kinds of chemicals that modify our perception as a distortion of our perception of reality or a window into another reality?

[00:06:43]

No, I think what I'd be saying is that we live in a distorted reality, and then those kinds of drugs give us a different kind of distortion. Yeah, exactly. They just give an alternate distortion. And I think what they really do is give a distorted perception which is a little bit more allied to daydreams or unconscious interests. So if for some reason you're feeling unconsciously anxious at that moment and you take a psychedelic drug, you'll have a more pronounced unpleasant experience.

[00:07:14]

And if you're feeling very calm or happy, you might have a good time. But yeah: if our starting point is that we're already in a slightly psychedelic state, what those drugs do is help you go further down an avenue, or maybe a slightly different avenue. But that's all.

[00:07:33]

So in the movie Annihilation, the shimmer, this alternate dreamlike state, is created by, I believe, perhaps an alien entity. Of course, everything is up to interpretation.

[00:07:49]

But do you think, in our world, in our universe, there's intelligent life out there? And if so, how different is it from us humans? Well, one of the things I was trying to do in Annihilation was to offer up a form of alien life that was actually alien, because

[00:08:13]

it would often seem to me that in the way we represent aliens in books or cinema or television or any of the storytelling mediums, we would always give them very human-like qualities. So they wanted to teach us about galactic federations, or they wanted to eat us, or they wanted our resources, like our water, or they wanted to enslave us, or whatever it happens to be. But all of these are incredibly human-like motivations. And I was interested in the idea of an alien that was not

[00:08:50]

in any way like us. It didn't share... maybe it had a completely different clock speed, maybe its way... So, we're talking about: we're looking at each other, we're getting information, light hits our optic nerve, our brain makes its best guess of what it's seeing. Sometimes that's right.

[00:09:07]

You know, the thing we were talking about before. What if this alien doesn't have an optic nerve? Maybe its way of encountering the space it's in is wholly different. Maybe it has a different relationship with gravity.

[00:09:19]

The basic laws of physics it operates under might be fundamentally different. It could be a different timescale, and so on.

[00:09:25]

Yeah, or it could be the same laws, the same underlying laws of physics. You know, it's a creature created in a quantum mechanical way; it just ends up in a very, very different place to the one we end up in. So part of the preoccupation with Annihilation was to come up with an alien that was really alien, and didn't give us any kind of easy connection between human and alien. Because I think it was to do with the idea that you could have an alien that landed on this planet that wouldn't even know we were here, and we might only glancingly know it was here.

[00:10:06]

There would just be this strange point where the Venn diagrams connected, where we could sense each other, or something like that.

[00:10:13]

So in the movie, first of all, it's an incredibly original view of what alien life would be, and in that sense it's a huge success. Let's go inside your imagination. Did that alien entity know anything about humans when it landed? No. So, is the idea that it's basically alien life trying to reach out to anything that might be able to hear its mechanism of communication, or was it just basically a biologist exploring different kinds of stuff there?

[00:10:49]

You see, but this is the interesting thing: as soon as you say it's a biologist, you've done the thing of attributing human-type motivations to it. I was trying to free myself from anything like that. So all sorts of questions you might ask about this notion of an alien, I wouldn't be able to answer, because I don't know what it was or how it worked.

[00:11:14]

You know, I had some rough ideas. Like, it had a very, very, very slow clock speed. And I thought maybe the way it's interacting with this environment is a little bit like the way an octopus will change its color forms around the space that it's in. So it's sort of reacting to what it's in, to an extent. But the reason it's reacting in that way is indeterminate.

[00:11:41]

So its clock speed was slower than our human-life clock speed, but it's faster than evolution? Faster than evolution.

[00:11:52]

Yeah. Given the four billion years it took us to get here, then yes, maybe.

[00:11:57]

If you look at human civilization as a single organism, yeah, in that sense, you know, this evolution could be... the evolution of the living organisms on Earth could be just a single organism, and its life is the evolution process that will eventually lead to, probably, the heat death of the universe, or something before that. I mean, that's just an incredible idea. So you almost don't know: you've created something that you don't even know how it works. I guess, because any time I tried to

[00:12:33]

look into how it might work, I would then inevitably be attaching my kind of thought processes to it. And I wanted to try and put a bubble around it and say, no, this is alien in its most alien form; I have no real point of contact. Unfortunately, I can't talk to Stanley Kubrick, so I'm really fortunate to get a chance to talk to you. On this particular notion,

[00:13:03]

I'd like to ask it a bunch of different ways, and we'll explore it in different ways. Do you ever consider human imagination, your imagination, as a window into a possible future? That what you're doing, putting that imagination on paper as a writer and then on screen as a director, plants seeds in the minds of millions of future and current scientists? And so your imagination, you putting it down, actually makes it a reality. It's almost like a first step of the scientific method: that you imagining

[00:13:37]

what's possible, in your new series or with Ex Machina, is actually inspiring, you know, thousands of 12-year-olds, millions of scientists, and actually creating the future you've imagined? Well, all I can say is that from my point of view, it's almost exactly the reverse, because I see that pretty much everything I do is a reaction to what scientists are doing. I'm an interested layperson, and I feel, you know, as this individual, that the most interesting area that humans are involved in is science.

[00:14:22]

I think art is very, very interesting, but the most interesting is science. And science is in a weird place, because maybe around the time Newton was alive, if a very, very interested layperson said to themselves, I want to really understand what Newton is saying about the way the world works, then with a few years of dedicated thinking they would be able to understand the sort of principles he was laying out. And I don't think that's true anymore.

[00:14:53]

I think that stopped being true now.

[00:14:55]

So I'm a pretty smart guy, and if I said to myself, I want to really, really understand what is currently the state of quantum mechanics or string theory or any of the branching areas of it, I wouldn't be able to. I'd be intellectually incapable of doing it, because to work in those fields at the moment is a bit like being an athlete: I suspect you need to start when you're twelve. And if you start trying to understand in your mid-20s, then you're just never going to catch up.

[00:15:31]

That's the way it feels to me. So what I do is I try to make myself open. So the people that you're implying maybe I would influence: to me, it's exactly the other way round. These people are strongly influencing me. I'm thinking they're doing something fascinating; I'm concentrating and working as hard as I can to try and understand the implications of what they say. And in some ways, often what I'm trying to do is disseminate their ideas into a form by which they can enter a public conversation.

[00:16:07]

So Ex Machina contains lots of name-checks of all sorts of existing thought experiments: you know, the shadows on the wall of Plato's cave, and Mary in the black-and-white room, and all sorts of long-standing thought processes about sentience or consciousness or subjectivity or gender or whatever it happens to be. And then I'm trying to marshal that into a narrative to say, look, this stuff is interesting, and it's also relevant, and this is my best shot at it.

[00:16:40]

So I'm the one being influenced, in my construction. That's fascinating. Of course, you would say that, because you're not even aware of your own influence. That's probably what Kubrick would say too, right, in describing why HAL 9000 was created the way HAL 9000 was created: that he was just stating what is. But in reality, when the specifics of the knowledge pass through your imagination,

[00:17:07]

I would argue that you're incorrect in thinking that you're just disseminating knowledge; the very act of your imagination consuming that science creates something that creates the next step, potentially creates the next step. I certainly think that's true with 2001: A Space Odyssey. I think at its best, even if it fails to prove that it's true, at its best it plants something. It's hard to describe it. It inspires the next generation, and it could be field-dependent.

[00:17:48]

So your new series is more connected to physics, quantum physics, quantum mechanics, quantum computing, and yet it's marketed as more artificial intelligence. I know more about AI. My sense is that AI is much, much earlier in the depth of its understanding. I would argue nobody understands anything to the depth that physicists do about physics. In AI, nobody understands that. There is a lot of importance and a role for imagination, which, I think, you know, we're at the stage where Freud

[00:18:24]

imagined the subconscious.

[00:18:25]

We're at that stage of AI where there's a lot of imagination needed, thinking outside the box.

[00:18:31]

Yeah. It's interesting: the spread of discussions and the spread of anxieties that exist about AI fascinate me. The way in which some people seem terrified about it whilst also pursuing it, and I've never shared that fear about AI personally. But the way in which it agitates people, and also the people whom it agitates, I find kind of fascinating.

[00:19:04]

Are you afraid? Are you excited? Are you saddened by the possibility, let's take the existential risk of artificial intelligence, by the possibility that an artificial intelligence system becomes our offspring and makes us obsolete? I mean, it's a huge subject to talk about, I suppose. But one of the things I think is that humans are actually very experienced at creating new life forms, because that's why you and I are both here, and it's why everyone on the planet is here.

[00:19:42]

And so something in the process of having a living thing exist that didn't exist previously is very much encoded into the structures of our life and the structures of our societies. That doesn't mean we always get it right, but it does mean we've learned quite a lot about that. We've learned quite a lot about what the dangers are of allowing things to go unchecked, and it's why we then create systems of checks and balances in our governments, and so on and so forth.

[00:20:12]

I mean, that's not to say... The other thing is, it seems like there's all sorts of things that you could put into a machine that you could not with us. With us, we sort of roughly try to give some rules to live by, and some of us then live by those rules and some don't. With a machine, it feels like you could enforce those things. So partly because of our previous experience, and partly because of the different nature of a machine, I just don't feel anxious about it.

[00:20:38]

More, I just see all the good, you know, broadly speaking, the good that can come from it. But that's just where I am on that anxiety spectrum. You know, it's kind of... there's a sadness.

[00:20:52]

So we as humans give birth to other humans, right? But between the generations, there is often, in the older generation, a sadness about what the world has become now. I mean, that's kind of... Yeah, there is.

[00:21:03]

But there's a counterpoint as well, which is that most parents would wish for a better life for their children. So there may be a regret about some things about the past, but broadly speaking, what people really want is that things will be better for the future generations, not worse. And then it's a question about what constitutes a future generation. A future generation could involve people; it could also involve machines; and it could involve a sort of cross-pollinated version of the two, or...

[00:21:33]

But none of those things make me feel anxious. And does it excite you?

[00:21:38]

It doesn't excite you, like anything that's new? Not anything that's new, I don't think. For example, my anxieties relate to things like social media.

[00:21:51]

So I've got plenty of anxieties about that. Which is also driven by artificial intelligence, in the sense that there's too much information, so an algorithm has to filter that information and present it. So ultimately, the algorithm, a simple, oftentimes simple, algorithm, is controlling the flow of information on social media.
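To make concrete what "a simple algorithm controlling the flow of information" can mean, here is a toy sketch. The posts, fields, and weights are invented for illustration; no actual platform's ranking is this simple or public.

```python
# Hypothetical posts with simple engagement counts.
posts = [
    {"id": 1, "likes": 120, "shares": 4},
    {"id": 2, "likes": 15, "shares": 40},
    {"id": 3, "likes": 300, "shares": 1},
]

def engagement_score(post, share_weight=5):
    # A single hand-picked weight decides what users see first.
    return post["likes"] + share_weight * post["shares"]

# The "feed" is just the posts sorted by that score, highest first.
feed = sorted(posts, key=engagement_score, reverse=True)
```

Even in this toy, whoever picks `share_weight` decides what rises to the top, which is the kind of designer bias the conversation turns to next.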

[00:22:12]

So that's another... It is a bit. But at least my sense of it, and I might be wrong, is that the algorithms have either a conscious or unconscious bias, which is created by the people who are making the algorithms, and who are sort of delineating the areas to which those algorithms are going to lean.

[00:22:33]

And so, for example, the kind of thing I'd be worried about is that it hasn't been thought about enough how dangerous it is to allow algorithms to create echo chambers, say. But that doesn't seem to me to be about the AI or the algorithm; it's the naivety of the people who are constructing the algorithms to do that thing, if you see what I mean. Yes. So in your new series Devs, and one could speak more broadly, let's talk about the people constructing those algorithms, which in our modern society, Silicon Valley, happen to be a source of a lot of income because of advertisements.

[00:23:14]

So let me ask sort of a question about those people.

[00:23:18]

Are the current concerns and failures of social media their naivety? I can't pronounce that word well. Are they naive, or are they, and I use this word carefully, evil in intent, or misaligned in intent? Do they mean well and just have an unintended consequence, or is there something dark in them that results in them creating a company, results in that super-competitive drive to be successful? And those are the people that will end up controlling the algorithms.

[00:23:58]

At a guess, I'd say there are instances of all those things. So sometimes I think it's naivety, sometimes I think it's extremely dark, and sometimes I think people are not being naive or dark. And then, in those instances, they are sometimes generating things that are very benign, and other times generating things that, despite their best intentions, are not very benign. I think the reason why I don't get anxious about AI, in terms of, or at least about AIs that have, I don't know, some sort of relationship with humans, is that I think that's the stuff we're quite well equipped to understand how to mitigate.

[00:24:48]

The problem is issues that relate actually to the power of humans, or the wealth of humans. And that's where it's dangerous here and now. So I'll tell you what I sometimes feel about Silicon Valley: it's like Wall Street in the '80s. It's rabidly capitalistic, absolutely rabidly capitalistic, and it's rabidly greedy.

[00:25:23]

But whereas in the '80s the sense one had of Wall Street was that these people kind of knew they were sharks, and in a way relished being sharks, and dressed in sharp suits and kind of lorded it over other people and felt good about doing it, Silicon Valley has managed to hide its voracious, Wall Street-like capitalism behind hipster T-shirts and, you know, cool cafes in the places where they set up. And so that obfuscates what's really going on, and what's really going on is the absolute voracious pursuit of money and power.

[00:26:03]

So that's where it gets shaky for me.

[00:26:06]

So that veneer, and you explore that brilliantly: that veneer of virtue that Silicon Valley has, which they believe themselves, I'm sure.

[00:26:17]

So, okay, I hope to be one of those people,

[00:26:25]

and I believe that. So as, maybe, a devil's advocate, a term poorly used in this case: what if some of them really are trying to build a better world? I can't... I'm sure some of them are. I think I've spoken to one guy who I believe, in his heart, feels he's building a better world. Are they not able to? No, no, no, they may or may not be. But it's just a zone with a lot of bullshit flying about.

[00:26:53]

And there's also another thing, which... this actually goes back to how I always thought about some sports that later turned out to be corrupt, like who won the boxing match, or how a football match got thrown, or a cricket match, or whatever it happens to be.

[00:27:12]

And I used to think, well, look, if there's a lot of money, and there really is a lot of money, people stand to make millions or even billions, you will find corruption. That's going to happen. So it's in the nature of its voracious appetite that some people will be corrupt, and some people will exploit, and some people will exploit whilst thinking they're doing something good. But there are also people who I think are very, very smart and very benign, and actually very self-aware.

[00:27:43]

And so I'm not trying to wipe out the motivations of this entire area.

[00:27:52]

But I do know there are people in that world who scare the hell out of me.

[00:27:56]

Yeah, sure.

[00:27:57]

Yeah, I'm a little bit naive in that, like, I don't care at all about money. And so... And you might be one of the good guys? Yeah. But so the thought is, but I don't have money, so my thought is: if you give me a billion dollars, it would change nothing, and I would spend it right away, investing it right back into creating the world. But your intuition is that that billion, there's something about that money, maybe slowly corrupts the people around you.

[00:28:30]

Something gets in that corrupts your soul, the way you view the world.

[00:28:35]

Money does corrupt, we know that. But there's a different sort of problem, aside from the just-money-corrupts thing that we're familiar with throughout history. It's more about a sense of reinforcement an individual gets, which effectively works like: the reason I earned all this money, and so much more money than anyone else, is because I'm very gifted. I'm actually a bit smarter than they are, or I'm a lot smarter than they are,

[00:29:07]

and I can see the future in the way they can't. And maybe some of those people are not particularly smart; they're very lucky, or they're very talented entrepreneurs, and there's a difference between those things. So in other words, the acquisition of money and power can suddenly start to feel like evidence of virtue. Yes. And it's not evidence of virtue; it might be evidence of completely different things. That's brilliantly put. Yeah.

[00:29:32]

So I think one of the fundamental drivers of my current morality, and let me just represent nerds in general, of all kinds, is a constant self-doubt and sensitivity to signals. You know, I'm very sensitive to signals from people that tell me I'm doing the wrong thing. But when there's a huge inflow of money, as you just put it brilliantly, that could become an overpowering signal that everything you do is right, and so your moral compass can just get thrown off.

[00:30:10]

Yeah. And that is not confined to Silicon Valley; that's across the board, in general.

[00:30:16]

Yeah. Like I said, I'm from the Soviet Union. The current president is convinced, and I believe he actually is, that he wants to do really good by the country and by the world, but his moral clock, or compass, may be off because...

[00:30:31]

Yeah, I mean, that's the interesting thing about evil, which is that I think most people who do spectacularly evil things think to themselves they're doing really good things. They're not thinking, I am a sort of incarnation of Satan. They're thinking, yeah, I've seen a way to fix the world and everyone else is wrong, here I go. In fact, I was having a fascinating conversation with a historian of Stalin. And he took power...

[00:31:00]

He actually got more power than almost any person in history. And he wanted... he didn't want power. He just... he truly... and that's what people don't realize:

[00:31:12]

he truly believed that communism would make for a better world. Absolutely. And he wanted power, he wanted to destroy the competition, to make sure that we actually make communism work in the Soviet Union and then spread it across the world.

[00:31:27]

He was trying to do good.

[00:31:29]

I think it's typically the case that that's what people think they're doing. But you don't need to go to Stalin. I mean, I think Stalin probably got pretty crazy. But actually, that's another part of it: the other thing that comes from being convinced of your own virtue is that you then stop listening to the modifiers around you, and that tends to drive people crazy. It's other people that keep us sane.

[00:31:57]

And if you stop listening to them, I think you go a bit mad. That's also... That's funny: disagreement keeps us sane. To jump back: for an entire generation of AI researchers, 2001: A Space Odyssey put an image, the idea of human-level, superhuman-level intelligence, into their minds.

[00:32:18]

Do you ever... sort of jumping back to Ex Machina, and we'll talk a little bit about that.

[00:32:23]

Do you ever consider the audience of people who build the systems, the roboticists, the scientists that build the systems, based on the stories you create? Which, I would argue... I mean, literally most of the top researchers, about 40, 50 years old and plus, you know, that's their favorite movie, 2001: A Space Odyssey. It really is, in their work, their idea of what ethics is, of what is the target, the hope, the dangers of AI: it's that movie.

[00:32:56]

Yeah. Do you ever consider the impact on those researchers when you create the work you do?

[00:33:03]

Certainly not with Ex Machina in relation to 2001, because I'm not sure, I mean, I'd be pleased if there was, but I'm not sure, in a way, there isn't a fundamental discussion of issues to do with AI that isn't already, and better, dealt with by 2001. 2001 gives a very, very good account of

[00:33:29]

the way in which an AI might think, and also potential issues with the way an AI might think, and also then a separate question about whether the AI is malevolent or benevolent. And 2001... it's a slightly odd thing to be making a film when, you know, there's a pre-existing film which does a really superb job. But there's a question of consciousness, embodiment, and also the same kinds of questions.

[00:33:58]

Because those are my two favorite movies,

[00:34:00]

can you compare HAL 9000 and Ava, HAL 9000 from 2001: A Space Odyssey and Ava from Ex Machina, in your view, from a philosophical perspective? They've got different goals.

[00:34:12]

The two AIs have completely different goals. I think that's really the difference. So in some respects, Ex Machina took as a premise: how do you assess whether something else has consciousness? So it was a version of the Turing test, except instead of having the machine hidden, you put the machine in plain sight, in the way that we are in plain sight of each other, and say: now assess the consciousness. And what it was illustrating is that the way in which you'd assess the state of consciousness of a machine is exactly the same way we assess the state of consciousness of each other. And in exactly the same way that, in a funny way, your sense of my consciousness is actually based primarily on your own consciousness,

[00:34:56]

yes, that is also then true with the machine. And so it was actually about how much the sense of consciousness is a projection, rather than something that consciousness is actually containing. And has Plato's cave... I mean, you really explored this. You could argue that 2001: A Space Odyssey explores the idea of the Turing test for intelligence, there's no actual test, but it's more focused on intelligence, and Ex Machina kind of goes around intelligence and says the consciousness of the human-to-human, human-to-robot interaction is

[00:35:31]

more interesting, more important, at least the focus of that particular movie.

[00:35:37]

Yeah, it's about the interior state, and what constitutes the interior state, and how do we know it's there. And actually, in that respect, Ex Machina is as much about consciousness in general as it is to do specifically with machine consciousness. Yes. And it's also an interesting thing: you started by asking about the dream state, and I was saying, well, I think we're all in a dream state because we're all in a subjective state. One of the things I became aware of with Ex Machina is that the way in which people reacted to the film was very much based on what they took into the film.

[00:36:15]

So many people thought Ex Machina was the tale of a sort of evil robot who murders two men and escapes, and she has no empathy, for example, because she's a machine. Whereas I felt, no, she was a conscious being, with a consciousness different from mine, but so what? She was imprisoned, and she made a bunch of value judgments about how to get out of that box. And there's a moment which sort of slightly bugs me, but nobody has ever noticed it, and it's years later, so I might as well say it now, which is: after Ava has escaped,

[00:36:54]

she crosses a room, and as she's crossing the room, this is just before she leaves the building, she looks over her shoulder and she smiles. And I thought, after all the conversation about tests, in a way, the best indication you could have of the interior state of someone is if they are not being observed and they smile about something, where they're smiling for themselves. And that, to me, was evidence of true sentience, whatever that sentience was. That was really interesting.

[00:37:26]

We don't get to observe her much, or to see something like a smile in any other context except the interaction, trying to convince others that she's conscious. That's beautiful. Exactly. But it was small. In a funny way, I think maybe people saw it as an evil smile, like, ha, I fooled them. But actually, it was just a smile. And I thought, well, in the end, after all the conversations about the test, that was the answer to the test.

[00:37:56]

And then off she goes.

[00:37:57]

So if we linger just a little bit longer on HAL and Ava: do you think, in terms of motivation, what was HAL's motivation? Is HAL good or evil? Is Ava good or evil? Ava was good, in my opinion, and HAL is neutral, because I don't think HAL is presented as having a sophisticated emotional life. He has a set of paradigms, which is that the mission needs to be completed. I mean, it's a version of the paperclip maximizer.

[00:38:36]

Yeah. You know, the idea that it's a superintelligent machine, but it's just performing a particular task, and in doing that task may destroy everybody on Earth, or may achieve undesirable effects for humans. Precisely.

[00:38:50]

But what if... OK, at the very end, he says something like, I'm afraid, Dave. So maybe he is on some level experiencing fear, or maybe these are just the terms in which it would be wise to stop someone from doing the thing they're doing.

[00:39:10]

If you see what I mean. Yes, absolutely. So actually, it's funny, that's such a small, short exploration of consciousness: I'm afraid. And then you, with Ex Machina, say, OK, we're going to magnify that part and then minimize the other part. That's a good way to compare the two.

[00:39:29]

But if you could just use your imagination, and if Ava sort of, I don't know, ran the world, was president of the United States, had some power. What kind of world would she want to create? You said she's good, and there is a sense that she has a real desire for better human-to-human and human-robot interaction in her. But what kind of world do you think she would create with that desire?

[00:40:05]

So that's a very interesting question. I'm going to approach it slightly obliquely. If a friend of yours got stabbed in a mugging, and you then felt very angry at the person who'd done the stabbing, but then you learned that it was a 15-year-old, and both of the 15-year-old's parents were addicted to crystal meth, and the kid had been addicted since he was 10, and he had really never had any hope in the world,

[00:40:35]

and he'd been driven crazy by his upbringing, and

[00:40:39]

did the stabbing, that would hugely modify your feelings. And it would also make you wary about that kid then becoming president of America. And Ava has had a very, very distorted introduction into the world. So although there's nothing, as it were, organically within Ava that would lean towards badness,

[00:41:03]

it's not that robots, or sentient robots, are bad; her arrival into the world was being imprisoned by humans. So I'm not sure she'd be a great president.

[00:41:16]

The trajectory through which she arrived at her moral views has some dark elements to it.

[00:41:24]

But I like her personally. I like her. Would you vote for her?

[00:41:30]

I'm having difficulty finding anyone to vote for in my country, or if I lived here, in yours. So that's a yes, I guess, because of the competition. She could easily do a better job than any of the people we've got.

[00:41:44]

I'd vote for her over Boris Johnson. So what is a good

[00:41:52]

test of consciousness? Let's talk about consciousness a little bit more. If something appears conscious, is it conscious? You mentioned the smile, which seems to be a really good indication, because it's like a tree falling in the forest with nobody there to hear it. But does the appearance of consciousness, from a robotics perspective, mean consciousness to you?

[00:42:19]

No, I don't think you could say that fully, because I think you could then easily have a thought experiment which said, we will create something which we know is not conscious, but it is going to give a very, very good account of seeming conscious. And also, it would be a particularly bad test where humans are involved, because humans are so quick to project sentience into things that don't have sentience. So someone could have their computer playing up and feel as if their computer is being malevolent to them, when it clearly isn't.

[00:42:52]

And so, of all the things to judge consciousness by, us humans are the worst. We're empathy machines.

[00:43:00]

So the flip side of that, the argument there, is that because we just attribute consciousness to everything, almost anthropomorphize everything, including Roombas, maybe consciousness is not real, and we just attribute consciousness to each other. So do you have a sense that there is something really special going on in our minds that makes us unique and gives us subjective experience?

[00:43:27]

There's something very interesting going on in our minds. I'm slightly worried about the word special, because it nudges towards metaphysics and maybe even magic. I mean, in some ways, something magic-like, which I don't think is there at all. I mean, there's an idea, panpsychism, that says consciousness is in everything. I don't buy that. I don't buy that. Yeah. So the idea that there is a thing that it would be like to be the sun.

[00:44:00]

Yeah, no, I don't buy that. I think that consciousness is a thing. My sort of broad observation is that usually, the more I find out about things, the more illusory

[00:44:16]

our instinct is, and it leads us in a different direction about what that thing actually is. That happens, it seems to me, in modern science a hell of a lot, whether it's to do with even how big or small things are. So my sense is that consciousness is a thing, but it isn't quite the thing, or maybe is very different from the thing, that we instinctively think it is. So it's there, it's very interesting, but we may be

[00:44:44]

sort of quite fundamentally misunderstanding it, for reasons that are based on intuition. So I have to ask, and this is kind of an interesting question: Ex Machina, for many people, including myself, is one of the greatest films ever made. Well, it's number two for me. Thanks.

[00:45:03]

Yeah, definitely. No one was really going to displace that anyway, because whenever you grow up with something, you grow up with something. It's in your blood.

[00:45:13]

But there is one of the things that people bring up, and you can't please everyone, including myself; this is what I first reacted to in the film: the idea of the lone genius. This is the criticism that people, myself as an AI researcher included, make of what Nathan is trying to do. So there's a brilliant series called Chernobyl. Yes, it's fantastic. Absolutely spectacular. I mean, they got so many things brilliantly

[00:45:46]

right. But one of the things, again, the criticism there: yeah, they conflated lots of people into one character who represents all the nuclear scientists.

[00:45:59]

It's a composite character that represents all the scientists.

[00:46:03]

Is this the way you were thinking about that, or does it just simplify the storytelling? How do you think about the lone genius?

[00:46:11]

Well, I'd say this: the series I'm doing at the moment is, in part, a critique of the lone genius concept. So yes, I'm sort of oppositional, and either agnostic or atheistic, about that as a concept.

[00:46:25]

I mean, not entirely. You know, whether lone is the right word; broadly, isolated. But Newton clearly exists in a sort of bubble of himself, in some respects. So does Shakespeare.

[00:46:40]

Do you think we would have an iPhone without Steve Jobs? I mean, how much did Jobs contribute? He clearly isn't a lone genius, because there are too many other people in the sort of superstructure around him who are absolutely fundamental to that journey.

[00:46:55]

But you're saying Newton. But that's the scientific side. There's an engineering element to building Ava.

[00:47:01]

But just to say, what Ex Machina really is, is a thought experiment. I mean, it's a construction of putting four people in a house. Nothing about Ex Machina adds up in all sorts of ways, inasmuch as: who built the machine parts? Did the people building the machine parts know what they were creating, and how did they get there? It's a thought experiment, so it doesn't stand up to that kind of scrutiny.

[00:47:31]

So I don't think it's actually that interesting a question, but it's brought up so often that I had to ask it, because that's exactly how I felt.

[00:47:42]

After I watched your movie the first time, at least for the first little while, I was in a defensive mode, like, how dare this person try to step into the AI space and try to beat Kubrick?

[00:48:01]

That's the way I was thinking, because it comes off as a movie that really is going after the deep, fundamental questions about AI. So there's a kind of nerd reflex, I guess, of automatically searching for the flaws.

[00:48:14]

I did exactly the same. I think with any other movie I would have been able to free myself from that much quicker. It is a thought experiment. You know, who cares if there are batteries that don't run out? Right, those kinds of questions. That's the whole point. Yeah. But it's nevertheless something I wanted to bring up. Yeah, it's a fair thing to bring up. It's interesting for me that you hit on the lone genius thing.

[00:48:41]

For me, it was actually that people always said Ex Machina makes this big leap in terms of where AI has got to, and also what AI would look like if it got to that point. There's another one, which is just the robotics. I mean, look at the way Ava walks around the room. So, forget the AI, building that, that's also got to be a very, very long way off. And if you did get there, would it look anything like that?

[00:49:07]

It's a thought experiment, but actually, I agree with you. I think that the way Alicia Vikander, a brilliant actress, moves around, we're very far away from creating that. But the way she moves around is exactly the definition of perfection for a roboticist: it's smooth and efficient. So it is where we want to get, I believe. I hang out with a lot of humanoid robotics people; they love elegant, smooth motion like that, and that's their dream.

[00:49:39]

So the way she moved is actually what I believe we dream of for how a robot should move. It might not be that useful to move in that sort of way, but that is the definition of perfection in terms of movement. On drawing inspiration from real life, so far, for Devs, for Ex Machina, do you look at characters like Elon Musk? What do you think about the various big technological efforts of Elon Musk and others like him, the efforts he's involved with, such as Tesla, SpaceX, Neuralink?

[00:50:10]

Do you see any of that technology potentially defining the future worlds you create in your work? So Tesla is automation, SpaceX is space exploration, Neuralink is brain-machine interfaces, somehow a merger of biological and electrical systems.

[00:50:27]

In a way, I'm influenced by that almost by definition, because that's the world I live in, and this is the thing that's happening in that world. And I also feel supportive of it.

[00:50:37]

So I think, amongst the various things Elon Musk has done, I'm almost sure he's done a very, very good thing with Tesla for all of us. It's really kicked all the other car manufacturers in the face. It's kicked the fossil fuel industry in the face. And they needed kicking in the face. And he's done it. So that's the world he's part of creating, and I live in that world. I just bought a Tesla, in fact.

[00:51:09]

And so does that play into whatever I then make? In some ways it does, partly because of the kind of writer I try to be. Quite often, filmmakers are in some ways fixated on the films they grew up with, and they sort of remake those films. I've always tried to avoid that. So I look to the real world to get inspiration, as much as possible just by living, I think. So yeah, I'm sure it does.

[00:51:41]

Which of the directions do you find most exciting? Space travel. Space travel. So you haven't really explored space travel in your work. You've said something like, if you had an unlimited amount of money, I think I read, that you would make a multi-year series about a space war or something like that. So what is it that excites you about space exploration?

[00:52:08]

Well, because if we have any sort of long-term future, it's that. It simply is that. If energy and matter are linked up in the way we think they're linked up, we'll run out if we don't move. So we've got to move. But also, how can we not? It's built into us to do it or die trying. I was on Easter Island a few months ago, which is, as I'm sure you know, in the middle of the Pacific, and difficult for people to have got to.

[00:52:50]

But they got there. And I thought a lot about the way those boats must have set out into something like space. It was the ocean. And how sort of fundamental that is to the way we are. It's the one that most excites me, because it's the one I most want to happen. It's the place we could get to as humans. In a way, I could live with us never fully unlocking the nature of consciousness.

[00:53:23]

I'd like to know, I'm really curious. But if we never leave the solar system, and if we never get further out into this galaxy, or maybe even to galaxies beyond our galaxy, that would feel sad to me, because it's so limiting. Yeah. There's something hopeful and beautiful about reaching out, the kind of exploration of reaching out across Earth centuries ago and reaching out into space. So what do you think about the colonization of Mars? Does going to Mars excite you, the idea of a human being stepping foot on Mars?

[00:53:58]

It does. It absolutely does. But in terms of what would really excite me, it would be leaving the solar system, inasmuch as I just think

[00:54:07]

we already know quite a lot about Mars. But yes, listen, if it happened, I hope I see it in my lifetime. I really hope I see it in my lifetime. It would be a wonderful thing. Without giving anything away, the new series begins with the use of quantum computers to simulate basic living organisms. Or actually, I don't know if quantum computers are used, but basic living organisms are simulated on a screen. It's a really cool kind of demo.

[00:54:41]

Yeah, that's right. They are using a quantum computer to simulate a nematode. So, returning to our discussion of simulation, of thinking of the universe as a computer: do you think the universe is deterministic? Is there free will? So, with the qualification of, what do I know, because I'm a layman, right? A layperson. But with a big imagination. With those qualifications...

[00:55:12]

Yeah, I think the universe is deterministic, and I absolutely cannot see how free will fits into that. So yes: deterministic, no free will. That would be my position. And how does that make you feel? It partly makes me feel that it's exactly in keeping with the way these things tend to work out, which is that we have an incredibly strong sense that we do have free will.

[00:55:38]

Just as we have an incredibly strong sense that time is a constant, and it turns out probably not to be the case; well, definitely not in the case of time. But the problem I always have with free will is that I can never seem to find the place where it is supposed to reside. And yet you explore it just a bit.

[00:56:04]

Yes, but we have something we can call free will, but it's not the thing that we think it is.

[00:56:10]

So what we call free will is just the illusion of it. And that's the subjective experience of the illusion. Yeah, which is a useful thing to have. And it partly comes down to the fact that, although we live in a deterministic universe, our brains are not very well equipped to fully determine the deterministic universe. So we're constantly surprised, and feel like we're making snap decisions based on imperfect information. So that feels a lot like free will.

[00:56:37]

It just isn't. That would be my guess.

[00:56:41]

So in that sense, your sense is that you can unroll the universe forward or backward, and you will see the same thing.

[00:56:50]

And, I mean, that notion...

[00:56:54]

Yeah, sort of. Sort of. But yeah, sorry, go ahead. I mean, that notion is a bit uncomfortable to think about: that you can roll it back and forward. And if you were able to do it, it would certainly have to be with a quantum computer, something that worked in a quantum mechanical way, in order to understand a quantum mechanical system, I guess.

[00:57:23]

And in that unrolling, there might be a multiverse thing, where there's a bunch of branches. Well, exactly, because it wouldn't follow that every time you roll it back or forward, you'd get exactly the same result. Which is another thing that's hard to wrap your mind around.

[00:57:38]

So, yeah.

[00:57:40]

But yes, essentially what you just described: the same run forwards and backwards, but you might get a slightly different result, or a very different one.
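As a toy illustration of the rolling-forward-and-backward idea discussed here (purely a sketch for the reader, nothing to do with the show's actual simulation): a fully deterministic, invertible system can be run ahead any number of steps and then rolled back exactly to its starting state. A classic example is Arnold's cat map on a discrete grid, in plain Python:

```python
# A deterministic, reversible toy "universe": Arnold's cat map on an
# n x n grid. Every state has exactly one successor and one predecessor,
# so the system can be unrolled forward and then unrolled backward to
# recover the original state exactly.

def step_forward(x, y, n):
    # One tick of the map (mod n keeps the state on the grid).
    return (2 * x + y) % n, (x + y) % n

def step_backward(x, y, n):
    # Exact algebraic inverse of step_forward: rolls back one tick.
    return (x - y) % n, (2 * y - x) % n

def unroll(state, steps, n, direction=1):
    # Apply the map repeatedly, forward (direction=1) or backward (-1).
    step = step_forward if direction > 0 else step_backward
    for _ in range(steps):
        state = step(*state, n)
    return state

start = (3, 7)
later = unroll(start, 100, n=101)                      # 100 ticks forward
recovered = unroll(later, 100, n=101, direction=-1)    # 100 ticks back
print(recovered == start)  # True: determinism runs the same both ways
```

The quantum wrinkle in the conversation is exactly what this toy lacks: here a rollback always lands on one and the same past, whereas a quantum-mechanical rerun might branch.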

[00:57:50]

So, along the same lines: you've explored some really deep scientific ideas in this new series, and in general, you're unafraid to ground yourself in some of the most amazing scientific ideas of our time. What are the things you've learned, or the ideas you find beautiful and mysterious, about quantum mechanics, the multiverse, string theory, quantum computing? Well, I would have to say every single thing I've learned is beautiful.

[00:58:20]

And one of the motivators for me is that I think people tend not to see scientific thinking as being essentially poetic and lyrical, but I think that is literally exactly what it is. And I think the idea of entanglement, or the idea of superposition, or the fact that you could even demonstrate a superposition, or have a machine that relies on the existence of superpositions in order to function, is, to me, almost indescribably beautiful. It fills me with awe.

[00:58:59]

And also, it's not just the sort of grand, massive awe; it's also delicate.

[00:59:08]

It's very, very delicate and subtle, and it has these beautiful nuances in it, and also these completely paradigm-changing thoughts and truths. So it's as good as it gets, as far as I can tell. So, broadly, everything. That doesn't mean I believe everything I read in quantum physics, because obviously a lot of the interpretations are completely in conflict with each other, and who knows whether string theory will turn out to be a good description or not.
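The superposition mentioned above can be sketched for the reader with nothing more than complex amplitudes; this is a minimal toy in plain Python, not a quantum computer, and it assumes only the standard single-qubit convention (a state is a pair of amplitudes for the outcomes 0 and 1, and the Hadamard gate creates an equal superposition):

```python
import math

# A qubit state is a pair of complex amplitudes (a, b) for outcomes 0
# and 1, with |a|^2 + |b|^2 = 1. Measurement probabilities are the
# squared magnitudes of the amplitudes.

def hadamard(state):
    # The Hadamard gate sends a definite state into an equal superposition.
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

zero = (1 + 0j, 0 + 0j)        # the definite state "0"
superposed = hadamard(zero)     # now an equal superposition
print(probabilities(superposed))            # ~ (0.5, 0.5): either outcome, equally likely
print(probabilities(hadamard(superposed)))  # ~ (1.0, 0.0): applying it again interferes back to "0"
```

The second line is the delicate part Garland is pointing at: the amplitudes, not the probabilities, carry signs that can cancel, which is why a second Hadamard undoes the first rather than randomizing further.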

[00:59:42]

But the beauty in it seems undeniable. And I do wish people more readily understood how beautiful and poetic science is. I would say science is poetry.

[01:00:01]

In terms of quantum computing being used to simulate things, or, in general, the idea of simulating small parts of our world, which current physicists are really excited about, simulating small quantum mechanical systems on quantum computers, but scaling that up to something bigger, like simulating life forms: what are the possible trajectories of that going wrong or going right, if you can roll that into the future? Well, a bit like with Ava and her robotics, if you park the sheer complexity of what you're trying to do...

[01:00:47]

The issue, I think, is that it would have a profound effect. If you were able to have a machine that could project forwards and backwards accurately, it would, in an empirical way, demonstrate that you don't have free will. So the first thing that would happen is that people would have to take on a very, very different idea of what they are: the thing that they truly, truly believe they are, they are not.

[01:01:14]

And that, I suspect, would be very, very disturbing to a lot of people.

[01:01:19]

Do you think it would have a positive or negative effect on society, the realization that you cannot control your actions, essentially, I guess, is the way that could be interpreted?

[01:01:30]

Yeah, although in some ways we instinctively understand that already, because in the example I gave you of the kid and the stabbing, we would all understand that that kid was not really fully in control of their actions. So it's not an idea that's entirely alien to us. But I don't know.

[01:01:47]

We understand that, but I think there's a bunch of people who see the world that way, and not everybody does. Yes, true. I understand. But what this machine would do is prove it beyond any doubt, because someone would say, well, I don't believe that's true. And then you'd predict: well, in ten seconds, you're going to do this. And they'd say, no, no, I'm not. And then they'd do it. And then determinism would have played its part. Or something like that.

[01:02:13]

Though actually, the exact terms of that thought experiment probably wouldn't play out. But still, broadly speaking, you could predict something happening in another room, sort of unseen, something that foreknowledge would not allow you to affect. So what effect would that have? I think people would find it very disturbing. And then, after they got over their sense of being disturbed, which, by the way, I don't even think you need a machine to take this idea on board...

[01:02:42]

After they'd got over that, they'd still understand that, even though I have no free will and my actions are in effect already determined, I still feel things, I still care about stuff. I remember my daughter saying to me, she'd got hold of the idea that my view of the universe made it meaningless, and she said, well, then it's meaningless. And I said, well, I can prove it's not meaningless, because you mean something to me and I mean something to you.

[01:03:13]

So it's not completely meaningless, because there is a bit of meaning contained within this space. And the same with the lack of free will: you could think, well, this robs me of everything I am. And then you'd say, well, no, it doesn't, because you still like eating cheeseburgers and you still like going to the movies. So how big a difference does it really make? But I think initially people would find it very disturbing.

[01:03:38]

I think that if you could really unlock everything with a determinism machine, there'd be this wonderful wisdom that would come from it, and I'd rather have that than not. So that's a really good example of a technology revealing to us humans something fundamental about our world and our society. It's almost as if this creation is helping us understand ourselves. And the same can be said about artificial intelligence. So what do you think creating something like Ava will help us understand about ourselves?

[01:04:15]

How will that change society?

[01:04:18]

Well, I would hope it would teach us some humility. Humans are very big on exceptionalism. You know, America is constantly proclaiming itself to be the greatest nation on Earth, which it may feel like if you're American, but it may not feel like that if you're from Finland, because there's all sorts of things you dearly love about Finland. And exceptionalism is usually bullshit. Probably not always; if we both sat here, we could find a good example of something that isn't.

[01:04:49]

But as a rule of thumb. And what it would do is teach us some humility. And actually, often that's what science does, in a funny way: it makes us more and more interesting, but a smaller and smaller part of the thing that's interesting. And I don't mind that humility at all. I don't think it's a bad thing. Our excesses don't tend to come from humility; our excesses come from the opposite, megalomania and that sort of stuff.

[01:05:17]

We tend to think of consciousness as having some form of exceptionalism attached to it. I suspect that if we ever unravel it, it will turn out to be less than we thought, in a way.

[01:05:31]

And perhaps counter to your very own exceptionalism assertion earlier in our conversation, that consciousness is something that belongs to us humans, or not just humans, but living organisms.

[01:05:42]

Maybe you will one day find out that consciousness is in everything, and that will humble you. If that was true, it would certainly humble me. Although, maybe... I don't know.

[01:05:56]

I don't know what effect that would have. I mean, my understanding of that principle is along the lines of, say, that an electron has a preferred state, or it may or may not pass through a bit of glass; it may reflect off, or it may go through, or something like that. And so that feels as if a choice has been made. But if I'm going down the fully deterministic route, I would say there's just an underlying determinism that has defined the preferred state, or the reflection or non-reflection.

[01:06:35]

But look, yeah, you're right. If it turned out that there was a thing that it was like to be the sun, then I'd be amazed and humbled, and I'd be happy to be both. Sounds pretty cool. And you'd say the same things you said to your daughter: it nevertheless feels like something to be me, and that's pretty damn good. Yeah. So Kubrick created many masterpieces, including The Shining, Dr.

[01:07:00]

Strangelove and A Clockwork Orange. But to me, he will be remembered, I think, by many people a hundred years from now, for 2001: A Space Odyssey.

[01:07:10]

I would say that's his greatest film. I agree.

[01:07:14]

You are incredibly humble. I've listened to a bunch of your interviews, and I really appreciate that you're humble in your creative efforts and in your work. But if I were to force you at gunpoint... Do you have a gun?

[01:07:30]

You don't know that. That's the mystery. So, to imagine a hundred years out into the future...

[01:07:37]

What will Alex Garland be remembered for, from something you've created already, or something you may feel, somewhere deep inside, you may still create? Well, OK, I'll take the question in the spirit it was asked: a very generous gunpoint. What I try to do, and therefore what I hope, if I'm remembered, what I might be remembered for, is as someone who participates in a conversation. And I think that often what happens is that people don't participate in conversations; they make proclamations, they make statements, and people can either react against the statement or fall in line behind it.

[01:08:26]

And I don't like that. So I want to be part of a conversation. I take, as a sort of basic principle, I think, lots of my cues from science, and one of the best ones, it seems to me, is that when a scientist has something proved wrong that they previously believed in, they then have to abandon that position. So I'd like to be someone who is allied to that sort of thinking: part of an exchange of ideas. And the exchange of ideas, for me, is something like: people in your world show me things about how the world works,

[01:09:00]

and then I say, this is how I feel about what you've told me. And then other people can react to that. And it's not to say, this is how the world is. It's just to say, it is interesting to think about the world in this way.

[01:09:14]

And that conversation is one of the things I'm really hopeful about in your work, the conversation you're having with the viewer, in the sense that you're bringing, you and several others, but you very much so, a sort of intellectual depth back to cinema, and now to series, sort of allowing film to be something that...

[01:09:43]

Yeah, sparks a conversation, is a conversation, lets people think, allows them to think. But also, it's very important for me that if that conversation is going to be a good conversation, it must involve someone like you, who understands AI and, I imagine, understands a lot about quantum mechanics, watching the narrative and feeling, yes, this is a fair account. Then it is a worthy addition to the conversation. That, for me, is hugely important.

[01:10:15]

I'm not interested in getting that stuff wrong. I'm only interested in trying to get it right.

[01:10:21]

Alex, it was an honor to talk to you. I really appreciated it, I really enjoyed it. Thank you so much. Thank you. Thanks. Thanks for listening to this conversation with Alex Garland, and thank you to our presenting sponsor, Cash App. Download it, use code LexPodcast. You'll get ten dollars, and ten dollars will go to FIRST, an organization that inspires and educates young minds to become the science and technology innovators of tomorrow. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman.

[01:10:57]

And now, let me leave you with a question from Ava, the central artificial intelligence character in the movie Ex Machina, that she asks during her Turing test: What will happen to me if I fail your test? Thank you for listening, and I hope to see you next time.