[00:00:00]

The following is a conversation with Dileep George, a researcher at the intersection of neuroscience and artificial intelligence, co-founder of Vicarious with Scott Phoenix, and formerly co-founder of Numenta with Jeff Hawkins, who's been on this podcast, and Donna Dubinsky. From his early work on hierarchical temporal memory to recursive cortical networks to today, Dileep has always sought to engineer intelligence that is closely inspired by the human brain. As a side note, I think we understand very little about the fundamental principles underlying the function of the human brain.

[00:00:38]

But the little we do know gives hints that may be more useful for engineering intelligence than any idea in mathematics, computer science, physics, and scientific fields outside of biology. And so the brain is a kind of existence proof that says it's possible, keep at it. I should also say that brain-inspired AI is often overhyped and used as fodder for marketing speak, just as quantum computing is, but I'm not afraid of exploring these sometimes overhyped areas, since where there's smoke, there's sometimes fire.

[00:01:13]

Quick summary of the ads: three sponsors, Babbel, Raycon earbuds, and Masterclass. Please consider supporting this podcast by clicking the special links in the description to get the discount. It really is the best way to support this podcast. If you enjoy this thing, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or connect with me on Twitter at lexfridman. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation.

[00:01:45]

This show is sponsored by Babbel, an app and website that gets you speaking in a new language within weeks. Go to babbel.com and use code LEX to get three months free. They offer 14 languages, including Spanish, French, Italian, German, and, yes, Russian. Daily lessons are 10 to 15 minutes, super easy, effective, designed by over 100 language experts. Let me read a few lines from the Russian poem "Noch, Ulitsa, Fonar, Apteka" ("Night, Street, Lamp, Drugstore") by Alexander Blok that you'll start to understand if you sign up to Babbel.

[00:02:22]

Noch, ulitsa, fonar, apteka, bessmyslennyy i tusklyy svet. Zhivi yeshchyo khot chetvert veka, vsyo budet tak, iskhoda net. Now, I say that you'll only start to understand this poem, because Russian starts with the language and ends with the vodka. Now, the latter part is definitely not endorsed or provided by Babbel, and will probably lose me the sponsorship. But once you graduate from Babbel, you can enroll in my advanced course of late-night Russian conversation over vodka.

[00:02:56]

I have not yet developed an app for that, it's in progress. So get started by visiting babbel.com and use code LEX to get three months free. This show is also sponsored by Raycon earbuds. Get them at buyraycon.com/lex. They have become my main method of listening to podcasts, audiobooks, and music when I run, do push-ups and pull-ups, or just live life. In fact, I often listen to brown noise with them when I'm thinking deeply about something; it helps me focus. They're super comfortable and pair easily.

[00:03:29]

Great sound, great bass, six hours of playtime. I've been putting in a lot of miles to get ready for a potential ultramarathon, listening to audiobooks on World War II.

[00:03:42]

The sound is rich and really comes in clear. So again, get them at buyraycon.com/lex. This show is also sponsored by Masterclass. Sign up at masterclass.com/lex to get a discount and to support this podcast. When I first heard about Masterclass, I thought it was too good to be true. I still think it's too good to be true. For 180 bucks a year, you get an all-access pass to watch courses from an amazing list of instructors.

[00:04:08]

Some of my favorites: Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of SimCity and The Sims, on game design. Every time I do this read, I really want to go play SimCity. Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. By the way, you can watch it on basically any device. Once again, sign up at masterclass.com/lex to get a discount and to support this podcast.

[00:04:45]

And now, here's my conversation with Dileep George. Do you think we need to understand the brain in order to build it? Yes, if you want to build a brain, you need to understand how it works. The Blue Brain project, Henry Markram's project, is trying to build a brain without understanding it, just trying to put details of the brain from neuroscience experiments into a giant simulation by putting in more and more neurons, more and more details. But that is not going to work, because

[00:05:40]

when it doesn't perform as we expect it to, what do you do? Just keep adding more details? How do you debug it? Unless you have a theory about how the system is supposed to work, how the pieces are supposed to fit together, and what they're going to contribute, you can't build it or understand it at the functional level.

[00:06:01]

So can you linger on that and describe the Blue Brain Project? It's kind of a fascinating principle and idea to try to simulate the brain. Are we talking about a human brain?

[00:06:14]

Right, right. Human brains and rat brains, or brains in general, have a lot in common: the cortex, the neocortex structure, is very similar.

[00:06:25]

So initially they were trying to just simulate a cat-scale brain and understand the nature of what emerges.

[00:06:36]

As it happens in most of these simulations, you easily get one thing out, which is oscillations. If you simulate a large number of neurons, they oscillate, and you can adjust the parameters and say that it matches the rhythms we see in the brain, etc. I see. So the idea is a simulation at the level of individual neurons? Yeah.

[00:07:04]

So in the Blue Brain Project, the original idea proposed was: you put in very detailed biophysical models of neurons, you interconnect them according to the statistics of connections found in real neuroscience experiments, and then you turn it on and see what happens. And these neuron models are incredibly complicated in themselves, because the neurons are modeled using this idea called Hodgkin-Huxley models, which are about how signals propagate in a cable, and there are active dendrites, all those phenomena, which we ourselves don't understand that well.
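For reference, the standard Hodgkin-Huxley formulation he is referring to (textbook form, not something spelled out in the conversation) is a current-balance equation for the membrane voltage V, with gating variables m, h, n controlling the sodium and potassium conductances:

```latex
C_m \frac{dV}{dt} = I_{\text{ext}}
  - \bar{g}_{\text{Na}}\, m^{3} h \,(V - E_{\text{Na}})
  - \bar{g}_{\text{K}}\, n^{4} \,(V - E_{\text{K}})
  - \bar{g}_{\text{L}}\,(V - E_{\text{L}}),
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \quad x \in \{m, h, n\}
```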

[00:07:53]

And then we put in connectivity, which is part guesswork, part observed. And of course, if we do not have any theory about how it is supposed to work, we just have to take whatever comes out of it as, OK, this is something interesting.

[00:08:12]

But in your sense, are these models of the way the signal travels along the axons, all the basic models, too crude?

[00:08:21]

Oh, well, actually they are pretty detailed and pretty sophisticated, and they do replicate the neural dynamics. If you take a single neuron and you try to turn on the different channels, the calcium channels and the different receptors, and see what the effect of turning those channels on or off is on the neuron's spike output, people have built pretty sophisticated models of that.

[00:08:52]

And they are, I would say, in the regime of being correct. The correctness is interesting, because you mentioned several levels. Is the correctness measured by looking at some kind of aggregate statistics?

[00:09:06]

It would be more the spiking dynamics of the neuron, or something like that. Yeah. These models, because they go to the level of mechanism, are basically looking at, OK, what is the effect of turning on an ion channel? And you can model that using electric circuits. So it is not just function fitting; people are looking at the mechanism underlying it and putting that in terms of electric circuit theory and signal propagation theory, and modeling that.

[00:09:43]

And so those models are sophisticated. But getting a single neuron model 99 percent right still does not tell you how to build a brain. It would be the analog of getting a transistor model right and then trying to build a microprocessor. If you did not understand how a microprocessor works, but you say, oh, I can model one transistor well, so now I will just interconnect the transistors according to whatever I could guess from the experiments and try to simulate it, then it is very unlikely that you will produce a functioning microprocessor.

[00:10:28]

When you want to produce a functioning microprocessor, you want to understand Boolean logic, how the gates work, all those things, and then understand how those gates are implemented using transistors. Yeah, this reminds me, there's a paper, maybe you're familiar with it, I remember going through it in a reading group, that approaches the microprocessor from the perspective of a neuroscientist. It basically uses all the tools that we have in neuroscience to try to understand it, as if aliens showed up to study computers.

[00:11:05]

Yeah. And to see if those tools can be used to get any kind of sense of how the processor works. I think the takeaway from at least this initial exploration is that we're screwed. There's no way that the tools of neuroscience would get us to anything, not even Boolean logic, or any aspect of the architecture or the function of the processor: the clocks, the timing, all of it.

[00:11:39]

You can't figure that out from the tools of neuroscience.

[00:11:42]

Yes, I'm very familiar with this particular paper. I think it was called "Could a Neuroscientist Understand a Microprocessor?" or something like that.

[00:11:53]

Following the methodology in that paper, even an electrical engineer would not understand a microprocessor. So I don't think it is that bad, in the sense that neuroscientists do find valuable things by observing the brain; they do find good insights. But those insights cannot be put together just as a simulation. You have to investigate: what are the computational underpinnings of those findings? How do all of them fit together from an information processing perspective?

[00:12:33]

Somebody has to painstakingly put those things together and build hypotheses. So I don't want to dis all of neuroscience and say, oh, they are not finding anything. You know, that paper almost went to the level of "a neuroscientist will never understand it," and that's not true. I think they do find lots of useful things, but it has to be put together in a computational framework.

[00:12:57]

Yeah, but you know, for the AI systems listening to this podcast a hundred years from now, there's some non-zero probability they'll find your words laughable: "I remember when you humans thought you understood something about the brain; you were totally clueless." In the sense that we may be in the very, very early days of understanding the brain. But that's one perspective. In your perspective, how far are we into understanding

[00:13:33]

any aspect of the brain, from the dynamics of individual neuron communication to, in a collective sense, how they're able to store information, transfer information, how intelligence emerges, all that kind of stuff? Where are we on that timeline? Yeah.

[00:13:53]

So, you know, timelines are very, very hard to predict, and you can of course be wrong on either side. We know that when we look back, the first flight was in 1903, and in 1900 there was a New York Times article on flying machines that do not fly, saying humans might not fly for another hundred years. That was what the article stated.

[00:14:22]

But no, they flew three years after that. So it's very hard to predict. Well,

[00:14:29]

and on that point, one of the Wright brothers, I think two years before, said some number like 50 years; he had become convinced that it was impossible, even during their experimentation.

[00:14:48]

Yeah, yeah. I mean, that speaks to the entrepreneurial battle, the depression of going through thinking this is impossible. There's something there: even the person that's in it is not able to estimate correctly.

[00:15:04]

Exactly. But I can tell you, objectively, what are the things that we know about the brain and how that can be used to build a model, which can then go back and inform how the brain works. So my way of understanding the brain would be to basically say: look at the insights neuroscientists have found, understand them from a computational angle, an information processing angle, build models using that, and then build a functional model, one that is doing the task that we want the model to do.

[00:15:41]

It is not just trying to model phenomena in the brain; it is trying to do what the brain is trying to do at the whole functional level. And building that model will help you fill in the missing pieces, because biology just gives you the hints, and building the model fills in the rest of the pieces of the puzzle. Then you can go and connect that back to biology and say, OK, now it makes sense that this part of the brain is doing this, or this layer in the cortical circuit is doing this, and then continue iteratively, because now that will inform new experiments in neuroscience.

[00:16:22]

And of course, building the model and verifying it in the real world will also tell you more about whether the model actually works, and you can refine the model and find better ways of putting these neuroscience insights together. So I would say neuroscientists alone, just from experimentation, will not be able to build a functional model of the brain. There are lots of very impressive efforts in collecting more and more connectivity data from the brain.

[00:16:58]

You know, how are the microcircuits of the brain connected with each other? Which are beautiful, by the way. Those are beautiful. And at the same time, those by themselves do not convey the story of how it works. Somebody has to understand, OK, why are they connected like that, and what are those things doing? And we do that by building models, using hints from neuroscience, and repeating the cycle.

[00:17:29]

So what aspects of the brain are useful in this whole endeavor? Which, by the way, I should say: you're both a neuroscientist and an AI person. I guess the dream is to both understand the brain and to build AGI systems, so yours is like an engineer's perspective of trying to understand the brain. So what aspects of brain function, speaking of which, do you find interesting?

[00:17:56]

Yeah, quite a lot of things. So one is, if you look at the visual cortex: the visual cortex is a large part of the brain. I forget the exact fraction, but a huge part of our brain area is occupied by just vision. And the visual cortex is not just a feedforward cascade of neurons. There are a lot more feedback connections in the brain compared to the feedforward connections.

[00:18:27]

And it is surprising the level to which neuroscientists have actually studied this. If you go into the neuroscience literature and poke around and ask, have they studied what the effect would be of blocking a neuron in a higher level on a neuron in level V1, have they studied that? You will find, yes, they have studied almost every possible combination. It's not random exploration at all; it's very hypothesis driven. Right.

[00:19:01]

Experimental neuroscientists are very, very systematic in how they approach the brain, because experiments are very costly to conduct. They take a lot of preparation and they need a lot of control. So they are very hypothesis driven in how they approach the brain. And often what I find is that when we have a question, say, has anybody probed how lateral connections in the brain work, and you go and read the literature: yes, people have probed it, and they have probed it very systematically.

[00:19:32]

And they have hypotheses about how those lateral connections are supposedly contributing to visual processing. But of course, they haven't built fully functional, detailed models of it in those studies.

[00:19:47]

Sorry to interrupt. Do they stimulate, like, a neuron in one particular area of the visual cortex and then see how the signal travels, that kind of thing? Fascinating.

[00:19:57]

Very, very fascinating experiments. I can give you one example I was impressed with. Before going to that, let me give you a story of how the layers in the cortex are organized. The visual cortex is organized into roughly four hierarchical levels: V1, V2, V4, and IT. There are other pathways too, but I'm talking about just object recognition. And then within V1 itself,

[00:20:33]

there is a very detailed microcircuit in V1 itself, that is, organization within a level itself. The cortical sheet is organized into multiple layers, and there is a columnar structure. This layer-wise and columnar structure is repeated in V1, V2, V4, all of them. And the connections between these layers within a level, of which there are roughly six, have a particular structure to them.

[00:21:06]

Now, one example of an experiment people did: you present a stimulus which, let's say, requires separating the foreground from the background of an object, so a textured triangle on a textured background. And you can check: does the surface settle first, or do the contours settle first? Settle, in the sense that when you finally form the perception of the triangle, you understand where the contours of the triangle are.

[00:21:48]

And you also know where the inside of the triangle is; that's when you've formed the final percept. Now you can ask: what are the dynamics of forming that final percept? Do the neurons first find the edges, converge on where the edges are, and then find the inner surfaces? Or does it go the other way around? So what's the answer? In this case, it turns out that it first settles on the edges.

[00:22:20]

It converges on the edge hypothesis first, and then the surfaces are filled in from the edges to the inside. That's fascinating. And the detail to which you can study this is amazing: you can not only find the temporal dynamics of when this happens, you can also find which layer in V1 is encoding the edges, which layer is encoding the surfaces, which is encoding the feedback, which is encoding the feedforward, and what combination of them produces the final percept.

[00:22:59]

And these kinds of experiments stand out when you try to explain illusions. One example of a favorite illusion of mine is the Kanizsa triangle. I don't know whether you are familiar with this one. It's an example where there is a triangle, but only the corners of the triangle are shown in the stimulus, so they look like black Pac-Man shapes.

[00:23:26]

And then your visual system starts to hallucinate the edges. Yeah. When you look at it, you will see a faint edge, and you can go inside the brain and look: do neurons actually signal the presence of this edge? And if they signal it, how do they do it? Because they are not receiving anything from the input; the input is blank for those neurons. So how do they signal it?

[00:23:54]

When does the signaling happen? If a real contour is present in the input, then the neurons immediately signal, OK, there is an edge here. When it is an illusory edge, it is clearly not in the input; it is coming from the context. So those neurons fire later, and you can see that it's the feedback connections causing them to fire, they happen later, and you can find the dynamics of that as well.

[00:24:27]

So these studies are pretty impressive and very detailed. So, by the way, just to step back: you said that there may be more feedback connections than feedforward connections. First of all, just for the machine learning folks, that's crazy, that there are all these feedback connections. We often think, I think thanks to deep learning, about the human brain as a kind of feedforward mechanism.

[00:25:02]

So what the heck are these feedback connections? What's their purpose, what are the dynamics, how are we supposed to think about them?

[00:25:11]

Yeah. So this fits into a very beautiful picture of how the brain works. The beautiful picture is that our brain is building a model of the world. Our visual system is building a model of how objects behave in the world, and we are constantly projecting that model back onto the world. So what we are seeing is not just a feedforward thing that just gets interpreted in a feedforward way; we are constantly projecting our expectations onto the world, and the final percept is a combination of what we project onto the world combined with what the actual sensory input is. Almost like trying to calculate the difference and then trying to interpret the difference?

[00:25:58]

I wouldn't put it as calculating the difference. It's more like: what is the best explanation for the input stimulus based on the model of the world?

[00:26:08]

Got it, got it. And that's where all the illusions come in. But that's an incredibly efficient process. So the feedback mechanism just helps you constantly hallucinate how the world should be based on your world model, and then look for novelty and try to explain it, hence why we detect movement really well, all these kinds of things. And this is at all different levels of the cortex?

[00:26:43]

You're saying this happens everywhere from the lowest level to the highest level? Yes. In fact, feedback connections are prevalent everywhere in the cortex. One way to think about it, and there's a lot of evidence for this, is inference. Basically, if you have a model of the world and then some evidence comes in, what we are doing is inference: you are trying to explain this evidence using your model of the world.

[00:27:12]

And this inference includes projecting your model onto the evidence and taking the evidence back into the model, in an iterative procedure. This iterative procedure is what happens using the feedforward and feedback propagation. Feedback affects what you see in the world, and it also affects feedforward propagation, and examples are everywhere. We see these kinds of things everywhere: the idea that there can be multiple competing hypotheses in your model trying to explain the same evidence, and then you have to make them compete.

[00:27:53]

And one hypothesis will explain away the other hypothesis through this competition process.

[00:27:59]

So you have competing hypotheses of the world that try to explain the same evidence. What do you mean by explain away?

[00:28:07]

So this is a classic example in graphical models, probabilistic models. What are those?

[00:28:16]

OK, I think it's useful to mention, because we'll talk about them more. Yeah. So neural networks are one class of machine learning models: you have a distributed set of nodes, which are called neurons, each one is doing a dot product, and you can approximate any function using this multilevel network of neurons. So that's a class of models used for function approximation. There is another class of models in machine learning called probabilistic graphical models.

[00:28:50]

You can think of them like this: each node in that model is a variable which is talking about something. It can be a variable representing whether an edge is present in the input or not, and at the top of the network a node can represent whether an object is present in the world or not. So it is another way of encoding knowledge, and once you encode the knowledge, you can do inference in the right way:

[00:29:29]

what is the best way to explain some set of evidence using this model that you encoded? When you encode the model, you are encoding the relationships between these different variables: how is the edge connected to the model of the object, how is the surface connected to the model of the object? And of course, this is a very distributed, complicated model, and inference is: how do you explain a piece of evidence when a set of stimuli comes in?

[00:29:58]

If somebody tells me there is a 50 percent probability that there is an edge here in this part of the image, how does that affect my belief about whether there is a square present in the image? This is the process of inference. One example of inference is explaining away, the effect between multiple causes. Graphical models can be used to represent causality in the world. So let's say your alarm at home can be triggered by a burglar getting into your house, or it can be triggered by an earthquake; both can cause the alarm to go off.

[00:30:45]

So now you're in your office, you hear the burglar alarm going off, and you are heading home thinking that a burglar got in. But while driving home, if you hear on the radio that there was an earthquake in the vicinity, now the strength of evidence for a burglar getting into your house is diminished, because that piece of evidence is explained by the earthquake being present. So think about these two causes explaining a lower level variable, which is the alarm.

[00:31:18]

Now, what we're seeing is that there is evidence coming in from below for the alarm being present, and initially it was flowing to a burglar being present. But now, since there is side evidence for the other cause, that other cause explains away this evidence, and the evidence will now flow to the other cause. This is two competing causes trying to explain the same evidence, and the brain has a similar kind of mechanism for doing this.
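To make the explaining-away effect concrete, here is a minimal sketch in Python of the classic burglary/earthquake/alarm network he describes. The prior and conditional probabilities are illustrative numbers, not anything quoted in the conversation; the point is only that hearing about the earthquake sharply lowers the posterior belief in a burglar, even though the alarm evidence itself hasn't changed.

```python
# Illustrative only: the probabilities are made up for the sketch.
P_BURGLAR = 0.001      # prior probability of a burglary
P_QUAKE   = 0.002      # prior probability of an earthquake

# P(alarm | burglar, earthquake) for each combination of the two causes
P_ALARM = {
    (True,  True):  0.95,
    (True,  False): 0.94,
    (False, True):  0.29,
    (False, False): 0.001,
}

def prior(burglar, quake):
    pb = P_BURGLAR if burglar else 1 - P_BURGLAR
    pq = P_QUAKE if quake else 1 - P_QUAKE
    return pb * pq

def posterior_burglar(quake_known=None):
    """P(burglar | alarm), optionally also conditioning on the earthquake report."""
    num = den = 0.0
    for b in (True, False):
        for q in (True, False):
            if quake_known is not None and q != quake_known:
                continue            # condition on the radio report
            joint = prior(b, q) * P_ALARM[(b, q)]
            den += joint
            if b:
                num += joint
    return num / den

print(posterior_burglar())                   # alarm only: burglary is a strong hypothesis
print(posterior_burglar(quake_known=True))   # alarm + earthquake report: belief collapses
```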

[00:31:49]

That's kind of interesting.

[00:31:50]

And how is that all encoded in the brain?

[00:31:55]

Like, where is the storage of information? Maybe to get a little bit more specific: is it in the hardware of the actual connections? Is it in chemical communication, in electrical communication? Do we know?

[00:32:11]

So this is a paper that we are bringing out soon, the cortical microcircuits paper that I sent you a draft of. Of course, a lot of it is still hypothesis. One hypothesis is that you can think of a cortical column as encoding a concept. An example of a concept is: is an edge present or not, or is an object present or not. So you can think of it as a binary random variable, the presence of an edge or not, or the presence of an object or not.

[00:32:49]

So each cortical column can be thought of as representing one concept, one variable, and the connections between these cortical columns are basically encoding the relationships between these random variables. And then there are connections within the cortical column: each cortical column is implemented using multiple layers of neurons with very rich structure. There are thousands of neurons in a cortical column, but the structure is similar across the different cortical columns.

[00:33:22]

And these cortical columns also connect to a substructure called the thalamus; all cortical columns pass through this substructure. So our hypothesis is that the connections between the cortical columns are where the knowledge is stored about how these different concepts connect to each other, and then the neurons inside the cortical column and in the thalamus, in combination, implement the actual computations of inference, which include explaining away and competition between the different hypotheses.

[00:34:01]

What is amazing is that neuroscientists have actually done experiments showing these things. They might not be putting them in the overall inference framework, but they will show things like: if I activate this high level neuron, it will inhibit, through this complicated loop through the thalamus, this other column. So they do those experiments. But do they use the terminology of concepts, for example?

[00:34:32]

I mean, is it something where it's easy to anthropomorphize and think about concepts, like you'd start moving into logic-based kinds of reasoning systems?

[00:34:47]

Should I think of concepts in that kind of way, or is it a lot messier, a lot more gray area, even more gray and more messy than the artificial neural network kinds of abstractions?

[00:35:05]

The easiest way is to think of it as a variable, right: a binary variable which is signaling the presence or absence of something.

[00:35:13]

But I guess what I'm asking is: is that something I'm supposed to think of as human interpretable? It doesn't need to be human interpretable.

[00:35:24]

There's no need for it to be human interpretable.

[00:35:27]

But it's almost like you will be able to find some interpretation of it, because it is connected to other things that you know.

[00:35:39]

And the point is, it's useful somehow. Yeah, it's useful as an entity in the graph, in connecting to the other entities that are, let's call them, concepts.

[00:35:52]

OK. So, by the way, what are these cortical microcircuits?

[00:35:57]

Cortical microcircuits: that's what neuroscientists use to talk about the circuits within a level of the cortex. So think of it in artificial neural network terms. People talk about the architecture of the network, how many layers to build, what the fan-out is, etc. That is the macro architecture. Then within a layer of the neural network: well, the cortical network is much more structured within a level; there's a lot more intricate structure there.

[00:36:36]

Even within an artificial neural network you can think of feature detection plus pooling as one level, and that is kind of a microcircuit; it's much more complex in the real brain. So within a level, what is the circuitry within a column of the cortex and between the layers of the cortex? That's the microcircuitry.

[00:36:58]

I love that terminology. Machine learning people don't use the circuit terminology; they should, it's so nice. OK, so that's the cortical microcircuit. So what can we say, what does the paper you're working on propose about these cortical microcircuits?

[00:37:21]

So this is a fully functional model for the microcircuits of the visual cortex.

[00:37:28]

So the paper, and your idea in our discussion now, is focusing on vision, the visual cortex. Yeah. OK.

[00:37:36]

This is a model that says this is how vision works, but this is a hypothesis. OK, so let me step back a bit. We looked at neuroscience for insights on how to build a model, and we synthesized all those insights into a computational model. This is the recursive cortical network model that we used for breaking CAPTCHAs, and we are using the same model for robotic picking and tracking of objects.

[00:38:08]

And that, again, is a vision system, a computer vision system that takes in images and outputs what?

[00:38:15]

On one side, it outputs the class of the image and also segments the image. And you can also ask it further queries: where is the edge of the object, what is the interior of the object? So it's a model that you build to answer multiple questions. You are not trying to build a model for just classification or just segmentation; it's a joint model that can do multiple things. So that's the model that we built using insights from neuroscience.

[00:38:47]

And some of those insights are: what is the role of feedback connections, what is the role of lateral connections? All those things went into the model. The model actually uses feedback connections, all of these ideas from neuroscience.

[00:39:01]

So what the heck is a recursive cortical network? What are the architecture approaches, the interesting aspects here, of what is essentially a brain-inspired approach to computer vision?

[00:39:13]

Yeah. So there are multiple layers to this question; I can go from the very top and then zoom in. One important constraint that went into the model is that you should not think of vision as something in isolation. We should not think of perception as just a preprocessor for cognition; perception and cognition are interconnected, so you should not think of one problem in isolation from the other. That means if you finally want a system that understands concepts about the world, that can learn a very conceptual model of the world and can connect to language, you need to think all the way through and make sure that your perception system is compatible with your cognition system and language system and all of them.

[00:40:05]

And one aspect of that is top-down controllability. What does that mean? So think of it like this: you can close your eyes and think about the details of one object, right? I can zoom in further and further. So think of the bottle in front of me; now you can think about, OK, what the cap of that bottle looks like.

[00:40:31]

You can think about what the top of that cap looks like, you can think about what will happen if something hits it. So you can manipulate your visual knowledge using cognition in arbitrary ways. Yes. And so this is top-down controllability, being able to simulate scenarios in the world.

[00:40:58]

So you're not just a passive player in this perception game.

[00:41:03]

You can control it; you have imagination. Correct.

[00:41:08]

So basically you have a generative network, which is a model, and it is not just some arbitrary generative network. It has to be built in a way that it is controllable top-down. It is not just trying to generate a whole picture at once; it's not trying to generate photorealistic renderings of the world. You don't have good photorealistic models of the world.

[00:41:30]

Human brains do not have them. If I, for example, ask you the question, what is the color of the letter E in the Google logo, you have no idea, although you have seen it hundreds, millions of times. So our model is not photorealistic, but it has other properties: we can manipulate it. You can think about filling in a different color in that logo.

[00:41:56]

You can think about expanding the letter E; you can imagine the consequences of actions that you have never performed. So these are the kinds of characteristics the generative model needs to have, and this is one constraint that went into our model. When you read just the perception side of the paper, it is not obvious that this was a constraint that went into the model, this top-down controllability of the generative model.

[00:42:23]

So what does top-down controllability in a model look like? It's a really interesting concept, a fascinating concept.

[00:42:32]

Is that what the recursive part gives you, or how do you get that?

[00:42:38]

Quite a few things. It's about how the model factorizes: what different pieces of the puzzle the model is representing. So in the RCN network, the background of an image is modeled separately from the foreground of the image. The objects are separate from the background; they are different entities.

[00:43:02]

So there's a kind of segmentation that's built in fundamentally.

[00:43:06]

And then even that object is composed of parts. Another one is that the shape of the object is modeled differently from the texture of the object. Got it. So there are these priors. You know, François Chollet, a friend of the show,

[00:43:28]

he developed this IQ-test type of thing, the ARC challenge, and it's kind of cool that there are these concept priors, the priors that you bring to the table in order to be able to reason about basic shapes and things in the IQ test. So here you're making it quite explicit: here are the things, these are distinct things, you should be able to model.

[00:43:57]

Keep in mind that you can derive this from much more general principles. You don't need to explicitly put it in as objects versus foreground versus background, or surface versus texture. These are derivable from more fundamental principles, like the property of continuity of natural signals. What's the property of continuity of natural signals? By the way, that sounds very poetic. So you're saying there are some low-level properties from which emerges

[00:44:31]

the idea that shapes should be different, that this should be a part of an object, that there should be, I mean. Exactly. Foreground versus background, that there's an object, all these things that, it's kind of crazy, we humans I guess evolved to have because it's useful for us to perceive the world.

[00:44:50]

And it derives mostly from the properties of natural signals. Yeah. And so natural signals,

[00:44:57]

natural signals are the kinds of things we perceive in the natural world. I don't know why that sounds so beautiful, natural signals. Yeah.

[00:45:06]

As opposed to a QR code, which is an artificial signal that we created. Humans are not very good at classifying QR codes. We are very good at seeing something as a cat or a dog, but not very good at QR codes, whereas computers are very good at classifying QR codes. So our visual system is tuned for natural signals, and there are fundamental assumptions in the architecture that are derived from natural signal properties.

[00:45:32]

I wonder, when you take hallucinogenic drugs, does that go into natural, or is that closer to the QR code? It's still natural, yeah, because it is still operating using our brains, by the way.

[00:45:45]

And on that topic, I haven't been following, but I think they're becoming legalized, and I can't wait until they become legalized to the degree that vision science researchers could study it.

[00:45:57]

Yeah, to modify it through medical, chemical ways. There could be ethical concerns, but that's another way to study the brain, to be able to chemically modify it. It's probably a very long way away to figure out how to do it ethically.

[00:46:16]

Yeah, but I think there are studies on that already. Already? Yeah, I think so, because it's not unethical to give it to rats. Oh, that's true, that's true.

[00:46:29]

There are a lot of drugged-up rats out there. OK, so,

[00:46:33]

so there are these low-level things from natural signals from which these properties will emerge.

[00:46:46]

Yes, but it is still a very hard problem, how to encode that. So you mentioned the priors François wanted to encode in the abstract reasoning challenge; it is not straightforward how to encode those priors. Some of those challenges, the object completion challenges, are things that we purely use our visual system to do. It looks like abstract reasoning, but it is purely an output of the vision system.

[00:47:19]

For example, completing the corners of the triangle, completing the lines of the triangle: that is purely a visual system property, with no abstract reasoning involved. It uses all these priors, but they are stored in our visual system in a particular way that is amenable to inference. And that is one of the things we tackled in the model, basically saying: look, this is the prior knowledge which will be derived from the world.

[00:47:46]

But then, how is that prior knowledge represented in the model such that inference, when some piece of evidence comes in, can be done very efficiently and in a very distributed way? Because there are so many ways of representing knowledge which are not amenable to very quick inference, quick lookups. So that's one core part of what we tackled in the RCN model: how do you encode visual knowledge to do very quick inference? Can you maybe comment,

[00:48:21]

for folks listening to this who may be familiar with different kinds of neural network architectures, what are we talking about with the RCN? What does the architecture look like? What are the different components? Is it close to neural networks, or far away from neural networks?

[00:48:39]

What does it look like? Yeah, so you can think of the delta between the model and a convolutional neural network, if people are familiar with convolutional networks. Convolutional networks have this feedforward processing cascade of feature detectors and pooling, and that is repeated in the hierarchy, in a multilevel system. If you want an intuitive idea of what is happening: feature detectors are detecting interesting co-occurrences in the input. It can be a line, a corner, an eye, or a piece of texture, etc.,

[00:49:18]

and the pooling neurons are pooling over local transformations of that, making it invariant to local transformations. So that is the structure of a convolutional neural network. The recursive cortical network has a similar structure when you look at just the feedforward pathway. But in addition to that, it is also structured in a way that it is generative, so that again you can run it backward and combine the forward with the backward. Another aspect it has is lateral connections.

[00:49:51]

These lateral connections: so if you have an edge neuron here and an edge neuron there, there are connections between these edges. It is not just feedforward connections; there is something between the nodes representing these edges, which is there to enforce compatibility between them. It's a constraint: basically, if you do just feature detection followed by pooling, then your transformations in different parts of the visual field are not coordinated.

[00:50:23]

Mm-hmm. And so when you generate from the model, you will create jagged things and uncoordinated transformations. So these lateral connections are coordinating the transformations.
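As a rough illustration of the feedforward building block being contrasted here, feature detection followed by pooling, here is a minimal sketch with a single hand-made edge filter and toy data. The generative (backward) pass and the lateral compatibility constraints that the recursive cortical network adds on top are not shown; this is only the convolution-plus-pooling stage, and the filter and input are assumptions for the example.

```python
# Minimal sketch of "feature detection followed by pooling", the stage that is
# repeated level after level in a convolutional network.
import numpy as np

def detect_feature(image, kernel):
    """Cross-correlation (what CNNs call convolution): responds strongly where
    the local patch matches the kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Pooling: keep the strongest response in each local window, which makes the
    representation invariant to small local shifts (but, as noted above, leaves
    transformations in different parts of the visual field uncoordinated)."""
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

vertical_edge = np.array([[1.0, -1.0],
                          [1.0, -1.0]])      # toy feature detector
image = np.random.rand(8, 8)                 # toy input
pooled = max_pool(detect_feature(image, vertical_edge))
print(pooled.shape)                          # one level of the feedforward cascade
```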

[00:50:39]

Is the whole thing still differentiable? Oh no, it's not. It's not trained using backprop.

[00:50:47]

OK, that's really important. So there's this feedforward pass, there are feedback mechanisms, there's some interesting connectivity. Is it still layered? Yes, there are multiple layers.

[00:51:00]

OK, very, very interesting. And the lateral connections between adjacent nodes serve as constraints that keep the thing stable. Correct. OK, so what else?

[00:51:15]

And then there's this idea of doing inference; a neural network does not do inference on the fly. An example of why this inference is important: one of the first applications that we showed in the paper was to crack text-based CAPTCHAs. What are CAPTCHAs, by the way?

[00:51:37]

Yeah. By the way, one of the most awesome terms, which people don't use anymore, is human computation. I love this term. The guy who created CAPTCHAs, I think, came up with this term.

[00:51:48]

Yeah, I love it. Yeah.

[00:51:51]

So what are CAPTCHAs? CAPTCHAs are those things that you fill in when, if you're opening a new account on Google, they show you a picture, usually a set of letters, and you have to figure out what the individual characters are and type them in. And the reason CAPTCHAs exist is because Google or Twitter do not want automatic creation of accounts. You could use a computer to create millions of accounts and use them for nefarious purposes.

[00:52:28]

So you want to make sure that, to the extent possible, the interaction their system is having is with a human. It's called a human interaction proof; a CAPTCHA is a human interaction proof. So CAPTCHAs are, by design, things that are easy for humans to solve but hard for computers, hard for robots. And text-based CAPTCHAs were the ones prevalent around 2014, because at that time those CAPTCHAs were very hard for computers to crack.

[00:53:02]

Even now they are, actually, in the sense that an arbitrary text-based CAPTCHA will be unsolvable even now. But with the techniques that we have developed, you can quickly develop a mechanism that solves that CAPTCHA.

[00:53:15]

Oh, and they've probably gotten a lot harder, too; people have been getting cleverer and cleverer at generating these CAPTCHAs. So, OK, one of the things you tested on is these kinds of CAPTCHAs, in 2014, 2015 or so. Right.

[00:53:31]

So, by the way, why CAPTCHAs?

[00:53:36]

Yeah. Even now, I would say CAPTCHA is a very, very good challenge problem if you want to understand how human perception works and if you want to build systems that work like the human brain. And I wouldn't say CAPTCHA is a solved problem; we have cracked the fundamental defense of CAPTCHAs, but it is not solved in the way that humans solve it. I can give an example: I can take a five year old child who has just learned characters and show them any new CAPTCHA that we create.

[00:54:10]

They will be able to solve it. I can show you pretty much any new CAPTCHA from any new website, and you'll be able to solve it without getting any training examples from that particular style of CAPTCHA.

[00:54:23]

You're assuming I'm human. Yes, that's right, if you are human; otherwise I will be able to figure that out using this one. But this whole podcast is just a Turing test, a long Turing test. I see. Yeah.

[00:54:39]

Right, so assuming you're human, humans can figure it out with very few or no training examples, not any examples from that particular style of CAPTCHA. Even now, this is unreachable for current deep learning systems. Basically, I don't think a system exists where you can say, train on whatever you want, and then say, hey, I will show you a new CAPTCHA which I did not show you in the training setup:

[00:55:08]

will the system be able to solve it? That still doesn't exist. So that is the magic of human perception, and Doug Hofstadter put this very beautifully in one of his talks: the central problem in AI is, what is the letter A? If you can build a system that can reliably detect all the variations of the letter A, you don't even need to go to B and C. You don't even need to go to strings of characters.

[00:55:41]

So that is the spirit with which we tackled that. What does he mean by that? Is it, like, without training examples, try to figure out the fundamental elements that make up the letter A? In all of its forms, in all of its forms. It can be made with two humans standing leaning against each other, holding hands; it can be made of leaves; it can be

[00:56:09]

Yeah.

[00:56:09]

You might have to understand everything about this world in order to understand the letter A. Yeah, exactly. So it's common sense reasoning, essentially. Right.

[00:56:18]

So to finally say that we have really solved CAPTCHA, you have to solve the whole problem.

[00:56:26]

Yeah. OK, so how does this kind of RCN architecture help us do a better job of that kind of thing?

[00:56:36]

So as I mentioned, one of the important things was being able to do inference, being able to do inference dynamically.

[00:56:44]

Can you clarify what you mean? Because you said neural networks don't do inference. So what do you mean by inference in this context?

[00:56:53]

OK, so in CAPTCHAs, what they do to confuse people is to make the characters crowd together. Yes. And when you make the characters crowd together, what happens is that you will start seeing combinations of characters as some other new character or an existing character. So if you put an R and an N together, it will start looking like an M. So locally there is very strong evidence for it being some incorrect character, but globally, the only explanation that fits together is something different from what you find locally.

[00:57:33]

Yes. So this is inference: you are basically taking local evidence and putting it in the global context, and often coming to a conclusion locally which conflicts with the local information.

[00:57:47]

So you mean inference like the word is used when you talk about reasoning, for example, as opposed to inference, the word used with artificial neural networks, which is a single pass through the network.

[00:58:01]

OK, so it's basically some basic form of reasoning, like integrating how local things fit into the global picture. And things like explaining away come into this, because you are explaining that piece of evidence as something else, because globally that's the only thing that makes sense.

[00:58:23]

Now, you can amortize this inference: in a neural network, if you want to do this, you can brute force it. You can just show it all the combinations of things that you want your network to work over, and just train the hell out of that neural network. It will look like it is doing inference on the fly, but it is really just doing amortized inference.

[00:58:52]

Because you have shown it a lot of these combinations during training time. What you want is to be able to do dynamic inference, rather than having to show all those combinations at training time. And that's something we emphasized in the model.

[00:59:09]

What does dynamic inference mean? Does that have to do with the feedback thing? I'm trying to visualize what dynamic inference would be in this case. What is it doing with the input the first time, what's changing temporally, what are the dynamics of this inference process?

[00:59:32]

So you can think of it as: at the top of the model, you have the characters that you trained on. They are the causes; you are trying to explain the pixels using the characters as the causes. The characters are the things that cause the pixels. Yeah, so there's a causality thing. The reason you mention causality, I guess, is because there's a temporal aspect to this whole thing?

[00:59:58]

In this particular case, the temporal aspect is not important. It is more like: if I turn the character on, the pixels will turn on. Yeah, a little bit after.

[01:00:08]

But OK, so it's causality in the sense of logic, causality in the sense of inference.

[01:00:14]

OK. The dynamics is that even though locally it will look like, OK, this is an A, when I look at just that part of the image it looks like an A, when I look at it in the context of all the other causes, an A might not be something that makes sense. So that is something you have to recursively figure out. Yeah.

[01:00:39]

OK, so this thing performed pretty well on the CAPTCHAs. Correct. And,

[01:00:46]

I mean, is there some kind of interesting intuition you can provide about why it did well, what it looked like? Are there visualizations that could be interpretable to us humans? Yes, yes.

[01:00:57]

So the good thing about the model is that it is not just doing classification; it is providing a full explanation for the scene. When it operates on a scene, it comes back and says: look, this part is the A, and these are the pixels that turned on, these are the pixels in the input that make me think it is an A.

[01:01:25]

And also, these are the portions I hallucinated. It provides a complete explanation of that form: these are the contours, this is the interior, and this is in front of this other object. That's the kind of explanation the inference network provides, so it is useful and interpretable. And the kinds of errors it makes, I don't want to read too much into it, but the kinds of errors the network makes are very similar to the kinds of errors humans would make in a similar situation.

[01:02:07]

There's something about the structure that feels reminiscent of the way the human visual system works. I mean, how hard-coded is this to the CAPTCHA problem, this idea?

[01:02:21]

Not really hard-coded, because the assumptions I mentioned are general, and those things can be applied in many situations involving natural signals. It's the foreground versus background factorization and the factorization of the surfaces versus the contours. These are all generally applicable assumptions in vision.

[01:02:46]

So why attack the CAPTCHA problem, which is quite unique in the computer vision context, versus the traditional benchmarks like ImageNet and all those kinds of image classification or even segmentation tasks? What's your thinking about those kinds of benchmarks?

[01:03:08]

In this context, those benchmarks are useful for deep learning algorithms, where the setting that deep learning works in is: here is my huge training set, here is my test set. The training set is almost a hundred, a thousand times bigger than the test set in many, many cases. What we wanted to do was invert that, so that the training set is much smaller than the test set. And CAPTCHA is a problem that is, by definition, hard for computers, and it has these good properties of strong generalization, strong out-of-training-distribution generalization.

[01:03:55]

If you are interested in studying that, and in having your model have that property, then it's a good dataset to tackle.

[01:04:04]

So have you attempted — I believe there's quite a growing body of work looking at MNIST and ImageNet with very little training data.

[01:04:15]

So the basic challenge is: what tiny fraction of the training set can we take and still do a reasonable job on the classification task?

[01:04:28]

Have you explored that angle on these classic benchmarks?

[01:04:32]

Yes, we did. It's not just CAPTCHA — there were also multiple versions of MNIST, including the standard version, where we inverted the problem: rather than training on sixty thousand training examples, how quickly can you get to high accuracy with very little training data? Is there some performance number you remember — how well did it do?

[01:05:02]

How many examples did it need?

[01:05:04]

Yeah, I remember that it was on the order of tens or hundreds of examples to get to ninety-five percent accuracy. And it was definitely better than the other systems out there at that time.
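For readers who want to see what this "inverted" benchmark looks like in practice, here is a rough sketch of the experimental setup only — it uses scikit-learn's small digits dataset as a stand-in for MNIST and an ordinary classifier, so the numbers it prints say nothing about RCN itself; it just measures accuracy as a function of a tiny, growing training set against a much larger test set.

```python
# Sketch of a small-data learning curve: train on tens/hundreds of examples,
# test on a much larger held-out set (illustrative setup, not RCN).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
# Hold out most of the data as the test set; train on a tiny, growing subset.
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=0.8, stratify=y, random_state=0)

for n_train in [10, 30, 100, 300]:
    X_tr, _, y_tr, _ = train_test_split(
        X_pool, y_pool, train_size=n_train, stratify=y_pool, random_state=0)
    clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
    print(f"{n_train:4d} training examples -> test accuracy {clf.score(X_test, y_test):.3f}")
```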

[01:05:21]

At that time, yeah. People are really pushing that now — I think that's a really interesting space, actually. I think there are actual names for MNIST variants with different sizes of training sets. People are attacking this problem; I think it's super interesting. It's funny how MNIST will probably be with us all the way to AGI — it just doesn't die.

[01:05:49]

It's a clean, simple dataset with which to study the fundamentals of learning, which is interesting. Not enough people — I don't know, maybe you can correct me, but I feel like these small-data settings just don't show up as often in papers as they probably should.

[01:06:05]

That's correct. Yeah. Usually these things have a momentum: once something gets established as the standard benchmark, there are dynamics of how students operate and how the academic system works that push people to keep breaking that benchmark. Yeah, nobody wants to think outside the box. OK, so good performance on the CAPTCHAs. What else is interesting on the RCN side before we talk about cortical microcircuits?

[01:06:43]

Yeah, so the same model — the important part of the model was that it trains very quickly with very little training data, and it's quite robust to out-of-distribution perturbations. And we are using that very fruitfully at Vicarious in many of the robotics tasks we are solving.

[01:07:06]

Well, let me ask you this kind of touchy question.

[01:07:09]

I have to kind of ask this — I spoke with your friend and colleague Jeff Hawkins, and there's a bit of this with your brain-inspired stuff.

[01:07:21]

Yeah. And you make big claims. Big sexy claims. Yeah.

[01:07:25]

There are, you know, critics — I mean, the machine learning subreddit.

[01:07:32]

Don't get me started on those people. Their criticism is good, but they're a bit over the top. There's quite a bit of skepticism and criticism: is this work really as good as it promises to be? Do you have thoughts on that kind of skepticism, comments on the kind of criticism you might have received — is this approach legit, is this a promising approach?

[01:08:01]

Or at least as promising as it's advertised to be.

[01:08:07]

Yeah, I can comment on it. Our RCN paper was published in Science, which I would argue is a very high-quality journal, very hard to publish in, and usually that is indicative of the quality of the work. And I am very certain that the ideas we brought together in that paper — the importance of feedback connections, recursive inference, lateral connections, arriving at the best explanation of the scene as the problem to solve, trying to solve recognition and segmentation jointly in a way that is compatible with higher-level cognition and top-down attention — all those ideas that we brought together into something coherent and workable, tackling a challenging problem, I think that will stay.

[01:08:58]

And that contribution I stand by. I can tell you a story which is funny in this context. If you read the abstract of the paper, the argument we are putting in is: look, current deep learning systems take a lot of training data, they don't use these insights, and here is a new model which is not a deep neural network — it's a graphical model.

[01:09:22]

It does inference — that is what the paper is. Now, once the paper was accepted and everything, it went to the press department at Science. We didn't publish any press release ourselves; it was given to the press department. And what was the press release they wrote up? "A new deep learning model solves CAPTCHAs." So you can see what was being hyped in that thing.

[01:09:50]

Right. So there is a dynamic in the community — it especially happens when there are lots of new people coming into the field and they get attracted to one thing, and some people are trying to think differently from that. I think skepticism in science is important and very much required. But often it's not really skepticism — it's mostly a bandwagon effect that is happening.

[01:10:23]

But it's not even that. I'll tell you what they react to, which I'm sensitive to as well.

[01:10:29]

If you look at companies — OpenAI, DeepMind — there's a little bit of a race to the top in hype, right?

[01:10:42]

It's like it doesn't pay off to be humble, and the press is just irresponsible. Don't get me started on the state of journalism today. It seems like the people who write articles about these things have literally not even spent an hour on the Wikipedia article about what a neural network is. They haven't invested even in the language — it's laziness. It's "robots beat humans" — they write that kind of stuff.

[01:11:22]

And then, of course, the researchers are quite sensitive to that, because it gets a lot of attention — why did this work get so much attention? That's over the top, and people get really sensitive. The same kind of criticism happened with OpenAI's work on the Rubik's Cube with the robot hand, same with GPT-2 and 3, and the same thing with DeepMind's AlphaZero.

[01:11:51]

Yeah, I'm sensitive to it. And of course, with your work — you mentioned deep learning — there's something super sexy to the public about brain-inspired approaches.

[01:12:02]

That immediately grabs people's imagination — not even neural networks, but really brain-like neural networks. That seems really compelling to people, and to me as well, as a narrative.

[01:12:20]

And so people hook onto that. And sometimes the skepticism engine turns on in the research community and they're skeptical.

[01:12:32]

But putting aside the actual performance on CAPTCHAs or on any dataset — to me, all these datasets are useless anyway; it's nice to have them, but in the grand scheme of things they're silly toy examples — the point is the intuition about the ideas: just like you mentioned, bringing the ideas together in a unique way. Is there something there? Is there some value there that is going to stand the test of time?

[01:13:03]

Yes. And that's the hope.

[01:13:04]

That's the hope — my confidence in that is very high. I don't treat brain-inspired as a marketing tool. I am looking into the details of biology, I'm puzzling over those things, I am grappling with those things. So it is not a marketing tool at all. You can use it as a marketing tool, and people often do, and you can get lumped in with them. And when people don't understand how we are approaching the problem, it is easy to be misunderstood and to think of it as purely marketing.

[01:13:41]

But that's not the way we are.

[01:13:43]

So as a scientist, you really believe that if we stick to really understanding the brain — that you should constantly meditate on how the brain does this — that's going to be really helpful for engineering intelligent systems.

[01:14:03]

Yes. I think it is one input, and it is helpful, but you should know when to deviate from it, too. An example is convolutional neural networks. Convolution is not an operation the brain implements — the visual cortex is not convolutional. The visual cortex has local receptive fields, local connectivity, but there is no translation invariance in the weights of the network in the visual cortex.

[01:14:40]

That is a computational trick — a very good engineering trick that we use for sharing the training between the different nodes. And that trick will be with us for some time. It will go away when we have a robot with eyes and a head that move; then that trick will not be useful. So the brain doesn't have translational invariance — it has a focal point, a thing it focuses on, correct?

[01:15:13]

It does. It has a fovea, and outside the fovea the rest of the field is not a copy of the same weights — the weights in the center are very different from the weights in the periphery.

[01:15:25]

Yes, at the periphery. I actually wrote a paper and got the chance to really study peripheral vision, which is a fascinating, very under-understood thing — at every level, the brain does some funky stuff with the periphery. So it's another kind of trick than convolution — convolution in neural networks is a trick for efficiency, an efficiency trick.

[01:16:02]

And the brain does a whole other kind of thing.

[01:16:04]

I guess so. So you need to understand the principles of processing so that you can still apply engineering tricks where you want to. You don't want to be slavishly mimicking everything the brain does. So it should be one input, and I think it is extremely helpful, but the point should be really understanding it, so that you know when to deviate from it.
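To make the weight-sharing point concrete, here is a minimal numpy sketch — purely illustrative, with made-up sizes — contrasting a convolutional layer, which re-uses one shared filter at every location (the engineering trick), with a "locally connected" layer, closer to a foveated cortex: local receptive fields, but a different weight set at each location.

```python
# Shared-weight convolution vs. unshared locally connected layer (1-D toy).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(10,))            # a 1-D "image"
k = 3                                 # receptive-field size
n_out = len(x) - k + 1

shared_w = rng.normal(size=(k,))          # one filter, shared everywhere
local_w = rng.normal(size=(n_out, k))     # a separate filter per position

conv_out = np.array([shared_w @ x[i:i + k] for i in range(n_out)])
local_out = np.array([local_w[i] @ x[i:i + k] for i in range(n_out)])

print("parameters, shared (convolution):", shared_w.size)   # k
print("parameters, unshared (local):    ", local_w.size)    # n_out * k
```

The output dimensions are identical; the difference is only that the unshared version has many more parameters and no built-in translation invariance — which is the trade-off being discussed.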

[01:16:28]

OK, that's really cool — that was work from a few years ago. So you did work at Numenta with Jeff Hawkins on hierarchical temporal memory. If you could give a brief history: how has your view of models of the brain changed over the years leading up to now? Were there interesting adjustments to your understanding of the brain, or is it all just building on top of itself in terms of the higher-level ideas, especially the ones Jeff wrote about in his book?

[01:17:06]

If you blur out the details of what Jeff wrote in On Intelligence — if you just zoom out — at the high level, things are, I would say, consistent with what he wrote about. But many things will be consistent with that, because it's a blur. Deep learning systems are also multi-level, hierarchical, all of those things. But in terms of the details, a lot of things are different, and those details matter a lot.

[01:17:39]

So one point of difference I had with Jeff was how to approach it — how much biological plausibility and realism do you want in the learning algorithms? When I was there — this was almost ten years ago now.

[01:17:59]

I don't know what Jeff thinks now. But ten years ago, the difference was that I did not want to be so constrained in saying that my learning algorithms need to be biologically plausible, based on whatever notion of biological plausibility was available at that time. To me, that is a dangerous call to make, because we are discovering more and more things about the brain all the time — new biophysical mechanisms, new channels are being discovered all the time.

[01:18:32]

So I don't want to up-front constrain —

[01:18:35]

— a learning algorithm just because we don't really understand the full biophysics, or whatever, of how the brain learns.

[01:18:45]

Exactly, exactly.

[01:18:46]

Let me ask a tangent question: what's your sense of our best understanding of how the brain learns?

[01:18:54]

So, things like backpropagation, credit assignment — many of these learning algorithms have things in common. Backpropagation is one way of doing credit assignment. There is another algorithm called expectation-maximization, which is another adjustment algorithm.

[01:19:13]

But is it your sense that the brain does something like this? It has to — there is no way around it, in the sense that you do have to adjust the connections.

[01:19:23]

And you're saying credit assignment — you have to reward the connections that were useful in making the correct prediction. Yeah. But it doesn't have to be differentiable.

[01:19:36]

Yeah. But you have to have a model that you start with, data comes in, and you have to have a way of adjusting the model so that it better fits the data. That is all of learning. Some of that can be done using backprop; some of it can be done using very local graph changes. Many of these learning algorithms have similar update properties locally, in terms of what the neurons need to do locally.
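As a small illustration of "adjusting the model so it better fits the data" by two different credit-assignment styles, here is a hedged numpy sketch — not a claim about what the brain does — contrasting a gradient step on a log-likelihood with an expectation-maximization step on a tiny mixture model.

```python
# Two update styles for fitting a model to data: gradient ascent and EM.
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

# (a) Gradient-style credit assignment: nudge a single Gaussian's mean uphill
# on the log-likelihood (the gradient w.r.t. the mean is the average residual).
mu = 0.0
for _ in range(50):
    mu += 0.1 * np.mean(data - mu)
print("gradient fit of one mean:", round(mu, 3))

# (b) EM-style credit assignment for a two-component mixture: each point
# "credits" the component most responsible for it, then the means are re-estimated.
mu1, mu2 = -1.0, 1.0
for _ in range(50):
    # responsibility of component 1 (equal priors, unit variances)
    r = 1.0 / (1.0 + np.exp(-((data - mu2) ** 2 - (data - mu1) ** 2) / 2))
    mu1 = np.sum(r * data) / np.sum(r)
    mu2 = np.sum((1 - r) * data) / np.sum(1 - r)
print("EM fit of two means:", round(mu1, 3), round(mu2, 3))
```

Both loops end up with parameters that explain the data better than they started with; neither is differentiable end-to-end in the backprop sense, which is the point being made about not committing to one mechanism.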

[01:20:14]

I wonder if small differences in learning algorithms can have huge differences in the actual effect — the dynamics of it, like spiking, whether credit assignment is like lightning versus a rainstorm, whether there's a looping, local kind of credit assignment, whether there is regularization.

[01:20:46]

Like how it injects robustness into the whole thing — whether it's chemical or electrical or mechanical, all those kinds of things.

[01:21:00]

Yeah, I feel like those differences could be essential.

[01:21:07]

It could be. It's just that, on the learning side, we don't know enough to say that is definitely not the way the brain does it. Got it.

[01:21:19]

So you don't want to be stuck to it. Yeah.

[01:21:21]

So you've been open-minded on that side of things. On the inference side — on the recognition side — I am much more amenable to being constrained, because it's much easier to do experiments: OK, here is a stimulus, how many steps did it take to get the answer? I can trace it back, I can understand the speed of that computation, and so on, much more readily on the inference side.

[01:21:45]

Got it. And you can't do those experiments as readily on the learning side. So let's go right into cortical microcircuits, then — what are the ideas beyond recursive cortical networks that you're looking at now?

[01:22:02]

So we have made a pass through multiple of those steps. As I mentioned earlier, we were looking at perception from the angle of cognition — it was not just perception for perception's sake. How do you connect it to cognition? How do you learn concepts? How do you learn abstract reasoning, similar to some of the things François Chollet has talked about? So we have taken one pass at it, basically asking: what is the basic cognitive architecture you need to have — one that has a perceptual system, a system that learns the dynamics of the world, and then something like a program-learning system on top of it to learn concepts?

[01:22:50]

So we built version 0.1 of that system. This was another Science Robotics paper — the title of that paper was something like "cognitive programs": how do you build cognitive programs?

[01:23:05]

And the application there was manipulation robotics.

[01:23:11]

So think of it like this. Suppose you want to communicate with a new person that you met — you don't know the language that person uses — and you want them to achieve some task. Say I want to tell them: hey, you need to pick up all the red cups from the kitchen counter and put them here. How do you communicate that? You can show pictures. You can basically say, look, this is the starting state of things.

[01:23:43]

This is the ending state. And what does the person need to understand from that? The person needs to understand what conceptually happened in those pictures, from the input-output pair. So we are looking at preverbal conceptual understanding, without language. How do you have a set of concepts that you can manipulate in your head, so that from a set of input and output images you can infer what is happening in those images?
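Here is an illustrative sketch of that input/output idea — not Vicarious's cognitive-programs system, and every name in it is invented. Objects have a color and a location, candidate "programs" are parameterized actions, and the program consistent with the before/after pair is found by enumeration and simulation.

```python
# Toy program induction from a before/after scene pair.
BEFORE = {"cup1": ("red", "counter"), "cup2": ("red", "counter"),
          "cup3": ("blue", "counter"), "plate": ("white", "table")}
AFTER  = {"cup1": ("red", "table"),   "cup2": ("red", "table"),
          "cup3": ("blue", "counter"), "plate": ("white", "table")}

def move_color(scene, color, dest):
    """Candidate program: move every object of `color` to `dest`."""
    return {name: (c, dest if c == color else loc)
            for name, (c, loc) in scene.items()}

def candidate_programs():
    colors = {c for c, _ in BEFORE.values()}
    places = ({loc for _, loc in BEFORE.values()}
              | {loc for _, loc in AFTER.values()})
    for color in colors:
        for dest in places:
            yield (f"move all {color} objects to the {dest}",
                   lambda s, c=color, d=dest: move_color(s, c, d))

# Keep only the programs whose simulated outcome matches the observed AFTER state.
consistent = [desc for desc, prog in candidate_programs() if prog(BEFORE) == AFTER]
print(consistent)   # ['move all red objects to the table']
```

The "concept" recovered here — move the red cups to the table — is never stated in words; it is inferred purely from the two states, which is the preverbal understanding being described.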

[01:24:14]

Got it — with concepts that are pre-language. So what does it mean for concepts to be pre-language?

[01:24:22]

Why — why is that distinction from language so important here?

[01:24:27]

So I want to make a distinction from concepts that are just learned from text. By brute-force processing of text, you can start extracting things like: OK, a cow is likely to be on grass.

[01:24:46]

Those kinds of things you can extract purely from text. But that's a simple association rather than a concept — an abstraction of something that happens in the real world, in a grounded way, such that I can simulate it in my mind and connect it back to the real world.

[01:25:06]

And you think concepts in the visual world are somehow lower level than language.

[01:25:16]

"Lower level" kind of makes it feel like it's unimportant. I would say the concepts in the visual and motor systems and the concept-learning system — if you cut off the language part, just what we learn by interacting with the world and the abstractions from that — that is a prerequisite for any real language understanding.

[01:25:44]

So you disagree with Chomsky, who says language is at the bottom of everything?

[01:25:49]

Yeah, I disagree with Chomsky completely, coming from the Universal Grammar side of things. Yeah.

[01:25:57]

So that was a paper in Science Robotics. Beyond the recursive cortical network, what are other interesting problems?

[01:26:03]

What are the open problems in brain-inspired approaches that you're thinking about?

[01:26:09]

I mean, everything is open — no problem is fully solved. But I think of perception as the first thing that you have to build, yet the last thing that will actually be solved, because if you do not build a perception system in the right way, you cannot build a concept system in the right way. So you have to build the perception system, however long that might take, learn concepts from there, and keep iterating. Perception will get solved fully only when perception, cognition, and language all work together.

[01:26:50]

Finally.

[01:26:51]

Great. We've talked a lot about perception, but maybe on the concept side — common sense, or just the general reasoning side — is there some intuition you can draw from the brain about how we could do that?

[01:27:08]

So I have this classic example. Suppose I give you a few sentences and then ask you a question following that — this is the natural language processing setting, right? I tell you: Sally pounded a nail into the ceiling. Hmm, OK, that's a sentence. Now I ask a question: was the nail horizontal or vertical? Vertical. OK, how did you answer that? I imagined Sally — it was kind of hard to imagine what the hell she was doing — but I imagined the visual of the whole situation.

[01:27:51]

Exactly. So here I posed the question in natural language, but you got the answer by actually simulating the scene.

[01:28:02]

Now, I can go into more and more detail: was Sally standing on something while doing this? Could she have been standing on a lifeboat to do this? I can ask more and more questions, and make you simulate the scene in more and more detail. All that knowledge you are accessing is stored — and it is not in your language system.

[01:28:25]

It is not — it was not just by reading text that you got that knowledge. It is stored from the everyday experiences you have had; by the age of five you have pretty much all of it, and it is stored in your visual system and motor system in a way such that it can be accessed through language. Got it — so language just sits on top of the whole visual cortex, and it does the whole feedback thing.

[01:28:55]

I mean, is all reasoning kind of connected to the perception system, then? And some of it —

[01:29:03]

You can still do a lot of it by quick associations without having to go into depth, and most of the time you will be right. You can just do quick associations, but I can easily create tricky situations for you where the quick association is wrong.

[01:29:19]

And then you have to actually run the simulation. So figuring out how these concepts connect to simulation — do you have a good idea of how to do that?

[01:29:31]

That's one of the problems we are working on, and the way we are approaching it is basically to say: language is simulation control, and your perceptual plus motor system is building a simulation of the world. That's basically the way we're approaching it. The first thing we built was a controllable perceptual system. Then we built schema networks, which is a controllable dynamics system. Then we built a concept-learning system that puts all these things together into programs — abstractions that you can run and simulate.

[01:30:12]

And now we are taking the step of connecting it to language. It will be very simple examples initially — it will not be GPT-3-like examples — but it will be grounded, simulation-based language.
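As a toy sketch of "language as simulation control" — everything here, from the hard-coded world model to the parsed control signals, is hypothetical and far simpler than the systems being described — a sentence configures a tiny simulation, and the question is answered by inspecting the simulated state rather than by text association, as in the nail-in-the-ceiling example above.

```python
# Answer a question by simulating the described scene, not by text lookup.
def simulate(action, obj, surface):
    """Minimal world model: a nail pounded into a surface ends up
    perpendicular to that surface."""
    surface_orientation = {"ceiling": "horizontal", "wall": "vertical",
                           "floor": "horizontal"}[surface]
    nail_orientation = ("vertical" if surface_orientation == "horizontal"
                        else "horizontal")
    return {"object": obj, "embedded_in": surface,
            "orientation": nail_orientation}

# "Sally pounded a nail into the ceiling."  ->  control signals for the simulator
state = simulate(action="pound", obj="nail", surface="ceiling")

# "Was the nail horizontal or vertical?"    ->  a query against the simulated state
print(state["orientation"])   # vertical
```

The answer "vertical" never appears in the input text; it falls out of the world model, which is the distinction being drawn against purely statistical text models.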

[01:30:25]

And the querying would be like a question-answering kind of thing?

[01:30:32]

It will be in some simple world initially, but it will be about: can the system connect the language, ground it in the right way, and run the right simulations to come up with the answer? And the goal is to try to do things that, for example, GPT-3 couldn't. Correct. Speaking of which, could we talk about GPT-3 a little bit? I think it's an interesting, thought-provoking set of ideas that OpenAI is pushing forward, and I think it's good for us to talk about the limits and the possibilities of that work.

[01:31:08]

So in general, what are your thoughts about this recently released, very large, one-hundred-seventy-five-billion-parameter language model?

[01:31:17]

I haven't directly evaluated it yet. From what I have seen on Twitter and from other people evaluating it, it looks very intriguing — I am very intrigued by some of the properties it is displaying. And of course, the text-generation part of that was already evident in GPT-2: it can generate coherent text over long distances. But the weaknesses are also pretty visible, in that it is not really carrying a world state around.

[01:31:50]

Sometimes you get sentences like "I went up the hill to reach the valley" — completely incompatible statements. Or when you're traveling from one place to another, it doesn't take into account the time of travel, things like that.

[01:32:06]

Those things, I think, will happen less in GPT-3 because it is trained on even more data and it can do even longer-distance coherence, but it will still have the fundamental limitation that it doesn't have a world model, and it can't run simulations in its head to find out whether something is true in the world or not.

[01:32:30]

So it's taking a huge amount of text from the Internet and forming a compressed representation. Do you think from that there could emerge something that's an approximation of a world model, which could essentially be used for reasoning?

[01:32:47]

And I'm not talking about GPT-3 — I'm talking about GPT-4, 5, and 10.

[01:32:54]

Yeah, they will look more impressive than GPT-3. But if you take that reasoning to the extreme — let me take the other extreme. If you read Shannon's book, he has a model of English text which is based on first-order Markov chains and second-order Markov chains. Something generated by a second-order Markov chain looks better than something from a first-order Markov chain. So does that mean the second-order Markov chain has a model of the world?

[01:33:30]

In some sense, yes, it does.

[01:33:32]

At that level, when you go to higher-order models or more sophisticated structure in the model, like the transformer networks, then yes, they have a model of the world in that sense. But it is not really a model of the world — it's a model of the text. It will have interesting properties and it will be useful, but just scaling it up is not going to give us AGI or natural language understanding or meaning.
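For readers who haven't seen the Shannon-style construction being referenced, here is a short sketch of a second-order character model trained by simple counting — the corpus and seed are made up for illustration. Its samples look more English-like than a first-order model's would, yet all it contains are statistics of the text, which is the point of the argument.

```python
# Second-order (two-character context) Markov model of text, Shannon-style.
import random
from collections import Counter, defaultdict

corpus = ("the cow is on the grass and the cat is on the mat "
          "and the dog is in the house and the cow eats the grass ") * 20

counts = defaultdict(Counter)
for i in range(len(corpus) - 2):
    counts[corpus[i:i + 2]][corpus[i + 2]] += 1   # P(next char | previous two chars)

random.seed(0)
state = corpus[:2]
out = state
for _ in range(80):
    chars, weights = zip(*counts[state].items())
    nxt = random.choices(chars, weights=weights)[0]
    out += nxt
    state = out[-2:]
print(out)
```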

[01:34:06]

The question is whether being forced to compress a very large amount of text forces you to construct things that are very much like concepts and meaning — whether it's a spectrum, actually.

[01:34:25]

Yeah. So in order to perform that kind of compression —

[01:34:31]

— maybe it will be forced to figure out abstractions which look awfully like the kinds of things we think of as concepts, as world models, as common sense. Is that possible?

[01:34:48]

No, I don't think it is possible, because the information is not there. But the information is there behind the text, right?

[01:34:56]

No — unless somebody has written down all the details about how everything works in the world, to absurd levels: it is easier to walk forward than backward, you have to open the door to go out of the room, doctors wear underwear — all these things, unless somebody has written them down somewhere, or somehow the program found them useful for compressing some other text, the information is not there.

[01:35:22]

So that's an argument that text is a lot lower fidelity than the experience of our physical world.

[01:35:30]

Like the picture-is-worth-a-thousand-words kind of thing.

[01:35:34]

Well, in this case even pictures aren't really enough. The richest aspect of the physical world isn't just pictures — it's the interactivity with the world, being able to interact. So I disagree — or maybe I agree with you — that a picture is worth a thousand words, but a thousand words are still just words.

[01:36:04]

You could say you could get at some of that with language. I wonder if there's some interactive element where a system could live in a text world — where it could be part of a chat, be part of talking to people.

[01:36:17]

It's interesting. Fundamentally, you're making a statement about the limitation of text.

[01:36:25]

OK, so let's say we have a text corpus that includes basically every experience we could possibly have.

[01:36:35]

I mean, a very large corpus of text, and also interactive components. I guess the question is whether the neural network architecture — these very simple transformers —

[01:36:46]

— if they had hundreds of trillions, or whatever comes after a trillion, parameters, whether that could store the information needed, architecturally.

[01:37:01]

Do you have thoughts about the limitations on that side of things, with neural networks?

[01:37:06]

I mean, a transformer is still a feedforward neural network. It has a very interesting architecture, which is good for text modeling and probably some aspects of video modeling, but it is still a purely feedforward architecture.

[01:37:20]

And you believe in the feedback mechanism, the recursion — and also causality: being able to do counterfactual reasoning, being able to do interventions, which are actions in the world. All those things require different kinds of models to be built. I don't think transformers capture that family. They are very good at statistical modeling of text, and they will become better and better with more data and bigger models, but that is only going to get you so far.

[01:37:57]

Finally — I had this joke on Twitter saying, hey, this is a model that read all of quantum mechanics and the theory of relativity, and we are asking it to do text completion, asking it to solve simple puzzles. That's not what you would ask such a system to do. We would want the system to do experiments.

[01:38:22]

Yeah — come up with a hypothesis, revise the hypothesis based on evidence from experiments, all those things, right?

[01:38:31]

Those are the things we want the system to do, and we don't have that yet.

[01:38:34]

Not solve simple puzzles — as impressive as something like generating a red button in HTML is, and those demos are all useful. I'm not dissing the usefulness of it.

[01:38:47]

By the way, I'm playing a little bit of devil's advocate here, so calm down, Internet.

[01:38:54]

I'm just curious in which ways a dumb but large neural network will surprise us.

[01:39:05]

Yeah.

[01:39:06]

I completely agree with your intuition — it's just that I don't want to dogmatically, a hundred percent, put all the chips there, because we've been surprised so much. Even the current GPT-2 and 3 are so surprising.

[01:39:25]

Yeah, the self-play mechanisms of AlphaZero are really surprising. The fact that reinforcement learning works at all is, to me, really surprising. The fact that neural networks work at all is quite surprising, given how nonlinear the space is — the fact that they're able to find local minima that are at all reasonable is very surprising. So I wonder sometimes —

[01:39:55]

— whether us humans just want AGI to not be such a dumb thing.

[01:40:05]

Because exactly what you're saying — the ideas of concepts, being able to reason with those concepts and connect them in hierarchical ways, having world models — everything we're describing in human language in this poetic way seems to make sense as what intelligence and reasoning are. I wonder if the core of it could be much dumber.

[01:40:32]

Well, ultimately it is still connections and messages passing over them, right? So at that level it is dumb.

[01:40:40]

OK, so I guess the recursion, the feedback mechanism — that does seem to be a fundamental kind of thing. Yeah. The idea of concepts, and also memory. Correct — episodic memory. Yeah, that seems to be an important thing. So how do we get memory?

[01:41:00]

So we have another piece of work, which came out recently, on how you form episodic memories and form abstractions from them. We haven't figured out all of its connections to the overall cognitive architecture yet, but — well, yeah.

[01:41:16]

What are your ideas about how you could have episodic memory? Well, at least it's very clear that you need two kinds of memory — that's very clear — because there are things that are statistical patterns in the world, but then there is the one timeline of things that happen only once in your life. This day is not going to happen ever again, and that needs to be stored as just a stream, a sequence.

[01:41:49]

This is my experience. And then the question is: how do you take that experience and connect it to the statistical part? How do you say, OK, I experienced this thing, and now I want to be careful about similar situations? You need to be able to index that similarity using the model of the world that you have learned, even though the situation came from a single episode.

[01:42:20]

So the episodic memory is implemented as an index over the other model that you're building?

[01:42:32]

So the memories remain, and they're an index into the statistical, causal, structural model that you built over time. The idea is basically that the hippocampus is just storing a sequence of pointers to things that happen over time. And then whenever you want to reconstitute that memory and evaluate different aspects of it — was it good or bad, do I want to encounter that situation again? —

[01:43:10]

— you need the cortex to reinstate, to replay, that memory. So how do you find that memory — which direction is the important direction? Both directions, again — it's bidirectional. So how do you retrieve the memory? This is a hypothesis, right? When you come to a new situation, your cortex is doing inference over the new situation, and of course the hippocampus is connected to different parts of the cortex.

[01:43:43]

And now you have this déjà vu situation: OK, I have seen this thing before. And in the hippocampus you can have an index of, OK, this is when it happened, where it sits in the timeline. Then you can use the hippocampus to drive the replay of that timeline, to say: now I am —

[01:44:06]

— rather than being driven by my current input stimuli, I am going back in time, rewinding the experience, replaying it, putting it back into the cortex. And putting it back into the cortex, of course, affects what you are going to see next in your current situation. Got it.
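Here is a schematic sketch of that hypothesis as data structures — deliberately simplistic, with made-up features and function names, and not a model of actual hippocampal circuitry. A "cortex" provides a learned similarity space, a "hippocampus" stores episodes only as timestamped pointers into it, and retrieval is: new situation, nearest stored pointer, then replay of the surrounding timeline.

```python
# Episodic memory as an index of pointers over a learned similarity space.
import numpy as np

rng = np.random.default_rng(2)

# Cortical embedding of situations (a stand-in for a learned world model).
def embed(situation):
    return np.array([situation["brightness"], situation["noise"], situation["threat"]])

# Hippocampal index: an ordered list of (time, pointer-into-cortical-space, label).
episodes = []
for t in range(10):
    s = {"brightness": rng.random(), "noise": rng.random(), "threat": float(t == 6)}
    episodes.append((t, embed(s), f"event_{t}"))

def recall(new_situation, window=1):
    """Find the most similar stored episode, then replay its neighborhood in time."""
    q = embed(new_situation)
    dists = [np.linalg.norm(q - vec) for _, vec, _ in episodes]
    i = int(np.argmin(dists))
    lo, hi = max(0, i - window), min(len(episodes), i + window + 1)
    return [label for _, _, label in episodes[lo:hi]]   # the replayed timeline

# A new situation resembling the one threatening moment triggers replay around it.
print(recall({"brightness": 0.5, "noise": 0.5, "threat": 1.0}))   # events 5, 6, 7
```

The key design choice mirrored here is that the episode store holds no content of its own beyond pointers and order; the content lives in the "cortical" model, which is also what defines similarity for retrieval.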

[01:44:23]

Yeah, so that's the whole thing — having a world model and connecting it to perception. It does seem that that's what's happening, and on the neural network side, it's interesting to think about how we actually do that — how to have a knowledge base. Yes.

[01:44:42]

It is possible to put many of these structures into neural networks, and we will find ways of combining the properties of neural networks and graphical models. It has already started happening — graph neural networks are kind of a merger between them, and there will be more of that. But to me, looking at biology and the evolutionary history of intelligence, the direction is pretty clear: what we need is more structure in the models, modeling of the world, and support for dynamic inference.

[01:45:26]

Well, let me ask you this. There's a guy named Elon Musk, there's a company called Neuralink, and there's a general field called brain-computer interfaces. It's kind of an interface between your two loves. Yes.

[01:45:41]

The brain and artificial intelligence. There are very direct applications of brain-computer interfaces for people with different conditions in the short term, and there are also these sci-fi, futuristic ideas of AI systems being able to communicate bidirectionally with a brain in a hybrid way.

[01:46:03]

Yeah. What are your thoughts about Neuralink, and BCIs in general, as a possibility?

[01:46:09]

So I think this is a cool research area. In fact, when I first got interested in brains, when I was enrolled at Stanford, it was through a brain-computer interface talk — that's when I even started thinking about the problem. So it is definitely a fascinating research area, and the applications are enormous. There is the science fiction scenario of brains telepathically communicating —

[01:46:41]

— let's keep that aside for the time being. Even just the intermediate milestones they are pursuing, which are very reasonable as far as I can see — being able to control an external limb using signals from the brain, and being able to write things into the brain — those are all good steps to take.

[01:47:05]

And they have enormous applications: people who have lost limbs being able to control prosthetics, quadriplegics being able to control something, and therapeutics. I also know about another company working in this space — they're based on a different electrode technology, but attacking some of the same problems. It's also surgical, correct? Surgically implanted. Yeah. So I think of it as a very promising field, especially when it is helping people overcome limitations. Now, at some point —

[01:47:44]

— of course, it will advance to the level of being able to communicate.

[01:47:49]

How hard is that problem, do you think? So let's say we magically solve what I think is the really hard problem of doing all of this safely.

[01:47:59]

Yeah — being able to connect electrodes, not just thousands but millions, to the brain. I think it's very hard, because you also do not know what will happen to the brain with that.

[01:48:14]

Right — in the sense of, how does the brain adapt to something like that? As we were saying, the brain, in terms of neuroplasticity, is pretty malleable. Correct. So it's going to adjust. Correct. So the machine learning side, the computer side, is going to adjust, and then the brain is going to adjust. Exactly. And then who knows what kinds of hallucinations you might get from this.

[01:48:39]

That might be pretty intense. Yeah, yeah.

[01:48:41]

Just connecting to all of Wikipedia — it's interesting whether we need to figure out the basic protocol of the brain's communication scheme in order to get the machine and the brain to talk, because another possibility is that the brain just adjusts to whatever the heck the computer is doing. Exactly.

[01:49:01]

That's the way — I find that to be the more promising way. It's basically saying: OK, attach electrodes to some part of the cortex, and maybe, if it is done from birth, the brain will adapt. Say that part is not damaged, it was not being used for anything, and the electrodes are attached there; now you train that part of the brain to do this high-bandwidth communication with something else.

[01:49:27]

Right. And if you do it like that, then it is the brain adapting. And of course, your external system is designed so that it is adaptable too, just like we designed computers, the mouse, the keyboard, all of them to interact with humans. That feedback system is designed to be human-compatible, but now it is not trying to record from all of the brain — it's two systems trying to adapt to each other.

[01:49:58]

It's the brain adapting — say the brain is connected to the Internet. Just imagine connecting it to Twitter and taking in that stream of information. Yeah. But again, if we take a step back, I don't know what your intuition is —

[01:50:19]

I feel like that is not as hard a problem as doing it safely. There's a huge barrier to surgery like that, because the biological system is a mush of weird stuff. Correct. So there's the surgery part of it, and the long-term repercussions part of it. Again, I don't know — we often find after a long time in biology that an idea was wrong.

[01:50:55]

People used to cut out this gland called the thymus or something, and then they found that, oh no, that actually causes cancer. So, yeah.

[01:51:08]

And there are subtle effects — millions of variables involved. But the nice thing about this whole process — just like, again, with Elon and colonizing Mars — is that it seems like a ridiculously difficult idea.

[01:51:20]

But in the process of doing it, we might learn a lot about the neurobiology of the brain, the neuroscience side of things. It's like: if you want to learn something, do the most difficult version of it.

[01:51:34]

The intermediate steps that they are taking all sounded very reasonable to me, so it's great. Well, but like everything with Elon, the timeline seems insanely fast — that's the only open question.

[01:51:50]

Well, we've been talking about cognition a little bit — reasoning. We haven't mentioned the other C word, which is consciousness. Do you ever think about that one? Is that useful at all in this whole context of what it takes to create an intelligent, reasoning being?

[01:52:09]

Or is that completely outside of your engineering view? It is not outside the realm, but it doesn't, on a day-to-day basis, inform what we do. In many ways, though, the company name is connected to this idea of consciousness. What's the company name? Vicarious. So what does that mean? At the first level, it is about modeling the world and internalizing external actions.

[01:52:46]

You interact with the world and learn a lot about it, and now, having learned a lot about the world, you can run those things in your mind without actually having to act in the world — you can run things vicariously just in your brain. And similarly, you can experience another person's thoughts by having a model of how that person works and running it — putting yourself in the other person's shoes. So that is being vicarious. Now —

[01:53:19]

— it's the same modeling apparatus that you are using to model the external world or some other person's thoughts; you can turn it on yourself. When that same modeling machinery is applied to your own modeling apparatus, that is what gives rise to consciousness, I think.

[01:53:38]

Well, that's more like self-awareness. There's the hard problem of consciousness, which is when the model feels like something — when you really are in it, when you feel like an entity in this world. Not just that you know you are an entity, but that it feels like something to be that entity.

[01:54:06]

And thereby we attribute things to it — it starts to be that something that has consciousness can suffer; you start to have these kinds of things that we can reason about.

[01:54:19]

Yes — much, much heavier. It seems like there's a much greater cost to your decisions, and mortality is tied up in that. Like the fact that these things end. Right.

[01:54:35]

First of all, I end at some point, and then other things end, and that somehow seems to be, at least for us humans, a deep motivator.

[01:54:47]

Yes. And that idea of motivation in general — we talk about goals in AI, but —

[01:54:54]

— goals aren't quite the same thing as mortality. It feels like, first of all, humans don't have a single goal; they just kind of create goals at different levels.

[01:55:07]

We make up goals because we're terrified by the mystery of the thing that gets us all, so we make these goals up.

[01:55:19]

So we're like a goal-generation machine, as opposed to a machine which optimizes a trajectory towards a singular goal. It feels like that's an important part of cognition — that whole mortality thing.

[01:55:34]

It is a part of human cognition, but there is no reason for that mortality to come into the question for an artificial system, because we can copy the artificial system. The problem with humans is that we can't clone you — and even if I could clone you, the experience that was stored in your brain, your episodic memory, would not be captured in the new clone.

[01:56:09]

But that is not the same with an AI system. Right.

[01:56:13]

But it's also possible that the thing you mentioned with us humans is actually of fundamental importance for intelligence — that the fact that you can copy an AI system means that AI system is not yet AGI. If we reason from existence proofs: does it feel like death is a fundamental property of an intelligent system? Correct.

[01:56:46]

We can't yet point to an example of an immortal intelligent being — we don't have those. It's very possible that a fundamental property of an intelligent thing is that it has a deadline.

[01:57:04]

So think of it like this.

[01:57:07]

Suppose you invent a way to freeze people for a long time. That's not dying, right? You can be frozen and woken up thousands of years from now. So it's not fear of death? Well, no — it's not about time.

[01:57:26]

It's about the knowledge that it's temporary — the finiteness of it, I think, creates a kind of urgency. Correct — for us humans, yes, and that is part of our drives. And that's why I'm not too worried about AGI having motivations to kill all humans, those kinds of things.

[01:57:56]

Why not just wait? Why would it need to do that? I've never heard that before — that's a good point. Murder just seems like a lot of work; it could just wait it out.

[01:58:13]

Humans would probably hurt themselves anyway. Let me ask you — people often wonder about world-class researchers such as yourself: what kind of books, technical, fiction, philosophical, had an impact on you in your life, and maybe ones you could recommend that others read? Maybe three books that pop into mind. Yeah.

[01:58:41]

So I definitely liked Judea Pearl's book, Probabilistic Reasoning in Intelligent Systems. It's a very deep technical book, and there are many places you can learn about probabilistic graphical models from, but throughout this book Pearl sprinkles his philosophical observations — he connects things to how the brain thinks, to attention and resources, all those things. That whole thing makes it more interesting to read.

[01:59:12]

He emphasizes the importance of causality — that came in his later book. In this first book, Probabilistic Reasoning in Intelligent Systems, he mentions causality, but he hadn't really sunk his teeth into how you actually formalize it. And the second book, Causality — the one from 2000 — that one is really hard, so I wouldn't necessarily recommend it first. That's the one that looks at the mathematics, his formal model of causality.

[01:59:40]

The do-calculus. Yes, but it's heavy on the mathematics — The Book of Why is definitely more enjoyable, for sure. So I would recommend Probabilistic Reasoning in Intelligent Systems. Another book I liked was one from Doug Hofstadter a long time ago — I think it is called The Mind's I. It was Hofstadter and Daniel Dennett together. Yeah.

[02:00:07]

I actually bought that book — I haven't read it yet, but I can't get an electronic version of it, which is annoying because I read everything on Kindle. Oh, OK. So I had to purchase the physical copy — it's one of the only physical books I have. Anyway, a lot of people have recommended it highly, so, yeah.

[02:00:28]

And the third one I would definitely recommend is not a technical book — it's a history. The name of the book, I think, is The Bishop's Boys. It's about the Wright brothers and their path, and how it all happened. There are multiple books on this topic and all of them are great. It's fascinating how flight was treated as an unsolvable problem, and also what aspects people emphasized — people thought, oh, it is all about powerful engines, we just need powerful, lightweight engines.

[02:01:15]

And so some people thought of it as just: how far can we throw that thing?

[02:01:21]

Just throw it with a catapult. Yeah. So it's very fascinating.

[02:01:27]

And even after they made the invention, looking at people not believing it, and the social aspects of it — the social aspects are really interesting. Do you draw any parallels between — you know, birds fly —

[02:01:42]

— so there's the natural approach to flight and then there's the engineered approach. Do you see the same kind of thing with the brain and trying to engineer intelligence?

[02:01:54]

Yeah, it's a good analogy to have — of course, all analogies break down a little. People often use airplanes as an example of, hey, we didn't learn anything from birds — look, airplanes don't flap their wings, right? That's what they say. The funny and ironic thing is that the fact that you don't need to flap to fly is something the Wright brothers found by observing birds.

[02:02:30]

They have it in their notebooks — some of these books show the notebook drawings. They made detailed notes about buzzards just soaring over thermals. Basically, they realized that flapping is not the important thing — propulsion is not the important problem to solve here. We want to solve control, and once we solve control, propulsion will fall into place. And they realized this by observing birds. Beautiful.

[02:03:02]

That's actually brilliant, because people do use that analogy — I'm going to remember that one. What advice would you give to people interested in artificial intelligence, young folks today? I talk to undergraduate students all the time who are interested in neuroscience, interested in understanding how the brain works. Is there advice you'd give them about their career, maybe about their life?

[02:03:26]

Sure. Every piece of advice should be taken with a pinch of salt, of course, because each person is different and their motivations are different. But I can definitely say: if your goal is to understand the brain from the angle of wanting to build one, then being an experimental neuroscientist might not be the way to go about it. A better way to pursue it might be through computer science, electrical engineering, machine learning, and AI. Of course, you have to study the neuroscience, but that you can do on your own. If you are more attracted to discovering something intriguing, something interesting about the brain —

[02:04:14]

— then of course it is better to be an experimentalist. So find your motivation, and of course find your strengths — some people are very good experimentalists and they enjoy doing that. It's interesting to hear which department you'd pick in terms of an education path — at MIT it's Brain and Cognitive Sciences versus computer science. Yeah.

[02:04:45]

Cognitive science or the CS side of things. And actually the brain folks, the neuroscience folks, are more and more now embracing, you know, learning TensorFlow.

[02:04:59]

Right. They see the power of trying to engineer the ideas they get from the brain and then exploring how those could be used to create intelligent systems. So that might actually be the right department.

[02:05:18]

This was a question in one of the Redwood Neuroscience Institute workshops that Jeff Hawkins organized almost ten years ago. The question was put to a panel: what undergrad major should you take if you want to understand the brain? And the majority opinion was electrical engineering. Interesting. I mean, I'm an EE graduate, so I got lucky in that way, but I think it does have some of the right ingredients, because you learn about circuits, you learn how you can construct circuits to implement functions.

[02:05:57]

You learn about microprocessors, you learn information theory, you learn signal processing, you learn continuous math. In that way, it's a good stepping stone — if you want to go into computer science or neuroscience from there, you can. It's a good step.

[02:06:12]

The downside: you're more likely to be forced to use MATLAB.

[02:06:18]

Well, one of the interesting things — I mean, this is changing, the world is changing — is that certain departments lagged on the programming side of things, like developing good habits in software engineering. But I think that's changing more and more, and students can take it into their own hands: learn to program. I feel like everybody in the sciences should learn to program, because it empowers you — it puts the data at your fingertips so you can organize it.

[02:06:56]

You can find all kinds of things in the data, and then, in the appropriate sciences, you can also build systems based on that — like engineering intelligent systems. We already talked about mortality, so we've already hit the ridiculous questions.

[02:07:14]

But let me ask you — one of the things about intelligence is that it's goal-driven, and you study the brain.

[02:07:30]

So the question is: what's the goal the brain is operating under? What's the meaning of it all for us humans, in your view? What's the meaning of life?

[02:07:40]

The meaning of life is whatever you make of it. It's completely open. It's open? Yeah. So there's nothing — you mentioned you like constraints —

[02:07:50]

— and with this, it's wide open.

[02:07:54]

Is there some useful aspect to think about in terms of the openness of it, and the basic mechanisms of generating goals, when studying cognition in the brain? Or is it just that everything we've talked about — the perception system — is to understand the environment, to be able to, like, not die? Exactly. Not fall over, and be able to survive.

[02:08:24]

You don't think we need to think about anything bigger than that?

[02:08:30]

Yeah, I think so, because it's basically about being able to understand the machinery of the world — then you can push it toward whatever else you want. Understanding the machinery of the world is really, ultimately, what we should be striving for. The rest is just whatever the heck you want to do, or whatever is culturally popular.

[02:08:54]

That's beautifully put. I don't think there's a better way to end it. I'm so honored that you showed up here and wasted your time with me — it has been an awesome conversation. Thanks so much for talking today.

[02:09:10]

Thank you so much. This was so much more fun than I expected.

[02:09:15]

Thank you. Thanks for listening to this conversation with Dileep George, and thank you to our sponsors: Babbel, Raycon earbuds, and Masterclass. Please consider supporting this podcast by going to babbel.com and using code LEX, going to buyraycon.com/lex, and signing up at masterclass.com/lex. Click the links, get the discount. It really is the best way to support this podcast. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon. Connect with me on Twitter,

[02:09:49]

@lexfridman — spelled without the E, just F-R-I-D-M-A-N. And now, let me leave you with some words from Marcus Aurelius: "You have power over your mind, not outside events. Realize this, and you will find strength." Thank you for listening, and hope to see you next time.