Professor Robin Hanson on hidden motives, emulating brains, and life in the universe.
Robin Hanson is an associate professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a PhD in social science from Caltech, master's degrees in physics and philosophy, and nine years of experience as a research programmer in artificial intelligence and Bayesian statistics. He's recognized for his contributions to economics and prediction markets and in many other fields. He is the author of The Age of Em and co-author of The Elephant in the Brain.
Alex: [00:00:00] Robin Hanson is an Associate Professor of Economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a PhD in social science from Caltech, Master's degrees in physics and philosophy, and nine years of experience as a research programmer in artificial intelligence and Bayesian statistics.
He's recognized for his contributions to economics and prediction markets and in many other fields. He's the author of The Age of Em and co-author of The Elephant in the Brain. We talked hidden motives, emulating brains, and life in the universe.
So why don't we start here? I wanted to start with hidden motives and a lot of what you talk about in one of your books. The basic concept here, correct me if I'm wrong, is that when we don't know why we do something, which is more often than not, we tend to make something up to rationalize it.
Robin: [00:00:56] We have a story about why we do things that's somewhat independent of why we really do them. Our minds are trying to tell us a nice story, and they're not so much trying to tell us why we're really doing things when that would look bad, or less good, than the story we could tell. So, you know, your conscious mind is like the press secretary of your mind. It's not the king or the president. Its job isn't to decide what to do. Its job is to take whatever you've done and put a good spin on it. So the motives you think of as the motives you are having are the good-spin motives.
They're the motives your mind came up with that look kind of okay, and that you can present to other people as why you're doing what you're doing.
Alex: [00:01:43] And there are scientific studies that have been done to sort of prove this, right?
Robin: [00:01:48] So, you know, we're talking about our book called The Elephant in the Brain: Hidden Motives in Everyday Life. And that's a book that's classified as psychology, and the reviewers, mostly psychologists, have basically said: we know this, this is a standard conclusion.
This isn't controversial. People are not very aware of their motives, and they have motivated reasons to be wrong about them in certain ways. But the first third of our book is sort of reviewing this fact that we are often wrong about our motives. The last two thirds goes through 10 different areas of life and shows how big a difference that can make to our understanding of those areas of life.
And that's where we think the novelty of our book lies, in that most psychologists haven't necessarily embraced these views of these 10 areas of life. I am more of an economist, a social scientist, and I've been studying these areas of life all my career. And in all these areas of life, we've been puzzled by how the usual motive explanations don't fit very well with a lot of details.
And so the story in the book is that the way to explain most of these puzzles in most of these areas is to realize that we're wrong about our basic motives. We have just not been very honest about why we do a wide range of things, including politics, charity, conversation, medicine, religion, education, these big areas of life.
We are just wrong about why we do that.
Alex: [00:03:11] And like you mentioned, that sort of extrapolates in a lot of ways. You give the example, among many other things, of social institutions as sort of instruments for human behavior, right? Like school, like medicine, any of these other things. How do those play out?
Robin: [00:03:25] We set up these institutions, and they do in fact achieve the ends that we actually have. But we say that we want them for some other purpose. And then when we are confronted with data suggesting the institutions we have don't seem to do a very good job at the purposes we say we have, we scratch our heads and wonder if we could find reforms, or what the problem is, when fundamentally the problem is we're just lying about what we want. For the things we actually want, they do fine.
Alex: [00:04:00] Well, that's sort of the fundamental game we're all playing, right? You talk about norms a lot, and that reminds me of something I read in, I want to say it was Sapiens by Harari. There's this concept, and the way I like to describe it is: you can sort of think of society as like a game of soccer, right?
Like we need to have rules in place so everybody can sort of get along and make things work, otherwise we devolve into chaos. Do you think about it in the same way? Is it a necessary thing that we need to have?
Robin: [00:04:31] Well, norms are one of the great human innovations. Norms allowed human primate groups to become much larger, much more cohesive, and less destructive than other primate groups were. So at least in the early days, when we first invented them and first used them, norms made this enormous difference that allowed us to be human. And language allowed norms to be much more precise.
And, you know, varied, because without language it's hard to express the norms, hard to say exactly what the rules are and to point out when somebody violates them. But with language, we can say what people have done and what the rules are. And with weapons, we were able to enforce the rules. In, say, a group of chimpanzees, there's the dominant chimpanzee.
And if you don't like what he does, you have tough luck, because you can't do much about it. Whereas in a group of humans, if all the humans have weapons, then the top human with a weapon can't really stand up to all the other humans with weapons. So the majority of the humans with weapons can make the top human do what they say, follow the rules.
Alex: [00:05:41] Do you think that human organization can exist without norms? Are they even worth reforming in any way if we're all sort of playing the game to a net benefit?
Robin: [00:05:51] Well, human norms showed up, you know, a million years ago. And 10,000 years ago, we developed sort of larger social organizations, including law. So law, in some sense, is a substitute for norms, or a different way of enforcing norms.
And so in principle, we could have law without norms. That is, instead of our innate sense of feeling that something was a violation and, you know, talking to other people about what to do, we could have law formally deal with the problem. Now, you could say law is just a different way of enforcing the norms, in which case, yeah, we still have norms, but it is a substantially different way.
And so many people have wondered whether our world would work well or badly if we dropped the norms, if we just had formal rules and, say, full contracts, et cetera. And, you know, I can see it going both ways. The main reason we can enforce many laws at such a low cost is because people have the norm that they should support the law.
So say you see a violation of a law: do you just ignore it because nobody's paid you to report on it, or do you feel like you have an obligation to enforce the law? So in some sense, legal systems gain a lot of strength from the norm that people should be supporting the law. And of course it's a standard observation that in neighborhoods, say, where the law doesn't have much legitimacy and support, there's a lot of law breaking and it's pretty hard to enforce the law.
On the one hand, there are many ways in which the world is not very intuitive for ordinary people. And so laws often prevent us from doing interesting things that ordinary people can't understand would be good, because they have these simple norms. So simple norms do get in the way of a lot of innovation and change and improvement in our world.
On the other hand, norms are behind a lot of our ability to make our organizations work and to make law work.
Alex: [00:07:53] So, should we even be acknowledging the elephant in the brain here? Is that something that we should be looking into and trying to reform in any way, if, like you said, there's technically some benefit that would come from reforming it, but societal structure as we know it devolves in a way?
Robin: [00:08:10] So having or not having norms is somewhat different from exposing the hypocrisy that is our, you know, typical enforcement of norms. So the book The Elephant in the Brain is not so much about the existence of norms as it is about the fact that we're not very honest about what the norms are or whether we're following them.
But you could ask: should we expose the hypocrisy? How far should we go to resist and reform that hypocrisy? So the first thing to notice is that this hypocrisy isn't some surface feature that we sort of acquired late in life. It's embedded in our nature, in human nature. This has been going on a very long time, and it's very deeply embedded in our habits and our styles and our interactions.
Honestly, it's just not something you can, by force of will, eradicate from your life. I mean, you could take the analogy of, say, sexual desire. Some people decide at some point that that's just ugly and icky and they don't want any part of it, but that doesn't mean they can take it out of themselves, right?
It's still there. And similarly, our hypocrisy is just still there, even if you don't approve of it. And it'll still drive you. So I would more say: ask when and where it would be useful to expose it more, or to try to overcome it more. Don't think that you can just do it all wholesale. Think of yourself as having a limited budget of honesty, and ask: where should you spend this honesty?
And the first thing to realize is that your hypocrisy, or dishonesty, was created by evolution to assist you. That is, evolution decided that you were better off not knowing some things, and that if it created you deluded about certain topics, that would be in your interest. So if evolution is still right about that, then we would do you a disservice by exposing these things to you, and in trying to entice you to reform, you would do worse. That is, your hypocrisy allows you to sincerely project various things to the people around you that you want to project. And if you were more honest, you would find it hard to, you know, tell everybody you love them and that you will always be there for them.
And that you'll never betray them, because you might know that's not true. You should focus on the places where maybe evolution has misjudged you. Maybe your innate social skills aren't as good as other people's, and you need to think things through more concretely. Maybe you're a manager or a salesperson, where understanding these things is more important.
Maybe you're a social scientist or a policy maker who really needs to understand the world in order to do what you're supposed to do. Those would be the places where you might be more justified in overcoming your innate hypocrisy in order to understand what's actually going on.
Alex: [00:11:00] So if we were to zoom out from individual dynamics and look maybe a bit more broadly at society as a whole, I think this segues a bit into your other book.
Talking about compounding exponential growth and just the amount of progress that we've had over even just the last century. And certainly beyond that, AI becomes pretty center stage in terms of, you know, radical new paradigm-shifting technologies. And there's these scenarios that you outline.
You have the scenario where we keep writing better software and that sort of eventually gets us to AI. Not what we call AI today, but AGI, right? General intelligence. And that's, you know, linearly, two to four centuries, I believe. The other two, I think, are quite interesting. One is superintelligence, which I'd love for you to get into.
And the third one, which is pretty much the basis for a lot of the writing that you've done, is emulating a brain in some way. So what's your take on just the general timeframe for an emulation of a human brain?
Robin: [00:12:07] So innovation happens in various-sized chunks.
Most innovation is a lot of little chunks, and they come at a relatively steady rate. And so that makes it feasible to project their progress forward. But not all innovation is in those small chunks. In the last 70 years or so of computers, the rate of innovation has been actually relatively steady. And it's been a continuation of relatively steady trends over previous centuries, in terms of automation of jobs and the economy getting more effective because of various kinds of machines and automation.
And so if we were just to project those trends forward, then we would say two to four centuries until most tasks that people do today would be automated. And in some sense, most of the economy then would be automated. And there wouldn't necessarily be any threshold where suddenly things changed. It would just slowly get more efficient, and we'd slowly move more tasks onto machines.
However, we have to admit that not all innovation comes in tiny chunks. Sometimes innovation comes in lumps that all have to go together. Now, in the field of artificial intelligence, it's been a common intuition for a long time that lumps are coming.
And roughly every 30 years, for a long time, we have had bursts of concern where people thought: right now, a lump is happening. So people have seen bursts of new technology and new abilities in AI for a long time. And when those bursts were happening, they said, this must be a big new lump, and this new lump must be about to make a big difference.
And they've said, is this the time when all of a sudden all the jobs will get taken over? And so again, it's very common for people to have this intuition that not only will there be lots of little tiny things happening to improve AI, there'll be some really big lumps, and we are expecting them to come.
And we expected them to come in 1930 and 1960 and 1990 and 2020. And we'll probably keep expecting them every 30 years, because that just seems to be how people expect. But we haven't seen lumps like that, and I'm skeptical they even exist. I think that we'll likely just continue on that path and get relatively incremental improvements until eventually, you know, they're really good. However, one of the other paths you described is intrinsically lumpy.
So there's this idea of taking a brain, a human brain like yours, and making a brain emulation out of it. To do that, you scan that brain in fine spatial and chemical detail, figure out which cells are where, connected to what, and you have a computer model for how each kind of cell works.
And you set up a model of the whole brain, where you model each cell according to its type. And if your scan of that brain is good enough, and if your models of each cell are good enough, then that entire model should have the same input-output behavior as your brain does. And that would be a brain emulation. But the thing is, half a brain emulation, or a partially good brain emulation, is pretty worthless.
It pretty much needs to be the whole brain, emulated well, to be useful for anything. A somewhat-good emulation is a crazy person who doesn't know what they're doing and doesn't work. So at some point we will have brain emulation feasibility, I'd estimate roughly in a century or so.
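For concreteness, here is a minimal toy sketch of the recipe Robin describes: a "scan" yields each cell's type and wiring, a per-type model reproduces each cell's input-output behavior, and the whole brain is just all the cells stepped together. Everything here, the cell types, the wiring format, the update rule, is an invented stand-in for illustration, not a real neuroscience model:

```python
# Toy emulation: per-cell-type models driven by scan-derived wiring.
# All names and models here are illustrative stand-ins.
from typing import Callable

# Per-cell-type models mapping summed input to an output signal.
CELL_MODELS: dict[str, Callable[[float], float]] = {
    "excitatory": lambda x: max(0.0, x),    # passes positive input along
    "inhibitory": lambda x: -max(0.0, x),   # suppresses downstream cells
}

def step(cell_types: list[str],
         wiring: dict[int, list[int]],
         state: list[float]) -> list[float]:
    """Advance one tick: each cell applies its type's model to the
    summed outputs of the cells wired into it."""
    return [
        CELL_MODELS[kind](sum(state[j] for j in wiring.get(i, [])))
        for i, kind in enumerate(cell_types)
    ]

# Three cells in a ring: 0 -> 1 -> 2 -> 0.
types = ["excitatory", "excitatory", "inhibitory"]
ring = {1: [0], 2: [1], 0: [2]}
state = [1.0, 0.0, 0.0]
for _ in range(3):
    state = step(types, ring, state)
print(state)  # the pulse circulates until the inhibitory cell damps it out
```

The point of the lumpiness argument is visible even in this toy: with half the wiring missing or half the cell models wrong, the system doesn't degrade gracefully, it just produces noise.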
And at that point, there will be a sudden change. Suddenly these things will be available and cheap, whereas before they weren't. And so that scenario has to be a relatively sudden transition. And so I've written this book, The Age of Em: Work, Love, and Life when Robots Rule the Earth, to try to work out the details of how that would play out.
Although my focus is on what it's like once it's in some sort of equilibrium; it's harder to predict transitions. And so I don't try in that book to predict the transition as much. I focus on what life would be like once we had a lot of them. So the transition, I expect, might take a decade or more.
And once the transition was over, the economy would grow much faster. So our economy at the moment doubles roughly every 15 years; this economy might double every month, in which case this new economy would go through as much change in, say, two years as we've gone through in the entire industrial era. At which point I say that's as far as I'm going to predict, because no one knows what happens after that, and I'm not going to tell you what.
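As a rough check on that comparison (assuming, as a round number not stated here, an industrial era of about 250 years):

$$
\text{em era: } \frac{24\ \text{months}}{1\ \text{month/doubling}} = 24\ \text{doublings},
\qquad
\text{industrial era: } \frac{\approx 250\ \text{years}}{15\ \text{years/doubling}} \approx 17\ \text{doublings}.
$$

So two em-era years would indeed pack in at least as many economic doublings as the entire industrial era to date.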
So, you know, I go through a lot of details in that book about how that would play out. And of course, when that happens, if it happens before, say, two to four centuries from now, then at that point in time humans will be valuable workers. And so when you switch to these emulations, which I call ems, they are also valuable workers; that is, their jobs are partially automated, but not fully automated.
And so these emulations are valuable, but then over the course of the em era, slowly they automate more things. And plausibly, one possible scenario for where the em era ends is that they reach a point where they've automated almost everything as well. And then it's the ems competing with the automation that they write, slowly getting better, whereas today it's humans competing with the automation we write, slowly getting better.
Alex: [00:17:33] So is the point, or the theory, then with ems that the next evolution, so to speak, of humans takes place digitally, or in virtual reality? What's the use of anything physical at that point?
Robin: [00:17:47] So emulations would live in virtual reality, because that's cheap and easy for them. Their virtual reality can look as real to them as anything around you now looks to you. So for things that don't matter very much, they would look at pretty, comfortable things. But there is a real physical world that needs real physical work.
These emulations need actual computers and energy and cooling and structural support and communication lines. They have to, you know, do marketing and design, manufacturing, transport, distribution; all these things have to happen, and they do them. And for many of these things, they do need to see and understand the real physical world.
If you're managing a factory making computer chips, you have to see the factory and see the chips. So they would, when it was useful, see the real things the real way, so that they could understand and deal with them. And they would, you know, surround the real things they looked at with pretty virtual things when that was convenient, just because, why not?
So, I can see you right now: you're in a room with painted walls. Now, behind those walls are structural supports and pipes for water and electric wires, but you'd rather not see those things. So you're in a room that's painted over them. The paint is virtual, in a sense; you see a virtual world around you. You might have paintings on the wall, carpet on the floor.
You have gone as far as you conveniently can to make the world you see be a pretty, comfortable world, and to hide the structural things that you don't care so much about, but you know they're really there. You're not being fooled. You know that behind that panel above you are actually, you know, bulbs that have electricity and hot gas to produce light.
But you don't want to see that; you want to see a nice flat panel up there. And so you do. Similarly, the emulations will live in a virtual world in the sense that the things they see will be pretty and convenient when they want them to be, but they won't be fooled. They won't not know where they are or what they're doing.
They will be perfectly well aware that when they need to see something real to deal with it, they will see the real thing, in as much detail as it takes to do what they need to do.
Alex: [00:20:10] And obviously that changes the structure of society as well, right? One thing that I think about quite a bit when it comes to this theory, and tell me if it's just as simple as this, but is it just the amplification of human traits by removing these infrastructural barriers?
Robin: [00:20:31] Only in some areas. I would more say, you know, humans are very flexible creatures. We're culturally plastic, and we have general minds. And so human history has us living in very different worlds across history, as our abilities changed and our contexts changed. And we should expect that to happen in the future.
So ems are just generic human minds in new contexts. And these new contexts will emphasize new things. And so we will become what it takes to live in those worlds, just as we always have before. So if your ancestors had seen your world right now, they might be greatly impressed by some things and horrified by other things, but mostly confused, because you live in a very different world than they did.
It's your world; you become comfortable with it and learn to deal with it, but that's not an innate human thing for this particular world. The innate human thing is that you can learn, and you are culturally plastic, and you have been able to learn to adapt to the world you're in. So ems also live in a different world, and they have adapted to it.
So it's a strange world to you, just like your world would be strange to your ancestors, and it emphasizes different things. They have different work habits, different habits of politics. They have different places they live and things they do and issues that matter to them. But that's also true about you compared to your ancestors.
Alex: [00:21:54] So the last topic I thought would be great to touch on seems, in a way, like a natural progression, because we've gone from individual selfish hidden motives, to, you know, society as a whole in the future with the advent, or concept, of ems, and now to the universe as a whole. We're going to get a little bit philosophical here.
Obviously the great filter is something you talk about extensively, and as far as life in the universe goes, I'd like to talk about maybe another side of the great filter, right? Because, as far as I understand it, the concept basically is that, given the fact that we can't see anything, even given the time light takes to travel, statistically, if the universe were teeming with life, we should be seeing stuff.
Robin: [00:22:41] Well, if the universe were teeming with life that was old and free to expand and evolve, then we should see it teeming with life. If the universe were teeming with life all locked down by zookeepers making sure they don't escape their cages, well, then it depends on the size of the cages, doesn't it?
Sure. But the key idea would be that, you know, our sort of life expands and advances as much as it can, because that's what competitive pressures do. So animals and, you know, biology on Earth have expanded into as many niches as they could get into, and to as many sources of energy as they could find. And humans have expanded the range of sources of energy available to us, and the materials we can get to,
and the kinds of structures we can produce. Competing human cultures and people are producing a rapid expansion of our capacities, and we should just expect that to continue. We will continue not only to get more able, but to move into more places as we can. And so the natural result of all that, over billions of years, would be to fill the universe with activity and life and, you know, active creatures doing stuff and remaking things the way they want.
And so when we look out in the universe, that's not what we see, right? So either they're just not there, or something has greatly limited what they can do.
Alex: [00:24:01] So very, very simply: if there are two scenarios, either we're alone in the universe or we're not, right, what does that say about religion?
In my mind, this is something I've thought about a bit: if there is no other life in the universe, does that prove any existence of God? I don't know how much religion ties into what you think about, but...
Robin: [00:24:21] In the last six months, I've extended my great filter work to something I've called grabby aliens.
And the idea is that the universe is at the moment filling up with very advanced life that is radically changing the universe and filling it up with gods, if you will. And we look out in the universe and we don't see any of that because there's a selection effect: if we could see them, they would be here instead of us.
So we are at the moment in the universe's age when we are transitioning from this dead, empty universe that we see to a universe full of life, of what I call grabby aliens, because they expand out as fast as they can, grab what they find along the way, and remake it to their purposes. And we now have a chance to become that. If we don't kill ourselves, and we go on to choose to not only survive but expand, we could become as gods, or at least our descendants could, millions and billions of years from now.
And that's a key choice we have, because it may well be that most advanced life like ourselves never does that, and it's only a small fraction that ever does. But I think we can be somewhat sure that, in fact, we are in a universe at the moment filling up with this advanced life, and we could join that if we played our cards right.
So the religious aspect there is to say, you know, don't ask what an empty universe implies about God, because it's not an empty universe. You're just seeing the empty part of it, because it takes time to get born. The universe is, right now, filling up with very advanced, very active life. And maybe you should ask: what does a universe very full of active life mean about God, if you want to ask about God? But, you know, one way to say it is that the universe will be full of things we would call gods, if we saw them.
And in fact, you know, if UFOs are really aliens, maybe we're seeing some gods right now.
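For intuition, here is a toy one-dimensional Monte Carlo of the selection effect Robin describes: grabby civilizations appear at random times and places and expand outward, and observers like us can only arise at points no expansion wave has yet reached. In Hanson's actual model the waves move at near light speed, which is why observers at untouched points also see an empty sky. All the parameters below are invented for illustration, not fitted values from his grabby-aliens work:

```python
# Toy 1D "grabby aliens" sketch: civilizations are born at random
# (time, position) points and expand at a fixed speed. Observers can
# only arise at spots not yet swallowed by an expansion wave.
import random

SPACE = 1000.0   # size of the 1D toy universe
SPEED = 0.05     # expansion speed of each civilization
RATE = 1e-6      # birth chance per unit space per unit time
T_MAX = 2000.0   # how long we run the toy universe, in unit ticks

def fraction_grabbed() -> float:
    births = []  # (birth_time, position) of each grabby civilization
    t = 0.0
    while t < T_MAX:
        if random.random() < RATE * SPACE:
            births.append((t, random.uniform(0.0, SPACE)))
        t += 1.0

    def taken(x: float) -> bool:
        # Has any expansion wave reached point x by T_MAX?
        return any(abs(x - pos) <= SPEED * (T_MAX - t0) for t0, pos in births)

    samples = [random.uniform(0.0, SPACE) for _ in range(2000)]
    return sum(taken(x) for x in samples) / len(samples)

# The untaken remainder is exactly where late observers like us can
# still appear, and none of them have been reached by a wave.
print(f"fraction of space grabbed by t={T_MAX}: {fraction_grabbed():.2f}")
```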
Alex: [00:26:24] That'd be something, right? And as far as that goes, as far as other life in the universe goes, as I understand it, in the context of the great filter, finding life on other planets would be terrible news for us, wouldn't it? Because the implication would be that we've yet to get to that point where everything sort of destroys itself.
Robin: [00:26:44] So the key idea is that the origin of advanced life that fills the universe is very rare, because otherwise we would see a lot more of it. We may hope to eventually become such a thing that expands out and fills the universe, but it's clearly very rare to produce that.
And we are somewhere along the path from simple, dead matter to that grabby-alien expanding civilization. So the great filter question is: where are we along that path, and in particular, how much is left in front of us along that path? How hard will it be? So say it's a total filter with a factor of ten to the sixteenth,
and we had ten to the fifteenth of that behind us. Only one sixteenth of the filter would be ahead of us, but that would still mean we only have a one in ten chance of making it, because the filter is so big. So the question is: is all the filter behind us, or is some of it still in front of us, and how much? And for that question, we have to basically ask, well, there are all these stages at which the filter could exist.
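Spelling out the arithmetic in those round numbers:

$$
\text{total filter} = 10^{16}, \quad \text{behind us} = 10^{15}
\;\Rightarrow\; \frac{15}{16}\ \text{of the filter (in orders of magnitude) is behind us},
$$
$$
\text{still ahead} = \frac{10^{16}}{10^{15}} = 10
\;\Rightarrow\; P(\text{passing what remains}) \approx \frac{1}{10}.
$$

So even if only one sixteenth of the filter remains, that last factor of ten still leaves only a one-in-ten chance of making it.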
And any life out there that, you know, seemed to be at our level, or at a stage in our past, would be evidence that the steps behind us weren't as hard. And that would suggest the steps ahead of us might be harder, which is bad news. But, you know, it depends on how independent the origin was. So, for example, life on Mars probably isn't very bad news, because in fact life on Earth and on Mars probably shared a common origin.
That is, at the beginning of the solar system there were a lot of rocks flying around, so life on Earth would have been moved to Mars, and life on Mars would have been moved to Earth. And so now, if we see advanced life on Mars, then all the stages between the early life on Earth and the advanced life on Earth would have happened there as well,
and therefore they would have been not so hard. So those steps couldn't have been very hard steps, and therefore the hard steps must have been somewhere else, which might be in our future, which is bad. So the key idea is there are all these different steps where the filter could be. And there are still lots of possibilities, including the very first step, but, you know, one of the places is in front of us.
And so for every past place you cross off, saying, nope, it can't be in there, then, you know, by process of elimination, a step left in front of us is still sitting there and could be one of the bad ones.
Alex: [00:29:04] I'm curious, how do you reconcile a lot of this personally? Because it's all very, very existential stuff.
So in your own life, how do you find meaning in this whole mess that you're working in?
Robin: [00:29:17] Well, I find it extremely meaningful to have a big focus of my life be figuring out the universe. It means I'm important, right? I mean, if the universe is important and I'm figuring it out, then I'm important.
So I don't have any trouble finding meaning. Now, of course, I had to give up on thinking I knew what the universe was. At some point I had to say: all these things I wish the universe would be, or I've been told the universe is, I don't know if it's any of those things. And I have to look at it fresh and say, okay, but what is the universe?
And if I couldn't figure that out at all, or if I were just one in 10 billion people figuring it out, then I'm a pretty small person figuring out a pretty big universe that doesn't care about me. And I guess that's kind of discouraging. But hey, if that's the way it really is, then that's what you should face up to.
But it turns out not that many people are trying to figure out this universe, so I can be one of the few people who are doing it, and that gives me more meaning. I never had any right to that meaning, and there was never any guarantee I should have such meaning. It just happened to be there.
I'll take it. But honestly, if you're in a universe that hardly cares about you and you can't do much about it, okay: suck it up, pick yourself up, dust yourself off, and, like, find meaning in your life. Find something to do.
Okay, all right. Nice to talk to you.
Alex: [00:30:38] Hey, pleasure. Thank you so much.
Robin: [00:30:40] Okay.
Alex: [00:30:40] Have a good one.