Interview with Robin Hanson

Robin Hanson is an economist.

A Pattern of Moles

    Max Raskin: You’ve told me you have a thing on my shtick?

    Robin Hanson: So, quite often, people want to hear interviews with celebrities where they want the real person behind the façade. For instance, somebody is a famous baseball player, politician, or tech mogul with this public persona. People often want to know, behind the scenes, what TV shows they watch, how they brush their teeth, and what foods they eat.

    I guess the idea is somehow that this public persona is unusual and people are wondering, how could someone have possibly produced such an interesting public persona? They must have some secret sauce in their personal lives that somehow makes that possible. And I just think, usually, there isn't much of a personal secret sauce. In fact, people who are famous for a particular thing, the reason is they just devote a lot of their lives to it, and so they don't actually have that much other life.

    So when you interview a baseball player about what he does in his free time, he doesn't have that much free time. That's the point. The way you become a big person is that you just really focus on it and then most of your unusual success in that thing is going to be found doing that thing and how you brush your teeth or talk to your mom…that sort of stuff is just typically not going to be very relevant.

    MR: So the genesis of this was that I was interested in Churchill…how much he drank and napped, and I was interested in certain writers and their writing habits. This is not so much to discover a secret sauce, but more to give readers a taste of the person behind the thing that they're known for.

    RH: Right. But the question is, does that have much to do with the thing they're known for? If we could see patterns, maybe…

    MR: But I’m not interested in learning to replicate.

    RH: But why, is the question. If they're not actually any different in their personal lives than everybody else, and their personal details don't actually matter much for the thing that we like them for, why does it even matter how they brush their teeth?

    MR: So Montaigne said something like, “Even on the highest throne in the land, man still sits on his own ass.” Even if you don't have much time to devote towards sleeping, you still have to sleep. I find it interesting to know when Churchill went to bed at night. For whatever reason, that interests me.

    RH: If you wanted to be successful and to find patterns in these things, that would make sense to me, but just to want to know the details without there being any patterns…

    Everybody has a pattern of moles on their back or something like that…you might want to see the pattern of moles on my back, but do I really care that the pattern of moles on Churchill's back is different from anybody else’s? Well, what's the point?

    MR: Well a smarter answer may be that Freud wrote a book called The Psychopathology of Everyday Life that I'm sure you would think is nonsense, but he argues that parapraxis and things like this tell you something meaningful because of patterns.

    But honestly I’m just interested in it because I’m interested in it. It’s like hunger or pain — if I feel I’m hungry or in pain, I’m in pain. If I think I’m interested, then I’m interested. It’s just a self-reflexive quality.

    RH: Well, but that's the interesting part to me…if there were patterns. If there are no correlations with A and B, then A tells you nothing about B.


    Happy Wife, Happy Life

    MR: I'll start with my first question: Do you floss?

    RH: Yes.

    MR: Do you floss religiously?

    RH: Pretty much. More than every other day.

    MR: Wow. Okay. So that tells me something.

    RH: But I don't know what it would tell you exactly. Maybe it's a certain level of habitualism or discipline or something.

    MR: Yeah, something like that.

    So I've got to be honest with you: of all the interviews I've done, this is the one I was most excited for beforehand, because I think that, by virtue of being interested in things like cryonics, living long, and rationalism, you might actually have a different lifestyle than most of the people I have interviewed.

    RH: We’ll find out, I guess. I'd be surprised if it was that different. I'm sure I'm weird in some ways and then being weird makes us more willing to be weird in other ways. Right? So there's a sense in which people who are pushing boundaries, they tend to be weird just because they have less to lose.

    MR: If an alien were to look at how you live your life, would it be materially different from the way your average American lives his life?

    RH: Well, the main difference is that I'm just obsessed with thinking about certain kinds of ideas, and I spend most of my time with it. But that's the way in which anybody who's noteworthy for X, typically, they just spend a lot of time and energy on X. That would be the main difference.

    MR: But do you put yourself in a freezer every morning for two hours?

    RH: No. You have to have a pretty big freezer to do that.

    MR: They do have these ice baths and things like that.

    RH: I have a pretty mundane income, and so I don't have a lot of extra money. What I do spend on the margin is just on ideas. I spend time thinking about things. I could consult more and have more money, and then I'd have money to have a freezer or something. But I don't.

    MR: You’re famous for being interested in brain freezing, right?

    RH: Right.

    MR: Do you take any daily vitamins and supplements?

    RH: I've studied health economics, and the main thing we learned there is that there's very little correlation between medicine and health. So that's put me off on all sorts of medical supplements and other medical things. I just don't.

    MR: Do you take any supplements?

    RH: I don't think so. I mean, my wife makes me take vitamins.

    MR: So you do take a vitamin?

    RH: But I wouldn't unless she pushed it, because the data I've seen seems to say it doesn't make much difference.

    MR: You have all this data and all these thoughts, but you listen to your wife. Smart.

    RH: Well, living together is a compromise. And usually, it's a compromise along many dimensions. So yes, I compromise with my wife.

    MR: What about exercise?

    RH: I get exercise in as convenient. I just have this elliptical and I do 20 minutes on the elliptical daily.


    Density of Information

    MR: Do you listen to music?

    RH: Usually audiobooks.

    MR: What are you listening to right now?

    RH: Well, I'm between audiobooks at this very moment. The last thing I listened to was Oliver Twist, actually, because my wife pushed it. The Dispossessed is the most recent one I listened to before that.

    Before these, Middlemarch, which was excellent. I was very impressed with Middlemarch.

    MR: If you gave me a thousand years I couldn’t have guessed that.

    RH: Well, I didn't like it that much. I thought it wasn't very realistic compared to other books I've been listening to lately.

    MR: What speed do you listen on?

    RH: 1.2x, I think.

    Any faster and I could still hear the words, but it feels less like I'm listening to something.

    MR: Do you listen to podcasts?

    RH: Not usually, because the density of information is just too low. I would rather look at something that's been well-thought-out and developed.

    I read a non-fiction book, The Corporation and the Twentieth Century, about corporate governance. And then, I'm roughly halfway through The History of the Decline and Fall of the Roman Empire, which is a 62-hour book.


    Vangelis Evangelist

    MR: Are you a music fan?

    RH: Fan? I'm probably in the middle of the distribution of the degree to which I'm a music fan, which therefore is not that much. There are people who are way into it. I think compared to most of the arts, music is the art that most compels me emotionally.

    MR: What do you think is the band you've listened to most in your life?

    RH: It might be the performer Vangelis. He’s an electronic musician famous for the music for Chariots of Fire.

    MR: What’s the art right behind you?

    RH: It's a website called Marimekko. They have a lot of clothes.

    MR: How do you keep track of your to-do list?

    RH: I don't. That is, I just don't keep a to-do list.

    MR: Interesting. So how do you know what to do each day?

    RH: So I have schedules, but otherwise, I'm pretty intuitive about priorities. I'm lucky to be a tenured professor, so I have a lot of discretionary time and security. And then I'm trying to study the things that seem most important and neglected, where I have an angle — that's my main strategy. But also, I spend a quarter of my time just browsing things, in the hope that I'll see something interesting or something new.

    MR: If I were to look at your web browsing history, what would I find?

    RH: Well, Twitter is going to be pretty common.

    I've slowly spent more time over the last couple of years talking to AIs, and I've learned what kinds of questions I can ask where they will know the answer, and which ones they won't.

    MR: Do you use AI every day?

    RH: Probably most days. I guess I’m still probably looking at Twitter more than at AI, but they're getting better, and I'm slowly learning what kinds of questions I can ask.

    MR: Do you play any games?

    RH: I used to play video games with my kids when they were young, and that was a way my two sons and I would bond. But once they got old enough to play games on their own, I stopped playing. I can see the part of myself that would get sucked in, and that's what puts me off.

    MR: Do you have any hobbies?

    RH: Well, I like cats and I like snacks, if you want to call those hobbies.

    MR: What's your favorite snack?

    RH: Oh, I like Krispy Kreme doughnuts or a Dove bar.

    MR: When you write, do you eat? Do you drink? Do you listen to music?

    RH: Music is so compelling to me that I just can't work while listening to music. Some people can do it in the background, but I just can't. Sometimes when I'm listening to music, it feels like that's the most important thing there is. It's very compelling, but, interestingly, an hour later, it hardly seems that important. So there's an interesting way in which music distorts or changes your priorities in the moment.


    Sci-Fi and AI

    MR: Do you feel that way about TV or movies? Are you a movie or TV buff?

    RH: I'm more of a movie person than a TV person because I just like to see a whole story played out. It’s a much bigger ask to watch a whole series than it is to watch one movie.

    MR: Do you like sci-fi movies?

    RH: Less than I used to. And so, at the moment, I'm not even sure I really do. If I'm on the plane looking for movies to watch, I'll browse the sci-fi and I'll usually go, “Eh.”

    MR: Why is that?

    RH: When you're young, you don't know much about the world, and so, most sci-fi seems much more plausible. The more you know, the more you go, “Well, this can't happen.” It just becomes less interesting because it's just less believable.

    MR: Do you like hard science fiction?

    RH: It’s not just about the technical plausibility, it's also about the social plausibility.

    I'm a social scientist. Of course I know enough technology to tell when the technology is not plausible, but I also know a lot more about human behavior and the arc of history, so I can tell which science fiction is socially plausible.

    MR: What’s the most important or your favorite science fiction novel of all time?

    RH: For many people, Asimov’s Foundation stands out as the iconic novel of ideas. When I've asked AIs which fictional character I’m most like, they tend to compare me to Hari Seldon from the Foundation series.

    I liked Vernor Vinge's novels a lot.

    MR: Are you worried — psychologically and emotionally — about the existential risks around AI?

    RH: Not really.

    MR: Really?

    RH: If you're asking about my psychology.

    MR: What about philosophically or intellectually?

    RH: I think anybody has to admit that large changes are worrisome just by their nature. Certainly, the arrival of human-level AI would be one of the biggest changes we've long expected to happen in the next few centuries. So a lot of our worry about change has to be concentrated around it. There's also the excitement or enthusiasm for change, but the worry has to be there too, right? You’d be stupid to be blasé about huge changes and to think they couldn't possibly go wrong.

    MR: If you had to guess whether humans will be here in a substantial way…I know you had a great piece on the Fermi paradox.

    RH: So 25 years ago I wrote about the Great Filter, and then five years ago I returned to the topic and did the Grabby Aliens work, which we've since published in astrophysics journals. I've built a more mathematical model of it in the last few years.

    MR: How would you describe this to a lay reader?

    RH: Even without AI, most thoughtful futurists realize that in the long run, our descendants will become more powerful than us. If they had conflicts with us, they'd win, and they would probably change a lot in values and morals and things like that. That's just what we've always expected. More recently, we also expect rates of change to keep speeding up. And because lifetimes are increasing while change is accelerating, any one generation overlaps with more generations before and after it, which means there can be more conflicts between generations. But still, most everyone expects that future generations will outdo them in power and will also just change in values. That's what everybody's expected.

    MR: But the future generations will still exist.

    RH: Well, I'm using the word descendant in a pretty abstract way that now includes AIs. That is, suppose AI never happened: even then, everybody expects our descendants to get more powerful and strange. We expect the same for AI. But I think many people feel it's much less okay if that happens with AI descendants than with other descendants. They view AI essentially as an invading armada from an alien planet that’ll be here in a century: a co-existing other on its way to have conflict with us.

    MR: And that's not how you view it?

    RH: I think all evolved species, evolved minds will have two robust habits. One is rivalry and suspicion of co-existing competitors, and the other is indulgence toward descendants. And the co-existing rivals can be very similar to you and you can still have a suspicion and rivalry to them, and your descendants can be quite different and you'll still be indulgent to them. These are just two generic evolved habits.

    MR: Sounds good.

    RH: But with respect to AI, people have categorized them as the co-existing rivals, but they aren't…they don't exist yet. So from our point of view now, they are descendants, and I think we should and will in fact indulge them as descendants in the way that all evolved creatures indulge their descendants. And that's appropriate.

    You expect your descendants in the long run to, in essence, replace you. That’s what we've always expected, and that's okay. It's always been okay, and I think it's still okay with AI. The fact that they're made out of metal, I think just doesn't carry that much weight with me in terms of what attitudes I should have toward them. For many people, that's just the whole thing really. It's just, “Well, they're made out of metal, so they're just alien competitors.”

    MR: I would say that's how most people think about it. I think you’re probably in a really small minority here.


    Prestige

    MR: You have been surrounded by some of the people who are doing cutting-edge stuff that makes lots of money. Why aren't you a billionaire?

    RH: To be a billionaire, you have to join billionaire teams.

    MR: But no one ever wanted to give you equity in X, Y, or Z thing?

    RH: The closest I came was I was on the board of advisors for Augur, which was a prediction market.

    MR: I remember that.

    RH: I got the most money I ever got from that — when the coins went public and I sold my coins. But mostly, I just get very little for being an advisor. As you might know, mostly people pick boards of advisors not to get advice…they just pick it to get prestige associated with those advisors for their organizations. So the game of making a lot of money by being an advisor is mostly about the prestige you can sell, not about your advice.

    MR: I feel like you're a very prestigious person.

    RH: Well, you might think so, but most billionaires are really very focused on very traditional prestige markers.

    MR: I'm not expecting an oil tycoon or Walmart to put you on their board, but what about Mark Zuckerberg?

    RH: Even in tech, they want Harvard graduates, they want New Yorker writers. They want traditional prestige in the people they buy.

    MR: Even a Peter Thiel?

    RH: Yes, indeed.

    MR: Interesting.

    RH: And they also, of course, want loyalty.


    Physics and Consciousness

    MR: Do you believe in God?

    RH: No.

    MR: Do you believe in an afterlife?

    RH: No. I think it's not at all crazy that there could be huge powers out there in the universe, far away in space and time.

    But the main thing I am suspicious of is the idea that these huge minds have been very involved in humanity's trajectory, or that they answer prayers. Answering prayers is the most implausible part. Even if there are huge powers who know about us and have done some things to influence humanity's trajectory, they're not listening to an individual's prayers and answering them. That's pretty silly.

    MR: What about consciousness — do you think at the end of the day it’s all material stuff?

    RH: I don't think it really makes much sense to talk about non-material stuff. There's just stuff.

    MR: But do you think consciousness is made up of just stuff?

    RH: I think there's just one thing people know about consciousness, really only the one thing, and everything else is projections they make based on their priors about similarity. The one thing everybody knows is that they feel compelled to say that they are conscious right now. That's it.

    Then they infer from that that they were conscious in the past, or that other people like them are conscious, or that they will be conscious in the future. But in terms of data, you've got nothing. In fact, you don't even really have data that you are conscious right now. What you have is a strong presumption, and we'll never get any more data than this little, teeny piece we have right now. That's the striking fact. All the data we will ever get will be about correlations between things happening and things we see.

    MR: Basically you’re saying I'm never going to know whether you're a [philosophical] zombie or not?

    RH: You'll never have any data. Whatever you conclude, you'll be drawing inferences from your theoretical expectations, the correlations you expect. So people doubt AIs are conscious but are confident other humans are, not because of any data; they just decide similarity is good enough.

    MR: And just in your gut — not a smart person answer — do you think other people are conscious?

    RH: As conscious as I am. But am I conscious?

    MR: Do you think you're conscious?

    RH: I am in doubt because I just realized I basically have no data.

    MR: It feels like you're conscious.

    RH: But what does it mean to feel like I'm conscious? What it means is I feel inclined to say that I am. That's all it means.

    MR: Well, you have that feeling right now.

    RH: But what does that mean?

    MR: I don't know. But you have it.

    RH: Right. But somehow, if the idea is this feeling is more than physical, I'm just not so sure what that means.

    MR: Oh, I wasn't saying that.

    RH: Obviously my brain has enough computational power and complexity and the habit of evaluating its actions and its circumstances in emotional valence terms, right? We have emotional systems, and they rank things and rate things all the time. And so, clearly, I've got a system that ranks and rates things. But does that mean it's more than physical?


    Simulation

    MR: Let me ask the last question of these smart questions — simulation stuff.

    RH: The simulation hypothesis is cute, and it's fun to think through, but I think it's unlikely to be true for a particular technical reason: when we look at how interested we are in the past, that interest fades quickly, and it fades faster than the growth rate of society. If you put any one year into Google Ngrams, you can see how quickly interest in that particular year fades away before and after.

    MR: Oh, that's cool.

    RH: And it's a consistent fast fade.

    MR: That's so cool.

    RH: So now, if we're postulating that future beings are simulating their past, they're going to simulate the past they're interested in, and that's going to be relatively recent.

    MR: Isn’t the argument that that could be now?

    RH: Well, the question is how likely it is that they'll be simulating now, compared to closer history. We have games and movies and novels; we think about the past, but overwhelmingly about the recent past. We have almost no novels set in Ancient Egypt, for example; most are set in relatively recent history. And the rate at which interest in the past declines is actually faster than the rate at which the economy grows. So the scenario that we are being simulated is an integral over every future date at which they might be simulating their past, weighted by their interest in a time as far back as ours.
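    The decay-versus-growth argument can be sketched numerically. This is a hedged toy model, not Hanson's actual calculation: the decay rate `d`, growth rate `g`, and the `sim_weight` function are made-up illustrative choices, and the only claim is the qualitative one, that when interest in the past decays faster than simulation capacity grows, the total simulation weight concentrates on the recent past.

```python
import math

# Hypothetical, illustrative rates (per year) -- not Hanson's numbers.
d = 0.05   # decay rate of simulators' interest in a given past year
g = 0.02   # growth rate of simulation capacity; note d > g

def sim_weight(t_past, horizon=2000):
    """Total simulation 'weight' on a year t_past years before now,
    summing capacity * interest over all future dates T (years from now)."""
    total = 0.0
    for T in range(horizon):
        age = T + t_past                      # how far back t_past looks from T
        total += math.exp(g * T) * math.exp(-d * age)
    return total

# Because d > g, weight falls off roughly like exp(-d * t_past):
w_now = sim_weight(0)
w_century = sim_weight(100)
print(w_century / w_now)   # far less weight on a time a century further back
```

Under these assumptions the ratio works out to exp(-d * 100): the growth factor cancels out, so a year a century further back gets only a small fraction of the simulation weight, which is the sense in which "we're too far back in time."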

    MR: Basically, the argument is that we are the Goldilocks generation.

    RH: But if the argument is just that we are the most interesting generation in history for anybody to want to think about, that's just dumb.

    MR: Sorry, when I say Goldilocks, the argument goes that we are the generation that's close enough to when this techno-simulation becomes possible, but not so far back that life is terrible. I think this is Bostrom's argument, anyway. I don't believe in it because I believe in God.

    RH: God could allow this technology. It's not obvious that God prevents it.

    I think it's more likely than not that the future will eventually get big enough and they will do ancestor simulations. That is, the existence of ancestor simulations is, I think, more likely than not. But the ancestors they will simulate will be recent ancestors, and we're too far back in time.

    MR: That’s the point I want to get at — you think we’re just too far back in time?

    RH: Right, we're not the ones that'll be most interesting.

    MR: What I think Bostrom would say is that we are so close because of the singularity stuff.

    RH: We're not.

    MR: You don't think we're so close?

    RH: Right.

    You were asking me about my AI worries; I'd say I still think we're a long way off.


    Not Rong

    MR: Do you read blogs like LessWrong?

    RH: I might look at something if somebody tweets about it and the topic seems interesting. But mostly I'm not tracking anything regularly…even my colleagues’ blogs.

    MR: Are you friends with these rationality people?

    RH: So I was at this weekend event called Manifest, and there were a lot of nerdy rationality people there, and I enjoyed that. That's once a year.

    MR: Are those people your friends?

    RH: My strongest friends are my colleagues who I go to lunch with very regularly. These tech nerds are people I meet once in a while, but I don't interact with that much.

    MR: Let me re-ask it this way: Do you mostly hang out with normal people or weird people?

    RH: I mostly don't hang out. I sit at my desk. I go home, I read at home, and then I don't get invited to many parties here in D.C.

    If I lived in San Francisco, I'd probably get invited to and go to more parties because they're more my type. But D.C. people aren't my type and they don't invite me to stuff. But I go to lunch regularly with my colleagues. That's my main socializing, really.

    MR: Who are some of these people?

    RH: Well, Richard Hanania, for example. Scott Alexander and Aella. I was invited to a Curtis Yarvin event, but I couldn't go.

    MR: What's the thing that is interesting you most right now?

    RH: I get obsessed with things for periods of years at a time. In the last two years, I've been obsessed with cultural drift. My main talk at Manifest was on cultural drift. So that's just been my obsession the last two years.

    As I've said, my basic intellectual strategy is to look for important, neglected stuff where I can find an angle. So even on something I've done before, if lots of people get all over it, then I get less interested.

    MR: Is that how you feel about AI?

    RH: Certainly. I tried to write about AI decades ago and there wasn't much interest in it. In the last decade, there's lots of interest, but nobody cares about what I say.

    MR: Why do you think nobody cares?

    RH: Well, there was this AI risk movement that was just very eager to promote the idea of AI risk, and I was more of a skeptic for that, so I just didn't get invited to any of their meetings.

    MR: You were a skeptic of the risk?

    RH: Right. So they were very eager to engage with people who either agreed with them or were high enough profile that even disagreement was a credit to them, because somebody so high profile would take their stuff seriously. But I was neither of those.

    MR: That's so bizarre to me because I think of you as this intellectual giant who is behind all these things. But I guess you're just always five years too early. I should buy “cultural drift” futures.

    RH: It depends, too early for what, right?

    MR: Or maybe you influence people, like prediction markets for instance.

    RH: Right. But that's the point.

    If what you want to do is join a movement and have people say, “Yay you for being in our movement,” then you can't be one of the first. You have to join later, and you have to bring something to it, which is prestige you've gotten from somewhere else. So if I just focus on being the first and thinking things through first, I don't get much prestige that way, and I don't have much prestige to offer a movement later. All I have is the insight, early. So that's the position I've taken.

    MR: What's on the cultural drift stuff? What’s going to be in fashion in a few years on that?

    RH: That's definitely something, and nobody else is on it yet. I have a Quillette article on it, but I can just tell you.

    MR: I'll link to that, but it leads me to a better set of questions right now. Of all your writing that you've done in the past, is there a piece that you are either most proud of or would expose someone who has never read you to your way of thinking?

    RH: So I'm just not going to be good at that. I swim in my ocean, and I find it hard to know how to show me to somebody else who can't see me because I see myself too well. What I could say is if you want the work of mine that might most impress you, it might be my recent Grabby Aliens work.

    Or you could look at my academic publications before getting tenure, which are relatively mathematical and formal game theoretic stuff.

    If I'm going to say what's my biggest idea in my lifetime, it's probably Futarchy or decision markets, which are finally being experimented with in the last few years.


    Prediction Markets and Futarchy

    MR: Do you work with either Polymarket or Kalshi?

    RH: I met them, but I don't work with them in the sense that I don't have a role.

    MR: I don't understand why these people don't make you the king.

    RH: Then you don't understand how the world works.

    MR: I don't like how the world works. You should be the king.

    RH: Maybe you do understand. You just don't like it.

    MR: I don't like it. They should be selling t-shirts with your face on them.

    RH: But the world just actually doesn't that much care who's right about stuff, or right early. The world just doesn't really care.

    MR: I care. I care.

    RH: You might care, but you have to realize most of the world mostly doesn't. People think being right early is good because it might get you money or prestige or positions of power early. Those are the reasons it might be good to know things early, but otherwise, people don't really care much whether you were right about stuff early. The world doesn't care.

    MR: One last question in this line: Is there a podcast that you've done that has gotten the most hits or people should look at?

    RH: I did a long podcast with Lex Fridman. I also did an interview with Sam Harris.

    I didn't talk about Futarchy in either of those cases, but I would say Futarchy is my best idea because I still think I've got a shot at remaking the world in the next century. The world just may adopt a much more competent, effective form of governance — one that I invented. And then, if so, that will be a big change in the world. The world will remember this time when it became much more competent.

    If you worry about AI risk or global warming or all sorts of other problems, one of the reasons you worry is because we just have incompetent government. If they were competent, you wouldn't have to worry. They would just be handling it. But they aren't. That's why you feel like you have to get worried about them. And we just have a lot of ways in which our society is just broken at deep levels because of our incompetent governance. So having competent governance would just remake our world in pretty dramatic ways. And there's actually reasonable chance that that will happen.

    MR: And prediction markets fit into this.

    RH: Well, a certain use of prediction markets for governance will make governance much more competent. And if so, we will just be in a new world where big, important problems are dealt with through big, competent organizations whose job it is to deal with them. And you won't have to be an amateur asking, “Gee, what are the problems with the world?” I mean, if you think about an oil company, there aren't a lot of amateurs out there asking, “Gee, how can I help the world get more oil and make the pipelines work or something?” You don't worry about that. Why?

    MR: Price.

    RH: Because there's a system that works that does it, right? You're maybe worried about global warming or human rights or whatever it is because you think the world doesn't have a system to figure that out and make it work like it has for oil pipelines or movies even, or for food. You aren't out there saying, “Gee, how can we make sure the world has good enough tasty crackers to buy? I wonder how I could volunteer to make sure the world has good, tasty crackers.” You don't think that way. Why not? Because you believe the system for making tasty crackers is just fine, and you trust it. So if we had systems like that for other big problems in the world, you would be living in a very different world.

    MR: You're basically talking about the equivalent of discovering the free market for governance. The price mechanism, private property, and the entire mechanism of organizing our industrial society — you’re talking about doing that for governance problems.

    RH: You and I see that at least as a competent system and we trust it as a competent system. So the vision is we can have systems nearly that competent in government.

