Interview with Nate Soares
Nate Soares is a computer scientist and the president of the Machine Intelligence Research Institute. He is the co-author of If Anyone Builds It, Everyone Dies.
Sci-Fi
Max Raskin: I wanted to start with sci-fi. Are you a fan of science fiction?
NS: In a sense. I read some as a kid.
MR: Are you an ongoing fan?
NS: It's not a big part of my personality. Ender's Game had an impact on me as a kid.
I enjoyed Vernor Vinge. He was actually at one of the AI alignment summits I was at in 2015. A Fire Upon the Deep is probably one of my favorite books. But these days I think there's far too much cynicism in popular media, so if I'm going to read sci-fi stories I'll stick to the older stuff.
MR: What about movies? Were you into the Terminator movies or anything like that?
NS: No, I was never really a big movie person. I grew up in rural Vermont, and we just didn't watch a lot of TV. I've seen some movies, I've seen Terminator…I was just never really a movie guy.
MR: What about Ender's Game influenced you?
NS: I think I probably read it when I was 11, which is the age of Ender in the book. One thing that I found influential about it was that it showed 11-year-olds as having initiative, having thoughts, having skills in a way that it felt like the society around me when I was 11 was not realizing I had. I think that the school system is squandering quite a lot of child potential. (Not that everyone is Ender.)
But it felt like a nice story about a kid who’s brought to a space station to save the planet, while I was stuck learning biology for the 17th time because the other kids in rural Vermont hadn't learned it the first 16 times.
Bay Area Rationalist
MR: What community would you describe yourself as being a part of?
NS: I'm very much a recluse, but I definitely have a lot of friends in the Bay Area Rationalist scene, which I'm guessing is what you're asking about.
I think in any scene there's a difference between people who attend the meetups and some core group of friends who hang out at each other's houses and don't really ever go to any meetups.
MR: But that’s your scene? I mean you wouldn't describe yourself as part of the traditionalist Catholic scene in Virginia?
NS: Correct. I mean I was raised Catholic.
MR: Oh, really?
NS: Yes, but in Vermont, not Virginia.
MR: Do you believe in God?
NS: I do not.
MR: Do you believe in an afterlife?
NS: I do not.
MR: Do you believe in consciousness as being separate from material?
NS: I do not.
MR: Would you describe yourself as a materialist?
NS: Not in the consumerist sense.
I think a lot of philosophies bring in a lot of baggage, and I'm not saying I would take all the baggage of the traditional materialist philosophers. The fact that something can be made of parts does not mean it is not wonderful, does not mean it is not beautiful. I expect the stuff around us is made of parts, and that does not take away from its wonder.
MR: There’s nothing called “spirit” that is somehow tied up with our consciousness and survives after our bodily death?
NS: That's right. But I think in the history of science, certain babies have been thrown out with the bathwater. When we learned that the seat of consciousness is the brain, a lot of scientists said, "Therefore we have no souls," when in fact, we do have a soul…it's just made of meat and it rots when you die. It turns out you can make a soul out of meat, and that's pretty cool. This is just semantics, but I would not give up on the idea that there's something to humans and consciousness that's beautiful, and worth preserving, and important. I would be happy to use the word "soul" for a thing humans have; I just don't believe you can detach it from the brain. I believe it rots when you die, and by rots I mean it goes away when you die, and then the substrate it used to be on rots.
MR: Is it fair to say you're a part of this rationalist community?
NS: Yeah, totally.
MR: How does your day differ from mine based on that? I'm assuming you go to the bathroom, and you sit on the toilet the same way I do.
NS: Yes. When I say I can be counted as a member of the rationalist community, I think that's mostly a question of who your friends are. I think I'm friends with some of the guys who administrate lesswrong.com, and that makes me part of the rationalist community.
So how does my day differ? Sometimes I talk with Oliver Habryka on Slack. In terms of how does having studied human rationality affect my thought patterns in my life? That’s a different question.
Running, Walking, and Communications Guidelines
MR: Do you have an interesting diet?
NS: No, not really. I'm one of the lucky few who can eat whatever they want and be doing totally fine.
MR: Do you work out?
NS: I take a lot of long walks where I think about work problems or something. I'm fit. Sometimes I randomly run a 5K, sometimes in a race. I think my best 5K time was like 19:36, which is nothing to sneeze at.
MR: When you run do you like listening to music?
NS: Not really. Usually I'm not trying to run a sub-20 5K; I am trying to think, and I find that the faster I'm running, the harder it is to think, so usually the pace will be a bit lighter, and usually I'll be thinking.
MR: But you don't listen to music when you run?
NS: Rarely. Sometimes if I finish a thought and I'm far from home, then maybe I'll put on some music on the way back.
MR: What kind of music do you like?
NS: When I'm running, something uplifting. Maybe it'll be a little dance-y, sometimes it'll be pop-y. If I were looking for the stuff that I listen to more than the rest of the population, maybe it would be folks like Michael Nyman or Ludovico Einaudi, or modern instrumental.
But I also listen to the same thing as the next guy.
MR: Do you listen to Taylor Swift?
NS: I don't listen to Taylor Swift very much, but sometimes if a girlfriend's really into an album, I'll listen to it.
MR: Do you take naps?
NS: Yeah, sometimes.
MR: I noticed you wrote a long document giving guidelines on how people should engage in conversation with you.
NS: Yeah.
MR: That's something that most people do not do.
NS: That's right.
Recreational Mathematics
MR: Is there anything else like that…things most people do not do that you do?
NS: I do all sorts of things most people don't do. I'm not sure you should blame it on the rationalists.
For instance, I have been living out of Airbnbs and hotels for the past couple of years.
My hobby projects are probably different from most people's hobby projects.
MR: What are your hobby projects?
NS: I have various math ideas I'm working on just for fun. Recreational mathematics.
MR: Like what? I have a doctorate in mathematics from MIT so you can explain.
NS: Oh, nice. Do you know much about any theorem provers?
MR: Yes.
NS: Do you know of Agda, the theorem prover?
MR: No.
NS: It's barely a theorem prover. It's more of a programming language that has aspirations of being a theorem prover. You know about large cardinal axioms in set theory?
MR: Yes. Set theory — that's my expertise.
NS: Great. So basically, I'm trying to formalize a stronger recursion principle for a language like Agda, which is more or less like trying to get large cardinal axioms into theorem provers. There are things you want to prove that you need a large cardinal for, and most of the modern theorem provers don't have anything like that strength, at least not in a fashion that's computationally convenient. You can just take a large cardinal axiom as an assumption, but it wouldn't "compute"; you'd miss out on lots of the amenities you usually get when working with a language like Lean or Coq or Agda. So basically, trying to upgrade theorem provers is one thing I try to do in my free time.
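For the curious, here is a minimal sketch in Lean 4 of the "it wouldn't compute" point. All names below are hypothetical stand-ins, not any real library's API: a large-cardinal-style assumption is postulated as a bare axiom, which proofs can use freely, but anything built from an axiom is opaque to evaluation and never reduces.

```lean
-- Minimal sketch (hypothetical names throughout): taking a
-- large-cardinal-style assumption as a bare axiom in Lean 4.

axiom Ord : Type 1                  -- stand-in for a type of ordinals
axiom Inaccessible : Ord → Prop     -- stand-in inaccessibility predicate

-- The assumption itself, postulated rather than constructed:
axiom existsInaccessible : ∃ k : Ord, Inaccessible k

-- Proofs can invoke the axiom freely...
theorem weHaveOne : ∃ k : Ord, Inaccessible k := existsInaccessible

-- ...but terms depending on an axiom never reduce during evaluation,
-- which is the sense in which the assumption "doesn't compute."
#print axioms weHaveOne  -- lists the axioms this theorem depends on
```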
MR: Are you using any large language models to do that?
NS: No; they're not really there yet. It'll be nice when they can do all of the shitty lemmas for me.
MR: I have a confession to make. When I said that I got my PhD in set theory at MIT, that's true in every sense, except that I didn't get my PhD in set theory at MIT. I know nothing about mathematics; I just figured it would be better if you thought I was smarter, and that you would explain it the way you would explain it to a smart person.
NS: Yeah, maybe.
MR: I’m going to put this all in there.
NS: Great. To be clear, this is purely recreational. I haven't been doing very much of this while writing the book.
Google Gemini
MR: Do you use any of the models on a daily basis?
NS: It depends on the day. Weekly, probably. Daily, it depends on the day. Depends on what I'm doing.
MR: Do you feel comfortable sharing which one you use?
NS: I guess maybe it's a daily thing now because it's integrated into Google search, and sometimes it spits out something that's somewhat helpful. So Google Gemini is probably on top just due to convenience. One thing I was using it for: if I had some vaguely remembered source I wanted to cite, I'd describe it and see if the model could dig it up faster than I could, and often it could.
MR: My friend in college had this fantastic idea for a new kind of punctuation mark. It was squiggly quotation marks to represent an “almost quote” — when you don’t remember exactly what someone said. Do you endorse this?
NS: In my own notes, I use a tilde in front of the quotations for this concept, so I have this concept.
Basically, I think the word “like” in English is becoming a tool for this.
MR: Are you Android or iOS?
NS: I'm an Android user. I used to work at Google.
MR: Do you have ChatGPT on your phone?
NS: No, I don't.
MR: Why is that?
NS: Don't need it. Haven't found it that useful for things.
MR: Do you use Anthropic?
NS: Like I said, if I'm trying to dig up a vaguely remembered source, I find a language model very useful. But most of the other stuff I'm doing…an LLM isn't going to write my book.
MR: And why not?
NS: It would be gibberish.
MR: But it does a pretty good job if you want to write certain things.
NS: I don't find their outputs terribly readable yet.
I can see why some people have lives where they are using it a bunch more, but my hobby programming projects involve quite a lot of fiddly math that an LLM can't handle yet. Although I might try to get an LLM to handle the grunt work if and when I get back into my hobby projects post-book tour.
The book I'm writing is too precise, too technical, too much leaning on the arguments being valid, leaning on the arguments being specific and narrowly targeted. I'm not interacting with bureaucracies. If I was interacting with a bureaucracy, sure, I could use an LLM to generate the bureaucratic text.
I arranged my life such that I don't have a lot of bureaucracies to interact with.
MR: How did you use it for research, other than as a quote researcher?
NS: I'm not talking to an LLM asking what it has as a source on X. I sort of know my sources, I know where my beliefs come from, I know the evidence, I know the observations. The LLMs are helpful for digging up an old paper from a poorly-articulated description of its contents.
MR: Sounds like you’re using it as a better Google?
NS: Yeah, like a better Google.
Twitter/X
MR: What's the homepage on your internet?
NS: It is blank.
MR: What do you think is your most-visited website?
NS: It might be Twitter.
MR: Do you like Twitter?
NS: I don't love it, but I think I have a fine feed that gives me a fine source of news. I think you've got to really beat the algorithm until it stops giving you garbage.
MR: Whose tweets are you excited to read?
NS: I read Eliezer's tweets. I read a bunch of my friends' tweets. Probably I'm most excited to read my friends' tweets. Aella is a friend; I like reading her tweets. Michael Keenan is a friend who I think has particularly high-quality tweets. It's rare, but he's just considerate, he's thoughtful, he's careful, accurate, precise. I like him. I like his tweets.
MR: Do you still feel strongly that dolphins are fish?
NS: Absolutely.
MR: Are there any positions that you've changed your mind on?
NS: I have all sorts of somewhat trollish positions like the "dolphins are fish" one. Off the top of my head, I don't remember a ton that I've changed my mind on; it's hard to index that way. But one example is seeing evidence that sports betting is bad for quite a lot of the population.
MR: Do you think a hotdog is a sandwich?
NS: I don't generally care about questions of semantics. The reason that I was vehement about dolphins being fish is that I think there's an ongoing situation where scientists find some technical concept that doesn't actually work to replace a useful day-to-day concept, but they try to replace it anyway. The dolphin tweet actually came out in part because people were starting to come for “berry.” You were seeing back at that time some people starting to say, "Hey, did you know that actually a strawberry isn't really a berry, and a banana actually is a berry?" I think maybe Neil deGrasse Tyson was one of them. They're talking about anatomical isomorphism or biological analogy. I don't know what words botanists would use, but they're saying a banana is anatomically isomorphic to a blueberry in a way that a strawberry isn't, and I'm like, "Oh, that's very interesting, but that's not what ‘berry’ means."
MR: There's a famous Supreme Court case in 1893, I think, called Nix v. Hedden. It was a unanimous opinion by Justice Horace Gray saying that tomatoes are vegetables and not fruits for the purposes of tariff law. Even though jerks like Neil deGrasse Tyson could say that ordinary usage doesn't count or something like that.
NS: Yes, there’s this other notion of fruit, which is the thing you would put on a fruit pizza…
MR: …a fruit pizza?
NS: Fruit pizza is a type of dessert. I don't know if you guys do it outside of New England.
MR: You should have said fruit salad! What’s a fruit pizza?
NS: I don't like fruit salads. Fruit pizza has way more cream cheese.
MR: There’s cream cheese in a fruit pizza?
NS: Yeah. A fruit pizza is basically a sugar cookie with cream cheese and fruit.
MR: I've never heard of such a thing.
Writing
MR: I find the rationalist community will make arguments that somehow just don’t correspond to reality, but I can't necessarily prove why. Their arguments are clever, but I think the worldview is off.
NS: One skill that I consider important, and that a lot of people in the rationalist community are missing, is the skill of not getting bullied into believing something.
It's also important to develop the skill of being able to follow arguments, and notice when you are compelled, and change your mind when you are compelled. But there’s a difference between that and being bullied into a belief.
Separately, one of the critical skills of a rationalist is having degrees of belief rather than binary beliefs. One can always say of a topic, "I know very little, but here's my intuitions."
MR: So I want to talk about the book. It’s fantastic as a book.
NS: Thanks.
MR: Again, I think I disagree with it, but it’s just such a well-written book. Who influenced your writing style?
NS: I'm sure Eliezer influenced it to some degree…having him as a co-author sure as heck exerts a lot of influence on this particular book, you could say.
MR: Who's smarter, you or Eliezer?
NS: Probably Eliezer. But I can think longer. I just have more hours in me for thinking hard.
MR: Who's older, you or he?
NS: He's older.
MR: Did you like working with an editor?
NS: It was great.
MR: Who came up with the title?
NS: I think it was Eliezer. I think it might've started out as a phrase that we used in one of the chapters, and then we thought it would be the chapter title because it was so good and then a couple of minutes later we realized it should be the book title.
MR: What were the roads not traveled?
NS: Oh, I forget. The one good title thrusts all the bad ones out of memory.
MR: My worry about this book is that it's so well-written it might be obscuring the truth. It's so compelling and so persuasive that it might be obscuring what the reality is.
NS: We can all hope. We can all hope that I'm just totally wrong about reality and that I've somehow been obscuring it for myself. It's not my current guess. My guess is it's persuasive because the arguments are true, and it's actually a pretty easy call.
If Anyone Builds It, Everyone Dies?
MR: What do you think is the biggest flaw in the argument?
NS: It looks to me like an easy, slam-dunk argument.
MR: You're not saying it's 100% because you're making a counterfactual argument or a prediction.
NS: Not 100%, sure. But you never have 100%.
If there was a way to bet on it, I would take some bets. The question, “Where is the flaw in the argument that if we build AI like this it'll just kill us?” that’s sort of a different question than, “Is there hope that we live?”
There's plenty of hope that we live. The book is not titled, We All Die, the book is titled, If Anyone Builds It, Everyone Dies. If you're asking where is the hope? Where is the chance? Well, that's easy, the hope and the chance is that world leaders are like, "Oh, I actually don't want to die of this," and then they shut the thing down.
MR: So I wanted to talk to you about the chapter where you actually sketch out a scenario of how this happens. A big theme of the book is that things are weirder than we can even dream up.
NS: Yes…I can predict the end, but not the path, which is standard in science. When Leo Szilard came up with the nuclear chain reaction in 1933, he would've had an easy time saying there was going to be something like nukes, something like nuclear power plants. He would have had a hard time saying, "It'll be the U.S. doing it in 1945." This isn't some weird unfalsifiability of my theory; this is just how usual prediction happens.
MR: Exactly.
NS: Usually you can predict some things well, and other things you're like, "Who the heck knows, man?"
But I expect to die from this.
MR: You do?
NS: It’s my top guess.
MR: What's the probability, do you think, that the reason we don't build superintelligent AI is because we make a decision not to build it? And then a second question: what's the probability we don't build it because we can't build it?
NS: I think humanity can almost certainly get there. There's a question of time. I think a lot of people who only noticed the field when LLMs became visible don't quite understand how the field is able to blow past obstacles.
Back in the day of AlphaGo, DeepMind's Go-playing AI, people might have said, "Oh, I don't know. We don't need to be worried about this AI stuff because this particular tree-search/policy-network-based thing doesn't seem like it's going to scale to superintelligence. It's hitting a plateau." It didn't need to go all the way. People just invented LLMs instead. Now people are like, "Ah, well, maybe the LLM things, we don't need to worry about this too much because the LLMs might hit a wall." Okay, suppose they do; the field will invent something new instead. They'll just blow right through the next wall. There's a question of how many walls they have to blow through. The people who used to argue it'll never be possible, that theory sure as heck was not predicting that computers would start talking in 2022. It just doesn't seem very feasible to me that the field can't get there in time.
MR: So you think the first part of your title the “If anyone builds it…”
NS: …is a choice.
Not literally only a choice; maybe we avoid making AI for reasons other than choosing not to. Maybe there's a thermonuclear war that knocks us back to the Stone Age. I don't know where the prediction markets put the chances of thermonuclear war, but maybe there's a half-percent chance of nuclear war.
MR: The second part of the title, "…everyone dies," do you feel more confident in this than in our ability to build it?
NS: More or less. I think if you wanted to quibble about this one, you start to get into quibbles where the situations in which you don't die aren't nice.
MR: Like we’re all living like batteries for the Matrix or something like that.
NS: Yeah. You don’t die, but you probably should have.
Speculating about what machine superintelligences would do with humans: if they want humans for some reason, it's not to put you in the mines, because there are more efficient mining methods.
MR: Like we’d be kept around for some sick experiment or something like that?
NS: It doesn't necessarily need to be sick. It's no easier to get malice in there than it is to get benevolence. You can get neither, and you get some weird, strange alien thing.
MR: That's one of the great things about your book…whether it’s true or not it forces you to think through these thought experiments and they disrupt the usual patterns of thought. Who came up with the thought experiments, you or Eliezer?
NS: That was mostly Eliezer, but some of them are mine.
MR: At what point can I call you up and you’d admit you were wrong? What about a hundred years from now and this hasn't happened, and not because of a choice, but let's just say it didn't happen.
NS: I mean in five years, maybe, if AI goes fast enough. Or maybe not for twenty, if AI goes slow. It depends how fast AI goes. If at any point there's a superintelligence in the original sense of the word, not in the sense where the AI companies redefine superintelligence to mean a slightly better Google search, but an AI that exceeds humans at all cognitive tasks, and you let that run for five years and we're not all dead, then I'm like, "Gosh, I sure was wrong." Probably you can let a full-fledged superintelligence run for a week, and I might be like, "Well, I was wrong, it should've taken over already by now." There are probably ways for a full-fledged superintelligence to take over the world in a week.
More or Less Wrong
MR: So you think it's going to happen quickly?
NS: That's my top guess. If you have a superintelligence and you run it for a week and we're not dead, I'm like, "Well, that sure violates my best model." It doesn't violate all my models. It might in fact be three years before it could take over, for all I know. But my top guess is it takes a week.
MR: But if a superintelligence that's been running for 50 years doesn't kill us, you'd admit you were wrong?
NS: 50 years is insane. That just shouldn't happen on my models.
MR: That's giving yourself less leeway than I would think.
NS: My neck is all the way out, and I hope it gets chopped off.
MR: So I interviewed Curtis Yarvin, do you know him?
NS: I know of Curtis.
MR: I asked him if he believed in God and he said he was basically a Reddit atheist. Given where we are in technology, and given that you actually believe we could be nearing End Times, does it make you change any of your eschatological opinions or believe in something supernatural? Is there something interesting going on beyond the materialism of your life?
NS: No, deciding to believe in the supernatural because times look tough just seems a little crazy. I don't really do crazy.
MR: Why do you think not?
NS: It's crazy.
MR: But a lot of people do “crazy.” There’s this philosopher Alvin Plantinga who calls it “sensus divinitatis.”
NS: Yeah, I don't know why they're doing crazy. It seems kind of crazy to me.
MR: But you don't think it's kind of fun?
NS: I have never really been tempted by madness.
MR: Interesting.
NS: I'm not sure why people find madness so seductive. I'm just like, "No thank you, madness."
MR: Why do you think you don't find it seductive?
NS: I don't think it should be seductive. Why would it be? I don't get what other people see in it at all.
MR: Why rationality?
NS: Why be friends with Oliver Habryka or why try to learn to think better?
MR: Why try to learn to think better?
NS: So you can think better. I just try to think better in my normal day-to-day operations because I do a lot of things by thinking. It'd be nice if it were a little bit better.
MR: I guess to what end?
NS: Well, my civilization seems to be trying to kill itself, and it sure would help for me to think more skillfully, as I'm trying to make it not do that.
MR: To what end?
NS: Also, my hobby projects will go a little bit better if I think better. Probably my relationships will go a little bit better if I think clearer, if I notice confusions or hiccups earlier.
MR: I definitely don't agree. I think maybe the first two are correct, but I think thinking better doesn't necessarily make your relationships better.
NS: You might be dating different women than me.
MR: Well, I'm married, so I can tell you for certain that that's not correct.
NS: I think in my case, people like to think of “thinking better” skills as trying to turn yourself into Spock, but really “thinking better” skills are often about noticing an issue before it becomes a bigger issue.
That has served me in many a relationship.