AI Won’t Wipe Out Humanity (Yet)

This week, we discuss the real and imagined dangers of generative artificial intelligence, which experts are eager to see regulated and contained.

The idea that machine intelligence will one day take over the world has long been a staple of science fiction. But given the rapid advances in consumer-level artificial intelligence tools, the fear has felt closer to reality these past few months than it ever has before. The generative AI craze has stirred up excitement and apprehension in equal measure, leaving many people uneasy about where the future of this clearly powerful yet still nascent tech is going. This week, for example, the nonprofit Center for AI Safety released a short statement warning that society should treat the existential threat of AI as seriously as it treats nuclear war and pandemics.

This week on Gadget Lab, we talk with WIRED senior writer Will Knight about how dangerous AI really is, and what guardrails we can put up to prevent the robot apocalypse.

Show Notes

Read Will’s story about the experts worried that AI poses an existential threat to humanity. Read all of WIRED’s coverage of AI.

Recommendations

Will recommends the novel Antimatter Blues by Edward Ashton. Mike recommends storing your food with Bee’s Wrap instead of single-use plastic products. Lauren recommends HBO’s Succession podcast, hosted by Kara Swisher.

Will Knight can be found on Twitter @willknight. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Bling the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:

If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app just by tapping here. We’re on Spotify too. And in case you really need it, here's the RSS feed.

Transcript

Lauren Goode: Mike.

Michael Calore: Lauren.

Lauren Goode: I was thinking that we should talk about our own extinction today.

Michael Calore: You mean as journalists or podcasters or as human beings?

Lauren Goode: Maybe both.

Michael Calore: Great.

Lauren Goode: Yeah. Yeah. It's another podcast about AI, but I think that people are really going to want to listen to this one.

Michael Calore: So can I ask when are we going to give people something to listen to that's actually hopeful?

Lauren Goode: Well, actually I think that's part of the show too.

Michael Calore: OK, great.

Lauren Goode: So let's get to it.

[Gadget Lab intro theme music plays]

Lauren Goode: Hi everyone. Welcome to Gadget Lab. I am Lauren Goode. I'm a senior writer at WIRED.

Michael Calore: And I am Michael Calore. I'm a senior editor at WIRED.

Lauren Goode: And today our fellow writer, Will Knight, joins us from Cambridge, Massachusetts. Will, it's great to have you back on the Gadget Lab.

Will Knight: Hello. It's great to be back.

Lauren Goode: One of these days, Will, we're going to reach out to you and just say, "Would you like to talk about cat gadgets or something?" But for now you are squarely in the realm of AI coverage. That's what we're having you on to talk about. Yes, we are talking about AI yet again, but this time it's a statement from a group of technologists who are warning of an existential threat to humanity. It was the one-sentence statement heard around the tech world earlier this week: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war." That came from the Center for AI Safety, which is a nonprofit, and it was signed by leading figures in the development of AI. Now, this is obviously not the first time we've been warned of the perils of AI. Some of our listeners might recall the pause that hundreds of top technologists and researchers called for back in March, which so far has not actually resulted in a pause. But all of these doomsday warnings had us wondering: What should we believe about the potential for AI harm, and who among these researchers and technologists is offering the most trustworthy opinion right now? Will, we wanted to bring you in to talk about this. First, walk it back a little bit. Tell us about this statement from the Center for AI Safety. What spurred it?

Will Knight: Yeah, so I think what has spurred this, as well as the previous letter calling for a pause, is to a large degree the advances we've seen in these large language models, most notably GPT-4, which powers ChatGPT. Some of the performance has just exceeded what people working in AI expected. They expected some things would take 10 years to solve or would just be intractable without new techniques, and suddenly this model is able to do things that resemble some kind of reasoning and some sort of abstract thinking, even if it's debatable whether that's actually what it is. There's definitely been this moment of realization for some people. I've talked to AI researchers who previously weren't worried about existential risk at all, and they've had that realization. That's one of the things triggering this, I think. But we should also recognize that there's a group within the AI research community that has for many years been worried about and talking about existential risks, as a kind of philosophical question. I think this is resurfacing a lot of those viewpoints, and we're seeing quite a diverse set of perspectives merged together into a statement that makes for really quite alarming reading. It's also important to recognize that there are a lot of AI researchers who aren't at all worried about this threat and who offer a lot more perspective. They say there are far more immediate threats that AI poses when it comes to things like bias, spreading misinformation, or just getting things wrong in the content it provides, as well as things like climate change and other issues that are a much more immediate, more tangible risk to humanity. I think it's a great question, and it's a bit knotty to unpack.

Michael Calore: So it has been an impressive advancement over the last six months or so, and anybody who's used any of these tools, like Bing Chat or Google Bard, has probably been pretty impressed by how capable they are. But speaking for myself, it has not made me think about nuclear war and global pandemics and some of the warnings that these experts and researchers are pointing to. How do you extrapolate that? How would AI actually destroy humanity?

Will Knight: That's a great question, and yeah, I've been trying to understand this myself, and I've been talking to some people who've had that recognition. There's a guy, Sam Bowman, a researcher from NYU who joined Anthropic, one of the companies working on this with safety in mind, and he has a research lab newly set up to focus on safety. The argument that some people like him make is that these systems are showing signs of greater intelligence in some ways than expected, and they have objectives which they can reach in opaque ways that are difficult to predict, sometimes in ways that are surprising. Then they're extrapolating from that and saying the danger is some way down the line, and it is a long way down the line, but they're also saying we don't know how far, because things are moving more quickly. For years people have been working on these AI systems and have been able to trip them up and say, "Ah, that's how it doesn't match human intelligence." With GPT-4, there are some instances where they can't trip it up anymore, and it can do things where they were like, "Well, I didn't expect it to do that at all." I think this triggers a sort of visceral idea of something outwitting people. When you ask it to solve a problem and it does it in an unexpected way, they imagine it potentially being given the objective of, I don't know, doing something very major for society, and then going to really extreme, surprising, and risky ends to do that. It does seem like a huge leap or extrapolation, given that Bing Chat kind of breaks and gets things wrong all the time. But I think that's at the core of it: this idea of a technology that is fundamentally different somehow, that is kind of intelligent, more intelligent than us, and that can interact with and sometimes even try to outwit people. I think that's where it comes from.

Lauren Goode: Is that the same concern or related concern to it being sentient?

Will Knight: No, I don't think so. I think the question of sentience depends on who you ask. There are some people who will argue that all sorts of systems can be sentient even if they're not biological, and there's a spectrum of views on that. But I think a lot of the people, at least the ones I've spoken to, would say that this risk doesn't require sentience. It can run out of control and do very unpredictable things regardless of whether you could say it was sentient or not.

Lauren Goode: Will, you're talking to a lot of researchers and technologists who are in this space, and honestly it feels a little bit like musical chairs sometimes. OK, sure, people could say the same thing about journalism, but let's focus on AI. A lot of these folks have come from Google or the same academic institutions or they were at OpenAI, now they're at Anthropic. It's really hard to map out who the key players are, aside from the most obvious, and figure out where they're aligned and where they diverge. Who would you say are the people who are most outspoken about AI right now and where the most divergent opinions are coming from?

Will Knight: Well, I mean, some of the most outspoken people right now are those at these big companies which have vested interests. It's Sam Altman at OpenAI, or executives at Google and Microsoft. They're being quite outspoken, I think, for various reasons that aren't just about the concerns of AI. It does feel like there's, as you say, a slightly incestuous world of people who've drunk the Kool-Aid of superhuman AI being quite close. Then there are people such as the researchers who wrote one of the first papers pointing to this, Timnit Gebru and Margaret Mitchell, who wrote about the real, tangible, current risks with large language models, such as them making things up and misleading people. They were at Google at the time and were then either fired or asked to leave, depending on whose version of the story you hear. I definitely see a divide between the long-term existential angst, which seems to be coming mostly from the people who have the most financial power and interests, and the people who are more focused on the near-term risks, who are warning about what we should be dealing with today and are frustrated by, and worried about, the existential framing being a distraction. Then, yeah, Geoffrey Hinton is an interesting case, because he's one of the most famous people in AI and one of the people who pioneered deep learning, which underpins all of the machine learning that's gone into these models. He recently changed his position and became more worried, or at least more outspoken. He left Google and said we need to start worrying about the risks of this. He actually, I think, is concerned about the short-term risks as well as the long-term ones, but his concerns about AI running amok and eventually taking over or getting out of control have gotten all the headlines, and people have focused on that. But I think he's somewhat worried about both.

Michael Calore: Will, we've already seen two of these big open letters warning of the immediate and future risks of AI. When are we going to see the third and who's going to write it?

Will Knight: Yeah, next week, I'm guessing, signed by every single world leader. I do think there's a real risk that people are not going to take some of the real problems with AI seriously because of this. I mean, the existential stuff is a long way off, and tying the issues people are really worried about to this kind of apocalyptic framing is an extreme way to go. I can see there being more of a backlash to it. That's kind of already happening somewhat, but I can see more people just not taking these warnings so seriously.

Lauren Goode: Yeah, and as your reporting mentions, Will, the letter doesn't really address the more immediate harms like bias or surveillance or misinformation, which you referenced earlier. And so you do have to wonder if focusing on the big existential threats does detract from the things that we should be paying attention to at this moment in time. And I think that that's actually a good segue to our next segment. Let's take a quick break and we'll come back and talk about AI regulation.

[Break]

Lauren Goode: Earlier this month, one of the most prominent figures in generative AI, Sam Altman, who runs OpenAI, testified before Congress on the implications of products like ChatGPT. He not only explained in basic terms how this technology works, but he actually called for some form of regulation, telling Congress that it should work with the companies in this space to try to figure out rules and guardrails before the tech runs totally amok. Will, when I first read about this a couple of weeks ago, I have to admit, I wondered how sincere these calls for regulation are. The tech industry has this long history of positioning itself in DC in such a way that it seems like it wants to work with the lawmakers, but ultimately too much regulation would squash their businesses. Why is OpenAI calling for guardrails?

Will Knight: Yeah, I think it is an echo of what we've seen before, to some degree. We've also seen Microsoft put out a blueprint for regulating AI, and Google is supposedly working with the EU on some stopgap measures while it works out proper regulations. They know that some regulation is coming, so saying, "Oh, we welcome it" makes them seem like they're not antagonistic, and they can also maybe get to shape it. I think it's also worth recognizing two things. People in government, by and large, really have no idea what this stuff is, and so they are more than happy to put themselves, to some degree, in the hands of the experts. The experts working on it will be like, "We are the ones who really understand this, and this is how you regulate it." They have the upper hand when it comes to understanding the technology. But also, I've spoken to a few people in government, and while they know there are risks, they see a technology that economists are saying could be enormously valuable to productivity, could make the economy grow, and that the US has a lead in over its rivals. So they do not want to do anything rash. I think what they're probably most concerned about is accidentally putting a damper on the technology. I think people like Sam Altman probably know there's not going to be all that much appetite for really hard regulations around the technology, so they're also just trying to put on a good face and help shape them, really.

Michael Calore: Also, I feel like when the CEO of a big company that's a big player in the tech space talks about regulation, what they're really doing is setting themselves up as the expert in the field, which increases the importance of their role in the conversation. It makes them a voice that cannot be ignored. It makes them a force that everybody has to follow. It's really about inflating their importance to society and putting themselves in a position of greater power, isn't it?

Will Knight: Yeah, that's right. That's true. The truth is, they haven't even published details of how the most powerful of these AI models, GPT-4, works. They can be the ones to say, "Well, we know how to control this, because we're the only ones who have access to it." That's one of the problems with it. If something is this powerful, there's a real argument that lots of scientists should be able to study it and examine it. They haven't said that they're going to open it up and make it more transparent.

Lauren Goode: And that's at the heart of the matter in Europe, with Google, right? We were all covering Google IO a few weeks ago, and one of the things that emerged from that is, oh, actually, Bard, their generative AI chat tool, is not available in European countries. And some of our colleagues were scrambling to report that out because the way that Google was presenting it was just sort of like, "Here, here's this world-changing tool." But it turns out there are large parts of the world that don't have access to it, and it's because of the regulatory framework that exists in those places.

Will Knight: Yeah, the really big question is whether the US may adopt some sort of similar regulations requiring more transparency. But I read a story today saying that the Biden administration is very divided about this, and there are a lot of people pushing back on the idea that we should do anything like that because, again, they don't want to stymie the technology. We've seen that with other technologies, like self-driving cars. They didn't want to regulate them at all, because they thought this was going to be an enormous, amazingly powerful technology and the US was going to have a lead in it. It's understandable. But now we're facing a lot of questions about how reliable those systems are, and they're having to introduce more regulations. Yeah, I think we may be repeating that.

Lauren Goode: What do you think the likelihood is that we will see real regulation around AI in the near future?

Will Knight: I think that we will see some regulation. There's clearly this kind of appetite, and one of the reasons all these things are in tension is that the US, along with, say, China and Europe, is keen to set out its stall when it comes to regulation. Because if you're the one that defines the regulations, they often get adopted elsewhere, and it can be a kind of leadership role. I think there'll be some regulation, but in the US it will most likely be quite weak, relatively the weakest. And the question of what gets regulated is really interesting. I've been talking to some filmmakers and writers in Hollywood who are really interested in what kind of threat AI may pose, and they're thinking about AI fakery. I think one of the most obvious areas may be regulation of deepfakes: requiring some restrictions on deepfakes, or requiring platforms not to allow them to be distributed without permission. But that's just one small aspect of the whole generative AI stack.

Michael Calore: And I do think that some of this regulation will come from the private sector. Platforms imposing rules about what can be posted, companies imposing rules about what their tools can be used for. And that's not necessarily regulation, it's just self-containment.

Will Knight: Right, and it's often not terribly effective when the platforms do that, or when companies say that. It doesn't stop people from doing things. And we're definitely seeing, with these image-manipulation and voice-synthesis tools, that ... I mean, there's just a ton of voice deepfakes going around on Twitter now, and more and more image and video manipulation, and it's getting easier and easier to do.

Lauren Goode: In fact, we taped an episode of Gadget Lab just a few weeks ago where our producer here, Boone, mimicked our voices and his own voice, and it was scarily good.

Will Knight: Oh, wow.

Lauren Goode: Boone really holds all the power here. He has all this audio of us that he can just put through a gen-AI tool, and that'd be the end of us. Will, you seem rather optimistic about the possibility of regulation here in the US, which I guess surprises me in some ways. I tend to think of these things as moving rather slowly. And to your earlier point, perhaps some of our Congresspeople don't fully understand the technology, and does that create some kind of barrier to things moving forward? But it seems like you think this is a real possibility.

Will Knight: Well, I guess I might have misspoken if I sounded optimistic. I'm optimistic in the sense that there seems to be a lot of momentum to try and have some regulation. I think something will come about in the US, but I suspect it would be very ... I suspect it'd probably be quite ineffectual, if that's a measure of my—

Lauren Goode: Yeah, weak sauce. Will's prediction, weak sauce regulation.

Will Knight: But I've generally been surprised by how quickly the EU and China have moved, and both have been at the forefront of regulation. And I think that what's driving it, if it does move quickly, would be less about "Oh, we really want to make sure this is done safely" than about the US getting its rules set out in that kind of global discussion. You can deepfake me to say something much more articulate.

Michael Calore: Oh, don't worry. We will. We will.

Lauren Goode: We're on it.

Michael Calore: All we're using this recording for, it's just to capture your voice, to feed the model.

Will Knight: Right. That makes sense. Just get me to say as many things as possible.

Lauren Goode: Pretty soon you'll be on this podcast every single day. We will make this a daily podcast thanks to AI. All right. Let's take another quick break and we'll come back with our very human recommendations.

[Break]

Lauren Goode: Will, I'm excited to hear yours, because it's probably cat-related. What is your recommendation this week?

Will Knight: I'm now going to let you down. I wish I did have another cat-related suggestion. I mean, I just, cats on TikTok is always ... My feed is pretty much all cats.

Lauren Goode: It's your go-to.

Will Knight: Just cats. OK. I'm going to recommend the book Antimatter Blues by Edward Ashton. It's a follow-up to Mickey7, which is being made into a movie. It's about a kind of hapless clone and his adventures on an off-world colony. Oh, actually, can I also recommend a TV show called Fired on Mars, which a friend of mine made? It's about a graphic designer who works in a Mars colony, gets fired, and gets into all sorts of scrapes.

Lauren Goode: Is he working for Elon Musk? Because that would make a lot of sense.

Will Knight: It would make sense.

Lauren Goode: And where can people find that TV show?

Will Knight: That is on HBO.

Lauren Goode: Oh, OK.

Will Knight: It's actually on Max. Sorry, I got it completely wrong.

Michael Calore: Oh, don't worry. Everybody gets that completely wrong.

Lauren Goode: We're still calling it HBO on this show.

Michael Calore: Yeah. FKA HBO, now Max.

Lauren Goode: We're just going to give it a symbol eventually. Well, both of those sound delightful. Thank you for that recommendation. Those recommendations, plural, Will. Mike, what's your recommendation?

Michael Calore: Well, it has nothing to do with space travel or artificial intelligence, but I'm going to recommend—

Lauren Goode: Is it pickles?

Michael Calore: Oh, it could be pickle-adjacent.

Lauren Goode: What is this?

Michael Calore: Bee's Wrap.

Lauren Goode: OK, say more.

Michael Calore: OK. This is wrap, W-R-A-P.

Lauren Goode: Oh, OK. I thought it was just a bunch of bees—

Michael Calore: Rapping?

Lauren Goode: ... rapping. Have you heard the buzz about their latest song?

Michael Calore: Oh, no. No, no, no, no, no.

Lauren Goode: OK, sorry.

Michael Calore: Allow me to wax philosophical. This is food wrap. It takes the place of plastic wrap or plastic bags or any other sort of single-use plastic that you might use to preserve food in your refrigerator or in your kitchen. It's a cotton sheet coated in beeswax and plant oils, and it forms a seal around whatever you want to protect. You can put it on top of a bowl of hummus or a bowl of salsa. I have been using a small one to wrap half a zucchini or half an onion. If you only use half of a vegetable, normally you just put plastic wrap around it. But plastic is terrible, as we know. It just sits in landfills and very, very slowly degrades into smaller bits of plastic that make their way into waterways and end up in our stomachs. We should all stop using plastic, and here's a great way to do it. I encountered these because we wrote about them on WIRED, we actually recommend them as one of our favorite eco-friendly products, and we needed to take a photograph of them. I bought some off of The Everything Store. I think it's $17 for a three-pack. The three-pack comes with a small one, a medium one, and a large one, which is just about all you need.

Lauren Goode: It depends on the size of the bees.

Michael Calore: It depends on the size of the ... no.

Lauren Goode: Sorry, my heart is all aflutter just hearing about this.

Michael Calore: I'm sure. I'm sure.

Lauren Goode: I know these jokes probably really sting. Please continue.

Michael Calore: I just feel so defeated right now.

Lauren Goode: I will use the beeswax wrap.

Michael Calore: Yeah, Bee's Wrap is what it's called.

Lauren Goode: Bee's Wrap.

Michael Calore: There are multiple companies that make this. I'm going to recommend Bee's Wrap because it's the one that I bought that I like. And it's a three-pack for less than $20. You can buy it at hippie grocery stores or you can just buy it on the internet. But it's great. I use them all the time. And I will say that if you're washing them, which you should do in between uses, wash them in cold water, because washing them in hot water will ruin them because it'll melt the beeswax and it'll all run off into your sink.

Lauren Goode: This recommendation is the bee's knees.

Michael Calore: Lauren, I feel like I should save everybody from any more bee puns by asking you what your recommendation is.

Lauren Goode: Oh, all right. Moment of silence for the end of Succession. Great. Those of you who listen to our podcast know that several weeks ago I recommended you watch Succession. Season four had just begun, and I said to watch it from the beginning so you could get completely enmeshed with the Roy family and feel emotionally attached to this Sunday night program on the channel formerly known as HBO. The season finale was this past week, and it was epic, epic television. But if you were feeling like I was feeling afterward, just kind of empty and wanting more, scrolling Instagram looking for Succession memes and texting all your friends to say, "Oh my god, did you see what happened at the end?", then you should listen to the Succession podcast. Yeah. Full disclosure, the person who hosts this podcast, Kara Swisher, I know her quite well. She sometimes takes my parking spot, which is a whole other story. But I used to work with Kara, she's a fantastic podcast interviewer, and she hosts the official Succession follow-up podcast for HBO, with a new episode each week. This week she had interviews with Alexander Skarsgård, Jeremy Strong, and Mark Mylod: two of the actors on the program, and then the director. You have to check it out. It's great. It's just the kind of extra Succession content that you need if you, like me, were grieving the end of the program.

Michael Calore: So that's a good recommendation. And props to Kara, and props to all of the people in journalism who do these podcasts about shows, but I'm kind of over the whole "the show has a podcast" thing.

Lauren Goode: I tend to agree, except for this show.

Michael Calore: So this show is the one that leaves you wanting more, and the thing that you want more of is chatter about the show? And if they can do a well-produced podcast, then that's something that you'll listen to.

Lauren Goode: Right, because a lot of the podcasts about TV shows, and no discredit to them, are fandom shows. It's a couple of people who are really, really into the program, like I am, and they're just sitting around talking about it like pals. Kara actually gets to talk to Brian Cox, or the director or the writer, or some of the stars of the show. So you're hearing directly from them about how they developed their characters and how they were thinking about the plot lines. And I really enjoy that.

Michael Calore: So Kara has that thing that I call the machine, which is when she's interviewing you, you really feel her presence.

Lauren Goode: You have been in her presence. We have been in her presence, and you know this.

Michael Calore: She's a very forceful interviewer.

Lauren Goode: She is.

Michael Calore: She's a very good interviewer, in that she gets her interview subjects to talk about things that no other interviewer is capable of getting them to talk about.

Lauren Goode: Very much so. She is very good at identifying people's weaknesses and going for it in the podcast. And it's different, I think, when you're interviewing tech executives, like she does all the time, where these are people in extreme positions of power and you're punching up and asking them the very hard questions about how they're running their businesses, versus when you get to be an interviewer but also a fan and talk to someone who is just completely in their creative element making a show. I think it's a slightly different vibe.

Michael Calore: So is this a soft pitch for you to do one of these next time?

Lauren Goode: No, I don't have the time, honestly. I would love to in an ideal world, certainly. But I already have another podcast, in case you didn't know, and I'm already just struggling to fit it all into the day. No, I'm not volunteering myself for a TV show podcast, but I enjoy listening to them.

Michael Calore: What if the cohost is Alexander Skarsgård?

Lauren Goode: I'm down. I'm there. Love the Skarsgård.

Michael Calore: Perfect.

Lauren Goode: All right, well, that's it for our show. Thank you so much, Will, for joining us and disabusing us of the notion that we're at this moment headed for an extinction event because of AI.

Will Knight: Well, I certainly hope I'm right.

Lauren Goode: I do too. We both do. And thanks to all of you for listening. If you have feedback and you're still here, you can find all of us on Twitter, just check the show notes. We'd love to hear from you. Our producer is the excellent Boone Ashworth, the very real man himself. Goodbye for now. We'll be back next week.

[Gadget Lab outro theme music plays]