RADIO

The Future of AI: Who Will Hold Power Over the Army of Geniuses?

AI development companies like OpenAI and Google DeepMind are in a “reckless race” to build smarter AIs that may soon become an “army of geniuses.” But is that a good idea? And who would control this “army?” Glenn speaks with former OpenAI researcher and AI Futures Project Executive Director, Daniel Kokotajlo, who warns that the future is coming fast! He predicts who will likely hold power over AI and what this tech will look like in the near future. Plus, he explains why developers with ethical concerns, like himself, have been leaving these Silicon Valley giants in droves.

Transcript

Below is a rush transcript that may contain errors

GLENN: So we have Daniel Kokotajlo, and he's a former OpenAI researcher. Daniel, have you been on the program before? I don't think you have, have you?

DANIEL: No, I haven't.

GLENN: Yeah. Well, welcome, I'm glad you're here. Really appreciate it. Wanted to have you on, because I'm a guy who's been talking about AI forever.

And it is both just thrilling, and one of the scariest things I've ever seen, at the same time.

And it's kind of like, not really sure which way it's going.

Are -- how confident are you that -- what did you say?

DANIEL: It can go both ways. It's going to be very thrilling. And also very scary.

GLENN: Yeah. Okay.

Good. Good. Good.

Well, thanks for starting my Monday off with that. So can you tell me, first of all, some of the things, that you think are coming, and right around the corner that people just don't understand.

Because I don't think anybody -- the average person, they hear this, they think, oh, it's like social media. It's going to be like the cell phone. It's going to change everything. And they don't know that yet.

DANIEL: Yeah. Well, where to begin. I think -- so people are probably familiar with systems like ChatGPT now, which are large language models that you can go have an actual normal conversation with, unlike ordinary software programs.

They're getting better at everything. In particular, right now and in the next few years, the companies are working on turning them into autonomous agents. So instead of simply responding to some message that you send them and then, you know, turning off, they would be continuously operating, roaming around, browsing the internet, working on their own projects, on their own computers.

Checking in with you, sending messages. Like a human employee, basically.

GLENN: Right.

DANIEL: That's what the companies are working on now. And it's the stated intention of the CEOs of these companies, to build eventually superintelligence.

What is superintelligence? Superintelligence is fully autonomous AI systems that are better than humans at absolutely everything.

GLENN: So on the surface -- that sounds -- that sounds like a movie that we've all seen.

And you kind of -- you know, you say that, and you're like, anybody who is working on these.

Have they seen the same movies that I have seen?

I mean, what the heck? Let's bring -- let's just go see Jurassic Park. I mean, Ex Machina. I don't -- I mean, is it just me? Or do people in the industry just go, you know, this could be really bad?

DANIEL: Yeah. It's a great question. And the answer is, they totally have seen those movies, and they totally think, yes, this could be really bad. In fact, that's part of the founding story of some of these companies.

GLENN: What? What do you mean? What do you mean?

DANIEL: So Shane Legg, who is, I guess you could say, the technical founder of DeepMind, which is now part of Google DeepMind -- one of the big three companies building towards superintelligence.

I believe in his Ph.D. thesis, he discusses the possibility of superhuman AI systems, and how if they're not correctly aligned to the right values, if they're not correctly instilled with the appropriate ethics, they could kill everyone.

And become a superior competitor species to humans.

GLENN: Hmm.

DANIEL: Not just them. Lots of these people at these companies, especially early on, basically had similar thoughts of, wow, this is going to be the biggest thing ever.

If it goes well, it could be the best thing that ever happens. If it goes poorly, it could literally kill everyone, or do something similarly catastrophic, like a permanent dystopia. People react to that in different ways. So some people opted to stay in academia.

Some people stayed in other jobs that they had, or founded nonprofits to do research about this sort of thing. Some people decided, well, if this is going to happen, then it's better if good people like me and my friends are in charge when it happens.

And so that's basically the founding story of a lot of these companies. That is sort of part of why DeepMind was created, and part of why OpenAI was created.

I highly recommend going and reading some of the emails that surfaced in court documents related to the lawsuits against OpenAI.

Because in some of those emails, you see some of the founders of OpenAI talking to each other about why they founded OpenAI.

And basically, it was because they didn't trust DeepMind to handle this responsibly. Anyway, how --

GLENN: And did they go on to come up with -- did they go on to say, you know, and that's why we've developed this? And it's going to protect us from it? Or did they just lose their way?

What happened?

DANIEL: Well, it's an interesting sociological question.

My take on it is that institutions tend to be -- tend to conform to their incentives over time.

So there's been a sort of evaporative cooling effect, where the people who are most concerned about where all this is headed tend to not be the ones who get promoted and end up running the companies.

They tend to be the ones who, for example, quit -- like me.

GLENN: Let's stop it for a second.

Let's stop it there for a second.

You were a governance researcher at OpenAI, working on scenario planning.

What does that mean?

DANIEL: I was a researcher on the governance team. Scenario planning is just one of several things that I did.

So basically, I mean, I did a couple of different things at OpenAI. One of the things that I did was try to see what the future will look like. So AI 2027 is a much bigger, more elaborate, more rigorous version of some smaller projects that I sort of did when I was at OpenAI.

Like, I think back in 2022, I wrote my own -- figuring out what the next couple of years were going to look like, right? An internal scenario, right?

GLENN: How close were you?

DANIEL: I did some things right. I did some things wrong. The basic trends are (cut out), et cetera.

For how close I was overall, I actually did a similar scenario back in 2021, before I joined OpenAI.

And so you can go read that, and judge what I got right and what I got wrong.

I would say, that is about par for the course for me when I do these sorts of things. And I'm hoping that AI 2027 will also be, you know, about that level of right and wrong.

GLENN: So you left.

DANIEL: The thing that I wrote in 2021 was called "What 2026 Looks Like," in case you want to look it up.

GLENN: Okay. I'll look it up. You walked away from millions of dollars in equity at OpenAI. What made you walk away? What were they doing that made you go, hmm, I don't think it's worth the money?

DANIEL: So -- so back to the bigger picture, I think. Remember, the companies are trying to build superintelligence.

It's going to be better than humans, better than the best humans at everything, while also being faster and cheaper. And you can just make many, many copies of them.

The CEO of Anthropic uses this term, "a country of geniuses," to try to visualize what it would look like.

Quantitatively, we're talking about millions of copies, each one of which is smarter than the smartest geniuses, while also being more charismatic than the most charismatic celebrities and politicians.

Everything, right?

So that's what they're building towards.

And that raises a bunch of questions.

Is that a good idea for us to build, for example?

Like, how are we going to do that?
(laughter)
And who gets to control the army of geniuses?

GLENN: Right. Right.

DANIEL: And what orders are they going to be given?

GLENN: Right. Right.

DANIEL: These are some extremely important questions. And there's a huge -- actually, that's not even all the questions. There's a long list of other very important questions too. I was just barely scratching the surface.

And what I was hoping would happen at OpenAI and these other companies is that as the creation of these AI systems gets closer and closer -- you know, it started out being far in the future. As time goes on and progress is made, it starts to feel like something that could happen in the next few years. Right?

GLENN: Yes, right.

DANIEL: As we get closer and closer, there needs to be a lot more waking up and paying attention. And asking these hard questions.

And a lot more effort in order to prepare, to deal with these issues. So, for example, OpenAI created the Superalignment team, which was a -- a team of technical researchers and engineers, specifically focused on the question of how do we make sure that we can put values into these -- how do we make sure we can control them at all?

Even when they're smarter than us.

So they started that team.

And they said that they were going to devote 20 percent of their compute to -- towards work on this problem, basically.

GLENN: How much -- how much percentage? Go ahead.

DANIEL: Well, I don't know. And I can't say. But not as much as 20 percent.

So, yeah. 20 percent was huge at the time.

Because it was way more than any company was devoting to that technical question at the time. So at the time, it was sort of a leap forward.

It didn't pan out. As far as I know, they're still not anywhere near 20 percent. That's just an example of the sort of thing that made me quit. That we're just not ready. And we're not even taking the steps to get ready.

And so we are -- we're going to do this anyway, even though we don't understand it. Don't know how to control it. And, you know, it will be a disaster. That's basically what got me to leave.

GLENN: So hang on just a second. Give me a minute.

I want to come back and I want to ask you, do you have an opinion on who should run this? Because I don't like OpenAI.

I like X better than anybody, only because Elon Musk is just open to free speech on everything. But I don't even trust him. I don't trust any of these people, and I certainly don't trust the government.

So who will end up with all of this compute? And do we get the compute?

Enough to be able to stop it, or enough to be able to be dangerous?

I mean, oh. It just makes your head hurt.

We'll go into that when we come back.

Hang on just a second. First, let me tell you about our sponsor this half-hour.

It's Preborn. Every day, across the country, there's a moment that happens behind closed doors. A woman, usually young, scared, unsure, walks into a clinic with a choice in front of her. A world that seems like it's pressing in on all sides.

And she just doesn't know what to do.

This is the way. You know, I hate the abortion truck thing, where everyone is screaming at each other.

Can we just look at this mom for just a second? And see that in most cases, it's somebody who has nobody on their side.

That doesn't have any way to afford the baby.

And is scared out of her mind. And so she just doesn't know what to do. She has been told 100 times, you know, it's easy. This is just normal.

But when she goes to a Preborn clinic, if she happens to go there, she'll hear the baby's heartbeat.

And for the first time, that changes everything. That increases the odds that mom does not go through with an abortion by 50 percent.

Now, the rest of it is, I'm all in. But I don't have anybody to help me.

That's the other thing that Preborn does. Because they care about mom as well as the baby. That's what is always lost in this message. Mom is really important as well.

So they not only offer the free ultrasound, but they are there for the first two years. They help pay for whatever the mom needs.

All the checkups. All the visits. And the doctor. Even clothing. And everything. Really, honestly.

It's amazing. Twenty-eight dollars provides a woman with a free ultrasound.

And another moment. Another miracle. And possibly another life.

And it just saves two people: not only the baby, but also the mom. Please dial #250. Say the keyword, baby.

#250. Keyword baby. Or visit Preborn.com/Beck.

It's Preborn.com/Beck. It's sponsored by Preborn. Ten-second station ID.
(music)
Daniel Kokotajlo.

He's a former OpenAI researcher and AI Futures Project executive director. And he's talking about the reckless race, to use his words, to build AGI.

You can find his work at AI-2027.com.

So, Daniel, who is going to end up with control of this thing?

DANIEL: Great question.

Well, probably no one.

And if not no one, probably some CEO or president would be my guess.

GLENN: Oh, that's comforting.

DANIEL: Like, in general, if you want to understand, like, you know, my views, the views of my team at the AI Futures Project, and sort of how it all fits together, and why we came to these conclusions -- you can go read our website, which has all of this stuff on it.

Which is basically our best-guess attempt at predicting the future.

Obviously, you know, the future is very difficult to predict.

We will probably get a bunch of things wrong.

This is our best guess. That's AI-2027.com.

GLENN: Yes.

DANIEL: Yeah. So as you were saying, if one of these companies succeeds in getting to this army of geniuses in the data centers -- superintelligent AIs -- there's a question of, who controls them?

There's a technical question of, can -- does humanity even have the tools it needs to control superintelligent AIs?

Does anyone control them?

GLENN: I mean, it seems to me --

DANIEL: That's an unsolved question.

GLENN: I think anyone who understands this.

It's like, we build gates. But it's like a baby gate.

Imagine a baby trying to outsmart the parent.

You won't be able to do it.

You will just step over that gate.

And I don't understand why a superintelligence wouldn't just go, oh, that's cute.

Not doing that. You know what I mean?

DANIEL: Totally. And getting a little bit into the literature here.

So there's a division of strategies into AI control techniques and AI alignment techniques.

So the control techniques are designed to allow you to control the superintelligent AI, or the AGI, or whatever it is that you are trying to control.

Despite the fact that it might be at odds with you. And it might have different goals than you have.

Different opinions about how the future should be. Right?

So that's a sort of adversarial technique, where you, for example, restrict its access to stuff.

And you monitor it closely.

And you -- you use other copies of the AI as watchers.

To play them off against each other.

But there are all these sorts of control techniques that are designed to work even if you can't trust the AIs.

And then there are the alignment techniques, which are designed to make it the case that you don't need the control techniques, because the AIs are virtuous and loyal and obedient and trustworthy, you know, et cetera.

Right? And so a lot of the alignment techniques are trying to sort of instill the specified values deeply into the AIs, in robust ways, so that you never need the control techniques -- because the AIs were never at odds with you in the first place. So there are alignment techniques, and there are control techniques. Both are important fields of research. Maybe a couple hundred people working on -- on these fields right now.

GLENN: Okay. All right.

Hold on. Because both of them sound like they won't work.
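To make Daniel's distinction concrete, here is a minimal sketch of the kind of control setup he describes: one untrusted agent model proposes actions, and a second copy of the model acts as a watcher that must approve each action before anything runs. It's written in Python, and every name and the approval rule here are illustrative assumptions for this sketch, not any lab's actual system.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str  # what the agent says it is doing
    command: str      # the concrete step it wants to take

def untrusted_agent(task: str) -> ProposedAction:
    # Hypothetical stand-in for the powerful model being controlled.
    return ProposedAction(description=f"work on: {task}",
                          command="curl https://example.com/data")

def watcher_approves(action: ProposedAction) -> bool:
    # Hypothetical stand-in for a second copy of the AI reviewing the first.
    # A real watcher would be another model instance scoring the action for
    # risk; this sketch just blocks anything that reaches the network.
    banned = ("curl", "wget", "ssh")
    return not any(tool in action.command for tool in banned)

def controlled_step(task: str) -> str:
    action = untrusted_agent(task)
    if not watcher_approves(action):
        return f"BLOCKED: {action.description}"  # access restricted
    return f"EXECUTED: {action.command}"         # would run in a sandbox

print(controlled_step("summarize today's logs"))

Run as written, the watcher blocks the network call, which is the point of control techniques: the setup assumes the agent cannot be trusted, so every action passes through an adversarial check first. Alignment techniques, by contrast, aim to make the agent itself trustworthy, so that the watcher is never needed.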

RADIO

Could passengers have SAVED Iryna Zarutska?

Surveillance footage of the murder of Ukrainian refugee Iryna Zarutska in Charlotte, NC, reveals that the other passengers on the train took a long time to help her. Glenn, Stu, and Jason debate whether they were right or wrong to do so.

Transcript

Below is a rush transcript that may contain errors

GLENN: You know, I'm -- I'm torn on how I feel about the people on the train.

Because my first instinct is, they did nothing! They did nothing! Then my -- well, sit down, and, you know -- you know, you're going to be judged. So be careful about judging others.

What would I have done? What would I want my wife to do in that situation?

STU: Yeah. Are those two different questions, by the way?

GLENN: Yeah, they are.

STU: I think they go far apart from each other. What would I want myself to do? I mean, it's tough to put yourself in that situation. It's very easy to watch a video on the internet and talk about your heroism. Everybody can do that very easily on Twitter. And everybody is.

You know, when you're in a vehicle that doesn't have an exit, with a guy who just murdered somebody in front of you and has blood dripping off of a knife, standing 10 feet away from you, 15 feet away from you.

There's probably a different standard there that we should all kind of consider. And maybe give a little grace to -- what I saw, at least, was a woman sitting across the aisle.

I think there is a difference there. But when you talk about that question -- those two questions are definitely different.

You know, I know what I would want myself to do. I would hope I would act in a way that didn't completely embarrass myself afterward.

But I also think, when I'm thinking of my wife. My advice to my wife would not be to jump into the middle of that situation at all costs. She might do that anyway. She actually is a heck of a lot stronger than I am.

But she might do it anyway.

GLENN: How pathetic, but how true.

STU: Yes. But that would not be my advice to her.

GLENN: Uh-huh.

STU: Now, maybe once the guy has certainly -- is out of the area, and you don't think the moment you step into that situation he will turn around and kill you too -- then, of course, obviously, anything you can do to step in.

Not that there was much anyone on the train could do.

I mean, I don't think there was an outcome change, no matter what anyone on that train did.

Unfortunately.

But would I want her to step in?

Of course. If she felt she was safe, yes.

Think about, you said, your wife. Think about your daughter. Your daughter is on that train, just watching someone else getting murdered like that. Would you advise your daughter to jump into a situation like that?

That girl sitting across the aisle was somebody's daughter. I don't know, man.

JASON: I would. You know, as a dad, would I advise it?

Hmm. No.

As a human being, would I hope that my daughter or my wife or that I would get up and at least comfort that woman while she's dying on the floor of a train?

Yeah.

I would hope that my daughter, my son, that I would -- and, you know, I have more confidence in my son or daughter or my wife doing something courageous than I would.

But, you know, I think I have a more realistic picture of myself than anybody else.

And I'm not sure that -- I'm not sure what I would do in that situation. I know what I would hope I would do. But I also know what I fear I would do. But I would have hoped that I would have gotten up and at least tried to help her. You know, help her up off the floor. At least be there with her, as she's seeing her life, you know, spill out in under a minute.

And that's the other thing we have to keep in mind. This all happened so rapidly.

A minute is -- will seem like a very long period of time in that situation. But it's a very short period of time in real life.

STU: Yeah. You watched the video, Glenn. You know, I don't need the video to -- to change my -- my position on this.

But it did seem like there was a -- someone who did get there, eventually, to help, right? I saw someone seemingly trying to put pressure on her neck.

GLENN: Yeah. And tried to give her CPR.

STU: You know, no hope at that point. How long of a time period would you say that was?

Do you know off the top of your head?

GLENN: I don't know. I don't know. I know that in the video that I saw -- I haven't seen past 30 seconds after she --

STU: Yeah.

GLENN: -- is down. And, you know, for 30 seconds nothing is happening. You know, that is -- that is not a very long period of time.

STU: Right.

GLENN: In reality.

STU: And especially, I saw the pace he was walking. He certainly can't be -- you know, he may have left the actual train car by 30 seconds to a minute. But he wasn't that far away. Like, he was still in visual range.

He could still turn around and look and see what's going on at that point. So certainly still a threat is my point. He has not, like, left the area. This is not that type of situation.

You know, I -- look, as you point out, I think if I could be super duper sexist for a moment here, sort of my dividing line might just be men and women.

You know, I don't know if it's that -- you're not supposed to say that, I suppose, these days. But, like, there is a difference there. If I'm a man, you know, I would be -- I would want my son to jump in on that, I suppose. I don't know if he could do anything about it. But you would expect at least a grown man to be able to go in there and do something about it. A woman, you know, I don't know.

Maybe I'm -- I hope --

GLENN: Here's the thing I -- here's the thing that I -- that causes me to say, no. You should have jumped in.

And that is, you know, you've already killed one person on the train. So you've proven that you're a killer. And anybody who would have screamed and got up and was with her, she's dying. She's dying. Get him. Get him.

Then the whole train is responsible for stopping that guy. You know. And if you don't stop him after he's killed one person -- if you're not, all of you as members of that train, if you're not stopping him, you know, the person at the side of that girl would be the least likely to be killed. It would be the ones that are standing up and trying to stop him from getting back to your daughter or your wife or you.

JASON: There was a -- speaking of men and women and their roles in this. There was a video circulating social media yesterday. In Sweden. There was a group of officials up on a stage. And one of the main -- I think it was a health official -- woman collapses on stage. Completely passes out.

All the men kind of look away. Or I don't know if they're looking away, or pretending that they didn't know what was going on. There was another woman standing directly behind the woman who passed out.

Immediately springs into action. Jumps on top. Grabs her pant leg. Grabs her shoulder. Spins her over and starts providing care.

What did she have that the other guys did not? Or women?

She was a sheepdog. There is a -- this is my issue. And I completely agree with Stu. I completely agree with you. There are some people that do not respond this way. My issue is the proportion of sheepdogs versus people that don't really know how to act. That is diminishing in Western society. And American society.

We see it all the time in these critical actions -- I mean, circumstances.

There are men and women -- and it's actually a meme -- that fantasize about hordes of people coming to attack their home and family. And they sit there and say, I've got it. You guys go. I'm staying behind, while I smoke my cigarette and wait for the hordes to come, because I will sacrifice myself. There are men and women that fantasize about, block my highway. Go ahead. Block my highway. I'm going to do something about it. They fantasize about someone holding up -- not a liquor store. A convenience store or something. Because they will step in and do something. My issue now is that the proportion of sheepdogs in society is disappearing. Just as a statistical fact, there should have been one within that train car, and there were none.

STU: Yeah. I mean --

JASON: They did not respond.

STU: We see what happens when they do, with Daniel Penny. Our society tries to vilify them and crush their existence. Now, there weren't that many people on that train. Right?

At least on that car. At least it's limited. I only saw three or four people there; there may have been more. I agree with you, though. Like, you see what happens when we actually do have a really recent example of someone doing exactly what Jason wants and what I would want a guy to do -- especially a Marine -- to step up and stop this from happening. And the man was dragged by our legal system to a position where he nearly had to spend the rest of his life in prison.

I mean, I -- it's insanity. Thankfully, they came to their senses on that one.

GLENN: Well, the difference between that one and this one though is that the guy was threatening. This one, he killed somebody.

STU: Yeah. Right. Well, but -- I think -- but it's the opposite way. The debate with Penny was, should he have recognized that this person might have just been crazy, and not done anything?

Maybe. He hadn't actually acted yet. He was just saying things.

GLENN: Yeah. Well --

STU: He didn't wind up stabbing someone. This is a situation where these people have already seen what this man will do to you, even when you don't do anything to try to stop him. So if this woman, who, again, looks to be an average American woman across the aisle, steps in and tries to do something, this guy could easily turn around and just make another pile of dead bodies next to the one that already exists.

And, you know, whether that is an optimal solution for our society -- I don't know that that's helpful in that situation.

THE GLENN BECK PODCAST

Max Lucado on Overcoming Grief in Dark Times | The Glenn Beck Podcast | Ep 266

Disclaimer: This episode was filmed prior to the assassination of Charlie Kirk. But Glenn believes Max's message is needed now more than ever.
The political world is divided, constantly at war with itself. In many ways, our own lives are not much different. Why do we constantly focus on the negative? Why are we in pain? Where is God amid our anxiety and fear? Why can’t we ever seem to change? Pastor Max Lucado has found the solution: Stop thinking like that! It may seem easier said than done, but Max joins Glenn Beck to unpack the three tools he describes in his new book, “Tame Your Thoughts,” that make it easy for us to reset the way we think back to God’s factory settings. In this much-needed conversation, Max and Glenn tackle everything from feeling doubt as a parent to facing unfair hardships to ... UFOs?! Plus, Max shares what he recently got tattooed on his arm.

THE GLENN BECK PODCAST

Are Demonic Forces to Blame for Charlie Kirk, Minnesota & Charlotte Killings?

This week has seen some of the most heinous actions in recent memory. Glenn has been discussing the growth of evil in our society, and with the assassination of civil rights leader Charlie Kirk, the recent transgender shooter who took the lives of two children at a Catholic school, and the murder of Ukrainian refugee Iryna Zarutska, how can we make sense of all this evil? On today's Friday Exclusive, Glenn speaks with BlazeTV host of "Strange Encounters" Rick Burgess to discuss the demon-possessed transgender shooter and the horrific assassination of Charlie Kirk. Rick breaks down the reality of demon possession and how individuals wind up possessed. Rick and Glenn also discuss the dangers of the grotesque things we see online and in movies, TV shows, and video games on a daily basis. Rick warns that when we allow our minds to be altered by substances like drugs or alcohol, it opens a door for the enemy to take control. A supernatural war is raging in our society, and it's a Christian's job to fight this war. Glenn and Rick remind Christians of what their first citizenship is.

RADIO

Here’s what we know about the suspected Charlie Kirk assassin

The FBI has arrested a suspect for allegedly assassinating civil rights leader Charlie Kirk. Just The News CEO and editor-in-chief John Solomon joins Glenn Beck to discuss what we know so far about the suspect, his weapon, and his possible motives.