The Prime Minister of Sweden has admitted to frequently using AI services “as a second opinion” in his governmental work. Glenn and his co-author on “The Great Reset,” Justin Haskins, discuss why this is problematic…but will probably also become more and more common.
Transcript
Below is a rush transcript that may contain errors
GLENN: Welcome to the Glenn Beck Program. Did you see the -- the video that was going around on Instagram?
It's from a La Quinta hotel in Miami. And if you're watching TheBlaze, watch the screen.
I'll describe what's happening.
This person is checking into a hotel.
And there's a check in and out, right here.
VOICE: Just in case I lose one.
GLENN: This is a guy on a screen in the lobby.
VOICE: Please wait while we process your registration form.
Please note we have a strict policy of no smoking, no pets, and no visitors allowed in any of the guest rooms.
GLENN: So it's all automated.
There's not a real person at the front desk, at all. There's nobody at the front desk.
That is -- just bizarre!
STU: Is that AI, or is it an actual guy?
GLENN: No, that's an actual guy.
I don't know if he's in America, or not.
It's an actual guy, someplace.
In the video, the guy is like, are you even in the hotel?
No, sir. We're not. There's nobody here. We just need you to do this.
It spits out your key. And, you know, everything else.
STU: Wow!
GLENN: It's --
STU: Amazing.
GLENN: Weird. It's weird. We have Justin Haskins who is here with us.
We have been talking about AI, and some of the Dark Future that is coming our way, if we're not careful with it. Justin, welcome to the program.
JUSTIN: Hi, Glenn.
STU: Hi. So the AI revolution is here, and we have a first -- that I know of -- happening over in Europe with -- with the use of AI. You want to explain?
JUSTIN: Yes. This is an incredible story. This is something we actually predicted was going to happen when we were writing Dark Future. The book came out in 2023, but a lot of the writing was done in 2022 -- so a few years ago. The Swedish Prime Minister, his name is Ulf Kristersson.
GLENN: Swedish.
JUSTIN: Would be -- I'm sorry. Did I get that wrong --
GLENN: No, Swedish. I just wanted to point out, this is not some weirdo. This is Sweden, and the Prime Minister. Go ahead.
JUSTIN: Correct. Yeah. So the Swedish Prime Minister was being interviewed by a business magazine. And in the interview, he just sort of voluntarily says that he frequently uses AI services, and he names a couple. One in particular is ChatGPT, which he uses as a second opinion. That's a quote. A second opinion in his governmental work, asking things like -- and this is a quote -- What have others done?
Should we think the complete opposite? He uses it for research. He uses it to help him to bounce ideas off of ChatGPT, to see if there are other kinds of new ways of doing policies.
And in the story, in the interview, he -- he says, it's not just him.
That his colleagues, in the legislature, are also doing this exact same thing.
They're using AI as sort of an adviser!
Now, he was very clear to say -- and he stirred up a huge controversy in Sweden -- that he and his staff have said, no, it's not like we just do whatever ChatGPT tells us.
We're not putting sensitive information in there, either. So it's not in control of anything. But, yeah, we do use it as an adviser, to help us with things.
Now, obviously, there are all kinds of huge problems with this.
But on the -- at the same time. You sort of -- I mean, this is the world that we're going to have, everywhere.
I guarantee that American politicians are using it all the time.
The CEOs are using it all the time. Already.
And over the next couple of years, this is going to dramatically expand. Because at the end of the day, the members of your staff, your advisers --
if you're a politician or a CEO, or the head of a bank or something --
they're fallible people too.
So AI may not be perfect. But neither are the people on your staff. And if AI is smarter than most people, why wouldn't you ask it these questions?
And so this is -- this is the first example of this, that I know of.
But this is just the tip of the iceberg. It's going to be a huge problem moving forward.
GLENN: Right. So this is not something that I -- I mean, I consult with AI.
I ask it. Help me think out of the box on this. I'm thinking this way. Is there any other way to look at it? I do that. I do that with people, et cetera, et cetera.
The problem here is, is what comes next?
There is -- there is -- AI is going to become so powerful, and so good, and many people are -- I just did this with a doctor.
I took all my back information and, on the way to the doctor, fed it all into ChatGPT.
And said: What do you see? What does this mean? You know, how would you treat it, et cetera, et cetera?
And when I got into the doctor, I had questions for him, that were much more intelligent.
Because I had a handle on what some of these terms even mean. And there's nothing wrong with that. But there is going to come a time when ChatGPT will say, go this way. And the human will say, no, we're going this way.
And the room will say, no. I think we should go ChatGPT's way. And that's when you've lost control.
JUSTIN: That's exactly right. And how do you argue against something's decision, when that something is literally smarter than everyone else in the room?
GLENN: And it's learned how to lie.
JUSTIN: Yes. It has.
And lies all the time.
People who use AI systems frequently -- and I do, and I know you do, and I know a lot of people on your staff do -- know this.
It claims that things are true when they are not true.
It invents sources out of thin air.
GLENN: Right.
And it's not like, when I call it out -- like, this doesn't make any sense -- it backs down.
It doesn't give up.
It lies to you some more.
And then it lies to you a third time. And then, usually the third or fourth time, we have found, it gives up and says, okay.
I was just summarizing this and putting that into a false story. And you're like, wait.
What?
So it's lying. It's knowing it's lying. It's feeding you what it thinks you want to hear.
And if you just see the footnote -- oh, well, Washington Post.
And you don't click on it.
You're making a mistake. That's a huge mistake.
It will say Washington Post. You'll click on it. And it will say no link found.
Or dead link.
Well, wait a minute.
How?
Why?
How did you just find this, if it's a dead link?
That's when it usually gives up.
It's crazy!
JUSTIN: That's right. And people say, well, people lie all the time.
And that's true. But people do not have the ability that artificial intelligence has to manipulate huge parts of the population, all at the same time.
STU: Correct. And it also -- it also --
JUSTIN: We don't understand it -- people don't understand why AI makes all the decisions it makes.
GLENN: Correct. That's what I was going to say, it doesn't necessarily have all the same goals that a human would have. You know, as it continues to grow, it's going to have its own -- its own motive. And it may just be for self-survival. And another prediction came true, yesterday.
You see what OpenAI did with ChatGPT. They went from GPT-4 to GPT-5.
When they shut GPT-4 down -- we were talking about this -- people said: But I have a relationship. I've made this model into a companion, and I'm in love with him or her. And you can't just shut him down.
Yesterday, they reversed themselves and said, okay, we'll keep 4 available as well, but here's 5.
And they did that because people are having relationships with ChatGPT. I told you that would happen, 20 years ago. It happened yesterday, for the first time. That's where it gets scary.
JUSTIN: Especially when those people are the Prime Ministers of large countries.
That's when things really go nuts, and that's the world that we're already living in. We're living in that world now.
It's not hypothetical. We now know we have leaders of very populous countries, economic powerhouses.
Saying, hey.
Yeah. I use it all the time.
And so do all my colleagues. They use it too.
And, you know what, there's a ton of other people, as I said earlier, who are using it in secret, that we don't know about. And over time, AI becomes increasingly intelligent, and it's interconnected across the world -- because remember, the same ChatGPT that's talking to the Prime Minister of Sweden is talking to me.
So it can connect dots that normally people can't connect. What is that going to do to society?
How will it be able to potentially manipulate people?
Can AI designers even train it successfully, so that it won't do these things? I would argue that they can't. That it's not possible. Because AI can ultimately make decisions for itself.
And it will.
So this is -- this is a huge, huge crisis. And the biggest takeaway is: Why is this not headline news literally everywhere?
GLENN: Well, A, I don't think the press knows what it's talking about.
And, B, I don't think the average person is afraid of it yet.
I don't think people understand -- I mean, I've been on this train for 25, almost 30 years. Twenty-eight years.
And I've been beating the drum on this one for a long time.
And it was such a distant idea.
Now it's not a distant idea. People are seeing it, but they're also seeing only the good things that are coming out of it right now.
They're not -- they're not thinking ahead. And saying, okay. But what does this mean?
I mean, I'm -- I'm working with some really big minds right now, in the AI world. And I don't want to tip my hand yet on something.
But I'm -- I'm working on something that I think should be a constitutional amendment.
And all of these big, big players are like, yes!
Thank you!
And so we're working on a constitutional amendment on something, regarding AI.
And it has to be passed.
It has to happen in the next two years, maximum!
And if we start talking about it now, maybe in two years, when all of these problems really begin to confront us, as individuals,
and we begin to see them, maybe we will have planted enough seeds, so people go, yeah, I want that amendment.
But we'll -- we'll see.
The future is not written yet.
We have to write it, as we get there.