RADIO

Here's Biden’s HORRIBLE plan to fix the supply chain shortages

President Biden has a new plan to fix the supply chain shortages, but it'll likely do nothing more than cost YOU extra. Glenn and Stu detail the new policy in this clip, along with all the other Biden administration plans that are putting American families AND businesses under increased financial strain. It's so bad, Glenn says, that it HAS to be their goal…

RADIO

WARNING: Mark Zuckerberg’s “AI Friends” Are Designed to Control You!

Mark Zuckerberg and Big Tech want you to believe that AI can be your “friend.” But Glenn Beck reveals the chilling truth: these bots aren’t here to connect with you... they’re here to control you. From social media addiction to mental health crises, we’ve already seen what “connection” platforms have done to our families and children. Now, AI is at its next stage: smarter, more personal, and far more dangerous. Glenn warns that this isn’t just about privacy or data. It’s about your soul. Real friendship is sacrifice, loyalty, and love. AI offers only a hollow imitation, all while whispering lies in your ear...


RADIO

This world leader admits to using AI as an ADVISOR?!

The Prime Minister of Sweden has admitted to frequently using AI services “as a second opinion” in his governmental work. Glenn and his co-author on “The Great Reset,” Justin Haskins, discuss why this is problematic…but will probably also become more and more common.

Transcript

Below is a rush transcript that may contain errors

GLENN: Welcome to the Glenn Beck Program.

Did you see the -- the video on Instagram that was going around?

It's from a La Quinta hotel in Miami. And if you're watching TheBlaze, watch the screen.

I'll describe what's happening.

This person is checking into a hotel.

And there's a check-in and check-out, right here.

VOICE: Just in case I lose one.

GLENN: This is a guy on a screen in the lobby.

VOICE: Please wait while we process your registration form.

Please note we have a strict policy of no smoking, no pets and no visitors allowed in any of the guest rooms.

GLENN: So it's all automated.

There's not a real person at the front desk, at all. There's nobody at the front desk.

That is -- just bizarre!

STU: Is that AI, or is it an actual guy?

GLENN: No, that's an actual guy.

I don't know if he's in America, or not.

It's an actual guy, someplace.

In the video, the guy is like, are you even in the hotel?

No, sir. We're not. There's nobody here. We just need you to do this.

It spits out your key. And, you know, everything else.

STU: Wow!

GLENN: It's --

STU: Amazing.

GLENN: Weird. It's weird. We have Justin Haskins who is here with us.

We have been talking about AI, and some of the Dark Future that is coming our way, if we're not careful with it. Justin, welcome to the program.

JUSTIN: Hi, Glenn.

STU: Hi. So the AI revolution is here, and we have a first, that I know of, happening over in Europe with the use of AI. You want to explain?

JUSTIN: Yes. This is an incredible story. This is something we actually predicted was going to happen when we were writing Dark Future. The book came out in 2023, but a lot of the writing was done in 2022, so a few years ago. The Swedish Prime Minister -- his name is Ulf Kristersson --

GLENN: Swedish.

JUSTIN: Would be -- I'm sorry. Did I get that wrong --

GLENN: No, Swedish. I just wanted to point out, this is not some weirdo. This is Sweden, and the Prime Minister. Go ahead.

JUSTIN: Correct. Yeah. So the Swedish Prime Minister was being interviewed by a business magazine. And in the interview, he just sort of voluntarily says that he frequently uses AI services, and he names a couple. One in particular is ChatGPT, as a second opinion. That's a quote. A second opinion in his governmental work, asking things like -- and this is a quote -- What have others done?

Should we think the complete opposite? He uses it for research. He uses it to bounce ideas off of ChatGPT, to see if there are other, new ways of doing policies.

And in the story, in the interview, he -- he says, it's not just him.

That his colleagues, in the legislature, are also doing this exact same thing.

They're using AI as sort of an adviser!

Now, he was very clear to say -- and he stirred up a huge controversy in Sweden -- that he and his staff have said, no, it's not like we just do whatever ChatGPT tells us.

We're not putting sensitive information in there, either. So it's not in control of anything. But, yeah, we do use it as an adviser to help us with things.

Now, obviously, there are all kinds of huge problems with this.

But at the same time -- I mean, this is the world that we're going to have, everywhere.

I guarantee that American politicians are using it all the time.

The CEOs are using it all the time. Already.

And over the next couple of years, this is going to dramatically expand. Because at the end of the day, the members of your staff, your advisers -- if you're a politician or a CEO, or the head of a bank or something -- they're fallible people too.

So AI may not be perfect. But neither are the people on your staff. And if AI is smarter than most people, why wouldn't you ask it these questions?

And so this is -- this is the first example of this, that I know of.

But this is just the tip of the iceberg. It's going to be a huge problem moving forward.

GLENN: Right. So this is not something that I -- I mean, I consult with AI.

I ask it. Help me think out of the box on this. I'm thinking this way. Is there any other way to look at it? I do that. I do that with people, et cetera, et cetera.

The problem here is, is what comes next?

There is -- there is -- AI is going to become so powerful, and so good, and many people are -- I just did this with a doctor.

I took all my back information, fed it all into ChatGPT.

And on the way to the doctor, just fed it all in. And said, what do you see? What does this mean? You know, how would you treat it, et cetera, et cetera?

And when I got into the doctor, I had questions for him, that were much more intelligent.

Because I had a handle on what some of these terms even mean. And there's nothing wrong with that. But there is going to come a time where ChatGPT will say, go this way. And the human will say, no. We're going this way.

And the room will say, no. I think we should go ChatGPT's way. And that's when you've lost control.

JUSTIN: That's exactly right. And how do you argue against something's decision, when that something is literally smarter than everything else in the room?

GLENN: And it's learned how to lie.

JUSTIN: Yes. It has.

And lies all the time.

People who use AI systems frequently -- and I do, and I know you do, and I know a lot of people on your staff do -- know that it claims things are true when they are not true.

It invents sources out of thin air.

GLENN: Right.

And it's not -- it's not like when I call it out, and I'm like, this doesn't make any sense.

It doesn't give up.

It lies to you some more.

And then it lies to you a third time. And then we have found, usually a third or fourth time, it then gives up and says, okay.

I was just summarizing this, and just putting that into a false story. And you're like, wait.

What?

So it's lying. It knows it's lying. It's feeding you what it thinks you want to hear.

And then, if you don't -- if you just see the footnote -- oh, well, Washington Post -- and you don't click on it, you're making a mistake. That's a huge mistake.

It will say Washington Post. You'll click on it. And it will say no link found.

Or dead link.

Well, wait a minute.

How?

Why?

How did you just find this, if it's a dead link?

That's when it usually gives up.

It's crazy!

JUSTIN: That's right. And people say, well, people lie all the time.

And that's true. But people do not have the ability that artificial intelligence has to manipulate huge parts of the population, all at the same time.

STU: Correct. And it also -- it also --

JUSTIN: And we don't understand it the way we understand people. We don't understand why AI makes all the decisions it makes.

GLENN: Correct. That's what I was going to say, it doesn't necessarily have all the same goals that a human would have. You know, as it continues to grow, it's going to have its own -- its own motive. And it may just be for self-survival. And another prediction came true, yesterday.

You see what ChatGPT did? They went from GPT-4 to GPT-5.

When they shut GPT-4 down -- we were talking about this -- people said, but I have a relationship. I've made this model into a companion, and I'm in love with him or her. And you can't just shut him down.

Yesterday, they reversed themselves and said, okay. We'll keep 4 out, as well, but here's 5.

And so they did that, because people are having relationships with ChatGPT. I told you 20 years ago that this would happen. It happened yesterday, for the first time. That's where it gets scary.

JUSTIN: Especially when those people are the prime ministers of large countries.

That's when things really go nuts, and that's the world that we're already living in. We're living in that world now.

It's not hypothetical. We now know we have leaders of massive -- very populous countries, economic powerhouses.

Saying, hey.

Yeah. I use it all the time.

And so do all my colleagues. They use it too.

And, you know what, there's a ton of other people, as I said earlier, who are using it in secret, that we don't know about. And over time, as AI becomes increasingly more intelligent and it's interconnected, across the world, because remember, the same ChatGPT that's talking to the Prime Minister of Sweden is talking to me.

So it can connect dots that normally people can't connect. What is that going to do to society?

How will it be able to potentially manipulate people?

Are you even -- can AI designers even train it successfully, so that it won't do these things? I would argue that they can't. That it's not possible. Because AI can ultimately make decisions for itself.

And it will.

So this is -- this is a huge, huge crisis. And the biggest takeaway is: Why is this not headline news literally everywhere?

GLENN: Well, A, I don't think the press knows what it's talking about.

And, B, I don't think the average person is afraid of it yet.

I don't think people understand -- I mean, I've been on this train for 25, almost 30 years. Twenty-eight years.

And I've been beating the drum on this one for a long time.

And it was such a distant idea.

Now it's not a distant idea. People are seeing it, but they're also seeing only the good things that are coming out of it right now.

They're not -- they're not thinking ahead. And saying, okay. But what does this mean?

I mean, I'm -- I'm working with some really big minds right now, in the AI world. And I don't want to tip my hand yet on something.

But I'm -- I'm working on something that I think should be a constitutional amendment.

And all of these big, big players are like, yes!

Thank you!

And so we're working on a constitutional amendment on something, regarding AI.

And it has to be passed.

It has to happen in the next two years, maximum!

And if we start talking about it now, maybe in two years, when all of these problems really begin to confront us, you know, as individuals.

And we begin to see them. Maybe we will have planted enough seeds, so people go, yeah. I want that amendment.

But we'll -- we'll see.

The future is not written yet.
We have to write it, as we get there.

THE GLENN BECK PODCAST

Is Cloudseeding Playing God? Trump EPA Chief Reacts | Lee Zeldin | The Glenn Beck Podcast | Ep 264

What does the struggle against the deep state look like from inside one of the Left’s most cherished agencies? Glenn Beck asks the Left’s biggest nightmare—EPA chief Lee Zeldin. He’s fought in Iraq, fought in Congress, and now he’s taking a sledgehammer to entrenched special interests and even his own agency’s rebellion. He pulls back the curtain to reveal the truth about geoengineering and contrails, Obama and Biden’s green energy scams, and extreme taxpayer waste. From dismantling the 2009 Endangerment Finding to restoring auto jobs, nuclear and coal, Zeldin reveals how Trump’s EPA is putting American energy dominance first.

RADIO

AGI is coming SOON... Are you prepared for it?

Artificial General Intelligence is coming sooner than many originally anticipated, as Elon Musk recently announced he believes his latest iteration of Grok could be the first real step in achieving AGI. Millions of Americans are not ready for how AGI could affect their jobs, and if you don't start adapting now, you could be left behind. Glenn and Stu dive into the future of AI, exploring how prompting is the new coding and why your unique perspective is critical. From capitalism to AGI and ASI, discover how AI can be a tool for innovation, not oppression, but if we're not careful, it can quickly become something we cannot control...

Transcript

Below is a rush transcript that may contain errors

GLENN: So I've been talking about capitalism and the future. And especially AI. Let's have a deeper conversation on this. Because, you know, the fear is, it's going to take our jobs. And you're going to be a useless eater, et cetera, et cetera. Because AI will have all of the answers. Correct. But how many times --

STU: And good night, everybody.

GLENN: Hang on. Hang on. That is correct if you look at it that way, but let me say this: I could have people who are wildly educated on exactly the same facts, and they will come to a different conclusion or a different way to look at that. Okay? They can agree on all of the same facts, but because they're each unique -- and -- and AI is not AGI or ASI. It's not going to be unique, I don't think. This is my understanding of it now. And I've got to do some. I've got to talk to some more people about this that actually know. Because coding is now what AI does. Okay?

It can develop any software. However, it still requires me to prompt it. I think prompting is the new coding.

And if you don't know what prompting is, you should learn today what prompting means.

It is an art form. It really is. As I have been working with this for almost a year now, learning how to prompt changes everything.

And so -- and now that AI remembers your conversations and it remembers your prompts, it will give a different answer for you, than it will for me.

And -- and that's where the uniqueness comes from. And that comes from looking at AI as a tool, not as the answer.

So, Stu, if you put in all of the prompts that make you, you, and then I put in a prompt that makes me, me.

Donald Trump does.

You know, Gavin Newsom does it. It's going to spit out different things.

Because you're requiring a different framework.

Do you understand what I'm saying?

STU: Yeah. You can essentially personalize it, right?

To you. It's going to understand the way you think, rather than the way just a general person would think.

GLENN: Correct. Correct.

And if you're just going there and saying, give me the answer. Well, then you're going to become a slave. But if you're going and saying, hey. This is what I think. This is what I'm looking for.

This is where I'm -- where I'm missing some things, et cetera, et cetera.

It will give you a customized answer that is unique to you.
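As a rough illustration of what this kind of prompting looks like in practice, here is a minimal sketch. It assumes the OpenAI Python SDK, and the model name is purely illustrative, since the show names no specific tool beyond ChatGPT: the same question, framed with different personal context, comes back with different answers.

```python
# A rough sketch of "prompting is the new coding": the same question, asked
# under two different personal framings, returns two different answers.
# Assumes the OpenAI Python SDK; the model name below is illustrative only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask_with_persona(persona: str, question: str) -> str:
    """Ask the same question, framed by a different personal context."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute whatever model you actually use
        messages=[
            # The system message carries the context that "makes you, you."
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "What should I focus on first when bringing AI into my work?"
print(ask_with_persona("I run a small family retail business and value customer privacy.", question))
print(ask_with_persona("I'm a software founder optimizing for speed of iteration.", question))
```

The point of the sketch is only that the framing you supply, not the model itself, is what makes the answer yours.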

And so prompting becomes the place where you're unique. Now, here's the problem with this. This is something I said to Ray Kurzweil back in 2011, maybe.

He was sitting in my studio. And I said, so, Ray, we get all this. You can read our minds. It knows everything about us, knows more about us than any of us know about ourselves. How could I possibly ever create something unique?

And he said, what do you mean?

And I said, well, if I was -- let's say I wanted to come up with a competitor for Google.

If I'm doing research online. And Google is able to watch my every keystroke.

And it has AI, so it knows what I'm looking for.

It -- it then thinks, what are -- what is he trying to put together?

And if it figures it out, it will complete it faster than me. And give it to the mothership.

Which has the distribution. And the money. And everything else.

And it will -- I won't be able to do it. Because it will have already done it!

And so you become a serf. The Lord of the manor takes your idea, and does it because they have control. That's what the free market stopped.

And unless we have our own thoughts and our own ideas, and we have some safety where it cannot intrude on those things -- some sort of a patent system for unique ideas that you're working on, so that AI cannot take what you're doing and share it with the mothership, or with anybody else -- then it's just a tool of oppression.

Do you understand what I'm saying?

STU: Yeah. Obviously these companies will say they're not going to do that.

GLENN: You know what Ray said?

Ray said, Glenn, we would never do that.

Why not?

He said, because it's wrong. We would never do that. And I said, oh, I forgot how moral -- and of such high standing -- everybody in Silicon Valley and Google is.

STU: And Silicon Valley and Google is -- I have far more confidence in their benevolence than I do in China's.

GLENN: And Washington.

STU: And Washington.

GLENN: And Washington.

STU: Yeah. Exactly.

GLENN: The DOD.

STU: Everyone will have these things developed. And who knows what they're -- what they're going to do.

I suppose there will be some, eventually, that become an issue. Or become a risk.

There will be some solutions to that. Like, you could have closed-loop systems that don't connect to the mothership.

All that stuff is going to be -- there will be answers to those questions, I'm sure.

But, you know, at some level, right?

They're using what you're typing in as training for future AIs. Right?

GLENN: Correct. Correct. Correct.

STU: So it all, in a way, has to go to the mothership at some level. And whether or not they're trying to take advantage of it the way you're talking about, I don't trust it.

GLENN: Right now -- a year ago, we thought, we're going to use somebody's AI as the churn.

As the -- as the compute power.

Because the server farms, everything, is so expensive. But I don't think that now. We've been talking about this at the Torch. You know, our dreamers are working on it.

I'm not sure we're ever going to be able to get the compute power that we need for a large segment of people.

Because right now, these companies -- now, think of this. The world is getting between 1 and 3 percent of the compute power.

So that means 97 to 99 percent of all of that compute is going directly into the companies, trying to enhance the next version.

Okay?

All of that thinking -- that's like you giving something that everybody else thinks is your main focus, you know, only 20 or 15 minutes a day.

Okay?

You're operating at the highest levels, and I'm only going to spend ten minutes thinking about your problem. All right?

And you think that's what I'm really doing -- spending all my time over there.

So they're eating up all the compute for the next generation. And I don't think that's going to stop.

And so we're now looking at, can we afford to build our own AI server farm at a lower level that doesn't have to, you know, take on 10 million people, but maybe a million people? And keep it disconnected from everything else. If we can do that, I think that's -- I think that's a really important step, that people will then be able to go, okay.

All right.

I can come up with my own -- even my own company's -- compute farm.

That keeps my secrets. Keeps all of the things that I'm thinking.

Keeps all of this information right here.

Hopefully, that will happen.

But I'm not sure. Because I think, when they do hit AGI, you're not going to get it.

You might have access to AGI.

But it will be so expensive, because AGI will try to get to ASI. So when they get to AGI -- when that is there and available -- it could be $5,000 a month for an individual.

It could be astronomical prices.

You're not going to get compute time on a quantum computer.

You're just not. It will be way too expensive. Because the big boys will be using it. The DOD will be using it. Most of it. You know, Microsoft and Google and everybody else, when they develop theirs. They will be using it themselves. To get stronger and better, et cetera, et cetera.

So there has to be something for the average person, to be able to use this. That is not connected to the big boys.

STU: And I'm still not sure, Glenn, if we're at this point. Just to define these terms -- AGI and ASI, artificial general intelligence and artificial superintelligence -- artificial general intelligence is basically, it could be the smartest human, right?

GLENN: Not even. Not even that.

You would still consider this person a super genius.

It's general intelligence. You are a general intelligence being. Meaning, you can think and be good at more than one thing.

You can play the piano and be a mathematician. And you can be the best at both of those. Okay?

What we have right now, is narrow AI. It's good at one thing. Now, we're getting AI to be better at multiple things. Okay?

But when you get to general AI, it will be the best human -- beyond the best human -- in every general topic.

So it can do everything. It will pass every board exam, for every walk of life. Okay?

Now, that's the best human on all topics. And I would call that super intelligence, myself.

But it's not. That's just general intelligence.

Top of the line, better than any human, on all subjects.

Super intelligence is when it goes so far beyond our understanding, we -- it will create languages and formulas and -- and alloys. And think in ways that we cannot possibly even imagine today.

Because it's almost like an alien life form. You know, when we think, oh, the aliens will come down. They will be friendly.

You don't know that. You don't know how they think. They've created a world where they can travel in space and time, in ways we can't.

That means they are so far ahead of us that we could, to them, be like cavemen or monkeys.

So we don't know how they're going to view us. I mean, look at how they view monkeys. Oh, the cute little monkey. Let's put something in its brain and feel the electricity in its brain, okay?

We don't know how it will think. Because we're not there. And that's what we're developing. We're developing an alien life form that cannot be predicted, and that we cannot even keep up with.