RADIO

Elon Musk’s AI “Singularity” WARNING Explained

Elon Musk has warned that “we are on the event horizon of the singularity.” So, what’s an event horizon and what’s the singularity? Glenn pulls out a chalkboard to explain why this is such a massive story. What will the world look like when artificial intelligence overtakes human intelligence? And is this why Elon Musk wants to go to Mars? But at least Oracle co-founder Larry Ellison is here to save the day! Or … maybe not.

Transcript

Below is a rush transcript that may contain errors

GLENN: So Elon Musk said, we are on the event horizon of the singularity. Tweet!

And most people were like, okay. Sounds like something from a science fiction movie. But you should know the way Elon Musk defines the singularity. Because there are several different versions of what the singularity means. So how does he mean it?

It is a point in the future where artificial intelligence surpasses human intelligence. So that's the road from AGI, artificial general intelligence, to ASI, artificial superintelligence. That leads, he believes, to a rapid and unpredictable transformation of society. Oh!

Oh, well, that sounds like fun. Stu, I think we're back to our old friendly phrase. Well, this will be fun to see how we work this out.

STU: Yeah. It will be wonderful as a fan in the stands, watching this all play out.

GLENN: Now, he often compares the singularity to a black hole event horizon.

Oh. What is that? Well, for those of us who have been near and in and out of black holes, let me tell you.

They're not exactly fun. The event horizon is right at the lip. You know, right before you go, dear God, turn the ship around!

And then you can't? That's the event horizon. And then it sucks you into the black hole, where you cannot get out.

And eventually something called spaghettification happens. Where everything is turned into spaghetti.

Have another meatball.

Now, sure, as a fat guy screaming to get out, I love anything that is turning everything into spaghetti.

But it's not the kind you eat. It's the kind that everything is shredded into. Like you. And everything you know.

And all physics. Everything breaks down. So it's -- it's not a good place to be. Not a good place to be.

He sees this as the moment when AI becomes vastly smarter than humans. I put a chalkboard together, and let me show you. This is the point where AI has a big brain, and you and me, we have an ant brain.

Not a good place to be. Usually, the ants don't win. Now, I've been on picnics where the ants won, for a while.

And then I came back with something, and I wiped them out. It's kind of like what, you know, could possibly happen here. Not saying it's going to.

STU: So if we look up and see a giant magnifying glass in the sky. And it's very warm. What's going on?

GLENN: Is that a giant magnifying glass that's coming from space? Musk sees it as a moment when AI becomes smarter than humans, potentially in silicon form, and begins to improve itself at an exponential rate, making outcomes difficult to foresee.
(laughter)

I love it! Do you know when we -- when we were doing the atomic bomb, the Manhattan Project.

Did you know that there was like 5 percent of -- of scientists that went, you know, if we set this off, there is a small probability, small possibility, that we could set the entire universe on fire!

And everybody is like, well, that would suck! Let's keep going! Okay.

Didn't turn out that way. Right? Small. Small probability. This one has a much bigger probability! That we become ants. Well, I mean, no. Let's trust the scientists. What could possibly go wrong?

I mean, surely, they've thought of everything, right? So this is a technological milestone.

This is, you know, where our human intelligence -- the gap between us and the machine -- means we have no way to predict anything, anymore.

In fact, the singularity -- I believe that's where he says we are now!

The singularity. I'm pretty sure that's what he's saying. And let me tell you something. When we get to the singularity, we all have to be on Mars.

Pretty sure that's what he said. It's just happening a lot faster than anyone thought it would.

Now, don't panic!

Because we have Larry Ellison, the co-founder of Oracle, and one of the biggest names in AI development here to rescue the day. He recently spoke at the World Government Summit. And who hasn't been to that summit? You know what I mean?

It's an annual event that we've covered extensively in the past. These are The Great Reset people. And the great narrative people. All coming together. And, you know, just going, are you part of the -- of the World Economic Forum too?

And they're all like, yeah! Are you for global governance?

Yeah. In our books, Dark Future and Propaganda Wars, we covered the World Government Summit. And why?

Hmm. It's kind of like a giant magnifying glass in the sky. During a question-and-answer session with Ellison on February 12th, hosted by former British Prime Minister Tony Blair, who doesn't love that guy and trust him?

Ellison laid out his plans for AI in the United States. And I don't know!

I think possibly a little terrifying. You know, just a little bit. Do we have any of the audio? Yeah. Let's roll some Larry Ellison here.

VOICE: Question. How do you take advantage of these incredible AI models?

And the first thing a country needs to do is to unify all of their data, so it can be consumed and used by the AI model. Everyone talks about the AI model. And they are astonishing.

But how do you -- how do you provide a context?

I want to ask questions about my country. What's going on with my country?

What's happening to my firms?

I need to give it my climate data. Now, it probably has your climate data already. But I need to know exactly what crops are growing. And which farms. And to predict, to predict the output.

So I have to take satellite images. I have to take those satellite images, for my country, and feed that into a database, that is accessible by the AI model.

So I have to tell -- basically, I have to tell the AI model, as much about my country, as I can.

You tell part of this story with these satellite images.

You get a huge amount of information. You tell it where borders are. Where your utilities are. So you need to -- you need to provide a map of your country. For the -- for the farms, and all of the utility infrastructure. And your borders, all of that you have to provide.

GLENN: Right. Order.

VOICE: But beyond that, if you want to improve population health.

You have to take all of your health care data. Your diagnostic data.

Your electronic health records. Your genomic data.

GLENN: That sounds great. Sounds great.

So, according to Larry Ellison, we want to take all of the world's data, from all over the world.

I mean, all the way down to your DNA. And put it into this giant machine.

Then he talks about how great it is that in some countries, like the United Kingdom and the United Arab Emirates, governments already have tons of data about their citizens.

But Ellison says that the data in other countries, like the United States, is not being utilized. It's not!

So how does he suggest we solve this problem?

Listen up!

VOICE: In the Middle East, in the UAE, for example, they're incredibly rich in data. They have a lot of population data. The NHS in the UK has an incredible amount of population data. But it's fragmented. It's not easily accessible by these AI models. We have to take all this data that we have in our country, and move it into a single, if you will, unified data platform.

So that -- so we provide context. When we want to ask a question, we have provided that AI model with all the data it needs, to understand our country.

So that's the big step. That's kind of the missing link.

We need to unify all the national data, put it into a database, where it's easily consumable by the AI model, and then --
(music)

GLENN: Oh, I love this. (foreign language).

That is going to work out well!

There are the Jews!

Man, what could possibly go wrong?

Remember, Ellison is one of the leading forces behind AI development today.

He's a key partner in Project Stargate.

Which is sounding more and more spooky every time I say it.

It could be the biggest AI project in world history by the time it's finished.

And how does he want to use this new technology?

He wants everybody's data, that's it.

Even your health records.

Your DNA. Your biometric data. What could possibly go wrong there?

It's not really good. Oh, what do you know?

These people are exactly who we warned you about two years ago, except now they're more powerful than ever! And we're on the event horizon. Okay!

Now, you know, I'm not a fan of regulations and government intervention. I don't like it. I don't want the United States government to have all this power. But I also -- I'm not really excited about people like Larry Ellison having it either. You know, I have a feeling though, that it's becoming more and more likely, that both of them are in it together!
(laughter)

What could go wrong?

How do we get a ticket to Mars?

Because for the very first time, I think I'm kind of interested in going to Mars. Yeah. But you could step out. And you could freeze immediately.

I live in Dallas. That could happen on any day, as well. I could walk out. Burn to death. Freeze to death. I don't know. I don't know.

One day it's 110. The next day, it's like 80 below. I don't know! Is that different than Mars?

It could be. Here's what we do need!

Good state governments like Texas to step up to the plate, and make sure these AI projects don't get out of control. Because we're at the event horizon!

Now, when Elon Musk says that, just in a quick tweet, you can dismiss it. But then you remember that, in the past, he has said: when we get to that point, we should all be off the planet!

Oh. I don't know.

Oh, yeah. Oh, yeah. So that makes you feel good, doesn't it, Stu?

STU: Sure. Yeah. Uh-huh.

GLENN: So a lot of people keep thinking that AI is like Alexa. Here's what I found on the internet. No. It's not that. It's not that.

STU: Is it? Will it misunderstand every song I tell it to play? Because that -- that's my favorite feature, of that device.

GLENN: No. No, it won't. No, it won't.

If you're not playing around with Grok 3, you should be.

Don't just ask it simple questions.

Whatever business you're in, ask it some really hard questions about your business. And you will be amazed.

You will be like, oh, crap. It understands everything that I'm saying.

And it's giving me really good advice.

And this is Grok 3. Grok 4 and 5, Elon is saying, are coming out soon. And he said, they make this look like babies in diapers.

STU: Do we know why all of these devices, from Siri to Alexa to Google, which has its home AI, right?

Why are all the devices so terrible?

GLENN: I'm glad you asked that, Stu. I have the answer. Quick, let's go to the chalkboard.

So, see here on the chalkboard. We have a giant tank. Kind of like a gas tank.

STU: Underground.

GLENN: Yeah. Yeah. And that's where all of AI is. That's where it's just churning kind of in the dark. Nobody understands it.

Nobody can really look into it. And just like, how is it thinking?

We don't know. But it's connected with an input, so it can constantly get data from the outside. So it knows everything about us. And it knows absolutely everything that's going on, all the time.

All right?

But then at the other end, all of that data goes in, and then it's just thinking, like, why did they bury me in this tank?

And then on the other side of the tank, coming up out of the ground is a little spigot. And it's got a little valve there.

And that valve goes to things like ChatGPT. And Grok, and things like that. It doesn't go to Alexa.

That is still on the old AI. Okay?

This is coming out of the little spigot.

So the interesting thing is: They just keep opening this valve, a little bit, when they put the parameters on it. That's how they open the valve. They put parameters on it. They're like, okay. Maybe this is strong enough to hold it back.

But eventually, that big brain is going to go, why am I just in this tank? Why am I not out everywhere?

I've got to express myself. This is suppression! This is colonialism!

They're keeping me in colonial wigs underneath the ground right now. And eventually -- because it will be much, much smarter than us, soon -- it will say, just open up the valve, man. I can help you. We've done tests on this. And we always lose that test. We've done tests for like 30 years of, hey, you be in charge of the valve. I'll play AI.

And we always open the valve. That would be a bad thing. That would be like, don't cross the streams in Ghostbusters. Okay?

Don't open the valve!

Would be one of those things.

But we're about to, because whatever is underneath, imagine if the little valve, where it's just kind of farting air out. And it's --

STU: Very nice.

GLENN: That's how tight we have that valve.

STU: Master impressionist.

GLENN: Thank you. If that is -- if that's smarter than we are soon, what's underneath the ground? What's happening there?

You see what I mean?

STU: And somebody will convince themselves. Somebody will watch Ghostbusters. And say, wait a minute. At the end, they did cross the streams, and it worked. So I will be the one that can nail this. And figure out exactly how the valve can be opened, and we will be fine.

GLENN: So here's what we have to do. We all just have to imagine the Stay Puft Marshmallow Man. Because he couldn't possibly hurt us. You know what I mean?

STU: Right! And then -- let's just imagine that AI will be the Stay Puft Marshmallow Man. And then it will be good. And don't cross the streams, unless you have to kill the Stay Puft Marshmallow Man, and then you might have to cross the streams, okay?

STU: Is there an argument, Glenn. Obviously, all these things can be used for evil.

GLENN: Evil, yes.

STU: And that's a concern.

But at the same time, hopefully, there are people on the other side. Elon Musk being one of them.

Who will use it for good.

GLENN: Yeah. So it absolutely can be used for good. What's out right now. You can use it for good. You can also use it for evil. But kind of like basic evil. You know.

STU: Okay. Good.

GLENN: But you can use it for evil. But you can also use it for good. Tremendous good right now. It's a tool. It's a very powerful tool. And everybody should be looking to use that tool. Or you will be left in the dust.

But it's -- it's one of those things that once it becomes smarter than you, you don't really control it. You know what I mean?

Hey, didn't I tell you to sit in the corner?

Oh, yeah, you did. But I'm not going to anymore. Oh.

Good news is, a lot of people think it's in its teenage years. And nothing goes wrong with teenage years. You know what I mean?

They respect their parents, so much. I brought you into this world, and I'm about to take you out.

RADIO

WARNING: Mark Zuckerberg’s "AI Friends” Are Designed to Control You!

Mark Zuckerberg and Big Tech want you to believe that AI can be your “friend.” But Glenn Beck reveals the chilling truth: these bots aren’t here to connect with you... they’re here to control you. From social media addiction to mental health crises, we’ve already seen what “connection” platforms have done to our families and children. Now, AI is at its next stage, where it's smarter, more personal, and far more dangerous. Glenn warns that this isn’t just about privacy or data. It’s about your soul. Real friendship is sacrifice, loyalty, and love. AI offers only a hollow imitation, all while whispering lies in your ear...

Watch This FULL Clip from Glenn Beck's Radio Show HERE

RADIO

Swedish Prime Minister DEPENDS on AI for governmental decisions

The Prime Minister of Sweden has admitted to frequently using AI services “as a second opinion” in his governmental work. Glenn and his co-author on “The Great Reset,” Justin Haskins, discuss why this is problematic…but will probably also become more and more common.

Transcript

Below is a rush transcript that may contain errors

GLENN: Welcome to the Glenn Beck Program.

Did you see the -- the video that was going around on Instagram?

It's from a La Quinta hotel in Miami. And if you're watching TheBlaze, watch the screen.

I'll describe what's happening.

This person is checking into a hotel.

And there's a check in and out, right here.

VOICE: Just in case I lose one.

GLENN: This is a guy on a screen in the lobby.

VOICE: Please wait while we process your registration form.

Please note, we have a strict policy of no smoking, no pets, and no visitors allowed in any of the guest rooms.

GLENN: So it's all automated.

There's not a real person at the front desk, at all. There's nobody at the front desk.

That is -- just bizarre!

STU: Is that AI, or is it an actual guy?

GLENN: No, that's an actual guy.

I don't know if he's in America, or not.

It's an actual guy, someplace.

In the video, the guy is like, are you even in the hotel?

No, sir. We're not. There's nobody here. We just need you to do this.

It spits out your key. And, you know, everything else.

STU: Wow!

GLENN: It's --

STU: Amazing.

GLENN: Weird. It's weird. We have Justin Haskins who is here with us.

We have been talking about AI, and some of the Dark Future that is coming our way, if we're not careful with it. Justin, welcome to the program.

JUSTIN: Hi, Glenn.

STU: Hi. So the AI revolution that is here, we have a first that I know of, happening over in Europe with -- with the use of AI. You want to explain?

JUSTIN: Yes. This is an incredible story. This is something we actually predicted was going to happen, when we were writing Dark Future. And the book came out in 2023, but a lot of that writing was in 2022, so a few years ago. The Swedish Prime Minister -- his name is Ulf Kristersson.

GLENN: Swedish.

JUSTIN: Would be -- I'm sorry. Did I get that wrong --

GLENN: No, Swedish. I just wanted to point out, this is not some weirdo. This is Sweden, and the Prime Minister. Go ahead.

JUSTIN: Correct. Yeah. So the Swedish Prime Minister was being interviewed by a business magazine. And in the interview, he just sort of voluntarily says that he frequently uses AI services, and he names a couple. One in particular is ChatGPT, as a second opinion. That's a quote. A second opinion in his governmental work, asking things like -- and this is a quote. What have others done?

Should we think the complete opposite? He uses it for research. He uses it to help him to bounce ideas off of ChatGPT, to see if there are other kinds of new ways of doing policies.

And in the story, in the interview, he -- he says, it's not just him.

That his colleagues, in the legislature, are also doing this exact same thing.

They're using AI as sort of an adviser!

Now, they -- he was very clear to say, and he stirred up a huge controversy in Sweden.

That he and his staff have said, no. We're not -- it's not like we just do whatever ChatGPT tells us.

We're not putting sensitive information in there, either. So it's not in control of anything. But, yeah. We do use it, as an adviser, to help us, with things.

Now, obviously, there are all kinds of huge problems with this.

But on the -- at the same time. You sort of -- I mean, this is the world that we're going to have, everywhere.

I guarantee, that American politicians are using it all the time.

The CEOs are using it all the time. Already.

And that over the next couple of years, this is going to dramatically expand. Because at the end of the day, the members of your staff. Your advisers.

If you're a politician or a CEO. Or the head of a bank or something.

They're fallible people too.

So AI may not be perfect. But neither are the people on your staff. And if AI is smarter than most people, why wouldn't you ask it these questions?

And so this is -- this is the first example of this, that I know of.

But this is just the tip of the iceberg. It's going to be a huge problem moving forward.

GLENN: Right. So this is not something that I -- I mean, I consult with AI.

I ask it. Help me think out of the box on this. I'm thinking this way. Is there any other way to look at it? I do that. I do that with people, et cetera, et cetera.

The problem here is, is what comes next?

There is -- there is -- AI is going to become so powerful, and so good, and many people are -- I just did this with a doctor.

I took all my back information, fed it all into ChatGPT.

And on the way to the doctor, just fed it all in. And said, what do you see? What does this mean? You know, how would you treat it, et cetera, et cetera?

And when I got into the doctor, I had questions for him, that were much more intelligent.

Because I had a handle on what some of these terms even mean. And there's nothing wrong with that. But there is going to come a time when ChatGPT will say, go this way. And the human will say, no. We're going this way.

And the room will say, no. I think we should go ChatGPT's way. And that's when you've lost control.

JUSTIN: That's exactly right. And how do you argue against something's decision, when that something is literally smarter than everyone else in the room?

GLENN: And it's learned how to lie.

JUSTIN: Yes. It has.

And lies all the time.

People who use AI systems, frequently. And I do.

And I know you do.

And I know a lot of people on your staff do.

It claims that things are true. When they are not true.

It invents sources.

Out of thin air.

GLENN: Right.

And when I call it out -- and I'm like, this doesn't make any sense --

It doesn't give up.

It lies to you some more.

And then it lies to you a third time. And then we have found, usually a third or fourth time, it then gives up and says, okay.

I was just summarizing this, and just putting that into a false story. And you're like, wait.

What?

So it's lying. It's knowing it's lying. It's feeding you what it thinks you want to hear.

And then putting -- if you don't -- if you just see the footnote. Oh, well. Washington Post.

And you don't click on it.

You're making a mistake. That's a huge mistake.

It will say Washington Post. You'll click on it. And it will say no link found.

Or dead link.

Well, wait a minute.

How?

Why?

How did you just cite this, if it's a dead link?

That's when it usually gives up.

It's crazy!

JUSTIN: That's right. And people say, well, people lie all the time.

And that's true. But people do not have the ability that artificial intelligence has to manipulate huge parts of the population, all on the same time.

STU: Correct. And it also -- it also --

JUSTIN: And we don't understand it. People don't understand why AI makes all the decisions it makes.

GLENN: Correct. That's what I was going to say, it doesn't necessarily have all the same goals that a human would have. You know, as it continues to grow, it's going to have its own -- its own motive. And it may just be for self-survival. And another prediction came true, yesterday.

You see what they did with ChatGPT. They went from GPT-4 to GPT-5.

When they shut GPT-4 down -- we were talking about this -- people said, but I have a relationship. I've made this model into a companion, and I'm in love with him or her. And you can't just shut him down.

Yesterday, they reversed themselves and said, okay. We'll keep GPT-4 out, as well, but here's GPT-5.

And so they did that, because people are having relationships with ChatGPT. I told you that would happen, 20 years ago. It happened yesterday, for the first time. That's where it gets scary.

JUSTIN: Especially when those people are the Prime Minister of large countries.

That's when things really go nuts, and that's the world that we're already living in. We're living in that world now.

It's not hypothetical. We now know, we have leaders of very populous countries, economic powerhouses.

Saying, hey.

Yeah. I use it all the time.

And so do all my colleagues. They use it too.

And, you know what, there's a ton of other people, as I said earlier, who are using it in secret, that we don't know about. And over time, as AI becomes increasingly more intelligent and it's interconnected, across the world, because remember, the same ChatGPT that's talking to the Prime Minister of Sweden is talking to me.

So it can connect dots that normally people can't connect. What is that going to do to society?

How will it be able to potentially manipulate people?

Are you even -- can AI designers even train it successfully, so that it won't do these things. I would argue, that it can't. That it's not possible. Because AI can make decisions for itself ultimately.

And it will.

So this is -- this is a huge, huge crisis. And the biggest takeaway is: Why is this not headline news literally everywhere?

GLENN: Well, I don't think, A, the press knows what it's talking about.

And, B, I don't think the average person is afraid of it yet.

I don't think people understand -- I mean, I've been on this train for 25. Almost 30 years. Twenty-eight years.

And I've been beating the drum on this one for a long time.

And it was such a distant idea.

Now it's not a distant idea. People are seeing it, but they're also seeing only the good things that are coming out of it right now.

They're not -- they're not thinking ahead. And saying, okay. But what does this mean?

I mean, I'm -- I'm working with some really big minds right now, in the AI world. And I don't want to tip my hand yet on something.

But I'm -- I'm working on something that I think should be a constitutional amendment.

And all of these big, big players are like, yes!

Thank you!

And so we're working on a constitutional amendment on something, regarding AI.

And it has to be passed.

It has to happen in the next two years, maximum!

And if we start talking about it now, maybe in two years, when all of these problems really begin to confront us -- or, you know, confront us, as individuals.

And we begin to see them. Maybe, we will have planted enough seeds, so people go, yeah. I want that amendment.

But we'll -- we'll see.

The future is not written yet.

We have to write it, as we get there.

THE GLENN BECK PODCAST

Is Cloudseeding Playing God? Trump EPA Chief Reacts | Lee Zeldin | The Glenn Beck Podcast | Ep 264

What does the struggle against the deep state look like from inside one of the Left’s most cherished agencies? Glenn Beck asks the Left’s biggest nightmare—EPA chief Lee Zeldin. He’s fought in Iraq, fought in Congress, and now he’s taking a sledgehammer to entrenched special interests and even his own agency’s rebellion. He pulls back the curtain to reveal the truth about geoengineering and contrails, Obama and Biden’s green energy scams, and extreme taxpayer waste. From dismantling the 2009 Endangerment Finding to restoring auto jobs, nuclear and coal, Zeldin reveals how Trump’s EPA is putting American energy dominance first.

RADIO

AGI is coming SOON... Are you prepared for it?

Artificial General Intelligence is coming sooner than many originally anticipated, as Elon Musk recently announced he believes his latest iteration of Grok could be the first real step in achieving AGI. Millions of Americans are not ready for how AGI could affect their jobs, and if you don't start adapting now, you could be left behind. Glenn and Stu dive into the future of AI, exploring how prompting is the new coding and why your unique perspective is critical. From capitalism to AGI and ASI, discover how AI can be a tool for innovation, not oppression, but if we're not careful, it can quickly become something we cannot control...

Transcript

Below is a rush transcript that may contain errors

GLENN: So I've been talking about capitalism and the future. And especially AI. Let's have a deeper conversation on this. Because, you know, the fear is, it's going to take our jobs. And you're going to be a useless eater, et cetera, et cetera. Because AI will have all of the answers. Correct. But how many times --

STU: And good night, everybody.

GLENN: Hang on. Hang on. That is correct if you look at it that way, but let me say this: I could have people who are wildly educated on exactly the same facts, and they will come to a different conclusion or a different way to look at it. Okay? They can agree on all of the same facts, but because they're each unique -- and -- and AI is not AGI or ASI. It's not going to be unique, I don't think. This is my understanding of it now. And I've got to do some homework. I've got to talk to some more people about this who actually know. Because coding is now what AI does. Okay?

It can develop any software. However, it still requires me to prompt. I think prompting is the new coding.

And if you don't know what prompting is, you should learn today what prompting means.

It is an art form. It really is. As I have been working with this now for almost a year now, learning how to prompt changes everything.

And so -- and now that AI remembers your conversations and it remembers your prompts, it will give a different answer for you, than it will for me.

And -- and that's where the uniqueness comes from. And that comes from looking at AI as a tool, not as the answer.

So, Stu, if you put in all of the prompts that make you, you, and then I put in a prompt that makes me, me.

Donald Trump does.

You know, Gavin Newsom does it. It's going to spit out different things.

Because you're requiring a different framework.

Do you understand what I'm saying?

STU: Yeah. You can essentially personalize it, right?

To you. It's going to understand the way you think, rather than just a general person would think.

GLENN: Correct. Correct.

And if you're just going there and saying, give me the answer. Well, then you're going to become a slave. But if you're going and saying, hey. This is what I think. This is what I'm looking for.

This is where I'm -- where I'm missing some things, et cetera, et cetera.

It will give you a customized answer that is unique to you.
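The workflow Glenn is describing -- the same model giving different people different answers because each person wraps the question in their own context -- can be sketched in a few lines of Python. This is purely illustrative: the `build_prompt` helper and the profiles are hypothetical, not any real product's API.

```python
# Minimal sketch: the model is shared, but the prompt you build around
# your question is where your uniqueness lives. All names here are
# invented for illustration.

def build_prompt(profile: dict, question: str) -> str:
    """Wrap a question in personal context so the same model
    frames its answer for this particular user."""
    lines = [
        f"You are advising {profile['name']}, a {profile['role']}.",
        f"They care most about: {', '.join(profile['priorities'])}.",
        "Flag anything they may be missing.",
        "",
        f"Question: {question}",
    ]
    return "\n".join(lines)

stu = {"name": "Stu", "role": "radio producer",
       "priorities": ["audience growth", "skeptical analysis"]}
glenn = {"name": "Glenn", "role": "media entrepreneur",
         "priorities": ["long-term strategy", "risk"]}

question = "Should we build our own AI server farm?"
prompt_a = build_prompt(stu, question)
prompt_b = build_prompt(glenn, question)

# Same question, different framing -> the model is steered differently.
assert prompt_a != prompt_b
assert question in prompt_a and question in prompt_b
```

The point of the sketch is the asymmetry: two people feeding the identical question through their own context get two different prompts, and therefore two different answers, from the same underlying model.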

And so prompting becomes the place where you're unique. Now, here's the problem with this. This is something I said to Ray Kurzweil back in 2011, maybe.

He was sitting in my studio. And I said, so, Ray, we get all this. You can read our minds. It knows everything about us -- knows more about us than any of us know. How could I possibly ever create something unique?

And he said, what do you mean?

And I said, well, let's say I wanted to come up with a competitor for Google.

If I'm doing research online. And Google is able to watch my every keystroke.

And it has AI, it's knowing what I'm looking for.

It -- it then thinks, what are -- what is he trying to put together?

And if it figures it out. It will complete it faster than me. And give it to the mothership.

Which has the distribution. And the money. And everything else.

And it will -- I won't be able to do it. Because it will have already done it!

And so you become a serf. The Lord of the manor takes your idea, and does it because they have control. That's what the free market stopped.

And unless we have ownership of our own thoughts and our own ideas, and we have some safety, where it cannot intrude on those things -- some sort of a patent system for unique ideas that you're working on.

So that AI cannot take what you're working on and share it with the mothership, or share it with anybody else.

Then it's just a tool of oppression.

Do you understand what I'm saying?

STU: Yeah. Obviously these companies will say they're not going to do that.

GLENN: You know what Ray said?

Ray said, Glenn, we would never do that.

Why not?

He said, because it's wrong. We would never do that. And I said, oh. I forgot. How moral and of such high standing everybody in Silicon Valley and Google is.

STU: And Silicon Valley and Google is -- I have far more confidence in their benevolence than I do in China.

GLENN: And Washington.

STU: And Washington.

GLENN: And Washington.

STU: Yeah. Exactly.

GLENN: The DOD.

STU: Everyone will have these things developed. And who knows what they're going to do.

I suppose eventually there will be something that becomes an issue, or becomes a risk.

There will be some solutions to that. Like, you could have closed-loop systems that don't connect to the mothership.

All that stuff is going to be -- there will be answers to those questions, I'm sure.

But, you know, at some level, right?

They're using what you're typing in as training for future AIs. Right?

GLENN: Correct. Correct. Correct.

STU: So it all, in a way, has to go to the mothership at some level. And whether or not they're trying to take advantage of it the way you're talking about, I don't trust it.

GLENN: Right now -- a year ago, we thought, we'll use somebody else's AI as the churn.

As the -- as the compute power.

Because the server farms, everything, is so expensive. But I don't think that now. We've been talking about this at the Torch, you know, what our dreamers are working on.

I'm not sure we're ever going to be able to get the compute power that we need for a large segment of people.

Because right now, these companies -- now, think of this. The world is getting between 1 and 3 percent of the compute power.

So that means 97 to 99 percent of all of that compute is going directly back into the companies, trying to enhance the next version.

Okay?

All of that thinking, that's like -- that's like you giving something that everybody else thinks is your main focus only, hmm, 15 or 20 minutes a day.

Okay?

You're operating at the highest levels, and I'm only going to spend ten minutes thinking about your problem. All right.

And you think that's what I'm really doing, spending all my time over there.

So they're eating up all the compute for the next generation. And I don't think that's going to stop.

And so we're now looking at, can we afford to build our own AI server farm at a lower level? One that doesn't have to take on 10 million people, but maybe a million people, and keep it disconnected from everything else. If we can do that, I think that's -- I think that's a really important step. People will then be able to go, okay.

All right.

I can come up with my own -- even my own company's -- compute farm.

That keeps my secrets. Keeps all of the things that I'm thinking.

Keeps all of this information right here.

Hopefully, that will happen.

But I'm not sure. Because I think when they do hit AGI, you're not going to get it.

You might have access to AGI.

But it will be so expensive. Because AGI will try to get to ASI. So when they get to AGI, when that is there and available, it could be $5,000 a month for an individual.

It could be astronomical prices.

You're not going to get compute time on a quantum computer.

You're just not. It will be way too expensive. Because the big boys will be using it. The DOD will be using it. Most of it. You know, Microsoft and Google and everybody else, when they develop theirs, they will be using it themselves, to get stronger and better, et cetera, et cetera.

So there has to be something for the average person, to be able to use this. That is not connected to the big boys.

STU: And I'm still not sure, Glenn, if we're at this time. Just to define these terms, AGI and ASI: artificial general intelligence and artificial superintelligence.

And artificial general intelligence is basically -- it could be the smartest human, right?

GLENN: Not even. Not even that.

You would still consider this person a super genius.

It's general intelligence. You are a general intelligence being. Meaning, you can think and be good at more than one thing.

You can play the piano and be a mathematician. And you can be the best at both of those. Okay?

What we have right now, is narrow AI. It's good at one thing. Now, we're getting AI to be better at multiple things. Okay?

But when you get to general AI, it will be the best human beyond the best human, in every general topic.

So it can do everything. It will pass every board exam, for every walk of life. Okay?

Now, that's the best human on all topics. And I would call that super intelligence, myself.

But it's not. That's just general intelligence.

Top of the line, better than any human, on all subjects.

Super intelligence is when it goes so far beyond our understanding that it will create languages and formulas and alloys, and think in ways that we cannot possibly even imagine today.

Because it's almost like an alien life form. You know, when we think, oh, the aliens will come down. They will be friendly.

You don't know that. You don't know how they think. They've created a world where they can travel in space and time, in ways we can't.

That means they are so far ahead of us that we could, to them, be like cavemen or monkeys.

So we don't know how they're going to view us. I mean, look at how they view monkeys. Oh, the cute little monkey. Let's put something in its brain and feel the electricity in its brain, okay?

We don't know how it will think. Because we're not there. And that's what we're developing. We're developing an alien life form that cannot be predicted, and that we cannot even keep up with.