'Singularity' Author Warns That Future AI Could Pose These Threats

William Hertling, author of the “Singularity Series,” joined Glenn on today’s show to talk about the future of technology and artificial intelligence. They tackled these questions and more:

  • What will it look like when humans and smart machines are “coexisting”?
  • Will we keep losing jobs to automation?
  • When will robots be able to diagnose our illnesses and replace doctors?
  • How will the human experience change as technology advances?
  • Will we be able to “opt out” of AI?

With every upside to the advancements in AI, there looks to be a downside. Tell us in the comment section below whether you are excited or ready to pull the plug.

This article provided courtesy of TheBlaze.

GLENN: I have been immersing myself in -- in future tech, to try to understand what is coming our way and what the -- the moral issues are of the near future.

What it means to each of us in our lives. What it means to be asked the question, am I alive?

Is this life? We have so many questions that we have to answer. And we're having trouble with just some of the basic things. And no one is really thinking about the future.

When you think about the future, and you think about robots or you think about AI, Americans generally think of the Terminator. Well, that's not necessarily what's going to happen.

How do we educate our kids?

So I've been reading a lot of high-tech stuff. And in my spare time, I've been trying to read some novels. And I'm looking for the storytellers, the people who can actually tell a great story that is really based in what is coming. The -- the futurist or the -- the near future sci-fi authors, that can show us what's on the horizon.

And I found a series of books. It's called the -- the Singularity Series. And I found them over the Christmas vacation. And I just last night finished the fourth one.

And they are really, really well-done. They are -- they get a little dark. But it also shows the positive side of what could be. And it was a balanced look, and a way to really understand the future that is coming and is on the horizon.

William Hertling is the author, and he joins us now. William, how are you, sir?

WILLIAM: I'm doing great. Thanks so much for having me on.

GLENN: Congratulations on a really good series.

This is self-published?

WILLIAM: Yep. It is self-published. I could not find a publisher who saw the vision of the series. But I self-published it, and people love it. So it gets the word out there.

GLENN: Yeah. You've won several awards for it, and I hope -- you know, I don't know what your sales have been like, but I hope your sales are really good. Because I -- I think it -- well, let me ask you this: What was the intent of the series for you?

WILLIAM: You know, what happened was, about ten years ago, I read two books back-to-back. One was Ray Kurzweil's The Singularity Is Near, which I know you've read as well.

GLENN: Yep.

WILLIAM: And the other one was Charles Stross's Accelerando, which is a fictional book about the singularity.

And what I really realized at that point in time was that we had the biggest set of changes that were ever going to face humanity. And they were coming. And they were in the very near future. Right? They're certainly coming in my lifetime. They're probably coming within the next ten years. And there's very little out there about that.

And as you said, most of the stories that are in media today are about these terminator-style stories. AI rises up. They take control of the machines. And we fight them in the battle. Which, of course, makes for a great movie. I would love to see the Terminator many times over, but what happens when it's not like that? What happens when it's sort of the quiet kind of AI story. And that's really what I wanted to explore. What happens when there's this new emergence of the first AI that's out there, and people realize they're being manipulated by some entity? And what do they do about it? How do they react?

GLENN: So I find this -- first of all, you lay it out so well. And the first book starts with the emergence of AI. And then moves -- I think the next book is, what? Ten years later, five years later --

WILLIAM: They're all ten years apart. Yeah. Basically explore different points of technology in the future.

GLENN: Right. So the last one is in the 2040s or in the 2050s. And it's a very different thing then than it starts out as.

WILLIAM: Yeah.

GLENN: And the thing I wanted to talk to you about is, first of all, can you just define -- because most people don't know the difference between AI, AGI, and ASI, which is really important to understand.

WILLIAM: Sure. So AI is out there today. It's any time programmers write a piece of software where, instead of having a set of rules -- you know, if you see this, then do that -- the software is trained to make the decisions on its own. It's how you have self-driving cars. It's what selects the stories that you read on Facebook. It's how Google search results come about.
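
To make that distinction concrete, here is a minimal sketch in Python -- assuming scikit-learn is installed, and using a toy spam-filter example whose messages and labels are invented for illustration, not anything from the broadcast -- of the difference between a hand-written rule and software that is trained to make the decision on its own:

```python
# Minimal sketch: hand-written rule vs. software trained on examples.
# The tiny "spam" dataset is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Rule-based: the programmer spells out "if you see this, then do that."
def rule_based_is_spam(message: str) -> bool:
    return "free money" in message.lower() or "winner" in message.lower()

# Trained: the programmer supplies labeled examples, and the model
# learns its own decision boundary from them.
train_messages = [
    "You are a winner, claim your free money now",
    "Free money is waiting, click this link",
    "Lunch at noon tomorrow?",
    "Here are the meeting notes from today",
]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(train_messages), train_labels)

test = "Claim your prize money today"
print("rule says spam:", rule_based_is_spam(test))   # the hand-written rule misses it
print("model says spam:", bool(model.predict(vectorizer.transform([test]))[0]))
```

The same trained-rather-than-coded idea, scaled up enormously, is what sits behind the self-driving cars, news feeds, and search results William mentions.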

And AGI is the idea that artificial intelligence will become more general, right? All of the things that I mentioned are very specific problems to be solved. How to drive a car is a very specific problem.

GLENN: So a good -- a good example of AI would be Deep Blue, the chess-playing IBM computer.

It has no general intelligence. It does that.

WILLIAM: Exactly, right. And we have IBM's Watson, which is really good at making diagnoses about cancer. But you can't have a conversation with it about how you're feeling.

GLENN: Right.

WILLIAM: But AGI would. AGI would appear to be like a human being, conceivably. In that, it could talk and reason about a wide variety of topics, make decisions. Generally, use its intelligence to solve problems that it hasn't even seen before.

GLENN: Now, AGI can pass the Turing test?

WILLIAM: Yeah, so the Turing test is this idea that you've got a person in one room, chatting with someone in another room, and they have to decide: is that a human being, or is it a computer?

And the computer passes the Turing test if the person can't distinguish between the computer and a human.

GLENN: How close are we to that?

WILLIAM: Well, I think we probably all have been fooled at least a couple of times when we've either gotten a phone call or made a phone call and we think that we're talking to a human being on the other end. Right? But it actually turns out that we're talking to a machine that routes our phone call somewhere.

So, you know, we're there for a couple of sentences. But we're still pretty far away from having any meaningful conversation.

GLENN: And AGI is when the computer has the computing power of a human brain?

WILLIAM: Yeah.

GLENN: Okay. Now, that's not necessarily a scary thing. But it's what happens when you go from AGI to ASI, artificial super intelligence. And that can happen within a matter of hours. Correct?

WILLIAM: It can. There's a couple of different philosophies on that. But if you can imagine that -- think about the computer that you have today, versus the computer that you had ten years ago. Right?

It's vastly more powerful. Vastly more powerful than the one you had 20 years ago. So even if there aren't these super rapid accelerations in -- in intelligence, even if you just today had a computer that was the intelligence of a human being, you would imagine that ten years from now, it's going to be able to think about vastly more stuff. Much faster. Right?

So even just taking advantage of increases in computing power, we would get a much smarter machine. But the really dangerous -- or not necessarily dangerous -- the really rapid change comes when the AI can start making changes to itself.

So today, programmers create AI. But in the future, AI can create AI. And the smarter AI gets, then in theory, the smarter the AI it can build. And that's where you can get this thing that kind of spirals out of control.
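
A toy model makes it easy to see why that spiral runs away. The sketch below is plain Python with invented growth rates -- it is not a forecast from Hertling or anyone else -- comparing a machine that only gets faster because hardware improves with one whose improvement rate grows with its own capability:

```python
# Toy comparison (growth rates are invented, not a prediction):
# one AI only benefits from steadily improving hardware; the other
# also improves itself, so its growth rate rises with its capability.

hardware_only = 1.0      # capability index, riding on faster chips alone
self_improving = 1.0     # capability index, also redesigning itself

for year in range(1, 11):
    hardware_only *= 1.4                              # steady yearly gain
    self_improving *= 1.4 + 0.1 * self_improving      # gain grows with capability
    print(f"year {year:2d}: hardware-only {hardware_only:8.1f}x   "
          f"self-improving {self_improving:14.1f}x")
```

Run it and the hardware-only line reaches roughly 29x after ten years, while the self-improving line blows past it by many orders of magnitude -- the "spirals out of control" dynamic, compressed into a dozen lines.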

GLENN: So, to get a handle on how fast this can all change: if you have an Apple iPad 2, that was one of the top five supercomputers in 1998. Okay?

That was a top five supercomputer.

WILLIAM: Yeah.

GLENN: That's how fast technology is growing on itself.

All right. So, William, I want you to kind of outline what -- we're going to take a break, and I want you to come back and kind of outline why all of this stuff matters. What -- what is in the near future, that we're going to be wrestling with? And why people should care. When we come back.

GLENN: As you know, if you're a long-time listener of the program, I'm very fascinated with the future and what is coming. The future of tech and artificial intelligence.

William Hertling is an author and a futurist. He is the author of what's called The Singularity Series. It's a series of four novels that breaks it all down and tells you what's coming, in an entertaining fashion. I highly recommend The Singularity Series. If you are interested in any of this, you need to start reading it. You will really enjoy it.

STU: William, I know Glenn is a big fan of your work and has been reading a lot about technology. I think a lot of people who are living their daily lives aren't as involved in this. I think a third or a half of the audience, when they hear "AI," don't even connect it to artificial intelligence until you say it.

I know as a long-term NBA fan, I think Allen Iverson, honestly when I hear AI. Can you make the case, with everything going on in the world, why should people put this at the top of their priority list?

WILLIAM: Well, it's the scale of the change that's coming.

And probably the nearest thing that we're really going to see is over the next five years, we're going to see a lot more self-driving cars and a lot more automation in the workplace. So I think transportation jobs account for something like 5 percent of all jobs in the United States.

And whether you're talking about driving a car, a taxi, driving a delivery truck, all of those things are potentially going to be automated. Right? This is one of the first really big problems that AI is tackling. And AI is good at it. So AI can drive a car. And it can do a better job. It doesn't get tired. It doesn't just go out and drink before it drives, and it doesn't make mistakes.

Well, that's not quite true. They're going to make mistakes, but they're going to make fewer mistakes than your typical human operator. So, you know, business likes to save money. And it likes to do things efficiently. And self-driving cars are going to be more cost-effective. They're going to be more efficient. So what happens to those 5 percent of people today who have transportation jobs? Right?

This is probably going to be the biggest thing that affects us.

GLENN: I think, William, you know, that Silicon Valley had better start telling the story in a better fashion. Because as these things hit, we all know politicians on both sides will just -- they'll blame somebody. They're telling everybody, "I'm going to bring the jobs back."

The jobs aren't coming back. In fact, many, many more are going to be lost. Not to China, but to robotics and AI. And when that happens, you know, I can see, you know, politicians turning and saying, "It's these robot makers. It's these AI people."

WILLIAM: Yeah. Naturally. And yet, unfortunately, the AI genie is out of the bottle, right? Because we're investing in it. China is investing in it. Tech companies around the world are investing in it.

If we stop investing in it, even if we said, hey, we don't want AI, we don't like it, all that's going to do is put us at a disadvantage compared to the rest of the world. So it's not like we can simply opt out. It's not really -- we don't have that option. It's moving forward. So we need to participate in it. And we need to shape where it's going. And I think this is the reason why it's so important to me that more people understand what is AI and why it matters. Because we need to be involved in a public conversation about what we want society to look like in the future.

As we go on, if even more jobs are eliminated by AI, what does that mean? What if we don't have meaningful work for people?

GLENN: I think the thing I like about your book series is it starts out really hopeful. And it shows that, you know, this technology is not going to be something that we really are likely to refuse. Because it's going to make our life incredibly stable and easy in some ways.

And I kind of would like you to talk a little about, you know, the stock market and the economy and war and everything else -- something that you talk about in your first novel -- and show us, when we come back, the good side, and then what it could turn into.

STU: So Allen Iverson is taking our transportation jobs?

GLENN: Yes, yes.

STU: Okay. That's what I got from that.

GLENN: We're talking to William Hertling. He is an author and futurist. The author of many books. His latest is Kill Process. I'm talking to him about The Singularity Series. And the first one in there is Avogadro Corp. And it starts out around this time. And it starts out with a tech center in Portland. And a guy is working on a program that will help everybody with their email. And all of a sudden, he makes a couple of changes. And unbeknownst to him, it grows into something that is thinking and acting and changing on its own.

And, William, I would like you to take us through this. Because the first book starts out really kind of positive. Where you're looking at this -- and there are some spooky consequences -- but you're looking at it going, you know, I could see us -- I'd kind of like that. And by the end, in the fourth book, we've all been digitized. And we're in a missile, leaving the solar system because Earth is lost.

A, do you think this is -- is this your prediction? Or do you just think this is a really kind of good story?

WILLIAM: Well, you know, I think a lot of it has the potential to be real. And I think one of the things you probably know from your reading is that I'm fairly balanced. What I see are the risks and the benefits. I think there's both.

GLENN: Yeah.

WILLIAM: I get very upset that there are so many people who are very dogmatic about artificial intelligence and the future. And they either say, hey, it's all benefits and there are no risks. Or they only talk about the risks without the benefits.

And, you know, there's a mix of both. And it's like any other technology. Right?

GLENN: We don't know.

WILLIAM: All of our smartphones -- we all find our smartphones to be indispensable. And at the same point in time, they affect us. Right? And they have negative effects. And society is different today than it was years ago because of our smartphones.

GLENN: But this is different though than anything else that we've seen like a smartphone. Because this is -- this is like, you know, an alien intelligence.

We don't have any way to predict what it's going to be like, or what it's going to do. Because it will be thinking. And it most likely will not be thinking like a human.

But can we start at the beginning, where, just give me some of the benefits that will be coming in the next, let's say, ten years that people will have a hard time saying no to.

WILLIAM: Sure. I mean, first of all, we already talked about self-driving cars, right? I think we all like to get into our car and be able to do whatever we want to do and not have to think about driving. That's going to free us up from a mundane task.

We're going to see a lot more automation in the workplace, which means that the cost of goods and services will go down. So we'll be able to get more from less. So that will seem like an economic boom, to those of us that can afford it. Right? We will be able to enjoy more things. We'll have better experiences when we interact with AI. So today, if you have to go to the doctor, you'll wait to get a doctor's appointment. You'll go in. You'll have this rushed experience, more than likely, if you're here in the US. You'll get five minutes of their time, and you're hoping they will make the right diagnosis in the five minutes they're with you. I think that's going to be one of the really big changes over the next five to ten years: we'll see a lot more AI-driven diagnosis.

So when you're having medical issues, you can go in, and you can talk to an AI that will be more or less indistinguishable from talking to the nurse when you walk into the doctor's office.

And by the time the doctor sees you, there will already be a diagnosis made by the AI. And it likely will be more accurate than what the doctor would have done. And all they'll do is sign off on it.

GLENN: Yeah, I had a hard time -- until I started reading about Watson, I had a hard time believing that people would accept something from a machine. But they are so far ahead of doctors, if they're fed enough information.

They're so much further ahead on, you know, predicting cancer and diagnosing cancer than people are. I think it's going to be a quick change. You're going to want to have the AI diagnose you.

WILLIAM: Right. Because that's going to be the best. Right? When we go to the doctor, we want the best. We don't want the second best.

GLENN: Right.

WILLIAM: So we're going to see a lot of that. And then, you know, ten to 15 years out -- you know, it's funny, I had a conversation with my daughter one day, and she asked, hey, Dad, when am I going to get to drive a car?

And I thought about her age, and I thought about that. And I was like, well, I'm not sure you're ever going to get to drive a car. Because where you are and when self-driving cars are coming, you may never drive a car.

And so you'll just get one, and it will take you where you want to go.

So there are going to be both subtle and yet dramatic changes in society when you think about, hey, we're going to have a generation of people who will never have learned how to drive a car. Right? So their time will be free to do other things. They'll be different than we are.

GLENN: Do you see the -- you know, in your first book, you talk about, you know, AI changing, you know, the emails that are being sent and doing things on its own. And really manipulating people.

We are already at the point where we accept the manipulation of what we see in our Facebook feed. But that's not a machine trying to do anything but give us what we want.

WILLIAM: Right.

GLENN: Do you see us very far away from, you know, hedge fund computers that can -- that can really manipulate the markets in a positive way, or computers that can begin to manipulate for peace, as you put it in your first book?

WILLIAM: It's a good question. We're definitely going to see that. At a minimum, right? We can imagine that if you have an authoritarian government, they're going to distribute information to pacify people.

And that's not a good thing often. In some ways, it is. You know, if you have armed unrest, people will die. So there's a balance there. I think what we'll see is we'll just see lots of different people use technology in lots of different ways.

So maybe we don't have, you know, a hedge fund manipulating the markets in a positive way. Maybe it starts with a bunch of hackers in another country, manipulating the markets to make money. Right?

So I think we are going to see that distribution, that manipulation of information. And it's hard.

It's out there now, right? There is content -- a lot of the content that you read on the web, whether it's a review of a restaurant or a business, a lot of it is generated by AI. And it's hard to tell what's AI versus a person writing a genuine review.

GLENN: Talking to William Hertling. He's an author and futurist. Author of a great series of novels called The Singularity Series. William, the -- the idea that intelligence -- not AI, not narrow AI, but, you know, superintelligence or artificial general intelligence -- just kind of comes out of nowhere, as it does in your first novel, where it wasn't the intent of the programmer, is interesting to me.

I sat with one of the bigger names from Silicon Valley just last week. And we were talking about this. And he said, whoever controls AI, whoever gets this first, is going to control the world.

He was talking to me privately about a need for almost a Manhattan Project for this. Do you see this as something that is just going to be sprung on us, or will it be created, you know, in a lab? Intentionally?

WILLIAM: I think the odds are probably strongly biased towards it coming out of a lab. Both because they have the deeper knowledge and expertise, and because they have the raw computing power, right? So the folks at Google will have millions of times the computing power of somebody outside a company like Google. So that alone -- it's like they already have the computers the rest of us will have in 15 to 20 years, right? That kind of computing power. And that makes AI a lot easier of a problem to solve.

So I think it's most likely to come out of a lab.

GLENN: If you're looking at, for instance, the lawsuit that was just filed against Google about the way they treat people with different opinions, et cetera, et cetera. My first thought is, good God, what are those people putting into the programming?

I mean, that -- that doesn't -- that doesn't work out well for people. Is there enough -- are there enough people that are concerned about what this can do and what this can be, that we have the safeguards with people?

WILLIAM: You know, I -- I really think we don't. I mean, think about the transportation system we have today and the robust set of safety mechanisms we have around it. Right?

So we want to drive from one place to another. We have a system of streets. We have laws that govern how you drive on those streets. We have traffic lights. Cars have antilock brakes. They have traction control. All these things are designed to prevent an accident.

If you get into an accident, we have all these harm reduction things. Right? We have seatbelts and airbags. After the fact, we have a whole system of litigation, right? We have ambulances and paramedics and hospitals to take care of the damage that results. In the future, we'll need that same sort of very robust system for AI. And we don't have anything like that today.

GLENN: And nobody is really thinking about it. Which is --

WILLIAM: Yeah, nobody is thinking about it comprehensively. And one thing you can imagine is, well, we'll wait until we have a problem, and then we'll put those safety mechanisms in place.

Well, the problem, of course, is that AI works at the speed of computers, not at the speed of people. And there's this scene in one of my books -- I'm sure you remember reading it -- where there's a character who witnesses a battle between two different AI factions.

GLENN: Yes.

WILLIAM: And the whole battle takes place, a lot of things happen between the two different AI factions, all in the time it takes the human character's adrenaline to get pumping.

And by the time he is primed and ready to fight, the battle is over. And they're into negotiations and how to resolve it, right?

GLENN: It's remarkable in reading that. That's a great understanding of -- of how fast this will -- things will move.

It's one of the best action or war scenes I've ever read in a novel. Really, really good. You know, page after page after page of stuff happening. And you get to the end, and you realize, "Oh, my gosh, the human has hardly even moved. He hasn't even had a chance to think about the first step that happened." And it's already over.

WILLIAM: Exactly. So this is why we need to be thinking about, how are we going to control AI? How are we going to safeguard ahead of time? We have to have these things in place, long before we actually have AI.

STU: Isn't it true though, William, that eventually some bad actor is going to be able to develop this and not put those safeguards in? And we're not going to have a choice. Eventually, the downside of this is going to affect everybody.

WILLIAM: You know, it's very true. And that's part of the reason why, as I say, we can't opt out of AI. We can't not develop it. Because then we're just at a disadvantage to someone who does. And it gets even scarier as you move out. So one of the things I talk about in my third book, which is set around 2035, is neural implants -- basically a computer implanted in your brain, the purpose of which is mostly to get information in and out. Right? Instead of holding a smartphone in our hands and trying to read information on the screen, we can get it directly in our head. It makes interaction much smoother, easier. But it can also help tailor your brain chemistry. Right? So you can imagine, if you're someone who has depression or anxiety or a severe mental disability, that a neural implant could correct those things. You basically would be able to flip a switch and turn off depression or turn off anxiety.

STU: Wow.

GLENN: So, William, I'm unfortunately out of time. Could I ask you to come back tomorrow and start there? Because that's really the third book. Start with the neural implants and where it kind of ends up with technology. Because it is remarkable. And in reading the real science behind it, it's real. It's real.

WILLIAM: It sure is. It's coming.

GLENN: Yeah. Could you come back maybe tomorrow?

WILLIAM: Sure. I would be happy to.

GLENN: Okay. Thank you so much, William. William Hertling, author and futurist. He is the author of The Singularity Series.

STU: You should get one of those things, Glenn. That thing could literally alter your brain. William Hertling is the author of all these books. There are four of them in this series, The Singularity Series. Plus, Kill Process just came out. That's WilliamHertling.com.

Let me ask you this, Glenn: is this the right way to think about it? This comes in from Twitter, @worldofStu. To understand the difference between AI, artificial intelligence, and AGI, artificial general intelligence.

So if there's a self-driving car, and it's AI, you say, "Take me to the bar," and it says, "Calculating route. Beginning travel."

Okay? If you say to AGI, "Take me to the bar," it responds, "Your wife says you drink too much, and my sensors say you've put on a few pounds. Routing to the gym."

GLENN: I have a feeling, you're exactly right.

STU: That's terrible.

POLL: Is GLOBAL WARMING responsible for the fires in L.A.?


As wildfires sweep across California and threaten to swallow up entire neighborhoods in Los Angeles, one question is on everyone's mind: What went wrong?

So far over 45 square miles of the city have been scorched, while the intense smoke is choking out the rest of L.A. Thousands of structures, including many family homes, have been destroyed, and many more are at risk as firefighters battle the flames. Many on the left, including Senator Bernie Sanders, have been quick to point to climate change as the cause of the devastating fires, citing the chronic lack of rain in L.A.

Others, including Glenn, have pointed out another potential cause: the severe mismanagement of the forests and water supply of Los Angeles and California in general. Unlike many other states and most other forested countries, California does not clear out the dead trees and dry vegetation that builds up on the forest floor and acts as kindling, fueling the fire as it whips through the trees.

On top of this, California has neglected its water supply for decades despite its crucial role in combating fires. The state of California has not built a new major water reservoir to store and capture water since the 1970s, leading to repeat water shortages in Southern California. To top it off, Gavin Newsom personally derailed a 2020 Trump order to divert water from areas of the state with excess water to parched Southern California. Why? To save an already functionally extinct fish. Now firefighters in L.A. are running out of water as the city is engulfed in flames. At least the fish are okay...

But what do you think? Are the wildfires a product of years of mismanagement? Or a symptom of a changing climate? Let us know in the poll below:

  • Is climate change responsible for the fires in L.A.?
  • Are the L.A. fires a product of years of mismanagement?
  • Do you think controlled burns are an effective way to prevent wildfires?

AI Singularity? ChatGPT rates Glenn's 2025 predictions


On this week's Glenn TV special, Glenn divulged his top predictions for 2025. While some of his predictions spelled hope for current geopolitical issues like the war in Ukraine, others took a more harrowing turn, from AI reaching singularity to a major banking crisis and a "Summer of Rage 2.0."

But what does ChatGPT think? Glenn's head researcher asked ChatGPT about the likelihood of each of Glenn's predictions, and the results spell trouble for 2025.

Which of Glenn's predictions did ChatGPT say will come true? Find out below:

1. The internet will be destroyed and reborn through AI.

Summary: AI will restructure the internet, centralize control with tech giants, and raise concerns over censorship.

ChatGPT Probability: 90%

Further Explanation:

Glenn began with a harrowing fact: the internet, as we know it, is slowly dying. We don’t truly have access to "the internet" in its entirety, but rather, we have a small sliver curated by those who control the indexes and brokers of the web. The slow decline of the internet is evident in the increasing irrelevance of many existing pages and documents, with countless dead links and broken websites. This issue demonstrates the growing problem of content disappearing, changing, or becoming irrelevant without updates to reflect these changes.

To address this growing problem, experts suggest that a massive "reboot" of the internet is necessary. Rather than continuing to patch up these issues each year, they argue that a thorough cleaning of the digital space is required, which is where AI comes into play. Google has already proposed using AI to scour the web and determine which content is still relevant, storing only active links. Glenn worries that we will embrace AI out of convenience to fix the problems facing the internet but ignore the widening door to the potential dangers that such convenience brings.

2. AI and ChatGPT innovations will be integrated into everyday life.

Summary: AI will dominate search engines, become personal assistants, and spark regulatory battles over ethics.

ChatGPT Probability: 70%

Further Explanation:

Glenn predicted that AI systems like ChatGPT will increasingly serve as gatekeepers, determining what information is accessible and valid. While this centralization will enhance user convenience, it raises serious ethical concerns about bias, manipulation, and censorship. These innovations mark the beginning of an expansion in the concept of "being human," with AI digital assistants becoming integrated into everyday life in ways that could significantly change how we interact with technology. However, these advancements will prompt regulatory battles, as governments push for stricter AI oversight, especially in light of concerns over privacy and "misinformation."

3. AI will attain singularity.

Summary: AI progress will remain uneven, with no imminent singularity expected despite rapid advancements.

ChatGPT Probability: 20%

Further Explanation:

The prediction that AI will reach "the singularity" in 2025 means that it will surpass human intelligence, leading to rapid, exponential growth. Glenn pointed to AI’s rapid progress, such as ChatGPT’s growth from 0% to 5% in four years, and an expected jump to 87% by the end of the year. However, the debate about benchmarks for achieving Artificial General Intelligence (AGI) remains muddled, as there is no clear definition of what constitutes "the singularity." Glenn believes one key indicator will be the unemployment rate in key industries, which could become a major indicator of AGI's impact by 2026.

While AI is advancing quickly in specific areas, like natural language processing, vision, and robotics, ChatGPT cautions that achieving AGI, and thereby the singularity, is still far off and that continuous, unbroken exponential growth in AI innovation is also unlikely. Therefore, ChatGPT concludes that while significant advancements in AI are expected, the idea of an unimpeded, straight-line trajectory toward the singularity within the next year is unrealistic.

4. There will be a ceasefire between Russia and Ukraine.

Summary: A temporary ceasefire will freeze borders but will leave future conflict inevitable.

ChatGPT Probability: 80%

Further Explanation:

Both Ukraine and Russia are exhausted, depleting their manpower and munitions. With Donald Trump’s return to the political scene, Glenn predicts that his involvement could lead to negotiations and a temporary ceasefire. While the borders may remain as they are for the time being, the unresolved tensions would likely leave the door open for renewed conflict in the future. This temporary resolution would provide both sides with the breathing room they need, but it could set the stage for continued instability down the line.

5. There will be a second 'Summer of Rage.'

Summary: Anti-Trump protests will escalate into violent riots, targeting infrastructure and triggering martial law in areas.

ChatGPT Probability: 75%

Further Explanation:

Anticipating a summer of intense protests, Glenn predicts that groups like Antifa, BLM, and Occupy Wall Street, likely collaborating with formal unions and socialist organizations, will escalate their opposition to Trump’s policies. As protests grow, Trump will be vilified, and the right will be labeled fascist, with predictable media images depicting the separation of families and the chaos unfolding in major cities.

This prediction envisions a scenario similar to the Summer of Rage in the 1960s, with violent riots and widespread destruction in over 100 major cities. College campuses will be sites of massive protests, police stations may be directly targeted, and critical agencies like ICE, Border Patrol, and Homeland Security headquarters could be assaulted. As tensions escalate, National Guard troops may be deployed, and parts of Washington, D.C., could experience a "martial law" atmosphere. While the prediction sees the protests turning violent and disruptive, the real question is how suburban "soccer moms" will react when these riots hit closer to home.

6. The largest anti-Western 'caliphate' will emerge.

Summary: Middle Eastern factions may consolidate to control energy routes, destabilizing global markets.

ChatGPT Probability: 60%

Further Explanation:

Following Biden's controversial tenure and failures in handling the Middle East, a new anti-Western Caliphate will emerge, as various terrorist groups like Al-Qaeda, ISIS, the Houthis, and the Taliban unite under several leaders rather than one. These groups will receive support from Russia, North Korea, and China, creating a formidable alliance. Their objective will be to control approximately 30% of the world’s energy supply by seizing key oil routes through the Persian Gulf, the Gulf of Oman, and the Red Sea. This would give them dominion over critical global trade routes, including the Suez Canal. As alliances among these groups form, the longstanding Sunni-Shia conflict will be momentarily set aside in favor of unity against common enemies, with the U.S. and its allies as primary targets.

Europe will be too fractured to intervene, leaving the U.S. and Israel to confront this rising threat alone. The involvement of Russia and China will further complicate the situation, as both nations seek to undermine U.S. influence in Ukraine and Taiwan while securing access to energy markets in the Middle East. This prediction suggests that Biden’s foreign policy decisions will leave a lasting legacy of instability in the region. The necessity for the U.S. to increase domestic energy production, through policies like increased drilling, will become a national security issue in the face of this emerging threat.

7. China will invade a neighboring country.

Summary: China could target weaker nations under the guise of peacekeeping to assert dominance.

ChatGPT Probability: 55%

Further Explanation:

After years of military posturing, China’s aggressive rhetoric and actions have begun to lose their credibility, with the world perceiving its military buildup as a paper tiger. As the U.S. faces increasing isolation, and global conflicts in Europe and the Middle East divert attention, China will seize the opportunity to strike. However, it will target a country that is unlikely to mount a significant defense or provoke a strong reaction. This eliminates major regional powers like Taiwan, Japan, and the Philippines from the list of potential targets.

Countries such as Kyrgyzstan, Nepal, Laos, and Vietnam may become focal points for Chinese aggression. Vietnam and Bangladesh are particularly compelling targets, as they are emerging alternatives for U.S. and Western companies shifting manufacturing away from China. A Chinese invasion of these nations could impact U.S. interests by compelling tactical responses, such as deploying ships for air superiority and missile defense.

8. The U.S. stock market will collapse and a banking crisis will ensue.

Summary: Rising rates and layoffs may trigger a stock market downturn and small business disruptions.

ChatGPT Probability: 50%

Further Explanation:

In a bid to boost the economy for the 2024 election cycle and secure a Democratic victory, Federal Reserve Chairman Jerome Powell, along with key figures from major banks, kept interest rates and policies favorable to financial institutions. This led to a temporary surge in stock prices just before the election. However, the anticipated economic boost failed to materialize due to broader political dynamics. Now, Powell is advocating for tighter policies, raising interest rates to cool an economy that he claims has become overheated, setting the stage for a stock market crash and a federal government funding crisis.

Glenn predicted that this manufactured crisis will have far-reaching consequences, starting with major disruptions on Wall Street and spilling into Main Street, resulting in layoffs, bankruptcies, and widespread economic instability. The Fed's role in shaping these events will dominate political discussions, and the economic fallout will force President Trump to take ownership of the crisis. Small businesses are advised to fortify their supply chains and secure favorable long-term contracts to mitigate the risks of rising prices and potential disruptions as the financial situation worsens in 2025.

9. North Korea will provoke South Korea.

Summary: Small-scale attacks by North Korea will distract from larger conflicts involving China and Russia.

ChatGPT Probability: 40%

Further Explanation:

In a potential move orchestrated by China to divert global attention from its own ambitions, North Korea may provoke South Korea with a calculated attack. This could involve a limited strike, such as firing ballistic missiles at a South Korean naval vessel, claiming it had intruded into North Korean waters, or attacking a military base along the border under the pretext of border violations or espionage. The primary goal of North Korea’s actions would be to test the waters and assess the West's reactions, particularly the U.S.'s willingness to intervene.

10. Those connected to Sean 'Diddy' Combs and Jeffrey Epstein will be revealed.

Summary: Investigations into scandals face resistance from powerful players, making progress unlikely.

ChatGPT Probability: 15%

Further Explanation:

Glenn predicts that the lists of individuals connected to the late financier Jeffrey Epstein and hip-hop mogul Diddy will be released. The release of these lists would likely trigger a significant public outcry, as it could implicate high-profile figures in serious scandals. However, the investigation and disclosure of such lists would require substantial evidence and resources and may face significant resistance from powerful industry players.

While media pressure and public opinion could push for transparency, the political and legal complexities surrounding such a release might hinder progress in the investigations. Given the challenges involved, ChatGPT says this prediction holds a relatively low probability, but it remains a topic of speculation and intrigue in the ongoing fallout from the Epstein case.

11. Trump will appoint 2 Supreme Court justices.

Summary: Retirements could allow Trump to reshape the court further right, but it's unlikely within the year.

ChatGPT Probability: 25%

Further Explanation:

Glenn predicts that the aging U.S. Supreme Court may see retirements or unexpected vacancies, potentially allowing President Donald Trump to appoint two more justices. If such vacancies occur, it would shift the balance of the court further to the right. However, ChatGPT says this prediction is less likely due to the unpredictable nature of retirements and the political challenges associated with confirming Supreme Court appointments, particularly if the Senate is divided or controlled by a party opposing Trump.

12. The U.S. will establish a special relationship with Greenland.

Summary: Strengthened ties with Greenland are possible but forcing a special relationship is improbable.

ChatGPT Probability: 35%

Further Explanation:

Donald Trump has previously shown interest in Greenland, particularly in 2019 when he proposed the idea of purchasing the island, sparking significant controversy. Greenland, an autonomous territory of Denmark, holds strategic geopolitical and resource-based importance, making it a key area of interest for the U.S., especially in light of its proximity to Russia. However, ChatGPT says attempting to force a "special relationship" with Greenland would be difficult, as both Greenland's government and Denmark would likely resist such overtures, considering the complexities of sovereignty and international relations. Despite the strategic importance, this prediction holds a moderate probability due to political and diplomatic constraints.

13. The U.S. will take control of the Panama Canal. 

Summary: Re-negotiating Panama Canal control is highly unlikely due to political and diplomatic realities.

ChatGPT Probability: 10%

Further Explanation:

The Panama Canal, which was transferred to Panama's control in 1999 following the Panama Canal Treaty, has remained under Panama's sovereignty ever since. Glenn, however, says he believes Trump's efforts to renegotiate control over the canal will succeed. ChatGPT counters that, given the historical context and the sensitivity of national sovereignty, the likelihood of Trump successfully regaining control of the canal is quite low.

To learn more, you can watch the entire Glenn TV special here:

The BIZARRE connection between the Vegas Cybertruck bomber and mystery drones


Unfortunately, in recent times Americans have become far too accustomed to tragic mass shootings and attacks.

But the Cybertruck bombing that occurred outside of the Trump Hotel in Las Vegas earlier this month is different. Not only did the method and outcome of the attack differ from the begrudging norm, but the manifesto left behind tells a captivating and horrifying-if-true story that potentially sheds light on the most frustrating mystery of 2024. On his radio show, Glenn highlighted some of the strange and harrowing claims made by the bomber, and he was not convinced that they were just the ramblings of a madman.

What happened during the bombing? What did the bomber hope to achieve? And what does his manifesto potentially reveal about our government and the secrets they keep from us?

The bombing


On January 1st, 2025, a rented Tesla Cybertruck full of gas tanks, fireworks, and other explosives pulled up to the front door of the Trump Hotel in Las Vegas. Just before 8:40 a.m., the truck exploded before bursting into flame, injuring seven nearby people, all of whom are in stable condition. Aside from the minor injuries and minimal damage to the hotel, the explosion was absorbed and redirected by the truck, with the only death being that of the bomber, who allegedly shot himself before triggering the explosion.

The bomber has been identified as a former Army Special Forces Master Sergeant with a promising military career. He had given no sign of his intentions to his family and friends before the attack, and according to the Pentagon, he showed no red flags. While there may not have been any obvious signs, Glenn speculated that the bomber may have been suffering from PTSD and/or a traumatic brain injury, which is backed by the Army's admission that the bomber had received counseling through its Preservation of the Force and Family program.

The manifesto


Two different documents that were allegedly authored by the bomber have been discovered. The first was found on the bomber's phone and is composed of a list of grievances against the United States, a call to Americans to rally behind Donald Trump and Elon Musk, and an outline for a militia takeover of D.C.

The bomber also asserted that his attack was not an act of terrorism, but a "wake-up call" designed to attract attention, which he explained was the purpose behind the fireworks present in the explosion. He also claimed the attack was designed to "cleanse [his] mind" of the "brothers" he lost and the lives he took during his time in the Army, which further corroborates the theory that he was suffering from PTSD.

The second document was emailed to retired Army intelligence officer Sam Shoemate, who revealed its contents on The Shawn Ryan Show podcast. The bomber claimed the government was hunting him due to his knowledge of top-secret information relating to classified technologies. The bomber also alleged knowledge of war crimes committed in Afghanistan by the United States that resulted in the death of thousands of civilians.

The bomber's email gave several names and other information that he suggested could be used to verify his claims, but as of now, it is unclear how much, if any, of his story has been verified.

The connection


Where do the mystery drones that have been plaguing the skies above New Jersey enter the story?

The bomber claimed the drones are operated by the Chinese and are a part of the same program that launched the spy balloon in 2023. He claimed these drones use a "gravitic" propulsion system, and are the most serious threat to national security due to their ability to transport an "unlimited payload" with unparalleled speed and stealth. He went on to claim that the drones originated from a Chinese submarine parked off the East Coast.

While these claims appear far-fetched, Glenn pointed out that if he is right about this, we are in grave danger. China or other foreign powers could have weapons of mass destruction parked over every major city, every military installation, or even the White House, and we would be powerless to stop them. We know our government lies to us regularly. Would anybody be surprised if they were hiding world-altering tech from us? Trump's reelection has given us another opportunity to demand answers and learn the truth.

Glenn: The Left's January 6th narrative doesn't hold four years later


Four years ago yesterday, the events of January 6th, 2021, unfolded, a day the Left has repeatedly called the darkest in our country's history. Yet, as time passes, the narrative surrounding that day has started to unravel, revealing uncomfortable truths that demand both explanation and accountability.

For millions of Americans, January 6th marked a dividing line, a day that deepened the fractures within our society. Emotions ran high, and trust in the institutions that were sworn to protect us was shattered, a portion of which will only be restored by dramatic action. This trust continues to erode as new details emerge, revealing gaping holes in the Left's narrative about January 6th.

The lies that surrounded the events of that day were not mere "misinformation"—they were bombshells that forced us to confront a much darker reality about our government’s actions. And these revelations must become the message we take from January 6th: the true nature of our current government, its accountability, and the lengths to which it will go to protect its version of events—even when it is a lie.

Let’s begin with the pipe bombs. On January 6th, Americans were told that two pipe bombs had been found near the RNC and DNC headquarters and that they could have caused catastrophic harm. The pipe bomb was placed at the DNC headquarters the night before January 6th. Interestingly, the security sweep of the building the next morning did not find it. Then Kamala Harris was transported there at the height of January 6th. Conveniently, all the records detailing the event were “accidentally” deleted by the Secret Service.

Surveillance footage was ignored, cameras were turned just hours before the bombs were planted, and we were told that critical cell phone data was somehow “corrupted.” But it wasn't. The only thing that was corrupted was our own government and FBI. According to the cell phone companies, the FBI simply never asked for the information. Leads were never pursued. Four years later, the identity of the bomber remains a mystery.

Why would federal agencies neglect this critical investigation into an event that allegedly was going to destroy the republic or kill the future vice president? Was the lack of action intentional, perhaps a convenient distraction to justify escalating security measures and cast a broader shadow over what they hoped would unfold that day? These are not wild conspiracy theories; these are questions every citizen must ask. Because now we know that our government lied to us.

We must also address the FBI’s role on that fateful day. We’ve learned that 26 FBI informants were present on the ground during the events at the Capitol. Let that sink in. What were they doing there? Were they infiltrating the crowd? Were they acting as provocateurs? The presence of these informants raises serious questions about how much of the chaos that day was organic and how much of it was orchestrated. If the FBI had informants on the ground, why wasn’t the situation under control before it escalated?

Four years ago, I called for the protesters to stop. I said that this isn't who we are, and these people should go to jail. I still stand by the belief that if you hurt anyone, broke any windows, or damaged property, you should be held accountable and serve a just punishment. But today, I’m deeply concerned that many of those who were not violent or engaged in damage are still languishing in jail, some facing sentences of up to 20 years. What’s more disturbing is the growing evidence that the chaos that unfolded was not an accident—it was part of a broader agenda.

Amid the chaos, the finger was pointed squarely at one man: Donald Trump. But new information paints a vastly different picture. Just days before January 6th, President Trump authorized the deployment of the National Guard, citing concerns over potential unrest. Yet, his request was ignored—rejected outright by House Speaker Nancy Pelosi and the Capitol Police. Why? Who in the chain of command made the decision to disregard the president’s directive? Had the National Guard been allowed to deploy, it’s possible much of the mayhem that followed could have been prevented. But instead, that opportunity was squandered, and the media narrative was shaped to fit a political agenda—one that painted Trump as the instigator, when in fact, he sought to prevent the violence that ultimately occurred.

And then, there’s the tragic death of Ashli Babbitt. A decorated Air Force veteran, Babbitt was shot and killed by a Capitol Police officer while attempting to climb through a broken window. Her death was quickly ruled justified, and the officer involved was shielded from scrutiny. But now, we learn that the officer violated multiple procedural rules and could face criminal charges. Why was her death dismissed so quickly by both the media and the government? In an era where police actions are scrutinized heavily, why was this officer not held accountable?

As we look back, it's clear that January 6th was chiefly about the perversion of justice by the very institutions that were supposed to protect us. Big-tech corporations and global entities like telecoms and airlines offered up location data on innocent Americans who were simply in Washington, D.C., on January 6th. No warrants. No due process. They handed over personal data without question, and the FBI used it without hesitation.

What the FBI did with that data, how Americans there on that day didn't stand a chance in D.C. courts, how our politicians and federal law enforcement knew what was going on yet did nothing to prevent it, the calling off of the National Guard—what does this tell you about our country? Our government, our justice system, and our institutions were complicit in undermining the very principles they were created to uphold.

They are trying to create a system that thrives on division and chaos, a system that uses fear as a tool to control the American people. If the federal agencies can lie, manipulate, and withhold the truth about January 6th, what else are they capable of? What are they willing to do to maintain their grip on power?

Four years later, on the anniversary of January 6th, we must demand the truth—not the sanitized, politically convenient version. We deserve the full, unvarnished truth. We must hold accountable those in power who orchestrated, covered up, or ignored the events of that day. We must never allow the lies and the unanswered questions of January 6th to fade into the political ether. We must ensure that the truth is told and that those who lied to us are held accountable.