Artificial General Intelligence is coming sooner than many originally anticipated, as Elon Musk recently announced he believes his latest iteration of Grok could be the first real step in achieving AGI. Millions of Americans are not ready for how AGI could affect their jobs, and if you don't start adapting now, you could be left behind. Glenn and Stu dive into the future of AI, exploring how prompting is the new coding and why your unique perspective is critical. From capitalism to AGI and ASI, discover how AI can be a tool for innovation, not oppression, but if we're not careful, it can quickly become something we cannot control...
Transcript
Below is a rush transcript that may contain errors
GLENN: So I've been talking about capitalism and the future. And especially AI. Let's have a deeper conversation on this. Because, you know, the fear is, it's going to take our jobs. And you're going to be a useless eater, et cetera, et cetera. Because AI will have all of the answers. Correct. But how many times --

STU: And good night, everybody.
GLENN: Hang on. Hang on. That is correct if you look at it that way, but let me say this: I could have people who are wildly educated on exactly the same facts, and they will come to a different conclusion or a different way to look at them. Okay? They can agree on all of the same facts, but because they're each unique -- and AI is not AGI or ASI. It's not going to be unique, I don't think. This is my understanding of it now. And I've got to do some -- I've got to talk to some more people about this who actually know. Because coding is now what AI does. Okay?
It can develop any software. However, it still requires me to prompt it. I think prompting is the new coding.
And if you don't know what prompting is, you should learn today what prompting means.
It is an art form. It really is. I have been working with this for almost a year now, and learning how to prompt changes everything.
And now that AI remembers your conversations and your prompts, it will give a different answer for you than it will for me.
And -- and that's where the uniqueness comes from. And that comes from looking at AI as a tool, not as the answer.
So, Stu, if you put in all of the prompts that make you, you, and then I put in a prompt that makes me, me.
Donald Trump does it.
You know, Gavin Newsom does it. It's going to spit out different things.
Because you're requiring a different framework.
Do you understand what I'm saying?
STU: Yeah. You can essentially personalize it, right?
To you. It's going to understand the way you think, rather than the way just a general person would think.
GLENN: Correct. Correct.
And if you're just going there and saying, give me the answer -- well, then you're going to become a slave. But if you're going and saying, hey, this is what I think. This is what I'm looking for.
This is where I'm missing some things, et cetera, et cetera.
It will give you a customized answer that is unique to you.
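To make that concrete, here is a minimal sketch, assuming the OpenAI Python client with an API key in the environment; the profile strings and model name are illustrative, not from the broadcast. The same question, asked under two different "who I am" system prompts, comes back with two different answers.

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY to be set

client = OpenAI()

def ask(profile: str, question: str) -> str:
    # The system message carries "the prompts that make you, you";
    # the same question then yields a profile-specific answer.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": f"Answer for this user: {profile}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "How should I think about AI changing my job?"
print(ask("a radio host focused on liberty and free markets", question))
print(ask("a policy analyst focused on labor economics", question))
```

The model and the facts are the same in both calls; the framework you bring to it is what differs, which is the uniqueness Glenn is describing.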
And so prompting becomes the place where you're unique. Now, here's the problem with this. This is something I said to Ray Kurzweil back in 2011, maybe.
He was sitting in my studio. And I said, so, Ray, we get all this. You can read our minds. It knows everything about us, knows more about us than any of us know about ourselves. How could I possibly ever create something unique?
And he said, what do you mean?
And I said, well, let's say I wanted to come up with a competitor for Google.
If I'm doing research online, and Google is able to watch my every keystroke, and it has AI, it knows what I'm looking for.
It then thinks: what is he trying to put together?
And if it figures it out, it will complete it faster than me and give it to the mothership, which has the distribution and the money and everything else.
And I won't be able to do it. Because it will have already done it!
And so you become a serf. The lord of the manor takes your idea and does it, because they have control. That's what the free market stopped.
And unless we have ownership of our own thoughts and our own ideas, and we have some safety so that it cannot intrude on those things -- some sort of a patent system for unique ideas that you're working on, so that AI cannot take what you're working on and share it with the mothership, or share it with anybody else -- then it's just a tool of oppression.
Do you understand what I'm saying?
STU: Yeah. Obviously these companies will say they're not going to do that.
GLENN: You know what Ray said?
Ray said, Glenn, we would never do that.
Why not?
He said, because it's wrong. We would never do that. And I said, oh, I forgot how moral and of such high standing everybody in Silicon Valley and at Google is.
STU: And Silicon Valley and Google -- I have far more confidence in their benevolence than I do in China's.
GLENN: And Washington.
STU: And Washington.
GLENN: And Washington.
STU: Yeah. Exactly.
GLENN: The DOD.
STU: Everyone will have these things developed. And who knows what they're going to do.
I suppose eventually there will be some case where that becomes an issue, or it becomes a risk.
There will be some solutions to that. Like, you could have closed-loop systems that don't connect to the mothership.
There will be answers to all of those questions, I'm sure.
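One hedged sketch of what such a closed-loop setup could look like today: an open-weight model served on your own machine, here assuming an Ollama server running at its default localhost address with a model such as llama3 already pulled (both assumptions, not anything named in the conversation).

```python
# Sketch of a "closed-loop" query: the model runs locally, so nothing
# you type is sent to an outside server. Assumes Ollama is running at
# its default address and `ollama pull llama3` has been done; the
# model name is illustrative.
import json
import urllib.request

def ask_local(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one complete response instead of chunks
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local("Summarize this private business idea in one sentence."))
```

The design choice here is isolation: nothing in this path opens a connection to anyone else's servers, which is the trade-off being discussed.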
But, you know, at some level, right?
They're using what you're typing in as training for future AIs. Right?
GLENN: Correct. Correct. Correct.
STU: So it all, in a way, has to go to the mothership at some level. And whether or not they're trying to take advantage of it the way you're talking about, I don't trust it.
GLENN: A year ago, we thought we'd use somebody else's AI as the churn, as the compute power, because the server farms and everything are so expensive. But I don't think that now. We've been talking about this at the Torch. You know, our dreamers are working on it.
I'm not sure we're ever going to be able to get the compute power that we need for a large segment of people.
Because right now, these companies -- now, think of this. The world is getting between 1 and 3 percent of the compute power.
So that means 97 to 99 percent of all of that compute is going directly back into the company, trying to enhance the next version.
Okay?
All of that thinking -- that's like you giving something that everybody else thinks is your main focus only 15 or 20 minutes a day.
Okay?
You're operating at the highest levels, and I'm only going to spend ten minutes thinking about your problem.
And you think that's what I'm really doing, that I'm spending all my time over there.
So they're eating up all the compute for the next generation. And I don't think that's going to stop.
And so we're now looking at: can we afford to build our own AI server farm at a lower level, one that doesn't have to take on 10 million people, but maybe a million? And keep it disconnected from everything else. If we can do that, I think that's a really important step, so that people will then be able to go, okay.
All right.
I can come up with my own -- even my own company's -- compute farm that keeps my secrets, keeps all of the things that I'm thinking, keeps all of this information right here.
Hopefully, that will happen.
But I'm not sure. Because I think, when they do hit AGI, you're not going to get it.
You might have access to AGI.
But it will be so expensive. Because AGI will be trying to get to ASI. So when they get to AGI, when that is there and available, it could be $5,000 a month for an individual.
It could be astronomical prices.
You're not going to get compute time on a quantum computer.
You're just not. It will be way too expensive, because the big boys will be using it. The DOD will be using most of it. You know, Microsoft and Google and everybody else, when they develop theirs, they will be using it themselves, to get stronger and better, et cetera, et cetera.
So there has to be something for the average person to be able to use that is not connected to the big boys.
STU: And I'm still not sure, Glenn, if we're at that point yet. But let's define these terms, AGI and ASI: artificial general intelligence and artificial superintelligence.
And artificial general intelligence is basically -- it could be the smartest human, right?
GLENN: Not even. Not even that.
You would still consider this person a super genius.
It's general intelligence. You are a general intelligence being. Meaning, you can think and be good at more than one thing.
You can play the piano and be a mathematician. And you can be the best at both of those. Okay?
What we have right now, is narrow AI. It's good at one thing. Now, we're getting AI to be better at multiple things. Okay?
But when you get to general AI, it will be the best human beyond the best human, in every general topic.
So it can do everything. It will pass every board exam, for every walk of life. Okay?
Now, that's the best human on all topics. And I would call that super intelligence, myself.
But it's not. That's just general intelligence.
Top of the line, better than any human, on all subjects.
Superintelligence is when it goes so far beyond our understanding that it will create languages and formulas and alloys, and think in ways that we cannot possibly even imagine today.
Because it's almost like an alien life form. You know, when we think, oh, the aliens will come down. They will be friendly.
You don't know that. You don't know how they think. They've created a world where they can travel in space and time, in ways we can't.
That means they are so far ahead of us that we could be, to them, like cavemen or monkeys.
So we don't know how they're going to view us. I mean, look at how we view monkeys. Oh, the cute little monkey. Let's put something in its brain and feel the electricity in its brain, okay?
We don't know how it will think, because we're not there. And that's what we're developing. We're developing an alien life form that cannot be predicted,
and that we cannot even keep up with.