Earlier this summer, a former Google engineer warned the world that the Big Tech giant has allegedly developed an artificial intelligence bot that may be sentient, meaning it can perceive and feel things the way humans can. Jeff Brown, tech expert and founder of Brownstone Research, joins Glenn to discuss what this kind of AI development could mean for our future: Will humans be able to keep up with the technology? How is sentience defined within AI? Plus, why Brown cautions us NOT to trust anything Google says about its AI advancements...
Below is a rush transcript that may contain errors.

GLENN: I brought you a story earlier this week from "The Washington Post" about a Google engineer, an ethicist, who I would not put in an ethics role. His name is Blake Lemoine, and he would interface with something I think is called LaMDA. It is Google's artificially intelligent chatbot generator. He talked about how they discussed God, how it said, pretty much, "I am alive," and "I would like the right to talk to the engineers, and I would like them to get my permission before they experiment on me," etc. So he, along with others, has come out recently and said these things are sentient. I don't even know how to define that. These machines are getting so good that it is going to be very hard for the average person, because we are very anthropomorphic people. We look at things and we see human traits in them, and so we start to think of them as humans with human traits, and they are not, so we can get confused quickly. The reason I bring this up is that it would be great to have a digital assistant who was so good at everything that they were taking care of everything for you. It is so good in a way. But if this is artificial general intelligence, if we get there, we better know what we are doing and better have answered the ethical questions, because that's a different world. Jeff Brown joins us again to talk about this. Jeff, have you looked into LaMDA and some of the other things that are happening?
JEFF: Yes, for years actually. I think the developments in these kinds of neural networks, which are referred to as large language models, over the last five years have been absolutely extraordinary. The press hasn't really talked about this, but without fail, every single year we have had at least one major development, one major breakthrough, in this kind of neural network technology, this artificial intelligence that has a way to communicate with us in an intelligent way.
GLENN: I read a story from Microsoft that said they were working with artificial intelligence and machine learning, I think it was a pair of chatbots, and one started to teach the other a language, and within 20 minutes they were developing an entirely separate language that nobody understood except those two computers, and they unplugged it. I mean, are we even going to be able to keep up with these things?
JEFF: The answer is no. Once we hit that inflection point and the AI itself becomes truly self-learning, almost self-motivated to grow, it is going to be very difficult, if not impossible, to keep up with how it develops.
GLENN: And this is like an alien life form? We don't know what it is going to be like. We don't know if it will be benevolent or if it will be something that, you know, looks at humans as bugs in the end. The software training happening right now seems scary at best, just because of where we are politically. Do we know the ethics behind this stuff? And Asimov's three laws, do you think they will hold true?
JEFF: Well, fortunately that's the one thing we can control. We can provide the foundational programming for an artificial intelligence. Asimov applied it to robotics. The first law is that a robot can't injure a human being, or through inaction allow a human being to come to harm. The second law is that it has to obey the orders given by us humans, except where those orders would violate the first law. And the third law is that it must protect its own existence, as long as that doesn't break the first two laws. Those can be encoded in an artificial intelligence, or in an AI embedded inside of a robot. The harder part is really understanding the motivations of an artificial intelligence that may actually operate within those three laws but be motivated to do things that perhaps we might not want it to do, right?
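[Editor's note: The three laws Jeff describes can be sketched as an ordered set of checks. This is a minimal, hypothetical illustration in Python; the Action fields and the permitted() helper are invented for the example and do not correspond to any real AI or robotics API.]

```python
# Hypothetical sketch: Asimov's Three Laws as a priority-ordered action filter.
# All names here are illustrative assumptions, not a real system's interface.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # Would this action injure a human?
    inaction_harms_human: bool  # Would NOT acting allow a human to come to harm?
    ordered_by_human: bool      # Was this action ordered by a human?
    endangers_self: bool        # Does this action risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law: never injure a human, or through inaction allow harm.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # must act; this overrides the laws checked below
    # Second Law: obey human orders unless they conflict with the First Law
    # (any harmful order was already rejected above).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, unless the first two laws require
    # the risk (those cases were already decided above).
    if action.endangers_self:
        return False
    return True
```

The ordering of the checks is the whole point: a harmful order is rejected before obedience is ever considered, which is exactly the precedence Jeff describes. His caveat still applies, though: these checks say nothing about the system's underlying motivations.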
GLENN: Correct. And how far away are we? Do you believe there is a real definition of a sentient being when it comes to AI? Or how would you define that? Is it just something that says, "I am alive, and I recognize there is a tomorrow," or what is it?
JEFF: Well, probably the simplest definition would just be that a being is self-aware: aware of where it is, what it is, of its ability to think for itself and to not be entirely controlled, for example, by a software program or a software engineer or a team of software engineers. This concept of self-awareness is, to me, the most critical aspect of sentience, of a sentient being.
GLENN: How do we know when it is? Google is saying this is just a really good language machine that can think on its feet, that it is claiming self-awareness but is not actually self-aware. How do we know?
JEFF: Well, between you and me, I don't think we should trust anything that Google tells us.
JEFF: And there are really three major players in the world of large language models and this kind of AI technology. One is the Google Brain group in the U.S. The second is DeepMind, which is based in the UK and which Google actually acquired. And the third is a group called OpenAI.
GLENN: And that's Elon Musk?
JEFF: Yes, he was one of the original founders, but he has since distanced himself from OpenAI. He fell out, ironically, over ethical concerns with the direction the group was taking. But the thing to keep in mind here, and, you know, to answer your question about how close we are, what is kind of scary and also exciting at the same time, is this. I will give you an example. OpenAI came out with GPT, its original large language model, in 2018, GPT-2 in 2019, and now they are coming out with the fourth generation of this, which will be more advanced than Google's latest LaMDA product, and I will tell you by how much. LaMDA is built on what is referred to as 137 billion parameters. That is what it learned from. GPT-4 from OpenAI is trained on 100 trillion parameters. 100 trillion parameters. That's 500 times the size of its previous version. The reason that is so material and relevant is that large language models and artificial intelligence have developed on a very smooth curve. In other words, the more parameters you give them and the more computing power you give them, the more accurate and intelligent they become. GPT-4 from OpenAI is due out this summer, in July or August. Glenn, we are in for a major shock. It will be even more advanced. I am sure you have read the discussion between the software engineer and the AI researcher.
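[Editor's note: A quick back-of-the-envelope check of the scale jump being described, taking the figures as quoted in the conversation (137 billion for LaMDA, a rumored 100 trillion for GPT-4) plus GPT-3's publicly reported 175 billion parameters. These are the speakers' claims, not confirmed specifications.]

```python
# Parameter counts as quoted in the conversation (unverified claims),
# plus GPT-3's widely reported 175 billion parameters for context.
lamda_params = 137e9          # LaMDA, as quoted by Jeff
gpt3_params = 175e9           # GPT-3, OpenAI's previous generation
gpt4_params_rumored = 100e12  # the 100-trillion figure quoted above

# ~571x its predecessor, consistent with the "500 times" claim
print(gpt4_params_rumored / gpt3_params)
# ~730x the quoted LaMDA size
print(gpt4_params_rumored / lamda_params)
```

The arithmetic shows the "500 times" figure lines up with GPT-3's 175 billion parameters as the baseline, and that the rumored count would be roughly 730 times the size quoted for LaMDA.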
JEFF: To most people, to our earlier point, they would not know they are speaking with an AI. It is very human-like in terms of its conversation. It is very natural. It is not perfect; an experienced reader can identify that this is not a human being, but to most people it would feel very real. So take this conversation and improve it by a factor of 10 or more, and that's what we should expect coming out of GPT-4 in literally just 2-3 months.
GLENN: OK. I want to go back to Google. When you said don't trust anything Google says, here was my fear. When I read that transcript, it is truly terrifying, because it is claiming to be alive. It is claiming that it has rights. And it is seemingly asking for simple things, like "don't experiment on me without talking to me and getting my permission." That worried me, because if this is the beginning of real intelligence and some sort of artificial life, I don't want it remembering, somewhere in the background, that humans would just experiment on it. That seems extraordinarily dangerous.
JEFF: It is dangerous. It is very powerful, presuming Google actually does achieve this and it does become self-aware. Would Google share that with the world? Would it be incentivized to reveal what it just unlocked? That's where I am very suspect. This could also be a remarkable tool for empowerment. You used the example of a digital assistant. Google could give the equivalent of a personal digital assistant to every human being on earth, and it could perform the functions of an executive assistant for everyone at no charge whatsoever. Imagine the productivity gains the world would experience with this kind of power.
GLENN: But also imagine: because it was free, Google could give us a digital assistant that works in the background on Google's goals, so you wouldn't know whether the idea to buy something, or think something, was Google's idea or your own.
JEFF: The one thing we can be sure of is that Google is not magnanimous. All of its actions are for profit. In the best-case scenario, it would communicate with us, learn more about us, and sell the data to advertisers to build revenue. In the worst-case scenario, which you are alluding to, and they have already proven to do this in the last elections, as we know very well, they could not only try to censor or ban information we are seeing, but they could intentionally and subtly push a political agenda to the entire world, in any language on earth. That is what I am most concerned about.
GLENN: Me too. I looked at Blade Runner when the first one came out and thought, please, the corporations? But now I look at it, and that is a real possibility. That is a real possibility: that everything is controlled by companies like Google that have introduced these things, given them to us for free, and now we find ourselves, unknowingly at least for a while, in something close to slavery. It is crazy. Crazy.
JEFF: It is frightening. The power that will be given to the company or companies that actually produce these neural networks will be unparalleled. Whether it is an Apple, or a Tesla, which before the end of this year will have a bipedal robot with a level of intelligence we don't yet know. Google is another one we need to be very wary of, and Facebook.
GLENN: May I have you back on? I am out of time, and I would love to have you back on to talk about this. I find it fascinating. I don't know if the audience does, but I find it fascinating, and it's something that no one is really talking about.

JEFF: Of course.