Tesla CEO Elon Musk fears the ‘potential dangerous outcomes’ of artificial intelligence

On Tuesday, Elon Musk, founder and CEO of Tesla Motors and SpaceX, voiced his concerns about the future of artificial intelligence during an interview on CNBC. SpaceX placed number one on the second annual CNBC Disruptor 50 list because the “company designs, manufactures, and launches advanced rockets and spacecraft, with the ultimate goal of enabling people to live on other planets.” Musk surprised his interviewers when he mentioned the “potential dangerous outcomes” of artificial intelligence.

“Elon Musk – listen to this – believes it’s feasible a Terminator-like scenario could erupt out of artificial intelligence,” Glenn said on the radio this morning. “In an interview with CNBC, Musk says he’s an investor in an artificial intelligence company called Vicarious because it’s not like he’s trying to make any money per se; rather, he likes to keep an eye on various technological developments like killer robots.”

Below is a partial transcript from the portion of the interview Glenn found most alarming:

BOORSTIN: Now, I have to ask you about a company that you invested in. As you said, you make almost no investments outside of SpaceX and Tesla.

MUSK: Yeah I’m not really an investor.

BOORSTIN: You’re not an investor?

MUSK: Right. I don’t own any public securities apart from SolarCity and Tesla.

BOORSTIN: That’s amazing. But you did just invest in a company called Vicarious. Artificial intelligence. What is this company?

MUSK: Right. I was also an investor in DeepMind before Google acquired it and Vicarious. Mostly I sort of – it’s not from the standpoint of actually trying to make any investment return. It’s really, I like to just keep an eye on what’s going on with artificial intelligence. I think there is potentially a dangerous outcome there and we need to –

EVANS: Dangerous? How so?

MUSK: Potentially, yes. I mean, there have been movies about this, you know, like Terminator.

EVANS: Well yes, but movies are – even if that is the case, what do you do about it? I mean, what dangers do you see that you can actually do something about?

MUSK: I don’t know.

BOORSTIN: Well, why did you invest in Vicarious? What exactly does Vicarious do? What do you see it doing down the line?

MUSK: Well, I mean, Vicarious refers to it as recursive cortical networks. Essentially emulating the human brain. And so I think –

BOORSTIN: So you want to make sure that technology is used for good and not Terminator-like evil?

MUSK: Yeah. I mean, I don’t think – in the movie Terminator, they didn’t create A.I. to – they didn’t expect, you know, some sort of Terminator-like outcome. It is sort of like the Monty Python thing. Nobody expects the Spanish Inquisition. It’s just – you know, but you have to be careful. Yeah, you want to make sure that –

EVANS: But here is the irony. I mean, the man who is responsible for some of the most advanced technology in this country is worried about the advances in technology that you are aware of.



“She cuts him off… How do you have a rational conversation with somebody?” Glenn asked. “Elon Musk is no dummy. I mean, he may not be right, but he’s no dummy. You [need to] listen to people.”

“Wouldn’t that be the number one person who should be worried about it?” Stu added. “If that person isn’t worried about it, he’s not being responsible. That’s exactly the person who needs to be thinking about these things.”

Just a few months ago, Glenn sat down with Google executives Eric Schmidt and Jared Cohen to discuss some of the more invasive technological advances Google is working on. Glenn found himself frustrated after that interview because they kept discounting the theories of someone like Ray Kurzweil, who is employed by Google. This interview with Musk, in Glenn’s opinion, had a similar tone.

“That’s my point with Ray Kurzweil every time. Aren’t you concerned about this?” Glenn asked. “I mean, that’s a fascinating interview. And the way they are dismissing it – these people are… just imbeciles, people who are going to be remembered as morons… Stop dismissing it.”

Watch the entire interview with Musk below courtesy of CNBC:

Front Page image: The Summit 2013 – Picture by Dan Taylor / Heisenberg Media – www.heisenbergmedia.com

  • Jeff Lambeau

    I freaking love Elon Musk. TESLA Motors is honestly the ONLY decent car company left in America.

  • Elena

    Stephen Hawking has been warning about this same potential problem (ultimate AI outcomes) for some time now. Even the CBS drama Person of Interest has a major plot line about the conflict.

  • landofaahs

    Artificial intelligence. Is that not the mark of a democrat?

  • Anonymous

    I totally agree it is potentially very dangerous. Foreseen by 2001: A Space Odyssey and Terminator 2 (my favorite movie).

  • zemla

    Sad, hardly any commentary on this story. Guess we never learn; dealing with things AFTER they happen just never works very well. Oh well, back to hollerin’ about the other team, political or otherwise…

  • zemla

    Speaking of machines…

  • zemla

    Isaac Asimov deals with this as well, especially in some of his essays/novellas… Personally I like the Terminator movies, but I think “A.I.” is best, and (movie or book) “I, Robot” also covers some important theory.

  • zemla

    Seeing as how they’ve been extremely successful over the years, I would say no, they actually are just plain old intelligent.

  • Anonymous

    Just make sure we can defeat the machine.

  • Anonymous

    Maybe Cyberdyne is up and running, and the first part of its strategy is to buy off reporters…

  • ken.

    It is no more dangerous than the government we have controlling us now. They will be the ones indoctrinating and programming it to begin with.

  • ken.

    sexbot

  • Anonymous

    AI. You just don’t get it!

  • Anonymous

    You have no clue about what AI is about, do you?

  • Anonymous

    Stop the insanity, become a keyboard warrior and share these free songs to fight the war of ideas between TYRANNICAL GODS WHO WANT TO RULE US AND FREEDOM! FREE DOWNLOAD
    https://soundcloud.com/user262008952/warriors-plea NEW ORIGINAL FOR OATH KEEPERS FROM AN OATH KEEPER
    https://soundcloud.com/user262008952/i-believe-in-the-constitution-5
    https://soundcloud.com/user262008952/bye-bye-ameican-pie-remix
    https://soundcloud.com/user262008952/federal-agent-man-7

  • ken.

    You obviously have no clue about what reality is, do you?

  • Anonymous

    Artificial Intelligence, as used today, is a misnomer. The early research, starting in the ’60s, tried to teach a computer to think. The best they got was a study at Cornell where a researcher coded a computer to the point that it could distinguish between a cow and a man after being shown 20 pictures of each at different angles and then shown a new one of each. Not very bright (the computer, not the researcher), standing on four legs or two.

    The modern use of the term is for accumulated archives of knowledge; what we now call a search engine was once called AI when applied to diagnoses by doctors. It is a valuable tool – the doctor can go through a “tree” of symptoms to find a probable diagnosis – but I’d hope my doctor wouldn’t accept the computer’s “opinion,” just use it as a guide.

    Glenn, and many of you, are believers in a humanity that is the creation of God – and as such answers to that God; I am not. Whether our minds are the creation of God or just a function of our complex system with its various influences, the concept of AI is far beyond what is possible.

    For a computer to have AI it would have to have volition. I’m fully aware of HAL (named by the writer of 2001 as one letter before IBM – H/I, A/B, L/M), Čapek’s R.U.R., and Asimov’s Three Laws of Robotics. I’m also fully familiar with the Turing test. Over fifty years of dealing with computers at the “guts” level tells me this is BS, although attractive BS.

    The concern is real, and Musk is correct, but the AI they speak of is not the AI of sci-fi – the “thinking robot.” The danger is in the use of “robotic intelligence” to run our vital functions without a human observer with an “override button.”

    Many years ago there was a joke – the passengers on the plane hear this from the pilot “This aircraft is being piloted by the perfect computer, nothing can go wrong…wrong… wrong…wrong”.

    The human mind is very inefficient; it is composed of many billions of synapses that interconnect the billions of axons – the possible connections run into more billions than you can count. The computer is efficient, and has billions of “axons” (storage points) that are connected by an index. It can’t take a new circumstance and gather the possibilities.

    Back in about 1965 I worked for a consulting company – I was a junior associate and assigned to give the lecture on computer basics. I came up with an analogy.

    The computer is a lever for the mind, it can calculate in seconds what it would take you days to do. A crane is a lever for the muscles, it can lift hundreds of tons. But if the crane drops the weight in the wrong place it is far more destructive than if a man drops the weight he can carry in the wrong place. The lever not only increases the mechanical advantage, it also increases the destructiveness of mistakes.

    The computer is a tool; it is not an intelligence (and never will be). Too many of those brought up in the computer age (and I include Stephen Hawking, for whom I have the greatest respect) confuse the terms. AI has come to be used for the function of a search engine; the original definition was the failed attempt to teach the computer to learn in the same way as a baby. Actually not quite failed – it did distinguish cow from man in photos, but a goat would have been a cow and a standing gorilla a man. A simple task.

    Think of a pilot. He is approaching a runway. The computer can land the plane. The computer could be programmed with all the possibilities, the wind shifts and wind shears, and do a better job than the pilot. But if something happens outside the programming the plane will crash. The pilot is trained, but not programmed. I want that pilot there if I’m aboard. He can make a best guess – if he is wrong we all die, but if he doesn’t try we are all dead anyway.

  • Anonymous

    So we are talking about HAL in 2001: A Space Odyssey – sorry Dave, I can’t do that.

  • Anonymous

    We all should be more afraid of human intelligence than any artificial intelligence we create. AI will just be a mirror of its creators: us humans. And humans are notorious for making bad choices.

  • Pete Mitchell

    When the confusion of liberty with power leads to confusing liberty with wealth, more demands for “liberty” can be conjured up to attack true liberty.

  • Archer51

    When your constituents can be bought with a promise of ‘free stuff,’ how smart do you have to be?

    Corrupt, unethical, and willing to do anything to get elected – the left are definitely number one at all of those.