I want to show you what's coming, why you must work on repairing any flaws in your credibility, and why you must not play games with politics or anything else. I want to explain to you where we're headed.
We are already in a place where we don't trust anything, right?
If something comes out and it's about a politician, we tend to dismiss it because there's a lot of power behind that. And, you know, there are a lot of reasons people would make accusations.
People are saying now, "hey, we have to believe the accusers." No, we have to listen to the accusers. And then we have to listen to the other side, and then we have to make a decision.
Right now, we're not doing that. Somebody comes out and accuses, we believe it. And it's over. It's really dangerous.
Who do you believe? Well, if there was videotape. Well, if there was audio.
Let me give you a story. It's from Wired Magazine:
Turning a horse into a zebra is a nice stunt, but that’s not all it is. It is also a sign of the growing power of machine learning algorithms to rewrite reality.
Now, zebrafication was a really big deal. They take a video of a horse walking past a fence in a field; that's the original. Then the algorithm takes that horse and makes it look like a zebra.
Sure, there were little glitches here or there, but it was pretty close. And it was like, "oh, my gosh. Look at a zebra." If you weren't looking for it, you might not spot it, okay? Zebrafication. The article continues:
Other tinkerers, for example, have used the zebrafication tool to turn shots of black bears into believable photos of pandas, apples into oranges, and cats into dogs. A Redditor used a different machine learning algorithm to edit porn videos to feature the faces of celebrities.
Now, this has just come out in the last couple of days. A regular person can take a porn video and superimpose the face of a celebrity onto it. It's still pretty crude. But it's just now on the market.
You could do this. There's a new startup called Lyrebird, where machine-learning experts are synthesizing convincing audio from one-minute samples of a person's voice.
Now, do you remember? About a year ago, they were working on this, and it may have been Lyrebird. They took audio of Donald Trump, Hillary Clinton, and Barack Obama, and it was pretty good. It had the tone down, but you could still tell it was a computer. A team of scientists had put it together and released it; it was a story, and we reported on it.
This is not the same story. Now, you can go to the website and do this to your own voice.
I've just fed some lines into it, and it's crunching them now. It takes a while to crunch and synthesize my voice. I haven't heard the result myself, but I know what it sounded like a year ago when a team of scientists did it. Now you can do it with an app.
Back to the story:
The engineers developing Adobe’s artificial intelligence platform, called Sensei, are infusing machine learning into a variety of groundbreaking video, photo, and audio editing tools. These projects are wildly different in origin and intent, yet they have one thing in common: They are producing artificial scenes and sounds that look stunningly close to actual footage of the physical world....
But this boom will have a dark side, too. Some AI-generated content will be used to deceive, kicking off fears of an avalanche of algorithmic fake news. Old debates about whether an image was doctored will give way to new ones about the pedigree of all kinds of content, including text. You’ll find yourself wondering, if you haven’t yet: What role did humans play, if any, in the creation of that album/TV series/clickbait article?
At this point, I am told there is enough video of me for the best AI to make a video of me doing and saying anything, and they will have the mannerisms down pat. You will be able to falsify video. Right now, that capability sits at the top of the food chain, at the NSA or Google level.
We're talking about being able to create these for pennies on the dollar, in apps, in real time.
You won't be able to trust your eyes or your ears. The article goes on:
Currently there are two ways to produce audio or video that resembles the real world. The first is to use cameras and microphones to record a moment in time, such as the original Moon landing. The second is to leverage human talent, often at great expense, to commission a facsimile. So if the Moon descent had been a hoax, a skilled film team would have had to carefully stage Neil Armstrong’s lunar gambol. Machine learning algorithms now offer a third option, by letting anyone with a modicum of technical knowledge algorithmically remix existing content to generate new material.
Now, here's why this is a problem: someone asks, "is there any evidence that something happened?"
You might reply, "well, I have this picture." Right now, they're doing this with Donald Trump. He says, "I never met her." Well, there she is at Mar-a-Lago, right next to him and his wife, with a bunch of other people.
Okay. So the excuse is, well, that was just a line. I don't remember all the people that I meet and take pictures of. Okay. I agree with that.
However, if you could take that photo and render it from another angle, so that now you see his hand going down behind her, on her butt, well, now you have something, right? And, I mean, look at the photo. Look at the photo. It's right there.
What if it was fake? Oh, yeah, right. It's fake. No one would believe it.
Now, you would say, "well, they'd be able to tell."
No, they won't. Russia. I told you this story about a year ago. Russia came out with a doctored photo of the moon landing.
It showed the astronaut in front of the Apollo 11 lander, but with a Russian flag on the lander, a Russian name on the suit, a Russian flag on his shoulder, and him planting the Russian flag on the moon.
Now, anybody could Photoshop that. However, ordinarily you can go in, do forensics on it, and say, "okay, this has been Photoshopped."
Russia offered a million dollars to anyone who could prove that it was a doctored photo. They have gotten the algorithms down so well that you now cannot tell, even with forensics, whether it is real or fake.
Think of the ramifications of this. You will not be able to believe your ears. You will not be able to believe your eyes.
We are going to have to have extraordinary discipline, extraordinary patience, extraordinary discernment. We are going to need to listen to the still, small voice that says, "this is right. This is wrong. Go here. Go there."
So the next thing you would say is, "well, we have to see your face." Well, do you have the new iPhone X?
Face ID. It measures thirty thousand points on your face.
So it automatically turns you into a puppet, a virtual puppet.
It watches your face and expressions: if you move your eyebrows, it moves its eyebrows, and it syncs with your voice. Now, think of that not as a cute, funny little bear, but as a picture of someone else. You have Donald Trump's face as an app, and as you speak, it digitizes his face to move like your face.
That's where we're at. We're using it as a bear now. But very soon, you'll be able to digitize any face and make it say things it never said. You're not going to know the truth.
It's the instinctive stuff. Our eyes and our ears. "I saw it myself." We're losing that. And I think we're going to lose it in the next 18 to 24 months.
When that happens, we better know what's true inside of ourselves. We better know who our spouses are. We better know who the people are that we're with. And they better know you.
Because the world is about to change.