BuzzFeed Writer: Why Can’t Google, Facebook Get a Grip on Fake News?

What’s going on?

Humans haven’t been replaced by machines yet in at least one area: spotting news hoaxes. BuzzFeed senior writer Charlie Warzel joined Glenn and Stu today to talk about the tech world’s fake news problem and urge lawmakers to sit up and take notice of developing technology before it gets completely out of hand.

Give me the quick version:

After the tragic shooting in Florida last week, journalists and researchers noticed dozens of hoaxes going viral: impersonations of journalists, and posts and videos claiming the victims were actors. All of those things violate the rules of platforms like Facebook, YouTube and Twitter.

Parkland marked the third time in four months that these tech companies had slipped up by allowing total misinformation about tragedies to be shared freely on their platforms, BuzzFeed reported. Why can’t they seem to do better?

Politicians need to wake up.

As technology advances, it’s getting more and more difficult to know what’s real and what’s fake. Warzel urged lawmakers to put in “safeguards” now before obscure Reddit threads become mainstream misinformation. How will we trust our eyes and ears when video and audio can be easily faked?

This article provided courtesy of TheBlaze.

GLENN: Every once in a while, we need to take a step back. Everybody right now is screaming, fake news, fake news. Both sides are doing it, and in some ways, both sides are right.

We're getting to a place where, soon, you're not going to be able to believe your eyes and ears. And people don't really realize this. There's a guy named Aviv Ovadya. He predicted the fake news explosion. And now he's saying, oh, yeah, yeah, yeah, but that's just the beginning. That's nothing compared to what's on the near horizon.

STU: Yeah. Infopocalypse, potentially. And there's a great story about this in BuzzFeed from Charlie Warzel. It's a story about what's coming next.

Charlie Warzel is a reporter for BuzzFeed. He also writes one of my favorite things to read, because it's about Infowars and sort of that conspiracy media. His last name is Warzel, so it's called InfoWarzel, which is the greatest name of all time. It's a newsletter, and it's really worth your attention as well. He joins us now from Montana. Charlie, is that where you are?

CHARLIE: That's right. Missoula, Montana. Thanks for having me.

GLENN: You bet.

So, Charlie, I can't seem to get people to really get their arms around the idea that soon, we're not going to even know what reality is, and we don't -- we won't care.

CHARLIE: Well, it's complicated, to some extent. But the best way that I can describe it is this sort of hall of mirrors that we're experiencing online right now. As you guys were saying earlier, everyone is calling fake news, with bad actors, acting in bad faith, putting out, you know, propaganda and content that's designed to manipulate, that isn't true.

All those things that we see, you know, in our Facebook feeds, in Twitter right now.

It's all going to potentially get far worse because the technology is going to allow it to come from people that perhaps we know.

So the fake news that you're seeing, the misinformation, the propaganda, it could start coming from, you know, a loved one. You could start getting emails from them, telling you things that didn't happen, that were generated algorithmically. So it's not really that something new is going to happen. It's that everything happening right now -- all this unrest, discord, confusion, and difficulty, sort of, parsing reality -- is going to become so much more sophisticated because of technology that hasn't even been invented yet.

GLENN: What do you mean that you're going to get -- that you'll get something from your loved ones?

CHARLIE: Sure. So Aviv, the researcher who I spoke with -- alongside many others who are doing, you know, really great work understanding how these platforms work and the technology that's on the horizon -- Aviv has this term. It's called laser phishing. So regular phishing, or spearphishing, is when you maybe get a link from something -- an email address that is a couple characters off from somebody you know. And it's saying, hey, click this link. And then that link asks you for, you know, your password information. It's sort of a classic hacker trick. It's pretty low-tech.

Laser phishing is using artificial intelligence and machine learning to understand things about you, to understand the people that you talk to.

The conversations you have across social media with other people. Mine all that information. And then use it to manipulate you. So instead of getting an email from someone who sounds like they could be somebody you know, the email is going to come from, ostensibly, someone you know, and it's going to have information that's pertinent to you. Information that you were perhaps expecting to hear. So you're so much more likely to believe this information. And then offer things up.

You know, there are a lot of people -- Nigerian princes on the internet -- who are asking for money. But what if that person is your brother? And your brother says that he had a car accident, and he's stuck and needs to repair his car. Because you were having a conversation about, you know, cars and money or something along those lines.

So this is being able to manipulate people, at the click of a mouse or a button, in this artificial intelligence way. And I think that we're falling for the low-tech, lo-fi stuff right now. So it's going to be hard to imagine, you know, how we can get up to speed on the other stuff.

STU: And the future of this, Charlie, goes even further than just, say, an email. It could even be audio or video coming from the people that you know, convincing you to do something that winds up completely burning you.

CHARLIE: Absolutely. And I think you can see this not just in people asking for money, or you know, asking you for information. But this can be -- this can be used to manipulate government and diplomacy.

GLENN: Uh-huh.

CHARLIE: It's not hard to envision -- and many people have already been talking about this. Any lawmaker has hundreds of hours of footage of themselves, either audio or video, on the internet. Machine learning programs can take that, can absorb it. And then what they can do with that is produce very hard-to-verify and real-looking video of people saying anything.

So, you know, you could have a video of Donald Trump, potentially down the line, really antagonizing North Korea in an aggressive way.

And the stakes of that get higher and higher as the reaction times get shorter and people have to respond.

So you could really escalate, you know, political and -- and, you know, diplomatic tensions using this kind of technology.

GLENN: So I was talking about this at the beginning of the year. And I laid out just some crazy predictions. And one of them was, if not by this election of 2018, then by 2020, this will be used in an effective way. And we may not know about it until after the election. But we are that close to this kind of stuff being used. Would you agree with that?

CHARLIE: Well, I think with the artificial intelligence stuff, with the video and audio manipulation, we may be a little further down the line from that. Because the real worry is not just that some incredibly sophisticated programmer or one-off type person, who has, you know, access to proprietary technology, is going to be able to use this.

The real thing is when it becomes democratized, when anyone with two or three hours of research on the internet can do this.

And that, I think, we're a little bit further off from, but not too far. There are some forums.

There's a forum on the site Reddit, which is called deepfakes. And it is where people are manipulating video right now.

Some of it is awful. Some of it is pornographic and very disturbing. But others -- you can go and look for yourself -- are just funny. People putting Nicolas Cage's face on Arnold Schwarzenegger.

GLENN: I don't know why Nicolas Cage is this guy. But his face is almost on everybody.

(laughter)

CHARLIE: He's an internet sensation.

GLENN: Yeah, he is.

CHARLIE: But, you know, it speaks to -- when people are kind of playing around with this, having fun with it, doing it in their spare time because it's entertaining, that is sort of a harbinger of something that is scary, which is that you could, in two or three hours, figure out how to do this yourself.

I think we're a little further out than that -- 2020, who knows. But it's definitely coming.

GLENN: I hope you're right.

Tell me a little bit about what Aviv talks about and describes as reality apathy.

CHARLIE: Sure.

It's basically the combination of all of this that we're talking about. Which is these sophisticated technological tools to distort what's real and what's not, to the point where you become overwhelmed by it all. Say you're being laser phished by, you know, 20 people. And when you go online and try to click a news link, you're not sure where the source is coming from, whether it's something you can trust or something you can't.

You're just besieged by what you believe is misinformation, but you can't even tell. So you start to disengage.

You know, if your inbox is something where you don't know what you're getting, what's real or what's not, you're going to maybe give up. And that is sort of -- that works also with -- with diplomacy. If people start, you know, spoofing calls to Congress, to lobby their lawmakers about some political issue, if that happens in a -- in a spoofing way so much that people can't get through on the lines, they're going to stop participating in -- in democracy, in that particular way. They might, you know, stop going online and sharing their own opinions or feel unsafe. They might just say, you know what, the news, it's just not worth it for me. That's scary.

GLENN: But going the other way as well, if you see a bunch of stuff that is fake and you don't know what to believe, somebody in power could actually be doing some really bad stuff. And nobody would know. Nobody would pay attention. They would say, well, that's just fake. Because that's what the politician would say.

CHARLIE: Yeah, an informed citizenry is a cornerstone of democracy.

GLENN: So how do we inform ourselves, going forward? Who is standing against this? How do we protect -- I mean, you can't put the genie back in the bottle. What do we do?

CHARLIE: Well, I think -- this is why I wanted to highlight Aviv's work. And, you know, I -- he's becoming labeled as sort of the person who called the misinformation fake news crisis before it became a thing. He's one of many. There are -- there are, you know, dozens of researchers like this, who are lobbying tech companies, thinking about this, on sort of the vanguard of this movement.

And I think journalists, news organizations, highlighting these people's work, giving them a platform to talk about this, is the first step. The second step is really, you know, putting pressure on these technology companies. And not just Facebook or Google or Twitter. But, you know, the hardware makers. Companies like Adobe, like potentially Apple. Companies that are going to be making this audiovisual technology. And making them understand that innovation is okay.

But we have to learn our lessons from, you know, this whole fake news situation that we're dealing with right now. And build this technology responsibly, with all of these externalities baked in, and understand that these things can be abused. So let's put in the safeguards now, instead of later.

STU: I think you could see tech companies at times, be a little bit absorbed by self-interest. But they're not nefarious actors, right?

My issue with this, when I try to find optimism in the future here, Charlie, is that eventually state actors, hacker groups -- someone with actual nefarious intent, that you can't go and lobby, where you don't have people with ethics trying to deal with this -- are going to get control of this stuff and do things that are going to be really harmful and maybe irreversible.

CHARLIE: I think that is potentially true. I mean, all of this -- it's difficult, because we're in speculation territory. It's difficult, as a journalist writing about this, not to go too far. You know, scaring people too much. But, I mean, I think what the last 18 months of this sort of information-crisis world we're in should be teaching us right now is that this is everyone's problem. Lawmakers, you know, need to get smart on this stuff quick. They need to, you know, be putting pressure on --

GLENN: Not going to happen.

CHARLIE: And I think they need to spend time, you know, really understanding this technology --

GLENN: Yes.

CHARLIE: -- themselves. And getting the government ready. There are not a lot of task forces here to combat computational propaganda or misinformation.

GLENN: Charlie, look how we're dealing with Russia. Everybody is talking about, oh, well, Donald Trump, Hillary Clinton. Russia. Look at what Russia is doing. We can get to the rest of that, and, you know, if somebody did something, they should go to jail. But we're missing the point that Russia has come in and announced, in advance, what they were going to do. And they did it.

CHARLIE: State-sponsored actors -- all of this is clearly manipulable by them. And I think that's certainly one piece of the puzzle. I think that this technology -- we've spent so long thinking that this technology is a universal positive. That there are no negative externalities to connecting the world.

And I think that that is, you know, a naive look at this. And I think that we need to change the way that we message about this technology -- that it's just as much a force for evil, potentially, as it is a force for good and for, you know, the free circulation of information. So I think some of it just has to do with our mindset. A new innovation is not good just by definition.

GLENN: Right.

CHARLIE: You have to earn that.

GLENN: Charlie, I've been concerned about this for a very long time. I was really glad to see your article, and the fact that it was on BuzzFeed and people are reading it. And I'd love to stay in touch with you and have you on the program again as we follow this story. Thank you very much, Charlie.

CHARLIE: Thanks for having me.

(music)

STU: Leave you with one last quote from Aviv Ovadya, the expert Charlie talked to: "Alarmism can be good. You should be alarmist about this stuff. We are so screwed, it's beyond what most of us can imagine."

I mean, jeez. It's scary. I tweeted Charlie Warzel's article from @worldofStu, but he's @CWarzel on Twitter. You can get his work on BuzzFeed. It's really interesting stuff. He dives into a lot of weird worlds. And it's really compelling.

The FEC is bad. The House of Representatives isn't doing anything to make it better.

When the House passed H.R. 1 by a vote of 234-193 on Monday, Congress attempted to address a laundry list of nationwide problems: rampant gerrymandering, voting rights, and the vulnerability of elections to foreign interference, among other concerns. But H.R. 1, billed as the "For the People Act," also takes a shot at reforming the Federal Election Commission (FEC). It fails.

The FEC isn't good at enforcing the nation's campaign finance laws, and, when it does, it's often an entire election cycle after the given offense. As it is, candidates don't have much difficulty circumventing campaign finance laws, undermining the fairness of elections and opening the door to further corruption.

The FEC was created by the Federal Election Campaign Act following the Watergate scandal, as Congress sought a better way to police federal campaign laws and prevent future presidents from interfering with investigations as Nixon had. The FEC has six commissioners, and no more than three can be of the same party. Four votes are required for most actions taken by the agency, and that hasn't been an issue for most of its history. But since 2008, the frequency of 3-3 tie votes has increased dramatically. It's why the FEC is slow to investigate cases and even slower to prosecute offenses. Supporters of H.R. 1 complain, with good reason, that the FEC has become toothless. But H.R. 1's reforms introduce new and potentially volatile problems.

The FEC's rampant dysfunction won't be fixed by H.R. 1; the bill doesn't get at what actually went wrong. Since its inception, the FEC has been able to operate without excessive gridlock, and, for the most part, it still does. At the height of FEC turmoil in 2014, the FEC had a tied vote on substantive matters only 14 percent of the time (historically, it has been closer to one to four percent), although many of these tie votes occur on matters that are particularly contentious. The greater problem afflicting the FEC is touched upon by NBC Washington's findings that the Republican and Democratic commissioners of the FEC almost always vote as blocs. At various times, both Republican and Democratic commissioners have put party interests ahead of their agency's responsibilities.

H.R. 1's Democratic supporters instead believe the FEC's six-commissioner structure makes it dysfunctional. H.R. 1 introduces a new system of five commissioners: two from each party and one independent, eliminating tie votes. But that independent commissioner's de facto role as a tiebreaker would grant them far too much power. Save for Senate approval, there's nothing preventing a president from appointing an "independent" like Bernie Sanders or Angus King.

The bill's proponents are aware of this problem, creating a Blue Ribbon Advisory Panel that will help inform the president's decisions. But this panel has problems of its own. The Blue Ribbon Advisory Panel's recommendations are non-binding and not public, a result of its exemption from the Federal Advisory Committee Act (FACA), which ensures the transparency of advisory committees. While there are arguments against FACA's necessity, the panel's deliberate exemption from the law undermines the idea that its goal is to ensure non-partisanship. Instead, H.R. 1 will allow future presidents to tilt the scales of the FEC in their favor, a fate the post-Watergate creators of the FEC were so desperate to avoid that they originally had members of Congress picking commissioners, before the Supreme Court ruled that arrangement unconstitutional. Apparently, the solution to excessive gridlock is one-party control.

H.R. 1 also seeks to grant unilateral powers to the Chair of the commission in the name of expediency, again giving leverage to the Chair's party, and allows the General Counsel to take actions independent of commission votes. While some of the FEC's problems, such as its notoriously slow pace and the delayed appointment of commissioners under Presidents Obama and Trump, might be solved with legislation, the consolidation of power in the hands of a few at the expense of the FEC's integrity is not a winning strategy.

The FEC is afflicted by the same problem that has afflicted governments for as long as they have existed: governments are made up of people, and people can be bad. The Founders, in their wisdom, sought to limit the harm bad actors could do once in power, and the FEC's current structure adheres to this principle. Currently, the consequences of bad actors in the FEC are dysfunction and frustration. But under H.R. 1's reforms, those consequences could be blatant corruption.

Michael Rieger is a contributor for Young Voices. Follow him on Twitter at @EagerRieger.

On Monday's radio program, Glenn Beck and Stu Burguiere discussed former Starbucks CEO and progressive Howard Schultz, a lifelong Democrat who has not only been disowned by the Democrat Party but can no longer set foot inside a Starbucks store because of his success in business.

In this clip, Stu explained how Starbucks at one time sold coffee only in bags, until Schultz, an employee at the time, convinced the company to open a Starbucks cafe.

Click here to watch the full episode.

At one point, the owners came close to closing down the cafe, but Schultz eventually managed to purchase the company and transform it into the empire that it is today.

Stu continued, describing how Schultz, a lifelong Democrat, went on to implement liberal corporate policies that earned the company a reputation for being a "beacon" of liberalism across the country.

"And now he (Schultz) can't even get into the Democrat Party," Stu said."That is craziness," Glenn replied.

Citing a "60 Minutes" interview, Glenn highlighted the journey that Schultz traveled, which started in the New York City projects and evolved, later becoming the CEO of a coffee empire.

"This guy is so American, so everything in business that we want to be, he has taken his beliefs and made it into who he is which is very liberal," Glenn explained.

Catch more of the conversation in the video below.


This article provided courtesy of TheBlaze.

This weekend, March 17, Rep. Rashida Tlaib will be speaking at the Council on American-Islamic Relations (CAIR) Michigan chapter's 19th annual "Faith-Led, Justice Driven" banquet.

Who knows what to expect. But here are some excerpts from a speech she gave last month at CAIR-Chicago's 15th annual banquet.

You know the speech is going to be good when it begins like this:


[Video: CAIR-Chicago 15th Annual Banquet: Rashida Tlaib (youtu.be)]


It's important to remember CAIR's ties to the Muslim Brotherhood. Think of CAIR as a spinoff of Hamas: its two founders originally worked for a Hamas offshoot organization, the Islamic Association for Palestine (IAP).

A 2009 article in Politico says the feds "designated CAIR a co-conspirator with the Holy Land Foundation, a group that was eventually convicted for financing terrorism."

The United Arab Emirates has designated CAIR a terrorist organization.

In 1993, CAIR spokesman Ibrahim Hooper told a reporter for the Minneapolis Star Tribune:

I wouldn't want to create the impression that I wouldn't like the government of the United States to be Islamic sometime in the future.

In 1998, CAIR co-founder Omar Ahmad said:

Islam isn't in America to be equal to any other faith, but to become dominant. The Koran … should be the highest authority in America, and Islam the only accepted religion on Earth.

Notice the slight underhanded jab at Israel. It's just one of many in her speech, and is indicative of the growing anti-Semitism among Democrats, especially Tlaib and Omar.

Most of the speech, as you might expect, is a long rant about the evil Donald Trump.

I wonder if she realizes that the birth of Jesus predates her religion, and her "country." The earliest founding of Palestine is 1988, so maybe she's a little confused.

Then there's this heartwarming story about advice she received from Congressman John Dingell:

When I was a state legislator, I came in to serve on a panel with him on immigration rights, and Congressman Dingell was sitting there and he had his cane, if you knew him, he always had this cane and he held it in front of him. And I was so tired, I had driven an hour and a half to the panel discussion at the University of Michigan Ann Arbor campus. And I sit down, my hair is all messed up, and I said, 'Oh, my God, I'm so tired of this. I don't know how you've been doing it so long Congressman. They all lie.' And he looks at me and he goes. (She nods yes.) I said, 'You know who I'm talking about, these lobbyists, these special interest [groups], they're all lying to me.' … And he looks at me, and he goes, 'Young lady, there's a saying in India that if you stand still enough on a riverbank, you will watch your enemies float by dead.'

What the hell does that mean? That she wants to see her enemies dead? Who are her enemies? And how does that relate to her opening statement? How does it relate to the "oppression" her family faced at the hand of Israel?

Glenn Beck on Wednesday called out Reps. Ilhan Omar (D-Minn.) and Rashida Tlaib (D-Mich.) for their blatantly anti-Semitic rhetoric, which has largely been excused by Democratic leadership. He noted the sharp contrast between the progressive principles the freshmen congresswomen claim to uphold and the anti-LGBTQ, anti-feminist, anti-Israel groups they align themselves with.

Later this month, both congresswomen are scheduled to speak at fundraisers for the Council on American-Islamic Relations, a pro-Palestinian organization with ties to Islamic terror groups including Hamas, Hezbollah, al-Qaeda, and the Islamic State.

Rep. Tlaib will be speaking at CAIR-Michigan's 19th Annual Banquet on March 17 in Livonia, Michigan, alongside keynote speaker Omar Suleiman, a self-described student of Malcolm X with links to the Muslim Brotherhood. Suleiman has regularly espoused notably "un-progressive" ideas, such as "honor killings" for allegedly promiscuous women, mandatory Hijabs for women, death as a punishment for homosexuality, and men having the right to "sex slaves," Glenn explained.

Rep. Omar is the keynote speaker at a CAIR event on March 23 in Los Angeles and will be joined by Hassan Shibly, who claims Hezbollah and Hamas are not terrorist organizations, and Hussam Ayloush, who is known for referring to U.S. armed forces as radical terrorists.

Watch the clip below for more:


This article provided courtesy of TheBlaze.