Facebook is developing AI to bust 'offensive' Live video: report

Facebook has long been vigilant about keeping your News Feed free of "inappropriate" content. That's relatively simple when you're talking about material that can be reviewed in full after it's posted — but what happens if something goes wrong during a livestream? 

A new initiative is reportedly in the works to build up the social network's flagging system for offensive content in a particularly difficult area: Facebook Live. 

Until now, Facebook has relied in part on users to report offensive material, which Facebook employees then check against the company's "community standards." 

But at a recent roundtable at Facebook HQ in Menlo Park, Joaquin Candela, the company's director of applied machine learning, told reporters that they're testing artificial intelligence that can detect offensive content. 

The new flagging protocol is “an algorithm that detects nudity, violence, or any of the things that are not according to our policies,” Candela said, according to Reuters.

Such an algorithm was tested back in June to screen videos posted in support of extremist groups — but going forward it will be applied to Facebook Live broadcasts to keep violent events and amateur erotica off the network.

According to Candela, the AI system is still being honed, and it will likely act as an alert rather than a one-stop judge, jury and executioner of explicit streams. 

“You need to prioritize things in the right way so that a human [who] looks at it, an expert who understands our policies, [would also take] it down,” he said.
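
Facebook hasn't said how the system is built, but Candela's description maps onto a familiar pattern: a model scores each stream, and anything above a threshold lands in a priority queue so the most likely violations reach human reviewers first. Here's a minimal sketch of that pattern, with every name, score and threshold invented for illustration:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Alert:
    # Negate the score so Python's min-heap behaves like a
    # max-priority queue: likely violations surface first.
    neg_score: float
    stream_id: str = field(compare=False)

review_queue: list[Alert] = []

def flag_if_needed(stream_id: str, violation_score: float,
                   threshold: float = 0.8) -> None:
    """Queue a stream for human review when a (hypothetical) model's
    violation score crosses the threshold. The model only raises an
    alert; a reviewer who knows the policies makes the final call."""
    if violation_score >= threshold:
        heapq.heappush(review_queue, Alert(-violation_score, stream_id))

# Two streams score above the threshold; the reviewer
# sees the higher-scoring one first.
flag_if_needed("live/123", 0.95)
flag_if_needed("live/456", 0.85)
flag_if_needed("live/789", 0.10)  # below threshold: no alert
next_up = heapq.heappop(review_queue)
print(next_up.stream_id)  # -> live/123
```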

But what is "inappropriate," exactly?

As helpful as an AI-flagging system might be, there are still major questions about what should and shouldn't be considered "inappropriate." Facebook came under fire back in September after it removed a famous image from the Vietnam War — and that was under the old system, with a human moderator making the decision.

Yann LeCun, Facebook's director of AI research, declined to comment specifically on the Reuters story but did address censorship in broader terms, acknowledging the precarious position this type of system creates.  

“These are questions that go way beyond whether we can develop AI,” he said. “Tradeoffs that I’m not well placed to determine.”

Those "tradeoffs" could have a real cost. Difficult, important broadcasts that might otherwise be flagged — like the streaming of violent encounters with law enforcement or the aftermath of a shooting — must be treated with careful consideration. 

If a machine is at the controls without the benefit of human reasoning and context, important nuance could be lost. A human will reportedly still make the final decision, but people aren't perfect, either. In setting the guidelines for what's appropriate on Facebook Live, the decision-makers need to keep these issues at the forefront of their policies.  

Remember, this AI flagging system is only being tested for now — it's not yet in use on the Facebook you scroll through every day. Still, there's no doubt it — or something like it — is coming soon, once the company has determined that an AI can be trusted with our most sensitive content. 
