Advertisers and regulators are still frustrated with Facebook's brand safety (FB, TWTR, SNAP, SPOT)

Among both advertisers and regulators, frustration with Facebook’s brand safety failures is intensifying. 

[Chart: Facebook Reported Actions Against Content Violations. Source: Business Insider Intelligence]

Advertisers may hold Facebook more accountable for its inadequate response to their concerns. Despite those grievances, advertisers have thus far had little choice but to keep spending on Facebook to meet client objectives: Facebook still commands 20.6% of the overall digital ad market in 2018, per eMarketer. But that leeway could be shrinking. 

  • Ad giant GroupM is reportedly pursuing alternatives to Facebook amid transparency and brand safety concerns. Responding to growing demand for brand-safe inventory, GroupM is proactively designing alternatives to the duopoly: its programmatic unit Xaxis is developing a high-quality six-second video ad format, tailored to mobile viewing, to run across a mix of alternative platforms like Snapchat, Spotify, Twitter, and even TV content viewed on mobile. While the format has a ways to go before it’s widely adopted, GroupM’s reaction to Facebook’s inadequacies and relative unresponsiveness to brand priorities could serve as a bellwether for the industry.

  • Facebook-owned Instagram will likely also take heat from advertisers concerned about brand safety. Barely three months old, the app’s video-focused section IGTV has reportedly recommended videos depicting potential child exploitation and genital mutilation, according to an investigation by Business Insider. Instagram intends for IGTV to rival YouTube as a repository for user-generated video and to goose the social app’s ad sales growth beyond the main feed. But IGTV appears to have inherited the video giant’s sins as well: YouTube has dealt with its own share of problematic content since it was founded 13 years ago.

Likewise, oversight agencies are likely to get more serious about problematic content on tech platforms, and on Facebook in particular.

  • Government oversight of tech platforms, particularly in the EU, is advancing in response to a rise in malicious content. UK officials are now drafting legislation to establish an internet regulator similar to the British communications watchdog Ofcom, BuzzFeed News reports. Among the proposals under consideration is a “takedown time” requirement that would mandate that sites remove nefarious content, including hate speech, terrorist content, and child abuse images, within a set timeframe. 

  • Ad industry watchdogs are also raising the brand-safety stakes for platforms. The Media Rating Council (MRC), the industry body responsible for measurement oversight in advertising, issued an update to its Ad Verification Guidelines that adds requirements around content and brand safety, per MediaPost.

Overall, content validation is set to become a bigger pursuit across the industry amid a broad accountability push and a growing need for more sophisticated defenses against nefarious content online. AI and machine learning systems can already detect the vast majority of bad content on platforms, but their remaining blind spots still require human intervention from fact-checkers and content moderators.

Scrutiny is dangerous for tech companies because it comes from both those who support their business (advertisers) and those who might try to restrict it (regulators). To meet demands on both sides, platforms need improved content validation tools yesterday, and the area will be a huge investment focus going forward. 
