Norwegian news site fighting trolls by making commenters prove they read the stories first

Mary-Ann Russon
[Image: Skankhunt42 on South Park]

Norway's public broadcaster has come up with an innovative way to combat online trolls – forcing users to answer questions and prove that they have actually read the content of news stories before they are allowed to post comments.

The feature was introduced on NRK's technology news website NRKbeta in February and already seems to be working: discussions in the comments under controversial articles, such as one on a proposed digital surveillance law that would allow retention of citizens' data, have been respectful and productive.


Rather than restrict users from having their say on controversial viral articles, NRKbeta readers are asked to answer three multiple choice questions in order to "unlock" the comments section.

In the case of the digital privacy article, users are asked questions designed to show whether they understood the crux of the story, including: "What is the government agency's stance on the proposed bill?", "What does the bill's acronym DGF stand for?" and "Which party voted unanimously in favour of the bill?"
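
The article describes the mechanism but not its code, so the following is a hypothetical sketch of such a "quiz gate", with made-up questions and answer keys standing in for NRKbeta's real ones:

```python
# Hypothetical sketch of a comment-unlock quiz; the questions, choices and
# answer indices below are illustrative, not NRKbeta's actual quiz data.

QUIZ = [
    {"question": "What is the proposed bill about?",
     "choices": ["Digital surveillance", "Road tolls", "Fishing quotas"],
     "answer": 0},
    {"question": "Who put the bill forward?",
     "choices": ["A private firm", "A government agency", "A sports club"],
     "answer": 1},
    {"question": "How many questions must a reader answer?",
     "choices": ["One", "Two", "Three"],
     "answer": 2},
]

def comments_unlocked(responses):
    """Unlock the comments section only if every answer is correct."""
    if len(responses) != len(QUIZ):
        return False
    return all(r == q["answer"] for r, q in zip(responses, QUIZ))
```

A reader who answers `[0, 1, 2]` would see the comment form; any wrong or missing answer keeps it locked, which is what forces the short pause the editors are after.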


The idea is that when people get angry about an issue online, having to answer the quiz might get them to calm down and think about whether they really want to use a more extreme tone.

"If you spend 15 seconds on it, those are maybe 15 seconds that take the edge off the rant mode when people are commenting. It's a lot of tech guys, smart people who know how to behave. But when we reach the front page, a lot of people that are not that fluent in the Internet approach us as well," NRKbeta editor Marius Arnesen told Harvard University's Nieman Journalism Lab.


"We're trying to establish a common ground for the debate. If you're going to debate something, it's important to know what's in the article and what's not in the article. [Otherwise], people just rant."

Recently, computer scientists from the Wikimedia Foundation and Alphabet's Jigsaw (formerly the Google Ideas tech incubator) published research showing that machine learning, a branch of artificial intelligence, can be used to help moderate user comments left on Wikipedia.

Humans would still be needed to make the final decision, but artificial intelligence could detect online abuse early, so that a moderator can step in before a heated exchange gets out of hand, and could help identify and ban serial abusers (known as "trolls") who routinely start arguments just for fun.
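
The research itself uses learned models; as a hypothetical stand-in, the human-in-the-loop idea can be sketched with a trivial scorer that flags likely-abusive comments for a moderator rather than deciding on its own:

```python
# Hypothetical human-in-the-loop triage sketch. A real system would use a
# trained classifier; this word-list scorer is only a stand-in for one.

ABUSIVE_MARKERS = {"idiot", "stupid", "moron"}  # toy stand-in for a model

def abuse_score(comment):
    """Fraction of words that look abusive (0.0 = clean, 1.0 = all abusive)."""
    words = comment.lower().split()
    hits = sum(w.strip(".,!?") in ABUSIVE_MARKERS for w in words)
    return hits / max(len(words), 1)

def triage(comments, threshold=0.1):
    """Return only the comments a human moderator should review first;
    the machine flags, but a person makes the final call."""
    return [c for c in comments if abuse_score(c) >= threshold]
```

The design point matches the article: the model's job is early detection and prioritisation, while the ban decision stays with a human.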

