Google has tasked more than 10,000 independent contractors with flagging "offensive or upsetting" content to ensure that queries such as "Did the Holocaust happen" do not lead visitors to sites that promote misinformation, hate speech or hoaxes. According to Search Engine Land, Google's worldwide quality raters assess the company's system by evaluating and rating search results against a lengthy set of guidelines. These guidelines have now been updated to include a new section about "Upsetting-Offensive" content that raters will need to flag. The manual instructs raters on how to assess the quality of a website and score it based on whether or not it meets users' needs.
The new "Upsetting-Offensive" flag covers content that promotes hate or violence against a group of people based on race, ethnicity, religion, gender, nationality, age, sexual orientation, disability or veteran status. Content containing racial slurs, hate speech, graphic violence, or how-to information about harmful activities also falls into this category.
Last year, Google drew fierce criticism after the first result for the query "Did the Holocaust happen" returned a link to neo-Nazi website Stormfront.
The guidelines featured various examples of possible queries, including one for a search on "Holocaust history". The document states that a link from Stormfront should be flagged as "Upsetting-Offensive" because "this result is a discussion of how to convince others that the Holocaust never happened".
"Because of the direct relationship between Holocaust denial and anti-Semitism, many people would consider it offensive."
However, a search result from The History Channel should not be flagged because it is "factually accurate information" and does not exist to "promote hate or violence against a group of people, contain racial slurs or depict graphic violence".
The manual also included examples for a search on "racism against blacks" that returned a page from the neo-Nazi website Daily Stormer, as well as a query for "Islam" that returned a link to far-right activist Jan Morgan's website.
These contractors cannot directly alter Google's search results or cause a page to drop in rankings by giving it a lower rating. However, the data produced by these quality raters is used by Google's engineers to improve the company's search algorithm and AI systems. Results marked "Upsetting-Offensive" will still show up in search results if someone is "explicitly seeking this type of content."
"Remember that users of all ages, genders, races, and religions use search engines," the guidelines read. "One especially important need is exploring subjects which may be difficult to discuss in person. For example, some people may hesitate to ask what racial slurs mean.
"People may also want to understand why certain racially offensive statements are made. Giving users access to resources that help them understand racism, hatred, and other sensitive topics is beneficial to society."
Google has already begun testing these new guidelines with some of its quality raters and used the resulting data to improve search rankings in December.
"We're explicitly avoiding the term 'fake news,' because we think it is too vague," Paul Haahr, one of Google's senior engineers, told Search Engine Land. "Demonstrably inaccurate information, however, we want to target."
"We will see how some of this works out. I'll be honest. We're learning as we go."