
'The problem of gendered language is universal' – how AI reveals media bias

Photograph: Maskot/Getty Images

If, during an election campaign, you heard one candidate described as “brave” and another candidate described as “strong”, which of the two would you be more likely to vote for? If the answer seems obvious to you, pause on why: on paper the two words are near-synonyms, yet one feels more electable than the other. That instinct demonstrates the power of language to shape our thinking and influence our behaviour.

“Gendered language is understood as language that has a bias towards a particular gender [and] reflects and maintains pre-existing social distinctions,” explains Roxana Lupu, an expert in applied linguistics. It shows us two things – not only does it “signal the presence of sexism in the society”, but it also “reinforces those beliefs and perceptions”. To put it simply: gendered language is that which promotes bias towards one gender, while simultaneously entrenching such bias further.

For a relatively new field of study in sociolinguistics (gendered language only rose to academic prominence in the 1970s), it has had no shortage of attention – emerging alongside second-wave feminism, it deepened the collective understanding of how gender discrimination is perpetuated, both directly and indirectly.


Lupu believes the media plays a fundamental role in disseminating gendered language among the population. “We need to raise awareness to drive change,” she says.

But raising awareness is hindered by a lack of information on just how big the problem is. That’s where artificial intelligence (AI) comes in.

“Never before have we had the capability to analyse language in such a meaningful way at such massive scale,” says Rich Wilson, owner of Deviance (a technology company that focuses on language analytics). “This represents a huge opportunity for broad areas such as cultural or gender research,” he continues, which means that “evidence is now indisputable and quantifiable rather than just anecdotal.”

It was precisely this thinking that inspired a recent media coverage study conducted by a female-led marketing agency, Mac+Moore, with the support of Deviance. As marketeers, the agency’s founders, Jess MacIntyre and Natalie Moores, spend a large portion of their time discussing the power of language and messaging with their clients. “We work closely with companies to craft and shape the way they communicate with their audience,” Moores says, “so we know better than most how language can be a very powerful and persuasive tool – and has the ability to shape people’s perception.”

The difference in the media’s treatment of men and women is a topic that has attracted growing coverage over the past decade. Savvy brands such as Gillette have used their marketing campaigns to highlight – and challenge – gender discrimination and how it damages women. But Mac+Moore wanted to take this one step further. “We wanted to produce a data set that irrefutably demonstrated how gendered language is used in the media,” says Moores, “so that we had hard evidence that couldn’t just be dismissed as ‘an opinion’.”

Using a technique known as comparative linguistics (where two data sets are analysed in relation to one another), Deviance’s software would enable them to analyse in detail any linguistic differences in the way men and women are described in the source material. It would also enable the analysis of articles from publications across the UK’s media landscape, at a volume far beyond what humans alone could process – and in only a matter of hours. “AI is perfect because it allows the analysis to be completely removed from any bias that we may have – it allows for complete neutrality,” says MacIntyre.
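Neither Mac+Moore nor Deviance has published its code, but a minimal sketch of the kind of comparison such software performs might look like this: count how often each word appears in two sub-corpora (say, coverage of male candidates versus coverage of female candidates) and score each word with Dunning’s log-likelihood, a standard “keyness” measure in corpus linguistics. Every name below – the tokeniser, the function names – is illustrative, not the study’s actual pipeline.

```python
# Illustrative sketch of comparative corpus analysis (not the study's code).
# "Keyness" is scored with Dunning's log-likelihood: how disproportionately
# a word appears in corpus A relative to corpus B.
import math
import re
from collections import Counter

def tokenise(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def log_likelihood(a: int, b: int, total_a: int, total_b: int) -> float:
    """Dunning's log-likelihood for a word seen a times in corpus A
    (size total_a) and b times in corpus B (size total_b)."""
    expected_a = total_a * (a + b) / (total_a + total_b)
    expected_b = total_b * (a + b) / (total_a + total_b)
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / expected_a)
    if b > 0:
        ll += b * math.log(b / expected_b)
    return 2 * ll

def keywords(corpus_a: str, corpus_b: str, top_n: int = 20):
    """Words most characteristic of corpus A relative to corpus B."""
    freq_a, freq_b = Counter(tokenise(corpus_a)), Counter(tokenise(corpus_b))
    total_a, total_b = sum(freq_a.values()), sum(freq_b.values())
    scores = {
        word: log_likelihood(freq_a[word], freq_b[word], total_a, total_b)
        for word in freq_a
        if freq_a[word] / total_a > freq_b[word] / total_b  # over-used in A
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]
```

Run over two sub-corpora, a function like this surfaces the words that distinguish one from the other – the raw material for findings such as the ones below.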

“We chose the Labour leadership race as source material because it’s so topical and, what’s more, there’s never been a female leader of the party, but the odds of one being elected in this contest were four to one,” says Moores. “Statistically, it is more likely than ever that a woman will be elected, which would enable us to see with more clarity how gendered language is affecting the candidates’ chances – for better or for worse.”

And so they fed 145,000 words through the software, sourced from recent coverage of all five candidates from a broad cross-section of the media’s online content – amounting to 250 articles in total.

The results were startling: articles covering the only man in the race, Keir Starmer, were 4.4 times more likely to describe him using words meaning “preferred” and “favoured”, whereas the female candidates were 1.9 times more likely to be described using words such as “brave” (arguably patronising in this context), “sad”, “violent/angry”, and “dislike”.

Moreover, the results show a heavy focus on gender through the use of titles such as “Ms” or “Mrs”, which appeared three times more often in coverage of the female candidates, whereas Starmer was mostly referred to by his surname alone or by the honorific “Sir”, which carries a positive connotation. Finally, Starmer was 1.6 times more likely to be discussed in terms of professional employment, politics, law and order, and belonging to a group, whereas the female candidates were far more likely to be discussed in relation to their families and, particularly, their fathers.
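The “times more likely” figures are relative rates rather than raw counts: how often a category of words appears per word written about one set of candidates, divided by the same rate for the other. A toy illustration, with invented numbers rather than the study’s data:

```python
# Toy illustration of a "three times more likely" figure. All counts are
# invented for illustration, not taken from the study.
def relative_rate(count_a: int, words_a: int, count_b: int, words_b: int) -> float:
    """How many times more frequent a category is in corpus A than corpus B."""
    return (count_a / words_a) / (count_b / words_b)

# Hypothetical counts: gendered titles ("Ms"/"Mrs") in coverage of the
# female candidates vs equivalent title use in coverage of Starmer.
titles_women, words_women = 90, 30_000
titles_starmer, words_starmer = 29, 29_000
print(round(relative_rate(titles_women, words_women,
                          titles_starmer, words_starmer), 1))  # -> 3.0
```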


The web scraper tool used in the initial analysis also picked up the content of digital advertising on each website. “This revealed that whenever a female candidate was discussed, ads were served against the content for clothing, fashion and beauty,” says MacIntyre. “This never happened for Keir Starmer – the adverts served in articles for him were much more gender neutral.” This, they believe, points to an entrenched data bias in the software used by digital ad services, one that could influence who consumes the content: “the implication being that articles written about female candidates are only relevant for female readers”, says MacIntyre.
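For readers curious how advertising copy ends up in a text analysis at all: a scraper that pulls a page’s raw HTML will capture whatever ad text is rendered into it alongside the article. A rough sketch, with a hypothetical URL and hypothetical CSS selectors (real ad markup varies by publisher, and ads injected via JavaScript would be missed by this simple approach):

```python
# Rough sketch of scraping article text and any server-rendered ad copy.
# The URL and CSS selectors are hypothetical; real ad markup varies by
# publisher, and JavaScript-loaded ads would not appear in the raw HTML.
import requests
from bs4 import BeautifulSoup

def scrape_page(url: str) -> dict[str, str]:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    article = " ".join(p.get_text(" ", strip=True)
                       for p in soup.select("article p"))
    # Containers whose class names suggest advertising (hypothetical selectors).
    ads = " ".join(tag.get_text(" ", strip=True)
                   for tag in soup.select("[class*='ad-slot'], [class*='advert']"))
    return {"article_text": article, "ad_text": ads}

# Usage (hypothetical URL):
# texts = scrape_page("https://example.com/news/labour-leadership-profile")
```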

“There is strong evidence to support the theory that women are being portrayed and represented within the media in an overly negative and gendered way, which could be impacting the outcome of election campaigns,” says Moores, “and the implications of this are potentially huge both in politics but also wider society.”

The two women are energised by the research and plan to use the results to push for companies to think more carefully about how content is presented. “Although these results tell the story of one leadership election, the problem of gendered language is universal,” says MacIntyre.

More than anything, the study demonstrates how AI can drive forward our awareness of the scale of the problem of gendered language: the first step to addressing the issue. “The media has a responsibility to contribute to an equal society,” says Lupu. For Wilson, “if AI can help to highlight a path to progress, then we should grasp that opportunity with both hands.” MacIntyre agrees: “After all, if the world is changing, why shouldn’t our language change too?”