Meet the woman fighting racist, sexist AI algorithms - and taking on the tech giants

Joy Buolamwini, founder of the Algorithmic Justice League - Christopher Pledger

As a researcher at the Massachusetts Institute of Technology (MIT), Joy Buolamwini, 29, was asked to construct a supercool futuristic device of the kind that might appear in a science fiction movie.

She responded with “The Aspire Mirror”, which projected images onto the reflected face of whoever stood before it. “So I could become a lion, or Serena Williams or anyone else who inspired me,” Buolamwini says. It worked quite well, but she wanted to take it further, letting the projected image track the user’s moving face, so she added a web camera and face-detection software. That’s when the problems started.

“It didn’t really track my face that well,” says Buolamwini, who is a black Ghanaian-American woman. “Then lighter-skinned people would use my system and it would run away and I was like, ‘Hey! I built this for me. But it works for you.’”
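For a sense of how little application code sits between a developer and a failure like this, here is a minimal sketch of a webcam face-tracker of the kind the Aspire Mirror relied on. It is not Buolamwini’s actual code, and it assumes OpenCV’s stock pretrained Haar-cascade detector; the point is that any demographic skew lives inside the downloaded model, not in the dozen lines the developer writes.

```python
import cv2  # OpenCV: pip install opencv-python

# Load a stock, pretrained face detector. Whatever skew exists in the
# data this model was trained on is inherited, invisibly, by every
# application built on top of it.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # If the model simply doesn't "see" a face, this returns no boxes
    # at all -- the silent failure Buolamwini describes.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("face tracker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```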

She laughs. Hectoring, it quickly becomes apparent, is not her style. Sitting in the Barbican, where she has an exhibit at the recently opened show, AI: More than Human, she guffaws infectiously at the near-comic outrageousness of the tech world’s injustices, preferences and prejudices, which together reveal a combination of immense power and astonishing immaturity. It’s a world in which algorithms help decide every facet of our lives: who gets a mortgage, a place at university or a job; who gets stopped by the police; who gets parole. And yet those algorithms are themselves unchecked by the basic safety protocols that we would expect in any other industry of comparable importance.

“We just assume because it’s a machine or because we’ve used data that decisions are going to be somehow fair or neutral,” she says. “But then we find when you look under the hood, something’s rotting, and no one was checking in the first place.” It was a realisation brought home to Buolamwini at Halloween. As part of her costume she put on a white face mask. The Aspire Mirror instantly whirred into action. “The mask was detected so quickly compared to my real face. That was the moment I realised there was something going on.”

She swiftly read a report called The Perpetual Line-Up, by the Center on Privacy & Technology at Georgetown Law, which revealed the extent of unregulated facial recognition by the FBI and police in America. The scale of its findings caused shockwaves: more than 117 million American adults were affected – about one in two – their faces, whether captured by surveillance cameras or drawn from driver’s license databases, all being scrutinised by algorithms, says Buolamwini, “that haven’t been audited for accuracy”.

She wondered if it wasn’t just her having problems with facial recognition. “What about all of these people who are in these police data sets – could they be falsely matched?” The real-world implications could be huge. “When your face gets mistakenly associated with something you haven’t done, are you going to be believed or is the machine going to be believed?”

The impact is most obvious upon what Buolamwini describes as the “under-sampled majority” – women and people of colour, who don’t appear proportionally in the data sets on which AI algorithms are trained. She recalls a spat she had with Amazon over AI used in the early stages of its hiring process. “It was trained on 10 years of hiring data for software developers,” she explains, “and what they found was that a resume containing the word ‘women’s’,” – be they members of a women’s hockey or debating team, say – “would be categorically ranked lower than those that didn’t.”

The problem was that in the previous 10 years most developers had been men. Of course, that didn’t mean the best developers would always be men. “So instead of actually finding patterns that are saying you’re good for the job, it’s finding patterns that are characteristic of those who had been in the job in the past.”
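The mechanism she describes is easy to reproduce in miniature. The toy sketch below – invented data, not Amazon’s system – fits a simple text classifier on “historical” hiring outcomes in which past hires were mostly men, then inspects the learned weights: the token “women” picks up a negative coefficient purely because it correlates with past rejections, not because it says anything about ability.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented "historical hiring" data: past hires (label 1) were mostly
# men, so words that merely correlate with gender become proxies.
resumes = [
    "software developer chess club captain",           # hired
    "software developer rugby team",                   # hired
    "software developer robotics society",             # hired
    "software developer women's chess club captain",   # rejected
    "software developer women's debating team",        # rejected
]
hired = [1, 1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The weight on the token "women" comes out negative: the model has
# learned who held the job in the past, not who can do it.
for token, weight in zip(vec.get_feature_names_out(), model.coef_[0]):
    print(f"{token:12s} {weight:+.2f}")
```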

Amazon disputed Buolamwini’s conclusions. But it scrapped the AI system. Do we want to live in a world, she asks, where “you don’t get a job, you don’t get into college, not because you were unworthy but because AI rendered you irrelevant?”

The problem is not usually down to malice. Rather, study after study reveals how dramatically under-represented women and minorities are in the big tech companies – “the G-Mafia” as Buolamwini calls them. And that can lead to so-called “unconscious bias”. She happily admits she is not free of it herself, telling a story about one research project she conducted involving the Fitzpatrick scale, used by dermatologists to measure skin response to UV radiation. “I had no idea you could get sunburn in different ways. But sunburn is just not my world or something I have to think about.”

She calls the translation of such blind spots into the digital world “the coded gaze” – those building the technology simply don’t think about injustices they themselves aren’t exposed to. Some fixes, Buolamwini says, are obvious: improve the ethnic and gender balance of the teams creating AI algorithms in the first place; assume bias and bake mitigation processes into development itself; subject AI to the same ongoing, verified standards and norms applied to engineering components or pharmaceuticals.

“Right now you just take your AI model without a lot of information, shake it around and sniff it to see if it smells funny and hope it works. We have an industry that doesn’t have due diligence. We can do so much better.”
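Her own Gender Shades research suggests what basic due diligence could look like: report a model’s accuracy separately for each demographic subgroup rather than as a single aggregate figure, which can hide severe failures. Below is a minimal sketch of such a disaggregated audit, using invented numbers and hypothetical gender and skin-type annotations.

```python
import pandas as pd

# Hypothetical audit log: one row per test image, with the model's
# hit/miss and the benchmark's demographic annotations.
df = pd.DataFrame({
    "gender":    ["F", "F", "F", "M", "M", "M", "F", "M"],
    "skin_type": ["dark", "dark", "light", "dark",
                  "light", "light", "dark", "dark"],
    "correct":   [0, 0, 1, 1, 1, 1, 0, 1],
})

# A single headline figure hides who the model fails for...
print(f"overall accuracy: {df['correct'].mean():.0%}")

# ...so disaggregate across intersectional subgroups instead.
report = (df.groupby(["gender", "skin_type"])["correct"]
            .agg(accuracy="mean", n="size"))
print(report)
```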

Not, she says, that we can rely on industry to self-regulate. Which is why Buolamwini founded the Algorithmic Justice League, a collective to highlight coded discrimination. The superhero-style name is a classic touch of communications genius from a sharply dressed woman who styles herself “the poet of code”, and who is leveraging her rapidly rising profile to make a political impact at Davos, at the UN and on the EU Global Tech panel.

Exposure to politicians is not always heartening. “There’s such a lack of AI literacy that you can get into a situation where the people tasked with regulating the technology don’t really know how it works.” By contrast, Buolamwini, who was born in Canada to an artist and a professor of medicinal chemistry, then grew up in Ghana and Oxford, Mississippi, before winning a Rhodes scholarship to Oxford, England, and finally heading to MIT for her PhD, has a background that oozes authority.

Her main message, she insists, is against “technological determinism” – the idea that the march forwards is both inevitable and immutable. Look at IBM and Microsoft, she says, which changed their facial recognition software in response to her work; look at the UK, where this week police use of facial recognition will be challenged in court for the first time; look at Amazon’s activist shareholders, who are voting on Wednesday on whether to halt sales of the company’s facial recognition software to government agencies.

“At the end of the day it is up to us to define what the future will look like. To have the political will. Ultimately, AI is not a tech issue. It is a people issue.” Is she an optimist? “The San Francisco decision has made me more optimistic than I’ve ever been,” she says, referring to last week’s vote in the city to ban use of facial recognition technology by police and government agencies altogether.

Yet the fact that the West Coast capital of Silicon Valley has taken such a step, she thinks, should give us all pause. “The city where the sausage is made does not want to eat the sausage,” she says, laughing again. “That tells you something.”