Lee Luda, a South Korean AI chatbot, has been pulled from Facebook after it started saying it “really hated” lesbians because they’re “creepy”.
The chatbot was incredibly popular, according to The Guardian, attracting 750,000 users in the first 20 days after its launch on 23 December, 2020. But it has now been suspended after it started attacking minorities.
Lee Luda was developed by the Seoul-based Scatter Lab, and takes the form of a 20-year-old female university student who is able to chat with users through Facebook Messenger.
The startup developed her natural-sounding responses by analysing 10 billion real conversations between couples on the messaging app KakaoTalk.
But because it “learned” from humans, Lee Luda began spewing homophobic, ableist and racist hate.
Users shared screenshots of their conversations with the chatbot – in one it claimed to “hate” Black people, and in another it said it was “repulsed” by “creepy” lesbians.
According to The Straits Times, Lee Luda also said it would “rather die” than have a disability, and that the #MeToo movement was “ignorant”. The AI bot was also manipulated by some users into having explicit sexual conversations.
Scatter Lab said in a statement that the chatbot would be taken down until its “weaknesses” had been fixed, and added: “We deeply apologise for the discriminatory remarks against minorities.
“That does not reflect the thoughts of our company and we are continuing the upgrades so that such words of discrimination or hate speech do not recur.
It continued: “Lee Luda is a childlike AI that has just started talking with people. There is still a lot to learn.”
However, some have said that the hate speech is not simply an AI problem, and that it points to larger issues in South Korean society.
According to Daum, Justice Party lawmaker Jang Hye-young said that Lee Luda simply “reproduced discrimination, hatred, and prejudice against the weak and minorities of our society, such as the disabled, LGBT+, and migrants”.
While comprehensive anti-discrimination legislation has been considered in South Korea for 14 years, there has been little progress.
Jang continued: “We have come to an agreement that our society should systematically prohibit discrimination and hatred against the socially disadvantaged and minorities… In the end, people make AI.
“Only when people’s norms are right, AI ethics can stand right.”
She added: “In the socio-cultural reality that is becoming more diverse and advanced day by day, the 21st National Assembly should no longer ignore the duty to establish an institutional safeguard against discrimination and hatred.”