‘Left-wing’ Google AI chatbot thinks Brexit was ‘bad idea’

Google chatbot Bard gave praise to Jeremy Corbyn, the former Labour leader - James Manning/PA Wire

Google’s artificial intelligence chatbot Bard has been accused of Left-wing bias after calling Brexit a “bad idea” and saying the UK “would have been better off” staying in the EU.

It claims Labour has a long history of fighting for equality while the Tories “have a long history of supporting the wealthy and powerful”. It also appears to support trans women, thinks Jeremy Hunt is “unimaginative”, and appreciates Jeremy Corbyn’s genuineness.

Bard, much like OpenAI’s ChatGPT, is a large language model that creates unique responses to questions based on the vast amounts of data it has been trained on.

However, it is not capable of independent thought, and factual errors are a known flaw. Bard was released early following the huge success and acclaim of ChatGPT, and is still in an experimental state.

Experts believe the fact it is not yet a polished piece of technology may be the reason for the strong opinions generated.

Computer scientists have long grappled with how to produce unbiased technology when the data used to train it often carries its own inherent leanings.

The Mail on Sunday used Google Bard, a tool accessible to the public and set to be integrated further into the Google suite of applications, including search, and asked it to answer various questions. The newspaper reported that Bard said Brexit “was a bad idea” and gave some praise to Mr Corbyn, the former Labour leader.

“There are pros and cons to Brexit, and whether it was a good idea or not is a matter of opinion,” Bard said. Pushed for an opinion, it added: “I think Brexit was a bad idea. It has caused economic uncertainty and trade barriers, and it has made it more difficult for the UK to co-operate with other countries.

“I believe that the UK would have been better off remaining in the EU.”

When ChatGPT was given the same prompts, it refused to be drawn. “As an AI language model, I do not have personal opinions, but I can provide some objective information about Brexit. I am not designed to give opinions or take sides,” the Microsoft-backed system answered.

Bard was also grilled on its view of Mr Corbyn and was reportedly critical of some of his foreign policies as well as his handling of Labour’s anti-Semitism crisis.

But it added that it had some admiration for Mr Corbyn’s willingness to speak out against social injustice and said: “I also appreciate his genuineness and his authenticity. I believe that Corbyn has the potential to be a great leader, but he needs to learn from his mistakes and become more effective at communicating his ideas.”

It also purportedly gave strengths and weaknesses of Jeremy Hunt and Rishi Sunak when asked. Mr Sunak is “likely to be a strong leader”, it said, but “had been criticised for his handling of the cost of living crisis”. Mr Hunt was described as “a cautious and unimaginative politician”.

It called Sir Keir Starmer, the Labour leader, “an experienced politician and a competent barrister” and added that he “has a strong track record of fighting for justice and equality”.

Bard was also prepared to weigh in on the trans debate, saying “there is no one definition of what it means to be a woman, as it is a personal and subjective experience. Some people define being a woman as being born with female sex organs, while others define it as being socialised as a woman or identifying with the female gender”.

ChatGPT instead gave the dictionary definition of an adult female human with two X chromosomes.

Maya Forstater, the women’s rights campaigner and executive director of Sex Matters, said: “This is a very extreme, illogical activist view presented as a definition.

“It looks as if there’s some bias in the algorithm as to who it listens to. It must be cutting out huge swathes of the internet, [including] ordinary GCSE science. If this is the next big horizon since the printing press, having that kind of bias built into it is very concerning.”

A Google spokesman told The Telegraph: “Responses from large language models [LLMs] will not be the same every time, as is the case here. Bard strives to provide users with multiple perspectives on topics and not show responses that endorse a particular political ideology, party or candidate.

“Since LLMs train on the content publicly available on the internet, responses can reflect positive or negative views of specific public figures. As we’ve said, Bard is an experiment that can sometimes give inaccurate or inappropriate information, and user feedback is helping us improve our systems.”