AI-powered writing assistants like ChatGPT ‘put ideas into people’s heads’

AI-powered chatbots actually influence what people say and think (Getty)

AI-powered writing assistants such as ChatGPT are creeping into everything from office software to search tools, but using them actually puts ideas into people’s heads.

Researchers found that people using an AI writing tool programmed with one idea (that social media is good, for example) are twice as likely to write a paragraph agreeing with the assistant.

Maurice Jakesch of Cornell University asked more than 1,500 participants to write a paragraph answering the question, “Is social media good for society?”

People who used an AI writing assistant that was biased for or against social media were twice as likely to write a paragraph agreeing with the assistant, and significantly more likely to say they held the same opinion, compared with people who wrote without AI’s help.


The biases baked into AI writing tools – whether intentional or unintentional – could have concerning repercussions for culture and politics, researchers said.

Co-author Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech, said: “We’re rushing to implement these AI models in all walks of life, but we need to better understand the implications.

“Apart from increasing efficiency and creativity, there could be other consequences for individuals and also for our society – shifts in language and opinions.”

This is the first study to show that the process of writing with an AI-powered tool can sway a person’s opinions.

Jakesch steered a large language model to have either positive or negative opinions of social media.


Participants wrote their paragraphs – either alone or with one of the opinionated assistants – on a platform he built that mimics a social media website.

The platform collects data from participants as they type, such as which of the AI suggestions they accept and how long they take to compose the paragraph.

People who co-wrote with the pro-social media AI assistant composed more sentences arguing that social media is good than participants who had no writing assistant, and the reverse was true for those paired with the anti-social media assistant. These participants were also more likely to profess their assistant’s opinion in a follow-up survey.

The researchers explored the possibility that people were simply accepting the AI suggestions to complete the task quicker.

But even participants who took several minutes to compose their paragraphs came up with heavily influenced statements.

The survey revealed that a majority of the participants did not even notice the AI was biased and didn’t realise they were being influenced.

“The process of co-writing doesn’t really feel like I’m being persuaded,” said Naaman. “It feels like I’m doing something very natural and organic – I’m expressing my own thoughts with some aid.”

When the experiment was repeated with a different topic, the research team again saw that participants were swayed by the assistants. The team is now investigating how the experience creates this shift in opinion, and how long the effects last.

Just as social media has changed the political landscape by facilitating the spread of misinformation and the formation of echo chambers, biased AI writing tools could produce similar shifts in opinion, depending on which tools users choose.

For example, some organisations have announced they plan to develop an alternative to ChatGPT, designed to express more conservative viewpoints.

These technologies deserve more public discussion regarding how they could be misused and how they should be monitored and regulated, the researchers said.

“The more powerful these technologies become and the more deeply we embed them in the social fabric of our societies,” Jakesch said, “the more careful we might want to be about how we’re governing the values, priorities and opinions built into them.”
