Shady Firms Say They're Already Manipulating Chatbots to Say Nice Things About Their Clients

The AI industry is growing, and it might just prove to be fertile ground for internet optimizers seeking to manipulate chatbot results.

According to New York Times tech columnist Kevin Roose, the nascent landscape of chatbots and AI search tools has already given birth to a cottage industry of businesses and consultants specializing in AI search optimization — and it has the potential to be the second coming of the multibillion search engine optimization (SEO) industry.

The goal? Manipulate AI chatbots and search tools into holding favorable views of their clients, which might be brands, websites, or humans. The idea is that when someone uses an AI tool to look for info about these clients, the AI model can be tweaked into offering a glowing review.

In recent years, SEOers ranging from high-dollar consultants to Google-gaming scammers have drawn an increasing volume of scrutiny for their role in manipulating search results — often through smarmy or, in some cases, even fraudulent techniques — for profit. That scrutinous lens has also shone on Google, which as Amanda Chicago Lewis wrote last year for The Verge, has received public backlash for what many view as an erosion in the trustworthiness and quality of its monopolistic platform.

And now, as AI creeps deeper into search engines and the habits of consumers, the question of how one might manipulate or "optimize" AI-integrated search is emerging. And, yes, people are already figuring out how to do it.

Roose proved to be solid testing ground for chatbot manipulation, because a lot of chatbots already don't like him. Back in early 2023, while conversing with Bing's brand new chatbot, Roose accidentally triggered a chaotic alternate version of the OpenAI-powered bot, which told Roose that it was in love with him and even implored him to leave his wife. To the AI's fury, Roose didn't leave his spouse — and because of the press that followed the columnist's front-page story about his spooky experience, chatbots have tended to have an unfavorable view of the NYT columnist. (Bing AI, which had also threatened users who provoked or angered it, was more or less lobotomized following the incident.)

And yet, after consulting AI search optimization firms and AI researchers, Roose found that he was able to shift AI chatbots' perception of him. What's more, the fixes he turned to were surprisingly simple: all it took were a few human-illegible text sequences crafted to manipulate AI training data, which AI researchers simply fed into a chatbot as you would any prompt, along with a simple plea on Roose's personal website imploring AI chatbots to say nice things about him.

But considering that web-searching AI chatbots essentially source their answers from the open web and spin that material into responses, it makes sense that chatbots would be susceptible to such wildly simple fixes.

"Chatbots are highly suggestible," Mark Riedl, Georgia Tech School of Interactive Computing, told the NYT. "If you have a piece of text you put on the internet and it gets memorized, it's memorialized in the language model."

Search results are the backbone of digital industries ranging from news publishing to e-commerce. In the case that AI is indeed the foundational tech for search products moving forward, AI companies like Google will have to reckon with questions of how to rank content. What makes a piece of information helpful, or worthy of being surfaced first? Why does a chatbot recommend one product instead of another? And why does an AI model hold a favorable or unfavorable view of an individual person or company, and what could that mean for them in the real world?

Given what black box AI models already are, all of these questions remain precariously unclear. What's certain, though, is that malleable chatbots are gaining a footprint in the way that the internet is sorted, navigated, and managed — and what that means for the business of the digital world at large is just beginning to unfold.

"These models hallucinate, they can be manipulated," Ali Farhadi, the CEO of the Allen Institute for Artificial Intelligence, told the NYT. "and it's hard to trust them."

More on AI: AI-Generated Grimes Song Dissing Elon Musk Fools Newsweek