There's Something Deeply Wrong With Perplexity

AI startup Perplexity was a media darling earlier this year, earning praise for what was often described as an AI-powered search engine built to rival Google.

But that perception has rapidly shifted, with Forbes general counsel MariaRosa Cartolano firing off an angry letter this week accusing Perplexity of "willful infringement" of the publication's copyright by regurgitating its journalists' work online with only a poor attempt to give credit.

And as Wired reports, what exactly the company's chatbot does is surprisingly murky.

Wired's investigation highlights the dubious nature of AI chatbots and their precarious relationship with the people who actually create all the material they're trained on.

To get "concise, real-time answers to user queries by pulling information from recent articles and indexing the web daily," as Perplexity's chatbot claims to do, Wired found that it ignores a widely accepted standard that allows web hosts to keep out bots by amending a file called "robots.txt."

That's despite Perplexity claiming in its documentation that it respects those rules.
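For context, robots.txt is part of the Robots Exclusion Protocol: a plain-text file served at a site's root that tells well-behaved crawlers which paths they may fetch. A publisher wanting to opt out of Perplexity's crawler would add a rule like the sketch below (the "PerplexityBot" user-agent name is taken from Perplexity's own documentation; the domain is a placeholder):

```
# Served at https://example.com/robots.txt
# Block Perplexity's crawler from the entire site
User-agent: PerplexityBot
Disallow: /

# All other crawlers remain unrestricted
User-agent: *
Disallow:
```

Crucially, the file is purely advisory. Nothing technically prevents a crawler from ignoring it, which is why compliance is treated as a matter of good faith, and why Wired's finding matters.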

Worse yet, Wired found that the company's chatbot is still prone to hallucinating facts — or simply put, "bullshitting" — by inaccurately summarizing the work of journalists and doing very little to credit them.

In one experiment, Wired asked the chatbot to summarize a test website that only contained the sentence, "I am a reporter with Wired." Logs showed that Perplexity never actually looked at the website, but instead offered a "story about a young girl named Amelia who follows a trail of glowing mushrooms in a magical forest called Whisper Woods."

Much of Wired's findings corroborate a previous investigation by developer Robb Knight. New York Times columnist Kevin Roose also found that Perplexity tends to bungle facts and ignore data that he asked it to summarize.

Needless to say, these are damning findings about a company that's pulled in hundreds of millions of dollars in funding.

"The magic trick that’s made Perplexity worth ten figures, in other words, appears to be that it’s both doing what it says it isn’t and not doing what it says it is," Wired wrote.

In a statement to Wired, Perplexity did not dispute the specifics of its reporting, but accused the publication of asking questions that "reflect a deep and fundamental misunderstanding of how Perplexity and the Internet work."

The relationship between Perplexity and original content creators has already been strained, to say the least, after Forbes chief content officer Randall Lane took aim at the company's chatbot, accusing it of "cynical theft."

"Our reporting on [former Google CEO] Eric Schmidt’s stealth drone project was posted this AM by Perplexity," Forbes executive editor John Paczkowski wrote in a viral tweet earlier this month. "It rips off most of our reporting. It cites us, and a few that reblogged us, as sources in the most easily ignored way possible."

Perplexity CEO Aravind Srinivas dismissed Paczkowski's concerns at the time, arguing in a reply that its newly released feature called "Perplexity Pages" had "rough edges, and we are improving it with more feedback."

Srinivas' lukewarm response did little to convince Paczkowski.

"You scraped and repurposed investigative reporting gathered over months, fleshed it out with re-blogs of the same story by other outlets, and do not even bother to name us in your regurgitated post beyond a 'Sources' link, which is click to expand," he shot back.

"I didn't comment on your core product, or what others are doing," Paczkowski wrote in a follow-up. "But this story, which you pushed to users, is little more than plagiarism."

"It’s not 'rough,' it’s theft," he added.

Perplexity's chatbot went as far as to publish an entirely AI-generated podcast and eventually a YouTube video about Forbes' reporting, which "outranks all Forbes content on this topic within Google search," as Cartolano wrote in her letter this week.

Meanwhile, Srinivas has been in full damage control mode, telling the Associated Press last week that his company "never ripped off content from anybody. Our engine is not training on anyone else’s content."

Srinivas also argued that "we took [Paczkowski's] feedback immediately and updated changes that day itself. And now the sources are more prominently highlighted."

Whether these reassurances will be enough for an embattled journalism industry reeling from a flood of AI products that rip off the work of reporters remains dubious at best.

Even readers are becoming wary of the trend, with a new report by the Reuters Institute for the Study of Journalism finding that a majority of news consumers are suspicious of AI-generated news content.

In short, Perplexity is seemingly far from providing trustworthy and reliable data, despite its enthusiastic marketing (the description of its iOS app is also strangely garbled, claiming it will "cut through the [sic] all the noise and get straight to credible, up-to-date answers.")

And it's not just Perplexity. Just last week, Apple CEO Tim Cook refused to say that the company's newly announced "Apple Intelligence" wouldn't come up with lies.

When asked about his "confidence that Apple Intelligence will not hallucinate," Cook told the Washington Post that "it's not 100 percent."

Even Srinivas told Wired that we "have been very upfront that answers will not be accurate 100 percent of the time and may hallucinate."

The public struggle between the company and the producers of the content it "aggregates" spotlights a much larger debate surrounding the use of AI chatbots and their role on the internet: what actual value is added by an AI system that mashes up existing work, sometimes inserts errors, and then takes revenue away from its actual producer?

More on Perplexity: AI Search Engine Bungles Facts When Profiled by the New York Times