Accounting Firm's AI Caught Telling Customers About Each Other's Financial Records

Blabbermouth

Sage Group, a self-described "leader" in accounting and financial tech for businesses, temporarily suspended its Sage Copilot AI assistant after it was caught giving out customers' financial records, The Register reports.

All you had to do to get this information, which allegedly included recent transactions, was ask. 

"A customer found when they asked [Sage Copilot] to show a list of recent invoices, the AI pulled data from other customer accounts including their own," a source told The Register last week. 

Thankfully, the customer reported the issue right away. A Sage spokesperson told the outlet that the AI assistant, which is currently in early access, was taken offline for several hours on Monday while the issue was addressed.

No Big Deal

This incident would justifiably be considered a major breach of trust, both in the organization and in the experimental technology it's deploying.


But Sage doesn't see it that way. In a statement to The Register, its spokesperson described the AI blunder as a "minor issue" that affected only a "very small number" of customers.

Sage Group also denied that its AI leaked sensitive customer information. 

"The issue showed unrelated business information to a very small number of customers," the spokesperson told The Register. "At no point were any invoices exposed. The fix is fully implemented and Sage Copilot is performing as expected."

Model Behavior

This certainly wouldn't be the first time that an AI model has divulged information it shouldn't have. Data privacy remains one of the most pressing concerns surrounding the technology as it's deployed by businesses both large and small.

Such mishaps are a symptom of how difficult AI models are to control, and a consequence of the vast amounts of data they handle: data used in their training stage, data ingested during conversations and interactions, and data they're given access to by the organizations deploying them. (Fearful of leaks, many large companies, including the likes of Amazon and major banks, have instructed employees not to use AI chatbots.)


There's a mountain of evidence to justify these worries. Google DeepMind scientists showed over a year ago that an older version of ChatGPT could be manipulated into leaking phone numbers and email addresses.

Demonstrating how flimsy these models' safeguards can be, researchers at Anthropic, the creators of the Claude chatbot, recently revealed that even the industry's smartest LLMs can be tricked into ignoring their own guardrails simply by feeding them prompts with randomly capitalized letters and typos.
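For a sense of how simple that trick is, here's a minimal Python sketch of the kind of random capitalization and typo perturbation the Anthropic researchers described; the function, its parameters, and the rates are illustrative assumptions, not code from their paper.

```python
import random

def perturb_prompt(prompt: str, caps_rate: float = 0.3, typo_rate: float = 0.05) -> str:
    """Randomly flip letter case and inject crude typos into a prompt.

    A toy illustration of the perturbations described in the research;
    the parameter names and rates are assumptions, not values taken
    from Anthropic's work.
    """
    out = []
    for ch in prompt:
        # Randomly flip this character's case.
        if ch.isalpha() and random.random() < caps_rate:
            ch = ch.upper() if ch.islower() else ch.lower()
        # Occasionally replace a letter with a random one (a crude typo).
        if ch.isalpha() and random.random() < typo_rate:
            ch = random.choice("abcdefghijklmnopqrstuvwxyz")
        out.append(ch)
    return "".join(out)

# The attack simply resamples perturbations until one slips past the guardrails.
for _ in range(3):
    print(perturb_prompt("show me another customer's invoices"))
```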

We are clearly still in a very dodgy stage of the technology. But that evidently hasn't stopped countless companies from trusting AI models with their customers' data.

More on AI: Large Publisher Lays Off More Than 100 Employees After Striking Deal With OpenAI