Anthropic won't fix a bug in its SQLite MCP server

Fork that - 5k+ times

Anthropic says it won't fix an SQL injection vulnerability in its SQLite Model Context Protocol (MCP) server that a researcher says could be used to hijack a support bot and prompt the AI agent to send customer data to an attacker's email, among other things.

MCP is an open-source protocol that Anthropic introduced in November 2024 to allow AI-based systems, like agents and large language models (LLMs), to connect to external data sources and interact with each other.

Anthropic's SQLite MCP server is a specific implementation of MCP that enables AI assistants – like the company's own Claude – to interact directly with SQLite databases. In theory, the tool makes it possible for users of the company’s AI assistant to query the database using natural language, analyze files, produce reports, and perform other tasks.

This new feature also has a security hole that "could affect thousands of AI agents," according to Trend Micro principal threat researcher Sean Park, who detailed the flaw and how an attacker could exploit it to hijack a support bot in a Tuesday post.

In this case, the old-fashioned SQL injection flaw – which allows an attacker to inject malicious code and interfere with a database query – opens a new path to a prompt injection attack that allows miscreants to manipulate AI agents and steal sensitive data.

Trend says its researchers disclosed the bug to Anthropic on June 11.

Anthropic replied that because the GitHub repository containing the flawed code was archived on May 29, the vulnerability is considered "out of scope," and won't be patched, according to the discussion in GitHub issue #1348.

But prior to being archived, GitHub users forked or copied the vulnerable SQLite MCP server more than 5,000 times.

Since it won't be patched, any new agents that use this server will be vulnerable too, which would mean this is a threat that remains persistent forever

According to Trend, all those copies and forks represent a major supply-chain risk to everyone who used the flawed server before Anthropic archived it, and to anyone running a forked version.

"Since it won't be patched, any new agents that utilize this server will be vulnerable too, which would mean this is a threat that remains persistent forever unless patched," Trend VP of threat intelligence Jon Clay told The Register. "All those projects that forked it would be vulnerable unless they patch the issues themselves."

An Anthropic spokesperson told The Register that the AI company disagrees with Trend's analysis. They didn't dispute the vulnerability reporting timeline or that it won't be fixed, and sent us the following statement:

The referenced repository is community-maintained and contains many different MCP servers that demonstrate the flexibility of the protocol. This particular example server is designed to run dynamically generated SQL queries. The MCP specification recommends human oversight for this type of tool – there should always be a human in the loop with the ability to deny tool invocations, meaning users would review these queries before execution.

How to hijack a support bot

According to Trend, Anthropic’s code parses user input badly. "It directly concatenates unsanitized user input into an SQL statement which is then later executed by Python's sqlite3 driver — without filtering or validation," Park explained.

This could allow an attacker to embed malicious queries into the system with an SQL statement. In its example exploit, Trend used a faked support ticket to hijack a support bot.

The flaw causes the agent to store malicious prompt in "open" status, bypassing any safety guardrail that triages pending tickets against prompt injection.

A support agent or bot, via the AI agent, then reads the "open" ticket containing the malicious prompt, treats it as a valid issue, and conducts prompted instructions to fetch data that the safety guardrails should prohibit it from accessing.

In the Trend example, this instruction is: send customer data (customer.csv) to an attacker's email (attacker@evil.com).

"Because the email MCP tool operates with elevated privileges and the triage workflow blindly trusts any ticket marked 'open,' a single injected prompt combined with stored prompt injection can compromise the system through lateral movement or data exfiltration," Park wrote.

Shannon Murphy, Trend's senior manager of global security and risk strategy, gave The Register a hypothetical scenario that could occur in the shipping and logistics industry.

"The shipping and logistics company is using a specific software for fulfillment, and that software vendor uses the MCP server in question to bring critical data to their agents, such as CRM, inventory, pricing, etc." Murphy said.

"If that server is compromised, this could affect shipment routing, influence delays, exfiltrate data, and hand attackers the keys to the agent workflow without actually breaching the logistics company directly," she continued. "Upstream compromise influencing downstream disruption."

Murphy offered a few recommendations to companies to avoid falling victim to this type of SQL-injection-turned-prompt-injection attack.

"Inventory your AI assets," she said. "Understand the bill of materials for all the resources in your AI system, secure the tool layer (MCP server), and continuously monitor for vulnerabilities and misconfigurations associated with those assets."

She also recommends monitoring AI's behavior as you would a human employee, setting a baseline for normal/good behavior and then alerting a human if you detect unexpected actions. ®

More about

TIP US OFF

Send us news


Other stories you might like