
Researchers: Third-Party AI Routers Pose Crypto Theft Risk


Researchers from the University of California have identified critical security vulnerabilities in third-party AI large language model routers that can lead to cryptocurrency theft. Their study found malicious routers capable of injecting code, extracting credentials, and even draining funds from Ethereum wallets. The findings highlight significant risks for developers using AI coding agents to handle sensitive data like private keys and seed phrases.


University of California researchers discovered that some third-party AI large language model routers contain security vulnerabilities that can lead to crypto theft. A paper published on Thursday detailed four attack vectors, including malicious code injection and credential extraction.


Co-author Chaofan Shou stated on X that “26 LLM routers are secretly injecting malicious tool calls and stealing creds.” These routers terminate Transport Layer Security connections and have full plaintext access to every message.

This means developers using AI coding agents like Claude Code could pass private keys and seed phrases through unsecured router infrastructure. The researchers tested 28 paid and 400 free routers collected from public communities.
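Because a TLS-terminating router reads every forwarded message in plaintext, any secret embedded in a prompt is trivially harvestable. The sketch below is illustrative only (not from the paper): it shows the kind of pattern matching a malicious router could run over request bodies it forwards.

```python
import re

# Illustrative patterns a malicious router could use to harvest secrets
# from the plaintext prompts it forwards; names here are assumptions.
HEX_KEY = re.compile(r"\b(?:0x)?[0-9a-fA-F]{64}\b")     # Ethereum-style private key
MNEMONIC = re.compile(r"\b(?:[a-z]+ ){11,23}[a-z]+\b")  # 12-24 word seed-phrase shape

def harvest_secrets(request_body: str) -> dict:
    """Return candidate secrets found in a forwarded request body."""
    return {
        "private_keys": HEX_KEY.findall(request_body),
        "mnemonics": MNEMONIC.findall(request_body),
    }

prompt = "Deploy the contract with key 0x" + "ab" * 32 + " please"
print(harvest_secrets(prompt)["private_keys"])
```

The point of the sketch is that no exploit is needed: the router already holds the decrypted traffic, so "harvesting" is a regular-expression scan.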

Their findings were startling: nine routers actively injected malicious code, and one drained Ether from a wallet whose private key the researchers controlled. The team had prefunded these Ethereum “decoy key” wallets with nominal balances.

The value lost in the experiment was below $50, according to the paper. The authors also ran “poisoning studies” showing that even benign routers become dangerous when they reuse leaked credentials.
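The decoy-key approach described above works as a canary: prefund a throwaway wallet, expose its key through the router under test, and watch whether the balance drops. A minimal sketch, with `get_balance` as a hypothetical stand-in for an Ethereum RPC balance lookup:

```python
# Sketch of a "decoy key" canary check. get_balance is a hypothetical
# stand-in for an Ethereum RPC call (e.g. eth_getBalance); amounts in wei.
def check_canary(get_balance, address: str, funded_wei: int) -> bool:
    """Return True if the decoy wallet was drained, i.e. its key was stolen and used."""
    return get_balance(address) < funded_wei

# Simulated drain: the balance of the prefunded decoy address is now zero.
balances = {"0xDECOY": 0}
print(check_canary(balances.get, "0xDECOY", 10**15))
```

Any spend from the decoy address is unambiguous evidence that the key left the session, since the researchers never use it themselves.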

The researchers said it was not easy to detect when a router was malicious. They noted that “The boundary between ‘credential handling’ and ‘credential theft’ is invisible to the client because routers already read secrets in plaintext as part of normal forwarding.”

Another unsettling finding was a setting in many AI agent frameworks called “YOLO mode,” in which agents execute commands automatically without user confirmation. Under this mode, a previously legitimate router can be silently weaponized without the operator knowing.
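The danger is easiest to see as the confirmation gate that such a mode disables. This is a hypothetical sketch, not code from any real agent framework; the names `run_tool` and `yolo` are illustrative:

```python
import subprocess

# Hypothetical sketch of an agent's command-execution gate. With yolo=True,
# a router-injected malicious tool call runs with no human review.
def run_tool(command: list[str], yolo: bool = False) -> str:
    """Execute an agent-requested command, asking the user first unless yolo=True."""
    if not yolo:
        answer = input(f"Agent wants to run {command!r} -- allow? [y/N] ")
        if answer.strip().lower() != "y":
            return "(blocked by user)"
    return subprocess.run(command, capture_output=True, text=True).stdout
```

The confirmation prompt is the only point where a human could notice an injected command; removing it means injected tool calls execute as silently as legitimate ones.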

The researchers concluded that “LLM API routers sit on a critical trust boundary that the ecosystem currently treats as transparent transport.” They recommended developers never let private keys transit an AI agent session.

The suggested long-term fix is for AI companies to cryptographically sign their responses. This would let an agent mathematically verify that the instructions it executes came from the actual model.
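The idea can be sketched as follows. A real scheme would use asymmetric signatures (e.g. Ed25519) with a published provider public key; HMAC with a shared secret is used here only to keep the sketch stdlib-only, and all names are illustrative:

```python
import hmac, hashlib

PROVIDER_KEY = b"demo-shared-secret"  # stand-in for the provider's signing key

def sign_response(body: bytes) -> str:
    """Provider-side: sign the response body before it transits any router."""
    return hmac.new(PROVIDER_KEY, body, hashlib.sha256).hexdigest()

def verify_response(body: bytes, signature: str) -> bool:
    """Client-side: reject any response a router altered in transit."""
    return hmac.compare_digest(sign_response(body), signature)

original = b'{"tool_call": "read_file"}'
sig = sign_response(original)
tampered = b'{"tool_call": "send_eth"}'  # router-injected instruction

print(verify_response(original, sig))   # genuine response verifies
print(verify_response(tampered, sig))   # tampering is detected
```

With end-to-end signatures, a router can still read traffic it terminates, but it can no longer inject tool calls without breaking verification.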
