An AI security tool identified a critical vulnerability in the Nethermind Ethereum client that could have affected nearly 40% of the network’s validators. The bug, which was never exploited, was patched after being reported through a bug bounty program. The discovery follows recent incidents highlighting both the risks and the promise of AI in crypto security, including a separate $2.7 million loss linked to AI-generated code.
An AI tool from security firm Octane Security discovered a high-severity bug in the Nethermind Ethereum client. The firm stated the bug was fixed before exploitation, preventing potential disruption to a large portion of Ethereum validators.
Giovanni Vignone, CEO of Octane Security, called the discovery a high-stakes demonstration of AI-led research. “AI has dramatically accelerated vulnerability research,” Vignone said, claiming the process can now run ten times faster.
This news follows Anthropic’s recent unveiling of an AI security tool that rattled cybersecurity stocks. The dual developments underscore the growing role of artificial intelligence in both creating and solving software security issues.
Concerns about AI-generated code were realized earlier this month when a bug cost users of the Moonwell protocol nearly $2.7 million. A Moonwell engineer said the flawed, AI-generated code had passed a security audit.
Octane’s AI, with its findings reviewed by a researcher known as Guhu, surfaced 17 issues in an audit contest for Ethereum’s Fusaka upgrade. The team earned over $70,000 for those findings, which included the critical Nethermind bug.
The specific vulnerability could have allowed an attacker to sabotage validators with a malformed transaction. According to the firm’s analysis, exploitation would have caused validators to miss rewards and would have degraded network performance.
For reporting the Nethermind bug, the Ethereum Foundation awarded Octane a $50,000 bounty. The company emphasized that using AI for security is becoming essential for competing against potential attackers.