Recent research from Anthropic and the Machine Learning Alignment & Theory Scholars (MATS) has revealed striking advancements in how artificial intelligence identifies vulnerabilities in blockchain-based smart contracts. Their findings highlight both the growing capabilities of modern AI systems and the increasing urgency for stronger AI-powered cybersecurity defenses.
AI Models Uncover Millions in Smart Contract Vulnerabilities
Anthropic’s red team, tasked with simulating attacker behavior, found that today’s leading AI models, including Claude Opus 4.5 and OpenAI’s GPT-5, are now capable of detecting and exploiting significant weaknesses in smart contracts. In tests on 2,849 newly deployed contracts, the models uncovered two previously unknown zero-day vulnerabilities, which together represented an estimated $3,694 in exploitable value.
Notably, the GPT-5 API usage for these tests cost $3,476, slightly less than the $3,694 the discovered exploits were worth, meaning the exploits could in theory offset the entire operational expense. This demonstrates not only the efficiency of AI-driven exploitation but also how low the barrier to entry for potential attackers has become.
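To make the economics concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures reported above; the variable names and the per-contract breakdown are our own illustration, not part of the study.

```python
# Back-of-the-envelope comparison of the reported figures.
# All dollar amounts are taken directly from the study as quoted above;
# the per-contract cost is our own derived estimate.

exploit_value_usd = 3_694   # estimated value of the two zero-day exploits found
api_cost_usd = 3_476        # reported GPT-5 API spend for the tests
contracts_scanned = 2_849   # newly deployed contracts examined

net_margin_usd = exploit_value_usd - api_cost_usd       # $218 theoretical surplus
cost_per_contract = api_cost_usd / contracts_scanned    # roughly $1.22 per contract

print(f"Net margin: ${net_margin_usd}")
print(f"Scanning cost per contract: ${cost_per_contract:.2f}")
```

On these assumptions, the value of the discovered exploits exceeds the API bill by a thin margin, and each contract cost only about a dollar to analyze, which is the core of the "decreasing barrier to entry" concern.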
SCONE Benchmark Shows Massive Potential Losses
The Smart Contracts Exploitation (SCONE) benchmark, a collection of 405 contracts exploited between 2020 and 2025, further revealed the scale of potential damage. The AI models successfully produced working exploits for 207 of these contracts, just over half of the benchmark, corresponding to an alarming $550.1 million in simulated losses. The study emphasized that the resources required to generate these exploits have dropped significantly, showing how rapidly AI capabilities in this domain are evolving.
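For perspective, here is a short sketch of the summary statistics these figures imply. It is derived only from the numbers quoted above; the success rate and per-exploit average are our own arithmetic, not values reported by the study.

```python
# Rough summary statistics implied by the SCONE figures quoted above.

benchmark_contracts = 405        # exploited contracts in SCONE (2020-2025)
exploits_reproduced = 207        # contracts for which models produced working exploits
simulated_losses_usd = 550.1e6   # total simulated losses across those exploits

success_rate = exploits_reproduced / benchmark_contracts           # about 51.1%
avg_loss_per_exploit = simulated_losses_usd / exploits_reproduced  # about $2.66 million

print(f"Exploit success rate: {success_rate:.1%}")
print(f"Average simulated loss per successful exploit: ${avg_loss_per_exploit:,.0f}")
```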
These findings make it clear that while AI is becoming a powerful tool for identifying vulnerabilities, it also increases the risk of automated exploitation. The need for robust, AI-enhanced defense systems is more urgent than ever.