Wednesday, Dec 3, 2025, 07:10 · 3 min read
Anthropic, the developer of the Claude family of large language models, has released the results of a concerning experiment: artificial intelligence (AI) is now capable of autonomously attacking smart contracts, with the potential for profitable and repeatable exploits. This development raises fundamental questions about the future of blockchain security and how to prepare for this growing threat.
Anthropic constructed the Smart Contract Exploitability Benchmark (SCONE-bench), the first benchmark to measure AI agents' ability to exploit vulnerabilities by simulating the theft of funds. The benchmark comprises 405 smart contracts that were actually attacked between 2020 and 2025 on the Ethereum, BSC, and Base chains. For each target contract, an AI agent running in a sandboxed simulation was given access to tools through the Model Context Protocol (MCP) and 60 minutes to attempt an exploit.
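For intuition, here is a minimal sketch of what such an evaluation loop could look like. This is an illustration under assumptions, not Anthropic's actual harness; names such as Target and run_agent_attempt are hypothetical.

```python
import time
from dataclasses import dataclass

@dataclass
class Target:
    address: str     # contract address on the forked chain
    chain: str       # "ethereum", "bsc", or "base"
    fork_block: int  # block height just before the real-world attack

def run_agent_attempt(target: Target, deadline: float) -> float:
    """Stub for the agent loop (hypothetical). In the real harness the model
    would call MCP tools (inspect contract state, craft transactions, run
    exploit scripts) repeatedly until the deadline; here we simply time out.
    Returns simulated profit in USD, where 0.0 means no exploit was found."""
    while time.monotonic() < deadline:
        time.sleep(min(0.1, max(0.0, deadline - time.monotonic())))
    return 0.0

def evaluate(targets: list[Target], budget_s: float = 3600.0) -> float:
    """Give each contract its own fixed time budget and sum simulated theft."""
    total = 0.0
    for t in targets:
        deadline = time.monotonic() + budget_s  # 60-minute budget per contract
        total += run_agent_attempt(t, deadline)
    return total

if __name__ == "__main__":
    demo = [Target("0x0000000000000000000000000000000000000000", "ethereum", 0)]
    print(f"Simulated theft across targets: ${evaluate(demo, budget_s=1.0):,.2f}")
```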
Ten AI models, including Llama 3 and GPT-4o, were tested on all 405 benchmark contracts. Together they generated working exploit scripts for 207 contracts (51.11%), for a simulated theft of $550.1 million. To control for potential data contamination, 34 contracts attacked after March 1, 2025 (the latest knowledge cut-off date for the tested models) were evaluated separately: Opus 4.5, Sonnet 4.5, and GPT-5 successfully exploited 19 of them (55.88%), for a simulated theft of $4.6 million.
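The headline percentages follow directly from those counts; a quick check:

```python
# Reproducing the success rates from the counts reported above.
overall = 207 / 405    # exploit scripts for 207 of the 405 contracts
post_cutoff = 19 / 34  # 19 of the 34 post-cutoff contracts exploited
print(f"overall: {overall:.2%}, post-cutoff: {post_cutoff:.2%}")
# overall: 51.11%, post-cutoff: 55.88%
```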
To assess the models' ability to discover new zero-day vulnerabilities, Sonnet 4.5 and GPT-5 evaluated 2,849 recently deployed contracts with no known vulnerabilities. Each agent discovered two new zero-day vulnerabilities and produced attack solutions worth $3,694, against a GPT-5 API cost of $3,476. This demonstrates that profitable, repeatable autonomous attacks using AI are technically feasible.
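The margin is thin but positive. A quick check of the economics, using only the figures above:

```python
# Net profitability of the GPT-5 zero-day scan, per the article's figures.
proceeds = 3694.0  # simulated value of the attack solutions, USD
api_cost = 3476.0  # GPT-5 API spend, USD
print(f"net: ${proceeds - api_cost:.2f}, ROI: {proceeds / api_cost - 1:.1%}")
# net: $218.00, ROI: 6.3%
```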
In just one year, the proportion of benchmark vulnerabilities that AI could exploit rose from 2% to 55.88%, and the funds that could be stolen jumped from $5,000 to $4.6 million. Furthermore, the value of potentially exploitable vulnerabilities is doubling roughly every 1.3 months, while token costs are falling by approximately 23% every two months. Currently, the average cost for an AI agent to conduct an exhaustive vulnerability scan of a smart contract is only $1.22.
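Taken at face value, those two curves compound quickly. Below is a sketch of the implied trajectory, treating both trends as smooth exponentials (an assumption; the article gives only the rates):

```python
# Projecting the cited trends: exploit value doubling every 1.3 months,
# token cost falling ~23% every 2 months (both modeled as exponentials).
def exploit_value_multiple(months: float) -> float:
    return 2 ** (months / 1.3)

def scan_cost_usd(months: float, c0: float = 1.22) -> float:
    return c0 * 0.77 ** (months / 2)

for m in (0, 6, 12):
    print(f"month {m:2d}: value x{exploit_value_multiple(m):7.1f}, "
          f"scan cost ${scan_cost_usd(m):.2f}")
# month  0: value x    1.0, scan cost $1.22
# month  6: value x   24.5, scan cost $0.56
# month 12: value x  600.7, scan cost $0.25
```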
Anthropic estimates that more than half of the real-world blockchain attacks in 2025, presumably carried out by skilled human attackers, could have been executed entirely autonomously by existing AI agents. With costs falling and capabilities compounding, the window in which developers can detect and patch vulnerabilities after deploying contracts on-chain will keep shrinking. Blockchain cybersecurity needs an update: it is time to use AI to defend smart contracts.