A new assessment from the British AI Safety Institute shows that an updated version of Anthropic's restricted-access "Mythos" AI model has significantly improved its ability to discover and exploit unknown software vulnerabilities. The results reinforce concerns that offensive cyber capabilities are developing faster than defensive systems and control policies can adapt.
Progress outpaces expectations
In a blog post titled "How fast are autonomous AI cyberattack capabilities evolving?", the institute reports that a newer checkpoint of the "Mythos Preview" model, obtained after the initial April evaluation, demonstrates an additional significant jump in efficiency.
In a simulated attack against a corporate network consisting of 32 sequential steps, the system successfully completed the scenario in 6 out of 10 attempts – compared to 3 out of 10 in previous testing. The institute emphasizes that "significant jumps in capabilities do not always require the release of entirely new models – later iterations of the same system can substantially change our assessments".
These findings build on the April assessment, according to which "Mythos" became the first AI system to independently execute an end-to-end attack simulation against a corporate network – a task which, by the institute's estimate, would take an experienced specialist about 20 hours of work.
Experts now estimate that leading offensive AI cyber capabilities double roughly every four months, down from about seven months at the end of 2025.
Unauthorized access and geopolitical ambitions
Against the backdrop of the model's growing power, questions are also rising about whether Anthropic is capable of keeping it under control. On the day of the April "Mythos" announcement, a group of unauthorized users managed to gain access to the system by exploiting a vulnerability in a third-party vendor environment, Bloomberg reports.
Anthropic states there is no evidence that the company's core systems were affected, but the incident highlights the risk that even tightly restricted models can slip beyond the boundaries set by their developers.
Separately, The New York Times reports that last month, at a meeting in Singapore organized as part of a Carnegie Endowment for International Peace event, a representative of a Chinese think tank asked Anthropic's leadership to grant Beijing access to "Mythos". The company refused. According to the publication, officials from the US National Security Council were notified of the conversation and reacted with concern.
The divide between attack and defense
The institute's new data highlights the growing asymmetry between offensive and defensive AI applications. In April, Anthropic launched the "Glasswing" project, through which it granted selected partners — including Apple, Microsoft, Google, Amazon, and Nvidia — early access to "Mythos" for defensive purposes.
However, security researchers warn that such an arrangement effectively creates a two-tiered system: organizations outside this narrow circle remain exposed to the model's offensive potential without being able to benefit from its defensive capabilities.
"When this technology becomes widely available — and Anthropic's own employees are talking about a horizon of six to eighteen months — the organizations that were already lagging will not just fall further behind," commented Spencer Whitman, Director of Product at the AI security company Gray Swan, in an interview with Fortune. "The model on which they built their programs will simply stop working."
What's next: regulation, ethics, and practical defense
The speed at which "Mythos" is improving its ability to mimic and automate complex cyberattacks calls into question whether regulators and corporate security systems can react in time. Two dilemmas stand out: how to restrict access to such models without blocking the development of legitimate defensive applications, and how to ensure that breaches like the unauthorized access via a third-party vendor do not become a channel for criminal or state actors.
Experts warn that the window for preparation is limited. If forecasts for mass adoption within a year to a year and a half come true, organizations that do not invest in modern cybersecurity systems and responsible AI usage policies risk not just falling behind, but becoming easy targets for the new generation of automated attacks.