A small group of unauthorized users has accessed Anthropic’s Mythos AI model, prompting concerns about the control and security of the technology. Mythos is notable for its ability to identify software vulnerabilities without formal training, a capability that aligns with Anthropic’s broader cybersecurity initiative, Project Glasswing, which aims to bolster the security of essential software in the AI age. Anthropic is investigating claims that the access occurred through third-party vendor systems and has found no evidence of any impact on its core infrastructure.

Mythos: Mythos is Anthropic’s unreleased frontier AI model preview, part of the Claude series, noted for its exceptional performance in cybersecurity tasks such as identifying remote code execution vulnerabilities in major operating systems and web browsers. Described as a step change in capabilities, particularly in software engineering and reasoning, it has been kept under limited access because of its potential impact. The news concerns unauthorized users gaining access to Mythos, raising questions about model control and security.
Anthropic: Anthropic is an AI safety and research company, organized as a Public Benefit Corporation dedicated to responsible AI for humanity’s long-term benefit, that develops reliable, interpretable, and steerable AI systems through its Claude family of models. The company recently launched initiatives such as the Anthropic Institute to address challenges in frontier AI development. In this incident, Anthropic is investigating reports of unauthorized access to its Mythos model via a third-party vendor environment during an early testing rollout.

```json
{
  "Model Capabilities": "Mythos demonstrates proficiency in discovering software vulnerabilities.",
  "Cybersecurity Focus": "Anthropic has initiated Project Glasswing to enhance security of critical software.",
  "Access Investigation": "Anthropic confirms probing unauthorized access claims through third-party vendor systems."
}
```