The EU unit responsible for overseeing AI technologies is struggling to assess Anthropic’s new elite hacking AI model, Claude Mythos Preview, due to insufficient access to the technology and a lack of specialized experts. This model is considered a significant advancement in AI-driven vulnerability discovery and ethical hacking capabilities, yet Anthropic has chosen to withhold it from public release, sharing it only with select vetted organizations. The situation raises concerns about the EU AI Office’s ability to regulate high-risk AI models effectively amidst a potential cybersecurity crisis.

Anthropic: An AI safety and research company that develops the Claude family of large language models with a focus on reliability, interpretability, and steerability. The company recently unveiled Claude Mythos Preview, a specialized model that excels at cybersecurity testing by autonomously discovering vulnerabilities in major operating systems and web browsers. The model is central to the news because the EU unit responsible for scrutinizing it lacks both access to the technology and the necessary expertise, raising oversight concerns.

```json
{
  "AI Model": "Claude Mythos Preview is an advanced AI model focused on ethical hacking and vulnerability discovery.",
  "Regulation": "The EU AI Office encounters difficulties in assessing high-risk AI technologies due to inadequate technical access and a lack of specialized personnel.",
  "Cybersecurity": "Anthropic considers Mythos a significant development in the industry, limiting its release to selected organizations."
}
```