Sam Altman has accused Anthropic of employing “fear-based marketing” in promoting its AI model Claude Mythos, claiming the company exaggerates the risks associated with artificial intelligence. The accusation comes amid ongoing tensions between the leadership of OpenAI and Anthropic, particularly over competing claims about model capabilities and safety. While Claude Mythos has shown strong performance on cybersecurity tasks, including vulnerability discovery, its public release has been delayed over safety concerns, further fueling debate over whether warnings about AI dangers reflect genuine caution or are merely strategic promotion.

Anthropic: Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems through its Claude family of models. It recently announced Claude Mythos Preview, described as its most capable frontier model, with exceptional cybersecurity capabilities; the company has withheld it from public release over concerns about potential misuse. Sam Altman accused Anthropic of using fear-based marketing to promote Mythos while overstating the associated AI risks.
Sam Altman: Sam Altman is the CEO of OpenAI, where he leads efforts to develop advanced generative AI technologies. He has been outspoken on AI governance and the pace of innovation in the field. In recent comments, he criticized Anthropic’s promotion of Claude Mythos as relying on exaggerated fears to market the model.

AI Rivalry: Tensions between OpenAI and Anthropic leaders surface in public critiques over model capabilities and safety claims.
Claude Mythos: Anthropic’s latest frontier model excels in cybersecurity tasks like vulnerability discovery but remains unreleased publicly over safety concerns.
Safety Marketing: Debate continues over whether warnings about powerful AI models constitute legitimate caution or promotional fear tactics.