Anthropic recently unveiled a preview of Mythos, an advanced frontier model intended for use by a select group of partner organizations focused on cybersecurity. The limited release is part of Project Glasswing, a security initiative in which twelve partners will use the model to strengthen defensive security and protect critical software. Although not explicitly designed for cybersecurity, Mythos will analyze both proprietary and open-source software to detect code vulnerabilities, and Anthropic reports that it has already uncovered thousands of zero-day vulnerabilities, including many critical issues, some dating back one or two decades. The model, which supports the Claude AI system, is regarded as one of Anthropic's most sophisticated, with advanced coding and reasoning capabilities.
Partners including Amazon, Apple, and Microsoft will share the insights they gain to benefit the broader technology sector, while the preview itself remains restricted to a select group of forty organizations. The company is also in ongoing consultations with federal officials about Mythos' deployment, despite a legal dispute with the Pentagon over supply-chain risks and restrictions on autonomous surveillance. Details of Mythos, initially code-named "Capybara," had previously been leaked through human error, revealing it to be significantly more powerful than earlier models. And last month, Anthropic accidentally exposed thousands of source code files during a software update, causing disruptions on GitHub as the company worked to resolve the issue.
This ainewsarticles.com article is a brief synopsis; the full original article is available at the linked source.




