Anthropic Reveals Claude Mythos: A New Frontier in Cybersecurity
On Tuesday, Anthropic announced Claude Mythos, an upcoming artificial intelligence model that has shown remarkable proficiency at identifying software vulnerabilities.
The model has uncovered thousands of unpatched flaws in widely used applications, prompting the San Francisco-based startup to collaborate with cybersecurity experts to strengthen defenses against potential hacking threats.
Mike Krieger from Anthropic Labs disclosed at the HumanX AI conference in San Francisco, “We have a new model that we’re explicitly not releasing to the public.”
Instead of a public launch, Anthropic is permitting cybersecurity professionals and engineers within the open-source community to engage with Mythos, using the model as a defensive instrument. “This approach essentially arms them ahead of time,” Krieger elaborated.
The advancements in AI model capabilities have raised alarms regarding malicious actors employing such technologies to decipher passwords or penetrate encryption that safeguards sensitive data.
Notably, the oldest vulnerabilities surfaced by Mythos date back 27 years; according to Anthropic, none had been detected by the software's original developers before the AI model flagged them.
Mythos marks the latest iteration in Anthropic’s Claude AI series, and a recent leak of some of its underlying code has prompted the company to issue a blog post emphasizing unprecedented cybersecurity hazards.
In its blog post, Anthropic wrote, "AI models have achieved a level of coding capability where they can outperform all but the most adept humans in discerning and exploiting software vulnerabilities." The ramifications of such developments for economies, public safety, and national security could be dire.
According to Anthropic, the vulnerabilities unveiled by Mythos were often nuanced and difficult to detect without the aid of AI. For instance, the model identified a previously overlooked flaw in video software that its developers had tested more than five million times.
Project Glasswing
As a precautionary measure, Anthropic has shared a version of Mythos with cybersecurity firms such as CrowdStrike and Palo Alto Networks, in addition to tech giants Amazon, Apple, and Microsoft, under the initiative known as “Glasswing.”
Networking powerhouses Cisco and Broadcom, along with the Linux Foundation—an entity that advocates for the free and open-source Linux operating system—are also collaborating on this project.
Anthony Grieco, Cisco’s chief security and trust officer, stated in a joint release, “This work is too important and too urgent to pursue individually.”
He emphasized that “AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no turning back.”
Approximately 40 organizations involved in the design, maintenance, or operation of computer systems are reportedly participating in Glasswing.
According to Anthropic, project partners will exchange their insights gleaned from Mythos, with Anthropic providing around $100 million worth of computing resources to facilitate the initiative.
Preliminary efforts utilizing AI models have demonstrated their potential to discover and rectify software and hardware vulnerabilities at previously unattainable speeds and scales.
“The interval between the discovery of a vulnerability and its exploitation by an adversary has dramatically contracted—what once took months can now occur within minutes thanks to AI,” stated Elia Zaitsev, chief technology officer at CrowdStrike.
“The Claude Mythos Preview illustrates what is now achievable for defenders at scale, while adversaries are likely to seek to exploit these same capabilities.”

Furthermore, Anthropic has discussed Mythos with the US government, despite a White House directive issued in February that annulled all contracts with the startup.
That mandate is currently on hold under a federal judge's ruling, pending the outcome of a legal challenge brought by Anthropic.