Google counters hackers leveraging AI to target an undisclosed flaw in a company’s systems

Google Disrupts AI-Powered Cyberattack, Raising Alarm Over Security Risks

On Monday, Google announced that it had disrupted a criminal organization attempting to exploit an undisclosed digital vulnerability with the help of artificial intelligence, a significant development in the realm of cybersecurity.

This incident has escalated concerns among both governmental and private sectors regarding the burgeoning threats posed by AI in the domain of cybersecurity.

While specifics about the assailants and their intended target were limited, John Hultquist, the head analyst in Google’s threat intelligence division, underscored the gravity of the situation.

He indicated that this marks a pivotal moment long forewarned by cybersecurity experts: malicious actors now have AI at their disposal to significantly enhance their capabilities to infiltrate computer systems globally.

“It’s here,” Hultquist declared emphatically. “The era of AI-driven vulnerability and exploitation is upon us.”

This development coincides with remarkable advancements in AI’s proficiency in detecting vulnerabilities, highlighted by the recent introduction of the Mythos model by Anthropic.

Amidst this backdrop, the administration of President Donald Trump has re-evaluated its strategy regarding the scrutiny of powerful AI models prior to their public dissemination.

Having previously fulfilled a campaign pledge to dismantle certain regulatory frameworks established by Democratic President Joe Biden, the Trump administration now finds itself sending mixed signals about the appropriate level of governmental oversight of AI.

“There are divergent views on whether a regulatory response is warranted,” remarked Dean Ball, a senior fellow at the Foundation for American Innovation and former White House tech policy advisor. “I am not inherently a supporter of regulation,” he continued, “but in this context, it seems necessary.”

Google Identifies AI’s Role in Cyberattack

In its findings, Google indicated it had detected a group of capable “threat actors” orchestrating a sophisticated operation built on an exploit they had discovered.

This vulnerability enabled them to circumvent two-factor authentication and gain access to a widely used online system administration tool, which Google opted not to disclose.

Designated a zero-day exploit, this form of cyberattack leverages a previously unknown security flaw. The term “zero-day” denotes that defenders have had zero days to devise a remedy for the vulnerability.

Following the identification of the threat, Google proactively informed the impacted organization and law enforcement agencies, successfully averting potential damage.

In tracing the hackers’ digital tracks, the company uncovered evidence of their reliance on an AI large language model—akin to the underlying technology used in popular chatbots—to discover the vulnerability.

Google refrained from disclosing the specific AI model implicated in the cyberattack, saying it was likely neither its own proprietary Gemini nor Anthropic’s Claude Mythos.

The firm also withheld details about the group suspected of carrying out the attack, though there was no indication of ties to any hostile nation-state. Nonetheless, it noted that factions associated with China and North Korea have been investigating similar methodologies.

Hultquist emphasized that, unlike government operatives who typically operate with caution and restraint, cybercriminals stand to gain immensely from AI’s “remarkable capability for speed” in identifying and weaponizing security flaws.

“There exists a race between defenders and attackers, with the latter striving to obtain sensitive data for extortion or launch ransomware attacks,” he explained in an interview. “AI is poised to provide them a substantial advantage due to its rapid response capabilities.”

Anthropic’s Mythos Ignites Regulatory Discussions

Last week, the Trump administration’s Commerce Department announced newly forged agreements with tech titans Google, Microsoft, and Elon Musk’s xAI to scrutinize their most advanced AI models prior to public introduction. However, this announcement subsequently vanished from the Commerce Department’s website.

This incident illustrates the mixed messages emanating from the Trump administration since the unveiling of Anthropic’s Mythos, which was touted as a groundbreaking model with extraordinary capabilities in hacking and cybersecurity. Due to its potential implications, it was released only to a select cohort of trusted entities.

To address these emerging threats, Anthropic initiated Project Glasswing, consolidating efforts with major tech companies such as Amazon, Apple, Google, and Microsoft, as well as financial institutions like JPMorgan Chase, to safeguard critical software against the conceivable severe risks stemming from the new model.

However, its relationship with the U.S. government has been complicated by a legal and public confrontation with both the Pentagon and Trump regarding the military utilization of its AI capabilities.

Moreover, its chief competitor, OpenAI, has unveiled a similar model, announcing its intent to release a specialized cybersecurity variant of ChatGPT exclusively for “defenders responsible for securing critical infrastructure,” aimed at assisting them in identifying and rectifying code vulnerabilities.

Ball expressed optimism that, in the long term, AI tools with enhanced coding capabilities could bolster defenses against the frequent cyberattacks plaguing institutions such as hospitals and schools.

Nonetheless, he cautioned that an immense pool of software code—potentially totaling trillions of lines—is vulnerable if AI mechanisms are unleashed to exploit inherent flaws.

Strengthening this software may require years, a process Ball believes could benefit from proactive coordination on the part of the U.S. government.

Meanwhile, he anticipates a “transitional period” during which cybersecurity risks may escalate significantly, perhaps leading to a decidedly more perilous global landscape.

Source link: Audacy.com.

Reported By

Neil Hemmings

I'm Neil Hemmings from Anaheim, CA, with an Associate of Science in Computer Science from Diablo Valley College. As Senior Tech Associate and Content Manager at RS Web Solutions, I write about AI, gadgets, cybersecurity, and apps – sharing hands-on reviews, tutorials, and practical tech insights.