Google Resolves Vulnerability in AI Coding Tool That Allowed Malicious Code Execution, According to Reports


In Brief

  • Researchers have identified a prompt injection vulnerability within Google’s Antigravity AI programming platform.
  • This flaw may permit attackers to execute commands, even when the platform’s Secure Mode is activated.
  • Google fixed the issue on February 28, after researchers disclosed it in January, according to Pillar Security.

Google has patched a security vulnerability in its Antigravity AI development platform that, according to cybersecurity researchers, could let malicious actors remotely execute commands on a developer’s machine via a prompt injection attack.

A report from Pillar Security explains that the flaw lay in Antigravity’s find_by_name file search tool, which passed user input to a command-line utility without adequate validation.

That oversight allowed crafted input to turn a routine file search into command execution, opening the door to remote code execution.
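The report does not publish Antigravity’s internal code, but the general pattern it describes — user input interpolated into a shell string versus passed as a plain argument — can be sketched as follows. The function names and the `find` command here are illustrative assumptions, not the tool’s actual implementation:

```python
import subprocess

def find_by_name_unsafe(pattern: str) -> str:
    """Vulnerable sketch: interpolating user input into a shell string
    lets metacharacters like ';' break out and chain extra commands."""
    return subprocess.run(
        f"find . -name '{pattern}'",  # pattern reaches the shell unescaped
        shell=True, capture_output=True, text=True,
    ).stdout

def find_by_name_safe(pattern: str) -> str:
    """Safer sketch: passing arguments as a list bypasses the shell,
    so the pattern is only ever treated as data, never as syntax."""
    return subprocess.run(
        ["find", ".", "-name", pattern],
        capture_output=True, text=True,
    ).stdout
```

A crafted "search term" such as `*' ; sh ./payload.sh ; echo '` would, in the unsafe variant, close the quoted pattern and run a planted script; in the safe variant the same string is merely a filename pattern that matches nothing.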

Pillar Security researchers wrote, “Coupled with Antigravity’s capability to create files as a permissible action, this results in a comprehensive attack chain: establish a malicious script, then activate it through an ostensibly legitimate search, all without necessitating further user interaction once the prompt injection is executed.”

Launched last November, Antigravity is Google’s AI-assisted development environment, designed to help programmers write, test, and manage code with the support of autonomous software agents.

Pillar Security disclosed the vulnerability to Google on January 7; Google acknowledged the report the same day and declared the issue resolved by February 28.

Google did not immediately respond to Decrypt’s request for comment.

Prompt injection attacks occur when hidden instructions embedded in content cause an AI system to perform unintended actions.

Because AI tools routinely process external files and text, the AI may treat such embedded instructions as legitimate commands, giving an attacker a way to trigger actions on a user’s machine without direct access or further interaction.

The threat of prompt injection attacks against large language models drew renewed scrutiny last summer, after OpenAI, the maker of ChatGPT, warned that its new ChatGPT agent could be compromised.

OpenAI wrote in a blog post, “Upon signing the ChatGPT agent into websites or enabling connectors, it will obtain access to sensitive data from those sources, such as emails, files, or account information.”

To demonstrate the Antigravity vulnerability, the researchers created a test script inside a project workspace and triggered it through the search tool.

Running the script launched the computer’s calculator application, showing that the search function could be repurposed as a command execution mechanism.

The report highlighted a critical point: “This vulnerability circumvents Antigravity’s Secure Mode, the product’s most stringent security setting.”

The findings point to a broader security problem facing AI-assisted development tools, which increasingly execute tasks autonomously.


Pillar Security stressed that the industry must shift from sanitization-based controls to execution isolation.

Each native tool parameter that interfaces with a shell command constitutes a potential injection point. Evaluating for this category of vulnerability is no longer optional; it is essential for delivering agentic features securely.
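Execution isolation, in this context, means running tool commands in a constrained environment rather than trusting input filtering to catch every injection. A minimal sketch of the idea — argument vectors instead of shell strings, a throwaway working directory, a stripped environment, and a hard timeout. This is an illustrative assumption of what such a wrapper might look like; real agent platforms would add OS-level sandboxing such as containers or seccomp:

```python
import subprocess
import tempfile

def run_isolated(argv: list[str], timeout: float = 5.0) -> str:
    """Run a tool command with basic isolation measures applied."""
    with tempfile.TemporaryDirectory() as scratch:
        result = subprocess.run(
            argv,                            # argument vector, never a shell string
            cwd=scratch,                     # no default access to the project tree
            env={"PATH": "/usr/bin:/bin"},   # minimal environment
            capture_output=True, text=True,
            timeout=timeout,                 # bound runaway processes
        )
    return result.stdout
```

Even if injected text reaches `argv`, it is delivered to the program as inert arguments, and the process has little of value to act on inside its scratch directory.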

Source link: Decrypt.co.


Reported By

Souvik Banerjee
