Major Tech Companies Collaborate with Government to Enhance Cybersecurity
(CNN) — Google, Microsoft, and xAI will provide early, unreleased versions of their artificial intelligence models to the government to help address cybersecurity vulnerabilities, the National Institute of Standards and Technology announced.
The collaboration follows last month's introduction of Anthropic's powerful Mythos AI model, which heightened concerns about AI's implications for cybersecurity and prompted the White House to consider a formal evaluation process for artificial intelligence.
The new agreements allow the Center for AI Standards and Innovation (CAISI) within the U.S. Department of Commerce to evaluate new AI models and their potential effects on national security and public safety before they are released.
CAISI will also research and test AI models after deployment; it has already conducted more than 40 evaluations in the field.
“Independent and rigorous measurement science is pivotal for comprehending frontier AI and its ramifications for national security,” stated CAISI Director Chris Fall. “These enhanced partnerships with the industry augment our capacity to serve the public interest during this crucial juncture.”
Mythos, which Anthropic says is “significantly advanced” compared with rival models in its cybersecurity capabilities, has raised concerns among government agencies, financial institutions, and utilities over the past month.
The company said it was reluctant to release the model publicly and has limited access to a small group of approved organizations. Anthropic has also briefed senior U.S. government officials on the model’s capabilities.

OpenAI said last week that it would extend access to its most advanced AI models to vetted government customers, aiming to preempt AI-related threats.
The partnerships are expected to bolster CAISI’s testing efforts by providing additional resources, said Jessica Ji, senior research analyst at Georgetown’s Center for Security and Emerging Technology.
“They lack the abundant resources (including personnel and technical expertise) that large tech firms possess, hindering their ability to rigorously test these models,” she said of government evaluators.
The White House is consulting a group of experts about a potential government review process for new AI models, CNN has confirmed. The initiative marks a departure from the previous administration’s hands-off approach to AI regulation.
The New York Times first reported on the formation of this working group on Monday.
“Any policy announcements will come directly from the President. Speculation regarding potential executive orders is unfounded,” a White House spokesperson told CNN.

While Microsoft routinely evaluates its own models, CAISI brings additional “technical, scientific, and national security expertise” to the process, Microsoft Chief Responsible AI Officer Natasha Crampton said in a statement.