The U.S. Treasury Department’s AI Guidance Initiative
The U.S. Treasury Department has begun releasing guidance to help financial services firms adopt artificial intelligence securely while remaining compliant with regulatory requirements.
“Treasury will unveil a series of six resources crafted collaboratively with industry stakeholders alongside federal and state regulatory partners, aimed at promoting secure and resilient AI throughout the U.S. financial landscape,” the agency said Wednesday.
On Thursday, the Treasury released the first two resources in the series: an AI lexicon that clarifies key terminology, “emphasizing frequently used terms that possess specific meanings within the context of AI deployment in the financial sector,” and a version of the National Institute of Standards and Technology’s AI Risk Management Framework tailored to financial services.
The latter document includes a questionnaire for assessing a firm’s AI maturity, a matrix mapping AI-related risks to available security controls, and guidance on implementing those controls.
The Treasury said the resources grew out of discussions among state and federal regulators, financial executives, and other key stakeholders in the department’s Artificial Intelligence Executive Oversight Group (AIEOG).
The Financial Services Sector Coordinating Council partnered with a similar organization to establish the group.
Cory Wilson, the Treasury’s deputy assistant secretary for cybersecurity and critical infrastructure protection, said the resources are designed to empower small and medium-sized financial institutions “to leverage the capabilities of AI to bolster cyber defenses and implement AI solutions with greater security.”
AIEOG members worked across several work streams, including governance, fraud prevention, identity management, and transparency, to inform the Treasury’s recommendations.
“By prioritizing practical implementation over prescriptive mandates,” the Treasury emphasized, “the resources aim to enable financial institutions to adopt AI with increased confidence and security, thereby enhancing resilience and cybersecurity while fostering innovation across the sector.”
Addressing a Growing Demand Amid Risks
The Treasury’s resources arrive as the financial services sector’s appetite for AI automation grows.
Banks want to deploy AI for fraud prevention, insurers want to use it for risk assessment, and securities markets want to harness it to analyze transactions.
Roughly one-third of the functions performed by banks, insurers, and capital markets firms have “high potential for full automation” via AI, according to a January 2025 World Economic Forum report.
The technology also carries substantial risks. Flawed AI models could expose sensitive financial data, and biased algorithms could entrench systemic discrimination.
Errant AI could also destabilize fast-moving, interconnected markets. If many institutions rely on the same AI models, the result could be “synchronized market movements and amplified volatility patterns that transcend traditional algorithmic trading risks,” according to a September 2025 RAND Corporation report.
“As AI systems become more advanced, they may exhibit behaviors that are challenging to predict or monitor effectively,” the report cautioned.
The report also observed that “regulators encounter the formidable task of overseeing and evaluating these systems’ collective behaviors.”
Yet few regulators are actively supervising the financial sector’s use of AI, often because they lack the capacity or expertise to do so, according to an October 2025 report from the G20’s Financial Stability Board.