Anthropic Explores Acquisition of Fractile’s Inference Accelerators
Anthropic has reportedly opened early talks with Fractile, a London-based chip startup, about its AI hardware.
According to The Information, people familiar with the talks say Fractile could become an additional supplier of AI server silicon alongside Anthropic's current sources: Nvidia, Google, and Amazon.
However, Fractile's chips are not expected to be commercially ready until around 2027, placing any deployment beyond Anthropic's near-term plans and roughly in line with its collaboration with Google and Broadcom on Tensor Processing Units (TPUs).
Founded in 2022 by Oxford PhD Walter Goodwin, Fractile is developing an inference chip that puts memory and compute on a single die using SRAM, avoiding round trips to off-chip DRAM.
That design targets one of the main bottlenecks in running large AI models efficiently.
Speaking to Fortune in July 2024, Goodwin explained that Fractile's design keeps the data needed for a computation physically close to the transistors that process it.
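To see why memory proximity matters, a rough back-of-envelope helps: during LLM decoding, every weight must typically be streamed from memory for each generated token, so token rate is bounded by memory bandwidth. The sketch below uses purely illustrative figures (not Fractile's or Nvidia's actual specifications) to show how the bound shifts when weights sit in fast on-die memory.

```python
# Back-of-envelope: decode speed for a memory-bandwidth-bound LLM.
# Each generated token requires streaming every weight once, so
#   time_per_token ≈ model_bytes / memory_bandwidth
# All figures below are illustrative assumptions, not vendor specs.

def tokens_per_second(params_billion: float, bytes_per_param: float,
                      bandwidth_tb_s: float) -> float:
    """Upper bound on single-stream decode rate when streaming
    weights from memory is the bottleneck."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    bandwidth_bytes = bandwidth_tb_s * 1e12
    return bandwidth_bytes / model_bytes

# A hypothetical 70B-parameter model in FP16 (2 bytes/param)
# read from off-chip memory at an assumed ~3 TB/s:
hbm = tokens_per_second(70, 2, 3.0)       # ≈ 21 tokens/s

# The same model served from on-die SRAM at an assumed ~100 TB/s:
sram = tokens_per_second(70, 2, 100.0)    # ≈ 714 tokens/s

print(f"Off-chip bound: {hbm:.0f} tokens/s")
print(f"On-die bound:   {sram:.0f} tokens/s")
```

The model is deliberately simplified (it ignores compute limits, batching, and KV-cache traffic), but it captures why removing the off-chip memory hop is the central claim of near-memory inference designs.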
In early simulations, Fractile projected it could run large language models up to 100 times faster and at one-tenth the cost of Nvidia's GPUs, though it has yet to produce test chips.
Fractile raised a $15 million seed round co-led by Kindred Capital, the NATO Innovation Fund, and Oxford Science Enterprises.
It is now in talks to raise a further $200 million at a valuation above $1 billion, with Founders Fund, 8VC, and Accel reportedly interested.
Its team includes engineers from Graphcore, Nvidia, and Imagination Technologies, and the company is building a software stack to accompany its hardware.
Anthropic has deliberately avoided depending on any single chip vendor, running its models on Nvidia GPUs, Amazon's Trainium processors via Project Rainier, and Google's TPUs under a deal signed last October for more than 1GW of compute capacity.
In early April, that arrangement was expanded to 3.5GW of TPU capacity, slated for deployment between 2027 and 2031.
The interest in Fractile comes amid surging demand on Anthropic's existing infrastructure.
The company's annualized revenue run rate reportedly passed $30 billion in March, up from roughly $9 billion at the end of 2025.
Rising inference costs, however, have weighed on gross margins. Unlike OpenAI and xAI, which are building out their own extensive data center infrastructure, Anthropic rents compute from multiple providers and maintains leverage through a diversified chip supply.

Fractile is one of several startups targeting inference with SRAM-based or near-memory designs, alongside the likes of Groq and Cerebras.
Nvidia completed a $20 billion acquisition of Groq in December and has since introduced its own dedicated inference accelerator, the Groq 3 LPX, underscoring the growing commercial pressure to make AI inference cheaper at scale.
Source link: Tomshardware.com.