Hammerspace and Xsight Labs Partner to Flatten AI Storage Architecture, Eliminating Legacy Servers and Cutting Data TCO in Half

15 October 2025 | NEWS

New architecture eliminates legacy servers, slashes data storage TCO, and unlocks 800Gbps direct GPU-to-flash storage performance for the most demanding gigawatt-scale AI token factories

News Highlights:

  • This collaboration advances the Open Flash Platform (OFP) vision of a democratized, efficient, and radically simplified data storage infrastructure, offering >10x storage density, 90% less energy, and 60% longer operational life, which equates to 50% lower Total Cost of Ownership (TCO).
  • Hammerspace has selected the Xsight Labs E1, an 800G line-rate DPU, to enable a flattened, open-network architecture for AI data storage.
  • The solution eliminates legacy storage servers, allowing GPUs to access data directly from flash across the Ethernet network at 800Gbps, dramatically reducing energy costs while increasing overall performance.

Xsight Labs announced its partnership with Hammerspace to advance an open, efficient, and simplified solution for AI data storage.

Why it Matters:

Today, the industry uses up to 9x more power and spends 5x more money than necessary to maintain legacy storage systems built around rotating media, x86 servers, and complex networking layers. By flattening the network and removing costly, power-intensive storage servers, the OFP approach allows storage devices to communicate directly with each other, unlocking massive performance, increasing efficiency, and driving OPEX savings. Furthermore, the OFP blueprint makes flash and the NFS file system the foundation for AI infrastructure and radically simplifies the data path.

A New Era of AI Infrastructure

Hammerspace has selected Xsight’s E1 800G DPU to realize the Open Flash Platform (OFP) initiative’s vision for the next generation of warm storage AI infrastructure.

“AI is forcing us to extend our extreme scale High Performance Computing (HPC) data management capabilities into new frontiers,” said Gary Grider, Director of HPC at Los Alamos National Laboratory. “Open standards pave the way to sustainable new capabilities, and the Open Flash Platform is a great example. It promises to provide a simpler and more energy-efficient capability to scale performance. This collaboration between Hammerspace and Xsight Labs demonstrates this promise.”

The OFP architecture redefines the data center by removing the traditional storage server, the expensive middleman, from the storage path. Instead, flash storage connects directly to the network using open standards such as Network File System (NFS) and Linux, creating a dramatically simpler and more enduring architecture.

The Xsight Labs E1 DPU, integrated with Hammerspace’s orchestration software, enables GPUs to access data directly from flash devices across the network without traversing layers of x86 servers. This direct path eliminates network bottlenecks and accelerates AI training and inference workloads, allowing organizations to scale performance linearly as they grow flash capacity.
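To give a rough sense of what this flattened data path looks like from a client's perspective, the sketch below streams a large file from a flash target exported over standard NFS and reports the observed throughput. It is a minimal illustration only: the mount point, file name, and chunk size are assumptions for the example, not part of the Hammerspace or Xsight Labs software, and the actual OFP data path is established by Hammerspace's orchestration layer rather than by client-side scripts.

```python
# Minimal read-throughput probe against a hypothetical NFS mount of an OFP flash node.
# Paths and sizes below are illustrative assumptions, not product configuration.
import os
import time

MOUNT = "/mnt/ofp-flash"                 # assumed NFS mount point (hypothetical)
TARGET = os.path.join(MOUNT, "sample.dat")  # assumed large test file (hypothetical)
CHUNK = 8 * 1024 * 1024                  # read in 8 MiB chunks

def read_throughput(path: str) -> float:
    """Stream the file once with unbuffered reads and return throughput in GB/s."""
    buf = bytearray(CHUNK)
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            n = f.readinto(buf)   # read directly into the reusable buffer
            if not n:
                break
            total += n
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e9

if __name__ == "__main__":
    print(f"Observed: {read_throughput(TARGET):.2f} GB/s from {TARGET}")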

Hammerspace selected Xsight Labs’ E1 DPU as the best fit to support the OFP vision of a flattened AI warm storage architecture, a role that demands the Arm core density, memory bandwidth, and 800Gbps Ethernet connectivity necessary for high-performance environments.

Perspectives:

“Legacy storage is collapsing under the weight of AI,” said David Flynn, CEO of Hammerspace. “By fusing our orchestration software with Xsight’s 800G DPU, we flatten the data path and turn every Linux device into shared storage. The result is an open architecture that scales linearly with performance, slashes cost, and feeds GPUs at the speed AI demands.”

“Hammerspace’s orchestration software allows every network element, regardless of memory size, to function as a flat layer-zero storage node,” said Ted Weatherford, VP Business Development, Xsight Labs. “The scalability and performance story here will set the pace for the entire AI industry. Hammerspace’s stack, coupled with our E1 DPU, which is silently a full-blown edge server, offers the performance-leading warm-storage solution. The magic is we are fast-piping warm storage directly to the GPU clusters, eliminating all the legacy x86 CPUs. The solution offers an exabyte per rack, connected directly with hundreds of giant Ethernet pipes, thus simplifying the AI infrastructure profoundly.”