Innodisk has introduced a new CXL memory module to meet the demand for greater memory bandwidth in AI servers, which are expected to account for 65% of the server market in 2024, according to TrendForce.
AI servers require about 1.2 TB of memory for efficient operation, creating an urgent need for greater memory bandwidth and capacity, Innodisk said. Traditional DDR memory cannot keep pace as CPU core counts continue to multiply, leaving CPU resources underutilized and increasing latency between different protocols, the company added.
CXL (Compute Express Link) is an open standard backed by major industry players in the cloud data center, networking communications, and edge server markets. The new CXL memory module overcomes the limitations of conventional DIMM channels, Innodisk said.
Key specifications include 32 GB/s of bandwidth and data transfer speeds up to 32 GT/s over a PCIe Gen5 ×8 interface, supporting the fast processing that AI workloads demand. Adding four 64-GB CXL memory modules to a server with eight 128-GB DRAM modules increases its memory capacity by 25% and bandwidth by 40%, meeting memory requirements without additional DIMM slots.
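The capacity and bandwidth figures above can be checked with back-of-envelope arithmetic. This is an editorial sketch, not from the article: the per-module DDR5 bandwidth of 38.4 GB/s (DDR5-4800) is an assumption, and it treats each CXL module as contributing its full PCIe Gen5 ×8 bandwidth. Note that 256 GB added to 1,024 GB of DRAM is a 25% capacity increase.

```python
# Back-of-envelope check of the quoted capacity and bandwidth gains.
# Assumption (not from the article): baseline DRAM is DDR5-4800,
# i.e. 38.4 GB/s per module.

DRAM_MODULES = 8
DRAM_CAPACITY_GB = 128
CXL_MODULES = 4
CXL_CAPACITY_GB = 64

base_capacity = DRAM_MODULES * DRAM_CAPACITY_GB   # 1,024 GB
added_capacity = CXL_MODULES * CXL_CAPACITY_GB    # 256 GB
capacity_gain = added_capacity / base_capacity    # 0.25 -> 25%

DDR5_BW_GBPS = 38.4   # assumed DDR5-4800 per-module bandwidth
CXL_BW_GBPS = 32.0    # PCIe Gen5 x8, per the article

base_bw = DRAM_MODULES * DDR5_BW_GBPS             # 307.2 GB/s
added_bw = CXL_MODULES * CXL_BW_GBPS              # 128 GB/s
bandwidth_gain = added_bw / base_bw               # ~0.42 -> roughly 40%

print(f"Capacity: +{capacity_gain:.0%}, bandwidth: +{bandwidth_gain:.0%}")
```

Under these assumptions the bandwidth gain comes out near the quoted 40%; the exact figure depends on the DRAM speed grade in the baseline server.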
The CXL memory module also enables memory pooling, which lets CPUs and other components share memory resources, reducing redundant memory usage and improving overall system efficiency.
Innodisk plans to ship the CXL memory module in the first quarter of 2025. It uses the E3.S 2T form factor based on the EDSFF standard, allowing flexible memory expansion and easy module swapping within the server; the company said it is among the industry's first modules in this form factor.