NVIDIA Contributes Vera Rubin Rack Innovations to the OCP Community


NVIDIA is continuing its collaboration with the Open Compute Project (OCP) through a new rack-level design contribution for its upcoming Vera Rubin platform. At this week’s OCP Global Summit, the company announced that Vera Rubin will include a number of innovations designed to align with OCP’s MGX rack standards.

The updates are part of NVIDIA’s push toward “giga-scale AI factories,” an effort to recast data centers as facilities that integrate compute, power, and cooling as a single unified design. According to NVIDIA’s data center product marketing manager, the Vera Rubin system extends the company’s open MGX architecture, which was first shared with OCP and has been used in several server designs since.

“We all know that demand for AI is exploding,” he said during a press briefing. “Data centers are evolving into giga-scale AI factories that generate intelligence and generate revenue. But to maximize that revenue, networking, compute, mechanicals, power, and cooling all have to be designed as one. We are at the center of that shift toward open, collaborative design across the ecosystem.”

Rack-level innovations

The new Vera Rubin rack design introduces several hardware improvements aimed at increasing performance and speeding up deployment. One is a new liquid-cooled bus bar capable of supplying up to 5,000 amps of current, a design NVIDIA says supports the power density and delivery that large-scale AI workloads demand. Complementing the bus bar are advanced supercapacitors that provide 20 times more energy storage than the Blackwell generation, which NVIDIA says will help smooth spikes in power demand on the grid and free up more of the power budget for compute.
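In rough terms, the supercapacitors act as a rack-level buffer: they discharge during short bursts of GPU activity and recharge during lulls, so the facility feed sees a flatter load. The short Python sketch below illustrates that peak-shaving idea; the feed limit, buffer size, and load trace are invented for illustration and are not NVIDIA figures.

    # Toy model of peak shaving with rack-level energy storage (e.g. supercaps).
    # A bursty load alternates between idle and spike power; the facility feed is
    # capped and the buffer covers the difference during spikes, then recharges.
    # All numbers are illustrative assumptions, not NVIDIA specifications.

    FEED_LIMIT_W = 120_000        # assumed cap on power drawn from the facility feed
    BUFFER_CAPACITY_J = 400_000   # assumed usable energy in the supercap bank
    STEP_S = 1.0                  # simulation time step in seconds

    # Assumed workload: 60 kW idle phases interleaved with 180 kW spikes.
    load_trace_w = ([60_000] * 5 + [180_000] * 5) * 2

    stored_j = BUFFER_CAPACITY_J
    peak_feed_w = 0.0

    for load_w in load_trace_w:
        if load_w > FEED_LIMIT_W:
            # Spike: draw the cap from the feed, cover the shortfall from the buffer.
            shortfall_j = (load_w - FEED_LIMIT_W) * STEP_S
            stored_j = max(0.0, stored_j - shortfall_j)
            feed_w = FEED_LIMIT_W if stored_j > 0 else load_w
        else:
            # Lull: use headroom below the cap to recharge the buffer.
            recharge_j = min(BUFFER_CAPACITY_J - stored_j,
                             (FEED_LIMIT_W - load_w) * STEP_S)
            stored_j += recharge_j
            feed_w = load_w + recharge_j / STEP_S
        peak_feed_w = max(peak_feed_w, feed_w)

    print(f"Peak load: {max(load_trace_w) / 1000:.0f} kW, "
          f"peak draw from the feed: {peak_feed_w / 1000:.0f} kW")

With these assumed numbers the feed never sees more than 120 kW even though the load peaks at 180 kW, which is the kind of demand smoothing the supercapacitors are meant to provide.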

Mechanically, the Vera Rubin compute tray introduces a PCB midplane that enables cable-free assembly, reducing build time and improving serviceability. A new modular expansion bay at the front of the tray will support the integration of Rubin CPX GPUs and ConnectX-9 SuperNICs. The system is designed to be fully liquid-cooled and to operate with coolant temperatures up to 45°C, which NVIDIA claims eliminates inefficiencies seen in other solutions that require 32°C or less. NVIDIA confirmed that the Vera Rubin MGX rack-level innovations will be on display on the OCP show floor and will be contributed to the OCP community after the event.

Power and connectivity for giga-scale systems

NVIDIA is also introducing a new 800-volt DC power architecture designed to replace legacy 415-volt AC systems in data centers. The approach moves power conversion upstream and delivers DC directly to the rack, reducing energy loss and simplifying the power path from grid to compute node. By removing multiple layers of AC-DC conversion, NVIDIA claims the design will streamline the overall system and deliver more GPUs per watt in each AI factory.
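The case for a higher distribution voltage is largely Ohm’s law: for the same delivered power, raising the voltage lowers the current proportionally, and resistive losses in the distribution path fall with the square of the current. The Python sketch below makes that concrete with assumed numbers (a 1 MW row and a 1-milliohm path, both hypothetical), and it deliberately ignores three-phase and power-factor details on the AC side.

    # Illustrative comparison of distribution current and I^2 * R conduction loss
    # when delivering the same power at 415 V AC versus 800 V DC.
    # All figures are assumptions for illustration, not NVIDIA specifications,
    # and the AC side is simplified (no three-phase or power-factor terms).

    ROW_POWER_W = 1_000_000       # assumed 1 MW of IT load for a row of racks
    PATH_RESISTANCE_OHM = 0.001   # assumed 1 milliohm end-to-end distribution path

    def conduction_loss(voltage_v: float) -> tuple[float, float]:
        """Return (current in amps, I^2 * R loss in watts) at the given voltage."""
        current_a = ROW_POWER_W / voltage_v
        loss_w = current_a ** 2 * PATH_RESISTANCE_OHM
        return current_a, loss_w

    for label, volts in (("415 V AC (legacy)", 415.0), ("800 V DC", 800.0)):
        amps, loss_w = conduction_loss(volts)
        print(f"{label}: {amps:,.0f} A, ~{loss_w / 1000:.1f} kW lost in the path")

Under these assumptions the 800 V DC path carries roughly half the current and loses roughly a quarter of the power of the 415 V AC path, before counting the savings from eliminating AC-DC conversion stages.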


Several partners plan to adopt the 800-volt architecture in next-generation data centers, including Foxconn, which is building a 40 MW facility in Taiwan to support NVIDIA systems, as well as Oracle Cloud and CoreWeave. NVIDIA said it is working with more than 20 companies across the hardware stack to make the AI factory a shared blueprint for scaling.

Along with the new power design, NVIDIA highlighted updates to its NVLink Fusion ecosystem, which enables high-bandwidth interconnects to integrate CPUs and GPUs directly within compute nodes. Intel will produce x86 processors that connect directly to NVIDIA GPUs using NVLink Fusion, while Samsung Foundry will offer custom CPU and XPU manufacturing to meet growing demand for a wider variety of compute. Fujitsu is also connecting its MONAKA series CPUs to NVIDIA GPUs through NVLink Fusion.

Maintaining OCP compatibility

Although some industry peers have adopted a double-wide layout, NVIDIA’s Vera Rubin retains the single-wide OCP rack form factor. The design minimizes copper cabling and shortens interconnect distances, enabling the highest NVLink data rates with fewer cables, NVIDIA’s data center product marketing manager said. He noted that a double-wide setup would require “flyover” cables between the two sides of the rack, adding complexity and degrading signal integrity. NVIDIA is sticking with the single-wide OCP architecture used in multiple existing systems, a configuration he said has proven mature and reliable.

The Vera Rubin rack updates fit NVIDIA’s strategy of contributing interoperable hardware designs to the OCP community while retaining control of its core technologies. By basing its next-generation rack architecture on OCP standards, the company aims to accelerate the adoption of large-scale AI systems built around unified compute, power, and cooling designs. For more details, read the NVIDIA blog at this link.
