
NVIDIA® Networking

Complex, ultra-fast, and built for extreme scale

NVIDIA® Quantum-2 InfiniBand Switches

As computing requirements continue to grow exponentially, NVIDIA® Quantum-2 InfiniBand, the world’s only fully offloadable, in-network computing platform, provides the dramatic leap in performance needed for HPC, AI, and hyperscale cloud infrastructures to achieve unmatched performance with less cost and complexity.


The NVIDIA® ConnectX-7 NDR 400Gb/s InfiniBand host channel adapter (HCA) provides the highest networking performance available to take on the world’s most challenging workloads. The ConnectX-7 adapter combines ultra-low latency, 400Gb/s throughput, and innovative NVIDIA® In-Network Computing engines to deliver the acceleration, scalability, and feature-rich technology needed for supercomputers, artificial intelligence, and hyperscale cloud data centers.
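For developers, the adapter is exposed through the standard libibverbs API. Below is a minimal sketch (not vendor sample code) that enumerates installed HCAs and reports the state and speed of each device's first port:

    /* Build: gcc query_hca.c -o query_hca -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **list = ibv_get_device_list(&num);
        if (!list) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(list[i]);
            if (!ctx)
                continue;

            struct ibv_port_attr port;
            /* Port numbering starts at 1 in the verbs API. */
            if (ibv_query_port(ctx, 1, &port) == 0)
                printf("%s: state=%d active_speed=%d active_width=%d\n",
                       ibv_get_device_name(list[i]),
                       port.state, port.active_speed, port.active_width);

            ibv_close_device(ctx);
        }

        ibv_free_device_list(list);
        return 0;
    }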


The NVIDIA® Quantum-2 modular switches provide scalable port configurations from 512 to 2,048 ports of 400Gb/s InfiniBand (or 4,096 ports of 200Gb/s) with a total bidirectional throughput of 1.64 petabits per second, five times that of the previous-generation InfiniBand modular switch series, enabling users to run larger workloads with fewer constraints. The 2,048-port switch provides an unprecedented 6.5x greater scalability over the previous generation and can connect more than a million nodes in just three hops using a DragonFly+ network topology.
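A quick sanity check on that headline number: 2,048 ports × 400Gb/s per port × 2 directions = 1,638,400Gb/s, or roughly 1.64 petabits per second.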


Offloading operations is crucial for AI workloads. Third-generation NVIDIA® SHARP™ technology allows deep learning training operations to be offloaded to and accelerated by the Quantum-2 InfiniBand network, delivering 32x higher AI acceleration power. Combined with the NVIDIA® Magnum IO™ software stack, it provides out-of-the-box accelerated scientific computing.
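SHARP needs no application changes; the offload sits beneath standard MPI collectives. The sketch below shows the plain MPI_Allreduce call that SHARP accelerates. With NVIDIA HPC-X, the offload is typically enabled through launcher environment variables (for example HCOLL_ENABLE_SHARP, an assumption that may vary by HPC-X release) rather than in code:

    /* Build: mpicc allreduce.c -o allreduce */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each rank contributes one value. With SHARP active, the sum is
           computed in the switch fabric instead of hop-by-hop on hosts. */
        double local = (double)rank;
        double sum = 0.0;
        MPI_Allreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over all ranks: %f\n", sum);

        MPI_Finalize();
        return 0;
    }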

High-Level Benefits

  • The NVIDIA® Quantum InfiniBand platform is an end-to-end solution that includes Quantum switches, ConnectX adapters, BlueField DPUs, LinkX cables and transceivers, and a comprehensive suite of acceleration and management software

  • In-Network Computing engines for accelerating application performance and scalability

  • Standards-based, with backward and forward compatibility, protecting data center investments

  • ConnectX adapters with Virtual Protocol Interconnect (VPI) technology support both InfiniBand and Ethernet

  • High data throughput, extremely low latency, high message rate, RDMA, GPUDirect, and GPUDirect Storage (see the memory-registration sketch after this list)

  • Advanced adaptive routing, congestion control, and quality of service for the highest network efficiency

  • Self-healing network technology for the highest network resiliency

  • LinkX provides a full array of DACs, ACCs, AOCs, and transceivers for every speed and reach needed, in QSFP28 and QSFP56 form factors, at the lowest bit error ratio and latency
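To make the RDMA bullet above concrete, here is a minimal sketch of the setup step behind zero-copy transfers: registering a buffer with the HCA so that peers can read and write it directly, bypassing the remote CPU. Queue-pair setup and the RDMA operations themselves are omitted for brevity:

    /* Build: gcc reg_mr.c -o reg_mr -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **list = ibv_get_device_list(NULL);
        if (!list || !list[0]) {
            fprintf(stderr, "no InfiniBand devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(list[0]);
        struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL; /* protection domain */
        if (!pd) {
            fprintf(stderr, "failed to open device or allocate PD\n");
            return 1;
        }

        size_t len = 4096;
        void *buf = malloc(len);

        /* Pin the buffer and grant remote access; the returned rkey is
           what a peer uses to target this memory directly. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            perror("ibv_reg_mr");
            return 1;
        }
        printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(list);
        return 0;
    }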


Routers and Gateway Systems

InfiniBand systems provide the highest scalability and subnet isolation using InfiniBand routers, InfiniBand long-reach connections (NVIDIA® MetroX®-2), and InfiniBand-to-Ethernet gateway systems (NVIDIA® Skyway™). Skyway enables a scalable and efficient way to connect InfiniBand data centers to Ethernet infrastructures.


Data Processing Units (DPUs)

The NVIDIA® BlueField® DPU combines powerful computing, high-speed networking, and extensive programmability to deliver software-defined, hardware-accelerated solutions for the most demanding workloads. From accelerated AI computing to cloud-native supercomputing, BlueField redefines what’s possible.


InfiniBand Switches

InfiniBand switch systems deliver the highest performance and port density available. Innovative capabilities such as NVIDIA® Scalable Hierarchical Aggregation and Reduction Protocol (SHARP™) and advanced management features such as self-healing network capabilities, quality of service, enhanced virtual lane mapping, and NVIDIA® In-Network Computing acceleration engines provide a performance boost for industrial, AI, and scientific applications.


InfiniBand Adapters

InfiniBand host channel adapters (HCAs) provide ultra-low latency, extreme throughput, and innovative NVIDIA® In-Network Computing engines to deliver the acceleration, scalability, and feature-rich technology needed for today's modern workloads.


LinkX InfiniBand Cables and Transceivers

NVIDIA® LinkX® cables and transceivers are designed to maximize the performance of HPC networks, which require high-bandwidth, low-latency, highly reliable connections between InfiniBand elements.
