Mellanox HDR 200G InfiniBand

Reduce cost and infrastructure complexity while ensuring the highest productivity

Mellanox InfiniBand Switches and Adapters Provide Advanced Levels of Data Center IT Performance, Efficiency and Scalability

Mellanox's family of InfiniBand switches delivers the highest performance and port density, with complete fabric management solutions, enabling compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity.

The Mellanox switch portfolio includes a broad range of Edge and Modular switches supporting 40, 56, 100 and 200Gb/s port speeds and ranging from 8 to 800 ports.

These switches allow IT managers to build the most cost-effective and scalable switch fabrics, from small clusters up to tens of thousands of nodes, and can carry converged traffic with the combination of assured bandwidth and granular quality of service, ensuring the highest productivity.

Value Propositions
  • Mellanox switches come in port configurations from 8 to 800 ports at speeds up to 200Gb/s per port, with the ability to build clusters that scale out to tens of thousands of nodes.

  • Mellanox switches deliver high bandwidth with sub-90ns latency, yielding the highest server efficiency and application productivity.

  • Best price/performance solution with error-free 40-200Gb/s link speed.

  • World’s smartest switches, enabling in-network computing through Co-Design SHARP™ technology.

  • Scalability and subnet isolation using InfiniBand routing and InfiniBand to Ethernet gateway capabilities.

  • Real-Time Scalable Network Telemetry with built-in hardware sensors for rich traffic data collection.

Edge Switches

8 to 40-port non-blocking 40 to 200Gb/s InfiniBand Switch Systems

The Mellanox family of switch systems provides the highest-performing fabric solutions in a 1U form factor, delivering up to 16Tb/s of non-blocking bandwidth with the lowest port-to-port latency. These edge switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium-sized clusters.

The edge switches, offered as externally managed or as managed switches, are designed to build the most efficient switch fabrics through the use of advanced InfiniBand switching technologies such as Adaptive Routing, Congestion Control and Quality of Service.

Modular Switches

108 to 800-port full bi-directional bandwidth 40 to 200Gb/s InfiniBand Switch Systems

Mellanox modular switches provide the highest density switching solution, scaling from 8.64Tb/s up to 320Tb/s of bandwidth in a single enclosure, with low-latency and the highest per port speeds of up to 200Gb/s.
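The bandwidth figures quoted for the edge and modular systems follow directly from ports × port speed × 2 (counting both directions of a full-bidirectional fabric). A quick back-of-the-envelope sketch, using the port counts and speeds given in this brochure:

```python
# Sanity check of the quoted switch bandwidth figures.
# Full bidirectional bandwidth = ports x port speed x 2 (both directions).

def aggregate_tbps(ports: int, port_speed_gbps: int) -> float:
    """Aggregate full-bidirectional bandwidth in Tb/s."""
    return ports * port_speed_gbps * 2 / 1000

# 40-port edge switch at 200Gb/s: 16 Tb/s non-blocking
print(aggregate_tbps(40, 200))   # 16.0
# 108-port modular switch at 40Gb/s: 8.64 Tb/s
print(aggregate_tbps(108, 40))   # 8.64
# 800-port modular switch at 200Gb/s: 320 Tb/s
print(aggregate_tbps(800, 200))  # 320.0
```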

A smart design provides unprecedented levels of performance and makes it easy to build clusters that scale out to thousands of nodes.

The InfiniBand modular switches deliver director-class availability required for mission-critical application environments.

The leaf, spine blades and management modules, as well as the power supplies and fan units, are all hot-swappable to help eliminate down time.

  • Industry-leading energy efficiency, density, and cost savings

  • Ultra low latency

  • Granular QoS for Cluster, LAN and SAN traffic

  • Maximizes performance by removing fabric congestion

  • Quick and easy setup and management

  • Fabric Management for cluster and converged I/O applications

Virtual Protocol Interconnect®

Virtual Protocol Interconnect (VPI) flexibility enables any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

Sustained Network Performance

The Mellanox switch family enables efficient computing for clusters of all sizes, from the very small to the extremely large, while offering near-linear scaling in performance. Advanced features such as static routing, adaptive routing, and congestion management allow the switch fabric to dynamically detect congestion and re-route traffic around congested points. These features ensure maximum effective fabric performance under all types of traffic conditions.
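Conceptually, adaptive routing of the kind described above picks, among the output ports that can reach a destination, the one that is currently least congested. A minimal sketch of that selection step (the port IDs and queue depths are illustrative only, not Mellanox's implementation):

```python
# Conceptual sketch of adaptive routing: among the output ports that can
# reach the destination, choose the one with the shallowest egress queue.
# Port IDs and queue depths here are illustrative, not real hardware state.

def adaptive_route(candidate_ports, queue_depth):
    """Return the candidate port with the least-loaded egress queue."""
    return min(candidate_ports, key=lambda p: queue_depth[p])

# Three equal-cost ports toward the destination; port 2 is congested.
depth = {1: 40, 2: 900, 3: 12}
print(adaptive_route([1, 2, 3], depth))  # 3
```

In hardware this decision is made per packet or per flow using live congestion feedback, which is what lets the fabric steer traffic away from hot spots without operator intervention.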

Reduce Complexity

Mellanox switches reduce complexity by providing seamless connectivity between InfiniBand, Ethernet and Fibre Channel based networks. You no longer need separate network technologies with multiple network adapters to operate your data center fabric. Granular QoS and guaranteed bandwidth allocation can be applied per traffic type, ensuring that each type of traffic has the resources needed to sustain the highest application performance.

Reduce Environmental Costs

Improved application efficiency, along with the need for fewer network adapters, allows you to accomplish the same amount of work with fewer, more cost-effective servers. Improved cooling and reduced power consumption and heat output allow data centers to cut the costs associated with physical space.

Enhanced Management Capabilities

Mellanox managed InfiniBand switches come with an onboard subnet manager, enabling simple out-of-the-box fabric bring-up for up to 2K nodes. MLNX-OS® (SX6000, SB7000 and QM8000 families) chassis management provides administrative tools to manage the firmware, power supplies, fans, ports, and other interfaces. All Mellanox switches can also be coupled with Mellanox's Unified Fabric Manager™ (UFM) software for managing scale-out InfiniBand computing environments.

UFM enables data center operators to efficiently provision, monitor and operate the modern data center fabric. UFM boosts application performance and ensures that the fabric is up and running at all times. MLNX-OS provides a license-activated embedded diagnostic tool, Fabric Inspector, to check node-to-node and node-to-switch connectivity and ensure fabric health.

ConnectX®-6 Single/Dual-Port Adapters supporting 200G with VPI® 

Mellanox InfiniBand adapters deliver industry-leading bandwidth with ultra-low latency and efficient computing for performance-driven server and storage clustering applications. Network protocol processing and data movement, such as RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention.

Application acceleration with CORE-Direct™ and GPU communication acceleration brings further levels of performance improvement. Mellanox InfiniBand adapters' advanced acceleration technology enables higher cluster efficiency and large scalability to tens-of-thousands of nodes.

ConnectX®-6 adapter cards, the world’s first 200Gb/s HDR InfiniBand and Ethernet network adapter cards, introduce new acceleration engines for maximizing High Performance Computing, Machine Learning, Web 2.0, Cloud, Data Analytics and Storage platforms.

ConnectX®-6 with Virtual Protocol Interconnect® provides two ports of 200Gb/s InfiniBand and Ethernet connectivity, sub-600 nanosecond latency, and 200 million messages per second, delivering the highest performance and most flexible solution for the most demanding applications and markets.
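A quick back-of-the-envelope on those two figures shows how they relate: dividing the 200Gb/s line rate by the 200 million messages-per-second rate gives the message size at which the link itself becomes the bottleneck. A sketch, using only the numbers quoted above:

```python
# Back-of-the-envelope: what do 200Gb/s and 200M messages/s imply together?
LINK_GBPS = 200        # quoted per-port line rate, Gb/s
MSGS_PER_SEC = 200e6   # quoted message rate, messages/s

bits_per_msg = LINK_GBPS * 1e9 / MSGS_PER_SEC
print(bits_per_msg / 8)  # 125.0 -> bytes of line rate available per message
```

In other words, the quoted message rate can saturate the link at roughly 125-byte messages; transfers much larger than that are bandwidth-bound rather than message-rate-bound.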

ConnectX®-6 is a groundbreaking addition to the Mellanox ConnectX® series of industry-leading adapter cards. In addition to all the existing innovative features of past versions, ConnectX®-6 offers a number of enhancements to further improve performance and scalability. ConnectX®-6 VPI supports HDR, HDR100, EDR, FDR, QDR, DDR and SDR InfiniBand speeds as well as 200, 100, 50, 40, 25, and 10Gb/s Ethernet speeds. ConnectX-6 VPI is the perfect product to lead HPC data centers toward Exascale levels of performance and scalability.

ConnectX®-6 brings a crucial innovation to network security by providing block-level encryption. Data in transit undergoes encryption and decryption as it is stored or retrieved. The encryption/decryption is offloaded to the ConnectX®-6 hardware, reducing latency and freeing CPU cycles. ConnectX®-6 block-level encryption offload enables protection between users sharing the same resources, as different encryption keys can be used.
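The data flow described above can be sketched in a few lines: each block is encrypted as it is written and decrypted as it is read, under a per-tenant key, so a tenant holding the wrong key recovers only garbage. This is a toy illustration only; the XOR-of-a-hash keystream below is a stand-in for a real block cipher and is not cryptographically sound, and the actual adapter performs the equivalent operation in hardware.

```python
# Toy sketch of block-level encryption semantics: encrypt on write,
# decrypt on read, with a distinct key per tenant and per-block tweaking.
# The keystream construction is a stand-in, NOT real cryptography.
import hashlib

def keystream(key: bytes, block_id: int, length: int) -> bytes:
    """Derive a per-block keystream from the tenant key and block number."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + block_id.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xcrypt(key: bytes, block_id: int, data: bytes) -> bytes:
    """Encrypt or decrypt (XOR is its own inverse) one storage block."""
    ks = keystream(key, block_id, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key_a, key_b = b"tenant-A-key", b"tenant-B-key"  # hypothetical tenant keys
block = b"sensitive payload"
ct = xcrypt(key_a, 7, block)
assert xcrypt(key_a, 7, ct) == block   # correct key round-trips the block
assert xcrypt(key_b, 7, ct) != block   # another tenant's key yields garbage
```

The per-block tweak mirrors how real storage encryption binds ciphertext to its block address, and the per-tenant keys mirror the isolation property the brochure describes.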