Mellanox ConnectX-5 Infiniband/Ethernet Host Bus Adapter
SKU: 1047188169 MFR: NVIDIA Corporation
Product Description
Mellanox ConnectX-5 Infiniband/Ethernet Host Bus Adapter - PCI Express 3.0 x8 - 100 Gbit/s - 2 x Total Infiniband Port(s) - 2 x Total Expansion Slot(s) - QSFP28 - Plug-in Card
Manufacturer : NVIDIA Corporation
Manufacturer Part No : MCX556M-ECAT-S35A
Features
- Socket Direct enabling 100Gb/s for servers without x16 PCIe slots
- Tag matching and rendezvous offloads
- Adaptive routing on reliable transport
- Burst buffer offloads for background checkpointing
- NVMe over Fabric (NVMe-oF) offloads
- Back-end switch elimination by host chaining
- Enhanced vSwitch/vRouter offloads
- Flexible pipeline
- RoCE for overlay networks
- Up to 100Gb/s connectivity per port
Product Details
SOCKET DIRECT
ConnectX-5 Socket Direct provides 100Gb/s port speed even to servers without x16 PCIe slots by splitting the 16-lane PCIe bus into two 8-lane buses: one is accessible through a PCIe x8 edge connector and the other through a parallel x8 Auxiliary PCIe Connection Card connected by a dedicated harness. Moreover, the card brings improved performance to dual-socket servers by enabling direct access from each CPU in a dual-socket server to the network through its own dedicated PCIe x8 interface. In such a configuration, Socket Direct also brings lower latency and lower CPU utilization. Because each CPU connects directly to the network, the interconnect can bypass the inter-processor QPI (UPI) link and the other CPU, optimizing performance and improving latency. CPU utilization improves because each CPU handles only its own traffic rather than traffic from the other CPU.
Socket Direct also enables GPUDirect® RDMA for all CPU/GPU pairs by ensuring that all GPUs are linked to CPUs close to the adapter card, and enables Intel® DDIO on both sockets by creating a direct connection between the sockets and the adapter card.
Mellanox Multi-Host™ technology, first introduced with ConnectX-4, is enabled in the Mellanox Socket Direct card, allowing multiple hosts to connect to a single adapter by separating the PCIe interface into multiple independent interfaces.
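As a rough illustration of how this split appears to software (a minimal sketch, not part of the vendor documentation): on a Linux host with rdma-core/libibverbs installed, each PCIe x8 half of a Socket Direct card is typically exposed as its own PCIe function and hence its own RDMA device, so software can pick the function local to its CPU socket. The C sketch below simply enumerates whatever devices libibverbs reports; the device names mentioned in the comments are illustrative.

```c
/*
 * Minimal sketch, not vendor documentation: list the RDMA devices the OS
 * reports via libibverbs. With a Socket Direct card, each PCIe x8 half
 * typically appears as its own device (e.g. mlx5_0 and mlx5_1; names vary
 * by system).
 *
 * Build (assuming rdma-core headers/libraries are installed):
 *   gcc list_rdma_devs.c -o list_rdma_devs -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);

    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    printf("Found %d RDMA device(s)\n", num_devices);
    for (int i = 0; i < num_devices; i++) {
        /* Print the device name and its node type (e.g. CA for an HCA). */
        printf("  %-10s node_type=%s\n",
               ibv_get_device_name(devices[i]),
               ibv_node_type_str(devices[i]->node_type));
    }

    ibv_free_device_list(devices);
    return 0;
}
```

On a dual-socket server, each listed device's sysfs entry also reports a NUMA node, which is how NUMA-aware middleware (MPI, UCX, and similar) typically selects the locally attached function.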
HPC ENVIRONMENTS
ConnectX-5 delivers high bandwidth, low latency, and high computation efficiency for high-performance, data-intensive, and scalable compute and storage platforms. ConnectX-5 offers enhancements to HPC infrastructures by providing MPI, SHMEM/PGAS, and Rendezvous Tag Matching offload, hardware support for out-of-order RDMA Write and Read operations, and additional Network Atomic and PCIe Atomic operations support.
ConnectX-5 VPI utilizes both IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technologies, delivering low latency and high performance. ConnectX-5 enhances RDMA network capabilities by complementing the switch Adaptive Routing capabilities and supporting data delivered out of order while maintaining ordered completion semantics, providing multipath reliability and efficient support for all network topologies, including DragonFly and DragonFly+.
ConnectX-5 also supports Burst Buffer offload for background checkpointing without interfering with main CPU operations, as well as the innovative Dynamic Connected Transport (DCT) service to ensure extreme scalability for compute and storage systems.
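To make the Tag Matching offload mentioned above more concrete (a minimal MPI sketch, illustrative only and not vendor code): matching incoming message tags against posted receives is the work that ConnectX-5 can move from the MPI library into the adapter, and the application code is identical either way. Whether the offload is actually engaged depends on the MPI stack and its configuration (for example, UCX built with hardware tag matching enabled), which is assumed here; the tags and ranks below are arbitrary.

```c
/*
 * Minimal MPI sketch: rank 0 pre-posts receives for two tags, rank 1 sends
 * them in the opposite order, and tag matching pairs each message with the
 * correct receive. With tag-matching offload enabled in the MPI stack, this
 * matching can happen in the adapter instead of on the CPU.
 *
 * Build/run (assuming Open MPI or MPICH):
 *   mpicc tag_match.c -o tag_match && mpirun -np 2 ./tag_match
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int TAG_A = 100, TAG_B = 200;   /* arbitrary message tags */

    if (rank == 0) {
        int a = 0, b = 0;
        MPI_Request reqs[2];
        /* Pre-post receives for both tags before any data arrives. */
        MPI_Irecv(&a, 1, MPI_INT, 1, TAG_A, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&b, 1, MPI_INT, 1, TAG_B, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        printf("rank 0 received TAG_A=%d TAG_B=%d\n", a, b);
    } else if (rank == 1) {
        int x = 1, y = 2;
        /* Send TAG_B first; the receiver still matches each tag correctly. */
        MPI_Send(&y, 1, MPI_INT, 0, TAG_B, MPI_COMM_WORLD);
        MPI_Send(&x, 1, MPI_INT, 0, TAG_A, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```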
Package Contents
- ConnectX-5 Infiniband/Ethernet Host Bus Adapter
- 35cm Harness
- Short Bracket