Mellanox ConnectX-5 EN 100Gigabit Ethernet Card

SKU: 1046044107      MFR: NVIDIA Corporation

Product Description

Mellanox ConnectX-5 EN 100Gigabit Ethernet Card - PCI Express 3.0 x16 - 1 Port(s) - Optical Fiber - 100GBase-X - Plug-in Card

Manufacturer : NVIDIA Corporation

Manufacturer Part No : MCX545A-CCAN

Features

  • OCP 2.0 and OCP 3.0 offerings
  • PCIe Gen4 support, 200Gb/s throughput
  • Burst buffer offloads for background checkpointing
  • Mellanox Multi-Host™ Technology support
  • NVMe over Fabrics (NVMe-oF) offloads
  • Back-end switch elimination by host chaining
  • Enhanced vSwitch/vRouter offloads
  • Flexible pipeline
  • RoCE for overlay networks

Product Details

Product Type: 100Gigabit Ethernet Card
Environmentally Friendly: Yes
Environmental Certification: RoHS-6
Media Type Supported: Optical Fiber
Total Number of Ports: 1
Expansion Slot Type: QSFP28
Brand Name: Mellanox
Form Factor: Plug-in Card
Manufacturer: NVIDIA Corporation
Product Name: ConnectX-5 EN 100Gigabit Ethernet Card
Network Technology: 100GBase-X
Product Line: ConnectX-5 EN
Product Family: ConnectX-5 EN
Host Interface: PCI Express 3.0 x16
Manufacturer Part Number: MCX545A-CCAN
Manufacturer Website Address: http://www.nvidia.com
Marketing Information: Single- and dual-port 100GbE, PCIe Gen4, intelligent RDMA-enabled network adapter card with advanced application offload and Multi-Host capabilities for Machine Learning, Web 2.0, Cloud, and Storage platforms.

ConnectX-5 EN supports up to two ports of 25Gb/s or 100Gb/s Ethernet connectivity, sub-750ns latency, and a very high message rate, plus PCIe Gen4 support and NVMe over Fabrics offloads, providing the highest-performance and most flexible solution for Open Compute Project servers and storage appliances while supporting the most demanding applications and markets, including machine learning, data analytics, and more.

ConnectX-5 EN adapter cards are available in various form factors to meet the needs of every data center, including OCP 2.0 Type 1 and Type 2 as well as OCP 3.0.

Machine Learning and Big Data Environments

Data analytics has become an essential function within many enterprise data centers, clouds, and hyperscale platforms. Machine learning relies on especially high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. As the first OCP card to deliver 200Gb/s throughput, the dual-port 100GbE ConnectX-5 is the perfect solution to provide machine learning applications with the levels of performance and scalability that they require.

ConnectX-5 EN for the Open Compute Project (OCP) utilizes RoCE (RDMA over Converged Ethernet) technology, delivering low latency and high performance.
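As a rough illustration of what RoCE support looks like from the host side, the Python sketch below reads the generic Linux sysfs tree that RDMA-capable kernels expose (/sys/class/infiniband) and prints the GID types each device advertises (e.g. "RoCE v2"). This is a generic kernel interface rather than anything specific to this listing, and device names such as mlx5_0 are assumptions.

    #!/usr/bin/env python3
    """Minimal sketch: list RDMA devices and the RoCE GID types they expose,
    using the standard Linux sysfs tree. Device names like mlx5_0 are
    illustrative assumptions."""
    from pathlib import Path

    IB_SYSFS = Path("/sys/class/infiniband")

    def list_rdma_devices():
        if not IB_SYSFS.is_dir():
            print("no RDMA devices found (is the mlx5 driver loaded?)")
            return
        for dev in sorted(IB_SYSFS.iterdir()):            # e.g. mlx5_0, mlx5_1
            for port in sorted((dev / "ports").iterdir()):
                types = set()
                for t in (port / "gid_attrs" / "types").iterdir():
                    try:
                        types.add(t.read_text().strip())  # "IB/RoCE v1" or "RoCE v2"
                    except OSError:
                        pass                              # unpopulated GID slots return an error
                print(f"{dev.name} port {port.name}: {', '.join(sorted(types)) or 'none'}")

    if __name__ == "__main__":
        list_rdma_devices()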

ConnectX-5 also supports GPUDirect® and Burst Buffer offload for background checkpointing without interfering with the main CPU operations, as well as the innovative Dynamic Connected Transport (DCT) service, to ensure extreme scalability for compute and storage systems.

Storage Environments

NVMe storage devices are gaining popularity, offering very fast storage access. The evolving NVMe over Fabrics (NVMe-oF) protocol leverages RDMA connectivity for remote access. ConnectX-5 offers further enhancements by providing NVMe-oF target offloads, enabling very efficient NVMe storage access with no CPU intervention, and thus improved performance and lower latency.
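For context on what an NVMe-oF target looks like on the host, here is a hedged sketch of the standard Linux nvmet configfs flow for exporting a local NVMe namespace over an RDMA port. It shows only the generic kernel software path, not the ConnectX-5 hardware target offload itself, which is enabled through the vendor driver and firmware; the NQN, backing device, and address are placeholder assumptions.

    #!/usr/bin/env python3
    """Sketch: export a local NVMe namespace over RDMA with the generic Linux
    nvmet configfs interface (requires root plus the nvmet and nvmet-rdma
    modules). The NQN, block device, and address are placeholder assumptions;
    the ConnectX-5 target offload itself is configured via the vendor driver."""
    from pathlib import Path

    NVMET = Path("/sys/kernel/config/nvmet")
    NQN = "nqn.2024-01.io.example:subsys0"   # hypothetical subsystem name
    BLOCKDEV = "/dev/nvme0n1"                # hypothetical backing device
    ADDR, PORT_ID = "192.168.1.10", "1"

    def make_target():
        subsys = NVMET / "subsystems" / NQN
        subsys.mkdir(parents=True)
        (subsys / "attr_allow_any_host").write_text("1\n")

        ns = subsys / "namespaces" / "1"
        ns.mkdir(parents=True)
        (ns / "device_path").write_text(BLOCKDEV + "\n")
        (ns / "enable").write_text("1\n")

        port = NVMET / "ports" / PORT_ID
        port.mkdir(parents=True)
        (port / "addr_trtype").write_text("rdma\n")
        (port / "addr_adrfam").write_text("ipv4\n")
        (port / "addr_traddr").write_text(ADDR + "\n")
        (port / "addr_trsvcid").write_text("4420\n")

        # Expose the subsystem on the RDMA port
        (port / "subsystems" / NQN).symlink_to(subsys)

    if __name__ == "__main__":
        make_target()

An initiator would then typically attach with nvme-cli, e.g. nvme connect -t rdma -a 192.168.1.10 -s 4420 -n nqn.2024-01.io.example:subsys0.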

As with earlier generations of ConnectX adapters, standard block and file access protocols can leverage RoCE for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.

ConnectX-5 enables an innovative storage rack design, Host Chaining, by which different servers can interconnect directly without involving the Top of Rack (ToR) switch. Alternatively, the Multi-Host technology first introduced with ConnectX-4 can be used. Mellanox Multi-Host™ technology, when enabled, allows multiple hosts to be connected to a single adapter by separating the PCIe interface into multiple independent interfaces. With the various new rack design alternatives, ConnectX-5 lowers the total cost of ownership (TCO) in the data center by reducing CAPEX (cable, NIC, and switch port expenses) and by reducing OPEX through less switch port management and lower overall power usage.
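As a hedged sketch only: Host Chaining is a firmware-level option on ConnectX-5 adapters and is commonly toggled with NVIDIA's mlxconfig utility; the snippet below simply wraps that in Python. The MST device path and the HOST_CHAINING_MODE parameter name are assumptions and should be checked against the adapter's own mlxconfig query output for the installed firmware.

    #!/usr/bin/env python3
    """Sketch: query and (optionally) enable Host Chaining on a ConnectX-5
    through mlxconfig (part of the NVIDIA MFT tools). Device path and
    parameter name are assumptions that may vary by firmware release; a
    reboot or firmware reset is needed before a new setting takes effect."""
    import subprocess

    DEVICE = "/dev/mst/mt4119_pciconf0"   # hypothetical MST device path

    def query_config():
        # Print the current firmware configuration for the adapter
        subprocess.run(["mlxconfig", "-d", DEVICE, "query"], check=True)

    def enable_host_chaining():
        # Assumed parameter name; mlxconfig may prompt for confirmation
        subprocess.run(
            ["mlxconfig", "-d", DEVICE, "set", "HOST_CHAINING_MODE=1"],
            check=True,
        )

    if __name__ == "__main__":
        query_config()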

Cloud and Web 2.0 Environments

Cloud and Web 2.0 customers that are developing their platforms on Software Defined Network (SDN) environments are leveraging their servers' operating system virtual-switching capabilities to enable maximum flexibility.
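One common way those virtual-switching capabilities are paired with NIC offload on Linux is Open vSwitch hardware offload, which the card's vSwitch/vRouter offloads are designed to accelerate. The sketch below shows the generic knobs usually involved (devlink switchdev mode, TC hardware offload, and the OVS hw-offload option); the interface name and PCI address are illustrative assumptions, and the exact procedure for a given deployment should follow the vendor and OVS documentation.

    #!/usr/bin/env python3
    """Sketch: enable the generic Linux/OVS knobs commonly used with NIC
    vSwitch offload: switch the physical function to switchdev mode, turn on
    TC hardware offload, and enable OVS hardware offload. Interface and PCI
    names are illustrative assumptions."""
    import subprocess

    PF_PCI = "0000:3b:00.0"   # hypothetical PCI address of the adapter
    PF_NETDEV = "enp59s0f0"   # hypothetical netdev name

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def enable_offload():
        # Put the physical function into switchdev mode (standard devlink tooling)
        run(["devlink", "dev", "eswitch", "set", f"pci/{PF_PCI}", "mode", "switchdev"])
        # Allow TC flower rules to be offloaded to the NIC
        run(["ethtool", "-K", PF_NETDEV, "hw-tc-offload", "on"])
        # Tell Open vSwitch to push datapath flows down to the hardware
        run(["ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true"])
        # Restart OVS to apply (service name varies by distribution)
        run(["systemctl", "restart", "openvswitch-switch"])

    if __name__ == "__main__":
        enable_offload()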
Price: $1,571.99 USD
