Lenovo ThinkSystem Mellanox ConnectX-6 HDR100/100GbE QSFP56 2-port PCIe VPI Adapter
SKU: 1059929058 MFR: Lenovo Group Limited
GTIN: 889488491384
Product Description
Lenovo ThinkSystem Mellanox ConnectX-6 HDR100/100GbE QSFP56 2-port PCIe VPI Adapter - PCI Express 4.0 x16 - 100 Gbit/s per port - 2 x QSFP56 InfiniBand/Ethernet ports - Plug-in Card
Manufacturer : Lenovo Group Limited
Manufacturer Part No : 4C57A14178
Features
- InfiniBand host bus adapter - Delivers industry-leading bandwidth with ultra-low latency and efficient computing for performance-driven server and storage clustering applications
- The PCI Express 4.0 x16 host interface provides the bandwidth needed to drive both ports at full rate
- A 100 Gbit/s per-port data rate maximizes system throughput for bandwidth-intensive workloads
Product Details
ThinkSystem Servers:
- SR630 V2 (7Z70 / 7Z71)
- SR650 V2 (7Z72 / 7Z73)
- SR670 V2 (7Z22 / 7Z23)
- SR635 (7Y98 / 7Y99)
- SR655 (7Y00 / 7Z01)
- SR645 (7D2Y / 7D2X)
- SR665 (7D2W / 7D2V)
- SR850 V2 (7D31 / 7D32)
- SR860 V2 (7Z59 / 7Z60)
- SR950 (7X11 / 7X12)
- SR630 (7X01 / 7X02)
- SR650 (7X05 / 7X06)
- SR670 (7Y36 / 7Y37)
- SR850 (7X18 / 7X19)
- SR850P (7D2F / 7D2G)
- SR860 (7X69 / 7X70)
PCI Express Interface
- PCIe 4.0 x16 host interface (also supports a PCIe 3.0 host interface)
- Support for PCIe x1, x2, x4, x8, and x16 configurations
- PCIe Atomic
- TLP (Transaction Layer Packet) Processing Hints (TPH)
- PCIe switch Downstream Port Containment (DPC) enablement for PCIe hot-plug
- Advanced Error Reporting (AER)
- Access Control Service (ACS) for peer-to-peer secure communication
- Process Address Space ID (PASID) Address Translation Services (ATS)
- IBM CAPIv2 (Coherent Accelerator Processor Interface)
- Support for MSI/MSI-X mechanisms
Connectivity
- One or two QSFP56 ports
- Supports passive copper cables with ESD protection
- Powered connectors for optical and active cable support
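As a VPI (Virtual Protocol Interconnect) adapter, each QSFP56 port can operate in either InfiniBand or Ethernet mode. A hedged illustration of how port mode is typically selected with NVIDIA/Mellanox's mlxconfig tool (the device path is an assumption; list devices with `mst status` after `mst start`, and consult the MFT documentation for your firmware):

```shell
# Query the current port configuration
mlxconfig -d /dev/mst/mt4123_pciconf0 query

# Set port 1 to Ethernet (2) and port 2 to InfiniBand (1);
# a reboot or firmware reset is required for the change to apply
mlxconfig -d /dev/mst/mt4123_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=1
```

Both ports default to InfiniBand on VPI adapters; Ethernet operation additionally requires the firmware level noted in the Ethernet section below.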
InfiniBand
- Supports interoperability with InfiniBand switches (up to HDR100)
- When used in a PCIe 3.0 slot, total connectivity is up to 100 Gb/s:
- One-port adapter supports a single 100 Gb/s link
- Two-port adapter supports two connections of 50 Gb/s each, or one active 100 Gb/s link with the other port as standby
- When used in a PCIe 4.0 slot, total connectivity is up to 200 Gb/s:
- One-port adapter supports a single 100 Gb/s link
- Two-port adapter supports two connections of 100 Gb/s each
- HDR100 / EDR / FDR / QDR / DDR / SDR
- IBTA Specification 1.3 compliant
- RDMA Send/Receive semantics
- Hardware-based congestion control
- Atomic operations
- 16 million I/O channels
- MTU from 256 bytes to 4 KB; messages up to 2 GB
- 8 virtual lanes + VL15
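The PCIe 3.0 vs 4.0 connectivity limits above follow directly from raw PCIe link arithmetic. A quick sketch (assuming 128b/130b line encoding, which both generations use, and ignoring protocol overhead):

```python
def pcie_gbps(transfer_rate_gt: float, lanes: int, encoding: float = 128 / 130) -> float:
    """Approximate usable one-direction PCIe bandwidth in Gb/s."""
    return transfer_rate_gt * lanes * encoding

# PCIe 3.0 runs at 8 GT/s per lane, PCIe 4.0 at 16 GT/s per lane
gen3_x16 = pcie_gbps(8, 16)    # ~126 Gb/s: enough for one HDR100 port
gen4_x16 = pcie_gbps(16, 16)   # ~252 Gb/s: enough for two HDR100 ports
print(round(gen3_x16), round(gen4_x16))  # prints "126 252"
```

This is why a Gen3 x16 slot caps the two-port adapter at 100 Gb/s aggregate, while a Gen4 x16 slot can feed both ports at full HDR100 rate.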
Ethernet (requires firmware 20.28.1002 or later)
- Supports interoperability with Ethernet switches (up to 100GbE, as 2 lanes of 50 Gb/s data rate)
- When used in a PCIe 3.0 slot, total connectivity is up to 100 Gb/s:
- One-port adapter supports a single 100 Gb/s link
- Two-port adapter supports two connections of 50 Gb/s each, or one active 100 Gb/s link with the other port as standby
- When used in a PCIe 4.0 slot, total connectivity is up to 200 Gb/s:
- One-port adapter supports a single 100 Gb/s link
- Two-port adapter supports two connections of 100 Gb/s each
- Supports 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
- Ethernet speed must be set; auto-negotiation is currently not supported (planned for a later firmware update)
- IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet
- IEEE 802.3by, Ethernet Consortium 25/50 Gigabit Ethernet, supporting all FEC modes
- IEEE 802.3ba 40 Gigabit Ethernet
- IEEE 802.3ae 10 Gigabit Ethernet
- IEEE 802.3az Energy Efficient Ethernet
- IEEE 802.3ap based auto-negotiation and KR startup (planned for a later firmware update)
- IEEE 802.3ad, 802.1AX Link Aggregation
- IEEE 802.1Q, 802.1P VLAN tags and priority
- IEEE 802.1Qau (QCN) - Congestion Notification
- IEEE 802.1Qaz (ETS)
- IEEE 802.1Qbb (PFC)
- IEEE 802.1Qbg
- IEEE 1588v2
- Jumbo frame support (up to 9.6 KB)
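Because auto-negotiation is not yet supported in Ethernet mode, the link speed must be pinned explicitly on the host. A hedged sketch using the standard Linux ethtool and iproute2 tools (the interface name ens1f0 is an assumption; check `ip link` for your system):

```shell
# Disable autoneg and pin the port to 100 Gb/s
# (interface name ens1f0 is an assumption)
ethtool -s ens1f0 speed 100000 duplex full autoneg off

# Verify the link speed took effect
ethtool ens1f0 | grep -i speed

# Optionally enable 9000-byte jumbo frames (adapter supports up to 9.6 KB)
ip link set dev ens1f0 mtu 9000
```

The switch port on the far end must be configured to the same fixed speed, since neither side will negotiate.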
Enhanced Features
- Hardware-based reliable transport
- Collective operations offloads
- Vector collective operations offloads
- PeerDirect RDMA (GPUDirect) communication acceleration
- 64b/66b encoding
- Enhanced Atomic operations
- Advanced memory mapping support allowing user mode registration and remapping of memory (UMR)
- Extended Reliable Connected transport (XRC)
- Dynamically Connected transport (DCT)
- On demand paging (ODP)