NVIDIA® ConnectX®-5 InfiniBand adapter cards provide a high-performance and flexible solution with up to two ports of 100Gb/s InfiniBand and Ethernet connectivity, low latency, and a high message rate, plus an embedded PCIe switch and NVMe over Fabrics offloads. These intelligent remote direct memory access (RDMA)-enabled adapters provide advanced application offload capabilities for high-performance computing (HPC), cloud, hyperscale, and storage platforms.
ConnectX-5 adapter cards for PCIe Gen3 and Gen4 servers are available as stand-up PCIe cards and Open Compute Project (OCP) Spec 2.0 form factors. Selected models also offer NVIDIA Multi-Host™ and NVIDIA Socket Direct™ technologies.
Basic information
Model: MCX556A-EDAT
Brand: Mellanox (NVIDIA)
Adapter type: VPI (Virtual Protocol Interconnect) dual-protocol adapter card
Bus interface: PCIe 3.0 x16 (compatible with PCIe 4.0 x16)
Cooling: passive cooling (fanless)
Network protocol
Supported protocols: EDR InfiniBand (100Gb/s) + 100GbE Ethernet
Port configuration: dual-port QSFP28 (backward compatible with QSFP+/QSFP)
Transmission speed: 100Gb/s per port, 200Gb/s aggregate bandwidth
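Note that the 200Gb/s dual-port aggregate exceeds the raw line rate of a PCIe Gen3 x16 slot, which is worth keeping in mind when sizing the host slot (and is one reason Gen4 and Socket Direct configurations matter for this card). A rough back-of-the-envelope sketch; the figures are theoretical link rates after line encoding, not measured throughput, and real throughput is lower still due to protocol overhead:

```python
# Sketch: comparing raw PCIe slot bandwidth against the card's 2x100Gb/s aggregate.
# Theoretical line rates only; TLP/flow-control overhead reduces real throughput.

def pcie_raw_gbps(gt_per_s: float, lanes: int, encoding: float) -> float:
    """Raw usable line rate in Gb/s after line-encoding overhead."""
    return gt_per_s * encoding * lanes

gen3_x16 = pcie_raw_gbps(8.0, 16, 128 / 130)   # PCIe 3.0 uses 128b/130b encoding
gen4_x16 = pcie_raw_gbps(16.0, 16, 128 / 130)  # PCIe 4.0 doubles the transfer rate
aggregate_ports = 2 * 100.0                    # dual-port 100Gb/s

print(f"PCIe 3.0 x16: {gen3_x16:.1f} Gb/s")    # ~126 Gb/s
print(f"PCIe 4.0 x16: {gen4_x16:.1f} Gb/s")    # ~252 Gb/s
print(f"Card aggregate: {aggregate_ports:.0f} Gb/s")
print("Gen3 slot is the bottleneck:", gen3_x16 < aggregate_ports)
```

In a Gen3 host, both ports cannot be driven at full rate simultaneously; a Gen4 x16 slot provides ample headroom.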
Physical attributes
Dimensions: full-height bracket design (dimensions not specified)
Power consumption: not specified (typically 10-15W)
Performance characteristics
Hardware acceleration:
- Collective operations offload (MPI tag matching / AlltoAll offload)
- ASAP2 virtual switch acceleration
Data integrity: T10 DIF data integrity verification
Congestion control: hardware congestion control (HCC)
Connectivity: RDMA / RoCE v2 support
Application scenarios
High-performance computing (HPC): AI training, distributed storage, massively parallel computing
Data center: NFV (network functions virtualization), low-latency cloud storage
Compatibility
Operating system support: Linux, Windows, VMware
Certification: OCP 2.0 compliance
Product features:
Tag matching and rendezvous offloads
Adaptive routing on reliable transport
Burst buffer offloads for background checkpointing
NVMe over Fabrics (NVMe-oF) offloads
Embedded PCIe switch
Enhanced vSwitch/vRouter offloads
RoCE for overlay networks
PCIe Gen 4.0 support
RoHS compliant
ODCC compatible
Product advantages:
Up to 100Gb/s connectivity per port.
Industry-leading throughput, low latency, low CPU utilization, and high message rate.
Innovative rack design for storage and machine learning based on Host Chaining technology.
Smart interconnect for x86, Power, and GPU-based compute and storage platforms.
Advanced storage capabilities, including NVMe over Fabrics offloads.
Intelligent network adapter supporting flexible pipeline programmability.
Cutting-edge performance in virtualized networks, including network function virtualization (NFV).
Enabler of efficient service chaining capabilities.
Efficient I/O consolidation, lowering data center costs and complexity.
After Sales Service
Q: Are you a trading company or a manufacturer?
A: We are a factory.
Q: How long is your delivery time?
A: Generally 2-5 days if the goods are in stock, or about 10 days if they are not, depending on the quantity.
Q: Do you have an MOQ?
A: For stock products we have no MOQ. For custom products we ask for a low MOQ.
Q: Can you offer a better price? Any discount?
A: Yes, certainly. We are the source factory and provide reasonable prices.
Q: What is your warranty period?
A: Warranty periods vary by product. The warranty does not cover damage caused by misuse, improper care, negligence, normal wear and tear, or force majeure such as natural disasters, earthquakes, and fires.