HPE 200Gb HDR QSFP56 2P OCP3 x16 Ethernet/InfiniBand Network Adapter NIC PCIe4

This listing is for 1 (one) HPE Mellanox/NVIDIA dual-port 200GbE (Ethernet) and HDR (InfiniBand) QSFP56 OCP3 network adapter.

Please note: This adapter requires an OCP 3.0 slot.

It will work in any server with an OCP 3.0 slot; the server does not have to be HPE.

We can integrate this adapter into a full solution; please contact us.

Please note: In HPE ProLiant DL Gen10 Plus and Gen10 Plus V2 servers, this adapter requires the additional OCP x16 bandwidth upgrade cable to achieve 200Gb/s rates on both ports.

We have these cables; please see here: https://www.ebay.com/itm/185442614289
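For rough context on why the x16 connection matters, below is a minimal throughput sketch (an illustration only, assuming PCIe Gen4 signaling at 16 GT/s per lane with 128b/130b encoding and ignoring protocol overhead; actual rates depend on the server configuration):

```python
# Approximate PCIe Gen4 throughput by lane count (illustrative assumption:
# 16 GT/s per lane, 128b/130b line coding, protocol overhead ignored).
GT_PER_LANE = 16.0          # PCIe Gen4 signaling rate, GT/s per lane
ENCODING = 128.0 / 130.0    # 128b/130b line-code efficiency

def pcie_gbps(lanes: int) -> float:
    """Rough one-direction PCIe throughput in Gb/s."""
    return lanes * GT_PER_LANE * ENCODING

print(f"x8  Gen4 ~ {pcie_gbps(8):.0f} Gb/s")   # ~126 Gb/s, below a single 200Gb/s port
print(f"x16 Gen4 ~ {pcie_gbps(16):.0f} Gb/s")  # ~252 Gb/s, enough for one port at line rate
```

An x8 link alone cannot feed even one port at 200Gb/s, and with both ports active the full x16 path provided by the upgrade cable is needed for 200Gb/s rates.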

Please contact us with any questions

Thank you

AS
LOC: OFC-ST-R1-S6

Manufacturer specifications:

MCX653436A-HDAI Specifications

Please make sure to install the ConnectX-6 OCP 3.0 card in an OCP 3.0 slot that is capable of supplying 80W.

Physical

Size: 2.99 in. x 4.52 in. (76.00mm x 115.00mm)

Connector: Dual QSFP56 InfiniBand and Ethernet (copper and optical)
Retention Mechanism: Internal Lock
Protocol Support

Ethernet: 200GBASE-CR4, 200GBASE-KR4, 200GBASE-SR4, 100GBASE-CR4, 100GBASE-KR4, 100GBASE-SR4, 50GBASE-R2, 50GBASE-R4, 40GBASE-CR4, 40GBASE-KR4, 40GBASE-SR4, 40GBASE-LR4, 40GBASE-ER4, 40GBASE-R2, 25GBASE-R, 20GBASE-KR2, 10GBASE-LR, 10GBASE-ER, 10GBASE-CX4, 10GBASE-CR, 10GBASE-KR, SGMII, 1000BASE-CX, 1000BASE-KX, 10GBASE-SR

InfiniBand: IBTA v1.3
Auto-Negotiation(a): 1X/2X/4X SDR (2.5Gb/s per lane), DDR (5Gb/s per lane), QDR (10Gb/s per lane), FDR10 (10.3125Gb/s per lane), FDR (14.0625Gb/s per lane), EDR (25Gb/s per lane) port, HDR100 (2 lanes x 50Gb/s per lane), HDR (50Gb/s per lane) port

Data Rate:
Ethernet: 1/10/25/40/100/200 Gb/s
InfiniBand: SDR/DDR/QDR/FDR/EDR/HDR100/HDR
PCI Express Gen 4.0: SERDES @ 16.0GT/s, 16 lanes (3.0 and 1.1 compatible)
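As a quick cross-check of the rates above: each InfiniBand port speed is simply the per-lane rate from the Auto-Negotiation list multiplied by the lane count (a sketch using the figures quoted in this spec):

```python
# Lane-rate arithmetic for the InfiniBand speeds above (per-lane rates taken
# from the Auto-Negotiation entry; a QSFP56 port carries 4 lanes, HDR100 uses 2).
lanes_and_rate_gbps = {
    "EDR":    (4, 25.0),   # 4 lanes x 25 Gb/s
    "HDR100": (2, 50.0),   # 2 lanes x 50 Gb/s
    "HDR":    (4, 50.0),   # 4 lanes x 50 Gb/s
}

for speed, (lanes, per_lane) in lanes_and_rate_gbps.items():
    print(f"{speed}: {lanes} x {per_lane:g} Gb/s = {lanes * per_lane:g} Gb/s per port")
# EDR -> 100 Gb/s, HDR100 -> 100 Gb/s, HDR -> 200 Gb/s per port
```
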
Power and Airflow

Voltage: 3.3VAUX, 12V

Power (Passive Cables):
Typical Power(b): 21.4W (Active Mode) / 6.6W (STBY Mode)
Maximum Power: 26.7W (Active Mode) / 10.45W (STBY Mode)
Maximum power available through QSFP56 port: 4.55W

Airflow(d): specified per cable type for Hot Aisle (HSK to Port; Active Mode @55°C, STBY Mode @45°C) and Cold Aisle @35°C (Port to HSK; Active Mode, STBY Mode)

Environmental

Temperature:
Operational: 0°C to 55°C
Non-operational: -40°C to 70°C

Humidity:
Operational: 10% to 85% relative humidity
Non-operational: 10% to 90% relative humidity

Altitude (Operational): 3050m
Regulatory

Safety: CB / cTUVus / CE
EMC: CE / FCC / VCCI / ICES / RCM
RoHS: RoHS compliant


a. ConnectX-6 adapters supplement the IBTA auto-negotiation specification to get better bit error rates and longer cable reaches. This supplemental feature only initiates when connected to another NVIDIA InfiniBand product. 
b. Typical power for ATIS traffic load.
c. For both operational and non-operational states.
d. Airflow numbers are measured while using NVIDIA HDR optic cable. The maximum allowed temperature (internal sensor) for NVIDIA HDR optic cable is 75°C.

