
NVIDIA ConnectX-7 OCP NIC 3.0 Server Solutions

Overview of the 2-port 200GbE NVIDIA ConnectX-7 OCP NIC 3.0

We will look at the NVIDIA ConnectX-7 in the OCP NIC 3.0 form factor. It is a PCIe Gen5 x16 card, so it has the bandwidth to support two 200GbE or NDR200 InfiniBand ports. Instead of a standard PCIe add-in card, it uses the OCP NIC 3.0 form factor adopted by most server manufacturers. Let's move on to the hardware.

The main feature of the ConnectX-7 is its two QSFP112 ports, each capable of connecting to the network at up to 200 Gbps. Because the CX753436M is a VPI card, the ports can carry either Ethernet or NDR200 InfiniBand traffic.
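As a quick sanity check on a VPI card, you can see which mode each port is currently running in from Linux sysfs. The sketch below assumes the mlx5 driver is loaded; the sysfs layout is what the driver normally exposes, but device names such as "mlx5_0" will differ on your system.

```python
#!/usr/bin/env python3
"""Minimal sketch: report whether each ConnectX port is currently
running Ethernet or InfiniBand. Assumes a Linux host with the mlx5
driver loaded; device names are illustrative."""
from pathlib import Path

IB_SYSFS = Path("/sys/class/infiniband")

def list_port_modes() -> None:
    if not IB_SYSFS.exists():
        print("No RDMA-capable devices found")
        return
    # Each device (e.g. mlx5_0) exposes one entry per physical port.
    for dev in sorted(IB_SYSFS.iterdir()):
        for port in sorted((dev / "ports").iterdir()):
            link_layer = (port / "link_layer").read_text().strip()  # "Ethernet" or "InfiniBand"
            rate = (port / "rate").read_text().strip()               # e.g. "200 Gb/sec"
            print(f"{dev.name} port {port.name}: {link_layer}, {rate}")

if __name__ == "__main__":
    list_port_modes()
```

Switching a port between InfiniBand and Ethernet is normally a firmware configuration change (for example with NVIDIA's mlxconfig LINK_TYPE_P1/P2 settings) followed by a reboot.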

This card uses a pull-tab design, which is our preferred OCP NIC 3.0 format. Cloud providers favor it because a card can be swapped without opening the chassis. Many large legacy server vendors that charge service contract premiums use different OCP NIC 3.0 latching mechanisms and faceplates for their cards.

It is worth noting that the QSFP112 cages have small heatsink fins on the side. The main challenge with the OCP NIC 3.0 design is that as network speeds increase, the modules generate more heat that must be dissipated. These small heatsinks, along with higher quality NVIDIA optics, tend to help the optics last longer. Readers have told us that pairing cheap high-speed optics with a card like this can end in disaster. Use higher quality optics or DACs with such cards, because the air inside a server is usually preheated before it reaches the optics.
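If you do run optics in these cages, it is worth keeping an eye on module temperature. The sketch below simply shells out to `ethtool -m`, which dumps the transceiver EEPROM; the interface name is illustrative and the exact temperature field label can vary by module type.

```python
#!/usr/bin/env python3
"""Minimal sketch: read the transceiver module temperature via `ethtool -m`.
The interface name "ens1f0np0" is illustrative; the temperature field
label varies between SFF-8636 and CMIS module EEPROMs."""
import re
import subprocess

def module_temperature(iface: str) -> float | None:
    out = subprocess.run(["ethtool", "-m", iface],
                         capture_output=True, text=True, check=True).stdout
    # Look for a line such as "Module temperature : 48.57 degrees C ..."
    match = re.search(r"temperature\s*:\s*([\d.]+)\s*degrees C", out, re.IGNORECASE)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    temp = module_temperature("ens1f0np0")
    print(f"Module temperature: {temp} °C" if temp is not None
          else "No temperature field reported (DAC or unsupported module)")
```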

We also see the PCIe Gen5 x16 edge connector, which provides full bandwidth to this card. Note that in each generation of Mellanox, and now NVIDIA, adapters there are often cards whose ports cannot run at full speed simultaneously, so order codes/part numbers and server design matter. Some servers only route x8 lanes to the OCP NIC 3.0 slot, so be careful.
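Because some servers only wire x8 lanes to the OCP NIC 3.0 slot, it is worth verifying what link the card actually trained at. Below is a minimal sketch using standard Linux PCI sysfs attributes; the interface name is illustrative.

```python
#!/usr/bin/env python3
"""Minimal sketch: confirm the NIC trained at PCIe Gen5 x16.
Assumes a Linux host; the interface name "ens1f0np0" is illustrative."""
from pathlib import Path

def pcie_link(iface: str) -> None:
    dev = Path(f"/sys/class/net/{iface}/device")
    for attr in ("current_link_speed", "max_link_speed",
                 "current_link_width", "max_link_width"):
        print(f"{attr}: {(dev / attr).read_text().strip()}")

if __name__ == "__main__":
    # A Gen5 x16 slot should report "32.0 GT/s PCIe" and width 16;
    # an x8 OCP slot will show width 8 and halve the available bandwidth.
    pcie_link("ens1f0np0")
```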

NVIDIA ConnectX-7 adapters are about more than just port speed. Being able to run on Ethernet or InfiniBand means they offer many features for RDMA (including GPUDirect RDMA), storage acceleration, overlay networks such as VXLAN, GENEVE, and NVGRE, and network offloads that will matter even more for systems like the upcoming NVIDIA Oberon platform. There are far too many features to cover here, so it is worth reviewing them yourself. As these devices reach PCIe-limited speeds, offloading becomes increasingly important because it keeps valuable CPU and GPU resources from being wasted on network processing.
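On the Ethernet side, the kernel exposes which offloads the driver has enabled via `ethtool -k`. Below is a minimal sketch that dumps those flags and picks out a few tunnel-related ones; the feature names are standard kernel feature strings, the interface name is illustrative, and what is actually enabled depends on driver and firmware.

```python
#!/usr/bin/env python3
"""Minimal sketch: list NIC offload flags reported by `ethtool -k` and
highlight a few tunnel/offload features. Interface name is illustrative."""
import subprocess

INTERESTING = (
    "tx-udp_tnl-segmentation",   # VXLAN/GENEVE TSO offload
    "tx-checksum-ip-generic",    # generic IP checksum offload
    "rx-gro-hw",                 # hardware GRO
    "hw-tc-offload",             # TC flower hardware offload
)

def offloads(iface: str) -> dict[str, str]:
    out = subprocess.run(["ethtool", "-k", iface],
                         capture_output=True, text=True, check=True).stdout
    feats = {}
    for line in out.splitlines():
        if ":" in line:
            name, state = line.split(":", 1)
            feats[name.strip()] = state.strip()
    return feats

if __name__ == "__main__":
    feats = offloads("ens1f0np0")
    for name in INTERESTING:
        print(f"{name}: {feats.get(name, 'not reported')}")
```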

How can we help?

More detailed information about the DELL PowerEdge R760 server with DDR5 4800 or the DELL PowerEdge R750 server with DDR4 3200 is available on our website under SERVER SOLUTIONS. To find out the cost of a server, go to the DELL Server Configurator link.
