HPE advertises support for 270 ports via dual HDR200 InfiniBand fabrics — but that seems to clash with practical hardware limits:

  • A single ConnectX-7 card supports up to 16 HDR ports.
  • To get to 270, you’d need 17 NICs, which most server chassis simply can’t accommodate.
  • Even PCIe 4.0 x16 has a ceiling of roughly 32 GB/s, which doesn’t come close to covering 270 ports at full line rate (the dual HDR200 fabrics alone would aggregate to ~50 GB/s). A quick sanity check on these numbers is sketched after this list.

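To make the mismatch concrete, here is the back-of-the-envelope arithmetic behind those bullets (a rough sketch using the figures quoted above; the per-card port count and line rates are the assumptions being questioned, not confirmed NS9 X5 specs):

```python
# Figures taken from the bullets above (the poster's assumptions, not verified specs).
PORTS_ADVERTISED = 270        # HPE's "up to 270 ports per system" figure
PORTS_PER_NIC = 16            # assumed per-card port count from the first bullet
FABRIC_GBPS = 2 * 200         # dual HDR200 fabrics at 200 Gb/s each
PCIE4_X16_GBPS = 256          # roughly 32 GB/s usable on a PCIe 4.0 x16 slot

nics_needed = -(-PORTS_ADVERTISED // PORTS_PER_NIC)   # ceiling division -> 17
fabric_gbytes = FABRIC_GBPS / 8                       # ~50 GB/s aggregate
pcie_gbytes = PCIE4_X16_GBPS / 8                      # ~32 GB/s

print(f"NICs needed at {PORTS_PER_NIC} ports each: {nics_needed}")
print(f"Dual-fabric aggregate: {fabric_gbytes:.0f} GB/s vs PCIe 4.0 x16: {pcie_gbytes:.0f} GB/s")
```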
So we’re wondering:

  • Are these ports virtualized (SR-IOV, NPAR, etc.)? (A quick way to check on a Linux host is sketched after this list.)
  • Does the platform rely on an external switch fabric like SN3700, with the NS9 X5 acting more like a termination point?
  • Is there any official doc or performance benchmark (like ib_write_bw) confirming how this scales in real deployments?
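For what it’s worth, on a generic Linux host with the same family of adapters you can at least distinguish physical ports from SR-IOV virtual functions by reading the standard sysfs counters, and perftest’s ib_write_bw gives per-link bandwidth. A minimal sketch (assumes the usual Linux sysfs layout and the perftest package; nothing here is NonStop-specific):

```python
import glob
import os

def read_attr(pci_dir, attr):
    """Read a sysfs attribute if present, else return 'n/a'."""
    path = os.path.join(pci_dir, attr)
    return open(path).read().strip() if os.path.exists(path) else "n/a"

# List each InfiniBand/RoCE device and how many SR-IOV virtual functions it
# exposes. A large VF count on a card with only a couple of physical cages
# suggests the advertised "ports" are virtual functions, not physical ports.
for dev in sorted(glob.glob("/sys/class/infiniband/*")):
    pci_dir = os.path.realpath(os.path.join(dev, "device"))
    vfs_in_use = read_attr(pci_dir, "sriov_numvfs")
    vfs_total = read_attr(pci_dir, "sriov_totalvfs")
    print(f"{os.path.basename(dev)}: SR-IOV VFs {vfs_in_use}/{vfs_total} "
          f"(PCI {os.path.basename(pci_dir)})")

# For raw link bandwidth, ib_write_bw runs as a server when started with no
# peer address and as a client when given one, e.g.:
#   server$ ib_write_bw -d mlx5_0
#   client$ ib_write_bw -d mlx5_0 <server_ip>
# (device names such as mlx5_0 come from `ibstat`)
```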

Using this doc as a reference: https://www.hpe.com/psnow/doc/4aa4-2988enw?jumpid=in_pdfviewer-psnow

Ethernet ports per networking I/O controller: Four 25G SR MMF (fiber) or four 10G SR MMF (fiber) or four 10GBASE-T (copper) and one 1000BASE-T. Up to 270 networking ports per system.
Max 56 I/O controllers per system

So I’m not quite sure why it’s only 270 when 56 × 5 = 280, but presumably there is some strange limit somewhere.

The key is to watch their use of ‘system’ and ‘CPU’ throughout the NonStop product literature.
‘System’ refers to a node made up of multiple compute CPUs, while a ‘compute CPU’ is what you would traditionally think of as a server.
So in other words, the 270-port figure applies to a maximum-size system node of 16 individual NS9 X5 units.
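Putting those datasheet numbers side by side (a rough sketch; the five-ports-per-controller count is just the four data ports plus the 1000BASE-T port from the quote above, and the per-CPU figure is a simple average, not a per-unit limit):

```python
# Figures from the HPE datasheet quoted earlier in the thread.
CONTROLLERS_PER_SYSTEM = 56
PORTS_PER_CONTROLLER = 4 + 1     # four data ports plus one 1000BASE-T
DOCUMENTED_PORT_CAP = 270        # "up to 270 networking ports per system"
CPUS_PER_SYSTEM = 16             # max-size NonStop system node

raw_ports = CONTROLLERS_PER_SYSTEM * PORTS_PER_CONTROLLER    # 280
shortfall = raw_ports - DOCUMENTED_PORT_CAP                  # 10 ports unaccounted for
avg_per_cpu = DOCUMENTED_PORT_CAP / CPUS_PER_SYSTEM          # ~16.9 ports per CPU

print(f"Fully populated controller ports: {raw_ports}")
print(f"Documented cap: {DOCUMENTED_PORT_CAP} ({shortfall} ports short of 56 x 5)")
print(f"Average across a {CPUS_PER_SYSTEM}-CPU system: {avg_per_cpu:.1f} ports per CPU")
```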
