
Technical specifications

Summary of the technical specifications of the solution. Depending on the model, some features might not be available, or some specifications might not apply.

Note
  • Up to six SD665-N V3 trays can be installed in the DW612S 6U enclosure. For more information on the number of trays in the enclosure, see GPU power and maximum number of trays in the enclosure.

  • The SD665-N V3 tray contains one compute node on the right and one GPU node on the left (when viewed from the front of the DW612S enclosure).

  • The GPU node contains the NVIDIA HGX H100 4-GPU board and the network board (four ConnectX-7 chipsets).

Processor
Compute node
  • Supports up to two 4th Gen AMD® EPYC™ processors per node.

  • Supports processors with up to 96 cores and configurable TDP ratings up to 400W.

  • Up to 4 XGMI links at up to 32 GT/s

  • SP5 socket (LGA 6096)

Note
  1. Use the Setup Utility to determine the type and speed of the processors in the node. (A Linux-based check is sketched after this note.)
  2. For a list of supported processors, see Lenovo ServerProven website.
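
In addition to the Setup Utility, a booted Linux OS can report the installed processor type, socket count, and core count. The following is a minimal sketch that parses /proc/cpuinfo; it is an illustration only, not a Lenovo tool, and it assumes a standard x86 Linux /proc layout.

# Illustrative only: confirm processor model, socket count, and cores
# from a booted Linux OS by parsing /proc/cpuinfo (x86 layout assumed).

def cpu_summary(path="/proc/cpuinfo"):
    models, sockets, cores = set(), set(), 0
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "model name":
                models.add(value)
            elif key == "physical id":
                sockets.add(value)
            elif key == "cpu cores":
                cores = int(value)  # cores per socket
    return models, len(sockets), cores

models, socket_count, cores_per_socket = cpu_summary()
print("Model(s):", ", ".join(sorted(models)))
print(f"Sockets: {socket_count}, cores per socket: {cores_per_socket}")
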
Memory

See Memory module installation rules and order for detailed information about memory configuration and setup.

Compute node:
  • Slots:
    • 24 DIMM slots per node, 12 DIMMs per processor.

  • Type:
    • Lenovo DDR5 at up to 4800 MT/s

  • Protection:
    • ECC

  • Supports (depending on the model):
    • 16 GB, 32 GB, 64 GB, and 128 GB ECC RDIMM

    • 24 GB, 48 GB, and 96 GB ECC RDIMM

    Note
    The 128 GB RDIMM is supported, with thermal limitations.
  • Minimum:
    • 256 GB per node with sixteen 16 GB RDIMMs (8 DIMMs per processor)

  • Maximum:

    • Up to 3 TB of memory with twenty-four 128 GB RDIMMs per node (12 DIMMs per processor)

Important
  • The tray supports independent mode with the following configurations:

    • 8 DIMMs per processor, total of 16 DIMMs per node

    • 12 DIMMs per processor, total of 24 DIMMs per node

  • Before installing 24 Gb DRAM RDIMMs, update the UEFI firmware to the latest version, then remove all existing 16 Gb DRAM RDIMMs.

  • Mixing DIMM speeds is not supported. (A worked capacity check based on these rules follows this list.)
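
As a worked check of the population rules and capacities above, the sketch below computes per-node capacity for a uniform DIMM configuration. The constants mirror this section; the function and names are illustrative assumptions, not part of any Lenovo tool.

# Illustrative sketch (not a Lenovo tool) of the memory rules above:
# independent mode allows 8 or 12 DIMMs per processor, two processors
# per node, uniform RDIMM sizes from the supported list, no speed mixing.

SUPPORTED_SIZES_GB = {16, 24, 32, 48, 64, 96, 128}
ALLOWED_DIMMS_PER_PROCESSOR = {8, 12}
PROCESSORS_PER_NODE = 2

def node_capacity_gb(dimms_per_processor, size_gb):
    if dimms_per_processor not in ALLOWED_DIMMS_PER_PROCESSOR:
        raise ValueError("independent mode needs 8 or 12 DIMMs per processor")
    if size_gb not in SUPPORTED_SIZES_GB:
        raise ValueError(f"unsupported RDIMM size: {size_gb} GB")
    return dimms_per_processor * PROCESSORS_PER_NODE * size_gb

print(node_capacity_gb(8, 16))    # minimum: 256 GB per node
print(node_capacity_gb(12, 128))  # maximum: 3072 GB (3 TB) per node
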

Storage expansion
Compute node:
  • Supports up to two 7 mm simple-swap NVMe solid-state drives (SSD) per compute node.

  • Supports up to two E3.S 1T simple-swap NVMe solid-state drives (SSD) per compute node.

  • Supports one M.2 drive per node (requires the M.2 interposer assembly).

    For a list of supported M.2 drives, see Lenovo ServerProven website.

Graphics processing unit (GPU)

NVIDIA HGX H100 4-GPU board

Integrated functions and I/O connectors
Compute node:
  • An OSFP module with either two 400Gb or two 800Gb OSFP ports, connecting to four ConnectX-7 chipsets on the network board.

  • Lenovo XClarity Controller (XCC), which provides service processor control and monitoring functions, video controller, and remote keyboard, video, mouse, and remote drive capabilities. (A sketch of querying the node through XCC's Redfish service follows this list.)
  • Front operator panel

  • KVM breakout cable connector.

    The KVM breakout cable includes a VGA connector, a serial port connector, and a USB 3.0 (5 Gbps) / USB 2.0 connector. XCC mobile management is supported through the USB connector on the KVM breakout cable.

    For more information, see KVM breakout cable.

  • External LCD diagnostics handset connector

  • One Gigabit RJ45 Ethernet port, shared between operating system and Lenovo XClarity Controller.

  • Two 25Gb SFP28 ports. Port 1 is shared between operating system and Lenovo XClarity Controller.

Note

The Lenovo XClarity Controller connection is mutually exclusive between the RJ45 Ethernet connector and 25Gb SFP28 port 1.

  • Video controller (integrated into Lenovo XClarity Controller)
    • ASPEED
    • SVGA compatible video controller
    • Avocent Digital Video Compression
    • Video memory is not expandable
    Note
    Maximum video resolution is 1920 x 1200 at 60 Hz.
  • Hot-swappable System Management Module 2 (SMM2)

    Note
    See the SMM2 User Guide for more details about the System Management Module.
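
XCC exposes a standard DMTF Redfish REST service for out-of-band management. The sketch below reads each system's model and power state through that service; the host address and credentials are placeholders, and the exact resource layout can vary by firmware level, so treat this as a hedged illustration rather than a documented procedure.

import requests

XCC_HOST = "https://192.0.2.10"  # placeholder XCC management address
AUTH = ("USERID", "PASSW0RD")    # placeholder credentials

# Enumerate the systems behind this XCC and print model and power state.
resp = requests.get(f"{XCC_HOST}/redfish/v1/Systems",
                    auth=AUTH, verify=False, timeout=10)  # verify=False: lab use only
resp.raise_for_status()
for member in resp.json()["Members"]:
    system = requests.get(f"{XCC_HOST}{member['@odata.id']}",
                          auth=AUTH, verify=False, timeout=10).json()
    print(system.get("Model"), system.get("PowerState"))
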
Network
Compute node:
  • An OSFP module with either two 400Gb or two 800Gb OSFP ports, connecting to four ConnectX-7 chipsets on the network board.

  • One Gigabit Ethernet port with RJ45 connector, shared between operating system and Lenovo XClarity Controller.

  • Two 25Gb SFP28 ports. Port 1 is shared between operating system and Lenovo XClarity Controller.

Note

The Lenovo XClarity Controller connection is mutually exclusive between the RJ45 Ethernet connector and 25Gb SFP28 port 1.

Electrical input
SD665-N V3 tray installed in DW612S enclosure
  • Supports nine hot-swap 2600W power supplies.
    • Sine-wave input (50-60 Hz) required

    • Input voltage for 2600W power supplies:

      • 200-208 Vac, 240 Vdc (output up to 2400W only)

      • 208-240 Vac, 240 Vdc

    • Nine power supplies: 8+1 without oversubscription

  • Supports three hot-swap DWC 7200W power supplies.

    • Input voltage:

      • 200-208 Vac (operates at 6900W; supports up to 4 trays in the enclosure)

      • 220-240 Vac, 240 Vdc (operates at 7200W)

    • Three DWC PSUs: operate as 8+1 without oversubscription. (A power-budget sketch based on these figures follows this list.)
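
To make the 8+1 figures concrete, the arithmetic sketch below computes the usable enclosure power budget when one supply is held in reserve for redundancy. Only the ratings stated in this section are used; the function and the budget interpretation are illustrative assumptions.

# Illustrative assumption: with 8+1 redundancy and no oversubscription,
# the usable budget is what the 8 non-redundant supplies deliver, while
# the +1 unit covers a single failure.

def usable_power_w(total_units, redundant_units, per_unit_output_w):
    return (total_units - redundant_units) * per_unit_output_w

# Nine 2600W PSUs at 200-208 Vac (each limited to 2400W output):
print(usable_power_w(9, 1, 2400))  # 19200
# Nine 2600W PSUs at 208-240 Vac (full 2600W output):
print(usable_power_w(9, 1, 2600))  # 20800
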

Note
  • Refer to the SMM2 web interface for more details on the solution power status.

  • Mixing PSUs manufactured by different vendors is not supported.

Minimal configuration for debugging

SD665-N V3 tray installed in DW612S enclosure

  • One DW612S Enclosure

  • One SD665-N V3 tray

  • Two processors on the compute node

  • One NVIDIA HGX H100 4-GPU board and network board (four ConnectX-7 chipsets)

  • Two DIMMs per node, in slot 6 and slot 19 (one DIMM per processor)

  • Two CFF v4 power supplies or one DWC PSU

  • One drive of any type (if an OS is needed for debugging)

Operating systems
Supported and certified operating systems:
  • Red Hat Enterprise Linux

  • SUSE Linux Enterprise Server
