
Technical specifications

Summary of the technical specifications of the solution. Depending on the model, some features might not be available, or some specifications might not apply.

Note
  • The 6U enclosure supports up to six trays.

  • Each tray contains two nodes: one left node and one right node (when viewed from the front of the enclosure).

  • The SD650-I V3 tray contains one compute node and one GPU node.

Processor
Compute node:
  • Supports two 4th Gen Intel® Xeon® Scalable processors per node.

  • Supports processors with up to 60 cores, base speeds up to 3.7 GHz, and TDP ratings up to 350W.

  • Intel® Xeon® CPU Max processors

    For more information on Intel® Xeon® CPU Max operating system support, see Intel® Xeon® CPU Max operating systems support.

  • UPI links at higher width (x24) and speed: 12.8, 14.4, and 16 GT/s.

    • SD650-I V3 supports three UPI links.

  • New socket technology (Socket E with PCIe 5.0), LGA 4677.

Note
  1. Use the Setup Utility to determine the type and speed of the processors in the node.
  2. For a list of supported processors, see Lenovo ServerProven website.
Memory

See Memory module installation rules and order for detailed information about memory configuration and setup.

Compute node:
  • Memory type and capacity:
    • 4th Gen Intel® Xeon® Scalable processors (Sapphire Rapids, SPR)

      • Lenovo DDR5 at 4800 MT/s

      • 3DS RDIMM at 4800 MT/s

      • Capacity:

        • 32 GB and 64 GB ECC RDIMM

        • 128 GB 3DS RDIMM

    • 5th Gen Intel® Xeon® Scalable processors (Emerald Rapids, EMR)

      • Lenovo DDR5 at 5600 MT/s

      • 3DS RDIMM at 5600 MT/s

      • Capacity:

        • 32 GB and 64 GB ECC RDIMM

        • 48 GB and 96 GB ECC RDIMM (supported by both MCC and XCC processors)

        • 128 GB 3DS RDIMM

  • Protection:
    • ECC

    • SDDC (for x4-based memory DIMMs)

    • ADDDC (for x4-based memory DIMMs)

    • Memory mirroring

  • Supports (depending on the model):
    • 32 GB and 64 GB ECC RDIMM

    • 128 GB 3DS RDIMM

  • Minimum:
    • 512 GB on the compute node.

  • Maximum:

    • Up to 1 TB of memory with sixteen 64 GB RDIMMs per node.

    • Up to 2 TB of memory with sixteen 128 GB 3DS RDIMMs per node.

Important
  • The tray only supports a fully populated processor and memory configuration (two processors and 16 DIMMs per node).

  • Mixing DIMM speeds is not supported.

  • Before installing 24 Gb DRAM RDIMMs to a system with 4th Gen Intel Xeon processors (codenamed Sapphire Rapids), make sure to update the UEFI firmware to the latest version first, then remove all existing 16 Gb DRAM RDIMMs.

  • ADDDC is not supported for 9x4 ECC DIMMs (Value).
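
The memory rules above can be summarized as a simple check. The following Python sketch is illustrative only (the function and its inputs are hypothetical, not a Lenovo tool); it assumes one compute node and the DIMM options listed in this section.

```python
# Illustrative check of the memory rules above; not a Lenovo utility.
ALLOWED_CAPACITIES_GB = {32, 48, 64, 96, 128}  # 48 GB / 96 GB apply to 5th Gen (EMR) only
DIMMS_PER_NODE = 16                            # the tray requires fully populated nodes

def check_memory_config(capacities_gb, speeds_mts):
    """Validate one compute node's DIMM population against the rules above."""
    if len(capacities_gb) != DIMMS_PER_NODE:
        raise ValueError(f"Node must be fully populated with {DIMMS_PER_NODE} DIMMs")
    if len(set(speeds_mts)) != 1:
        raise ValueError("Mixing DIMM speeds is not supported")
    unsupported = set(capacities_gb) - ALLOWED_CAPACITIES_GB
    if unsupported:
        raise ValueError(f"Unsupported DIMM capacities (GB): {sorted(unsupported)}")
    return sum(capacities_gb)

# Sixteen 64 GB RDIMMs at 4800 MT/s -> 1024 GB (1 TB) per node;
# sixteen 128 GB 3DS RDIMMs -> 2048 GB (2 TB) per node.
print(check_memory_config([64] * 16, [4800] * 16))   # 1024
print(check_memory_config([128] * 16, [4800] * 16))  # 2048
```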

Storage expansion
Compute node:
  • Supports up to four 7 mm simple-swap SATA / NVMe solid-state drives (SSDs) per node.

  • Supports up to two 15 mm simple-swap SATA / NVMe solid-state drives (SSDs) per node.

  • Supports up to two E3.S simple-swap NVMe solid-state drives (SSDs) per node.

  • Supports one M.2 drive per node (requires the M.2 interposer assembly).

    For a list of supported M.2 drives, see Lenovo ServerProven website.

Attention
As a general consideration, do not mix standard 512-byte and advanced 4-KB format drives in the same RAID array, because doing so might lead to performance issues.
Expansion slots
Compute node:
  • Two PCIe 5.0 x16 slots on the front of each node, supporting the following:
    • Up to two 75W half-height, half-length PCIe 5.0 x16 adapters with a riser card (mutually exclusive with internal storage drives)

Graphics processing unit (GPU)

Four Intel® Data Center GPU Max Series 1550 GPUs (128 GB, 600W) on a 4-GPU OAM board, paired with Intel® Xeon® CPU Max processors on the compute node.

Note
To prevent potential thermal issues, change the Misc setting in the BIOS from Option3 (default value) to Option1 if the following two conditions are met:
  • The server is equipped with a GPU adapter.

  • The UEFI firmware version is USE126F or later.

For the method of changing the Misc setting, see https://support.lenovo.com/us/en/solutions/TT1832.
Integrated functions and I/O connectors
Compute node:
  • Lenovo XClarity Controller (XCC), which provides service processor control and monitoring functions, video controller, and remote keyboard, video, mouse, and remote drive capabilities.
  • Front operator panel

  • KVM breakout cable connector

  • External LCD diagnostics handset connector

  • One Gigabit Ethernet port with RJ45 connector, shared between operating system and Lenovo XClarity Controller.

  • Two 25Gb SFP28 ports. One port is shared between operating system and Lenovo XClarity Controller.

  • Lenovo XClarity Controller connection is mutually exclusive between RJ45 Ethernet connector and 25Gb SFP28 Port 1.

  • Video controller (integrated into Lenovo XClarity Controller)
    • ASPEED
    • SVGA compatible video controller
    • Avocent Digital Video Compression
    • Video memory is not expandable
    Note
    Maximum video resolution is 1920 x 1200 at 60 Hz.
  • Hot-swappable System Management Module 2 (SMM2)

    Note
    See SMM2 User Guide for more details about System Management Module.
Network

DW612S Enclosure

10/100/1000 Mb Ethernet port dedicated for System Management Module (SMM2).

Compute node
  • One Gigabit Ethernet port with RJ45 connector, shared between operating system and Lenovo XClarity Controller.

  • Two 25Gb SFP28 ports. One port is shared between operating system and Lenovo XClarity Controller.

Lenovo XClarity Controller connection is mutually exclusive between RJ45 Ethernet connector and 25Gb SFP28 Port 1.

Storage controllers

Onboard SATA ports with software RAID support (Intel VROC SATA RAID, supporting RAID levels 0, 1, 5, and 10)

Onboard NVMe ports with software RAID support (Intel VROC NVMe RAID)
  • Intel VROC standard: requires an activation key and supports RAID levels 0, 1, and 10

  • Intel VROC Premium: requires an activation key and supports RAID levels 0, 1, 5, and 10

  • Intel VROC Boot (for 5th Gen processors): requires an activation key and supports RAID level 1 only
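
The VROC options above differ only in which RAID levels they enable. The following Python sketch is a minimal illustration of that mapping; the dictionary and helper are made up for this example and are not an Intel or Lenovo API.

```python
# Illustrative mapping of the VROC options listed above to the RAID levels
# they support; not an Intel or Lenovo API.
VROC_RAID_LEVELS = {
    "SATA (onboard)": {0, 1, 5, 10},
    "NVMe standard": {0, 1, 10},    # activation key required
    "NVMe premium": {0, 1, 5, 10},  # activation key required
    "NVMe boot": {1},               # 5th Gen processors, activation key required
}

def supports_raid(tier, level):
    """Return True if the given VROC tier supports the requested RAID level."""
    return level in VROC_RAID_LEVELS[tier]

print(supports_raid("NVMe standard", 5))  # False: RAID 5 needs VROC Premium
print(supports_raid("NVMe premium", 5))   # True
```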

Electrical input
SD650-I V3 installed in DW612S enclosure
  • Supports nine hot-swap 2400W or 2600W AC power supplies.
    • Sine-wave input (50-60 Hz) required

    • Input voltage for 2400W power supplies:

      • 200-240 Vac, 240 Vdc

    • Input voltage for 2600W power supplies:

      • 200-208 Vac, 240 Vdc (output up to 2400W only)

      • 208-240 Vac, 240 Vdc

    • Nine power supplies: 8+1 without oversubscription

    • The 2400W AC power supplies are Delta only.

    Note

    Mixing PSUs manufactured by different vendors is not supported.

  • Supports three hot-swap DWC 7200W power supplies.

    • Input voltage:

      • 200-208 Vac (operates at 6900W)

      • 220-240 Vac, 240 Vdc (operates at 7200W)

    • Three DWC PSUs: operate as 8+1 without oversubscription.
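
As a rough aid, the capacity implied by the 8+1 redundancy scheme above can be computed as follows. This Python sketch is illustrative only (the helper is made up); the wattages are the ratings listed in this section.

```python
# Illustrative calculation of usable enclosure power under 8+1 redundancy;
# not a Lenovo tool. Ratings taken from the list above.

def usable_power_w(per_psu_output_w, installed=9, redundant=1):
    """N+1 redundancy: usable power excludes the redundant supply."""
    return per_psu_output_w * (installed - redundant)

# Nine 2400W AC PSUs: 8 x 2400W = 19200W usable.
print(usable_power_w(2400))
# Nine 2600W AC PSUs at 200-208 Vac output up to 2400W each (19200W usable);
# at 208-240 Vac they deliver the full 2600W: 8 x 2600W = 20800W usable.
print(usable_power_w(2600))
```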

Note
Refer to the SMM2 web interface for more details on the solution power status.
Minimal configuration for debugging

SD650-I V3 installed in DW612S enclosure

  • One DW612S Enclosure

  • One SD650-I V3 tray (with one compute node and one GPU node)

  • Two processors on the compute node

  • Four Intel® Data Center GPU Max Series on the GPU node

  • 16 DIMMs on the compute node

  • Two CFF v4 power supplies (2400W or above) or one DWC PSU

  • One drive of any type (if an OS is needed for debugging)
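
For quick reference, the minimal debugging configuration above can be expressed as a simple checklist. The dictionary and helper below are hypothetical illustrations, not part of any Lenovo tooling; the power-supply entry is simplified, since either two CFF v4 PSUs (2400W or above) or one DWC PSU satisfies it.

```python
# Hypothetical checklist of the minimal debug configuration listed above.
MINIMAL_DEBUG_CONFIG = {
    "DW612S enclosure": 1,
    "SD650-I V3 tray": 1,                    # one compute node + one GPU node
    "compute node processors": 2,
    "GPU node Intel Data Center GPU Max": 4,
    "compute node DIMMs": 16,
    "power supplies": 1,                     # two CFF v4 (2400W or above) or one DWC PSU
}

def meets_minimal_config(inventory):
    """True if every minimal-configuration item is present in sufficient quantity."""
    return all(inventory.get(item, 0) >= qty
               for item, qty in MINIMAL_DEBUG_CONFIG.items())

example_inventory = {
    "DW612S enclosure": 1, "SD650-I V3 tray": 1,
    "compute node processors": 2, "GPU node Intel Data Center GPU Max": 4,
    "compute node DIMMs": 16, "power supplies": 2,
}
print(meets_minimal_config(example_inventory))  # True
```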

Operating systems

Compute node:

Supported and certified operating systems:
  • Red Hat Enterprise Linux

  • SUSE Linux Enterprise Server

References: