
Technical specifications

Summary of the technical specifications of the solution. Depending on the model, some features might not be available, or some specifications might not apply.

  • The 6U enclosure supports up to six trays.

  • Each tray contains two nodes: one left node and one right node (when viewed from the front of the enclosure).

  • The SD650-I V3 tray contains one compute node and one GPU node.

Processor
Compute node:
  • Supports two 4th Gen Intel® Xeon® Scalable processors per node.

  • Supports processors with up to 60 cores, core speeds up to 2.3 GHz, and TDP ratings up to 350W.

  • Supports HBM (High Bandwidth Memory) (future support).

  • UPI links at higher width (x24) and speeds: 12.8, 14.4, and 16 GT/s.

    • SD650-I V3 supports three UPI links.

  • New socket technology: Socket E (LGA 4677) with PCIe 5.0.

  Notes:
  1. Use the Setup Utility to determine the type and speed of the processors in the node.
  2. For a list of supported processors, see the Lenovo ServerProven website.
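Once an operating system is installed, the processor type and speed can also be checked without rebooting into the Setup Utility. A minimal sketch, assuming a Linux host where `/proc/cpuinfo` is available (the function name is hypothetical):

```python
import os

def cpu_summary():
    """Return (logical_cpu_count, model_name) for the local host.

    Reads Linux's /proc/cpuinfo for the processor model string;
    model_name is None when that file is unavailable (non-Linux OS).
    """
    model = None
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.lower().startswith("model name"):
                    model = line.split(":", 1)[1].strip()
                    break
    except OSError:
        pass
    return os.cpu_count(), model

count, model = cpu_summary()
print(count, model)
```

On a fully populated node, the logical CPU count reflects both processors (and Hyper-Threading, if enabled).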

Memory

See Memory module installation rules and order for detailed information about memory configuration and setup.

Compute node:
  • Slots:
    • 16 DIMM slots per node, eight slots per processor.

  • Type:
    • Lenovo DDR5 at 4800 MT/s

  • Protection:
    • ECC

    • SDDC (for x4-based memory DIMMs)

    • ADDDC (for x4-based memory DIMMs)

    • Memory mirroring

    • Memory sparing

  • Supports (depending on the model):
    • 32 GB and 64 GB ECC RDIMMs

    • 128 GB 3DS RDIMMs

  • Minimum:
    • SD650-I V3: 512 GB on the compute node.

  • Maximum:

    • Up to 1 TB memory with sixteen 64 GB RDIMMs per node.

    • Up to 2 TB memory with sixteen 128 GB 3DS RDIMMs per node.

  • The tray supports only fully populated processor and memory configurations (2 processors and 16 DIMMs per node).

  • Mixing DIMM speeds is not supported.

  • ADDDC is not supported for 9x4 (Value) ECC DIMMs.
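The population rules above can be expressed as a small validation helper. A sketch with a hypothetical function name (DIMM sizes in GB, speeds in MT/s):

```python
def validate_node_memory(dimms):
    """Validate one compute node's DIMM population against the rules
    in this section: exactly 16 DIMMs per node (fully populated) and
    no mixing of DIMM speeds. Returns the total capacity in GB.

    dimms: list of (size_gb, speed_mts) tuples, one per DIMM.
    """
    if len(dimms) != 16:
        raise ValueError("tray supports only fully populated nodes: 16 DIMMs")
    if len({speed for _, speed in dimms}) > 1:
        raise ValueError("mixing DIMM speeds is not supported")
    return sum(size for size, _ in dimms)

# Sixteen 64 GB RDIMMs at 4800 MT/s -> 1024 GB (the 1 TB maximum)
print(validate_node_memory([(64, 4800)] * 16))
```

The same arithmetic gives the 2 TB maximum with sixteen 128 GB 3DS RDIMMs.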

Storage expansion
Compute node:
  • Supports up to four 7 mm simple-swap SATA / NVMe solid-state drives (SSDs) per node.

  • Supports up to two 15 mm simple-swap SATA / NVMe solid-state drives (SSDs) per node.

  • Supports up to one 42 mm / 60 mm / 80 mm / 110 mm M.2 drive per node (requires the M.2 interposer assembly).

As a general consideration, do not mix standard 512-byte and advanced 4-KB format drives in the same RAID array, because doing so might lead to performance issues.
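On Linux, `lsblk -o NAME,LOG-SEC,PHY-SEC` reports each drive's sector sizes; before building an array, the uniformity check itself can be sketched as follows (hypothetical helper name):

```python
def uniform_sector_format(drives):
    """Return True when all candidate drives share one logical sector size.

    drives: dict mapping device name -> logical sector size in bytes
    (512 for standard format, 4096 for advanced 4-KB format). Mixing
    both formats in one RAID array is what this section advises against.
    """
    return len(set(drives.values())) <= 1

print(uniform_sector_format({"sda": 512, "sdb": 512}))       # uniform: OK
print(uniform_sector_format({"sda": 512, "nvme0n1": 4096}))  # mixed: avoid
```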
Expansion slots
Compute node:
  • Two PCIe 5.0 x16 slots on the front of each node, supporting the following:
    • Up to two 75W half-height, half-length PCIe 5.0 x16 adapters with a riser card (mutually exclusive with internal storage drives)

Graphics processing unit (GPU)
GPU node:
  • Four Intel OAM GPUs, up to 600 W each, with 128 GB HBM2e

  • Four Intel OAM GPUs, up to 450 W each, with 96 GB HBM2e (future support)

Integrated functions and I/O connectors
Compute node:
  • Lenovo XClarity Controller (XCC), which provides service processor control and monitoring functions, video controller, and remote keyboard, video, mouse, and remote drive capabilities.
  • Front operator panel

  • KVM breakout cable connector

  • External LCD diagnostics handset connector

  • One Gigabit Ethernet port with RJ45 connector, shared between operating system and Lenovo XClarity Controller.

  • Two 25Gb SFP28 ports. One port is shared between operating system and Lenovo XClarity Controller.

  • Video controller (integrated into Lenovo XClarity Controller)
    • ASPEED
    • SVGA-compatible video controller
    • Avocent Digital Video Compression
    • Video memory is not expandable
    • Maximum video resolution: 1920 x 1200 at 60 Hz
  • Hot-swappable System Management Module 2 (SMM2)

    See the SMM2 User Guide for more details about the System Management Module 2 (SMM2).

Network

DW612S Enclosure:
  • One 10/100/1000 Mb Ethernet port dedicated to the System Management Module 2 (SMM2).

Compute node:
  • One Gigabit Ethernet port with RJ45 connector, shared between operating system and Lenovo XClarity Controller.

  • Two 25Gb SFP28 ports. One port is shared between operating system and Lenovo XClarity Controller.

Storage controllers
Compute node:
  • 6 Gbps SATA:

    • Onboard SATA AHCI (non-RAID)

    • RAID 0, 1, 5, 10 with onboard SATA RAID (Intel RSTe)

  • PCIe x4 NVMe:

    • Onboard NVMe

    • Intel VROC Premium, supporting RAID 0, 1, 5, and 10 depending on the storage configuration.
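As a rough guide to the supported RAID levels, the usable capacity of an array can be estimated as follows. This is a sketch with a hypothetical helper name; real arrays lose a small amount extra to metadata:

```python
def usable_capacity_gb(level, n_drives, drive_gb):
    """Approximate usable capacity for the RAID levels supported by the
    onboard SATA RAID and Intel VROC: 0 (striping), 1 (mirroring),
    5 (striping with parity), and 10 (striped mirrors)."""
    if level == 0 and n_drives >= 2:
        return n_drives * drive_gb
    if level == 1 and n_drives == 2:
        return drive_gb
    if level == 5 and n_drives >= 3:
        return (n_drives - 1) * drive_gb      # one drive's worth of parity
    if level == 10 and n_drives >= 4 and n_drives % 2 == 0:
        return (n_drives // 2) * drive_gb     # half the capacity mirrors the rest
    raise ValueError("unsupported RAID level / drive count combination")

# Four 960 GB SSDs (the node's maximum 7 mm drive count):
print(usable_capacity_gb(5, 4, 960))   # striping with parity
print(usable_capacity_gb(10, 4, 960))  # striped mirrors
```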

Electrical input
SD650-I V3 installed in DW612S enclosure
  • Supports six or nine hot-swap 2400W or 2600W AC power supplies.
    • Sine-wave input (50-60 Hz) required

    • Input voltage: 200-240 Vac

    • Nine power supplies: 8+1 without oversubscription

    • The 2400W AC power supplies are Delta only.

  • Supports three hot-swap DWC 7200W power supplies.

    • Input voltage:

      • 200-208 Vac (operates as 6900W)

      • 220-240 Vac, 240 Vdc (operates as 7200W)

    • Three DWC PSUs: operate as 8+1 without oversubscription.

Power supplies and redundant power supplies in the enclosure must be of the same brand, power rating, wattage, and efficiency level.
Refer to the SMM2 web interface for more details on the solution power status.
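The 8+1 arrangement above means the solution's power budget must fit within the supplies that remain after one failure. That arithmetic can be sketched as (hypothetical helper name):

```python
def redundant_budget_w(n_psus, watts_each, spares=1):
    """Usable power under N+spares redundancy without oversubscription:
    the load must fit within the PSUs left after `spares` failures."""
    if n_psus <= spares:
        raise ValueError("need more PSUs than redundant spares")
    return (n_psus - spares) * watts_each

# Nine 2600W AC PSUs in 8+1: 8 x 2600W available to the enclosure
print(redundant_budget_w(9, 2600))
```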
Minimal configuration for debugging

SD650-I V3 installed in DW612S enclosure

  • One DW612S Enclosure

  • One SD650-I V3 tray (with one compute node and one GPU node)

  • Two processors on the compute node

  • Four Intel OAM GPUs on the GPU node

  • 16 DIMMs on the compute node

  • Two CFF v4 power supplies (2400W or above) or one DWC PSU

  • One drive (any type), if an OS is needed for debugging

Operating systems

Compute node:

Supported and certified operating systems:
  • Red Hat Enterprise Linux

  • SUSE Linux Enterprise Server