
Technical specifications

Summary of the technical specifications of the server. Depending on the model, some features might not be available, or some specifications might not apply.

Processor

Supports multi-core Intel® Xeon® processors, with integrated memory controller and Intel Mesh UPI (Ultra Path Interconnect) topology.

  • Up to two Intel® Xeon® 6 processors with P-cores (Granite Rapids-SP, GNR-SP) or E-cores (Sierra Forest-SP, SRF-SP) with new LGA 4710 socket

  • Up to 86 cores per socket for GNR-SP and 144 cores per socket for SRF-SP

  • Up to four UPI links at up to 24 GT/s

  • Thermal Design Power (TDP): up to 350 watts

For a list of supported processors, see the Lenovo ServerProven website.

Memory

See Memory module installation rules for detailed information about memory configuration and setup.

For a list of supported memory options, see the Lenovo ServerProven website.

Slots
  • Servers without Compute Complex Neptune Core Module: 32 dual inline memory module (DIMM) connectors that support up to 32 TruDDR5 DIMMs
  • Servers with Compute Complex Neptune Core Module: 16 dual inline memory module (DIMM) connectors that support up to 16 TruDDR5 DIMMs

Memory module types
  • TruDDR5 6400 MHz RDIMMs: 16 GB (1Rx8), 32 GB (2Rx8), 48 GB (2Rx8)
  • TruDDR5 6400 MHz 10x4 RDIMMs: 32 GB (1Rx4), 64 GB (2Rx4), 96 GB (2Rx4), 128 GB (2Rx4)
  • TruDDR5 6400 MHz 3DS RDIMMs: 256 GB (4Rx4)
  • TruDDR5 8800 MHz MRDIMMs: 32 GB (2Rx8), 64 GB (2Rx4)
  • CXL memory modules: 96 GB, 128 GB

Speed
  • 6400 MHz RDIMMs:
    • 1 DPC: 6400 MT/s
    • 2 DPC: 5200 MT/s
  • 8800 MHz MRDIMMs:
    • 1 DPC: 8000 MT/s

Capacity
  • Servers without Compute Complex Neptune Core Module:
    • Minimum: 16 GB
    • Maximum: 8 TB (32 x 256 GB 3DS RDIMMs)
  • Servers with Compute Complex Neptune Core Module:
    • Minimum: 32 GB (2 x 16 GB)
    • Maximum: 4 TB (16 x 256 GB 3DS RDIMMs)
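
The installed DIMM population, capacity, and operating speed can also be verified out-of-band through the XCC Redfish interface. The following is a minimal, illustrative sketch only: it assumes the standard Redfish memory collection path, and BMC_HOST and the credentials are placeholders to be replaced with the actual XCC address and account.

# Minimal sketch: list installed DIMMs through the XCC Redfish service.
# Assumptions: the XCC management port is reachable at BMC_HOST, the standard
# Redfish memory collection is at /redfish/v1/Systems/1/Memory, and the
# credentials below are placeholders for a real XCC account.
import requests

BMC_HOST = "https://192.0.2.10"   # placeholder XCC address
AUTH = ("USERID", "PASSW0RD")     # placeholder credentials

def list_dimms():
    coll = requests.get(f"{BMC_HOST}/redfish/v1/Systems/1/Memory",
                        auth=AUTH, verify=False).json()
    for member in coll.get("Members", []):
        dimm = requests.get(f"{BMC_HOST}{member['@odata.id']}",
                            auth=AUTH, verify=False).json()
        if dimm.get("Status", {}).get("State") == "Enabled":
            # Print slot label, capacity in MiB, and operating speed in MHz.
            print(dimm.get("Id"),
                  dimm.get("CapacityMiB"),
                  dimm.get("OperatingSpeedMhz"))

if __name__ == "__main__":
    list_dimms()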

Internal drives
Front
For the servers without Compute Complex Neptune Core Module:
  • Up to four 2.5-inch hot-swap SAS/SATA/NVMe drives

  • Up to four 2.5-inch hot-swap SAS/SATA/NVMe drives and two front M.2 hot-swap SAS/SATA/NVMe drives

  • Up to eight 2.5-inch hot-swap AnyBay/U.3/SAS/SATA/NVMe drives

  • Up to eight 2.5-inch hot-swap U.3/SAS/SATA/NVMe drives and two front M.2 hot-swap SAS/SATA/NVMe drives

  • Up to ten 2.5-inch hot-swap AnyBay/U.3/SAS/SATA/NVMe drives

  • Up to six 2.5-inch hot-swap SAS/SATA drives and four 2.5-inch hot-swap AnyBay drives

  • Up to six 2.5-inch hot-swap SAS/SATA drives, two 2.5-inch hot-swap NVMe drives, and two 2.5-inch hot-swap AnyBay drives

  • Up to 16 1T E3.S drives

  • Up to eight 2T E3.S drives

  • Up to 12 1T E3.S drives and two front M.2 hot-swap SAS/SATA/NVMe drives

For the servers with Compute Complex Neptune Core Module:
  • Up to four 2.5-inch hot-swap SAS/SATA/NVMe drives

  • Up to eight 2.5-inch hot-swap U.3/SAS/SATA/NVMe drives

  • Up to ten 2.5-inch hot-swap U.3/SAS/SATA/NVMe drives

  • Up to 12 1T E3.S drives

Inside
  • Up to two internal M.2 non-hot-swap SAS/SATA/NVMe drives

Rear
For the servers without Compute Complex Neptune Core Module:
  • Up to two 2.5-inch hot-swap U.3/SAS/SATA/NVMe drives

  • Up to two M.2 hot-swap SAS/SATA/NVMe drives

For the servers with Compute Complex Neptune Core Module:
  • Up to two M.2 hot-swap SAS/SATA/NVMe drives

Expansion slots

Depending on the model, your server supports up to three PCIe slots in the rear.

  • PCIe x16/x16, full-height + full-height

  • PCIe x16/x16/x16, full-height + low-profile + low-profile

  • PCIe x16/x16, full-height + low-profile

  • PCIe x16, full-height

Note
For the servers with Compute Complex Neptune Core Module, only PCIe x16/x16 (full-height + low-profile) and PCIe x16 (full-height) are supported.

Graphics processing unit (GPU)

Your server supports the following GPUs:

  • NVIDIA® L4 (Half-length, single-wide)

Note
  • A GPU power cable connected to a riser card is needed when GPU power is greater than or equal to 75 W.

  • For GPU supporting rules, see Thermal rules.

  • The GPU cannot be installed when Compute Complex Neptune Core Module is installed.

Integrated functions and I/O connectors
  • Lenovo XClarity Controller (XCC), which provides service processor control and monitoring functions, video controller, and remote keyboard, video, mouse, and remote drive capabilities.
  • One XCC system management port on the rear to connect to a systems-management network. This RJ-45 connector is dedicated to the Lenovo XClarity Controller functions and runs at 10/100/1000 Mbps speed.
  • A group of two or four Ethernet connectors on the OCP module

  • Up to four USB 3.2 Gen1 (5 Gbps) ports:

    • Two on the rear of the server 1

    • (Optional) Two on the front of the server (including one USB 2.0 connector with XCC system management)

  • One internal USB 3.2 Gen1 (5 Gbps) port

  • External LCD diagnostics handset connector on the front of the server

  • (Optional) One Mini DisplayPort on the front of the server 2

  • One VGA connector on the rear of the server

  • (Optional) One serial port connector on the rear of the server 3

Note
  1. The lower USB connector at the rear functions as a USB 2.0 connector with XCC system management when there are no USB connectors at the front.

  2. The maximum video resolution is 1920 x 1200 at 60 Hz.

  3. Available when the serial port cable is installed in the server.

Network
OCP module
Note
  • The server features two OCP slots: OCP 1 and OCP 2, which are located on the rear side.

  • The installation priority of OCP slots in configurations with two processors is as follows:

    • Configurations with only one OCP module: A x8 OCP module is installed in OCP slot 1; a x16 OCP module is installed in OCP slot 2.

    • Configurations with two OCP modules: OCP slot 1 > OCP slot 2; x16 > x8

  • OCP module 1 takes priority over OCP module 2.

RAID support

Onboard NVMe ports with software RAID support (Intel VROC NVMe RAID) and JBOD

Note
Intel VROC Boot supports only two drives connected to the same controller and the same processor.
Supported adapters and RAID levels:
  • Intel® VROC standard: RAID 0, 1, 10
  • Intel® VROC Premium: RAID 0, 1, 5, 10
  • Intel® VROC Boot: RAID 1
  • ThinkSystem RAID 5350-8i PCIe 12Gb Adapter: RAID 0, 1, 5, 10
  • ThinkSystem RAID 9350-8i 2GB Flash PCIe 12Gb Adapter: RAID 0, 1, 5, 6, 10, 50, 60
  • ThinkSystem RAID 9350-16i 4GB Flash PCIe 12Gb Adapter: RAID 0, 1, 5, 6, 10, 50, 60
  • ThinkSystem RAID 940-8i 4GB Flash PCIe Gen4 12Gb Adapter: RAID 0, 1, 5, 6, 10, 50, 60
  • ThinkSystem RAID 940-16i 8GB Flash PCIe Gen4 12Gb Adapter: RAID 0, 1, 5, 6, 10, 50, 60
  • ThinkSystem 4350-16i SAS/SATA 12Gb HBA: N/A
  • ThinkSystem 440-16i SAS/SATA PCIe Gen4 12Gb HBA: N/A
  • ThinkSystem 440-16i SAS/SATA PCIe Gen4 12Gb Internal HBA: N/A
  • ThinkSystem RAID 940-16i 8GB Flash PCIe Gen4 12Gb Internal Adapter: RAID 0, 1, 5, 6, 10, 50, 60
  • ThinkSystem RAID 940-8e 4GB Flash PCIe Gen4 12Gb Adapter: RAID 0, 1, 5, 6, 10, 50, 60
  • ThinkSystem RAID 545-8i PCIe Gen4 12Gb Adapter: RAID 0, 1, 10
  • ThinkSystem 440-16e SAS/SATA PCIe Gen4 12Gb HBA: N/A
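
On Linux, Intel VROC NVMe RAID volumes are typically managed with mdadm using IMSM metadata. The sketch below outlines the usual container-then-volume creation for a two-drive RAID 1; the drive and array names are placeholders, and the authoritative procedure is the Intel VROC documentation for the installed operating system.

# Minimal sketch: create an Intel VROC (IMSM) RAID 1 volume on Linux with mdadm.
# Assumptions: VROC is enabled in UEFI, mdadm is installed, and /dev/nvme0n1
# and /dev/nvme1n1 are placeholder NVMe drives attached to the same controller
# and processor, as required for a VROC boot volume.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create the IMSM container that groups the member drives.
run(["mdadm", "--create", "/dev/md/imsm0", "--metadata=imsm",
     "--raid-devices=2", "/dev/nvme0n1", "/dev/nvme1n1"])

# 2. Create a RAID 1 volume inside the container.
run(["mdadm", "--create", "/dev/md/vroc_r1", "--level=1",
     "--raid-devices=2", "/dev/md/imsm0"])

# 3. Show the resulting array.
run(["mdadm", "--detail", "/dev/md/vroc_r1"])
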
System fan
  • Supported fan-pack types:

    • Standard fan-pack 4056 (28000 RPM, single rotor)

    • Performance fan-pack 4056 (28000 RPM, dual rotors)

    • Ultra fan-pack 4056 (31000 RPM, dual rotors)

  • Fan redundancy: N+1 redundancy, one redundant fan rotor

    • One processor: three hot-swap dual-rotor system fan-packs (one redundant fan rotor)

    • Two processors: four hot-swap dual-rotor system fan-packs (one redundant fan rotor)

Note
  • The redundant cooling by the fans in the server enables continued operation if one rotor fails.

  • When the system is powered off but still plugged in to AC power, and XCC detects that OCP modules are installed, fan-packs 2 and 3 may continue to spin at a much lower speed. This is by design, to provide proper cooling.

Electrical input and power policy

Electrical input for power supply units

Common Redundant Power Supply (CRPS) and CRPS Premium are supported as listed below:

CAUTION
  • 240 V dc input is supported in Chinese Mainland ONLY.

  • Power supplies with 240 V dc input do not support hot-plugging of the power cord. Before removing a power supply with dc input, turn off the server, or disconnect the dc power sources at the breaker panel or by turning off the power source. Then, remove the power cord.

Supported input voltages depend on the power supply unit model: 100–127 V ac, 200–240 V ac, 240 V dc, -48 V dc, HVDC (240–380 V dc), and HVAC (200–277 V ac).

Available CRPS and CRPS Premium power supply units:
  • 800-watt 80 PLUS Platinum
  • 1300-watt 80 PLUS Platinum
  • 1300-watt -48V DC
  • 1300-watt HVAC/HVDC 80 PLUS Platinum
  • 800-watt 80 PLUS Titanium
  • 1300-watt 80 PLUS Titanium
  • 2000-watt 80 PLUS Titanium

Power policy for power supply units

The server supports one or two power supply units, with redundancy and over-subscription (OVS) support as listed below:

Note
  • Only CRPS Premium PSUs support over-subscription (OVS), Zero Output Mode, and Virtual-Reseat (VR).

  • The following Lenovo XClarity Controller options are supported only when CRPS Premium PSUs are installed:

    • Power Redundant options such as Zero Output Mode and Non-redundant.

    • AC Power Cycle Server option under Power Action.

  • Mixing of CRPS PSUs from different vendors is not supported.

  • 1+0 indicates that the server has only one power supply unit installed and the system does not support power redundancy, while 1+1 indicates that two power supply units are installed and redundancy is supported.

CRPS Premium
  • 800-watt 80 PLUS Titanium: 1+0 (no redundancy, no OVS) or 1+1 (redundancy and OVS)
  • 1300-watt 80 PLUS Titanium: 1+0 (no redundancy, no OVS) or 1+1 (redundancy and OVS)
  • 1300-watt -48V DC: 1+1 (redundancy and OVS)
  • 1300-watt HVAC/HVDC 80 PLUS Platinum: 1+1 (redundancy and OVS)
  • 2000-watt 80 PLUS Titanium: 1+1 (redundancy and OVS)

CRPS
  • 800-watt 80 PLUS Platinum: 1+1 (redundancy, no OVS)
  • 800-watt 80 PLUS Titanium: 1+1 (redundancy, no OVS)
  • 1300-watt 80 PLUS Platinum: 1+1 (redundancy, no OVS)
  • 1300-watt 80 PLUS Titanium: 1+1 (redundancy, no OVS)
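
Whether the installed power supply units are reported as redundant can be checked through the XCC Redfish interface. The sketch below assumes the classic /redfish/v1/Chassis/1/Power resource (newer firmware may expose the PowerSubsystem model instead) and reuses the placeholder address and credentials from the earlier memory example; treat it as illustrative rather than the definitive XCC interface.

# Minimal sketch: read power supply status and redundancy via the XCC Redfish
# service. Assumptions: BMC_HOST and the credentials are placeholders, and the
# PSUs are exposed under /redfish/v1/Chassis/1/Power.
import requests

BMC_HOST = "https://192.0.2.10"   # placeholder XCC address
AUTH = ("USERID", "PASSW0RD")     # placeholder credentials

power = requests.get(f"{BMC_HOST}/redfish/v1/Chassis/1/Power",
                     auth=AUTH, verify=False).json()

# Each entry reports one PSU: name, rated capacity in watts, and health.
for psu in power.get("PowerSupplies", []):
    print(psu.get("Name"),
          psu.get("PowerCapacityWatts"),
          psu.get("Status", {}).get("Health"))

# The Redundancy array indicates whether the PSUs form a redundant set.
for red in power.get("Redundancy", []):
    print("Redundancy:", red.get("Mode"), red.get("Status", {}).get("State"))
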
Minimal configuration for debugging
  • Servers without Compute Complex Neptune Core Module

    • One processor in processor socket 1

    • One memory module in slot 7

    • One power supply unit

    • One HDD/SSD drive, one M.2 drive (if OS is needed for debugging)

    • Three system fan-packs (Fan-packs 1, 2, and 3)

  • Servers with Compute Complex Neptune Core Module

    • Two processors

    • One memory module in each of slots 7 and 23

    • One power supply unit

    • One HDD/SSD drive, one M.2 drive (if OS is needed for debugging)

    • Four system fan-packs (Fan-packs 1, 2, 3, and 4)

Operating systems
Supported and certified operating systems:
  • Microsoft Windows Server

  • Red Hat Enterprise Linux

  • SUSE Linux Enterprise Server

  • VMware ESXi

  • Canonical Ubuntu

References: