
Features and specifications

Use this information to view specific details about the compute node, such as its hardware features and dimensions.

Note
  1. Power, cooling, and chassis systems management are provided by the NeXtScale n1200 Enclosure chassis.
  2. The operating system in the compute node must provide USB support for the compute node to recognize and use USB media drives and devices. The NeXtScale n1200 Enclosure chassis uses USB for internal communication with these devices.

The following information is a summary of the features and specifications of the NeXtScale nx360 M5 compute node.

Microprocessor (depending on the model):
  • Supports up to two Intel Xeon E5-2600 v3 series multi-core microprocessors (one installed)
  • Level-3 cache
  • Two QuickPath Interconnect (QPI) links with speeds up to 9.6 GT per second
Note
  • Use the Setup utility to determine the type and speed of the microprocessors in the server.
  • For a list of supported microprocessors, see the Lenovo ServerProven website.
Memory:
  • 16 dual inline memory module (DIMM) connectors
  • Type: Low-profile (LP) double data rate 4 (DDR4) DRAM
  • Supports 4 GB, 8 GB, 16 GB RDIMMs, and 32 GB LRDIMMs with up to 512 GB of total memory on the system board
Integrated functions:
  • Integrated management module 2.1 (IMM2.1), which consolidates multiple management functions in a single chip
  • Concurrent COM/VGA/2x USB (KVM)
  • System error LEDs
  • Two network ports (two 1 Gb Ethernet ports on the system)
  • Supports up to one optional ML2 network adapter
  • One optional system-management RJ-45 connector for connecting to a systems-management network. This connector is dedicated to the integrated management module 2.1 (IMM2.1) functions
  • (Optional) Hardware RAID support for RAID level-0, RAID level-1, RAID level-5, RAID level-6, or RAID level-10
  • Wake on LAN (WOL)
Drive expansion bays (depending on the model):

Supports one of the following drive configurations:
  • Up to eight 3.5-inch SATA drives (with the storage tray installed: up to seven in the storage tray and one in the compute node)
  • Two 2.5-inch SATA/SAS drives
  • Six 2.5-inch hot-swap SATA/SAS drives (only while no PCIe adapter is installed; with the 2U GPU tray installed: up to four in the 2U GPU tray and two in the compute node)
  • Four 1.8-inch solid-state drives (6 Gb signaling only)

Attention
As a general consideration, do not mix standard 512-byte and advanced 4-KB format drives in the same RAID array because doing so might lead to performance issues.
Table 1. Supported hard disk drive combinations in the compute node (V = supported)

| RAID adapter | Front HDD (hot-swap): 2.5-inch x 2 | Rear HDD (simple-swap): 3.5-inch x 1 | Rear HDD (simple-swap): 2.5-inch x 2 | Rear HDD (simple-swap): 1.8-inch x 4 |
|---|---|---|---|---|
| Rear RAID adapter (x8 rear RAID riser) | V |   |   |   |
| Rear RAID adapter (x8 rear RAID riser) |   | V |   |   |
| Rear RAID adapter (x8 rear RAID riser) |   |   | V |   |
| Rear RAID adapter (x8 rear RAID riser) |   |   |   | V |
| Rear RAID adapter (x8 rear RAID riser) | V |   | V |   |
| Onboard SATA mode (non-RAID) |   | V |   |   |
| Onboard SATA mode (non-RAID) |   |   | V |   |
| Onboard SATA mode (non-RAID) |   |   |   | V |
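If the combinations in Table 1 need to be checked by a script (for example, when validating a planned configuration), they can be captured as plain data. The following Python sketch is illustrative only: the controller keys, bay labels, and the is_supported helper are hypothetical names invented here, and only the supported combinations themselves are taken from Table 1.

```python
# Hypothetical sketch only: Table 1 re-expressed as data so a deployment or
# ordering script could sanity-check a planned drive layout. The controller
# keys and bay labels are invented names; the combinations come from Table 1.
SUPPORTED_COMBOS = {
    "rear_raid_adapter_x8_riser": [         # Rear RAID adapter (x8 rear RAID riser)
        {"front_2.5in_x2"},
        {"rear_3.5in_x1"},
        {"rear_2.5in_x2"},
        {"rear_1.8in_x4"},
        {"front_2.5in_x2", "rear_2.5in_x2"},
    ],
    "onboard_sata_non_raid": [              # Onboard SATA mode (non-RAID)
        {"rear_3.5in_x1"},
        {"rear_2.5in_x2"},
        {"rear_1.8in_x4"},
    ],
}

def is_supported(controller, bays):
    """Return True if the requested bay population matches a row of Table 1."""
    return set(bays) in SUPPORTED_COMBOS.get(controller, [])

# Example: front 2.5-inch x 2 plus rear 2.5-inch x 2 behind the rear RAID riser.
print(is_supported("rear_raid_adapter_x8_riser", {"front_2.5in_x2", "rear_2.5in_x2"}))  # True
print(is_supported("onboard_sata_non_raid", {"front_2.5in_x2"}))                        # False
```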
Upgradeable firmware:

All firmware is field upgradeable.

PCI expansion slots (depending on your model):
  • Compute node
    • Front slot: PCI Express x16 (PCIe3.0, full-height, half-length)
    • ML2 slot: PCI Express x16 (supports 50 mm-height adapters only)
    • Rear slot: PCI Express x8 (PCIe3.0, full-height, half-length)
  • GPU tray
    • Two PCI Express x16 slots (PCIe3.0, full-height, full-length)
Size:
  • Compute node
    • Height: 41 mm (1.6 in)
    • Depth: 659 mm (25.9 in)
    • Width: 216 mm (8.5 in)
    • Weight estimation (based on the LFF HDD within the compute node): 6.17 kg (13.6 lb)
  • Storage tray
    • Height: 58.3 mm (2.3 in)
    • Depth: 659 mm (25.9 in)
    • Width: 216 mm (8.5 in)
    • Weight estimation (with 7 hard disk drives installed): 8.64 kg (19 lb)
  • GPU tray
    • Height: 58.3 mm (2.3 in)
    • Depth: 659 mm (25.9 in)
    • Width: 216 mm (8.5 in)
    • Weight estimation (with no GPU adapter installed): 3.33 kg (7.34 lb)
Electrical input:
  • 12 V DC
Environment:

The NeXtScale nx360 M5 compute node complies with ASHRAE class A3 specifications.

Server on (see note 1):
  • Temperature: 5°C to 40°C (41°F to 104°F) up to 950 m (see note 2)
  • Humidity, non-condensing: -12°C dew point (10.4°F) and 8% to 85% relative humidity (see notes 3 and 4)
  • Maximum dew point: 24°C (75°F)
  • Maximum altitude: 3,050 m (10,000 ft) and 5°C to 28°C (41°F to 82.4°F)
  • Maximum rate of temperature change: 20°C/hr (68°F/hr) for hard disk drives (see note 5)
Server off (see note 6):
  • Temperature: 5°C to 45°C (41°F to 113°F)
  • Relative humidity: 8% to 85%
  • Maximum dew point: 27°C (80.6°F)
Storage (non-operating):
  • Temperature: 1°C to 60°C (33.8°F to 140.0°F)
  • Maximum altitude: 3,050 m (10,000 ft)
  • Relative humidity: 5% to 80%
  • Maximum dew point: 29°C (84.2°F)
Shipment (non-operating) (see note 7):
  • Temperature: -40°C to 60°C (-40°F to 140.0°F)
  • Maximum altitude: 10,700 m (35,105 ft)
  • Relative humidity: 5% to 100%
  • Maximum dew point: 29°C (84.2°F) (see note 8)
Specific supported environment
  • Processors E5-2699 v3, E5-2697 v3, E5-2667 v3, E5-2643 v3, E5-2637 v3, E5-2699A v4, and E5-2699R v4: temperature 5°C to 30°C (41°F to 86°F); altitude 0 to 304.8 m (1,000 ft)
  • GPU Intel 7120P: temperature 5°C to 30°C (41°F to 86°F); altitude 0 to 304.8 m (1,000 ft)
  • With rear-side hard disk drives installed: temperature 5°C to 30°C (41°F to 86°F); altitude 0 to 304.8 m (1,000 ft)
Specific GPGPU supported environment
  • For all GPGPUs with a TDP higher than 120 W:
    • Temperature: 5°C to 30°C (41°F to 86°F)
    • Altitude: 0 to 950 m (3,117 ft)
    • Chassis configuration must be homogeneous
  • Intel Co-processor 7120P
    • 1 or 2 adapters installed
      • Temperature: 5°C to 30°C (41°F to 86°F)
      • Altitude: 0 to 900 m (2,953 ft)
      • Front HDD thickness: 15 mm or less
      • GPU-to-GPU Traffic Optimized Mode: Supported
    • 3 or 4 adapters installed
      • Temperature: 5°C to 25°C (41°F to 77°F)
      • Altitude: 0 to 900 m (2,953 ft)
      • Front HDD thickness: 9 mm or less
      • GPU-to-GPU Traffic Optimized Mode: Not supported
  • Nvidia K80
    • 1 or 2 adapters installed
      • Temperature: 5°C to 30°C (41°F to 86°F)
      • Altitude: 0 to 900 m (2,953 ft)
      • Front HDD thickness: 15 mm or less
      • GPU-to-GPU Traffic Optimized Mode: Supported
    • 3 or 4 adapters installed
      • Temperature: 5°C to 27°C (41°F to 80.6°F)
      • Altitude: 0 to 900 m (2,953 ft)
      • Front HDD thickness: 9 mm or less
      • GPU-to-GPU Traffic Optimized Mode: Not supported
  • Nvidia K40
    • 1 to 4 adapters installed
      • Temperature: 5°C to 30°C (41°F to 86°F)
      • Altitude: 0 to 900 m (2,953 ft)
      • Front HDD thickness: 15 mm or less
      • GPU-to-GPU Traffic Optimized Mode: Supported
  • Nvidia K1
    • 1 to 3 adapters installed
      • Temperature: 5°C to 30°C (41°F to 86°F)
      • Altitude: 0 to 900 m (2,953 ft)
      • Front HDD thickness: 15 mm or less
  • Nvidia K2
    • 1 to 4 adapters installed
      • Temperature: 5°C to 30°C (41°F to 86°F)
      • Altitude: 0 to 900 m (2,953 ft)
      • Front HDD thickness: 15 mm or less
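For planning purposes, the GPGPU limits above can be collected into a small lookup table. The Python sketch below is illustrative only: the dictionary keys, field names, and the limits_for helper are invented, while the numeric limits repeat the values listed above (the minimum temperature is 5°C in every case).

```python
# Hypothetical sketch only: the GPGPU environmental limits above as a lookup
# table. Keys, field names, and the limits_for helper are invented; the
# numeric limits repeat the list above (minimum temperature is 5°C throughout).
GPGPU_LIMITS = {
    # (adapter, supported installed-count range): limits
    ("Intel Co-processor 7120P", range(1, 3)): {
        "max_temp_c": 30, "max_altitude_m": 900,
        "max_front_hdd_mm": 15, "gpu_to_gpu_optimized_mode": True},
    ("Intel Co-processor 7120P", range(3, 5)): {
        "max_temp_c": 25, "max_altitude_m": 900,
        "max_front_hdd_mm": 9, "gpu_to_gpu_optimized_mode": False},
    ("Nvidia K80", range(1, 3)): {
        "max_temp_c": 30, "max_altitude_m": 900,
        "max_front_hdd_mm": 15, "gpu_to_gpu_optimized_mode": True},
    ("Nvidia K80", range(3, 5)): {
        "max_temp_c": 27, "max_altitude_m": 900,
        "max_front_hdd_mm": 9, "gpu_to_gpu_optimized_mode": False},
    ("Nvidia K40", range(1, 5)): {
        "max_temp_c": 30, "max_altitude_m": 900,
        "max_front_hdd_mm": 15, "gpu_to_gpu_optimized_mode": True},
    ("Nvidia K1", range(1, 4)): {
        "max_temp_c": 30, "max_altitude_m": 900, "max_front_hdd_mm": 15},
    ("Nvidia K2", range(1, 5)): {
        "max_temp_c": 30, "max_altitude_m": 900, "max_front_hdd_mm": 15},
}

def limits_for(adapter, count):
    """Return the environmental limits for a GPGPU type and installed count, or None."""
    for (name, counts), limits in GPGPU_LIMITS.items():
        if name == adapter and count in counts:
            return limits
    return None  # combination not listed above

# Example: four K80 adapters fall under the stricter 27°C / 9 mm limits.
print(limits_for("Nvidia K80", 4))
```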
Particulate contamination:
Attention
  • Designed to ASHRAE Class A3, ambient temperature 36°C to 40°C (96.8°F to 104°F), with relaxed support:
    • Cloud-like workloads are supported, with no performance degradation acceptable (turbo off)
    • Under no circumstances can any combination of worst-case workload and configuration result in system shutdown or design exposure at 40°C
    • The worst-case workload (such as Linpack with turbo on) might experience performance degradation
  • Airborne particulates and reactive gases acting alone or in combination with other environmental factors such as humidity or temperature might pose a risk to the compute node. For information about the limits for particulates and gases, see Particulate contamination.
Note
  1. Chassis is powered on.
  2. A3 - Derate maximum allowable temperature 1°C/175 m above 950 m (a worked example follows these notes).
  3. The minimum humidity level for class A3 is the higher (more moisture) of the -12°C dew point and the 8% relative humidity. These intersect at approximately 25°C. Below this intersection (~25°C), the dew point (-12°C) represents the minimum moisture level; above the intersection, relative humidity (8%) is the minimum.
  4. Moisture levels lower than 0.5°C DP, but not lower than -10°C DP or 8% relative humidity, can be accepted if appropriate control measures are implemented to limit the generation of static electricity on personnel and equipment in the data center. All personnel and mobile furnishings and equipment must be connected to ground via an appropriate static-control system. The following items are considered the minimum requirements:
    1. Conductive materials (conductive flooring, conductive footwear on all personnel who go into the data center; all mobile furnishings and equipment are made of conductive or static-dissipative materials).
    2. During maintenance on any hardware, a properly functioning wrist strap must be used by all personnel who contact IT equipment.
  5. 5°C/hr for data centers employing tape drives and 20°C/hr for data centers employing disk drives.
  6. Chassis is removed from original shipping container and is installed but not in use, for example, during repair, maintenance, or upgrade.
  7. The equipment acclimation period is 1 hour per 20°C of temperature change from the shipping environment to the operating environment.
  8. Condensation, but not rain, is acceptable.
  9. When booting Windows Server 2012 or Windows Server 2012 R2 with a legacy VGA device and two or more NVIDIA GRID K1 cards or four or more NVIDIA GRID K2 cards, one of the NVIDIA GPUs is unavailable for use. The GPU appears in Windows Device Manager with a yellow warning icon, and the device status is reported as "Windows has stopped this device because it has reported problems. (Code 43)". The remaining seven GPUs function normally.
  10. In NeXtScale, the NVIDIA GRID K2 card specifically must be used with only one 8-pin auxiliary power cable, not with both the 8-pin and 6-pin auxiliary power cables.
  11. The IMM cannot detect the presence of simple-swap disks or of disks that are not connected to an LSI RAID card that supports the agentless feature. Health status for these disks is not available, and the IMM interface does not list them. If no other disk can be detected by the IMM, local storage is shown as Unavailable on the IMM System Status page.
  12. Mixing the nx360 M4 Compute Node Type 5455 and the nx360 M5 Compute Node Type 5465 in the same n1200 Enclosure is not supported.
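Notes 2, 3, and 7 describe simple arithmetic rules. The following Python sketch works through them; the function names are invented for illustration, and the dew-point calculation uses the standard Magnus approximation, which is not part of this document.

```python
# Hypothetical sketch only: the arithmetic behind notes 2, 3, and 7. Function
# names are invented; the Magnus dew-point approximation is a standard formula
# that is not part of this document.
import math

def a3_max_inlet_temp_c(altitude_m):
    """Note 2: ASHRAE class A3 allows 40°C up to 950 m, derated 1°C per 175 m above that."""
    return 40.0 if altitude_m <= 950 else 40.0 - (altitude_m - 950) / 175.0

def acclimation_hours(shipping_temp_c, operating_temp_c):
    """Note 7: allow 1 hour of acclimation per 20°C of temperature change."""
    return abs(operating_temp_c - shipping_temp_c) / 20.0

def dew_point_c(temp_c, rel_humidity_pct):
    """Magnus approximation for the dew point (an assumption, not from this document)."""
    a, b = 17.27, 237.7
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Note 2: at the 3,050 m ceiling the allowable maximum drops to 40 - (3050 - 950)/175 = 28°C,
# matching the "3,050 m and 5°C to 28°C" line in the "Server on" limits.
print(a3_max_inlet_temp_c(3050))          # 28.0
# Note 7: moving hardware from a -20°C truck into a 20°C room needs about 2 hours.
print(acclimation_hours(-20, 20))         # 2.0
# Note 3: at roughly 25°C, 8% relative humidity gives a dew point near -12°C, which is
# where the two class A3 minimum-moisture limits intersect.
print(round(dew_point_c(25, 8), 1))       # about -11.6
```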