Memory problems

Use this information to resolve issues related to memory.

Displayed system memory less than installed physical memory

Complete the following steps until the problem is resolved:
  1. Make sure that:
    • No error LEDs are lit on the operator information panel.

    • No DIMM error LEDs are lit on the system board.

    • Memory mirroring (mirrored-channel mode) does not account for the discrepancy.

    • The memory modules are seated correctly.

    • You have installed the correct type of memory.

    • If you changed the memory, you updated the memory configuration in the Lenovo XClarity Provisioning Manager.

    • All banks of memory are enabled. The server might have automatically disabled a memory bank when it detected a problem, or a memory bank might have been manually disabled.

    • There are no memory errors when the server is at the minimum memory configuration.

    • When DCPMMs are installed:

      1. If the memory is set to App Direct or Mixed memory mode, back up all saved data and delete the created namespaces before replacing any DCPMM.

      2. Refer to DC Persistent Memory Module (DCPMM) setup and verify that the displayed memory matches the mode description.

      3. If the DCPMMs were recently set to Memory mode, revert to App Direct mode and check whether any namespace has not been deleted (see DC Persistent Memory Module (DCPMM) setup).

      4. Go to the Setup Utility, select System Configuration and Boot Management > Intel Optane DCPMMs > Security, and make sure all the DCPMM units are unlocked.

  2. Reseat the DIMMs, and then restart the server.

  3. Run memory diagnostics. When you start the server and press the key according to the on-screen instructions, the LXPM interface is displayed by default. (For more information, see the “Startup” section in the LXPM documentation compatible with your server at the Lenovo XClarity Provisioning Manager portal page.) You can perform memory diagnostics with this interface. From the Diagnostic page, go to Run Diagnostic > Memory test.

  4. Check the POST error log:

    • If a DIMM was disabled by a systems-management interrupt (SMI), replace the DIMM.

    • If a DIMM was disabled by the user or by POST, reseat the DIMM; then, run the Lenovo XClarity Provisioning Manager and enable the DIMM.

  5. Run memory diagnostics. When you start the server and press the key according to the on-screen instructions, the LXPM interface is displayed by default. (For more information, see the “Startup” section in the LXPM documentation compatible with your server at the Lenovo XClarity Provisioning Manager portal page.) You can perform memory diagnostics with this interface. From the Diagnostic page, go to Run Diagnostic > Memory test or DCPMM test.

    Note
    When DCPMMs are installed, run diagnostics based on the currently set DCPMM mode:
    • App Direct mode:

      • Run Memory Test for DRAM memory modules.

      • Run DCPMM Test for DCPMMs.

    • Memory mode and Mixed memory mode:

      Run both Memory Test and DCPMM Test for DCPMMs.

  6. Move the suspect DIMMs from one channel to another channel of the same processor that is a supported configuration, and then restart the server. If the problem is related to a memory module, replace the failing memory module.
    Note
    When DCPMMs are installed, use this method only when the DCPMMs are in Memory mode.
  7. Replace the DIMM.

  8. Restart the server.
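
On a Linux host, the SMBIOS-reported installed capacity can be cross-checked against what the operating system actually sees. The commands below are a hypothetical sketch, not part of the Lenovo procedure; they assume a Linux system where dmidecode may or may not be installed and where sudo may not grant access, and they degrade gracefully in either case:

```shell
# Sum the sizes that the SMBIOS DIMM records report (0 if dmidecode is
# unavailable or sudo access is denied).
installed_mb=0
if command -v dmidecode >/dev/null 2>&1; then
  installed_mb=$(sudo -n dmidecode -t memory 2>/dev/null |
    awk '/Size: [0-9]+ MB/ {s += $2}
         /Size: [0-9]+ GB/ {s += $2 * 1024}
         END {print s + 0}')
fi

# Memory visible to the OS, from /proc/meminfo (0 if the file is absent).
visible_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null || echo 0)

echo "SMBIOS-reported installed: ${installed_mb} MB"
echo "OS-visible:                $((visible_kb / 1024)) MB"
```

A large gap between the two figures (beyond firmware-reserved memory) points back at the disabled-bank, mirroring, and seating checks in step 1.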

Attempt to change to another DCPMM mode fails

If the DCPMM mode stays the same after you change it and successfully restart the system, check the DRAM DIMM and DCPMM capacity to verify that it meets the requirements of the new mode (see DC Persistent Memory Module (DCPMM) setup).
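
From a running Linux OS, the open-source ipmctl tool can report how DCPMM capacity is currently provisioned and whether a provisioning goal is still pending. This is a hypothetical cross-check under the assumption that ipmctl is installed; it is not a substitute for the Setup Utility check above:

```shell
# Show how capacity is split between Memory mode and App Direct, plus any
# goal that was created but not yet applied by a reboot.
if command -v ipmctl >/dev/null 2>&1; then
  mode_info=$(sudo -n ipmctl show -memoryresources 2>/dev/null) || true
  sudo -n ipmctl show -goal 2>/dev/null || true
else
  mode_info="ipmctl not installed; check the mode in the Setup Utility instead"
fi
echo "$mode_info"
```

If `show -goal` still lists a pending goal after a restart, the new mode was never applied, which is consistent with a capacity mix that does not meet the mode's requirements.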

Extra namespace appears in an interleaved region

If there are two created namespaces in one interleaved region, VMware ESXi ignores the created namespaces and creates an extra new namespace during system boot. Delete the created namespaces in either the Setup Utility or the operating system before booting with ESXi for the first time.
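
On a Linux host, the ndctl utility can list the namespaces in each region so you can spot leftovers before the first ESXi boot. A hypothetical sketch, assuming ndctl is available; the destroy command is shown only as a comment because it erases the namespace and should follow a backup:

```shell
# List all persistent-memory namespaces, grouped by region.
if command -v ndctl >/dev/null 2>&1; then
  ns_list=$(ndctl list -N 2>/dev/null) || true
  echo "Existing namespaces: ${ns_list:-none}"
  # After backing up data, a stale namespace could be removed with, e.g.:
  #   sudo ndctl destroy-namespace namespace0.0 --force
else
  ns_list=""
  echo "ndctl not installed; delete namespaces from the Setup Utility instead"
fi
```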