Shutting down the node

To shut down the impaired node, you must determine the status of the node and, if necessary, take over the node so that the healthy node continues to serve data from the impaired node's storage.

Before you begin

  • If you have a cluster with more than two nodes, it must be in quorum. If the cluster is not in quorum or a healthy node shows false for eligibility and health, you must correct the issue before shutting down the impaired node.

    System Administration Guide

About this task

You might want to erase the contents of your caching module before replacing it.
  1. Although data on the caching module is encrypted, you might want to erase the data from the impaired caching module and verify that it contains no data:
    1. Erase the data on the caching module: system controller flash-cache secure-erase run [-slot] slot#
    2. Verify that the data has been erased from the caching module: system controller flash-cache secure-erase show -node node_name
      The output should display the caching module status as erased.
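    Example

    A sample session, assuming the caching module sits in slot 1 of a node named node1 (both values are placeholders) and that the commands run at the advanced privilege level, as in the AutoSupport example below:

    cluster1::*> system controller flash-cache secure-erase run -slot 1
    cluster1::*> system controller flash-cache secure-erase show -node node1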
  2. If AutoSupport is enabled, suppress automatic log creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=number_of_hours_downh

    Example

    The following AutoSupport message suppresses automatic log creation for two hours: cluster1::*> system node autosupport invoke -node * -type all -message MAINT=2h
  3. Disable automatic giveback from the console of the healthy node: storage failover modify -node local -auto-giveback false
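    Example

    The following sample session disables automatic giveback and then confirms the change, assuming a cluster named cluster1; the storage failover show command is a standard ONTAP command, not part of the procedure above:

    cluster1::> storage failover modify -node local -auto-giveback false
    cluster1::> storage failover show -fields auto-giveback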
  4. Take the impaired node to the LOADER prompt:
    If the impaired node is displaying...     Then...
    The LOADER prompt                         Go to the next step.
    Waiting for giveback...                   Press Ctrl-C, and then respond y.
    System prompt or password prompt          Take over or halt the impaired node:
    (enter system password)                   storage failover takeover -ofnode impaired_node_name
                                              When the impaired node shows Waiting for giveback...,
                                              press Ctrl-C, and then respond y.
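    Example

    A sample takeover invocation, assuming the impaired node is named node1 (a placeholder):

    cluster1::> storage failover takeover -ofnode node1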