Assigning ownership for replaced drives

If you replaced drives when restoring hardware at the disaster site, or if you had to zero drives or remove ownership, you must assign ownership to the affected drives.

Before you begin

The disaster site must have at least as many available drives as it did prior to the disaster.

The drive shelves and drive arrangement must meet the requirements in the Required MetroCluster IP components and naming conventions section of the MetroCluster IP Installation and Configuration Guide.

MetroCluster IP installation and configuration

About this task

These steps are performed on the cluster at the disaster site.

This procedure shows the reassignment of all drives and the creation of new plexes at the disaster site. The new plexes are remote plexes of the surviving site and local plexes of the disaster site.

This section provides examples for four-node configurations. For eight-node configurations, you must account for the additional nodes on the second DR group. The examples make the following assumptions:
  • Site A is the disaster site.

  • node_A_1 has been replaced.

  • node_A_2 has been replaced.

    Present only in four-node MetroCluster configurations.

  • Site B is the surviving site.

  • node_B_1 is healthy.

  • node_B_2 is healthy.

    Present only in four-node MetroCluster configurations.

The controller modules have the following original system IDs:

Number of nodes in MetroCluster configuration  Node      Original system ID
---------------------------------------------  --------  ------------------
Four                                           node_A_1  4068741258
                                               node_A_2  4068741260
                                               node_B_1  4068741254
                                               node_B_2  4068741256

You should keep in mind the following points when assigning the drives:

  • The old-count-of-disks value must be at least the number of disks that each node had before the disaster.

    If fewer disks are specified or present, the healing operations might not complete because of insufficient space.

  • The new plexes to be created are remote plexes belonging to the surviving site (node_B_x pool1) and local plexes belonging to the disaster site (node_A_x pool0).

  • The total number of required drives should not include the root aggregate disks.

    If n disks are assigned to pool1 of the surviving site, then (n-3) disks should be assigned to the disaster site, with the assumption that the root aggregate uses three disks (see the sketch after this list).

  • All disks on the same stack must be assigned to the same pool.

  • Disks belonging to the surviving site are assigned to pool 1 and disks belonging to the disaster site are assigned to pool 0.
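
If it helps to see the drive-count rule in action, the following minimal Python sketch applies it to an example. The 24-disk pool1 count is hypothetical, and the three-disk root aggregate matches the assumption stated above:

    # Sketch of the drive-count rule described above; the numbers are
    # hypothetical and chosen only for illustration.
    ROOT_AGGR_DISKS = 3   # this section assumes a three-disk root aggregate

    def disaster_site_disks(surviving_pool1_disks: int) -> int:
        """Disks to assign to pool0 of the disaster site, given n disks
        assigned to pool1 of the surviving site (rule: n - 3)."""
        return surviving_pool1_disks - ROOT_AGGR_DISKS

    n = 24  # hypothetical disk count assigned to pool1 on the surviving site
    print(f"pool1 (surviving site): {n} disks")
    print(f"pool0 (disaster site):  {disaster_site_disks(n)} disks")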

  1. Assign the new, unowned drives:
    • For four-node MetroCluster configurations, assign the new, unowned disks to the appropriate disk pools by issuing the following series of commands from the surviving site (a scripted sketch of these commands appears at the end of this step):
      1. Systematically assign the replaced disks for each node to their respective disk pools:

        disk assign -s sysid -n old-count-of-disks -p pool
        From the surviving site, you issue a disk assign command for each node:
        cluster_B::> disk assign -s node_B_1-sysid -n old-count-of-disks -p 1 
        (remote pool of surviving site)
        cluster_B::> disk assign -s node_B_2-sysid -n old-count-of-disks -p 1
        (remote pool of surviving site)
        cluster_B::> disk assign -s node_A_1-old-sysid -n old-count-of-disks -p 0
        (local pool of disaster site)
        cluster_B::> disk assign -s node_A_2-old-sysid -n old-count-of-disks -p 0
        (local pool of disaster site)

        The following example shows the commands with the system IDs:
        cluster_B::> disk assign -s 4068741254 -n 21 -p 1  
        cluster_B::> disk assign -s 4068741256 -n 21 -p 1
        cluster_B::> disk assign -s 4068741258 -n 21 -p 0
        cluster_B::> disk assign -s 4068741260 -n 21 -p 0

      2. Confirm the ownership of the disks: storage disk show -fields owner, pool

        cluster_A::> storage disk show -fields owner, pool
        disk     owner         pool
        -------- ------------- -----
        0c.00.1  node_A_1      Pool0
        0c.00.2  node_A_1      Pool0
        .
        .
        .
        0c.00.8  node_A_1      Pool1
        0c.00.9  node_A_1      Pool1
        .
        .
        .
        0c.00.15 node_A_2      Pool0
        0c.00.16 node_A_2      Pool0
        .
        .
        .
        0c.00.22 node_A_2      Pool1
        0c.00.23 node_A_2      Pool1
        .
        .
        .
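
    If you are scripting this step, the following Python sketch builds the same four disk assign commands from the example system IDs in this section. The run_on_cluster helper is a hypothetical placeholder for however you reach the cluster_B CLI (for example, ssh); as written, the sketch only prints the commands:

        # Sketch: generate the four "disk assign" commands from the
        # example system IDs above. Nothing is executed; run_on_cluster
        # is a hypothetical stand-in for your ssh or API transport.
        OLD_COUNT_OF_DISKS = 21  # example value used in this section

        assignments = [
            ("4068741254", 1),  # node_B_1 -> remote pool of surviving site
            ("4068741256", 1),  # node_B_2 -> remote pool of surviving site
            ("4068741258", 0),  # node_A_1 old sysid -> local pool of disaster site
            ("4068741260", 0),  # node_A_2 old sysid -> local pool of disaster site
        ]

        def run_on_cluster(command: str) -> None:
            # Hypothetical placeholder: print instead of executing.
            print(f"cluster_B::> {command}")

        for sysid, pool in assignments:
            run_on_cluster(f"disk assign -s {sysid} -n {OLD_COUNT_OF_DISKS} -p {pool}")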
  2. On the surviving site, turn on automatic disk assignment again: storage disk option modify -autoassign on *

    Example

    cluster_B::> storage disk option modify -autoassign on *
    2 entries were modified.

  3. On the surviving site, confirm that automatic disk assignment is on: storage disk option show

    Example

    cluster_B::> storage disk option show
    Node     BKg. FW. Upd. Auto Copy   Auto Assign Auto Assign Policy
    -------- ------------- ----------- ----------- ------------------
    node_B_1 on            on          on          default
    node_B_2 on            on          on          default
    2 entries were displayed.

    cluster_B::>
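
    If you prefer to confirm this programmatically rather than by eye, the following Python sketch scans captured storage disk option show output for any node whose Auto Assign column is not on. The column layout and the node_ name prefix are assumptions based on the example output above:

        # Sketch: flag nodes whose Auto Assign column is not "on" in
        # captured "storage disk option show" output. The parsing assumes
        # the column layout and node naming shown in the example above.
        sample_output = """\
        Node     BKg. FW. Upd. Auto Copy   Auto Assign Auto Assign Policy
        -------- ------------- ----------- ----------- ------------------
        node_B_1 on            on          on          default
        node_B_2 on            on          on          default
        """

        def autoassign_off_nodes(output: str) -> list[str]:
            """Return the nodes whose Auto Assign field is not 'on'."""
            off = []
            for line in output.splitlines():
                fields = line.split()
                # Data rows have five fields: node, background firmware
                # update, auto copy, auto assign, auto assign policy.
                if len(fields) == 5 and fields[0].startswith("node_"):
                    if fields[3] != "on":
                        off.append(fields[0])
            return off

        print(autoassign_off_nodes(sample_output) or "Auto Assign is on for all nodes")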