Reassigning disk ownership for root aggregates to replacement controller modules (MetroCluster FC configurations)

If one or both of the controller modules or NVRAM cards were replaced at the disaster site, the system IDs have changed and you must reassign the disks belonging to the root aggregates to the replacement controller modules.

About this task

Because the nodes are in switchover mode and healing has been done, only the disks containing the root aggregates of pool1 of the disaster site will be reassigned in this section. They are the only disks still owned by the old system ID at this point.

This section provides examples for four-node configurations. For eight-node configurations, you must also account for the additional nodes in the second DR group. The examples make the following assumptions:
  • Site A is the disaster site.

  • node_A_1 has been replaced.

  • node_A_2 has been replaced.

    Present only in four-node MetroCluster configurations.

  • Site B is the surviving site.

  • node_B_1 is healthy.

  • node_B_2 is healthy.

    Present only in four-node MetroCluster configurations.

The old and new system IDs were identified in Determining the system IDs of the old controller modules.

The examples in this procedure use controllers with the following system IDs:

Number of nodes   Node       Original system ID   New system ID
---------------   --------   ------------------   -------------
Four              node_A_1   4068741258           1574774970
                  node_A_2   4068741260           1574774991
                  node_B_1   4068741254           unchanged
                  node_B_2   4068741256           unchanged
  1. With the replacement node in Maintenance mode, reassign the root aggregate disks: disk reassign -s old-system-ID -d new-system-ID

    Example

    *> disk reassign -s 4068741258 -d 1574774970
  2. View the disks to confirm the ownership change of the pool1 root aggregate disks of the disaster site to the replacement node: disk show

    Example

    The output might show more or fewer disks, depending on how many disks are in the root aggregate and whether any of these disks failed and were replaced. If the disks were replaced, then Pool0 disks will not appear in the output.

    The pool1 root aggregate disks of the disaster site should now be assigned to the replacement node.

    *> disk show
    Local System ID: 1574774970

    DISK             OWNER                 POOL   SERIAL NUMBER   HOME                  DR HOME
    ---------------  --------------------  -----  --------------  --------------------  -------
    sw_A_1:6.126L19  node_A_1(1574774970)  Pool0  serial-number   node_A_1(1574774970)
    sw_A_1:6.126L3   node_A_1(1574774970)  Pool0  serial-number   node_A_1(1574774970)
    sw_A_1:6.126L7   node_A_1(1574774970)  Pool0  serial-number   node_A_1(1574774970)
    sw_B_1:6.126L8   node_A_1(1574774970)  Pool1  serial-number   node_A_1(1574774970)
    sw_B_1:6.126L24  node_A_1(1574774970)  Pool1  serial-number   node_A_1(1574774970)
    sw_B_1:6.126L2   node_A_1(1574774970)  Pool1  serial-number   node_A_1(1574774970)

  3. View the aggregate status: aggr status

    Example

    The output might show more or fewer disks, depending on how many disks are in the root aggregate and whether any of these disks failed and were replaced. If the disks were replaced, then Pool0 disks will not appear in the output.

    *> aggr status
    Aggr            State    Status
    node_A_1_root   online   raid_dp, aggr
                             mirror degraded
                             64-bit
    *>
  4. Delete the contents of the mailbox disks: mailbox destroy local
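
    Example

    The following is a minimal sketch of the command as entered at the Maintenance mode prompt; any confirmation prompt or output that the command displays is not reproduced here.

    *> mailbox destroy local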
  5. If the aggregate is not online, bring it online: aggr online aggr_name
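
    Example

    A minimal sketch, assuming the root aggregate is named node_A_1_root as in the aggr status output above; substitute the name of your own root aggregate if it differs.

    *> aggr online node_A_1_root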
  6. Halt the node to display the LOADER prompt: halt
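
    Example

    A minimal sketch; the halt messages are omitted, and the exact LOADER prompt string shown here is only a placeholder because it varies by platform.

    *> halt
    ...
    LOADER>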