Overview of the switchover process

The MetroCluster switchover operation enables immediate resumption of services following a disaster by moving storage and client access from the source cluster to the remote site. You must be aware of what changes to expect and which actions you need to perform if a switchover occurs.

During a switchover operation, the system takes the following actions:

  • Ownership of the disks that belong to the disaster site is changed to the disaster recovery (DR) partner.

    This is similar to the case of a local failover in a high-availability (HA) pair, in which ownership of the disks belonging to the partner that is down is changed to the healthy partner.

  • The surviving plexes that are located on the surviving site but belong to the nodes in the disaster cluster are brought online on the cluster at the surviving site.

  • The sync-source storage virtual machine (SVM) that belongs to the disaster site is brought down.

    Note
    This is applicable only to a negotiated switchover.
  • The sync-destination SVM belonging to the disaster site is brought up.

During the switchover, the root aggregates of the DR partner are not brought online.
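After a switchover completes, you can confirm its result from the surviving site. A minimal check might look like the following (the cluster name and prompt are illustrative; output fields vary by ONTAP release):

```
cluster_B::> metrocluster show         # Configuration mode; should report switchover
cluster_B::> metrocluster node show    # Per-node DR state for each DR group
```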

The metrocluster switchover command switches over the nodes in all DR groups in the MetroCluster configuration. For example, in an eight-node MetroCluster configuration, it switches over the nodes in both DR groups.

If you need to switch over only the services to the remote site, you should perform a negotiated switchover without fencing the site. If storage or equipment is unreliable, you should fence the disaster site and then perform an unplanned switchover. Fencing prevents RAID reconstructions when the disks power up in a staggered manner.
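The two paths above map to different forms of the metrocluster switchover command. A sketch of each invocation, assuming you are logged in to the surviving cluster (the cluster name is illustrative):

```
# Planned (negotiated) switchover; both sites are healthy:
cluster_B::> metrocluster switchover

# Unplanned switchover after fencing the disaster site:
cluster_B::> metrocluster switchover -forced-on-disaster true

# Track the operation until it completes:
cluster_B::> metrocluster operation show
```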

Note
This procedure should be used only if the other site is stable and is not intended to be taken offline.

Availability of commands during switchover

The following list shows the availability of commands during switchover:

storage aggregate create
    You can create an aggregate:
      • If it is owned by a node that is part of the surviving cluster
    You cannot create an aggregate:
      • For a node at the disaster site
      • For a node that is not part of the surviving cluster

storage aggregate delete
    You can delete a data aggregate.

storage aggregate mirror
    You can create a plex for an unmirrored aggregate.

storage aggregate plex delete
    You can delete a plex of a mirrored aggregate.

vserver create
    You can create an SVM:
      • If its root volume resides in a data aggregate owned by the surviving cluster
    You cannot create an SVM:
      • If its root volume resides in a data aggregate owned by the disaster-site cluster

vserver delete
    You can delete both sync-source and sync-destination SVMs.

network interface create -lif
    You can create a data SVM LIF for both sync-source and sync-destination SVMs.

network interface delete -lif
    You can delete a data SVM LIF for both sync-source and sync-destination SVMs.

volume create
    You can create a volume for both sync-source and sync-destination SVMs:
      • For a sync-source SVM, the volume must reside in a data aggregate owned by the surviving cluster
      • For a sync-destination SVM, the volume must reside in a data aggregate owned by the disaster-site cluster

volume delete
    You can delete a volume for both sync-source and sync-destination SVMs.

volume move
    You can move a volume for both sync-source and sync-destination SVMs:
      • For a sync-source SVM, the surviving cluster must own the destination aggregate
      • For a sync-destination SVM, the disaster-site cluster must own the destination aggregate

snapmirror break
    You can break a SnapMirror relationship between the source and destination endpoints of a data protection mirror.
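For example, during switchover you could create an aggregate on a surviving-site node or break a data protection mirror, along the following lines (the aggregate, node, SVM, and volume names are illustrative):

```
# Allowed: the new aggregate is owned by a node in the surviving cluster.
cluster_B::> storage aggregate create -aggregate aggr_new -node cluster_B-01 -diskcount 5

# Allowed: break a SnapMirror data protection relationship.
cluster_B::> snapmirror break -destination-path svm_dst:vol_dst
```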

Differences in switchover between MetroCluster FC and IP configurations

In MetroCluster IP configurations, the remote disks are accessed through the remote DR partner nodes, which act as iSCSI targets. When the remote nodes are taken down in a switchover operation, the remote disks therefore become inaccessible. This results in the following differences from MetroCluster FC configurations:

  • Mirrored aggregates that are owned by the local cluster become degraded.

  • Mirrored aggregates that were switched over from the remote cluster become degraded.

Note
When unmirrored aggregates are supported on a MetroCluster IP configuration, the unmirrored aggregates that are not switched over from the remote cluster are not accessible.
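In a MetroCluster IP configuration, you can confirm the degraded state described above from the surviving site; the plex backed by the unreachable remote disks is reported as failed (cluster name illustrative; exact output varies by ONTAP release):

```
# Aggregate-level view; degraded mirrored aggregates are flagged in the RAID status.
cluster_B::> storage aggregate show

# Plex-level view; the plex on the inaccessible remote disks shows as failed.
cluster_B::> storage aggregate plex show
```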