
Updating a two-node MetroCluster configuration

You can nondisruptively upgrade, and in some cases downgrade, ONTAP on a two-node MetroCluster configuration. The procedure has several stages: initiating a negotiated switchover, updating the cluster at the switched-over site, initiating a switchback, and then repeating the process on the cluster at the other site.

About this task

  • This procedure is for two-node MetroCluster configurations only.

    Do not use this procedure if you have a four-node MetroCluster configuration.

  • For downgrades, this procedure applies only when downgrading from ONTAP 9.0 or earlier.

    You cannot use this procedure to downgrade a two-node MetroCluster configuration from ONTAP 9.1 or ONTAP 9.2, which can only be done disruptively.

  1. Set the privilege level to advanced, entering y when prompted to continue: set -privilege advanced
    The advanced prompt (*>) appears.
  2. On the cluster to be upgraded, install the new ONTAP software image as the default: system node image update -package package_location -setdefault true -replace-package true

    Example

    cluster_B::*> system node image update -package http://www.example.com/NewImage.tgz -setdefault true -replace-package true
  3. Verify that the target software image is set as the default image: system node image show

    Example

    The following example shows that NewImage is set as the default image:
    cluster_B::*> system node image show
                          Is      Is                 Install
        Node     Image    Default Current Version    Date
        -------- -------- ------- ------- ---------- -------------------
        node_B_1
                 OldImage false   true    X.X.X      MM/DD/YYYY TIME
                 NewImage true    false   Y.Y.Y      MM/DD/YYYY TIME
    2 entries were displayed.
  4. If the target software image is not set as the default image, then change it: system image modify {-node * -iscurrent false} -isdefault true
  5. Verify that all cluster SVMs are in a healthy state: metrocluster vserver show
  6. On the cluster that is not being updated, initiate a negotiated switchover: metrocluster switchover

    The operation can take several minutes. You can use the metrocluster operation show command to verify that the switchover has completed.

    Example

    In the following example, a negotiated switchover is performed on the remote cluster (cluster_A). This causes the local cluster (cluster_B) to halt so that you can update it.
    cluster_A::> metrocluster switchover

    Warning: negotiated switchover is about to start. It will stop all the data
             Vservers on cluster "cluster_B" and automatically re-start them on
             cluster "cluster_A". It will finally gracefully shutdown cluster
             "cluster_B".
    Do you want to continue? {y|n}: y
  7. Verify that all cluster SVMs are in a healthy state: metrocluster vserver show
  8. Resynchronize the data aggregates on the surviving cluster: metrocluster heal -phase aggregates
    After upgrading MetroCluster IP configurations to ONTAP 9.5 or later, the aggregates will be in a degraded state for a short period before resynchronizing and returning to a mirrored state.

    Example

    cluster_A::> metrocluster heal -phase aggregates
    [Job 130] Job succeeded: Heal Aggregates is successful.
  9. Verify that the healing operation was completed successfully: metrocluster operation show

    Example

    cluster_A::> metrocluster operation show
    Operation: heal-aggregates
    State: successful
    Start Time: MM/DD/YYYY TIME
    End Time: MM/DD/YYYY TIME
    Errors: -
  10. Resynchronize the root aggregates on the surviving cluster: metrocluster heal -phase root-aggregates

    Example

    cluster_A::> metrocluster heal -phase root-aggregates
    [Job 131] Job succeeded: Heal Root Aggregates is successful.
  11. Verify that the healing operation was completed successfully: metrocluster operation show

    Example

    cluster_A::> metrocluster operation show
    Operation: heal-root-aggregates
    State: successful
    Start Time: MM/DD/YYYY TIME
    End Time: MM/DD/YYYY TIME
    Errors: -
  12. On the halted cluster, boot the node from the LOADER prompt: boot_ontap
  13. Wait for the boot process to finish, and then verify that all cluster SVMs are in a healthy state: metrocluster vserver show
  14. Perform a switchback from the surviving cluster: metrocluster switchback
  15. Verify that the switchback was completed successfully: metrocluster operation show

    Example

    cluster_A::> metrocluster operation show
    Operation: switchback
    State: successful
    Start Time: MM/DD/YYYY TIME
    End Time: MM/DD/YYYY TIME
    Errors: -
  16. Verify that all cluster SVMs are in a healthy state: metrocluster vserver show
  17. Repeat all previous steps on the other cluster.
  18. Verify that the MetroCluster configuration is healthy:
    1. Check the configuration: metrocluster check run

      Example

      cluster_A::> metrocluster check run
      Last Checked On: MM/DD/YYYY TIME
      Component           Result
      ------------------- ---------
      nodes               ok
      lifs                ok
      config-replication  ok
      aggregates          ok
      4 entries were displayed.

      Command completed. Use the "metrocluster check show -instance"
      command or sub-commands in "metrocluster check" directory for
      detailed results.
      To check if the nodes are ready to do a switchover or switchback
      operation, run "metrocluster switchover -simulate" or "metrocluster
      switchback -simulate", respectively.
    2. If you want to view more detailed results, use the individual metrocluster check show commands:
      metrocluster check aggregate show
      metrocluster check config-replication show
      metrocluster check lif show
      metrocluster check node show
    3. Set the privilege level to advanced: set -privilege advanced
    4. Simulate the switchover operation: metrocluster switchover -simulate
    5. Review the results of the switchover simulation: metrocluster operation show

      Example

      cluster_A::*> metrocluster operation show
      Operation: switchover
      State: successful
      Start time: MM/DD/YYYY TIME
      End time: MM/DD/YYYY TIME
      Errors: -
    6. Return to the admin privilege level: set -privilege admin
    7. Repeat these substeps on the other cluster.
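Several steps in this procedure come down to inspecting the output of metrocluster operation show or metrocluster check run. If you script these verification steps (for example, by capturing command output over an SSH session), the key/value and tabular outputs shown in the examples above can be checked programmatically. The sketch below is illustrative only: the helper names are hypothetical and not part of ONTAP, and the parsing assumes output in exactly the shape shown in this procedure's examples.

```python
def parse_operation_show(output: str) -> dict:
    """Parse 'metrocluster operation show' key/value output into a dict."""
    fields = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields


def operation_succeeded(output: str, expected_operation: str) -> bool:
    """True if the captured output reports the expected operation with
    State: successful and no errors ('-' in the Errors field)."""
    fields = parse_operation_show(output)
    return (fields.get("Operation") == expected_operation
            and fields.get("State") == "successful"
            and fields.get("Errors") == "-")


def all_checks_ok(output: str) -> bool:
    """True if every component row between the dashed header separator and
    the 'entries were displayed' trailer of 'metrocluster check run'
    output reports 'ok'."""
    in_table = False
    results = []
    for line in output.splitlines():
        stripped = line.strip()
        if not in_table:
            # The table body starts after the dashed separator line.
            if stripped and set(stripped) <= {"-", " "}:
                in_table = True
            continue
        if "entries were displayed" in stripped:
            break
        parts = stripped.split()
        if len(parts) == 2:          # "<component> <result>" rows
            results.append(parts[1])
    return bool(results) and all(r == "ok" for r in results)


# Samples taken from the example outputs shown earlier in this procedure.
sample_op = """\
Operation: switchback
State: successful
Start Time: MM/DD/YYYY TIME
End Time: MM/DD/YYYY TIME
Errors: -
"""

sample_check = """\
Last Checked On: MM/DD/YYYY TIME
Component           Result
------------------- ---------
nodes               ok
lifs                ok
config-replication  ok
aggregates          ok
4 entries were displayed.
"""

print(operation_succeeded(sample_op, "switchback"))  # True
print(all_checks_ok(sample_check))                   # True
```

Exact column layout and field names can vary between ONTAP releases, so treat these parsers as a starting point and verify them against output from your own systems.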

After you finish

You should perform any post-upgrade or post-downgrade tasks.