How to migrate from a two-node switchless cluster to a cluster with Cisco Nexus 3232C cluster switches

If you have a two-node switchless cluster, you can migrate nondisruptively to a two-node switched cluster that includes Cisco Nexus 3232C cluster network switches.

  • The configurations must be properly set up and functioning.

    The two nodes must be connected and functioning in a two-node switchless cluster setting.

  • All cluster ports must be in the up state.
  • The Cisco Nexus 3232C cluster switch must be supported.
  • The existing cluster network configuration must have the following:
    • A redundant and fully functional Nexus 3232C cluster infrastructure on both switches
    • The latest RCF and NX-OS versions on your switches
    • Management connectivity on both switches
    • Console access to both switches
    • All cluster logical interfaces (LIFs) in the up state without having been migrated
    • Initial customization of the switch
    • All ISL ports enabled and cabled

Procedure summary

  • I. Display and migrate physical and logical ports (Steps 1-10)
  • II. Shut down the reassigned LIFs and disconnect the cables (Steps 11-14)
  • III. Enable the cluster ports (Steps 15-20)
  • IV. Enable the reassigned LIFs (Steps 21-36)

The examples in this procedure use the following switch and node nomenclature:

  • Nexus 3232C cluster switches, C1 and C2.
  • The nodes are n1 and n2.
Note
The examples in this procedure use two nodes, each utilizing two 100 GbE cluster interconnect ports e3a and e3b. The Lenovo Press has details about the cluster ports on your platforms.
  • n1_clus1 is the first cluster logical interface (LIF) to be connected to cluster switch C1 for node n1.
  • n1_clus2 is the second cluster LIF to be connected to cluster switch C2 for node n1.
  • n2_clus1 is the first cluster LIF to be connected to cluster switch C1 for node n2.
  • n2_clus2 is the second cluster LIF to be connected to cluster switch C2 for node n2.
  • The number of 10 GbE and 40/100 GbE ports is defined in the reference configuration files (RCFs) available on the Lenovo Datacenter Support Download page for a DM model.
Note
The procedure requires the use of both ONTAP commands and Cisco Nexus 3000 Series Switches commands; ONTAP commands are used unless otherwise indicated.
  1. If AutoSupport is enabled on this cluster, suppress automatic case creation by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=xh
    In MAINT=xh, x is the duration of the maintenance window in hours.
    Note
    The AutoSupport message notifies technical support of this maintenance task so that automatic case creation is suppressed during the maintenance window.
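    For example, the following command suppresses automatic case creation for two hours (the two-hour window is illustrative; substitute your own maintenance duration):

    cluster::*> system node autosupport invoke -node * -type all -message MAINT=2h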
  2. Determine the administrative or operational status for each cluster interface:
    1. Display the network port attributes: network port show -role cluster
      cluster::*> network port show -role cluster
        (network port show)
      Node: n1
                                                                                   Ignore
                                                              Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e3a       Cluster      Cluster          up   9000 auto/100000 -        -
      e3b       Cluster      Cluster          up   9000 auto/100000 -        -

      Node: n2
                                                                                   Ignore
                                                              Speed(Mbps) Health   Health
      Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e3a       Cluster      Cluster          up   9000 auto/100000 -        -
      e3b       Cluster      Cluster          up   9000 auto/100000 -        -

      4 entries were displayed.


    2. Display information about the logical interfaces and their designated home nodes: network interface show -role cluster
      cluster::*> network interface show -role cluster
        (network interface show)
                  Logical    Status     Network            Current       Current Is
      Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
      ----------- ---------- ---------- ------------------ ------------- ------- ----
      Cluster
                  n1_clus1   up/up      10.10.0.1/24       n1            e3a     true
                  n1_clus2   up/up      10.10.0.2/24       n1            e3b     true
                  n2_clus1   up/up      10.10.0.3/24       n2            e3a     true
                  n2_clus2   up/up      10.10.0.4/24       n2            e3b     true

      4 entries were displayed.

    3. Verify that switchless cluster detection is enabled by using the advanced privilege command: network options detect-switchless-cluster show

      The output in the following example shows that switchless cluster detection is enabled:

      cluster::*> network options detect-switchless-cluster show
      Enable Switchless Cluster Detection: true
  3. Verify that the appropriate RCFs and image are installed on the new 3232C switches and make any necessary site customizations such as adding users, passwords, and network addresses.
    You must prepare both switches at this time. If you need to upgrade the RCF and image software, follow these steps (a version check example follows the list):
    1. Go to the Lenovo Datacenter Support Site for a DM Series Product.
    2. Download the appropriate version of RCF.
    3. Refer to the Cisco website for switch image downloads.
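    For example, you can confirm the NX-OS release currently running on each switch with the Cisco show version command (output abbreviated here with a pipe filter; the version string shown matches the examples later in this procedure):

    C1# show version | include NXOS
    NXOS: version 7.0(3)I6(1)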
  4. On Nexus 3232C switches C1 and C2, disable all node-facing ports, but do not disable the ISL ports e1/31-32.
    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    The following example shows ports 1 through 30 being disabled on Nexus 3232C cluster switches C1 and C2:

    C1# configure
    C1(config)# int e1/1/1-4,e1/2/1-4,e1/3/1-4,e1/4/1-4,e1/5/1-4,e1/6/1-4,e1/7-30
    C1(config-if-range)# shutdown

    C2# configure
    C2(config)# int e1/1/1-4,e1/2/1-4,e1/3/1-4,e1/4/1-4,e1/5/1-4,e1/6/1-4,e1/7-30
    C2(config-if-range)# shutdown

  5. Connect ports 1/31 and 1/32 on C1 to the same ports on C2 using supported cabling.
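    Optionally, before checking the port channel in the next step, you can confirm that the ISL links are physically up by using the standard Cisco interface status display:

    C1# show interface eth1/31-32 brief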
  6. Verify that the ISL ports are operational on C1 and C2: show port-channel summary

    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    The following example shows the Cisco show port-channel summary command being used to verify the ISL ports are operational on C1 and C2:

    C1# show port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            S - Switched    R - Routed
            U - Up (port-channel)
            M - Not in use. Min-links not met
    --------------------------------------------------------------------------------
    Group Port-       Type     Protocol  Member Ports
          Channel
    --------------------------------------------------------------------------------
    1     Po1(SU)     Eth      LACP      Eth1/31(P)  Eth1/32(P)

    C2# show port-channel summary
    Flags:  D - Down        P - Up in port-channel (members)
            I - Individual  H - Hot-standby (LACP only)
            s - Suspended   r - Module-removed
            S - Switched    R - Routed
            U - Up (port-channel)
            M - Not in use. Min-links not met
    --------------------------------------------------------------------------------
    Group Port-       Type     Protocol  Member Ports
          Channel
    --------------------------------------------------------------------------------
    1     Po1(SU)     Eth      LACP      Eth1/31(P)  Eth1/32(P)

  7. Display the list of neighboring devices on the switch: show cdp neighbors
    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    The following example shows the Cisco command show cdp neighbors being used to display the neighboring devices on the switch:

    C1# show cdp neighbors
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                      S - Switch, H - Host, I - IGMP, r - Repeater,
                      V - VoIP-Phone, D - Remotely-Managed-Device,
                      s - Supports-STP-Dispute

    Device-ID          Local Intrfce  Hldtme Capability  Platform      Port ID
    C2                 Eth1/31        174    R S I s     N3K-C3232C    Eth1/31
    C2                 Eth1/32        174    R S I s     N3K-C3232C    Eth1/32

    Total entries displayed: 2

    C2# show cdp neighbors
    Capability Codes: R - Router, T - Trans-Bridge, B - Source-Route-Bridge
                      S - Switch, H - Host, I - IGMP, r - Repeater,
                      V - VoIP-Phone, D - Remotely-Managed-Device,
                      s - Supports-STP-Dispute

    Device-ID          Local Intrfce  Hldtme Capability  Platform      Port ID
    C1                 Eth1/31        178    R S I s     N3K-C3232C    Eth1/31
    C1                 Eth1/32        178    R S I s     N3K-C3232C    Eth1/32

    Total entries displayed: 2

  8. Display the cluster port connectivity on each node: network device-discovery show

    The following example shows the cluster port connectivity displayed for a two-node switchless cluster configuration:

    cluster::*> network device-discovery show
                Local  Discovered
    Node        Port   Device              Interface        Platform
    ----------- ------ ------------------- ---------------- ----------------
    n1         /cdp
                e3a    n2                  e3a              DM7100F
                e3b    n2                  e3b              DM7100F
    n2         /cdp
                e3a    n1                  e3a              DM7100F
                e3b    n1                  e3b              DM7100F


  9. Migrate the n1_clus1 and n2_clus1 LIFs to the physical ports of their destination nodes: network interface migrate -vserver vserver-name -lif lif-name -source-node source-node-name -destination-node destination-node-name -destination-port destination-port-name

    You must execute the command for each local node as shown in the following example:

    cluster::*> network interface migrate -vserver cluster -lif n1_clus1 -source-node n1
    -destination-node n1 -destination-port e3b
    cluster::*> network interface migrate -vserver cluster -lif n2_clus1 -source-node n2
    -destination-node n2 -destination-port e3b

  10. Verify that the cluster interfaces have migrated successfully: network interface show -role cluster

    The following example shows that the Is Home status for the n1_clus1 and n2_clus1 LIFs has become false after the migration is completed:

    cluster::*> network interface show -role cluster
      (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e3b     false
                n1_clus2   up/up      10.10.0.2/24       n1            e3b     true
                n2_clus1   up/up      10.10.0.3/24       n2            e3b     false
                n2_clus2   up/up      10.10.0.4/24       n2            e3b     true

    4 entries were displayed.
  11. Shut down the cluster ports for the n1_clus1 and n2_clus1 LIFs, which were migrated in step 9: network port modify -node node-name -port port-name -up-admin false

    You must execute the command for each port as shown in the following example:

    cluster::*> network port modify -node n1 -port e3a -up-admin false 
    cluster::*> network port modify -node n2 -port e3a -up-admin false
  12. Ping the remote cluster interfaces and perform an RPC server check: cluster ping-cluster -node node-name

    The following example shows node n1 being pinged and the RPC status indicated afterward:

    cluster::*> cluster ping-cluster -node n1

    Host is n1
    Getting addresses from network interface table...
    Cluster n1_clus1 n1 e3a 10.10.0.1
    Cluster n1_clus2 n1 e3b 10.10.0.2
    Cluster n2_clus1 n2 e3a 10.10.0.3
    Cluster n2_clus2 n2 e3b 10.10.0.4
    Local = 10.10.0.1 10.10.0.2
    Remote = 10.10.0.3 10.10.0.4
    Cluster Vserver Id = 4294967293
    Ping status:
    ....
    Basic connectivity succeeds on 4 path(s)
    Basic connectivity fails on 0 path(s)
    ................
    Detected 9000 byte MTU on 32 path(s):
        Local 10.10.0.1 to Remote 10.10.0.3
        Local 10.10.0.1 to Remote 10.10.0.4
        Local 10.10.0.2 to Remote 10.10.0.3
        Local 10.10.0.2 to Remote 10.10.0.4
    Larger than PMTU communication succeeds on 4 path(s)
    RPC status:
    1 paths up, 0 paths down (tcp check)
    1 paths up, 0 paths down (udp check)
  13. Disconnect the cable from e3a on node n1.
    You can refer to the running configuration and connect the first 100 GbE port on switch C1 (port 1/7 in this example) to e3a on n1 using cabling supported for Nexus 3232C switches.
  14. Disconnect the cable from e3a on node n2.
    You can refer to the running configuration and connect e3a to the next available 100 GbE port on C1, port 1/8, using supported cabling.
  15. Enable all node-facing ports on C1.
    For more information on Cisco commands, see the guides listed in the Cisco Nexus 3000 Series NX-OS Command References.

    The following example shows ports 1 through 30 being enabled on Nexus 3232C cluster switch C1:

    C1# configure
    C1(config)# int e1/1/1-4,e1/2/1-4,e1/3/1-4,e1/4/1-4,e1/5/1-4,e1/6/1-4,e1/7-30
    C1(config-if-range)# no shutdown
    C1(config-if-range)# exit
    C1(config)# exit
  16. Enable the first cluster port, e3a, on each node: network port modify -node node-name -port port-name -up-admin true
    cluster::*> network port modify -node n1 -port e3a -up-admin true 
    cluster::*> network port modify -node n2 -port e3a -up-admin true
  17. Verify that the cluster ports are up on both nodes: network port show -role cluster
    cluster::*> network port show -role cluster
      (network port show)
    Node: n1
                                                                                 Ignore
                                                            Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e3a       Cluster      Cluster          up   9000 auto/100000 -        -
    e3b       Cluster      Cluster          up   9000 auto/100000 -        -

    Node: n2
                                                                                 Ignore
                                                            Speed(Mbps) Health   Health
    Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status   Status
    --------- ------------ ---------------- ---- ---- ----------- -------- ------
    e3a       Cluster      Cluster          up   9000 auto/100000 -        -
    e3b       Cluster      Cluster          up   9000 auto/100000 -        -

    4 entries were displayed.
  18. For each node, revert all of the migrated cluster interconnect LIFs: network interface revert -vserver cluster -lif lif-name

    You must revert each LIF to its home port individually as shown in the following example:

    cluster::*> network interface revert -vserver cluster -lif n1_clus1 
    cluster::*> network interface revert -vserver cluster -lif n2_clus1
  19. Verify that all the LIFs are now reverted to their home ports: network interface show -role cluster
    The Is Home column should display a value of true for all of the ports listed in the Current Port column. If the displayed value is false, the port has not been reverted.
    cluster::*> network interface show -role cluster
      (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e3a     true
                n1_clus2   up/up      10.10.0.2/24       n1            e3b     true
                n2_clus1   up/up      10.10.0.3/24       n2            e3a     true
                n2_clus2   up/up      10.10.0.4/24       n2            e3b     true

    4 entries were displayed.
  20. Display the cluster port connectivity on each node: network device-discovery show
    cluster::*> network device-discovery show
                Local  Discovered
    Node        Port   Device              Interface        Platform
    ----------- ------ ------------------- ---------------- ----------------
    n1         /cdp
                e3a    C1                  Ethernet1/7      N3K-C3232C
                e3b    n2                  e3b              DM7100F
    n2         /cdp
                e3a    C1                  Ethernet1/8      N3K-C3232C
                e3b    n1                  e3b              DM7100F

  21. Migrate the clus2 LIFs to port e3a on the console of each node: network interface migrate -vserver vserver-name -lif lif-name -source-node source-node-name -destination-node destination-node-name -destination-port destination-port-name

    You must migrate each LIF individually as shown in the following example:

    cluster::*> network interface migrate -vserver cluster -lif n1_clus2 -source-node n1
    -destination-node n1 -destination-port e3a
    cluster::*> network interface migrate -vserver cluster -lif n2_clus2 -source-node n2
    -destination-node n2 -destination-port e3a
  22. Shut down the cluster ports for the clus2 LIFs on both nodes: network port modify

    The following example shows the specified ports being set to false, shutting the ports down on both nodes:

    cluster::*> network port modify -node n1 -port e3b -up-admin false     
    cluster::*> network port modify -node n2 -port e3b -up-admin false
  23. Verify the cluster LIF status: network interface show
    cluster::*> network interface show -role cluster
      (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e3a     true
                n1_clus2   up/up      10.10.0.2/24       n1            e3a     false
                n2_clus1   up/up      10.10.0.3/24       n2            e3a     true
                n2_clus2   up/up      10.10.0.4/24       n2            e3a     false

    4 entries were displayed.
  24. Disconnect the cable from e3b on node n1.
    You can refer to the running configuration and connect the first 100 GbE port on switch C2 (port 1/7 in this example) to e3b on node n1, using the appropriate cabling for the Nexus 3232C switch model.
  25. Disconnect the cable from e3b on node n2.
    You can refer to the running configuration and connect e3b to the next available 100 GbE port on C2, port 1/8, using the appropriate cabling for the Nexus 3232C switch model.
  26. Enable all node-facing ports on C2.

    The following example shows ports 1 through 30 being enabled on Nexus 3232C cluster switch C2:

    C2# configure
    C2(config)# int e1/1/1-4,e1/2/1-4,e1/3/1-4,e1/4/1-4,e1/5/1-4,e1/6/1-4,e1/7-30
    C2(config-if-range)# no shutdown
    C2(config-if-range)# exit
    C2(config)# exit
  27. Enable the second cluster port, e3b, on each node: network port modify

    The following example shows the second cluster port e3b being brought up on each node:

    cluster::*> network port modify -node n1 -port e3b -up-admin true     
    cluster::*> network port modify -node n2 -port e3b -up-admin true
  28. For each node, revert all of the migrated cluster interconnect LIFs: network interface revert

    The following example shows the migrated LIFs being reverted to their home ports.

    cluster::*> network interface revert -vserver Cluster -lif n1_clus2     
    cluster::*> network interface revert -vserver Cluster -lif n2_clus2

  29. Verify that all of the cluster interconnect ports are now reverted to their home ports: network interface show -role cluster

    The Is Home column should display a value of true for all of the ports listed in the Current Port column. If the displayed value is false, the port has not been reverted.
    cluster::*> network interface show -role cluster
      (network interface show)
                Logical    Status     Network            Current       Current Is
    Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
    ----------- ---------- ---------- ------------------ ------------- ------- ----
    Cluster
                n1_clus1   up/up      10.10.0.1/24       n1            e3a     true
                n1_clus2   up/up      10.10.0.2/24       n1            e3b     true
                n2_clus1   up/up      10.10.0.3/24       n2            e3a     true
                n2_clus2   up/up      10.10.0.4/24       n2            e3b     true

    4 entries were displayed.
  30. Verify that all of the cluster interconnect ports are in the up state: network port show -role cluster
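    The output should match the display in step 17, with the Link column reporting up for both e3a and e3b on each node; for example (column headers omitted):

    cluster::*> network port show -role cluster
    Node: n1
    e3a       Cluster      Cluster          up   9000 auto/100000 -        -
    e3b       Cluster      Cluster          up   9000 auto/100000 -        -
    Node: n2
    e3a       Cluster      Cluster          up   9000 auto/100000 -        -
    e3b       Cluster      Cluster          up   9000 auto/100000 -        -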
  31. Display the cluster switch port numbers through which each cluster port is connected to each node: network device-discovery show
    cluster::*> network device-discovery show
                Local  Discovered
    Node        Port   Device              Interface        Platform
    ----------- ------ ------------------- ---------------- ----------------
    n1         /cdp
                e3a    C1                  Ethernet1/7      N3K-C3232C
                e3b    C2                  Ethernet1/7      N3K-C3232C
    n2         /cdp
                e3a    C1                  Ethernet1/8      N3K-C3232C
                e3b    C2                  Ethernet1/8      N3K-C3232C
  32. Display discovered and monitored cluster switches: system cluster-switch show
    cluster::*> system cluster-switch show

    Switch                      Type               Address          Model
    --------------------------- ------------------ ---------------- ---------------
    C1                          cluster-network    10.10.1.101      NX3232CV
         Serial Number: FOX000001
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I6(1)
        Version Source: CDP

    C2                          cluster-network    10.10.1.102      NX3232CV
         Serial Number: FOX000002
          Is Monitored: true
                Reason:
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version 7.0(3)I6(1)
        Version Source: CDP

    2 entries were displayed.

  33. Verify that switchless cluster detection changed the switchless cluster option to disabled: network options switchless-cluster show
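    The following example output shows the option set to false (the exact label can vary slightly by ONTAP release):

    cluster::*> network options switchless-cluster show
    Enable Switchless Cluster: false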
  34. Ping the remote cluster interfaces and perform an RPC server check: cluster ping-cluster -node node-name
    cluster::*> cluster ping-cluster -node n1
    Host is n1
    Getting addresses from network interface table...
    Cluster n1_clus1 n1 e3a 10.10.0.1
    Cluster n1_clus2 n1 e3b 10.10.0.2
    Cluster n2_clus1 n2 e3a 10.10.0.3
    Cluster n2_clus2 n2 e3b 10.10.0.4
    Local = 10.10.0.1 10.10.0.2
    Remote = 10.10.0.3 10.10.0.4
    Cluster Vserver Id = 4294967293
    Ping status:
    ....
    Basic connectivity succeeds on 4 path(s)
    Basic connectivity fails on 0 path(s)
    ................
    Detected 9000 byte MTU on 32 path(s):
        Local 10.10.0.1 to Remote 10.10.0.3
        Local 10.10.0.1 to Remote 10.10.0.4
        Local 10.10.0.2 to Remote 10.10.0.3
        Local 10.10.0.2 to Remote 10.10.0.4
    Larger than PMTU communication succeeds on 4 path(s)
    RPC status:
    1 paths up, 0 paths down (tcp check)
    1 paths up, 0 paths down (udp check)
  35. Enable the cluster switch health monitor log collection feature for collecting switch-related log files: system cluster-switch log setup-password and system cluster-switch log enable-collection
    cluster::*> system cluster-switch log setup-password
    Enter the switch name: <return>
    The switch name entered is not recognized.
    Choose from the following list:
    C1
    C2

    cluster::*> system cluster-switch log setup-password

    Enter the switch name: C1
    RSA key fingerprint is e5:8b:c6:dc:e2:18:18:09:36:63:d9:63:dd:03:d9:cc
    Do you want to continue? {y|n}::[n] y

    Enter the password: <enter switch password>
    Enter the password again: <enter switch password>

    cluster::*> system cluster-switch log setup-password

    Enter the switch name: C2
    RSA key fingerprint is 57:49:86:a1:b9:80:6a:61:9a:86:8e:3c:e3:b7:1f:b1
    Do you want to continue? {y|n}:: [n] y

    Enter the password: <enter switch password>
    Enter the password again: <enter switch password>

    cluster::*> system cluster-switch log enable-collection

    Do you want to enable cluster log collection for all nodes in the cluster?
    {y|n}: [n] y

    Enabling cluster switch log collection.

    cluster::*>

    Note
    If any of these commands return an error, contact Lenovo support.
  36. If you suppressed automatic case creation, re-enable it by invoking an AutoSupport message: system node autosupport invoke -node * -type all -message MAINT=END
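    For example:

    cluster::*> system node autosupport invoke -node * -type all -message MAINT=END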