Installing the Reference Configuration File (RCF)

You can install the RCF after setting up the Nexus 9336C-FX2 switch for the first time. You can also use this procedure to upgrade your RCF version.

The examples in this procedure use the following switch and node nomenclature:
  • The names of the two Cisco switches are cs1 and cs2.
  • The node names are cluster1-01, cluster1-02, cluster1-03, and cluster1-04.
  • The cluster LIF names are cluster1-01_clus1, cluster1-01_clus2, cluster1-02_clus1, cluster1-02_clus2, cluster1-03_clus1, cluster1-03_clus2, cluster1-04_clus1, and cluster1-04_clus2.
  • The cluster1::*> prompt indicates the name of the cluster.
Note
The procedure requires the use of both ONTAP commands and Cisco Nexus 9000 Series switch commands; ONTAP commands are used unless otherwise indicated.
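The asterisk in the cluster1::*> prompt also indicates that the commands are run at the advanced privilege level. If your session shows the cluster1::> prompt, raise the privilege level first; a minimal sketch (the confirmation text varies by ONTAP release):
  cluster1::> set -privilege advanced
  Warning: These advanced commands are potentially dangerous; use them only when
  directed to do so by support personnel.
  Do you want to continue? {y|n}: y
  cluster1::*>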
  1. Display the cluster ports on each node that are connected to the cluster switches: network device-discovery show
    cluster1::*> network device-discovery show
    Node/ Local Discovered
    Protocol Port Device (LLDP: ChassisID) Interface Platform
    ----------- ------ ------------------------- ---------------- --------
    cluster1-01/cdp
    e0a cs1 Ethernet1/7 N9K-C9336C
    e0d cs2 Ethernet1/7 N9K-C9336C
    cluster1-02/cdp
    e0a cs1 Ethernet1/8 N9K-C9336C
    e0d cs2 Ethernet1/8 N9K-C9336C
    cluster1-03/cdp
    e0a cs1 Ethernet1/1/1 N9K-C9336C
    e0b cs2 Ethernet1/1/1 N9K-C9336C
    cluster1-04/cdp
    e0a cs1 Ethernet1/1/2 N9K-C9336C
    e0b cs2 Ethernet1/1/2 N9K-C9336C
    cluster1::*>
  2. Check the administrative and operational status of each cluster port.
    1. Verify that all the cluster ports are up with a healthy status: network port show -role cluster
      cluster1::*> network port show -role cluster

      Node: cluster1-01
      Ignore
      Speed(Mbps) Health Health
      Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a Cluster Cluster up 9000 auto/100000 healthy false
      e0d Cluster Cluster up 9000 auto/100000 healthy false

      Node: cluster1-02
      Ignore
      Speed(Mbps) Health Health
      Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a Cluster Cluster up 9000 auto/100000 healthy false
      e0d Cluster Cluster up 9000 auto/100000 healthy false

      Node: cluster1-03

      Ignore
      Speed(Mbps) Health Health
      Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a Cluster Cluster up 9000 auto/10000 healthy false
      e0b Cluster Cluster up 9000 auto/10000 healthy false

      Node: cluster1-04
      Ignore
      Speed(Mbps) Health Health
      Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a Cluster Cluster up 9000 auto/10000 healthy false
      e0b Cluster Cluster up 9000 auto/10000 healthy false
      8 entries were displayed.
      cluster1::*>
    2. Verify that all the cluster interfaces (LIFs) are on the home port: network interface show -role cluster
      cluster1::*> network interface show -role cluster
      Logical Status Network Current Current Is
      Vserver Interface Admin/Oper Address/Mask Node Port Home
      ----------- ------------------ ---------- ----------------- ------------ ------- ----
      Cluster
      cluster1-01_clus1 up/up 169.254.3.4/23 cluster1-01 e0a true
      cluster1-01_clus2 up/up 169.254.3.5/23 cluster1-01 e0d true
      cluster1-02_clus1 up/up 169.254.3.8/23 cluster1-02 e0a true
      cluster1-02_clus2 up/up 169.254.3.9/23 cluster1-02 e0d true
      cluster1-03_clus1 up/up 169.254.1.3/23 cluster1-03 e0a true
      cluster1-03_clus2 up/up 169.254.1.1/23 cluster1-03 e0b true
      cluster1-04_clus1 up/up 169.254.1.6/23 cluster1-04 e0a true
      cluster1-04_clus2 up/up 169.254.1.7/23 cluster1-04 e0b true
      8 entries were displayed.
      cluster1::*>
    3. Verify that the cluster displays information for both cluster switches: system cluster-switch show -is-monitoring-enabled-operational true
      cluster1::*> system cluster-switch show -is-monitoring-enabled-operational true
      Switch Type Address Model
      --------------------------- ------------------ ---------------- -----
      cs1 cluster-network 10.233.205.90 N9K-C9336C
      Serial Number: FOCXXXXXXGD
      Is Monitored: true
      Reason: None
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
      9.3(5)
      Version Source: CDP

      cs2 cluster-network 10.233.205.91 N9K-C9336C
      Serial Number: FOCXXXXXXGS
      Is Monitored: true
      Reason: None
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
      9.3(5)
      Version Source: CDP
      cluster1::*>
  3. Disable auto-revert on the cluster LIFs.
    cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert false
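    To confirm the change before you proceed, you can list the auto-revert setting of the cluster LIFs; an abbreviated check (output shortened for illustration):
    cluster1::*> network interface show -vserver Cluster -fields auto-revert
    vserver lif               auto-revert
    ------- ----------------- -----------
    Cluster cluster1-01_clus1 false
    ...
    8 entries were displayed.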
  4. On cluster switch cs2, shut down the ports connected to the cluster ports of the nodes.
    cs2(config)# interface eth1/1/1-2,eth1/7-8
    cs2(config-if-range)# shutdown
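    You can also confirm from the switch side that these links are down before checking the LIF failover from ONTAP; one quick check, using the same filter style as the verification later in this procedure:
    cs2(config-if-range)# show interface brief | grep down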
  5. Verify that the cluster LIFs have migrated to the ports hosted on cluster switch cs1. This might take a few seconds: network interface show -role cluster
    cluster1::*> network interface show -role cluster
    Logical Status Network Current Current Is
    Vserver Interface Admin/Oper Address/Mask Node Port Home
    ----------- ----------------- ---------- ------------------ ------------- ------- ----
    Cluster
    cluster1-01_clus1 up/up 169.254.3.4/23 cluster1-01 e0a true
    cluster1-01_clus2 up/up 169.254.3.5/23 cluster1-01 e0a false
    cluster1-02_clus1 up/up 169.254.3.8/23 cluster1-02 e0a true
    cluster1-02_clus2 up/up 169.254.3.9/23 cluster1-02 e0a false
    cluster1-03_clus1 up/up 169.254.1.3/23 cluster1-03 e0a true
    cluster1-03_clus2 up/up 169.254.1.1/23 cluster1-03 e0a false
    cluster1-04_clus1 up/up 169.254.1.6/23 cluster1-04 e0a true
    cluster1-04_clus2 up/up 169.254.1.7/23 cluster1-04 e0a false
    8 entries were displayed.
    cluster1::*>
  6. Verify that the cluster is healthy: cluster show
    cluster1::*> cluster show
    Node Health Eligibility Epsilon
    -------------------- ------- ------------ -------
    cluster1-01 true true false
    cluster1-02 true true false
    cluster1-03 true true true
    cluster1-04 true true false
    4 entries were displayed.
    cluster1::*>
  7. Clean the configuration on switch cs2 and perform a basic setup.
    1. Clean the configuration. This step requires a console connection to the switch.
      cs2# write erase
      Warning: This command will erase the startup-configuration.
      Do you wish to proceed anyway? (y/n) [n] y
      cs2# reload
      This command will reboot the system. (y/n)? [n] y
      cs2#
    2. Perform a basic setup of the switch.
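      The exact prompts depend on the NX-OS release; a representative excerpt of the initial setup dialog after the reload (the responses shown are illustrative):
      Abort Power On Auto Provisioning ... (yes/skip/no)[no]: yes
      Do you want to enforce secure password standard (yes/no) [y]: y
      Enter the password for "admin": <password>
      Confirm the password for "admin": <password>
      Would you like to enter the basic configuration dialog (yes/no): yes
      ...
      At a minimum, configure the mgmt0 IP address and a default route in the management VRF so that the switch can reach the server hosting the RCF in the next step.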
  8. Copy the RCF to the bootflash of switch cs2 using one of the following transfer protocols: FTP, TFTP, SFTP, or SCP. For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series NX-OS Command Reference guides.
    This example shows TFTP being used to copy an RCF to the bootflash on switch cs2:
    cs2# copy tftp: bootflash: vrf management
    Enter source filename: Nexus_9336C_RCF_v1.6-Cluster-HA-Breakout.txt
    Enter hostname for the tftp server: 172.22.201.50
    Trying to connect to tftp server......Connection to Server Established.
    TFTP get operation was successful
    Copy complete, now saving to disk (please wait)...
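    If you use SCP instead, the source URL is given directly on the command line; a sketch using a placeholder user name with the same server address:
    cs2# copy scp://admin@172.22.201.50/Nexus_9336C_RCF_v1.6-Cluster-HA-Breakout.txt bootflash: vrf management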
  9. Apply the RCF previously downloaded to the bootflash.

    For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series NX-OS Command Reference guides.

    This example shows the RCF file Nexus_9336C_RCF_v1.6-Cluster-HA-Breakout.txt being installed on switch cs2:
    cs2# copy Nexus_9336C_RCF_v1.6-Cluster-HA-Breakout.txt running-config echo-commands
  10. Examine the banner output from the show banner motd command. You must read and follow these instructions to ensure the proper configuration and operation of the switch.
    cs2# show banner motd

    ******************************************************************************
    * Lenovo Reference Configuration File (RCF)
    *
    * Switch : Nexus N9K-C9336C-FX2
    * Filename : Nexus_9336C_RCF_v1.6-Cluster-HA-Breakout.txt
    * Date : 10-23-2020
    * Version : v1.6
    *
    * Port Usage:
    * Ports 1- 3: Breakout mode (4x10G) Intra-Cluster Ports, int e1/1/1-4, e1/2/1-4, e1/3/1-4
    * Ports 4- 6: Breakout mode (4x25G) Intra-Cluster/HA Ports, int e1/4/1-4, e1/5/1-4, e1/6/1-4
    * Ports 7-34: 40/100GbE Intra-Cluster/HA Ports, int e1/7-34
    * Ports 35-36: Intra-Cluster ISL Ports, int e1/35-36
    *
    * Dynamic breakout commands:
    * 10G: interface breakout module 1 port <range> map 10g-4x
    * 25G: interface breakout module 1 port <range> map 25g-4x
    *
    * Undo breakout commands and return interfaces to 40/100G configuration in config mode:
    * no interface breakout module 1 port <range> map 10g-4x
    * no interface breakout module 1 port <range> map 25g-4x
    * interface Ethernet <interfaces taken out of breakout mode>
    * inherit port-profile 40-100G
    * priority-flow-control mode auto
    * service-policy input HA
    * exit
    *
    ******************************************************************************
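    For example, if your nodes attach to breakout ports and the ports are not already in breakout mode, applying the banner's syntax to ports 1-3 and 4-6 would look like the following sketch (run in config mode; adjust the ranges to your cabling):
    cs2(config)# interface breakout module 1 port 1-3 map 10g-4x
    cs2(config)# interface breakout module 1 port 4-6 map 25g-4x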
  11. Verify that the correct RCF version is installed: show running-config
    When you check the output to verify you have the correct RCF, make sure that the following information is correct:
    • The RCF banner
    • The node and port settings
    • Customizations
    The output varies according to your site configuration. Check the port settings and refer to the release notes for any changes specific to the RCF that you have installed.
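    Rather than paging through the full configuration, you can filter the output to spot-check the areas listed above; two illustrative checks:
    cs2# show running-config interface Ethernet1/7
    cs2# show running-config | include breakout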
  12. After you verify the RCF versions and switch settings are correct, copy the running-config file to the startup-config file.
    For more information on Cisco commands, see the appropriate guide in the Cisco Nexus 9000 Series NX-OS Command Reference guides.
    cs2# copy running-config startup-config
    [########################################] 100% Copy complete
  13. Reboot switch cs2. You can ignore the “cluster ports down” events reported on the nodes while the switch reboots.
    cs2# reload
    This command will reboot the system. (y/n)? [n] y
  14. Apply the same RCF and save the running configuration for a second time.
    cs2# copy Nexus_9336C_RCF_v1.6-Cluster-HA-Breakout.txt running-config echo-commands
    cs2# copy running-config startup-config
    [########################################] 100% Copy complete
  15. Verify the health of cluster ports on the cluster.
    1. Verify that the cluster ports connected to switch cs2 (e0d on nodes cluster1-01 and cluster1-02, and e0b on nodes cluster1-03 and cluster1-04) are up and healthy across all nodes in the cluster: network port show -role cluster
      cluster1::*> network port show -role cluster

      Node: cluster1-01
      Ignore
      Speed(Mbps) Health Health
      Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a Cluster Cluster up 9000 auto/100000 healthy false
      e0d Cluster Cluster up 9000 auto/100000 healthy false

      Node: cluster1-02
      Ignore
      Speed(Mbps) Health Health
      Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a Cluster Cluster up 9000 auto/100000 healthy false
      e0d Cluster Cluster up 9000 auto/100000 healthy false

      Node: cluster1-03
      Ignore
      Speed(Mbps) Health Health
      Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a Cluster Cluster up 9000 auto/10000 healthy false
      e0b Cluster Cluster up 9000 auto/10000 healthy false

      Node: cluster1-04
      Ignore
      Speed(Mbps) Health Health
      Port IPspace Broadcast Domain Link MTU Admin/Oper Status Status
      --------- ------------ ---------------- ---- ---- ----------- -------- ------
      e0a Cluster Cluster up 9000 auto/10000 healthy false
      e0b Cluster Cluster up 9000 auto/10000 healthy false
      8 entries were displayed.
    2. Verify the switch health from the cluster (this might not show switch cs2, since LIFs are not homed on e0d).
      cluster1::*> network device-discovery show -protocol cdp
      Node/ Local Discovered
      Protocol Port Device (LLDP: ChassisID) Interface Platform
      ----------- ------ ------------------------- ----------------- --------
      cluster1-01/cdp
      e0a cs1 Ethernet1/7 N9K-C9336C
      e0d cs2 Ethernet1/7 N9K-C9336C
      cluster1-02/cdp
      e0a cs1 Ethernet1/8 N9K-C9336C
      e0d cs2 Ethernet1/8 N9K-C9336C
      cluster1-03/cdp
      e0a cs1 Ethernet1/1/1 N9K-C9336C
      e0b cs2 Ethernet1/1/1 N9K-C9336C
      cluster1-04/cdp
      e0a cs1 Ethernet1/1/2 N9K-C9336C
      e0b cs2 Ethernet1/1/2 N9K-C9336C

      cluster1::*> system cluster-switch show -is-monitoring-enabled-operational true
      Switch Type Address Model
      --------------------------- ------------------ ---------------- -----
      cs1 cluster-network 10.233.205.90 N9K-C9336C
      Serial Number: FOCXXXXXXGD
      Is Monitored: true
      Reason: None
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
      9.3(5)
      Version Source: CDP

      cs2 cluster-network 10.233.205.91 N9K-C9336C
      Serial Number: FOCXXXXXXGS
      Is Monitored: true
      Reason: None
      Software Version: Cisco Nexus Operating System (NX-OS) Software, Version
      9.3(5)
      Version Source: CDP

      2 entries were displayed.
    Note
    You might observe the following output on the cs1 switch console depending on the RCF version previously loaded on the switch:
    2020 Nov 17 16:07:18 cs1 %$ VDC-1 %$ %STP-2-UNBLOCK_CONSIST_PORT: Unblocking port port-channel1 on VLAN0092. Port consistency restored.
    2020 Nov 17 16:07:23 cs1 %$ VDC-1 %$ %STP-2-BLOCK_PVID_PEER: Blocking port-channel1 on VLAN0001. Inconsistent peer vlan.
    2020 Nov 17 16:07:23 cs1 %$ VDC-1 %$ %STP-2-BLOCK_PVID_LOCAL: Blocking port-channel1 on VLAN0092. Inconsistent local vlan.
  16. On cluster switch cs1, shut down the ports connected to the cluster ports of the nodes.
    The following example uses the interfaces identified in the step 1 output:
    cs1(config)# interface eth1/1/1-2,eth1/7-8
    cs1(config-if-range)# shutdown
  17. Verify that the cluster LIFs have migrated to the ports hosted on switch cs2. This might take a few seconds: network interface show -role cluster
    cluster1::*> network interface show -role cluster
    Logical Status Network Current Current Is
    Vserver Interface Admin/Oper Address/Mask Node Port Home
    ----------- ------------------ ---------- ------------------ ------------------- ------- ----
    Cluster
    cluster1-01_clus1 up/up 169.254.3.4/23 cluster1-01 e0d false
    cluster1-01_clus2 up/up 169.254.3.5/23 cluster1-01 e0d true
    cluster1-02_clus1 up/up 169.254.3.8/23 cluster1-02 e0d false
    cluster1-02_clus2 up/up 169.254.3.9/23 cluster1-02 e0d true
    cluster1-03_clus1 up/up 169.254.1.3/23 cluster1-03 e0b false
    cluster1-03_clus2 up/up 169.254.1.1/23 cluster1-03 e0b true
    cluster1-04_clus1 up/up 169.254.1.6/23 cluster1-04 e0b false
    cluster1-04_clus2 up/up 169.254.1.7/23 cluster1-04 e0b true
    8 entries were displayed.
    cluster1::*>
  18. Verify that the cluster is healthy: cluster show
    cluster1::*> cluster show
    Node Health Eligibility Epsilon
    -------------------- -------- ------------- -------
    cluster1-01 true true false
    cluster1-02 true true false
    cluster1-03 true true true
    cluster1-04 true true false
    4 entries were displayed.
    cluster1::*>
  19. Repeat Steps 7 to 14 on switch cs1.
  20. Enable auto-revert on the cluster LIFs.
    cluster1::*> network interface modify -vserver Cluster -lif * -auto-revert true
  21. Reboot switch cs1. This triggers the cluster LIFs to revert to their home ports. You can ignore the “cluster ports down” events reported on the nodes while the switch reboots.
    cs1# reload
    This command will reboot the system. (y/n)? [n] y
  22. Verify that the switch ports connected to the cluster ports are up.
    cs1# show interface brief | grep up
    .
    .
    Eth1/1/1 1 eth access up none 10G(D) --
    Eth1/1/2 1 eth access up none 10G(D) --
    Eth1/7 1 eth trunk up none 100G(D) --
    Eth1/8 1 eth trunk up none 100G(D) --
    .
    .
  23. Verify that the ISL between cs1 and cs2 is functional: show port-channel summary
    cs1# show port-channel summary
    Flags: D - Down P - Up in port-channel (members)
    I - Individual H - Hot-standby (LACP only)
    s - Suspended r - Module-removed
    b - BFD Session Wait
    S - Switched R - Routed
    U - Up (port-channel)
    p - Up in delay-lacp mode (member)
    M - Not in use. Min-links not met
    --------------------------------------------------------------------------------
    Group Port- Type Protocol Member Ports Channel
    --------------------------------------------------------------------------------
    1 Po1(SU) Eth LACP Eth1/35(P) Eth1/36(P)
    cs1#
  24. Verify that the cluster LIFs have reverted to their home port: network interface show -role cluster
    cluster1::*> network interface show -role cluster
    Logical Status Network Current Current Is
    Vserver Interface Admin/Oper Address/Mask Node Port Home
    ----------- ------------------ ---------- ------------------ ------------------- ------- ----
    Cluster
    cluster1-01_clus1 up/up 169.254.3.4/23 cluster1-01 e0d true
    cluster1-01_clus2 up/up 169.254.3.5/23 cluster1-01 e0d true
    cluster1-02_clus1 up/up 169.254.3.8/23 cluster1-02 e0d true
    cluster1-02_clus2 up/up 169.254.3.9/23 cluster1-02 e0d true
    cluster1-03_clus1 up/up 169.254.1.3/23 cluster1-03 e0b true
    cluster1-03_clus2 up/up 169.254.1.1/23 cluster1-03 e0b true
    cluster1-04_clus1 up/up 169.254.1.6/23 cluster1-04 e0b true
    cluster1-04_clus2 up/up 169.254.1.7/23 cluster1-04 e0b true
    8 entries were displayed.
    cluster1::*>
  25. Verify that the cluster is healthy: cluster show
    cluster1::*> cluster show
    Node Health Eligibility Epsilon
    -------------------- ------- ------------- -------
    cluster1-01 true true false
    cluster1-02 true true false
    cluster1-03 true true true
    cluster1-04 true true false
    4 entries were displayed.
    cluster1::*>
  26. Ping the remote cluster interfaces to verify connectivity: cluster ping-cluster -node local
    cluster1::*> cluster ping-cluster -node local
    Host is cluster1-03
    Getting addresses from network interface table...
    Cluster cluster1-03_clus1 169.254.1.3 cluster1-03 e0a
    Cluster cluster1-03_clus2 169.254.1.1 cluster1-03 e0b
    Cluster cluster1-04_clus1 169.254.1.6 cluster1-04 e0a
    Cluster cluster1-04_clus2 169.254.1.7 cluster1-04 e0b
    Cluster cluster1-01_clus1 169.254.3.4 cluster1-01 e0a
    Cluster cluster1-01_clus2 169.254.3.5 cluster1-01 e0d
    Cluster cluster1-02_clus1 169.254.3.8 cluster1-02 e0a
    Cluster cluster1-02_clus2 169.254.3.9 cluster1-02 e0d
    Local = 169.254.1.3 169.254.1.1
    Remote = 169.254.1.6 169.254.1.7 169.254.3.4 169.254.3.5 169.254.3.8 169.254.3.9
    Cluster Vserver Id = 4294967293
    Ping status:
    ............
    Basic connectivity succeeds on 12 path(s)
    Basic connectivity fails on 0 path(s)
    ................................................
    Detected 9000 byte MTU on 12 path(s):
    Local 169.254.1.3 to Remote 169.254.1.6
    Local 169.254.1.3 to Remote 169.254.1.7
    Local 169.254.1.3 to Remote 169.254.3.4
    Local 169.254.1.3 to Remote 169.254.3.5
    Local 169.254.1.3 to Remote 169.254.3.8
    Local 169.254.1.3 to Remote 169.254.3.9
    Local 169.254.1.1 to Remote 169.254.1.6
    Local 169.254.1.1 to Remote 169.254.1.7
    Local 169.254.1.1 to Remote 169.254.3.4
    Local 169.254.1.1 to Remote 169.254.3.5
    Local 169.254.1.1 to Remote 169.254.3.8
    Local 169.254.1.1 to Remote 169.254.3.9
    Larger than PMTU communication succeeds on 12 path(s)
    RPC status:
    6 paths up, 0 paths down (tcp check)
    6 paths up, 0 paths down (udp check)