
Cabling the network

Review the following information to understand how to cable the ThinkAgile VX appliances to the network.


Graphic showing the network cabling

Network type: In-band management network
  Carries:
  • Communication with ESXi hosts
  • Communication between the vCenter Server appliance and ESXi hosts
  • vSAN storage traffic
  • vMotion (virtual machine migration) traffic
  • iSCSI storage traffic (if present)
  Connections:
  • Required: Port 0 on NIC → 10 Gbps Data Switch #1
  • Required: Port 1 on NIC → 10 Gbps Data Switch #2
  • Optional: Port 2 on NIC → 10 Gbps Data Switch #1
  • Optional: Port 3 on NIC → 10 Gbps Data Switch #2

Network type: Out-of-band management network
  Carries:
  • Initial server discovery on the network via the SLP protocol
  • Server power control
  • LED management
  • Inventory
  • Events and alerts
  • BMC logs
  • Firmware updates
  • OS provisioning via remote media mount
  Connections:
  • Required: BMC network connector → 1 Gbps Management Switch

Network type: Data or user network
  Connections:
  • Required: 10 Gbps Data Switch #1 and #2 → External network
Note
  • On the out-of-band network

    • The out-of-band management network does not need to be on a dedicated physical network. It can be included as part of a larger management network.

    • The ThinkAgile VX Deployer and Lenovo XClarity Integrator (LXCI) must be able to access this network to communicate with the XCC modules.

    • During the initial cluster deployment and subsequent operations, the XCC interfaces must be accessible over this network to the deployer utility as well as to management software such as XClarity Integrator (LXCI) and XClarity Administrator (LXCA).
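    As a quick check before deployment, you can verify that an XCC module is reachable from the machine running the deployer or LXCI. A minimal sketch, assuming a hypothetical XCC address of 192.168.70.125 (substitute your own):

    ```shell
    # Hypothetical XCC (BMC) address on the out-of-band management network.
    XCC_IP=192.168.70.125

    # Basic reachability check.
    ping -c 3 "$XCC_IP"

    # The XCC exposes a Redfish REST interface over HTTPS; any HTTP status
    # in the response confirms the management interface is up.
    # -k skips certificate validation for the XCC's self-signed certificate.
    curl -k -s -o /dev/null -w "%{http_code}\n" "https://$XCC_IP/redfish/v1/"
    ```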

  • On network redundancy

    • Active-standby redundancy mode:

      When only 2 ports (ports 0 and 1) are connected to the two top-of-rack switches, you can configure the redundancy mode as active-standby mode. If the primary connection or the primary switch fails, traffic fails over to the standby link.

    • Active-active redundancy mode:

      When 4 ports (ports 0 to 3) are connected to the two top-of-rack switches, you can configure the redundancy mode as active-active mode. If one connection fails, the other connections remain active, and loads are balanced across the ports.

    • Optionally, some switches might also support the virtual link aggregation (vLAG) protocol or an equivalent, which connects the two top-of-rack switches via dedicated links and makes them appear as a single logical switch to the downstream hosts. In this case, the two connections from the host to the switches can be configured as active-active links, giving you load balancing across the ports as well as a 20 Gbps aggregate bandwidth.
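    On the distributed vSwitches created by the deployer, the teaming policy is managed through vCenter. For reference, the equivalent uplink failover policies on a standard vSwitch can be set from the ESXi shell with esxcli. A hedged sketch, assuming uplinks vmnic0 and vmnic1 on vSwitch0 (names vary by host):

    ```shell
    # Active-active: both uplinks carry traffic and loads are balanced.
    esxcli network vswitch standard policy failover set \
      --vswitch-name vSwitch0 \
      --active-uplinks vmnic0,vmnic1

    # Active-standby: vmnic0 carries traffic; vmnic1 takes over if the
    # primary connection or switch fails.
    esxcli network vswitch standard policy failover set \
      --vswitch-name vSwitch0 \
      --active-uplinks vmnic0 \
      --standby-uplinks vmnic1
    ```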

  • On distributed vSwitches

    The distributed vSwitches essentially form a logical switch that spans all hosts in the cluster. The physical ports on each host become logical uplink ports on the distributed vSwitch. Unlike a standard vSwitch, a distributed vSwitch provides advanced configuration options, such as traffic policies, link aggregation (LACP), and traffic shaping.

    The number of distributed vSwitches created depends on the number of physical ports on each host that are connected to the top-of-rack switches:

    • If only two ports on each host are connected, a single distributed vSwitch will be created to carry all types of traffic, including ESXi management, vMotion, internal VM, XCC management, vSAN storage traffic, and external network traffic.

    • If four ports are connected, two distributed vSwitches will be created, and the vSAN storage traffic will be carried on the second one.
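    After deployment, you can confirm from an ESXi host which distributed vSwitches were created and which physical uplinks each one uses. A small sketch (the switch and vmnic names in your environment will differ):

    ```shell
    # List the distributed vSwitches this host participates in, including
    # their uplink vmnics: expect one switch when two ports are cabled,
    # or two switches (the second carrying vSAN traffic) when four are.
    esxcli network vswitch dvs vmware list

    # Confirm the physical NICs and their link state and speed.
    esxcli network nic list
    ```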


    Graphic showing the port configuration on the data switches