Hardware scaling

ThinkAgile CP supports non-disruptive hardware upgrades and non-disruptive scaling of both compute and storage.

The minimum system is a compute block with two compute nodes, a storage block with two storage controllers and eight SSDs, and either one (non-redundant) or two interconnect switches.¹ Additional compute nodes may be added one at a time without disruption to existing workloads. ThinkAgile CP automatically detects new nodes. Once a new node is connected to the interconnect switches and brought online, it is added to a migration zone. New workloads may then be started on the node, or existing workloads, whether running on ThinkAgile CP or on non-ThinkAgile CP platforms, may be migrated to it non-disruptively. When a compute block is full (either four or eight nodes, depending on the hardware used), a new compute block is racked and added to hold the additional node, and the steps described above are then followed to bring workloads onto it.
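The following minimal sketch illustrates the compute-scaling rule above. Only the block capacities of four or eight nodes come from this section; the function and variable names are hypothetical, not part of the ThinkAgile CP software.

```python
# Illustrative sketch only: models the "is every compute block full?" check
# described above. Block capacities of 4 or 8 nodes come from the text;
# the names below are hypothetical.

def needs_new_compute_block(current_nodes: int, nodes_per_block: int = 4) -> bool:
    """Return True if all existing compute blocks are full, so a new block
    must be racked before another compute node can be added."""
    if nodes_per_block not in (4, 8):
        raise ValueError("Compute blocks hold either four or eight nodes")
    return current_nodes % nodes_per_block == 0

# A full 4-node block needs a new block before node 5 can be added.
print(needs_new_compute_block(4, nodes_per_block=4))   # True
print(needs_new_compute_block(3, nodes_per_block=4))   # False
```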

SSDs may be added to a storage block in groups of eight. When a second or subsequent group of eight SSDs is added to a storage block, ThinkAgile CP automatically restripes data across the additional drives, which slightly increases the background load until the restripe completes. Other than this additional background activity,² the addition of new SSDs is non-disruptive to existing workloads.

When a storage block is full (24 SSDs, or three groups of eight³), a new storage block with eight SSDs (or another allowed multiple of eight) may be racked, added, and made part of a storage pool. When storage for new workloads is assigned to this storage pool, space in the new storage block may be used.⁴
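A minimal sketch of the storage capacity arithmetic from the two paragraphs above: SSDs arrive in groups of eight, and a storage block holds up to 24 SSDs (32 on the Dell-based hardware noted in footnote 3). The function name and structure are illustrative assumptions.

```python
import math

GROUP_SIZE = 8  # SSDs are added in groups of eight (per the text)

def storage_blocks_required(total_ssds: int, max_ssds_per_block: int = 24) -> int:
    """Number of storage blocks needed to house a given SSD count."""
    if total_ssds % GROUP_SIZE != 0:
        raise ValueError("SSDs are added in groups of eight")
    if max_ssds_per_block % GROUP_SIZE != 0:
        raise ValueError("Block capacity is a multiple of eight")
    return math.ceil(total_ssds / max_ssds_per_block)

print(storage_blocks_required(24))       # 1 -> first block is exactly full
print(storage_blocks_required(32))       # 2 -> second block holds one more group of eight
print(storage_blocks_required(32, 32))   # 1 -> a 32-SSD block holds all four groups
```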

Compute nodes and storage blocks are added until all the downlink ports (44x10 GbE) on the interconnects are used. A pair of interconnect switches, together with all the compute and storage that can be attached to it, is called a stack. Once a stack is full, another pair of interconnect switches must be added to start a new stack before more compute nodes, storage blocks, or SSDs can be added. Stack-to-stack communication uses one or more of the 40 GbE uplink ports on the interconnects.
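The sketch below shows the kind of stack-capacity check this implies. The 44 downlink ports per interconnect come from the text; the per-device port counts are placeholder assumptions for illustration only and are not published cabling rules, so consult the ThinkAgile CP cabling documentation for actual values.

```python
DOWNLINKS_PER_INTERCONNECT = 44  # 10 GbE downlink ports per interconnect (per the text)

def stack_has_room(compute_nodes: int, storage_controllers: int,
                   ports_per_compute_node: int = 1,
                   ports_per_storage_controller: int = 1) -> bool:
    """Return True if the assumed downlink usage still fits on one interconnect
    of the stack (each device is assumed to cable symmetrically to both
    interconnects, so checking one switch is enough in this sketch)."""
    used = (compute_nodes * ports_per_compute_node
            + storage_controllers * ports_per_storage_controller)
    return used <= DOWNLINKS_PER_INTERCONNECT

# With the assumed one port per device, 44 attached devices fill the stack.
print(stack_has_room(40, 4))   # True  (exactly full)
print(stack_has_room(41, 4))   # False -> a new stack is needed
```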

¹ Each interconnect has 48x10 GbE ports and 4x40 GbE ports.
² Performed at a lower priority than user reads and writes.
³ 32 SSDs, or four groups of eight SSDs, with Dell hardware.
⁴ In the near future, we will also allow migration of data from existing workloads to enable better load balancing.