Removing the drive from the RAID array

This section covers the steps required to remove a storage drive from the RAID array before the drive is physically removed from the storage enclosure.

In the example, we assume that a drive has failed.

To check the status of the RAID array, use the following command:

# cat /proc/mdstat

Following is an example output:

Personalities : [raid1] [raid6] [raid5] [raid4] [raid0]
md124 : active raid0 md125[0]
4657247232 blocks super 1.2 512k chunks

md125 : active raid5 dm-7[7] dm-6[6] dm-5[5] dm-4[4] dm-3[3](F) dm-2[2] dm-1[1] dm-0[0]
4657379328 blocks super 1.2 level 5, 16k chunk, algorithm 2 [7/6] [UUU_UUU]
[>....................] recovery = 1.5% (12246008/776229888) finish=62.7min speed=202807K/sec
bitmap: 0/6 pages [0KB], 65536KB chunk

md126 : active raid1 sda[1] sdb[0]
118778880 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sda[1](S) sdb[0](S)
6306 blocks super external:imsm

unused devices: <none>
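
The md125 entry shows a degraded RAID 5 array that is rebuilding. To monitor the rebuild progress continuously instead of re-running the command, you can typically use the watch utility; the 5-second refresh interval below is only an example:

# watch -n 5 cat /proc/mdstat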

To view detailed information about the status of the RAID array, use the following command:

# mdadm --detail /dev/md/md5

Following is an example output:

/dev/md/md5:
Version : 1.2
Creation Time : Mon Aug 20 11:03:31 2018
Raid Level : raid5
Array Size : 4657379328 (4441.62 GiB 4769.16 GB)
Used Dev Size : 776229888 (740.27 GiB 794.86 GB)
Raid Devices : 7
Total Devices : 8
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Mon Aug 20 11:50:30 2018
State : clean, degraded, recovering
Active Devices : 6
Working Devices : 7
Failed Devices : 1
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 16K

Consistency Policy : bitmap

Rebuild Status : 20% complete

Name : any:md5
UUID : 5f7b873c:16d6d418:7b5f6fde:247566a7
Events : 162

Number Major Minor RaidDevice State
0 253 0 0 active sync /dev/dm-0
1 253 1 1 active sync /dev/dm-1
2 253 2 2 active sync /dev/dm-2
7 253 7 3 spare rebuilding /dev/dm-7
4 253 4 4 active sync /dev/dm-4
5 253 5 5 active sync /dev/dm-5
6 253 6 6 active sync /dev/dm-6

3 253 3 - faulty /dev/dm-3

In this example, dm-3 is marked as failed (dm-3[3](F)).

To remove a drive from the RAID array, follow these steps:

  1. Determine the SED mapping for dm-3:

    # ls -l /dev/mapper/sed*

    Following is an example output:

    lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed0 -> ../dm-0
    lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed1 -> ../dm-1
    lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed2 -> ../dm-2
    lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed3 -> ../dm-3
    lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed4 -> ../dm-4
    lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed5 -> ../dm-5
    lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed6 -> ../dm-6
    lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed7 -> ../dm-7

    In this example, dm-3 is mapped to sed3.
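
    As a shortcut, you can also resolve a single mapping directly with readlink; based on the listing above, the sed3 link resolves to /dev/dm-3:

    # readlink -f /dev/mapper/sed3
    /dev/dm-3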

  2. Determine the NVMe namespace that the sed3 SED device maps to:

    # cryptsetup status /dev/mapper/sed3

    Following is an example output:

    /dev/mapper/sed3 is active and is in use.
    type: LUKS1
    cipher: aes-xts-plain64
    keysize: 512 bits
    device: /dev/nvme3n1
    offset: 4096 sectors
    size: 1552723968 sectors
    mode: read/write

    The output displays the following information:

    • sed3 is an encrypted device mapped to /dev/nvme3n1

    • /dev/nvme3n1 is a namespace associated with /dev/nvme3
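
    If the nvme-cli package is installed on the node, you can optionally cross-check the namespace device names with the following command, which lists the NVMe namespaces that are visible to the host:

    # nvme list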

  3. Check the current RAID configuration:

    # /usr/share/tacp/lenovo/tacp-nvme-control.py -display

    Following is an example output:

    discovering NVMe disks. this operation will take a few seconds.
    Chassis UUID: 500e0eca08057b00
    Number of canisters: 2
    This is the bottom (primary) canister and has the controller id 33
    The top (secondary) canister has the controller id 34
    This chassis has the following controller ids: 33 , 34

    NVMe devices: 8
    ----------------

    NVMe control device: /dev/nvme0
    Slot: 8
    Manufacturer: SAMSUNG MZWLL800HEHP-00003
    Serial number: S3HCNX0JC02194
    Firmware: GPNA9B3Q
    Capacity: 800166076416 bytes 800.166076416 GB
    Drive Temp: 42 C
    Drive Health: Ok
    Total namespaces: 1
    Namespace id(s): 1
    Namespace 1 size: 794996768768 bytes, 794.996768768 GB
    Namespace 1 is attached to controller id(s): 33

    NVMe control device: /dev/nvme1
    Slot: 7
    Manufacturer: SAMSUNG MZWLL800HEHP-00003
    Serial number: S3HCNX0JC02193
    Firmware: GPNA9B3Q
    Capacity: 800166076416 bytes 800.166076416 GB
    Drive Temp: 41 C
    Drive Health: Ok
    Total namespaces: 1
    Namespace id(s): 1
    Namespace 1 size: 794996768768 bytes, 794.996768768 GB
    Namespace 1 is attached to controller id(s): 33

    NVMe control device: /dev/nvme2
    Slot: 6
    Manufacturer: SAMSUNG MZWLL800HEHP-00003
    Serial number: S3HCNX0JC02233
    Firmware: GPNA9B3Q
    Capacity: 800166076416 bytes 800.166076416 GB
    Drive Temp: 41 C
    Drive Health: Ok
    Total namespaces: 1
    Namespace id(s): 1
    Namespace 1 size: 794996768768 bytes, 794.996768768 GB
    Namespace 1 is attached to controller id(s): 33

    NVMe control device: /dev/nvme3
    Slot: 5
    Manufacturer: SAMSUNG MZWLL800HEHP-00003
    Serial number: S3HCNX0JC02191
    Firmware: GPNA9B3Q
    Capacity: 800166076416 bytes 800.166076416 GB
    Drive Temp: 40 C
    Drive Health: Ok
    Total namespaces: 1
    Namespace id(s): 1
    Namespace 1 size: 794996768768 bytes, 794.996768768 GB
    Namespace 1 is attached to controller id(s): 33

    NVMe control device: /dev/nvme4
    Slot: 4
    Manufacturer: SAMSUNG MZWLL800HEHP-00003
    Serial number: S3HCNX0JC02189
    Firmware: GPNA9B3Q
    Capacity: 800166076416 bytes 800.166076416 GB
    Drive Temp: 40 C
    Drive Health: Ok
    Total namespaces: 1
    Namespace id(s): 1
    Namespace 1 size: 794996768768 bytes, 794.996768768 GB
    Namespace 1 is attached to controller id(s): 33

    NVMe control device: /dev/nvme5
    Slot: 3
    Manufacturer: SAMSUNG MZWLL800HEHP-00003
    Serial number: S3HCNX0JC02198
    Firmware: GPNA9B3Q
    Capacity: 800166076416 bytes 800.166076416 GB
    Drive Temp: 40 C
    Drive Health: Ok
    Total namespaces: 1
    Namespace id(s): 1
    Namespace 1 size: 794996768768 bytes, 794.996768768 GB
    Namespace 1 is attached to controller id(s): 33

    NVMe control device: /dev/nvme6
    Slot: 1
    Manufacturer: SAMSUNG MZWLL800HEHP-00003
    Serial number: S3HCNX0JC02188
    Firmware: GPNA9B3Q
    Capacity: 800166076416 bytes 800.166076416 GB
    Drive Temp: 39 C
    Drive Health: Ok
    Total namespaces: 1
    Namespace id(s): 1
    Namespace 1 size: 794996768768 bytes, 794.996768768 GB
    Namespace 1 is attached to controller id(s): 33

    NVMe control device: /dev/nvme7
    Slot: 2
    Manufacturer: SAMSUNG MZWLL800HEHP-00003
    Serial number: S3HCNX0JC02195
    Firmware: GPNA9B3Q
    Capacity: 800166076416 bytes 800.166076416 GB
    Drive Temp: 40 C
    Drive Health: Ok
    Total namespaces: 1
    Namespace id(s): 1
    Namespace 1 size: 794996768768 bytes, 794.996768768 GB
    Namespace 1 is attached to controller id(s): 33

    The output shows that drive /dev/nvme3 is in slot 5.
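
    Because the full listing is long, you can optionally pipe the output through grep to show only the entry for the drive of interest; the 11 lines of trailing context used below are just an example:

    # /usr/share/tacp/lenovo/tacp-nvme-control.py -display | grep -A 11 "/dev/nvme3$"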

  4. Remove the sed3 drive from the RAID array:

    # mdadm --manage /dev/md/md5 --remove /dev/mapper/sed3

    Following is an example output:

    mdadm: hot removed /dev/mapper/sed3 from /dev/md/md5

    The sed3 drive is now removed from the RAID array.
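
    Note: mdadm hot-removes only member devices that are marked as failed or as spares. In this example the kernel had already flagged the drive as faulty; if it had not, you would typically mark it as failed first, for example:

    # mdadm --manage /dev/md/md5 --fail /dev/mapper/sed3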

    Re-check the status of the RAID array to verify that the sed3 drive has been removed:

    # cat /proc/mdstat

    Following is an example output:

    Personalities : [raid1] [raid6] [raid5] [raid4] [raid0]
    md124 : active raid0 md125[0]
    4657247232 blocks super 1.2 512k chunks

    md125 : active raid5 dm-7[7] dm-6[6] dm-5[5] dm-4[4] dm-2[2] dm-1[1] dm-0[0]
    4657379328 blocks super 1.2 level 5, 16k chunk, algorithm 2 [7/6] [UUU_UUU]
    [===========>.........] recovery = 58.5% (454378348/776229888) finish=26.4min speed=202487K/sec
    bitmap: 0/6 pages [0KB], 65536KB chunk

    md126 : active raid1 sda[1] sdb[0]
    118778880 blocks super external:/md127/0 [2/2] [UU]

    md127 : inactive sda[1](S) sdb[0](S)
    6306 blocks super external:imsm

    unused devices: <none>

    To view detailed information about the status of the RAID array, use the following command:

    # mdadm --detail /dev/md/md5

    Following is an example output:

    /dev/md/md5:
    Version : 1.2
    Creation Time : Mon Aug 20 11:03:31 2018
    Raid Level : raid5
    Array Size : 4657379328 (4441.62 GiB 4769.16 GB)
    Used Dev Size : 776229888 (740.27 GiB 794.86 GB)
    Raid Devices : 7
    Total Devices : 7
    Persistence : Superblock is persistent

    Intent Bitmap : Internal

    Update Time : Mon Aug 20 12:18:05 2018
    State : clean, degraded, recovering
    Active Devices : 6
    Working Devices : 7
    Failed Devices : 0
    Spare Devices : 1

    Layout : left-symmetric
    Chunk Size : 16K

    Consistency Policy : bitmap

    Rebuild Status : 62% complete

    Name : any:md5
    UUID : 5f7b873c:16d6d418:7b5f6fde:247566a7
    Events : 480

    Number Major Minor RaidDevice State
    0 253 0 0 active sync /dev/dm-0
    1 253 1 1 active sync /dev/dm-1
    2 253 2 2 active sync /dev/dm-2
    7 253 7 3 spare rebuilding /dev/dm-7
    4 253 4 4 active sync /dev/dm-4
    5 253 5 5 active sync /dev/dm-5
    6 253 6 6 active sync /dev/dm-6

  5. Close the sed3 SED mapping:

    # cryptsetup close sed3
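
    To confirm that the mapping has been closed, you can re-run cryptsetup status; for a device that is no longer mapped, it typically reports that the device is inactive:

    # cryptsetup status sed3
    /dev/mapper/sed3 is inactive.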

The drive is now removed from the RAID array.
The drive can now be physically removed from storage block slot 5. Follow the steps in Removing a hot-swap drive.