
Example of a clean drive configuration

This section shows example command output from a system with a clean (healthy) storage drive configuration.

To check the storage drive configuration, use the following command:

# lsblk

The following is an example of the output:

NAME                                                              MAJ:MIN RM SIZE   RO TYPE  MOUNTPOINT
sda 8:0 0 119.2G 0 disk
├─sda1 8:1 0 200M 0 part
├─sda2 8:2 0 1G 0 part
├─sda3 8:3 0 2G 0 part
├─sda4 8:4 0 110.1G 0 part
└─md126 9:126 0 113.3G 0 raid1
  ├─md126p1 259:0 0 200M 0 md /boot/efi
  ├─md126p2 259:1 0 1G 0 md /boot
  ├─md126p3 259:2 0 2G 0 md [SWAP]
  └─md126p4 259:3 0 110.1G 0 md /

sdb 8:16 0 119.2G 0 disk
├─sdb1 8:17 0 200M 0 part
├─sdb2 8:18 0 1G 0 part
├─sdb3 8:19 0 2G 0 part
├─sdb4 8:20 0 110.1G 0 part
└─md126 9:126 0 113.3G 0 raid1
  ├─md126p1 259:0 0 200M 0 md /boot/efi
  ├─md126p2 259:1 0 1G 0 md /boot
  ├─md126p3 259:2 0 2G 0 md [SWAP]
  └─md126p4 259:3 0 110.1G 0 md /

sr0 11:0 1 1024M 0 rom
nvme0n1 259:4 0 740.4G 0 disk
└─sed0 253:0 0 740.4G 0 crypt
  └─md125 9:125 0 4.3T 0 raid5
    └─md124 9:124 0 4.3T 0 raid0
      └─vdo_data 253:9 0 4.3T 0 dm
        └─vdo 253:10 0 43.4T 0 vdo
          ├─CBS_POOL_data 253:8 0 43.3T 0 dm
          │ └─CBS_POOL 253:12 0 43.3T 0 dm
          │   ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
          │   └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm
          └─CBS_POOL_meta 253:11 0 16G 0 dm
            └─CBS_POOL 253:12 0 43.3T 0 dm
              ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
              └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm

nvme1n1 259:10 0 740.4G 0 disk
└─sed1 253:1 0 740.4G 0 crypt
  └─md125 9:125 0 4.3T 0 raid5
    └─md124 9:124 0 4.3T 0 raid0
      └─vdo_data 253:9 0 4.3T 0 dm
        └─vdo 253:10 0 43.4T 0 vdo
          ├─CBS_POOL_data 253:8 0 43.3T 0 dm
          │ └─CBS_POOL 253:12 0 43.3T 0 dm
          │   ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
          │   └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm
          └─CBS_POOL_meta 253:11 0 16G 0 dm
            └─CBS_POOL 253:12 0 43.3T 0 dm
              ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
              └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm

nvme2n1 259:9 0 740.4G 0 disk
└─sed2 253:2 0 740.4G 0 crypt
  └─md125 9:125 0 4.3T 0 raid5
    └─md124 9:124 0 4.3T 0 raid0
      └─vdo_data 253:9 0 4.3T 0 dm
        └─vdo 253:10 0 43.4T 0 vdo
          ├─CBS_POOL_data 253:8 0 43.3T 0 dm
          │ └─CBS_POOL 253:12 0 43.3T 0 dm
          │   ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
          │   └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm
          └─CBS_POOL_meta 253:11 0 16G 0 dm
            └─CBS_POOL 253:12 0 43.3T 0 dm
              ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
              └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm

nvme3n1 259:6 0 740.4G 0 disk
└─sed3 253:3 0 740.4G 0 crypt
  └─md125 9:125 0 4.3T 0 raid5
    └─md124 9:124 0 4.3T 0 raid0
      └─vdo_data 253:9 0 4.3T 0 dm
        └─vdo 253:10 0 43.4T 0 vdo
          ├─CBS_POOL_data 253:8 0 43.3T 0 dm
          │ └─CBS_POOL 253:12 0 43.3T 0 dm
          │   ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
          │   └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm
          └─CBS_POOL_meta 253:11 0 16G 0 dm
            └─CBS_POOL 253:12 0 43.3T 0 dm
              ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
              └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm

nvme4n1 259:8 0 740.4G 0 disk
└─sed4 253:4 0 740.4G 0 crypt
  └─md125 9:125 0 4.3T 0 raid5
    └─md124 9:124 0 4.3T 0 raid0
      └─vdo_data 253:9 0 4.3T 0 dm
        └─vdo 253:10 0 43.4T 0 vdo
          ├─CBS_POOL_data 253:8 0 43.3T 0 dm
          │ └─CBS_POOL 253:12 0 43.3T 0 dm
          │   ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
          │   └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm
          └─CBS_POOL_meta 253:11 0 16G 0 dm
            └─CBS_POOL 253:12 0 43.3T 0 dm
              ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
              └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm

nvme5n1 259:5 0 740.4G 0 disk
└─sed5 253:5 0 740.4G 0 crypt
  └─md125 9:125 0 4.3T 0 raid5
    └─md124 9:124 0 4.3T 0 raid0
      └─vdo_data 253:9 0 4.3T 0 dm
        └─vdo 253:10 0 43.4T 0 vdo
          ├─CBS_POOL_data 253:8 0 43.3T 0 dm
          │ └─CBS_POOL 253:12 0 43.3T 0 dm
          │   ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
          │   └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm
          └─CBS_POOL_meta 253:11 0 16G 0 dm
            └─CBS_POOL 253:12 0 43.3T 0 dm
              ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
              └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm

nvme6n1 259:11 0 740.4G 0 disk
└─sed6 253:6 0 740.4G 0 crypt
  └─md125 9:125 0 4.3T 0 raid5
    └─md124 9:124 0 4.3T 0 raid0
      └─vdo_data 253:9 0 4.3T 0 dm
        └─vdo 253:10 0 43.4T 0 vdo
          ├─CBS_POOL_data 253:8 0 43.3T 0 dm
          │ └─CBS_POOL 253:12 0 43.3T 0 dm
          │   ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
          │   └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm
          └─CBS_POOL_meta 253:11 0 16G 0 dm
            └─CBS_POOL 253:12 0 43.3T 0 dm
              ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
              └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm

nvme7n1 259:7 0 740.4G 0 disk
└─sed7 253:7 0 740.4G 0 crypt
  └─md125 9:125 0 4.3T 0 raid5
    └─md124 9:124 0 4.3T 0 raid0
      └─vdo_data 253:9 0 4.3T 0 dm
        └─vdo 253:10 0 43.4T 0 vdo
          ├─CBS_POOL_data 253:8 0 43.3T 0 dm
          │ └─CBS_POOL 253:12 0 43.3T 0 dm
          │   ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
          │   └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm
          └─CBS_POOL_meta 253:11 0 16G 0 dm
            └─CBS_POOL 253:12 0 43.3T 0 dm
              ├─SPECIAL_METADATA_STORE_4bd8d3ce75b1465082619c1f3fc15fb2 253:13 0 1G 0 dm
              └─SPECIAL_TEMPLATE_STORE_cd6d34c281054ef690a89030bef839ec 253:14 0 43.3T 0 dm
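If you want to limit the output to a single device or to specific columns, lsblk accepts a device argument and a column list. The options below are standard lsblk options and are shown only as an illustration:

# lsblk /dev/nvme0n1
# lsblk -o NAME,MAJ:MIN,SIZE,TYPE,MOUNTPOINT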

To check the status of the RAID arrays, use the following command:

# cat /proc/mdstat

The following is an example of the output:

Personalities : [raid1] [raid6] [raid5] [raid4] [raid0]
md124 : active raid0 md125[0]
      4657247232 blocks super 1.2 512k chunks

md125 : active raid5 dm-7[7](S) dm-6[6] dm-5[5] dm-4[4] dm-3[3] dm-2[2] dm-1[1] dm-0[0]
      4657379328 blocks super 1.2 level 5, 16k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/6 pages [0KB], 65536KB chunk

md126 : active raid1 sda[1] sdb[0]
      118778880 blocks super external:/md127/0 [2/2] [UU]

md127 : inactive sda[1](S) sdb[0](S)
      6306 blocks super external:imsm

unused devices: <none>
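To monitor the arrays continuously, for example while a resynchronization is in progress, you can wrap the same command in the standard watch utility; this is shown only as an optional convenience:

# watch -n 5 cat /proc/mdstat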


To view detailed information about the status of the RAID array, use the following command:

# mdadm --detail /dev/md/md5

The following is an example of the output:

/dev/md125:
           Version : 1.2
     Creation Time : Mon Aug 20 11:03:31 2018
        Raid Level : raid5
        Array Size : 4657379328 (4441.62 GiB 4769.16 GB)
     Used Dev Size : 776229888 (740.27 GiB 794.86 GB)
      Raid Devices : 7
     Total Devices : 8
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Aug 20 11:04:34 2018
             State : clean
    Active Devices : 7
   Working Devices : 8
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 16K

Consistency Policy : bitmap
              Name : any:md5
              UUID : 5f7b873c:16d6d418:7b5f6fde:247566a7
            Events : 2

    Number   Major   Minor   RaidDevice   State
       0     253        0        0        active sync   /dev/dm-0
       1     253        1        1        active sync   /dev/dm-1
       2     253        2        2        active sync   /dev/dm-2
       3     253        3        3        active sync   /dev/dm-3
       4     253        4        4        active sync   /dev/dm-4
       5     253        5        5        active sync   /dev/dm-5
       6     253        6        6        active sync   /dev/dm-6
       7     253        7        -        spare         /dev/dm-7
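If you are not sure which /dev/md device corresponds to which array name, mdadm can also print a one-line summary of every array on the node; for example:

# mdadm --detail --scan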

To check the SED mappings, use the following command:

# ls -l /dev/mapper/sed*

The following is an example of the output:

lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed0 -> ../dm-0
lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed1 -> ../dm-1
lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed2 -> ../dm-2
lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed3 -> ../dm-3
lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed4 -> ../dm-4
lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed5 -> ../dm-5
lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed6 -> ../dm-6
lrwxrwxrwx. 1 root root 7 Aug 20 11:03 /dev/mapper/sed7 -> ../dm-7
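The same mappings can also be inspected through device-mapper directly. The following standard dmsetup commands list the crypt targets and show the status of one mapping (sed0 is used here only as an example):

# dmsetup ls --target crypt
# dmsetup info sed0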

To list the NVMe controller devices, use the following command:

# ls /dev/nvme?

The following is an example of the output:

/dev/nvme0 /dev/nvme1 /dev/nvme2 /dev/nvme3 /dev/nvme4 /dev/nvme5 /dev/nvme6 /dev/nvme7
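The corresponding namespace block devices follow the /dev/nvmeXnY naming convention. Assuming the nvme-cli package is installed, you can list them with a shell glob or query a controller's namespaces directly; nvme0 is used here only as an example:

# ls /dev/nvme?n?
# nvme list-ns /dev/nvme0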

To check the drive firmware level, use the following command and check the values in the FW Rev column:

# nvme list

The following is an example of the output:

Node             SN               ... Namespace Usage                  Format       FW Rev
---------------- ---------------- --- --------- ---------------------- ------------ --------
/dev/nvme0n1     S3HCNX0JC02194       1         795.00 GB / 795.00 GB  512 B + 0 B  GPNA9B3Q
/dev/nvme1n1     S3HCNX0JC02193       1         795.00 GB / 795.00 GB  512 B + 0 B  GPNA9B3Q
/dev/nvme2n1     S3HCNX0JC02233       1         795.00 GB / 795.00 GB  512 B + 0 B  GPNA9B3Q
/dev/nvme3n1     S3HCNX0JC02191       1         795.00 GB / 795.00 GB  512 B + 0 B  GPNA9B3Q
/dev/nvme4n1     S3HCNX0JC02189       1         795.00 GB / 795.00 GB  512 B + 0 B  GPNA9B3Q
/dev/nvme5n1     S3HCNX0JC02198       1         795.00 GB / 795.00 GB  512 B + 0 B  GPNA9B3Q
/dev/nvme6n1     S3HCNX0JC02188       1         795.00 GB / 795.00 GB  512 B + 0 B  GPNA9B3Q
/dev/nvme7n1     S3HCNX0JC02195       1         795.00 GB / 795.00 GB  512 B + 0 B  GPNA9B3Q
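To confirm the firmware revision of an individual drive, you can also read the controller identify data with nvme-cli; the fr field holds the firmware revision (nvme0 is used here only as an example):

# nvme id-ctrl /dev/nvme0 | grep "^fr "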

To display the current NVMe drive configuration, use the following command:

# /usr/share/tacp/lenovo/tacp-nvme-control.py -display

The following is an example of the output:

discovering NVMe disks. this operation will take a few seconds.
Chassis UUID: 500e0eca08057b00
Number of canisters: 2
This is the bottom (primary) canister and has the controller id 33
The top (secondary) canister has the controller id 34
This chassis has the following controller ids: 33 , 34

NVMe devices: 8
----------------

NVMe control device: /dev/nvme0
Slot: 8
Manufacturer: SAMSUNG MZWLL800HEHP-00003
Serial number: S3HCNX0JC02194
Firmware: GPNA9B3Q
Capacity: 800166076416 bytes 800.166076416 GB
Drive Temp: 41 C
Drive Health: Ok
Total namespaces: 1
Namespace id(s): 1
Namespace 1 size: 794996768768 bytes, 794.996768768 GB
Namespace 1 is attached to controller id(s): 33

NVMe control device: /dev/nvme1
Slot: 7
Manufacturer: SAMSUNG MZWLL800HEHP-00003
Serial number: S3HCNX0JC02193
Firmware: GPNA9B3Q
Capacity: 800166076416 bytes 800.166076416 GB
Drive Temp: 41 C
Drive Health: Ok
Total namespaces: 1
Namespace id(s): 1
Namespace 1 size: 794996768768 bytes, 794.996768768 GB
Namespace 1 is attached to controller id(s): 33

NVMe control device: /dev/nvme2
Slot: 6
Manufacturer: SAMSUNG MZWLL800HEHP-00003
Serial number: S3HCNX0JC02233
Firmware: GPNA9B3Q
Capacity: 800166076416 bytes 800.166076416 GB
Drive Temp: 40 C
Drive Health: Ok
Total namespaces: 1
Namespace id(s): 1
Namespace 1 size: 794996768768 bytes, 794.996768768 GB
Namespace 1 is attached to controller id(s): 33

NVMe control device: /dev/nvme3
Slot: 5
Manufacturer: SAMSUNG MZWLL800HEHP-00003
Serial number: S3HCNX0JC02191
Firmware: GPNA9B3Q
Capacity: 800166076416 bytes 800.166076416 GB
Drive Temp: 40 C
Drive Health: Ok
Total namespaces: 1
Namespace id(s): 1
Namespace 1 size: 794996768768 bytes, 794.996768768 GB
Namespace 1 is attached to controller id(s): 33

NVMe control device: /dev/nvme4
Slot: 4
Manufacturer: SAMSUNG MZWLL800HEHP-00003
Serial number: S3HCNX0JC02189
Firmware: GPNA9B3Q
Capacity: 800166076416 bytes 800.166076416 GB
Drive Temp: 39 C
Drive Health: Ok
Total namespaces: 1
Namespace id(s): 1
Namespace 1 size: 794996768768 bytes, 794.996768768 GB
Namespace 1 is attached to controller id(s): 33

NVMe control device: /dev/nvme5
Slot: 3
Manufacturer: SAMSUNG MZWLL800HEHP-00003
Serial number: S3HCNX0JC02198
Firmware: GPNA9B3Q
Capacity: 800166076416 bytes 800.166076416 GB
Drive Temp: 39 C
Drive Health: Ok
Total namespaces: 1
Namespace id(s): 1
Namespace 1 size: 794996768768 bytes, 794.996768768 GB
Namespace 1 is attached to controller id(s): 33

NVMe control device: /dev/nvme6
Slot: 2
Manufacturer: SAMSUNG MZWLL800HEHP-00003
Serial number: S3HCNX0JC02188
Firmware: GPNA9B3Q
Capacity: 800166076416 bytes 800.166076416 GB
Drive Temp: 39 C
Drive Health: Ok
Total namespaces: 1
Namespace id(s): 1
Namespace 1 size: 794996768768 bytes, 794.996768768 GB
Namespace 1 is attached to controller id(s): 33

NVMe control device: /dev/nvme7
Slot: 1
Manufacturer: SAMSUNG MZWLL800HEHP-00003
Serial number: S3HCNX0JC02195
Firmware: GPNA9B3Q
Capacity: 800166076416 bytes 800.166076416 GB
Drive Temp: 39 C
Drive Health: Ok
Total namespaces: 1
Namespace id(s): 1
Namespace 1 size: 794996768768 bytes, 794.996768768 GB
Namespace 1 is attached to controller id(s): 33
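The drive temperature and health values reported above can also be cross-checked against each drive's SMART log using nvme-cli (nvme0 is used here only as an example):

# nvme smart-log /dev/nvme0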