storage command
Use this command to display information about, and (if supported by the platform) configure, the server's storage devices that are managed by the IMM2.
Option | Description | Values |
---|---|---|
-list | List the storage targets managed by the IMM2. | controllers\|pools\|volumes\|drives |
-list -target target_id | List the storage targets managed by the IMM2, filtered by target_id. | pools\|volumes\|drives, where target_id is: ctrl[x]\|pool[x] |
-list flashdimms | List the Flash DIMMs managed by the IMM2. | |
-list devices | Display the status of all disks and Flash DIMMs managed by the IMM2. | |
-show target_id | Display information for the selected target managed by the IMM2. | Where target_id is: ctrl[x]\|vol[x]\|disk[x]\|pool[x]\|flashdimm[x] |
-show target_id info | Display detailed information for the selected target managed by the IMM2. | Where target_id is: ctrl[x]\|vol[x]\|disk[x]\|pool[x]\|flashdimm[x] |
-show target_id firmware | Display the firmware information for the selected target managed by the IMM2. | Where target_id is: ctrl[x]\|disk[x]\|flashdimm[x] |
-showlog target_id <m:n\|all> | Display the event logs of the selected target managed by the IMM2. | Where target_id is: ctrl[x]. m:n selects a range from one up to the maximum number of event logs; all displays all of the event logs. |
-evtfwd on | Enable the RAID event forwarding feature. | |
-evtfwd off | Disable the RAID event forwarding feature. | |
-evtfwd status | Show the RAID event forwarding status. | feature: on/off; warning: asserted (shown only if the feature is on and a warning is asserted); error: asserted (shown only if the feature is on and an error is asserted) |
-evtfwd deassert all | De-assert all forwarded RAID events, both warning and error. | |
-evtfwd deassert warning | De-assert warning RAID events. | |
-evtfwd deassert error | De-assert error RAID events. | |
-config ctrl -scanforgn -target target_id | Detect a foreign RAID configuration. | Where target_id is: ctrl[x] |
-config ctrl -imptforgn -target target_id | Import the foreign RAID configuration. | Where target_id is: ctrl[x] |
-config ctrl -clrforgn -target target_id | Clear the foreign RAID configuration. | Where target_id is: ctrl[x] |
-config ctrl -clrcfg -target target_id | Clear the RAID configuration. | Where target_id is: ctrl[x] |
-config drv -mkoffline -target target_id | Change the drive state from online to offline. | Where target_id is: disk[x] |
-config drv -mkonline -target target_id | Change the drive state from offline to online. | Where target_id is: disk[x] |
-config drv -mkmissing -target target_id | Mark an offline drive as an unconfigured good drive. | Where target_id is: disk[x] |
-config drv -prprm -target target_id | Prepare an unconfigured good drive for removal. | Where target_id is: disk[x] |
-config drv -undoprprm -target target_id | Cancel the prepare-for-removal operation on an unconfigured good drive. | Where target_id is: disk[x] |
-config drv -mkbad -target target_id | Change an unconfigured good drive to an unconfigured bad drive. | Where target_id is: disk[x] |
-config drv -mkgood -target target_id | Change an unconfigured bad drive to an unconfigured good drive, or convert a just-a-bunch-of-disks (JBOD) drive to an unconfigured good drive. | Where target_id is: disk[x] |
-config drv -addhsp -[dedicated pools] -target target_id | Assign the selected drive as a hot spare to one controller or to existing storage pools. | Where target_id is: disk[x] |
-config drv -rmhsp -target target_id | Remove the hot spare. | Where target_id is: disk[x] |
-config vol -remove -target target_id | Remove one volume. | Where target_id is: vol[x] |
-config vol -set [-N] [-w] [-r] [-i] [-a] [-d] [-b] -target target_id | Modify the properties of one volume. | Where target_id is: vol[x] |
-config vol -add <[-R] [-D disk] [-H disk] [-1 hole]> [-N] [-w] [-r] [-i] [-a] [-d] [-f] [-S] [-P] -target target_id | Create one volume in a new storage pool when the target is a controller, or create one volume in an existing storage pool when the target is a storage pool. | Where target_id is: ctrl[x]\|pool[x] |
-config vol -getfreecap [-R] [-D disk] [-H disk] -target target_id | Get the free capacity of the drive group. | Where target_id is: ctrl[x] |
-help | Display the command usage and options. | |
Syntax:

storage [options]
option:
  -config ctrl|drv|vol -option [-options] -target target_id
  -list controllers|pools|volumes|drives
  -list pools -target ctrl[x]
  -list volumes -target ctrl[x]|pool[x]
  -list drives -target ctrl[x]|pool[x]
  -list devices
  -list flashdimms
  -show target_id
  -show {ctrl[x]|pool[x]|disk[x]|vol[x]|flashdimm[x]} info
  -show {ctrl[x]|disk[x]|flashdimm[x]} firmware
  -showlog ctrl[x] m:n|all
  -evtfwd on|off|deassert|status    (forward warning/error RAID events as warning/error IMM events)
  -h    help
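
The example sessions that follow are interactive. The same commands can also be issued programmatically over the IMM2 SSH CLI; the sketch below uses the third-party paramiko library, with a hypothetical host name and credentials, and assumes the IMM2 accepts a one-shot exec channel (some firmware levels may require an interactive shell instead):

```python
# Minimal sketch: send one IMM2 storage command over SSH and return its output.
# Host name and credentials are hypothetical placeholders; SSH CLI access must
# be enabled on the IMM2, and some firmware may require an interactive shell
# rather than the one-shot exec channel used here.
import paramiko

def run_storage(host: str, user: str, password: str, args: str) -> str:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user, password=password)
    try:
        _stdin, stdout, _stderr = client.exec_command("storage " + args)
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    print(run_storage("imm2.example.com", "USERID", "PASSW0RD", "-list volumes"))
```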
system> storage
-config ctrl -clrcfg -target ctrl[0]
ok
system>
system> storage
-config ctrl -clrforgn -target ctrl[0]
ok
system>
system> storage
-config ctrl -imptforgn -target ctrl[0]
ok
system>
system> storage
-config ctrl -scanforgn -target ctrl[0]
Detect 1 foreign configuration(s) on controller ctrl[0]
system>
system> storage
-config drv -addhsp -dedicated pool[0-1] -target disk[0-0]
ok
system>
system> storage
-config drv -addhsp -target disk[0-0]
ok
system>
system> storage
-config drv -mkbad -target disk[0-0]
ok
system>
system> storage
-config drv -mkgood -target disk[0-0]
ok
system>
system> storage
-config drv -mkmissing -target disk[0-0]
ok
system>
system> storage
-config drv -mkoffline -target disk[0-0]
ok
system>
system> storage
-config drv -mkonline -target disk[0-0]
ok
system>
system> storage
-config drv -prprm -target disk[0-0]
ok
system>
system> storage
-config drv -rmhsp -target disk[0-0]
ok
system>
system> storage
-config drv -undoprprm -target disk[0-0]
ok
system>
system> storage
-config vol -add -1 1 -target pool[0-1]
ok
system>
system> storage
-config vol -add -R 1 -D disk[0-0]:disk[0-1] -w 1 -r 2 -i 0 -a 0 -d 0 -f 0
-N LD_volume -S 100000 -P 64K -H disk[0-2] -target ctrl[0]
ok
system>
system> storage
-config vol -getfreecap -R 1 -D disk[0-0]:disk[0-1] -H disk[0-2] -target ctrl[0]
The drive group configuration is good with free capacity 500000MB
system>
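
For scripted capacity checks, the free-capacity figure can be pulled out of the response line shown above; a small sketch using the literal response from this session:

```python
import re

# Response line exactly as shown in the -getfreecap session above.
line = "The drive group configuration is good with free capacity 500000MB"
match = re.search(r"free capacity (\d+)MB", line)
if match:
    free_mb = int(match.group(1))  # 500000
    print(free_mb)
```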
system> storage
-config vol -remove -target vol[0-1]
ok
system>
system> storage
-config vol -set -N LD_volume -w 0 -target vol[0-0]
ok
system>
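
No session is shown for the -evtfwd options. Per the options table, -evtfwd status reports a feature: on/off line, plus warning: asserted and error: asserted lines while those conditions hold; the sketch below parses that format, using illustrative sample text rather than captured IMM2 output:

```python
# Parse `storage -evtfwd status` output into a dict, per the format in the
# options table. The sample text is illustrative, not captured IMM2 output.
sample = """feature: on
warning: asserted"""

status = {}
for line in sample.splitlines():
    key, _, value = line.partition(":")
    status[key.strip()] = value.strip()

print(status.get("feature"))                # "on"
print(status.get("warning") == "asserted")  # True: a warning is forwarded
print(status.get("error") == "asserted")    # False: no error line present
```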
system> storage
-list controllers
ctrl[0] ServerRAID M5110e(Slot No. 0)
ctrl[1] ServerRAID M5110f(Slot No. 1)
system>
system> storage
-list drives
disk[0-0] Drive 0
disk[0-1] Drive 1
disk[0-2] Drive 2
system>
system> storage
-list flashdimms
flashdimm[1] Flash DIMM 1
flashdimm[4] Flash DIMM 4
flashdimm[9] Flash DIMM 9
system>
system> storage
-list pools
pool[0-0] Storage Pool 0
pool[0-1] Storage Pool 1
system>
system> storage
-list volumes
vol[0-0] Volume 0
vol[0-1] Volume 1
vol[0-2] Volume 2
system>
system> storage
-list drives -target ctrl[0]
disk[0-0] Drive 0
disk[0-1] Drive 1
disk[0-2] Drive 2
system>
system> storage
-list drives -target pool[0-0]
disk[0-0] Drive 0
disk[0-1] Drive 1
system>
system> storage
-list pools -target ctrl[0]
pool[0-0] Storage Pool 0
system>
system> storage
-list volumes -target ctrl[0]
vol[0-0] Volume 0
vol[0-1] Volume 1
system>
system> storage
-list volumes -target pool[0-0]
vol[0-0] Volume 0
vol[0-1] Volume 1
system>
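
Each -list variant above prints one target per line: the target ID, a space, then a display name. That shape makes the output easy to split in a script; a sketch using the volume listing above as sample input:

```python
# Split `storage -list ...` output lines such as "vol[0-0] Volume 0" into
# (target_id, display_name) pairs. Sample input is the listing shown above.
sample = """vol[0-0] Volume 0
vol[0-1] Volume 1"""

targets = []
for line in sample.splitlines():
    target_id, _, name = line.strip().partition(" ")
    targets.append((target_id, name))

print(targets)  # [('vol[0-0]', 'Volume 0'), ('vol[0-1]', 'Volume 1')]
```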
system> storage
-show ctrl[0] firmware
Total Firmware number: 2
Name: RAID Firmware1
Description: RAID Firmware
Manufacture: IBM
Version: 4.01(3)T
Release Date: 01/05/2013
Name: RAID Firmware2
Description: RAID Firmware
system>
system> storage
-show ctrl[0] info
Product Name: ServerRAID M5110e
Firmware Package Version: 23.7.0.1.2
Battery Backup: Installed
Manufacture: IBM
UUID: 1234567890123456
Model Type / Model: 1234AHH
Serial No.: 12345678901
FRU No.: 5005076049CC4
Part No.: LSI2004
Cache Model Status: Unknown
Cache Model Memory Size: 300MB
Cache Model Serial No.: PBKUD0XTA0P04Y
PCI Slot Number: 0
PCI Bus Number: 2
PCI Device Number: 2
PCI Function Number: 10
PCI Device ID: 0x1000
PCI Subsystem Device ID: 0x1413
Ports: 2
Port 1: 12345678901234
Port 2: 12345678901235
Storage Pools: 2
pool[0-0] Storage Pool 0
pool[0-1] Storage Pool 1
Drives: 3
disk[0-0] Drive 0
disk[0-1] Drive 1
disk[0-2] Drive 2
system>
system> storage
-show disk[0-0] firmware
Total Firmware number: 1
Name: Drive
Description:
Manufacture:
Version: BE24
Release Date:
system>
system> storage
-show disk[0-0] info
Product Name: ST98394893
State: Online
Slot No.: 0
Disk Type: SATA
Media Type: HDD
Health Status: Normal
Capacity: 100.000GB
Speed: 6.0Gb/s
Current Temperature: 33C
Manufacture: ATA
Device ID: 5
Enclosure ID: 0x00FC
Machine Type:
Model:
Serial No.: 9XKJKL
FRU No.:
Part No.:
system>
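
The -show target_id info output is a flat list of Key: Value lines, so it folds naturally into a dictionary for scripted health checks; a sketch using a few lines from the disk session above:

```python
# Fold `storage -show <target> info` output into a dict. Sample input is
# taken from the -show disk[0-0] info session above.
sample = """Product Name: ST98394893
State: Online
Health Status: Normal
Capacity: 100.000GB"""

info = {}
for line in sample.splitlines():
    key, _, value = line.partition(":")
    info[key.strip()] = value.strip()

print(info["Health Status"])  # "Normal"
```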
system> storage
-show flashdimm[15]
Name: CPU1 DIMM 15
Health Status: Normal
Operational Status: Online
Capacity(GB): 400GB
Model Type: DDR3
Part Number: 93E40400GGM101PAT
FRU S/N: 44000000
Manuf ID: Diablo Technologies
Temperature: 0C
Warranty Writes: 100%
Write Endurance: 100%
F/W Level: A201.0.0.49152
system>
system> storage
-show pool[0-0]
RAID State: RAID 0
RAID Capacity: 67.000GB (0.000GB free)
Drives: 2
disk[0-0] Drive 0
disk[0-1] Drive 1
Volumes: 2
vol[0-0] Volume 0
vol[0-1] Volume 1
system>
system> storage
-show pool[0-1] info
RAID State: RAID 1
RAID Capacity: 231.898GB (200.000GB free)
Holes: 2
#1 Free Capacity: 100.000GB
#2 Free Capacity: 100.000GB
Drives: 2
disk[0-1] Drive 1
disk[0-2] Drive 2
Volume: 1
vol[0-1] LD_volume
system>
system> storage
-show vol[0-0]
Name: Volume 0
Stripe Size: 64KB
Status: Offline
Capacity: 100.000GB
system>
system> storage
-show vol[0-0] info
Name: LD_volume
Status: Optimal
Stripe Size: 64KB
Bootable: Not Bootable
Capacity: 231.898GB
Read Policy: No Read Ahead
Write Policy: Write Through
I/O Policy: Direct I/O
Access Policy: Read Write
Disk Cache Policy: Unchanged
Background Initialization: Enable
system>