Reassigning disk ownership for pool 1 disks on the disaster site (MetroCluster IP configurations)
If one or both of the controller modules or NVRAM cards were replaced at the disaster site, the system IDs have changed, and you must reassign the disks belonging to the root aggregates to the replacement controller modules.
About this task
Because the nodes are in switchover mode, only the disks containing the root aggregates of pool1 at the disaster site are reassigned in this task. They are the only disks still owned by the old system IDs at this point.
This task is performed on the replacement nodes at the disaster site.
This task is performed in Maintenance mode.
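If a replacement node is not already in Maintenance mode, it is typically booted into it from the boot loader. The following is a minimal sketch only, assuming the standard boot_ontap maint boot option and a LOADER-A> prompt; confirm the exact procedure for your platform:
LOADER-A> boot_ontap maint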
The examples in this procedure assume the following:
- Site A is the disaster site.
- node_A_1 has been replaced.
- node_A_2 has been replaced.
- Site B is the surviving site.
- node_B_1 is healthy.
- node_B_2 is healthy.
The old and new system IDs were identified in Determining the system IDs of the old controller modules.
The examples in this procedure use controllers with the following system IDs:
| Node | Original system ID | New system ID |
| --- | --- | --- |
| node_A_1 | 4068741258 | 1574774970 |
| node_A_2 | 4068741260 | 1574774991 |
| node_B_1 | 4068741254 | unchanged |
| node_B_2 | 4068741256 | unchanged |
- With the replacement node in Maintenance mode, reassign the root aggregate disks, using the correct command depending on whether your system is configured with ADP.
You can proceed with the reassignment when prompted.

| Is the system using ADP? | Use this command for disk reassignment |
| --- | --- |
| Yes | disk reassign -s old-system-ID -d new-system-ID -p old-partner-system-ID |
| No | disk reassign -s old-system-ID -d new-system-ID |
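As an illustrative sketch only (not output captured from a system), the ADP form of the command for node_A_1 would combine the system IDs from the table above, using the original ID of its HA partner node_A_2 as the old partner system ID:
*> disk reassign -s 4068741258 -d 1574774970 -p 4068741260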
Example
The following example shows reassignment of drives on a non-ADP system:
*> disk reassign -s 4068741256 -d 1574774970
Partner node must not be in Takeover mode during disk reassignment from maintenance mode.
Serious problems could result!!
Do not proceed with reassignment if the partner is in takeover mode. Abort reassignment (y/n)? n
After the node becomes operational, you must perform a takeover and giveback of the HA partner
node to ensure disk reassignment is successful.
Do you want to continue (y/n)? y
Disk ownership will be updated on all disks previously belonging to Filer with sysid 537037643.
Do you want to continue (y/n)? y
disk reassign parameters: new_home_owner_id 537070473 , new_home_owner_name
Disk 0m.i0.3L14 will be reassigned.
Disk 0m.i0.1L6 will be reassigned.
Disk 0m.i0.1L8 will be reassigned.
Number of disks to be reassigned: 3
- Destroy the contents of the mailbox disks: mailbox destroy local
You can proceed with the destroy operation when prompted.
Example
The following example shows output from the mailbox destroy local command:
*> mailbox destroy local
Destroying mailboxes forces a node to create new empty mailboxes,
which clears any takeover state, removes all knowledge
of out-of-date plexes of mirrored volumes, and will prevent
management services from going online in 2-node cluster
HA configurations.
Are you sure you want to destroy the local mailboxes? y
...............Mailboxes destroyed.
*>
- If disks have been replaced, there will be failed local plexes that must be deleted.
- Display the aggregate status: aggr status
Example
In the following example, plex node_A_1_aggr0/plex0 is in a failed state.
*> aggr status
Aug 18 15:00:07 [node_B_1:raid.vol.mirror.degraded:ALERT]: Aggregate node_A_1_aggr0 is
mirrored and one plex has failed. It is no longer protected by mirroring.
Aug 18 15:00:07 [node_B_1:raid.debug:info]: Mirrored aggregate node_A_1_aggr0 has plex0
clean(-1), online(0)
Aug 18 15:00:07 [node_B_1:raid.debug:info]: Mirrored aggregate node_A_1_aggr0 has plex2
clean(0), online(1)
Aug 18 15:00:07 [node_B_1:raid.mirror.vote.noRecord1Plex:error]: WARNING: Only one plex
in aggregate node_A_1_aggr0 is available. Aggregate might contain stale data.
Aug 18 15:00:07 [node_B_1:raid.debug:info]: volobj_mark_sb_recovery_aggrs: tree:
node_A_1_aggr0 vol_state:1 mcc_dr_opstate: unknown
Aug 18 15:00:07 [node_B_1:raid.fsm.commitStateTransit:debug]: /node_A_1_aggr0 (VOL):
raid state change UNINITD -> NORMAL
Aug 18 15:00:07 [node_B_1:raid.fsm.commitStateTransit:debug]: /node_A_1_aggr0 (MIRROR):
raid state change UNINITD -> DEGRADED
Aug 18 15:00:07 [node_B_1:raid.fsm.commitStateTransit:debug]: /node_A_1_aggr0/plex0
(PLEX): raid state change UNINITD -> FAILED
Aug 18 15:00:07 [node_B_1:raid.fsm.commitStateTransit:debug]: /node_A_1_aggr0/plex2
(PLEX): raid state change UNINITD -> NORMAL
Aug 18 15:00:07 [node_B_1:raid.fsm.commitStateTransit:debug]: /node_A_1_aggr0/plex2/rg0
(GROUP): raid state change UNINITD -> NORMAL
Aug 18 15:00:07 [node_B_1:raid.debug:info]: Topology updated for aggregate node_A_1_aggr0
to plex plex2
*>
- Delete the failed plex: aggr destroy plex-id
Example
*> aggr destroy node_A_1_aggr0/plex0
- Display the aggregate status: aggr status
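As a quick verification sketch (no captured output is shown here), re-running the status command after the destroy should list node_A_1_aggr0 with only the surviving plex, plex2 in this example:
*> aggr status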
- Halt the node to display the LOADER prompt: halt
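A minimal sketch of this step; the exact boot loader prompt shown (LOADER-A>) is an assumption and varies by platform:
*> halt
...
LOADER-A>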
- Repeat the preceding steps on the other node at the disaster site.
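As an illustrative sketch only, repeating the reassignment on node_A_2 of a non-ADP system would use that node's system IDs from the table above:
*> disk reassign -s 4068741260 -d 1574774991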