IBM FlashSystem


Stopping a controller from an existing Storwize cluster for maintenance

  • 1.  Stopping a controller from an existing Storwize cluster for maintenance

    Posted Fri July 14, 2023 03:26 AM

     Hi,

      We have an older cluster (2076-624 and 2076-724 running 8.2.x) with 4 controllers/I/O groups across 2 datacenters (2 controllers in each DC), spanning a common Fibre Channel network. We want to power off one controller for maintenance, to move it from one DC to the other. This controller will be empty, with no volumes or hosts depending on it; it holds no quorum disks, nor the configuration node.

      I didn't find any dedicated procedure for this kind of task. Is it enough to simply shut down the controller without impacting the rest of the cluster, or do I need to perform some other steps before shutting it down?

     Thanks.



    ------------------------------
    Adrian Nicolae
    ------------------------------


  • 2.  RE: Stopping a controller from an existing Storwize cluster for maintenance

    Posted Mon July 17, 2023 03:16 AM

    Hi Adrian,

    I suppose that by "controller" you mean a control enclosure forming an I/O group, correct?

    Basically, you may shut down the two node canisters in the control enclosure to be relocated.

    However, even though this I/O group does not serve any volumes or other functions, there might still be some dependencies on it, for instance FlashCopy bitmaps for volumes that were moved from this I/O group to another.

    To ensure that shutting down this component does not impact anything else, it is highly advisable to check for dependencies up front using the CLI:

    • Note the enclosure ID.
    • Determine both node canister IDs in this enclosure by checking lsnodecanister.
    • Check for any dependencies:
      • lsdependentvdisks -node <node_id> (run this for both node canister IDs)
      • lsdependentvdisks -enclosure <enclosure_id>
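    As a rough sketch, the checks above could look like the following CLI session. The cluster address and the enclosure/node IDs (3, 5, 6) are placeholders for illustration; substitute the values you noted on your own system.

    ```shell
    # Connect to the cluster's management CLI (address is a placeholder)
    ssh superuser@<cluster_ip>

    # List enclosures and note the ID of the enclosure to be relocated
    lsenclosure

    # List node canisters and note the two IDs belonging to that enclosure
    lsnodecanister

    # Check for dependent volumes on each node canister and on the enclosure;
    # empty output means nothing depends on this hardware
    lsdependentvdisks -node 5
    lsdependentvdisks -node 6
    lsdependentvdisks -enclosure 3
    ```

    If any of the lsdependentvdisks commands returns volumes, resolve those dependencies before proceeding with the shutdown.
    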

    Last but not least: when you shut down the node canisters, either via GUI or CLI, do not apply any -force parameter, as it would override the automatic dependency checks. The system will warn you if anything depends on these node canisters.
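    On the CLI, the shutdown itself might look like the sketch below (node IDs 5 and 6 are placeholders; please verify the exact command syntax for your 8.2.x code level in IBM Docs before running it).

    ```shell
    # Shut down each node canister individually, one after the other.
    # Deliberately no -force: without it, the system refuses the shutdown
    # and reports the dependency if anything still relies on the canister.
    stopsystem -node 5
    stopsystem -node 6
    ```
    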

    More information is available in the IBM Docs article "Shutting down a node using the CLI".



    ------------------------------
    Best regards, 

    Christian Schroeder
    IBM Storage Virtualize Support with Passion
    ------------------------------