Primary Storage and IBM Spectrum Virtualize


creating changes to SVC clusters

  • 1.  creating changes to SVC clusters

    Posted Tue August 29, 2017 04:41 PM

    I am working with 2 sites that share a 4-node SVC cluster split between them. Due to some issues I need to move to 2 separate clusters, and I am really hopeful there is a way to do this without disruption. Have any of you been through this process and can you share how I can go about it? Thank you very much.

  • 2.  RE: creating changes to SVC clusters

    Posted Tue August 29, 2017 04:51 PM

    Hi Ceasar, unfortunately I am not aware of a supported way of doing that non-disruptively.  I did work with a customer who was doing the same thing, and they ended up creating a separate new cluster (which requires temporary nodes and some temporary storage).  You can use Metro Mirror to copy the data to the new cluster, but you will need a host outage to change each host over to the data on the new cluster. This host outage can be scheduled host by host.
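    As a rough illustration of the Metro Mirror approach, the per-volume CLI flow on the source cluster might look like the following. This is a hedged sketch: the cluster name NEW_CLUSTER and the volume and relationship names are placeholders, and the partnership command varies by code level (older releases use mkpartnership rather than mkfcpartnership), so check the CLI reference for your release.

    ```
    # Illustrative only: run on the existing cluster; all names are placeholders.
    # Create the partnership to the new cluster (syntax varies by code level):
    mkfcpartnership -linkbandwidthmbits 2000 NEW_CLUSTER

    # Create a Metro Mirror relationship for one volume (Metro Mirror is the
    # default when -global is not specified), then start the initial copy:
    mkrcrelationship -master prod_vol01 -aux new_vol01 -cluster NEW_CLUSTER -name rel_vol01
    startrcrelationship rel_vol01

    # Watch progress; cut the host over during its outage window once the
    # relationship reports consistent_synchronized:
    lsrcrelationship rel_vol01
    ```

    You would repeat this per volume (or group volumes into a consistency group), which is what makes the host-by-host outage scheduling possible.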

  • 3.  RE: creating changes to SVC clusters

    Posted Tue August 29, 2017 06:17 PM

    Tom, thanks for the response. Because we already have production systems running in each site, we are looking to avoid rebuilding either site. I guess I should have mentioned that this isn't a stretched cluster; it is a HyperSwap cluster with an I/O group in each site.

    Do you think I could still use the solution you had your customer use?

  • 4.  RE: creating changes to SVC clusters

    Posted Wed August 30, 2017 09:28 AM

    Unfortunately no, a HyperSwap volume cannot also be in a replication relationship.  However, it does open other possibilities.  The following is a high-level example and assumes that ALL volumes are in a HyperSwap relationship and your workload can run in either data center.  While I have not done the following, theoretically it should be possible.

    First make sure your workload is all running on hosts in data center "A".  You should then be able to remove the mirror copies in the "B" data center (rmvolumecopy), leaving all hosts running on un-mirrored volumes in the "A" data center.  When this is completely done you should have nothing using the storage and nodes in the "B" data center.  You could then remove the storage and the nodes in the "B" data center, clean up the zoning, change the topology from HyperSwap to standard (chsystem), get rid of your third-site quorum, and remove all of the site indicators from everything.
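    The key commands in those steps might look roughly like this. Treat it as a sketch, not a procedure: the volume and site names are placeholders, and the exact rmvolumecopy options and site-indicator cleanup commands vary by Spectrum Virtualize code level, so verify each step against your release's CLI guide.

    ```
    # Illustrative only; names are placeholders -- verify against your release.

    # 1. For each HyperSwap volume, drop the copy held at site "B":
    rmvolumecopy -site site2 hs_vol01

    # 2. With all "B" copies, nodes, and storage removed and zoning cleaned up,
    #    change the cluster topology:
    chsystem -topology standard

    # 3. Review the quorum layout before removing the third-site quorum:
    lsquorum
    ```

    The topology change is the step least often exercised in the field, so it is worth confirming with IBM support before running it.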

    At that point you should have all hosts running on a standard SVC cluster in data center "A" and unused nodes and storage in data center "B".  The cluster in "A" is no longer a HyperSwap cluster.  You should then be able to either set up a new cluster in "B" and replicate to it for DR protection, or redeploy the equipment from "B" to "A".

    There are a lot more details to think about: cleaning up the change volumes (CVs) in data center "A", whether you are using consistency groups, and what your physical SAN infrastructure looks like and whether you will keep it that way (a HyperSwap config wants a dedicated SAN and ISLs for the node-to-node traffic).  But this is a high-level outline that I think should work.
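    Before starting, a quick inventory of those details is cheap to gather. A possible pre-flight check, using standard listing commands (the output columns differ slightly between releases):

    ```
    # Illustrative pre-flight inventory; read-only commands.

    # List active consistency groups (HyperSwap relationships usually live in these):
    lsrcconsistgrp

    # List the relationships themselves; the master and aux change volumes shown
    # here are the CVs that will need cleaning up in data center "A":
    lsrcrelationship
    ```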

    There are lots of cautions... as soon as you start, you no longer have HA protection, so think about your backups and what you will do if a disaster happens in the middle of this.   Open proactive PMRs before starting, make all changes in a maintenance window, and if you are not very comfortable with SVC/Storwize it would be a good idea to get a professional services contract with someone who can help (IBM or a Business Partner).  Some of these steps are probably very infrequently done by IBM customers (for example, changing the topology from HyperSwap to standard), so I would say they are relatively high risk.

    If you want to talk about what you are trying to accomplish, send me an email (my email address is in my profile) and we can set up a time to talk.