Primary Storage

  • 1.  Single IO Group - Preferred Node

    Posted Sat July 16, 2022 04:25 PM
    When a volume is automatically assigned to a preferred node (say Node 1):
    --Under normal operating conditions, does it always stay on Node 1?

    Automatic node assignment is based on which node currently owns the fewest volumes.
    (Consider a 2 node system)

    But what happens if I create:

    Volume1 - 8,000 GB
    Volume2 -  1 GB
    Volume3 - 10,000 GB
    Volume4 -  2 GB

    Does the system assign the 8TB and 10TB to Node 1 ?
    and assign the 1GB and 2GB to Node 2 ?

    If the system automatically assigns nodes, does it later automatically rebalance those nodes?
    i.e. is an automatic 'movevdisk' command run? (I can't imagine this would be the case)


  • 2.  RE: Single IO Group - Preferred Node

    Posted Mon July 18, 2022 03:24 AM
    Have a look at:

    T Masteen

  • 3.  RE: Single IO Group - Preferred Node

    Posted Mon July 18, 2022 04:15 AM
    Thanks - I get what Barry's saying

    My question still remains:

    1) If all my heavy-write applications were stuck on preferred node 1, then
    all the write-to-disk destaging would be processed through node 1.

    I suppose my lack of understanding comes down to this:

    1) If the destaging process uses very little CPU and RAM, then it's not too big an issue?

    If destaging were a CPU-intensive operation, then it would be an issue?

    2) I fully understand that round-robin reads are not an issue.



  • 4.  RE: Single IO Group - Preferred Node

    Posted Mon July 18, 2022 05:06 AM
    According to IBM SAN Volume Controller Best Practices and Performance Guidelines for IBM Spectrum Virtualize Version 8.4.2:

    5.2 Guidance for creating volumes
    By default, the preferred node, which owns a volume within an I/O group, is selected on a load-balancing basis. Although it is not easy to estimate the workload when the volume is created, distribute the workload evenly on each node within an I/O group.

    5.10.1 Changing the preferred node of a volume within an I/O group
    Changing the preferred node within an I/O group can be done with concurrent I/O. However, it can lead to some delay in performance and in the case of some specific operating systems or applications, they might detect some time-outs.

    This operation can be done by using the CLI and GUI. This is not done automatically.
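    As a sketch, changing the preferred node on the Spectrum Virtualize CLI looks roughly like this (the volume name `vol01` and node ID are placeholders for illustration):

    ```
    # Show the current preferred node of a volume (placeholder name "vol01")
    lsvdisk vol01

    # Move the preferred node of vol01 to node 2 within the same I/O group
    # (concurrent with I/O, but as noted above it may briefly affect latency)
    movevdisk -node 2 vol01
    ```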

    In today's clusters with FlashSystem arrays as back-end devices (with their own cache), I haven't seen any problems with cache destaging yet.

    T Masteen

  • 5.  RE: Single IO Group - Preferred Node

    IBM Champion
    Posted Mon July 18, 2022 05:28 AM
    Edited by Nezih Boyacioglu Mon July 18, 2022 05:32 AM
    You can monitor node and write-cache utilization in Spectrum Control. As Mr. T said, you probably will not see any overloaded-node issue.
    Of course there are some limitations, but they depend on how many hosts are connected to your storage, how many volumes are assigned to hosts, how many FC logins there are per FC port, what your host OS is, and how your host manages multipathing.

    For example, if your hosts run VMware, with vSphere 7.0 and later we recommend using HPP instead of NMP as the multipath plugin, and choosing the Load Balance - Latency (LB-Latency) path selection scheme. This configuration monitors all paths, and if one path exceeds its latency threshold, VMware discards that path and uses the other (healthy) paths.

    More details will be in the forthcoming "VMware and FlashSystem Best Practices" Redbook (to be published within a couple of weeks).

    Nezih Boyacioglu

  • 6.  RE: Single IO Group - Preferred Node

    Posted Mon July 18, 2022 06:09 AM
    The automatic "preferred" node selection is indeed based on the number of vdisks; size is not taken into account because at creation time it is impossible to predict the IO rate on a particular vdisk.
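    As a toy illustration of that count-based selection (a deliberately simplified model, not the actual firmware logic; the volume names and sizes come from the earlier example in this thread):

    ```python
    def pick_preferred_node(vdisk_counts):
        # Pick the node that currently owns the fewest vdisks;
        # ties go to the lower node ID.
        return min(vdisk_counts, key=lambda node: (vdisk_counts[node], node))

    vdisk_counts = {1: 0, 2: 0}   # 2-node I/O group, no vdisks yet
    sizes_gb = {"Volume1": 8000, "Volume2": 1, "Volume3": 10000, "Volume4": 2}

    assignment = {}
    for name in sizes_gb:         # creation order matters, size does not
        node = pick_preferred_node(vdisk_counts)
        vdisk_counts[node] += 1
        assignment[name] = node

    print(assignment)
    # {'Volume1': 1, 'Volume2': 2, 'Volume3': 1, 'Volume4': 2}
    ```

    Note that in this model the two large volumes both land on node 1 purely because of creation order, which is exactly the scenario the original question worried about: the vdisk counts balance out, but the capacity (and potentially the workload) does not.
    
    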

    In most cases, hosts see the paths to a vdisk that go via the preferred node as the "optimal" paths.
    The paths via the other node of the same I/O group are considered "standby" paths, so they are not used for host traffic in normal operations.

    But to be honest, I think it is OK that the IOs in an I/O group are not equally distributed across both nodes:
    each node of an I/O group should be able to carry the full IO load.
    One example is firmware upgrades, where each node is unavailable for some time.
    Also, if you encounter a failure of one node, the system should still perform as before; only redundancy is lost in that case.

    Hans Populaire

  • 7.  RE: Single IO Group - Preferred Node

    Posted Mon July 18, 2022 01:48 PM
    The mappings are not automatically rebalanced between controllers. You can manually change the preferred node, and the I/O groups which can access the vdisks. Switching these automatically typically causes an interruption or a pause in I/O, which is undesirable.

    Switching between nodes in the same I/O group is minor, because cache sync should be pretty low latency. Even so, the cache segment is primary/secondary. Going to the wrong node will push the write to the primary node if it's available, then sync it back to the secondary, increasing latency.

    All of this is to ensure that writes and reads happen in order.  Without it, you could read stale data while an earlier write was still being serviced on the other node.

    Josh-Daniel Davis
    Highland Village TX

  • 8.  RE: Single IO Group - Preferred Node

    Posted Tue July 19, 2022 12:16 PM

    In small environments we have not yet been able to identify any problems with current systems and generations of controller nodes. We monitor the systems in great detail with various tools; these would report high workloads immediately if latencies rose and affected performance.

    With special requirements and a small number of paths on a FlashSystem, you could possibly expect a 16G FC interface to be over-utilized. But you would recognize that immediately.

    To my knowledge, the Spectrum Virtualize systems do not have an automatism, as known for example from the old DS (LSI Engenio based) systems, which shifted the preferred path under heavy load; in my opinion such a mechanism is also not useful.
    On LSI Engenio platforms, the preferred-path rebalancing triggered by vMotion data migrations was annoying behavior.

    I think too many automatic processes always bring problems; I like to distribute the volumes manually when I add new ones or delete old ones.
    Of course, using VVols can get a bit confusing, but VVols are always confusing, and that's why I don't like them either.

    Sebastian Besler