
Live Data migration across different IOPS using Power Virtual Server

By Srikanth Joshi

  

Introduction:

In the current hybrid cloud era, the majority of customers want their infrastructure to run in the public cloud so that they can invest more in workloads rather than in maintaining infrastructure. When workloads run in a hybrid cloud, data becomes crucial, and each workload needs a different level of IOPS. Customers therefore want to move data from one IOPS level to another based on their workload characteristics. For example, a customer may want to run an SAP HANA workload with higher IOPS, while the same customer runs DevOps-related workloads on lower IOPS.

Need for the Flexible IOPS feature:

Power Virtual Server (PowerVS) offers a variety of hybrid cloud features. Among them, live data migration from one IOPS tier to another was released in 4Q 2024. With this feature, customers can move their workloads from one IOPS tier to another. For example, if a customer has deployed a DevOps workload on high-IOPS volumes and that workload does not actually need high IOPS, they can move it to a lower IOPS tier while it keeps running, which is live data migration. This also helps keep billing lower for workloads that run on a lower IOPS tier.

This blog explains how customers can use the Flexible IOPS feature to seamlessly move data from one IOPS tier to another without any impact to the running workload.

Flexible IOPS feature with different tiers:

PowerVS offers different storage tiers with different IOPS ratings, and each tier is billed differently.

  • tier0: 25 IOPS/GB
  • tier1: 10 IOPS/GB
  • tier3: 3 IOPS/GB
  • tier5k (AKA Flexible IOPS): a fixed 5000 IOPS per volume

With the Flexible IOPS feature, one can move a volume from one tier to another, and thereby the data present on that volume is moved as well.
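
To see what these ratings mean in practice, here is a minimal sketch (the 100 GB volume size is hypothetical, and the per-GB interpretation of tier0/tier1/tier3 is an assumption based on the tier ratings above) that computes the effective IOPS a data volume would get in each tier:

# Hypothetical illustration: effective IOPS for a 100 GB data volume,
# assuming tier0/tier1/tier3 are rated per GB and tier5k is a fixed 5000 IOPS.
SIZE_GB=100
for entry in tier0:25 tier1:10 tier3:3; do
    tier=${entry%%:*}
    rate=${entry##*:}
    echo "$tier: $((SIZE_GB * rate)) IOPS"
done
echo "tier5k: 5000 IOPS (fixed, independent of volume size)"

For the 1 GB test volumes used later in this demo, the same math matches the IO Throttle Rate values that ibmcloud pi vol get reports: 25 iops on tier0 and a fixed 5000 iops on tier5k.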

Demo of moving a volume from one tier to another with a workload running

In this demo, we run an IO workload on a VM and show how a volume attached to the VM can be moved from one tier to another.

  • Deployed an AIX VM from the IBM Cloud UI

As we can see from the above snapshot, we are deploying the VM with its boot volume in tier3 and its data volumes in tier5k. The VM now has volumes spread across two different tiers.

  • Attaching volumes from all 4 different tiers

In the above snapshot, we can see that we are attaching a tier0 volume to the VM that we deployed in step 1.
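
The same information can also be confirmed from the IBM Cloud CLI. Below is a minimal sketch, assuming the PowerVS CLI plugin is installed and the correct workspace is targeted; the four volume IDs are placeholders for the volumes attached above:

# Hypothetical check: replace the placeholder IDs with the IDs of the attached
# data volumes, then print each volume's name, tier (Profile), and throttle rate.
for vol in "TIER0_VOL_ID" "TIER1_VOL_ID" "TIER3_VOL_ID" "TIER5K_VOL_ID"; do
    ibmcloud pi vol get "$vol" | grep -E 'Name|Profile|IO Throttle Rate'
done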

  • Final view of the VM, which has data volumes attached from all 4 different tiers.

  • We have started an IO workload inside the VM (a sketch of one way to generate comparable IO follows the output below):

[root@dnd-srik] /

# hostname;what /unix|grep build;oslevel -s

dnd-srik

         _kdb_buildinfo unix_64 Apr 21 2023 13:58:32 2316B_73B

7300-01-02-2320

[root@dnd-srik] /

#

[root@dnd-srik] /

# lspv

hdisk0          none                                None                       

hdisk1          00fa00d66c59c9d7                    rootvg          active     

hdisk2          none                                None                       

hdisk3          none                                None                       

hdisk4          none                                None                       

hdisk5          none                                None                       

hdisk6          none                                None                       

hdisk7          none                                None                       

hdisk8          none                                None                       

hdisk9          none                                None                       

hdisk10         none                                None                       

hdisk11         none                                None                       

hdisk12         none                                None                       

hdisk13         none                                None                       

hdisk14         none                                None                       

hdisk15         none                                None                       

hdisk16         none                                None                       

hdisk17         none                                None                       

hdisk18         none                                None                       

hdisk19         none                                None                       

hdisk20         none                                None                       

hdisk21         none                                None                       

[root@dnd-srik] /

#

[root@dnd-srik] /tmp/paws_2.4_vios_ci

# iostat -d 2

System configuration: lcpu=8 drives=23 paths=176 vdisks=1

Disks:         % tm_act     Kbps      tps    Kb_read   Kb_wrtn

hdisk19         100.0     400.0     100.0        512       288

hdisk15         100.0     400.0     100.0        472       328

hdisk21         100.0     398.0      99.5        480       316

hdisk16         100.0     400.0     100.0        488       312

hdisk17         100.0     398.0      99.5        460       336

hdisk14         100.0     398.0      99.5        512       284

hdisk20         100.0     400.0     100.0        456       344

hdisk18         100.0     398.0      99.5        480       316

hdisk13         100.0     398.0      99.5        492       304

hdisk12         100.0     398.0      99.5        472       324

hdisk11         100.0     400.0      99.5        528       272

cd0               0.0       0.0       0.0          0         0

hdisk7            2.0     340.0      85.0        396       284

hdisk8            1.5     428.0     107.0        536       320

hdisk10           1.0     352.0      88.0        396       308

hdisk9            1.5     394.0      98.5        408       380

hdisk6            2.0     370.0      92.5        456       284

hdisk1            0.0       0.0       0.0          0         0

hdisk2            0.5     386.0      96.5        456       316

hdisk4            1.5     402.0     100.5        468       336

hdisk0            0.0       0.0       0.0          0         0

hdisk3            3.0     354.0      88.5        396       312

hdisk5            2.0     384.0      96.0        464       304

Disks:         % tm_act     Kbps      tps    Kb_read   Kb_wrtn

hdisk19         100.0     398.0      99.5        496       300

hdisk15         100.0     396.0      99.0        460       332

hdisk21         100.0     398.0      99.5        476       320

hdisk16         100.0     398.0      99.5        496       300

hdisk17         100.0     398.0      99.5        492       304

hdisk14         100.0     398.0      99.5        432       364

hdisk20         100.0     398.0      99.5        472       324

hdisk18         100.0     400.0     100.0        472       328

hdisk13         100.0     398.0      99.5        492       304

hdisk12         100.0     400.0     100.0        488       312

hdisk11         100.0     400.0     100.0        492       308

cd0               0.0       0.0       0.0          0         0

hdisk7            3.5     376.0      94.0        464       288

hdisk8            1.5     366.0      91.5        452       280

hdisk10           3.5     376.0      94.0        460       292

hdisk9            1.5     372.0      93.0        432       312

hdisk6            3.0     402.0     100.5        452       352

hdisk1            0.0       0.0       0.0          0         0

hdisk2            1.5     364.0      91.0        432       296

hdisk4            0.5     372.0      93.0        464       280

hdisk0            0.0       0.0       0.0          0         0

hdisk3            3.5     418.0     104.5        456       380

hdisk5            2.0     370.0      92.5        480       260

Topas Monitor for host:dnd-srik                 EVENTS/QUEUES    FILE/TTY       

Wed Jun 19 07:38:18 2024   Interval:2           Cswitch    8199  Readch  7330.3K

                                                Syscall   53784  Writech 5641.0K

CPU     User% Kern% Wait% Idle%   Physc  Entc%  Reads      1835  Rawin         0

Total     4.6   9.9  18.8  66.7    0.17  66.35  Writes      803  Ttyout      743

                                                Forks         0  Igets         0

Network    BPS  I-Pkts  O-Pkts    B-In   B-Out  Execs         0  Namei         2

Total    1008.    1.50    2.00   105.5   902.5  Runqueue      0  Dirblk        0

                                                Waitqueue   0.0                 

Disk    Busy%      BPS     TPS  B-Read  B-Writ                   MEMORY         

hdisk10   1.5     460K   115.0    262K    198K  PAGING           Real,MB    3072

hdisk6    2.5     448K   112.0    282K    166K  Faults       10  % Comp     61  

hdisk13   4.5     430K   107.5    258K    172K  Steals        0  % Noncomp   7  

hdisk4    3.5     410K   102.5    236K    174K  PgspIn        0  % Client    7  

hdisk20  100.0    402K   100.0    224K    178K  PgspOut       0                 

hdisk14  100.0    400K   100.0    256K    144K  PageIn        0  PAGING SPACE   

hdisk18  100.0    400K   100.0    242K    158K  PageOut       0  Size,MB     512

hdisk8    2.5     400K   100.0    234K    166K  Sios          0  % Used      7  

hdisk12  100.0    400K   100.0    252K    148K                   % Free     93  

hdisk19  100.0    400K   99.50    252K    148K  NFS (calls/sec) 

hdisk11  100.0    400K   100.0    218K    182K  SerV2         0  WPAR Activ    0

hdisk21  100.0    398K   99.50    224K    174K  CliV2         0  WPAR Total    0

hdisk15  100.0    398K   99.50    248K    150K  SerV3         0  Press: "h"-help

hdisk17  100.0    398K   99.50    242K    156K  CliV3         0         "q"-quit

hdisk16  100.0    398K   99.50    244K    154K  SerV4         0 

hdisk5    2.0     394K   98.50    250K    144K  CliV4         0 

hdisk7    2.5     392K   98.00    248K    144K

hdisk2    3.0     390K   97.50    246K    144K

hdisk9    2.0     388K   97.00    242K    146K

hdisk3    1.5     382K   95.50    230K    152K

hdisk1    0.0        0       0       0       0

hdisk0    0.0        0       0       0       0

cd0       0.0        0       0       0       0

FileSystem          BPS    TPS  B-Read  B-Writ

Total             743.5   0.50   743.5       0

Name           PID  CPU%  PgSp Owner

filemon    11993834  4.9 5.24M root           

vfc_kpro    2163014  2.9 4.19M root           

topas      43581762  0.7 13.9M root           

trclogio   11862702  0.1  512K root           

xmgc         852254  0.0  448K root           

getty      11665770  0.0  668K root           

randtime   66912520  0.0 1.34M root           

randtime   59113754  0.0 1.76M root           

randtime   56033724  0.0 1.94M root           

gil         1573172  0.0  960K root           

randtime   19923310  0.0 3.55M root           

randtime   66781444  0.0 1.34M root           

randtime   27918690  0.0 2.72M root           

randtime   26476854  0.0 3.78M root           

randtime   55706034  0.0 1.96M root           

randtime    5571256  0.0 1016K root           

randtime   47448502  0.0 4.59M root           

randtime   18743632  0.0 2.82M root           

randtime   57999864  0.0 1.83M root           
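
The IO driver used above (the randtime processes visible in topas) is not included in this blog. As a rough, hypothetical substitute, a simple dd loop against the empty test disks can generate steady read/write traffic; it is not identical to the random IO tool used here, and it overwrites the data volumes, so use it only on scratch disks like the ones created for this demo:

# Hypothetical IO generator (NOT the tool used in this demo). WARNING: this
# writes to the raw test disks, so run it only against empty data volumes.
for d in hdisk2 hdisk3 hdisk4 hdisk5; do        # placeholder disk names
    ( while true; do
          dd if=/dev/zero of=/dev/r$d bs=4k count=1000 2>/dev/null
          dd if=/dev/r$d of=/dev/null bs=4k count=1000 2>/dev/null
      done ) &
done
iostat -d 2        # watch per-disk throughput while the loops run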

  • Now, with the workload running on the VM, we move volumes from one tier to another. For example, we move the tier1 volume to tier5k (Flexible IOPS):

  • Another example, where we move a volume from tier5k to tier3:

  • Now, let's check the workload on the VM:

Topas Monitor for host:dnd-srik                 EVENTS/QUEUES    FILE/TTY       

Wed Jun 19 07:41:12 2024   Interval:2           Cswitch    8262  Readch  7228.1K

                                                Syscall   53647  Writech 5518.7K

CPU     User% Kern% Wait% Idle%   Physc  Entc%  Reads      1809  Rawin         0

Total     4.5   9.7   6.5  79.2    0.16  65.06  Writes      772  Ttyout      633

                                                Forks         0  Igets         0

Network    BPS  I-Pkts  O-Pkts    B-In   B-Out  Execs         0  Namei         3

Total    898.0    1.50    2.00   97.00   801.0  Runqueue   2.50  Dirblk        0

                                                Waitqueue   0.0                 

Disk    Busy%      BPS     TPS  B-Read  B-Writ                   MEMORY         

hdisk8    3.0     446K   111.5    286K    160K  PAGING           Real,MB    3072

hdisk9    3.0     416K   104.0    250K    166K  Faults        0  % Comp     61  

hdisk16  100.0    400K   100.0    222K    178K  Steals        0  % Noncomp   7  

hdisk2    3.0     400K   100.0    248K    152K  PgspIn        0  % Client    7  

hdisk3    3.0     400K   100.0    226K    174K  PgspOut       0                 

hdisk14  99.5     400K   100.0    252K    148K  PageIn        0  PAGING SPACE   

hdisk13  100.0    400K   100.0    244K    156K  PageOut       0  Size,MB     512

hdisk20  100.0    400K   100.0    244K    156K  Sios          0  % Used      7  

hdisk12  100.0    398K   99.50    254K    144K                   % Free     93  

hdisk19  100.0    398K   99.50    252K    146K  NFS (calls/sec) 

hdisk11  100.0    398K   99.50    230K    168K  SerV2         0  WPAR Activ    0

hdisk15  100.0    398K   99.50    236K    162K  CliV2         0  WPAR Total    0

hdisk18  100.0    396K   99.00    254K    142K  SerV3         0  Press: "h"-help

hdisk10   2.5     394K   98.50    224K    170K  CliV3         0         "q"-quit

hdisk6    0.5     380K   95.00    248K    132K  SerV4         0 

hdisk5    3.5     378K   94.50    234K    144K  CliV4         0 

hdisk4    3.0     374K   93.50    226K    148K

hdisk21   2.0     374K   93.50    228K    146K

hdisk7    2.0     358K   89.50    218K    140K

hdisk17   2.0     354K   88.50    210K    144K

hdisk1    0.0        0       0       0       0

hdisk0    0.0        0       0       0       0

cd0       0.0        0       0       0       0

FileSystem          BPS    TPS  B-Read  B-Writ

Total             633.5   0.50   633.5       0

Name           PID  CPU%  PgSp Owner

filemon    11993834  5.4 5.27M root           

vfc_kpro    2163014  2.8 4.19M root           

topas      43581764  0.7 13.9M root           

trclogio   11862702  0.0  512K root           

getty      11665770  0.0  668K root           

randtime   27001158  0.0 3.79M root           

randtime   62587268  0.0 1.57M root           

randtime    2032204  0.0 1.21M root           

randtime   28442994  0.0 3.86M root           

randtime   12059132  0.0 3.24M root           

randtime   21496222  0.0 2.58M root           

randtime   59900210  0.0 1.72M root           

randtime   47382964  0.0 4.59M root           

randtime   64160180  0.0 1.49M root           

randtime   36766064  0.0 4.18M root           

randtime   63570338  0.0 1.52M root           

randtime   24642046  0.0 3.19M root           

randtime   55116192  0.0 1.99M root           

randtime   17826096  0.0 3.48M root           

randtime   12321234  0.0 3.25M root           

From the above output, we can see that the workload keeps running continuously even though we moved volumes from one tier to another.

Volume movement from one tier to another via the IBM Cloud CLI:

Using the ibmcloud pi vol action command, we can move a volume from one tier to another.

For example:

srikanthjoshi@Srikanths-MacBook-Pro ~ % ibmcloud pi vol get edab821e-f4b8-4ee5-903c-78432b7a8520

Getting Volume edab821e-f4b8-4ee5-903c-78432b7a8520 under account PPCaaS Test Account as user srikanth.joshi@in.ibm.com...

                  

ID                 edab821e-f4b8-4ee5-903c-78432b7a8520

Name               DND-srik-vol-tier1

Profile            tier5k

Status             in-use

Size               1

Created            2024-06-19T09:28:49.000Z

Updated            2024-06-19T12:40:06.000Z

Shareable          false

Bootable           false

Storage Pool       general-flash-2

IO Throttle Rate   5000 iops

PVMInstanceIDs     7eacf3b1-7d0f-4038-ad12-a0393617eeef

WWN                600507681284817BE80000000001EDDA

srikanthjoshi@Srikanths-MacBook-Pro ~ % ibmcloud pi vol action edab821e-f4b8-4ee5-903c-78432b7a8520 --target-tier tier0

Performing action on volume edab821e-f4b8-4ee5-903c-78432b7a8520 under account PPCaaS Test Account as user srikanth.joshi@in.ibm.com...

OK

Action on Volume ID edab821e-f4b8-4ee5-903c-78432b7a8520 successful.

srikanthjoshi@Srikanths-MacBook-Pro ~ % ibmcloud pi vol get edab821e-f4b8-4ee5-903c-78432b7a8520

Getting Volume edab821e-f4b8-4ee5-903c-78432b7a8520 under account PPCaaS Test Account as user srikanth.joshi@in.ibm.com...

                  

ID                 edab821e-f4b8-4ee5-903c-78432b7a8520

Name               DND-srik-vol-tier1

Profile            tier0

Status             in-use

Size               1

Created            2024-06-19T09:28:49.000Z

Updated            2024-06-19T12:46:52.000Z

Shareable          false

Bootable           false

Storage Pool       general-flash-2

IO Throttle Rate   25 iops

PVMInstanceIDs     7eacf3b1-7d0f-4038-ad12-a0393617eeef

WWN                600507681284817BE80000000001EDDA

srikanthjoshi@Srikanths-MacBook-Pro ~ %
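
To drive the same tier change from a script and wait for it to be reflected in the volume profile, a minimal sketch using only the commands shown above (the target tier below is a placeholder) could look like this:

# Hypothetical wrapper around the commands shown above: request the tier change,
# then poll the volume until its Profile reports the target tier.
VOL_ID="edab821e-f4b8-4ee5-903c-78432b7a8520"   # volume ID from the session above
TARGET="tier3"                                  # placeholder target tier
ibmcloud pi vol action "$VOL_ID" --target-tier "$TARGET"
while ! ibmcloud pi vol get "$VOL_ID" | grep -q "Profile .*$TARGET"; do
    sleep 30
done
echo "Volume $VOL_ID is now on $TARGET"

In the session above, the same volume reports Profile tier5k with an IO Throttle Rate of 5000 iops before the action, and Profile tier0 with 25 iops a few minutes after it.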

Advantages:

As we can see, customers gain a significant advantage from being able to move volumes, along with the data on them, from one IOPS tier to another. This helps reduce billing for workloads that do not need high IOPS, and the volume movement is independent of the workload running on the VM.

References:

  • Power Virtual Server overview: https://www.ibm.com/products/power-virtual-server

  • Power Virtual Server API documentation: https://cloud.ibm.com/apidocs/power-cloud
