Hi Charin,
This entry implies you have a PMR (support case) open at severity 3.
If so, I need to point out that this community page is NOT the way to update the PMR;
it may be completely ignored by the support team because they may never see it.
Work the problem with the support team, answer all questions quickly, and supply any extra relevant information (don't hold back facts).
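If it helps, this is the sort of data the support team will almost certainly ask for on a GLVM case (a rough list based on what you have already posted, not an official checklist), so you could gather it up front:

# General system snapshot for support (default output goes under /tmp/ibmsupt)
snap -gc
# Full error log entries covering the time of the network outage
errpt -a > /tmp/errpt_full.txt
# GLVM fileset levels and the current replication state
lslpp -l "glvm.rpv.*"
rpvstat -n
gmvgstat
# Per-VG and per-LV detail for the volume groups that are not at 100% sync
lsvg orad-prd-gmvg
lsvg orab-prd-gmvg
lsvg -l orad-prd-gmvg
lsvg -l orab-prd-gmvg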
Best of luck, Nigel
------------------------------
Nigel Griffiths
------------------------------
Original Message:
Sent: Thu January 26, 2023 01:41 AM
From: CHARIN KUMJUDPAI
Subject: AIX GLVM: the "gmvgstat" command shows "GLVM Status: Failed" and some GLVM nodes show "Sync not 100%"
ENV:
AIX: 7200-05-02-2114
glvm.rpv.client: 7.2.5.1::APPLY:COMPLETE:10/22/21:11;13;09
glvm.rpv.server: 7.2.5.0::APPLY:COMPLETE:10/22/21:11;20;06
IMPACT: IBM local support would like to change this to severity 2 because the number of stale PPs is growing.
Problem description:
1. Problem background.
The network between the DC1 and DC2 sites was down from about 09:00 PM on 25 Jan to about 01:00 AM on 26 Jan.
2. After the network issue was fixed.
AIX GLVM then showed a problem: the "gmvgstat" command reports "GLVM Status: Failed" and some GLVM nodes show "Sync not 100%".
The output of the "rpvstat -n" and "gmvgstat" commands:
root@efs-db1-p1:[/root]# rpvstat -n
Remote Physical Volume Statistics:
Comp Reads Comp Writes Comp KBRead Comp KBWrite Errors
RPV Client cx Pend Reads Pend Writes Pend KBRead Pend KBWrite
------------------ -- ----------- ----------- ------------ ------------ ------
hdisk19 1 2251 8645 10759 102509 0
0 0 0 0
192.168.245.26 Y 2251 8645 10759 102509 0
0 0 0 0
hdisk18 1 2251 1916 10759 40668 0
0 0 0 0
192.168.245.26 Y 2251 1916 10759 40668 0
0 0 0 0
hdisk20 0 1755 5720618 881 481988707 3
0 0 0 0
192.168.245.26 N 1755 5720618 881 481988707 3
0 0 0 0
hdisk15 1 2251 9525 10759 71113 0
0 0 0 0
192.168.245.26 Y 2251 9525 10759 71113 0
0 0 0 0
hdisk14 1 2251 199154 10759 23126432 0
0 0 0 0
192.168.245.26 Y 2251 199154 10759 23126432 0
0 0 0 0
hdisk13 0 1954 5895909 980 48283219 3
0 0 0 0
192.168.245.26 N 1954 5895909 980 48283219 3
0 0 0 0
hdisk12 1 2601 5270552 18141 49016525 0
0 0 0 0
192.168.245.26 Y 2601 5270552 18141 49016525 0
0 0 0 0
hdisk11 1 2601 1967929 18141 39870846 0
0 1 0 4
192.168.245.26 Y 2601 1967929 18141 39870846 0
0 1 0 4
hdisk10 1 2601 1237280 18141 15717223 0
0 2 0 256
192.168.245.26 Y 2601 1237280 18141 15717223 0
0 2 0 256
root@efs-db1-p1:[/root]# gmvgstat
GMVG Name PVs RPVs Tot Vols St Vols Total PPs Stale PPs Sync
--------------- ---- ---- -------- -------- ---------- ---------- ----
orad-prd-gmvg 4 4 8 1 129496 620 99%
oraa-prd-gmvg 2 2 4 0 38636 0 100%
oraf-prd-gmvg 2 2 4 0 64748 0 100%
orab-prd-gmvg 1 1 2 1 76918 9444 87%
root@efs-db1-p1:[/root]#
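If more detail is needed, I can also collect the per-LV view for the two volume groups that are not at 100%. In the rpvstat output above, hdisk13 and hdisk20 are the RPV clients showing "N" in the cx column and non-zero error counts. A rough sketch of the extra queries (standard LVM commands, output not included here):

# Stale PP count per volume group
lsvg orad-prd-gmvg
lsvg orab-prd-gmvg
# Logical volumes in each GMVG and their state
lsvg -l orad-prd-gmvg
lsvg -l orab-prd-gmvg
# Device state of the RPV clients that reported errors
lsdev -Cc disk | grep -E "hdisk13|hdisk20"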
3. IBM local support suggested running "varyonvg", as detailed below.
root@efs-db1-p1:[/root]# lsvg -o
oraf-prd-gmvg
orad-prd-gmvg
orab-prd-gmvg
oraa-prd-gmvg
caavg_private
rootvg
root@efs-db1-p1:[/root]# varyonvg orad-prd-gmvg
root@efs-db1-p1:[/root]# 0516-1296 lresynclv: Unable to completely resynchronize volume.
The logical volume has bad-block relocation policy turned off.
This may have caused the command to fail.
0516-934 /etc/syncvg: Unable to synchronize logical volume ora1d-prd-loglv.
0516-1296 lresynclv: Unable to completely resynchronize volume.
The logical volume has bad-block relocation policy turned off.
This may have caused the command to fail.
0516-934 /etc/syncvg: Unable to synchronize logical volume ora-prd-d1dclv.
0516-1296 lresynclv: Unable to completely resynchronize volume.
The logical volume has bad-block relocation policy turned off.
This may have caused the command to fail.
0516-934 /etc/syncvg: Unable to synchronize logical volume ora-prd-d1rlv.
0516-1296 lresynclv: Unable to completely resynchronize volume.
The logical volume has bad-block relocation policy turned off.
This may have caused the command to fail.
0516-934 /etc/syncvg: Unable to synchronize logical volume ora-prd-d2dlv.
0516-932 /etc/syncvg: Unable to synchronize volume group orad-prd-gmvg.
root@efs-db1-p1:[/root]# varyonvg orab-prd-gmvg
root@efs-db1-p1:[/root]# 0516-1296 lresynclv: Unable to completely resynchronize volume.
The logical volume has bad-block relocation policy turned off.
This may have caused the command to fail.
0516-934 /etc/syncvg: Unable to synchronize logical volume orab-prd-loglv.
0516-1296 lresynclv: Unable to completely resynchronize volume.
The logical volume has bad-block relocation policy turned off.
This may have caused the command to fail.
0516-934 /etc/syncvg: Unable to synchronize logical volume ora-prd-bkuplv.
0516-932 /etc/syncvg: Unable to synchronize volume group orab-prd-gmvg.
root@efs-db1-p1:[/root]#
Please advise the steps to fix this issue and to resume GLVM synchronization so that it completes to 100% between both sites.
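For reference, the sequence I would expect to retry once the RPV clients are healthy again is the standard LVM activate-and-resync steps below (a sketch only; please confirm whether anything else is required first, given the bad-block relocation messages above):

# Re-activate the GMVG so LVM resumes the failed mirror copies
varyonvg orab-prd-gmvg
# Explicitly drive resynchronization of the stale partitions
syncvg -v orab-prd-gmvg
# Watch the stale PP count drop back to 0 and Sync return to 100%
gmvgstat orab-prd-gmvg
# Repeat for orad-prd-gmvg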
------------------------------
CHARIN KUMJUDPAI
------------------------------