Originally posted by: The_Doctor
Environment --> Power6 Blade - 12 GB - AIX 6.1.4.0
During periods of high user activity I noticed that VMM is scanning and freeing pages at a fairly high rate. Nothing unusual there, BUT I also noticed that VMM is maintaining two pools: a 4K page pool and a 64K page pool.
Further analysis shows that VMM page-replacement scanning happens almost entirely in the 4K pool, while the 64K pool is essentially idle. Here are two snippets, taken with vmstat -P 4K 1 60 and vmstat -P 64K 1 60:
pgsz            memory                           page                   time
----- -------------------------- ------------------------------------ --------
           siz      avm      fre    re    pi    po     fr     sr  cy  hr mi se
   4K  2461504  2087767     1154     0     0     0    105  20915   0  09:44:19
   4K  2461504  2087767     1096     0     0     0    807  20675   0  09:44:20
   4K  2461504  2087765     1104     0     0     0    845   5868   0  09:44:21
   4K  2461504  2087764     1167     0     0     0   1060   2914   0  09:44:22
   4K  2461504  2087764     1210     0     0     0   2018  14674   0  09:44:23
   4K  2461504  2087765     1198     0     0     0   4589  18238   0  09:44:24
   4K  2461504  2087764     1159     0     0     0  15054  15060   0  09:44:25
   4K  2461504  2087764     1092     0     0     0  15427  15637   0  09:44:26
   4K  2461504  2087764     1131     0     0     0   4625   4791   0  09:44:27
   4K  2461504  2087764     1108     0     0     0   6497   7654   0  09:44:28
   4K  2461504  2087765     1119     0     0     0   3086   3152   0  09:44:29
   4K  2461504  2087764     1195     0     0     0   6588   9120   0  09:44:30
   4K  2461504  2087771     1193     0     0     0   9389  14388   0  09:44:31
   4K  2461504  2087843     1120     0     0     0   9320  12061   0  09:44:32
   4K  2461504  2087841     1182     0     0     0  11267  11410   0  09:44:33
   4K  2461504  2087842     1129     0     0     0   1094   1135   0  09:44:34
   4K  2461504  2087841     1096     0     0     0   3400   5230   0  09:44:35
   4K  2461504  2087869     1169     0     0     0   5706   9106   0  09:44:36
   4K  2461504  2087870     1092     0     0     0   4349   5438   0  09:44:37
   4K  2461504  2087871     1198     0     0     0   4357   5535   0  09:44:38

pgsz            memory                           page                   time
----- -------------------------- ------------------------------------ --------
           siz      avm      fre    re    pi    po     fr     sr  cy  hr mi se
  64K    42764    40714     2050     0     0     0      0      0   0  09:44:19
  64K    42764    40714     2050     0     0     0      0      0   0  09:44:20
  64K    42764    40714     2050     0     0     0      0      0   0  09:44:21
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:22
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:23
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:24
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:25
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:26
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:27
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:28
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:29
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:30
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:31
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:32
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:33
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:34
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:35
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:36
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:37
  64K    42764    40712     2052     0     0     0      0      0   0  09:44:38
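(Side note for anyone reproducing this: I believe the same per-pool split can be cross-checked with svmon, which on 6.1 prints a per-page-size breakdown at the bottom of its global report. Exact columns may vary by level:)

    # global memory report; the trailing PageSize section shows
    # inuse / pin / virtual per page size (4K and 64K here)
    svmon -G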
Similar samples have been taken on other days, all showing the same thing. VMM is maintaining the 4K pool, business as usual, based on my default tunables: minfree 960, maxfree 1088, minperm% 3, maxperm% 90, and maxclient% 90. No problem on the 4K pool; VMM is doing its job.
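(For reference, those values can be dumped in one shot with vmo; note that what I call maxclient above is spelled maxclient% as a vmo tunable:)

    # display the current page-replacement tunables
    vmo -o minfree -o maxfree -o minperm% -o maxperm% -o maxclient%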
But VMM also seems to be maintaining approx. 2000 free pages in the 64K pool. In my case, that's about 128 MB sitting free in a pool that is seldom used.
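(The 128 MB figure is just the free-page count times the page size: 2052 pages x 64 KB = 131328 KB, i.e. roughly 128 MB. A quick sanity check in the shell:)

    # free 64K pages * 64 KB per page / 1024 = MB
    echo $(( 2052 * 64 / 1024 ))    # prints 128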
So my questions are:
1. Is there a way to influence AIX 6.1 to dynamically move more free 64K pages to the 4K pool, where I need them?
2. Is there a whitepaper / Redpaper that describes the dynamics of the 64K pool in AIX 6.1, and can you provide a link? I have found good papers on VMM, just not one that describes the dynamics built into the 64K pool.
And finally, yes, there does seem to be a way to turn off the 64K pool entirely, e.g. vmo -o vmm_mpsize_support=0 (the default is 2), but the 64K pool seems like a good thing, so I'd rather not just turn it off.
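(For completeness, in case anyone does want to disable it: my understanding, worth verifying with vmo -h vmm_mpsize_support on your own system, is that this is a boot-time tunable, so the change would look roughly like the sketch below rather than a live vmo -o:)

    # sketch only -- confirm the tunable type first: vmo -h vmm_mpsize_support
    vmo -r -o vmm_mpsize_support=0   # -r stages the change for the next boot
    bosboot -a                       # rebuild the boot image
    shutdown -Fr                     # reboot so the new setting takes effect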