Original Message:
Sent: Wed August 27, 2025 11:08 AM
From: Shaun Anderson
Subject: Maximum extents error
Thank you Istvan. I've worked with FlashSystem for years and never caught that the 5045 has a lower extent limit (2^20) than the other models. We have a plan to move forward now. Appreciate the help.
------------------------------
Shaun Anderson
Original Message:
Sent: Tue August 26, 2025 05:42 AM
From: Istvan Buda
Subject: Maximum extents error
According to https://www.ibm.com/support/pages/v862x-configuration-limits-and-restrictions-ibm-flashsystem-50x5-and-5200
The Total storage capacity manageable per system for FS5045 is 8PB not 32PB. So the maximum extent number is less than (quarter of) 2^22.
This is for "per system" and for FS5045 that is max two I/O groups.
Thus you might really hit the max allowed number of extents.
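To make the arithmetic explicit, here is a minimal sketch. The 8 PB and 32 PB figures come from the linked restrictions page; treating the extent limit as scaling by the same factor is an assumption, not an official number:

```python
# Assumption: the FS5045 extent limit scales with its capacity limit,
# i.e. it is a quarter of the 2^22 extents of the 32 PB models.
FULL_LIMIT = 2**22            # max extents on models that manage 32 PB
FS5045_CAPACITY_PB = 8        # per-system manageable capacity for FS5045
OTHER_CAPACITY_PB = 32

fs5045_limit = FULL_LIMIT * FS5045_CAPACITY_PB // OTHER_CAPACITY_PB
print(fs5045_limit)           # 1048576, i.e. 2^20
```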
As far as I know it does not matter whether an extent is in use or not, so check the number of unused mdisk extents with the lsfreeextents command.
You might need to completely reorganize your mdisks and pools to make room for the new, larger extents.
For further, more detailed suggestions, please share more details about the current configuration.
Regards,
------------------------------
Istvan Buda
Original Message:
Sent: Mon August 25, 2025 08:27 PM
From: Shaun Anderson
Subject: Maximum extents error
IHAC with a FS5045 in a HyperSwap setup. They currently have a single pool on each array, and each pool has a single mdisk. The extent size was configured at 1G.
We wanted to create a new pool per array with an 8G extent size, since it will be used to store video and will house much more data than the current pools do. We added new drives to their 92F expansion and created the pools, but when attempting to add storage we get the error:
CMMVC9000E The action was not completed because the cluster has reached the maximum number of extents in storage pools.
Reviewing their Extent allocation:
:superuser>lsmdiskextent 0          :superuser>lsmdiskextent 16
id  number_of_extents               id  number_of_extents
0   62923                           11  1313
1   62760                           12  2
2   62729                           14  62799
3   62804                           15  2
4   62797                           17  62798
5   62798                           18  2
6   1313                            20  62806
13  2                               21  2
16  2                               23  62730
19  2                               24  2
22  2                               26  62762
25  2                               27  2
28  2                               29  62925
31  2                               30  2
Based on my quick math we are just shy of 768K extents utilized between both arrays. Extents on a 5045 are still capped at 2^22, right? Wouldn't that leave us 3+ million extents to utilize at the system level?
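Totalling the lsmdiskextent output above (values transcribed by hand) and comparing against both candidate limits. Which limit actually applies to the FS5045 is the open question here, so both are shown:

```python
# Extent counts per mdisk, transcribed from the two lsmdiskextent listings.
mdisk0  = [62923, 62760, 62729, 62804, 62797, 62798, 1313] + [2] * 7
mdisk16 = [1313, 62799, 62798, 62806, 62730, 62762, 62925] + [2] * 7

used = sum(mdisk0) + sum(mdisk16)
print(used)            # 756285 extents in use, just shy of 768K (786432)
print(2**22 - used)    # headroom if the limit is 2^22 (3+ million)
print(2**20 - used)    # headroom if the limit is only 2^20
```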
:superuser>lsmdiskgrp 2 | grep -e 8G -e extent -e name
name CCC_NL-8Gext-Pool
extent_size 8192
:superuser>lsmdiskgrp 3 | grep -e 8G -e extent -e name
name JCCH_NL-8Gext-Pool
extent_size 8192
Ultimately I want to create these new pools, move the existing data into them, and remove the old pools so we can ingest those drives into the 8G extent pools.
It would be possible (though I'm not excited about this option) to delete the HyperSwap copies of the volumes to free those extents up, but based on my math we should have sufficient extents to create these pools at 8G and then perform the needed data moves to get back to full redundancy.
Any help would be appreciated. Thanks all.
------------------------------
Shaun Anderson
------------------------------