Plan and operate your storage system with 85% or less of the physical capacity used. Flash
drives depend on free pages being available to process new write operations and to perform
garbage collection quickly. Without some level of free space, the internal operations that
maintain drive health, combined with host requests, can overwork the drive, causing the
software to proactively fail it, or a hard failure can occur in the form of the drive
becoming write-protected (zero free space left).
1) This limitation did not exist for the FlashSystem 900 series, where there was hidden over-provisioned capacity that was not exposed as usable capacity.
2) IBM's proactive support team only starts sending e-mails when the used capacity exceeds 90%.
3) When using DRAID, there is at least one rebuild area (spare space), so that capacity is reserved inside the RAID until a rebuild occurs;
e.g. one rebuild area across 24 drives is 1/24 ≈ 4% of capacity (see the sketch below).
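As a rough illustration of that rebuild-area overhead (a minimal sketch; the figures are taken from the 24-drive example above, not from CLI output):
# Fraction of physical capacity held back as DRAID rebuild areas
# (distributed spare space) in the example above.
drive_count = 24
rebuild_areas_total = 1
spare_fraction = rebuild_areas_total / drive_count
print(f"~{spare_fraction:.1%} of physical capacity reserved as rebuild area")
# prints: ~4.2% of physical capacity reserved as rebuild area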
I'm curious whether the write slowdown is significant when the storage is full. I only found this test for the FS900:
https://www.spcresults.org/sites/default/files/results/AE00008/ae00008_IBM_FlashSystem-900_SPC-1E_full-disclosure-report-r1.pdf
Original Message:
Sent: Thu June 26, 2025 10:44 AM
From: Istvan Buda
Subject: DRAID size
I have to revise / add to my previous post and here is a general flash (media) provisioning recommendation by IBM:
When flash arrays are provisioned to 100% of their capacity and eventually fill up, the over-provisioned space will at some point be insufficient to keep up with write operations and garbage collection. The resulting write amplification with insufficient free pages can lead to a variety of drive problems. In multi-tier pools, Easy Tier will promote hot extents into the flash tiers. As a result, many I/O requests will hit the SSD tier and force more operations onto more regions of the flash drives, and eventually the flash drives will run out of free pages to maintain themselves (depending on the write workload and the over-provisioning in the drive). When flash drives run out of space and lack sufficient overhead to perform their garbage collection and read-modify-write processes, the result is high drive response times, which may PFA the drive (trigger a Predictive Failure Analysis).
Develop a plan to migrate data off the flash array such that the used space is less than 85% (80% if it is virtualized behind SVC or another virtualization engine).
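For what it's worth, a minimal sketch of what such a threshold check could look like (the helper and figures are hypothetical, not an IBM tool; in practice the numbers would come from lsarray/lsmdiskgrp output):
def capacity_check(used_tb, physical_tb, virtualized=False):
    # 80% guideline when virtualized behind SVC, otherwise 85%
    limit = 0.80 if virtualized else 0.85
    used_fraction = used_tb / physical_tb
    status = "exceeds" if used_fraction > limit else "is within"
    return f"Used {used_fraction:.0%} {status} the {limit:.0%} guideline"

print(capacity_check(used_tb=600, physical_tb=664.21))
# prints: Used 90% exceeds the 85% guideline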
------------------------------
Istvan Buda
Original Message:
Sent: Thu June 26, 2025 05:00 AM
From: stuart wade
Subject: DRAID size
This Redbook goes some way towards explaining the 85% advisement; see page 70:
https://www.redbooks.ibm.com/redbooks/pdfs/sg248503.pdf
I have always planned for 80% usage, to allow some growth while still leaving the flash drives space for background housekeeping processes such as garbage collection.
Thanks Stuart
------------------------------
stuart wade
Original Message:
Sent: Mon June 23, 2025 02:32 PM
From: Tomas Kovacik
Subject: DRAID size
2) The IBM Storage Modeller tool advises using 85% of the RAID physical capacity, regardless of whether a DRP pool is used or not.
------------------------------
Tomas Kovacik
Original Message:
Sent: Mon June 23, 2025 03:04 AM
From: Istvan Buda
Subject: DRAID size
Hi Tomas,
1:
The slice size is not the calculated value but the strip_size, which is fixed at 256 KiB.
The complete explanation can be seen here:
https://www.ibm.com/docs/en/flashsystem-7x00/8.7.x_cd?topic=ac-distributed-raid-array-properties-1

2:
The 85% recommendation is for DRP pools only; for legacy (standard) pools there is no such value:
"A general guideline is to ensure that the provisioned capacity with the data reduction pool does not exceed 85% of the total usable capacity of the data reduction pool. "
https://www.ibm.com/docs/en/flashsystem-7x00/8.7.x_cd?topic=c-pools
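In other words (a minimal sketch; the usable-capacity figure is just an example taken from this thread, not a DRP measurement):
# Keep DRP provisioned capacity at or below 85% of the pool's usable capacity.
usable_capacity_tb = 664.21
max_provisioned_tb = 0.85 * usable_capacity_tb
print(f"Keep provisioned capacity below about {max_provisioned_tb:.0f} TB")
# prints: Keep provisioned capacity below about 565 TB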
Regards,
------------------------------
Istvan Buda
budai88@gmail.com
Original Message:
Sent: Fri June 20, 2025 04:30 AM
From: Tomas Kovacik
Subject: DRAID size
Can somebody explain this?
I have a FlashSystem 7300 with a DRAID6 array with the following parameters:
raid_level raid6
strip_size 256
drive_count 24
rebuild_areas_total 1
stripe_width 12
physical_capacity 664.21TB
FCM drive (104.8 TB), physical size:
IBM_FlashSystem:sks73017a:tomas>lsdrive 0 | grep physical_capacity
physical_capacity 34.93TB
Calculating the array size:
As I understand it, stripe_width means that 12 slices are backed by P+Q+S slices, so I have 12 data slices on each drive.
- slice size = drive_size / (12+P+Q+S)
- slice size = 34.93/15 = 2.32866667 TB
- I have 12*24 data slices, so the usable data capacity should be 12*24*2.32866667 = 670.656 TB (reproduced in the sketch below)
- Why does the array show 664.21 TB instead of 670.656 TB?
- Is the IBM recommended capacity of 85% of the RAID usable capacity really necessary?
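Written out as a quick Python sketch (assumptions as above; it only reproduces the expected figure, it does not explain the difference):
drive_size_tb = 34.93            # lsdrive physical_capacity per FCM
divisor = 12 + 3                 # 12 data slices + P + Q + S, per the reasoning above
slice_size_tb = drive_size_tb / divisor          # ~2.3287 TB
expected_usable_tb = 12 * 24 * slice_size_tb     # ~670.66 TB
print(f"slice size ~{slice_size_tb:.4f} TB, expected usable ~{expected_usable_tb:.2f} TB")
# prints: slice size ~2.3287 TB, expected usable ~670.66 TB
# The array actually reports 664.21 TB, hence the question above.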
Thanks
Tomas
------------------------------
Tomas Kovacik
------------------------------