Unfortunately, to use "rolling window" purging, the indices must be attached. Still, you might try dropping and recreating an index that isn't needed for referential integrity to see if it builds back a bit more compactly.
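A minimal sketch of that drop/recreate, assuming hypothetical database, table, and index names (and that you have confirmed the index isn't backing a primary or foreign key):

    # hypothetical names -- verify the index isn't enforcing a constraint first
    dbaccess mydb - <<'EOF'
    DROP INDEX ix_orders_custid;
    CREATE INDEX ix_orders_custid ON orders (cust_id);
    EOF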
I think it would be a pretty heavy lift to allow for index compression, since it would make the search logic more complicated.
This is what I thought.
In our case, rebuilding the indexes every time we do our weekly automatic fragment rotation on, let's say, a 320 GB table is out of the question.
It would be great, though, if this functionality were available, especially where we need compression the most.
It wouldn't be a bad idea to audit the indices and make sure they're all truly needed. In 14.10 they introduced a "Last Lookup/Scan" field to oncheck -pt that shows you if certain indices are unused (or very infrequently used).
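As a rough sketch (database and table names are hypothetical), dumping the tblspace report for a table lets you review its indices, including that usage information on 14.10:

    # hypothetical database/table names
    oncheck -pt mydb:orders > orders_tblspace.txt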
Also, I was suggesting just doing the index rebuild once, after implementing compression. All of the on-the-fly rewrites to update the page locations may have made the indices more inefficient and therefore larger. Post rebuild, just make sure you've got your btscanners set fairly aggressively.
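For illustration only, a "fairly aggressive" B-tree scanner setup might look something like the following in the ONCONFIG file; the values here are made up for the sketch, so tune them for your own system and check the BTSCANNER documentation for your version:

    # illustrative values only -- more scanner threads and a heavier compaction goal than the defaults
    BTSCANNER num=4,threshold=5000,rangesize=-1,alice=6,compression=med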
Well, sort of automate. You still have to run the syspurge() job, and with heavily accessed tables we've had issues getting the required locks to do it, even though they're only held sub-second. But yes, on balance that does make things a lot easier.
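For reference, a hand-run purge looks roughly like this, assuming it is executed against the database that owns the rolling window table (mydb is a made-up name, and the exact syspurge() arguments vary by version, so check the docs):

    # run the rolling-window purge manually; signature may differ on your version
    dbaccess mydb - <<'EOF'
    EXECUTE FUNCTION syspurge();
    EOF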
This _may_ be improved in 14.10; I haven't tested removing our temporarily-lock-out-users logic that we implemented with 12.10.