Informix


Compression with fragmented tables

  • 1.  Compression with fragmented tables

    Posted 10 days ago
Hi everyone, we're using Informix 14.10.FC5. I need to fragment a table into, let's say, one 30 GB dbspace per week (7 million records/week) for easier data rotation (a detach-full-fragment, attach-empty-fragment strategy). The B-tree indexes are, of course, attached.

    To save some space, I was asked whether compression could be used at either the table or the dbspace level.

    I understand that if I create the table with the 'COMPRESSED' option, auto-compress should take care of compressing data in the newly attached fragments after they contain enough rows (2,000) to build a compression dictionary.  I will run some tests in a few days to see...

    My other concern is about the attached B-tree indexes: I read that they will not be compressed. Am I right? Is there a better approach I could take?
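
    For concreteness, here is the kind of DDL I have in mind. This is only a sketch: the table, column, and dbspace names are made up, and I still need to verify the exact placement of the COMPRESSED keyword and the rolling-window options against the 14.10 docs.

    ```sql
    -- Sketch only: names, dates, and dbspaces are placeholders.
    -- COMPRESSED asks the server to auto-build a compression dictionary
    -- once a new fragment holds enough rows (about 2,000).
    CREATE TABLE weekly_data
    (
        ts      DATETIME YEAR TO SECOND,
        payload VARCHAR(255)
    )
    COMPRESSED
    FRAGMENT BY RANGE (ts)
        INTERVAL (NUMTODSINTERVAL(7, 'DAY'))
        ROLLING (4 FRAGMENTS) DETACH
        STORE IN (dbs1, dbs2, dbs3, dbs4)
        PARTITION p0 VALUES < DATETIME(2024-01-01 00:00:00) YEAR TO SECOND IN dbs1;
    ```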

    Thanks,

    Benoît Chamberland

    ------------------------------
    Benoit Chamberland
    ------------------------------


  • 2.  RE: Compression with fragmented tables

    Posted 10 days ago
FYI, look into using RANGE/INTERVAL fragmentation to automate adding and detaching (or dropping) partitions!

    Art

    ------------------------------
    Art S. Kagel, President and Principal Consultant
    ASK Database Management Corp.
    www.askdbmgt.com
    ------------------------------



  • 3.  RE: Compression with fragmented tables

    Posted 10 days ago
    Thanks Art and Jean-François!

I observe a 30-40% reduction in the number of data pages after activating compression.

    Often the indexes take as much space as the data, but because the table is fragmented, these attached B-tree indexes are not compressible.

    Are there any strategies for compressing attached indexes? If not, what if I fragmented these indexes with a pattern matching the table's fragmentation, but in their own set of dbspaces? I suspect this could interfere with the data rotation/cleanup mechanism.

    Benoît

    ------------------------------
    Benoit Chamberland
    Bell Canada
    ------------------------------



  • 4.  RE: Compression with fragmented tables

    Posted 9 days ago

    Unfortunately, to use "rolling window" purging, the indices must be attached. Still, you might try dropping and recreating an index that isn't needed for referential integrity to see if it builds back a bit more compactly.

     

    I think it would be a pretty heavy lift to allow for index compression, since it would make the search logic more complicated.






  • 5.  RE: Compression with fragmented tables

    Posted 9 days ago

    Thanks Tom,

     

    This is what I thought.

     

In our case, rebuilding the indexes every time we run our weekly automatic fragment rotation on, let's say, a 320 GB table is out of the question.

     

It would be great, though, if this functionality were available, especially where we need compression the most.

     

    Benoît

     


     

    Benoît Chamberland

    Administrateur de bases de données | Database Administrator

    T : 514-870-2440  M : 514-207-7015

     






  • 6.  RE: Compression with fragmented tables

    Posted 9 days ago
    Put it in an Enhancement Request here:

    IBM Data and AI Ideas Portal for Customers


    ------------------------------
    Art S. Kagel, President and Principal Consultant
    ASK Database Management Corp.
    www.askdbmgt.com
    ------------------------------



  • 7.  RE: Compression with fragmented tables

    Posted 9 days ago

It wouldn't be a bad idea to audit the indices and make sure they're all truly needed. In 14.10 they introduced a "Last Lookup/Scan" field in the oncheck -pt output that shows whether certain indices are unused (or very infrequently used).

     

     






  • 8.  RE: Compression with fragmented tables

    Posted 9 days ago

Also, I was suggesting doing the index rebuild just once, after implementing compression. All of the on-the-fly rewrites to update the page locations may have made the indices more inefficient and therefore larger. Post-rebuild, just make sure you've got your btscanners set fairly aggressively.

     






  • 9.  RE: Compression with fragmented tables

    Posted 9 days ago
Also, for any index rebuild, set the FILLFACTOR during the build to something that reflects how the index will be used.

BTW, a parallel index build after a data load will usually take less time than the time saved by loading the table without that index in place. It usually saves time to drop the index before the load, load the data, then build the index with PDQPRIORITY or PSORT_NPROCS set and a high fill factor (like 99 or 100) so that the index is most efficient to query. An index built by a data load will have, on average, nearly all of its nodes half filled, making index searches use twice the number of I/Os. So, besides the load-time savings, there will be query performance improvements from building the index after the data load.

If you are using a unique index to filter duplicate data out of the inputs, do that filtering BEFORE the load instead. One simple way is to sort the input data by the unique key, map the input file to an external table, and insert into the staging table with SELECT DISTINCT from the external table. Or just let 'sort -u' remove the duplicates from the file.
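
    A rough sketch of that sequence (all table/index names and the file path here are hypothetical):

    ```sql
    -- Hypothetical names and paths throughout; sketch of the steps above.

    -- 1. Remove duplicates outside the engine (e.g. 'sort -u' on the key),
    --    then expose the clean file as an external table.
    CREATE EXTERNAL TABLE stage_ext
    ( ts DATETIME YEAR TO SECOND, val BIGINT )
    USING ( DATAFILES ("DISK:/tmp/input.unl"), FORMAT "DELIMITED" );

    -- 2. Drop the index, load the data, then rebuild the index in
    --    parallel with a high fill factor.
    DROP INDEX stage_idx;
    INSERT INTO stage SELECT DISTINCT * FROM stage_ext;

    SET PDQPRIORITY 100;
    CREATE INDEX stage_idx ON stage (ts, val) FILLFACTOR 100;
    SET PDQPRIORITY 0;
    ```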

    ------------------------------
    Art S. Kagel, President and Principal Consultant
    ASK Database Management Corp.
    www.askdbmgt.com
    ------------------------------



  • 10.  RE: Compression with fragmented tables

    Posted 8 days ago
    Have you tried creating your indices as compressed ones?

I think we should clarify "attached index", which, in my view, is an index residing in the same partition (tblspace) as the data; with fragmented tables, regardless of fragmentation scheme, this shouldn't even be possible.

    ------------------------------
    Andreas Legner
    ------------------------------



  • 11.  RE: Compression with fragmented tables

    Posted 8 days ago
    Andreas:

In order to detach/drop a partition efficiently without rebuilding an index, the indexes on such a table must be "semi-attached" (as they were called in v7.30), which means they are created without an IN or FRAGMENT clause so that they follow the partitioning of the table itself, with each index partition containing keys from only a single table partition. These are not the OLD fully attached indexes from the pre-7.30 days, which can only exist as legacy objects created under 7.24 or earlier.

    The RANGE/INTERVAL partitioning will not permit any detached index to be created at all, so all indexes for that partitioning scheme are semi-attached.

    ------------------------------
    Art S. Kagel, President and Principal Consultant
    ASK Database Management Corp.
    www.askdbmgt.com
    ------------------------------



  • 12.  RE: Compression with fragmented tables

    Posted 6 days ago
    Hi Art,

Right, that would have been my assumption too, which in turn made me wonder where exactly the problem is, or why such an implicitly fragmented index could not use compression (if that is what the initial question was).
    I haven't played with this, I must confess... I thought asking here first would be easier ;-)

    BR,
     Andreas

    ------------------------------
    Andreas Legner
    ------------------------------



  • 13.  RE: Compression with fragmented tables

    Posted 6 days ago
    Have you tried adding compression after index creation with the SQL API functions? I have not.
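
    For anyone who wants to try, the call I had in mind is along these lines. I haven't verified the exact argument list, and the index, database, and owner names are placeholders:

    ```sql
    -- Placeholders throughout; run against the sysadmin database.
    DATABASE sysadmin;
    EXECUTE FUNCTION task('index compress repack shrink',
                          'my_index', 'my_database', 'informix');
    ```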

    Art

    Art S. Kagel, President and Principal Consultant
    ASK Database Management










  • 14.  RE: Compression with fragmented tables

    Posted 10 days ago

Well, it sort of automates it. You still have to run the syspurge() job, and with heavily accessed tables we've had issues getting the required locks to do it, even though they're held for only sub-second intervals. But yes, on balance that does make things a lot easier.

     

    This _may_ be improved in 14.10; I haven't tested removing our temporarily-lock-out-users logic that we implemented with 12.10.






  • 15.  RE: Compression with fragmented tables

    Posted 10 days ago
    Hi,

here is an example of a rolling-window table for 7 days, with automatic discard of the oldest fragment:

    create table kpi_chkio
    (
        timestamp    datetime year to second,
        chunknum     smallint,
        reads        bigint,
        pagesread    bigint,
        writes       bigint,
        pageswritten bigint
    )
    fragment by range (timestamp)
        interval (numtodsinterval(1, 'day'))
        rolling (7 fragments) discard
        store in (mydbspace)
        partition p0 values < date('01/01/2017') in mydbspace;

    create index kpi_chkio_idx on kpi_chkio(timestamp, chunknum);

    ------------------------------
    jean-francois BOUDRY
    ------------------------------