multiple table Data deletion

  • 1.  multiple table Data deletion

    Posted Sun June 13, 2021 11:28 PM
    Hello All,

     I have to delete data from multiple tables (around 300). Kindly suggest what precautions I should take so that the process does not get aborted due to LOCK, LOG, or LONG TRANSACTION issues. (Working on IDS 11.70.)

    These are separate queries; all 300 tables have joins with their two parent tables.
    So there are 300 separate queries.

    Because I'm NOT getting any downtime, I need to do this in the production environment.

    Amit Patel


  • 2.  RE: multiple table Data deletion

    Posted Mon June 14, 2021 06:03 AM

    Consider using my dbdelete utility. It is designed for exactly your situation. It deletes data in blocks of 8192 rows per transaction minimizing locks and eliminating long transactions. Dbdelete is included in my utils2_ak package which you can download from my web site at:
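The block-per-transaction technique described above can be sketched in a few lines. This is not dbdelete itself, just an illustration of the idea, using Python with sqlite3 as a stand-in database (the table, column, and the batched-subquery syntax are assumptions; Informix SQL would differ):

```python
import sqlite3

BLOCK = 8192  # rows per transaction, mirroring the block size mentioned above

def delete_in_blocks(conn, table, where="1=1", block=BLOCK):
    """Delete matching rows in small committed batches so no single
    transaction holds locks (or fills the log) for the whole run."""
    total = 0
    while True:
        cur = conn.execute(
            f"DELETE FROM {table} WHERE rowid IN "
            f"(SELECT rowid FROM {table} WHERE {where} LIMIT ?)",
            (block,),
        )
        conn.commit()  # short transaction: locks released after each batch
        if cur.rowcount == 0:
            break
        total += cur.rowcount
    return total

# Demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, flag INTEGER)")
conn.executemany("INSERT INTO t (flag) VALUES (?)",
                 [(i % 2,) for i in range(20000)])
conn.commit()
deleted = delete_in_blocks(conn, "t", where="flag = 1", block=1000)
print(deleted)  # 10000
```

Because each batch commits independently, an interruption partway through loses at most one block of work, and other sessions are never blocked for long.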


    Art S. Kagel, President and Principal Consultant
    ASK Database Management

    Disclaimer: Please keep in mind that my own opinions are my own opinions and do not reflect on the IIUG, nor any other organization with which I am associated either explicitly, implicitly, or by inference.  Neither do those opinions reflect those of other individuals affiliated with any entity with which I am affiliated nor those of the entities themselves.

  • 3.  RE: multiple table Data deletion

    Posted Mon June 14, 2021 08:27 AM
    Hello Amit,
    You should divide the delete statement into smaller pieces using conditional clauses as much as possible, or use a program such as Art's dbdelete.

    Below is an article introducing an Informix stored procedure that deletes data in bulk. Hope this helps.
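One way to "divide the delete into conditional clauses," as suggested above, is to slice the work by key range and commit after each slice. A minimal sketch, again using Python with sqlite3 as a stand-in (the table, key column, and step size are made up for illustration; an Informix stored procedure would use the same range logic in SPL):

```python
import sqlite3

def delete_by_key_ranges(conn, table, key, step=5000):
    """Split one big DELETE into many small range-bounded DELETEs,
    committing after each so every transaction stays short."""
    lo, hi = conn.execute(
        f"SELECT MIN({key}), MAX({key}) FROM {table}").fetchone()
    if lo is None:          # table already empty
        return 0
    total = 0
    for start in range(lo, hi + 1, step):
        cur = conn.execute(
            f"DELETE FROM {table} WHERE {key} >= ? AND {key} < ?",
            (start, start + step),
        )
        conn.commit()       # one short transaction per key range
        total += cur.rowcount
    return total

# Demo
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)",
                 [(i,) for i in range(1, 12001)])
conn.commit()
deleted = delete_by_key_ranges(conn, "t", "id", step=5000)
print(deleted)  # 12000
```

Range slicing works best when the key is indexed and reasonably dense; for sparse keys the per-row batching approach is more predictable.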

    SangGyu Jeong
    Software Engineer
    Seoul Korea, Republic of

  • 4.  RE: multiple table Data deletion

    Posted Mon June 14, 2021 10:03 AM

    I presume you have referential integrity in place.

    For starters, I would not try to do this in SQL directly, since that is near-certain to run you into a long transaction rollback.  Here's what I would do:
    • For every foreign key constraint, run an ALTER TABLE command to add the "ON DELETE CASCADE" clause to the constraint. (Can't think of the exact syntax right now.)
    • In 4GL or Perl/DBI (or maybe SQLCMD?), pick a number you feel comfortable with, say 10,000.
    • Loop through the rows in the top master table, deleting them as you go along, keeping count down until you have deleted that number of master rows.
    • At the bottom of each countdown, COMMIT.
    • Don't forget to COMMIT when you fall out of the big loop, when you have deleted fewer than that number in the last transaction.
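    The steps above can be sketched as follows. This is a hypothetical illustration in Python with sqlite3 standing in for Informix (table names, the `old` flag column, and the cascade setup are invented; the ALTER TABLE / 4GL details would differ on IDS):

    ```python
    import sqlite3

    BATCH = 10_000  # "a number you feel comfortable with"

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # sqlite needs this per connection
    conn.execute("CREATE TABLE master (id INTEGER PRIMARY KEY, old INTEGER)")
    conn.execute(
        "CREATE TABLE child (id INTEGER PRIMARY KEY, mid INTEGER "
        "REFERENCES master(id) ON DELETE CASCADE)"
    )
    conn.executemany("INSERT INTO master VALUES (?, 1)",
                     [(i,) for i in range(25000)])
    conn.executemany("INSERT INTO child (mid) VALUES (?)",
                     [(i,) for i in range(25000)])
    conn.commit()

    countdown = BATCH
    deleted = 0
    # fetchall() up front so we are not iterating a cursor while deleting
    for (mid,) in conn.execute("SELECT id FROM master WHERE old = 1").fetchall():
        conn.execute("DELETE FROM master WHERE id = ?", (mid,))  # cascades to child
        deleted += 1
        countdown -= 1
        if countdown == 0:      # bottom of each countdown: COMMIT
            conn.commit()
            countdown = BATCH
    conn.commit()               # final COMMIT for the partial last batch
    print(deleted)  # 25000
    ```

    With ON DELETE CASCADE in place, each master-row delete takes its child rows with it, so only the master table needs to be looped over, and the periodic COMMITs keep every transaction well short of a long-transaction rollback.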
    I'm certain there are more efficient ways to do this but this came first to my head. It should provoke the better ideas into existence on the theory of annoying people into activity. <wicked grin>

    BTW I mention Art's sqlcmd but I don't recall if that program is capable of looping.

    Good luck!

    -- Jacob S