Thanks for responding.
I don’t think my issue is index related, however, because:
1) I have mixed indexes defined on the search element.
2) A query that retrieves the record count using the same search criteria came back OK (sketched below).
I didn’t pull down the complete document listing, because the result set would be huge and we would run into network bandwidth issues, which wouldn’t make for a true test.
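For what it’s worth, the count check in point 2 looked roughly like this. This is only a minimal sketch against Tamino’s HTTP interface using its _xql command; the host, database, collection, doctype, and criteria below are placeholders, not my real ones:

```python
# Sketch of the record-count check via Tamino's _xql command over HTTP.
# Host, database, collection, doctype, and criteria are all placeholders.
import requests

TAMINO_URL = "http://myhost/tamino/mydb/mycollection"

# X-Query count() over the same criteria the delete would use.
response = requests.get(
    TAMINO_URL,
    params={"_xql": "count(/Order[Status = 'CLOSED'])"},
)
response.raise_for_status()
print(response.text)  # the count comes back wrapped in an ino:response document
```

That request returns quickly, which is why I don’t suspect the index.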
My understanding of the problem is that Tamino logs every single delete as part of the transaction, and with a large data volume I can see why that takes so long.
Other DBMSs (Sybase, MSSQL) provide a bulk-processing mode where you can turn off logging before running high-volume operations such as truncates, loads, etc. That helps tremendously with batch and maintenance processing, where you know you don’t need the data-integrity protection of OLTP mode.
I wonder why SAG doesn’t offer something similar, and whether anyone knows of a workaround for handling bulk processing?
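In the meantime, the only workaround that occurs to me is to break the mass delete into many small transactions, so no single transaction has to log the whole volume. A minimal sketch, assuming Tamino’s HTTP _delete command, documents that can be sliced by a numeric key range, and placeholder host/database/collection/doctype names (my understanding is that each request sent outside a session is auto-committed, so every chunk becomes its own small transaction):

```python
# Sketch of a chunked delete: many small auto-committed transactions
# instead of one huge one. All names and ranges below are placeholders.
import requests

TAMINO_URL = "http://myhost/tamino/mydb/mycollection"
CHUNK = 10_000  # width of the id slice deleted per request

def delete_chunk(lo: int, hi: int) -> None:
    """Delete one id slice; sent outside a session, the request should
    auto-commit, so the log only ever holds one slice at a time."""
    expr = f"/Order[@OrderId >= {lo} and @OrderId < {hi}]"
    resp = requests.get(TAMINO_URL, params={"_delete": expr})
    resp.raise_for_status()

# Walk the whole id space in slices rather than issuing one big delete.
for lo in range(0, 1_000_000, CHUNK):
    delete_chunk(lo, lo + CHUNK)
```

That wouldn’t be as fast as a real no-logging bulk mode, but it should at least keep the log space bounded. Whether the per-chunk commits actually behave that way in Tamino is exactly what I’d like someone to confirm.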
Thanks.