Hi Experts,
I have a situation like this:
There are multiple publishers and a single subscriber. Each publisher publishes a document with a different transaction id. The subscription service inserts into a database header table, which has the transaction id as its primary key, and into a detail table, which has no key constraints. Each published document carries a single header and multiple detail records, so the subscription performs a batch insert into the detail table, which required using a local transaction. If two documents carry the same transaction id, the header table should be updated accordingly.

The volume can go up to 10k documents per hour, so we made the subscription trigger process documents in parallel. Now we see that many documents fail to reach the target because of the unique constraint on the header table. On the subscription side, the code selects by transaction id and then either inserts or updates. I suspect that with parallel trigger processing, one thread is selecting the data before another thread has committed its insert for the same transaction id, so both attempt the insert.

Please advise on what should be done. I have a few ideas, but I am not sure which to implement:
Committing the header insert as soon as it finishes, using explicit transaction management (I believe that without explicit transaction management, the transaction is committed only after the thread finishes execution). But given the huge volume, can I be sure the scenario won't still occur?
Using a separate connection with NO_TRANSACTION as the connection type, so that the header table data is committed immediately.
Coordinating the threads so that a thread pauses its execution if another thread is already processing the same transaction id (I am not sure how to implement this; perhaps I could use the notify and wait services in the pub folder with the transaction id as the key, or write a Java client using the Broker API to retrieve documents from the Broker by transaction id).
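For the third idea, this is roughly what I have in mind: keep one lock per transaction id, so that threads handling the same transaction id are serialized while different transaction ids still run in parallel. A minimal standalone Java sketch (all class and method names are hypothetical, and the header table is modelled as an in-memory map rather than a real database):

```java
import java.util.concurrent.*;
import java.util.concurrent.locks.*;

public class PerTxnSerializer {
    // One lock per transaction id; computeIfAbsent creates each lock atomically on first use.
    private static final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    // Stand-in for the header table: transaction id -> number of times the row was inserted/updated.
    private static final ConcurrentHashMap<String, Integer> headerTable = new ConcurrentHashMap<>();

    static void upsertHeader(String txnId) {
        ReentrantLock lock = locks.computeIfAbsent(txnId, k -> new ReentrantLock());
        lock.lock();   // threads with the SAME transaction id queue here; other ids proceed freely
        try {
            // The select-then-insert/update is now safe: no other thread
            // can interleave between the check and the write for this id.
            Integer existing = headerTable.get(txnId);
            headerTable.put(txnId, existing == null ? 1 : existing + 1);
        } finally {
            lock.unlock();
        }
    }

    // Runs several concurrent upserts for one transaction id and returns
    // how many inserts/updates actually landed in the header "table".
    static int concurrentUpserts(String txnId, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> upsertHeader(txnId));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return headerTable.get(txnId);
    }

    public static void main(String[] args) throws Exception {
        // All 8 threads are serialized on TX-1, so every write survives.
        System.out.println("writes for TX-1: " + concurrentUpserts("TX-1", 8));
    }
}
```

One concern with this sketch is that the locks map grows forever unless entries are removed after use. Also, if the database supports an atomic upsert (for example SQL MERGE, or INSERT ... ON DUPLICATE KEY UPDATE on MySQL), pushing the insert-or-update decision into a single statement would remove the race without any thread coordination, so that may be simpler than any of the three ideas above.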
Regards,
Pappu
#Universal-Messaging-Broker#webMethods#Integration-Server-and-ESB#broker