A 31 MB file isn't all that big, however…
If you're loading it completely into memory (31 MB)
and converting it to a String (don't know if you're doing this) (62 MB)
and splitting it into four 8 MB copies, all in memory (96 MB)
and processing each of those at the same time
to create multiple IS documents (>127 MB; in-memory representations are much larger than the strings they were built from)
and submitting those to TN
and the processing rule is synchronous
and each rule is creating a target document (>158 MB)
Still not outrageously big, but it depends on what else is going on within IS, how much heap is available, how fragmented it may be, etc.
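To make that arithmetic concrete, here is a minimal Java sketch of the worst case. The file name is hypothetical; the point is that each naive step keeps the previous copy alive on the heap (Java Strings are UTF-16, so 31 MB of ASCII bytes roughly doubles to 62 MB of char data).

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class HeapCost {
    public static void main(String[] args) throws Exception {
        // ~31 MB of raw bytes held on the heap (file name is hypothetical)
        byte[] raw = Files.readAllBytes(Path.of("weekly.dat"));

        // Java chars are UTF-16, so an ASCII file roughly doubles: ~62 MB total
        String all = new String(raw, StandardCharsets.US_ASCII);

        // Four ~8 MB substring copies add another ~32 MB while 'raw' and
        // 'all' are still referenced: ~96 MB total, before any IS documents,
        // TN submissions, or target documents are built on top of that.
        int quarter = all.length() / 4;
        List<String> chunks = List.of(
                all.substring(0, quarter),
                all.substring(quarter, 2 * quarter),
                all.substring(2 * quarter, 3 * quarter),
                all.substring(3 * quarter));
        System.out.println(chunks.size() + " chunks in memory at once");
    }
}
```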
When you say subfiles, do you literally mean you’re writing the contents to disk? Or are they just kept in memory?
The following is all speculation, so if I guess wrong on your scenario, please let me know.
I infer from your posts that you're getting one big file from somewhere. That file contains a bunch of documents for the week. Each document within it will be translated into a single EDI transaction set. You want the resulting transaction sets to be batched.
One approach is not to treat the original file as a single entity. Don't split it by size (the 8 MB boundary); instead, split each and every document/transaction out of it and process each individually. Use stream techniques to split the original file (see the sketch below). Post each transaction to TN for validation and translation. Queue each resulting EDI transaction set for batching. At a given time, run the service to batch the waiting transaction sets into interchanges and send them.
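As a rough illustration of the stream approach, here is a minimal Java sketch. The file name, the "DOCSTART" marker, and the submitToTN helper are assumptions standing in for whatever actually delimits the documents in your file and for the real TN submission; the point is that only one document is ever materialized at a time.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class StreamSplitter {
    public static void main(String[] args) throws IOException {
        // Stream the big file line by line instead of loading it whole
        try (BufferedReader in = new BufferedReader(new FileReader("weekly.dat"))) {
            List<String> doc = new ArrayList<>();
            String line;
            while ((line = in.readLine()) != null) {
                // "DOCSTART" is a hypothetical record marker; use whatever
                // actually delimits the documents in your flat file
                if (line.startsWith("DOCSTART") && !doc.isEmpty()) {
                    submitToTN(doc); // hand off one document, then drop it
                    doc.clear();
                }
                doc.add(line);
            }
            if (!doc.isEmpty()) {
                submitToTN(doc); // last document in the file
            }
        }
    }

    // Placeholder for the real TN submission (e.g., invoking wm.tn:receive
    // from a flow or Java service)
    static void submitToTN(List<String> doc) {
        System.out.println("Submitting document with " + doc.size() + " records");
    }
}
```

The same pattern applies in a flow service, e.g., using the flat-file parser's iterator mode so each call returns the next document rather than the whole file.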
If you try to program solutions in IS the way you would in classic programming, it's not going to work very well. Don't assume that the big flat file needs to be processed atomically. It probably holds a bunch of documents just because it can (mainframe-driven, perhaps?). If there is no business-process reason that the documents within the file need to be kept together as a group, don't keep them together.
Just a suggestion to consider.
#webMethods #edi #Integration-Server-and-ESB