Different applications will have different requirements. We’re often involved in RosettaNet projects where the data is inherently “large”: large enough that built-in services such as HTTP, recordToDocument (4.6), etc. will fail intermittently. Without the largeDoc facilities, this data cannot be processed at all.
Just in case people start objecting, let me give an example. We have a lot of semiconductor customers, and they need to send periodic updates of all the work in progress (called Work-In-Process) to their customers. That’s potentially hundreds of die on a silicon wafer, and hundreds (or thousands) of wafers per customer, and the update needs to be transmitted as a single document (breaking the data up creates a lot more problems). Another example is PC manufacturers sending Shipment Notices (for container loads of PCs) to customers who want a list of every serial number in the shipment. That’s easily over 10 MB. In fact, our semiconductor customers frequently ask us to test wM processing of data approaching 100 MB. With or without inefficiencies in how the data is handled in memory, these volumes simply cannot be processed without the “largeDoc” facilities.
A general guideline that we use:
- As RMG said, 4 MB is a good size. We’ve tested Java handling up to somewhere around 10-12 MB before it starts to give intermittent errors.
- More importantly, largeDocs are handled quite differently than “normal” docs. Validation is different. Mapping is different. Even sending to the backend might be different. The worst case is a process that needs to handle both large and normal docs. That is, you want to pick a largeDocThreshold which, if possible, will separate your processes into either largeDoc (always) or normal (always); see the sketch after this list for the kind of size-based routing this implies.
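To make that last point concrete, here is a minimal sketch in plain Java (not the webMethods API) of what routing on a size threshold looks like. The LargeDocRouter class, the 4 MB constant, and both handler methods are hypothetical illustrations; in Trading Networks the actual threshold is set as server configuration, not application code.

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Minimal sketch, not webMethods code: route a document to either an
 * in-memory path or a streaming path based on a size threshold, so that
 * each path only ever sees one kind of document.
 */
public class LargeDocRouter {

    // Hypothetical threshold mirroring the largeDocThreshold idea: keep it
    // well below the size where in-memory handling becomes intermittent.
    private static final long LARGE_DOC_THRESHOLD = 4L * 1024 * 1024; // 4 MB

    public static void route(Path doc) throws IOException {
        long size = Files.size(doc);
        if (size >= LARGE_DOC_THRESHOLD) {
            // Large doc: never materialize the whole payload in memory.
            try (InputStream in = new BufferedInputStream(Files.newInputStream(doc))) {
                processAsStream(in);
            }
        } else {
            // Normal doc: small enough to load and map in memory.
            processInMemory(Files.readAllBytes(doc));
        }
    }

    // Placeholder: a real streaming handler would parse/validate
    // incrementally, e.g. feeding chunks to a streaming XML parser.
    private static void processAsStream(InputStream in) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        for (int n; (n = in.read(buf)) != -1; ) {
            total += n;
        }
        System.out.println("streamed " + total + " bytes");
    }

    // Placeholder: normal in-memory handling (mapping, validation, etc.).
    private static void processInMemory(byte[] payload) {
        System.out.println("loaded " + payload.length + " bytes in memory");
    }
}
```

The two branches share almost no code, which is exactly why the guideline matters: a threshold that lets the same business process land on either side forces you to build and test both paths for it.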