I should have included configuration info, but (considering the forum) I really just wanted to start a discussion of SAX vs. DOM from an architecture point of view.
I’ve done the tests with Sun’s JRE 1.3.1 and 1.4.1, FYI - 1.4.1 is much faster with reflection (we’ve mapped XML XPath expressions to object methods/fields and use the reflection API to set ivars), but otherwise there’s little difference with regard to XML parsing.
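To make the reflection point concrete, here is a minimal sketch of the kind of mapping I mean - the class and field names are hypothetical, not our actual code, and it assumes String-typed ivars for brevity:

```java
import java.lang.reflect.Field;
import java.util.Map;

// Sketch: a map from XPath expressions to field names drives reflective assignment.
public class XPathBinder {
    private final Map<String, String> xpathToField;

    public XPathBinder(Map<String, String> xpathToField) {
        this.xpathToField = xpathToField;
    }

    // Called for each node value the parser delivers; sets the matching ivar.
    public void bind(Object target, String xpath, String value) throws Exception {
        String fieldName = xpathToField.get(xpath);
        if (fieldName == null) {
            return; // no mapping for this path
        }
        Field field = target.getClass().getDeclaredField(fieldName);
        field.setAccessible(true);
        field.set(target, value); // assumes String-typed ivars for simplicity
    }
}
```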
All documents are fetched from Tamino using the Java API (API4J/3.1.14). The parsers are the Apache ones shipped with Tamino.
And as I said - SAX outperforms DOM as the number of nodes grows large (for now, I’ll disregard size in kilobytes). For small documents there’s little difference, and for very small documents (tens of elements) DOM is actually faster.
I’ve got an Optimizeit sampler screen here (it’s a great product, IMO) and see that parsing about 200 XML documents with 8 nodes each (so each is about 1/2 KB at the outside) takes 7.5 ms with SAX and 4.3 ms with DOM. This is measured starting at the stack frames of TSAXInputStreamInterpreter.doInterpret() and TDOMInputStreamInterpreter.doInterpret(), respectively. I can verify that the SAX/DOM difference in my own code is insignificant.
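For anyone following along without the Tamino interpreter classes at hand, this is roughly what the two parse paths being compared look like in plain JAXP (the document here is just a made-up placeholder): SAX streams events to a handler as it reads, while DOM builds the whole in-memory tree before you can touch anything.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.SAXParserFactory;

import org.w3c.dom.Document;
import org.xml.sax.helpers.DefaultHandler;

public class ParseComparison {
    public static void main(String[] args) throws Exception {
        String xml = "<doc><a>1</a><b>2</b></doc>";

        // SAX: the handler sees each element as it is parsed, nothing is retained.
        SAXParserFactory.newInstance().newSAXParser()
                .parse(stream(xml), new DefaultHandler());

        // DOM: the full tree is built in memory, then traversed.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(stream(xml));
        System.out.println(doc.getDocumentElement().getNodeName());
    }

    private static InputStream stream(String xml) {
        return new ByteArrayInputStream(xml.getBytes());
    }
}
```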
And again - to move this discussion back up to the big picture and “best practices”, I’d like to know other people’s experiences with the different APIs. Is anyone else interested in pulling node values into their program? If so, what are you using? (getNode().getItem().getWhatever() gets tedious and fragile really fast.)
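To illustrate what I mean by tedious and fragile, and one way people take the edge off it - purely a sketch with placeholder element names, not a recommendation:

```java
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class NodeValueExample {
    static String firstChildText(Document doc) {
        // Fragile: any change in document shape breaks the index arithmetic.
        return doc.getDocumentElement().getChildNodes().item(0)
                  .getFirstChild().getNodeValue();
    }

    // A small helper that looks a child up by name instead of by position.
    static String childText(Element parent, String childName) {
        NodeList children = parent.getElementsByTagName(childName);
        if (children.getLength() == 0) {
            return null;
        }
        Node text = children.item(0).getFirstChild();
        return text == null ? null : text.getNodeValue();
    }
}
```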
Or are most applications simply passing the XML on to a browser or other UI (perhaps with some XSLT in between)?
#API-Management #webMethods-Tamino-XML-Server-APIs #webMethods