This post is part of a series delving into the details of the JSR-352 (Java Batch) specification. Each post examines a very specific part of the specification and looks at how it works and how you might use it in a real batch application. To start at the beginning, follow the link to the first post. The next post in the series is here.
This series is also available as a podcast on iTunes, Google Play, Stitcher, or use the link to the RSS feed.
-----
For this experiment we wanted to see how changing the size of the object returned by the ItemProcessor affected how the step ran. Our baseline application doesn’t have an ItemProcessor at all, but we’ll add one in for this set of measurements. The ItemProcessor I created allocated a large byte array when the constructor ran, then made a copy of it and returned the copy every time the processor was called. Naturally, copying a very large byte array consumed quite a bit of elapsed time, but we wondered how the increased heap usage would affect elapsed time for the other processing in the chunk.
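The processor described above can be sketched like this. The class and parameter names are hypothetical; in the real job this class would implement `javax.batch.api.chunk.ItemProcessor` and be referenced from the step's JSL, which is omitted here to keep the sketch self-contained:

```java
// Sketch of the experiment's processor logic (names are hypothetical).
// In the actual job this would implement javax.batch.api.chunk.ItemProcessor.
public class LargeObjectProcessor {
    private final byte[] template;

    public LargeObjectProcessor(int objectSize) {
        // One large array allocated when the constructor runs...
        this.template = new byte[objectSize];
    }

    public Object processItem(Object item) {
        // ...and a fresh copy of it returned every time the processor
        // is called. Each copy lives on the heap until the writer runs.
        return template.clone();
    }
}
```

The point of returning `template.clone()` rather than `template` itself is that every item produces a distinct large object, which is what drives the heap pressure measured in this experiment.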
Remember that the reader and processor are called in a loop until we reach a checkpoint; our baseline run checkpoints every 1000 items. The writer is only called at the checkpoint, which means the objects returned by the processor (all 1000 of them) pile up in memory until they are passed to the writer in a single list.
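That read/process/buffer loop can be sketched as below. This is a simplified, hypothetical model of the container's chunk loop, not actual container code (the real loop also handles transactions, listeners, skips, and retries); the processor is stood in by a plain `Function` so the sketch runs on its own:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Simplified sketch of a chunk loop with item-count checkpointing.
public class ChunkLoopSketch {
    // Returns the number of checkpoints taken; each writer call's
    // item list is recorded in writerCalls for inspection.
    public static int runChunkLoop(int totalItems, int checkpointInterval,
                                   Function<Integer, Object> processor,
                                   List<List<Object>> writerCalls) {
        List<Object> buffer = new ArrayList<>();
        int checkpoints = 0;
        for (int i = 0; i < totalItems; i++) {          // "reader" loop
            buffer.add(processor.apply(i));             // processor output buffered
            if (buffer.size() == checkpointInterval) {  // checkpoint reached
                writerCalls.add(buffer);                // writer gets the whole list
                buffer = new ArrayList<>();             // commit, start next chunk
                checkpoints++;
            }
        }
        return checkpoints;
    }
}
```

The key detail for this experiment is the `buffer` list: everything the processor returns between checkpoints stays reachable until the single writer call, so the live-object footprint per chunk is roughly the checkpoint interval times the returned-object size.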
To get started we used a returned object size of just 1000 bytes. This added about 5 seconds to the total elapsed time for the step from our baseline run with no processor at all. Then we increased the object size to 100,000 bytes. The time spent in the processor to copy a 100,000 byte object 10 million times was pretty significant, but not interesting.
What was interesting is that time spent in the ItemReader increased by six seconds. Time spent in the ItemWriter (which isn’t actually processing all those objects; it is still just inserting and deleting rows) increased by 20 seconds. Time spent in the batch container increased by 16 seconds. That’s all due to garbage collection running to handle the 1000 objects returned by the processor each checkpoint.
We took the verbose GC output and fed it to the GCMV tool, where you could very clearly see minimal GC activity with the 1000-byte object but a frenzy of GC activity with the larger returned object.
This really shouldn’t come as a surprise to anybody. Increased thrashing of the heap drives garbage collection which interferes with the efficiency of application code.
You should pay attention to the size of the object returned by the ItemProcessor to make sure it doesn’t get too crazy. Think about your checkpoint interval and how many of these objects are going to accumulate in memory. Also consider that you might have more than one job running at the same time in a single JVM. Factor all of that into determining your heap size, then use tools like GCMV to look at how things are actually running and tune accordingly. It matters.
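A back-of-envelope version of that sizing exercise: multiply the returned-object size by the checkpoint interval and by the number of jobs sharing the JVM. The object size and interval below come from this experiment; the concurrent-job count is a purely hypothetical example:

```java
// Rough estimate of heap held by buffered processor output.
public class HeapEstimate {
    public static long bufferedBytes(long objectSize, long checkpointInterval,
                                     long concurrentJobs) {
        return objectSize * checkpointInterval * concurrentJobs;
    }
}
```

With the 100,000-byte object and a 1000-item checkpoint interval, a single job is holding on the order of 100 MB of processor output per chunk before the writer ever sees it, and that figure scales linearly with each additional concurrent job.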
#Java-Batch #JSR #tWAS #WAS #WebSphereApplicationServer(WAS)