WebSphere Application Server & Liberty


JSR-352 (Java Batch) Post #106: Batch Performance – Introduction

By David Follis posted Wed September 02, 2020 08:07 AM

This post is part of a series delving into the details of the JSR-352 (Java Batch) specification. Each post examines a very specific part of the specification and looks at how it works and how you might use it in a real batch application.

To start at the beginning, follow the link to the first post.

The next post in the series is here.

This series is also available as a podcast on iTunes, Google Play, Stitcher, or use the link to the RSS feed.
-----

This post will kick off a short series talking about performance topics related to Java Batch jobs using JSR-352 (also known as Jakarta Batch, but I’m going to stay out of that for now).  What do we mean by batch performance?  There are standard benchmarks that are used to measure performance of various online transaction processing systems.  Is there something similar for batch?  Not that I could find.  And that wasn’t where I wanted to go with this.  Instead I wanted to talk about decisions you might make as you are developing a Java Batch application that could impact how it performs.

We’re just going to look at a single-step job to keep things simple.  The performance of a batchlet step is pretty much up to how you write your application, and anything that applies to any Java program applies there.  On the other hand, a chunk step has some interesting design decisions you have to make that influence how the batch container behaves and probably influence the performance of the step.

For example, how often should your chunk step take a checkpoint?  It seems pretty intuitive that if you checkpoint frequently you are introducing more overhead and the job will run longer.  But how much longer?  How frequently is too frequently? 
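In the JSL the interval is normally just the item-count attribute on the chunk element, but the spec also lets you plug in your own policy via checkpoint-policy="custom" and a checkpoint-algorithm element.  Purely as a sketch (this is not the configuration used in these runs, and the class and package names are made up), an every-N-items policy written as a custom CheckpointAlgorithm might look like this:

```java
package com.example.batch;  // hypothetical package

import javax.batch.api.chunk.CheckpointAlgorithm;
import javax.inject.Named;

/**
 * A minimal custom checkpoint policy that ends the chunk every N items.
 * Functionally this mirrors what the built-in item-count setting already
 * gives you; a custom algorithm is only needed for policies that
 * item-count/time-limit can't express.
 */
@Named("everyNItemsCheckpoint")
public class EveryNItemsCheckpoint implements CheckpointAlgorithm {

    private static final int ITEMS_PER_CHECKPOINT = 1000; // placeholder value
    private int itemsSinceLastCheckpoint = 0;

    @Override
    public int checkpointTimeout() throws Exception {
        return 0; // 0 = don't impose a timeout here; let the runtime decide
    }

    @Override
    public void beginCheckpoint() throws Exception {
        // Called at the start of each chunk; reset the counter.
        itemsSinceLastCheckpoint = 0;
    }

    @Override
    public boolean isReadyToCheckpoint() throws Exception {
        // Called after each item; returning true ends the chunk and checkpoints.
        return ++itemsSinceLastCheckpoint >= ITEMS_PER_CHECKPOINT;
    }

    @Override
    public void endCheckpoint() throws Exception {
        // Nothing to clean up in this sketch.
    }
}
```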

In the coming weeks we’ll take a single-step job running a chunk step and fiddle around with how it behaves to see how those changes affect the elapsed time to run the step.  But we have to start somewhere! 

As a baseline we’ll use an ItemReader which reads records from a flat file.  We won’t do any processing (except for one measurement) because processing time is just whatever your application needs to do and there’s no special batch stuff about it.  Our writer alternates between inserting records into a DB2 database and deleting those same records, which saved me from having to remember to clean up the database between runs.  However, I did the inserts as a bulk insert and the deletes one-by-one, which gave me a rough comparison of the two techniques. 
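The post doesn’t include the writer’s source, but assuming it flips between an insert pass and a delete pass on alternating writeItems() calls, a sketch could look like this (the DataSource JNDI name and the table/column names are placeholders, not the ones actually used):

```java
package com.example.batch;  // hypothetical package

import java.io.Serializable;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

import javax.annotation.Resource;
import javax.batch.api.chunk.AbstractItemWriter;
import javax.inject.Named;
import javax.sql.DataSource;

/**
 * Sketch of a writer that alternates between a bulk (JDBC batch) insert
 * and one-at-a-time deletes of the same rows.
 */
@Named("insertDeleteWriter")
public class InsertDeleteWriter extends AbstractItemWriter {

    @Resource(lookup = "jdbc/batchSample")   // hypothetical DataSource JNDI name
    private DataSource dataSource;

    private boolean insertPass = true;       // flip between insert and delete

    @Override
    public void open(Serializable checkpoint) throws Exception {
        if (checkpoint != null) {
            insertPass = (Boolean) checkpoint;   // resume on the right pass after a restart
        }
    }

    @Override
    public void writeItems(List<Object> items) throws Exception {
        try (Connection con = dataSource.getConnection()) {
            if (insertPass) {
                // Bulk insert: add every row to one JDBC batch, execute once.
                try (PreparedStatement ps =
                        con.prepareStatement("INSERT INTO SAMPLE.RECORDS (DATA) VALUES (?)")) {
                    for (Object item : items) {
                        ps.setString(1, item.toString());
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
            } else {
                // One-by-one deletes: a separate statement execution per row.
                try (PreparedStatement ps =
                        con.prepareStatement("DELETE FROM SAMPLE.RECORDS WHERE DATA = ?")) {
                    for (Object item : items) {
                        ps.setString(1, item.toString());
                        ps.executeUpdate();
                    }
                }
            }
        }
        insertPass = !insertPass;
    }

    @Override
    public Serializable checkpointInfo() throws Exception {
        return Boolean.valueOf(insertPass);   // remember which pass we were on
    }
}
```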

I also included the chunk step listener we talked about a couple of weeks ago (post #101) to get more details about what’s going on inside the chunk processing.
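Post #101 goes through that listener in detail; as a reminder of where the hooks fire, here is a bare-bones sketch that just times each chunk (the timing and logging are my illustration, not the actual listener from that post):

```java
package com.example.batch;  // hypothetical package

import java.util.logging.Logger;

import javax.batch.api.chunk.listener.ChunkListener;
import javax.inject.Named;

/**
 * Bare-bones chunk listener that reports how long each chunk took.
 */
@Named("chunkTimingListener")
public class ChunkTimingListener implements ChunkListener {

    private static final Logger LOG = Logger.getLogger(ChunkTimingListener.class.getName());
    private long chunkStart;

    @Override
    public void beforeChunk() throws Exception {
        chunkStart = System.nanoTime();   // called at the start of each chunk
    }

    @Override
    public void afterChunk() throws Exception {
        long millis = (System.nanoTime() - chunkStart) / 1_000_000;
        LOG.info("Chunk completed in " + millis + " ms");
    }

    @Override
    public void onError(Exception ex) throws Exception {
        LOG.warning("Chunk failed: " + ex);
    }
}
```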

Our baseline numbers had the reader fetching 10 million records from our flat file.  The initial item count (chunk size) was 1000 records.  The checkpoint data size for both the reader and writer was 1024 bytes.  Batch events were not enabled.  To start, I ran it as a simple step without partitions (but we’ll get there).
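The 1024-byte checkpoint size deserves a word: the spec leaves the checkpoint payload entirely up to the reader and writer (it is whatever checkpointInfo() returns), so holding it at a fixed size means padding it deliberately.  One way to do that, purely as a sketch since the post doesn’t show the actual classes, is a serializable holder like this:

```java
package com.example.batch;  // hypothetical package

import java.io.Serializable;
import java.util.Arrays;

/**
 * Sketch of a fixed-size checkpoint payload.  The record number is what a
 * flat-file reader actually needs to reposition itself on restart; the byte
 * array pads the payload out to roughly the desired size so the cost of
 * persisting checkpoint data can be measured.
 */
public class PaddedCheckpoint implements Serializable {

    private static final long serialVersionUID = 1L;

    private final long recordNumber;
    private final byte[] padding;

    public PaddedCheckpoint(long recordNumber, int paddingBytes) {
        this.recordNumber = recordNumber;
        this.padding = new byte[paddingBytes];
        Arrays.fill(this.padding, (byte) 0x7F);
    }

    public long getRecordNumber() {
        return recordNumber;
    }
}
```

A reader could return something like new PaddedCheckpoint(currentRecordNumber, 1024) from checkpointInfo() and pull the record number back out in open() after a restart; a writer could do the same.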

All the runs were done on a z/OS system (z14) using a local DB2 accessed over a type-4 connection.  The same DB2 instance was used for the Job Repository and the application table.

With all that, our step took four minutes and not quite 33 seconds to complete.  Next time we’ll start playing with checkpoint intervals.


