This post is part of a series delving into the details of the JSR-352 (Java Batch) specification. Each post examines a very specific part of the specification and looks at how it works and how you might use it in a real batch application. To start at the beginning, follow the link to the first post. The next post in the series is here.

This series is also available as a podcast on iTunes, Google Play, Stitcher, or use the link to the RSS feed.
Traditional JCL-based z/OS batch applications will go through an allocation phase that establishes access to datasets (files) used by that application. This generally results in shared or exclusive access to the dataset. Access might be established using JCL (DISP=OLD, SHR, or NEW for example) or programmatically using dynamic allocation services.
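As a sketch of what that looks like in JCL (the job, step, and dataset names here are made up for illustration), the DISP parameter on each DD statement declares the access the job needs:

```
//PAYROLL  JOB ...
//STEP1    EXEC PGM=MYPGM
//* DISP=SHR requests shared access; other jobs may read concurrently.
//INPUT    DD DSN=PROD.PAYROLL.DATA,DISP=SHR
//* DISP=OLD requests exclusive access to an existing dataset.
//WORK     DD DSN=PROD.PAYROLL.WORK,DISP=OLD
//* DISP=(NEW,CATLG,DELETE) creates the dataset (implicitly exclusive),
//* catalogs it on success, and deletes it if the step fails.
//OUTPUT   DD DSN=PROD.PAYROLL.OUT,DISP=(NEW,CATLG,DELETE)
```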
Batch jobs need to establish the right level of access to prevent (or deliberately allow) concurrent access to the data by other running programs. Two jobs that both require exclusive access to a file can't be allowed to run at the same time.
Key to all of this is the fact that the lock (really a GRS enqueue) is released by the operating system no later than the end of the job, and possibly sooner.
Java batch applications running inside WebSphere Liberty introduce some complications. A dataset used by an application running inside a Liberty server is generally allocated dynamically (not using DD statements in the server's JCL). A dynamically allocated dataset is automatically released when the address space terminates, which is not the end of the batch job; it is the shutdown of the server.
Thus, a Liberty Batch application that allocates datasets should be sure to close them to release the allocation. Do this not just in normal termination cases, but also in error/failure cases, where you may need to rely on a step listener or another batch artifact to ensure your application gets control to perform the close processing.
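Here's a minimal sketch of that listener pattern. The `DatasetHolder` wrapper is hypothetical (standing in for whatever object wraps your dynamically allocated dataset, e.g. via JZOS), and the `StepListener` interface is stubbed locally so the sketch stands alone; in a real application you would implement `javax.batch.api.listener.StepListener` and register the listener in the step's `<listeners>` element in the job XML:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Local stub of javax.batch.api.listener.StepListener so this sketch
// compiles on its own; use the real interface in an actual batch job.
interface StepListener {
    void beforeStep();
    void afterStep();
}

// Hypothetical wrapper around a dynamically allocated dataset.
class DatasetHolder implements AutoCloseable {
    private final AtomicBoolean open = new AtomicBoolean(true);
    public boolean isOpen() { return open.get(); }
    @Override
    public void close() {
        // Closing the dataset frees the dynamic allocation,
        // which in turn releases the enqueue.
        open.set(false);
    }
}

// The batch runtime drives afterStep() whether the step completed
// normally or failed, so closing here covers the error paths too.
class DatasetCleanupListener implements StepListener {
    private final DatasetHolder dataset;
    DatasetCleanupListener(DatasetHolder dataset) { this.dataset = dataset; }
    @Override public void beforeStep() { }
    @Override public void afterStep() {
        if (dataset.isOpen()) {
            dataset.close();
        }
    }
}
```

The reason a listener (rather than a `finally` block inside the reader or writer alone) is attractive is exactly the one in the text: the runtime guarantees it gets control at the end of the step even when the application code failed partway through.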
Another complication arises when a Java Batch job in Liberty is submitted by a JCL batch job (as we've discussed earlier), but both the Java Batch job and the JCL job have steps that require access to the same dataset. Perhaps a step in the JCL job populates a dataset and a step in the Java Batch job reads the contents. If the JCL job uses a DD statement to allocate the dataset, it will (usually) remain allocated throughout the JCL job, which includes the time the Java Batch job is running. If both require shared access, this is just fine. But if either one requires exclusive access, the two will deadlock: the Java Batch job can't get the dataset access it requires, and the JCL job is waiting for the Java Batch job to end.
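A sketch of the failing pattern (the job, program, and dataset names here are hypothetical, and the step that submits the Java Batch job is shown only schematically):

```
//SUBMIT   JOB ...
//* Step 1 populates the dataset. DISP=OLD takes an exclusive enqueue
//* that is typically held for the life of the whole JOB, not just
//* this step.
//STEP1    EXEC PGM=POPULATE
//SHARED   DD DSN=PROD.SHARED.DATA,DISP=OLD
//* Step 2 submits the Java Batch job and waits for it to finish.
//* If that job also needs exclusive access to PROD.SHARED.DATA,
//* the two jobs deadlock.
//STEP2    EXEC PGM=SUBMITJB
```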
There are several strategies to work around these situations, but they get rather complicated to cover in a blog post. Instead, I'll point you to the WP102667 whitepaper on the IBM Techdocs website, which delves into this whole topic in detail.