WebSphere Application Server

JSR-352 (Java Batch) Post #77: The Multi-Server Batch Configuration – Dispatchers

By David Follis posted Wed February 12, 2020 07:26 AM

  
This post is part of a series delving into the details of the JSR-352 (Java Batch) specification. Each post examines a very specific part of the specification and looks at how it works and how you might use it in a real batch application.

To start at the beginning, follow the link to the first post.

The next post in the series is here.

This series is also available as a podcast on iTunes, Google Play, Stitcher, or use the link to the RSS feed.
-----

All the discussion we’ve had so far starts with contacting a server to run a job, and the job runs right there in the server you contacted.  That means the application (.war or .ear file) containing the batch job has to be installed in that server.  Which means, if you have a lot of applications installed in a lot of different servers, you need to know which server to contact depending on which job you want to run. 

Wouldn’t it be nice to have a single point of contact you could tell “Please run this job” and it would figure out which server was configured to run that job and route your request there?  The “Dispatcher” in a multi-server Liberty Batch configuration was designed to solve this problem.

You configure a dispatcher by adding a batchJmsDispatcher element to a Liberty server configuration.  This element references a connection factory and a queue, which together point to a defined queue in a messaging engine.  The messaging engine can be the one included in Liberty or the IBM MQ product. 
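As a sketch, the server.xml for a dispatcher might look something like this (the id values and queue name are hypothetical examples; the batchJmsDispatcher element with its connectionFactoryRef and queueRef attributes is what marks the server as a dispatcher):

```xml
<!-- Marks this server as a batch dispatcher (ids below are hypothetical) -->
<batchJmsDispatcher connectionFactoryRef="batchConnectionFactory"
                    queueRef="batchJobSubmissionQueue"/>

<!-- JMS connection factory and queue pointing at the messaging engine
     (here the embedded Liberty messaging engine; IBM MQ works too) -->
<jmsConnectionFactory id="batchConnectionFactory">
    <properties.wasJms/>
</jmsConnectionFactory>
<jmsQueue id="batchJobSubmissionQueue">
    <properties.wasJms queueName="batchLibertyQueue"/>
</jmsQueue>
```

The connection factory and queue are ordinary JMS configuration; the only batch-specific piece is the batchJmsDispatcher element tying them together.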

The presence of this configuration tells the server that any attempt to submit a job through the REST interface doesn’t really want the job run in this server, but instead in an appropriate server configured to run it.  The dispatcher will go ahead and create the entries in the Job Repository for the job (so it has an instance ID) and then put a message representing the job into the configured queue.  Next time we’ll talk about configuring the servers to process those messages.
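To make the flow concrete, here is a small sketch of the JSON body a REST client would POST to a dispatcher's batch endpoint (on Liberty that endpoint is /ibm/api/batch/jobinstances). The application and job XML names below are hypothetical examples; the field names match the batch REST interface:

```python
import json

def build_submission(app_name, job_xml, params=None):
    """Build the JSON body for a job-submission POST to a dispatcher.

    The dispatcher records the new job instance in the Job Repository
    and queues a message for an executor; it does not run the job itself.
    """
    body = {
        "applicationName": app_name,  # batch application installed on some executor
        "jobXMLName": job_xml,        # JSL (job XML) name within that application
    }
    if params:
        body["jobParameters"] = params  # optional name/value job parameters
    return json.dumps(body)

# Hypothetical application and job names for illustration
print(build_submission("SleepyBatchlet", "sleepy-batchlet",
                       {"sleep.time.seconds": "5"}))
```

The client never needs to know which executor will actually run the job; it only needs the dispatcher's URL.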

Alright, so now we have a single server that any REST client can contact to submit a job and we’ll trust that it gets to the right place to run (it works…we’ll get there).  But that phrase “single server” raises concerns.  What if the dispatcher is down? 

With the configuration we started with (contacting the right server for the job directly) a dead server only meant that jobs for applications living in that server couldn’t be run.  And you might have set up several servers hosting the same applications.  But now, with one dispatcher server front-ending everything, if that server is down the whole batch capability is broken.  What to do?

Create more!  There’s nothing magical connecting a particular dispatcher to the executors we’ll talk about.  All the dispatchers and executors need to be sharing the same Job Repository tables, so they are on the same ‘page’ about what a particular job id value means.  But you can create as many dispatchers as you need and REST clients can choose freely among them, just like any other REST service hosted on replicated servers.  All the job state is kept in the Job Repository so there is no affinity established to a dispatcher.
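The shared Job Repository is just shared persistence configuration. As a sketch, every dispatcher and executor could carry the same fragment pointing at one database (the id values and JNDI name are hypothetical examples):

```xml
<!-- Identical in every dispatcher and executor server.xml:
     batch persistence points at the one shared Job Repository database -->
<batchPersistence jobStoreRef="BatchDatabaseStore"/>
<databaseStore id="BatchDatabaseStore" dataSourceRef="batchDB"/>
<dataSource id="batchDB" jndiName="jdbc/batch">
    <!-- JDBC driver and connection details for the shared database -->
</dataSource>
```

Because every server resolves job instance ids against the same tables, a REST client can submit through any dispatcher and query status through any other.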

But the dispatcher is useless without executors… next time!