IBM TechXchange Virtual WebSphere z/OS User Group


Liberty z/OS Post #22- Production Topology migration

By David Follis posted Thu June 08, 2023 08:08 AM


This post is part of a series exploring the unique aspects and capabilities of WebSphere Liberty when running on z/OS.
We'll also explore considerations when moving from WebSphere traditional on z/OS to Liberty on z/OS.

The next post in the series is here.

To start at the beginning, follow this link to the first post.

---------------

This week we’ll talk about how many Liberty servers you need.  We’ll consider a simple case, but the discussion will hopefully help you evaluate more complicated scenarios. 

Consider a TWAS (traditional WebSphere Application Server) configuration with two nodes on separate z/OS images in a sysplex.  Each node has one server, and those servers are in a cluster (each node probably has more servers, but we'll consider just one here).  Each of those servers is defined with a minimum and maximum of two servant regions. 

If you migrate that configuration to Liberty, how many Liberty servers do you need?  Two?  Four?

There is, of course, no magic answer that is always right.  To make the decision you have to try to remember why you have this configuration to begin with.

Probably you have a server on each z/OS image for availability.  Perhaps also to spread the workload across the two images.  So it would probably make sense to have two Liberty servers, one on each image.  But each server had two servant regions, so should you have two Liberty servers on each LPAR?

Well, again the question is why you had two servant regions.  It could be for availability.  Timeout handling in TWAS on z/OS can result in a servant region being abended and a new one started, so having a second servant region to take over might make sense.  If this is the reason, then you probably don’t need two servers on each image because Liberty servers don’t get abended for timeout processing.

Another possibility relates to dispatch threads.  A servant region is configured with a fixed number of dispatch threads.  If you determined that you wanted 20 threads per servant region but sometimes needed more to keep up with the workload, you might have decided to have two servant regions rather than 40 threads in one servant region.  Liberty doesn't have a fixed number of threads; instead it continuously adjusts the size of its thread pool up and down as it tries to find the best value for the current environment. 
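If you do want to bound that adjustment rather than let Liberty tune the pool freely, the executor element in server.xml accepts coreThreads and maxThreads attributes.  The values below are purely illustrative, not recommendations:

```xml
<server description="thread pool bounds example">
    <!-- Liberty's default executor auto-tunes the pool size;
         coreThreads/maxThreads just set a floor and a ceiling
         on that tuning.  These values are illustrative only. -->
    <executor coreThreads="20" maxThreads="40"/>
</server>
```

Most servers are better off left at the defaults so the auto-tuning can do its job; explicit bounds are mainly useful when you need to cap concurrency for a downstream resource.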

Perhaps the number of threads in your TWAS servant region was chosen to limit the number of concurrent requests to something that could fit inside the JVM heap available in a 31-bit address space.  Liberty runs only in 64-bit mode, so you can have much larger heaps. 
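For instance, heap sizes that could never fit in a 31-bit address space can be set in the server's jvm.options file.  The sizes here are hypothetical and would need to be tuned for your workload:

```
# jvm.options -- illustrative 64-bit heap settings (hypothetical sizes)
-Xms1g
-Xmx4g
```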

You might want to look at SMF data and verbose GC output to see how your existing servant regions are actually being used, as a guide to deciding how many Liberty servers you need.  If you have a second servant for when the first gets swamped with work, does it ever actually get used?  How many threads in your servant regions are in use most of the time?  
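Verbose GC output for that kind of analysis can be enabled in jvm.options.  The options below assume an IBM J9 JVM, and the log path and name tokens are just an example:

```
# jvm.options -- verbose GC logging (IBM J9 option; path is an example)
-verbose:gc
-Xverbosegclog:logs/verbosegc.%Y%m%d.%H%M%S.log
```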

Yet another factor might be how many applications are located in each server.  Since TWAS servers are pretty heavyweight, some customers located multiple applications in each server, and sometimes used WLM to route requests for different applications to different servant regions within the same server.  With Liberty you probably just want separate servers for separate applications.  If, in our example, the two servant regions were there to run two different applications, then you might want four Liberty servers: one per application on each z/OS image. 

There are probably other factors to consider also.  The best place to start is to try to remember why you have the TWAS configuration you have now, determine if that reasoning still applies (maybe requirements have changed?), and use that to guide decisions about the number and placement of Liberty servers to replace them.  You did document the rationale for your configuration choice, didn’t you? :-)
