WebSphere Application Server & Liberty

Are you testing for performance? - part 4-6: Configuration baseline

By Samir Nasser posted Wed May 27, 2020 11:54 AM


In part 4-5, I explained why resiliency-related configuration changes should be part of the initial set of configuration changes. In this blog post, I would like to provide guidance on the various timeout parameters, which are key resiliency parameters that should be considered in that initial set.

Timeout: This parameter is everywhere. It exists throughout the middleware solution stack: in the Java EE server, such as WebSphere Application Server; in the database, such as IBM DB2; in the operating system, such as Red Hat Enterprise Linux; and in the web server, such as IBM HTTP Server. There are also many types of timeout. For example, there is the connection wait timeout, such as the one defined on a JDBC connection pool in WebSphere Application Server (WAS); the read/write timeouts, such as the ones defined on an HTTP outbound connection pool for web service calls in WAS; and the lock timeout, such as the one defined in IBM DB2. These are just examples; there are many other timeout parameters that are not crucial to mention for the objective of this post.
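As an illustration of where such parameters live, the connection wait timeout on a JDBC connection pool can be set in a WebSphere Liberty server.xml. The element names below match the Liberty configuration schema, but the JNDI name, host, and timeout values are purely illustrative placeholders, not recommendations:

```xml
<!-- Illustrative Liberty server.xml fragment; values are placeholders -->
<dataSource jndiName="jdbc/sample">
    <!-- connectionTimeout: how long a request waits for a free pool connection
         before giving up; maxPoolSize caps the number of connections -->
    <connectionManager maxPoolSize="50" connectionTimeout="30s"/>
    <properties.db2.jcc databaseName="SAMPLE" serverName="dbhost" portNumber="50000"/>
</dataSource>
```

The right values depend on the application architecture and the behavior of its downstream dependencies, which is the subject of the scenario below.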

Why is it important to consider the timeout parameters? Under normal conditions, when there is no failure, abnormal event, or slowdown, the values of the timeout parameters may have no impact on the solution behavior. However, as soon as a failure, an abnormal event, or a slowdown occurs, those values determine how the solution behaves. Consider the request flows shown in Figure 1. There is one request flow for Requests 1 and another for Requests 2. Both Requests 1 and Requests 2 run on web container (WC) threads and use connections from the HTTP Outbound Connection Pool. Requests 1 use the connections to call Service Provider 1, whereas Requests 2 use the connections to call Service Provider 2. Suppose that these requests are flowing through the environment and, suddenly, Service Provider 1 becomes so slow that Requests 1 start timing out. In this situation, if the timeout is large, Requests 2, which may be more crucial than Requests 1, start to fail at different points in the flow. For example, Requests 2 may start to time out waiting for a connection to become available from the HTTP Outbound Connection Pool, because the pool's connections are tied up in calls to Service Provider 1 (some may also be tied up in calls to Service Provider 2). Requests 2 may also start to time out because the web container threads may be completely tied up running the slower Requests 1 in addition to some Requests 2.
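The read timeout on the outbound HTTP call is what bounds how long a web container thread stays tied up waiting on a slow service provider. As a minimal sketch using the standard java.net API (the URL and timeout values here are hypothetical, and a real WAS application would typically configure this on the outbound connection pool rather than per call):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class OutboundCall {
    // Configure connect and read timeouts so a slow service provider
    // releases this thread after a bounded wait instead of holding it
    // indefinitely. Opening the connection object does not contact the host.
    public static HttpURLConnection openWithTimeouts(String serviceUrl,
                                                     int connectMs,
                                                     int readMs) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(serviceUrl).openConnection();
        conn.setConnectTimeout(connectMs); // max wait to establish the TCP connection
        conn.setReadTimeout(readMs);       // max wait for the provider's response data
        return conn;
    }
}
```

With a bounded read timeout, a thread calling the slow Service Provider 1 gets an exception after the wait expires and is freed to serve other requests.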

[Figure 1 image: Timeout.jpg]

Figure 1: Request Flows

This example request flow topology highlights the importance of the following:

  1. The need to know the application architecture in depth
  2. The need to decide what the various timeout parameters and the various pool sizes should be

This is important because, if we had set the timeout parameter to a lower value, Requests 1 would have released the HTTP outbound connections and the web container threads sooner, so that Requests 2 would be less negatively impacted by the slowdown of Service Provider 1.
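The effect of the connection wait timeout can be sketched with a plain Java semaphore standing in for the HTTP outbound connection pool. This is an analogy only, not WAS internals; the pool size and wait values are illustrative:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class PoolWait {
    // The semaphore models the pool: each permit is one connection.
    public static Semaphore newPool(int connections) {
        return new Semaphore(connections);
    }

    // tryAcquire with a timeout models the connection wait timeout:
    // if all connections stay tied up (e.g. on a slow provider), the
    // caller fails fast after waitMs instead of queueing indefinitely.
    public static boolean borrow(Semaphore pool, long waitMs)
            throws InterruptedException {
        return pool.tryAcquire(waitMs, TimeUnit.MILLISECONDS);
    }
}
```

With all permits held by requests stuck on the slow provider, a caller with a short wait timeout gets a quick, handleable failure rather than tying up its web container thread as well.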

In the next blog post, I will continue this topic.
