WebSphere Application Server & Liberty

Too many connections in CLOSE_WAIT

  • 1.  Too many connections in CLOSE_WAIT

    Posted Fri June 07, 2013 06:15 AM
    Team,

    Before going further following is information about my setup

    1. Request flow is as follows
    request > NetScaler > IBM HTTP Server > WebSphere Application Server

    2. OS information where IBM HTTP Server is installed
    //----Uname info start----//

    SunOS HOSTNAME 5.10 Generic_142900-03 sun4v sparc SUNW,T5240

    //----Uname info stop----//

    3. IBM HTTP Server version
    //----Version info start----//

    Installed Product
    --------------------------------------------------------------------------------
    Name                     IBM HTTP Server
    Version                  7.0.0.13
    ID                       IHS
    Build Level              cf131039.07
    Build Date               10/2/10
    Architecture             SPARC (32 bit)

    //----Version info stop----//

    4. Network settings
    //----ndd info start----//

    # /usr/sbin/ndd -get /dev/tcp \? | grep wait
    tcp_time_wait_interval        (read and write)
    tcp_fin_wait_2_flush_interval (read and write)
    tcp_close_wait_interval(obsoleted- use tcp_time_wait_interval) (no read or write)

    # /usr/sbin/ndd -get /dev/tcp tcp_time_wait_interval
    60000

    # /usr/sbin/ndd -get /dev/tcp tcp_fin_wait_2_flush_interval
    675000

    //----ndd info stop----//

    5. IBM HTTP Server config snippet

    //----http config start----//

    Timeout 300
    KeepAlive On
    MaxKeepAliveRequests 100
    KeepAliveTimeout 10


    ThreadLimit             300
    ServerLimit             20
    StartServers            2
    MaxClients              6000
    MinSpareThreads         500
    MaxSpareThreads         900
    ThreadsPerChild         300
    MaxRequestsPerChild     80000


    //----http config stop----//

    Now here comes the problem: whenever the production support team stops the WebSphere Application Server, connections are not getting cleared at IBM HTTP Server. They all go into CLOSE_WAIT state.
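    A quick way to confirm where the stuck sockets point is to tally CLOSE_WAIT entries by remote endpoint from `netstat -an` output. A minimal sketch, using made-up sample lines (on the IHS host you would feed in the live netstat output instead):

    ```python
    from collections import Counter

    # Sample netstat-style lines with made-up addresses; the state is in
    # the last column, the remote address.port in the second column.
    sample = """\
    *.80                 *.*             0      0 49152      0 LISTEN
    10.0.0.1.80    10.0.0.2.9080    49640      0 49640      0 CLOSE_WAIT
    10.0.0.1.80    10.0.0.2.9080    49640      0 49640      0 CLOSE_WAIT
    10.0.0.1.80    10.0.0.3.9080    49640      0 49640      0 ESTABLISHED
    """

    close_wait = Counter(
        line.split()[1]                      # remote address.port column
        for line in sample.splitlines()
        if line.split()[-1] == "CLOSE_WAIT"
    )
    print(close_wait.most_common())          # [('10.0.0.2.9080', 2)]
    ```

    If all the stuck connections share the application server's address and port, that confirms they are the IHS-to-WebSphere backend connections.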

    As per my understanding, CLOSE_WAIT indicates that the local end has received the first FIN from its peer (here, the application server) and the connection is in the process of being closed. So this is essentially a state where the socket is waiting for the local application to execute close(). There is no TCP timer for this state: a socket can remain in CLOSE_WAIT indefinitely until the application closes it.
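    The behaviour described above can be reproduced in a few lines: the peer closes (sending a FIN), but the local application never calls close(), so the kernel keeps the socket in CLOSE_WAIT. A minimal sketch over loopback, standing in for the IHS-to-WebSphere connection:

    ```python
    import socket

    # Demo peer pair on loopback; "conn" plays the role of the IHS side.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))       # any free port
    srv.listen(1)

    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()

    cli.close()                      # peer closes: a FIN is sent
    data = conn.recv(1024)           # b'' signals the FIN has arrived
    print(repr(data))                # -> b''
    # From this point until conn.close() is called, the kernel reports
    # the socket as CLOSE_WAIT -- indefinitely, as there is no timer.
    conn.close()
    ```

    This matches what restarting IHS does: the restart finally closes those sockets, which is why the CLOSE_WAIT entries disappear only then.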

    But here the question is: as the application server is already shut down, why are the connections still in CLOSE_WAIT state at IBM HTTP Server? I see these connections lying in CLOSE_WAIT until I restart the IBM HTTP Server.

    Also, after searching Google, I found a bug (15349654) in Solaris due to which connections get stuck in CLOSE_WAIT state.

    So now I want to understand the following:
    1. Is this happening because of a bug in the Solaris system?
    2. Is this happening because of a problem in IBM HTTP Server?