Decision Management (ODM, ADS)

How to extend the 30-second session timeout for ODM on a Kubernetes/OpenShift cluster?

  • 1.  How to extend the 30-second session timeout for ODM on a Kubernetes/OpenShift cluster?

    Posted Tue June 22, 2021 07:34 AM

    How can the default session timeout of 30 seconds be increased when ODM is deployed into an OpenShift cluster?

    We have rulesets that can take up to 2 minutes to execute. When an HTDS service is used to test a ruleset, the call times out every time at 30 seconds with the error "504 Gateway Time-out".

    How can the HTDS timeout be increased to avoid this error?



    #OperationalDecisionManager(ODM)
    #Support
    #SupportMigration


  • 2.  RE: How to extend the 30-second session timeout for ODM on a Kubernetes/OpenShift cluster?
    Best Answer

    Posted Tue June 22, 2021 07:44 AM

    ODM HTDS does not have a concept of a session timeout.

    When ODM is installed into a Kubernetes cluster, the session timeout for all client requests is managed by the routes that are created. Routes are load-balanced through the cluster's proxy gateway (for example, the OpenShift HAProxy router, haproxy.router.openshift.io) or an external load balancer such as F5, if one is deployed as a front-end gateway.

    Routes created within OpenShift have a timeout that is set to 30 seconds by default.

    If the service behind the route, such as an HTDS service, takes longer than 30 seconds to respond, the OpenShift router closes the connection and returns a 504 Gateway Time-out response.

    The following response is returned after 30 seconds:

    <html>
    <body>
    <h1>504 Gateway Time-out</h1>
    The server didn't respond in time.
    </body>
    </html>

    This error occurs even when the calling service's own timeout is set to a value higher than 30 seconds.
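    Before changing anything, it can help to confirm what timeout, if any, is already set on the route. A minimal check, assuming the oc CLI is logged in to the cluster and <your_decisionservice_route> is replaced with the actual route name:

    # oc get route <your_decisionservice_route> -o jsonpath='{.metadata.annotations.haproxy\.router\.openshift\.io/timeout}'

    An empty result means no annotation is set, so the router's 30-second default applies.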

    One possible workaround when the load balancer is OpenShift's HAProxy router is to edit the routes that were created for the ODM DecisionService pods, as follows:

    First, back up the existing route to a YAML file:

    # oc get route <your_decisionservice_route> -o yaml > <your_decisionservice_route>-bkp.yaml

    # oc edit route <your_decisionservice_route>

    Create the annotations: section under metadata:, or add the value below if the section already exists, as in the following example for a 120-second timeout:

    metadata:
      annotations:
        haproxy.router.openshift.io/timeout: 120s
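    Alternatively, the same annotation can be applied non-interactively with oc annotate (a minimal sketch using the same placeholder route name; --overwrite replaces the value if the annotation already exists):

    # oc annotate route <your_decisionservice_route> haproxy.router.openshift.io/timeout=120s --overwrite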

    Changing the timeout value on a route affects all running services that use that route, so the change should be planned for a time window when a brief service interruption will not affect end users.
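    After the change, a long-running call can be used to verify that the 504 no longer occurs. A hedged sketch using curl; the route host, RuleApp/ruleset path, and request.json payload are placeholders to replace with your actual HTDS endpoint and request:

    # curl -k -s -o /dev/null -w "status=%{http_code} time=%{time_total}s\n" -m 180 -X POST -H "Content-Type: application/json" -d @request.json "https://<route_host>/DecisionService/rest/v1/<ruleapp>/<ruleset>"

    A 200 status with a total time above 30 seconds confirms the router is no longer closing the connection at the old default.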



    #OperationalDecisionManager(ODM)
    #Support
    #SupportMigration