IBM Verify

Join this online user group to communicate across Security product users and IBM experts by sharing advice and best practices with peers and staying up to date regarding product enhancements.

  • 1.  DSC, Kubernetes, multiple Webseal instances

    Posted Fri November 12, 2021 03:31 AM
    Hi,
    We've been struggling with an external load balancer (BigIP) and an Ingress (NGINX-based) setup in our Kubernetes environment to access our WRP.
    When we load balance to the Kubernetes Ingress solution in the cluster, everything works as expected vis-à-vis ISVA and the WRP. But, for some yet-to-be-determined reason, one (and only one) of our backend applications fails to acknowledge/keep/identify some HTTP session information it needs to function correctly.
    "As always" we're running out of time and have found a workaround -- load balancing from BigIP directly to the WRP NodePort. (There should be no difference between the two solutions, but obviously one works and the other fails.)
    Using the NodePort solution unfortunately breaks the use of Pod replicas of the WRP for failover and/or scaling; the NodePort service will not distribute traffic to more than one replica of the WRP.
    So for this we have yet another workaround -- two WRP instances, WRP1 and WRP2, with manually set up "identical" configurations.
    BigIP round-robin balances over the two WRP NodePorts, so we have at least solved the failover issue, but with a re-login if one WRP fails.

    So after this (lengthy?) background information the question is:

    Can we use DSC over the two separate WRP instances (not replicas) to achieve session failover?
    Rgds

    ------------------------------
    Anders Domeij
    CGI Sweden AB
    ------------------------------


  • 2.  RE: DSC, Kubernetes, multiple Webseal instances

    Posted Sun November 14, 2021 06:43 PM
    Anders,
     
    I am a little bit surprised that when you have multiple replicas the NodePort service does not load-balance between the two replicas.  Anyway, if you want to enable failover between two different WebSEAL instances (whether they are Kubernetes replicas or two separately configured instances), you have three different options:
     
    1. failover cookie
    2. DSC
    3. Redis
    I personally prefer Redis in a containerized environment, as it performs better and is more standard.
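
    For reference, a rough sketch of where these options live in the WebSEAL configuration file (stanza and entry names should be checked against the configuration reference for your ISVA level; the key file name is a placeholder):

    # Option 1: failover cookie -- the same key file on every WRP instance
    [failover]
    failover-auth = https
    failover-cookies-keyfile = failover.key

    # Option 2: Distributed Session Cache
    [session]
    dsess-enabled = yes

    # Option 3: Redis is configured through its own stanzas in recent ISVA
    # levels -- see the product documentation for the exact entries.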
     
     
    Scott A. Exton
    Senior Software Engineer
    Chief Programmer - IBM Security Verify Access

    IBM Master Inventor

     
     
     





  • 3.  RE: DSC, Kubernetes, multiple Webseal instances

    Posted Mon November 15, 2021 02:57 AM
    Hi Scott,

    Thanks for the answer!
    > I am a little bit surprised that when you have multiple replicas the nodeport service does not load-balance between the two replicas
    As am I :-)
    It's an empirical observation: with 2 replicas of one WRP instance, I see the NodePort service "locking" to the first endpoint it finds and sending all traffic to that one. There might be some "smartness" in the NodePort handling that I am not aware of, but round-robin load balancing does not seem to be part of it.
    All documentation I have found regarding Kubernetes and external load balancing favors/mandates the Ingress approach.
    But, like I said, one (essential) backend service unfortunately misbehaves when we use the Ingress solution.

    Rgds

    ------------------------------
    Anders Domeij
    CGI Sweden AB
    ------------------------------



  • 4.  RE: DSC, Kubernetes, multiple Webseal instances

    Posted Mon November 15, 2021 03:54 AM
    The Ingress solution should be used for the WRP containers (with a Service definition), and from the WRP containers to the back-end systems one can use Service definitions together with junctions.  On the Ingress controller (NGINX) there are multiple configuration options (see e.g. https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-set-headers).
    Below is an Ingress snippet from a similar setup (VPC load balancer, Ingress controller). It uses a session-affinity cookie (albsc) to stick to the chosen reverse-proxy Pod:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingressresource
      annotations:
        kubernetes.io/ingress.class: "public-iks-k8s-nginx"
        nginx.ingress.kubernetes.io/affinity: "cookie"
        nginx.ingress.kubernetes.io/session-cookie-name: "albsc"
        nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
        nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
        nginx.ingress.kubernetes.io/proxy-ssl-secret: "app-ssl-secret"
    spec:
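
    Also worth checking for the NodePort behaviour mentioned earlier: the Service's sessionAffinity and externalTrafficPolicy entries. A minimal sketch (Service name, labels and port numbers are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: isva-wrp-nodeport          # assumed name
    spec:
      type: NodePort
      selector:
        app: isva-wrp                  # assumed label
      sessionAffinity: None            # ClientIP pins each client to a single Pod
      externalTrafficPolicy: Cluster   # Local restricts a node to its local Pods
      ports:
      - port: 9443
        targetPort: 9443
        nodePort: 30943                # assumed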

    Regards
    Serge Vereecke
    IBM Software

    ------------------------------
    Serge Vereecke
    ------------------------------



  • 5.  RE: DSC, Kubernetes, multiple Webseal instances

    Posted Mon November 15, 2021 04:42 AM
    Edited by Anders Domeij Mon November 15, 2021 04:43 AM
    Hi & Thanks,
    We use Rancher as the Kubernetes admin "console".
    Looking at the Ingress we set up we added the following

    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/session-cookie-path: /

    and relied on the Rancher Ingress defaults for the rest (i.e. cookie name and SSL secret settings),
    with the following "spec" (correctly formatted as YAML):

    spec:
      rules:
      - host: dns.external.name.com
        http:
          paths:
          - backend:
              service:
                name: isva-1-3-isvawrp-webseal
                port:
                  number: 9443
            pathType: ImplementationSpecific

    status:
      loadBalancer:
        ingress:
        - ip: 164.9.163.1
        - ip: 164.9.163.2
        - ip: 164.9.163.3

    where 164.9.163.1/2/3 are the Kubernetes worker nodes in the Cluster available for WRP instances.

    Rgds

    ------------------------------
    Anders Domeij
    CGI Sweden AB
    ------------------------------