IBM Security Verify

  • 1.  WRP 2 replicas in the cluster

    Posted Mon October 25, 2021 11:41 AM
    Hi all,
    I am a bit confused (and basically a Kubernetes newbie).
    I installed version 10.0.2.0 of ISVA into my Kubernetes cluster, which has 3 worker nodes.
    LDAP and database are external, so there are no containers for that functionality.
    After the installation I have 1 isva-config, 1 isva-runtime and 1 isva-wrp pod up and running, doing what they are supposed to do.
    Access to the WRP is through the NodePort service (see page 1 in the attached ppt).
    The NodePort is 'sprayed' across all 3 nodes by an external load balancer.

    I increased the replica setting for the WRP from 1 to 2, and a new instance was fired up on a different worker node (anti-affinity saw to that :-) ).
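    For reference, the change boils down to bumping spec.replicas on the WRP deployment. A rough sketch of what that looks like, assuming the WRP runs as a plain Deployment named isva-wrp (the labels and image below are placeholders rather than the exact values from my install):

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: isva-wrp
     spec:
       replicas: 2                      # increased from 1 for redundancy
       selector:
         matchLabels:
           app: isva-wrp                # placeholder label
       template:
         metadata:
           labels:
             app: isva-wrp
         spec:
           affinity:
             podAntiAffinity:           # keeps the two replicas on different worker nodes
               requiredDuringSchedulingIgnoredDuringExecution:
               - labelSelector:
                   matchLabels:
                     app: isva-wrp
                 topologyKey: kubernetes.io/hostname
           containers:
           - name: isva-wrp
             image: isva-wrp:10.0.2.0   # placeholder image reference
             ports:
             - containerPort: 9443      # WRP HTTPS port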

    For some reason there is never any traffic on both WRPs at the same time (slide 2 in the ppt); it seems like the NodePort service chooses one of the WRPs on Node 2 or Node 3 (seemingly at random) and then sends all traffic to that instance.
    Is this working as designed from the NodePort perspective? I was under the impression that Node 2 and Node 3 would use their 'local' WRP, and that Node 1 (not having a local WRP) would redirect traffic to Node 2 and/or Node 3.

    If this is normal NodePort behavior, what is the recommended way to have 2 WRPs running at the same time, basically for redundancy? Nodes 1, 2 and 3 are not all in the same physical location.

    Hope my question is clear.
    Rgds
    Anders

    ------------------------------
    Anders Domeij
    CGI Sweden AB
    ------------------------------

    Attachment(s)
    Multiwebseal.pptx (pptx, 37 KB)


  • 2.  RE: WRP 2 replicas in the cluster

    Posted Wed October 27, 2021 06:27 AM
    Hi Anders,

    I'm not an expert on Kubernetes services either.  Looking around, I found this page which explains that the configuration of the kube-proxy can have an impact on how service routing is implemented: https://kubernetes.io/docs/concepts/services-networking/service/

    A NodePort Service has a ClusterIP Service behind it.  I think the NodePort simply directs requests arriving on any node's port to the ClusterIP Service, and the ClusterIP Service then determines which pod should service the request.
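    To make that concrete, the NodePort Service in front of the WRP would look something like this (the service name and port are borrowed from the Ingress example further down; the selector and nodePort value are just illustrative):

     apiVersion: v1
     kind: Service
     metadata:
       name: isvawrprp1
     spec:
       type: NodePort
       # The default externalTrafficPolicy is Cluster, so a request arriving on the
       # NodePort of any node may be forwarded to a WRP pod on any other node.
       # Setting it to Local restricts each node to its own local WRP pods instead.
       externalTrafficPolicy: Cluster
       selector:
         app: isva-wrp                  # illustrative pod label
       ports:
       - name: https
         port: 9443                     # ClusterIP port (matches the Ingress backend port)
         targetPort: 9443               # WRP container port
         nodePort: 30443                # example value; normally auto-assigned from 30000-32767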

    I'm not surprised that the NodePort is not preferring the local pod on the node... I don't think it works that way. I am vaguely surprised that it is "sticky" to a single pod... my expectation would have been that it would spray requests across all pods selected by the Service.

    I know that it's possible to set up session affinity for Services (at least in some configurations), but I've only ever been able to create source-IP-based affinity.  Something that did client-based affinity would need to terminate SSL so that it could add a cookie or similar.
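    For completeness, that source-IP affinity is just a field on the Service itself. A minimal sketch (selector and timeout are illustrative):

     apiVersion: v1
     kind: Service
     metadata:
       name: isvawrprp1
     spec:
       type: NodePort
       selector:
         app: isva-wrp                  # illustrative pod label
       sessionAffinity: ClientIP        # pin each client source IP to a single pod
       sessionAffinityConfig:
         clientIP:
           timeoutSeconds: 3600         # affinity forgotten after an hour of inactivity
       ports:
       - port: 9443
         targetPort: 9443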

    An Ingress should be able to provide IP- or cookie-based affinity.  Something like this:

     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
       name: iamlab-isvawrp-rp1
       annotations:
         # Connect to the backend (WRP) over HTTPS rather than plain HTTP
         nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
         # Cookie-based session affinity so each client sticks to one WRP pod
         nginx.ingress.kubernetes.io/affinity: "cookie"
     spec:
       tls:
       - hosts:
         - www.iamlab.ibm.com
       rules:
       - host: www.iamlab.ibm.com
         http:
           paths:
           - path: /
             pathType: Prefix
             backend:
               service:
                 name: isvawrprp1
                 port:
                   number: 9443

    You have to have an ingress controller enabled in your Kubernetes cluster for this to work.
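    Depending on the version of the NGINX ingress controller, you may also need to add spec.ingressClassName: nginx (or, on older versions, the kubernetes.io/ingress.class annotation) so that the controller actually picks up the Ingress resource; the class name nginx is an assumption about how your controller is installed.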

    I wonder if someone else on the community can comment on the use of an Ingress or a load balancer to get more control over load-balancing and stickiness.

    Jon.

    ------------------------------
    Jon Harry
    Consulting IT Security Specialist
    IBM
    ------------------------------