There are a few different ways of integrating IIS applications - the best approach really depends on how they operate. Many moons ago we developed an integration guide for this, written against an earlier version of our on-premises technology. The guide is still around, and some (but not all) of the use cases are still configurable (for example, we don't support WS-Federation on ISV SaaS, but we do on IBM Security Verify Access on-premises). In case it helps, you can find it here: https://www.ibm.com/support/pages/ibm-security-access-manager-microsoft-applications
Generally speaking though, if you're using IBM Security Verify SaaS as a centralised IDP (and not IBM Security Verify Access on-premises), then your best bet is to find something for your IIS applications that understands OIDC, similar to what you've done with NGINX, and work toward your application logic from there. We have a very similar container-based proxy that does this, called IBM Application Gateway (aka IAG), which is designed for heritage applications. It is a modern, containerized version of WebSEAL (the long-standing name for our on-premises web reverse proxy).
It isn't strictly an ingress server in the true Kubernetes sense, but it can be deployed (optionally with an operator) alongside one or more applications. The idea is that it speaks OIDC to an OP such as ISV SaaS, then passes identity information downstream to applications, typically via HTTP headers, in many different formats including plain text, JWT, Kerberos Constrained Delegation, etc.
Most likely, Kerberos constrained delegation is what you'd want to use to authenticate to IIS, and IAG supports it natively. It can be a little tricky to set up, but once you get through the technical details it works well.
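To make that concrete, here is a rough, unofficial sketch of what an IAG configuration for this pattern could look like. It is illustrative only: the exact key names (particularly the identity_headers block) are my assumptions and should be checked against the IAG configuration reference, and the discovery endpoint, client credentials and backend host are placeholders.

# Illustrative sketch only - verify key names against the IAG configuration reference
version: "23.04"

identity:
  oidc:
    # ISV SaaS as the OIDC provider (placeholder - use your tenant's discovery URL)
    discovery_endpoint: https://idp.example.com/.well-known/openid-configuration
    client_id: my-client-id
    client_secret: my-client-secret

resource_servers:
  - path: /iis-app
    connection_type: tcp
    hosts:
      - iis-backend.example.com:80
    # Pass the verified identity downstream as plain HTTP headers; for native
    # IIS authentication, Kerberos constrained delegation would be configured
    # here instead (see the IAG documentation for the exact stanza).
    identity_headers:
      attributes:
        - attribute: AZN_CRED_PRINCIPAL_NAME
          header: iv-user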
Another alternative is to add an authentication plugin/module to the IIS applications themselves that consumes the identity from HTTP headers. It depends on what flexibility you have here.
Historically, how have your IIS applications been authenticated in the past?
Original Message:
Sent: Thu January 04, 2024 04:05 PM
From: Don Babcock
Subject: Business need for Secondary Interface in Reverse Proxy
Thanks for your comments Jens. I just mentioned NGINX as a specific and popular web server/proxy product that we happen to use. Our enterprise only recently adopted IBM Verify as a standalone product for IdP consolidation (the primary goal is a single IdP "source of truth" that combines the disparate ADFS/Azure directories from recent mergers/acquisitions). Whether you use IBM's WebSEAL or some other web server/proxy product, the main point is shifting authentication (identity verification) "left" to put it before all of your protected workloads. NGINX Plus just happens to be the product we use for ingress to MOST of our containerized workloads. No reason it couldn't be WebSEAL or Apache or any other similar product; it just happens that NGINX has a solution that works well with all of the major IdPs, including IBM Verify. The key is that whatever you use can handle the OIDC stuff to the "left" of your protected workloads. We've got a long way to go yet on this journey, as a lot of our stuff is Microsoft IIS and I am not aware of any IBM Verify integration that is supported for IIS. Let me know if there is one. Given the enterprise choice of IBM Verify, it would greatly benefit us to incorporate it into the legacy apps currently running in IIS.
NGINX is also quite popular with those that have made the move to containerization because a TON of the literature on that topic features NGINX for ingress. The mistake I see a lot of folks making is trying to have the app itself handle the OIDC stuff, which means that every developer in every environment has to figure out how to do that with whatever IdP they have chosen. That ends up being an albatross in maintenance costs. The Java folks have to figure it out, the CFML folks have to figure it out, the .NET folks have to figure it out (enter your favorite dev language here) and they ALL have to stay current as the IdP product evolves. If, OTOH, you shift authentication left, then you take all that effort/disparity off the table because it all gets done BEFORE the containers, and those can then be written as needed in whatever environment suits without worrying about the IdP handshaking. Saves a lot of learning curve and allows for single-point maintenance/auditing of the authentication stuff, which makes the compliance folks happy too 🙂
I need to look into WebSeal. Just hadn't heard of it before now.
Thanks for your response!
-dB
Original Message:
Sent: 1/4/2024 10:51:00 AM
From: Jens Petersen
Subject: RE: Business need for Secondary Interface in Reverse Proxy
Hello Don,
That's a good point, and true - WebSEAL or VA could also take on that job. But you're right that if you want to use all of the VA features, it needs deep knowledge for customization.
Anyway, I think this is more about running VA as a containerized version rather than as a virtual appliance or hardware. What I wonder a bit is why it would take the burden off your developers - WebSEAL as ingress does exactly what NGINX does and will usually send a bearer token or headers to the backend apps.
------------------------------
Jens Petersen
Original Message:
Sent: Wed January 03, 2024 03:38 PM
From: Don Babcock
Subject: Business need for Secondary Interface in Reverse Proxy
The mention of "reverse proxy" in this post caught my attention. FWIW, we are using NGINX Plus as the "reverse proxy/ingress controller" for our containerized (Docker) workloads. It turns out that NGINX Plus has published configuration information whereby their proxy can be easily configured to use most OIDC authentication providers (Ping/Okta/Auth0 etc.). I followed their recipe for one of these but adapted it to point to the IBM Verify endpoints, and it works very well. The net effect I was after is to have the ingress controller handle the authentication tasks (redirects/JWT validation/token expiration/renewal etc.) rather than having every upstream app deal with it. The net result is that the container service doesn't have to "sweat" or know OIDC handshaking. The traffic only gets there and delivers the JWT payload for consumption if authentication is established. Also, I don't have to worry about any of the container internals (networks/addresses and the like) because all of that remains internal to the containers. This decoupling of authentication from the application is a HUGE maintenance labor saver. In addition, the proxy is designed to do all this VERY efficiently, much more so than you can do in your app code. If you are using containers, there's a good chance that you are already familiar with NGINX. This OIDC feature requires the Plus version, which has a number of other performance enhancements as well. FWIW, I'm a believer in letting the ingress controller handle all the authentication. Your app still has to provide whatever means of access control to manage what the authenticated user can do, but that's always app-specific anyway and it's appropriate work for each app. AUTHENTICATION (determining that you are who you say you are), however, is the same regardless of application. Those are separate concerns, and if you are trying to leverage an authentication product (IBM Verify) to manage ACCESS control then it gets very messy. I strongly recommend separating those concerns entirely as they are inherently orthogonal. In this case, our Docker workloads never "see" unauthenticated traffic from the proxy. They only receive properly authenticated HTTP requests with the identity already parsed out of the JWT and placed in the request header. It's simply elegant. Let your ingress controller take care of AUTHENTICATION. It's a load off the shoulders of your developers.
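As a rough sketch of this pattern (placeholders only, not the actual configuration in use), an NGINX Plus location that validates the JWT at the edge and hands the identity to the upstream container in a header could look something like this; the full OIDC redirect/refresh handling from the NGINX reference implementation is omitted:

# Sketch only - upstream name, header name and JWKS URL are placeholders
location /app/ {
    # Reject any request without a valid JWT before it reaches the upstream
    auth_jwt "protected app";
    auth_jwt_key_request /_jwks_uri;

    # Forward the already-verified identity to the container as a plain header
    proxy_set_header X-Authenticated-User $jwt_claim_sub;
    proxy_pass http://app-container:8080;
}

# Fetch the IdP's signing keys (use your tenant's JWKS endpoint here)
location = /_jwks_uri {
    internal;
    proxy_pass https://idp.example.com/jwks;
}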
Don Babcock, P.E.
------------------------------
Don Babcock
Original Message:
Sent: Tue January 02, 2024 08:42 AM
From: Narayan Verma
Subject: Business need for Secondary Interface in Reverse Proxy
Hi Jens, where do I see the interfaces of the containers in LMI?
Internal IP of the WebSEAL container (container name = ISVAWRPRP1) seems to be 172.20.0.6 while for the LMI (container name = ISVACOFIG) it seems to be 172.20.0.2. They both have a Gateway of 172.20.0.1. This is the information I got from Docker Desktop.
When I tried to use 172.20.0.6 in the LMI as the network-interface I got the same error - Error: DPWAP0073E An IP address which is not valid was located in the supplied entry: 172.20.0.6
From a Docker installation perspective, what's really considered a secondary interface? Do I need to have another instance/container running for the Reverse Proxy/WebSEAL, just like ISVAWRPRP1? That would have a separate internal IP and external port on the host, and I'm not sure the LMI would accept that IP address either. How does the LMI validate the IP address of the WebSEAL that we enter in the config file? Are there any network settings to register valid RP addresses in the LMI? I'm really trying to understand the concept of a secondary interface in general for the final deployment, and for a Docker installation in particular for functional/lower-lane testing.
Also, the port mapping of the LMI and WebSEAL containers is as below:
"Ports": { "443/tcp": null, "9443/tcp": [ { "HostIp": "192.168.1.182", "HostPort": "3000" } }
"Ports": { "9080/tcp": null, "9443/tcp": [ { "HostIp": "192.168.1.182", "HostPort": "4000" } ] }
Also, what is Portainer? Sorry, too many questions...
Thanks!
------------------------------
Narayan Verma
Original Message:
Sent: Tue January 02, 2024 05:29 AM
From: Jens Petersen
Subject: Business need for Secondary Interface in Reverse Proxy
Hi Narayan,
Sorry - I read it in my phone mail and answered without seeing the trail. I haven't worked with the container version yet. From Jon's explanation it looks like you need it for client authentication with certificates. Regarding the ports: usually your container IP/port is mapped to an external IP/port by Docker, where the port can be, but doesn't have to be, the same. So it depends on your Docker config, and this could also explain the IP problem - the internal network of your Docker containers is probably different from the host network. In the LMI you can see the interfaces of your container, so maybe have a look there at what the internal IP of your WebSEAL container is, or use the Docker tools. You can also use Portainer as a UI if you feel uncomfortable with the Docker CLI.
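As a hypothetical illustration of that mapping (the image name and addresses are placeholders, not the actual setup), publishing a container's internal HTTPS port on a different host port looks like:

# Hypothetical: publish container port 443 on host 192.168.1.182, host port 4000
docker run -p 192.168.1.182:4000:443 <reverse-proxy-image>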
------------------------------
Jens Petersen
Original Message:
Sent: Mon January 01, 2024 05:47 PM
From: Narayan Verma
Subject: Business need for Secondary Interface in Reverse Proxy
Hi Jens, LMI and WebSEAL are running on the same computer right now using Docker. LMI is at https://lmi.iamlab.ibm.com:3000 and the RP/WebSEAL is at https://www.iamlab.ibm.com:4000/. Both are on the same computer at IP 192.168.1.182. I am not trying to install anything right now; all the Docker images are already installed. How can I get the secondary interface configured correctly so that the "Prompt as needed" option works for cert-based authentication? That's where I'm struggling.
------------------------------
Narayan Verma
Original Message:
Sent: Mon January 01, 2024 02:55 PM
From: Jens Petersen
Subject: Business need for Secondary Interface in Reverse Proxy
Usually you should have the management interface in a separate network. The warning just announces an overlap. You can use the override button to install anyway if you are sure about it.
Be aware that the management interface exposes some critical services which shouldn't be exposed to a DMZ, which is where the WebSEAL interfaces are usually placed.
------------------------------
Jens Petersen
Original Message:
Sent: Fri December 29, 2023 09:59 AM
From: Narayan Verma
Subject: Business need for Secondary Interface in Reverse Proxy
What are the guidelines for configuring a secondary interface? I am not able to save the IP address of the primary machine hosting the WebSEAL/reverse proxy. When I try to save the config in the LMI I get this error:
Error: DPWAP0073E An IP address which is not valid was located in the supplied entry: 192.168.1.182
for the following entry:
network-interface = 192.168.1.182 under [server]
Error: DPWAP0073E An IP address which is not valid was located in the supplied entry: network-interface=192.168.1.182;https-port=4444;certificate-label=WebSEAL-Test-Only;accept-client-certs=required;always-neg-tls=yes;use-secondary-listener=yes
for the following entry:
interface1 = network-interface=192.168.1.182;https-port=4444;certificate-label=WebSEAL-Test-Only;accept-client-certs=required;always-neg-tls=yes;use-secondary-listener=yes under [interfaces]
Currently I am running a container-based installation of ISAM 10.0.0.6. My reverse proxy container is running at https://192.168.1.182:4000/ or https://www.iamlab.ibm.com:4000, and 192.168.1.182 is the IP address of my primary desktop hosting Docker Desktop.
Also, can my primary https port remain 443 even though I am accessing WebSEAL at port 4000, as in https://www.iamlab.ibm.com:4000, or should I change it to 4000? This is currently how I have it:
[server]
https = yes
https-port = 443
Sorry if this is a duplicate of the "browser not prompting for certificate" thread that I started this month. I wanted to see if my issue is related to the secondary interface/port.
Thanks!
------------------------------
Narayan Verma
Original Message:
Sent: Tue January 05, 2021 04:36 AM
From: Jon Harry
Subject: Business need for Secondary Interface in Reverse Proxy
Secondary interfaces were mainly added in support of Virtual Host Junctions. Before it was possible to present a different server certificate based on the incoming connection, it was common to use secondary interfaces (different IP addresses) for each Virtual Host Junction. This is much less common now that different certificates can be presented per connection, because the target host is available as part of TLS negotiation (SNI).
The main time secondary interfaces are used now is to support "prompt-as-needed" client certificate authentication on TLS sessions.
When "prompt-as-needed" function was first introduced, a TLS session could be initially set up with server-side certificate only and then later, as needed, be re-negotiated in order to require a client certificate for authentication.
Changes in browser functionality (led by Chrome) mean that it is now not always possible to force the re-negotiation of a TLS session that has already been established. I think there is a security reason for this. In order to support "prompt-as-needed" now, there is a requirement to set up a "secondary interface" (which has TLS set to "required") and configure this as part of the prompt-as-needed setup. When client certificate authentication is triggered, the session is redirected to the secondary interface so that client certificate authentication can be performed.
Note: a secondary interface for the Reverse Proxy doesn't have to be a different IP address - it could just be a different port.
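As an illustration only, a secondary-interface entry of the kind described here might look like the sketch below. The attribute names are taken from the [interfaces] entry earlier in this thread; the address is a placeholder and must be one that the reverse proxy instance actually owns.

[interfaces]
# Secondary listener on a different port with client certificates required,
# used as the redirect target for prompt-as-needed
interface1 = network-interface=<rp-ip-address>;https-port=4444;certificate-label=WebSEAL-Test-Only;accept-client-certs=required;always-neg-tls=yes;use-secondary-listener=yes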
Jon.
------------------------------
Jon Harry
Consulting IT Security Specialist
IBM
Original Message:
Sent: Mon January 04, 2021 10:00 AM
From: Joao Goncalves
Subject: Business need for Secondary Interface in Reverse Proxy
Why would you need a secondary interface defined on a Reverse Proxy?
Is it for load balancing? For increasing bandwidth? I'm not sure if we can create a link aggregation in ISAM, but could that be a solution?
------------------------------
Joao Goncalves
Pyxis, Lda.
Sintra
+351 91 721 4994
------------------------------