Security Global Forum


Running ISAM on IBM Cloud

By Shane Weeden posted Mon January 08, 2018 12:00 AM

  

ISAM 9.0.4, released in December 2017, introduces several new capabilities for ISAM customers. One of these is the ability to run ISAM in containerized environments using Docker.

 

This article is all about showing you how to get IBM Security Access Manager (ISAM) running on a small Kubernetes cluster on IBM Cloud. With a bit of pre-reading and the right tools in place I believe you can get ISAM up and running on IBM Cloud in minutes. Read on if you want to know how to do that.

Credits: I would like to acknowledge Scott Exton, Jon Harry, Scott Andrews and Tiffany Guan who have directly or indirectly contributed content to this article.

 

There is some pre-requisite knowledge that you should have to get the most out of this how-to guide:

  • It will help a lot if you know what ISAM is, and have used it before in physical or virtual appliance form.
  • You should have basic shell scripting experience in bash, understand environment variables and be comfortable working on a Linux command line.
  • You should know what Docker is – what images and containers are – and should have a local docker runtime environment. For a getting started guide, see: https://docs.docker.com/get-started/
  • You should work through the Kubernetes tutorials (https://kubernetes.io/docs/tutorials/) so that you understand the fundamentals of Kubernetes – clusters, deployments, pods, containers, services and secrets.
  • You should have a paid or trial account on IBM Cloud (https://www.ibm.com/cloud/) – a free trial is available and we will only be using the features available in the free trial in this guide.

Another reference that is very useful for getting started with ISAM on Docker is this video from Scott Exton: https://www.youtube.com/watch?v=hn-COJwNiyY

The tools you need

In this section we’re going to go through the set of command-line tools that you will need to work with containerized ISAM and the IBM Cloud. Our example uses an Ubuntu Linux base operating system, and we recommend either Linux or a Mac for simple scripting and command line tool use.

Besides just the installation and basic verification that the tools are working, we will also show you some key commands that you need to discover information about your Kubernetes cluster, including finding your cluster’s public IP address so that you may expose your ISAM server to the internet.

Standard security tools

The examples used in this article make use of the following standard tools found on most Linux distributions:

  • openssl
  • curl
  • ldapsearch (This is actually optional – ldapsearch is just used for deployment verification)

Docker

Obtain and install Docker for your operating system. Details are here:
https://www.docker.com/get-docker

Be sure that you can run the docker command line, including listing images:

ubuntu$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

Docker Store and Docker Hub

There are two container registries provided by Docker: Docker Hub and Docker Store. Docker Hub contains freely accessible images (no authentication required) – this is where the OpenLDAP and Postgresql images used in this article reside. Docker Store has controls for accepting terms and conditions, and optionally for fees. Credentials are needed to access images in Docker Store, and the terms and conditions for a particular image must be accepted before you can pull that image.

The core ISAM image resides in Docker Store. There is no fee for this image, but you do have to register a Docker account, accept the terms and conditions for the ISAM image and “Proceed to Checkout”. This ensures you have correctly subscribed to the image so that you can download it from deployments in your Kubernetes cluster.

For details on the ISAM image, and to register and subscribe, see:
https://store.docker.com/images/ibm-security-access-manager
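
Once subscribed, you can optionally sanity-check your entitlement from your local Docker runtime before deploying to Kubernetes. The image name and tag below are assumptions – use the exact values shown on the Docker Store page:

ubuntu$ docker login
<enter your Docker credentials>
ubuntu$ docker pull store/ibmcorp/isam:9.0.4.0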

 

IBM Cloud Command Line

Following the instructions here, download the bx command-line tool for IBM Cloud:
https://console.bluemix.net/docs/cli/reference/bluemix_cli/get_started.html#getting-started

Verify that you can log in to your Bluemix account with the bx command line tool. As an IBM employee, I use the -sso option; if you have a standard username/password based IBM ID login, just perform “bx login”:

ubuntu$ bx login -sso
<follow the prompts>

You can also set up long-lived API key access for bx login if you are looking at scripted solutions for the steps shown in this article.
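
For example (a sketch – verify the exact flags with bx help; the key name and file are arbitrary):

ubuntu$ bx iam api-key-create isam-automation --file isam-key.json
ubuntu$ bx login --apikey @isam-key.json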

 

Then download and install the container service plugin:
https://console.bluemix.net/docs/containers/cs_cli_install.html#cs_cli_install

ubuntu$ bx plugin install container-service -r Bluemix

Make sure you have created a container service cluster (i.e. a Kubernetes cluster). Typically you do this in the IBM Cloud UI, and it will take some minutes for your cluster to be provisioned:
https://console.bluemix.net/containers-kubernetes/home/clusters


Verify that you are able to list your cluster using the command line tool, and that the state is “normal”. This indicates your cluster is ready to use. It can take some time (usually 15-20 mins) for your cluster to be provisioned:

ubuntu$ bx cs clusters
OK
Name        ID                                 State    Created       Workers   Datacenter   Version
mycluster   7f205dfbacc94bc089eb7b64855b7793   normal   1 month ago   1         hou02        1.7.4_1504

In the examples used in this article we will use the cluster name mycluster.

Find your cluster’s public IP address. Your cluster must be fully provisioned before this is available (this will be used later):
ubuntu$ bx cs workers mycluster
OK
ID                                               Public IP   Private IP    Machine Type State  Status Version
kube-hou02-pa7f205dfbacc94bc089eb7b64855b7793-w1 50.23.5.169 10.77.223.247 free         normal Ready  1.7.4_1503

In this example my public IP is 50.23.5.169. We recommend that you put this IP in your local machine’s /etc/hosts file and give it a hostname. We will do just that, and refer to the host as myisam throughout the rest of this article.

In my /etc/hosts:

50.23.5.169 myisam
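
If you want to script this step, you can append the entry directly (substituting your own public IP):

ubuntu$ echo "50.23.5.169 myisam" | sudo tee -a /etc/hosts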

 

Kubernetes Command Line

Install kubectl – the Kubernetes command line management tool:
https://kubernetes.io/docs/tasks/tools/install-kubectl/

Obtain the kubectl configuration required for your Kubernetes cluster.

ubuntu$ bx cs cluster-config mycluster
OK
The configuration for mycluster was downloaded successfully. Export environment variables to start using Kubernetes.

export KUBECONFIG=/home/amwebpi/.bluemix/plugins/container-service/clusters/mycluster/kube-config-hou02-mycluster.yml

 

Be sure to run that export command (perhaps put it in your bash profile so it persists over logins) so that kubectl can be used to manage your cluster. Verify this with the command line:

ubuntu$ kubectl cluster-info
Kubernetes master is running at https://184.173.44.62:28126
Heapster is running at https://184.173.44.62:28126/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://184.173.44.62:28126/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://184.173.44.62:28126/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
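
If you script your environment setup, you can download the configuration and set KUBECONFIG in one step (a convenience sketch, assuming cluster-config prints the export line exactly as shown above):

ubuntu$ eval $(bx cs cluster-config mycluster | grep "^export")
ubuntu$ echo $KUBECONFIG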

Key Commands Summary

Here’s a set of useful commands using the command line tools. These are the commands we frequently find ourselves “going to” when working with IBM Cloud container services and ISAM.

Logging into IBM Cloud:
bx login [-sso]

Listing and switching between IBM Cloud regions:

bx regions

bx target -r [region-name]

Determining which plugins you have installed for the bx tool:

bx cs plugin list

Getting your list of clusters, and a specific cluster’s configuration for setting up kubectl:
bx cs clusters
bx cs cluster-config mycluster

Discovering your cluster’s public IP address:
bx cs workers mycluster

Listing Kubernetes deployments, pods, services and secrets (you must have set your KUBECONFIG environment variable from the “bx cs cluster-config <cluster_name>” output for kubectl to work):
kubectl get deployments
kubectl get pods
kubectl get services
kubectl get secrets

Further frequently used kubectl references are available here: https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Resources

The resources for this article are simple text files that can be found in this directory:

 

Deployment Architecture of Containerized ISAM

The following diagram depicts the various containers used in the ISAM environment, as well as the network ports used by the provided services. All of the container-to-container communication takes place over the internal Kubernetes cluster network. IP addresses on this network are automatically assigned by Kubernetes when a container is first started; however, we never really need to concern ourselves with the cluster network IP addresses, since all internal cluster communications are managed by Kubernetes services – which are exposed via an internal DNS service.

External communication via the internet (outside the private cluster network) is provided by the Kubernetes NodePort service, where each node proxies requests from the advertised port to the internal container port. In a standard Kubernetes environment the LoadBalancer service would be used, which allows requests to be load balanced between different pods; however, this type of service is only made available to paid IBM Cloud accounts. NodePorts have a restricted range, which is why you see the 30636, 30543 and 30443 ports in use.

 


The isamconfig container is used to manage the configuration of the ISAM environment. All configuration changes are made against this container and the configuration information, once published, is made available to the isamwebseal and isamruntime containers via the configuration Web service (i.e. the isamwebseal and isamruntime containers, when triggered, will pull the configuration information from the isamconfig container).

Really, the only external service that we need to publish for this article is the port to isamwebseal. The openldap port is only exposed to allow external tools (like ldapsearch, ldapmodify, etc.) access from your local machine – you can safely decide not to expose that if you wish. Similarly the isamconfig external port is only exposed to allow direct browser access to the ISAM administration console. We will show you an alternative way to connect to the administration console using Kubernetes port forwarding when we deploy that container.

In any case it’s very important to plan out your deployment topology. It can get a little confusing if you don’t write down your intended architecture before you start deploying containers.

 

ISAM Deployment

In this section we will walk through the Kubernetes deployments described in the above architecture. Along the way we will show you how to test that each step of the deployment has been completed successfully.

Deploying LDAP

The LDAP container we will use is available from Docker Hub. For detailed information on the container and all configuration options, see:
https://hub.docker.com/r/ibmcom/isam-openldap/

We will be using secure connections to the LDAP server, and as such we have to provision certificates for it to use. The certificate files are then bundled into a Kubernetes secret. You can create scripts to do all this very quickly, however in this guide we will walk you through the verbose process so you have a complete understanding of what is involved.

Create key and certificate files for OpenLDAP

openssl req -x509 -newkey rsa:4096 -keyout "ldap.key" -out "ldap.crt" -days 365 -subj "/C=US/ST=TX/L=Austin/O=IBM/CN=OpenLDAP" -nodes

This will create two files – ldap.key and ldap.crt. These filenames are needed, but the DN and lifetime of the cert can be anything you like. We need a second copy of the ldap.crt file to be used as a CA certs file for OpenLDAP:

cp ldap.crt ca.crt
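
Before bundling the files, you can verify the generated certificate’s subject and validity period:

ubuntu$ openssl x509 -in ldap.crt -noout -subject -dates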

Create a Diffie-Hellman parameter file for OpenLDAP

This command takes a little time, and again the filename here is important:

openssl dhparam -out "dhparam.pem" 2048

Create a Kubernetes secret

The Kubernetes secret contains the key files we have just created. The secret is mounted as a well-known directory (volume) in the OpenLDAP container:

kubectl create secret generic openldap-keys --from-file "ldap.crt" --from-file "ldap.key" --from-file "ca.crt" --from-file "dhparam.pem"
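
You can confirm the secret was created and contains all four files – the Data section of the output should list ca.crt, dhparam.pem, ldap.crt and ldap.key:

ubuntu$ kubectl describe secret openldap-keys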

 

Create the Kubernetes Deployment and Service

A Kubernetes YAML file (openldap.yaml) is used to create the deployment and service for ISAM OpenLDAP. The ISAM OpenLDAP image will be pulled directly from Docker Hub. You can optionally edit this file to change the configured passwords, volumes, nodePort, etc.; however, for this article we will use the example values provided.

Note that the example YAML file includes a NodePort for exposing the LDAP server on the internet. This is not strictly required, and is only done so that it is possible for you to test an LDAP search from outside the cluster. You can choose to comment out the nodePort entry of the service if you do not wish your LDAPS port to be exposed to the internet.
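
For orientation, the service section of openldap.yaml has roughly the following shape. This is a sketch only – the actual file provided with the article resources is authoritative, and the selector label is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: openldap
spec:
  type: NodePort
  ports:
  - port: 636        # LDAPS port inside the cluster
    targetPort: 636
    nodePort: 30636  # comment this out to avoid a fixed public port (use type ClusterIP to avoid exposure entirely)
  selector:
    app: openldap    # assumed label; must match the deployment's pod template labels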

Create the Kubernetes deployment and service from the YAML file:

kubectl create -f openldap.yaml

You should observe that the deployment and service are created. Check on the status of the deployment, and ensure that the pod status is “Running”:

ubuntu$ kubectl get pods
NAME                      READY STATUS  RESTARTS AGE
openldap-3757380541-g6q33 1/1   Running 0        1m

Check your service is deployed:

ubuntu$ kubectl get svc openldap
NAME     CLUSTER-IP     EXTERNAL-IP PORT(S)       AGE
openldap 172.21.211.197 <nodes>     636:30636/TCP 2m

We can use that service via the externalized nodePort to perform an ldapsearch over the internet to our LDAP server (remember myisam is a hosts file entry for your node’s public IP). On Linux, the LDAPTLS_REQCERT environment variable permits a TLS connection to LDAP with an untrusted certificate:

export LDAPTLS_REQCERT=allow
ldapsearch -H "ldaps://myisam:30636" -D cn=root,secAuthority=default -w Passw0rd -b "dc=ibm,dc=com" -s sub "(objectclass=*)"

 

Alternatively if you don’t want to test via the external NodePort (or did not expose LDAP via nodePort in the YAML file), you can exec the ldapsearch directly on the container itself:

kubectl exec -t openldap-3757380541-g6q33 -- ldapsearch -H "ldaps://localhost:636" -D cn=root,secAuthority=default -w Passw0rd -b "dc=ibm,dc=com" -s sub "(objectclass=*)"

 

Either way, you should see LDAP output containing a couple of ISAM entries.

Your LDAP server is now ready for use with ISAM.

Deploying Postgresql

The Postgresql container we will use is available from Docker Hub, just like the OpenLDAP container. For detailed information on the container and all configuration options, see:
https://hub.docker.com/r/ibmcom/isam-postgresql/

We will be using secure connections to the Postgresql server, and as such we have to provision certificates for it to use. The certificate files are then bundled into a Kubernetes secret. You can create scripts to do all this very quickly, however in this guide we will walk you through the verbose process so you have a complete understanding of what is involved.

Create key and certificate files for Postgresql

openssl req -x509 -newkey rsa:4096 -keyout "postgres.key" -out "postgres.crt" -days 365 -subj "/C=US/ST=TX/L=Austin/O=IBM/CN=Postgresql" -nodes

This will create two files – postgres.key and postgres.crt. The Postgresql container actually needs the contents of both these files put together into a single file and made available via a Kubernetes secret. Create this file, called server.crt as follows:

cat postgres.key postgres.crt > server.crt

Create a Kubernetes secret

The Kubernetes secret contains the server.crt file. The secret is mounted as a well-known directory (volume) in the Postgresql container:

kubectl create secret generic postgresql-keys --from-file "server.crt"

 

Create the Kubernetes Deployment and Service

A Kubernetes YAML file (postgresql.yaml) is used to create the deployment and service for ISAM Postgresql. The image will be pulled directly from Docker Hub. You can optionally edit this file to change the configured passwords, volumes, etc.; however, for this article we will use the example values provided.

Create the Kubernetes deployment and service from the YAML file:

kubectl create -f postgresql.yaml

You should observe that the deployment and service are created. Check on the status of the deployment, and ensure that the pod status is “Running”:

ubuntu$ kubectl get pods
NAME                        READY STATUS  RESTARTS AGE
openldap-3757380541-g6q33   1/1   Running 0        2h
postgresql-1163409656-tzpts 1/1   Running 0        56s

Check your service is deployed:

ubuntu$ kubectl get svc postgresql
NAME       CLUSTER-IP     EXTERNAL-IP PORT(S)  AGE
postgresql 172.21.172.116 <none>      5432/TCP 1m

This service is internal to the cluster, so to test access we’ll first exec a bash shell on the container in the pod, then run a psql client command from the container itself:

ubuntu$ kubectl exec -it postgresql-1163409656-tzpts bash
bash-4.3# psql -U postgres -p 5432 isam -c "select * from OAUTH20_TOKEN_CACHE;"
token_id | type | sub_type | date_created | date_last_used | lifetime | token_string | client_id | username | scope | redirect_uri | state_id | token_enabled | prev_token_string
----------+------+----------+--------------+----------------+----------+--------------+-----------+----------+-------+--------------+----------+---------------+-------------------
(0 rows)

This indicates the postgresql server is running and that one of the ISAM schema tables can be accessed.

Your Postgresql server is now ready for use with ISAM.

Deploying the ISAM Configuration Container

The ISAM configuration container we will use is available from Docker Store (not Docker Hub). It is available for free; however, it requires Docker credentials to obtain it, and as mentioned earlier in this article you must “subscribe” to the image, accepting the terms and conditions. To register a Docker account, subscribe to this image, and for detailed information on the container and all configuration options, visit:
https://store.docker.com/images/ibm-security-access-manager

Note – if you do not subscribe to the image, you will not be able to deploy it to your Kubernetes cluster.

 

Creating a Kubernetes Secret for Docker Store

After subscribing to the image, you need to create a Kubernetes secret with your Docker login credentials so that the image can be pulled as part of the Kubernetes deployment. You will need:

  • Docker username
  • Docker password
  • The email address used to register your Docker account

Create your Kubernetes secret with:

kubectl create secret docker-registry dockerlogin --docker-username=<USERNAME> --docker-password=<PASSWORD> --docker-email=<EMAIL_ADDRESS>
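
The deployment YAML references this secret via imagePullSecrets so that Kubernetes can authenticate to Docker Store when pulling the image. The relevant fragment looks roughly like this (a sketch – confirm the exact image name and tag on the Docker Store page above):

spec:
  template:
    spec:
      imagePullSecrets:
      - name: dockerlogin                   # must match the secret name created above
      containers:
      - name: isamconfig
        image: store/ibmcorp/isam:9.0.4.0   # assumed name/tag – verify on Docker Store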

 

Create the Kubernetes Deployment and Service

A Kubernetes YAML file (isamconfig.yaml) is used to create the deployment and service for the ISAM Configuration Container. It also creates a secret that will be used by the Web Reverse Proxy (WRP) and Runtime containers to connect to the configuration service URL of the configuration container.

Note that the service used in this example uses a public NodePort, 30543. This will allow us to connect to the ISAM local management interface using a browser over the internet. Typically you may not wish to do this, and it is quite OK not to use a NodePort for this service. In that case just comment out the nodePort line so that the local management interface (LMI) is not exposed externally. Later in this section we’ll show you an alternative way to connect to the management interface using kubectl to port-forward to the management interface port.

The image will be pulled directly from Docker Store provided you have subscribed to the image and established credentials as shown above. You can optionally edit this file to change the configured passwords, volumes, etc.; however, for this article we will use the example values provided.

Create the Kubernetes deployment and service from the YAML file:

kubectl create -f isamconfig.yaml

You should observe that the deployment, service and secret are created. Check on the status of the deployment, and ensure that the pod status is Running. The pod may spend a few minutes in the ContainerCreating state because it must download the large ISAM image from Docker Store.

ubuntu$ kubectl get pods
NAME                        READY STATUS  RESTARTS AGE
isamconfig-2467259294-tmz9c 1/1   Running 0        1m
openldap-3757380541-g6q33   1/1   Running 0        19h
postgresql-1163409656-tzpts 1/1   Running 0        17h

You should now be able to connect to your configuration container with a browser and see the ISAM management console:

https://myisam:30543

(remember myisam is a hosts file entry for your node’s public IP – you can use the public IP directly if you wish)

You can even login at this point (admin/admin) and start basic configuration if you wish.

As mentioned earlier in this section, using a public NodePort is not the only (or perhaps the safest) way to connect to your management console. As an alternative you can use kubectl and port forwarding to establish a local listening port and connect your browser to localhost. Here’s an example kubectl command that shows how to establish the port-forward for your pod:

ubuntu$ kubectl port-forward isamconfig-2467259294-tmz9c 9443:9443
Forwarding from 127.0.0.1:9443 -> 9443

You can now connect to the LMI with your browser using https://localhost:9443
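
If you prefer to verify the forwarded port without a browser, a quick curl should return an HTTP status code – a 200 or a 3xx redirect indicates the LMI is answering:

ubuntu$ curl -k -s -o /dev/null -w "%{http_code}\n" https://localhost:9443/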

The ISAM configuration container is now deployed and ready for use.

Deploying the ISAM Web Reverse Proxy Container

Even though we have not yet configured ISAM and created the reverse proxy instance using the configuration container, given that we know we are going to create one, we can “pre-deploy” the deployment that will run WebSEAL (now called the Web Reverse Proxy, or WRP). Of particular importance is the WRP “name”, and in this example we will use default.

When we perform this deployment, the WRP container will poll the configuration service of the configuration container, waiting for a snapshot to be available for it to download and use. The configuration service is a special endpoint exposed on the isamconfig container that the other ISAM containers use to pull down copies of the configuration snapshot. Until the snapshot is available the WRP container will be in a bootstrap loop, continuing to poll. Also, until we modify the special cfgsvc user on the configuration container to set a password and enable the account, the polling will fail with authentication errors. This is normal given we have not yet performed ISAM configuration.

The WRP deployment uses the same ISAM image as the configuration container, and environment variables control the personality under which it runs. As such, the Kubernetes secret containing your docker credentials that was used for the configuration container should already be in place and be used by this deployment.

Create the Kubernetes Deployment and Service

A Kubernetes YAML file (isamwrpdefault.yaml) is used to create the deployment and service for the ISAM WRP Container. Note that the service used in this example uses a public NodePort, 30443. This will become the internet-facing port for your WebSEAL. You can optionally edit this file to change the configured passwords, volumes, ports, etc.; however, for this article we will use the example values provided.

Create the Kubernetes deployment and service from the YAML file:

kubectl create -f isamwrpdefault.yaml

You should observe that the deployment, service and secret are created. Check on the status of the deployment, and ensure that the pod status is Running. This pod should come up very quickly since the ISAM image will be cached from when the configuration container deployment was performed.

ubuntu$ kubectl get pods
NAME                         READY STATUS  RESTARTS AGE
isamconfig-2467259294-tmz9c  1/1   Running 0        47m
isamwebseal-3167996209-v6g75 1/1   Running 0        4s
openldap-3757380541-g6q33    1/1   Running 0        20h
postgresql-1163409656-tzpts  1/1   Running 0        17h

You can observe the startup log of the WRP container using kubectl. At the moment it should be showing authentication errors to the configuration service URL of the ISAM configuration container, because the cfgsvc username and password being used have not yet been configured on the configuration container.

ubuntu$ kubectl logs -f isamwebseal-3167996209-v6g75
2018-01-04T18:55:31-0500: Bootstrapping....
2018-01-04T18:55:31-0500: ---- Creating path: /var/shared/snapshots
2018-01-04T18:55:31-0500: ---- Creating path: /var/shared/support
2018-01-04T18:55:31-0500: ---- Creating path: /var/shared/fixpacks
2018-01-04T18:55:31-0500: ---- Creating path: /var/application.logs/lmi
2018-01-04T18:55:31-0500: ---- Creating path: /var/application.logs/rsyslog_forwarder
2018-01-04T18:55:31-0500: ---- Creating path: /var/application.logs/isam_runtime/policy
2018-01-04T18:55:31-0500: ---- Creating path: /var/application.logs/isam_runtime/user_registry
2018-01-04T18:55:31-0500: ---- Creating path: /var/application.logs/wrp
2018-01-04T18:55:31-0500: ---- Creating path: /var/application.logs/system
Docker detected
2018-01-05T09:55:32+1000: ---- Downloading data from the configuration service.
Error: WGAWA0662E An invalid response code was returned from the request to https://isamconfig:9443/shared_volume/fixpacks: 403
2018-01-05T09:55:35+1000: ---- The configuration service is not available. The container
2018-01-05T09:55:35+1000: startup will wait for the service to become available.
Error: WGAWA0662E An invalid response code was returned from the request to https://isamconfig:9443/shared_volume/fixpacks: 403
2018-01-05T09:55:42+1000: ---- Retrying....

This is enough to indicate the WRP container is running and awaiting a configuration snapshot.

Deploying the ISAM Runtime Container

The ISAM Runtime Container (called isamruntime, or ISAM Liberty Runtime) is very similar to the WRP Container – it’s a personality of the ISAM image that runs the advanced authentication, context-based access and federation services. The isamruntime container also retrieves a snapshot from the configuration container in the same manner as the WRP container. The only real difference here (besides the technical function of the container) is that this container has no need to listen externally on a NodePort. Instead it only exposes its HTTPS interface on the cluster network via the isamruntime service.

Create the Kubernetes Deployment and Service

A Kubernetes YAML file (isamruntime.yaml) is used to create the deployment and service for the ISAM Runtime Container. You can optionally edit this file to change the configured passwords, volumes, ports, etc.; however, for this article we will use the example values provided.

Create the Kubernetes deployment and service from the YAML file:

kubectl create -f isamruntime.yaml

You should observe that the deployment, service and secret are created. Check on the status of the deployment, and ensure that the pod status is “Running”. This pod should come up very quickly since the ISAM image will be cached from when the configuration container deployment was performed.

ubuntu$ kubectl get pods
NAME                         READY STATUS  RESTARTS AGE
isamconfig-2467259294-tmz9c  1/1   Running 0        1h
isamruntime-2170015637-nsj84 1/1   Running 0        6s
isamwebseal-3167996209-v6g75 1/1   Running 0        21m
openldap-3757380541-g6q33    1/1   Running 0        20h
postgresql-1163409656-tzpts  1/1   Running 0        18h

The bootstrapping logs for the runtime container can be observed using the same technique as shown for the WRP container (just use the runtime pod name).

This completes deployment of the runtime container.

Configuring ISAM

We will now use the LMI on the configuration container to:

  • Complete the first steps wizard
  • Import trusted certificates for Postgresql and LDAP connectivity
  • Configure ISAM to use our Postgresql deployment for the runtime database
  • Configure the ISAM policy server, including using our OpenLDAP deployment for the user registry
  • Create a test user using the command line
  • Configure the “default” WRP instance
  • Publish the snapshot so that the isamwebseal and isamruntime containers complete bootstrapping and start up
  • Configure ISAM for use in an authentication service scenario
  • Re-publish the snapshot so that the isamwebseal and isamruntime containers obtain and use the new configuration

All of these steps can be automated via the ISAM management APIs as part of a scripted devops process; however, for the purposes of this article we’ll illustrate via the management console. There is some expectation of familiarity with the ISAM management console in the instructions shown here.

First Steps

Using the management console (see steps in deploying the ISAM configuration container), login as admin/admin, then complete the first steps wizard. This includes:

  • Accepting the services agreement
  • Changing the admin password and potentially updating the LMI session timeout
  • Obtaining a Trial License (links are provided in the LMI), or using an existing set of product activation codes

At the completion of these steps, you should have an ISAM management console with at least the Web and Access Control capabilities enabled.

Now we will enable Management Authorization Roles, and update the password for the cfgsvc user. This step sets up the built-in LMI user account that is used by the WRP and runtime containers to obtain snapshots.

Navigate to Manage System Settings->Management Authorization.

  • On the Management Authorization tab, check Enable Authorization Roles
  • Move to the Users tab, select the cfgsvc user, and press Set Password. Change the password to Passw0rd. This needs to match the password set for the configuration service secret created when we deployed the isamconfig container.

Deploy pending changes when complete.

If you are monitoring either the isamwebseal or isamruntime pod’s logs at this time, you will see the 403 forbidden errors change to indicate that no snapshots are available. Here is an example from the isamruntime container:

ubuntu$ kubectl logs -f isamruntime-2170015637-nsj84
...
Error: WGAWA0662E An invalid response code was returned from the request to https://isamconfig:9443/shared_volume/fixpacks: 403
2018-01-05T15:14:32+1000: ---- Retrying....
Error: WGAWA0664E No published snapshots are available.
2018-01-05T15:14:47+1000: ---- Retrying....

Importing trusted certificates for Postgres and LDAP connectivity

You may recall that when setting up the Postgresql and LDAP containers we created private/public key material for these containers. In order for ISAM to be able to connect to the database and LDAP services, the public keys for TLS connections need to be imported into ISAM trust stores.

In the management console, navigate to Manage System Settings -> SSL Certificates. Add the following public signer keys to the certificate databases indicated:

Public Key     Add to certificate database   Notes
postgres.crt   lmi_trust_store               Used by the management console application (isamconfig container) to connect to the runtime database.
postgres.crt   rt_profile_keys               Used by the isamruntime container to connect to the runtime database.
ldap.crt       lmi_trust_store               Used by both the isamwebseal and isamruntime containers to connect to the LDAP server.

Deploy pending changes when complete. This commits the changes to the local configuration database. Don’t forget the rt_profile_keys update as well as the lmi_trust_store changes.

Configuring Runtime High Volume Database

In the management console, navigate to Manage System Settings -> Database Configuration.

Configure the Runtime Database with the following settings:

Property        Value        Notes
Type            PostgreSQL
Address         postgresql   This corresponds to the Kubernetes Service that was created when the postgresql container was deployed. The service name becomes a DNS entry in Kubernetes DNS and can be resolved by other deployments in the same cluster.
Port            5432
Secure          Checked
Username        postgres     The username and password are configured via environment variables that are embedded in the postgresql.yaml file.
Password        Passw0rd
Database name   isam         The database name is also configured via an environment variable in the postgresql.yaml file.

Save and deploy those changes.

Configuring ISAM Policy Server Runtime

This should be a very familiar set of operations for ISAM administrators.

Navigate to: Secure Web Settings -> Runtime Component -> Configure

  • Select LDAP Remote, then Next.
  • Under the Policy Server tab:
    Property                          Value           Notes
    Management Suffix                 <leave blank>
    Management Domain                 Default
    Administrator Password            Passw0rd        This establishes the sec_master password.
    SSL Server Certificate Lifetime   1460            This is the default – leave it.
    SSL Compliance                    FIPS 140-2      This is the default – leave it.
  • Under the LDAP tab:
    Property               Value                          Notes
    Host name              openldap                       This corresponds to the Kubernetes Service that was created when the openldap container was deployed. The service name becomes a DNS entry in Kubernetes DNS and can be resolved by other deployments in the same cluster.
    Port                   636                            Standard port for secure ldaps connections.
    DN                     cn=root,secauthority=default   This is the root DN for our OpenLDAP container.
    Password               Passw0rd                       Password as established in the openldap.yaml file.
    Enable SSL             checked
    Certificate Database   lmi_trust_store                The certificate database into which we imported the ldap.crt file.
    Certificate Label      <leave blank>                  This is only used for mutual TLS connections.
  • Click Finish.

 

Creating a testuser

This can be done with the web portal manager application built into the ISAM console (Secure Web Settings -> Policy Administration) however in this section we will show you how to use the command line tools on the configuration container to perform this step.

First we exec the isam_cli on the isamconfig pod (use kubectl get pods if you need to recall the pod name), then we use the standard pdadmin command line to create the user:

ubuntu$ kubectl exec -it isamconfig-2467259294-tmz9c isam_cli
Welcome to the IBM Security Access Manager appliance
Enter "help" for a list of available commands
isamconfig-2467259294-tmz9c> isam admin
pdadmin> login -a sec_master -p Passw0rd
pdadmin sec_master> user create testuser cn=testuser,dc=ibm,dc=com test user Passw0rd
pdadmin sec_master> user modify testuser account-valid yes
pdadmin sec_master> quit
isamconfig-2467259294-tmz9c> exit

You could also now do an ldapsearch and see the new entry in the LDAP container:

ubuntu$ kubectl exec -t openldap-3757380541-g6q33 -- ldapsearch -H "ldaps://127.0.0.1:636" -D cn=root,secAuthority=default -w Passw0rd -b "dc=ibm,dc=com" cn=testuser
# extended LDIF
#
# LDAPv3
# base <dc=ibm,dc=com> with scope subtree
# filter: cn=testuser
# requesting: ALL
#

# testuser, ibm, com
dn: cn=testuser,dc=ibm,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
cn: test
cn: testuser
sn: user
uid: testuser
userPassword:: e1NTSEF9WEUra0sxT3VlQUhGMjRnNzZlODkyaWZKVVY1d2ZjdHU=

# search result
search: 2
result: 0 Success

# numResponses: 2
# numEntries: 1

Configuring the WRP default instance

Navigate to Secure Web Settings -> Reverse Proxy.

Press New, and configure a WRP with the following settings:

Property                           Value              Notes
Instance Name                      default            This must match the value defined in the isamwrpdefault.yaml file for the INSTANCE environment variable.
Hostname                           isamconfig-XXXXX   This should be pre-filled with the pod name. You can leave it at its default value.
Administrator Name                 sec_master
Administrator Password             Passw0rd
Domain                             Default
Enable HTTP                        unchecked
Enable HTTPS                       checked
User Registry: Enable SSL          checked
User Registry: Key File Name       lmi_trust_store    The certificate database into which we imported the ldap.crt file.
User Registry: Certificate Label   <leave blank>      This is only used for mutual TLS connections.
User Registry: Port                636                This will be the connection to the openldap service over the cluster network.

Click Finish.

Even though the default WRP is now configured, it won’t be running under the isamwebseal deployment container yet, because we have not yet published a snapshot. We will do that now, in part because it is required for the following step, which is to load the SSL certificate of the isamruntime server endpoint into the pdsrv certificate database for SSL junctioning purposes.

Publishing the Snapshot and Observing WRP and Runtime container bootstrapping

At this stage deploy any pending changes, then navigate to Container Management -> Publish Configuration.

It is a useful exercise to monitor the logs of both the isamwebseal and isamruntime pods when doing this to watch the bootstrapping process taking place.

Shown here are the startup logs of the isamruntime container during this process:

ubuntu$ kubectl logs -f isamruntime-2170015637-nsj84
...
Error: WGAWA0664E No published snapshots are available.
2018-01-05T15:17:28+1000: ---- Retrying....
2018-01-05T15:17:41+1000: ---- Data has been downloaded from the configuration service.

Verifying checksums... Done
2018-01-05T15:19:48+1000: --- Running.
2018-01-05T15:19:48+1000: Log file: /var/application.logs/rtprofile/messages.log
[ SOME ENTRIES REMOVED FOR BREVITY]
[1/5/18 15:19:46:267 AEST] 00000021 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0011I: The server runtime is ready to run a smarter planet.

At this stage you can also test authentication to the WebSEAL reverse proxy.

https://myisam:30443

You should be able to login with testuser/Passw0rd.

Configuring ISAM for Authentication Service

First we need to import the SSL cert of the isamruntime service into the pdsrv certificate database so that an SSL junction can be created between the WebSEAL reverse proxy and the isamruntime server. Using the management console navigate to Manage System Settings -> SSL Certificates.

Edit the pdsrv certificate database, and Manage->Load a Signer Certificate from:

Server: isamruntime
Port: 443
Certificate Label: isamruntime

Deploy that change. This allows an SSL junction to be established between the WebSEAL reverse proxy and the isamruntime in the next step.

Now we will configure the ISAM WebSEAL reverse proxy to use the isamruntime for the authentication service. This step must be performed using the command-line interface on the configuration container. Follow the steps as shown:

ubuntu$ kubectl exec -it isamconfig-2467259294-tmz9c isam_cli
Welcome to the IBM Security Access Manager appliance
Enter "help" for a list of available commands
isamconfig-2467259294-tmz9c> isam aac config
Security Access Manager Autoconfiguration Tool Version 9.0.4.0 [20171201-2231]

Select/deselect the capabilities you would like to configure by typing its number. Press enter to continue:
[ X ] 1. Context-based Authorization
[ X ] 2. Authentication Service
[ X ] 3. API Protection
Enter your choice: 1
Select/deselect the capabilities you would like to configure by typing its number. Press enter to continue:
[ ] 1. Context-based Authorization
[ X ] 2. Authentication Service
[ X ] 3. API Protection
Enter your choice: 3
Select/deselect the capabilities you would like to configure by typing its number. Press enter to continue:
[ ] 1. Context-based Authorization
[ X ] 2. Authentication Service
[ ] 3. API Protection
Enter your choice: 
Press 1 for Next, 2 for Previous, 3 to Repeat, C to Cancel: 1
Advanced Access Control Local Management Interface hostname: isamconfig
Advanced Access Control Local Management Interface port [443]: 9443
Advanced Access Control administrator user ID [admin]: 
Advanced Access Control administrator password: Passw0rd
Testing connection to https://isamconfig:9443/.
SSL certificate information:
Issuer DN: CN=isamconfig-2467259294-tmz9c
Subject DN: CN=isamconfig-2467259294-tmz9c
SSL certificate fingerprints:
MD5: 6D:D5:7A:86:46:58:17:57:C4:42:B2:10:0C:6F:9E:11
SHA1: E2:32:5C:67:40:FC:E1:84:4D:B2:3B:5E:5D:AA:C7:68:07:32:CF:F7
SHA256: D7:4E:0D:E6:6A:0F:08:44:29:CE:F9:EA:AD:4C:39:3A:0F:B6:44:F5:2C:2E:19:A9:D0:F2:91:25:E9:2D:F6:95

SSL certificate data valid (y/n): y
Press 1 for Next, 2 for Previous, 3 to Repeat, C to Cancel: 1
Security Access Manager Appliance Local Management Interface hostname: isamconfig
Security Access Manager Appliance Local Management Interface port [443]: 9443
Security Access Manager Appliance administrator user ID [admin]:
Security Access Manager Appliance administrator password: Passw0rd
Testing connection to https://isamconfig:9443/.
SSL certificate information:
Issuer DN: CN=isamconfig-2467259294-tmz9c
Subject DN: CN=isamconfig-2467259294-tmz9c
SSL certificate fingerprints:
MD5: 6D:D5:7A:86:46:58:17:57:C4:42:B2:10:0C:6F:9E:11
SHA1: E2:32:5C:67:40:FC:E1:84:4D:B2:3B:5E:5D:AA:C7:68:07:32:CF:F7
SHA256: D7:4E:0D:E6:6A:0F:08:44:29:CE:F9:EA:AD:4C:39:3A:0F:B6:44:F5:2C:2E:19:A9:D0:F2:91:25:E9:2D:F6:95

SSL certificate data valid (y/n): y
Instance to configure:
1. default
2. Cancel
Enter your choice [1]: 1
Press 1 for Next, 2 for Previous, 3 to Repeat, C to Cancel: 1
Security Access Manager administrator user ID [sec_master]:
Security Access Manager administrator password: Passw0rd
Security Access Manager Domain Name [Default]:
Press 1 for Next, 2 for Previous, 3 to Repeat, C to Cancel: 1
Advanced Access Control runtime listening interface hostname: isamruntime
Advanced Access Control runtime listening interface port: 443
Testing connection to https://isamruntime:443.
Connection completed.
SSL certificate information:
Issuer DN: CN=isam, O=ibm, C=us
Subject DN: CN=isam, O=ibm, C=us
SSL certificate fingerprints:
MD5: C2:39:71:56:B7:E6:70:73:69:01:1A:AF:2A:7B:3F:25
SHA1: C3:AA:DD:77:5C:16:DB:30:64:46:27:6B:58:61:26:87:88:CB:74:0C
SHA256: 6E:9F:B8:56:00:98:01:A2:38:6E:BB:E3:28:04:28:B2:C7:2E:E1:86:5B:5D:60:AC:DA:5E:3F:AA:C1:D4:7F:7A

SSL certificate data valid (y/n): y
Automatically add CA certificate to the key database (y/n): y
Restarting the WebSEAL server...
Press 1 for Next, 2 for Previous, 3 to Repeat, C to Cancel: 1

URLs allowing unauthenticated access:
https://0.0.0.0/mga/sps/static
URLs allowing all authenticated users access:
https://0.0.0.0/mga/sps/xauth
https://0.0.0.0/mga/sps/mga/user/mgmt/html
https://0.0.0.0/mga/sps/mga/user/mgmt/device
https://0.0.0.0/mga/sps/mga/user/mgmt/otp
https://0.0.0.0/mga/sps/mga/user/mgmt/questions

Press 1 for Next, 2 for Previous, 3 to Repeat, C to Cancel: 1
-----------------------------------------------
Planned configuration steps:

A junction to the Security Access Manager server will be created at /mga.

ACLs denying access to all users will be attached to:
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga

ACLs allowing access to all users will be attached to:
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/authsvc
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/xauth
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/authservice/authentication
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/static
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/apiauthsvc

ACLs allowing access to all authenticated users will be attached to:
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/auth
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/xauth
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/mga/user/mgmt/html
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/mga/user/mgmt/device
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/mga/user/mgmt/otp
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/mga/user/mgmt/questions

EAI authentication will be enabled for the endpoints:
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/auth
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/authservice/authentication
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/authsvc
/WebSEAL/isamconfig-2467259294-tmz9c-default/mga/sps/apiauthsvc
HTTP-Tag-Value header insertion will be configured for the attributes:
user_session_id=user_session_id

Press 1 for Next, 2 for Previous, 3 to Repeat, C to Cancel: 1
Beginning configuration...
Attaching ACLs.
Creating ACL isam_mobile_nobody.
Creating ACL isam_mobile_unauth.
Creating ACL isam_mobile_rest.
Creating ACL isam_mobile_rest_unauth.
Creating ACL isam_mobile_anyauth.
Creating junction /mga.
Editing configuration file...

Restarting the WebSEAL server...
Configuration complete.

 

Now we will configure one of the authentication mechanisms so that we can demonstrate it as part of a runtime test. You can configure any advanced access control authentication mechanism that you like; for our demonstration purposes we will configure the Username Password authentication service mechanism (which tests the isamruntime connection to LDAP), and then use an existing policy which combines Username Password authentication and an End-User License Agreement.

To configure the Username Password authentication service mechanism to communicate with our OpenLDAP, navigate to Secure Access Control -> Authentication. Select Mechanisms and edit the Username Password mechanism. Click on the Properties tab, and edit the values as shown:

Property                                  Value                                          Notes
LDAP Bind DN                              cn=root,secauthority=default
LDAP Bind Password                        Passw0rd
LDAP Host Name                            openldap                                       This corresponds to the Kubernetes Service that was created when the openldap container was deployed. The service name becomes a DNS entry in Kubernetes DNS and can be resolved by other deployments in the same cluster.
LDAP Port                                 636                                            This is the secure port to LDAP.
Login Failures Persistent                 false                                          default value
Management Domain                         Default                                        default value
Maximum Server Connections                26                                             default value
SSL Enabled                               true
SSL Trust Store                           lmi_trust_store
Use Federated Directories Configuration   false                                          default value
User Search Filter                        (|(objectclass=ePerson)(objectclass=Person))   default value

Save the mechanism settings and deploy pending changes.

Re-publishing the Snapshot and Forcing WRP and Runtime containers to load it

Deploy any pending changes, then navigate to Container Management -> Publish Configuration.

This will create a new snapshot on the configuration container; however, it will not be automatically loaded by the isamwebseal and isamruntime containers. They DO automatically load their FIRST snapshot – this is part of the initial bootstrap process – but thereafter you need to issue explicit commands to force those containers to reload the snapshot.

We do this first for the isamwebseal container:

ubuntu$ kubectl exec -t isamwebseal-3167996209-v6g75 -- isam_cli -c reload all
The command completed successfully.

Then also for the isamruntime container:

ubuntu$ kubectl exec -t isamruntime-2170015637-nsj84 -- isam_cli -c reload all
The command completed successfully.
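
If you later run several WRP or runtime pods, a small loop reloads them all in one go (a convenience sketch; the pod name parsing assumes the pods/NAME form of kubectl get -o name output):

ubuntu$ for pod in $(kubectl get pods -o name | grep -E 'isamwebseal|isamruntime' | cut -d/ -f2); do
          kubectl exec -t "$pod" -- isam_cli -c reload all
        done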

Testing the basic scenario

Login using the Two-Factor Username Password plus EULA policy:

https://myisam:30443/mga/sps/authsvc?PolicyId=urn:ibm:security:authentication:asf:password_eula

You should first be required to login with a username and password (via ISAM advanced access control rather than the built-in WebSEAL username/password authentication support).

Then accept the end-user license agreement.

Finally you should see the login success page.

Congratulations – you have now successfully deployed and tested an end-to-end scenario using ISAM with the web reverse proxy and the ISAM runtime for advanced access control (specifically, a multi-step authentication service scenario).

 

What’s Next?

Demonstration vs Production Capabilities

The scenario we have configured is a simple demo architecture, suitable for a dev/test/proof-of-concept environment. Several elements may make it unsuitable for a production environment, and addressing them requires features that only come with a paid IBM Cloud account, including:

  • Containers that need long-term persistent storage (specifically the LDAP and Postgres containers – and even the ISAM container if you aren’t careful to back up your snapshots) are using emptyDir volumes. This means that when the deployment/pod is destroyed, so is the persisted content. There are some scenarios (such as federated directories and scenarios that don’t require persisted data in the runtime database) which won’t be affected; however, the majority of real ISAM use cases require persisted user registry and runtime data. In a paid account you would use a persistent volume to store the OpenLDAP and PostgreSQL data, and make these services highly available. In fact you might not use OpenLDAP or PostgreSQL at all, and may choose to use enterprise LDAP and DB deployments.
  • Our free Kubernetes cluster (Lite profile) is a single worker node. Production Kubernetes clusters should have a minimum of 3 worker nodes – this comes from the Kubernetes documentation.

Rebuilding an Environment from a Saved Snapshot and Backup Data

It is an interesting exercise to be able to tear down and re-create environments quickly once you have successfully configured a scenario. You can do this, even with the demonstration scenario we have created in this article.

Backup Procedures

 

Backing up the ISAM Snapshot

Once the scenario is configured for the first time, download the snapshot from the configuration container (Manage System Settings -> Snapshots) to your local machine. You can also directly copy the snapshot file from the isamconfig container with kubectl cp:

ubuntu$ kubectl cp isamconfig-2467259294-j6fhw:/var/shared/snapshots/isam_9.0.4.0_published.snapshot ./isam_9.0.4.0_published.snapshot
tar: removing leading '/' from member names
ubuntu$ ls -l isam_9.0.4.0_published.snapshot
total 12328
-rw-rw-r-- 1 amwebpi amwebpi 12621981 Jan 9 09:31 isam_9.0.4.0_published.snapshot

 

Backup OpenLDAP Data

Perform an LDIF dump of your OpenLDAP server. This example shows using simple ldapsearch/ldapadd so that we don’t have to stop the database. The ldapsearch output will be written to stdout, which is then directed to a local file on the machine from which you are running kubectl:
ubuntu$ kubectl exec -t openldap-3757380541-dxvnl -- ldapsearch -H "ldaps://localhost:636" -L -D "cn=root,secauthority=default" -w "Passw0rd" -b "secauthority=default" -s sub "(objectclass=*)" > secauthority.ldif
ubuntu$ kubectl exec -t openldap-3757380541-dxvnl -- ldapsearch -H "ldaps://localhost:636" -L -D "cn=root,secauthority=default" -w "Passw0rd" -b "dc=ibm,dc=com" -s sub "(objectclass=*)" > ibmcom.ldif
ubuntu$ ls -l *ldif
-rw-rw-r-- 1 amwebpi amwebpi 1457 Jan 9 09:57 ibmcom.ldif
-rw-rw-r-- 1 amwebpi amwebpi 24769 Jan 9 09:56 secauthority.ldif

 

Backup PostgreSQL Data

Backup the PostgreSQL data from the runtime database. In our scenario only the end-user license agreement (EULA) acceptance data is stored in the runtime database. Even so, we can easily back up and copy the entire database. The isam.db file is first created on the postgresql container, then copied locally:

ubuntu$ kubectl exec -t postgresql-1163409656-d91rz -- su postgres -c "/usr/local/bin/pg_dump isam -f /tmp/isam.db"
ubuntu$ kubectl cp postgresql-1163409656-d91rz:/tmp/isam.db ./isam.db
tar: removing leading '/' from member names
ubuntu$ ls -l isam.db
-rw-rw-r-- 1 amwebpi amwebpi 41431 Jan 9 09:39 isam.db

 

You now have all the data (ISAM snapshot, OpenLDAP, PostgreSQL) that you need to restore this environment either in the same cluster or a new cluster.

 

Delete / Recreate the Deployments and Services

To tear down and re-create the deployments and services (we’ll leave the secrets configured in this example, but you could also re-create them if desired, or when moving to a new cluster):

ubuntu$ kubectl delete deploy isamconfig isamruntime isamwebseal openldap postgresql
deployment "isamconfig" deleted
deployment "isamruntime" deleted
deployment "isamwebseal" deleted
deployment "openldap" deleted
deployment "postgresql" deleted

ubuntu$ kubectl delete svc isamconfig isamruntime isamwebseal openldap postgresql
service "isamconfig" deleted
service "isamruntime" deleted
service "isamwebseal" deleted
service "openldap" deleted
service "postgresql" deleted

ubuntu$ kubectl create -f openldap.yaml
deployment "openldap" created
service "openldap" created

ubuntu$ kubectl create -f postgresql.yaml
deployment "postgresql" created
service "postgresql" created

ubuntu$ kubectl create -f isamconfig.yaml
deployment "isamconfig" created
service "isamconfig" created
Error from server (AlreadyExists): error when creating "isamconfig.yaml": secrets "configreader" already exists

ubuntu$ kubectl create -f isamwrpdefault.yaml
deployment "isamwebseal" created
service "isamwebseal" created

ubuntu$ kubectl create -f isamruntime.yaml
deployment "isamruntime" created
service "isamruntime" created

It’s ok to ignore the error about the configreader secret already existing.

Confirm all pods are in the Running state (the pod names will be different) before continuing.

ubuntu$ kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
isamconfig-2467259294-47ht4    1/1     Running   0          43s
isamruntime-2170015637-2pdtn   1/1     Running   0          28s
isamwebseal-3167996209-z6sv5   1/1     Running   0          34s
openldap-3757380541-mtb01      1/1     Running   0          1m
postgresql-1163409656-j890f    1/1     Running   0          1m

Restore Procedures

Restoring OpenLDAP Data

Restore OpenLDAP data by copying the LDIF files up to the new container and performing ldapadd operations:

ubuntu$ kubectl cp secauthority.ldif openldap-3757380541-mtb01:/tmp/secauthority.ldif
ubuntu$ kubectl exec -t openldap-3757380541-mtb01 -- ldapadd -c -f /tmp/secauthority.ldif -H "ldaps://localhost:636" -D "cn=root,secauthority=default" -w "Passw0rd"
<lots of "adding new entry ..." output here>

ubuntu$ kubectl cp ibmcom.ldif openldap-3757380541-mtb01:/tmp/ibmcom.ldif
ubuntu$ kubectl exec -t openldap-3757380541-mtb01 -- ldapadd -c -f /tmp/ibmcom.ldif -H "ldaps://localhost:636" -D "cn=root,secauthority=default" -w "Passw0rd"
<some "adding new entry ..." output here. Ignore any errors about "Already exists">

Restoring PostgreSQL Data

Restore PostgreSQL data by copying the database backup file to the new container and performing a psql command to load it.

ubuntu$ kubectl cp isam.db postgresql-1163409656-j890f:/tmp/isam.db
ubuntu$ kubectl exec -t postgresql-1163409656-j890f -- su postgres -c "/usr/local/bin/psql isam < /tmp/isam.db"
<lots of output here, including some constraint errors>

To check if our accepted EULA agreement for testuser made it into the restored database, perform this database search. You should see one entry:

ubuntu$ kubectl exec -t postgresql-1163409656-j890f -- psql -U postgres -p 5432 isam -c "select * from user_attributes;"
user_id | attribute_name | attribute_namespace | attribute_datatype | values_id
----------+-----------------------------+---------------------------------------------------------------+--------------------+------------------------------------
testuser | eula.last.date.accepted.key | urn:ibm:security:eula:/authsvc/authenticator/eula/license.txt | Date | \x5457dcdd48e564a42a4465009be96b27
(1 row)

Restoring ISAM Administration Password and Snapshot

Note that the admin password of the isamconfig container will still be admin. The admin password is not changed as part of the snapshot. You can update it with:

ubuntu$ kubectl exec -t isamconfig-2467259294-47ht4 -- bash -c "echo admin:Passw0rd | chpasswd"

Now restore the saved snapshot to config container:

ubuntu$ kubectl cp isam_9.0.4.0_published.snapshot isamconfig-2467259294-47ht4:/var/shared/snapshots/isam_9.0.4.0_published.snapshot

Perform a special version of the reload command on the isamconfig container to force it to recognize and load the restored snapshot. This will in turn result in the isamwebseal and isamruntime containers polling for and downloading the snapshot as well.

ubuntu$ kubectl exec -t isamconfig-2467259294-47ht4 -- isam_cli -c reload all force
The command completed successfully.

Watch the output of the isamruntime container to see it pick up and start with the snapshot:

ubuntu$ kubectl logs -f isamruntime-2170015637-r2mnh
<lots of output – wait until you see:>
[1/9/18 11:17:47:742 AEST] 00000021 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0011I: The server runtime is ready to run a smarter planet.

The isamruntime container generally takes the longest to pick up the snapshot and start, so once it is running, the entire environment (including the isamwebseal container) should also be ready.

Re-test the scenario: https://myisam:30443/mga/sps/authsvc?PolicyId=urn:ibm:security:authentication:asf:password_eula

You should be prompted to login with username/password, but not be required to accept the end-user license agreement, since the acceptance data was restored to the PostgreSQL runtime database.

Congratulations – you have successfully restored the entire test scenario, including user and runtime database state, from a set of backed up data!
