Deploying Sterling Secure Proxy CM/Engine on Red Hat OpenShift Using Certified Container and Connecting to Sterling B2Bi SFTP Adapter
Table of Contents
Introductory Notes
Products
IBM Sterling File Gateway (SFG) "enables companies to consolidate all internet-based file transfers on a single, scalable, secure and always-on edge gateway. The offering has the capabilities necessary to intelligently monitor, administer, route and transform high volumes of inbound and outbound files."
IBM Sterling Secure Proxy (SSP) "helps shield your trusted network by preventing direct connectivity between external partners and internal servers. It can secure your network and data exchanges at the network edge to enable trusted business-to-business transactions and managed file transfer (MFT) file exchanges. As a demilitarized zone (DMZ)-based proxy, Secure Proxy uses multifactor authentication, SSL session breaks, closing of inbound firewall holes, protocol inspection and other controls to ensure the security of your trusted zone."
Intent
The purpose of this blog is to provide non-production details on how to deploy Sterling Secure Proxy Configuration Manager and Engine, and configure both a SFTP Reverse Proxy and SFTP Adapter in a Sterling SFG/B2Bi deployment to authenticate and handle SFTP requests to the SFG server. Unless IBM documentation is referenced, each step covers all information necessary to deploy with this configuration. If your deployments need specific information not covered in this blog or if you wish to learn more about some of the installation steps, refer to the Glossary or Resources subsections for additional information and/or links.
Presumptions
Prior to following the installation steps in this blog, it is important to note that the environment and resulting deployments should not be used to replicate and/or produce a production environment for SSP and/or its connection to SFG/B2Bi. Additionally, a few presumptions are made with regards to these installations and their steps:
- These installation steps require an existing SFG or B2Bi deployment that is accessible from your cluster. In my case, I will use the SFG deployment from my blog "Installing IBM Sterling File Gateway and ITX / ITXA on Red Hat OpenShift Using Certified Container". This blog will reference details from that deployment such as listening ports, network policies, and ingress load balancer IP addresses.
- The OpenShift cluster in which the SFG deployment exists and SSP will be deployed in automatically provisions load balancer IP addresses for ingress from the public internet. Unless otherwise mentioned, when a load balancer IP address is referenced in this blog, it is assumed that the IP address is publicly accessible.
- These instructions were developed on an OpenShift cluster running in the IBM Cloud. However, kubectl commands have also been provided and the instructions should work in Kubernetes as well.
- The Helm releases pull images for the deployments from the IBM container registry, for which the environment is already configured with required permissions and entitlement. Steps for configuring your development environment to pull the necessary images are referenced in the prerequisites for SSP.
- The SSP Configuration Manager and SSP Engine Helm charts are both version 1.3.4 which use SSP version 6.1.0.0.03.
Proposed Deployment
![](https://dw1.s81c.com//IMWUC/MessageImages/3c5325a0752e4e9eb176c09879ed4d64.png)
What will be deployed is as follows:
- A SSP CM v6.1.0.0.03 deployment with a load balancer and route for connecting to the user interface.
- A SSP Engine v6.1.0.0.03 deployment with a load balancer used to connect to the SFTP Reverse Proxy.
- A SFTP Adapter to receive and handle incoming SFTP requests.
Deployment Order
The order of deployment and configuration for this blog:
- Configuring SFG SFTP Adapter
- SSP CM Installation
- SSP Engine Installation
As outlined in the presumptions, an SFG or B2Bi deployment must be accessible from within the cluster.
The order of installation between SSP Configuration Manager and SSP Engine matters with regards to the information required in the respective values.yaml file. For instance, if installing SSP Configuration Manager before SSP Engine, I would set cmArgs.keyCertExport: true and leave secret.keyCertSecretName empty in the YAML file for Configuration Manager. If installing SSP Engine before SSP Configuration Manager, then in the SSP Engine YAML file I would set engineArgs.keyCertExport: false and provide the key certificate generated by SSP Configuration Manager in the secret.keyCertSecretName field.
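As a sketch of the ordering used later in this blog (Configuration Manager first), the relevant override values would look roughly like the following; the nesting mirrors the dotted value names above, and engine-key-cert is the secret name created during the Engine installation steps in this blog:

```yaml
# Configuration Manager override.yaml: export the key certificate.
cmArgs:
  keyCertExport: true
secret:
  keyCertSecretName: ""        # left empty; nothing to import yet

# Engine override.yaml: import the certificate CM exported, via a secret.
engineArgs:
  keyCertExport: false
secret:
  keyCertSecretName: engine-key-cert
```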
Helm Installation and Charts
These installations use Helm version 3.10.1. Helm versions 3.10.2-3.15.1 (most recent release) should work as well. IBM's SSP CM version 1.3.4 Helm chart and SSP Engine version 1.3.4 Helm chart are available under the Resources subsection.
To install Helm, I first download the version 3.10.1 package from the GitHub repo.
With the tar downloaded, I unpack it and move the Helm binary to my bin folder:
tar -zxvf <Helm Package>
mv <Helm Binary> <bin Location>/bin/helm
You can check if Helm is installed and which version it is by running the following command in your command line:
helm version
Configuring SFG SFTP Adapter
Prerequisites
As previously mentioned, this blog assumes that either an IBM B2Bi or SFG deployment is deployed and available from within your cluster. In my SFG deployment, I set a backend service adapter port within the SFG YAML file under the ASI configuration, which is open on port 30201. Additionally, a publicly accessible ingress IP should be set for the deployment's ASI backend service.
Network Policy
I will begin by ensuring that my SFG deployment can accept incoming traffic on port 30201. To do this, I will create a network policy that allows ingress through port 30201 into pods matching my SFG release selector.
First, I will put the following definition into a YAML file called sfg_sftp_policy.yaml:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: 'sftp-ingress-policy'
  namespace: sfg-itxa-nonprod
spec:
  podSelector:
    matchLabels:
      release: my-sfg-release
  ingress:
    - ports:
        - protocol: TCP
          port: 30201
  policyTypes:
    - Ingress
Note: my SFG namespace is sfg-itxa-nonprod and my SFG release name is my-sfg-release.
I will create the network policy by running:
oc create -f sfg_sftp_policy.yaml
Or, if using kubectl:
kubectl create -f sfg_sftp_policy.yaml
Creating SSH Host Identity Key
To create an SFTP adapter, I need to first generate a new SSH host identity key in SFG. I open my SFG UI by using the dashboard route created by my SFG release. This dashboard route is of the form:
<Internal ASI Ingress Hostname>/dashboard
Because my internal ASI ingress hostname is asi.us-south.containers.appdomain.cloud, my dashboard URL is:
asi.us-south.containers.appdomain.cloud/dashboard
You can also find the route URL by running either of the following oc / kubectl commands:
oc get routes -o wide | grep dashboard
kubectl get routes -o wide | grep dashboard
I navigate to the SSH host identity key tool in the user interface and click SSH Host Identity Key under Deployment in the Administration Menu:
![](https://dw1.s81c.com//IMWUC/MessageImages/f202ead4014240a28d1a817ef436c804.png)
I will then click the Go! button under the Create subsection to begin creating my key.
I name my key host-key-ssp and leave the key's type as ssh-rsa and its length as 1024. The key comment is optional, so I will leave it blank:
![](https://dw1.s81c.com//IMWUC/MessageImages/1b3a92dbbc7f4a00a5ad3b3415d6496b.png)
After clicking Next I check to make sure my configuration is correct, and then click Finish to create the key:
![](https://dw1.s81c.com//IMWUC/MessageImages/d5a192e5868648258befcb1848a5a0fb.png)
I will also check out my host key for later use. To do this, I navigate to Deployment > SSH Host Identity Key under the Administration Menu pane on the left and then click the Go! button under List:
![](https://dw1.s81c.com//IMWUC/MessageImages/532d0798ad9f4ccda7f45bd1efd570b8.png)
I then click on the check out button next to my host-key-ssp key:
![](https://dw1.s81c.com//IMWUC/MessageImages/607cb859081044fbaf9329394043fb59.png)
In the popup window, I select OpenSSH as the format and click Go! to save the key to my local machine. I make note of the key filename, host-key-ssp.openssh, and where on my local machine it was saved.
Creating SFTP Adapter
To create the SFTP adapter, I will again open my SFG UI using the dashboard route created by my SFG release (for me, asi.us-south.containers.appdomain.cloud/dashboard).
After signing in, I navigate to the Configuration page by following Deployment > Services > Configuration under the Administration Menu:
![](https://dw1.s81c.com//IMWUC/MessageImages/f202ead4014240a28d1a817ef436c804.png)
I will now click the Go! button under Create to begin creating the service.
When prompted to choose a service type, I click the list icon and select SFTP Server Adapter. I click Save to finish selecting and Next to move on to the configuration of the adapter:
![](https://dw1.s81c.com//IMWUC/MessageImages/05d36bd7d0654296815035348d737ba3.png)
![](https://dw1.s81c.com//IMWUC/MessageImages/92980127c89541c6b937573c1e7195e6.png)
I enter the adapter name SFTP Adapter, a mandatory description, and select All ASI Nodes under Environment:
![](https://dw1.s81c.com//IMWUC/MessageImages/823dd7b82aa24033b15715d899975292.png)
After clicking Next, I select my SSH host key host-key-ssp and change the listen port to 30201:
![](https://dw1.s81c.com//IMWUC/MessageImages/c4f5fa3cfdbc42fdbc327718de6c2ec3.png)
I click Next through the remaining steps, keeping the default values, until I am given the option to confirm the adapter configuration, then click Finish:
![](https://dw1.s81c.com//IMWUC/MessageImages/5b1f024d26854388bc55fe9710e0b608.png)
Verification
To verify that my SFTP adapter has been successfully configured and is running, I return to the Deployment > Services > Configuration under the Administration Menu and search for SFTP Server Adapters by using the Search by Service Type option:
![](https://dw1.s81c.com//IMWUC/MessageImages/bcb41e8f920b43a08e7767e6ba436ad3.png)
I see my service listed and in the Enabled state:
![](https://dw1.s81c.com//IMWUC/MessageImages/ebd014d6b6af49989666a5d9f3371937.png)
I also click on the underlined name SFTP Adapter, which opens a popup window with more details about my service. I scroll down and note the line stating "[...] SFTP_SERVER_ADAPTER status: Running":
![](https://dw1.s81c.com//IMWUC/MessageImages/0d259c2ee9a5486ba30fc8df89f020e8.png)
SSP Configuration Manager Installation
Installation
To install SSP Configuration Manager, I first need to create a new namespace for it in the same cluster in which I installed SFG. I'll use this namespace for both SSP Configuration Manager and SSP Engine, and I'll name it ssp-nonprod:
oc new-project ssp-nonprod
Or, if using kubectl:
kubectl create namespace ssp-nonprod
Next, I will ensure I have created the Security Context Constraint (SCC) outlined in the Helm chart under ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration/ibm-ssp-cm-scc.yaml:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: ibm-ssp-cm-scc
  labels:
    app.kubernetes.io/name: ibm-ssp-cm-scc
    app.kubernetes.io/instance: ibm-ssp-cm-scc
    app.kubernetes.io/managed-by: IBM
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
privileged: false
allowPrivilegedContainer: false
allowPrivilegeEscalation: true
requiredDropCapabilities:
- KILL
- MKNOD
- SETFCAP
- FSETID
- NET_BIND_SERVICE
- SYS_CHROOT
- SETPCAP
- NET_RAW
allowedCapabilities:
- SETGID
- DAC_OVERRIDE
defaultAddCapabilities: []
defaultAllowPrivilegeEscalation: false
forbiddenSysctls:
- "*"
fsGroup:
  type: MustRunAs
  ranges:
  - min: 1
    max: 4294967294
readOnlyRootFilesystem: false
runAsUser:
  type: MustRunAsRange
  uidRangeMin: 1
  uidRangeMax: 1000639999
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
  ranges:
  - min: 1
    max: 4294967294
volumes:
- configMap
- downwardAPI
- persistentVolumeClaim
- projected
- secret
- nfs
To create the SCC, I'll run the following oc command:
oc create -f ibm-ssp-cm-scc.yaml
I then add the SCC to my project namespace by running:
oc adm policy add-scc-to-group ibm-ssp-cm-scc system:serviceaccounts:ssp-nonprod
Next, I create a secret containing the data for Configuration Manager. The secret template is in the Helm chart under ibm_cloud_pak/pak_extensions/pre-install/secret/ibm-ssp-cm-secret.yaml. I'll create a YAML file called ibm-ssp-cm-secret.yaml and place the following in it:
apiVersion: v1
kind: Secret
metadata:
  name: ibm-ssp-cm-secret
  namespace: ssp-nonprod
type: Opaque
stringData:
  sysPassphrase: <SSP CM System Passphrase>
  adminPassword: <SSP CM Admin Password>
  keyCertStorePassphrase: <SSP Key Cert Store Passphrase>
  keyCertEncryptPassphrase: <SSP Key Cert Encryption Passphrase>
  commonCertPassword: <SSP CM Common Cert Password>
  engCertPassword: <SSP CM Engine Cert Password>
  cmClientCertPassword: <SSP CM Client Cert Password>
  cmCertPassword: <SSP CM Cert Password>
  cmServerCertPassword: <SSP CM Server Cert Password>
  webCertPassword: <SSP CM Web Cert Password>
  exportCertPassword: <SSP CM Export Cert Password>
To create the secret, I run the following oc command:
oc create -f ibm-ssp-cm-secret.yaml
Or, if using kubectl:
kubectl create -f ibm-ssp-cm-secret.yaml
Once my secret is generated, I delete the ibm-ssp-cm-secret.yaml file for security reasons:
rm ibm-ssp-cm-secret.yaml
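As an alternative worth noting, the same secret can be created without ever writing the passphrases to a file, which avoids the cleanup step entirely. A sketch using the --from-literal flags (kubectl accepts the same syntax; all values shown are placeholders for your own passphrases):

```shell
# Create the CM secret directly from literal values; no plaintext
# YAML file is written to disk, so there is nothing to delete later.
oc create secret generic ibm-ssp-cm-secret -n ssp-nonprod \
  --from-literal=sysPassphrase='<SSP CM System Passphrase>' \
  --from-literal=adminPassword='<SSP CM Admin Password>' \
  --from-literal=keyCertStorePassphrase='<SSP Key Cert Store Passphrase>' \
  --from-literal=keyCertEncryptPassphrase='<SSP Key Cert Encryption Passphrase>' \
  --from-literal=commonCertPassword='<SSP CM Common Cert Password>' \
  --from-literal=engCertPassword='<SSP CM Engine Cert Password>' \
  --from-literal=cmClientCertPassword='<SSP CM Client Cert Password>' \
  --from-literal=cmCertPassword='<SSP CM Cert Password>' \
  --from-literal=cmServerCertPassword='<SSP CM Server Cert Password>' \
  --from-literal=webCertPassword='<SSP CM Web Cert Password>' \
  --from-literal=exportCertPassword='<SSP CM Export Cert Password>'
```

Note that the shell history will still contain the values, so clear it (or prefix the command with a space, if HISTCONTROL=ignorespace is set) when using this approach.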
Next, I create a copy of the provided values.yaml file from the Helm chart and name this copy override.yaml. Note that I use ibmc-file-gold as the storage class for the persistent volume with ReadWriteMany as the access mode. This configuration is available to my OpenShift cluster in IBM Cloud. If you aren't using IBM Cloud, you'll need to use a storage class available to your cluster.
In override.yaml, I change the following values to meet my specifications:
cmArgs.hostNames: ssp.us-south.containers.appdomain.cloud
dashboard.enabled: true
license: true
persistentVolume.accessMode: ReadWriteMany
persistentVolume.labelName: ''
persistentVolume.labelValue: ''
persistentVolume.storageClassName: ibmc-file-gold
persistentVolume.useDynamicProvisioning: true
route.enabled: true
secret.secretName: ibm-ssp-cm-secret
serviceAccount.create: false
serviceAccount.name: default
After saving these changes to override.yaml, I create my Helm release, which I will call my-ssp-cm-release. To do this, I'll run the following command from within the Helm chart directory:
helm install my-ssp-cm-release -f override.yaml -n ssp-nonprod --timeout 3600s .
Verification
To verify I have successfully installed SSP CM, I will first check the status of the Helm release, CM pod, route, and service:
helm status my-ssp-cm-release -n ssp-nonprod
In my output, I see that the status is deployed:
...
NAMESPACE: ssp-nonprod
STATUS: deployed
REVISION: 1
The following command gives me information about the CM pod created by the Helm release, most importantly that 1/1 pods are in the READY state:
kubectl get pods -l release=my-ssp-cm-release -n ssp-nonprod -o wide
Finally, the command below tells me that the CM service created by the Helm release has its IP addresses assigned:
kubectl get svc -l release=my-ssp-cm-release -n ssp-nonprod -o wide
I can also test my access to the CM route. I'll get this route by running the following command:
kubectl get routes -n ssp-nonprod
My SSP user interface is located at my-ssp-cm-release-ibm-ssp-cm-ssp.us-south.containers.appdomain.cloud. Accessing this route takes me to the following UI, which visually confirms the installation was successful:
![](https://dw1.s81c.com//IMWUC/MessageImages/37612710afcd4bf1bc07a82be5708d65.png)
SSP Engine Installation
Installation
To install SSP Engine, I need to create the Security Context Constraint, the SSP Engine secret, and obtain and provide the key certificate secret generated by SSP Configuration Manager.
First, I will create the Security Context Constraint (SCC) as it is provided in the Helm chart under ibm_cloud_pak/pak_extensions/pre-install/clusterAdministration/ibm-ssp-engine-scc.yaml:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: ibm-ssp-engine-scc
  labels:
    app.kubernetes.io/name: ibm-ssp-engine-scc
    app.kubernetes.io/instance: ibm-ssp-engine-scc
    app.kubernetes.io/managed-by: IBM
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
privileged: false
allowPrivilegedContainer: false
allowPrivilegeEscalation: true
requiredDropCapabilities:
- KILL
- MKNOD
- SETFCAP
- FSETID
- NET_BIND_SERVICE
- SYS_CHROOT
- SETPCAP
- NET_RAW
allowedCapabilities:
- SETGID
- DAC_OVERRIDE
defaultAddCapabilities: []
defaultAllowPrivilegeEscalation: false
forbiddenSysctls:
- "*"
fsGroup:
  type: MustRunAs
  ranges:
  - min: 1
    max: 4294967294
readOnlyRootFilesystem: false
runAsUser:
  type: MustRunAsRange
  uidRangeMin: 1
  uidRangeMax: 1000639999
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
  ranges:
  - min: 1
    max: 4294967294
volumes:
- configMap
- downwardAPI
- persistentVolumeClaim
- projected
- secret
- nfs
I create the SCC by running the following oc command:
oc create -f ibm-ssp-engine-scc.yaml
Next, I add the SCC to my project namespace by running:
oc adm policy add-scc-to-group ibm-ssp-engine-scc system:serviceaccounts:ssp-nonprod
With the SCC created, I will create the SSP Engine secret found under ibm_cloud_pak/pak_extensions/pre-install/secret/ibm-ssp-engine-secret.yaml in the Helm chart. I place the following secret definition in a file named ibm-ssp-engine-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: ibm-ssp-engine-secret
  namespace: ssp-nonprod
type: Opaque
stringData:
  sysPassphrase: <SSP Engine System Passphrase>
  keyCertStorePassphrase: <SSP Key Cert Store Passphrase>
  keyCertEncryptPassphrase: <SSP Key Cert Encryption Passphrase>
To create the secret, I then run the following oc command:
oc create -f ibm-ssp-engine-secret.yaml
Or, if using kubectl:
kubectl create -f ibm-ssp-engine-secret.yaml
Once my secret is generated, I delete the ibm-ssp-engine-secret.yaml file for security reasons:
rm ibm-ssp-engine-secret.yaml
I also need to create the key certificate secret containing the key certificate generated by SSP Configuration Manager. To begin this process, I first need to copy the generated key certificate from the SSP Configuration Manager persistent volume to somewhere local, either on my machine or within the cluster.
I can find the key certificate in my SSP CM pod at <Volume mapped Dir>/CM/defkeyCert.txt. In my SSP CM pod, <Volume mapped Dir> is /spinstall/IBM/SPcm.
To copy the key certificate file to my local directory, I run the following oc command:
oc cp my-ssp-cm-release-ibm-ssp-cm-0:/spinstall/IBM/SPcm/defkeyCert.txt ./defkeyCert.txt -n ssp-nonprod
Or, if using kubectl:
kubectl cp my-ssp-cm-release-ibm-ssp-cm-0:/spinstall/IBM/SPcm/defkeyCert.txt ./defkeyCert.txt -n ssp-nonprod
Then, to generate the key certificate secret for my SSP Engine deployment, I use the command provided in the documentation. The template for this command is:
kubectl create secret generic engine-key-cert --from-file=keyCert=/home/<user>/defkeyCert.txt
Replacing the file path (and adding my namespace), I run the following:
kubectl create secret generic engine-key-cert --from-file=keyCert=defkeyCert.txt -n ssp-nonprod
Next, I create a copy of the provided values.yaml file from the Helm chart and name this copy override.yaml. Note that I use ibmc-file-gold as the storage class for the persistent volume with ReadWriteMany as the access mode. This configuration is available to my OpenShift cluster in IBM Cloud. If you aren't using IBM Cloud, you'll need to use a storage class available to your cluster.
Additionally, I will edit the service2 section of the YAML file to define the port I intend to use for my SFTP Reverse Proxy connection. This creates a second service with a publicly accessible IP address and a port I can use later.
I change the following values to meet my specifications:
dashboard.enabled: true
license: true
persistentVolume.accessMode: ReadWriteMany
persistentVolume.labelName: ''
persistentVolume.labelValue: ''
persistentVolume.storageClassName: ibmc-file-gold
persistentVolume.useDynamicProvisioning: true
route.enabled: true
secret.keyCertSecretName: engine-key-cert
secret.secretName: ibm-ssp-engine-secret
service2.ports:
  - name: sftp-adapter
    nodePort: 30111
    port: 30111
serviceAccount.create: false
serviceAccount.name: default
After saving these changes to override.yaml, I create my Helm release, which I call my-ssp-engine-release. To do this, I'll run the following command from within the Helm chart directory:
helm install my-ssp-engine-release -f override.yaml -n ssp-nonprod --timeout 3600s .
Verification
To verify I have successfully installed SSP Engine, I will first check the status of the Helm release, Engine pod, route, and services:
helm status my-ssp-engine-release -n ssp-nonprod
In my output, I see that the status is deployed:
...
NAMESPACE: ssp-nonprod
STATUS: deployed
REVISION: 1
The following command gives me information about the Engine pod created by the Helm release, most importantly that 1/1 pods are in the READY state:
kubectl get pods -l release=my-ssp-engine-release -n ssp-nonprod -o wide
Finally, the command below tells me that the Engine services created by the Helm release have their IP addresses assigned:
kubectl get svc -l release=my-ssp-engine-release -n ssp-nonprod -o wide
I need to remember the publicly accessible IP addresses given to both services created by my SSP Engine installation. The first is the IP used to access the engine itself through port 63366. The other load balancer IP is used to connect to my SFTP adapter, which has port 30111 open and listening. I will refer to these IP addresses as <SSP Engine IP> and <SSP Engine Service IP> respectively.
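Before configuring anything against these addresses, a quick TCP reachability check can rule out load balancer provisioning issues early. A minimal sketch using bash's built-in /dev/tcp; <SSP Engine IP> is a placeholder for the external IP reported by kubectl get svc (at this stage only the engine port 63366 is live, since the Reverse Proxy adapter on port 30111 has not been configured yet):

```shell
# Replace the placeholder with your engine service's external IP.
ENGINE_IP='<SSP Engine IP>'
if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$ENGINE_IP/63366"; then
  echo "SSP Engine port 63366 is reachable"
else
  echo "SSP Engine port 63366 is NOT reachable"
fi
```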
Connecting SSP Configuration Manager to SSP Engine
With SSP Engine and SSP Configuration Manager now deployed, I need to ensure that SSP CM connects to my SSP Engine deployment.
I begin by logging back into SSP CM using the SSP CM route my-ssp-cm-release-ibm-ssp-cm-ssp.us-south.containers.appdomain.cloud and then clicking the option to use the legacy UI.
I then navigate to Actions > New Engine... under the Configuration tab. In the SSP Engine Configuration page, I set Engine Name to SSP-Engine, Engine Host to <SSP Engine IP>, and Engine Listen Port to 63366, which is the default SSP Engine port provided in the Helm chart under service.engine.containerPort and service.engine.servicePort.
I then click Save to create the SSP Engine connection.
I can then verify that the SSP Engine connection has been made by checking under Engines via the Configuration tab tree:
![](https://dw1.s81c.com//IMWUC/MessageImages/ea2d4a322f7d466091c599f16e1be358.png)
Also, a green checkmark will exist next to the engine under Engine Status (All) via the Monitoring tab to indicate that the Engine is running:
![](https://dw1.s81c.com//IMWUC/MessageImages/6ba0214b93f84d5ab89c9ec844dd8fb5.png)
Network Policy
By default, both SSP CM and SSP Engine block all ingress and egress traffic for security reasons. It is up to the deployer to manually edit or add network policies to allow incoming and outgoing traffic. Our SFTP Reverse Proxy will be routed to the SSP Engine pod, and that pod will be responsible for the egress to the SFG SFTP Adapter, which was set up previously to handle incoming SFTP requests on port 30201.
Remembering that SFG listens on port 30201 and SSP Engine will listen on port 30111, I create a new network policy in my ssp-nonprod namespace with the following YAML definition in a file named ssp_network_policy.yaml:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: 'sftp-reverse-proxy-policy'
  namespace: ssp-nonprod
spec:
  podSelector:
    matchLabels:
      release: my-ssp-engine-release
  ingress:
    - ports:
        - protocol: TCP
          port: 30111
  egress:
    - ports:
        - protocol: TCP
          port: 30201
  policyTypes:
    - Ingress
    - Egress
I create the network policy by running:
oc create -f ssp_network_policy.yaml
Or, if using kubectl:
kubectl create -f ssp_network_policy.yaml
Configuring SSP SFTP Reverse Proxy
Prerequisites
The prerequisites for configuring an SFTP Reverse Proxy in SSP are:
- An SFTP Server is deployed and available for receiving SFTP requests.
- Sterling SSP CM and SSP Engine are deployed and connected.
- The SFTP Server is accessible either from within the same cluster or via public IP.
Because I have previously deployed IBM SFG, enabled an SFTP Server Adapter within it, deployed SSP CM and SSP Engine, and configured networking requirements such as public IPs and network policies, I am ready to configure my SFTP Reverse Proxy.
Creating a Policy
I begin by logging into my SSP user interface using the same route I acquired previously (my-ssp-cm-release-ibm-ssp-cm-ssp.us-south.containers.appdomain.cloud) and signing in with my admin credentials.
Once prompted, I change to the legacy UI to begin my configuration steps.
Creating an SFTP Policy begins with navigating to the Configuration tab at the top of the screen and then selecting Actions > New Policy... > SFTP Policy...:
![](https://dw1.s81c.com//IMWUC/MessageImages/132dcdf8bc314d9a86a1caca1769124a.png)
Under the Basic tab, I am prompted to provide a name and description for my policy. I name my policy SFG-SFTP-Policy and leave the description blank. Then I navigate to the Advanced tab and ensure that my SSH Authentication Method is set to Password and User Mapping is set to Pass-Through. This configuration means that authentication to the SFTP server will use the users stored in SFG and authenticate via their password:
![](https://dw1.s81c.com//IMWUC/MessageImages/53cd3531791849e5a8d1f1913b2a8680.png)
I then click Save to save my SFTP policy configuration.
Creating Key Stores
The next step in creating the Reverse Proxy is to create a netmap to define inbound connection information for your external trading partners and outbound information for the SFTP server that SSP will connect to. However, a netmap definition requires two key stores:
- A local host key store for inbound connection authentication. SSP will store the private key and the public key will be sent to the trading partner.
- A known host key store for outbound connection authentication. SSP will store the public key received from the SFTP server.
For this blog, the local host key will be generated using ssh-keygen, a standard tool in the Secure Shell (SSH) suite found on most systems. Other SSH key generation tools can be used to accomplish the same goal. I opt to use the RSA key algorithm.
To create the private / public key pair using ssh-keygen, I open a terminal session on my local machine and type the following to create an RSA key pair with a key size of 2048 bits:
ssh-keygen -t rsa -b 2048
I press Enter to use the default file location, enter my passphrase (<Private Key Password>), and then navigate to where the key pair was saved in the .ssh directory under my home directory. I now have access to both the private id_rsa and public id_rsa.pub key files.
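The same key pair can also be generated non-interactively, which is handy for scripting. A sketch using a temporary directory, assuming OpenSSH's ssh-keygen is on the PATH (the passphrase 'MyPassphrase' is a stand-in for your own <Private Key Password>):

```shell
# Generate a 2048-bit RSA key pair without prompts, then print the
# fingerprint of the public key that the trading partner will receive.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N 'MyPassphrase' -f "$tmpdir/id_rsa" -q
ls "$tmpdir"                            # id_rsa  id_rsa.pub
ssh-keygen -l -f "$tmpdir/id_rsa.pub"   # key size, fingerprint, and type
```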
To create my local host key store, I navigate to Actions > New SSH Key Store... > Local Host Key Store under the Credentials tab of the UI:
![](https://dw1.s81c.com//IMWUC/MessageImages/c70e50b0f9304013bba08c15fe406da3.png)
I give the key store the name SFG-SFTP-Partner-KeyStore and then click New to import my private RSA key. Here I provide my key's name (id_rsa), my key password (<Private Key Password>), and import my private key via the Choose File button, navigating to the .ssh directory on my local machine to select the private key. Upon selecting my private key, I see the key data in the Key Data box:
![](https://dw1.s81c.com//IMWUC/MessageImages/c1670a08107a4c8aa821ffad0b2e5c72.png)
NOTE: BE VERY CAREFUL WITH HOW YOU CHOOSE TO TRANSFER YOUR SSH KEYS.
IF YOUR KEY IS NOT STORED LOCALLY, CHOOSE A SECURE COPY METHOD SUCH AS SCP OR A PHYSICAL STORAGE DEVICE SUCH AS A USB DRIVE.
I will then click Ok to save the private key, and then Save when I am brought back to the Local Host Key Store Configuration page to finish my local host key store configuration. I can verify that the key store was successfully created by refreshing the Credentials navigation tree on the left side and checking for it under Local Host Key Stores:
![](https://dw1.s81c.com//IMWUC/MessageImages/ff0dd004f6de4c449dde8c4818ce2ee2.png)
Next, I will create the known host key store, which will hold the key used for outbound authentication to my SFG SFTP server adapter. This is where I will make use of the host key I exported from SFG, saved as host-key-ssp.openssh on my local machine.
I navigate to Actions > New SSH Key Store... > Known Host Key Store under the Credentials tab:
![](https://dw1.s81c.com//IMWUC/MessageImages/dfde5603c042436c9d5cdeba261e8ab8.png)
Here I provide the name of the key store, which I will call SFG-SFTP-Server-KeyStore. I click New to import the key, give it the key name host-key-ssp, and then import host-key-ssp.openssh from my local machine through the Choose File option. After selecting the host key, I see the key data in the Key Data box, which tells me it was successfully imported:
![](https://dw1.s81c.com//IMWUC/MessageImages/635095d3255148daad35462fb42268cf.png)
I then click Ok to add the key, and finally Save when brought back to the Known Host Key Store Configuration page to finish the known host key store setup. I can verify that the key store was successfully created by refreshing the Credentials navigation tree on the left side and checking for it under Known Host Key Stores:
![](https://dw1.s81c.com//IMWUC/MessageImages/aa365884d1f446a885737970c41a7010.png)
Configuring the Netmap
With my policy and key stores created, I can create my netmap by navigating to Actions > New Netmap... > SFTP Netmap... under the Configuration tab. Once prompted, I specify SFG-SFTP-Netmap as the name for my netmap, leave the description blank, and click the New button under Inbound Nodes to create an inbound node specification.
The inbound node name corresponds to a trading partner name. As this is a blog and test environment, I name this inbound node Test-Trading-Partner in the Inbound Node Name field. The Peer Address Pattern field specifies a wildcard pattern of addresses permitted to connect via inbound traffic; I leave the default of *, which means any address can be used. Finally, I select the previously created SFG-SFTP-Policy under the Policy field:
![](https://dw1.s81c.com//IMWUC/MessageImages/0b4ba3155bb74ba89395ecbc14a361a2.png)
I click OK to finish the inbound node definition.
Once brought back to the SFTP Netmap page, I navigate to the Outbound Nodes tab and click New.
The outbound node name corresponds to the name of the SFTP server. In this case it is an SFTP Server Adapter running on an SFG deployment and is accessible via my ASI pod's service.
I name my outbound node SFG-SFTP-Node. In the Primary Destination Address field, I specify the publicly accessible IP address given to my SFG ASI backend service during installation. In the Primary Destination Port field, I enter port 30201, the port my SFG SFTP Adapter is listening on. Then, I select SFG-SFTP-Server-KeyStore as the Known Host Key Store and host-key-ssp as the Known Host Key.
Then I navigate to the Security tab and ensure that aes128-cbc is selected as one of the Cipher Suites and hmac-sha1 is selected as one of the MAC Suites, because those are the suites specified as preferred in my SFTP adapter.
I then click OK to finish the outbound node definition.
Once brought back to the SFTP Netmap page, I click Save to finish the SFTP Netmap definition.
I can verify that the netmap was successfully created by refreshing the Configuration navigation tree on the left side and checking for it under Netmaps:
![](https://dw1.s81c.com//IMWUC/MessageImages/f41dd678ef8f4c31bf5a5cf41941df62.png)
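As a quick sanity check on the outbound node target, you can retrieve the SFG SFTP adapter's public host key with ssh-keyscan and compare it against the key imported into the known host key store. This is a minimal sketch: the host placeholder is an assumption, and the script only prints the command until you substitute your real SFG ASI service IP.

```shell
# Placeholders are assumptions; substitute your real SFG ASI service IP.
SFG_HOST="${SFG_HOST:-<SFG ASI Service IP>}"
SFG_PORT="${SFG_PORT:-30201}"

# The host key returned here should match the public key imported into
# SFG-SFTP-Server-KeyStore as host-key-ssp.
CMD="ssh-keyscan -p $SFG_PORT $SFG_HOST"

case "$SFG_HOST" in
  "<"*) echo "$CMD" ;;  # placeholder still set: dry run, just print
  *)    $CMD ;;         # real host: fetch the server's host key
esac
```

If the scanned key differs from the one stored as host-key-ssp, connections from SSP to the SFG adapter will fail host verification.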
Configuring the Reverse Proxy
To create the SFTP Reverse Proxy, I navigate to Actions > New Adapter > SFTP Reverse Proxy... under the Configuration tab:
![](https://dw1.s81c.com//IMWUC/MessageImages/3726b1d6cc4045eca240d3eee0d65f39.png)
Here I give the following values:
- Adapter Name: SFG-SFTP-Reverse-Proxy-Adapter
- Listen Port: 30111 (port given during SSP Engine installation for my SFTP adapter)
- Netmap: SFG-SFTP-Netmap
- Routing Node: SFG-SFTP-Node
- Local Host Key Store: SFG-SFTP-Partner-KeyStore
- Local Host Key: id_rsa
I click the Add button under the Engine selection box, where I add my engine SSP-Engine.
I navigate to the Security tab and ensure that aes128-cbc is selected as one of the Cipher Suites and hmac-sha1 is selected as one of the MAC Suites, because those are the suites specified as preferred in my SFTP adapter.
I finish the SFTP Reverse Proxy configuration by clicking Save.
To verify that the SFTP Reverse Proxy was successfully created I refresh the Configuration navigation tree on the left side and check for it under Adapters:
![](https://dw1.s81c.com//IMWUC/MessageImages/34fe3739b8b646c6bdcdba5969d98988.png)
Verification
To verify that the SFTP Reverse Proxy was properly configured, I will attempt to connect to my SFG SFTP Adapter through the SSP Reverse Proxy adapter. You can do this step with FTP/SFTP client software such as FileZilla, or from the command line, which is what I will do.
To connect to my SFTP adapter through the SSP Reverse Proxy, I need to specify the following values: the cipher, port, user, and address. In my case, the preferred cipher for my SFTP Adapter in SFG is set to aes128-cbc; the port my SSP Reverse Proxy is listening on is 30111; I have access to the admin user and password in SFG; and the address is the load balancer IP provided to me during SSP Engine installation for port 30111, which I named <SSP Engine Service IP>.
With these values in mind, I can connect using the SFTP command template sftp -c CIPHER -P PORT USER@ADDRESS:
sftp -c aes128-cbc -P 30111 admin@<SSP Engine Service IP>
I am prompted to provide the SFG password for the admin user. After providing the password, I see the sftp prompt, which tells me I have successfully connected to my SFG SFTP Adapter through my SSP Reverse Proxy Adapter:
sftp>
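The connection check above can be wrapped in a small reusable script. This is a sketch, not part of the original walkthrough: the host placeholder is an assumption, and the script only prints the command it would run until you substitute the engine service's load balancer IP.

```shell
# Placeholders are assumptions; replace SSP_HOST with the load balancer IP
# assigned to the SSP Engine service for port 30111.
SSP_HOST="${SSP_HOST:-<SSP Engine Service IP>}"
SSP_PORT="${SSP_PORT:-30111}"
SFG_USER="${SFG_USER:-admin}"

# The cipher must be one enabled on the SFG adapter, the netmap's outbound
# node, and the reverse proxy (aes128-cbc in this walkthrough).
CMD="sftp -c aes128-cbc -P $SSP_PORT $SFG_USER@$SSP_HOST"

case "$SSP_HOST" in
  "<"*) echo "$CMD" ;;  # placeholder still set: dry run, just print
  *)    $CMD ;;         # real host: opens an interactive sftp> session
esac
```

A successful run prompts for the SFG password and then drops you at the sftp> prompt, confirming the path through the reverse proxy.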
Glossary
Helm
Helm automates the containerized deployment and upgrade of SSP when used in conjunction with a provided, configurable YAML file. This YAML file defines the configuration values for the charts; the key is to ensure that it properly defines the deployment configuration that fits your needs. For issues regarding Helm, refer to the Helm documentation for installation instructions and the commands available via the Helm CLI.
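As a rough illustration of that workflow, the sketch below assembles a helm install command from a release name, chart, and values file. All three names are hypothetical examples, not the actual artifacts from this deployment; the script only prints the command it would run.

```shell
# Hypothetical names for illustration only; substitute your own release,
# chart reference, values file, and namespace.
RELEASE="ssp-cm"
CHART="ibm-ssp-cm"            # assumed chart name
VALUES="ssp-cm-values.yaml"   # your configured YAML overrides

CMD="helm install $RELEASE $CHART -f $VALUES -n ssp"

# Print the command this sketch would run; execute it yourself once the
# chart and values file are in place.
echo "$CMD"
```

Before installing, helm lint or helm template can be used to validate that the values file renders the charts as intended.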
SSH File Transfer Protocol (SFTP)
SFTP Adapter: "The SFTP Server adapter enables external SFTP clients or SCP clients to put files into or get files from a mailbox in this application or to a physical file system on the server."
SFTP Reverse Proxy: A reverse proxy acts on behalf of a trusted zone application. The trading partner or remote client initiates a connection to a trusted zone application and is connected to a reverse proxy. Secure Proxy provides reverse proxy services for Sterling B2B Integrator when the trading partners initiate FTP, HTTP, SFTP, and Connect:Direct® sessions to the Sterling B2B Integrator server in the trusted zone.
Resources
Helm Charts
SSP Configuration Manager Version 6.1.0.0.03
SSP Engine Version 6.1.0.0.03
Installation Document References
SFG: Configuring External Access for Application Backend (non-HTTP) Endpoints
Sterling SFG / B2Bi SFTP Adapter Set Up
Installing Sterling SSP using Certified Container
Creating SSP Secrets
Validating SSP Installation
SSP SFTP NetMap Inbound Node Definition / Wildcards
SSP SFTP Basic Configuration
SSP SFTP Reverse Proxy Configuration
Acronyms
- OCP: OpenShift Container Platform
- SSP (CM): Sterling Secure Proxy (Configuration Manager)
- SFG: IBM Sterling File Gateway
- B2B(i): Business to Business (Integrator)
- SFTP: SSH File Transfer Protocol
- SCC: Security Context Constraint