IBM Cloud Global


Unleash the power of Satellite Connector to access your on-prem data from IBM Cloud

By Neela Shah posted Tue October 10, 2023 12:16 PM


By: Neela Shah and Todd Johnson

In this blog, we will explore how you can run your applications on IBM Cloud and access data that lives within your private network. A newly released feature in IBM Cloud, Satellite Connector, can serve this purpose for you in a secure manner. It creates a secure connection between IBM Cloud and your infrastructure without requiring an inbound public internet connection to your environment. It also gives you the ability to perform operations like monitoring and logging of this connection.

A note about terminology: throughout the blog we refer to “your on-prem environment,” “on-prem environment,” or “your environment,” which really means any infrastructure you have outside of IBM Cloud. This can be in your datacenters, another cloud provider, your office, etc., as long as it meets the minimum requirements as documented here: In our end-to-end example we will be using an on-prem VMware environment, which we’ll refer to as “our VMware environment” or “the VMware environment,” but again it can be any infrastructure you have.


Satellite Connector provides a secure connection from IBM Cloud to a specific on-prem environment which allows you to access applications running in your on-prem environment from applications running on IBM Cloud. There are 3 major components you have to create:

  1. On IBM Cloud you create a Satellite Connector resource. This only requires a name, resource group and region to create the IBM Cloud side of the connection.

  2. In your on-prem environment, you deploy a lightweight container (the Satellite Connector Agent) on your choice of container platform, such as podman, docker, etc. This establishes the on-prem side of the connection. Using your API key along with the ID and region of the Satellite Connector resource created in step 1, the agent container connects to the IBM Cloud side to establish a tunnel. The Satellite Connector Agent image (amd64) is available in the IBM Cloud registry for users to download and run in their environment. See the documentation for network and container requirements and details about how to specify the required information.

  3. From the Satellite Connector resource, you now create a User Endpoint by providing the IP address or the fully qualified domain name of the application running in your environment and its port number. The Satellite Connector calls this the Destination and Destination port. Upon creation of the endpoint you will be provided an Endpoint address which then can be used by your applications running in IBM Cloud. This Endpoint Address is on the IBM Cloud private network. Connections to this endpoint address will be proxied over the tunnel to your application running in your on-prem environment.

The Satellite Connector resource is integrated into IBM Cloud Logging and Monitoring. This allows Satellite Connector to send logs and monitoring information to your IBM Cloud Logging and Monitoring instances respectively. We highly recommend you integrate a logging instance into Satellite Connector. This will help with troubleshooting should the need arise.

Here is a high level overview of the end to end flow and components that are used:


  1. The Satellite Connector Agent container that runs in your on-prem environment connects to the IBM Cloud side during startup. Therefore it must be allowed outbound connectivity to the internet, either directly or via a proxy. You do not need inbound internet connectivity. The requirements are described in more detail in the documentation, including which addresses and ports need to be open in your firewalls and how to configure the agent to use a proxy.

  2. Satellite Connector is not intended to be a generic tunnel like a VPN. You can only use it to establish connections from the IBM Cloud to User Endpoints you create via the Endpoint address.

End to End Example

We will now walk through how to access data stored in a postgres DB running in VMware in an on-prem environment.

We will show you how to do this in two different ways:

  1. Access the data from a postgres DBaaS instance running in IBM Cloud.

  2. Access the data from an application running on a Red Hat OpenShift Kubernetes Cluster (ROKS) in IBM Cloud.

For this example we have a VMware environment running an NSX-T overlay network. Our Postgres database instance is in a VM. The connector agent is running in a docker swarm cluster on 3 additional VMs. In addition, we have Windows Active Directory, which serves as our DNS server since we’ll be using the fully qualified host name of the Postgres VM in our endpoint definition. On the IBM Cloud side is an IBM Cloud Databases for PostgreSQL instance. This DB needs to access some data from the Postgres instance running in our VMware environment. Remember, you don’t need public inbound internet access to your VMware environment. The NSX-T overlay network for these VMs is SNAT’d to allow public internet outbound access so the connector agents can establish the tunnel to the Satellite Connector servers running in IBM Cloud. The setup looks as follows:

You then configure an endpoint (see below) in your Satellite Connector instance on IBM Cloud. You tell the endpoint the destination address and port in your environment; in this case, the hostname and port of the Postgres DB running in VMware. In addition you provide the type of connection you’d like: TCP, HTTP, etc. In our case we’ll use TCP. See the documentation for a description of the various connection types. When you create the endpoint, you are given an endpoint address hostname and port. This hostname resolves to an IP address in the private IBM Cloud CSE network. You use this hostname and port in applications running on IBM Cloud; the network connection is forwarded to the connector agent over the tunnel, and the connector agent resolves the hostname of the postgres DB and forwards the connection to Postgres. You can interact with the VMware Postgres database as if it were on the IBM Cloud network.

Access the data from a postgres DBaaS instance running in IBM Cloud

To get started, make sure you have the following pre-requisites completed:

  • A PostgreSQL instance running in your environment. In our example, we have a database called “appdb”. This has a table called “cities” which has 1 column called “name” and is in the “public” schema.

For this approach, the following steps have to be completed.

  1. Create a Satellite Connector instance in us-east, call it “vmware-connector”

  2. Run the Satellite Connector agent in your VMware on-prem environment. In our example, we are using Docker Swarm running on 3 VMware VMs. Ensure it is connected to your Satellite Connector instance “vmware-connector”.

  3. Create an Endpoint in your vmware-connector Satellite Connector instance whose destination is the postgres DB running on VMware.

  4. Create a postgres service instance on IBM Cloud

  5. Install the postgres admin client on your local environment like a laptop or desktop.

  6. Connect the admin client to the postgres service instance that you created on IBM Cloud.

Let’s walk through each of the above steps in detail so that we can get ready to set up our postgres DB access via the Satellite Connector.

Create the Satellite Connector

A Satellite Connector provides a secure connection from IBM Cloud to a specific remote location. Let’s create the Satellite Connector from the IBM Cloud UI.

From the Satellite menu in the left navigator, select Connectors. On the create page, give it a name “vmware-connector” and select Washington DC as the region, leave the rest as defaults.

Run the Satellite Connector Agent

The Satellite Connector Agent is the container that runs in your environment and provides the tunnel from the IBM Cloud private network to your environment. The agent container must have network access to the application, in our case the Postgres DB VM. In this example we will be using a docker swarm cluster to run 3 instances of the connector agent container on 3 different VMware VMs. This provides high availability of the connector agent. You can, of course, just run 1 instance of the connector agent. In this example we are showing docker swarm; however, you can use any container platform of your choice, for instance docker without swarm, podman, or even an on-prem kubernetes cluster. The important thing is that the connector agent container has network access to the destination applications. If you are running more than 1 instance of the connector agent against the same Satellite Connector instance, all agents must have network access to the target applications.

When a connection is established the Connector tunnel server on IBM Cloud will pick one of the agents to forward the connection to the application. There is no mechanism to specify which agent will be selected for any individual connection. If you have applications in other networks or environments you need to create additional Satellite Connector instances and they need to have their own connector agents. If you would like to run without swarm see the documentation for how to configure and run the agent.

Let’s get started. In the documentation we provide a sample swarm configuration file. You’ll need to modify this file to change the image name to the version of the connector agent you want to run. See the Connector agent image change log for a list of available versions, or you can use “latest”. You can also modify the SATELLITE_CONNECTOR_TAGS environment variable if you would like it to show something different. This value shows up on the Agents tab of the Satellite Connector UI and is used to help you identify a specific connector agent container. By default this will be the hostname of the swarm node. You may also have to adjust the CPU and memory limits: the more connections you have, the more resources are needed by the container. The default values are reasonable for a small number of connections; however, you should monitor the containers and adjust accordingly. Since there are lots of variables that go into the performance of a container, it is your responsibility to understand its performance characteristics.

  1. Create a file called “satellite-connector-agent.yaml” with the following content:

version: '3.9'

services:
  agent:
    image: icr.io/ibm/satellite-connector/satellite-connector-agent:latest # change to the version you want
    environment:
      - SATELLITE_CONNECTOR_ID=/satellite-connector-id
      - SATELLITE_CONNECTOR_REGION=/satellite-connector-region
      - SATELLITE_CONNECTOR_IAM_APIKEY=/run/secrets/satellite-connector-iam-apikey
      - SATELLITE_CONNECTOR_TAGS={{.Node.Hostname}}
    networks:
      - bridge
    deploy:
      replicas: 3
      restart_policy:
        condition: any
      update_config:
        parallelism: 2
        delay: 20s
        failure_action: rollback
        order: start-first
      resources:
        limits:
          cpus: '0.40'
          memory: 500M
        reservations:
          cpus: '0.04'
          memory: 200M
    configs:
      - source: satellite-connector-id
        uid: '1000'
        gid: '1000'
        mode: 0400
      - source: satellite-connector-region
        uid: '1000'
        gid: '1000'
        mode: 0400
    secrets:
      - source: satellite-connector-iam-apikey
        uid: '1000'
        gid: '1000'
        mode: 0400
    logging:
      driver: "json-file"
      options:
        max-size: ${JSON_FILE_MAX_SIZE:-1m}
        max-file: ${JSON_FILE_MAX_FILE:-10}

configs:
  satellite-connector-id:
    external: true
  satellite-connector-region:
    external: true

secrets:
  satellite-connector-iam-apikey:
    external: true

networks:
  bridge:
    external: true
    name: bridge

  2. Create a docker config with the region where the Satellite Connector instance is located. In our example it is Washington DC, so the region is the short name us-east. See the Supported IBM Cloud regions documentation for a mapping of long region names to their short names.

  printf us-east | docker config create satellite-connector-region -
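That long-to-short mapping can be sketched as a simple lookup. Only the Washington DC entry comes from this walkthrough; the other entries are IBM Cloud's usual multizone region short names, listed here for illustration — check the Supported IBM Cloud regions documentation for the authoritative list.

```go
package main

import "fmt"

// Short names for a few IBM Cloud multizone regions. Washington DC is
// the region used in this example; the rest are common IBM Cloud
// regions included for illustration.
var regionShortNames = map[string]string{
	"Washington DC": "us-east",
	"Dallas":        "us-south",
	"London":        "eu-gb",
	"Frankfurt":     "eu-de",
	"Tokyo":         "jp-tok",
	"Sydney":        "au-syd",
}

func main() {
	fmt.Println(regionShortNames["Washington DC"]) // us-east
}
```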

  3. Create a docker config with the Satellite Connector’s ID. You can get that from the UI.

printf <connector instance id> | docker config create satellite-connector-id -

  4. Create a docker secret with your API key.

printf <your API Key> | docker secret create satellite-connector-iam-apikey -

  5. To pull the connector agent image from the IBM Container Registry, you need to log in. You can do this using your API key: you’ll be prompted for the password, which is your API key. Alternatively, you can specify the -p parameter.

docker login -u iamapikey icr.io

  6. Deploy the stack, which will create the containers.

docker stack deploy --compose-file satellite-connector-agent.yaml --with-registry-auth satellite_connector


You use normal docker swarm commands to interact with the service and the containers. Verify the containers are running.

docker service ps satellite_connector_agent

Once the connector agent is running and has established a tunnel, you should see the agents show up in your Satellite Connector’s UI in the Agents tab:

Running in Docker Swarm is documented here:

Create the endpoint in Satellite Connector

First we will need to create an endpoint in the Satellite Connector instance that points to the postgres application running in your VMware on-prem environment.

Note: In this blog we show just one destination application. Satellite Connector supports multiple destination applications; you just need to add additional Endpoints to your Satellite Connector instance.

  1. Go to your connector instance on IBM Cloud and select the “User endpoints” tab.

  2. Press the “Create Endpoint” button.

  3. Fill in the Resource details. Your values will differ depending on your environment. In our example:

Here we give it a name, which can be anything you want; we call it “postgres”. The next 2 fields describe the endpoint in your environment. This can be a fully qualified DNS name or an IP address. It must be resolvable by the connector agent running in your environment. The destination port is the port that your resource is listening on. In our case Postgres is listening on port 5432.

  4. Press Next

  5. Set the Protocol to TCP

Note: Even though our Postgres instance running in our environment requires an SSL connection we can still set it to TCP. The SSL termination is handled by Postgres.

  6. We are going to leave the remaining settings at their default values. Press Next on this screen and on the next. Finally, press Create Endpoint on the last tab.

  7. Once the endpoint is created, you will be given an endpoint address, which you use from within the IBM Cloud private network to communicate with your application running in your environment. For example, connections made to this address from an IBM Cloud application will be routed over the tunnel to the postgres application running on VMware.
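As noted above, the destination field accepts either an IP address or a DNS name the connector agent can resolve. Here is a tiny Go sketch (a hypothetical helper of our own, not part of Satellite Connector) of classifying a destination value:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// destinationKind classifies an endpoint destination the way the form
// accepts it: an IP address, or a dotted DNS name that must be
// resolvable by the connector agent. This is an illustrative helper,
// not Satellite Connector's own validation.
func destinationKind(dest string) string {
	if net.ParseIP(dest) != nil {
		return "ip"
	}
	if strings.Contains(dest, ".") {
		return "fqdn"
	}
	return "unknown"
}

func main() {
	fmt.Println(destinationKind("10.0.0.12"))             // ip
	fmt.Println(destinationKind("pgdb.example.internal")) // fqdn
}
```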

Create a PostgreSQL Service Instance in your IBM Cloud Account

From the IBM Cloud catalog, search for “Postgres” and it will find Databases for PostgreSQL. Enter your information to create the database. For this example we used a Small instance and we selected Private only endpoints. If you choose private only endpoints, you’ll need to be connected to the IBM Cloud private network in order to access the database. You can do this via a VPN or a jump box VSI instance. Whether you select private or public endpoints, this is only for accessing the database; it has no bearing on how the database accesses your on-prem database over Satellite Connector.

Connect to the PostgreSQL instance you just created

There are several ways to connect to your instance. For this example we installed the pgAdmin client, a graphical user interface, on a MacBook. You can use whatever PostgreSQL client you wish. If you navigate to your database instance on IBM Cloud, you will see the section titled Endpoints. Click on the PostgreSQL tab and you will see all the necessary information to connect to your database instance. Since we chose private endpoints only, we have a VPC client-to-site VPN set up on the MacBook using Tunnelblick. See the VPC VPN documentation for how to create and use the VPN.

Set up IBM Cloud PostgreSQL service instance access to postgres on VMware

Open a psql terminal from inside the admin client and run the following commands.

  1. Connect the foreign data wrapper object to the Connector endpoint. The host parameter value is the Endpoint Address from the endpoint “postgres” that we created in the previous step on Satellite Connector not including the port. The port value is added to the port parameter. The dbname parameter is the database name in your VMware environment that we want to connect to.

-- Enable the foreign data wrapper extension if it isn't already enabled
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER vmware_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '', port '32859', dbname 'appdb');

  2. Create a user mapping so that you can access the postgres DB on VMware. The user and password in this command are the credentials for the postgres database running on VMware. This maps the current user accessing the PGAdmin client to the userid/password for the database running on VMware.

CREATE USER MAPPING FOR admin SERVER vmware_server OPTIONS (user '<postgres user>', password '<postgres password>');

  3. In our postgres DB, we have a table called “cities” in the schema “public”. We will create a schema in the local database called “vmware” and then import the “public” schema from the VMware database into the local vmware schema.


CREATE SCHEMA vmware;
IMPORT FOREIGN SCHEMA public FROM SERVER vmware_server INTO vmware;

  4. Now we can access the cities table and list the rows. The SQL is run remotely across the connector tunnel to the VMware database instance. Since we’re using a user that has access to insert into the table, you can run an INSERT SQL statement to add a city. The city is added to the database running on VMware. Depending on what you have in your postgres DB, adjust these commands to use your tables and specific values.

SELECT * FROM vmware.cities;


INSERT INTO vmware.cities (name) VALUES ('Preston');

Access the data from an application running on a Red Hat OpenShift Kubernetes Cluster in IBM Cloud

To get started, make sure you have the following pre-requisites completed as shown above.

  • Create a Satellite Connector instance in us-east, call it “vmware-connector”

  • Run the Satellite Connector agent in your VMware on-prem environment and ensure it is connected to your Satellite Connector instance “vmware-connector”.

  • Create a Red Hat OpenShift Kubernetes Cluster in IBM Cloud in us-east and make sure the cluster is in a Normal state.

  • You will need a container application that processes the DB requests. We wrote a go module that uses the postgres go client to accomplish this. You can use a different technology of your choosing.

Now, let’s walk through the steps in detail that have to be completed to access your postgres DB from a ROKS cluster.

Create the endpoint in Satellite Connector

We will use the same endpoint in the Satellite Connector instance that we did in the earlier option.

Create a deployment

Create a deployment, service and route to access the postgres DB on VMware. We used a deployment yaml like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: connector-db-app
  labels:
    app: connector-db-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: connector-db-app
  template:
    metadata:
      labels:
        app: connector-db-app
    spec:
      containers:
      - name: connector-db-app
        image: <your application image>
        ports:
        - containerPort: 5000
        env:
          - name: DATABASE_URL
            value: "postgres://{cred}"
        volumeMounts:
          - name: postgres-creds
            mountPath: "/tmp/secrets"
            subPath: postgres
            readOnly: true
      volumes:
        - name: postgres-creds
          secret:
            secretName: postgres
            items:
              - key: user
                path: postgres/user
              - key: pwd
                path: postgres/pwd

Things worth pointing out:

  • The go module reads the postgres database URL from the DATABASE_URL environment variable. The host and port of this URL are the connector endpoint address. The rest of the URL is specific to postgres: the database name, in our case “appdb”, and the fact that the connection requires SSL.

  • The postgres credentials are stored in a secret called postgres and mounted to the pod at /tmp/secrets. You’ll notice in the URL a substitution string called “{cred}”. The go module substitutes the user and password from the secret into the URL before establishing the connection. The final URL that’s used:

  • We pushed the go module container image to ICR in our namespace, toddjohn.

docker push

Now we are ready to run the application as follows:

  1. Create a secret with the postgres DB credentials, for example:

oc create secret generic postgres --from-literal=user=myuser --from-literal=pwd=mypwd

  2. Create the deployment.

oc apply -f <deployment yaml file>

  3. In order to access the application, we’ll use an OpenShift route, which first requires a service to map the route to a specific application. The route will give you an externally accessible host name that is SSL enabled using the default SSL certificate for the cluster.

The service yaml:

apiVersion: v1
kind: Service
metadata:
  name: connector-db-app
spec:
  selector:
    app: connector-db-app
  ports:
    - name: connector-db-app
      protocol: TCP
      port: 5000
      targetPort: 5000

The route yaml:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: connector-db-app
spec:
  port:
    targetPort: 5000
  tls:
    termination: edge
  to:
    kind: Service
    name: connector-db-app
    weight: 100
  wildcardPolicy: None

  4. Create the service and the route:

oc apply -f <service yaml file>
oc apply -f <route yaml file>

  5. Verify the application is running successfully:

oc get pods

  6. If the application is running successfully, you can now access the postgres database from the application running in ROKS. First get the route:

oc get route

  7. The route will give you the host name to use. Our application responds to an HTTP GET request at the URI path “/cities/vmware”. And since we are using an SSL route, make sure to specify “https://” on the curl. The full curl to the application is:


The output:



In this blog we showed how to set up and use the new Satellite Connector feature to access applications running in your on-prem environment from your applications running in IBM Cloud. If you have problems, there’s a “Debugging Connectors” section in the documentation:

We encourage you to read the documentation; it contains a lot of information that we didn’t have time to cover, such as access control lists and setting up logging and monitoring. Hopefully you found this blog helpful and will set up and use Satellite Connector in your own environment.

Contact Us

Neela Shah -

Todd Johnson -