WebSphere Application Server & Liberty


Tutorial: Migrate DayTrader to Liberty on OpenShift (Part 1)

By Liam Westby posted Tue June 15, 2021 04:00 PM


Part 2 is now published. Read it here.

By now you've (hopefully!) seen some of the blog posts and live sessions all about the migration tools that we've published to this community. We've covered Transformation Advisor, the Migration Toolkit for Application Binaries (also known as the binary scanner), and the WebSphere Application Migration Toolkit (also known as the Source Scanner). We've shown how features of each of these tools can help you assess all your applications, choose which ones to migrate, choose where to migrate them to, and make the changes required to get them there.

This time, I'd like to take a different approach: let's take an application from a WebSphere traditional Network Deployment cell and deploy it to Liberty running on OpenShift. In Part 1 (this part), we'll see what it takes to get the application itself running in the cloud, while leaving all other services it uses unchanged on-premises. Then, in Part 2, we'll bring some of the services it depends on into the deployment, to achieve the level of service and availability that WebSphere ND provided on-premises, but in a modern, cloud-native way.


DayTrader is one of our favorite sample applications for WebSphere modernization because it makes use of many WebSphere features common to apps:

  • JDBC datasources
  • JMS resources
  • Session persistence
  • Web Security roles
DayTrader runs great on both WebSphere traditional and Liberty, which will help us get to Liberty faster since we won't need to make any code changes. That's not guaranteed for every application; if you want to see possible issues with your own applications, you can use Transformation Advisor or the binary scanner to find them and make the required changes.

Let's look at the deployment of DayTrader we'll modernize. The application and all the services it depends on are deployed to an on-premises datacenter. DayTrader itself is deployed to a 2x2 cluster (2 nodes, 2 cluster members per node), and memory-to-memory session replication is enabled.

The application uses two external services: it uses JDBC to access a remote IBM DB2 database, and it uses JMS to exchange messages with a remote IBM MQ queue manager.

Additionally, global and application security is enabled, authenticating against an on-premises LDAP server. Groups from that server are assigned to a role, authorizing members of that group to access the application.

Finally, note that DayTrader isn't the only application deployed to this cluster.

DayTrader doesn't need any of these other applications to function, so we can ignore them. We'll need to make sure we only migrate configuration from the cell that actually affects DayTrader, and filter out all the rest.

Configuring Open Liberty

Since we already know that DayTrader runs on Open Liberty, our main focus will be on converting the configuration for the various resources DayTrader needs. Let's look at the list again:

  • JDBC datasources: required, DayTrader won't work at all without its database.
  • JMS resources: partially required, main functionality will work, but a few features won't and we'll see errors if they aren't set up.
  • Session persistence: optional, and requires external setup with a new provider. This wouldn't provide any benefit for a development deployment with a single instance anyway.
  • Web Security roles: required, the application won't allow us to access it without proper authorization.

That's 3 out of the 4 major areas that need to be brought over to get DayTrader fully working in a development or sandbox environment. We'll bring over what's required for now, saving the rest for our production-level deployment when the time comes.

To create the configuration, we're going to use the configuration generation feature of the binary scanner to get a head start:

[root@onprem-appserver-a1 was]# java -jar binaryAppScanner.jar /was/WAS90/profiles/appDmgr/config/cells/appCell/applications/DayTrader3.ear/DayTrader3.ear --generateConfig --includeSensitiveData --targetJavaEE=ee7 --includePackage=com.ibm --output=blog
Processing the DayTrader3.ear application.
Scanning files..........................
INFO: CWMIG12110I: Additional configuration migration advice is available in the analysis report. To view the advice, run the tool again with the --analyze or --all option.
The report was saved to the following file: /was/blog/DayTrader3.ear_server.xml
The report was saved to the following file: /was/blog/DayTrader3.ear_server_sensitiveData.xml
INFO: CWMIG12107I: The --includeSensitiveData option was specified during the scan. Passwords and other sensitive data were gathered and are included in the /was/blog/DayTrader3.ear_server_sensitiveData.xml file. Follow best practices with any sensitive data by encrypting or configuring secrets depending on the migration target.
INFO: CWMIG12116I: Some resource files used by the configuration have been copied to the output folder. Ensure these files are moved to appropriate location in your target server configuration.

This gives us a starting server.xml file to configure Open Liberty, a separate sensitiveData.xml file to hold authentication credentials used in configuration, and the keystore files needed to set up TLS configuration for the server.

<?xml version="1.0" encoding="UTF-8"?>
<server description="Configuration generated by binaryAppScanner">
        <!--The following features are available in all editions of Liberty.-->
        <!--The following features are available in all editions of Liberty, except for Liberty Core.-->

    <!-- This configuration was migrated on 6/14/21 at 4:25:53 PM from the following location: /was/WAS90/profiles/appDmgr -->
    <!-- The binary scanner does not support the migration of all WebSphere traditional configuration elements. Check the binary scanner documentation for the list of supported configuration elements. -->
    <applicationManager autoExpand="true"/>

    <httpEndpoint host="${httpEndpoint_host_1}" httpPort="${httpEndpoint_port_1}" httpsPort="${httpEndpoint_secure_port_1}" id="defaultHttpEndpoint"/>

    <!-- Some or all of the bindings that are migrated for this application might exist in the application archive. If so, the duplicate bindings can be removed from this server.xml file. Any binding configuration that is specified in this server.xml file takes precedence over the corresponding binding configuration in the application archive. -->
    <enterpriseApplication location="DayTrader3.ear">
        <web-ext context-root="/daytrader" moduleName="daytrader-ee7-web"/>
        <ejb-jar-bnd moduleName="daytrader-ee7-ejb">
            <message-driven name="DTBroker3MDB">
                <jca-adapter activation-spec-binding-name="eis/TradeBrokerMDB" destination-binding-name="jms/TradeBrokerQueue"/>
            </message-driven>
            <message-driven name="DTStreamer3MDB">
                <jca-adapter activation-spec-binding-name="eis/TradeStreamerMDB" destination-binding-name="jms/TradeStreamerTopic"/>
            </message-driven>
            <security-role name="testing"/>
            <security-role name="AllAuthenticated"/>
            <security-role name="grp1">
                <group access-id="group:defaultWIMFileBasedRealm/cn=WAS Admins,ou=groups,o=coconuts" name="WAS Admins"/>
            </security-role>
            <security-role name="grp2">
                <group access-id="group:defaultWIMFileBasedRealm/cn=WAS Admins,ou=groups,o=coconuts" name="WAS Admins"/>
            </security-role>
            <security-role name="grp3"/>
            <security-role name="grp4"/>
            <security-role name="grp5"/>
            <security-role name="webSecOnly"/>
        </ejb-jar-bnd>
    </enterpriseApplication>

    <authData id="TradeAdminAuthData" password="${TradeAdminAuthData_password_1}" user="${TradeAdminAuthData_user_1}"/>

    <authData id="TradeDataSourceAuthData" password="${TradeDataSourceAuthData_password_1}" user="${TradeDataSourceAuthData_user_1}"/>

    <jdbcDriver id="DB2_Universal_JDBC_Driver_Provider_Only">
        <file name="/was/db2jars/db2jcc.jar"/>
        <file name="/was/db2jars/db2jcc_license_cu.jar"/>
    </jdbcDriver>

    <dataSource beginTranForResultSetScrollingAPIs="false" beginTranForVendorAPIs="false" containerAuthDataRef="TradeDataSourceAuthData" id="TradeDataSource" jdbcDriverRef="DB2_Universal_JDBC_Driver_Provider_Only" jndiName="jdbc/TradeDataSource" type="javax.sql.ConnectionPoolDataSource">
        <properties.db2.jcc databaseName="${TradeDataSource_databaseName_1}" driverType="4" name="TradeDataSource" portNumber="${TradeDataSource_portNumber_1}" retrieveMessagesFromServerOnGetMessage="true" serverName="${TradeDataSource_serverName_1}"/>
        <connectionManager connectionTimeout="180" minPoolSize="10" enableContainerAuthForDirectLookups="true"/>
    </dataSource>

    <jndiEntry id="AppNameJNDI" jndiName="app/AppName" value='"DayTrader"'/>

    <jmsActivationSpec id="eis/TradeBrokerMDB">
        <properties.mqJmsRa brokerCCDurSubQueue="SYSTEM.JMS.D.CC.SUBSCRIBER.QUEUE" brokerCCSubQueue="SYSTEM.JMS.ND.CC.SUBSCRIBER.QUEUE" brokerControlQueue="SYSTEM.BROKER.CONTROL.QUEUE" brokerSubQueue="SYSTEM.JMS.ND.SUBSCRIBER.QUEUE" brokerVersion="1" channel="${TradeBrokerMQMDB_channel_1}" cleanupInterval="3600000" destinationLookup="jms/TradeBrokerQueue" hostName="${TradeBrokerMQMDB_hostName_1}" poolTimeout="300000" port="${TradeBrokerMQMDB_port_1}" queueManager="${TradeBrokerMQMDB_queueManager_1}" sparseSubscriptions="FALSE" subscriptionDurability="Nondurable" subscriptionStore="MIGRATE" transportType="BINDINGS_THEN_CLIENT" useJNDI="true"/>
    </jmsActivationSpec>

    <jmsActivationSpec id="eis/TradeStreamerMDB">
        <properties.mqJmsRa brokerCCDurSubQueue="SYSTEM.JMS.D.CC.SUBSCRIBER.QUEUE" brokerCCSubQueue="SYSTEM.JMS.ND.CC.SUBSCRIBER.QUEUE" brokerControlQueue="SYSTEM.BROKER.CONTROL.QUEUE" brokerSubQueue="SYSTEM.JMS.ND.SUBSCRIBER.QUEUE" brokerVersion="1" channel="${TradeStreamerMQMDB_channel_1}" cleanupInterval="3600000" destinationLookup="jms/TradeStreamerTopic" destinationType="javax.jms.Topic" hostName="${TradeStreamerMQMDB_hostName_1}" poolTimeout="300000" port="${TradeStreamerMQMDB_port_1}" queueManager="${TradeStreamerMQMDB_queueManager_1}" sparseSubscriptions="FALSE" subscriptionDurability="Nondurable" subscriptionStore="MIGRATE" transportType="BINDINGS_THEN_CLIENT" useJNDI="true"/>
    </jmsActivationSpec>

    <jmsQueueConnectionFactory id="TradeBrokerMQQCF" jndiName="jms/TradeBrokerQCF">
        <properties.mqJmsRa channel="${TradeBrokerMQQCF_channel_1}" hostName="${TradeBrokerMQQCF_host_1}" queueManager="${TradeBrokerMQQCF_queueManager_1}" temporaryModel="SYSTEM.DEFAULT.MODEL.QUEUE"/>
    </jmsQueueConnectionFactory>

    <jmsTopicConnectionFactory id="TradeStreamerMQTCF" jndiName="jms/TradeStreamerTCF">
        <properties.mqJmsRa brokerCCSubQueue="SYSTEM.JMS.ND.CC.SUBSCRIBER.QUEUE" brokerControlQueue="SYSTEM.BROKER.CONTROL.QUEUE" brokerPubQueue="SYSTEM.BROKER.DEFAULT.STREAM" brokerSubQueue="SYSTEM.JMS.ND.SUBSCRIBER.QUEUE" channel="${TradeStreamerMQTCF_channel_1}" cleanupInterval="3600000" clientId="${TradeStreamerMQTCF_clientID_1}" hostName="${TradeStreamerMQTCF_host_1}" queueManager="${TradeStreamerMQTCF_queueManager_1}" subscriptionStore="MIGRATE"/>
    </jmsTopicConnectionFactory>

    <jmsQueue id="TradeBrokerMQQueue" jndiName="jms/TradeBrokerQueue">
        <properties.mqJmsRa baseQueueName="${TradeBrokerMQQueue_baseQueueName_1}"/>
    </jmsQueue>

    <jmsTopic id="TradeStreamerMQTopic" jndiName="jms/TradeStreamerTopic">
        <properties.mqJmsRa baseTopicName="${TradeStreamerMQTopic_baseTopicName_1}"/>
    </jmsTopic>

    <ldapRegistry baseDN="${ldapRegistry_baseDN_1}" bindDN="${ldapRegistry_bindDN_1}" bindPassword="${ldapRegistry_bindPassword_1}" connectTimeout="20s" host="${ldapRegistry_host_1}" id="virmire" ldapType="Custom" port="${ldapRegistry_port_1}" readTimeout="20s">
        <ldapEntityType name="Group">
            <memberAttribute name="member" objectClass="groupOfNames" scope="direct"/>
        </ldapEntityType>
    </ldapRegistry>

    <ldapRegistry baseDN="${ldapRegistry_baseDN_2}" connectTimeout="20s" host="${ldapRegistry_host_2}" id="LDAP2" ldapType="IBM Tivoli Directory Server" port="${ldapRegistry_port_2}" readTimeout="20s" sslEnabled="true" sslRef="CellDefaultSSLSettings">
        <loginProperty name="uid"/>
        <loginProperty name="mail"/>
        <ldapEntityType name="Group"/>
        <ldapEntityType name="OrgContainer"/>
        <ldapEntityType name="PersonAccount">
            <memberAttribute dummyMember="uid=dummy" name="member" objectClass="groupOfNames" scope="direct"/>
            <memberAttribute name="uniquemember" objectClass="groupOfUniqueNames" scope="direct"/>
            <attribute entityType="PersonAccount" name="userPassword" propertyName="password"/>
            <attribute entityType="PersonAccount" name="krbPrincipalName" propertyName="kerberosId"/>
            <attributesCache size="4000"/>
            <searchResultsCache resultsSizeLimit="1000" timeout="600s"/>
        </ldapEntityType>
    </ldapRegistry>

    <federatedRepository pageCacheTimeout="900ms">
        <primaryRealm allowOpIfRepoDown="true" name="defaultWIMFileBasedRealm">
            <participatingBaseEntry name="${ldapRegistry_baseDN_1}"/>
            <participatingBaseEntry name="${ldapRegistry_baseDN_2}"/>
            <uniqueGroupIdMapping inputProperty="uniqueName"/>
            <userSecurityNameMapping outputProperty="principalName"/>
        </primaryRealm>
    </federatedRepository>

    <keyStore id="CellDefaultKeyStore" location="${CellDefaultKeyStore_location_1}" password="${CellDefaultKeyStore_password_1}"/>

    <keyStore id="CellDefaultTrustStore" location="${CellDefaultTrustStore_location_1}" password="${CellDefaultTrustStore_password_1}"/>

    <ssl clientKeyAlias="${CellDefaultSSLSettings_clientKeyAlias_1}" id="CellDefaultSSLSettings" keyStoreRef="CellDefaultKeyStore" serverKeyAlias="${CellDefaultSSLSettings_serverKeyAlias_1}" sslProtocol="${CellDefaultSSLSettings_sslProtocol_1}" trustStoreRef="CellDefaultTrustStore"/>

    <keyStore id="NodeDefaultKeyStore" location="${NodeDefaultKeyStore_location_1}" password="${NodeDefaultKeyStore_password_1}"/>

    <ssl id="NodeDefaultSSLSettings" keyStoreRef="NodeDefaultKeyStore" sslProtocol="${NodeDefaultSSLSettings_sslProtocol_1}" trustStoreRef="CellDefaultTrustStore"/>

    <sslDefault sslRef="NodeDefaultSSLSettings"/>

    <resourceAdapter id="mqJmsRa" location="/path/to/wmq.jmsra.rar">
        <classloader apiTypeVisibility="spec, ibm-api, api, third-party"/>
    </resourceAdapter>

    <!-- The sensitive data file location must be updated if the file is moved. -->
    <include location="DayTrader3.ear_server_sensitiveData.xml" onConflict="MERGE" optional="true"/>

    <!-- The following variables, which often differ between environments, have been extracted from the migrated configuration to allow for easy substitution. -->
    <variable name="CellDefaultKeyStore_location_1" defaultValue="DayTrader3_ear_appCell_CellDefaultKeyStore_key.p12"/>
    <variable name="CellDefaultSSLSettings_clientKeyAlias_1" defaultValue="default"/>
    <variable name="CellDefaultSSLSettings_serverKeyAlias_1" defaultValue="default"/>
    <variable name="CellDefaultSSLSettings_sslProtocol_1" defaultValue="TLS"/>
    <variable name="CellDefaultTrustStore_location_1" defaultValue="DayTrader3_ear_appCell_CellDefaultTrustStore_trust.p12"/>
    <variable name="httpEndpoint_host_1" defaultValue="*"/>
    <variable name="httpEndpoint_port_1" defaultValue="9080"/>
    <variable name="httpEndpoint_secure_port_1" defaultValue="9443"/>
    <variable name="ldapRegistry_baseDN_1" defaultValue="o=coconuts"/>
    <variable name="ldapRegistry_baseDN_2" defaultValue="o=ibm.com"/>
    <variable name="ldapRegistry_host_1" defaultValue="virmire.rtp.raleigh.ibm.com"/>
    <variable name="ldapRegistry_host_2" defaultValue="bluepages.ibm.com"/>
    <variable name="ldapRegistry_port_1" defaultValue="389"/>
    <variable name="ldapRegistry_port_2" defaultValue="636"/>
    <variable name="NodeDefaultKeyStore_location_1" defaultValue="DayTrader3_ear_appNode02_NodeDefaultKeyStore_key.p12"/>
    <variable name="NodeDefaultSSLSettings_sslProtocol_1" defaultValue="TLS"/>
    <variable name="TradeBrokerMQMDB_channel_1" defaultValue="DEV.APP.SVRCONN"/>
    <variable name="TradeBrokerMQMDB_hostName_1" defaultValue="virmire.rtp.raleigh.ibm.com"/>
    <variable name="TradeBrokerMQMDB_port_1" defaultValue="1414"/>
    <variable name="TradeBrokerMQMDB_queueManager_1" defaultValue="mqtest"/>
    <variable name="TradeBrokerMQQCF_channel_1" defaultValue="DEV.APP.SVRCONN"/>
    <variable name="TradeBrokerMQQCF_host_1" defaultValue="virmire.rtp.raleigh.ibm.com"/>
    <variable name="TradeBrokerMQQCF_queueManager_1" defaultValue="mqtest"/>
    <variable name="TradeBrokerMQQueue_baseQueueName_1" defaultValue="TRADE.BROKER.QUEUE"/>
    <variable name="TradeDataSource_databaseName_1" defaultValue="tradedb"/>
    <variable name="TradeDataSource_portNumber_1" defaultValue="50000"/>
    <variable name="TradeDataSource_serverName_1" defaultValue="virmire.rtp.raleigh.ibm.com"/>
    <variable name="TradeStreamerMQMDB_channel_1" defaultValue="DEV.APP.SVRCONN"/>
    <variable name="TradeStreamerMQMDB_hostName_1" defaultValue="virmire.rtp.raleigh.ibm.com"/>
    <variable name="TradeStreamerMQMDB_port_1" defaultValue="1414"/>
    <variable name="TradeStreamerMQMDB_queueManager_1" defaultValue="mqtest"/>
    <variable name="TradeStreamerMQTCF_channel_1" defaultValue="DEV.APP.SVRCONN"/>
    <variable name="TradeStreamerMQTCF_clientID_1" defaultValue="mqtest"/>
    <variable name="TradeStreamerMQTCF_host_1" defaultValue="virmire.rtp.raleigh.ibm.com"/>
    <variable name="TradeStreamerMQTCF_queueManager_1" defaultValue="mqtest"/>
    <variable name="TradeStreamerMQTopic_baseTopicName_1" defaultValue="TradeStreamerTopic"/>
</server>

The server.xml (and other files) created by that command are a good starting point, but they need a few edits before everything works properly. First, we need to correct the paths to the DB2 JDBC drivers and the MQ resource adapter. We'll use the variable shared.resource.dir, which Liberty predefines to point to the directory /opt/ol/wlp/usr/shared/resources, for these drivers and adapters.

    <jdbcDriver id="DB2_Universal_JDBC_Driver_Provider_Only">
        <file name="${shared.resource.dir}/db2jcc.jar"/>
        <file name="${shared.resource.dir}/db2jcc_license_cu.jar"/>
    </jdbcDriver>

    <resourceAdapter id="mqJmsRa" location="${shared.resource.dir}/wmq.jmsra.rar">
        <classloader apiTypeVisibility="spec, ibm-api, api, third-party"/>
    </resourceAdapter>
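The ${...} variables collected at the bottom of the generated server.xml also make per-environment changes easy, because Liberty lets a later variable definition override an earlier defaultValue. As a sketch (the file name and host here are hypothetical), a file dropped into the server's configDropins/overrides/ directory could repoint the datasource at a different DB2 server without editing server.xml:

```xml
<!-- Hypothetical override file, e.g. configDropins/overrides/db2-host.xml.
     A variable with a "value" here wins over the defaultValue in server.xml. -->
<server>
    <variable name="TradeDataSource_serverName_1" value="db2.example.com"/>
</server>
```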

We'll also make sure we have the files we need in place for the next step.

Note that if you use Transformation Advisor to scan the WebSphere ND cell, it will detect the need for these drivers, ask you to upload them, and include them in the migration bundle. In this example, we're updating the paths manually and copying the files over ourselves.
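A quick way to confirm the build context is complete is to check for each artifact the Dockerfile expects. This is just a helper sketch, run from the build directory; the file names come from the scanner output above, and the keystore .p12 files are needed as well:

```shell
# Report which of the expected build artifacts are present in the current directory
for f in DayTrader3.ear DayTrader3.ear_server.xml DayTrader3.ear_server_sensitiveData.xml \
         db2jcc.jar db2jcc_license_cu.jar wmq.jmsra.rar; do
  if [ -f "$f" ]; then echo "ok: $f"; else echo "missing: $f"; fi
done
```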

Creating the Container Image

Now that we have the artifacts in place, we can build the container image which OpenShift will deploy and run. To do that, we need to write the Dockerfile (if using Docker) or Containerfile (if using podman or similar).

FROM openliberty/open-liberty:full-java11-openj9-ubi

COPY --chown=1001:0 *.jar /opt/ol/wlp/usr/shared/resources/
COPY --chown=1001:0 *.rar /opt/ol/wlp/usr/shared/resources/
COPY --chown=1001:0 *.p12 /opt/ol/wlp/output/defaultServer/resources/security/

COPY --chown=1001:0 DayTrader3.ear_server_sensitiveData.xml /config/
COPY --chown=1001:0 DayTrader3.ear_server.xml /config/server.xml
COPY --chown=1001:0 DayTrader3.ear /config/apps/

RUN configure.sh

If you are using Transformation Advisor, it will generate a Dockerfile for you. There are some differences in what files TA includes, so you may need to make changes if you are following along.

Let's go over the Dockerfile piece by piece:

FROM openliberty/open-liberty:full-java11-openj9-ubi

We start with this Open Liberty image as a base. full means it contains all Liberty features even if we don't use them; this makes image builds faster but the resulting image larger. Since this effort is to build an image for a development environment, we'll use the full image, then switch to a slower-building but more size-efficient base image in Part 2.
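For contrast, here's a sketch of the Part 2 direction, assuming the kernel-slim Open Liberty tag and its features.sh helper: server.xml is copied in first, then only the features it references are downloaded, trading a longer build for a smaller image.

```dockerfile
# Sketch only: a smaller base that starts with just the Liberty kernel
FROM openliberty/open-liberty:kernel-slim-java11-openj9-ubi

# server.xml must be in place first so features.sh knows which features to install
COPY --chown=1001:0 DayTrader3.ear_server.xml /config/server.xml
RUN features.sh
```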

COPY --chown=1001:0 *.jar /opt/ol/wlp/usr/shared/resources/
COPY --chown=1001:0 *.rar /opt/ol/wlp/usr/shared/resources/
COPY --chown=1001:0 *.p12 /opt/ol/wlp/output/defaultServer/resources/security/

Next, we copy in the various resources our image will need: the driver jars for JDBC, the resource adapter rar for JMS, and the keystores for TLS. We copy these in first because they tend not to change often. When we rebuild the image, Docker reuses cached layers if nothing before them has changed, so copying infrequently-changing files first reduces the number of steps that have to be redone when a more frequently edited file like server.xml changes.

COPY --chown=1001:0 DayTrader3.ear_server_sensitiveData.xml /config/
COPY --chown=1001:0 DayTrader3.ear_server.xml /config/server.xml
COPY --chown=1001:0 DayTrader3.ear /config/apps/

Then we copy in the more volatile files for this image: the sensitive data and server XML files, and the app binary itself. Every file we copy in gets the --chown=1001:0 parameter, to ensure permissions are correct. By default, containers based on the Open Liberty image start as a non-root user with uid 1001 in group gid 0. OpenShift overrides the user with a random uid but keeps the group 0 membership, so for our server to have permission to access all its files on OpenShift, we need to make sure the group ownership is set as well.

RUN configure.sh

Finally, we run the built-in configure.sh script. This script can do many things based on how the image is configured, but the most important function it serves here is to pre-warm the Java caches so that containers based on this image start faster.

Time to build the image. We don't need any special arguments to the build, just the tag to name it:

$ docker build -t daytrader:v1 .
[+] Building 88.4s (13/13) FINISHED
 => [internal] load build definition from Dockerfile                                                               0.0s
 => => transferring dockerfile: 498B                                                                               0.0s
 => [internal] load .dockerignore                                                                                  0.0s
 => => transferring context: 2B                                                                                    0.0s
 => [internal] load metadata for docker.io/openliberty/open-liberty:full-java11-openj9-ubi                         0.0s
 => [1/8] FROM docker.io/openliberty/open-liberty:full-java11-openj9-ubi                                           0.0s
 => [internal] load build context                                                                                  0.3s
 => => transferring context: 13.25MB                                                                               0.3s
 => CACHED [2/8] COPY --chown=1001:0 *.jar /opt/ol/wlp/usr/shared/resources/                                       0.0s
 => [3/8] COPY --chown=1001:0 *.rar /opt/ol/wlp/usr/shared/resources/                                              0.1s
 => [4/8] COPY --chown=1001:0 *.p12 /opt/ol/wlp/output/defaultServer/resources/security/                           0.0s
 => [5/8] COPY --chown=1001:0 DayTrader3.ear_server_sensitiveData.xml /config/                                     0.0s
 => [6/8] COPY --chown=1001:0 DayTrader3.ear_server.xml /config/server.xml                                         0.0s
 => [7/8] COPY --chown=1001:0 DayTrader3.ear /config/apps/                                                         0.0s
 => [8/8] RUN configure.sh                                                                                        87.3s
 => exporting to image                                                                                             0.4s
 => => exporting layers                                                                                            0.4s
 => => writing image sha256:2dc8c44df41fc72f23b7f29f60230a1b91007da1954b8088f356286b547a7cf7                       0.0s
 => => naming to docker.io/library/daytrader:v1                                                                    0.0s

With that done, it's time to get this image over to OpenShift.

Deploying DayTrader to OpenShift

Since there are a few steps that need to be done via the command-line, we'll start by logging in to oc and then to the OpenShift internal image registry via docker.

$ oc login --token=sha256~_AIdXqiRrXYXZToGOmY48eE_INY6qAZo0tHYBw9ADmw --server=https://api.migr4.cp.fyre.ibm.com:6443
$ docker login -u $(oc whoami) -p $(oc whoami -t) default-route-openshift-image-registry.apps.migr4.cp.fyre.ibm.com

With both of these successful, we need to create a new project in OpenShift, then install the Open Liberty Operator into that project. In the OpenShift web console, we'll click Projects, then New Project, and name it "daytrader".

Now we'll install the Open Liberty Operator in it. It's available in OperatorHub, so we'll navigate to that, search for "Open Liberty" and select the Open Liberty Operator result in the list. We'll choose to install it into a single namespace, the daytrader project we created.

When that's complete, we'll return to the console and tag and push the image.

$ oc project daytrader
$ docker tag daytrader:v1 default-route-openshift-image-registry.apps.migr4.cp.fyre.ibm.com/daytrader/daytrader:v1
$ docker push default-route-openshift-image-registry.apps.migr4.cp.fyre.ibm.com/daytrader/daytrader:v1
The push refers to repository [default-route-openshift-image-registry.apps.migr4.cp.fyre.ibm.com/daytrader/daytrader]
b74d39931950: Pushed
b9823bc237d3: Pushed
055e997e12e5: Pushed
ab3013401810: Pushed
24cff2c4bc89: Pushed
8db161a9ad7a: Pushed
24df2d881763: Pushed
9b9ff23cda1c: Pushed
7c5ebee3095a: Pushed
dc86a4496596: Pushed
c356dac09bf9: Pushed
8433f9e0be86: Pushed
ef795d8fff72: Pushed
44879cca6a30: Pushed
86ebb8dcc5a7: Pushed
f17d3ed99602: Pushed
f0a77c369efd: Pushed
1a6543399d61: Pushed
v1: digest: sha256:7ae1b93db56e82b6b0b46ce268598b6c5a0225a0f7064c4bc8c0e5ebb6ed7864 size: 4101

Now that the Operator is deployed, and the image is pushed, the next step is to use the Operator to deploy the image. We only need to create one file, olapp.yaml, which describes an instance of the OpenLibertyApplication resource the Operator defines. When we create this in the "daytrader" project, the Operator will notice and will create the necessary Deployment, Service, Pod, and Route resources to get DayTrader running and accessible from the network.

If you are using Transformation Advisor, this YAML file will be created for you as part of the migration bundle for the application. Just replace the contents with what we have here.

apiVersion: openliberty.io/v1beta1
kind: OpenLibertyApplication
metadata:
  name: daytrader
spec:
  applicationImage: daytrader:v1
  service:
    type: ClusterIP
    port: 9443
  expose: true
  route:
    termination: passthrough
  env:
    - name: WLP_LOGGING_CONSOLE_FORMAT
      value: dev

The important values here are:

  • name: The name of the deployment we are creating
  • applicationImage: The name of the image. The Open Liberty Operator supports OpenShift image stream tags, so we don't need to specify the full tag we used to push the image initially
  • service: The definition for the service through which network traffic will be sent to the application. port needs to match the HTTPS port that Liberty is listening on, which is 9443.
  • expose: Tells the operator to create a route so this deployment can be accessed from outside the OpenShift cluster.
  • termination: Sets up TLS for the route. With passthrough, the application running inside the container controls the TLS connection, so we can use our TLS settings brought over from WebSphere.
  • env: Sets environment variables that are visible in the running container. Here we set a variable that causes Open Liberty to print logs in plain text instead of JSON.

To deploy DayTrader, all that's left to do is to run:

$ oc apply -f olapp.yaml
openlibertyapplication.openliberty.io/daytrader created

To check progress, we can run oc get pods and look for our new Pod:

$ oc get pods
NAME                                     READY   STATUS              RESTARTS   AGE
daytrader-89cb7955d-hv4f8                0/1     ContainerCreating   0          16s
open-liberty-operator-6f6b5bc46b-9sbk7   1/1     Running             0          29m

Note that the status is ContainerCreating. Eventually, it should turn to Running.

$ oc get pods
NAME                                     READY   STATUS    RESTARTS   AGE
daytrader-89cb7955d-hv4f8                1/1     Running   0          93s
open-liberty-operator-6f6b5bc46b-9sbk7   1/1     Running   0          30m

Now that it's up and going, we need to get the URL for the route that was exposed. We'll do this on the command line:

$ oc get route -o jsonpath='{.items[0].spec.host}'

Be sure to add https:// to the start and /daytrader to the end, resulting in https://daytrader-daytrader.apps.migr4.cp.fyre.ibm.com/daytrader. Navigate to this URL in your browser. When we do this, we're prompted to log in. Using the same credentials as we would in the on-premises deployment works, and DayTrader is displayed:
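Putting those two steps together in one place (the host in the fallback is this example's cluster; yours will differ):

```shell
# Grab the route host from the cluster, falling back to this example's host,
# then build the full DayTrader URL from it
host=$(oc get route -o jsonpath='{.items[0].spec.host}' 2>/dev/null) \
  || host="daytrader-daytrader.apps.migr4.cp.fyre.ibm.com"
echo "https://${host}/daytrader"
```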

It's working! Clicking around the DayTrader interface reveals that everything is functional. We now have a developer image of DayTrader running on Open Liberty in our OpenShift cluster, where we can experiment and test as we prepare for a production-level deployment. In Part 2, we'll achieve that production-level deployment, using features of OpenShift and Liberty to upgrade the existing image for high-availability.

See the IBM Developer learning path Modernizing applications to use WebSphere Liberty to discover all the application modernization tools available with WebSphere Hybrid Edition. Also, check out the other articles in this app modernization blog series.