1. Introduction
This example uses a previously created microservice application that was originally built for the x86 architecture.
__ 0. Let's clone the repository:
Example
# git clone https://github.com/DanielCasali/microservices-demo-user.git
Cloning into 'microservices-demo-user'...
remote: Enumerating objects: 1048, done.
remote: Total 1048 (delta 0), reused 0 (delta 0), pack-reused 1048
Receiving objects: 100% (1048/1048), 169.42 KiB | 6.05 MiB/s, done.
Resolving deltas: 100% (594/594), done.
#
__ 1. You should see a new folder created:
# ls -la
total 4
drwxr-xr-x. 3 root root 38 Jul 6 05:13 .
dr-xr-x---. 20 root root 4096 Jul 6 05:09 ..
drwxr-xr-x. 9 root root 203 Jul 6 05:13 microservices-demo-user
__ 2. Change to this directory
# cd microservices-demo-user
#
__ 3. Check the directory
# ls
api apispec db docker docker-compose.yml docker-compose-zipkin.yml glide.lock glide.yaml LICENSE main.go Makefile README.md scripts users
#
You can see some .go files. This normally means you are working with a Golang or simply “Go” application. We will understand how Go applications are built in the next section.
2. Understanding the build process
Do not be intimidated by this. Normally (95% of the time) the developers who created the application at your client will have all the steps documented, and they will understand how to do this if well guided. You just need to understand it at a high enough level to give them guidance. Most of the time, all that is needed is to find the right image to compile the code and the right image to build the container. Sometimes they are the same image, but sometimes they are not: for example, .NET, Go and Spring Boot normally use different build and runtime images, whereas Node.js will usually use the same image.
Section 1. Analyzing the Build Process
We will analyze the contents of the directory to make sure we understand how the application is built in container form.
The Dockerfile is the main file that holds this information, as it determines the content of the container image and how it is built.
__ 4. Search the directory for the Dockerfile
# ls -la
total 60
drwxr-xr-x. 10 root root 4096 Aug 1 07:54 .
drwxr-xr-x. 10 root root 4096 Aug 1 07:44 ..
drwxr-xr-x. 2 root root 161 Aug 1 07:44 api
drwxr-xr-x. 2 root root 56 Aug 1 07:44 apispec
drwxr-xr-x. 3 root root 52 Aug 1 07:44 db
drwxr-xr-x. 4 root root 33 Aug 1 07:44 docker
-rw-r--r--. 1 root root 854 Aug 1 07:44 docker-compose.yml
-rw-r--r--. 1 root root 1447 Aug 1 07:44 docker-compose-zipkin.yml
drwxr-xr-x. 8 root root 163 Aug 1 07:44 .git
drwxr-xr-x. 2 root root 86 Aug 1 07:44 .github
-rw-r--r--. 1 root root 27 Aug 1 07:44 .gitignore
-rw-r--r--. 1 root root 4390 Aug 1 07:44 glide.lock
-rw-r--r--. 1 root root 729 Aug 1 07:44 glide.yaml
-rw-r--r--. 1 root root 11357 Aug 1 07:44 LICENSE
-rw-r--r--. 1 root root 3889 Aug 1 07:44 main.go
-rw-r--r--. 1 root root 2437 Aug 1 07:44 Makefile
-rw-r--r--. 1 root root 2383 Aug 1 07:44 README.md
drwxr-xr-x. 2 root root 45 Aug 1 07:44 scripts
-rw-r--r--. 1 root root 1072 Aug 1 07:44 .travis.yml
drwxr-xr-x. 2 root root 141 Aug 1 07:44 users
We don’t see a Dockerfile in the main code directory, but there is a docker directory.
__ 5. Inspect the docker directory
# ls -la docker
total 4
drwxr-xr-x. 4 root root 33 Aug 1 07:44 .
drwxr-xr-x. 10 root root 4096 Aug 1 07:54 ..
drwxr-xr-x. 2 root root 32 Aug 1 07:44 user
drwxr-xr-x. 3 root root 39 Aug 1 07:44 user-db
We will not work with the db part of the application; we will just focus on the user service itself.
__ 6. Check the user directory inside the docker directory.
# ls -la docker/user
total 4
drwxr-xr-x. 2 root root 24 Jul 8 11:24 .
drwxr-xr-x. 4 root root 43 Jul 8 11:24 ..
-rw-r--r--. 1 root root 1395 Jul 8 11:24 Dockerfile-release
__ 7. Check the Dockerfile in docker/user. Do NOT try to understand it now; we will go through it step by step during the demo.
# cat docker/user/Dockerfile-release
FROM golang:1.7-alpine
COPY . /go/src/github.com/microservices-demo/user/
WORKDIR /go/src/github.com/microservices-demo/user/
RUN apk update
RUN apk add git
RUN go get -v github.com/Masterminds/glide
RUN glide install && CGO_ENABLED=0 go build -a -installsuffix cgo -o /user main.go
FROM alpine:3.4
ENV SERVICE_USER=myuser \
SERVICE_UID=10001 \
SERVICE_GROUP=mygroup \
SERVICE_GID=10001
RUN addgroup -g ${SERVICE_GID} ${SERVICE_GROUP} && \
adduser -g "${SERVICE_NAME} user" -D -H -G ${SERVICE_GROUP} -s /sbin/nologin -u ${SERVICE_UID} ${SERVICE_USER} && \
apk add --update libcap
ENV HATEAOS user
ENV USER_DATABASE mongodb
ENV MONGO_HOST user-db
WORKDIR /
EXPOSE 80
COPY --from=0 /user /
RUN chmod +x /user && \
chown -R ${SERVICE_USER}:${SERVICE_GROUP} /user && \
setcap 'cap_net_bind_service=+ep' /user
USER ${SERVICE_USER}
CMD ["/user", "-port=80"]
__ 8. It is a big Dockerfile because the developer who created the microservice uses it to build and package the application in a single step. (This is not always the case. Some Dockerfiles are simpler, with build and packaging separated into different files and run at different times.)
Review the excerpt below, which contains only the compile stage of the application.
FROM golang:1.7-alpine
COPY . /go/src/github.com/microservices-demo/user/
WORKDIR /go/src/github.com/microservices-demo/user/
RUN apk update
RUN apk add git
RUN go get -v github.com/Masterminds/glide
RUN glide install && CGO_ENABLED=0 go build -a -installsuffix cgo -o /user main.go
Starting with the build part of the Dockerfile, we see above that the project uses a very old golang image (FROM golang:1.7-alpine).
Next it copies the entire source of the user service into the image and changes the working directory to where the source was copied.
Next it runs a go get for glide, an old dependency-management tool that is not needed in newer versions of Go. (It is unlikely that up-to-date applications maintained by customers are still using it.)
Remember that our task at this point is not to understand the code, just build it to work on Power.
Given this build, we need a golang image (the part before the “:”) with a tag as close as possible to “1.7”. It is better to use the same version even when newer ones are available, because the code might use functions that were deprecated or removed in newer versions; the idea at this stage is not to fix issues, just to show that whatever is being done on x86 will build just fine on ppc64le.
Sometimes it is not possible to find the exact version (as in this case with golang, because the image is so old).
The best place to start looking for open-source images is generally Docker Hub. (Note, though, that Red Hat’s Quay.io is becoming a popular alternative because of the rate limits Docker has started imposing on pulls from Docker Hub.)
__ 9. Go to DockerHub (https://hub.docker.com) and search for the image using the search bar shown in red in the example:
Example
You can see that there is a PowerPC 64 LE filter that can be used if you have trouble finding a suitable image. Go is natively supported on Power, so the filter is not needed here, but it is good to know that this feature exists. (DO NOT SELECT THE FILTER; if you do, search for the image again.)
__ 10. Click on the golang Docker Official Image. (Note that PowerPC 64 LE is listed as a supported architecture in the tags below the image name.)
Example:
__ 11. Click on Tags
__ 12. Search for the tag we are looking for: 1.7
(It helps to choose Sort by: A-Z)
Example:
You will have to scroll down to find 1.7 as when sorted by A-Z, 1.11.7 is listed before 1.7.
The 1.7 image does not include Power PC 64 LE as a supported architecture, but if you look back at 1.11.7 you will see many more architectures including Power. (Note – you need to click on the “+7 more..” in the Digest list above to see all architectures)
Note: Since 1.7 does not have a Power PC 64 LE image and we know the 1.11 does, we will use 1.11 instead.
Every Docker official image lives under the “library” project, docker.io/library/&lt;IMAGE&gt;:&lt;TAG&gt;, so our image will be:
docker.io/library/golang:1.11
Note that specifying 1.11 with no patch level (i.e. not 1.11.7, 1.11.8, etc.) will automatically select the latest release of that version. You can look inside the image on Docker Hub by clicking on the tag. Change the tag search to 1.11 to find the 1.11 base image, then click on the tag, as indicated below:
You will see this image actually selects version 1.11.13 of the golang image – the latest 1.11 release available.
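Because the 1.11 tag floats to the newest patch release, builds run months apart may silently pick up different images. If reproducibility matters, the tag can be pinned more tightly; a sketch (the digest form is shown with a placeholder, not a real value):

```dockerfile
# Floating tag: resolves to the newest 1.11.x patch (1.11.13 at the time of writing)
FROM golang:1.11 AS builder

# Pinned alternatives (sketch only; take the real digest from the tag's
# Docker Hub page before using the second form):
# FROM golang:1.11.13 AS builder
# FROM golang@sha256:<digest-from-docker-hub> AS builder
```

For this demo the floating 1.11 tag is fine, since the goal is only to show that the build works on ppc64le.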
Now that we have our builder image, the next part of the Dockerfile builds the actual runtime container:
FROM alpine:3.4
ENV SERVICE_USER=myuser \
SERVICE_UID=10001 \
SERVICE_GROUP=mygroup \
SERVICE_GID=10001
RUN addgroup -g ${SERVICE_GID} ${SERVICE_GROUP} && \
adduser -g "${SERVICE_NAME} user" -D -H -G ${SERVICE_GROUP} -s /sbin/nologin -u ${SERVICE_UID} ${SERVICE_USER} && \
apk add --update libcap
ENV HATEAOS user
ENV USER_DATABASE mongodb
ENV MONGO_HOST user-db
WORKDIR /
EXPOSE 80
COPY --from=0 /user /
RUN chmod +x /user && \
chown -R ${SERVICE_USER}:${SERVICE_GROUP} /user && \
setcap 'cap_net_bind_service=+ep' /user
USER ${SERVICE_USER}
CMD ["/user", "-port=80"]
You can see above that this part starts with an alpine image. You must find the alpine image for Power.
This is a repetitive “search and find” process to make sure we get the right prerequisite images for building the final one used by our application.
You will see that the alpine 3.4 image does not support ppc64le. Since the alpine image is being used just to run an executable and not to build any code, we will try using the latest alpine image.
Follow the same pattern we used to find the golang image; you will find that the latest alpine image does support Power, as you can see in the picture below.
Hint: look for the alpine image on Docker Hub after filtering for ppc64le.
When you have an official Docker image with no project of its own, the project is “library”, so the image to be used will be:
docker.io/library/alpine:latest
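The naming rule above can be sketched as a small shell helper (hypothetical, for illustration only): a bare name maps to the library project, a name with a namespace keeps it, and a name that already includes a registry host is left alone.

```shell
# Expand a short image name to a fully qualified Docker Hub reference.
# Simplified sketch: it only distinguishes "has a registry host",
# "has a namespace", and "official (library) image".
qualify_image() {
  image="$1"
  case "$image" in
    *.*/*|localhost/*) echo "$image" ;;               # already fully qualified
    */*)               echo "docker.io/$image" ;;     # user/org namespace
    *)                 echo "docker.io/library/$image" ;;  # official image
  esac
}

qualify_image "alpine:latest"   # -> docker.io/library/alpine:latest
qualify_image "golang:1.11"     # -> docker.io/library/golang:1.11
```

This is how podman and docker themselves resolve short names when pulling from Docker Hub.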
Now that you have found both images needed to build the container, we can create a single Dockerfile, in the main project directory, that does the build and creates the final container.
Section 2. Creating the Dockerfile to build the container
Now we have a clear picture of what containers are needed.
First, we will look at the build part and make the adjustments to compile the code for it. Read the in-line comments below to see the changes that are required.
#Changed from Golang 1.7 to Golang 1.11 image. Name this stage "builder" which
#will be referred to later in the container build commands.
FROM golang:1.11 AS builder
#Added this line to install certificates to be able to download dependencies.
#This is needed because the base image is very old; without it the build would fail.
#You should not face a problem like this at a customer, since they would be
#using newer images.
RUN apt update && apt install ca-certificates libgnutls30
#Copying the code into the image and setting the working directory
COPY . /go/src/github.com/microservices-demo/user/
WORKDIR /go/src/github.com/microservices-demo/user/
#Commenting out apk update as we are using apt.
#RUN apk update
#RUN apk add git
#Downloading the dependency manager for go
RUN go get -v github.com/Masterminds/glide
#Installing dependency and compiling code
RUN glide install && CGO_ENABLED=0 go build -a -installsuffix cgo -o /user main.go
The part of the Dockerfile above creates the /user binary in the builder container; it will then be copied into the runtime container.
Next, we will analyze the runtime container part of the Dockerfile. Again, read the in-line comments to see the changes that are required.
#Changing from 3.4 to latest
FROM alpine:latest
#No need to create a user or to add Capabilities
#ENV SERVICE_USER=myuser \
# SERVICE_UID=10001 \
# SERVICE_GROUP=mygroup \
# SERVICE_GID=10001
#
#RUN addgroup -g ${SERVICE_GID} ${SERVICE_GROUP} && \
# adduser -g "${SERVICE_NAME} user" -D -H -G ${SERVICE_GROUP} -s /sbin/nologin -u ${SERVICE_UID} ${SERVICE_USER} && \
# apk add --update libcap
#Setting environment variables used by the program to access the database
ENV HATEAOS user
ENV USER_DATABASE mongodb
ENV MONGO_HOST user-db
#Making the root directory the working dir.
WORKDIR /
#Changing the port to expose to 8080
EXPOSE 8080
# “builder” below refers to the golang build stage defined earlier in this Dockerfile
COPY --from=builder /user /
#RUN chmod +x /user && \
# chown -R ${SERVICE_USER}:${SERVICE_GROUP} /user && \
# setcap 'cap_net_bind_service=+ep' /user
#USER ${SERVICE_USER}
#Changing the port to 8080
CMD ["/user", "-port=8080"]
If we use just the uncommented lines from both the build section and the runtime section of the Dockerfile, we will have a simple Dockerfile as follows:
__ 13. Create the Dockerfile in the main microservices directory.
# echo 'FROM golang:1.11 AS builder
RUN apt update && apt install ca-certificates libgnutls30
COPY . /go/src/github.com/microservices-demo/user/
WORKDIR /go/src/github.com/microservices-demo/user/
RUN go get -v github.com/Masterminds/glide
RUN glide install && CGO_ENABLED=0 go build -a -installsuffix cgo -o /user main.go
FROM alpine:latest
ENV HATEAOS user
ENV USER_DATABASE mongodb
ENV MONGO_HOST user-db
WORKDIR /
EXPOSE 8080
COPY --from=builder /user /
CMD ["/user", "-port=8080"]'>Dockerfile
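As an alternative, the same file can be written with a quoted heredoc, which avoids the single-quote pitfalls of a multi-line echo (functionally equivalent sketch):

```shell
# Write the same Dockerfile with a quoted heredoc; quoting 'EOF'
# prevents variable expansion inside the body.
cat > Dockerfile <<'EOF'
FROM golang:1.11 AS builder
RUN apt update && apt install ca-certificates libgnutls30
COPY . /go/src/github.com/microservices-demo/user/
WORKDIR /go/src/github.com/microservices-demo/user/
RUN go get -v github.com/Masterminds/glide
RUN glide install && CGO_ENABLED=0 go build -a -installsuffix cgo -o /user main.go

FROM alpine:latest
ENV HATEAOS user
ENV USER_DATABASE mongodb
ENV MONGO_HOST user-db
WORKDIR /
EXPOSE 8080
COPY --from=builder /user /
CMD ["/user", "-port=8080"]
EOF
```

Either form produces the same two-stage Dockerfile; use whichever your shell habits prefer.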
Now we have a single Dockerfile that will build the application on Power and containerize it in a single step, making it easier to integrate the software.
It may be necessary to log in to Docker Hub to be able to download images: anonymous pulls are rate-limited, and when the quota is exceeded you will be denied further downloads.
3. Building and Pushing to the registry
When talking to customers you will hear the names Docker registry, JFrog Artifactory, Nexus, Quay, or simply local registry. These are just different types of repositories for container images. Many of them can also host other developer artifacts, such as Maven repositories, but that goes beyond the scope of this demonstration.
For this demo, we will use the internal OpenShift Registry that was exposed during the Demo preparation (The Lab Request & Prepare for Sock Shop section of the Techzone labs).
Note: a centralized enterprise registry could also be used, but setting one up goes beyond the scope of this demo.
Section 1. Building the container
With the Dockerfile created and the images for Power already known, the container building process is straightforward.
__ 14. Run the command to get the image registry Route into a variable
# REGISTRYHOST=$(su - cecuser -c "oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'")
#
__ 15. Verify the command correctly captured the host
# echo $REGISTRYHOST
default-route-openshift-image-registry.apps.p1334.cecc.ihost.com
#
Note: Your route will differ from the one above, but it cannot be blank. If it is blank, make sure you are logged in to OCP as stated in Section 1.
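Since an empty route would produce a malformed image tag in the next step, you could add a small guard before building (a sketch; the function name and sample route value are hypothetical):

```shell
# Fail fast if the registry route is empty, instead of building and
# pushing to a broken image reference.
check_registry() {
  if [ -z "$1" ]; then
    echo "ERROR: registry route is empty - log in to OCP and re-run 'oc get route'" >&2
    return 1
  fi
  echo "Using registry: $1"
}

# Example with a sample route value (yours will differ):
REGISTRYHOST="default-route-openshift-image-registry.apps.example.com"
check_registry "$REGISTRYHOST"
```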
__ 16. Run the podman command to build our new image using the Dockerfile we created in the previous step, and tag it for the internal registry used in the course
# podman build -f Dockerfile --tag $REGISTRYHOST/sock-shop/user:latest
[1/2] STEP 1/8: FROM golang:1.11 AS builder
[1/2] STEP 2/8: RUN apt update && apt install ca-certificates libgnutls30
--> 19aec2a0fb4
[1/2] STEP 3/8: COPY . /go/src/github.com/microservices-demo/user/
--> 5d68faedeb5
[1/2] STEP 4/8: WORKDIR /go/src/github.com/microservices-demo/user/
--> 71a5153eece
.
.
.
[INFO] --> Exporting gopkg.in/tomb.v2
[INFO] Replacing existing vendor dependencies
--> 7e6a13c6e16
[2/2] STEP 1/8: FROM alpine:latest
[2/2] STEP 2/8: ENV HATEAOS user
--> 0f09a22bccd
[2/2] STEP 3/8: ENV USER_DATABASE mongodb
--> cc55be7b4ba
[2/2] STEP 4/8: ENV MONGO_HOST user-db
--> 05d16353b38
[2/2] STEP 5/8: WORKDIR /
--> f9791b7b607
[2/2] STEP 6/8: EXPOSE 8080
--> 0b726550cc6
[2/2] STEP 7/8: COPY --from=builder /user /
--> cc2d7945b83
[2/2] STEP 8/8: CMD ["/user", "-port=8080"]
[2/2] COMMIT default-route-openshift-image-registry.apps.p1334.cecc.ihost.com/sock-shop/user:latest
--> ff8bfc90c50
Successfully tagged default-route-openshift-image-registry.apps.p1334.cecc.ihost.com/sock-shop/user:latest
ff8bfc90c50a94a513ab93ef5ff6242921bb0d431d7bf958a153758deffa8a0e
This created two new images in your local podman storage: one with no reference (the intermediate image used for the build) and the user image itself.
__ 17. Check the two images created using podman images
# podman images |head -3
REPOSITORY TAG IMAGE ID CREATED SIZE
default-route-openshift-image-registry.apps.p1334.cecc.ihost.com/sock-shop/user latest ff8bfc90c50 About a minute ago 121 MB
<none> <none> 1d4425f7bd54 4 minutes ago 1.04 GB
You are ready to push the image to be used in production.
Section 2. Pushing the images to the internal registry
In this short section we will push the new image to the internal registry. (If you are working as a team, with different teams building different microservices, this allows you to access each other’s microservices and ultimately run the application as a whole.)
__ 18. Log in to the registry using podman
# podman login $REGISTRYHOST -u cecuser -p $(su - cecuser -c "oc whoami -t") --tls-verify=false
Login Succeeded!
#
__ 19. Create the sock-shop project
# oc new-project sock-shop
Now using project "sock-shop" on server "https://api.p1323.cecc.ihost.com:6443".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app rails-postgresql-example
to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname
__ 20. Push the created image to the internal registry
# podman push $REGISTRYHOST/sock-shop/user:latest --tls-verify=false
Getting image source signatures
Copying blob 1e9678c05654 done
Copying blob 0d2870e9322b done
Copying blob 50a402e5d1d4 done
Copying blob 12cfd3236c9e done
Copying config 86f486143b done
Writing manifest to image destination
Storing signatures
__ 21. Change the image of the user Go microservice. To do that, either find and edit the yaml by hand or simply run the command below to change the image.
# sed -i 's/image: quay.io\/daniel_casali\/usvc-user$/image: image-registry.openshift-image-registry.svc:5000\/sock-shop\/user/' /home/cecuser/assets/usvc/app.yaml
#
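If you want to sanity-check the substitution before touching the real app.yaml, you can run the same sed expression against a sample line first (a hypothetical dry run):

```shell
# The same substitution as the step above, applied to a sample line.
rewrite='s/image: quay.io\/daniel_casali\/usvc-user$/image: image-registry.openshift-image-registry.svc:5000\/sock-shop\/user/'
result=$(echo '  image: quay.io/daniel_casali/usvc-user' | sed "$rewrite")
# The line now points at the internal registry image; indentation is preserved.
echo "$result"
```

After running the real command, grepping for `image:` in /home/cecuser/assets/usvc/app.yaml should show the internal registry reference for the user deployment.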
4. Deploying the Microservice App
Now, with the correct microservice image set, you can just apply the yaml file.
(Optionally, open the Administrator console for your OpenShift cluster and navigate to the Developer view -> Topology and select sock-shop as the project. When you run the oc apply command below, you will see the microservices appear and they will start to build, then run.)
__ 22. Run the command below to create the application
# oc apply -f /home/cecuser/assets/usvc/app.yaml
Warning: resource namespaces/sock-shop is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
namespace/sock-shop configured
deployment.apps/carts created
service/carts created
deployment.apps/carts-db created
service/carts-db created
deployment.apps/catalogue created
service/catalogue created
deployment.apps/catalogue-db created
service/catalogue-db created
deployment.apps/front-end created
service/front-end-external created
service/front-end created
deployment.apps/orders created
service/orders created
deployment.apps/orders-db created
service/orders-db created
deployment.apps/payment created
service/payment created
deployment.apps/queue-master created
service/queue-master created
deployment.apps/rabbitmq created
service/rabbitmq created
deployment.apps/session-db created
service/session-db created
deployment.apps/shipping created
service/shipping created
deployment.apps/user created
service/user created
deployment.apps/user-db created
service/user-db created
route.route.openshift.io/front-end-external created
5. Show the application on OpenShift
Go to the Developer view, open Topology, and select the sock-shop project.
__ 23. See that the application is up and show the results.
__ 24. Click on the front-end link (on the upper right side of the front-end pod) and you should see the application.