Cloud Pak for Data Group

Cloud Pak Installer on Managed OpenShift

By SUMIT KUMAR posted Mon August 12, 2019 04:30 PM

  

Steps required before running this installer:

  1. Provision an OpenShift cluster. The minimum recommended configuration is a 3-node cluster with 16 CPUs and 64 GB of memory per node.
  2. Make sure the cluster can connect to the Internet; this is required for pulling the pod images. If installing in an air-gapped environment, consult <<TODO>>.
  3. Make sure you can connect to the cluster and that you have cluster-admin permissions.
  4. Use a Mac or Linux machine to run the installation scripts.

  

** Be aware that these installation instructions still require the cluster-admin role to be granted to the service accounts default and icpd-anyuid-sa.

 

Installation Steps:

  1. Create the project and switch to it. In this how-to we use the project name "zen".

# oc new-project zen

  2. Run the following pre-check script to set up the environment:

#!/bin/bash
#******************************************************************************
# Licensed Materials - Property of IBM
# (c) Copyright IBM Corporation 2019. All Rights Reserved.
#
# Note to U.S. Government Users Restricted Rights:
# Use, duplication or disclosure restricted by GSA ADP Schedule
# Contract with IBM Corp.
#******************************************************************************

export NAMESPACE="zen"

oc apply -f - << EOF
allowHostDirVolumePlugin: false
allowHostIPC: true
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegedContainer: false
allowedCapabilities:
- '*'
allowedFlexVolumes: null
apiVersion: v1
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups:
- cluster-admins
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: ${NAMESPACE}-zenuid provides all features of the restricted SCC but allows users to run with any UID and any GID.
  name: ${NAMESPACE}-zenuid
priority: 10
readOnlyRootFilesystem: false
requiredDropCapabilities: null
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- persistentVolumeClaim
- projected
- secret
EOF

oc adm policy add-scc-to-user ${NAMESPACE}-zenuid system:serviceaccount:${NAMESPACE}:default
oc adm policy add-scc-to-user anyuid system:serviceaccount:${NAMESPACE}:icpd-anyuid-sa
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:${NAMESPACE}:default
echo "SCRIPT RAN SUCCESSFULLY"
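Before moving on, you can confirm that the pre-check script actually created the SCC and granted it to the service accounts. A quick verification sketch (the resource names match the script above; note that on OpenShift 3.x "oc adm policy add-scc-to-user" records the service account in the SCC's users list, while on 4.x the grant may be expressed via RBAC instead, so the output may differ):

```shell
#!/bin/bash
# Verify the custom SCC exists and carries the expected priority.
oc get scc zen-zenuid -o jsonpath='{.priority}'   # expect: 10

# Verify the service accounts were granted the SCCs (3.x-style users list).
oc get scc zen-zenuid -o jsonpath='{.users}' | grep default
oc get scc anyuid -o jsonpath='{.users}' | grep icpd-anyuid-sa
```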

 

  3. Execute install.sh, replacing the exported values with the ones used in your environment, and make sure it succeeds:
#!/bin/bash
#******************************************************************************
# Licensed Materials - Property of IBM
# (c) Copyright IBM Corporation 2019. All Rights Reserved.
#
# Note to U.S. Government Users Restricted Rights:
# Use, duplication or disclosure restricted by GSA ADP Schedule
# Contract with IBM Corp.
#******************************************************************************

export NAMESPACE="zen"
export STORAGE_CLASS="ibmc-file-gold"
export DOCKER_USERNAME="iamapikey"
export DOCKER_REGISTRY="cp.stg.icr.io/cp/cp4d"
export DOCKER_REGISTRY_PASS="<<DOCKER_PASS>>"
export INSTALL_TILLER=1
export TILLER_NAMESPACE=${NAMESPACE}
export TILLER_IMAGE="cp.stg.icr.io/cp/cp4d/tiller:v2.9.1"
export TILLER_TLS=0
export CONSOLE_ROUTE_PREFIX="cp4data-console"

# create pull secret
oc create secret docker-registry icp4d-anyuid-docker-pull -n ${NAMESPACE} --docker-server=${DOCKER_REGISTRY} --docker-username=${DOCKER_USERNAME} --docker-password=${DOCKER_REGISTRY_PASS}
oc secrets -n ${NAMESPACE} link default icp4d-anyuid-docker-pull --for=pull
oc create secret docker-registry sa-${NAMESPACE} -n ${NAMESPACE} --docker-server=${DOCKER_REGISTRY} --docker-username=${DOCKER_USERNAME} --docker-password=${DOCKER_REGISTRY_PASS}

cat << EOF | oc apply --namespace ${NAMESPACE} -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud-installer
  labels:
    app: cp4data-installer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cp4data-installer
  template:
    metadata:
      labels:
        app: cp4data-installer
    spec:
      containers:
      - env:
        - name: NAMESPACE
          value: ${NAMESPACE}
        - name: TILLER_NAMESPACE
          value: ${TILLER_NAMESPACE}
        - name: INSTALL_TILLER
          value: "${INSTALL_TILLER}"
        - name: TILLER_IMAGE
          value: ${TILLER_IMAGE}
        - name: TILLER_TLS
          value: "${TILLER_TLS}"
        - name: STORAGE_CLASS
          value: ${STORAGE_CLASS}
        - name: DOCKER_REGISTRY
          value: ${DOCKER_REGISTRY}
        - name: DOCKER_USERNAME
          value: ${DOCKER_USERNAME}
        - name: DOCKER_REGISTRY_USER
          value: ${DOCKER_USERNAME}
        - name: DOCKER_REGISTRY_PASS
          value: ${DOCKER_REGISTRY_PASS}
        - name: CONSOLE_ROUTE_PREFIX
          value: ${CONSOLE_ROUTE_PREFIX}
        name: installer
        image: cp.stg.icr.io/cp/cp4d/cp4d-installer:v1
        imagePullPolicy: Always
        resources:
          limits:
            memory: "200Mi"
            cpu: 1
        command: [ "/bin/sh", "-c" ]
        args: [ "./deploy-cp4data.sh; sleep 30000" ]
      imagePullSecrets:
      - name: icp4d-anyuid-docker-pull
EOF

sleep 5
oc get pods -n ${NAMESPACE}
POD=$(oc get pods -n ${NAMESPACE} -l app=cp4data-installer -o jsonpath="{.items[0].metadata.name}")
echo $POD
oc logs -n ${NAMESPACE} --follow $POD

 

  4. Check the status of the installer pod with "oc logs" (the pod name starts with "cloud-installer") and wait for the install to complete.
  5. Wait until the pod logs show the message "Installation Completed".
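Instead of watching the logs by hand, the wait in steps 4 and 5 can be scripted. A small polling sketch (the pod label and completion message match the install script above; the timings are arbitrary):

```shell
#!/bin/bash
# Poll the installer pod logs until the completion message appears.
NAMESPACE="zen"
POD=$(oc get pods -n ${NAMESPACE} -l app=cp4data-installer -o jsonpath="{.items[0].metadata.name}")

until oc logs -n ${NAMESPACE} ${POD} | grep -q "Installation Completed"; do
  echo "Waiting for the installer to finish..."
  sleep 60
done
echo "Installer reported: Installation Completed"
```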
