# Part 1: Install Cloud Pak for Automation on OCP 4.6 on AWS Custom VPC with DB2 and LDAP
# Install OCP 4.6 on AWS Custom VPC with DB2 and LDAP

### Download required files

<p align="left">
<a href="https://cloud.redhat.com/openshift/install/aws/installer-provisioned">
Download the installer, pull secret and command line tools
</a>
</p>
```bash
ls -l /Users/mkhilnan/projects/aws
-rw-r--r--@ 1 mkhilnan wheel 24276390 Feb 14 17:25 openshift-client-mac.tar.gz
-rw-r--r--@ 1 mkhilnan wheel 93783733 Feb 14 17:25 openshift-install-mac.tar.gz
-rw-r--r--@ 1 mkhilnan wheel 2759 Feb 14 17:25 pull-secret
```
### Install oc and kubectl

```bash
tar xvf openshift-install-mac.tar.gz
x README.md
x openshift-install
tar xvf openshift-client-mac.tar.gz
x README.md
x oc
x kubectl
chmod +x ./kubectl
sudo cp ./kubectl /usr/local/bin/kubectl
kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"18d7461aca47e77cefb355339252a8d4c149188f", GitTreeState:"clean", BuildDate:"2021-01-30T16:44:37Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
chmod +x ./oc
sudo cp ./oc /usr/local/bin/oc
oc version
Client Version: 4.6.16
```
### Install AWS CLI

<p align="left">
<a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html">
Download and install AWS command line tool
</a>
</p>
```bash
aws --version
aws-cli/1.16.265 Python/2.7.16 Darwin/19.6.0 botocore/1.13.1
# Verify the connection:
aws s3 ls
```
### Verify the AWS account

Access "My Service Quotas" in the AWS console and verify the following:
* EC2 instances: minimum 10 available
* Elastic IPs: minimum 3
* VPCs: minimum 1
* Elastic Load Balancers: minimum 3
* NAT Gateways: minimum 3
* VPC Gateways: minimum 1 (for S3 access)
* S3: minimum 2 buckets
* Security Groups: minimum 10
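The checklist above can be scripted once you have noted the available counts from the Service Quotas console. A minimal sketch; the "available" numbers below are placeholders, not real account values:

```shell
#!/usr/bin/env bash
# Compare available capacity (read off the Service Quotas console) against
# the minimums listed above. The third argument to each call is a placeholder.
check_quota() {
  local name=$1 required=$2 available=$3
  if [ "$available" -ge "$required" ]; then
    echo "OK    $name: $available available (need $required)"
  else
    echo "SHORT $name: $available available (need $required)"
  fi
}
check_quota "EC2 instances" 10 20
check_quota "Elastic IPs" 3 5
check_quota "NAT Gateways" 3 5
check_quota "S3 buckets" 2 100
```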
### Create AWS OCP install user

Log in to the AWS console and create an AWS IAM user named ocpadmin with programmatic access and the "AdministratorAccess" policy.
Make a copy of the user's access key ID and secret access key.
Add the new AWS user profile to the credentials file:
```bash
vi /Users/mkhilnan/.aws/credentials
[awscto_ocpadmin]
aws_access_key_id = <access_key_id>
aws_secret_access_key = <secret_access_key>
region = us-east-2
export AWS_PROFILE=awscto_ocpadmin
```
### Create AWS Route53 Domain

Log in to the AWS console and create an AWS Route53 domain named issfocpdemo.com.
Once the Route53 domain is created, an AWS hosted zone named issfocpdemo.com is created automatically.

```bash
aws route53 list-hosted-zones
{
"HostedZones": [
{
"ResourceRecordSetCount": 2,
"CallerReference": "49ccfde7-cb38-4bb5-bb27-a204c35d2423",
"Config": {
"Comment": "Created by Manoj Khilnani",
"PrivateZone": false
},
"Id": "/hostedzone/Z00740652TW6SQ6NGD32N",
"Name": "issfocpdemo.com."
}
]
}
```
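Later steps sometimes need just the hosted zone ID rather than the full JSON. A sketch of pulling it out with `sed`, shown here against the sample output above via a here-doc; on a live account you would pipe `aws route53 list-hosted-zones` instead:

```shell
# Extract the hosted zone ID from a list-hosted-zones response.
# Live version: aws route53 list-hosted-zones | sed -n '...'
zone_id=$(sed -n 's|.*"Id": "/hostedzone/\([^"]*\)".*|\1|p' <<'EOF'
{
    "HostedZones": [
        {
            "Id": "/hostedzone/Z00740652TW6SQ6NGD32N",
            "Name": "issfocpdemo.com."
        }
    ]
}
EOF
)
echo "$zone_id"
```

On a live account, `aws route53 list-hosted-zones --query "HostedZones[0].Id" --output text` gives the same field directly.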
### Create ssh key to access the nodes```$bash
ssh-keygen -t ed25519 -N '' -f ~/.ssh/awsocp_id_rsa
Created id_rsa and id_rsa.pub
Your identification has been saved in /Users/mkhilnan/.ssh/awsocp_id_rsa.
Your public key has been saved in /Users/mkhilnan/.ssh/awsocp_id_rsa.pub.
eval "$(ssh-agent -s)"
Agent pid 79045
ssh-add /Users/mkhilnan/.ssh/awsocp_id_rsa
Identity added: /Users/mkhilnan/.ssh/awsocp_id_rsa (mkhilnan@MacBook-Pro-92.local)
```
### Create AWS resources

Import the aws_ocp_customvpc.yaml file to AWS S3.
Create a CloudFormation stack using the S3 object link URL:

```bash
export AWS_PROFILE=awscto_ocpadmin
aws cloudformation create-stack --stack-name issfocpdemo --template-url https://cf-templates-kkkzr409sbek-us-east-2.s3.us-east-2.amazonaws.com/aws_ocp_customvpc.yaml
```
The aws_ocp_customvpc.yaml creates the required AWS components:
* Internet gateways
* NAT gateways
* Subnets
* Route tables
* VPCs
* VPC DHCP options
* VPC endpoints
Verify that "StackStatus" is "CREATE_COMPLETE" and copy the created subnet IDs from the stack output:
```bash
aws cloudformation describe-stacks --stack-name=issfocpdemo
```
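Rather than polling describe-stacks by hand, the AWS CLI can block until the stack settles and `--query` can pull out just the outputs. The live commands are shown as comments (they need the CLI and the profile); the runnable part below illustrates extracting subnet IDs from a saved response, with placeholder IDs:

```shell
# Live usage (requires the AWS CLI and the awscto_ocpadmin profile):
#   aws cloudformation wait stack-create-complete --stack-name issfocpdemo
#   aws cloudformation describe-stacks --stack-name issfocpdemo \
#     --query 'Stacks[0].Outputs' --output text
# Offline illustration: pull OutputValue fields out of a saved response.
# The subnet IDs below are placeholders, not real stack output.
subnets=$(sed -n 's/.*"OutputValue": "\(subnet-[^"]*\)".*/\1/p' <<'EOF'
[
    { "OutputKey": "PublicSubnetIds",  "OutputValue": "subnet-0aaa1111" },
    { "OutputKey": "PrivateSubnetIds", "OutputValue": "subnet-0bbb2222" }
]
EOF
)
echo "$subnets"
```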
### Install OCP 4.6

#### Create new installfiles folder

```bash
mkdir /Users/mkhilnan/projects/aws/installfiles
```
```bash
vi /Users/mkhilnan/projects/aws/installfiles/install-config.yaml
```
The existing install-config.yaml creates 10 nodes across 3 availability zones.
* 3 control plane nodes (m5.2xlarge)
* 3 compute nodes that will be used for CP4Auto (m5.4xlarge)
* 3 compute nodes that will be used for OCS (m5.4xlarge)
* 1 DB2 LDAP node (m5.4xlarge)
Copy and modify the install-config.yaml
* Modify the base domain, subnets, region and availability zones values in the install-config.yaml
* Modify the pull-secret and sshkey (awsocp_id_rsa.pub) values in the install-config.yaml
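For orientation, a trimmed install-config.yaml fragment showing where those values live. The domain, cluster name, zones, and subnet IDs below are placeholders; keep the machine pool sizing from the file shipped with this repo:

```yaml
# Placeholder values -- substitute your own domain, zones, and subnet IDs.
apiVersion: v1
baseDomain: issfocpdemo.com
compute:
- name: worker
  platform:
    aws:
      type: m5.4xlarge
      zones:
      - us-east-2a
      - us-east-2b
      - us-east-2c
  replicas: 7
controlPlane:
  name: master
  platform:
    aws:
      type: m5.2xlarge
  replicas: 3
metadata:
  name: mkcluster
platform:
  aws:
    region: us-east-2
    subnets:
    - subnet-<private-az-a>
    - subnet-<private-az-b>
    - subnet-<private-az-c>
pullSecret: '<contents of pull-secret>'
sshKey: |
  <contents of awsocp_id_rsa.pub>
```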
#### Create OCP Cluster (takes around 40 minutes)

```bash
export AWS_PROFILE=awscto_ocpadmin
./openshift-install create cluster --dir=/Users/mkhilnan/projects/aws/installfiles --log-level=info
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/mkhilnan/Desktop/WorkFiles/Projects/RedHat_OpenShift/install/aws/installfiles/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mkcluster.issfocpdemo.com
INFO Login to the console with user: "kubeadmin", and password: ""
INFO Time elapsed: 37m38s
```
Note: After the cluster is up and running, take a backup of the /Users/mkhilnan/projects/aws/installfiles folder<br>
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
#### Access and validate OCP 4.6

```bash
export KUBECONFIG=/Users/mkhilnan/projects/aws/installfiles/auth/kubeconfig
oc login -u kubeadmin https://api.mkcluster.issfocpdemo.com:6443
oc whoami
oc adm top nodes
oc get routes -n openshift-console | grep 'console-openshift'
```
### Install OpenShift Container Storage (OCS)

#### Deploy the OCS Operator

```bash
oc create -f deployocs.yaml
namespace/openshift-storage created
operatorgroup.operators.coreos.com/openshift-storage-operatorgroup created
catalogsource.operators.coreos.com/ocs-catalogsource created
subscription.operators.coreos.com/ocs-subscription created
```
#### Verify OCS Deployment Phase is Succeeded

```bash
oc get csv -n openshift-storage -w
NAME DISPLAY VERSION REPLACES PHASE
ocs-operator.v4.8.0 OpenShift Container Storage 4.8.0 Succeeded
```
#### Create OCS Storage Cluster

* Access the OpenShift console.
* Go to Installed Operators.
* Click 'OpenShift Container Storage'
* Click 'Create StorageCluster'
* Change OCS Service Capacity from Standard 2 TiB to Small 0.5 TiB
* Select 3 worker nodes based on different availability zones (oc get nodes --show-labels | grep worker)
* Click Create
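To confirm the three selected workers really span three availability zones, the zone label on each node can be counted. The live command is shown as a comment; the runnable part below parses sample `--show-labels` output with placeholder node names:

```shell
# Live usage: list the zone label of every worker node.
#   oc get nodes -l node-role.kubernetes.io/worker --show-labels | grep zone
# Offline illustration against sample label output (placeholder node names):
labels='ip-10-0-1-1 topology.kubernetes.io/zone=us-east-2a
ip-10-0-2-2 topology.kubernetes.io/zone=us-east-2b
ip-10-0-3-3 topology.kubernetes.io/zone=us-east-2c'
# Count the distinct zones; for an OCS cluster this should be 3.
zones=$(echo "$labels" | sed -n 's/.*zone=\(us-east-2[a-z]\).*/\1/p' | sort -u | wc -l)
echo "distinct zones: $zones"
```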
#### Verify OCS Deployment Phase is Succeeded and all pods are running

```bash
oc get csv -n openshift-storage -w
NAME DISPLAY VERSION REPLACES PHASE
ocs-operator.v4.8.0 OpenShift Container Storage 4.8.0 Succeeded
oc get pods -n openshift-storage
```
### Create htpasswd Identity Provider

```bash
htpasswd -c -B -b /Users/mkhilnan/Desktop/WorkFiles/Projects/RedHat_OpenShift/install/aws/users.htpasswd cp4autoadmin <password>
Adding password for user cp4autoadmin
oc create secret generic htpasswd-secret --from-file htpasswd=/Users/mkhilnan/Desktop/WorkFiles/Projects/RedHat_OpenShift/install/aws/users.htpasswd -n openshift-config
secret/htpasswd-secret created
oc apply -f /Users/mkhilnan/Desktop/WorkFiles/Projects/RedHat_OpenShift/install/aws/htpasswd.yaml
oauth.config.openshift.io/cluster configured
oc logout
oc login -u cp4autoadmin https://api.mkcluster.issfocpdemo.com:6443
oc logout
oc login -u kubeadmin https://api.mkcluster.issfocpdemo.com:6443
```
### Install CloudPak PreReqs - DB2 and LDAP

#### Label DB2 LDAP worker node

```bash
oc label --overwrite node ip-10-0-135-122.us-east-2.compute.internal app=db2-ldap
```
#### Install OpenLDAP

```bash
oc create -f openldap_deploy.yaml
namespace/openldap created
persistentvolumeclaim/openldap-pvc-data created
persistentvolumeclaim/openldap-pvc-conf created
deployment.apps/openldap-2441-centos7 created
```
##### Import required LDAP users and groups

```bash
podname=$(oc get pod -n openldap | grep openldap-2441-centos7 | awk '{print $1}')
# Copy the ldif file to the pod
oc -n openldap cp cp4a.ldif $podname:/tmp
# Load the ldif file
oc exec $podname -n openldap -- ldapadd -x -H ldap://localhost -D "cn=Manager,dc=example,dc=com" -f /tmp/cp4a.ldif -w admin
oc expose deploy openldap-2441-centos7 -n openldap
service/openldap-2441-centos7 exposed
```
#### Install IBM Security Directory Server (SDS)

If you install IBM SDS on an EC2 instance, verify the following:
* Verify SDS EC2 instance ports (389 and 636) are open
* Verify the SDS EC2 Security group allows access to the OCP Master and Worker nodes
* Verify OCP pods can communicate with the IBM SDS on ports 389 and 636
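A quick reachability check for those ports can be done with bash's /dev/tcp from any machine that can see the SDS host (from inside the cluster, the same thing can be run in a debug pod via `oc debug`). A minimal sketch; sds.example.com is a placeholder for the SDS EC2 hostname:

```shell
# Check whether an LDAP host answers on the standard and TLS ports.
# sds.example.com is a placeholder -- substitute the SDS EC2 hostname.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed or unreachable"
  fi
}
check_port sds.example.com 389
check_port sds.example.com 636
```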
#### Install DB2

```bash
export DOCKER_EMAIL=mkhilnan@us.ibm.com
export IBMENTITLEDKEY="<IBM Entitlement key>"
./db2setup.sh
```
##### Install DB2 Operator

* Access the OpenShift console.
* Go to Operator Hub.
* Search 'IBM DB2'
* Click 'IBM DB2'
* Click 'Install'
* Select specific namespace as 'db2u-oltp1'
* Click 'Install'
```bash
oc create -f db2cluster.yaml
db2ucluster.db2u.databases.ibm.com/db2u-cp4auto created
# Verify the c-db2u-cp4auto-db2u-0 pod is running
oc get pods -w
```
##### Change db2inst1 password

```bash
oc rsh c-db2u-cp4auto-db2u-0 /bin/bash
sudo yum install -y passwd
sudo passwd db2inst1
sudo chage -M -1 db2inst1
su - db2inst1
db2 connect to BLUDB user db2inst1 using <password>
db2 connect reset
```
##### Verify DB2 and drop default BLUDB database

```bash
oc rsh c-db2u-cp4auto-db2u-0 /bin/bash
[db2uadm@c-db2u-cp4auto-db2u-0 /]$ whoami
db2uadm
[db2uadm@c-db2u-cp4auto-db2u-0 /]$ su - db2inst1
db2 update dbm cfg using NUMDB 10
db2stop
db2start
db2 get dbm cfg | grep NUMDB
db2 drop database BLUDB
DB20000I The DROP DATABASE command completed successfully.
db2stop
db2start
exit
exit
```
# Uninstall OCP 4.6 on AWS Custom VPC with DB2 and LDAP

### Uninstall DB2

```bash
oc project db2u-oltp1
oc delete -f db2cluster.yaml
oc delete -f db2u-scc.yaml
oc delete -f db2pvc.yaml
oc delete secret ibm-registry -n db2u-oltp1
oc delete project db2u-oltp1
oc delete -f ibmoperatorcatalog.yaml
```
### Uninstall OpenLDAP

```bash
oc project openldap
oc delete -f openldap_deploy.yaml
```
### Uninstall OCS

Delete the StorageCluster:
```bash
oc delete StorageCluster/ocs-storagecluster -n openshift-storage
```
Uninstall the OCS Operator:
```bash
oc delete -f deployocs.yaml
```
### Uninstall SDS

Remove the SDS security group access to the OCP nodes.
### Destroy OCP Cluster

```bash
cd /Users/mkhilnan/projects/aws/
export AWS_PROFILE=awscto_ocpadmin
export KUBECONFIG=/Users/mkhilnan/projects/aws/installfiles/auth/kubeconfig
./openshift-install destroy cluster --dir=/Users/mkhilnan/projects/aws/installfiles --log-level=info
```
### Delete CloudFormation stack

```bash
aws cloudformation delete-stack --stack-name issfocpdemo
```
Once you delete the CloudFormation stack, all the AWS components it created are deleted.
Access <a href="https://github.ibm.com/mkhilnan/ocp4.6_aws_customvpc">Github</a> for install files.
#AmazonAWScloud #CloudPakforBusinessAutomation