Infrastructure needed for installing Red Hat OpenShift

By Gerald Hosch posted Tue August 16, 2022 09:14 AM

  

Written by Holger Wolf

Purpose

Installing a Red Hat OpenShift environment requires some supporting infrastructure to be in place.

This blog describes the required steps and how the infrastructure components have to be configured.

The required infrastructural components are:

  • DNS
  • HA Proxy
  • NFS server
  • DHCP (required for some scenarios)

Overview on the basic steps:

  1. Install a Red Hat Enterprise Linux (RHEL) 8 host that is placed in the same network as the Red Hat OpenShift cluster and serves as the external interface.
  2. Have access to a RHEL repository (needed for installation of additional packages)
  3. Install and configure: DNS, HA Proxy, NFS server, DHCP

Note: Steps 1 and 2 are not covered in this blog.

The bastion serves as host for the infrastructure components and is meant only for testing purposes. The bastion has three purposes:

  1. Supporting name resolution for the Red Hat OpenShift external network. This is required for the installation and for accessing the applications later. Optional IP address assignment via DHCP is also part of this responsibility.
  2. Working as proxy and entry point via HA Proxy to access the applications and UI, and holding the Red Hat OpenShift CLI to install and administer the environment.
  3. Providing shared storage via NFS.

All these services can run on a single host for the purpose of a POC or test/dev environment; if required, you may also host them on different hosts. Even for the CLI you might want to perform operations from your laptop, as binaries are downloadable for Linux, Windows, and Mac. But for the sake of simplicity, in this example we keep all services on a single RHEL 8 guest.

Host Setup

When installing the services, it is important to adjust the ‘firewall’ and ‘SELinux’ settings so that outside access to the appropriate service is possible.

In this example we simply disable the ‘firewall’ and ‘SELinux’.
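
For a RHEL 8 test host, this could look as follows (a sketch; do not disable these protections on production systems, open the required ports and configure SELinux appropriately there instead):

# systemctl disable --now firewalld
# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

‘setenforce 0’ switches SELinux to permissive mode for the running system, and the edit to ‘/etc/selinux/config’ keeps that setting across reboots.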

Network Setup

The following picture shows the basic setup used and should be helpful in understanding the configuration files provided here.

There are two networks: an internal network, to which the Red Hat OpenShift nodes are connected and on which the overlay software-defined networking (SDN) is placed, and an external network, from which the Red Hat OpenShift users connect to the cluster.

In this example all systems are placed on a KVM Host but could also be placed on an IBM z/VM®. The topology is shown in the picture below.

DNS

You have to consider all the systems used in the environment, as shown in the picture above. The required DNS entries are described in the ‘Installing a cluster with z/VM’ and ‘Installing a cluster with RHEL KVM’ sections under ‘Networking requirements for user-provisioned infrastructure’ in the Red Hat OpenShift documentation.

Use the following command to install the DNS server:

   dnf install bind

Then, you need to adjust the relevant files to include the appropriate systems in the name service:

       1. adjust ‘/etc/named.conf’
       2. add configuration files for your Red Hat OpenShift zone to ‘/var/named’
     vim /etc/named.conf

The ‘named.conf’ needs to be enhanced to add the zone configuration for the DNS entries, and to forward DNS requests to the appropriate DNS server that covers all other requests.

    1 //
    2 // named.conf
    3 //
    4 // Provided by Red Hat bind package to configure the ISC BIND named(8)DNS
    5 // server as a caching only nameserver (as a localhost DNS resolver only).
    6 //
    7 // See /usr/share/doc/bind*/sample/ for example named configuration files.
    8 //
    9
   10 options {
   11         listen-on port 53 { 127.0.0.1; 192.168.79.2; };
   12         listen-on-v6 port 53 { ::1; };
   13         directory       "/var/named";
   14         dump-file       "/var/named/data/cache_dump.db";
   15         statistics-file "/var/named/data/named_stats.txt";
   16 - 34    .....
   35
   36         forward first;
   37         forwarders {
   38                 192.168.122.1;
   39         };
   40
   41         response-policy { zone "m13lp46ocp.lnxne.boe"; };
   42 - 62    ...
   63 include "/etc/named.rfc1912.zones";
   64 include "/etc/named.root.key";
   65
   66 zone "m13lp46ocp.lnxne.boe" {
   67         type master;
   68         file "m13lp46ocp.lnxne.boe.zone";
   69         allow-query { any; };
   70         allow-transfer { none; };
   71         allow-update { none; };
   72 };
   73
   74 zone "79.168.192.in-addr.arpa" {
   75         type master;
   76         file "79.168.192.in-addr.arpa.zone";
   77         allow-query { any; };
   78         allow-transfer { none; };
   79         allow-update { none; };
   80 }; 

Lines 35-41 were added to forward the requests.

Lines 66-80 handle the DNS entries for the domain, in our case ‘m13lp46ocp.lnxne.boe’.

The ‘/var/named’ directory has the following files:
   ls
   79.168.192.in-addr.arpa.zone  data  dynamic  history.out  m13lp46ocp.lnxne.boe.zone  named.ca  named.empty
   named.localhost  named.loopback  slaves

The two zone files referenced in ‘/etc/named.conf’, ‘m13lp46ocp.lnxne.boe.zone’ and ‘79.168.192.in-addr.arpa.zone’, must be created and filled out.

Here is the content of the file ‘m13lp46ocp.lnxne.boe.zone’:
   $TTL 900
   @                     IN SOA bastion.m13lp46ocp.lnxne.boe. hostmaster.m13lp46ocp.lnxne.boe. ( 2019062002 1D 1H 1W 3H )
                         IN NS bastion.m13lp46ocp.lnxne.boe. 
   bastion2              IN A 192.168.79.2
   bastion               IN A 192.168.79.3
   api                   IN A 192.168.79.3
   api-int               IN A 192.168.79.3
   apps                  IN A 192.168.79.3
   *.apps                IN A 192.168.79.3 
   worker2               IN A 192.168.79.20
   bootstrap             IN A 192.168.79.20 
   master0               IN A 192.168.79.21
   master1               IN A 192.168.79.22
   master2               IN A 192.168.79.23 
   worker0               IN A 192.168.79.24
   worker1               IN A 192.168.79.25 
   etcd-0                IN A 192.168.79.21
   etcd-1                IN A 192.168.79.22
   etcd-2                IN A 192.168.79.23 
   _etcd-server-ssl._tcp IN SRV 0 10 2380 etcd-0.m13lp46ocp.lnxne.boe.
                         IN SRV 0 10 2380 etcd-1.m13lp46ocp.lnxne.boe.
                         IN SRV 0 10 2380 etcd-2.m13lp46ocp.lnxne.boe. 

Next, the content of the reverse zone file ‘79.168.192.in-addr.arpa.zone’:
   $TTL 900
   @ IN SOA bastion.m13lp46ocp.lnxne.boe. hostmaster.m13lp46ocp.lnxne.boe. ( 2019062001 1D 1H 1W 3H  ) 
   IN NS bastion.m13lp46ocp.lnxne.boe.

  21 IN PTR master0.m13lp46ocp.lnxne.boe.
  22 IN PTR master1.m13lp46ocp.lnxne.boe.
  23 IN PTR master2.m13lp46ocp.lnxne.boe.
  24 IN PTR worker0.m13lp46ocp.lnxne.boe.
  25 IN PTR worker1.m13lp46ocp.lnxne.boe.
  20 IN PTR worker2.m13lp46ocp.lnxne.boe.

  20 IN PTR bootstrap.m13lp46ocp.lnxne.boe.
  3  IN PTR api.m13lp46ocp.lnxne.boe.
  3  IN PTR api-int.m13lp46ocp.lnxne.boe.
  2  IN PTR bastion2.m13lp46ocp.lnxne.boe.

After all files are edited and placed, you can validate the configuration and then enable and start the service.
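
For the validation, the standard BIND tools can be used (a sketch; adjust the file names to match your zones):

   # named-checkconf /etc/named.conf
   # named-checkzone m13lp46ocp.lnxne.boe /var/named/m13lp46ocp.lnxne.boe.zone
   # named-checkzone 79.168.192.in-addr.arpa /var/named/79.168.192.in-addr.arpa.zone

Once the checks pass, enable and start the service: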

   # systemctl enable named
   # systemctl start named
   # systemctl status named
   ● named.service - Berkeley Internet Name Domain (DNS)
      Loaded: loaded (/usr/lib/systemd/system/named.service; enabled; vendor   preset: disabled)
      Active: active (running) since Fri 2021-06-04 03:22:41 EDT; 8h ago
    Main PID: 7754 (named)
       Tasks: 9 (limit: 308070)
      Memory: 68.9M  
      CGroup: /system.slice/named.service
              └─7754 /usr/sbin/named -u named -c /etc/named.conf
   Jun 04 11:24:24 bastion2.localdomain named[7754]: missing expected cookie from 192.168.122.1#53
   Jun 04 11:24:24 bastion2.localdomain named[7754]: missing expected cookie from 192.168.122.1#53
   Jun 04 11:28:19 bastion2.localdomain named[7754]: missing expected cookie from 192.168.122.1#53
   Jun 04 11:28:19 bastion2.localdomain named[7754]: missing expected cookie from 192.168.122.1#53
   Jun 04 11:28:19 bastion2.localdomain named[7754]: missing expected cookie from 192.168.122.1#53
   Jun 04 11:28:19 bastion2.localdomain named[7754]: missing expected cookie from 192.168.122.1#53
   Jun 04 11:32:49 bastion2.localdomain named[7754]: missing expected cookie from 192.168.122.1#53
   Jun 04 11:32:49 bastion2.localdomain named[7754]: missing expected cookie from 192.168.122.1#53
   Jun 04 11:32:49 bastion2.localdomain named[7754]: missing expected cookie from 192.168.122.1#53
   Jun 04 11:32:49 bastion2.localdomain named[7754]: missing expected cookie from 192.168.122.1#53

Add the ‘nameserver’ entry to the bastion’s ‘/etc/resolv.conf’ to use it as the first instance:
   # cat /etc/resolv.conf
   # Generated by NetworkManager
   search localdomain
   nameserver 192.168.79.2
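
To verify that name resolution works, you can query the new server directly, for example with ‘dig’ (a sketch; ‘dig’ is provided by the ‘bind-utils’ package):

   # dig +short api.m13lp46ocp.lnxne.boe
   # dig +short -x 192.168.79.21

The first query should return 192.168.79.3 and the second master0.m13lp46ocp.lnxne.boe., as defined in the zone files above.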

HA Proxy

Installing HA Proxy is straightforward:

   # dnf install haproxy
   # vim /etc/haproxy/haproxy.cfg

Add the appropriate services required by Red Hat OpenShift to the file, with the backends pointing to the domain names configured in the DNS step. The OpenShift-specific entries are the ‘listen’ sections at the end of the configuration below.

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   https://www.haproxy.org/download/1.8/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global   
   # to have these messages end up in /var/log/haproxy.log you will   
   # need to:
   #
   # 1) configure syslog to accept network log events.  This is done
   #    by adding the '-r' option to the SYSLOGD_OPTIONS in   
   #    /etc/sysconfig/syslog   
   #   
   # 2) configure local2 events to go to the /var/log/haproxy.log
   #   file. A line like the following can be added to
   #   /etc/sysconfig/syslog
   #
   #    local2.*                       /var/log/haproxy.log
   #
   log         127.0.0.1 local2
   chroot      /var/lib/haproxy
   pidfile     /var/run/haproxy.pid
   maxconn     4000
   user        haproxy
   group       haproxy
   daemon

   # turn on stats unix socket
   stats socket /var/lib/haproxy/stats

   # utilize system-wide crypto-policies
   ssl-default-bind-ciphers PROFILE=SYSTEM
   ssl-default-server-ciphers PROFILE=SYSTEM

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main
    bind *:5000
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    use_backend static          if url_static
    default_backend             app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance     roundrobin
    server  app1 127.0.0.1:5001 check
    server  app2 127.0.0.1:5002 check
    server  app3 127.0.0.1:5003 check
    server  app4 127.0.0.1:5004 check

listen ingress-http
    bind *:80
    mode tcp
    server worker0 worker0.m13lp46ocp.lnxne.boe:80 check
    server worker1 worker1.m13lp46ocp.lnxne.boe:80 check

listen ingress-https
    bind *:443
    mode tcp
    server worker0 worker0.m13lp46ocp.lnxne.boe:443 check
    server worker1 worker1.m13lp46ocp.lnxne.boe:443 check

listen api
    bind *:6443
    mode tcp
    server bootstrap bootstrap.m13lp46ocp.lnxne.boe:6443 check
    server master0 master0.m13lp46ocp.lnxne.boe:6443 check
    server master1 master1.m13lp46ocp.lnxne.boe:6443 check
    server master2 master2.m13lp46ocp.lnxne.boe:6443 check

listen api-int
    bind *:22623
    mode tcp
    server bootstrap bootstrap.m13lp46ocp.lnxne.boe:22623 check
    server master0 master0.m13lp46ocp.lnxne.boe:22623 check
    server master1 master1.m13lp46ocp.lnxne.boe:22623 check
    server master2 master2.m13lp46ocp.lnxne.boe:22623 check
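
Before enabling the service, the configuration can be checked for syntax errors with HAProxy’s built-in check mode (a sketch):

# haproxy -c -f /etc/haproxy/haproxy.cfg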

Enable the service and start it:

# systemctl enable haproxy
# systemctl start haproxy
# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2021-05-29 12:46:31 EDT; 5 days ago
Main PID: 5719 (haproxy)
    Tasks: 2 (limit: 308070)
   Memory: 5.1M
   CGroup: /system.slice/haproxy.service
           ├─5719 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           └─5721 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid

May 29 12:46:31 bastion.localdomain systemd[1]: Starting HAProxy Load Balancer...
May 29 12:46:31 bastion.localdomain haproxy[5719]: [WARNING] 148/124631 (5719) : parsing [/etc/haproxy/haproxy.cfg:49] : 'option httplog' not usable with proxy 'ingress->
May 29 12:46:31 bastion.localdomain haproxy[5719]: [WARNING] 148/124631 (5719) : config : 'option forwardfor' ignored for proxy 'ingress-http' as it requires HTTP mode.
May 29 12:46:31 bastion.localdomain haproxy[5719]: [WARNING] 148/124631 (5719) : parsing [/etc/haproxy/haproxy.cfg:49] : 'option httplog' not usable with proxy 'ingress->
May 29 12:46:31 bastion.localdomain haproxy[5719]: [WARNING] 148/124631 (5719) : config : 'option forwardfor' ignored for proxy 'ingress-https' as it requires HTTP mode.
May 29 12:46:31 bastion.localdomain haproxy[5719]: [WARNING] 148/124631 (5719) : parsing [/etc/haproxy/haproxy.cfg:49] : 'option httplog' not usable with proxy 'api' (ne>
May 29 12:46:31 bastion.localdomain haproxy[5719]: [WARNING] 148/124631 (5719) : config : 'option forwardfor' ignored for proxy 'api' as it requires HTTP mode.
May 29 12:46:31 bastion.localdomain haproxy[5719]: [WARNING] 148/124631 (5719) : parsing [/etc/haproxy/haproxy.cfg:49] : 'option httplog' not usable with proxy 'api-int'>
May 29 12:46:31 bastion.localdomain haproxy[5719]: [WARNING] 148/124631 (5719) : config : 'option forwardfor' ignored for proxy 'api-int' as it requires HTTP mode.
May 29 12:46:31 bastion.localdomain systemd[1]: Started HAProxy Load Balancer.
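
After the start, you can verify that HAProxy is listening on the expected ports, for example:

# ss -tlnp | grep haproxy

The warnings about ‘option httplog’ in the status output come from the TCP-mode ‘listen’ sections and can be ignored in this setup.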

NFS

For an NFS server, which might be placed on the bastion, you need to attach storage resources backed by the environment.
Therefore, first attach a disk to the NFS server that is large enough. In our example, an additional disk is attached to the KVM guest:

# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs                24G     0   24G   0% /dev
tmpfs                   24G   84K   24G   1% /dev/shm
tmpfs                   24G  265M   24G   2% /run
tmpfs                   24G     0   24G   0% /sys/fs/cgroup
/dev/mapper/rhel-root   35G  4.2G   31G  12% /
/dev/vdb1             1014M  197M  818M  20% /boot
/dev/vda1              120G  1.2G  119G   1% /home/nfs/image


For the NFS server, the required package needs to be installed:

# dnf install nfs-utils

Create a directory to export and add it to ‘/etc/exports’:

# mkdir /home/nfs/PV1
# cat /etc/exports
/home/nfs/PV1     192.168.79.0/24(rw,no_root_squash)
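
If the NFS server service is not yet running, enable and start it (a sketch; the status output below assumes the service is already enabled):

# systemctl enable --now nfs-server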

# systemctl status nfs-server
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Tue 2021-06-01 05:42:36 EDT; 3 weeks 6 days ago 
 Main PID: 1425 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 308070)
   Memory: 0B
   CGroup: /system.slice/nfs-server.service

# systemctl restart nfs-server

# exportfs -rav
exporting 192.168.79.0/24:/home/nfs/PV1
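
The export can also be verified on the NFS server itself with ‘showmount’ (from the ‘nfs-utils’ package), which should list ‘/home/nfs/PV1’ for the 192.168.79.0/24 network:

# showmount -e localhost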

DHCP

A DHCP server is required if you want to assign IP addresses to the compute nodes dynamically. This is required for the KVM fastpath installation based on QCOW2 images, or if you want more freedom in assigning IPs to the compute nodes. The MAC address is set during boot of the KVM guest; therefore, the MAC address as defined in ‘dhcpd.conf’ must be used for the installation in a later step (not covered here).

# dnf install dhcp-server

Create or edit ‘/etc/dhcp/dhcpd.conf’:

#
# DHCP Server Configuration file.
#   see /usr/share/doc/dhcp-server/dhcpd.conf.example
#   see dhcpd.conf(5) man page
#
#

option domain-name "m13lp46ocp.lnxne.boe";
authoritative;
subnet 192.168.79.0 netmask 255.255.255.0 {
range 192.168.79.20 192.168.79.130;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.79.255;
option routers 192.168.79.1;
option domain-name-servers 192.168.79.2;
default-lease-time 600;
max-lease-time 7200;
  host bootstrap {
    hardware ethernet 52:54:00:b9:d2:a5;
    fixed-address 192.168.79.20;
  }
  host master0 {
    hardware ethernet 52:54:00:a4:c8:77;
    fixed-address 192.168.79.21;
  }
  host master1 {
    hardware ethernet 52:54:00:2d:12:e2;
    fixed-address 192.168.79.22;
  }
  host master2 {
    hardware ethernet 52:54:00:dc:de:ad;
    fixed-address 192.168.79.23;
  }
  host worker0 {
    hardware ethernet 52:54:00:cd:01:45;
    fixed-address 192.168.79.24;
  }
  host worker1 {
    hardware ethernet 52:54:00:df:12:56;
    fixed-address 192.168.79.25;
  }
}
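
Before enabling the service, the configuration syntax can be tested with dhcpd’s test mode (a sketch):

# dhcpd -t -cf /etc/dhcp/dhcpd.conf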

Enable the service and check that it is working correctly:

# systemctl enable --now dhcpd

# systemctl status dhcpd
● dhcpd.service - DHCPv4 Server Daemon
   Loaded: loaded (/etc/systemd/system/dhcpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-06-01 05:44:06 EDT; 3 weeks 6 days ago
     Docs: man:dhcpd(8)
           man:dhcpd.conf(5)
 Main PID: 1734 (dhcpd)
   Status: "Dispatching packets..."
    Tasks: 1 (limit: 308070)
   Memory: 8.9M
   CGroup: /system.slice/dhcpd.service
           └─1734 /usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid enc5

Jun 28 12:00:57 bastion2.localdomain dhcpd[1734]: Dynamic and static leases present for 192.168.79.25.
Jun 28 12:00:57 bastion2.localdomain dhcpd[1734]: Remove host declaration worker1 or remove 192.168.79.25
Jun 28 12:00:57 bastion2.localdomain dhcpd[1734]: from the dynamic address pool for 192.168.79.0/24
Jun 28 12:00:57 bastion2.localdomain dhcpd[1734]: DHCPREQUEST for 192.168.79.25 from 52:54:00:df:12:56 via enc5
Jun 28 12:00:57 bastion2.localdomain dhcpd[1734]: DHCPACK on 192.168.79.25 to 52:54:00:df:12:56 via enc5
Jun 28 12:01:40 bastion2.localdomain dhcpd[1734]: Dynamic and static leases present for 192.168.79.23.
Jun 28 12:01:40 bastion2.localdomain dhcpd[1734]: Remove host declaration master2 or remove 192.168.79.23
Jun 28 12:01:40 bastion2.localdomain dhcpd[1734]: from the dynamic address pool for 192.168.79.0/24
Jun 28 12:01:40 bastion2.localdomain dhcpd[1734]: DHCPREQUEST for 192.168.79.23 from 52:54:00:dc:de:ad via enc5
Jun 28 12:01:40 bastion2.localdomain dhcpd[1734]: DHCPACK on 192.168.79.23 to 52:54:00:dc:de:ad via enc5

In ‘dhcpd.conf’, an address range was set, and some of the hosts were specified based on the MAC addresses of the guests. To ensure correctness, the KVM guests were checked for the correct MAC addresses after they were created.
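
Assuming the guests are defined in libvirt/KVM, the MAC address of a guest’s network interface can be listed, for example, with (‘master0’ here stands for the libvirt domain name used in your environment):

# virsh domiflist master0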

Summary

This post should have helped you understand the required infrastructure components and how they are configured.

BTW, Red Hat OpenShift Container Platform 4.11 is available; see the release notes for Red Hat OpenShift 4.11 on IBM Z and IBM LinuxONE.


