Cloud Pak for Network Automation

Automating network cloud build using the Site planner in IBM's Cloud Pak for Network Automation

By SANIL NAMBIAR posted Wed June 09, 2021 06:42 AM



Automated deployment and management of a turnkey network cloud solution (for vRAN or vCore in 4G and 5G) requires the coordination of several domain management systems, each designed to automate the lifecycle or configuration of one or more of the physical or software entities that make up a network cloud stack. Domain management systems can be provided by the supplier of the components or by third parties to manage horizontal layers of the network cloud stack.

Network Cloud Stack
Figure 1: Network cloud stack

Domain managers are not aware of each other, creating the potential for manual “air gaps” that break a cloud-like zero-touch experience. An orchestration system is therefore needed to play an overarching coordination role: modelling an end-to-end network cloud stack design, coordinating the lifecycle of its individual components, and interacting with the appropriate domain management systems. This coordination system should provide the information each domain management system needs to perform lifecycle tasks, and ensure each task is completed before moving on to the next component lifecycle.
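The coordination role described above can be sketched in a few lines of Python. This is purely illustrative: the class and method names are assumptions made for the sketch, not CP4NA APIs.

```python
# Illustrative sketch of the coordination role described above; the class
# and method names are assumptions for this sketch, not CP4NA APIs.

class DomainManager:
    """Stand-in for one domain management system (fabric manager, EMS, ...)."""

    def __init__(self, name):
        self.name = name

    def run_lifecycle_task(self, entity, task):
        # A real domain manager would be driven over its own API here.
        return "COMPLETED"


def orchestrate(stack, managers):
    """Walk an ordered network cloud stack, handing each component's
    lifecycle task to its domain manager and checking completion
    before moving on to the next component."""
    results = []
    for component in stack:
        mgr = managers[component["domain_manager"]]
        status = mgr.run_lifecycle_task(component["entity"], component["task"])
        if status != "COMPLETED":
            raise RuntimeError(f"{component['entity']}: {component['task']} failed")
        results.append((component["entity"], status))
    return results
```

The key point the sketch captures is the "airgap" fix: the orchestrator blocks on each component's completion status before the next component's lifecycle begins.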

  • Network Cloud Stack Moving Parts

The full stack of the network cloud typically comprises the network cloud hardware and software devices or entities to be lifecycle-managed, along with a sample set of domain managers that could be responsible for, or involved in, executing their lifecycle.

    • Managed entities with lifecycles

Virtual, physical, or logical entities in a network cloud stack that have their own lifecycle, i.e. they can be created, deleted, configured, and reconfigured, are identified as managed entities. Typically, managed entity lifecycles can be programmed with scripting technologies or through a domain manager that can perform lifecycle actions.

network cloud stack components
Figure 2: Network cloud stack components

Managed entities can include the following types of components with individual lifecycles: Switches, SDN Controllers, Storage systems, Servers, Smart NICs/Accelerators, VIM/CISMs etc.

Each hardware or software managed entity requires a tested and certified set of lifecycle automations. In addition, dependencies between managed entities mean that sets of managed entities should be tested and certified as a collective.
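Because dependencies constrain the order in which managed entities can be brought up, the build sequence is essentially a topological sort of the dependency graph. A minimal sketch, with an assumed (purely illustrative) set of dependencies:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Illustrative dependency graph: each managed entity maps to the set of
# entities that must be lifecycled before it (not a real site model).
dependencies = {
    "switches": set(),
    "servers": {"switches"},
    "storage": {"switches"},
    "vim": {"servers", "storage"},
}

# Yields an order in which every entity appears after its dependencies.
build_order = list(TopologicalSorter(dependencies).static_order())
print(build_order)  # switches first, vim last
```

An orchestrator can derive such an order from the modelled stack and then drive each entity's lifecycle through its domain manager in that sequence.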

  • Landscape of Domain Managers

Domain managers can be provided by managed entity vendors, by an external party, or custom-built by a system integrator. Examples of categories of domain managers are described in the table below.

| Domain Manager | Description | Entities managed |
| --- | --- | --- |
| Fabric managers | Management of configuration across sets of switches | Switches |
| Infrastructure managers | Management of all aspects of a set of compute servers | Servers, firmware, IPAM |
| Element managers | Systems that provide a management interface to a set of software-based network functions | Network functions |
| Automation scripts | General-purpose scripting capability that can be used to create any of the above point automations and/or fill in automation gaps | Any of the above |
| Cluster managers | Manage the lifecycle of a VIM on NFVI (e.g. servers, storage, and networking) | OpenStack, Kubernetes |
| Orchestrators | Manage the lifecycle of network services and the CNFs that realize them | Software and physical network functions |

Table 1: Domain managers and the entities they manage

As discussed earlier, the primary goal of this solution is an automated, preferably zero-touch, full network cloud stack automation system: a set of managed entities whose lifecycles can be sequenced and coordinated around a modelled network cloud stack.

From an orchestrator’s point of view, managed entity lifecycles must be abstracted from any domain managers that may be involved in performing lifecycle automation tasks, allowing orchestration systems to focus on lifecycle sequencing logic without worrying about how lifecycle tasks are implemented.

The modelled network cloud stack consists of physical hardware and physical network functions (PNFs), such as bare-metal servers, and cloud operating systems such as OpenShift or OpenStack, among others. The managed entities in the model have direct and indirect dependencies on each other in the context of the overall network stack.

A key challenge is to simplify the lifecycle management of complex and large deployments through a standard managed entity lifecycle interface between an overarching zero-touch orchestration system and the various domain managers involved in executing lifecycle tasks for a managed entity type.
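One way to picture such a standard lifecycle interface is as an abstract contract that every domain manager adapter implements, so the orchestrator's sequencing logic never sees vendor specifics. The names below are hypothetical illustrations, not the actual CP4NA interface:

```python
from abc import ABC, abstractmethod

class ManagedEntityLifecycle(ABC):
    """Hypothetical standard lifecycle contract between an orchestrator
    and whichever domain manager executes the tasks."""

    @abstractmethod
    def install(self) -> str: ...

    @abstractmethod
    def configure(self, properties: dict) -> str: ...

    @abstractmethod
    def uninstall(self) -> str: ...


class FabricManagerAdapter(ManagedEntityLifecycle):
    """Adapter mapping the abstract lifecycle onto one (imagined)
    domain manager's own API."""

    def install(self) -> str:
        return "COMPLETED"  # would push switch base config via the fabric manager

    def configure(self, properties: dict) -> str:
        return "COMPLETED"  # would apply VLANs, uplinks, etc.

    def uninstall(self) -> str:
        return "COMPLETED"
```

The orchestrator then sequences calls against `ManagedEntityLifecycle` only; swapping one domain manager for another means swapping the adapter, not the sequencing logic.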

Full stack cloud build automation

One of the areas addressed by Cloud Pak for Network Automation in this release is full stack cloud build automation.

Cloud Pak for Network Automation introduces the Site Planner. The Site Planner is a data centre information model (DCIM) with an OpenStack model overlaid on top. The Site Planner also supports an OpenShift cloud model.

The Site Planner helps the cloud planner in a telco or enterprise to plan the low-level details of a site, preparing Cloud Pak for Network Automation’s automation engine to build a cloud of any size or shape.

  1. Cloud planning phase: results in the bill of quantity (BoQ) and low-level design (LLD) of the cloud. This is outside the scope of the Site Planner and is typically done manually by the cloud planning persona in the telco. The process of cloud planning and LLD generation can also be automated, and is a future roadmap candidate for the Cloud Pak.
  2. Site Planner scope: the LLD is entered into the Site Planner via its APIs or manually, using the OpenStack templates provided in the Site Planner.
  3. Cloud build automation: the Site Planner’s automation context is used to associate resources and assembly descriptors (site service models) in the Cloud Pak that represent this cloud site.
  4. Deployed site: the Cloud Pak then uses the relevant southbound integrations from its resource manager driver framework to automatically build the full cloud site with all its managed entities.

    Automation candidates in the cloud build process
    Figure 3: Automation candidates in the cloud build process

To summarise, the aim of the Site Planner is to provide the following functionality:

  1. Capture the low-level design of the cloud stack, which includes (but is not limited to):
    1. Site
    2. Rack
    3. IPMI
    4. BIOS settings
    5. Management software
    6. Firmware
    7. Compute Devices, RAID Controllers
    8. Network devices
    9. Clusters
      1. Undercloud
      2. Director VM, Repo VM
    10. Overcloud
      1. Controllers, computes
    11. Ceph
  2. Act as a planned site inventory for the cloud being built.
  3. Act as a cloud site model with all the required configuration (BIOS, firmware, server, RAID, Ceph, site properties, etc.).
  4. Act as the key trigger into the Cloud Pak’s intent engine to automate the cloud build.
  5. Act as a golden configuration repository for the cloud site, so that cloud planners can reconcile the deployed cloud site configuration.
  6. Assist with cloud scale-out and expansion.
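The LLD items listed in point 1 could be captured as structured data along the following lines (the keys are illustrative assumptions, not the Site Planner's real schema):

```python
# Illustrative structured capture of an LLD; keys are assumptions made
# for this sketch, not the Site Planner's actual data model.
lld = {
    "site": "edge-site-01",
    "racks": [
        {
            "name": "rack-1",
            "devices": [
                {
                    "name": "compute-01",
                    "ipmi_address": "10.0.0.11",
                    "bios": {"boot_mode": "UEFI"},
                    "firmware": "2.10.2",
                    "raid": {"level": 1, "disks": 2},
                }
            ],
        }
    ],
    "clusters": {
        "undercloud": {"vms": ["director", "repo"]},
        "overcloud": {"controllers": 3, "computes": 3},
        "ceph": {"osd_nodes": 3},
    },
}

# e.g. count the planned compute devices across all racks
device_count = sum(len(rack["devices"]) for rack in lld["racks"])
```

Holding the LLD in a structured form like this is what lets it serve as both a planned inventory and a golden configuration to reconcile against.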

Site Planner's Data center information model 

Let’s take a look at the details of the Site Planner. An important aspect of the Site Planner is the concept of the “planned site”: the manifestation of the BoQ and the (manually designed) LLD for the planned network cloud site, ported into a data centre information model (DCIM).

1. The overall site dashboard looks like this:

Site Planner dashboard for a planned cloud site
Figure 4: Site Planner dashboard for a planned cloud site

2. By clicking on the site, you can see that two sites are planned. A Region → Site → Data centre → Floor hierarchy can be maintained.

Site Planner dashboard for planned cloud sites in a region
Figure 5: Site Planner dashboard for planned cloud sites in a region

3. You can add site details at a glance, such as the number of racks, computes, VLANs, VMs (VNFs on this cloud), and IP prefixes.

Site Planner dashboard for a specific planned cloud site
Figure 6: Site Planner dashboard for a specific planned cloud site

4. By clicking on a rack, you are provided with full details of the racking and stacking aspects. Additional details, such as the weight of the rack and power details, can be captured.

Site Planner dashboard planned rack configurations in a site
Figure 7: Site Planner dashboard, planned rack configurations in a site

5. By clicking on a compute on the rack, you are provided with details about that specific compute node, including the networking aspects.

Site Planner dashboard a specific compute node on a rack
Figure 8: Site Planner dashboard, a specific compute node on a rack

Site Planner dashboard a specific compute node on a rack with NIC details
Figure 9: Site Planner dashboard, a specific compute node on a rack with NIC details

6. You can view cloud-specific VLAN details:

Site Planner dashboard showing VLAN details
Figure 10: Site Planner dashboard showing VLAN details for Red Hat OpenStack

7. In the case of zero-touch provisioning (ZTP) of Open vRAN elements such as the eNB, the Site Planner is used as the initial entry point for the radio site operator to enter the new site details and the eNB details, including the radio EMS details and, for example, the eNB configuration of the vCU. The Site Planner then triggers zero-touch provisioning of the site, integrating with the IRP manager service of CP4NA, which creates the self-configuration profile in the eNB vendor’s EMS.
The eNB as a managed entity is captured in the Site Planner.

eNB planned entity
Figure 11: eNB planned entity 

8. The details of the eNB (vCU, vDU) and the automation status are provided in the Site Planner after the eNB is deployed by CP4NA as part of the ZTP process.

vCU, vDU details entered in Site Planner by site operator
Figure 12: vCU, vDU details entered in Site planner by site operator

Automation status
Figure 13: Automation status of the vCU and vDU after deployment via CP4NA

Benefits of Site Planner 

  • Reduce the time to build telco clouds from weeks to days. (In trials, we have seen a 60% decrease in the time to build an RHOSP 16.1 cloud comprising 3 controllers, undercloud VMs, Ceph clusters, and 3 overcloud computes, with BIOS, RAID, firmware, and bare-metal provisioning, etc.)
  • Automated post-cloud build validation and testing 
  • Zero touch provisioning of open VRAN sites (in 5G and 4G) 
  • Planned site inventory with Golden configuration of the planned cloud site for Core, Edge, Far edge sites

    Benefits of cloud build automation using IBM CP4NA
    Figure 14: Benefits of cloud build automation using CP4NA