Hi Venkat,
Today it's entirely possible to look at the pipeline you have for any other platform and pick the same tools for the mainframe too.
I've been doing some tests using GitHub Actions and Travis CI, and both worked great, so you can take the orchestrator you are used to and bring it to Z, keeping the same server for your Git. If, for security reasons, code from Z cannot leave Z, you can also run GitLab on Linux on Z.
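As a rough illustration of what that orchestration can look like, here is a minimal GitHub Actions workflow sketch that drives z/OS through the Zowe CLI. The dataset names, the HLQ `MYHLQ`, and the `ZOSMF_*` secrets are all assumptions for the example, not anything standard:

```yaml
# Hypothetical workflow sketch; dataset names and secrets are placeholders.
name: mainframe-build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Zowe CLI
        run: npm install -g @zowe/cli
      - name: Upload source and submit the compile job
        env:
          ZOWE_OPT_HOST: ${{ secrets.ZOSMF_HOST }}
          ZOWE_OPT_USER: ${{ secrets.ZOSMF_USER }}
          ZOWE_OPT_PASSWORD: ${{ secrets.ZOSMF_PASSWORD }}
        run: |
          zowe zos-files upload file-to-data-set "src/hello.cbl" "MYHLQ.SOURCE(HELLO)"
          zowe zos-jobs submit data-set "MYHLQ.JCL(COMPILE)" --wait-for-output
```

The same two `zowe` steps would drop into Travis CI or GitLab CI almost unchanged, which is what makes the orchestrator interchangeable.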
Once you have mapped all the steps you do manually, you can use either Ansible or Zowe as the bridge to Z: managing datasets, running jobs and scripts on Z, and interacting with CICS, IMS, and Db2.
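For the Ansible route, a small playbook sketch using the IBM z/OS core collection could look like the one below. The host group, HLQ, and JCL member are placeholders I made up, and the exact module options can vary by collection version, so treat this as a starting point rather than a working deployment:

```yaml
# Sketch only: host group, dataset names, and JCL member are assumptions.
- name: Deploy a batch component to z/OS
  hosts: zos_systems
  collections:
    - ibm.ibm_zos_core
  tasks:
    - name: Ensure the load library exists
      zos_data_set:
        name: MYHLQ.LOADLIB
        type: pds
        state: present

    - name: Submit the deploy job and wait for it to finish
      zos_job_submit:
        src: MYHLQ.JCL(DEPLOY)
        location: data_set
        wait_time_s: 60
```

The same collection has modules for copying datasets, running operator commands, and executing TSO commands, so most of the manual steps you mapped can become tasks like these.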
------------------------------
William Pereira
------------------------------
Original Message:
Sent: Fri February 05, 2021 01:31 PM
From: Venkat Avinash Kavirayani
Subject: Dependencies in Mainframe Deployments
Normal deployments in Mainframe:
The projects we usually build in a mainframe environment include components like the ones below. If it's an IMS DB/DC or CICS application, the number of components involved is much higher: PSBs/DBDs, SQL procedures, and so on.
- COBOL programs
- Copybooks
- Procedures
- Job cards
- Control cards
- REXX/EZT or other language codes
Now, to ensure a successful deployment, we need to make sure that all the components are correctly maintained and deployments are done in the correct order. That is a different topic altogether. Usual mainframe project teams have people who coordinate this, to ensure load modules are promoted (batch/online), libraries are refreshed, and the updated code is available successfully... DBDGENs, ACBGENs, WLM refreshes, and so on.
SCM tools like Endevor do let us pull the code for a project under a single CCID/project code, but we would still have to manually build the dependencies for how the code has to be promoted in deployment tools like UCD, and these are not exactly standard deployment models.
All these activities take careful planning, and deployment steps are usually not the same across regions like PROD/DEV/NFT, all of which increases the complexity and the number of points of failure. If that were not enough, we also have coordination across multiple teams, to name a few:
- Working with storage and infrastructure teams to ensure we have enough space for new files, create new GDG bases, identify increases in MIPS, etc.
- Performance analysis
- RACF setup and new transaction/DB setup... engaging with DBAs and system programmers
These are just some of the items a typical mainframe application developer has to go through to get his/her code across multiple regions and tested. As you have noted by now, the number of unknowns is huge, and this increases the complexity of getting code from DEV to PROD.
Modernization?
With Zowe coming up and software like Ansible now available in the z/OS environment... can we try to have the dependencies maintained for mainframe code as well? If we take NodeJS and other code builds as an example, we maintain package.json files where the dependencies are declared, and the deployment activities become so much simpler with just one command (like npm install).
If we had a similar structure/concept available for mainframe deployments, along with the order of deployment and the steps to be followed, that would be great. We would also have to take in parameters like environment variables so that system parameters (like DB2 SSID, IMS regions, etc.) are picked up automatically.
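To make the idea concrete, a package.json-style manifest for a mainframe application might look something like the sketch below. This format is purely hypothetical (nothing like it is standard today); the component names, types, and environment keys are invented for illustration:

```yaml
# Hypothetical "deploy.yaml" manifest; every name here is made up.
application: CLAIMS
components:
  - name: CLMB100          # COBOL batch program
    type: cobol
    copybooks: [CLMCPY01, CLMCPY02]
  - name: CLMDBD1          # IMS database definition
    type: dbd
    post_steps: [DBDGEN, ACBGEN]
  - name: CLMT100          # CICS program
    type: cics-program
    deploy_after: [CLMB100]
    post_steps: [CICS_NEWCOPY]
environments:
  DEV:
    db2_ssid: DSND
    ims_region: IMSD
  PROD:
    db2_ssid: DSNP
    ims_region: IMSP
```

A deployment tool could then order the promotion from the `deploy_after` dependencies (the way npm resolves its dependency graph) and substitute the per-environment values at deploy time, so the same manifest works in DEV and PROD.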
Is this doable? Can we get mainframe deployments simplified and standardized across the industry?
Thanks,
Avinash,
avi.vzm05@gmail.com
------------------------------
Venkat Avinash Kavirayani
------------------------------