After I posted my reply, I saw your post about Kubernetes. Do you happen to be using MSR containers? With the Microservices Runtime (MSR), there's a newer, simpler way of handling environment-specific configuration via an application.properties file. I haven't had a whole lot of hands-on experience with it yet, but it works relatively well.
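To give you a feel for it, here's a rough sketch of what such a file can look like. The $env{} tokens get resolved from container environment variables at startup; treat the specific keys below as illustrative and check the Microservices Runtime Guide for the exact key formats:

# application.properties sketch -- keys shown are illustrative examples
# $env{VAR} is replaced with the value of the VAR environment variable
globalvariable.DB_HOST.value=$env{DB_HOST}
settings.watt.server.smtpServer=$env{SMTP_HOST}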
Yep, I completely agree. In fact, when introducing new webMethods customers to CI/CD, I also break it up into those two phases: pre-commit and post-commit. I sometimes find myself using EGit (the more commonly used Eclipse plugin for Git) when doing webMethods local dev, but it's not required and I'm not a huge fan; I tend to use TortoiseGit for most of my Git interactions.
The most convenient method for compiling and/or frag'ing many Java services at once is the jcode utility; reloading the package won't do it. There's a section in the Services Development Guide called "Using the jcode utility" that you will find useful, but in a nutshell, these are the commands:
# Compile all Java services in a package
jcode.[bat|sh] makeall <package>
# Create fragment files for all Java services in a package
jcode.[bat|sh] fragall <package>
You could quite easily create a simple bat or shell script that takes care of checking out a package, creating the symlink, running jcode against it, and then activating/reloading the package as needed; a rough sketch follows below. Believe it or not, this was precisely how we did it in the very first project that exposed me to webMethods local dev and CI/CD, back in 2008.
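Here's what such a script might look like. The paths, repo location, and credentials are placeholders for your own setup, and the reload step assumes the WmPublic service pub.packages:reloadPackage is available in your IS version:

#!/bin/sh
# Hypothetical helper: pull the latest package from Git, link it into the
# IS packages directory, compile/frag its Java services, then reload it.
PKG=$1
IS_HOME=/opt/softwareag/IntegrationServer   # adjust for your install
REPO_DIR="$HOME/repos/my-packages"          # placeholder repo location

git -C "$REPO_DIR" pull
ln -sfn "$REPO_DIR/$PKG" "$IS_HOME/packages/$PKG"
"$IS_HOME/bin/jcode.sh" makeall "$PKG"
"$IS_HOME/bin/jcode.sh" fragall "$PKG"
# Reload the package via the built-in WmPublic service (credentials assumed)
curl -u Administrator:manage \
  "http://localhost:5555/invoke/pub.packages/reloadPackage?packageName=$PKG"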
If you’re creating MSR Docker images to be deployed into Kubernetes, then you will likely not need Deployer at all. During your image build step, you simply add your full packages to your Docker image directly from your version control system. When doing full package deployments like this, dependency checking is not as critical: you just need to ensure that all dependent packages go together, which is typically accomplished by keeping related packages in the same repository. You could certainly organize them in separate repos if it makes sense (e.g. one repo with common utility packages and other repos supporting specific business functions). Once your Docker image is ready, you deploy that same image across environments and leverage application.properties to ensure that environment-specific values are properly set in each one.
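The build step can be as simple as a Dockerfile along these lines. The base image name/tag and paths are assumptions; use whatever MSR image and layout your installation provides:

# Dockerfile sketch -- base image name/tag and paths are assumptions
FROM softwareag/webmethods-microservicesruntime:10.15
# Copy full packages straight from the checked-out repository
COPY packages/ /opt/softwareag/IntegrationServer/packages/
# Bake in the externalized configuration template
COPY application.properties /opt/softwareag/IntegrationServer/application.properties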
If you do end up needing Deployer though, I recently worked on a CI/CD solution where I dockerized the Asset Build Environment and Deployer, which allowed me to execute deployments in a serverless GitLab pipeline. In other words, ABE and Deployer existed nowhere other than in those Docker images, and those containers were brought up and torn down on each run. No need to maintain a running Deployer server. I have more info on that too if you need it.
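For reference, the pipeline looked conceptually like this. Every image name and script entry point below is a placeholder standing in for the dockerized ABE/Deployer; none of it is an official Software AG artifact:

# .gitlab-ci.yml sketch -- all image names and scripts are placeholders
stages:
  - build
  - deploy

build_assets:
  stage: build
  image: registry.example.com/wm-abe:10.5        # dockerized Asset Build Environment
  script:
    - ./build-composites.sh packages/ build/     # run ABE against the repo packages
  artifacts:
    paths:
      - build/

deploy_assets:
  stage: deploy
  image: registry.example.com/wm-deployer:10.5   # dockerized Deployer
  script:
    - ./deploy-composites.sh build/ "$TARGET_ENV"  # drive Deployer against the target environment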
Take care,
Percio
#webMethods #git #Universal-Messaging-Broker #Integration-Server-and-ESB