Out in the field we are seeing more and more interest in using automation. Automation has, of course, been around for some time. One of my first roles in IBM was working with GDPS (a family of services offerings for automating Disaster Recovery and Business Continuity in the mainframe space, including Linux on z), which uses System Automation for z/OS. In the open space there are many options, including Ansible and Ansible Tower from Red Hat, Chef, Puppet, and SaltStack. When cloud automation is the focus, HashiCorp's Terraform also becomes a common point of discussion. What all of these options share is the concept that has become known as "Infrastructure as Code" (IaC): the idea that you are no longer managing a piece of hardware the way we traditionally have. Instead, you interact with everything programmatically through a simple interface.
The implications of this are far-ranging, but the obvious one has not changed: automation is used to make repetitive tasks easier, to take them off the to-do lists of staff, and to make certain that the necessary work still gets done if a critical staff member is unavailable. As Dan Sunday, a mentor of mine going way back in IBM and GDPS, liked to say: "Good automation takes the best skills of your best people and makes them run at the speed of light. Bad automation makes mistakes at the speed of light." That is a nice way of saying that you need to make sure the automation you put together does what you need, properly. If it does, the business becomes less reliant on key staff members for routine tasks (provisioning volumes and mapping them to hosts, for example), and those staff are freed up to work on other projects that otherwise wouldn't get the necessary attention.
Other advantages of using IaC may not be as obvious. For example, we are encountering customers who do not want their users logging in as root for security reasons. Rather, they want IaC to be used for doing tasks, allowing it to elevate privileges as necessary, but within the controls of the automation. Similarly, in the security vein, you can look at IaC as a way to make sure that systems (locally and cloud based) are meeting compliance standards, or to automate patch processes when a security issue is noted.
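To make that concrete, here is a minimal sketch of privilege elevation under the automation's control, using Ansible's standard `become` escalation (the `linux_servers` inventory group is a placeholder): the connecting account stays unprivileged, and escalation happens only inside the tasks the playbook defines.

```yaml
# Sketch: no one logs in as root. Ansible escalates per play via
# "become" (typically sudo), keeping elevation inside the automation.
- name: Apply outstanding package updates as part of patch compliance
  hosts: linux_servers        # placeholder inventory group
  become: true                # elevate privileges only for these tasks
  tasks:
    - name: Ensure all packages are at their latest level
      ansible.builtin.package:
        name: "*"
        state: latest
```

Because the escalation is declared in the play rather than done ad hoc at a shell prompt, it is logged, repeatable, and limited to exactly what the tasks specify.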
So, that is the basic idea of what IaC can bring to an environment. Getting into specifics gets, well, specific. So rather than looking at every option out there, I'm just going to look at Ansible, both because I have some expertise in that topic and because it has some specific benefits for IBM Storage in general, and for IBM Spectrum Virtualize and IBM FlashSystems in particular. These benefits come from the Red Hat Ansible Automation Certification Program.
The idea of IaC is great, but what about support? If you create a playbook (a group of "plays", written in YAML (YAML Ain't Markup Language), executed to do the required tasks) and something goes wrong, who do you call? If IBM wrote the modules (Python code written to control system resources), do you call IBM, or do you call Red Hat, since they support Ansible? It is a potentially complex situation that could take valuable time to resolve. With the Certification Program, that issue gets put to rest pretty easily. Certified Partners submit their modules to Red Hat for approval. Before uploading them to the Automation Hub (accessible here with a valid subscription), Red Hat analyzes the code to make sure that there are no vulnerabilities and that everything works correctly. As a result, there is a known expected outcome, and Red Hat support can work through problems as needed.
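For readers new to the terminology, a minimal playbook makes the pieces concrete: a playbook is a YAML list of plays, each play names the hosts it targets, and each task in a play invokes one module.

```yaml
# A playbook is a YAML list of plays; this one contains a single play.
- name: Verify connectivity            # the play
  hosts: all                           # which inventory hosts it targets
  gather_facts: false
  tasks:                               # ordered list of tasks
    - name: Ping each target           # a task...
      ansible.builtin.ping:            # ...calling one module
```

Everything else in Ansible (variables, roles, collections) builds on this basic play-task-module structure.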
Previously, Ansible used Ansible Galaxy as the repository for modules created by the community, and Galaxy will continue to be used (the Spectrum Accelerate modules can be found there). But for the cleanest support environment, there's much to be said for the Automation Hub. IBM's certified storage content currently covers Spectrum Virtualize (which, of course, includes FlashSystems). The certified modules can be used to collect facts about what's installed and how it's being used, and to perform the various tasks that would otherwise be done by hand on a regular basis. When a data center starts implementing IaC more broadly, this can result in real economies of scale.
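As a sketch of the fact-gathering side, assuming the `ibm.spectrum_virtualize` certified collection and its `ibm_svc_info` module (the cluster address, credentials, and subset names here are placeholders and may vary by collection release):

```yaml
# Sketch: ask a Spectrum Virtualize cluster what it has configured.
- name: Report what is configured on a Spectrum Virtualize cluster
  hosts: localhost                      # modules talk to the cluster's API
  gather_facts: false
  tasks:
    - name: Collect volume and host information
      ibm.spectrum_virtualize.ibm_svc_info:
        clustername: "{{ cluster_ip }}"   # placeholder variables
        username: "{{ cluster_user }}"
        password: "{{ cluster_pass }}"
        gather_subset: [vol, host]        # subsets assumed from collection docs
      register: svc_facts               # facts land here for later tasks
```

Registered facts like these can then drive reports, compliance checks, or conditional tasks further down the playbook.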
As an example, I recently had the pleasure of developing a webinar with Matt Key, from my old team from my IBM days, the Washington Systems Center (formerly Advanced Technical Skills, formerly Advanced Technical Support), and Tom Coyle from the Mid-Atlantic territory. Matt used the Spectrum Virtualize modules to provision a VMware datastore on a FlashSystem 9200. This kind of thing could easily lend itself to users self-provisioning VMs, storage, and even networking from a catalog. More than that, it could be used to clean up unused resources (particularly in cloud environments) so that they aren't needlessly taking up space and adding cost.
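A simplified sketch of the storage side of that kind of provisioning, again assuming the `ibm.spectrum_virtualize` collection; the module and parameter names (`ibm_svc_manage_volume`, `ibm_svc_vol_map`) are from my reading of the collection, and the pool, volume, host, and credential values are placeholders:

```yaml
# Sketch: create a volume and present it to an ESXi host.
- name: Provision a volume and map it to a host
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a volume for the datastore
      ibm.spectrum_virtualize.ibm_svc_manage_volume:
        clustername: "{{ cluster_ip }}"
        username: "{{ cluster_user }}"
        password: "{{ cluster_pass }}"
        name: esx_datastore_01          # placeholder volume name
        pool: Pool0                     # placeholder storage pool
        size: 1024
        unit: gb
        state: present

    - name: Map the new volume to the host
      ibm.spectrum_virtualize.ibm_svc_vol_map:
        clustername: "{{ cluster_ip }}"
        username: "{{ cluster_user }}"
        password: "{{ cluster_pass }}"
        volname: esx_datastore_01
        host: esx01                     # placeholder host object on the array
        state: present
```

Wrap a play like this in a survey or catalog item and you have the self-provisioning flow described above, with the VMware-side datastore creation handled by additional tasks.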
Now, for this demonstration, Matt took advantage of Ansible Tower, which adds capabilities beyond the basic Ansible engine: a GUI, user authentication, workflows, and surveys (the way Ansible Tower allows requests to be entered). I would recommend making Tower part of any storage automation you put in place. And regardless, whether you're looking to buy or looking to sell IBM FlashSystems, SAN Volume Controller (SVC), or Spectrum Virtualize for Public Cloud, I would highly recommend that you consider making the Ansible Automation Platform part of that solution.
Anyway, that’s what I have for today. Let me know if you have any questions in the comments or what you’re seeing in your experience.