

Red Hat Summit 2021 Phoenix

By Tony Pearson posted Fri November 12, 2021 04:12 PM

  

This week I attended the Red Hat Summit in Phoenix, Arizona, a one-day event being held in twenty different US cities. The Phoenix stop was at the Renaissance Hotel downtown. Here is my recap of the labs and lectures I attended.

(FTC disclosure: I work for, and am a stockholder of, IBM, the parent company of Red Hat. Mentions of Red Hat products and services in my blog posts can be considered a celebrity endorsement under the United States Federal Trade Commission Endorsement Guidelines.)
Containerizing Applications: Existing and New

Ian Pilcher, Mark Baker, and Jon Leek facilitated this hands-on lab. The lab advertised that we would containerize an existing application, and then break it up into separate microservice containers. There were four parts:

  • Red Hat no longer uses "Docker", but instead provides a set of tools, including podman and buildah, for building and running container images locally. Instead of carrying a full operating system the way virtual machines do, containers are built with the bare minimum of applications and run-time libraries needed to run the application. The "Containerfile" (or "Dockerfile" if you prefer) contains the sequence of commands to build the container image; a minimal sketch appears after this list.
  • The "existing application" for this lab was simply a PHP-based WordPress website running Apache HTTP server with a MariaDB back-end database. I was quite familiar, as this was exactly what I used in my last job to run my "Tony @ TechU" blog. In much the same way that you can run all of this software on a bare metal server, or in a single virtual machine, we used podman and buildah to package this into one big container.
  • Splitting this up into "microservices", we put WordPress/PHP/Apache in one container and the MariaDB database in a second container. We connected the two containers together and everything worked well.
  • The last step was to orchestrate these multiple containers with a Kubernetes-based platform, specifically Red Hat OpenShift Container Platform. We put the containers we built into pods, and these pods could then attach persistent data volumes (a rough sketch of those objects appears at the end of this section). The WordPress pod had a persistent volume for all of its file uploads, and the database pod had a persistent volume for the MariaDB database tables.
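To give a flavor of the first three parts, here is a minimal sketch of the kind of Containerfile and podman commands involved. The image names, package choices, and passwords are illustrative placeholders of my own, not the exact ones from the lab materials:

    # Containerfile -- a minimal sketch of the WordPress/PHP/Apache image
    FROM registry.access.redhat.com/ubi8/ubi
    # Install the Apache HTTP server and PHP, then clean the package cache
    RUN dnf install -y httpd php php-mysqlnd && dnf clean all
    # Copy the WordPress files into Apache's document root
    COPY wordpress/ /var/www/html/
    EXPOSE 80
    CMD ["httpd", "-D", "FOREGROUND"]

    # Build the image, then run the two containers on a shared network
    buildah bud -t wordpress-app .
    podman network create wp-net
    podman run -d --name db --network wp-net \
        -e MYSQL_ROOT_PASSWORD=changeme -e MYSQL_DATABASE=wordpress \
        registry.redhat.io/rhel8/mariadb-105
    podman run -d --name web --network wp-net -p 8080:80 wordpress-app

With both containers on the same network, the WordPress container can reach the database simply by the container name "db".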

This lab would be a great prerequisite for the sessions I attended at [IBM Virtual TechU 2021] last month.
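For the orchestration step in the fourth part of the lab, the persistent storage behind each pod is requested with a PersistentVolumeClaim and mounted into the container. Here is a rough sketch for the database pod, with names, sizes, and paths of my own choosing rather than the lab's exact manifests:

    # PersistentVolumeClaim for the MariaDB data (illustrative values)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mariadb-data
    spec:
      accessModes:
        - ReadWriteOnce        # mounted read-write by a single node
      resources:
        requests:
          storage: 1Gi         # requested capacity

    # In the pod spec, the claim is mounted at the database data directory
    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mariadb-data
    containers:
      - name: mariadb
        image: registry.redhat.io/rhel8/mariadb-105
        volumeMounts:
          - name: data
            mountPath: /var/lib/mysql

The WordPress pod gets the same treatment, with its claim mounted at the directory that holds file uploads.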

Dining Demo: DevOps, Data Scientists, Developers - Oh My!

Edson Yanaga presented this lecture during lunch.

The session was focused on Red Hat's latest offering, Quarkus. In Arizona, there is a roadside attraction called [The Thing]. Billboards along the highway ask "What is The Thing?" to get you to finally stop there. I felt similarly about Quarkus.

From what I can gather, Quarkus is both an optimized way to run Java and a development framework, based on OpenJDK but designed specifically for cloud-native container usage. Java was developed back in 1995, before VMware was even founded in 1998, and was designed initially for bare metal systems.

Today, Java is considered fat and slow compared to other programming language environments. But with 90 percent of the Fortune 500 relying on Java, and over 8 million Java developers, it is not going away anytime soon.

Edson used "Visual Studio Code" (VS Code IDE) to show how quickly he could develop a website in Java using Quarkus. In effect, Quarkus provides a framework like Laravel for PHP, Django or Flask for Python. From a show of hands, I was the only software developer in the audience familiar with all three languages.

Edson said it took Red Hat three years to develop Quarkus, and claimed it is now five times faster than Node.js (which is based on JavaScript). Applications written in Java can run in the cloud in as little as 256 MB of memory. Some customers using Quarkus have already realized thousands of dollars in savings from smaller footprints and fewer cloud resources.

While Quarkus is not yet a full replacement for J2EE, it offers a "Spring" framework compatibility API. It is compatible with Java 8 and 11, and support for Java 17 is nearly complete.

Machine Learning workflows on Red Hat OpenShift Data Science (RHODS)

Audrey Guidera and Ian Pilcher facilitated this hands-on lab.

The lifecycle of Artificial Intelligence machine learning has four phases: first you build a model, then you train it, then you deploy it, and then you monitor its results. It is a lot like buying a puppy: you train it to be a bomb-sniffing dog, an attack guard dog, or a seeing-eye dog; when the dog is ready, you deploy it into the world; and then you make sure it works successfully in new, unforeseen situations.

I have used tools like Anaconda or IBM Watson Studio that are available on premises. Red Hat brings this to the cloud with a Red Hat managed service called "Red Hat OpenShift Data Science", or RHODS for short. It is currently deployed on Amazon Web Services (AWS) with standard Intel CPUs, but there are plans to add GPU support next year.

The lab itself involved taking a set of financial transaction data, like credit card swipes, and writing a model to classify these as legitimate or fraudulent. We had thousands of data points to work with, and ran them through both linear and non-linear methods.

As is typically done, we used 75 percent of the data points to train the model and held back the remaining 25 percent to test it. You always want to test a model with never-seen-before data. My model ended up with about 92 percent accuracy, meaning that 92 percent of the time, legitimate or fraudulent transactions were correctly identified; the other 8 percent were false positives or false negatives.
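The notebook followed the usual scikit-learn pattern for this. Here is a condensed sketch, with the file name, column names, and choice of classifier being my own placeholders rather than the lab's exact code:

    # A minimal sketch of the fraud-classification workflow (Python)
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("transactions.csv")        # hypothetical input file
    X = df.drop(columns=["is_fraud"])           # feature columns
    y = df["is_fraud"]                          # 1 = fraud, 0 = legitimate

    # Hold back 25 percent of the data the model never sees during training
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)

    model = RandomForestClassifier(n_estimators=100)    # a non-linear method
    model.fit(X_train, y_train)

    # Score against the never-seen-before 25 percent
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))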

To help with this, RHODS includes Jupyter Notebooks, TensorFlow, and PyTorch. I was familiar with all three, so the lab was not difficult. Once you have a model you are happy with, you can use "Source-to-Image" (S2I) technology to convert it into a small container-based API. How cool is that!
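If I followed the S2I step correctly, packaging the finished model looks roughly like this from the command line; the builder image and names below are my guesses, not the exact values used in the lab:

    # Build a container image straight from the model's source directory,
    # then run it locally to try out the API
    s2i build . registry.access.redhat.com/ubi8/python-39 fraud-model-api
    podman run -d -p 8080:8080 fraud-model-api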

Red Hat OpenShift Application Services roadmap

Mark Baker presented this lecture about three recently launched services: Red Hat OpenShift Streams for Apache Kafka, Red Hat OpenShift Data Science, and Red Hat OpenShift API Management.

Normally, when I see a roadmap presentation, it is assumed that everyone knows the base function and is interested in what is coming over the next few years. Instead, Mark had to spend the bulk of his time explaining what these services are, and only gave a glimpse of what is coming in the first half of next year.

The first, Red Hat OpenShift Streams for Apache Kafka, is basically a service that connects "producers" with "consumers" of small packets of data, similar to Red Hat's existing AMQ or IBM MQ. If "Apache Kafka" sounds familiar to you, it is because it is also used in IBM's Spectrum Discover product.
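As a quick illustration of the producer/consumer idea, here is what the two sides look like with the community kafka-python library; the broker address and topic name are placeholders, and the authentication settings a managed Kafka service would require are omitted:

    # Producer: publishes small packets of data to a topic
    from kafka import KafkaProducer
    producer = KafkaProducer(bootstrap_servers="my-kafka:9092")
    producer.send("card-swipes", b'{"card": "1234", "amount": 19.99}')
    producer.flush()

    # Consumer: reads those packets back, starting from the oldest
    from kafka import KafkaConsumer
    consumer = KafkaConsumer("card-swipes",
                             bootstrap_servers="my-kafka:9092",
                             auto_offset_reset="earliest")
    for message in consumer:
        print(message.value)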

The second, Red Hat OpenShift Data Science, was what my last lab was all about.

The third, Red Hat OpenShift API Management, manages the deployment of API modules, such as the ones I built in my labs earlier that day. These could be APIs that encapsulate business logic, or Artificial Intelligence Machine Learning (AI/ML) models.

Assessing and addressing security at scale with Red Hat Insights and Red Hat Enterprise Linux (RHEL)

Greg Scott and Freddy Montero presented this lecture on maintaining a consistent security policy in your enterprise. This task is difficult enough today, and is becoming more complex with hybrid cloud deployments. Red Hat Enterprise Linux (RHEL) offers great technologies to help secure your environment, but how do you manage these at scale?

Red Hat Insights is a cloud-based service which works together with Red Hat Enterprise Linux to help solve these issues for you. The Red Hat Insights client runs continuously on your RHEL server, sending status to the cloud, or via Red Hat Satellite for systems not on the Internet. This could identify CVE vulnerabilities that need to be patched, security configuration settings that need to be changed, and so on.
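On the RHEL side, getting a system reporting to Insights is just a couple of commands; this is a simplified sketch, and registration will look different if you go through Satellite:

    # Register this RHEL host with Red Hat Insights
    sudo insights-client --register

    # Trigger a collection and upload on demand (it also runs on a schedule)
    sudo insights-client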


The event ended with an informal reception in the foyer. Many of the speakers had to leave to catch flights to the next event, while many attendees were local to Phoenix and went home to their families. I knew better than to leave during Phoenix's infamous rush hour traffic, so I stayed and chatted with some of the Red Hat event organizers.
