Watch the video below as Sam Werner, IBM VP of Storage Product Management, and Dave Vellante, Co-Founder and Co-CEO of SiliconANGLE Media, describe the evolution of containers and how IBM helps customers create container storage strategies.
Dave Vellante 00:05
Hello, everyone, and welcome to this Cube conversation. My name is Dave Vellante. And you know, containers, they used to be stateless and ephemeral, but they're maturing very rapidly. As cloud native workloads become more functional and go mainstream, persisting and protecting the data that lives inside of containers is becoming more important to organizations. Enterprise capabilities such as high availability, reliability, scalability, and other features are now more fundamental and important. And containers are the linchpin of hybrid cloud, cross cloud, and edge strategies. Now, fusing these capabilities together across these regions in an abstraction layer that hides the underlying complexity of the infrastructure is where the entire enterprise technology industry is headed. But how do you do that without making endless copies of data and managing versions, not to mention the complexities and cost of doing so? And with me to talk about how IBM thinks about and is solving these challenges are Eric Herzog, who is the Chief Marketing Officer and VP of Global Storage Channels for the IBM Storage division, and Sam Werner, the Vice President of Offering Management and the Business Line Executive for IBM Storage. Guys, great to see you again. Wish we were face to face, but thanks for coming on the Cube. Great to be here, as always. Guys, you heard my little spiel there about the problem statement. Eric, maybe you could start us off. I mean, is it on point?
Eric Herzog 01:32
Yeah, absolutely. What we see is containers are going mainstream. I frame it very similarly to what happened with virtualization, right? It got brought in by the dev team, the test team, the applications team, and then eventually, of course, it became the mainstay. Containers are going through exactly that right now, brought in by the DevOps people, the software teams. And now it's becoming, again, persistent, real use: clients that want to deploy a million of them, just the way they historically have deployed a million virtual machines. Now they want a million containers, or two million. So now it's going mainstream, and the feature functions that you need once you take it out of the test, sort of play-with stage, to the real production phase really change the ballgame: the features you need, the quality of what you get, and the types of things you need the underlying storage, and the data services that go with that storage, to do in a fully container world.
Dave Vellante 02:33
So Sam, how'd we get here? I mean, containers have been around forever, you could say back to Linux, right? But then they did, as Eric said, go mainstream. It started out, you know, kind of a little experimental; as I said, they were ephemeral, you didn't really need to persist them. But it's changed very quickly. Maybe you could talk to that evolution and how we got here.
Sam Werner 02:57
This is all about agility, right? It's about enterprises trying to accelerate their innovation. They started off by using virtual machines to try to accelerate access to IT for developers. And developers are constantly outrunning that; they've got to go faster, they have to deliver new applications, business lines need to figure out new ways to engage with their customers. Especially now, the past year has even further accelerated this need to engage with customers in new ways. So it's about being agile, and containers promise, or provide, a lot of the capabilities you need to be agile. What enterprises are discovering is that a lot of these initiatives are starting within the business lines, and they're building these applications, making these architectural decisions, and building DevOps environments on containers. And what they're finding is they're not bringing the infrastructure teams along with them, and they're running into challenges that are inhibiting their ability to achieve the agility they want, because their storage needs aren't keeping up. So this is a big challenge that enterprises face. They want to use containers to build a more agile environment to do things like DevOps, but they need to bring the infrastructure teams along. And that's what we're focused on now: how do you make that agile infrastructure to support these new container worlds?
Dave Vellante 04:17
Got it. So Eric, you guys made an announcement to directly address these issues. It's kind of a firehose of innovation. Maybe you could take us through it, and then we can unpack that a little bit. Sure.
Eric Herzog 04:29
So what we did is, on April 27, we announced IBM Spectrum Fusion. This is a fully container-native, software-defined storage technology that integrates a number of proven, battle-hardened technologies that IBM has been deploying in the enterprise for many years. That includes a global, scalable file system that can span edge, core, and cloud seamlessly, with a single copy of the data. So no more data silos, and no more 12 copies of the data, which of course drive up CAPEX and OPEX. Spectrum Fusion reduces that and makes it easier to manage: it cuts the cost from a CAPEX perspective and cuts the cost from an OPEX perspective. By being fully container native, it's ready to go for the container-centric world, and it can span all types of areas. So what we've done is create a storage foundation, which is what you need at the bottom. So things like the single global namespace, single accessibility; we have local caching, so whether you're at the edge, core, or cloud, regardless of where the data is, you think the data is right with you, even if it physically is not. That allows people to work on it, and we have file locking and other technologies to ensure that the data is always good. And then of course we've imbued it with the HA, disaster recovery, and backup and restore technology, which we've had for years and have now made fully container native. So Spectrum Fusion basically takes several elements of IBM's existing portfolio, makes them container native, and brings them together into a single piece of software. And we'll provide that both as a software-defined storage technology, early in 2022, and, as our first pass, as a hyperconverged appliance, which will be available next quarter, in Q3 of 2021. That, of course, means it'll come with compute, it'll come with storage, come with a rack, even come with networking.
And because we can preload everything for the end users, or for our business partners, we'll also include Kubernetes, Red Hat OpenShift, and Red Hat's virtualization technology, all in one simple package, all easy to use, with a single management GUI to manage everything, both the software side and the physical infrastructure that's part of the hyperconverged system-level technology.
Dave Vellante 06:49
So Sam, maybe you can help us understand the architecture, and maybe the prevailing ways in which people approach container storage. You know, what's the stack look like? And how have you guys approached it?
Sam Werner 07:02
That's a great question. Really, there are three layers that we look at when we talk about container-native storage. It starts with the storage foundation, which is the layer that actually lays the data out onto media, does it in an efficient way, and makes that data available where it's needed. So that's the core of it, and the quality of your storage services above that depends on the quality of the foundation that you start with. Then you go up to the storage services layer. This is where you bring in capabilities like HA and DR. People take this for granted, I think, as they move to containers. We're talking about moving mission-critical applications now into a container and hybrid cloud world. How do you actually achieve the same levels of high availability you did in the past? If you look at what large enterprises do, they run three-site or four-site replication of their data with HyperSwap, and that's how they ensure high availability. How do you bring that into a Kubernetes environment? Are you ready to do that? We talked about how only 20% of applications have really moved into a hybrid cloud world; the thing that's inhibiting the other 80% is these types of challenges. Okay, so the storage services include HA, DR, data protection, data governance, data discovery. You talked about how making multiple copies of data creates complexity; it also creates risk and security exposures. If you have multiple copies of data, if you needed data to be available in the cloud, you're making a copy there. How do you keep track of that? How do you destroy the copy when you're done with it? How do you keep track of governance and GDPR? Right? So if I have to delete data about a person, how do I delete it everywhere? So there are a lot of these different challenges. These are the storage services. So that's the storage services layer.
So layer one is the data foundation, layer two is the storage services, and then there needs to be a connection into the application runtime. There has to be application awareness to do things like high availability and application-consistent backup and recovery. So then you have to create the connection, and in our case we're focused on OpenShift, right? When we talk about Kubernetes: how do you create the knowledge between layer two, the storage services, and layer three, the application services?
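The three layers Sam describes can be sketched as a toy Python model. This is purely illustrative: every class and method name below is hypothetical, and none of it reflects IBM's actual APIs.

```python
# Illustrative sketch of the three-layer container storage stack described
# above. All names are invented; this models the concepts, not real products.

class DataFoundation:
    """Layer 1: lays data out onto media and serves reads/writes."""
    def __init__(self):
        self._blocks = {}

    def write(self, key, data):
        self._blocks[key] = data

    def read(self, key):
        return self._blocks[key]


class StorageServices:
    """Layer 2: HA/DR and protection, built on top of the foundation."""
    def __init__(self, primary, replicas):
        # e.g. the 3- or 4-site replication large enterprises run today
        self.sites = [primary] + replicas

    def write(self, key, data):
        for site in self.sites:          # synchronous replication for HA
            site.write(key, data)

    def snapshot(self):
        # point-in-time copy taken from the primary site
        return dict(self.sites[0]._blocks)


class AppRuntimeConnector:
    """Layer 3: application awareness, e.g. app-consistent backup."""
    def __init__(self, services, quiesce, resume):
        self.services, self.quiesce, self.resume = services, quiesce, resume

    def consistent_backup(self):
        self.quiesce()                   # pause/flush the app first
        snap = self.services.snapshot()
        self.resume()
        return snap
```

The point of the sketch is the separation of concerns: replication lives in the services layer, while the quiesce/resume hooks needed for an application-consistent backup live in the runtime connector.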
Dave Vellante 09:18
And so this is your three-layer cake. And then, as far as the policies that I want to inject, you've got an API out and entries in, so I can use whatever policy engine I want? How does that work?
Sam Werner 09:30
So we're creating consistent sets of APIs to bring those storage services up into the application runtime. We in IBM have things like IBM Cloud Satellite, which brings the IBM public cloud experience to your data center, or into other public cloud environments, giving you one hybrid cloud management experience. We'll integrate there, giving you that consistent set of storage services within IBM Cloud Satellite. We're also working with Red Hat on their advanced cluster manager, also known as RHACM, to create multi-cluster management of your Kubernetes environment, giving that consistent experience. Again, one common set of APIs.
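As a rough illustration of what a "consistent set of APIs" looks like from the developer's side in a Kubernetes or OpenShift cluster, here is a PersistentVolumeClaim manifest built as a plain Python dict. The storage class name is invented for the example; a real deployment would use whatever class the storage layer registers with the cluster.

```python
import json

# Hypothetical PersistentVolumeClaim builder, for illustration only.
# "example-fusion-class" is a made-up storageClassName, not a real product
# identifier; the shape of the manifest is standard Kubernetes.

def make_pvc(name, size_gi, storage_class="example-fusion-class"):
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            # shared access, as a global file system would allow
            "accessModes": ["ReadWriteMany"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

print(json.dumps(make_pvc("app-data", 100), indent=2))
```

The developer only names a class and a size; everything behind it (replication, caching, placement) is the storage team's concern, which is the abstraction being described.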
Dave Vellante 10:12
So the appliance comes first. Is that just time to market, or is there sort of an enduring demand for appliances from some customers? Do they want that? Maybe you could explain that strategy.
Eric Herzog 10:26
Yeah, so first let me take it back a second and look at our existing portfolio. Our award-winning products are both software-defined and system-based. So for example, Spectrum Virtualize comes on our FlashSystem; Spectrum Scale comes on our Elastic Storage System. And we've had this model where we provide the exact same software both on an array and as a standalone piece of software. This is unique in the storage industry. When you look at our competitors, when they've got something that's embedded in their array, their array manager, if you will, that's not what they'll try to sell you as software-defined storage. And of course, many of them don't offer software-defined storage in any way, shape, or form. So we've done both. So Spectrum Fusion will have a hyperconverged configuration, which will be available in Q3, and a software-defined configuration, which will be available at the very beginning of 2022. We want to get out in this market, get feedback from our clients, feedback from our business partners. By doing a container-native HCI technology, we're way ahead; we're going to where the puck is, we're throwing the ball ahead of the wide receiver. If you're a soccer fan, we're making sure the midfielder got it to the forward ahead of time, so we could kick the goal right in. That's what we're doing. Other technologies lead with virtualization, which is great, but virtualization is kind of old hat, right? VMware and other virtualization layers have been around for 20 years now. Containers are where the world is going. And by the way, we'll support everything. We still have customers in certain worlds that are using bare metal; guess what, we work fine with that. We work fine with virtualized; we have a tight integration with both Hyper-V and VMware. So some customers will still do that. But containers are the new wave, so with Spectrum Fusion we are riding the wave, not fighting the wave. And that way we can meet all the needs, right?
Bare metal, virtual environments, and container environments, in a way that is all based on the end users' applications, workloads, and use cases, what goes where, and IBM Storage can provide all of it. So we'll give them two methods of consumption, you know, by early next year. And we started with hyperconverged first because, A, we felt we had a lead, truly a lead. Other people are leading with virtualization; we're leading with OpenShift and containers. We're the first fully container-native, OpenShift-based, ground-up hyperconverged of anyone in the industry, versus somebody who's done VMware or some other virtualization layer and then sort of glommed on containers as an afterthought. We're going to where the market is moving, not to where the market has been.
Dave Vellante 13:06
So just to follow up on that: you've kind of got the sort of Switzerland DNA, and it's not just OpenShift and Red Hat and the open source ethos; it goes all the way back to SAN Volume Controller, back in the day, where you could, you know, virtualize anybody's storage. How is that carrying through to this announcement?
Eric Herzog 13:27
So Spectrum Fusion is doing the same thing. Spectrum Fusion, which has many key elements brought in from our history with Spectrum Scale, supports non-IBM storage: for example, EMC Isilon NFS it will support. Fusion will support Spectrum Scale, Fusion will support our Elastic Storage System, Fusion will support NetApp filers as well. Fusion will support IBM Cloud Object Storage, both as software-defined storage and as an array technology, and Amazon S3 object stores, and any other object storage vendor who is compliant with S3. All of those can be part of the global-namespace, scalable file system. We can bring in, for example, object data without making a duplicate copy. The normal way to do that is you make a duplicate copy: you have a copy in the object store, and you make a copy and bring that into the file system. Well, guess what, we don't have to do that. So again, cutting CAPEX and OPEX, and easing management. Just as we do with our FlashSystem product, our Spectrum Virtualize, and the SAN Volume Controller, where we support over 550 storage arrays that are not ours, that are our competitors', with Spectrum Fusion we've done the same thing: Fusion will support Spectrum Scale, the IBM ESS, IBM Cloud Object Storage, Amazon S3 object stores as well as other S3-compliant stores, and EMC Isilon NFS. And by the way, we can do this through a discovery model as well, not just integration into the system. So we've made sure that we really do protect existing investments. And particularly with the discovery capability, you've got AI or analytic software connecting with the API into the discovery technology; you don't have to traverse and try to find things, because the discovery will create real-time metadata cataloguing and indexing, not just of our storage, but of the other storage I mentioned, which is the competition.
So, talk about making it easier to use, particularly for people who are heterogeneous in their storage environment, which is pretty much the bulk of the global Fortune 1500, for sure. We're allowing them to use multiple vendors but derive real value with Spectrum Fusion, and get all the capabilities of Spectrum Fusion and all the advantages of the enterprise data services, not just for our own products, but for the other products as well that aren't ours.
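A toy sketch of the discovery idea Eric describes: an index built once across heterogeneous backends, so analytics tools query the catalog instead of traversing each storage system. All names here are hypothetical; this is not IBM's discovery technology, just the general pattern.

```python
# Toy metadata catalog illustrating the discovery pattern described above:
# ingest listings from multiple (possibly competing) storage backends once,
# then answer searches from the in-memory index rather than walking each
# system. Purely illustrative; all names are invented.

class MetadataCatalog:
    def __init__(self):
        self.entries = {}   # "backend:path" -> metadata dict

    def ingest(self, backend_name, listing):
        """Register a backend's listing, e.g. {'/a.csv': {'type': 'csv'}}."""
        for path, meta in listing.items():
            self.entries[f"{backend_name}:{path}"] = meta

    def search(self, **criteria):
        """Return entries whose metadata matches all given key/value pairs."""
        return [
            path for path, meta in self.entries.items()
            if all(meta.get(k) == v for k, v in criteria.items())
        ]
```

An AI or analytics job asks the catalog `search(type="csv")` and gets hits across every indexed backend, without knowing or caring which vendor holds the data.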
Dave Vellante 15:51
So Sam, we understand the downside of copies. But then, so, you know, you're not doing multiple copies. How do you deal with latency? What's the secret sauce here? Is it the file system? Is there other magic in here?
Sam Werner 16:07
That's a great question, and I'll build a little bit off of what Eric said. But look, one of the really great, unique things about Spectrum Scale is its ability to consume any storage. We can actually allow you to bring in data sets from where they are: it could have originated in object storage, and we'll cache it into the file system; it can be on any block storage; it can literally be on any storage you can imagine, as long as you can integrate a file system with it. And as you know, most applications run on top of a file system, so it naturally fits into your application stack. Spectrum Scale, uniquely, is a globally parallel file system. There are not very many of them in the world, and there's nothing that can achieve what Spectrum Scale can do. We have customers running with exabytes of data, and the performance improves with scale. So you can actually deploy Spectrum Scale on prem, build out an environment with it consuming whatever storage you have, then go into AWS or IBM Cloud or Azure, deploy an instance of it, and it'll now extend your file system into that cloud. Or you can deploy it at the edge, and it'll extend your file system to that edge. This gives you the exact same set of files and visibility, while caching only what's needed. Normally, you would have to make a copy of data into the other environment, and then you'd have to deal with that copy later. Let's say we're doing a cloud bursting use case; let's look at that as an example to make this real. You're running an application on prem, and you want to spin up more compute in the cloud for your AI. Normally, you'd have to make a copy of the data, run your AI, then figure out what to do with that data. Do you copy some of it back? Do you resync it? Do you delete it?
With Spectrum Scale, it will just automatically cache whatever you need, and it'll run there. If you decide to spin it down, your copy is still on prem; no data is lost. We can actually deal with all of those scenarios for you. And then if you look at what's happening at the edge: a lot of, say, video surveillance data pouring in, looking at the manufacturing floor, looking for defects. You can run AI right at the edge, make it available in the cloud, make that data available in your data center. Again, one file system going across all of it. And that's something unique in our data foundation, built on Spectrum Scale.
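The cloud-bursting pattern Sam walks through, cache on first read at the burst site with nothing to copy back or resync on spin-down, can be sketched generically. This is a toy read-through cache, not Spectrum Scale's actual caching implementation.

```python
# Toy read-through cache illustrating the cloud-bursting pattern described
# above: the burst site pulls data from the home site on first access only,
# and discarding its cache on spin-down loses nothing, because the home
# copy stays authoritative. Illustrative only.

class HomeSite:
    """The on-prem, authoritative copy of the data."""
    def __init__(self, files):
        self.files = dict(files)
        self.remote_reads = 0    # count of WAN fetches, for illustration

    def fetch(self, path):
        self.remote_reads += 1
        return self.files[path]


class BurstSite:
    """A cloud or edge instance that caches from the home site on demand."""
    def __init__(self, home):
        self.home = home
        self.cache = {}

    def read(self, path):
        if path not in self.cache:          # first access: pull from home
            self.cache[path] = self.home.fetch(path)
        return self.cache[path]             # later accesses: served locally

    def spin_down(self):
        self.cache.clear()                  # nothing to copy back or resync
```

Two reads of the same file cost one remote fetch; spinning the burst site down simply drops the cache, and the on-prem copy is untouched, which is the "no copy management" point being made.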
Dave Vellante 18:23
So there's some metadata magic in there as well, and intelligence based on location. Okay, so you're smart enough to know where the data lives. What's the sweet spot for this, Eric? Are there any particular use cases or industries that we should be focused on?
Eric Herzog 18:42
So first, let's talk about the industries. We see certain industries going container quicker than other industries. So first is financial services; we see it happening there. Manufacturing, with AI-based manufacturing platforms: we actually have a couple of clients right now doing autonomous driving software with us on containers, even before Spectrum Fusion, with Spectrum Scale. We see the public sector, and of course healthcare. And in healthcare, don't just think delivery; at IBM that includes the research side, so the genomics companies, the biotech companies, the drug companies are all included in that. And then of course retail, both on prem and off prem. So those are sort of the industries. From an application and workload standpoint, basically AI, analytics, and big data applications or workloads are the key things that Spectrum Fusion helps with, because of its file system and its high performance, and those applications are tending to spread across core, edge, and cloud. So those applications are spreading out; they're becoming broader than just running in the data center. And by the way, if they want to run it just in the data center, that's fine. A perfect example: a giant global auto manufacturer. They've got factories all over, and if you think there aren't compute resources in every factory, there are, because those factories, I just saw an article actually, cost about a billion dollars to build. A billion. So they've got their own IT, and it's connected to their core data center as well. That's a perfect example of the enterprise edge where Spectrum Fusion would be an ideal solution, whether they did it as software-defined only or as the appliance. When you've got a billion-dollar factory just to build it, let alone produce the autos, or whatever you're producing, silicon, for example, those fabs all cost that much to build, that's where the enterprise edge fits in very well with Spectrum Fusion.
Dave Vellante 20:41
So in those industries, what's driving the adoption of containers? Is it just that they want to modernize? Is it because they're doing, you know, some of those workloads that you mentioned? Or is it edge? Like you mentioned manufacturing; I could see that potentially being a case where edge is the driver.
Eric Herzog 20:59
Well, it's a little bit of all of those. Back in the day, for example, virtualization came out, and virtualization offered advantages over bare metal. Okay, now containerization has come out, and containerization is offering advantages over virtualization. The good thing at IBM is we know we can support all three. And we know, again, in the global Fortune 2000 or 1500, they're probably going to run all three based on the application, workload, and use case. Our storage is really good at bare metal, very good at virtualization environments, and now, with Spectrum Fusion being container native, outstanding for container-based environments. So we see that these big companies will probably have all three, and IBM Storage is one of the few vendors, if not the only vendor, that can adroitly support all three of those various workload types. So that's why we see this as a huge advantage. And again, the market is going to containers. I'm a native Californian: you don't fight the wave, you ride the wave. And the wave is containers, and we're riding that wave.
Dave Vellante 21:59
If you don't ride the wave, you become driftwood, as Pat Gelsinger would say.
Eric Herzog 22:04
True, another guy in California, my old boss.
Dave Vellante 22:07
So, okay, so I wonder, Sam... I sort of hinted up front in my little narrative there, but the way we see this is: you've got on-prem hybrid, you've got public clouds, cross cloud, moving to the edge. OpenShift, as I said, is the linchpin to enabling some of those. And what we see is this layer that abstracts the complexity, hides the underlying complexity of the infrastructure, so that it becomes kind of an implementation detail. Eric talked about skating to the puck, or whatever sports analogy you want to use. Is that where the puck is headed?
Sam Werner 22:44
Yeah, I mean, look, the bottom line is you have to remove the complexity for the developers. Again, the name of the game here is all about agility. You asked why these industries are implementing containers: it's about accelerating their innovation and their services for their customers. It's about leveraging AI to gain better insights about their customers and delivering what they want, improving their experience. So it's all about agility. Developers don't want to wait around for infrastructure; you need to automate it as much as possible. So it's about building infrastructure that's automated, which requires consistent APIs. It requires abstracting out the complexity of things like HA and DR. You don't want every application owner to have to figure out how to implement that; you want to make those storage services available and easy for a developer to implement and integrate into what they're doing. You want to ensure security across everything you do. As you bring more and more of your data, of your information about your customers, into these container worlds, you've got to have security rock solid; you can't leave any exposures there, and you can't afford downtime. There are increasing threats from things like ransomware; you don't see it in the news every day, but it happens every single day. So how do you make sure you can recover when an event happens to you? So yes, you need to build an abstracted layer of storage services, and you need to make it simply available to the developers in these DevOps environments. And that's what we're doing with Spectrum Fusion. We're taking an, I think, extremely unique, one-of-a-kind storage foundation with Spectrum Scale that gives you a single namespace globally, and we're building onto it an incredible set of storage services, making it extremely simple to deploy enterprise-class container applications.
Dave Vellante 24:39
So what's the bottom-line business impact? I mean, Sam, I think you articulated very well that this is all about serving the developers, versus, you know, a storage admin provisioning a LUN. So how does this change my organization, my business? What's the impact there?
Sam Werner 24:58
I'll mention one other point that we talk about at IBM a lot, which is the AI ladder. It's about how you take all of this information you have and use it to build new insights that give your company an advantage. An incumbent in an industry shouldn't be able to be disrupted if they're able to leverage all the data they have about the industry and their customers. But in order to do that, you have to be able to get to a single source of data and build it into the fabric of your business operations, so that all decisions you're making in your company, all services you deliver to your customers, are built on that data foundation and information. And the only way to do that and infuse it into your culture is to make this stuff real time. And the only way to do that is to build out a containerized application environment that has access to real-time data. The ultimate outcome... sorry, I know you asked for business results... is that you will, in real time, understand your clients, understand your industry, and deliver the best possible services. And the absolute, you know, business outcome is you will continue to gain market share in your environment and grow revenue. I mean, that's the outcome every business wants.
Dave Vellante 26:19
That's all about speed. Everybody was kind of forced into digital transformation last year; it was sort of rushed into and compressed. And now they get some time to do it right. And so, you know, modernizing apps, containers, DevOps, developer-led sorts of initiatives are really key to modernization. All right, Eric, we're out of time, but give us the bottom-line summary. Actually, we didn't talk about the 3200. Maybe you could give us a little insight on that before we close.
Eric Herzog 26:51
Sure. So in addition to what we're doing with Fusion, we also introduced a new Elastic Storage System, the 3200. It's all flash, it gets 80 gigabytes a second sustained at the node level, and we can cluster them infinitely. So, for example, if I've got 10 of them, I'm delivering 800 gigabytes a second sustained. And of course, AI, big data, and analytic workloads are extremely sensitive to bandwidth and data transfer rate; that's what they need to deliver their application base properly. It comes with Spectrum Scale built in, so you get the advantage of Spectrum Scale. We've talked a lot about Spectrum Scale because it is, if you will, one of the three fathers of Spectrum Fusion. So it's ideal, with its highly parallel file system. It's used all over in high performance computing, in supercomputing, in drug research, in healthcare, and in finance. Probably about 80% of the largest banks in the world use Spectrum Scale already for AI and big data analytics. So the new 3200 is an all-flash version, twice as fast as the older version, with all the benefits of Spectrum Scale, including the ability to seamlessly integrate into existing Spectrum Scale or ESS deployments. And when Fusion comes out, you'll be able to have Fusion, and you could also add 3200s to it if you want to, because of the capability of our global namespace and our single file system across edge and cloud. So that's the 3200 in a nutshell, Dave. So what's the bottom line, Eric? What's the bumper sticker? Yeah, the bumper sticker is: you've got to ride the wave of containers, and IBM Storage has what can take you there, so that you win the big surfing contest and get the big prize.
Dave Vellante 28:42
Eric and Sam, thanks so much, guys. It's great to see you, and we miss you guys. Hopefully we get together soon. So get your jabs, and we'll have a beer. All right, thanks, Dave. All right, thank you for watching, everybody. This is Dave Vellante for the Cube. We'll see you next time.