Red Hat’s New Mission: Making IT’s Four Footprints Immaterial

By: Paul Cormier, president, Products and Technologies, Red Hat

For the past several years, Red Hat has emphasized the interplay of IT’s four footprints, from physical servers and virtual machines to private and public clouds. A single environment is unlikely to scale and adapt to meet the needs of the modern enterprise, from competitive dynamics to evolving customer demands. Hybrid cloud, where workloads and resources span these deployment options, is now a critical component of digital transformation, as is consistency. CIOs need to know that their applications and services will behave the same way, every time, everywhere.

This isn’t new - you’ve heard this all before from me, from Red Hat, and at Red Hat Summit.

But here’s what is new: These four footprints are immaterial to the end user. The hybrid cloud is becoming a default technology selection - enterprises want the best answers to their problems, regardless of the footprint those answers exist in or the vendor that offers them. We’ve reached what I believe to be an inflection point in enterprise IT, where the underlying foundations of the technology stack are commoditized, just like plumbing, wiring, or the block foundation of a house: they just are. When you build a house, the quality of the materials a builder chooses and how they construct it often make a difference in the end quality, even though much of that may be invisible to the homeowner.

The same holds true in tech. For the most part, no one cares what’s behind the walls of enterprise IT. What enterprise users do care about, regardless of role, is that the technology stack meets their needs, is stable, has strong security, and can scale to address future needs.

Red Hat already rose to this challenge once before, in a pre-cloud world where bespoke UNIXes ruled a landscape of custom, million-dollar beige server boxes. Red Hat Enterprise Linux broke down the complex and expensive silos of custom operating systems while also competing against the popular-but-proprietary Windows Server market. It provided a common platform that spanned the server world, enabling rapid hardware innovation to flourish while retaining stability at the operating system level (where it actually impacted mission-critical systems). Red Hat Enterprise Linux has become the “single point of sanity” for an ecosystem of hardware and software companies to standardize on, and for commercial and enterprise developers to build applications and solutions on, with deployment options across all four footprints.

We’re facing a similar scenario today - the rise of public cloud services delivers innovation that far outpaces the enterprise capacity to consume it. Coupled with legacy hardware, sprawling virtualization deployments, and existing private clouds, organizations are faced with what seems like a binary choice: abandon millions, if not billions, of dollars in IT investments or get left behind in the dust clouds of innovation.

But once again, we have an answer: Red Hat intends to provide the common platform that offers a stable, consistent, and reliable fabric stretching across all four of IT’s footprints, regardless of the underlying hardware, service, or provider. This provides consistency and abstraction, enabling IT teams to focus on embracing innovation rather than desperately trying to integrate legacy technologies with screaming-fast emerging cloud services.

Broadly, this platform is Kubernetes; specifically, it’s Red Hat OpenShift.

One platform for digital transformation

Many of the conversations around emerging technologies, including clouds of all sizes, distill down into one theme: digital transformation. Organizations want to embrace new technologies, like Linux containers, microservices, artificial intelligence/machine learning, and the hybrid cloud, to outpace competitors, address new markets, and meet evolving customer demands. But pairing emerging technologies and cloud services with existing IT investments is far easier said than done.

Red Hat OpenShift Container Platform already serves as the Kubernetes-based bridge, linking the old world of bare metal and virtualization to IT’s new reality of private and public cloud services. In effect, Red Hat OpenShift provides a clear path to the hybrid cloud and, through that, digital transformation...but it gets better.

With the acquisition of CoreOS in January 2018, we’re adding automated operations to OpenShift, based on similar functionality from CoreOS Tectonic. This is designed to make running Kubernetes at cloud-scale as easy as pushing a button, as all of the manual grunt work of updating and upgrading OpenShift clusters is replaced by an automated process within OpenShift itself.

We’re extending this concept of Kubernetes automation on OpenShift even further with the addition of the Operator Framework to Red Hat OpenShift. Launched last week as an open source project and based on CoreOS’s Operator concept, the framework adds automation to the services and applications running on top of OpenShift. Now, ISVs can bring their applications to Red Hat OpenShift more quickly in a common, consistent manner, and deliver them on any cloud infrastructure where Red Hat OpenShift runs. And the initial response from ISVs has been tremendous.
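At its core, the Operator concept is a control loop: software declares the desired state of a service, watches the actual state, and computes the actions needed to converge the two. The following is a minimal, framework-free sketch of that reconcile loop in Python - hypothetical names and plain dictionaries, not the actual Operator SDK or Kubernetes APIs:

```python
# Conceptual sketch of the reconcile loop at the heart of the Operator
# pattern. All names here are illustrative; real Operators are built with
# the Operator SDK against the Kubernetes API.

def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired vs. actual state and return actions to converge them."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))   # missing entirely
        elif actual[name] != spec:
            actions.append(("update", name, spec))   # present but drifted
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))   # no longer wanted
    return actions

desired = {"db": {"replicas": 3}, "cache": {"replicas": 1}}
actual = {"db": {"replicas": 2}, "web": {"replicas": 1}}
print(reconcile(desired, actual))
```

An Operator runs this loop continuously, which is what turns one-off manual administration (scale this, patch that) into the automated, repeatable operations described above.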

Today, we also announced plans for Container Linux, CoreOS’s popular, lightweight container operating system. The Container Linux community lives on and is thriving. Container Linux capabilities will be used within the broader Red Hat ecosystem and will ultimately help to form the immutable foundation of Red Hat OpenShift offerings in the future as Red Hat CoreOS.

Hybrid clouds, hybrid services

The hybrid cloud is not, and will never be, a single vendor ecosystem. The diverse set of technologies and services comprising the hybrid cloud are what make it so powerful for enterprise IT, but this diversity makes integration and partner collaboration critical to deliver working solutions. To this end, Red Hat has long maintained a robust partner network to help drive open hybrid cloud adoption, and today we’re adding new integration with key partners to help make hybrid cloud the de facto footprint for modern IT.

Adding to our existing collaboration with IBM, we intend to bring the hybrid cloud to IBM customers globally, with Red Hat OpenShift Container Platform serving as the linchpin. As part of this expanded collaboration, IBM software will be offered as Red Hat Certified Containers on OpenShift, IBM Cloud Private will integrate with the capabilities of Red Hat OpenShift Container Platform, and Red Hat OpenShift will be available on the IBM public cloud. Simply put, this opens up hybrid cloud capabilities to a huge swath of enterprises and makes it easier to consume hybrid IT via a supported, enterprise-grade model.

We’re also building upon our growing relationship with Microsoft with the launch of the first jointly-managed Red Hat OpenShift service on a public cloud: Red Hat OpenShift on Azure. Through this service, enterprises can run both Windows containers and Red Hat Enterprise Linux containers on a single, common foundation, and move their cloud-native workloads across the hybrid cloud with ease.

There’s one common thread to these impressive updates, regardless of the partner: It’s Red Hat OpenShift Container Platform. We’re providing the common platform that spans hybrid cloud and hybrid services, enabling enterprises to build the applications that they need, with whatever services and components they need, regardless of where those tools lie or where the application will ultimately live. This is the future that OpenShift can provide, one where the footprint, the cloud, and even the developer tooling don’t matter - it’s all just there.

Emerging technologies and the modern management landscape to oversee them

The hybrid cloud is built from technologies that, even just a few years ago, were considered “too innovative” to support enterprise workloads, including Linux containers and Kubernetes. Now, we’re seeing the next wave of emerging technologies that will likely fuel the next iteration of hybrid IT, most notably serverless, a trend that we aim to address in the coming days.

Serverless fits in with the broad move towards abstracting enterprise IT. Broadly, serverless technologies enable developers to write functions that run when needed, and for only as long as needed; then they disappear. No one cares what happens behind the walls of enterprise IT, and serverless is a prime example of this; it’s this technology trend that makes the open hybrid cloud and a common operating platform so critical in today’s age.
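Stripped of the platform machinery, a serverless function is just a stateless handler that the platform invokes when an event arrives and tears down afterward. This toy dispatcher makes the shape clear - the names and registration API are hypothetical, not any particular FaaS offering:

```python
# Toy illustration of the serverless model: stateless handlers registered
# by name, invoked only when an event arrives, with nothing running in
# between. Hypothetical API, not any particular FaaS platform.

handlers = {}

def function(name):
    """Register a handler under a name, as a platform would at deploy time."""
    def register(fn):
        handlers[name] = fn
        return fn
    return register

@function("resize-image")
def resize(event):
    # A pure function of its input event - no servers, no retained state.
    return {"status": "resized", "target": event["width"]}

def invoke(name, event):
    """The platform runs the function for this one event, then it's gone."""
    return handlers[name](event)

print(invoke("resize-image", {"width": 128}))
```

The point of the sketch is the contract, not the plumbing: because the handler holds no state and names no server, the platform is free to run it on any footprint - which is exactly why a common operating platform underneath matters.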

But even with all of this abstraction based upon high levels of standardization, we still need to manage these technologies and, most importantly, remove complexity from the equation. The hybrid cloud requires a hybrid approach to management, but legacy Cloud Management Platforms (CMPs) are frequently ill-equipped to manage today’s hybrid environments, which require portability and flexibility. That’s why in the coming months you will hear more from us about how Red Hat’s management portfolio is evolving to better meet the needs of modern enterprises and support a cloud-native future.

It’s your IT that matters, not the footprint

We talk a lot about technology, footprints, and a common stack, but the only thing that matters to your organization is what matters to your organization. You want your essential services and applications to just work, regardless of where they live or who built them. That’s what defines your infrastructure: It’s the services, not the technology. The future is service-defined infrastructure, where enterprises simply consume services from an abstracted and automated Kubernetes platform running on standards-based infrastructure software and commoditized hardware (on-premises or in the cloud).

To further this concept, you can expect to see Red Hat introduce technologies that straddle the infrastructure and services demarcation, which started with the launch of Red Hat’s container-native storage offering in 2017 and continues today with container-native virtualization. We’ve seen great momentum with container-native storage, and now, with container-native virtualization, we’re working to bring virtual machines to the same level as container-native development, breaking down developer silos and enabling enterprise IT teams to have a single workflow for building their mission-critical apps, no matter if they’re based in containers or VMs. Because it’s the applications and the services that matter to your organization, not how they’re delivered.

You need your IT delivered in your own specific way, in a way that makes sense for your business. Focus on what makes you different, and we’ll focus on how to get you there with streamlined, open, and hybrid technologies.
