When hybrid multicloud has technical advantages

Just deploy your new application, microservice, or machine learning model to the public cloud? Well, maybe not so fast

Ask someone who has spent a big part of their career in IT operations managing data centers, and you’ll hear plenty of reasons why and where private clouds have advantages over public clouds. They’ll cite reliability, scalability, and security, and argue that they can meet higher standards by controlling the selection, deployment, and management of the infrastructure.

Now ask a CIO, and they’ll have other reasons for considering private clouds and a hybrid multicloud architecture. They’ll provide rationales around the variability in public cloud costs, the need to support legacy systems for an extended period, and the realities of long-term data center contracts. Many enterprise CIOs have stronger practices in managing data centers than in managing public cloud architectures, and they want to avoid becoming locked into any single public cloud vendor.

Big companies can’t turn their ships fast enough, and the CIO must consider setting priorities — based on business impact and time to value — for application modernization. As Keith Townsend, co-founder of The CTO Advisor, put it on Twitter, “Will moving all of my Oracle apps to Amazon RDS net business value vs. using that talent to create new apps for different business initiatives? The problem is today, these are the same resources.”

Then ask software developers, and you’ll find that many prefer building applications that deploy to public clouds and leverage serverless architectures. They can automate application deployment with continuous integration and continuous delivery (CI/CD), configure the infrastructure with infrastructure as code (IaC), and leave the low-level infrastructure support to the public cloud vendor and other cloud-native managed service providers.
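
To make this concrete, here is a minimal sketch of the kind of deployment step a CI/CD pipeline might automate, written in Python with boto3 against AWS Lambda. The function name and artifact path are hypothetical placeholders, not references to any real project:

    # Push a new build of a serverless function -- the kind of step a
    # CI/CD pipeline runs after tests pass. The function name and
    # artifact path below are hypothetical.
    import boto3

    def deploy(zip_path: str, function_name: str = "orders-service") -> None:
        client = boto3.client("lambda")
        with open(zip_path, "rb") as artifact:
            client.update_function_code(
                FunctionName=function_name,
                ZipFile=artifact.read(),
            )
        # Block until Lambda finishes applying the update before the
        # pipeline moves on to smoke tests or traffic routing.
        client.get_waiter("function_updated").wait(FunctionName=function_name)

    if __name__ == "__main__":
        deploy("build/orders-service.zip")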

And will your organization be able to standardize on a single public cloud? Probably not. Acquisitions may bring in different public clouds than your standards, and many commercial applications run only on specific public clouds. Chances are, your organization is going to be multicloud even if it tries hard to avoid it. 

In the discussion below, we’ll examine a number of scenarios in which a hybrid multicloud architecture offers technical advantages over running solely in private clouds or solely in public clouds.

Defining a hybrid multicloud architecture and strategy

Summing this up, developers often prefer outsourcing the infrastructure and related managed services to public clouds, while IT operations teams lobby to build private clouds that leverage their expertise and existing data center infrastructure. Enterprise CIOs must balance these practical realities by supporting a hybrid multicloud architecture and the operating practices that go with it.

To get the terminology straight, multicloud means that your organization leverages multiple cloud platforms, say AWS and Azure. A hybrid multicloud means that your organization uses a mix of private clouds and public clouds, and must orchestrate connectivity and security between them. A private cloud might run in your organization’s data center, or it might be hosted by a service provider. 

But that leaves the question of where to deploy new applications and where to modernize legacy ones. Answering this question requires a collaborative effort among IT decision-makers, architects, developers, and engineers to consider best practices and architecture patterns. As cloud consultant Sarbjeet Johal told me, “The goal of a hybrid multicloud strategy is to gain agility while ensuring stability, or in other words, targeting the right workload to the right place.”

Sarbjeet’s theory of cloud consumption has three principles, albeit with many exceptions:

  • Never build systems of record yourself; procure SaaS instead
  • Procure extendable SaaS (with PaaS) for systems of engagement/differentiation
  • Use public clouds for systems of innovation

These guidelines provide some context around when to buy and configure SaaS solutions versus building applications in-house. Many legacy systems must remain in the data center until there are opportunities and business rationale to modernize them. Then there are cases where businesses run applications in private clouds because of cost, compliance, security, and other operational considerations. Lastly, Sarbjeet suggests using public clouds for applications that deliver innovation. Developers can start these as small POCs and experiments, use cloud services to develop features quickly, and scale the infrastructure based on usage.

Ed Featherston, a distinguished technologist at Cloud Technology Partners, a Hewlett Packard Enterprise company, has some sharp advice around defining a hybrid cloud strategy. He says, “Everything is a tradeoff. Your business needs to drive the priorities and tradeoffs that are acceptable to achieve the goal. Design and planning are still required. Lack of taking this into account ends up with failed implementations.”

But the developer in me wanted to go deeper. I can think of thousands of reasons why I would choose public clouds for new applications and microservices. So, when is it optimal to build and deploy new applications or services in private clouds? I wanted to find use cases where private clouds enable technical and competitive advantages beyond operational considerations.

Escaping data gravity through proximity

If you were ready to deploy a large-scale machine learning model running on TensorFlow, you might assume that the best option is to deploy it to a public cloud. Amazon SageMaker, Azure Machine Learning, and GCP’s TensorFlow Enterprise are all options data scientists can use to experiment, develop, test, and deploy production deep learning models. Is one of these public cloud options optimal?

What if I told you that the model required retraining every 30 days against a multi-petabyte data set sitting across several data warehouses and data lakes in the enterprise data center? Is it more efficient and cost-effective to move all this data to a public cloud so that the machine learning model can be trained there? Or is it better to train the machine learning model in a private cloud, close to where all the data resides?
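
A back-of-the-envelope calculation suggests an answer. Suppose, purely for illustration, that the data set is 3 PB and you have a dedicated 10 Gbps link to the cloud provider:

    # How long does it take to move a multi-petabyte data set to a
    # public cloud? The 3 PB size, 10 Gbps link, and 80 percent
    # sustained utilization are illustrative assumptions.
    dataset_bytes = 3 * 10**15           # 3 PB across warehouses and lakes
    link_bits_per_second = 10 * 10**9    # dedicated 10 Gbps connection
    utilization = 0.8                    # realistic sustained throughput

    seconds = dataset_bytes * 8 / (link_bits_per_second * utilization)
    print(f"{seconds / 86400:.0f} days per full transfer")  # about 35 days

Under those assumptions, a single full transfer takes about 35 days, longer than the 30-day retraining cycle itself, before any egress or storage costs are counted. Training the model in a private cloud next to the data sidesteps the problem entirely.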

What if I’m configuring a control system based on an event-driven architecture? Well, if this is for a large advertising agency that collects behavioral data from dozens of SaaS platforms, then I’d probably deploy the system to a public cloud. But what if it’s a manufacturer, the events come from thousands of IoT sensors, and the factory is in a remote area in South America? Should I deploy a private cloud at the edge to perform this data processing?
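
For the manufacturing scenario, the edge deployment might look something like the following sketch: a small Python service running on a private cloud at the plant consumes raw sensor events from a local MQTT broker and forwards only compact aggregates over the WAN. The broker hostname, topics, and batch size are all hypothetical:

    # Aggregate raw IoT sensor events at the edge so that only compact
    # summaries cross the unreliable, expensive WAN link. Broker host,
    # topics, and batch size are hypothetical.
    import json
    import statistics

    import paho.mqtt.client as mqtt

    readings: list[float] = []
    BATCH_SIZE = 1000

    def on_message(client, userdata, msg):
        readings.append(json.loads(msg.payload)["value"])
        if len(readings) >= BATCH_SIZE:
            summary = {"mean": statistics.fmean(readings),
                       "max": max(readings), "count": len(readings)}
            # One small message upstream instead of a thousand raw events.
            client.publish("factory/summaries", json.dumps(summary))
            readings.clear()

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("broker.plant.local", 1883)   # broker on the plant network
    client.subscribe("factory/sensors/#")
    client.loop_forever()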

These examples illustrate two important concepts when considering public versus private cloud deployments. The first is data gravity, the idea that large data sets exert a gravitational pull on the applications and services that consume them. Deploying those applications and services in close proximity to their largest data sets allows them to run faster, cheaper, and more reliably. The second is latency, which becomes a factor when operations are in remote locations and high-bandwidth, reliable connectivity is unavailable or prohibitively expensive. In these situations, deploying private clouds at the edge offers performance and cost advantages.

Architecting applications where human safety is at stake

Because most well-architected e-commerce applications can run reliably in public clouds or private clouds, the decision often comes down to cost, compliance, and other operational factors. The same is true for many applications supporting business workflows, analytics, transactions, and collaborations.

But introduce human safety as a design consideration, and you may feel differently. Hospitals require medical systems to run on-premises; no one wants a robotic-assisted surgery to stop mid-procedure because of a public cloud outage. Architects of smart buildings and smart city implementations must consider strategically distributing services between private and public clouds, and they will most certainly look to deploy life-critical services in hybrid models.

Architecting for the intersection of digital and physical-world experiences

Over the next decade, we will witness an increasing number of applications that connect the physical and digital worlds. Enterprise architects must consider hybrid architectures that optimize for a growing list of parameters at this intersection, including user experience, performance, reliability, scalability, and maintainability.

Todd Mazza, VP of enterprise architecture at Rockwell Automation, shared how he thinks about the tradeoffs. He replied to me with this tweet, “There are elements of my manufacturing floor that will likely not go to hybrid or public cloud in the next five years or so. But I may be able to graduate to hybrid more quickly if I can demonstrate that I can ship more product, more reliably, at a lower cost.”

The stakes increase as more organizations develop applications that leverage IoT, 5G, and AI at scale.

What this means is that for a growing number of applications, architecture, cloud, and infrastructure decisions are critical design considerations. While it might be a simple decision to implement a proof of concept or a lightweight mobile application on the public cloud, mission-critical, life-supporting, and data-intensive applications are more likely to require hybrid multicloud deployments.

Thanks to colleagues from the Hybrid Clouders on Twitter who responded to my questions, including @CTOAdvisor, @sarbjeetjohal, @efeatherston, @tmazza, @mdkail, @ballen_clt, @tcrawford, @mthiele10, @bhaines0, @AnuragTechaisle, @2Obeto, @jimmychow, @ibbitsc, @CraigMilroy, @hcoyote, @waynesadin, @TelcoDR, @joannefriedman, @ROIdude, @digitalcloudgal. My apologies to anyone I missed.

Copyright © 2020 IDG Communications, Inc.