Saturday, 29 December 2018

Serverless and Knative Underline Cloud Native Evolution

Serverless computing provided an interesting subplot at the recent KubeCon + CloudNativeCon North America 2018 event in Seattle, where a number of keynotes and panels were dedicated to how serverless will shape the evolution of cloud native.

Most of the attention, not surprisingly, centered on the Knative platform that relies on Kubernetes as an orchestration layer for serverless workloads. The platform was developed by Google, Pivotal, IBM, SAP, and Red Hat, and launched at the Google Next event in July.

Knative is an open source set of components for building and deploying container-based serverless applications that can be moved between cloud providers. It focuses on orchestrating source-to-container builds; routing and managing traffic during deployment; auto-scaling workloads; and binding services to event ecosystems.
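Concretely, a Knative Service is just a Kubernetes custom resource, so deploying one is an ordinary API call. Here is a minimal sketch, assuming a cluster that already has Knative Serving installed and the official Kubernetes Python client (pip install kubernetes); the names, namespace, and image are illustrative, and the spec shape reflects Knative 0.2's v1alpha1 API:

```python
# Sketch: deploying a Knative Service as a Kubernetes custom resource.
# Assumes Knative Serving is installed in the cluster and a kubeconfig is set up.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

# Spec shape as of Knative 0.2 (v1alpha1); image and names are placeholders.
service = {
    "apiVersion": "serving.knative.dev/v1alpha1",
    "kind": "Service",
    "metadata": {"name": "hello", "namespace": "default"},
    "spec": {
        "runLatest": {
            "configuration": {
                "revisionTemplate": {
                    "spec": {
                        "container": {"image": "gcr.io/knative-samples/helloworld-go"}
                    }
                }
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1alpha1",
    namespace="default",
    plural="services",
    body=service,
)
```

Once applied, Knative handles the build, routing, and scaling behavior for the service; the platform pieces are the same whichever cloud the cluster runs on.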

It’s essentially a way to use Kubernetes to liberate the management of serverless platforms from specific cloud providers. Many current serverless platforms, including AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions, are built on and tied to a specific cloud, which can lead to vendor lock-in for an organization adopting one of them. Knative can break this lock-in by providing a platform that works the same way regardless of the underlying cloud.

“This portability is really important and what is behind the industry aligning behind Knative,” explained Aparna Sinha, group product manager for Kubernetes at Google, during her keynote address at the KubeCon event.

Jason McGee, vice president and CTO for IBM’s Cloud Platform, told attendees that Knative was an important project in unifying the dozens of serverless platforms that have flooded the market.

“That fragmentation, I think, holds us all back from being able to really leverage functions as part of the design of our applications,” McGee said during his keynote. “I think Knative is an important catalyst for helping us come together to bring functions and applications into our common cloud native stack in a way that will allow us to move forward and collaborate together on this common platform.”

He added that Knative also teaches Kubernetes how to deal with building and serving applications and functions, which makes it an important piece in the cloud native landscape.

Maturation Needed
Despite the growing hype, most speakers also took time to note that serverless platforms, and Knative in particular, remain relatively immature. Modern serverless platforms are less than five years old, and Knative only recently released its 0.2 version.

Dan Berg, a distinguished engineer at IBM’s Cloud Kubernetes Service, told SDxCentral in an interview that while interest around Knative has surpassed expectations, maturity of the platform remains a significant challenge to broader adoption.

“I think maturity is where Knative needs to really evolve over the next year,” Berg said. “The interest is there, but it’s just still too early.”

That maturation is expected, and some already predict that Knative is in line to become the serverless platform of choice to run on Kubernetes.

“Knative will almost certainly become the standard plumbing for functions-as-a-service on Kubernetes,” wrote James Governor, analyst and co-founder at RedMonk, in a blog post shortly after the platform was announced.

https://www.sdxcentral.com

Monday, 24 December 2018

SIG-Auth Bolstering Security Authorization in Kubernetes

Today’s topics include Kubernetes authentication and authorization moving forward with SIG-Auth, and Elastifile providing scalable file storage for Google Cloud.

One of the primary Special Interest Groups within Kubernetes is SIG-Auth, whose members are responsible for the project's authentication and authorization machinery. At KubeCon + CloudNativeCon NA 2018 in Seattle last week, SIG-Auth leaders outlined how the group works and its current and future priorities for the Kubernetes project.

"SIG-Auth is responsible for designing and maintaining parts of Kubernetes, mostly inside the control plane, that have to deal with authorization and security policy," said Google Software Engineer Mike Danese.

He said SIG-Auth has multiple subprojects detailed in the group's GitHub repository. Those subprojects include audit, encryption at rest, authenticators, node identity/isolation, policy, certificates and service accounts.

Over the course of 2018, SIG-Auth added a number of security authorization features to Kubernetes, including better node isolation, protection of specific labels and self-deletion, and better audit capabilities.
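For a flavor of the authorization machinery the group maintains, the Kubernetes API exposes review objects that let a caller ask the control plane for an access decision. A minimal sketch with the official Python client (pip install kubernetes), assuming a configured kubeconfig:

```python
# Sketch: asking Kubernetes' authorization layer (SIG-Auth territory)
# whether the current identity may list pods in the default namespace.
from kubernetes import client, config

config.load_kube_config()

review = client.V1SelfSubjectAccessReview(
    spec=client.V1SelfSubjectAccessReviewSpec(
        resource_attributes=client.V1ResourceAttributes(
            verb="list", resource="pods", namespace="default"
        )
    )
)
resp = client.AuthorizationV1Api().create_self_subject_access_review(review)
print("allowed:", resp.status.allowed)
```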

Elastifile, a provider of enterprise-grade, scalable file storage for the public cloud, announced on Dec. 11 a fully managed, scalable file storage service for Google Cloud Platform. Thanks to its tight integration with Google Cloud infrastructure, the Elastifile Cloud File Service makes it easy to deploy, manage and scale enterprise file storage in the public cloud.

According to CEO Erwan Menard, the software runs on any server and can use any type of flash media, including 3D NAND and TLC. He also said Elastifile brings flash performance to all enterprise applications while reducing the capex and opex of virtualized data centers, and that it simplifies the adoption of hybrid cloud by extending file systems across on-premises and cloud deployments.

http://www.eweek.com

Data Management: What Changes Are Coming in 2019

Basically everything in IT these days revolves around managing waves of data coming in from multiple outside sources: mobile devices, ecommerce sites, data streams for analytics, enterprise ecosystems, sales/marketing partnerships and so on. Thus, the function of data management is constantly evolving to handle the influx of files, logs, images and everything else.

What’s relevant today may not be relevant tomorrow, much less next year. That forces companies to constantly evaluate data, innovate, evolve and maintain agility, all while walking the line between data management and analysis to ensure the right data is on hand when it’s needed most.

The sheer volume of data continues to increase at a staggering rate, and while some of it is beneficial, much of it is irrelevant. Moreover, a disturbing portion is dark and potentially dangerous. This has led to data protection giving way to data management, where data is the fuel for company success, driving insights, customer targeting and business planning, and, increasingly, the training of artificial intelligence (AI) and machine learning models.

Any way to extract additional value from that data is critical to business success, and the shift to management ensures data is properly archived, easily searchable, available for analytics, and compliant the entire time.

This eWEEK Data Point article features an industry perspective for 2019 from Prem Ananthakrishnan, vice president of products at Druva. Here’s a look at his expectations for the new year.

Data Point No. 1: We’ll see the rise of smart clouds. Streaming data capture from the internet of things (IoT) and sensors, data governance policies, security standards, expanded data curation and compilation, and the widespread adoption of AI and machine learning have made it impossible to rely entirely on on-premises solutions. Technologies such as AI, machine learning and analytics thrive in environments with vast amounts of data and compute capacity beyond what on-premises solutions offer. These trends greatly favor cloud-based architectures, and will only accelerate as vendors offer more advanced solutions.

Data Point No. 2: The cloud wars will escalate in 2019. Serverless architecture will drive down costs even further, and I would expect hybrid and multi-cloud to become more popular with pushes from VMware and Amazon Web Services (AWS). Online marketplaces will shift spending from offline distribution and vendors, and resellers will increasingly adopt digital VAR-like models. Machine learning and AI will continue to rise in adoption, become embedded within cloud-based solutions and increase the allure of cloud computing. Because of these technologies, public cloud will become the de facto choice for developers.

Data Point No. 3: Unrecovered data loss will be on the rise. Ninety percent of respondents to Druva’s 2018 State of Virtualization in the Cloud survey said they will be using public cloud in 2019; however, many companies are still backing up their IaaS/PaaS/SaaS environments with manual processes. Even more concerning, notes W. Curtis Preston, chief technologist at Druva, is that some are not backing up those environments at all, on the assumption that the protections offered within the service itself are “good enough.” These protections, in Office 365 for example, do not mitigate risks associated with hackers, ransomware, malicious users, or, typically, anything deleted more than 60 days ago.

Data Point No. 4: 2019 is the year of government data compliance. Data management is no longer simply a consumer-versus-corporation battle; it has quickly escalated to the national level. In the wake of GDPR, other governments are using it as a blueprint for even more stringent compliance standards. The California Consumer Privacy Act goes into effect in January 2020, and we should expect more of the same from other jurisdictions in the coming years. Such regulations mean company obligations will become more complicated as firms work to meet new standards. The flexibility and scalability to store data within specific regions will become a key buying consideration, increasingly favoring cloud deployments over on-premises solutions.

Data Point No. 5: Blockchain will become a commodity. Vendors are fighting for a share of the rapidly growing market for blockchain applications, but the reality is that it’s a race to the bottom. As standardization continues, there will be little differentiation, and blockchain will slip into the background of applications. Industries such as data management will begin adopting the technology, since it offers a way to validate and trust data as records are pulled into other resources.

Data Point No. 6: The autonomous car will create data center chaos. There is massive investment right now in autonomous and connected cars, and soon that investment will need to cascade to the data center. The success of autonomous cars relies on telemetry data from vehicles to inform driving decisions, but how do you properly archive that data for compliance? With so many data points being created every minute, how do you isolate the necessary data, such as data from accidents or incidents, and retain it for the multiple years required? Proper data management architectures will be key to ensuring success.

http://www.eweek.com

Saturday, 22 December 2018

Which cloud performs better, AWS, Azure or Google?

Most IT professionals select cloud providers based on price or proximity to users, but network performance should also be considered. As a new report from ThousandEyes shows, the underlying network architecture of the big cloud providers can have a significant impact on performance, and that performance varies widely among cloud service providers.

In its first annual public cloud benchmark report, ThousandEyes compared the global network performance of the “big three” public cloud providers: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. The network management company looked at network performance (latency, packet loss, jitter) and connectivity architecture. It measured user-to-cloud connectivity from 27 cities around the globe to 55 AWS, Azure, and GCP regions, measured inter-AZ and inter-region connectivity within each provider's network, and measured inter-region connectivity between all 55 regions on a multi-cloud basis.
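As a rough illustration of what that kind of active measurement involves, the sketch below times repeated TCP connections to an endpoint and derives average latency, jitter, and loss. The hostname is a placeholder, and real monitoring agents are far more sophisticated (path tracing, BGP data, and so on):

```python
# Sketch: crude latency/jitter/loss probe via timed TCP connections.
# HOST is an illustrative placeholder, not a real measurement target.
import socket
import statistics
import time

HOST, PORT, PROBES = "region-endpoint.example.com", 443, 20

samples, lost = [], 0
for _ in range(PROBES):
    start = time.monotonic()
    try:
        sock = socket.create_connection((HOST, PORT), timeout=2)
        sock.close()
        samples.append((time.monotonic() - start) * 1000)  # milliseconds
    except OSError:
        lost += 1
    time.sleep(0.5)

if samples:
    print(f"avg latency: {statistics.mean(samples):.1f} ms")
if len(samples) > 1:
    print(f"jitter (stdev): {statistics.stdev(samples):.1f} ms")
print(f"packet loss: {lost / PROBES:.0%}")
```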

Using AWS means more internet 
Perhaps the most intriguing finding in the ThousandEyes report was that AWS's network design forces user traffic onto the public internet for most of its journey between the user’s location and the AWS region. This is in stark contrast to Azure and GCP, which ingest user traffic close to the user and carry it over their private backbones for as long as possible. 

There are technical differences in network design that cause this, but the net result is that AWS routes user traffic away from its backbone until the traffic gets geographically close to the destination region. 

In bandwidth-flush regions such as the U.S. and Europe, internet performance and private network performance don’t vary that much, so users are not likely to notice a difference. In locales such as Asia where fiber routes are sparser, however, internet performance can swing widely, creating unpredictable performance. The tests showed that in Asia, the standard deviation of AWS network performance is 30 percent higher than that of GCP and Azure. 

Regional performance varies by cloud provider 
Another major finding was that there are regional anomalies that vary by provider. For example, GCP has a robust global network but no fiber route between Europe and India, which means traffic going from London to Mumbai takes three times as long to get there as traffic on Azure or AWS.

All three cloud service providers continue to invest in their networks to fill gaps like this, but there will always be variances in the different networks — and it’s good to have the data to uncover what those are.

Other regional differences include:

  • Within Asia, AWS network performance was 56 percent less stable than GCP’s and 35 percent less stable than Azure’s.
  • When connecting Europe to Singapore, Azure was 1.5 times faster than AWS and GCP.

The time for multi-cloud is now
One question that’s always on IT leaders’ minds is how well AWS, GCP, and Azure play together. They compete, but they also cooperate to ensure customers can employ a multi-cloud strategy. The test results showed extensive connectivity between the backbone networks of all three major cloud providers. Customers that embrace multi-cloud can be assured that traffic going between GCP, Azure, and AWS rarely traverses the public internet. Inter-region performance across the three is stable and reliable, so businesses should feel free to go big on multi-cloud.

The study highlighted that the network matters to cloud performance, and the only way to truly know what’s going on is to collect data and measure it. The internet is always changing, and this ThousandEyes study is a good snapshot of what things look like right now. Businesses should continue to collect network intelligence and measure their own performance to ensure they are getting what they expect from their cloud providers.

https://www.networkworld.com

Thursday, 20 December 2018

IBM Embraces Knative to Drive Serverless Standardization

Serverless computing is one of the hottest trends in IT today, and it's one that IBM is embracing too.

Serverless computing, also often referred to as functions-as-a-service, enables organizations to execute functions without the need to first provision a long-running persistent server. At KubeCon + CloudNativeCon NA 2018 here this week, multiple vendors including Red Hat, Google, SAP and IBM announced that they are coming together to support the open-source Knative project.

In a video interview with eWEEK, Jason McGee, vice president and chief technology officer of IBM Cloud Platform, explains why Knative matters and how it fits into IBM's plans.

"I'm an old Java apps server guy, and I see a lot of parallels with what we're trying to do with cloud-native, where we're essentially building the app platform for the cloud era," McGee said. "I think we've made a lot of progress in the last two years with containers and Kubernetes, but what has been missing is the app stack and bringing functions and serverless into the community in a way that we all agree on."

There have been multiple efforts in recent years to enable serverless models, often using containers as the core element. Knative functionally runs on top of the Kubernetes container orchestration system, allowing operators to make use of existing Kubernetes skills and infrastructure.

"Projects like Knative are really important because it allows us to really complete the picture of a full application platform that everyone can build on for the next 20 years," he said.

OpenWhisk
Knative is not the first open-source functions-as-a-service effort that IBM has backed. Back in 2016, IBM announced the OpenWhisk effort, which is now run as an open-source project at the Apache Software Foundation.

"The role that Knative is playing is aligning the community around Kubernetes as the foundation," McGee said. "We can run OpenWhisk on Kubernetes, but Kubernetes itself needs to be extended to understand some of the concepts that exist in the functions landscape."

McGee said that Knative provides a model to extend Kubernetes with the things it needs to be able to support functions in a first-class way. He added that OpenWhisk can still participate and will adapt to benefit from Knative components.

Serverless Standards
While the Knative project can provide a common foundation for serverless, more is needed to fully enable an open serverless ecosystem.

"What developers want is a way to build functions-based systems that have the flexibility to move around. That's what they like about Kubernetes too—you can run Kubernetes anywhere," he said. "Knative is an important step in helping to get us there."

http://www.eweek.com

Wednesday, 19 December 2018

Salesforce IoT Insights Gives Field Service Agents a Head Start

Salesforce said its latest offering will help field service agents better serve customers by leveraging internet of things data to learn when products in use need servicing or replacement.

The new Salesforce IoT Insights, released on Dec. 5, also overlays IoT data with customer relationship management (CRM) data, so both the customer service agent at a company’s main office and the mobile worker in the field see a complete record of the customer’s service history and can deliver more personalized service. With this more comprehensive view, the agent will not have to ask the customer things like “Has this problem happened before?” because they have easy access to the product’s service record.

The news of Salesforce’s new offering comes at a time when the number of IoT devices and smart sensors is exploding. Gartner estimates there will be more than 20 billion connected “things” by 2020. But companies are still working out how best to use all the data these devices generate, and there is also a shortage of IoT expertise and skills. A May 2018 study by Dun & Bradstreet titled “Are Data Silos Killing Your Business” found that 80 percent of businesses report data silos within their organizations, which can keep important information, like device breakdowns or outages, from reaching the people on the front lines who could actually solve the problems.

“You collect all the data, but there is a gap where it’s not getting in the hands of people who can do something with it,” Paolo Bergamo, senior vice president and general manager of Field Service Lightning at Salesforce, told eWEEK. “Where is the context with CRM to check the SLA [service-level agreement] with customer support?”

Salesforce IoT Insights is designed to bridge that gap by bringing data from connected devices into the CRM system. Rather than waiting for something to break, as in the traditional scenario, companies can use the system’s orchestration capabilities combined with IoT signals to automatically trigger the creation of cases and work orders. Rules can be established so that, for example, if a part malfunctions, a case is automatically created and a field service agent is notified to address the issue.
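The underlying pattern is simple rule evaluation over device telemetry. The sketch below is purely illustrative, not Salesforce's actual API; the crm client and its methods are hypothetical stand-ins for the CRM calls such a rule would trigger:

```python
# Illustrative only: an IoT signal crossing a threshold automatically
# opens a case and a work order in a CRM. The `crm` client and its
# methods are hypothetical, not Salesforce's API.
from dataclasses import dataclass

@dataclass
class IoTSignal:
    device_id: str
    metric: str
    value: float

LOW_BATTERY_VOLTS = 3.0  # illustrative threshold

def handle_signal(signal: IoTSignal, crm) -> None:
    # Rule: battery voltage below threshold means the part needs service.
    if signal.metric == "battery_voltage" and signal.value < LOW_BATTERY_VOLTS:
        case = crm.create_case(
            device_id=signal.device_id,
            subject="Battery degradation detected",
        )
        crm.create_work_order(case_id=case.id, priority="high")
        crm.notify_field_agent(case_id=case.id)
```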

How Long Can a Rope Last?

One early customer, Samson Rope, is a 140-year-old company that supplies rope to a variety of industries, including fishing, mining and forestry.

“At Samson Rope we have over 8,000 lines of rope in use that each last around 8-10 years, and we service them throughout the life of the product,” said Dean Haverstraw, director of IT at Samson Rope, in a statement. “These post-purchase services are a large part of our business, and we chose Field Service Lightning to manage all of those lines, and provide customers with tools to monitor rope health, manage compliance requirements and more. We’re now piloting high-tech rope threading that—when connected to Field Service Lightning via Salesforce IoT—will help customers monitor rope conditions and know when it needs to be replaced.”

Jacuzzi, another Salesforce Field Service customer, uses the technology to let it know when its hot tubs and related products are likely to have a component failure. “When you collect all this information, you get companies like Jacuzzi creating new business models,” said Bergamo. “Now that they know when a filter needs to be replaced or some other part, they engage with the customer and that leads to greater customer satisfaction.”

In a demonstration for eWEEK, Salesforce’s vice president of IoT, Taksina Eammano, showed how Field Service Lightning makes raw IoT data from a piece of equipment more accessible. “We take that raw data and categorize it into business logic so the service agent can see it’s a battery issue in their dashboard and see the context of the issue,” said Eammano.

While it’s possible to create similar automated functions using traditional developer tools, Bergamo said doing so could take much longer than using Lightning’s drag-and-drop interface, which requires a minimum of coding. Projects can be completed in days or less using Lightning, versus weeks or months using other tools. “This really empowers business users,” said Bergamo.

http://www.eweek.com

Monday, 17 December 2018

How to get started with Kubernetes

With every innovation comes new complications. Containers made it possible to package and run applications in a convenient, portable form factor, but managing containers at scale is challenging to say the least.

Kubernetes, the product of work done internally at Google to solve that problem, provides a single framework for managing how containers are run across a whole cluster. The services it provides are generally lumped together under the catch-all term “orchestration,” but that covers a lot of territory: scheduling containers, service discovery between containers, load balancing across systems, rolling updates/rollbacks, high availability, and more.

In this guide we’ll walk through the basics of setting up Kubernetes and populating it with container-based applications. This isn’t intended to be an introduction to Kubernetes’s concepts, but rather a way to show how those concepts come together in simple examples of running Kubernetes.

Use a Kubernetes distribution
Kubernetes was born to manage Linux containers. However, as of version 1.5, Kubernetes also supports Windows Server containers, though the Kubernetes control plane must continue to run on Linux. And with the aid of virtualization, you can get started with Kubernetes on practically any platform.

If you’re opting to run Kubernetes on your own hardware or VMs, one common approach is to obtain a packaged Kubernetes distribution, which typically combines the upstream Kubernetes bits with the other pieces (container registry, networking, storage, security, logging, monitoring, continuous integration pipeline, and so on) needed for a complete deployment. Kubernetes distributions can also generally be installed and run on any virtual machine infrastructure: Amazon EC2, Azure Virtual Machines, Google Compute Engine, OpenStack, and so on. 

Canonical Kubernetes, Cloud Foundry Container Runtime, Mesosphere Kubernetes Service, Oracle Linux Container Services, Pivotal Container Service, Rancher, Red Hat OpenShift, and SUSE CaaS Platform are just a few of the dozens of Kubernetes distributions available. Note that the Canonical, Red Hat, and SUSE offerings bundle Kubernetes with a Linux distribution, which does away with the need to set up Kubernetes on a given operating system: not only the download-and-install process, but even some of the configure-and-manage process.

Another approach is to run Kubernetes atop a conventional Linux distribution, although that typically comes with more management overhead and manual fiddling. Red Hat Enterprise Linux has Kubernetes in its package repository, for instance, but even Red Hat recommends it only for testing and experimentation. Rather than cobbling something together by hand, Red Hat stack users are advised to consume Kubernetes by way of the OpenShift PaaS, which now uses Kubernetes as its native orchestrator.

Many conventional Linux distributions provide special tooling for setting up Kubernetes and other large software stacks. Ubuntu, for instance, provides a tool called conjure-up that can be used to deploy the upstream version of Kubernetes on both cloud and bare-metal instances. Canonical also provides MicroK8s, a version of Kubernetes that installs via the Snap package system.

Use a Kubernetes service in the cloud
Kubernetes is available as a standard-issue item in many clouds, though it appears most prominently as a native feature in Google Cloud Platform (GCP). GCP offers two main ways to run Kubernetes. The most convenient and tightly integrated is Google Kubernetes Engine, a managed service that lets you manage the created cluster with Kubernetes’s own command-line tools.

Alternatively, you could use Google Compute Engine to set up a compute cluster and deploy Kubernetes manually. This method requires more heavy lifting, but allows for customizations that aren’t possible with Kubernetes Engine. Stick with Kubernetes Engine if you’re just starting out with containers. Later, after you get your sea legs and want to try something more advanced, such as a custom version of Kubernetes or your own modifications, you can deploy VMs running a Kubernetes distribution.

With Amazon, one originally had to run Kubernetes by deploying a compute cluster in EC2. That is still an option, but Amazon now offers the Elastic Container Service for Kubernetes (EKS). With EKS, Amazon runs the control plane and you focus on deploying containers with the configuration you want. EKS also runs a standard upstream edition of Kubernetes. One smart feature is the integration of Kubernetes with the rest of the AWS portfolio: AWS services appear in EKS as Kubernetes-native Custom Resource Definitions, so changes to either AWS or Kubernetes won’t break those connections.
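One way to see such integrations from inside a cluster is to list the registered Custom Resource Definitions. A minimal sketch with the Python Kubernetes client; note that the API class is versioned (ApiextensionsV1beta1Api in clients of this era, ApiextensionsV1Api in newer ones):

```python
# Sketch: listing the CustomResourceDefinitions registered in a cluster,
# where integrations such as EKS's AWS bindings would surface.
from kubernetes import client, config

config.load_kube_config()
crds = client.ApiextensionsV1beta1Api().list_custom_resource_definition()
for crd in crds.items:
    print(crd.metadata.name)
```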

Many Kubernetes distributions come with detailed instructions for getting set up on AWS and elsewhere. Red Hat OpenShift, for instance, can be installed on one or more hosts via an interactive installer or a script, or by using the Terraform “infrastructure-as-code” provisioning tool. Alternatively, Kubernetes’s Kops tool can be used to provision a cluster of generic VMs on AWS, with support for Google Cloud Engine, VMware vSphere, and other clouds in the works.

Microsoft Azure has support for Kubernetes by way of the Azure Kubernetes Service. Here Azure manages the Kubernetes master nodes, while you create the clusters via Resource Manager templates or Terraform. If you want control of both the master and the agent nodes, you can always install a Kubernetes distribution on an Azure Virtual Machine. That said, one key advantage of AKS is that you don’t pay for the use of the master node, just the agents.

One quick way to provision a basic Kubernetes cluster in a variety of environments, cloud or otherwise, is to use a project called Kubernetes Anywhere. This script works on Google Compute Engine, Microsoft Azure, VMware vSphere (vCenter is required), and OpenStack. In each case, Kubernetes Anywhere provides some degree of automation for the setup.

Use Minikube to run Kubernetes locally
If you’re only running Kubernetes in a local environment like a development machine, and you don’t need the entire Kubernetes enchilada, there are a few ways to set up “just enough” Kubernetes for such use.

One that is provided by the Kubernetes development team itself is Minikube. Run it and you’ll get a single-node Kubernetes cluster deployed in a virtualization host of your choice. Minikube has a few prerequisites, but they are all easy enough to meet on MacOS, Linux, or Windows.
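Once minikube start finishes, a few lines of client code are enough to confirm the cluster is reachable; a minimal sketch assuming the official Python client (pip install kubernetes):

```python
# Sketch: smoke-test a fresh Minikube cluster by listing its node(s).
# Minikube writes its context into ~/.kube/config on startup.
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```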

Run a Kubernetes demo app
Once you have Kubernetes running, you’re ready to begin deploying and managing containers. You can ease into container ops by drawing on one of the many container-based app demos available.

Take an existing container-based app demo, assemble it yourself to see how it is composed, deploy it, and then modify it incrementally until it approaches something useful to you. If you have chosen Minikube to find your footing, you can use the Hello Minikube tutorial to create a Docker container holding a simple Node.js app in a single-node Kubernetes demo installation. Once you get the idea, you can swap in your own containers and practice deploying those as well.
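If you would rather drive the API programmatically than through kubectl, the sketch below creates a single-replica deployment much as the Hello Minikube tutorial does. The echoserver image is the one that tutorial used around this time; treat the names and tag as illustrative:

```python
# Sketch: a minimal single-replica Deployment, roughly equivalent to the
# Hello Minikube tutorial's first step. Names and image are illustrative.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-minikube"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="hello",
                        image="k8s.gcr.io/echoserver:1.10",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```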

The next step up is to deploy an example application that resembles one you might run in production, and to become familiar with more advanced Kubernetes concepts such as pods (one or more containers that comprise an application), services (logical sets of pods), replica sets (which provide self-healing on machine failure), and deployments (application versioning).

Lift the hood of the WordPress/MySQL sample application, for instance, and you’ll see more than just instructions on how to deploy the pieces into Kubernetes and get them running. You will also see implementation details for many concepts used by production-level Kubernetes applications. You’ll learn how to set up persistent volumes to preserve the state of an application, how to expose pods to each other and to the outside world by way of services, how to store application passwords and API keys as secrets, and so on.
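The secrets piece, for example, boils down to one small API object. A minimal sketch with the Python client; the name matches the mysql-pass secret the WordPress example expects, and the password value is obviously illustrative:

```python
# Sketch: storing a database password as a Kubernetes Secret, as the
# WordPress/MySQL example does. Secret data must be base64-encoded.
import base64

from kubernetes import client, config

config.load_kube_config()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="mysql-pass"),
    data={"password": base64.b64encode(b"changeme").decode()},  # placeholder
)
client.CoreV1Api().create_namespaced_secret(namespace="default", body=secret)
```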

Weaveworks has an example app, the Sock Shop, that shows how a microservices pattern can be used to compose an application in Kubernetes. The Sock Shop will be most useful to people familiar with the underlying technologies—Node.js, Go kit, and Spring Boot—but the core principles are meant to transcend particular frameworks and illustrate cloud-native technologies.

If you glanced at the WordPress/MySQL application and imagined there might be a pre-baked Kubernetes app that meets your needs, you’re probably right. Kubernetes has an application definition system called Helm, which provides a way to package, version, and share Kubernetes applications. A number of popular apps (GitLab, WordPress) and app building blocks (MySQL, Nginx) have Helm “charts” readily available by way of the Kubeapps portal.
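Installing one of those charts is a single Helm command, which is easy to script if you want repeatability; a minimal sketch using Helm 2-era syntax (current when this was written), with illustrative release and chart names:

```python
# Sketch: installing a prepackaged Helm chart from Python. Equivalent to
# running `helm install --name my-blog stable/wordpress` in a shell.
import subprocess

subprocess.run(
    ["helm", "install", "--name", "my-blog", "stable/wordpress"],
    check=True,
)
```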

Manage containers with Kubernetes
Kubernetes simplifies container management through powerful abstractions like pods and services, while providing a great deal of flexibility through mechanisms like labels and namespaces, which can be used to segregate pods, services, and deployments (such as development, staging, and production workloads).

If you take one of the above examples and set up different instances in multiple namespaces, you can practice making changes to the components in each namespace independently of the others, then use deployments to roll those updates out across the pods in a given namespace incrementally.
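A minimal sketch of the namespace-per-environment pattern with the Python client; the namespace and label names are illustrative:

```python
# Sketch: create a staging namespace, then list only the pods in it that
# carry a given app label. Names are illustrative.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

v1.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="staging"))
)

pods = v1.list_namespaced_pod("staging", label_selector="app=hello")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
```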

The next step is learning how Kubernetes can be driven by tools for managing infrastructure. Puppet, for instance, has a module for creating and manipulating resources in Kubernetes. Similarly, HashiCorp’s Terraform has growing support for Kubernetes as a resource. If you plan on using such a resource manager, note that different tools may bring vastly different assumptions to the table. Puppet and Terraform, for instance, default to using mutable and immutable infrastructures respectively. Those philosophical and behavioral differences can determine how easy or difficult it will be to create the Kubernetes setup you need.

https://www.infoworld.com