For a while there several years ago, industry prognosticators wrote that once enterprises decided on their cloud IT strategies, they would build private clouds first and add public cloud services later as needed.
Well, that didn’t happen. Turns out they jumped right into hybrid as fast as they could get their boards of directors to allocate the funding.
This trend is only picking up speed. Gartner Research has predicted that 90 percent of organizations will have adopted hybrid infrastructure management capabilities by 2020.
However, any disruptive trend comes with other considerations. The massive shift to hybrid opens more doors to security threats.
Minority of Security Pros Using Unified Security Tools
A recent study of 250 hybrid cloud security leaders found that only 30 percent of those professionals are using unified security tools that span on-premises and the cloud. With early movers already deep into AWS and Azure, the two leading clouds among enterprise users, many other large enterprises are hot on their heels.
How can managers sufficiently prepare and monitor their environments to ensure that the shift to a hybrid cloud is as clean and efficient as possible, so that organizations can take advantage of both their on-premises assets and near-unlimited cloud scalability?
Turns out there are answers for that. This eWEEK Data Point article, using industry information compiled by David Ginsburg, a vice president at cyber-intelligence software provider Cavirin, offers readers his 10 key criteria for building a secure hybrid environment.
Data Point No. 1: Flexibility
Ease of implementation and the ability to span multiple workload environments (such as IaaS, PaaS, on-premises, virtual machines, containers and, in the future, function-as-a-service, or FaaS) while delivering a single view are integral for mid-size and enterprise organizations. Ideally, if initially deployed on-premises, the same tools and applications will extend into the cloud. This implies that the platform architecture has been conceived from the start for hybrid environments. Flexibility also includes ease of installation from a cloud service provider’s marketplace.
Data Point No. 2: Extensibility
DevOps-friendly open application programming interfaces (APIs) open the platform to external data sources and systems such as identity and access management, privileged access management (PAM), security information and event management, user and entity behavior analytics, logging, threat intelligence and the helpdesk. This out-of-the-box cloud and API interoperability is essential to accommodate business-critical applications. APIs also enable integration into an organization’s continuous integration and continuous deployment (CI/CD) process and its DevOps tools. This in turn relates to lifecycle container support that encompasses images, container runtimes and orchestration.
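To make the CI/CD angle concrete, here is a minimal sketch of a pipeline gate that pulls a posture score from a security platform's REST API and blocks a deploy when it falls below a threshold. The endpoint URL, token variable, query parameter and JSON shape are all illustrative assumptions, not any specific vendor's API.

```python
# Hypothetical sketch: gate a CI/CD stage on a posture score pulled from
# a security platform's REST API. The endpoint, token variable and JSON
# shape are illustrative assumptions, not a real vendor API.
import os
import sys

import requests

API_URL = "https://security-platform.example.com/api/v1/score"  # hypothetical
TOKEN = os.environ["SECURITY_API_TOKEN"]                        # injected by the CI system

resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"workload": "payments-web"},  # hypothetical workload name
    timeout=30,
)
resp.raise_for_status()
score = resp.json()["score"]  # assume 0-100, higher is better

# Fail the pipeline if the workload's posture score is below policy.
if score < 80:
    print(f"Posture score {score} below threshold 80; blocking deploy.")
    sys.exit(1)
print(f"Posture score {score} OK; proceeding.")
```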
Data Point No. 3: Responsiveness
As today’s security threats multiply quickly, it has become vital to minimize implementation time and time to baseline, and to identify any change in posture quickly. This requires a microservices-based architecture for elastic scaling and an agentless design that adapts well to containers and function-based workloads while eliminating the “agent bloat” that taxes CPUs, memory and I/O.
Data Point No. 4: Deep Discovery
It’s essential to automatically identify existing and new workloads, as well as changes to existing ones, across multiple cloud service providers, and then to group them properly by function. Discovery should be a simple process that leverages existing authentication (AuthN) and authorization (AuthZ) policies, avoiding the need to create a special identity and access management policy every time.
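As one illustration of tag-based grouping during discovery, the following sketch uses AWS's boto3 SDK to enumerate EC2 instances and bucket them by a "Function" tag. It assumes credentials come from an existing IAM role or profile (the point of reusing AuthN/AuthZ policies); the "Function" tag name is a convention invented for the example.

```python
# Minimal discovery sketch with boto3: list EC2 instances and group them
# by a (hypothetical) "Function" tag. Credentials are assumed to come
# from the environment's existing IAM role/profile.
from collections import defaultdict

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
groups = defaultdict(list)

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            groups[tags.get("Function", "untagged")].append(inst["InstanceId"])

for function, ids in sorted(groups.items()):
    print(f"{function}: {len(ids)} instance(s) -> {ids}")
```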
Data Point No. 5: Broad Policy Library
The platform must support a wide range of benchmarks, frameworks and guidelines, as well as the creation of custom policies based on workload type. These policies should automatically apply to existing and new workloads. Broad coverage also relates to operating systems, virtualization and cloud service providers. Capabilities may include OS hardening, vulnerability and patch management, configuration management, whitelisting and system monitoring.
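To show what a custom, workload-typed policy might look like in practice, here is a toy sketch of a policy expressed as data plus a checker, in the spirit of CIS-style OS-hardening benchmarks. The rule format, file paths and required settings are assumptions made for the example, not a real policy language.

```python
# Illustrative custom policy: a list of OS-hardening rules for a given
# workload type, checked against local config files. Rule format and
# paths are invented for the example.
POLICY = {
    "workload_type": "linux-web-server",
    "rules": [
        # (description, file to inspect, required "key value" line)
        ("SSH root login disabled", "/etc/ssh/sshd_config", "PermitRootLogin no"),
        ("SSH password auth off",   "/etc/ssh/sshd_config", "PasswordAuthentication no"),
    ],
}

def check_policy(policy: dict) -> list[tuple[str, bool]]:
    """Return (rule description, passed) for each rule in the policy."""
    results = []
    for desc, path, required in policy["rules"]:
        try:
            with open(path) as f:
                passed = any(line.strip() == required for line in f)
        except OSError:
            passed = False  # missing/unreadable file counts as a failed control
        results.append((desc, passed))
    return results

for desc, ok in check_policy(POLICY):
    print(f"[{'PASS' if ok else 'FAIL'}] {desc}")
```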
Data Point No. 6: Real-Time Risk Scoring Across Infrastructure
Assets, once discovered and with policies applied, must be scored. Scoring may be done individually; across different slices of the infrastructure, such as location, subnet or department; by workload type across environments (cloud and on-premises); or by application (PCI-scoped, web). Scoring must be prioritized, available historically, integrated with third-party tools for automation or into an existing UI and, most importantly, correlated. For example, say an organization operates a web server farm of 10 on-premises Red Hat Enterprise Linux servers and begins to transition to the cloud. Midway through the migration, five web servers run on Azure and five remain on-premises. If the organization tracks Payment Card Industry (PCI) compliance, the tool must generate a normalized view across both environments, as in the sketch below.
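Here is a toy version of that normalized view: ten web servers mid-migration, five on-premises and five on Azure, each with a made-up per-host compliance score on a 0-100 scale. The point is the single correlated number across both slices, not the invented data.

```python
# Toy normalized scoring across a mid-migration fleet. All scores are
# invented illustrations on a 0-100 scale (higher is better).
fleet = {
    "on-premises": {"rhel-web-01": 92, "rhel-web-02": 88, "rhel-web-03": 95,
                    "rhel-web-04": 90, "rhel-web-05": 85},
    "azure":       {"az-web-01": 78, "az-web-02": 82, "az-web-03": 91,
                    "az-web-04": 87, "az-web-05": 80},
}

def environment_score(hosts: dict) -> float:
    return sum(hosts.values()) / len(hosts)

# Per-environment view...
for env, hosts in fleet.items():
    print(f"{env}: {environment_score(hosts):.1f}")

# ...and the single normalized view across both environments, weighted by
# host count so neither slice dominates as the migration proceeds.
all_hosts = {h: s for hosts in fleet.values() for h, s in hosts.items()}
print(f"normalized compliance view: {environment_score(all_hosts):.1f}")
```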
Data Point No. 7: Container (Docker) Support
Docker technology has attracted the attention of many enterprise adopters. If you are implementing containers, either on-premises or as part of a cloud deployment, you need to ensure that those workloads are secure. And if you bring in images from a registry, you need to ensure that they have not been corrupted or tampered with. Many of the capabilities described in Data Point No. 6 apply here as well, such as hardening, scanning and whitelisting. One way to look at container support is across a lifecycle that includes image scanning, container runtime monitoring and security at the orchestration layer.
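As a sketch of the image-scanning stage of that lifecycle, the snippet below shells out to the open-source Trivy scanner (assumed to be installed locally) and rejects an image with HIGH or CRITICAL findings. The JSON field names reflect Trivy's current output format but may change between versions; treat this as illustrative rather than version-pinned.

```python
# Image-scanning sketch using the open-source Trivy scanner, assumed
# installed on the PATH. JSON field names match Trivy's current schema
# but are not version-pinned.
import json
import subprocess
import sys

IMAGE = "nginx:latest"

proc = subprocess.run(
    ["trivy", "image", "--format", "json", IMAGE],
    capture_output=True, text=True, check=True,
)
report = json.loads(proc.stdout)

high_or_critical = [
    v
    for result in report.get("Results", [])
    for v in result.get("Vulnerabilities") or []
    if v.get("Severity") in ("HIGH", "CRITICAL")
]

if high_or_critical:
    print(f"{IMAGE}: {len(high_or_critical)} HIGH/CRITICAL findings; rejecting image.")
    sys.exit(1)
print(f"{IMAGE}: no HIGH/CRITICAL findings.")
```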
Data Point No. 8: Cloud Security Posture
Securing the cloud itself is as important as protecting workloads. This includes the various services offered by the major cloud providers, such as storage, identity, load balancing, compute and media. The architecture must support monitoring and assessing these services in real time and, most importantly, examining how the security of these services relates to that of critical workloads. It must correlate the scoring and then provide the CISO and team with a unified score that reflects a true hybrid security posture across workloads and the cloud.
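For a concrete example of a posture check on a cloud service rather than a workload, the sketch below uses AWS's boto3 SDK to verify that every S3 bucket blocks public access. It assumes an existing credential chain with read access to S3 configuration; a fuller tool would fold such per-service results into the unified score described above.

```python
# Small cloud-service posture check with boto3: flag S3 buckets that do
# not fully block public access. Assumes existing AWS credentials with
# read access to S3 configuration.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        compliant = all(cfg.values())  # all four block-public settings enabled
    except ClientError:
        compliant = False  # no public-access-block configuration at all
    print(f"{name}: {'OK' if compliant else 'PUBLIC ACCESS NOT FULLY BLOCKED'}")
```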
Data Point No. 9: Cloud-agile Pricing
Reflecting cloud compute and storage pricing, it’s important to adopt a pricing model with the flexibility to meet changing requirements. This may mean a software-as-a-service (SaaS) offering, or connecting the platform’s back end to the cloud service provider’s billing engine with the ability to charge by the minute. Alternatively, pricing may be abstracted but still agile, closer to the concept of committed and burst workloads and analogous to a cellphone provider’s rollover-minutes model. In any case, this is a departure from traditional static pricing.
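A back-of-the-envelope model of the committed-plus-burst idea: a flat fee covers a committed number of metered units, and usage beyond that is charged per unit. All rates and units below are invented for illustration.

```python
# Toy committed-plus-burst billing model. All numbers are invented.
COMMITTED_HOURS = 10_000   # node-hours included in the monthly commitment
COMMITTED_FEE = 2_500.00   # flat monthly fee for the commitment
BURST_RATE = 0.35          # charge per node-hour beyond the commitment

def monthly_charge(node_hours_used: int) -> float:
    burst = max(0, node_hours_used - COMMITTED_HOURS)
    return COMMITTED_FEE + burst * BURST_RATE

for usage in (8_000, 10_000, 14_000):
    print(f"{usage:>6} node-hours -> ${monthly_charge(usage):,.2f}")
```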
Data Point No. 10: Intelligence
Predictive analytics permits the platform to “predict” the outcome of a change; “what-if” analysis for configurations and operating systems is crucial in today’s fast-changing environments. The platform can also bring in data from third parties via APIs to create a more correlated view of the change. Some customers describe this as a “virtual whiteboard.”
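The essence of a "what-if" analysis is rescoring a proposed configuration before applying it. The sketch below simulates a config change on a copy and compares predicted versus current scores; the check functions and settings are invented examples, not any product's actual scoring logic.

```python
# "What-if" sketch: score a proposed configuration on a modified copy
# before applying it. Checks and settings are invented examples.
def score(config: dict) -> int:
    """Score a config 0-100 from a few illustrative boolean checks."""
    checks = [
        config.get("PermitRootLogin") == "no",
        config.get("PasswordAuthentication") == "no",
        config.get("firewall_enabled") is True,
    ]
    return round(100 * sum(checks) / len(checks))

current = {"PermitRootLogin": "no", "PasswordAuthentication": "yes",
           "firewall_enabled": True}

proposed_change = {"PasswordAuthentication": "no"}   # the "what-if"
predicted = {**current, **proposed_change}           # simulate, don't apply

print(f"current score:   {score(current)}")
print(f"predicted score: {score(predicted)}")
```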
Source: http://www.eweek.com