Friday, 12 February 2021

How the data-center workforce is evolving

The COVID-19 pandemic has profoundly affected many areas of IT, including the data center, where changes to the infrastructure--particularly the adoption of cloud services--are creating the need for new skill sets among the workers who staff it.

Perhaps no technology industry benefited more from the pandemic than cloud computing; the location independence of cloud services makes them ideal for a world where the majority of line-of-business as well as IT workers are no longer in the office.

But does that mean businesses will rely on infrastructure as a service (IaaS) and no longer need their own on-premises data centers and data-center IT teams? Analysts and futurists have been asking this question for about a decade, but now cloud, already strong before the pandemic, has gone through an inflection point and brought new immediacy to the issue.

The answer is that data centers are not going anywhere anytime soon, but they will look fundamentally different. That's good news for people currently working in data centers and those considering careers there, because adoption of cloud and other changes will create a wave of new opportunities.

Uptime Institute predicts that data-center staff requirements will grow globally from about 2 million full-time employees in 2019 to nearly 2.3 million by 2025. Growth in expected demand will mainly come from cloud and colocation data centers. Enterprise data centers will continue to employ a large number of staff, but cloud data-center staff will outnumber enterprise data-center staff after 2025, Uptime says.

On the hiring side, finding the right talent remains difficult for many organizations. In 2020, 50% of data center owners or operators globally reported having difficulty finding qualified candidates for open jobs, compared to 38% in 2018, according to Uptime Institute.

For IT pros looking to be part of the new data center, here are some of the top roles and in-demand skills to develop.

Technical architect

The role of the technical architect has grown in importance because applications are no longer deployed in technology silos. In the past, each application had its own servers, storage, and security. Modern data centers are built on disaggregated infrastructure where resources are shared across multiple applications.

This requires new infrastructure design skills to ensure application performance remains high as the underlying technology is being shared across a broad set of applications. And it requires high-level domain knowledge of network, storage, servers, virtualization, and other infrastructure.

Data-center architect

The challenging job of data-center architect requires specific knowledge of the physical data center--an understanding of power, cooling, real estate, cost structure, and other factors essential to designing data centers. Architects help determine the layout of the facility as well as its physical security. The internal design involving racks, flooring and wiring is also part of this role. If done poorly, the job can have an enormous negative impact on the workflows of the technical staff.

Cloud management

There is no single cloud provider, and an emerging and continually evolving enterprise role is selecting and managing cloud services--private, public and hybrid. The attributes of cloud providers vary, with some being strong in specific regions while others may be better suited than competitors to provide specific services, for example. In some cases, third-party cloud services are inappropriate, making private cloud the best answer, as is often the case when strict data privacy is called for.

Cloud services need to be constantly monitored and optimized to ensure businesses are not overspending in some areas and underspending in others. At the same time, cost optimization cannot be allowed to result in performance issues. This role requires the skills to properly evaluate cloud offerings and provide ongoing management.

AI and ML

Data volumes are now massive and getting larger by the day, and with the rise of edge computing, more data will reside in more places. Artificial intelligence and machine learning are required to facilitate effective data management. There's a wide range of jobs in this area across the spectrum of the AI lifecycle, including training AI systems, modeling, programming and providing human-in-the-loop participation to ensure AI goals are being met.

Data analytics

The future data center will be driven by analyzing massive amounts of data. Expect this trend to continue as more data is being generated by IoT endpoints, video systems, robots--almost everything we do. Data-center operations teams will make critical decisions based on the analysis of this data. Businesses today have a shortage of people with analytic skills, particularly those who understand how to use AI/ML to accelerate the analysis.

Software skills

Many IT engineers, particularly those who work with network infrastructure, are hardware centric. Sure, they may know how to hunt and peck on a command-line interface, but that's not really a software skill. Most network engineers have never executed even basic software functions like making an API call. Using APIs can make many tasks much easier than trying to write a script to parse a CLI.

Not all network engineers need to become programmers--although those who want to should focus on languages such as Python and Ruby--but all should become software power-users and understand how to use APIs and SDKs to perform administrative tasks. All modern network infrastructure has been designed to be managed through APIs, many of them cloud based. The days of being a CLI jockey are over, and an unwillingness to admit it is the biggest threat to today's data-center engineers.
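The difference between parsing CLI output and consuming an API comes down to structured data. A minimal sketch (the JSON payload and field names below are illustrative, not any real vendor's API) shows why: once a device returns JSON, there is nothing to screen-scrape.

```python
import json

# Hypothetical payload, as a modern switch's REST API might return it.
# In practice this would come from something like requests.get(url).json()
# against the device's management endpoint.
sample_response = json.loads("""
{
  "interfaces": [
    {"name": "Ethernet1", "status": "up",   "speed_mbps": 25000},
    {"name": "Ethernet2", "status": "down", "speed_mbps": 25000}
  ]
}
""")

# With structured data there is no text parsing: just index the fields.
down_ports = [i["name"] for i in sample_response["interfaces"]
              if i["status"] == "down"]
print(down_ports)  # ['Ethernet2']
```

Compare that to writing a regular expression against "show interface" output, which breaks whenever the vendor changes the text format.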

Data-center security

There are multiple avenues for jobs in data-center security, given that this discipline refers to both physical and cyber security. Data centers house sensitive and proprietary data, and breaches can have disastrous consequences for an organization. Physical security was once done with badge readers and keypads, but there has been a wealth of innovation, including AI-enabled cameras, fingerprint scanners, iris readers and facial-recognition systems. This promises to be an exciting area to work in over the next decade.   

Cyber security has also evolved as security-information and event-management tools transition to ML-based systems that enable security professionals to see things they never could before. Also, many advanced organizations are adopting zero-trust models to isolate application traffic from other systems. Through the use of microsegmentation, secure zones can be created, minimizing the "blast radius" of a breach.
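The logic behind microsegmentation can be sketched in a few lines: traffic between zones is denied by default and permitted only when a flow is explicitly on the allowlist. The zone names and rules below are illustrative assumptions, not taken from any particular product.

```python
# Explicit allowlist of (source zone, destination zone, port) tuples.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Zero-trust stance: deny unless the flow is explicitly allowed."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

print(is_allowed("web-tier", "app-tier", 8443))  # True
print(is_allowed("web-tier", "db-tier", 5432))   # False: web can't reach the database directly
```

That second check is the "blast radius" idea in miniature: a compromised web tier cannot talk to the database because no rule says it may.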

Data-center networking

The role of the network in the data center has changed significantly over the past decade. The traditional multi-tier architectures that were optimized for North-South traffic flows have shifted to leaf-spine networks that are designed for higher volumes of East-West traffic. Also, software-defined networking (SDN) systems are being used to provision virtual-fabric overlays of the physical underlay. This brings greater automation, traffic visibility, and cost effectiveness to the data-center network.

Network engineers who work in data centers need to become familiar with new concepts associated with network fabrics such as Linux-based operating systems, open-source network platforms, VxLAN tunnels and Ethernet VPNs. These all increase the scalability, elasticity and resiliency of the network while simplifying network operations. Also, most data-center platforms are now open by design, making vendor interoperability much easier and breaking the lock-in customers experienced in the past.

Another aspect of data center networking that's changed is cloud connectivity. Historically, network engineers were concerned with the network inside the data center, which is a highly controlled environment.

The rise of cloud and edge computing dictates that the network extend outside the physical confines of customer premises, across the wide area to the cloud provider. It's imperative that the network function as if it is a single, continuous fabric across all cloud locations. There are a number of ways to do this, including SD-WAN, SASE and direct cloud connects.

Jobs outside the data center

What jobs, if any, are there for data-center professionals who want to transition out of that environment but make use of their current skills? Unfortunately, those skills don't translate well. You don't see many mainframe engineers or PBX administrators around anymore.

However, while the future lies with the jobs outlined here, it will take a long time for legacy data centers to transition. After all, businesses do often adopt an "if it ain't broke, don't fix it" mentality when it comes to mission-critical systems. So for those unable or unwilling to reskill, it may be necessary to seek employers in verticals that tend to be on the slower end of technology adoption--state and local government, regional banks, and specialty retail are some examples.

The future of data centers lies in distributed clouds, and that changes the skill sets needed to run them. Data centers are certainly not going away, but they will look much different in the future, and that should be exciting for all.

https://www.networkworld.com/

Wednesday, 10 February 2021

Top metrics for effective multicloud management

When it comes to effectively managing a multicloud environment, there are a ton of network and application metrics that enterprise customers should be watching.

Among enterprises, the trend is toward multicloud environments, which can include workloads running on-premises and in public clouds run by multiple cloud providers such as AWS, Microsoft Azure, IBM/Red Hat, Google Cloud Platform and others. Gartner predicts that by 2021, more than 75% of midsize and large organizations will have adopted some form of a multicloud and/or hybrid IT strategy. Likewise, IDC predicts that by 2022, more than 90% of enterprises worldwide will be relying on a mix of on-premises/dedicated private clouds, multiple public clouds, and legacy platforms to meet their infrastructure needs.

"As enterprises increasingly embrace multicloud, they must consider what KPIs [key performance indicators] will best measure their success in managing multicloud environments," said Briana Frank, director of product management with IBM Cloud.

"There's a variety of KPIs that can help evaluate success, including financial metrics, such as cost, return on investment, and rate of change in the reduction of total cost of ownership," Frank said. "However, enterprises should go beyond financial metrics alone and also consider measures of other critical causes for concern, such as downtime caused by outages and security breaches," Frank added. 

Useful KPIs in a multicloud setting include those that measure costs and billing, network and application performance, said Roy Ritthaler, vice president of product marketing in VMware's cloud management business unit.

Financial KPIs

Optimizing multicloud costs and eliminating waste are key goals for IT and lines of business, and metrics should include budget tracking and detailed spend analysis, Ritthaler said.

"They should also provide capabilities to uncover hidden costs, flag anomalies, reallocate cloud spend for showback and chargeback, and provide proactive recommendations to purchase and exchange reservations and savings plans," Ritthaler said.

Common multicloud KPIs include tracking the cost of all untagged resources--for example, databases that continue to consume capacity unnoticed. Tags can capture myriad attributes, such as the owner of a resource, the environment it's operating in, and the project name. The idea is to identify, manage and support resources as effectively as possible.
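The untagged-resources KPI is simple to compute once an inventory exists. The toy records below are illustrative; a real implementation would pull the inventory from each provider's API.

```python
# Toy inventory (resource IDs, costs, and tag keys are illustrative).
resources = [
    {"id": "db-1", "monthly_cost": 410.0, "tags": {"owner": "ops", "env": "prod"}},
    {"id": "vm-7", "monthly_cost": 95.5,  "tags": {}},
    {"id": "db-9", "monthly_cost": 220.0, "tags": {}},
]

# Anything with no tags has no owner, environment, or project on record.
untagged = [r for r in resources if not r["tags"]]
untagged_cost = sum(r["monthly_cost"] for r in untagged)
print([r["id"] for r in untagged], untagged_cost)  # ['vm-7', 'db-9'] 315.5
```

Reporting that dollar figure per month gives teams a concrete target: drive untagged spend toward zero.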

Other financial KPIs include a look at the percentage of infrastructure running on demand and the percentage of total bill charged back, Ritthaler said.

Security and network KPIs

Visualizing the security posture of the connectivity in a multicloud network is absolutely necessary, as there are too many components to consider individually, Ritthaler said.

"Identifying bad actors or bad security posture is handled in a single tool for monitoring and guarding," Ritthaler said. "Capabilities include monitoring traffic to detect vulnerabilities and to design app-centric security and generating recommended firewall rules to implement application security."

Some common KPIs include measuring security incidents per month by team, the number of security lapses, and the time to remediate security violations measured in hours, Ritthaler said.
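Time to remediate is typically computed as the mean gap between when an incident is opened and when it is resolved. A minimal sketch, with assumed timestamps rather than data from a real SIEM:

```python
from datetime import datetime

# Illustrative incident records; a real system would export these from ticketing.
incidents = [
    {"opened": "2021-02-01T09:00", "resolved": "2021-02-01T15:00"},  # 6 hours
    {"opened": "2021-02-03T10:00", "resolved": "2021-02-04T10:00"},  # 24 hours
]

def hours_between(opened: str, resolved: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(resolved, fmt) - datetime.strptime(opened, fmt)
    return delta.total_seconds() / 3600

mean_ttr = sum(hours_between(i["opened"], i["resolved"]) for i in incidents) / len(incidents)
print(mean_ttr)  # 15.0 hours: (6 + 24) / 2
```

Tracked month over month and broken out by team, a falling mean time to remediate is one of the clearest signs a security program is improving.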

Security is critical, but so is getting a handle on access policies, managing QoS, and providing consistent measuring capabilities across such a diverse group of systems, said Nabil Bukhari, chief technology officer with Extreme Networks. "Applications are the star of the multicloud show, but getting a handle on network performance metrics – like latency and packet loss – is important too," Bukhari said.

Additional network-related KPIs include measuring response time, bandwidth usage and throughput.

Network monitoring must include end-to-end visibility across physical and virtual environments, built from flow-based traffic monitoring using NetFlow and sFlow along with SNMP device monitoring, Ritthaler said.

Application performance KPIs

On the applications side, each application will have its own performance metrics to monitor, and it's important to have tools that can tie these applications to the infrastructure they're running on, delivering an end-to-end picture of the environment.

"Many organizations already have multiple application performance monitoring tools. However, these tools do not provide the end-to-end visibility that you need to troubleshoot over different teams," Ritthaler said. "Being able to consolidate those tools into a single solution brings teams together when problems occur."

Ritthaler said some common application KPIs include user experience measurements from APM (application performance management) packages; configured versus used resource consumption (such as CPU and memory); response times; and connection counts.
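The configured-versus-used KPI is essentially a utilization ratio used to flag rightsizing candidates. The VM records and the 25% threshold below are illustrative assumptions:

```python
# Configured vs. actually used CPU per VM (numbers are illustrative).
vms = [
    {"name": "app-1", "cpu_configured": 8, "cpu_used": 1.2},
    {"name": "app-2", "cpu_configured": 4, "cpu_used": 3.6},
]

# Flag anything using under 25% of its allocation as a rightsizing candidate.
oversized = [v["name"] for v in vms
             if v["cpu_used"] / v["cpu_configured"] < 0.25]
print(oversized)  # ['app-1']
```

In a multicloud setting, every flagged VM is either wasted spend (in a public cloud) or stranded capacity (on-premises), which is why this ratio shows up in both financial and performance dashboards.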

"Applications tend to spread out over multiple clouds; the path of least resistance will typically be taken. Documentation will fall behind, and this is where automatic discovery is necessary," Ritthaler said. "Application discovery works over the multicloud architecture to make organizations aware of where their applications are running, who they are talking to, what application dependencies there are, and how they are secured."

Compliance KPIs

Compliance is another area that requires management attention, experts say.  

"Compliance KPIs require continuously inspecting cloud-resource configurations and benchmarking them against cloud, industry and custom security and compliance standards. It should track compliance score, violations, and resolution progress," Ritthaler said. 
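A compliance score is commonly expressed as the share of benchmark checks a configuration passes. The rule names below are illustrative, loosely in the spirit of CIS-style benchmarks:

```python
# Results of benchmarking cloud-resource configurations (rules are illustrative).
checks = [
    {"rule": "storage-encryption-enabled", "passed": True},
    {"rule": "no-public-ssh",              "passed": False},
    {"rule": "logging-enabled",            "passed": True},
    {"rule": "mfa-on-root-account",        "passed": True},
]

score = 100 * sum(c["passed"] for c in checks) / len(checks)
violations = [c["rule"] for c in checks if not c["passed"]]
print(score, violations)  # 75.0 ['no-public-ssh']
```

Running the same checks continuously turns the score into a trend line, and the violations list feeds directly into the resolution-progress tracking Ritthaler describes.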

Customers need to safeguard the health of hybrid and public cloud services using logs and metrics to proactively monitor and troubleshoot issues with native cloud services as they occur and suggest remediation steps, Ritthaler said.

https://www.networkworld.com/