Thursday, 29 August 2019

VMware boosts load balancing, security intelligence, analytics

VMware has added new features to its core networking software that will let customers more securely control cloud application traffic running on virtual machines, containers or bare metal. 
At its VMworld event, the company announced a new version of its NSX networking software with support for the cloud-based advanced load-balancer technology it recently acquired from Avi Networks.
The load balancer is included in VMware vRealize Network Insight 5.0 and tied to NSX Intelligence software that lets customers optimize network performance and availability in virtual and physical networks. The load balancer includes a web application firewall and analytics features to help customers securely control and manage traffic. 
VMware bought Avi in June with the plan to punch up its data-center network-virtualization capabilities by adding Avi’s load balancing, analytics and application-delivery technology to NSX. Avi’s integration with VMware NSX delivers an application-services fabric that synchronizes with the NSX controller to provide automated, elastic load balancing including real-time analytics for applications deployed in a software-defined network environment. The Avi technology also monitors, scales and reconfigures application services in real time in response to changing performance requirements.
“The load balancer uses a modern interface and architecture to deliver and optimize application delivery in a dynamic fashion," said Rohit Mehra, vice president, Network Infrastructure for IDC. "Leveraging inbuilt advanced analytics and monitoring to deliver scale that is much needed for cloud applications and micro-services, the advanced load balancer will essentially be a nice add-on option to VMware’s NSX networking portfolio. While many customers may benefit from its integration into NSX, VMware will likely keep it as an optional add-on, given the vast majority of its networking clients currently use other ADC platforms.”
NSX-T Data Center software is targeted at organizations looking to support multi-vendor cloud-native applications, bare-metal workloads, hypervisor environments and the growing hybrid and multi-cloud worlds. The software offers a range of services from Layer 2 to Layer 7 for workloads running on all types of infrastructure – virtual machines, containers, physical servers and both private and public clouds. NSX-T is the underpinning technology for VMware’s overarching Virtual Cloud Network portfolio, which offers a communications-software layer to connect everything from the data center to cloud and edge.
“NSX now provides a complete set of networking services offered in software. Customers don’t need dedicated hardware systems to do switching, routing or traffic load balancing as NSX treats VM, container and app traffic all the same from the cloud to data center and network edge,” said Tom Gillis, VMware senior vice president and general manager, networking and security business unit. 
Now customers can distribute workloads uniformly across the network, improving capacity, efficiency and reliability, he said.
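To illustrate the idea of distributing workloads uniformly, here is a minimal sketch of one common balancing strategy, least-connections selection. This is purely illustrative; the names and logic are hypothetical and do not represent the NSX or Avi implementation.

```python
# Minimal least-connections load-balancer sketch (illustrative only,
# not the NSX/Avi implementation).

class Backend:
    def __init__(self, name):
        self.name = name
        self.active = 0  # currently open connections

def pick_backend(backends):
    """Route the next request to the backend with the fewest open connections."""
    return min(backends, key=lambda b: b.active)

pool = [Backend("vm-1"), Backend("vm-2"), Backend("container-1")]
for _ in range(6):
    chosen = pick_backend(pool)
    chosen.active += 1  # connection opened; decrement on close

print([(b.name, b.active) for b in pool])  # load spreads evenly: 2 each
```

Because the selector always routes to the least-loaded target, traffic evens out across VMs and containers alike, which is the behavior Gillis describes.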
Speaking at the event, a VMware customer said VMware NSX-T Data Center is helping the company secure workloads at a granular level with micro-segmentation, and to fundamentally re-think network design. "We are looking to develop apps as quickly as possible and use NSX to do automation and move faster,” said Andrew Hrycaj, principal network engineer at IHS Markit – a business information provider headquartered in London.
NSX also helps IT manage a common security policy across different platforms, from containers, to the public cloud with AWS and Azure, to on-prem, simplifying operations and helping with regulatory compliance, while fostering a pervasive security strategy, Hrycaj said.
At VMworld the company announced version 2.5 of NSX, which includes a distributed analytics engine called NSX Intelligence that VMware says will help eliminate blind spots to reduce security risk and accelerate security-incident remediation through visualization and deep insight into every flow across the entire data center.
“Traditional approaches involve sending extensive packet data and telemetry to multiple disparate centralized engines for analysis, which increase cost, operational complexity, and limit the depth of analytics,” wrote VMware’s Umesh Mahajan, a senior vice president and general manager of networking and security, in a blog post about version 2.5.
“In contrast, NSX Intelligence, built natively within the NSX platform, distributes the analytics within the hypervisor on each host, sending back relevant metadata… [and providing] detailed application-topology visualization, automated security-policy recommendations, continuous monitoring of every flow, and an audit trail of security policies, all built into the NSX management console.”
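The distributed model Mahajan describes can be sketched in a few lines: each host summarizes its own flows locally and ships only compact metadata to a central manager, rather than forwarding raw packet data. This is a conceptual sketch with made-up record fields, not the NSX Intelligence implementation.

```python
from collections import Counter

def summarize_flows(flows):
    """Per-host step: collapse raw flow records into (src, dst, port) counts."""
    return Counter((f["src"], f["dst"], f["dport"]) for f in flows)

def merge(host_summaries):
    """Manager step: merge per-host metadata into a datacenter-wide view."""
    total = Counter()
    for summary in host_summaries:
        total.update(summary)
    return total

# Two hypervisors each summarize locally; only the counts leave the host.
host_a = summarize_flows([
    {"src": "web-1", "dst": "db-1", "dport": 5432},
    {"src": "web-1", "dst": "db-1", "dport": 5432},
])
host_b = summarize_flows([{"src": "web-2", "dst": "db-1", "dport": 5432}])

print(merge([host_a, host_b]))
```

The point of the design is bandwidth and cost: the manager sees the full application topology without any host ever shipping raw packets off-box.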
IDC’s Mehra said: “The NSX Intelligence functionality is indeed very interesting, in that it delivers on the emerging need for deeper visibility and analytics capabilities in cloud IT environments. This can then be used either for network and app optimization goals, or in many cases, will facilitate NSX security and policy enforcement via micro-segmentation and other tools. This functionality, built into NSX, runs parallel to vRealize Network Insight, so it will be interesting to see how they mirror, or rather, complement each other,” he said.
NSX-T 2.5 also introduces a new deployment and operational approach VMware calls Native Cloud Enforced mode.
“This mode provides a consistent policy model across the hybrid cloud network and reduces overhead by eliminating the need to install NSX tools in workload VMs in the public cloud,” Mahajan wrote. “The NSX security policies are translated into the cloud provider’s native security constructs via APIs, enabling common and centralized policy enforcement across clouds.”
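The translation step Mahajan describes can be sketched as mapping one abstract rule into each provider's native construct. All field names and formats below are hypothetical approximations (loosely shaped like an EC2 security-group rule and an Azure NSG rule); real enforcement goes through each cloud provider's APIs.

```python
# Conceptual sketch: translate an abstract security policy into
# provider-native rule shapes (hypothetical formats, not VMware code).

policy = {"name": "allow-web", "protocol": "tcp", "port": 443,
          "source": "0.0.0.0/0"}

def to_native_rule(policy, provider):
    if provider == "aws":
        # roughly the shape of an EC2 security-group ingress rule
        return {"IpProtocol": policy["protocol"],
                "FromPort": policy["port"], "ToPort": policy["port"],
                "IpRanges": [{"CidrIp": policy["source"]}]}
    if provider == "azure":
        # roughly the shape of a network-security-group rule
        return {"protocol": policy["protocol"].capitalize(),
                "destinationPortRange": str(policy["port"]),
                "sourceAddressPrefix": policy["source"],
                "access": "Allow"}
    raise ValueError(f"unsupported provider: {provider}")

print(to_native_rule(policy, "aws"))
print(to_native_rule(policy, "azure"))
```

One policy definition, many native renderings: that is what makes the enforcement "common and centralized" without installing agents inside the workload VMs.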
Networking software vendor Apstra got into the NSX act by announcing it had more deeply integrated the Apstra Operating System (AOS) with NSX. 
AOS offers tighter design and operational interoperability between the underlying physical network and software-defined overlay networks, freeing customers from lock-in to any specific network hardware vendor, said Mansour Karam, CEO and founder of Apstra.
AOS 3.1 adds automation to provide consistent network and security policy for workloads across the physical and virtual/NSX infrastructure, Apstra said. AOS supports VMware vSphere and allows for automatic remediation of network anomalies. AOS’ intent-based analytics perform regular network checks to ensure configurations in the Apstra-managed environment and on the vSphere servers stay in sync.
Like other AOS releases, version 3.1 is hardware agnostic and integrates with networking vendors including Cisco, Arista, Dell and Juniper, as well as other vendors such as Microsoft and Cumulus.
Big Switch also announced that it has extended its Enterprise Virtual Private Cloud (E-VPC) integration to VMware Cloud Foundation (VCF) and NSX-T. The company's Big Cloud Fabric (BCF) underlay now fully integrates with VMware’s software-defined data center (SDDC) portfolio, including NSX-T, vSphere, VxRail and vSAN, providing automation, visibility and troubleshooting capabilities.

Saturday, 17 August 2019

Powering edge data centers: Blue energy might be the perfect solution

About a cubic yard of freshwater mixed with seawater yields almost two-thirds of a kilowatt-hour of energy. And scientists say a new battery chemistry based on that principle could power edge data centers.
The idea is to harness power from wastewater treatment plants located along coasts, which happen to be ideal locations for edge data centers and are heavy electricity users.
“Places where salty ocean water and freshwater mingle could provide a massive source of renewable power,” writes Rob Jordan in a Stanford University article.
The chemical process harnesses a mix of sodium and chloride ions. Released from the battery electrodes into a solution, the ions cause current to flow. Seawater is then exchanged with wastewater effluent, reversing the current flow and generating energy, the researchers explain.
“Energy is recovered during both the freshwater and seawater flushes, with no upfront energy investment and no need for charging,” the article says.
In other words, the battery is continually recharging and discharging with no added input—such as electricity from the grid. The Stanford researchers say the technology could be ideal for making coastal wastewater plants energy independent.

Coastal edge data centers

But edge data centers, also sited on the coasts, could benefit as well. Those data centers are already exploring kinetic wave energy to harvest power, as well as using seawater for cooling.
I’ve written about Ocean Energy’s offshore power platform, which uses kinetic wave energy. That 125-foot-long wave converter not only uses ocean water for power generation; its sea-based deployment means the same body of water can be used for cooling, too.
“Ocean cooling and ocean energy in the one device” is a seductive solution, the head of that company said at the time.
Microsoft, too, has an underwater data center that proffers the same kinds of power benefits.
Locating data centers on coasts or in the sea rather than inland doesn’t just provide nearly free power and cooling, plus the associated emissions advantages. Coasts also tend to be where the populace is, and placing data-center operations near where the computation, data storage and other activities actually take place fits neatly into the low-latency concept of edge computing.
Other advantages of placing a data center in the ocean, albeit close to land, include the fact that there’s no rent in open waters. And in international waters, one could imagine regulatory advantages: no country’s officials hovering around.
However, placing the installation on terra firma (as the seawater-freshwater mix power solution is designed for) but close to the water at a coast provides the necessary seawater while keeping easy access to real estate, network connections, and so on.
The Stanford University engineers, in their seawater/wastewater tests, flushed a battery prototype 180 times with wastewater from the Palo Alto Regional Water Quality Control Plant and seawater from nearby Half Moon Bay. The group reports capturing 97% of the available salinity-gradient energy, or blue energy, as it’s sometimes called.
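Putting the article's two figures together gives a quick back-of-envelope estimate: roughly two-thirds of a kilowatt-hour available per cubic yard of freshwater/seawater mix, with 97% of that salinity-gradient energy captured by the prototype. This is only arithmetic on the reported numbers, not data from the study itself.

```python
# Back-of-envelope check using the article's figures.

available_kwh_per_cubic_yard = 2 / 3   # "almost two-thirds of a kWh"
capture_efficiency = 0.97              # reported by the Stanford group

recovered = available_kwh_per_cubic_yard * capture_efficiency
print(round(recovered, 2))  # ~0.65 kWh recovered per cubic yard of mix
```

At that rate, the energy recovered scales directly with a plant's effluent volume, which is why the researchers see large coastal treatment plants as candidates for energy independence.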
“Surplus power production could even be diverted to nearby industrial operations,” the article continues.
The larger problem of “tapping blue energy at the global scale: rivers running into the ocean” is yet to be solved. “But it is a good starting point,” says Stanford scholar Kristian Dubrawski in the article.