Wednesday, 30 May 2018

Box Launches Multizone Cloud Storage for Box Zones Service

One of the General Data Protection Regulation’s (GDPR) key mandates is that enterprises must know exactly which server, and in which geographic location, holds the primary copy of each piece of data—and be prepared to prove it.
Box, a cloud-based storage and enterprise collaboration tool provider, made things a bit easier for its customers May 24—the day before GDPR went live—when it launched multi-zone support for its flagship Box Zones service. This option provides users with the ability to store data and to collaborate across any of Box’s existing seven zones, all from a single Box instance.
So if an inspector from a European data protection authority comes calling, a multinational company with this service can immediately show which geographic zones hold its data stores within the Box cloud.
Breaking Some Ground in Cloud-Based Storage
Faced with the evolving and increasingly complex global regulatory and compliance landscape, Box enables enterprises to have more control over their data residency while making sure users have a frictionless collaboration experience, no matter where they are working or in what zone their data lies.

Box Zones, first introduced in April 2016, broke ground in cloud content management by enabling customers to store their data in the region of their choice for the first time. But things have become a lot more complicated in the last two years.
“Multizone support for Box Zones gives enterprises the best of both worlds,” Box Chief Product Officer Jeetu Patel told eWEEK. “Not only will they be able to make granular decisions about how to govern and store their data across the globe, but users will get the same collaborative experience they have with Box today—no matter where they, or their collaborators, are located.”
Work Globally, Store Locally
According to Patel, by using multizone support for Box Zones, organizations can:
  • Reduce risk and address data protection requirements, including GDPR: Organizations will have the ability to assign a storage Zone for individual users, as well as designate a default Zone for the entire organization, proactively addressing data residency and compliance requirements.
  • Provide flexibility for changing needs: Organizations can change a user's assigned Zone at any time. Content will automatically migrate to the new Zone without the user ever losing access.
  • Gain global visibility and control: Administrators can manage data for an entire enterprise from a single admin console, no matter how many Zones an organization is using.
  • Drive transparency and insights: Real-time, self-serve reporting provides administrators with the ability to download reports on individual users and their assigned Zones from the admin console for easy auditing.
  • Deliver a frictionless end-user experience: End users will still be able to freely collaborate with colleagues, partners and suppliers without ever having to worry about where their data is stored.
http://www.eweek.com/

Tuesday, 29 May 2018

AI boosts data-center availability, efficiency

Artificial intelligence is set to play a bigger role in data-center operations as enterprises begin to adopt machine-learning technologies that have been tried and tested by larger data-center operators and colocation providers.

Today’s hybrid computing environments often span on-premises data centers, cloud and colocation sites, and edge computing deployments. And enterprises are finding that a traditional approach to managing data centers isn’t optimal. By using artificial intelligence, as played out through machine learning, there’s enormous potential to streamline the management of complex computing facilities.

AI in the data center, for now, revolves around using machine learning to monitor and automate the management of facility components such as power and power-distribution elements, cooling infrastructure, rack systems and physical security.

Inside data-center facilities, there are increasing numbers of sensors that are collecting data from devices including power back-up (UPS), power distribution units, switchgear and chillers. Data about these devices and their environment is parsed by machine-learning algorithms, which cull insights about performance and capacity, for example, and determine appropriate responses, such as changing a setting or sending an alert.  As conditions change, a machine-learning system learns from the changes – it's essentially trained to self-adjust rather than rely on specific programming instructions to perform its tasks.
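
To make that pattern concrete, here is a minimal, hypothetical sketch in Python: a rolling baseline is learned from recent sensor readings, and an out-of-range value triggers a response. The class, thresholds and readings are illustrative assumptions, not any vendor's actual system.

```python
# Illustrative sketch only: a toy anomaly detector over facility sensor
# readings (names, window sizes and thresholds are hypothetical).
from collections import deque
from statistics import mean, stdev

class SensorMonitor:
    def __init__(self, window=60, z_threshold=3.0):
        self.history = deque(maxlen=window)   # recent readings, e.g. chiller temp in C
        self.z_threshold = z_threshold

    def observe(self, reading):
        """Learn the normal range from recent data and flag outliers."""
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_threshold:
                self.alert(reading, mu)
        self.history.append(reading)

    def alert(self, reading, baseline):
        # In a real system this might open a ticket or adjust a setpoint.
        print(f"Anomaly: reading {reading:.1f} vs. baseline {baseline:.1f}")

monitor = SensorMonitor()
for value in [21.0, 21.2, 20.9, 21.1, 21.0, 21.3, 20.8, 21.1, 21.0, 21.2, 27.5]:
    monitor.observe(value)
```

A production system would feed thousands of such points into far richer models, but the learn-a-baseline-then-react loop is the same.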

The goal is to enable data-center operators to increase the reliability and efficiency of the facilities and, potentially, run them more autonomously. However, getting the data isn’t a trivial task.

A baseline requirement is real-time data from major components, says Steve Carlini, senior director of data-center global solutions at Schneider Electric. That means chillers, cooling towers, air handlers, fans and more. On the IT equipment side, it means metrics such as server utilization rate, temperature and power consumption.

“Metering a data center is not an easy thing,” Carlini says. “There are tons of connection points for power and cooling in data centers that you need to get data from if you want to try to do AI.”

IT pros are accustomed to device monitoring and real-time alerting, but that’s not the case on the facilities side of the house. “The expectation of notification in IT equipment is immediate. On your power systems, it’s not immediate,” Carlini says. “It’s a different world.”

It’s only within the last decade or so that the first data centers were fully instrumented, with meters to monitor power and cooling. And where metering exists, standardization is elusive: Data-center operators rely on building-management systems that utilize multiple communication protocols – from Modbus and BACnet to LONworks and Niagara – and have had to be content with devices that don’t share data or can’t be operated via remote control. “TCP/IP, Ethernet connections – those kinds of connections were unheard of on the powertrain side and cooling side,” Carlini says.

The good news is that data-center monitoring is advancing toward the depth that’s required for advanced analytics and machine learning. “The service providers and colocation providers have always been pretty good at monitoring at the cage level or the rack level, and monitoring energy usage. Enterprises are starting to deploy it, depending on the size of the data center,” Carlini says.

Machine learning keeps data centers cool
A Delta Air Lines data center outage, attributed to electrical-system failure, grounded about 2,000 flights over a three-day period in 2016 and cost the airline a reported $150 million. That’s exactly the sort of scenario that machine learning-based automation could potentially avert. Thanks to advances in data-center metering and the advent of data pools in the cloud, smart systems have the potential to spot vulnerabilities and drive efficiencies in data-center operations in ways that manual processes can’t.

A simple example of machine learning-driven intelligence is condition-based maintenance that’s applied to consumable items in a data center, for example, cooling filters. By monitoring the air flow through multiple filters, a smart system could sense if some of the filters are more clogged than others, and then direct the air to the less clogged units until it’s time to change all the filters, Carlini says.
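
A toy sketch of that filter example, with hypothetical airflow numbers and a print statement standing in for the real damper controls, might look like this:

```python
# A minimal sketch of condition-based maintenance on cooling filters:
# compare airflow across filters and shift load away from the most
# clogged ones. All numbers, names and the control hook are hypothetical.
def rebalance_filters(airflow_cfm, clog_ratio=0.8):
    """airflow_cfm maps filter id -> measured airflow; lower flow suggests clogging."""
    best = max(airflow_cfm.values())
    clogged = [f for f, flow in airflow_cfm.items() if flow < clog_ratio * best]
    healthy = [f for f in airflow_cfm if f not in clogged]
    for f in clogged:
        print(f"Filter {f}: reduce damper opening, divert air to {healthy}")
    if len(clogged) >= len(airflow_cfm) / 2:
        print("Most filters degraded: schedule replacement of the whole set")

rebalance_filters({"F1": 980, "F2": 1010, "F3": 620, "F4": 995})
```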

Another example is monitoring the temperature and discharge of the batteries in UPS systems. A smart system can identify a UPS system that’s been running in a hotter environment and might have been discharged more often than others, and then designate it as a backup UPS rather than a primary. “It does a little bit of thinking for you. It’s something that could be done manually, but the machines can also do it. That’s the basic stuff,” Carlini says.

Taking things up a level is dynamic cooling optimization, which is one of the more common examples of machine learning in the data center today, particularly among larger data-center operators and colocation providers.

With dynamic cooling optimization, data center managers can monitor and control a facility’s cooling infrastructure based on environmental conditions. When equipment is moved or computing traffic spikes, heat loads in the building can change, too. Dynamically adjusting cooling output to shifting heat loads can help eliminate unnecessary cooling capacity and reduce operating costs.
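
The control idea can be illustrated with a simple proportional loop. This is a deliberately simplified sketch; real dynamic cooling optimization relies on learned models of the facility rather than a fixed gain, and the setpoints and limits below are illustrative.

```python
# Sketch of the idea behind dynamic cooling optimization: adjust cooling
# output toward the measured heat load instead of running at a fixed setting.
def adjust_cooling(setpoint_c, measured_c, current_output_pct, gain=5.0):
    """Simple proportional step: raise cooling when the room runs hot, lower it when cold."""
    error = measured_c - setpoint_c                    # positive = too warm
    new_output = current_output_pct + gain * error
    return max(20.0, min(100.0, new_output))           # keep fans within safe bounds

output = 60.0
for temp in [24.0, 25.5, 26.0, 24.5, 23.0]:            # rack inlet temps as load shifts
    output = adjust_cooling(setpoint_c=24.0, measured_c=temp, current_output_pct=output)
    print(f"inlet {temp:.1f} C -> cooling output {output:.1f}%")
```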

Colocation providers are big adopters of dynamic cooling optimization, says Rhonda Ascierto, research director for the datacenter technologies and eco-efficient IT channel at 451 Research. “Machine learning isn’t new to the data center,” Ascierto says. “Folks for a long time have tried to better right-size cooling based on capacity and demand, and machine learning enables you to do that in real time.”

Vigilent is a leader in dynamic cooling optimization. Its technology works to optimize the airflow in a data center facility, automatically finding and eliminating hot spots.

Data center operators tend to run much more cooling equipment than they need to, says Cliff Federspiel, founder, president and CTO of Vigilent. “It usually produces a semi-acceptable temperature distribution, but at a really high cost.”

If there’s a hot spot, the typical reaction is to add more cooling capacity. In reality, higher air velocity can produce pressure differences, interfering with the flow of air through equipment or impeding the return of hot air back to cooling equipment. Even though it’s counterintuitive, it might be more effective to decrease fan speeds, for example.

Vigilent’s machine learning-based technology learns which airflow settings optimize each customer’s thermal environment. Delivering the right amount of cooling, exactly where it’s needed, typically results in up to a 40% reduction in cooling-energy bills, the company says.

Beyond automating cooling systems, Vigilent’s software also provides analytics that customers are using to make operational decisions about their facilities.

“Our customers are becoming more and more interested in using that data to help manage their capital expenditures, their capacity planning, their reliability programs,” Federspiel says. “It’s creating opportunities for lots of new kinds of data-dependent decision making in the data center.”

AI makes existing processes better
Looking ahead, data-center operators are working to extend the success of dynamic-cooling optimization to other areas. Generally speaking, areas that are ripe for injecting machine learning are familiar processes that require repetitive tasks.

“New machine learning-based approaches to data centers will most likely be applied to existing business processes because machine learning works best when you understand the business problem and the rules thoroughly,” Ascierto says.

Enterprises have existing monitoring tools, of course. There’s a longstanding category of data-center infrastructure management (DCIM) software that can provide visibility into data center assets, interdependencies, performance and capacity. DCIM software tackles functions including remote equipment monitoring, power and environmental monitoring, IT asset management, data management and reporting. Enterprises use DCIM software to simplify capacity planning and resource allocation as well as ensure that power, equipment and floor space are used as efficiently as possible.

“If you have a basic monitoring and asset management in place, your ability to forecast capacity is vastly improved,” Ascierto says. “Folks are doing that today, using their own data.”

Next up: adding outside data to the DCIM mix. That’s where machine learning plays a key role.

Data-center management as a service, or DMaaS, is a service that’s based on DCIM software. But it’s not simply a SaaS-delivered version of DCIM software. DMaaS takes data collection a step further, aggregating equipment and device data from scores of data centers. That data is then anonymized, pooled and analyzed at scale using machine learning.

Two early players in the DMaaS market are Schneider Electric and Eaton. Both vendors mined a slew of data from their years of experience in the data-center world, which includes designing and building data centers, building management, electrical distribution, and power and cooling services.

“The big, significant change is what Schneider and Eaton are doing, which is having a data lake of many customers’ data. That’s really very interesting for the data-center sector,” Ascierto says.

Access to that kind of data, harvested from a wide range of customers with a wide range of operating environments, enables an enterprise to compare its own data-center performance against global benchmarks. For example, Schneider’s DMaaS offering, called EcoStruxure IT, is tied to a data lake containing benchmarking data from more than 500 customers and 2.2 million sensors. 

“Not only are you able to understand and solve these issues using your own data. But also, you can use data from thousands of other facilities, including many that are very similar to yours. That’s the big difference,” Ascierto says.

Predictive and preventative maintenance, for example, benefit from deeper intelligence. “Based on other machines, operating in similar environments with similar utilization levels, similar age, similar components, the AI predicts that something is going to go wrong,” Ascierto says.

Scenario planning is another process that will get a boost from machine learning. Companies do scenario planning today, estimating the impact of an equipment move on power consumption, for example. “That’s available without machine learning,” Ascierto says. “But being able to apply machine-learning data, historic data, to specific configurations and different designs – the ability to be able to determine the outcome of a particular configuration or design is much, much greater.”

Risk analysis and risk mitigation planning, too, stand to benefit from more in-depth analytics. “Data centers are so complex, and the scale is so vast today, that it’s really difficult for human beings to pick up patterns, yet it’s quite trivial for machines,” Ascierto says.

In the future, widespread application of machine learning in the data center will give enterprises more insights as they make decisions about where to run certain workloads. “That is tremendously valuable to organizations, particularly if they are making decisions around best execution venue,” Ascierto says. “Should this application run in this data center? Or should we use a colocation data center?”

Looking further into the future, smart systems could take on even more sophisticated tasks, enabling data centers to dynamically adjust workloads based on where they will run the most efficiently or most reliably. “Sophisticated AI is still a little off into the future,” Carlini says.

In the meantime, for companies that are just getting started, he stresses the importance of getting facilities and IT teams to collaborate more.

“It’s very important that you consider all the domains of the data center – the power, the cooling and the IT room,” Carlini says. The industry is working hard to ensure interoperability among the different domains’ technologies. Enterprises need to do the same on the staffing front.

“Technically it’s getting easier, but organizationally you still have silos,” he says.

https://www.networkworld.com

Saturday, 26 May 2018

Google Adds Programmatic Budget Notification Feature to Cloud Billing

Google is giving organizations a new way to keep an eye on how much they are spending on some of the company's cloud computing services. 

Google has added a new programmatic budget notification feature to its cloud billing service that notifies app administrators and business line managers when particular cloud service costs are approaching budget limits. 

The feature is designed to help organizations stick to previously set budgets and to take action automatically when usage costs might be near or over the budget limit. 

The notification feature works with all Google cloud services as well as internally developed or third-party cost management tools, Google product manager Matt Leonard said. "For example, as an engineering manager, you can set up budget notifications to alert your entire team through Slack every time you hit 80 percent of your budget," Leonard wrote in a May 23 blog. 

The programmatic budget notification feature builds on an existing capability in Google cloud billing that allows enterprise administrators to set a specific budget limit for either a cloud project or for a particular account. 

Or they can use the feature to set budgets based on the previous month's spending. The feature includes an alerting capability that billing administrators can use to get emailed notifications when actual or estimated charges reach or exceed a previously specified percentage of the total budget. 

The new notification feature that Google announced this week gives administrators a way of informing users about their cloud spending so they can manage costs better, Leonard said. 

In addition to simply alerting users, administrators can use the new feature to specify actions to take when a project or account hits a budget threshold. An administrator for instance can use programmatic budget notification to cap costs for a project or account and to stop use of Google Cloud services beyond a certain budget limit. 

An organization might want to set such caps because it has hard limits on how much it can spend on Google cloud services, or limits on the amount an employee or business department is permitted to spend, according to the company. Such caps might be necessary, for instance, when students or researchers are using Google Cloud or when researchers or developers are working in sandbox environments, the company noted. 

Administrators can take more nuanced actions as well. For example, they can use the budget notification feature and associated actions to selectively stop some cloud computing resources from running when a project is approaching its budget cap, while leaving storage services available, thereby reducing per-hour costs without completely disabling all services, Google said. 
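
As a rough illustration of the programmatic side, the sketch below shows a Pub/Sub-triggered handler that alerts a Slack channel at 80 percent of budget and stops non-essential compute at 100 percent. The message fields (costAmount, budgetAmount), the webhook URL and the stop_nonessential_compute() helper are assumptions for illustration; check Google's documentation for the exact notification schema and recommended cost-capping approach.

```python
# A minimal sketch of acting on a programmatic budget notification.
# Assumptions (verify against Google's docs): the notification arrives as a
# Pub/Sub-triggered event whose JSON body contains "costAmount" and
# "budgetAmount"; the Slack webhook URL and stop_nonessential_compute()
# are hypothetical placeholders.
import base64
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/EXAMPLE"  # placeholder

def stop_nonessential_compute():
    """Placeholder: stop selected compute instances but leave storage untouched."""
    print("Stopping non-essential compute resources (storage stays available)")

def on_budget_notification(event, context):
    """Cloud Functions-style Pub/Sub handler (signature assumed)."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    cost, budget = payload["costAmount"], payload["budgetAmount"]
    spent = cost / budget if budget else 0.0

    if spent >= 0.8:  # e.g. alert the team at 80% of budget
        msg = json.dumps({"text": f"Cloud spend at {spent:.0%} of budget"}).encode()
        req = urllib.request.Request(SLACK_WEBHOOK, data=msg,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    if spent >= 1.0:  # hard cap: shut down compute, keep storage for lower per-hour cost
        stop_nonessential_compute()
```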

In announcing the new feature, Google this week also released documentation designed to help administrators set up the programmatic budget notification feature. 

Besides billing alerts, Google also offers a billing export feature for organizations that want to closely monitor their costs. The export feature gives administrators a way to get details of daily usage along with estimates of associated costs. 

http://www.eweek.com

Intel Makes a Move into Vision Intelligence IoT Business

Intel plays a lot of roles in the IT business besides making processors with microscopic transistors for servers, PCs, the internet of things, and mobile devices. It also makes security hardware and software, memory and programmable enterprise solutions, 5G connectivity hardware and software and a list of others too long to note here.

But one of the greenest fields coming into the venerable chipmaker’s view here in mid-2018 has to do with what’s called “the edge”—that mysterious, nebulous and more distributed area outside the data center where a lot of computing is starting to happen and will be happening more and more as time goes on.

We’re hearing a lot about this lately, largely because our devices (smartphones, laptops, tablets, IoT devices) on the fringes of centralized systems can hold much more information and do more with it than in years past. Intel wants to make more and more of the infrastructure for these devices and systems.

What is Edge Computing?

Definition: Edge computing is a method of optimizing cloud computing systems by performing data processing at the edge of the network, near the source of the data. This reduces the communications bandwidth needed between sensors and the central data center by performing analytics and knowledge generation at or near the source of the data. This approach requires using resources that may not be continuously connected to a network, such as laptops, smartphones, tablets and sensors.

By its very nature, edge computing—which also includes these devices communicating with each other via Bluetooth and other non-cloud methods—decreases workloads that used to be processed inside 24/7 cloud computing systems. This not only increases the efficiency of computing and data applications but also promotes further implementation of emerging technologies, such as artificial intelligence and 5G bandwidth.

Intel is determined to become a leader in providing the smarts for edge computing. We know this because Tom Lantzsch, Intel senior vice president and general manager of the Internet of Things (IoT) Group, explained it in a blog post.

“We have been working hard [for the last year] to define and develop a data-driven technology foundation for industry innovation,” Lantzsch said. “Our strategy is to drive end-to-end distributed computing in every vertical by focusing on silicon platforms and workload consolidation at the edge.”

Enterprise Video: Low-Hanging IoT Analytics Fruit

A big part of this strategy in the early years of edge computing involves the low-hanging fruit of enterprise video. These are the IoT use cases that are being transformed first, because old-time analog video is very expensive to use, maintain and store, whereas digital video handled inside a cloud or edge-computing system is much more effective, easier to use and maintain, and easier and cheaper to store.

“We are seeing significant growth in IoT markets worldwide, driven in part by a dramatic increase in vision applications, particularly those leveraging artificial intelligence (AI),” Lantzsch said. “These imaging and video use cases span nearly every IoT segment. They include finding product defects on assembly lines, managing inventory in retail, identifying equipment maintenance needs in remote locations and enabling public safety in cities and airports. They all leverage high-resolution cameras and create extraordinary amounts of data, which needs to be aggregated and analyzed.”

To address this expansive data growth, Intel this week introduced what it calls the OpenVINO (Open Visual Inference & Neural Network Optimization) toolkit. This development toolkit is designed to fast-track development of high-performance computer vision and deep learning inference applications at the edge.

OpenVINO Integrates With Other Apps

Intel customers can integrate the OpenVINO toolkit with devices running AWS Greengrass to perform machine learning inference at the edge, for one example.

Processing high-quality video requires the ability to rapidly analyze vast streams of data near the edge and respond in real time, moving only relevant insights to the cloud asynchronously, Lantzsch said. To process video data efficiently, enterprises need the right solution for the job. Unlike others with a one-size-fits-all philosophy, Intel believes the market requires a portfolio of scalable hardware and software solutions to move into an intelligent data-powered future.
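
The edge pattern Lantzsch describes can be sketched generically as follows. This is not the OpenVINO API; detect_defects() and send_to_cloud() are hypothetical stand-ins for a real inference engine and cloud uplink, and only compact insights, never raw frames, leave the device.

```python
# Generic sketch of edge video analytics: analyze frames locally and
# forward only the resulting insights (not raw video) to the cloud.
import cv2  # OpenCV for frame capture

def detect_defects(frame):
    """Placeholder for a vision model (e.g. one optimized with a toolkit like OpenVINO)."""
    return []  # list of detections, e.g. [{"label": "scratch", "confidence": 0.93}]

def send_to_cloud(summary):
    """Placeholder uplink: ship compact metadata asynchronously, never raw frames."""
    print("uplink:", summary)

def run_edge_loop(camera_index=0, max_frames=100):
    cap = cv2.VideoCapture(camera_index)
    frames, detections = 0, 0
    while frames < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        results = detect_defects(frame)   # inference stays on the edge device
        detections += len(results)
        if results:                        # react locally, in real time
            print("local action: flag item for inspection", results)
        frames += 1
    cap.release()
    send_to_cloud({"frames_analyzed": frames, "defects_found": detections})

if __name__ == "__main__":
    run_edge_loop()
```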

This includes widely deployed and available Intel computing products, including those with integrated graphics, Intel FPGAs (field-programmable gate arrays) and the Intel Movidius VPU (vision processing unit).

Thus, OpenVINO is the latest offering in the Intel Vision Products lineup of hardware and software that is dedicated to transforming vision data into business and security insights. This type of video system can identify faces, patterns of traffic, specific vehicles through license plates and so on, so that city, security and other types of administrators can glean knowledge—and find bad guys—in video surveillance.

Why Move Video Analytics to the Edge?

Adam Burns, Intel’s Director of Computer Vision and Digital Surveillance, told eWEEK that there are two main reasons for video and accompanying analytics to move to the edge.

“The first is economic, because the data itself lends itself to processing at the edge. The second is application-dependent, meaning either that the data is such that you want to maintain security and privacy, or you want to take action immediately on it, and keep it resident to where it’s happening,” Burns said.

There’s no one solution that can map to all the video analytics capabilities that are now being used, so Intel is offering an opportunity to set a foundational brick with this toolset.

The OpenVINO toolkit provides a high-performance solution for edge-to-cloud video analytics and deep learning. It empowers developers to deploy deep-learning inference and computer vision solutions, using a wide range of common software frameworks such as TensorFlow, MXNet and Caffe.

Intel’s vision products and the OpenVINO toolkit are being used by global partners such as Dahua for smart city and traffic solutions; GE Healthcare  in medical imaging; and Hikvision for industrial and manufacturing safety. Additional users currently include Agent Vi, Current by GE, Dell and Honeywell.

http://www.eweek.com

Friday, 25 May 2018

Oracle plans to dump risky Java serialization

Oracle plans to drop from Java its serialization feature, which has long been a thorn in the platform’s side when it comes to security. Also known as Java object serialization, the feature is used for encoding objects into streams of bytes. Used for lightweight persistence and for communication via sockets or Java RMI, serialization also supports the reconstruction of an object graph from a stream. 

Removing serialization is a long-term goal and is part of Project Amber, which is focused on productivity-oriented Java language features, says Mark Reinhold, chief architect of the Java platform group at Oracle.

To replace the current serialization technology, a small serialization framework would be placed in the platform once records, the Java version of data classes, are supported. The framework could support a graph of records, and developers could plug in a serialization engine of their choice, supporting formats such as JSON or XML, enabling serialization of records in a safe way. But Reinhold cannot yet say which release of Java will have the records capability.

Serialization was a “horrible mistake” made in 1997, Reinhold says. He estimates that at least a third—maybe even half—of Java vulnerabilities have involved serialization. Serialization overall is brittle but holds the appeal of being easy to use in simple use cases, Reinhold says.

Recently, a filtering capability was added to Java so that if serialization is used on a network and untrusted serialization data streams must be accepted, there is a way to restrict which classes can be deserialized, providing a defense mechanism against serialization’s security weaknesses. Reinhold says Oracle has received many reports of application servers running on the network with unprotected ports accepting serialization streams, which is why the filtering capability was developed.

https://www.infoworld.com

OpenStack Boosts Container Security With Kata Containers 1.0

On May 22 the OpenStack Foundation announced the Kata Containers 1.0 release, which is designed to bolster container security.

The Kata Containers project provides a virtualization isolation layer to help run multi-tenant container deployments in a more secure manner than running containers natively on bare-metal. The effort provides a micro-virtual machine (VM) layer that can run container workloads.

“Containers use cgroups, namespaces and other features of the Linux kernel to enforce rules on what a container can and can’t do,” the OpenStack Foundation’s Anne Bertucio said during an analyst briefing at the OpenStack Summit. “While cgroups and namespaces are good, they only provide one level of isolation between workloads.”

The Kata Containers project started in December 2017 as the first new standalone effort from the OpenStack Foundation that operates outside of the organization's existing structure for the development of the OpenStack cloud platform. On May 21, the OpenStack Foundation announced its second standalone effort, the Zuul continuous integration/continuous deployment (CI/CD) project.

The Kata Containers project was started as a joint effort between Intel, which had been working on its own Clear Containers technology for isolation, and Hyper.sh, which had been working on the runV container security technology. The Kata Containers 1.0 release represents the culmination of the effort to turn the work of Intel and Hyper.sh into a unified and stable codebase. Over the past six months, the Kata Containers project has also grown beyond its initial two supporters. The project now also benefits from the financial support of ARM, Canonical, Dell/EMC, Intel and Red Hat. Other vendors, including Microsoft, are also participating in the Kata Containers project at a technical level.

Microsoft Software Engineer Jessie Frazelle is on the Kata Containers architecture committee and was on the OpenStack Summit keynote stage to talk briefly about why she is interested in the project. Frazelle said that she first saw a demonstration of Intel's Clear Containers in 2015 and was immediately sold on the idea.

“With the merger of runV, community help and cloud providers, it can only mean better innovation in this space,” Frazelle said. “I’m super excited for the future and what this means for container infrastructure overall.”

Bertucio noted that with the Kata Containers 1.0 release, the project enables an Open Container Initiative (OCI)-compatible runtime and provides seamless integration with both the Kubernetes Container Runtime Interface (CRI) and Docker. Looking ahead to future releases, Bertucio said the project will aim to support multiple hypervisors and will also seek to enable support for accelerators, including GPUs.

Jonathan Bryce, the Executive Director of the OpenStack Foundation, commented during the analyst session that among the reasons Intel was originally interested in container security is that it maps to hardware security.

"They (Intel) have virtualization extensions that go all the way down to the processor and allow you to do trusted computing," Bryce said.

As such, Bryce said that by tying into the silicon's virtualization extensions, containers can be secured all the way down to the bare-metal hardware. He added that AMD also has a secure memory capability that can be enabled to work well with Kata Containers. Extending Kata Containers with hardware security elements also has cloud impact: Bryce said that Microsoft Azure, for example, can now combine Kata Containers elements with the hardware security provided by silicon vendors to provide additional isolation.

http://www.eweek.com

DNS in the cloud: Why and why not

As enterprises consider outsourcing their IT infrastructure, they should consider moving their public authoritative DNS services to a cloud provider’s managed DNS service, but first they should understand the advantages and disadvantages.

Advantages of Cloud DNS
Resiliency

Cloud DNS providers have fully redundant and geographically diverse networks and DNS server infrastructure that provide reliability and fault tolerance. Enterprises commonly lack redundancy in their DNS infrastructure because they use DNS servers that do not share synchronized, distributed zone information. The enterprise must ensure that this service is redundant, because if its non-redundant DNS servers were to fail, there would be significant business impacts. If the enterprise network lacks internal and internet redundancy and the network fails, then the reachability of its DNS infrastructure is also compromised. If your current DNS servers are not highly redundant, then a cloud DNS service would provide higher resiliency to failure.

Enterprises often maintain authoritative DNS servers on their Internet perimeter networks and allow them to be globally reachable over TCP port 53 and UDP port 53.  If an organization’s authoritative DNS servers are in one location, and they are servicing a global environment, then there is added latency for resolvers around the world that are distant from that location to fulfill queries. Significantly better performance would be achieved using a cloud DNS provider with numerous geographically diverse DNS servers using anycast, which provides high availability and performance by routing traffic to the “nearest” of a group of destinations.

Cloud DNS providers leverage anycast to create a highly scalable and redundant DNS infrastructure.  There would be extensive costs for an enterprise to build out this level of redundancy using anycast and BGP routing on their own.

Support for DNSSEC

Domain Name System Security Extensions (DNSSEC) provides a cryptographic method of authenticating DNS records and helps protect against many common DNS security issues. Most enterprises haven’t yet adopted DNSSEC because of their lack of familiarity with its configuration and its benefits. Enterprises may lack DNS servers that make it easy to establish DNSSEC configurations and to handle periodic key rotation and updating automatically. If a DNS administrator forgets the annual key-rotation steps, mistakes can be serious. A cloud DNS provider may enable DNSSEC automatically or make it far easier to implement DNSSEC and perform automatic key rotation.

DNS DDoS protection

If an enterprise were to deploy its own DNS servers, it would not have the capacity to absorb any significant-size DDoS attack on its DNS servers.  It would be cost-prohibitive for an enterprise to deploy highly scalable infrastructure required to absorb such an attack.  Resiliency against DNS DDoS attacks would improve when using a cloud DNS provider that has greater ability to absorb the attack, scale up with the attack or mitigate the attack quickly.  Cloud DNS providers have higher bandwidth links, diverse resources and the ability to scale up resources automatically based on transaction volume.

Improved security

Because DNS is an Internet-facing service, the enterprise must constantly monitor the security of this server, keep it patched and make sure it doesn’t become an open DNS resolver.  A cloud DNS provider would keep their redundant DNS servers continually patched, scanned, secured and monitored. 

Advanced traffic routing

Cloud DNS providers also offer advanced traffic-routing capabilities that may not be possible with an enterprise’s current on-premises DNS servers. For example, AWS’s Route 53 cloud DNS service offers advanced traffic-routing policies such as simple, failover, weighted round-robin, latency-based, geolocation and geoproximity routing. For an enterprise to create this same functionality, it would need to invest in geographically diverse DNS servers and sophisticated load-balancing functions at each site.
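
As a hedged example of what such routing looks like in practice, the following sketch uses Amazon Route 53’s API through boto3 to upsert latency-based records; the hosted zone ID, record name and addresses are placeholders.

```python
# Latency-based routing in Route 53 needs one record set per serving region,
# distinguished by SetIdentifier; Route 53 answers with the lowest-latency region.
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "ZEXAMPLE123"  # placeholder

def upsert_latency_record(region, set_id, ip_address, name="www.example.com."):
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Comment": "Latency-based routing entry for " + region,
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "A",
                    "SetIdentifier": set_id,   # unique per latency record
                    "Region": region,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip_address}],
                },
            }],
        },
    )

upsert_latency_record("us-east-1", "us-east", "198.51.100.10")
upsert_latency_record("eu-west-1", "eu-west", "203.0.113.20")
```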

Potential cost savings

Using a cloud-managed DNS service may save money compared to an enterprise purchasing redundant physical servers, licensing an operating system and staffing to maintain and configure DNS. If DNS servers are in need of a hardware or software upgrade, that might be the compelling event to defer a capital expenditure on new DNS servers and switch to a cloud-managed DNS service.

Enterprises may lack the ability to make DNS changes quickly with their current systems, and they may not be able to make software-driven, automatic changes based on a triggering event. Enterprises typically have internal IT processes that require submitting a support ticket to the DDI (DNS, DHCP and IP address management) team any time an addition or change is needed. Cloud DNS providers have software-programmable interfaces and scripts to handle the automatic creation and updating of DNS records. You can use their APIs to configure dynamic additions or changes to your DNS resource records.

Better monitoring, visibility, reporting

Many enterprises take their DNS servers for granted and do not fully understand the dependency their entire IT infrastructure has on DNS. Enterprises may lack monitoring visibility and the performance and operational metrics from their existing on-premises DNS systems. Typical on-premises DNS servers may not offer useful reporting or useful insights into DNS resolutions. Cloud DNS providers do a much better job of performing 24x7x365 monitoring and maintenance of their revenue-generating infrastructure.

Disadvantages of Cloud DNS
DNS managed-service crashes

An outage of the DNS provider’s infrastructure can cause disastrous consequences for its customers’ businesses.  Because all of an enterprise’s IT applications rely on network availability and DNS resolution, if the DNS fails, none of their business applications work.  This could have disastrous financial implications.  A couple of years ago, the DNS provider ChangeIP had a multi-day outage that left its customers unable to resolve DNS.  Most cloud DNS providers have SLAs, but there may not be sufficient penalties or consequential damages that could cover an enterprise’s financial risk.

Possible increased latency

If a DNS resolver is “far away” from a company from a network-topology perspective, then this adds latency to each client connection requiring a DNS resolution that is not cached locally.  To minimize latency and improve end-user application experience (UX), it is best to have the DNS resolver near the DNS client.  Having an on-premises DNS service that internal DNS clients can reach quickly can improve application response times for both internal and external applications.

Geolocation problems

There can be problems with geolocation if your DNS resolver is not close to you: content delivery networks (CDNs) may direct you to a server that is closer to your DNS resolver than to your actual location. For example, if an international enterprise uses a U.S.-based cloud DNS resolver service, this could cause problems with geographic content for its sites on other continents. Users on other continents will appear to be coming from the U.S. when connecting to content systems that use geographic proximity based on the IP address and location of the DNS resolver. Those users could be forced to traverse the globe as they are directed to U.S.-located content, experiencing higher latency and poor application response.

Undermining current DNS investments

If an organization has already invested in a sophisticated DNS-, DHCP- and IP-address-management (DDI) system, then there is financial justification to leverage the current DDI infrastructure.  Enterprises may have invested in redundant DNS infrastructure that uses a synchronized distributed database supported by a redundant network.  Enterprises may have invested in DDI infrastructure that has programmatic interfaces, software automation, secure DNS services, DNSSEC automation with monitoring visibility and reporting.

Loosening of DNS integration

Having DDI management fully integrated into a single platform has operational advantages.  Routing and addressing go hand-in-hand. Organizations carefully plan their IP addressing and DHCP scopes for their network topology, and DHCP leases are granted.  DDI systems perform dynamic DNS and provide a single management interface for these integrated functions.  DDI systems provide operational visibility to IP-address usage and offer valuable management of addressing resources.  When you separate out the external authoritative DNS into a separate non-integrated cloud-based service, then you give up some of the benefits of tightly integrated DDI functions.

Loss of complete DNS-configuration control

Some cloud-managed DNS services may not give you complete control of the DNS configuration. If the cloud-managed DNS service has only a rudimentary web interface allowing only a subset of resource-record types, and your organization has highly complex DNS requirements, then it might not be a fit. One size may not fit all, so you would need to determine whether you have specific requirements that can be met by a cloud DNS provider.

Examples of Cloud DNS Providers

Today, there are many different cloud DNS providers.  There are numerous dynamic DNS services available for free or for a nominal fee.  There are cloud DNS providers that allow you to use a web interface to configure highly resilient and geographically diverse authoritative DNS resolvers.  Cloud DNS providers have high bandwidth dual-protocol Internet connectivity to diverse data centers that house redundant and scalable DNS server infrastructure.  Cloud DNS providers have anycast addressing and dynamic routing already configured to their name services. 

When shopping for a DNS service provider, enterprises should inquire about these optional features and prioritize the features that they require.  There are cloud DNS providers that provide added security features such as DDoS protection, packet scrubbing and anti-spoofing.  Cloud DNS providers can make it trivially easy to implement DNSSEC for your domain and configure your DNSSEC resource records.  Cloud DNS providers may have RESTful APIs and programmable interfaces that aid in automation of configuration.

Here are the names of some cloud-managed DNS service providers: Akamai, Amazon Route 53, Cloudflare DNS, ClouDNS, DNSMadeEasy, Google Cloud DNS, Infoblox NIOS in the cloud, Microsoft Azure DNS, Neustar (acquired UltraDNS), NS1 Managed DNS, Oracle (acquired Dyn), Rackspace DNS - Cloud Control Panel, Verisign Managed DNS.

Comparing Performance
Before you choose a cloud-managed DNS provider, you may want to compare the performance of these companies’ offerings. There have been studies and evaluations of the various providers, but these surveys are made from the perspective of the individual taking the DNS performance measurements, and the location of the test source may not accurately represent your enterprise’s locations and Internet geography.

You may elect to perform some of your own measurements from your own locations to get an approximation of what your performance may actually be when you select a cloud DNS provider. There are several useful tools you can use to help you take these measurements, and a minimal do-it-yourself latency check follows the list below:

  • DNSDiag is an open source Python DNS diagnostics and performance measurement toolset that can help you perform your own testing.  Their dnsping.py utility can help you determine latency, dnstraceroute.py can help you compare Internet traffic paths to DNS servers, and dnseval.py can perform comparisons.
  • DNSPerf is a set of over 200 monitoring systems, provided by Prospect One, that can measure global DNS service performance. 
  • Namebench is an older, but still useful tool for assessing which public DNS resolver service may have the lowest latency from the perspective of your location.  There is a new Golang namebench 2.0 open source GitHub repository available.
  • ThousandEyes offers a commercial DNS monitoring service, and last year they published results from their performance measurements of popular DNS service providers.
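
For the do-it-yourself comparison referenced above, the following sketch times simple A-record queries against a few public resolvers from your own vantage point using the dnspython library; the resolver addresses and query name are illustrative.

```python
# Time DNS queries against several public resolvers and report average latency.
import time
import dns.message
import dns.query   # pip install dnspython

RESOLVERS = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1", "Quad9": "9.9.9.9"}

def time_query(resolver_ip, qname="www.example.com", attempts=3):
    """Return average query latency in milliseconds against one resolver."""
    total = 0.0
    for _ in range(attempts):
        query = dns.message.make_query(qname, "A")
        start = time.perf_counter()
        dns.query.udp(query, resolver_ip, timeout=2)
        total += (time.perf_counter() - start) * 1000
    return total / attempts

for name, ip in RESOLVERS.items():
    try:
        print(f"{name:10s} {time_query(ip):6.1f} ms")
    except Exception as exc:   # timeouts, blocked port 53, etc.
        print(f"{name:10s} error: {exc}")
```
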
https://www.networkworld.com


Thursday, 24 May 2018

PAR Technology Releases Platform for Cloud-Based Food Safety

According to a recent release, ParTech, a provider of point of sale (POS) and workforce efficiency solutions to the restaurant and retail industries, today announced the release of PAR SureCheck Food Safety Solution 10.0.

Building upon years of global food safety technology experience, over the last 18 months PAR has been building a new stack of cloud-based technology called SureCheck 10.0. With over 10,000 licensed clients installed in 14 different countries, PAR Technology has gathered vast experience and knowledge as a leader in food safety management solutions.
SureCheck is designed to offer users an easy-to-use enterprise mobile application specifically for food safety and task management. The new platform is iOS and Android compatible, and it allows users to quickly and efficiently conduct, monitor, document and trend operations that affect food quality, temperature monitoring and HACCP compliance, all of which are necessary to protect brands and increase consumer confidence.
Key features of this latest release include:
  • Advanced functionality that enhances the user experience when automating food safety and task management operations
  • Auto-advancement workflow for quick-service restaurant (QSR) operations
  • Customizable software that lets organizations personalize checklist workflows to meet internal food safety requirements
  • An easy-to-use application built specifically for mobile devices (iOS and Android)
  • The ability to capture photo/video proof with notes for recordkeeping and reporting purposes
  • The flexibility to alert one or more users when an observation triggers a corrective action
  • Rich reporting and cloud data storage that can help organizations establish best practices for food safety and task management
“SureCheck v10.0 is a major release that is built for velocity, performance, and scalability. By incorporating new methods for wide scale enterprise configuration, workflow and reporting into a single code base with the many insights and features incorporated over the past 9 years, we are excited to be introducing v10.0 to new and existing customers alike,” said John Sammon III, SVP & GM, SureCheck, ParTech. 
http://www.iotevolutionworld.com

OpenStack Moves Beyond the Cloud to Open Infrastructure

The OpenStack Summit got underway on May 21, with a strong emphasis on the broader open-source cloud community beyond just the OpenStack cloud platform itself.
At the summit, the OpenStack Foundation announced that it was making its open-source Zuul continuous integration/continuous delivery (CI/CD) technology a new top-level standalone project. Zuul has been the underlying DevOps CI/CD system used for the past six years to develop and test the OpenStack cloud platform.
During the OpenStack Summit keynotes, Jim Blair, principal software engineer with Red Hat and founding member of the Zuul project team, explained that every patch that is made in OpenStack project code is tested via Zuul before it is integrated into the project.
The OpenStack Zuul project is unrelated to, and should not be confused with, Netflix's open-source project of the same name, which is an edge routing service.
"Zuul is an automation system that is focused on project gating," Blair explained. "Every change should be a commit and every commit should be tested before it lands and related commits should all be tested together."
Blair explained that in Zuul, jobs are written with the open-source Ansible configuration management technology. He noted that with Ansible, the same playbook used for application deployment and configuration can be used for testing, giving developers a lot of flexibility. Zuul version 3, which became generally available in March, provides integration with GitHub, further decoupling the project from its OpenStack roots. Blair said that Zuul is designed to help developers work across project boundaries.
"Zuul v3 is capable of handling OpenStack's workflow and anything else you can throw at it," Blair said.
Open Infrastructure 
By making Zuul a top-level standalone project, the OpenStack Foundation is positioning it as something that can work with multiple open-source efforts. It's part of a broader push by the foundation to look beyond its own projects.
"We build and operate open infrastructure," Mark Collier, Chief Operating Officer of the OpenStack Foundation said during his keynote.
Collier said that today organizations expect their infrastructure to do more: run containers, use artificial intelligence, provide serverless capabilities and help meet compliance requirements. He also noted that compute is needed in more places than just data-center-hosted clouds, creating a need for edge computing capacity.
"Cloud consolidation is a myth," Collier said. "The cloud is diversifying, driven by demands of applications and workloads."
Collier noted that it has also become clear that no single cloud provider is enough to power global computing needs and, similarly, that no single open-source project is enough.
"None of us should just be thinking about our one piece," Collier said. "We have to look at the big picture because that's what operators need."
http://www.eweek.com

Wednesday, 23 May 2018

Magnetic smart fabrics will store data in clothes

High-density data could one day be stored in fabric patches embedded in people’s clothing, say scientists at the University of Washington. Importantly, it wouldn’t require electricity, so the smart fabric could be washed or ironed just like regular clothing. That could make it more convenient than other forms of memory.

Off-the-shelf conductive thread, which the scientists say they recently discovered can be magnetized, is being used in trials. The data is read using a simple magnetometer. The conductive thread is already used commercially, in gloves for operating touch screens, for example.

“You can think of the fabric as a hard disk,” said Shyam Gollakota, associate professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, announcing the breakthrough on the school’s website at the end of last year. “You’re actually doing this data storage on the clothes you’re wearing.”

In tests, the engineers stored a secure-area door passcode on conductive fabric sewn into the cuff of an ordinary shirt sleeve, the kind one puts on every morning. Swiping the cuff over the door-mounted interface unlocked the door.

Electronic-free storage
We’ve seen e-textiles, or smart garments, before. They include performance-enhancing fabrics made to regulate body temperature or control muscle movement for athletes, as well as fabrics that light up or change appearance for fun.

But those technologies have generally required some form of power to function, whether from batteries or from the environment, for example by harnessing body heat. Electricity in clothing, though, is a problem in the rain and in the wash: waterproofing needs to be built in, and that’s hard to do.

Gollakota, however, says those things won’t be an issue with his gear.

“This is a completely electronic-free design, which means you can iron the smart fabric or put it in the washer and dryer,” he says.

How data is stored in cloth
Data is encoded by polarizing single-bit cells created in a strip of embroidered, magnetized thread. North and south polarities are programmed with a magnet and correspond to a 1 and a 0 bit. The data is then read back; in the experiments, the researchers used a smartphone’s compass function.

Unmagnetized sections of thread separate the symbols to prevent interference. It’s a “passive approach,” Gollakota says in the paper he co-wrote with Justin Chan (pdf).
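The read-out can be pictured as simple thresholding of magnetometer samples, as in the hypothetical Python sketch below; the field values and threshold are invented for the example, and the actual signal processing in the paper is more involved.

def decode_strip(samples_uT, threshold=5.0):
    """Turn a sequence of magnetometer samples (microtesla) into bits."""
    bits = []
    for sample in samples_uT:
        if sample > threshold:         # strong north polarity -> 1
            bits.append(1)
        elif sample < -threshold:      # strong south polarity -> 0
            bits.append(0)
        # readings near zero come from the un-magnetized spacer thread and are skipped
    return bits

# One invented sample per bit cell, with near-zero separator readings in between.
samples = [12.3, 0.4, -11.8, 0.1, 13.0, -0.2, -12.5]
print(decode_strip(samples))   # -> [1, 0, 1, 0]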

Embedded tech
Interestingly, we’ve been hearing about on-body (or even in-body) interfaces. RFID microchips, of the kind used to identify pets when they get lost, are being embedded between the thumb and forefinger by enthusiastic employees at some companies, keen to authenticate themselves to photocopiers without having to remember to carry a key card.

And Facebook is apparently working on a technology called transcutaneous language communication (TLC) that would let people feel incoming texts through their skin when they can’t get to their phone; words are converted into vibrations.

MailOnline wrote last month about a standard 3-D printer that can print biological sensors onto hands.

In the case of the University of Washington’s magnetic smart-fabric memory, which is less intrusive for the squeamish, future efforts are geared toward creating stronger magnetic fields that could hold larger amounts of data.

Conceivably, then, one day we might see the ultimate in low-latency data: data stored on an actual person. Now, that’s an edge data center.

https://www.networkworld.com/