Friday, 31 August 2018

AI boosts data-center availability, efficiency

Artificial intelligence is set to play a bigger role in data-center operations as enterprises begin to adopt machine-learning technologies that have been tried and tested by larger data-center operators and colocation providers.

Today’s hybrid computing environments often span on-premise data centers, cloud and colocation sites, and edge computing deployments. And enterprises are finding that a traditional approach to managing data centers isn’t optimal. By using artificial intelligence, as played out through machine learning, there’s enormous potential to streamline the management of complex computing facilities.

AI in the data center, for now, revolves around using machine learning to monitor and automate the management of facility components such as power and power-distribution elements, cooling infrastructure, rack systems and physical security.

Inside data-center facilities, there are increasing numbers of sensors that are collecting data from devices including power back-up (UPS), power distribution units, switchgear and chillers. Data about these devices and their environment is parsed by machine-learning algorithms, which cull insights about performance and capacity, for example, and determine appropriate responses, such as changing a setting or sending an alert.  As conditions change, a machine-learning system learns from the changes – it's essentially trained to self-adjust rather than rely on specific programming instructions to perform its tasks.
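
As a rough illustration of that pattern, here is a minimal Python sketch of a self-adjusting monitor that learns a sensor’s normal range from its own history and flags outliers. The sensor name, window size and threshold are hypothetical, not any vendor’s actual product.

```python
from collections import deque
from statistics import mean, stdev

class SensorMonitor:
    """Learn a sensor's normal range from history and flag outliers."""
    def __init__(self, window=288, threshold=3.0):
        self.history = deque(maxlen=window)  # e.g., 24 hours of 5-minute samples
        self.threshold = threshold           # z-score that triggers an alert

    def observe(self, value):
        alert = False
        if len(self.history) >= 3:           # tiny baseline for the demo; use more in practice
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                alert = True                 # e.g., notify or adjust a setpoint
        self.history.append(value)           # keep learning as conditions change
        return alert

chiller_temp = SensorMonitor()
for reading in [18.2, 18.4, 18.1, 24.9]:     # replace with a real telemetry feed
    if chiller_temp.observe(reading):
        print("chiller supply temperature outside learned range")
```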

The goal is to enable data-center operators to increase the reliability and efficiency of the facilities and, potentially, run them more autonomously. However, getting the data isn’t a trivial task.

A baseline requirement is real-time data from major components, says Steve Carlini, senior director of data-center global solutions at Schneider Electric. That means chillers, cooling towers, air handlers, fans and more. On the IT equipment side, it means metrics such as server utilization rate, temperature and power consumption.

“Metering a data center is not an easy thing,” Carlini says. “There are tons of connection points for power and cooling in data centers that you need to get data from if you want to try to do AI.”

IT pros are accustomed to device monitoring and real-time alerting, but that’s not the case on the facilities side of the house. “The expectation of notification in IT equipment is immediate. On your power systems, it’s not immediate,” Carlini says. “It’s a different world.”

It’s only within the last decade or so that the first data centers were fully instrumented, with meters to monitor power and cooling. And where metering exists, standardization is elusive: Data-center operators rely on building-management systems that utilize multiple communication protocols – from Modbus and BACnet to LonWorks and Niagara – and have had to be content with devices that don’t share data or can’t be operated via remote control. “TCP/IP, Ethernet connections – those kinds of connections were unheard of on the powertrain side and cooling side,” Carlini says.

The good news is that data-center monitoring is advancing toward the depth that’s required for advanced analytics and machine learning. “The service providers and colocation providers have always been pretty good at monitoring at the cage level or the rack level, and monitoring energy usage. Enterprises are starting to deploy it, depending on the size of the data center,” Carlini says.

Machine learning keeps data centers cool
A Delta Air Lines data center outage, attributed to electrical-system failure, grounded about 2,000 flights over a three-day period in 2016 and cost the airline a reported $150 million. That’s exactly the sort of scenario that machine learning-based automation could potentially avert. Thanks to advances in data center metering and the advent of data pools in the cloud, smart systems have the potential to spot vulnerabilities and drive efficiencies in data-center operations in ways that manual processes can’t.

A simple example of machine learning-driven intelligence is condition-based maintenance that’s applied to consumable items in a data center, for example, cooling filters. By monitoring the air flow through multiple filters, a smart system could sense if some of the filters are more clogged than others, and then direct the air to the less clogged units until it’s time to change all the filters, Carlini says.

Another example is monitoring the temperature and discharge of the batteries in UPS systems. A smart system can identify a UPS system that’s been running in a hotter environment and might have been discharged more often than others, and then designate it as a backup UPS rather than a primary. “It does a little bit of thinking for you. It’s something that could be done manually, but the machines can also do it. That’s the basic stuff,” Carlini says.
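
A minimal sketch of that ranking logic might look like the following Python; the field names stand in for real UPS telemetry and are hypothetical.

```python
def rank_ups_units(units):
    """Designate the most-worn UPS unit as a backup rather than a primary."""
    # Higher average temperature and more discharge cycles mean more wear.
    wear = lambda u: (u["avg_temp_c"], u["discharge_cycles"])
    ranked = sorted(units, key=wear)
    return ranked[:-1], ranked[-1:]   # (primaries, backups)

primaries, backups = rank_ups_units([
    {"id": "ups-1", "avg_temp_c": 24.0, "discharge_cycles": 3},
    {"id": "ups-2", "avg_temp_c": 31.5, "discharge_cycles": 9},
    {"id": "ups-3", "avg_temp_c": 25.5, "discharge_cycles": 4},
])
print([u["id"] for u in backups])     # ['ups-2'], the hottest, most-cycled unit
```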

Taking things up a level is dynamic cooling optimization, which is one of the more common examples of machine learning in the data center today, particularly among larger data-center operators and colocation providers.

With dynamic cooling optimization, data center managers can monitor and control a facility’s cooling infrastructure based on environmental conditions. When equipment is moved or computing traffic spikes, heat loads in the building can change, too. Dynamically adjusting cooling output to shifting heat loads can help eliminate unnecessary cooling capacity and reduce operating costs.
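
At its simplest, that is a control loop that tracks cooling output to the measured heat load. The Python below is only a toy proportional controller under assumed numbers; production systems add hysteresis, airflow modeling and per-zone control.

```python
def cooling_setpoint(heat_load_kw, capacity_kw, min_pct=30.0, max_pct=100.0):
    """Scale cooling output (as a percentage of capacity) with the heat load."""
    pct = 100.0 * heat_load_kw / capacity_kw
    return max(min_pct, min(max_pct, pct))   # clamp to the unit's safe range

# A traffic spike raises the heat load, and cooling follows it,
# instead of running flat-out around the clock.
print(cooling_setpoint(heat_load_kw=180, capacity_kw=400))  # 45.0
print(cooling_setpoint(heat_load_kw=350, capacity_kw=400))  # 87.5
```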

Colocation providers are big adopters of dynamic cooling optimization, says Rhonda Ascierto, research director for the datacenter technologies and eco-efficient IT channel at 451 Research. “Machine learning isn’t new to the data center,” Ascierto says. “Folks for a long time have tried to better right-size cooling based on capacity and demand, and machine learning enables you to do that in real time.”

Vigilent is a leader in dynamic cooling optimization. Its technology works to optimize the airflow in a data center facility, automatically finding and eliminating hot spots.

Data center operators tend to run much more cooling equipment than they need to, says Cliff Federspiel, founder, president and CTO of Vigilent. “It usually produces a semi-acceptable temperature distribution, but at a really high cost.”

If there’s a hot spot, the typical reaction is to add more cooling capacity. In reality, higher air velocity can produce pressure differences, interfering with the flow of air through equipment or impeding the return of hot air back to cooling equipment. Even though it’s counterintuitive, it might be more effective to decrease fan speeds, for example.

Vigilent’s machine learning-based technology learns which airflow settings optimize each customer's thermal environment. Delivering the right amount of cooling, exactly where it’s needed, typically results in up to a 40% reduction in cooling-energy bills, the company says.

Beyond automating cooling systems, Vigilent’s software also provides analytics that customers are using to make operational decisions about their facilities.

“Our customers are becoming more and more interested in using that data to help manage their capital expenditures, their capacity planning, their reliability programs,” Federspiel says. “It’s creating opportunities for lots of new kinds of data-dependent decision making in the data center.”

AI makes existing processes better
Looking ahead, data-center operators are working to extend the success of dynamic-cooling optimization to other areas. Generally speaking, areas that are ripe for injecting machine learning are familiar processes that require repetitive tasks.

“New machine learning-based approaches to data centers will most likely be applied to existing business processes because machine learning works best when you understand the business problem and the rules thoroughly,” Ascierto says.

Enterprises have existing monitoring tools, of course. There’s a longstanding category of data-center infrastructure management (DCIM) software that can provide visibility into data center assets, interdependencies, performance and capacity. DCIM software tackles functions including remote equipment monitoring, power and environmental monitoring, IT asset management, data management and reporting. Enterprises use DCIM software to simplify capacity planning and resource allocation as well as ensure that power, equipment and floor space are used as efficiently as possible.

“If you have a basic monitoring and asset management in place, your ability to forecast capacity is vastly improved,” Ascierto says. “Folks are doing that today, using their own data.”

Next up: adding outside data to the DCIM mix. That’s where machine learning plays a key role.

Data-center management as a service, or DMaaS, is a service that’s based on DCIM software. But it’s not simply a SaaS-delivered version of DCIM software. DMaaS takes data collection a step further, aggregating equipment and device data from scores of data centers. That data is then anonymized, pooled and analyzed at scale using machine learning.

Two early players in the DMaaS market are Schneider Electric and Eaton. Both vendors mined a slew of data from their years of experience in the data-center world, which includes designing and building data centers, building management, electrical distribution, and power and cooling services.

“The big, significant change is what Schneider and Eaton are doing, which is having a data lake of many customers’ data. That’s really very interesting for the data-center sector,” Ascierto says.

Access to that kind of data, harvested from a wide range of customers with a wide range of operating environments, enables an enterprise to compare its own data-center performance against global benchmarks. For example, Schneider’s DMaaS offering, called EcoStruxure IT, is tied to a data lake containing benchmarking data from more than 500 customers and 2.2 million sensors. 

“Not only are you able to understand and solve these issues using your own data. But also, you can use data from thousands of other facilities, including many that are very similar to yours. That’s the big difference,” Ascierto says.

Predictive and preventative maintenance, for example, benefit from deeper intelligence. “Based on other machines, operating in similar environments with similar utilization levels, similar age, similar components, the AI predicts that something is going to go wrong,” Ascierto says.

Scenario planning is another process that will get a boost from machine learning. Companies do scenario planning today, estimating the impact of an equipment move on power consumption, for example. “That’s available without machine learning,” Ascierto says. “But being able to apply machine-learning data, historic data, to specific configurations and different designs – the ability to be able to determine the outcome of a particular configuration or design is much, much greater.”

Risk analysis and risk mitigation planning, too, stand to benefit from more in-depth analytics. “Data centers are so complex, and the scale is so vast today, that it’s really difficult for human beings to pick up patterns, yet it’s quite trivial for machines,” Ascierto says.

In the future, widespread application of machine learning in the data center will give enterprises more insights as they make decisions about where to run certain workloads. “That is tremendously valuable to organizations, particularly if they are making decisions around best execution venue,” Ascierto says. “Should this application run in this data center? Or should we use a colocation data center?”

Looking further into the future, smart systems could take on even more sophisticated tasks, enabling data centers to dynamically adjust workloads based on where they will run the most efficiently or most reliably. “Sophisticated AI is still a little off into the future,” Carlini says.

In the meantime, for companies that are just getting started, he stresses the importance of getting facilities and IT teams to collaborate more.

“It’s very important that you consider all the domains of the data center – the power, the cooling and the IT room,” Carlini says. The industry is working hard to ensure interoperability among the different domains’ technologies. Enterprises need to do the same on the staffing front.

“Technically it’s getting easier, but organizationally you still have silos,” he says.

https://www.networkworld.com

Cloud-native devops won’t work without test automation

It’s 8:00 p.m., and you’re looking to complete a sprint to push out a net new cloud-native application. You make the deadline, but once the application goes to the testing group, it takes two more weeks to push it first to deployment and then to ops. Considering the time it took to develop the damn thing from idea to ops, agile development just is not … well … agile.

What went wrong? The problem today is that not enough automation exists for the testing and deployment of cloud-native applications, and thus those pesky people must get involved in the testing process, which slows things down and raises the likelihood of errors. Moreover, there are not enough testers who understand what cloud-native application testing should be, so there is added latency in figuring out the approaches and mechanisms for testing.

Consider, for example, the ability to determine the stability of an application that uses a cloud-native identity and access management system, or native encryption. Or the ability to determine whether scaling up to six server instances is enough, using an autoscaling cloud-native service. Such tests are specific to a particular cloud provider.

Many experts call for those testing cloud-native applications to learn much more about what cloud-native is and does, what the best practices are, and what’s a good cloud-native application and what’s not.

But the best advice I have is to remove them altogether, and instead put the burden back on the cloud-native application developers to add test planning, including scripts to automate testing, as well as infrastructure as code (IaC) that tells the cloud computing provider how to configure the platform where the application will run.
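
As a sketch of what that can look like, the developers might ship automated checks like the following alongside the application and its IaC template; the file name, keys and limits here are hypothetical, not a prescription.

```python
# test_deploy.py -- runs under pytest as part of the delivery pipeline,
# written by the developers rather than handed to a separate testing group.
import json

def load_iac_template(path):
    """Read the (hypothetical) JSON infrastructure-as-code template."""
    with open(path) as f:
        return json.load(f)

def test_autoscale_ceiling_and_encryption(tmp_path):
    # Write a stand-in template; in practice this file lives in the repo.
    template = {"min_instances": 2, "max_instances": 6, "encrypted": True}
    p = tmp_path / "infra.json"
    p.write_text(json.dumps(template))

    cfg = load_iac_template(p)
    assert cfg["max_instances"] <= 8   # ceiling the team has load-tested
    assert cfg["encrypted"] is True    # native encryption must stay enabled
```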

The approach I’m outlining has worked great in the PowerPoint presentations at devops conferences, but it's not getting the real-world adoption it needs. It continues to be the missing link at most enterprises that are using devops and cloud computing. Indeed, I suspect that testing processes for cloud-native applications continue to be largely a set of manual processes, assisted by some automation for most enterprises.

That’s not where we need to be. Cloud-native application test automation can be pretty much void of people, with the majority of the testing automated in ways that are defined by the people who built the thing: the developers.

Moreover, this will reduce the latency that we’re seeing with devops today, and even make the applications better tested before deployment. That in turn will make ops and the users much happier.

https://www.infoworld.com

Thursday, 30 August 2018

Dell EMC Expands Integration With VMware for Multicloud

Dell EMC officials are looking to VMware as they expand their product portfolio for a world that is increasingly embracing multiple clouds for their traditional and newer workloads.

That reliance was put on display Aug. 27 at the opening day of the VMworld 2018 show in Las Vegas, where Dell EMC unveiled a broad range of infrastructure, software and services updates aimed at making it easier for organizations to not only get a cloud environment up and running but also to more seamlessly move applications and workloads from one cloud to another, or between the cloud and their on-premises infrastructures.

In a highly distributed and multiple-cloud world, the key is to have a consistent operating environment that is the same in both the cloud and on-premises, according to Sam Grocott, senior vice president of marketing for Dell EMC Infrastructure Solutions Group. Such a single operating environment is a key part of the company’s overall cloud strategy, Grocott told journalists during an online press conference.

“We’re seeing a large shift of applications that were born in the cloud starting to be repatriated back,” he said. “Customers really need to plan not only for a multicloud world but a world where workloads are going to be dynamic, going on- and off-prem seamlessly and change over time. The most important benefit of having a consistent operating environment is that they’re able to evaluate the application that is right for each cloud and then move it back and forth without having to change the operating environment.”

In a recent study conducted by analysts with IDC, 64 percent of survey respondents said they are using a multicloud strategy with either low or high interoperability capabilities, while another 7 percent said they are adopting a hybrid cloud approach of both public clouds and on-premises environments. Only 28 percent said they are using a single cloud provider for their workloads. In addition, 85 percent said they are moving workloads from the cloud back into their on-premises environments, speaking to the demand for easier application movement.

The move to the multicloud has become a key focus for a broad array of tech vendors. Cisco Systems officials for a few years have been talking about the use of multiple clouds by businesses, which are loath to put all their applications and data into a single cloud provider. Spreading them around to a variety of cloud providers, from hyperscalers like Amazon Web Services (AWS), Microsoft Azure and Google Cloud to others like VMware and Oracle, protects the customers if one of the services goes down and enables businesses to assess which applications run best on which clouds. Hewlett Packard Enterprise, Nutanix and others also are pushing multicloud strategies.

However, having a consistent operating environment—which includes having consistent platforms, tools and services—for the multiple clouds and on-premises environments means adopting a multicloud approach doesn’t have to be a major headache, Grocott said. Dell EMC offers a broad range of platforms, infrastructures, services and consumption models to address the growing cloud needs of customers, particularly as they balance their traditional applications with newer workloads that can involve artificial intelligence (AI), machine learning, data analytics and other emerging technologies.

At VMworld, Dell EMC officials are rolling out new capabilities across a range of areas, including platforms, where they are showing off new VxRail and VxRack SDDC (software-defined data center) hyperconverged infrastructure solutions. With VxRail, Dell EMC is offering synchronous support for VMware software. Essentially, within 30 days of a new VMware software release, VxRail will support it, Grocott said. In addition, VxRail will offer expanded disaster recovery support for VMware Cloud and improved network design via its Network Fabric Design Center.

Dell EMC also is introducing the 2U (3.5-inch) VxRail G560, which includes 1.75 times the Intel processing cores of the previous VxRail G series, four times the memory and three times the storage capacity, along with an improved boot device.

The network is becoming a more strategic component of hyperconverged infrastructures, and Dell EMC’s offering will streamline network design, customization and configuration and improve interoperability, he said.

With VxRack SDDC, Dell EMC is supporting the latest VMware Cloud Foundation releases and closer alignment with VxRail hardware offerings, including the storage-dense S-Series.

Other moves include making the Cloud Snapshot Manager tool, which is designed for backup and recovery jobs on public clouds, easier to run on AWS and now also available on Azure. In addition, with the latest version of Data Domain Virtual Edition, the vendor is adding support for the KVM hypervisor (to go along with VMware and Hyper-V) and scaling the storage capacity in the cloud from 60TB to 96TB.

The vendor also is introducing the UnityVSA (Virtual Storage Appliance) Cloud Edition, which can run on AWS, enables up to 256TB file systems, and offers software-defined storage for application test and development work and disaster recovery. Enhancements to Dell EMC’s Cloud IQ offering, which can monitor the health of a storage system and detect and resolve problems, include support for five Dell EMC storage platforms, including PowerMax and VMax, deeper integration with VMware, and a mobile app for both Apple and Android smartphones, enabling IT administrators to keep tabs via their mobile devices.

Dell EMC also is increasing the storage capacity of its Elastic Cloud Storage (ECS) offering by more than 50 percent, introducing a PC-as-a-service offering for smaller businesses with 20 to 300 units and unveiling the Dell EMC Cloud Marketplace, an online portal designed to give customers tools and information about the vendor’s cloud offerings.

http://www.eweek.com

Tuesday, 21 August 2018

Dell EMC Designs New Networking Switch for Speed, Big Data Centers

Dell EMC is expanding its portfolio of open networking systems to include a 100 Gigabit Ethernet switch designed for hyperscale data centers, large enterprises and service providers.

The highly dense Z9264F-ON switch, with 64 ports of 100GbE in a 2U (3.5-inch) form factor, is designed to address the rapid changes occurring in data centers and the rising demand for fast 100GbE networking, according to company officials.

With the rise of cloud computing, greater mobility, the internet of things (IoT) and edge computing, along with the adoption of virtualization and automation technologies, the need for faster connectivity between compute systems and storage is growing.

At the same time, organizations increasingly look for more software-defined environments, where software is decoupled from hardware and customers have greater choice of what software to run on their data center systems.

“The new Z9264F-ON delivers on that desire, providing our customers with the most capable 100GbE switching platform in the industry and granting our customers maximum control over their network spend and their infrastructure,” Tom Burns, senior vice president and general manager of networking and solutions at Dell EMC, said in a statement.

Dell officials four years ago introduced the company’s Open Networking initiative, building Dell networking gear that could run either the vendor’s own networking software or software from third parties such as Cumulus Networks and Big Switch Networks. The systems also offer a choice of merchant silicon from Intel, Broadcom and Cavium, whose chips are based on the Arm architecture. Cavium is now owned by Marvell Technology.

Other vendors, including Hewlett Packard Enterprise—with its Altoline switches—and Juniper Networks, now offer similar open networking systems. Such open systems are in line with the software-defined networking (SDN) push, which promises greater flexibility and programmability in the network and enables networking tasks such as load balancing and routing to be done in software that can run on commodity hardware, including low-cost white boxes.

According to Paul Parker-Johnson, principal analyst with ACG Research, open-source networking hardware and software will grow about 40 percent a year through 2023, reaching a point where it will account for 15 percent of the overall data center switching and routing market.

The demand for speed in the data center continues to grow as well. According to analysts at IDC, revenue for 100GbE switches in the first three months of the year grew 83.8 percent over the same period in 2017, to $742 million.



Network port shipments jumped 117.7 percent year-over-year. Only the market for 25GbE switches grew faster, with revenue increasing 176 percent and port shipments up 359 percent, an indication that both 25GbE and 100GbE provided a better price-performance ratio than 40GbE, according to the analysts.

"There are two macro trends that contributed to growth—the emergence of next-generation software-based network intelligence platforms that add to the intrinsic value of networking, and the push by large enterprises, hyperscalers, and service providers to leverage faster Ethernet switching speeds for cloud rollouts,” Rohit Mehra, vice president of network infrastructure at IDC, said in a statement. “Both trends bode well for this industry moving forward.”

The Z9264F-ON switch is the latest in Dell EMC’s growing portfolio of Open Networking hardware. The switch is powered by Broadcom’s 6.4 Terabit-per-second StrataXGS Tomahawk II chip and is aimed at high-performance data center spine or fabric environments. The switch comes with a range of options, including 64 100GbE or 50GbE ports or 128 25GbE ports.

There also are multiple software options. The Z9264F-ON—which will be on display at the VMworld show in Las Vegas starting Aug. 26 and generally available Aug. 31—can run Dell EMC’s OS10 Enterprise Edition or the open-source version, OS10 Open Edition, which is based on the Linux Foundation’s OpenSwitch software. Other software options include commercial OSes from Big Switch, Cumulus, Pluribus Networks or IP Infusion as well as free versions of OpenSwitch or SONiC from the Open Compute Project.

The open switch combined with software from Dell EMC or partners can create a flexible physical networking environment, according to company officials. Dell EMC also is partnering with VMware—another Dell Technologies company—to combine the physical network with virtual networks to support virtual machines and containers.

By running the Z9264F-ON, customers can support their Virtual Cloud Networks, which are built using VMware’s NSX software, according to Peder Ulander, vice president of product marketing at VMware’s Networking and Security Business Unit.

http://www.eweek.com

Saturday, 18 August 2018

Oracle offers GraphPipe spec for machine learning data transmission

Oracle has developed an open source specification for transmitting tensor data, which the company wants to become a standard for machine learning.
Called GraphPipe, the specification provides a protocol for network data transmission. GraphPipe is intended to bring the efficiency of a binary, memory-mapped format while being simple and light on dependencies. There also are clients and servers for deploying and querying machine learning models from any framework.
It includes:
  • A set of flatbuffer definitions. Flatbuffers are similar to Google protocol buffers, with the additional benefit of avoiding memory copies during deserialization. The flatbuffer definitions provide a request message that includes input tensors, input names, and output names.
  • Guidelines for serving models.
  • Examples of serving models from various machine learning frameworks.
  • Client libraries for querying models served through GraphPipe. Clients are available for Python, Go, and Java. There’s a plugin for Google’s TensorFlow library, for including a remote model inside a local TensorFlow graph.
With GraphPipe, a remote model accepts a request message and returns one tensor per output name. The model also provides metadata about types and shapes of inputs and outputs. 
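
To make that concrete, here is a minimal client call in Python, patterned on the usage shown in the project’s documentation; the server address is a placeholder for a locally running model server, and the input shape is hypothetical.

```python
import numpy as np
from graphpipe import remote  # Oracle's GraphPipe client library

request = np.random.rand(1, 10).astype(np.float32)        # one input tensor
prediction = remote.execute("http://127.0.0.1:9000", request)
print(prediction)   # one tensor per requested output name
```
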
Oracle says GraphPipe addresses three persistent challenges in machine learning:
  • There is no standard for model serving APIs, leaving developers likely stuck with whatever the framework provides, which could be protocol buffers or custom JSON. An application will generally need a custom client to talk to the deployed model. The situation worsens if multiple frameworks are being used.
  • Building a model server can be complicated.
  • Serving tensor data from a complex model via a Python-JSON API is insufficient for performance-critical applications.

Where to download GraphPipe

You can download GraphPipe from Oracle’s GitHub repo site.
https://www.infoworld.com

Friday, 17 August 2018

Possible Python rival? Programming language Julia is winning over developers

Python is now one of the most popular programming languages among developers and could soon overtake C++. But a much younger language, Julia -- a possible alternative to Python -- is catching on quickly, according to developer-focused analyst RedMonk.

While Python has been around for nearly 30 years and is being spurred on by machine learning and data science, Julia has only been available since 2012, yet it is already showing up in numerous language popularity rankings.

Last week, analysts from the TIOBE programming language index noted that Julia for the first time made its top 50 list.

Stephen O'Grady, co-founder of RedMonk, has also seen growing interest in Julia, which rose three spots to 36th place over the past three months, according to the firm's latest rankings. It was also the fourth consecutive quarter of growth, up from 52nd place a year ago.

O'Grady notes that RedMonk last week received its first-ever inquiry about Julia and took note because it came from a "large vendor" who asked: "What are your thoughts on Julia? Is it going to remain a niche language or grow or die?"

Its growing popularity could be explained by the goal Julia's four makers outlined when they unveiled it in 2012: to create a perfect language that suited their tasks in scientific computing, machine learning, data mining, large-scale linear algebra, and distributed and parallel computing.

"We want a language that's open source, with a liberal license. We want the speed of C with the dynamism of Ruby. We want a language that's homoiconic, with true macros like Lisp, but with obvious, familiar mathematical notation like Matlab," they wrote.

"We want something as usable for general programming as Python, as easy for statistics as R, as natural for string processing as Perl, as powerful for linear algebra as Matlab, as good at gluing programs together as the shell. Something that is dirt simple to learn, yet keeps the most serious hackers happy. We want it interactive and we want it compiled."

According to Julia's makers, who also run the company Julia Computing, Julia has been downloaded two million times.

Julia nonetheless remains a long way behind older and more widely taught languages, as well as newer but fast-growing languages driven by mobile platforms, such as Kotlin for Android developers and Swift, Apple's language for iOS developers and its replacement for Objective-C.

O'Grady says Julia's steady rise makes it one to watch but added that "the esoteric nature of the language may yet relegate it to niche status".

The other notable trend is that after months of rapid ascent, both Kotlin and Swift fell one spot this quarter.

Swift previously was in equal 10th place with Objective-C, and is now in 11th. Kotlin fell one place from 27th to 28th. Meanwhile, Google-created Go rose two places to 14th while the Microsoft-backed TypeScript fell two places to 16th.

RedMonk's current top-10 list contains all the usual suspects; in descending order it includes JavaScript, Java, Python, PHP, C#, CSS, Ruby, C and Objective-C.

https://www.zdnet.com

Thursday, 16 August 2018

Oracle Adds Transaction Processing to its Autonomous Database Cloud

Oracle CTO Larry Ellison on Aug. 7 announced the release of new transaction processing features for the Oracle Autonomous Database Cloud, the latest milestone for the cloud database since the company introduced autonomous data warehousing capabilities earlier this year.

Ellison said that with the addition of transaction processing Oracle offers a comprehensive, autonomous solution.

“With the immediate availability of transaction processing, the database can optimize itself for transactions. It can run batch programs, reporting, Internet of things and mixed workloads. With these two systems, warehousing and transactions, Oracle handles all of your workloads,” Ellison said during an Oracle webcast Aug. 7.

He went on to compare the autonomous database capabilities to self-driving cars.

“Everything is optimized; there’s nothing to learn and nothing to do. How hard is it to learn a self-driving car? What class do you take? It seems strange, but there is nothing to learn.”

Oracle uses machine learning and related artificial intelligence technology to create database services that learn and adapt on their own. One of the big benefits of the technology that Ellison has touted since announcing the Autonomous Database last fall is its ability to automatically apply security patches and updates to what is typically a manual and time-consuming process.

Ellison also couldn’t resist another opportunity to bash Oracle’s top cloud rival, Amazon Web Services, in discussing the Oracle Autonomous Database’s ability to allocate more system capacity as needed and de-allocate compute and other resources when they’re not needed.

“Amazon says they have an elastic cloud, but their databases aren’t elastic,” said Ellison. “Oracle allocates storage, compute and memory automatically, but when you run your application and you’re not using all that capacity, say in the middle of night, Oracle starts de-allocating servers or it automatically adds servers whenever they’re needed.

“Performance is sustained if you scale up or down and you don’t pay for what you don’t use. Amazon can’t do that. It can’t dynamically add a server or take one away when there is no demand. They’re not serverless.”

During a customer panel, David VanWiggeren, CEO of Drop Tank, a customer loyalty services company, said he’s seeing a lot of benefit from the autonomous features. Five years ago Drop Tank had a hundred gas stations in its customer network; it has since branched out to other industries and now has 3,500 partners.

“There’s been a ton of integration that started with Oracle Cloud Integration services,” said VanWiggeren, noting deals the company has with Southwest Airlines and La Quinta Inns & Suites, among others.

He said an airline partner sent a promotional email to millions of members and Drop Tank was able to see a rapid increase in sign-ups within 15 minutes. “The system handled it smoothly, which gave us a lot of confidence in the autonomous capabilities,” said VanWiggeren.

Ellison previews the next version of Oracle Cloud

Ellison also previewed some features that will come in Oracle Database 19c, the next version, which he said will be available by January 2019, if not sooner.

“As you upgrade to 19c, there are no performance regressions, everything will run faster or—worst case—the same. There’s no more need for regression performance testing, which is huge, because the system does it for you,” said Ellison.

He also said that most customers will want to go with the serverless features of 19c, which enable customers to provision server resources; if those resources aren’t being used, the system makes them available to other tenants or other Oracle Cloud customers.

“We think a lot of people want that. It’s the lowest cost and that’s where most customers end up,” said Ellison.

“However, some of our largest customers say, ‘I don’t want to share my Exadata with anyone, not anyone near me. I want to rent the entire neighborhood myself.’ We give you the ability to have dedicated infrastructure if you’re like a bank or phone company and anything you partition belongs to you,” said Ellison.

“No other cloud company offers this kind of isolation inside their public cloud,” he asserted.

Ellison said there is interest from big companies in regulated industries “who want all their stuff behind their own firewalls. The advantage is you get a very high-speed network right in your own data center. Some of these big companies want the autonomous capabilities, but they want us to stick it on our floor, not the public cloud.” 

http://www.eweek.com/

What is data deduplication, and how is it implemented?

Deduplication is arguably the biggest advancement in backup technology in the last two decades.  It is single-handedly responsible for enabling the shift from tape to disk for the bulk of backup data, and its popularity only increases with each passing day.  Understanding the different kinds of deduplication, also known as dedupe, is important for any person looking at backup technology.

What is data deduplication?
Dedupe is the identification and elimination of duplicate blocks within a dataset. It is similar to compression, but where compression only identifies redundant blocks within a single file, deduplication can find redundant blocks of data between files from different directories, different data types, even different servers in different locations.

For example, a dedupe system might be able to identify the unique blocks in a spreadsheet and back them up. If you update it and back it up again, it should be able to identify the segments that have changed and only back them up. Then if you email it to a colleague, it should be able to identify the same blocks in your Sent Mail folder, their Inbox and even on their laptop’s hard drive if they save it locally. It will not need to back up these additional copies of the same segments; it will only identify their location.

How does deduplication work?
The usual way that dedupe works is that data to be deduped is chopped up into what most call chunks. A chunk is one or more contiguous blocks of data. Where and how the chunks are divided is the subject of many patents, but suffice it to say that each product creates a series of chunks that will then be compared against all previous chunks seen by a given dedupe system.

The way the comparison works is that each chunk is run through a deterministic cryptographic hashing algorithm, such as SHA-1 or SHA-256, which creates what is called a hash. For example, if you enter “The quick brown fox jumps over the lazy dog” into a SHA-1 hash calculator, you get the following hash value:
2FD4E1C67A2D28FCED849EE1BB76E7391B93EB12

If the hashes of two chunks match, they are considered identical, because even the smallest change causes the hash of a chunk to change. A SHA-1 hash is 160 bits. If you create a 160-bit hash for an 8 MB chunk, you save almost 8 MB every time you back up that same chunk. This is why dedupe is such a space saver.
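
Here is a minimal Python sketch of the chunk-and-hash mechanism described above, including the SHA-1 test vector from the example. Real products use smarter chunk boundaries and a persistent index rather than an in-memory dictionary.

```python
import hashlib

def chunk(data, size=8 * 1024 * 1024):
    """Naive fixed-size chunking; real products pick boundaries more cleverly."""
    return [data[i:i + size] for i in range(0, len(data), size)]

store = {}   # hash -> chunk, standing in for the dedupe index

def dedupe_backup(data):
    """Store only chunks whose hashes haven't been seen; return new bytes kept."""
    new_bytes = 0
    for c in chunk(data):
        h = hashlib.sha1(c).hexdigest()
        if h not in store:       # unseen chunk: keep it
            store[h] = c
            new_bytes += len(c)
        # seen chunk: record only a reference to h, saving nearly 8 MB
    return new_bytes

# The well-known SHA-1 test vector quoted above:
print(hashlib.sha1(b"The quick brown fox jumps over the lazy dog")
      .hexdigest().upper())     # 2FD4E1C67A2D28FCED849EE1BB76E7391B93EB12
```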

Target dedupe
Target dedupe is the most common type of dedupe sold on the market today. The idea is that you buy a target dedupe disk appliance and send your backups to its network share or to virtual tape drives if the product is a virtual tape library (VTL). The chunking and comparison steps are all done on the target; none of it is done on the source. This allows you to get the benefits of dedupe without changing your backup software.

This incremental approach allowed many companies to switch from tape to disk as their primary backup target. Most customers copied the backups to tape for offsite purposes. Some advanced customers with larger budgets used the replication abilities of these target dedupe appliances to replicate their backups offsite. A good dedupe system would reduce the size of a typical full backup by 99%, and the size of an incremental backup by 90%, making replication of all backups possible.

Source dedupe
Source dedupe happens on the backup client – at the source – hence the name source, or client-side dedupe. The chunking process happens on the client, and then it passes the hash value to the backup server for the lookup process. If the backup server says a given chunk is unique, the chunk will be transferred to the backup server and written to disk. If the backup server says a given chunk has been seen before, it doesn’t even need to be transferred. This saves bandwidth and storage space.
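
The client/server exchange can be sketched in a few lines of Python; the in-process set below stands in for the backup server’s hash index, which a real client would query over the network.

```python
import hashlib

server_index = set()   # hashes the backup server has already seen

def client_backup(chunks):
    """Send only chunks the server hasn't seen; return bytes transferred."""
    sent = 0
    for c in chunks:
        h = hashlib.sha1(c).hexdigest()   # hashing happens on the client
        if h in server_index:             # server: "seen it" -- send nothing
            continue
        server_index.add(h)               # server stores the unique chunk
        sent += len(c)
    return sent

first = client_backup([b"block-a", b"block-b"])
second = client_backup([b"block-a", b"block-b", b"block-c"])
print(first, second)   # all bytes the first time, only b"block-c" the second
```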

One criticism of source dedupe is that the process of creating the hash is a resource-intensive operation requiring a lot of CPU power. While this is true, it is generally offset by a significant reduction in the amount of CPU necessary to transfer the backup, since more than 90% of all chunks will be duplicates on any given backup.

The bandwidth savings also allow source dedupe to run where target dedupe cannot. For example, it allows companies to back up laptops or mobile devices, all of which rely on the Internet for bandwidth. Backing up such devices with a target dedupe system would require an appliance local to each device being backed up. This is why source dedupe is the preferred method for remote backup.

There aren’t as many installations of source dedupe in the field as there are target dedupe, for several reasons. One reason is that target dedupe products have been out and stable longer than most source dedupe products. But perhaps the biggest reason is that target dedupe can be implemented incrementally (i.e. using the same backup software and just changing the target), where source dedupe usually requires a wholesale replacement of your backup system. Finally, not all source-dedupe implementations are created equal, and some had a bit of a rocky road along the way.

Advantages, disadvantages of deduplication
The main advantage of target dedupe is that you can use it with virtually any backup software, as long as it is one the appliance supports. The downside is that you need an appliance everywhere you’re going to back up, even if it’s just a virtual appliance. The main advantage of source dedupe is the opposite: you can back up from literally anywhere. This flexibility can create situations where backups meet your needs but restore speeds don’t, so make sure to take that into consideration.

https://www.networkworld.com

Tuesday, 14 August 2018

Blockchain, once seen as a corporate cure-all, suffers a slowdown

Corporate America’s love affair with all things blockchain may be cooling.

A number of software projects based on the distributed ledger technology will be wound down this year, according to Forrester Research Inc. And some companies pushing ahead with pilot tests are scaling back their ambitions and timelines. In 90% of cases, the experiments will never become part of a company’s operations, the firm estimates.

Even Nasdaq Inc., a high-profile champion of blockchain and cryptocurrencies, hasn’t moved as quickly as hoped. The exchange operator, which talked in 2016 about deploying blockchain for voting in shareholder meetings and private-company stock issuance, isn’t using the technology in any widely deployed projects yet.

“The expectation was we’d quickly find use cases,” Magnus Haglind, Nasdaq’s senior vice president and head of product management for market technology, said in an interview. “But introducing new technologies requires broad collaboration with industry participants, and it all takes time.”

Blockchain is designed to provide a tamper-proof digital ledger — a groundbreaking means of tracking products, payments and customers. But the much-ballyhooed technology has proven difficult to adopt in real-life situations. As companies try to ramp up projects across their businesses, they’re hitting problems with performance, oversight and operations.

“The disconnect between the hype and the reality is significant — I’ve never seen anything like it,” said Rajesh Kandaswamy, an analyst at Gartner Inc. “In terms of actual production use, it’s very rare.”

That could be bad news for makers of blockchain software and services, which include International Business Machines Corp. and Microsoft Corp. They’re aiming to make billions on cloud services that help run supply chains, send and receive payments, and interact with customers. Now their projections — and investors’ expectations — may need to be tempered.

“Blockchain is supposed to be an important future revenue stream for IBM, Microsoft and others in equipment sales, cloud services and consulting,” said Roger Kay, president of Endpoint Technologies Associates. “If it materializes more slowly, analysts will have to make downward revisions.”

IBM, which has more than 1,500 employees working on blockchain, said it’s still seeing strong demand. But growing competition could affect how much it can charge clients, according to Jerry Cuomo, vice president of blockchain technologies at IBM.

Microsoft also remains upbeat. “We see tremendous momentum and progress in the enterprise blockchain marketplace,” the company said in a statement. “We remain committed to developing cutting-edge technology and working side-by-side with industry leaders to ensure business of all types realize this value.”

So far, IBM and Microsoft have grabbed 51% of the more than $700-million market for blockchain products and services, WinterGreen Research Inc. estimated earlier this year.

For a large swath of companies, blockchain remains an exotic fruit. Only 1% of chief information officers said they had any kind of blockchain adoption in their organizations, and only 8% said they were in short-term planning or active experimentation with the technology, according to a Gartner study. Nearly 80% of CIOs said they had no interest in the technology.

Many companies that previously announced blockchain rollouts have changed plans. ASX Ltd., which operates Australia’s primary national stock exchange, now expects to have a blockchain-based clearing and settlement system at the end of 2020 or the beginning of 2021. Two years ago, the company was aiming for a commercial blockchain platform within 18 months. An exchange spokesman said “there’s been no delay,” as the company hadn’t announced the exact launch date until recently.

Another early advocate, Australian mining giant BHP Billiton Ltd., said in 2016 that it would deploy blockchain to track rock and fluid samples in early 2017. But it currently doesn’t “have a blockchain project/experiment in progress,” according to spokeswoman Judy Dane.

But there could be more of an uptick next year, according to blockchain-backing organizations.

“It’s not on a steep ramp-up curve at all,” said Ron Resnick, executive director of Enterprise Ethereum Alliance, comprised of about 600 members such as Cisco Systems Inc., Intel Corp. and JPMorgan Chase & Co. “I don’t expect that to happen this year. They are still testing the waters.”

One reason behind the delays: Most blockchain vendors don’t offer compatible software. Companies are worried about being beholden to one vendor — an issue the EEA group hopes to resolve by setting standards.

The organization will launch its certification testing program for blockchain software in mid-2019, Resnick said. Rival industry effort Hyperledger, which represents companies such as IBM, Airbus SE and American Express Co., is preparing to connect its blockchain software to a popular platform called Kubernetes.

Most blockchains also can’t yet handle a large volume of transactions — a must-have for major corporations. And they shine only in certain types of use cases, typically where companies collaborate on projects. But because different businesses have to share the same blockchain, it can be a challenge to agree on technology and how to adopt it.

Many companies also are simply worried about being the first to deploy new technology — and the first to run into problems.

“They want to see other people fail first — they don’t wanna be a guinea pig,” said Brian Behlendorf, executive director of Hyperledger. “It’s just the nature of enterprise software.”

http://www.latimes.com

Thursday, 9 August 2018

How health care should take advantage of the cloud

The cloud has come to the health care sector, and it’s having an impact by saving some money. However, that’s not the real value of cloud computing for this sector, a sector that affects us personally at some point in our lives.

Black Book Research found that 93 percent of hospital CIOs are actively acquiring the staff to configure, manage, and support a HIPAA-compliant cloud infrastructure. Also, 91 percent of CIOs in the Black Book survey report that cloud computing provides more agility and better patient care with the proliferation of health care data.

But there is a huge innovation gap when it comes to health care and cloud computing between what’s possible versus what is actually being done. Take patient data, for example. Most health care organizations, providers, and payers don’t make many moves toward better and more proactive management of patient data unless regulations move them along.

This isn’t about operational and billing data, or electronic health records (EHRs). If health care systems abstracted information in certain ways, both the doctor and patient would have better insights into the patients’ health, preventive care, and treatment.

The cloud services that support these innovative functions are now dirt-cheap. As hospitals become cloud-enabled, it’s time to start moving faster toward the complete automation of care, treatments, and analyses of patient health. Let’s move from a system that’s largely reactive to a system that’s completely proactive.

Of course, there are islands of innovation in the health care sector. But it’s still mostly on the R&D side of things and has yet to trickle down to direct patient care. The potential here is greater than in any other sector I’ve seen. Just consider the telemetry information gathered from smart watches and cellphones and the ability to funnel all that data through deep learning-enabled systems that cost pennies an hour to run on the cloud.

Now that we have the tools, there is little excuse not to innovate beyond what’s been done already. We’re better than this. 

https://www.infoworld.com

Seagate announces new flash drives for hyperscale markets

The Flash Memory Summit is taking place in Santa Clara, California, this week, which means a whole lot of SSD-related announcements headed my way. One already has my attention for the unique features the vendor is bringing to an otherwise dull market.

Seagate is expanding the Nytro portfolio of SSD products with emphasis on the enterprise and hyperscale markets and focusing on read-intensive workloads such as big data and artificial intelligence (AI). It has some of the usual areas of emphasis: lower power requirements and capacity that scales from 240GB to 3.8TB.

Also being updated is data protection via Seagate Secure, which prevents data loss during power failure by enabling data in flight to be saved to the NAND flash. The DuraWrite feature increases random write performance by up to 120 percent or provides maximum capacity to the user.

DuraWrite also has the added benefit of compacting the data as it goes through the controller. Some databases are compressible by as much as 50 percent, while media content, which does not lend itself to good compression, can be reduced by 5 percent.

New Nytro drives use SATA
The surprising aspect of the Nytros is that they use the SATA interface. SATA is an old interface, a legacy from hard drives, and nowhere near capable of fully utilizing an SSD’s performance. For true parallel throughput, you need a PCI Express or M.2 interface, which are designed specifically for the way flash memory works.

“People keep expecting SATA to go away, but SATA is lingering. It’s a very easy way of using your bits. It’s simple, it replaces hard disk drives and still gives 30 times faster performance with the same security and same management [as PCI Express drives], and it makes our portfolio a no-brainer for our customers,” said Tony Afshary, director of product management for SSD storage products at Seagate.

But there are also PCI Express drives, and they bring new features to the table as well. The new Nytro 5000 for hyperscale data centers doubles the read and write performance of the previous model while adding NVMe features such as SR-IOV for virtualization, additional namespaces, and support for multiple streams. And it cuts the power draw from 25 watts in the old model to 12 watts in the new one.

The new Nytro drives use 64-layer 3D stacking, and the company is sampling 96-layer NAND from Toshiba, its NAND partner. The company also plans to announce quad-level cell (QLC) flash, which greatly increases capacity, but it will be for consumer drives. QLC doesn’t meet all the cooling and power specs for the enterprise, said Afshary. “It will be limited in enterprise and for people who know exactly their cooling and power budget,” he said.

https://www.networkworld.com

Wednesday, 8 August 2018

How to Build a Secure Enterprise Hybrid Cloud

For a while there several years ago, industry prognosticators wrote that once enterprises decided on their cloud IT strategies, they would build private clouds first and add public cloud services later as needed.

Well, that didn’t happen. Turns out they jumped right into hybrid as fast as they could get their boards of directors to allocate the funding.

This trend is only picking up speed. Gartner Research has predicted that 90 percent of organizations will have adopted hybrid infrastructure management capabilities by 2020.

However, any disruptive trend comes with other considerations. Along with this massive shift to hybrid come more open doors to security threats.

Minority of Security Pros Using Unified Security Tools

A recent study of 250 hybrid cloud security leaders found that only 30 percent of those professionals are using unified security tools that span on-premise and the cloud. With AWS and Azure both leading the way in cloud adoption among enterprise users, many other large enterprises are hot on their heels.

How can managers sufficiently prepare and monitor their environments to ensure that the shift to a hybrid cloud is as clean and efficient as possible, so that organizations can take advantage of on-premise assets and unlimited cloud scalability?

Turns out there are answers for that. This eWEEK Data Point article, using industry information compiled by David Ginsburg, a vice-president at cyber-intelligence software provider Cavirin, offers readers his 10 key criteria for building a secure hybrid environment.

Data Point No. 1: Flexibility

The ease of implementation and the ability to span multiple workload environments (such as IaaS, PaaS, on-premises, virtual machines, containers, and in the future, function-as-a-service, or FaaS), delivering a single view, is integral for mid-size and enterprise organizations. Ideally, if initially deployed on-premise, the same tools and applications will extend into the cloud. This implies that the platform architecture has been conceived from the start for hybrid environments. Flexibility also includes ease of installation from a cloud service provider’s marketplace.

Data Point No. 2: Extensibility

DevOps-friendly open application programming interfaces (APIs) open the platform to external data sources and items such as identity and access management, pluggable authentication module (PAM), security information and event management, user and entity behavior analytics, logging, threat intelligence, or a helpdesk. This out-of-the box cloud and API interoperability is essential to accommodate business-critical applications. APIs also enable integration into an organization’s continuous integration and deployment (CI-CD) process and their DevOps tools. This of course relates to lifecycle container support that encompasses images, the container runtimes, and orchestration.

Data Point No. 3: Responsiveness

As today’s security threats quickly multiply, minimizing the time required for implementation and time to baseline, as well as quickly identifying any changes in posture, has become vital. This requires a microservices-based architecture for elastic scaling and an agentless architecture that adapts well to containers and function-based workloads as well as eliminating “agent bloat” that impacts central processing units, memory and I/O.

Data Point No. 4: Deep Discovery

It’s essential to automatically identify existing and new workloads as well as changes to existing ones across multiple cloud service providers, and then the ability to properly group these by function. This discovery should be a simple process, leveraging existing AuthN (authentication) and AuthZ (authorization) policies to avoid having to create a special identity and access management policy every time.

Data Point No. 5: Broad Policy Library

The platform must support a wide range of benchmarks, frameworks and guidelines, and the creation of custom policies based on workload type. These policies should automatically apply to existing and new workloads. Broad coverage also relates to operating systems, virtualization and cloud service providers. Capabilities may include OS hardening, vulnerability and patch management, configuration management, whitelisting, and system monitoring.

Data Point No. 6: Real-Time Risk Scoring Across Infrastructure

Assets, once discovered and with policies applied, must be scored. Scoring may be done per asset; across different slices of the infrastructure, such as location, subnet or department; by workload type across environments (cloud and on-premise); or by application (PCI, web). Scoring must be prioritized, available historically, integrated with third-party tools for automation or into an existing UI and, most importantly, correlated. For example, suppose an organization operates a web-server farm of 10 on-premise Red Hat Enterprise Linux servers and begins to transition to the cloud. Midway through the migration, five web servers run on Azure and five remain on-premise. If the organization tracks Payment Card Industry (PCI) compliance, the tool must generate a normalized view across both environments, as in the sketch below.
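
A toy Python sketch of that normalization, using made-up scores for the 10 servers in the example; a real platform would weight and prioritize findings rather than take a simple average.

    # Five servers on Azure, five on-premise, with invented PCI scores.
    servers = (
        [{"env": "azure", "pci_score": s} for s in (92, 88, 95, 90, 85)] +
        [{"env": "on-prem", "pci_score": s} for s in (78, 82, 91, 70, 88)]
    )

    def normalized_view(assets):
        """Average PCI compliance per environment plus one combined score."""
        by_env = {}
        for a in assets:
            by_env.setdefault(a["env"], []).append(a["pci_score"])
        report = {env: sum(v) / len(v) for env, v in by_env.items()}
        report["combined"] = sum(a["pci_score"] for a in assets) / len(assets)
        return report

    print(normalized_view(servers))
    # {'azure': 90.0, 'on-prem': 81.8, 'combined': 85.9}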

Data Point No. 7: Container (Docker) Support

Docker technology has attracted the attention of many enterprise adopters. If you are implementing containers, whether on-premise or as part of a cloud deployment, you need to ensure that those workloads are secure. And if you bring in images from a registry, you need to ensure they have not been corrupted or tampered with. Many of the capabilities described in Data Point No. 6 apply here as well, such as hardening, scanning and whitelisting. One way to look at container support is across a lifecycle that includes image scanning, container runtime monitoring and security at the orchestration layer, as sketched below.
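
A sketch of one lifecycle stage, image whitelisting, using the Docker SDK for Python (pip install docker); the digest value is a placeholder, and a real pipeline would add scanning and runtime monitoring around this gate.

    import docker

    # Digests of vetted images; the value below is a placeholder, not real.
    TRUSTED_DIGESTS = {"sha256:0123456789abcdef"}

    def run_if_trusted(image_ref: str):
        """Pull an image, verify its registry digest against the whitelist,
        and only then start a container from it."""
        client = docker.from_env()
        repo, _, tag = image_ref.partition(":")
        image = client.images.pull(repo, tag=tag or "latest")
        digests = {d.split("@")[-1] for d in image.attrs.get("RepoDigests", [])}
        if not digests & TRUSTED_DIGESTS:
            raise RuntimeError(f"{image_ref} is not on the digest whitelist")
        return client.containers.run(image_ref, detach=True)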

Data Point No. 8: Cloud Security Posture

Securing the cloud itself is as important as protecting the workloads that run on it. This includes the various services offered by the major cloud providers, such as storage, identity, load balancing, compute and media. The architecture must support monitoring and assessing these services in real time and, most importantly, relating the security of these services to that of critical workloads. It must correlate the scoring and then provide the CISO and team with a unified score that reflects a true hybrid security posture across workloads and the cloud.
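
A toy sketch of blending the two score families into one number; the 60/40 weighting is an invented design choice, not a standard formula.

    def hybrid_posture(workload_scores, service_scores, service_weight=0.4):
        """Blend workload scores with cloud-service scores into a single
        CISO-level posture number."""
        w = sum(workload_scores) / len(workload_scores)
        s = sum(service_scores) / len(service_scores)
        return round((1 - service_weight) * w + service_weight * s, 1)

    # e.g. workload scores from Data Point No. 6, service scores for
    # storage, identity and load balancing:
    print(hybrid_posture([85, 92, 78], [70, 95]))  # 84.0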

Data Point No. 9: Cloud-agile Pricing

Reflecting cloud compute and storage pricing models, it’s important to adopt a pricing model with the flexibility to meet changing requirements. This may mean a software-as-a-service (SaaS) offering, or connecting the back end of the platform to the cloud service provider’s billing engine with the ability to charge by the minute. Alternatively, pricing may be abstracted but still agile, closer to the concept of committed and burst workloads and analogous to a cellphone provider’s rollover-minutes model, as sketched below. In any case, this is a departure from today’s static pricing.
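
Here is a toy Python model of the committed-plus-burst idea with rollover; every rate and quantity is invented purely to show the mechanics.

    def monthly_charge(used, committed, rollover, commit_price, burst_rate):
        """Charge the commitment; bill overage at the burst rate after
        applying rolled-over capacity; return (charge, new_rollover)."""
        allowance = committed + rollover
        overage = max(0, used - allowance)
        new_rollover = max(0, allowance - used)
        return commit_price + overage * burst_rate, new_rollover

    charge, left = monthly_charge(used=1_200, committed=1_000, rollover=150,
                                  commit_price=500.0, burst_rate=0.75)
    print(charge, left)  # 537.5 0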

Data Point No. 10: Intelligence

Predictive analytics lets the platform “predict” the outcome of a change; such “what-if” analysis for configurations and operating systems is crucial in today’s fast-changing environments. The platform can also bring in data from third parties via APIs to create a more correlated view of the change, as in the sketch below. Some customers describe this as a “virtual whiteboard.”
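
A minimal “what-if” sketch: re-score a copy of the current configuration with a proposed change applied, without touching the live system; the two rules and their point values are invented.

    import copy

    # Invented scoring rules: each returns risk points for a setting.
    RULES = {
        "ssh_password_auth": lambda v: 0 if v == "no" else 20,
        "tls_min_version": lambda v: 0 if v >= 1.2 else 15,
    }

    def risk(config):
        return sum(rule(config[key]) for key, rule in RULES.items())

    current = {"ssh_password_auth": "yes", "tls_min_version": 1.2}
    proposed = copy.deepcopy(current)
    proposed["ssh_password_auth"] = "no"  # the change being evaluated

    print(risk(current), "->", risk(proposed))  # 20 -> 0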

http://www.eweek.com

Python scales language popularity charts

Python continues to build momentum, approaching the Top 3 in the Tiobe language popularity index after having already scaled to the top of language ratings from IEEE and PyPL.
Python, which has become a popular option for data science and machine learning, is creeping up on third-place C++ in Tiobe’s index, which bases popularity on a formula counting searches on popular search engines and sites such as Google, Yahoo and Wikipedia. In this month’s index, Python is in fourth place with a rating of 6.992 percent, less than half a percentage point behind C++, rated at 7.471 percent. (In July, the gap between the two was 1.254 points.)
If Python surpasses C++, it will be the language’s first foray into the Tiobe Top 3. Tiobe not only expects this to happen but believes Python might even ascend to the index’s No. 1 spot, which Java has occupied for most of the past several years.
Tiobe’s August rankings were as follows:
  1. Java, with a rating of 16.881 percent
  2. C, at 14.966 percent
  3. C++, at 7.471 percent
  4. Python, at 6.992 percent
  5. Visual Basic.Net, at 4.762 percent
  6. C#, at 3.541 percent
  7. PHP, at 2.925 percent
  8. JavaScript, at 2.411 percent
  9. SQL, at 2.316 percent
  10. Assembly, at 1.409 percent
The most recent PyPL and IEEE indexes were both topped by Python. PyPL assesses popularity based on Google searches for language tutorials, while IEEE weighs contexts such as social chatter, open source code production and job postings. IEEE’s rankings for 2018, published on July 31, were as follows:
  1. Python
  2. C++
  3. C
  4. Java
  5. C#
  6. PHP
  7. R
  8. JavaScript
  9. Go
  10. Assembly
PyPL’s top 10 for July were:
  1. Python, with a 23.59 percent share
  2. Java, at 22.4 percent
  3. JavaScript, at 8.49 percent
  4. PHP, at 7.93 percent
  5. C#, at 7.84 percent
  6. C/C++, at 6.28 percent
  7. R, at 4.18 percent
  8. Objective-C, at 3.4 percent
  9. Swift, at 2.65 percent
  10. Matlab, at 2.25 percent
http://www.computerworld.in

Monday, 6 August 2018

Google Embraces Hybrid as Path to the Future Cloud

Google’s enterprise cloud strategy came into focus this week at its third annual Next conference. The search giant announced products that, for the first time, make it a player in the hybrid cloud space.

Google’s cloud technology has already helped many vendors, including Red Hat, IBM, VMware and Pivotal, establish their own hybrid cloud services that enable customers to run container-based cloud apps in their own data centers or in the cloud.

The new Cloud Services Platform strategy now gives Google an on-premises story of its own. CSP is a collection of Google’s cloud software led by Kubernetes, the now-ubiquitous container orchestration platform. Included in CSP is the new GKE On-Prem, a distribution of the Google Kubernetes Engine that offers portability between public and private clouds.

Also included is Istio, a service mesh that supplies critical functions to container-based applications such as security, telemetry and networking. This week Istio reached its 1.0 version milestone and will soon be available for production environments. Google is making a Managed Istio service available that will automatically set up Istio to work with Kubernetes applications in any environment.

Try Before You Buy

Urs Hölzle, Google Cloud’s SVP of Engineering, explained that the benefit of Google offering customers hybrid cloud capabilities springs from the fact that Google remains close to both Kubernetes and Istio, even though both are open source.

“[The ecosystem] enables ISVs and customers to build their software on a platform that is open and non-proprietary,” he said during a press conference. “That open platform is Kubernetes and Istio. For Google Cloud we have managed versions of those services, close to us, our products.”

Google’s managed versions have “high fidelity” with the open source, he said, because they are written by the same team of developers who contribute to the open communities.

“Our managed version [of Kubernetes] is now the only one that is explicitly multi-environment,” he said. “You can use GKE to control your cluster on [Google Cloud] and also use exactly the same tool to manage your cluster that is on premise.”
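
A small sketch of that “same tool, both environments” idea using the official Kubernetes Python client (pip install kubernetes); the two context names are assumptions that would correspond to entries in a kubeconfig file.

    from kubernetes import client, config

    def pod_count(context: str, namespace: str = "default") -> int:
        """Count pods in a namespace of whichever cluster the given
        kubeconfig context points at, cloud or on-premise."""
        config.load_kube_config(context=context)
        return len(client.CoreV1Api().list_namespaced_pod(namespace).items)

    for ctx in ("gke-cloud-cluster", "gke-on-prem-cluster"):  # hypothetical
        print(ctx, pod_count(ctx))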

By working with Kubernetes and containers on premise, he said, developers can get familiar with the microservices paradigm with less risk, while businesses get time to decide when, where and how public cloud services should be employed.

“Migration [into the cloud] can inherit the complexity from on premise, or is a scary transition where everything changes,” Hölzle said. “Cloud Services Platform makes it possible to start that transition on premise without coupling it with an immediate move. It lets you do one thing at a time, and start containerization before you do any migration, in the environment that they already know.”

The Future Cloud

This was a common refrain heard from customers here. Nielsen and Lahey Health, for example, are using Google’s G Suite of email and productivity tools as baby steps to bigger things in the cloud, akin to the way Microsoft customers have leveraged Office 365 for moving or developing applications in Azure Cloud.

Others are using open source technologies internally to build a bridge to the cloud. “Cloud is a method, not a location,” said Justin Arbuckle, Senior Vice President at Scotiabank, who announced the release of the bank’s own “Accelerator Pipeline” as an open source project. “We need to stop thinking about cloud as a where and start thinking about how to use the cloud.”

Still, the cloud is the ultimate destination for most applications and data. Google’s top competitors AWS and Microsoft have been more aggressive to this point in enabling hybrid environments, but Google now sees that hybrid is the way to ease users into the cloud and expose them to higher-value services such as artificial intelligence and machine learning, which also made plenty of news at the Next conference.

Despite the new hybrid direction, Google will not be able to execute the strategy on its own, and knows it has a long way to go.

“This is a partner-led engagement,” Hölzle said. “Partners are already trusted in the enterprise. We see our role as helping make that environment more compatible and more of an onramp to the future cloud.”

http://www.eweek.com