Friday, 27 July 2018

Google Launches New Cloud Programs for SaaS Partners

Google wants to make it easier for organizations to run Salesforce, Box, MongoDB, JFrog and other SaaS applications on the company's cloud computing platform.

To that end, the internet giant on July 23 announced several new programs aimed at incentivizing its growing roster of partners to bring their SaaS applications to Google Cloud Platform (GCP).

The announcements—to be made at Google's Next '18 conference in San Francisco this week—include a co-selling program under which Google will work with partners in pitching Google-managed SaaS applications to enterprises.

Google also has launched a new program under which the company's SaaS partners will be able to get help from Google's customer reliability engineering team in keeping their products running optimally on GCP. The company has also established an online community in which Google SaaS partners will be able to get updates from the company and to network and share best practices with each other.

Rewards Are in the Offing

Google will reward partners that drive enterprises to its cloud platform with financial incentives from a Marketing Development Fund established for the purpose.

Google's partner-related announcements this week included several new and expanded integrations between its cloud platform and third-party applications and services. For example, Google has established a new collaboration with Deloitte under which the two organizations will jointly offer a broad set of services for running SAP's enterprise applications on GCP.

Similarly, the company has struck up new partnerships with Digital Asset and BlockApps to offer enterprises a way to use distributed-ledger technology frameworks on GCP.  Another new integration is a plug-in for VMware's vRealize Orchestrator that the company said gives companies a way to run GCP alongside their on-premise VMware environment.

Best Practices Resources Also in the Mix

Meanwhile, a new center of excellence that Google has established in collaboration with Appsbroker and Intel will offer best practices and other resources for helping enterprises migrate high-performance workloads to Google cloud.

Google's efforts to expand its partner ecosystem come amid a general growth in the company's cloud business. Recent numbers from Synergy Research Group show that Google's cloud market share at 6 percent lags far behind AWS’s 33 percent share and is substantially below even Microsoft Azure's 13 percent. Even so, Google's cloud business has been growing rapidly and has become one of the biggest investment areas from a technology and staffing standpoint.

Contributing to Google's cloud momentum is the company's rapidly growing partner ecosystem.

"Whether businesses are moving to the cloud to speed up innovation, discover important insights from their data, or transform the way they work, they often need help," noted Nan Boden, Senior Director of Global Technology Partnerships at Google Cloud, and Nina Harding, the company's channel chief in the blog announcing the updates.

Why Partners Are So Important to GCP

Google's thousands of partners play an indispensable role in offering this help to enterprises, Boden and Harding wrote.

Google's focus on growing its partner ecosystem has resulted in a tenfold increase since the start of 2017 in the number of technology partners the company works with, the two Google executives noted.

The big companies with which Google has signed new partnerships or expanded existing relationships over the past year include KPMG, Deloitte and Accenture, they said. Boden and Harding described the partnerships as already having a positive impact on enterprises in the form of better migration support and value-added services.

http://www.eweek.com

HPE Adds More Automation, Predictive Intelligence to 3PAR Storage

Artificial intelligence seems to be embedded in every enterprise app here in mid-2018. Now even the normally serene and relatively docile data storage sector is getting more smarts inserted into things like arrays, virtual machines and controllers.
Hewlett Packard Enterprise on July 24 announced that it is adding new intelligence into its 3PAR data center storage lineup, so that the machines ostensibly can optimize themselves for higher application availability, among other things.
These additions include:
  • new predictive support automation with HPE InfoSight, the company’s home-developed AI for the data center, to drive higher levels of application availability for HPE 3PAR environments; and
  • enhanced application automation for on-premises infrastructure with HPE 3PAR to accelerate DevOps initiatives for increased productivity.
HPE InfoSight Predictive Analytics
HPE InfoSight is a cloud-based AI platform built on a new approach to data collection and analysis; it predicts and automates the resolution of problems and continuously learns from what it observes to make HPE storage hardware smarter and more reliable.
Storage networking has always been a bottleneck in IT systems--especially in recent years with so much more data being created by mobile devices.
InfoSight offers HPE 3PAR customers a predictive analytics framework to anticipate and prevent such issues across the infrastructure stack. Users can benefit from InfoSight’s capabilities to predict problems and automate resolution, the company said, in addition to the cross-stack analytics already made available to HPE 3PAR customers. These cross-stack functions provide IT the ability to resolve performance problems and pinpoint the root cause of issues between the storage and host virtual machines.
Since releasing these capabilities, InfoSight already has predicted and auto-resolved 85 percent of more than 1,500 complex, priority cases across the HPE 3PAR installed base, the company said.
Enhanced integration with any automation framework for HPE 3PAR provides operational efficiency through self-service storage. Enterprises need to run both mainstream, mission-critical applications and newer, cloud-native applications on the same infrastructure in order to scale and to avoid the cost of maintaining multiple storage systems.
DevOps-Friendly Platform
HPE 3PAR is a DevOps and container-friendly platform, allowing users to run both mainstream and containerized applications on the same enterprise-grade infrastructure. 
In other news July 24, HPE announced:
  • New toolsets to automate and manage HPE 3PAR for the cloud, DevOps, virtualization and container environments. In addition to existing integration with Docker and Mesosphere DC/OS, HPE 3PAR now works with Kubernetes and Red Hat OpenShift to offer best-in-class automation and integration with the leading container platforms.
  • A new plug-in for VMware vRealize Orchestrator, which empowers users with self-service storage automation through pre-built workflows that speed deployment and streamline storage management.
  • Enhanced native infrastructure management toolchains to empower DevOps teams to be more agile. New pre-built blueprints for the configuration management tools available to HPE 3PAR users--Chef, Puppet and Ansible--along with software development kits in Ruby and Python, enable DevOps teams to automate storage functions in their native programming languages for faster application delivery.
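To illustrate what that kind of SDK-driven storage automation might look like, here is a minimal sketch using the open-source python-3parclient package to provision a volume; the array address, credentials, CPG name and volume size are placeholders, and the exact method names should be checked against HPE's SDK documentation.

```python
# Minimal sketch: provisioning a 3PAR volume from Python.
# Assumes the open-source python-3parclient package and a reachable 3PAR
# WSAPI endpoint; host, credentials, CPG and size are placeholders.
from hpe3parclient import client, exceptions

cl = client.HPE3ParClient("https://array.example.com:8080/api/v1")
try:
    cl.login("3paradm", "password")                      # authenticate against the WSAPI
    cl.createVolume("dev-vol-01", "SSD_r6_CPG", 10240)   # ~10 GiB volume in the named CPG
    print(cl.getVolume("dev-vol-01"))                    # confirm the volume exists
except exceptions.HTTPBadRequest as err:
    print("Provisioning failed:", err)
finally:
    cl.logout()
```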
http://www.eweek.com

How Kubernetes conquers stateful cloud-native applications

The widespread misconception that Kubernetes was not ready for stateful applications such as MySQL and MongoDB has had a surprisingly long half-life. This misconception has been driven by a combination of the initial focus on stateless applications within the community and the relatively late addition of support for persistent storage to the platform.

Further, even after initial support for persistent storage, the kinds of higher-level platform primitives that brought ease of use and flexibility to stateless applications were missing for stateful workloads. However, not only has this shortcoming been addressed, but Kubernetes is fast becoming the preferred platform for stateful cloud-native applications.

Today, one can find first-class Kubernetes storage support for all of the major public cloud providers and for the leading storage products for on-premises or hybrid environments. While the availability of Kubernetes-compatible storage has been a great enabler, Kubernetes support for the Container Storage Interface (CSI) specification is even more important.

The CSI initiative not only introduces a uniform interface for storage vendors across container orchestrators, but it also makes it much easier to provide support for new storage systems, to encourage innovation, and, most importantly, to provide more options for developers and operators.

While increasing storage support for Kubernetes is a welcome trend, it is neither a sufficient nor primary reason why stateful cloud-native applications will be successful. To step back for a second, the driving force behind the success of a platform like Kubernetes is that it is focused on developers and applications, and not on vendors or infrastructure. In response, the Kubernetes development community has stepped in with significant contributions to create appropriate abstractions that bridge the gap between raw infrastructure such as disks and volumes and the applications that use that infrastructure.

Kubernetes StatefulSets, Operators, and Helm charts
First, to make it much simpler to build stateful applications, support for orchestration was added in the form of building blocks such as StatefulSets. StatefulSets automatically handle the hard problems of gracefully scaling and upgrading stateful applications, and of preserving network identity across container restarts. StatefulSets provide a great foundation to build, automate, and operate highly available applications such as databases.
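As a rough illustration of that building block, the sketch below uses the official Kubernetes Python client to declare a three-replica StatefulSet with a per-pod volume claim; the image, labels and storage size are placeholder values rather than a recommended database deployment.

```python
# Illustrative sketch: a three-replica StatefulSet with one PVC per pod,
# declared through the official kubernetes Python client. The image,
# labels and storage size are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

labels = {"app": "demo-db"}
container = client.V1Container(
    name="demo-db",
    image="mysql:5.7",
    ports=[client.V1ContainerPort(container_port=3306)],
    volume_mounts=[client.V1VolumeMount(name="data", mount_path="/var/lib/mysql")],
)
stateful_set = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="demo-db"),
    spec=client.V1StatefulSetSpec(
        service_name="demo-db",        # headless service gives each pod a stable network identity
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[container]),
        ),
        volume_claim_templates=[       # each replica gets its own persistent volume
            client.V1PersistentVolumeClaim(
                metadata=client.V1ObjectMeta(name="data"),
                spec=client.V1PersistentVolumeClaimSpec(
                    access_modes=["ReadWriteOnce"],
                    resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
                ),
            )
        ],
    ),
)
client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=stateful_set)
```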

Second, to make it easier to manage stateful applications at scale and without human intervention, the “Operator” concept was introduced. A Kubernetes Operator encodes, in software, the manual playbooks that go into operating complex applications. The benefits of these operators can be clearly seen in the operators published for MySQL, Couchbase, and multi-database environments.
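To give a flavor of the pattern (this is not the code behind any of the published operators), the sketch below uses the Kopf framework to react to a hypothetical "Database" custom resource; the API group, resource name and handler logic are illustrative assumptions.

```python
# Illustrative sketch of the Operator pattern using the Kopf framework.
# The example.com/v1 "databases" custom resource is hypothetical; real
# operators (MySQL, Couchbase, etc.) encode far richer playbooks.
# Run with: kopf run this_file.py
import kopf

@kopf.on.create("example.com", "v1", "databases")
def create_database(spec, name, namespace, logger, **kwargs):
    """React to a new Database object by provisioning its backing resources."""
    replicas = spec.get("replicas", 3)
    logger.info(f"Provisioning database {name} with {replicas} replicas in {namespace}")
    # ...here the operator would create StatefulSets, Services, backup jobs, etc.
    return {"phase": "Provisioning"}   # Kopf records the return value in the resource's status
```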

In conjunction with these orchestration advances, the flourishing of Helm, the equivalent of a package manager for Kubernetes, has made it simple to deploy not only different databases but also higher-level applications such as GitLab that draw on multiple data stores. Helm uses a packaging format called “charts” to describe applications and their Kubernetes resources. A single-line command gets you started, and Helm charts can be easily embedded in larger applications to provide the persistence for any stack. In addition, multiple reference examples are available in the form of open source charts that can be easily customized for the needs of custom applications.

Kanister and the K10 Platform
At Kasten, we have been working on two projects, Kanister and K10, that make it dramatically easier for both developers and operators to consume all of the above advancements. Driven by extensive customer input, these projects don’t just abstract away some of the technical complexity inherent in Kubernetes but also present a homogeneous operational experience across applications and clouds at scale.

Kanister, an open-source project, has been driven by the increasing need for a universal and application-aware data management plane—one that supports multiple data services and performs data management tasks at the application level. Developers today frequently draw on multiple data sources for a single app (polyglot persistence), consume data services that are eventually consistent (e.g., Cassandra), and have complex requirements including consistent data capture, custom data masking, and application-centric backup and recovery.

Kanister addresses these challenges by providing a uniform control plane API for data-related actions such as backup, restore, masking, etc. At the same time, Kanister allows domain experts to capture application-specific data management actions in blueprints or recipes that can be easily shared and extended. While Kanister is based on the Kubernetes Operator pattern and Kubernetes CustomResourceDefinitions, those details are hidden from developers, allowing them to focus on their application’s requirements for these data APIs. Instead of learning how to write a Kubernetes Controller, they simply author actions for their data service in whatever language they prefer, ranging from Bash scripts to Go. Today, public examples cover everything from MongoDB backups to deep integration with PostgreSQL’s Point-in-Time-Recovery functionality.

Whereas Kanister handles data at an application level, significant operator challenges also exist for managing data within multiple applications and microservices spread across clusters, clouds, and development environments. We at Kasten introduced the K10 Platform to make it easy for enterprises to build, deploy, and manage stateful containerized applications at scale. With a unique application-centric view, K10 uses policy-driven automation to deliver capabilities such as compliance, data mobility, data manipulation, auditing, and global visibility for your cloud-native applications. For stateful applications, K10 takes the complexity out of a number of use cases including backup and recovery, cross-cluster and multi-cloud application migration, and disaster recovery.

The state of stateful Kubernetes
The need for products such as Kanister and the K10 Platform is being driven by the accelerating growth in the use of stateful container-based applications. A recent survey run by the Kubernetes Special Interest Group on Applications showed that more than 50 percent of users were running some kind of relational database or NoSQL system in their Kubernetes clusters. This number will only go up.

Further, we not only see the use of traditional database systems in cloud-native environments but also the growth of database systems that are built specifically for resiliency, manageability, and observability in a true cloud-native manner. As next-generation systems like Vitess, YugaByte, and CockroachDB mature, expect to see even more innovation in this space.

As we turn the page on this first chapter of the evolution of stateful cloud-native applications, the future holds a number of opportunities as well as challenges. Given the true cloud portability offered by cloud-native platforms such as Kubernetes, moving application data around multi-cluster, multi-cloud, and even planet-scale environments will require a new category of distributed systems to be developed.

Data gravity is a major challenge that will need to be overcome. New efficient distribution and transfer algorithms will be needed to work around the speed of light. Allowing enterprise platform operators to work at the unprecedented scale that these new cloud-native platforms enable will require a fundamental, application-centric rethinking of how the data in these environments is managed. What we are doing at Kasten with our K10 enterprise platform and Kanister not only tackles these issues but also sets the stage for true cloud-native data management.

https://www.infoworld.com

Wednesday, 25 July 2018

Google's G Suite adds new AI and security tools

Google is trying to entice more people to use its services in the workplace.

On Tuesday, the search giant announced several updates to G Suite, its set of productivity apps, including Google Docs and Sheets, tailored for the office. The company made the announcement during its annual Google Cloud Next conference in San Francisco.

One new feature is an investigation tool that gives administrators more control over cybersecurity issues. For example, if there's been a breach, an admin can see which users might have been affected and whether any information has been shared externally. The tool also lets admins revoke access to certain drives and take action without sifting through security logs.

Google is also adding the ability for companies to choose where their data is physically stored, whether in the United States, Europe or distributed around the globe.

Another new feature uses Google's artificial intelligence tools to bring grammar suggestions to Google Docs. The software will be able to recognize errors, like when you should use "an" instead of "a," or suggest how to use a subordinate clause correctly. The new feature is available through G Suite's Early Adopter Program, where users can test updates.

Other new features let employees use AI to help write messages and replies. Smart Reply, a tool that uses machine learning to automatically compose messages in email, is coming to Google's Hangouts Chat app. Another feature, called Smart Compose, helps to autofill longer emails that require more than just a dashed-off reply. Google CEO Sundar Pichai first introduced it at the company's I/O conference in May. At the time, it was meant for regular Gmail users, but the feature is now coming to business customers as well.

"Security is the No. 1 worry. And AI is the No. 1 opportunity," Diane Greene, head of Google Cloud, said during the Next keynote presentation on Tuesday. "We're incorporating the power of AI into everything you do." 

Google said it now has 4 million paying businesses using G Suite. During an earnings call Monday, Pichai announced some new customers for its cloud division, a win for the growing organization, which now brings in more than $1 billion a quarter. The new customers include Domino's Pizza, SoundCloud and PricewaterhouseCoopers. Target is also moving "key areas" of its business to Google's cloud, Pichai said.

AI controversies
Google's cloud division has also drawn controversy, however. Outside of building workplace versions of apps like Gmail or Drive, Google also licenses its AI technology to other businesses.

Under Greene, the division has gone after lucrative military contracts. But employees have challenged the company's decision to take part in Project Maven, a US Defense Department initiative aimed at developing better artificial intelligence for the military. Googlers were divided over their employer's role in helping develop technology that could be used in warfare. More than 4,000 employees reportedly signed a petition addressed to Pichai demanding the company cancel the project. Last month, Google said it wouldn't renew the Maven contract or pursue similar contracts.

Soon after, Pichai released ethical guidelines regarding the company's development of AI. He said Google wouldn't create technology that would be used for weapons, but he said Google would still pursue work with the military.

Google Cloud has had other challenges recently. The platform experienced problems last week, causing outages for Google Cloud Networking, App Engine and Stackdriver. Apps like Snapchat, Pokemon Go and Spotify, which use Google's cloud platform to help run their services, were also affected. 

On Tuesday, Google also unveiled new artificial intelligence technology for call centers. The software, which it calls Contact Center AI, is designed to talk to humans over the phone. The technology is bound to draw comparisons to Duplex, the controversial AI tool Google announced in May. Duplex is designed to book restaurant reservations, hair appointments and the like, as well as check business hours and such, using an eerily human-sounding voice. 

Google was quick to emphasize the distance between the two products.

"While Contact Center AI and the recently announced Duplex share some underlying components, they have distinct technology stacks and aims overall," Fei-Fei Li, Google's chief scientist for AI and machine learning, said on stage.

https://www.cnet.com

Google to offer blockchain as part of cloud service

Google has announced the second of two partnerships that will allow it to offer the financial services industry and others a cloud-based platform on which they can develop and run blockchain-based applications.

In a blog post ahead of its Google Cloud Next '18 conference this week, the search giant said it is partnering with Digital Asset and BlockApps to enable customers to "explore ways they might use distributed ledger technology (DLT) frameworks on Google's Cloud Platform (GCP)."

Later this year, GCP will support open-source integrations for both Hyperledger Fabric and Ethereum, the two leading enterprise blockchain platforms, Google said.

Digital Asset is a provider of DLT software for the financial services industry; BlockApps is a service platform on which enterprises can develop blockchain apps. Both companies are based in New York.

"This will reduce the technical barriers to DLT application development by delivering our advanced distributed ledger platform and modelling language to Google Cloud," Digital Asset CEO Blythe Masters said in a statement.

Google Cloud also joined the private beta of Digital Asset's developer program, which gives a select group of technology partners, software vendors and financial services companies access to the SDK for its Digital Asset Modeling Language, a smart contract coding language.

Smart contracts are a blockchain-based business automation tool – software scripts, in essence – that run on DLT against pre-determined business rules. For example, a smart contract could determine when the conditions of a real-estate purchase have been met, releasing the funds from the bank; or, a smart contract could be used in supply chain management to track and verify the receipt of goods.
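The conditional logic such a contract encodes is easy to picture; the sketch below restates the real-estate example in plain Python purely as a concept illustration, since an actual smart contract would be written in a ledger language such as DAML or Solidity and executed by the DLT platform itself.

```python
# Concept illustration only: the conditional escrow logic of the
# real-estate example above, written as ordinary Python rather than as
# ledger code. Field names and the price are hypothetical.
from dataclasses import dataclass

@dataclass
class PurchaseContract:
    price: int
    title_verified: bool = False
    inspection_passed: bool = False
    funds_released: bool = False

    def record_condition(self, condition: str) -> None:
        """Record that one of the pre-agreed purchase conditions has been met."""
        if condition == "title_verified":
            self.title_verified = True
        elif condition == "inspection_passed":
            self.inspection_passed = True

    def try_release_funds(self) -> bool:
        """Release funds from escrow only once every condition is satisfied."""
        if self.title_verified and self.inspection_passed:
            self.funds_released = True
        return self.funds_released

contract = PurchaseContract(price=350_000)
contract.record_condition("title_verified")
contract.record_condition("inspection_passed")
print("Funds released:", contract.try_release_funds())
```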

Over the past two years, blockchain-as-a-service (BaaS) offerings have rapidly grown to include some of the tech industry's biggest players, including Microsoft, IBM, HPE, SAP, Oracle and Amazon Web Services (AWS).

AWS partnered with business cloud service Kaleido to offer cloud services on which to host an Enterprise Ethereum-based, open-source blockchain platform.

BaaS offerings enable enterprises to create proof-of-concepts and production blockchains without having the capital investment in-house deployments would require.

For example, the peer-to-peer architecture on which blockchain networks are built requires many server nodes, which can grow quickly as a DLT network expands; and, blockchain developers are in short supply and hot demand today.

BaaS providers not only provide the infrastructure but also often act as consultants on the nascent technology, according to Bill Fearnley Jr., IDC's research director for Worldwide Blockchain Strategies.

"As with any new technology, there is a learning curve as enterprise customers put it into production," Fearnley said in an earlier interview. "One advantage of partnering with a BaaS provider is users can leverage the lessons learned by the provider to help make their systems more secure."

http://www.cio.in/

Google races against AWS, Microsoft to bring AI to developers

At the Google Cloud Next conference in San Francisco Tuesday, Google laid out how it's bringing artificial intelligence to developers, as well as integrating more AI capabilities throughout its cloud products.

Artificial intelligence has long been a cornerstone of Google Cloud's value proposition, but to win more customers it needs to make those capabilities more accessible. It also has to contend with Amazon Web Services and the fast-growing Microsoft Azure, which have been building up their own AI-powered offerings and creating their own plans for lowering the barrier to entry.

During the Day One Next keynote, Google Cloud CEO Diane Greene noted that Google is heavily investing in two key areas: AI and security. While Google is investing in security because it is customers' "number one worry," it's investing in AI because it is the "number one opportunity."

AI is "key to re-engineering a business," Greene said. "Today it's built into everything Google does. We are now working to make it easy for you. We are incorporating AI into everything you do."

To make AI more accessible, Google announced the expansion of Cloud AutoML, the software that automates the creation of machine learning models. Announced earlier this year, AutoML makes it possible to build custom machine learning models without any specialized machine learning knowledge. It effectively extends Google's Cloud Vision API to recognize entirely new, customized categories of images.

Google in January announced the alpha of AutoML Vision, and on Tuesday it announced the product is moving into public beta. This means any Google Cloud customer can submit a set of labeled images, and Google will create an image recognition model matching that data set. Since announcing the product in January, around 18,000 customers have expressed interest in AutoML Vision, Rajen Sheth, senior director of product management for Google Cloud AI, told reporters.

Additionally, Google is introducing AutoML Natural Language and AutoML Translation. With AutoML Translation, customers can build models that take into account industry-specific language. For example, the phrase "the driver is not working" would be translated differently for the computer industry than it would be for the transportation industry.

In addition to expanding AutoML, Google on Tuesday launched updates to its machine learning APIs. The Cloud Vision API now recognizes handwriting, supports PDF and TIFF files, and can identify where an object is located within an image.
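As a rough example of how a developer might exercise the updated API, the sketch below uses the google-cloud-vision Python client's object-localization helper; the image file is a placeholder, and the exact client namespace can vary between library versions.

```python
# Hedged sketch: locating objects within an image using the
# google-cloud-vision Python client. The image path is a placeholder and
# credentials are assumed to be configured in the environment.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("shelf.jpg", "rb") as f:
    image = vision.types.Image(content=f.read())

response = client.object_localization(image=image)
for obj in response.localized_object_annotations:
    box = [(v.x, v.y) for v in obj.bounding_poly.normalized_vertices]
    print(f"{obj.name} (score {obj.score:.2f}) at {box}")
```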

The Cloud Text-to-Speech API is getting updates that include the ability to optimize audio for the type of speaker it will be played from. Meanwhile, Cloud Speech-to-Text can now identify which language is being spoken as well as distinguish different speakers in a conversation. Multi-channel recognition enables users to record each participant separately in multi-participant recordings.
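A hedged sketch of how those speech options might be exercised from the google-cloud-speech Python client (the features were exposed through the v1p1beta1 surface at the time) follows; the audio URI, encoding, sample rate and speaker count are placeholders.

```python
# Hedged sketch: speaker diarization plus separate per-channel recognition
# with the google-cloud-speech Python client's beta surface. The GCS URI,
# encoding, sample rate and speaker count are placeholders.
from google.cloud import speech_v1p1beta1 as speech

client = speech.SpeechClient()
config = speech.types.RecognitionConfig(
    encoding=speech.enums.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    audio_channel_count=2,
    enable_separate_recognition_per_channel=True,  # one transcript per channel
    enable_speaker_diarization=True,               # tag words with speaker numbers
    diarization_speaker_count=2,
)
audio = speech.types.RecognitionAudio(uri="gs://my-bucket/call.wav")
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.channel_tag, result.alternatives[0].transcript)
```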

Google's AI hardware is also getting an update: The third generation of Google Cloud TPUs are now available in alpha. Second generation TPUs are now generally available, meaning all GCP users can access them, including free tier users. TPUs, Google says, dramatically accelerate machine learning tasks and are accessible via GCP.

In an indicator of how AI may be more ubiquitous in the future, Greene on Tuesday said that Google sees a disproportionate amount of TPU use from startups on the Google Cloud Platform.

Google also announced updates to G Suite that included a heavy dose of artificial intelligence.

Google's competitors are also offering ways to put machine learning and AI in the hands of developers. Late last year, AWS unveiled SageMaker, which makes it easier and faster to train machine learning models. At the AWS Summit in New York earlier this month, the company announced it's bringing streaming algorithms as well as batch job improvements to the service. AWS also used the summit to bring DeepLens, a deep learning-enabled video camera for developers, into general availability. The general takeaway of the summit was that Amazon Web Services is a platform designed for analytics, AI and machine learning.

Meanwhile, this year's Microsoft Build conference, the company's annual developer confab, was filled with data- and AI-related announcements and demonstrations.

https://www.zdnet.com

Tuesday, 24 July 2018

Taking the temperature of IoT for healthcare

The Internet of Things (IoT) is full of promises to transform everything from transportation to building maintenance to enterprise security. But no field may have more to gain than the healthcare industry. Healthcare providers and device makers are all looking to the IoT to revolutionize the gathering of healthcare data and the delivery of care itself.

But while many of those benefits are already becoming reality, others are still on the drawing board. Two very different IoT healthcare stories crossed my desk this month; taken together, they provide a surprisingly nuanced picture of healthcare IoT.

Smart bandages still in prototype
First, I was excited to hear about the development of advanced prototypes of “smart bandages.” Developed by researchers at Tufts University using flexible electronics, these smart bandages not only monitor the conditions of chronic skin wounds, but they also use a microprocessor to analyze that information to electronically deliver the right drugs to promote healing. By tracking temperature and pH of chronic skin wounds, the 3mm-thick smart bandages are designed to deliver tailored treatments (typically antibiotics) to help ward off persistent infections and even amputations, which too often result from non-healing wounds associated with burns, diabetes, and other medical conditions.

Sameer Sonkusale, Ph.D., professor of electrical and computer engineering at Tufts University’s School of Engineering, a co-author of Smart Bandages for Monitoring and Treatment of Chronic Wounds, said in a statement: “Bandages have changed little since the beginnings of medicine. We are simply applying modern technology to an ancient art in the hopes of improving outcomes for an intractable problem.”

It's unclear if Tufts' smart bandages will be internet connected, but the potential benefits of an IoT connection here seem obvious. Individual users could have their wounds' progress monitored more easily, with changes in treatment prescribed as needed, whether or not they've been seen by a healthcare provider. Researchers, meanwhile, could gather real-time data on wound healing and the efficacy of various treatments.

But while the smart bandages have been tested in the lab, they have yet to undergo clinical trials, and there’s no telling how long it will take for them to reach actual patients, or how much they will cost.

Not-so-smart speakers give bad medical advice
While the world waits for smart bandages to show up in the local pharmacy, we’re already able to get medical advice from digital assistants and “smart speakers” such as Amazon Alexa, Google Assistant, and Apple’s Siri. Unfortunately, it turns out that advice isn’t always very good.

A recent in-depth article in Quartz warns that Alexa is a terrible doctor. The problem is that IoT devices are connected to the internet, and as authors Katherine Ellen Foley and Youyou Zhou point out, “Many of the internet’s answers to health-related questions tend to be far-reaching or vague.” And asking a smart speaker like Alexa is even worse: “Although she is wonderfully helpful with basic questions, her knowledge of the complex medical world is limited, and Alexa generally only serves up one answer.”

Worse, the source of that answer can vary widely. If you ask Alexa to search on a health-related question, you’ll likely get an answer from Wikipedia or WebMD. But the authors note that there are also 1,000 health-related skills on Amazon, ranging in quality from “cumbersome at best [to] peddlers of pseudoscience at worst.” Many carry disclaimers that their information is “for entertainment purposes only,” but that’s unlikely to stop people from relying on them for medical advice.

One doctor interviewed by the Quartz authors said, “The treatment advice offered by the Alexa skills was generally fine, but could be dangerously unsuitable for specific groups of patients,” depending on allergies or pre-existing conditions. Another was even more dismissive: “These answers all sound like they just extract information from Wikipedia (which contains a lot of incorrect information) using very simple ‘yes’ or ‘no’ algorithms. Based on my judgment, these are all bad responses.”

Healthcare IoT stuck in the waiting room
OK, so relying on your smart speaker for medical advice probably isn’t a good idea. That seems fairly obvious. But that’s basically what the IoT actually offers right now. And those cool smart bandages? Who knows when they’ll be on store shelves. Unfortunately, that dichotomy could describe the overall state of healthcare IoT: lots of cool stuff in development, but be wary of what’s available now.

https://www.networkworld.com

AI boosts data-center availability, efficiency

Artificial intelligence is set to play a bigger role in data-center operations as enterprises begin to adopt machine-learning technologies that have been tried and tested by larger data-center operators and colocation providers.

Today's hybrid computing environments often span on-premises data centers, cloud and colocation sites, and edge computing deployments. And enterprises are finding that a traditional approach to managing data centers isn't optimal. By using artificial intelligence, as played out through machine learning, there's enormous potential to streamline the management of complex computing facilities.

AI in the data center, for now, revolves around using machine learning to monitor and automate the management of facility components such as power and power-distribution elements, cooling infrastructure, rack systems and physical security.

Inside data-center facilities, there are increasing numbers of sensors that are collecting data from devices including power back-up (UPS), power distribution units, switchgear and chillers. Data about these devices and their environment is parsed by machine-learning algorithms, which cull insights about performance and capacity, for example, and determine appropriate responses, such as changing a setting or sending an alert.  As conditions change, a machine-learning system learns from the changes – it's essentially trained to self-adjust rather than rely on specific programming instructions to perform its tasks.
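As a generic illustration of that loop (not any vendor's actual model), the sketch below trains a simple anomaly detector on historical sensor readings and flags new measurements that should trigger an alert; the column names, file paths and contamination rate are hypothetical.

```python
# Illustrative sketch: learning "normal" facility telemetry and flagging
# readings that warrant an alert. Column names, file paths and the
# contamination rate are hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

COLUMNS = ["ups_temp_c", "pdu_load_kw", "chiller_supply_c"]

# Historical telemetry: one row per reading from UPS, PDU and chiller sensors.
history = pd.read_csv("facility_telemetry.csv", usecols=COLUMNS)
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(history[COLUMNS])

# Score the latest readings; -1 marks an outlier that should raise an alert.
latest = pd.read_csv("latest_readings.csv", usecols=COLUMNS)
latest["anomaly"] = model.predict(latest[COLUMNS])
for _, row in latest[latest["anomaly"] == -1].iterrows():
    print("Alert: abnormal reading", row[COLUMNS].to_dict())
```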

The goal is to enable data-center operators to increase the reliability and efficiency of the facilities and, potentially, run them more autonomously. However, getting the data isn’t a trivial task.

A baseline requirement is real-time data from major components, says Steve Carlini, senior director of data-center global solutions at Schneider Electric. That means chillers, cooling towers, air handlers, fans and more. On the IT equipment side, it means metrics such as server utilization rate, temperature and power consumption.

“Metering a data center is not an easy thing,” Carlini says. “There are tons of connection points for power and cooling in data centers that you need to get data from if you want to try to do AI.”

IT pros are accustomed to device monitoring and real-time alerting, but that’s not the case on the facilities side of the house. “The expectation of notification in IT equipment is immediate. On your power systems, it’s not immediate,” Carlini says. “It’s a different world.”

It’s only within the last decade or so that the first data centers were fully instrumented, with meters to monitor power and cooling. And where metering exists, standardization is elusive: Data-center operators rely on building-management systems that utilize multiple communication protocols – from Modbus and BACnet to LONworks and Niagara – and have had to be content with devices that don’t share data or can’t be operated via remote control. “TCP/IP, Ethernet connections – those kinds of connections were unheard of on the powertrain side and cooling side,” Carlini says.

The good news is that data-center monitoring is advancing toward the depth that’s required for advanced analytics and machine learning. “The service providers and colocation providers have always been pretty good at monitoring at the cage level or the rack level, and monitoring energy usage. Enterprises are starting to deploy it, depending on the size of the data center,” Carlini says.

Machine learning keeps data centers cool
A Delta Airlines data center outage, attributed to electrical-system failure, grounded about 2,000 flights over a three-day period in 2016 and cost the airline a reported $150 million. That’s exactly the sort of scenario that machine learning-based automation could potentially avert. Thanks to advances in data center metering and the advent of data pools in the cloud, smart systems have the potential to spot vulnerabilities and drive efficiencies in data-center operations in ways that manual processes can’t.

A simple example of machine learning-driven intelligence is condition-based maintenance that’s applied to consumable items in a data center, for example, cooling filters. By monitoring the air flow through multiple filters, a smart system could sense if some of the filters are more clogged than others, and then direct the air to the less clogged units until it’s time to change all the filters, Carlini says.
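A minimal sketch of that filter logic, with hypothetical airflow readings and thresholds, might look like this:

```python
# Hypothetical condition-based maintenance logic for cooling filters:
# divert air away from the most clogged units, and flag a full filter
# change once even the best unit has degraded. All values are made up.
NOMINAL_CFM = 1000.0   # airflow through a clean filter

def rebalance_filters(airflow_cfm, clog_ratio=0.8, replace_ratio=0.6):
    """Return filters to divert air away from, and whether to replace them all."""
    best = max(airflow_cfm.values())
    clogged = [f for f, flow in airflow_cfm.items() if flow < clog_ratio * best]
    replace_all = best < replace_ratio * NOMINAL_CFM   # even the best filter is badly clogged
    return clogged, replace_all

clogged, replace_all = rebalance_filters(
    {"filter_a": 940.0, "filter_b": 610.0, "filter_c": 955.0})
print("Divert air away from:", clogged, "| replace all filters:", replace_all)
```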

Another example is monitoring the temperature and discharge of the batteries in UPS systems. A smart system can identify a UPS system that’s been running in a hotter environment and might have been discharged more often than others, and then designate it as a backup UPS rather than a primary. “It does a little bit of thinking for you. It’s something that could be done manually, but the machines can also do it. That’s the basic stuff,” Carlini says.

Taking things up a level is dynamic cooling optimization, which is one of the more common examples of machine learning in the data center today, particularly among larger data-center operators and colocation providers.

With dynamic cooling optimization, data center managers can monitor and control a facility’s cooling infrastructure based on environmental conditions. When equipment is moved or computing traffic spikes, heat loads in the building can change, too. Dynamically adjusting cooling output to shifting heat loads can help eliminate unnecessary cooling capacity and reduce operating costs.

Colocation providers are big adopters of dynamic cooling optimization, says Rhonda Ascierto, research director for the datacenter technologies and eco-efficient IT channel at 451 Research. “Machine learning isn’t new to the data center,” Ascierto says. “Folks for a long time have tried to better right-size cooling based on capacity and demand, and machine learning enables you to do that in real time.”

Vigilent is a leader in dynamic cooling optimization. Its technology works to optimize the airflow in a data center facility, automatically finding and eliminating hot spots.

Data center operators tend to run much more cooling equipment than they need to, says Cliff Federspiel, founder, president and CTO of Vigilent. “It usually produces a semi-acceptable temperature distribution, but at a really high cost.”

If there’s a hot spot, the typical reaction is to add more cooling capacity. In reality, higher air velocity can produce pressure differences, interfering with the flow of air through equipment or impeding the return of hot air back to cooling equipment. Even though it’s counterintuitive, it might be more effective to decrease fan speeds, for example.

Vigilent's machine learning-based technology learns which airflow settings optimize each customer's thermal environment. Delivering the right amount of cooling, exactly where it's needed, typically results in up to a 40% reduction in cooling-energy bills, the company says.

Beyond automating cooling systems, Vigilent’s software also provides analytics that customers are using to make operational decisions about their facilities.

“Our customers are becoming more and more interested in using that data to help manage their capital expenditures, their capacity planning, their reliability programs,” Federspiel says. “It’s creating opportunities for lots of new kinds of data-dependent decision making in the data center.”

AI makes existing processes better
Looking ahead, data-center operators are working to extend the success of dynamic-cooling optimization to other areas. Generally speaking, areas that are ripe for injecting machine learning are familiar processes that require repetitive tasks.

“New machine learning-based approaches to data centers will most likely be applied to existing business processes because machine learning works best when you understand the business problem and the rules thoroughly,” Ascierto says.

Enterprises have existing monitoring tools, of course. There’s a longstanding category of data-center infrastructure management (DCIM) software that can provide visibility into data center assets, interdependencies, performance and capacity. DCIM software tackles functions including remote equipment monitoring, power and environmental monitoring, IT asset management, data management and reporting. Enterprises use DCIM software to simplify capacity planning and resource allocation as well as ensure that power, equipment and floor space are used as efficiently as possible.

“If you have a basic monitoring and asset management in place, your ability to forecast capacity is vastly improved,” Ascierto says. “Folks are doing that today, using their own data.”

Next up: adding outside data to the DCIM mix. That’s where machine learning plays a key role.

Data-center management as a service, or DMaaS, is a service that’s based on DCIM software. But it’s not simply a SaaS-delivered version of DCIM software. DMaaS takes data collection a step further, aggregating equipment and device data from scores of data centers. That data is then anonymized, pooled and analyzed at scale using machine learning.

Two early players in the DMaaS market are Schneider Electric and Eaton. Both vendors mined a slew of data from their years of experience in the data-center world, which includes designing and building data centers, building management, electrical distribution, and power and cooling services.

“The big, significant change is what Schneider and Eaton are doing, which is having a data lake of many customers’ data. That’s really very interesting for the data-center sector,” Ascierto says.

Access to that kind of data, harvested from a wide range of customers with a wide range of operating environments, enables an enterprise to compare its own data-center performance against global benchmarks. For example, Schneider’s DMaaS offering, called EcoStruxure IT, is tied to a data lake containing benchmarking data from more than 500 customers and 2.2 million sensors. 

“Not only are you able to understand and solve these issues using your own data. But also, you can use data from thousands of other facilities, including many that are very similar to yours. That’s the big difference,” Ascierto says.

Predictive and preventative maintenance, for example, benefit from deeper intelligence. “Based on other machines, operating in similar environments with similar utilization levels, similar age, similar components, the AI predicts that something is going to go wrong,” Ascierto says.

Scenario planning is another process that will get a boost from machine learning. Companies do scenario planning today, estimating the impact of an equipment move on power consumption, for example. “That’s available without machine learning,” Ascierto says. “But being able to apply machine-learning data, historic data, to specific configurations and different designs – the ability to be able to determine the outcome of a particular configuration or design is much, much greater.”

Risk analysis and risk mitigation planning, too, stand to benefit from more in-depth analytics. “Data centers are so complex, and the scale is so vast today, that it’s really difficult for human beings to pick up patterns, yet it’s quite trivial for machines,” Ascierto says.

In the future, widespread application of machine learning in the data center will give enterprises more insights as they make decisions about where to run certain workloads. “That is tremendously valuable to organizations, particularly if they are making decisions around best execution venue,” Ascierto says. “Should this application run in this data center? Or should we use a collocation data center?”

Looking further into the future, smart systems could take on even more sophisticated tasks, enabling data centers to dynamically adjust workloads based on where they will run the most efficiently or most reliably. "Sophisticated AI is still a little off into the future," Carlini says.

In the meantime, for companies that are just getting started, he stresses the importance of getting facilities and IT teams to collaborate more.

“It’s very important that you consider all the domains of the data center – the power, the cooling and the IT room,” Carlini says. The industry is working hard to ensure interoperability among the different domains’ technologies. Enterprises need to do the same on the staffing front.

“Technically it’s getting easier, but organizationally you still have silos," he says.

https://www.networkworld.com

Is Edge Becoming as Commonplace as the Cloud?

Although edge computing has been around for some time, the term edge has lately crept into every discussion and every presentation at industry conferences.

Today, nearly every company we talk to at SDxCentral wants to tell us about their edge computing product. And even service providers like AT&T and Verizon are talking about their network edge as part of their marketing message.

This type of frenzy around the edge reminds me a lot of the early days of cloud computing when every company wanted to talk about their “cloud” solution and many industry groups spent time defining the cloud.

But what exactly is the edge? I’m not the only one asking that question. About a year ago my colleague Linda Hardesty tried to unravel the puzzle in this article. But she noted that in terms of network infrastructure the definition of the edge includes everything from base stations to small cells and data centers to routers and even switches.

Recently a group of vendors and analysts also tackled this issue in a “State of the Edge 2018” report. The report focused on the architecture for edge deployments and provided some guidance on the future of edge computing.

The report defined the edge in two ways: the infrastructure edge and the device edge. And it said that compute will exist at both locations coordinated with a centralized cloud.

Even vendors are trying to define the edge. In a July blog post, Kevin Shatzkamer, vice president of enterprise and service provider strategy and solutions at Dell EMC, said he breaks the edge into two categories. The first is the access edge, which is a terminating point on the network such as an SD-WAN endpoint or an IoT device. The second is the network edge, which is an aggregation point within a network, such as data-center initiatives like Central Office Re-architected as a Datacenter (CORD) and edge clouds.

Interestingly, Shatzkamer also groups edge into different use cases. For example, there is content at the edge; security at the edge; IoT at the edge; and data processing, or analytics, at the edge.

For David King, CEO of Foghorn Systems, which makes analytics software that runs at the edge of the network, the hype around the network edge has accelerated dramatically since the company first announced its funding in June 2016. "Everybody had overlooked the edge when we launched. Now everyone has an edge," King said.

But King discounts those that claim even sensors on an IoT network are part of the edge. “Sensors don’t have context or awareness,” he said, adding that he believes that for something to be the network edge it has to be “close to the data but high enough in the topology to make it valuable.”

Jason Anderson, VP of business line management at Stratus Technologies, which makes an edge computing platform specifically for industrial applications, agrees that people are getting confused when they see IoT as the edge. “It’s a layer above the IoT device,” he said. “Most consumer IoT applications don’t need an edge computing layer. Where you need that additional computing layer is when the data is critical and needs protecting.”

Similar to the cloud, there are a lot of different approaches to the edge. And because of that it’s difficult to say exactly where the edge begins and where it ends. But like any new technology, the industry will eventually start to coalesce around a definition. And when that happens, we will likely start to have an easier time distinguishing the hype from reality.

https://www.sdxcentral.com

Pegasystems launches Pega Digital Experience API to create consumer-grade interfaces

Pegasystems, the software company empowering customer engagement at the world’s leading enterprises, launched the Pega Digital Experience API, a set of design and application development capabilities that allows organizations to provide elegant and powerful digital experiences on any web or mobile channel. Part of the digital transformation suite, the Pega Digital Experience API allows front-end developers to create consumer-grade user interfaces that seamlessly embed Pega’s industry-leading process automation and customer experience functionality — ensuring exceptional form and function in every digital customer interaction.

Developers face technological silos that make it challenging to deliver consistently superior experiences across digital channels. Too often, technology limitations force developers to hard-code business logic into each individual channel, creating disjointed customer experiences and ongoing maintenance headaches. The Pega Digital Experience API enables organizations to create stunning front-end interfaces at every digital point of engagement while directly connecting them to the end-to-end processes that drive work across the enterprise. Pega gives developers the flexibility to leverage popular UI frameworks such as React and Angular together with Pega's powerful UX design system to create connected customer experiences with their preferred tools. The Pega Digital Experience API provides powerful design capabilities that enable developers to:

• Unify with leading design technologies: Developers who prefer to use other UI frameworks such as React and Angular can leverage open APIs to dynamically consume Pega design capabilities as a REST-enabled service powering their front-end UI framework of choice (see the sketch after this list). The Pega Digital Experience API delivers a rich set of UX metadata so they can dynamically assemble an experience that seamlessly embeds business logic such as required fields, data types, validation rules, and more. UI elements changed using Pega's no-code UX design system will be immediately reflected in the developer's custom JavaScript framework without additional coding. The API also includes starter packs and sample code to quickly integrate Angular and React into developer workflows.

• Enhance and extend existing interfaces with micro front ends: Pega makes it easy to embed responsive UI components directly into existing web pages or mobile apps leveraging Pega Mashup technology. Developers can add new functionality that seamlessly interacts with legacy interfaces, enabling them to adapt quickly to changing customer needs without recoding the entire UI.
• Design effective and elegant interfaces jumpstarted with pre-built templates: Pega’s UX design system enables users to create responsive web and app designs that both grab the customer’s eye and allow for fast, accurate service. Reusable digital components plug seamlessly into existing digital ecosystems, while the drag-and-drop interface enables complete UI customization with no coding required. It provides 12 out-of-the-box templates for commonly used experiences with the ability to create additional templates to match any design.

• Build seamless mobile apps: Pega’s open, responsive, and adaptive UI technology makes multi-channel deployment fast to build and easy to change. Users can build mobile applications completely in Pega, embed Pega into existing apps leveraging mashup, or seamlessly connect native mobile apps to Pega APIs by using the Pega Connect SDK. Pega handles all mobile OS updates automatically on the backend to ensure apps are compatible with the latest features and capabilities.
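As a hedged illustration of how a custom front end's backend might consume such a REST surface, the sketch below issues two requests against a Pega-style API; the host, endpoint paths, authentication scheme and payload fields are assumptions made for illustration, not a definitive description of the Digital Experience API.

```python
# Hedged sketch: fetching case-type metadata and creating a case over REST
# from the server side of a custom React/Angular front end. The base URL,
# endpoint paths, credentials and payload fields are assumptions for
# illustration only.
import requests

BASE = "https://pega.example.com/prweb/api/v1"   # assumed API base path
AUTH = ("front.end.user", "password")            # placeholder credentials

# Discover the case types (and their UX metadata) the UI can offer.
case_types = requests.get(f"{BASE}/casetypes", auth=AUTH).json()
print([ct.get("name") for ct in case_types.get("caseTypes", [])])

# Create a case; the metadata returned would drive field rendering in the UI.
resp = requests.post(
    f"{BASE}/cases",
    auth=AUTH,
    json={"caseTypeID": "ORG-App-Work-ServiceRequest", "content": {"Priority": "High"}},
)
print(resp.status_code, resp.json().get("ID"))
```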

The Pega Digital Experience API is part of Pega Infinity, the next-generation digital transformation software suite that connects customer engagement applications to digital process automation (DPA). Today Pega Infinity powers customer experiences for more than 1.5 billion consumers for some of the largest and most successful brands in the world. By adding the Pega Digital Experience API to Pega Infinity, Pega now delivers both the form and function for digital engagement on a single unified platform. 

http://www.cio.in

Monday, 23 July 2018

The best programming language for data science and machine learning

Arguing about which programming language is the best one is a favorite pastime among software developers. The tricky part, of course, is defining a set of criteria for "best."

With software development being redefined to work in a data science and machine learning context, this timeless question is gaining new relevance. Let's look at some options and their pros and cons, with commentary from domain experts.

Even though, in the end, the choice is at least to some extent a subjective one, some criteria come to mind. Ease of use and syntax may be subjective, but things such as community support, available libraries, speed, and type safety are not. There are a few nuances here, though.

Execution speed and type safety
In machine learning applications, the training and operational (or inference) phases for algorithms are distinct. So, one approach taken by some people is to use one language for the training phase and then another one for the operational phase.

The reasoning here is to work during development with the language that is more familiar or easy to use, or has the best environment and library support. Then the trained algorithm is ported to run on the environment preferred by the organization for its operations.

While this is an option, especially using standards such as PMML, it may increase operational complexity. In addition, in many cases things are not clear-cut, as programming done in one language may call libraries in another one, thus diluting the argument on execution speed.
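As a small example of that train-here, deploy-there pattern, the sketch below trains a scikit-learn model in Python and exports it to PMML with the third-party sklearn2pmml package so a different runtime can score it; the data file and column names are placeholders.

```python
# Hedged sketch: train in Python, export to PMML for a different runtime
# (e.g. a JVM-based scoring engine) to serve. Assumes the third-party
# sklearn2pmml package; file and column names are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

df = pd.read_csv("training.csv")
pipeline = PMMLPipeline([("classifier", LogisticRegression())])
pipeline.fit(df[["feature_a", "feature_b"]], df["label"])

# Write an interchange file the operational environment can load.
sklearn2pmml(pipeline, "churn_model.pmml")
```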

Another thing to note is type safety. Type safety in programming languages is a little like schema in databases: While not having it increases flexibility, it also increases the chances of errors.

In a discussion thread he initiated, Andriy Burkov, machine learning team leader at Gartner, argues against using dynamically typed languages such as Python for machine learning.

"You can run an experiment for several hours, or even days, just to find out that the code crashed because of an incorrect type conversion or a wrong number of attributes in a method call," says Burkov.

Java
Despite having what is arguably the largest footprint in enterprise deployment, Java is not getting much love these days. Some of this may have to do with the "coolness factor," as Java has been challenged by new programming languages, but there are also some very real concerns here.

What has greatly helped Java establish its footprint, namely the JVM, is also a reason why people are skeptical about using it for machine learning. Similarly, garbage collection, a hallmark Java feature that spares developers the memory-management complexities of C++, may pose problems in production environments.

When discussing trends in software development with Paco Nathan, managing partner at Derwen and data science practitioner and thought leader, the topic did come up.

Nathan notes that the trend he sees is toward real-time applications, and this is not something he believes the JVM is well-suited for, as it is an abstraction over the hardware. Adding a layer between the code and the hardware provides cross-platform portability, but also slows down execution.

Nathan also cites Ion Stoica, the initiator of Apache Spark, which is heavily used for real-time applications. Nathan mentioned that one of the rules Stoica has recently set for his research team in Berkeley is abolishing Java.

Nathan commented that he expects that to spill over from research to industry over a five-year timeframe, as is typical for directions initiated in research environments. But maybe we should not be too fast in writing off Java.

The ups and downs that have been following Java during its stewardship by Oracle may have contributed to its falling out of grace. They may also have something to do with the perceived stalemate in the evolution of the JVM.

With enterprise Java being handed off to the Eclipse foundation, however, there is a chance Java and the JVM may be revitalized. There are also initiatives, such as Gandiva, which aim to optimize Java code for specialized hardware, potentially making it a competitive option for machine learning.

In addition, that large footprint has given rise to initiatives such as DeepLearning4J, which aim to give Java users access to the same kinds of libraries typically used through other languages.

Python
According to a recent survey by KDnuggets, Python is the undisputed leader for data science and machine learning. Often-cited reasons for this preference are the wide choice of libraries and the fact that Python is considered an easy language to work with.
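
A short, purely illustrative example (not from the survey) shows the kind of brevity that library support makes possible: with scikit-learn, loading data, training a model and evaluating it takes only a handful of lines.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a bundled dataset, split it, train a classifier, and evaluate it.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```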

Ashok Reddy, GM DevOps at CA Technologies, notes that Python was the language of choice in his recently completed master's in AI and Machine Learning at Georgia Tech.

Reddy goes on to add that Python is gaining popularity in universities due to its simplicity, so graduates are more likely to know Python than Java. Beyond simplicity, he also cites the abundance of libraries as a key reason for this.

Reddy notes that, from a performance perspective, C is also a popular choice for AI and embedded-IoT applications, but Java is not going away. Reddy also sees a pattern of using Python for development and then other languages for the deployment of machine learning algorithms.

This also applies internally at CA: Reddy notes that, in addition to having legacy code in C and Java, the company considers the cross-platform portability that Java offers a key priority.

"Many startups use Ruby or Python initially, and when they grow up they switch to Java," says Reddy.

R
In the KDnuggets survey, R's share appears to be dropping compared to last year. R, however, has been gaining enterprise adoption over the last few years.

In some ways R is not a typical programming language, as it is not a general-purpose one. R's roots lie in statistics; it was developed specifically to address statistical needs.

That, and the fact that it's open source, make for a wealth of off-the-shelf libraries for common and not-so-common related tasks. The flip side of this is that R has been plagued by issues such as memory management and security, and its syntax is not very straightforward or disciplined.

In the past few years, R has seen development environments built around it to fill those gaps and take it out of the data science lab and into enterprise deployments.

One of those, created by Revolution Analytics, has been integrated into Microsoft's offerings (Visual Studio, SQL Server, Power BI and Azure) following Revolution Analytics' acquisition by Microsoft. Another, RStudio, was integrated first with Apache Spark and more recently with Databricks.

The way this was done is indicative of another strength of R -- its package system. It is through this, and through its ties with the academic community, that R keeps up with the latest developments in data science and machine learning.

While R may be a good choice for development, its value in production is highly dependent on its supporting ecosystem.

Julia, Golang, Rust, Swift, and JVM languages
And what about those who do not want the dynamic typing of Python, or the legacy baggage of Java or C/C++? For a start, Python 3.6 and later supports optional type annotations, which external checkers such as mypy can verify before code runs, even though the interpreter itself does not enforce them.

Burkov notes that Scala and Kotlin, two newer JVM-based languages, offer optional typing, but come with a steep learning curve and low user adoption, respectively. And, in the end, we might add, they also carry the same restrictions imposed by the JVM.

Swift, Burkov notes, has static typing but limited availability of machine learning and data analysis libraries. Other options suggested by contributors in the same thread are Golang, Julia, and Rust.

Golang has been described as fast, compiled, clean, and simple, with good built-in support for concurrency. It also has growing library support for NLP, general machine learning, and data extraction, processing, analysis and visualization.

Julia has been described as flexible in its type usage and JIT-compiled like Java, yet with execution speed comparable to C. It is a relatively new language, so its community is not the biggest around, but Julia does have some machine learning library support.

Rust has been described as compiling natively and efficiently like plain C/C++, doing without garbage collection, and being type-safe and expressive. Even its proponents admit, though, that it is not really ready for machine learning due to the lack of ML-specific libraries.

The choice of programming language is not a simple one, and in the end it may not even be the most important one. As Luiz Eduardo Le Masson, data science leader at Stone Co., points out:

"For 'ordinary machine learning,' it does not matter what language you use. But when you need to have real online learning algorithms and inferences in realtime for millions of simultaneous clusters and respond in less than 500 ms, the topic does not only involve languages, but architecture, design, flow control, fault tolerance, resilience."

https://www.zdnet.com

AI, ML emerge as popular domains for reskilling: Survey

Edtech company Simplilearn announced the findings of the Career Impact Survey 2018, which is aimed at analyzing the impact of professional certifications and reskilling among working professionals in India. The findings reveal that Artificial Intelligence (AI) and Machine Learning (ML) are the most widely chosen domains for reskilling (25 percent of respondents), followed by Big Data and Data Science (20 percent).

Other new-age categories such as digital marketing, cloud computing, cybersecurity, DevOps, and agile and scrum together saw 55 percent uptake in reskilling among professionals. According to the survey, certification courses helped professionals enhance their performance (31 percent) and gain manager and peer appreciation (29 percent), while 40 percent of respondents who have taken certification courses said they feel more confident at work.

“Going digital is indispensable for a company’s survival today and likewise, it has become crucial for professionals to proactively upgrade their skills to meet the latest industry requirements”, said Krishna Kumar, Founder & CEO of Simplilearn.

“The survey outlines how impactful certification courses and reskilling are for professionals who want to acquire the right skills for career advancement opportunities.”

Impact on Career
The industry is ever evolving due to advanced technologies, enabling more and more professionals to take charge of their careers through certification courses and reskilling. According to the survey, 44 percent of the respondents said that reskilling and certifications impacted their pay raise during the performance appraisal cycle, while 24 percent said it impacted their promotions. Thanks to reskilling, 32 percent of the respondents were able to move to other departments within their organization. When it comes to job searches, a majority of respondents (62 percent) found that professional certifications increased their prospects of finding new jobs.

Future Demand for Reskilling
With organizations around the world adapting to digital transformation, professionals want to proactively hone their technology skills. Over the next three to six months, a combined 67 percent of the respondents want to hone their skills in Big Data, Data Science, AI, Machine Learning and Cloud Computing. In addition to technology skills, 55 percent of the respondents are keen to improve their managerial skills, followed by their problem-solving skills.

http://www.cio.in

Big Switch Takes the Fast Path to Hybrid Cloud Via On-Prem VPCs

Big Switch Networks will help enterprises set up virtual private clouds (VPCs) in their on-premises data centers, modeling the VPCs they use in the big public clouds. Big Switch is doing this to make it easy for enterprises to adopt hybrid cloud.

When businesses set up their accounts with the major public cloud providers, they create VPCs. For Amazon Web Services (AWS) it’s called AWS VPC; for Microsoft it’s Azure vNet; and for Google it’s Google Cloud Platform VPC.

Today, Big Switch announced its intention to bring “VPC everywhere” to accelerate hybrid cloud adoption.

Kyle Forster, founder of Big Switch Networks, said the company wants to take the best of cloud networking and bring that on premises. It’s starting with VPC because “It’s the single biggest configuration that everything else hangs off of,” said Forster. “I think we’re the only ones taking this first step. We think it’s the fastest path to hybrid cloud.”

Big Switch’s technology is based on software-defined networking (SDN). Its VPC for on-premises is a logical Layer 2/Layer 3 network that users provision on top of the cloud. “The model is really well-thought-through for an as-a-service cloud culture,” said Forster.

Hybrid Cloud Approaches
The “hybrid-cloud” topic has been hot for the past year. And different vendors are taking various approaches. Both Cisco and Arista have announced hybrid cloud strategies to bridge their on-premises environments to public clouds. In August 2017, Cisco said it was making it easier for its customers that use its Application Centric Infrastructure (ACI) in their private data centers to connect that infrastructure to public clouds.

In cloud-native circles, Cisco’s approach has been disparaged as a virtual hardware (vHardware) tactic, where engineers use the same CLIs for both their on-premises and cloud workloads.

In September 2017, Arista Networks rolled out its Any Cloud software platform designed for enterprises to extend their workloads across private data centers and public clouds.

There are also rumors that AWS may sell white box switches to enterprise customers, making their on-premises environment look like AWS’ public cloud environment. This would make it easy to move workloads back and forth.

Forster doesn’t think AWS will build a commercial white box switch to compete with traditional switch vendors. But he does think there’s a good chance AWS will build a white box appliance to connect private data centers with the public cloud. Asked how this might affect Big Switch’s VPC plans, Forster said, “If AWS were to build a DirectConnect appliance using white box switch hardware, it would fill a nice hole in the market, and we’d connect to it for automation using our controllers — same thing we do with all of their other APIs.”

Big Switch says its on-prem VPCs provide consistent IT governance across private data centers and any public cloud. Additionally, application placement is not constrained by networking or compliance but is instead based on application needs such as latency, bandwidth, data locality, or cost.

The company’s enterprise VPC is the first step in its new Cloud First portfolio. Over the next six months it plans to announce three new products to support hybrid cloud.

https://www.sdxcentral.com

Friday, 20 July 2018

AWS Adding Artificial Intelligence, Compute Services to Cloud Lineup

Amazon is dealing with striking workers in Europe, site disruptions during its Prime Day sale event and protestors inside and outside the Javits Convention Center, site of this week's AWS NYC Summit 2018.

None of that appeared to bother Amazon Web Services executives at the Summit, who announced new capabilities for the company's artificial intelligence, machine learning and compute services on the AWS cloud.

With artificial intelligence and machine learning services in demand, AWS rolled out improvements to its SageMaker service, which enables users to build and deploy models in the cloud. 

Dr. Matt Wood, AWS’s General Manager for Machine Learning, announced two updates intended to help speed up the service: SageMaker Streaming Algorithms and SageMaker Batch Transform.

Streaming Algorithms enables users to stream large amounts of training data from the S3 storage service into SageMaker. Streaming support, which will help developers build and train custom machine learning applications faster, is available for TensorFlow-based models now, with others coming soon, he said. Along the same lines, the Batch Transform service allows users to feed large files or file sets into SageMaker for offline testing or batch inference.
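
As a rough sketch of how Batch Transform is driven programmatically, the boto3 call below submits a batch job; the job name, model name, bucket paths and instance type are placeholders rather than details from the announcement.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Run offline inference over a set of input files already sitting in S3.
sm.create_transform_job(
    TransformJobName="nightly-scoring-job",   # placeholder
    ModelName="my-trained-model",             # placeholder, created beforehand
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://example-bucket/batch-input/",
            }
        },
        "ContentType": "text/csv",
    },
    TransformOutput={"S3OutputPath": "s3://example-bucket/batch-output/"},
    TransformResources={"InstanceType": "ml.m4.xlarge", "InstanceCount": 1},
)
```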

Two other improvements were made to AWS’s turnkey AI services Amazon Translate and Amazon Transcribe. Wood said Amazon Translate now supports Japanese, Russian, Italian, Traditional Chinese, Turkish, and Czech, with support for Dutch, Swedish, Polish, Danish, Hebrew and Finnish coming soon.
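
For context, calling Amazon Translate from Python through the boto3 SDK looks roughly like the sketch below; the region and sample text are placeholders.

```python
import boto3

translate = boto3.client("translate", region_name="us-east-1")

# Translate a short string into Japanese, one of the newly supported languages.
result = translate.translate_text(
    Text="Machine learning is moving into the hands of every developer.",
    SourceLanguageCode="en",
    TargetLanguageCode="ja",
)
print(result["TranslatedText"])
```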

Amazon Transcribe now supports Channel Synthesis, which can take multi-channel audio streams and construct a single transcription from them. 

“It’s still early and nascent,” Wood said. “But our mission is to put machine learning in the hands of every developer.” 

On the compute side, AWS Chief Technology Officer Werner Vogels announced two new instance types: the memory-optimized R5 and the turbo-charged z1d.

R5 is a fifth-generation instance built on Skylake Xeon processors, for high performance computing, parallel processing and in-memory database applications. The R5 is also available with up to 3.6 TB of NVMe-based flash storage. 

The z1d instance type includes custom Xeon Scalable Processors that can run at sustained speeds of up to 4.0 GHz for compute-intensive workloads.

A third new instance was announced as part of an update to the AWS Snowball Edge—the hardened device that began as a data transfer device and has evolved into an edge computing platform. 

AWS has updated it with the ability to run full Amazon EC2 (Elastic Compute Cloud) instances based on the new “SBE1” instance type, which runs a Xeon D processor at 1.8 GHz.

Officials said that users, such as those in offshore environments, are finding the need to add some batch processing or simple analytics on data coming into a Snowball device, before it is connected to the AWS cloud. 

Executives did not address the outage that knocked out Amazon for some Prime Day customers Monday afternoon. Vogels kept his cool on stage despite being interrupted by a protestor about 10 minutes into his talk. The protestor was escorted out by security. 

Despite the drama, the number of Amazon Web Services customers going “all-in” keeps growing. That includes DTCC, a financial markets clearinghouse, which has grown its AWS usage significantly over the past five years, said DTCC Managing Director and Chief Technology Architect Rob Palatnick, who spoke on stage at the event. The company’s usage has grown from 275 million requests per month to 780 million, and from 1.5 TB of data stored to 20 TB.

Likewise, 21st Century Fox has reduced its data center footprint from 74 facilities to four and now stores 25 petabytes of data and 45 million video assets in AWS.

“AWS has enabled us to rearchitect the way we work,” said Fox CTO Paul Cheesbrough, and has changed the way the company manages its content supply chain, streaming services, ad placement and delivery, and creative production and marketing. “We are becoming real-time predictive and even more intelligent with SageMaker, Athena [S3 query service], Glue [Extract, Load, Transform data management] and Redshift [data warehouse].” 

Scot Petersen, http://www.eweek.com/cloud