Friday, 29 June 2018

Google cloud storage gets a boost with managed NAS service

Google is adding to its cloud storage portfolio with the debut of a network attached storage (NAS) service.

Google Cloud Filestore is managed file storage for applications that require a file system interface and shared file system for data. It lets users stand up managed NAS with their Google Compute Engine and Kubernetes Engine instances, promising high throughput, low latency and high IOPS.

The managed NAS option brings file storage capabilities to Google Cloud Platform for the first time. Google’s cloud storage portfolio already includes Persistent Disk, a network-attached block storage service, and Google Cloud Storage, a distributed system for object storage. Cloud Filestore fills the need for file workloads, says Dominic Preuss, director of product management at Google Cloud.

“You can only attach one VM to Persistent Disk. With [Google Cloud Storage] object storage, you can have a bunch of things read it, but it doesn’t have the transactional semantics of a file system. Cloud Filestore is the intermediate of those two: you can have multiple VMs speak to it, and it has transactional properties of a file system.”

To make it easier for companies to move data into its cloud storage, Google also announced the availability of Transfer Appliance. The rackable storage server is designed to move large amounts of data out of enterprise data centers and into Google’s cloud.

Google ships the appliance to customer sites where the data is transferred at high speeds into the appliance before it’s sent back to Google to complete the data transfer to the customer’s storage system in the Google cloud. (AWS Snowball is Amazon’s version of a similar appliance.) Google first shared details of its Transfer Appliance last summer, and it’s now generally available in the U.S.

Transfer Appliance is aimed at companies that want to move large amounts of data to Google Cloud Platform and don’t want to rely on traditional network links. It’s recommended for companies that have to move more than 20TB of data or data that would take more than a week to upload, Google says.

Rounding out Google’s news is the launch of its latest cloud region in Los Angeles. LA is the fifth U.S. site in Google’s global cloud platform network and the 16th site worldwide.

Google put a Hollywood spin on its spate of news, emphasizing that media and entertainment are among the expected beneficiaries of Google’s regional expansion to LA. One example is The Mill, a global visual effects studio that works on short-form content like commercials and music videos in addition to larger projects.

“A lot of our short-form projects pop up unexpectedly, so having extra capacity in-region can help us quickly capitalize on these opportunities,” said Tom Taylor, head of engineering at The Mill, in a statement. “The extra speed the LA region gives us will help us free up our artists to do more creative work.”

Google Cloud Filestore
Media and entertainment companies are also among the targets for Cloud Filestore, which is geared for organizations with lots of rich unstructured content and applications that require low latency and high IOPS, such as content management systems, website hosting, render farms and virtual workstations.

Film studios and production shops are always looking to render movies and create CGI images faster and more efficiently, Preuss says. Cloud Filestore offers an alternative to using on-premises machines and on-premises files for rendering.

“Media and entertainment have a number of distributed workloads. The most common one is render farms, or render workloads,” Preuss says. “You’ve spent several years, let’s say, developing a movie or an animated movie, and the last step is to have a big compute job to either render the animation or render the special effects or do some post processing on the movie. You need to be able to have a number of VMs working on that data to do updates. That’s a very common use case for a managed NFS offering like Cloud Filestore.”

Google offers two pricing tiers for Cloud Filestore. The Premium tier is $0.30 per GB per month, and the Standard tier is $0.20 per GB per month (pricing varies slightly in some regions). Cloud Filestore Premium is designed to provide up to 700 MB/s throughput and 30,000 IOPS, regardless of instance capacity. For Standard instances, performance scales with capacity; peak performance is available at 10TB and above. Cloud Filestore will enter beta next month.
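At those list prices, for example, a 10TB Standard instance, the capacity at which Standard performance peaks, works out to roughly $2,000 per month, and the same capacity on the Premium tier to roughly $3,000.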

Google Transfer Appliance
Companies can request a Transfer Appliance directly from the Google Cloud Platform console. The appliance comes in two configurations: a 100TB model priced at $300, and a 480TB model priced at $1,800. If customers compress data, it typically doubles the raw capacity, according to Google. Express shipping runs roughly $500 for the 100TB appliance and $900 for the 480TB model. The device is designed to fit in a standard 19-inch data-center rack.
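At the typical 2x compression ratio Google cites, that means the 480TB model can in practice carry on the order of a petabyte of source data, and the 100TB model roughly 200TB.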

Some typical use cases Google cited include: migrating part or all of a data center to the cloud; kick-starting a machine learning or analytics project by transferring test data and staging it quickly; moving large archives of content such as creative libraries, videos, images, regulatory or backup data to cloud storage; and collecting data from research bodies or data providers and moving it to Google’s cloud for analysis.

Schmidt Ocean Institute is an early adopter of Transfer Appliance. The nonprofit foundation owns and operates the Falkor, an oceanographic research ship with a 100TB Transfer Appliance installed onboard.

“We needed a way to simplify the manual and complex process of copying, transporting and mailing hard drives of research data, as well as making it available to the scientific community as quickly as possible,” said Allison Miller, research program manager at Schmidt Ocean Institute, in a statement. “We are able to mount the Transfer Appliance onboard to store the large amounts of data that result from our research expeditions and easily transfer it to Google Cloud Storage post-cruise. Once the data is in Google Cloud Storage, it’s easy to disseminate research data quickly to the community.”

https://www.networkworld.com

What’s new in Rust 1.27

Version 1.27 of the Rust systems programming language is now available.

Current version: What’s new in Rust 1.27

Rust 1.27 features basic SIMD (single-instruction, multiple-data) capabilities. The std::arch module serves as a gateway to architecture-specific instructions, usually related to SIMD. A higher-level std::simd module is planned for the future.
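As a rough illustration (not taken from the article), here is a minimal sketch of using std::arch with runtime feature detection, assuming an x86_64 target; the helper names add_quad and add_quad_avx2 are invented for the example, and the scalar branch is the fallback when AVX2 is not detected.

    // Minimal sketch: runtime AVX2 detection plus std::arch intrinsics.
    // Hypothetical helper names; x86_64 only, with a scalar fallback.
    #[cfg(target_arch = "x86_64")]
    fn add_quad(a: &[i32; 8], b: &[i32; 8]) -> [i32; 8] {
        if is_x86_feature_detected!("avx2") {
            // Safe because AVX2 support was just verified at runtime.
            unsafe { add_quad_avx2(a, b) }
        } else {
            let mut out = [0i32; 8];
            for i in 0..8 {
                out[i] = a[i] + b[i];
            }
            out
        }
    }

    #[cfg(target_arch = "x86_64")]
    #[target_feature(enable = "avx2")]
    unsafe fn add_quad_avx2(a: &[i32; 8], b: &[i32; 8]) -> [i32; 8] {
        use std::arch::x86_64::*;
        // Load, add and store eight 32-bit lanes with 256-bit instructions.
        let va = _mm256_loadu_si256(a.as_ptr() as *const __m256i);
        let vb = _mm256_loadu_si256(b.as_ptr() as *const __m256i);
        let sum = _mm256_add_epi32(va, vb);
        let mut out = [0i32; 8];
        _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, sum);
        out
    }

    fn main() {
        #[cfg(target_arch = "x86_64")]
        {
            let a = [1i32; 8];
            let b = [2i32; 8];
            println!("{:?}", add_quad(&a, &b));
        }
    }

The is_x86_feature_detected! check is what lets a single binary take the vectorized path only on CPUs that actually support it.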
Other new features in Rust 1.27 include:
  • The dyn Trait syntax is stabilized, providing a clearer syntax for trait objects via the contextual dyn keyword. The old "bare trait" syntax for trait objects is deprecated, because it is often ambiguous and confusing (see the sketch after this list).
  • The #[must_use] attribute can now be used on functions. Previously, it applied only to types, such as Result<T, E>. Parts of the standard library have also been updated to use #[must_use].
  • Multiple new APIs were stabilized in the release, including DoubleEndedIterator::rfind and NonNull::cast.
  • The Cargo package manager now accepts a --target-dir flag to change the target directory for a given invocation. Additionally, auto keys have been added to Cargo.toml for dealing with targets.
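As a quick, hedged illustration of the first two items above, here is a minimal sketch; the function names boxed_label and clamped are invented for the example and are not from the release notes.

    use std::fmt::Display;

    // `Box<dyn Display>` replaces the now-deprecated bare `Box<Display>`.
    fn boxed_label(flag: bool) -> Box<dyn Display> {
        if flag {
            Box::new("enabled")
        } else {
            Box::new(0)
        }
    }

    // With 1.27, ignoring this function's return value produces a warning.
    #[must_use]
    fn clamped(value: f64) -> f64 {
        value.max(0.0).min(1.0)
    }

    fn main() {
        println!("{}", boxed_label(true));
        let _ratio = clamped(1.7); // binding the result avoids the must_use warning
    }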

Where to download Rust 1.27

You can download Rust 1.27 from the project website.

Previous version: What’s new in Rust 1.26

Rust 1.26 was released in mid-May 2018. Highlights include:
  • A reduction in compile times, via a fix impacting nested types.
  • Support for 128-bit integers (i128 and u128), which are twice the width of u64 and can hold correspondingly larger values.
  • Library stabilizations, specifically fs::read_to_string, providing a convenience over File::open and io::Read::read_to_string for reading a file into memory.
  • The Cargo package manager is expected to offer faster resolution of lock files and require manual cargo update invocations less often. 
  • The impl Trait feature, which lets functions accept or return an anonymous type that implements a given trait (a form of existential type), is now stable (a combined sketch of several of these features follows this list).
  • Better match bindings, with the compiler automatically referencing or dereferencing in match.
  • Basic slice patterns, which allow you to match on slices in a similar way to matching on other data types.
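Below is a combined, minimal sketch of several of these 1.26 features; the function names are invented, and the final line assumes a Cargo.toml file exists in the working directory.

    use std::fs;
    use std::io;

    // `impl Trait` in return position hides the concrete iterator type.
    fn evens(limit: u32) -> impl Iterator<Item = u32> {
        (0..limit).filter(|n| *n % 2 == 0)
    }

    // u128 can represent values well beyond u64's range.
    fn doubled_max() -> u128 {
        u64::max_value() as u128 * 2
    }

    // Basic slice patterns, plus the improved match bindings that let a
    // &[u32] be matched without explicit dereferencing.
    fn describe(values: &[u32]) -> &str {
        match values {
            [] => "empty",
            [_] => "one value",
            [_, _] => "two values",
            _ => "three or more",
        }
    }

    // Returning a Result from main (also stabilized in 1.26) allows `?` here.
    fn main() -> io::Result<()> {
        println!("{:?}", evens(10).collect::<Vec<_>>());
        println!("{}", doubled_max());
        println!("{}", describe(&[1, 2, 3]));
        // fs::read_to_string replaces the File::open + read_to_string boilerplate.
        let text = fs::read_to_string("Cargo.toml")?;
        println!("{} bytes read", text.len());
        Ok(())
    }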

Previous version: What’s new in Rust 1.25

Version 1.25 of Rust features an upgrade to its LLVM (Low-Level Virtual Machine) compiler infrastructure that improves support for the WebAssembly portable code format, which itself is designed to improve the performance of web applications. Rust 1.25 also includes improvements to the Cargo package manager and library stabilizations.

New Rust features from the LLVM upgrade

The Rust language has been upgraded to LLVM 6 from LLVM 4. This lets Rust keep abreast of the upstream WebAssembly back end and pick up new features when they land.
The LLVM upgrade also fixes some SIMD-related compilation errors. For internet of things (IoT) development, LLVM 6 brings Rust closer to supporting the AVR microcontroller family, used in boards such as the Arduino Uno. Rust, Mozilla claims, can improve the security and reliability of IoT devices and is much better at this than the C/C++ languages commonly used to write microcontroller firmware. AVR support is due soon.

New Rust features from the Cargo CLI changes

For the Cargo command-line interface, cargo new will default to generating a binary rather than a library. Rust’s developers said they try to keep the CLI stable but that this change was important and unlikely to result in breakage. They said that cargo new accepts two flags:
  • --lib, for building libraries
  • --bin, to build binaries or executables
In previous versions of Cargo, developers who did not pass one of these flags would get --lib by default. That choice was made because each binary often depends on many libraries, which seemed to make the library case more common. But this reasoning is actually backwards: each library is depended on by many binaries. Also, some community members found the default surprising.

Other new features in Rust 1.25

Other features in Rust 1.25 include:
  • Library stabilizations include a std::ptr::NonNull<T> type, which is nonnull and covariant.
  • libcore has gained the time module, including the Duration type that previously was available only in libstd.
  • Checkouts of Git dependencies from the Cargo database folder should be quicker, due to the use of hard links. Cargo caches Git repositories in a few locations.
  • Nested import groups provide a new way to write use statements, which can reduce repetition and foster clarity (see the sketch below).
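As a minimal sketch (the example itself is assumed, not taken from the release notes), a nested import group collapses several use statements into one:

    // One `use` statement replaces several separate imports.
    use std::{
        collections::{HashMap, HashSet},
        time::Duration,
    };

    fn main() {
        let mut counts: HashMap<&str, u32> = HashMap::new();
        counts.insert("requests", 1);

        let mut seen: HashSet<&str> = HashSet::new();
        seen.insert("requests");

        // Duration also lives in libcore as of 1.25, so no_std crates can use it.
        let timeout = Duration::from_millis(250);
        println!("{:?} {:?} {:?}", counts, seen, timeout);
    }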
https://www.infoworld.com

Thursday, 28 June 2018

Serverless cloud computing is the next big thing

Serverless computing in the cloud is a good idea; serverless computing is not just for the datacenter. Serverless cloud computing means getting out of the business of provisioning cloud-based servers, such as storage and compute, to support your workloads, and instead using automation at the cloud provider to allocate and deallocate resources automatically.

Although there are cost advantages of serverless cloud computing, the real advantage is simplicity. Removing developers and application managers from resource provisioning just makes the public cloud easier to use and—most important—easier to change.

The notion of serverless computing goes beyond resource provisioning; it is spreading to other parts of the cloud as well. The most used serverless platform, AWS Lambda, has been augmented with Lambda@Edge, for edge computing.

There are also serverless-enabled versions of cloud-based databases, such as Amazon Aurora, which is available in MySQL-compatible and PostgreSQL-compatible editions. Aurora scales up to 64TB of database storage, using a serverless approach to allocate cloud resources as needed.

We’re witnessing a reengineering of public cloud services to use a serverless approach. Resource-intensive services such as compute, storage, and databases are first in line, but you can count on higher-end cloud services, including machine learning and analytics, being added to the list over time.

What this all means for the enterprise is that less work will be needed to figure out how to size workloads. This serverless trend should also provide better utilization and efficiency, which should lower costs over time. Still, be careful: I’ve seen the use of serverless computing lead to higher costs in some instances. So be sure to monitor closely.

There is clearly a need for serverless cloud computing. In fact, I am surprised that it took so long for the public cloud providers to figure this out. But it’s good that they have.

https://www.infoworld.com

Who is a cloud architect? A vital role for success in the cloud

Who is a cloud architect?

Cloud architects are responsible for managing the cloud computing architecture in an organization, especially as cloud technologies grow increasingly complex. Cloud computing architecture encompasses everything involved with cloud computing, including the front-end platforms, servers, storage, delivery and networks required to manage cloud storage.

The cloud architect role

According to a 2018 report from RightScale, 81 percent of enterprises have a multi-cloud strategy and 38 percent of enterprises view public cloud as their top priority in 2018 — up from 29 percent in 2017. The report also found that cloud architect jobs grew in the past year, with 61 percent identifying as a cloud architect in 2018 compared to 56 percent in 2017.
To handle the complexities of cloud adoption, most organizations will want to hire a cloud architect — if they haven’t already. These IT pros can help navigate the entire organization’s cloud adoption, helping to avoid risk and ensure a smooth transition.

Cloud architect responsibilities

According to Gartner, the three main, high-level responsibilities of a cloud architect are:
  • Leading cultural change for cloud adoption
  • Developing and coordinating cloud architecture
  • Developing a cloud strategy and coordinating the adaptation process
While those are the high-level responsibilities, day-to-day responsibilities of a cloud architect, according to Gartner, include:
  • Finding talent with the necessary skills
  • Assessing applications, software and hardware
  • Creating a “cloud broker team”
  • Establishing best practices for cloud across the company
  • Selecting cloud providers and vetting third-party services
  • Overseeing governance and mitigating risk
  • Working closely with IT security to monitor privacy and develop incident-response procedures
  • Managing budgets and estimating cost
  • Operating at scale

Cloud architect salary

According to data from PayScale, the average salary for a cloud architect is $124,923 per year, with a reported salary range between $82,309 and $185,208 per year depending on experience, location and skills.

Cloud architect skills

Cloud architects are responsible for communicating with vendors to negotiate third-party contracts for hardware, software and other cloud technologies. It’s a constantly evolving field, and the job requires someone who can stay on top of the latest trends and technologies.
http://www.cio.in

IBM Brings Cloud Private Platform to Cloud Managed Services Offering

IBM announced on June 21 that it is expanding its cloud-native capabilities, enabling the Kubernetes-based IBM Cloud Private platform to run on the company's Cloud Managed Services offering.

The new capability brings together IBM's recently launched Cloud Private platform with the managed CMS offering to help enterprises adopt container-based, cloud-native application models.

"CMS is a managed infrastructure-as-a-service offering which is trusted by many large IBM customers due to its security and privacy features," Michael Elder, IBM Distinguished Engineer and Master Inventor for the IBM Private Cloud Platform, told eWEEK. "IBM Cloud Private now enables Kubernetes-based clusters to run on top of CMS."

IBM Cloud Private embeds many capabilities to support the adoption and operation of Kubernetes in the enterprise, including a built-in Image Registry and Helm Catalog, Elder added. Kubernetes is a popular open-source container orchestration system that was originally built by Google and is now widely deployed by enterprises as well as public cloud providers.

IBM Cloud Private Not Based on OpenStack

IBM Cloud Private was first announced in November 2017 as a hybrid cloud offering. While IBM had previously strongly advocated for using OpenStack-based technology for both private and public cloud deployments, Cloud Private is not based on OpenStack. Elder explained that IBM Cloud Private is a private cloud platform for developing and running workloads locally. 

"It is an integrated environment that enables customers to design, develop, deploy and manage on-premises, containerized cloud applications behind the firewall," Elder said. "It includes the container orchestrator Kubernetes, a private image repository, a management console and monitoring frameworks. It can run on-premises, but also in many public cloud environments, and now we are making it available on CMS." 

IBM Standardizes on Helm

A core element of application deployment for Kubernetes is the Helm project, which became a top-level project of the Cloud Native Computing Foundation (CNCF) on June 1. The CNCF is also the open-source organization where the Kubernetes project has been hosted since July 2015.

"IBM has standardized on Helm charts as our preferred mechanism for delivering containerized workloads," Elder said. IBM Cloud Private offers a rich catalog of Helm charts for IBM middleware, open-source middleware and other third parties such as F5."

Elder added that IBM supports syndicating Helm repositories from the community as well, with built-in role-based access control (RBAC) to allow an enterprise to choose what their team may consume.

Security Features

CMS has multiple security and disaster recovery capabilities that will now supplement features that are already in IBM Cloud Private. IBM Cloud Private has built-in security for RBAC, data in transit encryption, data at rest encryption, image vulnerability scanning and other capabilities, Elder said. 

"CMS will take advantage of these capabilities and offer them to CMS cloud consumers," he said. "CMS will manage the backup and recovery procedures for IBM Cloud Private on behalf of IBM clients." 

From a policy management perspective, Elder said IBM Cloud Private administrators will have control over each uniquely deployed cluster. Cluster admins will be able to control behaviors such as segmenting applications and network traffic, and also admission controllers that allow admins to assign their image deployment policies as well.

For organizations looking to deploy applications across multiple providers, the nature of Kubernetes is a key enabler.

"Because IBM Cloud Private is based on open technology, including Kubernetes, Open Container Initiative image formats [Docker], Helm, and Terraform apps written on Kubernetes can be ported across various Kubernetes providers," Elder said.

www.eweek.com

Google opens its human-sounding Duplex AI to public testing

Google is moving ahead with Duplex, the stunningly human-sounding artificial intelligence software behind its new automated system that places phone calls on your behalf with a natural-sounding voice instead of a robotic one.

The search giant said Wednesday it's beginning public testing of the software, which debuted in May and which is designed to make calls to businesses and book appointments. Duplex instantly raised questions over the ethics and privacy implications of using an AI assistant to hold lifelike conversations for you.

Google says its plan is to start its public trial with a small group of "trusted testers" and businesses that have opted into receiving calls from Duplex. Over the "coming weeks," the software will only call businesses to confirm business and holiday hours, such as open and close times for the Fourth of July. People will be able to start booking reservations at restaurants and hair salons starting "later this summer."

On Tuesday, Google invited press to Oren's Hummus Shop in Mountain View, California, a small Israeli restaurant two-and-a-half miles away from its corporate campus, to see the first live demos of the project and try it out for ourselves. (Google wouldn't allow video recording of the demos, though. A similar press event was held at a Thai restaurant in New York City a day before.)

The event was also a chance for Google to clear the air on Duplex, which has been under scrutiny from the moment Google CEO Sundar Pichai unveiled the technology at its I/O developer conference. Google gave me an early peek at Duplex in May, but declined to give me a live demo, making it difficult at the time to assess how the technology might actually work in real life.

Unlike the semi-robotic voice assistants we hear today -- think Amazon's Alexa, Apple's Siri or the Google Assistant coming out of a Google Home smart speaker -- Duplex sounds jaw-droppingly lifelike. It mimics human speech patterns, using verbal tics like "uh" and "um." It pauses, elongates words and intones its phrases just like you or I would.

But that realism has also freaked people out. Critics were concerned about the ethical implications of an artificially intelligent robot deceiving a human being into thinking he or she was talking to another person.

On Wednesday, Google revealed exactly how it will let people know they're talking to an AI. After the software says hello to the person on the other end of the line, it will immediately identify itself: "Hi, I'm the Google Assistant, calling to make a reservation for a client. This automated call will be recorded." (The exact language of the disclosure varied slightly in a few of the different demos.)

The company said it will disclose that the call is being recorded "in states that legally require" that disclosure. Eleven states, including California, Illinois and Florida, "require the consent of every party to a phone call or conversation in order to make the recording lawful," according to the Digital Media Law Project. Thirty-eight states and the District of Columbia have one-party consent laws. For calls between states, the stricter law applies -- for instance, California law requires all-party consent, but New York law doesn't.

Setting a standard
How Google handles the release of Duplex is important because that will set the tone for how the rest of the industry treats commercial AI technology at a mass scale. Alphabet, Google's parent, is one of the most influential companies in the world, and the policies it carves out now will not only set a precedent for other developers, but also set expectations for users. 

Duplex is the stuff of sci-fi lore, and now Google wants to make it part of our everyday life. Looking years down the line, if the tech is a hit, it could touch off an era in which humans conversing with natural-language robots is normal. So getting it right at the dawn of lifelike bots is crucial.

"We think it's important to set a standard on ways technology could be used for good," said Nick Fox, vice president of product and design for the Google Assistant. "With things like the disclosure, it's important that we do take a stand there, so that others can follow as well."

Google has already been thinking more broadly about the effects of its AI. Earlier this month, Pichai released a manifesto on the ethics of AI, highlighting what the company would and wouldn't develop as it thinks about its moral responsibility. He  said Alphabet would not develop AI for weapons, but it would still pursue military contracts. The new guidelines came after an employee protest at Google over its involvement with Project Maven, a Pentagon initiative aimed at using AI for the analysis of drone footage.  

Fox referred to those guidelines on Tuesday as he talked about the questions surrounding Duplex's release. "These are things we're figuring out as a tech community," said Fox.   

https://www.cnet.com

Monday, 25 June 2018

Rackspace Launches First Kubernetes Private Cloud

Rackspace isn't afraid to make up its own new enterprise cloud markets.

On June 20 at the HPE Discover conference in Las Vegas, the internet services provider revealed that it has expanded its private cloud-as-a-service portfolio in collaboration with Hewlett Packard Enterprise to launch Rackspace Kubernetes-as-a-Service (RKaaS). This will include a pay-per-use infrastructure in a private cloud environment, which the company described as “an industry first.”

At the same time, Rackspace announced a similar deal with HPE to launch a Private Cloud-as-a-Service (PCaaS) powered by VMware, also using pay-per-use infrastructure.

The San Antonio-based company said that with both offerings, enterprises can deploy elastic infrastructure and simplified IT in a private cloud environment located in their own data center, a colocation facility or a data center managed by Rackspace.

Kubernetes Orchestration as a Cloud Service
Kubernetes is an open-source project that provides container orchestration, deployment and management capabilities. While Kubernetes started off as a Google project and Google still contributes more code than anyone, it has been a multi-stakeholder effort run by the Linux Foundation's Cloud Native Computing Foundation (CNCF) since July 2015.
Kubernetes' roots go back to 2014, when Google publicly released the open source code for the project. But it was 2017 when Kubernetes' popularity took off, with nearly every major IT vendor now backing the platform--even onetime rivals, such as Docker.
Rackspace has been focused in recent months on creating a portfolio of Private Cloud-as-a-Service (PCaaS) solutions designed to relieve IT teams of their operational burden, the company said.
Rackspace Kubernetes-as-a-Service will help enterprises take advantage of the benefits of Kubernetes, such as the flexibility to easily move applications across multiple clouds, the ability to scale up to billions of containers and increased productivity managing containerized applications. It will do this by delivering:
  • Pay-as-you-go service: Using HPE GreenLake Flex Capacity, customers pay for what they use in an on-demand model for infrastructure. This feature enables customers to more closely align resources without the need to pay for additional fixed capacity. This flexible capacity model allows customers to take full advantage of the instant enterprise-level scalability of Kubernetes.
  • Agility, scalability and strategic flexibility: Users maintain the architectural and data control benefits of a private cloud environment while rapidly scaling their entire private cloud capacity in a public cloud-like manner. Because this solution is fully open-source using upstream Kubernetes, customers will avoid vendor lock-in.  Finally, customers will have the flexibility to scale their Kubernetes environments at their own pace in nearly any data center in the world, including the customer data center, Rackspace data center or third-party colocation facility.
  • Transformed Day 2 operations: Rackspace helps ensure the transformation to container-based workloads. In addition to getting customers up and running, Rackspace goes a step further by managing ongoing “Day 2” operations for customers, including updates, zero downtime upgrades, patching and security hardening for Kubernetes, all managed cluster services and the node operating system.
  • Enterprise-grade security: From the infrastructure to the cluster itself, including the containers running inside the cluster and additional services required to run the application, Rackspace secures Kubernetes using industry-best practices. Rackspace experts validate and vet each component of the service, provide static container scanning and ensure only authorized users can access the environment.
Rackspace Kubernetes-as-a-Service will be available in all regions this month, the company said. 
http://www.eweek.com

Thursday, 21 June 2018

HPE adds GreenLake Hybrid Cloud to enterprise service offerings

With its new GreenLake Hybrid Cloud offering, HPE's message to the enterprise is simple: Your cloud, your way.

HPE is adding Microsoft Azure and Amazon Web Services capabilities to its GreenLake pay-per-use offerings, providing a turnkey, managed service to deploy public and on-premises clouds.

The company debuted the new HPE GreenLake Hybrid Cloud service Tuesday at its Discover conference in Las Vegas, saying that it can manage enterprise workloads in public and private clouds using automation and remote services, eliminating the need for new skilled staff to oversee and manage cloud implementations.

"What you’re seeing is, we’re shifting the company to be very much services-led," said Ana Pinczuk, senior vice president and general manager for HPE Pointnext, which runs the GreenLake services. "We're leveraging our 'right-mix' heritage and we're moving to consumption models that provide customers with flexibility –  we call it 'their cloud, their way,' and it gives us an entrée to a relationship with a customer  that’s much more long-term."

When HPE GreenLake launched last November it offered a variety of packaged workload services including big data, backup, database with EDB Postgres, SAP HANA, and edge compute solutions, but was limited to on-premises deployments.

Since just about every large enterprise runs a hybrid cloud to some extent, the move to embrace public cloud was arguably inevitable.

"I’d say it was a position they had to take," said Rob Brothers, vice president of Data Center and Support & Deploy Services at IDC. "They know that workloads aren’t going to reside in one place so they need a strategy where if it didn’t make financial sense to use  GreenLake on premise they need to be able to help the customers with migration to the cloud side of the equation, to the public cloud sectors. So for me it’s a kind of an evolution and something they had to get into to be able to paint the whole picture."

Growing through acquisition
HPE has added expertise gained from the acquisition of two companies: London-based RedPixie,  a cloud consultancy and application developer specializing in Microsoft Azure, and Boston-based Cloud Technology Partners (CTP), a consultancy firm focused on migrations to Amazon Web Services (AWS). CTP was acquired last September and RedPixie was scooped up in April.

HPE says GreenLake Hybrid Cloud can design, implement, manage and optimize hybrid cloud environments using an automated toolset based on HPE OneSphere and the company's software-defined technology.

HPE's Pointnext IT services unit has 25,000 staff around the world. The unit is a rebranded version of HPE's old Technical Services arm, which took up the slack on consulting and services after HPE two years ago spun out its huge services business, merging it with CSC to form DXC Technology, a company with $25 billion in revenue a year.

"Our cloud practice is one of these big areas where we started in an incubation mode – you know, how do we help customers optimize their environments and move workloads between private and public clouds," said Pinczuk. "So we did two acquisitions, CTP and RedPixie, and now we have a set of offerings that are much more scaled."

What is Flex Capacity?
One of Pointnext's marquee offerings is GreenLake Flex Capacity, which allows users to pay for services on a consumption, or per-use, basis. The on-premise infrastructure that HPE uses includes products from the ProLiant server, Synergy and SimpliVity converged infrastructure, and 3PAR storage lines.

"What GreenLake Flex Capacity does is take our infrastructure solutions like storage and compute and provides that in a pay-asyou-go model," said Pinczuk. "We have different metering options: so depending on whether its compute or storage we can charge by gig, by core, and there are other options for clients. We take the basic unit for whatever the infrastructure is, and we charge by that metric – by VM, by container by gigabit, or for storage, [by] petabyte of backup."

Some of HPE's major competitors offer flexible pricing plans. Dell EMC, for example, offers CloudFlex, and Cisco has Cloud Pay.

Pay-per-use offers financial benefits
Generally, consuming IT on a pay-per-use model means that the money spent can be accounted for as an operational rather than a capital expense, giving users a tax benefit.

HPE's 2017 acquisition of California cloud-consumption-analytics software provider Cloud Cruiser has helped the company provide cost comparisons to enterprises, Pinczuk noted.

"Now we can send our financial people to talk to the CFO about what does it mean to shift from capex to opex models," Pinczuk. "The same way that the development world is moving toward agile development, we're moving toward agile service delivery."

On Monday, HPE said it was making GreenLake Flex Capacity available for partners to sell. 

As HPE as well as its channel partners and competitors ramp up their flexible-consumption programs, enterprises have an increasingly broad set of choices when planning their IT budgets. 

At the moment, while HPE's GreenLake Hybrid Cloud service works with AWS and Azure, the company is willing to embrace other cloud platforms.

"We want to be cloud-agnostic," Pinczuk said. If there is demand from HPE customers, the company will extend its hybrid-cloud services to other clouds. "It will depend on customer interest."

https://www.networkworld.com

Cloud backup is not the same as standard datacenter backup

Backup is just good policy. You need the ability to back up data and applications someplace, so they can be restored somehow, to keep the business running in case of some natural or manmade disaster that takes the primary business-critical systems down.

We have whole industries that provide backup sites and backup technology. They can be passive, meaning that you can restore the site in a short period of time and get back to operations. Or they can be active (which costs more), meaning it can instantly take over for the disabled systems with current data and code releases—in some cases, without the users even knowing.

In the cloud, disaster recovery involves a new set of choices that don’t look much like the ones you have for on-premises systems. The approach that you take should reflect the value that the applications and data sets have for the business. I suggest that you look at the practicality of it all, and also make sure that you’re not spending more than the disaster recovery configuration is worth.

Option 1: Region-to-region disaster recovery
You can set up two or more regions in the same public cloud provider to provide recovery. So, if the Virginia region is taken out, other regions in the country can take over.

You can pay to have exact copies of the data and apps replicated to the backup region, so they can seamlessly take over (that’s active recovery). Or you can use more cost-effective approaches, such as scheduled backup to passive mass storage, to stand up the other region quickly (that’s passive recovery).

Option 2: Cloud-to-cloud disaster recovery
The most common question that I get is: What if the entire public cloud provider is wiped out or suffers a long-term outage? How can we protect ourselves?

Using one public cloud to provide backup to another public cloud would let you, for example, use Amazon Web Services to back up Azure, go the other way around, or do some other pairing.

While this seems like the ultimate in disaster recovery—and in hedging bets—doing multicloud just to support disaster recovery means keeping around two different skill sets, maintaining two different platform configurations, and taking on other costs and risks.

Ongoing cloud-to-cloud system replication (aka intercloud replication) increases the chance that things will go wrong. That’s not what you want when trying to replicate the primary and backup platforms. While not impossible, intercloud replication is five times more difficult than intracloud replication within the same provider. That’s why intercloud support is almost nonexistent, outside a few clever consultants.

https://www.infoworld.com

HPE puts enterprise software applications at the edge network

CIOs, network administrators and data-center managers who see a need to run full-fledged, unmodified enterprise software at the edge of their networks, on factory floors and oil rigs, now have an opportunity to do so.

HPE is certifying complete enterprise software stacks for its EdgeLine converged infrastructure devices, allowing enterprises to run the exact same applications in the data center, in the cloud or at the network edge.

The certifications will cover software from vendors including Microsoft, SAP, PTC, SparkCognition and Citrix to run on its EdgeLine EL1000 and EdgeLine EL4000 systems, the company said Wednesday at its Discover conference in Las Vegas.

The move comes after HPE CEO Antonio Neri on Tuesday announced that the company would be investing $4 billion in intelligent edge systems over the next four years.

The EdgeLine systems use Intel Xeon processors and offer built-in computing, storage and data-capture, taking data-center power to the edge of the network in hardware that can be placed on factory floors, rail cars or even windmills.

EdgeLine systems merge operational technology – controlling everything from factory equipment to pumps, heating and cooling systems and electrical power – and information technology. They incorporate HPE's iLO firmware, allowing them to be controlled from the data center or remote devices using HPE management software.

"HPE is probably the only vendor who supports the operational technology systems and operational technology networks and devices directly, whereas the other players use some form of gateway or some form of protocol conversion that’s probably in a separate box somewhere," said Peter Havart-Simkin, research director for IoT for Gartner. "So EdgeLine is different in the sense that it can actually support the operational-technology side of the business and the information-technology side of the business."

With the move to certify complete enterprise software stacks for its EdgeLine devices, HPE's merger of IT and OT becomes even more solid.

"We will ship enterprise class-compute, storage and manageability out to the edge, out of the data center, out to the cloud, no compromise," said Tom Bradicich, general manager and vice president of IoT and converged systems at HPE.

"Most edge systems are either closed and proprietary or they're quite compromised; they have lower processing power, or smaller memory – they can’t run fully enterprise-class applications," Bradicich said.

Various software companies offer modified versions of enterprise applications for edge devices, often tapping container technology, but they need to connect to the cloud to be fully operational, noted Havart-Simkin. Having complete enterprise software stacks that can run in places like oil rigs or factories in remote locations is a huge advantage for enterprises, he said.

Complete edge systems avoid network latency
"If you're running a factory in South America where the networking is dodgy in that country you wouldn’t want to run your manufacturing execution software on the cloud because the network could actually stop your factory from working," Havart-Simkin said.

Another advantage of having full systems – complete with enterprise-class storage – on the edge is that it helps companies comply with data privacy regulations. "Some companies can't have their data leave the edge," said Bradicich.

To complement and support enterprise applications running on the EdgeLine devices, HPE also announced the EdgeLine Extended Storage Adapter option kit, which adds as much as 48 terabytes of storage to the systems. The extra storage is meant to enable data-heavy workloads like AI, video analytics and databases. It will also let enterprises use a variety of storage management tools like HPE's StoreVirtual VSA, VMware vSAN and Microsoft Storage Space, HPE said.

The extended storage adapters will start at a list price of $299.

HPE is certifying a range of software applications for the EdgeLine devices, including:

  • PTC ThingWorx, a platform for IoT designed to let enterprises develop and implement augmented reality applications;
  • SparkCognition's SparkPredict, which taps Spark machine learning technology to enable predictive analytics and provide advanced failure notices to data center managers.

In addition, HPE will certify Azure Stack, Microsoft's on-premises version of its public cloud, and SAP Hana, SAP's in-memory data platform, in "the September timeframe," as it rolls out the Extended Storage Adapter, Bradicich said.

Since the versions of these applications that run on EdgeLine systems are the same ones that run in the enterprise data center and in the cloud, no new skills are needed to manage them, Bradicich said.

https://www.networkworld.com

Wednesday, 20 June 2018

Getting grounded in IoT networking and security

The internet of things already consists of nearly triple the number of devices as there are people in the world, and as more and more of these devices creep into enterprise networks it’s important to understand their requirements and how they differ from other IT gear.

The major difference is that, so far, they are designed with little or no thought to security. That stems partly from their comparatively limited memory and compute power for supporting security, and partly from designs that treat time-to-market, price and features as the top considerations, to the exclusion of security.

IoT devices use a varied set of communications protocols, so in an enterprise environment it’s essential that there’s support for whatever means they use to transfer the data they gather.

They are also built around a small set of recent standards or no standards at all, which can complicate interoperability.

Vendors, service providers and practitioners are working on these problems, but in the meantime, it’s important for networking pros to come up to speed with the challenges they face when the time comes to integrate IoT. That’s where this downloadable PDF guide comes in.

Networking IoT devices
It starts off with an article about what to consider when networking IoT devices. This includes linking up and communicating, but also the impact that the volumes of data they produce will have on networking infrastructure, delay, congestion, storage and analytics. IoT can even have an impact on network architecture, pushing more computing power to the network edge to deal with this data close to its source. Management is yet another challenge.

IoT network security
This is followed up by an article about how the network itself might have to become the place where IoT security is implemented. Given that the most desirable aspects of IoT – cost, density of deployments, mobility – cannot be forfeited, and compute power is limited, something else has to pick up the slack.

That something else could be the network and how it’s segmented to isolate IoT devices from attackers. This is followed up with 10 quick tips that help enhance IoT security.

Industrial IoT challenges
A major subcategory of IoT is industrial IoT, which includes robots, sensors and other specialized equipment commonly found in industrial settings. They come with their own set of challenges and security concerns that are the topic of the fourth article in this package.

https://www.networkworld.com (https://images.idgesg.net/assets/2018/05/ie_nw_getting20grounded20in20iot.pdf)

Tuesday, 19 June 2018

Oracle Updates Autonomous Cloud Services, Data Integration

The latest version of the Oracle Cloud Platform, released June 13, features a range of built-in autonomous capabilities across services including Oracle Mobile Cloud Enterprise, Oracle Data Integration Platform Cloud, and Oracle API Platform Cloud.

These new features build on the software giant’s introduction last fall of its Autonomous Database, a self-monitoring system that automatically finds and applies patches and regularly tunes itself for optimum performance.

The new Mobile Cloud Enterprise makes it easier for developers to add features such as chatbots for customer interaction via popular services such as Facebook Messenger and Amazon Alexa. Oracle said it’s easier because MCE takes care of the time-consuming task of integrating these messaging platforms with the chatbot.

“We also use natural language processing to understand the user’s intent as part of our dialog engine so the developer doesn’t need to code for every type of end user,” Siddhartha Agarwal, vice president of product management at Oracle, told eWEEK.

“We can help manage the conversation as it goes along because the chatbot understands the sentiment of the conversation. Also, it will hand off to a human if the customer doesn’t understand something or the customer is, say, getting ticked off.” 

In an enterprise setting, Agarwal said, the chatbot’s integration with either Software-as-a-Service or on-premises applications helps it work more effectively. “There is intelligent interaction between you and the system so if you say, for example, I need a new laptop, the system shows the three laptops you are eligible to procure and walks you through the transaction,” he said.

The Mutua Madrid Open ATP tennis tournament used the Oracle Cloud to develop a chatbot it calls MatchBot, which uses AI to engage in “natural” conversations with fans online about the event, players and results, as well as about hospitality services.

“With this new technology, we were able to provide visitors with an amazing experience—a pleasant, simpler, and faster way to get the information they wanted about the tournament,” Gerard Tsobanian, president and CEO of Mutua Madrid Open, said in a statement.

Oracle is also tackling big data in this latest release by simplifying and automating the creation of so-called data lakes and data warehouses, the huge repositories of operational data and customer transactions that companies rely on for insights into buying trends and customer satisfaction.

Oracle’s Data Integration Platform Cloud uses AI to stream and transfer large data volumes. Agarwal said Oracle is removing a lot of the steps developers typically have to take to build a data lake, such as knowing how to stream all the data, cleanse it, set up permission zones and directories.

“You need a lot of integration tools.  We give you one tool to do all of that from configuring the data lake to cataloging the data,” said Agarwal. “We land the data in the right security zones and we understand the meta data coming through and tag it so you have easier access to customer insights.”

Oracle also updated the Oracle API Platform Cloud and the Oracle Developer Cloud. The API Platform Cloud supports both Oracle and third-party clouds for agile API design and development. Oracle said it’s designed to help with hybrid deployments across Oracle Cloud, private and third party clouds covering every aspect of the API lifecycle. The system continually “learns” usage patterns and recommends allocation limits and configurations.

“We’ve created an API gateway and enabled that gateway to run in our public cloud or you can have it run locally,” said Agarwal.

The Oracle Developer Cloud is a complete development platform that’s included with all Oracle Cloud Services. It’s designed to automate software development and delivery cycles, and help teams manage agile development processes with an easy-to-use web interface and integration with popular development tools. 

http://www.eweek.com

Docker Advances Container Platform for the Multicloud World

DockerCon 18 kicked off here on June 13 with Docker Inc. making a series of announcements that aim to further advance container adoption by enterprises.

Docker announced it is enhancing its flagship Docker Enterprise Edition (EE) with a new federated application management capability that enables enterprises to manage and deploy containers across a multicloud infrastructure. The company is also improving its Docker Desktop application for developers with new template-based workflows for building container applications.

"Federated application management shows how Docker Enterprise Edition can be used to provide a consistent, uniform secure environment across which you could manage applications on multiple clusters, whether they're on premises or in the cloud," Docker Chief Product Officer Scott Johnston told eWEEK.

The federated application management capability is a technology preview that Docker is demonstrating at DockerCon that will become generally available in a Docker EE update later this year. The last major Docker EE milestone was version 2.0, which was announced on April 17, bringing Kubernetes container orchestration to Docker's platform.

How It Works

Each of the different cloud platforms has its own separate tooling, security models and workflows, Johnston said. What that means for container deployments is that they can often become siloed, which is what the federated application management capability is aiming to avoid.

Organizations that want to federate across multiple clouds, including those that are running Docker as well as those that run cloud provider based Kubernetes services, will need to run a component called Docker Trusted Registry (DTR) as an agent.   

Johnston explained that organizations push their updated Docker container to DTR, which then provides the security authentication and cryptographic signing of images. Those images can then be automatically promoted out to production data centers and cloud providers. DTR enables a common layer for image management and control across a multicloud deployment, he added.

The Docker federated approach also works with the native security policies of a given cloud provider. 

"What we do is we plug into the cloud policies at cloud providers to drive them natively," Johnston said. "So, for example, we're driving the native Amazon security policies, but we're not exposing users; we're abstracting that out, making it easier to use and manage for a multicloud deployment."

Docker Desktop

The Docker Desktop developer tools, which include Docker for Mac and Docker for Windows, are also getting a boost with a new technology preview. Docker Desktop enables developers to easily run Docker and Kubernetes on a notebook computer to build, test and run container applications.

Johnston said that while many developers are comfortable with the command line, there are many more who are not. To that end, Docker Desktop is now previewing template-based workflows that provide a drag-and-drop graphical user interface for getting started with application development templates.

This isn't the first time Docker has had some form of graphical tool to help users get started with containers. In 2015, Docker acquired Kitematic, technology that enables users to select applications from the Docker Hub image repository and then easily deploy them. Docker learned from the Kitematic experience, and the learning is reflected in the new template system for Docker Desktop, Johnston said.

Competition

With the integrated support for Kubernetes, Docker now finds itself in more direct competition with Kubernetes distribution vendors. Johnston noted, however, that there are only two other platforms Docker sees in competitive situations: Red Hat's OpenShift and Pivotal Container Service (PKS).

Overall, Johnston said the macro trend is that the widespread adoption of containers is what is driving the need for the multicloud federation capabilities.

"For us, it's an indication that we're hitting the hockey stick of the adoption curve right now," he said. "Most organizations are not wondering anymore why they should use containers or what containers are. Organizations now are saying they're important and are figuring out how to scale adoption throughout their IT environments."

http://www.eweek.com

Google Cloud sets service providers loose to drive cloud adoption

Google Cloud has officially launched Partner Interconnect into the market, leveraging an expanding ecosystem of channel partners to drive customer adoption.

Following a beta release in April, the rollout is designed to offer large-scale organisations high-speed connections from data centres to Google Cloud Platform (GCP) regions, leveraging multiple partners worldwide.

“Partner Interconnect lets you connect your on-premises resources to Google Cloud Platform (GCP) from the partner location of your choice, at a data rate that meets your needs,” wrote Sankari Venkataraman, technical program manager at Google Cloud, via an official blog post.

With general availability, customers now receive an SLA for Partner Interconnect connections when using one of the recommended topologies.

“If you were a beta user with one of those topologies, you will automatically be covered by the SLA,” Venkataraman explained.

“Partner Interconnect is ideal if you want physical connectivity to your GCP resources but cannot connect at one of Google’s peering locations, or if you want to connect with an existing service provider.”

Similar to Dedicated Interconnect - which was rolled out in September 2017 - Partner Interconnect offers private connectivity to GCP to organisations that don't require the full 10Gbps of a dedicated circuit.

The offering also allows organisations whose data centres are geographically distant from a Google Cloud region or Point of Presence (POP) to connect to GCP, using third-party partner connections.

From a geographic perspective, key partners in Sydney include Equinix; Macquarie Cloud Services; Megaport and NEXTDC.

Meanwhile, Megaport covers Singapore, with other providers globally including NTT Communications; Tata Communications and Verizon.

Furthermore, other providers include AT&T Business; BT; CenturyLink; Cologix; Colt; DE-CIX; Digital Realty; Internet2; IX Reach; KDDI; NRI; Orange Business Services; SoftBank; Tamares Telecom; Telia Carrier; @Tokyo and Zayo.

Getting started

For businesses already utilising the expertise of a service provider for network connectivity, a check-list of supported offerings is now available to assess whether such a partner provides Partner Interconnect service.

If not, Venkataraman said, customers can select a partner from the approved list based on data centre location.

“Make sure the partner can offer the availability and latency you need between your on-premises network and their network,” Venkataraman added. “Check whether the partner offers layer 2 connectivity, layer 3 connectivity, or both.

“If you choose a layer 2 partner, you have to configure and establish a BGP session between your cloud routers and on-premises routers for each VLAN [virtual local area network] attachment that you create. If you choose a layer 3 partner, they will take care of the BGP [border gateway protocol] configuration.

“Please review the recommended topologies for production-level and non-critical applications. Google provides a 99.99 per cent (with Global Routing) or 99.9 per cent availability SLA, and that only applies to the connectivity between your VPC network and the partner's network.”

In addition, Venkataraman said Partner Interconnect provides “flexible options” for bandwidth between 50 Mbps and 10 Gbps.

“Google charges on a monthly basis for VLAN attachments depending on capacity and egress traffic,” Venkataraman added.

http://www.channelworld.in

Sunday, 17 June 2018

Is Kubernetes Ready to Replace OpenStack and VMware?

Kubernetes just recently celebrated the four-year anniversary of its first commit, but the open source container orchestration platform is acting like anything but a toddler. In fact, those guiding the project are looking at a future where Kubernetes could replace OpenStack and VMware as the basis for cloud-native infrastructure.

Speaking at the DockerCon 2018 event in San Francisco, Dan Kohn, executive director of the Cloud Native Computing Foundation (CNCF), said the group is focused during the second half of 2018 on seeing if Kubernetes can take on a greater role in shaping the future of cloud architecture.

“It’s sort of doing it on its own without our help,” Kohn said of the Kubernetes Project, which recently hit graduation status within CNCF.

Kohn admitted that this focus was still in its early stages, but that CNCF could make some of those plans more public at the upcoming Open Source Summit North America event. That event is scheduled for late August in Vancouver, British Columbia.

OpenStack is still seen as foundational for telecom operators in migrating their network infrastructure to a virtualized environment. But there has also been ongoing concern about the bloated nature of the platform.

The same can be said for VMware, which remains the basis for virtualized server platforms that dominate the private cloud and enterprise space. However, truly unleashing the potential of cloud platforms will require a loosening of that grip on physical servers.

Kohn did note that a challenge for Kubernetes would be in making sure that compatibility remained across the platform. “But that’s where the conformance program comes in,” he said.

The CNCF launched that program last November as a way to stabilize deployments across vendors and use cases.

Airship Example
Such a push by the Kubernetes community would seem to be a natural progression for the project. While the platform did just celebrate that commit milestone, its growth trajectory has gone nearly vertical over the past year. Today, all the major public cloud providers have integrated the container orchestrator as native to their operations. And surveys have shown that Kubernetes is driving new container adoption.

One example of Kubernetes’ potential power was the recently released Airship Project launched by AT&T, SK Telecom, Intel, and the OpenStack Foundation. The initial focus of that project is the implementation of a declarative platform to introduce OpenStack on Kubernetes (OOK) and the lifecycle management of the resulting cloud.

AT&T said last November that it was planning to put more reliance on Kubernetes in its AT&T Integrated Cloud (AIC) platform, which is based on OpenStack. Ryan van Wyk, assistant vice president of Cloud Platform Development at AT&T Labs, said at the time that the use of the Kubernetes would add more agility and remove costs from running the AIC platform.

Van Wyk said in a blog post tied to the Airship announcement that "Airship is going to allow AT&T and other operators to deliver cloud infrastructure predictably that is 100 percent declarative, where day zero is managed the same as future updates via a single unified workflow, and where absolutely everything is a container from the bare metal up."

Maybe Smaller?
Kohn also said CNCF was looking at ways to shrink Kubernetes, which has grown significantly in size and scope. He explained that this would involve having projects evolve outside of Kubernetes that could then use well-defined APIs to hook into the platform to fill out functionality.

One example of this was the recently established Helm Project within CNCF. Helm began inside of the Kubernetes Project as a package manager that was developed to support software built on Kubernetes. It’s now an incubation-level hosted project at CNCF.

“We think that’s a great sign from the community that they wanted greater independence, a governance structure, and process,” Kohn said of the Helm Project. “They are a great consumer of the Kubernetes API, but they don’t need to be hooked into it.”

But, before we get too far down this path, Kohn did admit that at this point it was unlikely those efforts to shrink Kubernetes would gain any traction in the near term.

“It’s an aspiration at this point, and I don’t think it’s actually going to happen because there is so much interest in Kubernetes and so much work going into it,” Kohn said.

https://www.sdxcentral.com