Sunday, 28 October 2018

OpenStack Foundation releases software platform for edge computing

The OpenStack Foundation, the joint project created by NASA and Rackspace to create a freely usable Infrastructure-as-a-Service (IaaS) platform, has announced the initial release of StarlingX, a platform for edge computing.

StarlingX is designed for remote edge environments, offering host and node configuration, service management, and the ability to perform software updates remotely. It can also warn operators if there are any issues with the servers or the network.

The foundation says the platform is optimized for low-latency, high-performance applications in edge network scenarios and is primarily aimed at carrier networking, industrial Internet of Things (IIoT), and Internet of Things (IoT).

StarlingX is based on key technologies Wind River developed for its Titanium Cloud product. In May of this year, Intel, which owns Wind River, announced plans to turn over Titanium Cloud to OpenStack and deliver the StarlingX platform.

StarlingX is controlled through RESTful APIs and has been integrated with a number of popular open-source projects, including OpenStack, Ceph, and Kubernetes.
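
Because the platform is driven by REST calls, routine checks can be scripted with any HTTP client. The sketch below is illustrative only: the controller address, token, endpoint path, and response fields are hypothetical placeholders rather than documented StarlingX APIs.

```python
import requests

# All values below are hypothetical placeholders, not documented StarlingX
# endpoints: substitute the real controller URL and an auth token issued by
# the deployment's identity service.
CONTROLLER = "https://edge-controller.example.com:6385"
TOKEN = "replace-with-a-real-token"

# Query a (hypothetical) inventory endpoint for the hosts the platform manages.
resp = requests.get(
    CONTROLLER + "/v1/hosts",           # placeholder path
    headers={"X-Auth-Token": TOKEN},
    timeout=10,
)
resp.raise_for_status()

for host in resp.json().get("hosts", []):
    print(host.get("hostname"), host.get("availability"))
```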

The software handles everything from hardware configuration to host recovery, including configuration, host, service, and inventory management, along with live migration of workloads.

“When it comes to edge, the debates on applicable technologies are endless. And to give answers, it is crucial to be able to blend together and manage all the virtual machine (VM) and container-based workloads and bare-metal environments, which is exactly what you get from StarlingX,” wrote Glenn Seiler, vice president of product management and strategy at Wind River, in a blog post announcing StarlingX’s availability.

“Beyond being an integration project, StarlingX also delivers new services to fill the gaps in the open-source ecosystem to fulfill the strict requirements of edge use cases and scenarios throughout all the segments of the industry,” he added.

StarlingX fills an essential need in edge computing
It’s something of a missing link in the new field of edge computing. There are hardware providers such as Vapor IO and Schneider Electric, but each uses its own home-brewed software stack. With StarlingX, companies can set up their own hardware configurations and now have a software option.

And even though it’s a 1.0 product, StarlingX is built on some very mature technology, starting with Wind River and extending to OpenStack. OpenStack is the world’s most widely deployed open-source infrastructure software, used in thousands of private and public cloud deployments and running across more than 10 million physical CPU cores.

https://www.networkworld.com

Humanization Is Key to Making AI Projects Successful

Artificial intelligence is routinely touted at tech conferences and elsewhere as the “Next Big Thing” that is going to transform the customer experience and the ability of companies to better sell and market their wares. But there were also skeptical and cautionary notes sounded here, even from vendors, at the Connected Enterprise conference (running Oct. 22-25) sponsored by Constellation Research.

“There are a lot of misconceptions about what AI can do in the enterprise. I would focus on really picking a specific problem,” said Inhi Cho Suh, general manager of Watson Customer Engagement at IBM.

For customers of IBM’s Watson AI supercomputer services, Suh said it’s important to focus on precise algorithms for small sets of data. “The language of business is incredibly unique,” said Suh. “Ask the marketing team or the supply team for the definition of ‘customer’ and ‘order,’ and you might get different answers.”

Esteban Kolsky, principal and founder of Think Jar, a research and advisory firm focused on customer strategies, agreed. “You don’t get to good AI without good data; no one has.”

Another key point is that AI systems evolve. “The big thing with AI is you have to exercise it often. If you use it infrequently, it’s hard to coach it to do what you want. It has to learn,” said Marge Breya, chief marketing officer at Microstrategy.

Jennifer Hewit, who leads the Cognitive and Digital Services department at Credit Suisse, used AI to deploy a new kind of virtual IT service desk, called Amelia, at the company. The project rolled out slowly, which she said was a deliberate strategy to see how it worked.

When Amelia went live in December of 2017, it proved to be about 23 percent effective at answering employee questions. As with chatbots, an inquiry gets bumped up to a live agent if the virtual help isn’t effective. “One thing we learned was not to let the system fake knowing. That was huge,” said Hewit.

Amelia was initially designed as an avatar, but is now voice only. “We took that [avatar] down because she looked too much like a robot,” said Hewit. The focus is on the common, simple problems tech support deals with, such as stuck email or password resets. In the past year, the staff at Credit Suisse has helped train the system, which is now 85 percent effective at answering questions and serves 76,000 users in 40 countries.

Training Human Beings

While it’s well-known that AI systems need to be taught or learn from their mistakes, these systems are also training the humans who use them, warned Liza Lichtinger, owner of Future Design Station, a company that does research in human computer interaction.

“The language we deliver to devices is rewiring and remapping people’s brains, and that projects into their social interactions,” said Lichtinger.

Recently Lichtinger was doing some consulting work for a company bringing out a new virtual agent for personalized health care. In one crisis scenario the app responded: “Is the victim alive?”

“I jumped when I heard that. Suddenly it was a ‘victim’ not a patient. That changes our paradigm of how we see humans. It just shows that companies aren’t always sure about language and the messaging they’re sending people,” she said.

As these AI systems get more sophisticated, they will pick up on visual cues thanks to the inclusion of biometric data. “This new area into social signalling is going gangbusters at Stanford University where it’s about capturing how engaged you are looking at specific content,” said Lisa Hammitt, global vice president of artificial intelligence at Visa. “Ethics has to come to the forefront as we look at how we are personalizing the experience and trying to predict intent.”

Hammitt said Visa has developed a data bill of rights that is known internally as “rules for being less creepy.”

“You have to expose what the algorithm is doing. If it says you are a karate expert but you hate karate, you have to let people see that and be able to update it,” said Hammitt.

http://www.eweek.com

Friday, 26 October 2018

Mirantis Rides Kubernetes to Supercharge Open Source Edge Ecosystem

Mirantis is fed up with the slow development pace of open source edge software and infrastructure and thinks that Kubernetes is the answer. The company is using the container orchestration platform as the basis for its Mirantis Cloud Platform Edge (MCP Edge).

Boris Renski, co-founder and chief marketing officer at Mirantis, said that mobile operators are racing to deploy their 5G networks, but the real race will be in deploying edge networks to support new use cases. However, the lack of maturity in the edge ecosystem is forcing those operators to look at prepackaged edge platforms that often rely on proprietary technology.

“These operators have already spent a lot of time and money on virtualizing their networks,” Renski explained. “Why would they then want to take a prepackaged edge solution from Ericsson, Nokia, or Huawei that would just lock them into a single vendor? They are effectively forsaking everything that they have learned and their investments into virtualizing their core.”

Mirantis’ MCP Edge platform integrates Kubernetes, OpenStack, and Mirantis’ DriveTrain infrastructure manager. This allows operators to deploy a combination of container, virtual machine (VM), and bare metal points of presence (POP) that are connected by a unified management plane.

“It’s basically a Kubernetes distro that is purpose built for service provider edge deployments,” Renski said. “We are specifically targeting the infrastructure substrate that infrastructure would run at an aggregation location.”

The platform builds on Mirantis’ Cloud Platform (MCP) that it launched last year. That integrated cloud platform supports VMs using OpenStack, containers using Kubernetes, and bare metal, all on the same cloud. The edge product will run alongside the core MCP platform.

The company based the edge platform on Kubernetes due to its lower footprint when compared to something like OpenStack. Renski explained that this size advantage is crucial for edge deployments where resources will be more constrained. “OpenStack is just too heavy to use in a deployment with just a few nodes,” Renski said.

Something Tangible
Renski cited the recently launched Akraino Project as an example of the glacial pace the industry is moving in terms of an open source edge platform.

The Linux Foundation launched the Akraino Project in February using source code from AT&T. The project is an open source software stack that can support carrier availability and performance needs in cloud services optimized for edge computing systems and applications. The Linux Foundation opened up the seed code for the project in late August to allow the open source community to begin digging into the platform and narrow down potential use cases.

“Akraino is out there, but good luck trying to download something to work with,” Renski said. He added that Mirantis would love to have its MCP Edge platform become part of something broader like Akraino, or OPNFV, or ETSI, “but the first step is to build this and get it out there.”

“We feel very strongly that while what we are releasing might not be ideal for the edge, it’s something that is tangible,” Renski said. “It’s important for folks that are producing functions that can run on the edge to have something tangible that they can build against.”

Developers can download a demo version of the offering from Mirantis’ website as a virtual appliance. That version can support the deployment of a Kubernetes-based, six-node edge POP that can run containers and VMs.
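
Because the demo POP is a standard Kubernetes cluster underneath, it can be inspected with ordinary Kubernetes tooling once you have its kubeconfig. A minimal sketch with the official Python client follows; the kubeconfig path is a placeholder and nothing here is specific to Mirantis.

```python
import os
from kubernetes import client, config

# Load credentials exported by the demo appliance (path is a placeholder).
config.load_kube_config(
    config_file=os.path.expanduser("~/.kube/mcp-edge-demo-config")
)

v1 = client.CoreV1Api()

# List the nodes in the edge POP and report their readiness.
for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: Ready={ready}")
```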

“Users can experiment with running applications on it or run tests against it to see how it performs,” Renski said.

Mirantis is not alone in tapping Kubernetes to bolster edge deployments.

The Cloud Native Computing Foundation (CNCF), which is housed within the Linux Foundation and itself hosts the Kubernetes Project, last month formed a new working group focused on using Kubernetes to manage IoT and edge networking deployments. The Kubernetes IoT Edge Working Group, which was formed with the Eclipse Foundation, is using Kubernetes as a control plane and common infrastructure set to support edge use cases.

There are also proprietary efforts, such as IoTium’s, which offers edge-cloud infrastructure built on remotely managed Kubernetes.

https://www.sdxcentral.com

Thursday, 25 October 2018

Oracle announces new solutions to make cloud more secure

To help ensure customers’ data is secure from the core of infrastructure to the edge of the cloud, Oracle announced new cloud security technologies. In addition to the self-securing and self-patching capabilities of Oracle Autonomous Database and with the integration of machine learning and intelligent automation to remediate threats, these new cloud services allow customers to improve the security of applications deployed on the next generation of Oracle Cloud Infrastructure. The new cloud services include a Web Application Firewall (WAF) to protect against attacks on web traffic, Distributed Denial-of-Service (DDoS) protection to stop outside parties from disrupting running applications, an integrated Cloud Access Security Broker (CASB) which monitors and enforces secure configurations, and a Key Management Service (KMS) that allows customers to control the encryption of their data.
Emerging technologies like cloud, artificial intelligence and IoT, enable organizations to drive new innovations and reduce costs. However, with opportunities come increased risk including expanded attack surfaces. Security teams rely on manual processes and disparate tools that introduce human error and take an excessive amount of time to accurately detect and respond to threats and outages. Oracle has built integrated layers of defense that are designed to secure users, apps, data and infrastructure.
“Organizations are facing constant security threats from sophisticated actors who want to attack their applications and access their sensitive data,” said Don Johnson, senior vice president, product development, Oracle Cloud Infrastructure. “The new solutions build on Oracle’s existing, strong security heritage and give customers always-on capabilities that make it easier than ever to achieve end-to-end security. These new security layers include highly automated detective, preventive, responsive, and predictive security controls that help mitigate data breaches, address regulatory compliance, and reduce overall risk.”
To help customers combat today’s sophisticated threats and protect their data, Oracle has introduced the following automated security solutions:
• Web Application Firewall (WAF). The native WAF is designed to protect next generation Oracle Cloud Infrastructure applications against botnets, application attacks and DDoS attacks. The platform can then automatically respond to threats by blocking them and alerting security operations teams for further investigation. 
• Distributed Denial of Service (DDoS) Protection. As part of the next generation of Oracle Cloud Infrastructure, all Oracle data centers get automated DDoS attack detection and mitigation of high volume, Layer 3/4 DDoS attacks. This helps ensure the availability of Oracle network resources even when under sustained attack. 
• Cloud Access Security Broker (CASB). Keeping a cloud environment secure requires constant monitoring and enforcement to ensure that no one has set up an insecure network or left data unprotected. Oracle Cloud Access Security Broker (CASB) constantly checks OCI environments to help make sure that corporate security practices are being followed. It comes with preconfigured policies and controls so that customers can deploy applications faster while reducing security and operational risk. CASB also leverages machine learning-based behavioral analytics to predict threats. 
• Key Management Service. Oracle Key Management enables enterprises to encrypt data using keys that they control and offers centralized key management and key lifecycle monitoring capabilities. The solution delivers partitions in highly available and certified Hardware Security Modules that are isolated per customer. It is ideal for organizations that need to verify for regulatory compliance and security governance purposes that their data is encrypted where it is stored.
http://www.csoonline.in

Wednesday, 24 October 2018

GitHub Releases New Workflow Tools, 'Octoverse' Report

GitHub held its Universe 2018 conference at the Palace of Fine Arts in San Francisco Oct. 16, and it was quite a newsy event for the little gang of about 31 million developers who use the company’s 96 million repositories of open source code each day.

Those numbers are correct. That’s how large and in charge open source software has been for more than a generation and here in the waning months of 2018.

This event was largely about helping devs build workflows that are: a) easy to do; b) realistic; and c) efficient. The company introduced some futuristic features, including GitHub Actions and GitHub Connect, to advance development workflows and break down barriers between teams.

GitHub also released new security tools with the GitHub Security Advisory API, new ways to learn across teams with GitHub Learning Lab for organizations, and other items.

“As a developer, you spend too much time configuring workflows—or get locked into inflexible tools as the industry evolves around you,” GitHub Senior Vice-President of Technology Jason Warner wrote in a blog post. “We’re bringing the same tools you use while writing software to the rest of your development workflow, allowing you to focus on what matters most: code.”

Users can choose the developer tools, languages and deployment platforms they need most, supported by the ecosystem of GitHub Apps and integrations using the REST and GraphQL APIs, Warner said.
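
Those same REST APIs are plain HTTPS, so integrations can start with nothing more than an HTTP client. The sketch below lists a user's most recently updated public repositories; the username is a placeholder, and unauthenticated calls are subject to low rate limits.

```python
import requests

USER = "octocat"  # placeholder username

# Unauthenticated call to GitHub's public REST API.
resp = requests.get(
    f"https://api.github.com/users/{USER}/repos",
    params={"sort": "updated", "per_page": 5},
    headers={"Accept": "application/vnd.github.v3+json"},
    timeout=10,
)
resp.raise_for_status()

for repo in resp.json():
    print(repo["full_name"], repo["stargazers_count"])
```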

The company on Oct. 16 also released its "State of the Octoverse" report, which illustrates what the GitHub community can do in a year--such as creating 2.9 billion lines of code and promoting teamwork across time zones. 

http://www.eweek.com

Tuesday, 23 October 2018

What is a private cloud?

Private cloud is a well-defined term that government standards groups and the commercial cloud industry have pretty much agreed upon, and while some think its use is waning, recent analysis indicates that spending on private cloud is still growing at a breakneck pace.

A study by IDC found that sales from private-cloud investment hit $4.6 billion in the second quarter of 2018 alone, a 28.2 percent increase over the same period in 2017.

So why are organizations attracted to private cloud?

What is a private cloud?
There are four types of cloud – public, community, hybrid, and private – according to the National Institute of Standards and Technology.

NIST says that private cloud has some unique characteristics that set it apart from the rest: “The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.”

That’s what sets private cloud apart, but it also shares five characteristics with the other types of cloud, NIST says.

The first, on-demand self-service, means that end users can provision compute resources themselves without asking IT for help.

The second, broad network access, requires that resources in the cloud be accessible via almost every type of device, from workstations and laptops to tablets and phones.

The third, resource pooling, which makes for more overall efficient use of the compute resources, means various tenants share resources that are dynamically assigned over and over. In a private cloud this means that different divisions of an organization share resources, but they are exclusively available for just that organization. They are not shared with third parties as is the case with multi-tenancy services.

The fourth, rapid elasticity, enables ramping capacity up or down as needed and releasing resources for use by others when the need passes.

Finally, measured service ensures that providers and users can measure how much of various resources – storage, processing, bandwidth, numbers of user accounts – are used so they can be allocated in a way that optimizes use of the resources.

Virtualization is just a part of private cloud

Just utilizing virtualization by throwing a hypervisor on a server does not constitute private cloud computing. While virtualization is a key component of cloud computing, it is not a cloud by itself.

Virtualization technology allows organizations to pool and allocate resources, which are both part of NIST's definition. But other qualities around self-service and the ability to scale those resources are needed for it to technically be considered a cloud environment.

A private cloud – compared to public or hybrid clouds – refers specifically to resources used by a single organization, or when an organization's cloud-based resources are completely isolated.

Private cloud economics
One of the biggest misconceptions about private cloud is that the cloud will save money. It can and often does, but it doesn’t inherently do so.

The up-front costs can be considerable. For example, automation technology, an important part of a private-cloud network, can be a significant investment for many IT organizations. The result can be the ability to reallocate resources more efficiently, and it may allow some organizations to reduce their overall capital expenditures for new hardware, which can also save money. But overall savings are not assured.

Gartner analysts say the primary driving benefit of adopting a private cloud model should not be cost savings, but rather increased agility and dynamic scalability, which can improve time-to-market for businesses that make use of the technology.

Private cloud can be in the public cloud
Many people associate private cloud with being located in an organization's private, on-premises data center and public cloud as coming from a third-party service provider. But as NIST notes, while a private cloud may be owned, managed and operated by a private organization, its infrastructure may be located off premises.

Many providers sell off-premises private clouds, meaning that while the physical resources are located in a third-party facility, they are dedicated to a single customer. They are not shared, as they are in a public cloud, with multi-tenant pooling of resources among multiple customers.  "Private-cloud computing is defined by privacy, not location, ownership or management responsibility," says Gartner analyst Tom Bittman.

When dealing with cloud providers, be wary of security definitions. Some vendors may, for example, outsource their data-center operations to a colocation facility where they might not dedicate hardware to each customer. Or they could pool resources among customers but say they guarantee privacy by separating them using VPNs. Investigate the details of off-premises private-cloud offerings, Bittman advises.

Private cloud is more than IaaS
Infrastructure as a service is a big reason for adopting private cloud architectures, but it’s by no means the only use. Software and platform as a service are also important, although Bittman says IaaS is the fastest growing segment.

"IaaS only provides the lowest-level data-center resources in an easy-to-consume way, and doesn't fundamentally change how IT is done," he says. Platform as a service (PaaS) is where organizations can create customized applications built to run on cloud infrastructure. PaaS comes in public or private flavors as well, having the application development service hosted either in an on-premises data center or in a dedicated environment from a provider.

Private cloud isn’t always private
Private cloud is the natural first step toward a cloud network for many organizations. It provides access to the benefits of the cloud – agility, scalability, efficiency – without some of the security concerns, perceived or real, that come with using the public cloud. But Bittman predicts that as the cloud market continues to evolve, organizations will open to the idea of using public cloud resources. Service-level agreements and security precautions will mature and the impact of outages and downtime will be minimized.

Eventually, Gartner predicts, the majority of private cloud deployments will become hybrid clouds, meaning they will leverage public cloud resources. That means your private cloud today may be a hybrid cloud tomorrow. "By starting with a private cloud, IT is positioning itself as the broker of all services for the enterprise, whether they are private, public, hybrid or traditional," Bittman says. "A private cloud that evolves to hybrid or even public could retain ownership of the self-service, and, therefore, the customer and the interface. This is a part of the vision for the future of IT that we call 'hybrid IT.'"

Cloud repatriation
When businesses move workloads and resources to the public cloud, then move them back to a private cloud or a non-cloud environment, that’s called cloud repatriation.

According to a 2017 survey by 451 Research, 39% of respondents said they moved at least some data or applications out of the public cloud, the top reason being performance and availability issues. A 451 blog about the research said many of the respondents’ reasons “matched the reasons we know businesses ultimately decide to shift to the public cloud in the first place.”

The top six reasons cited by the survey respondents were performance/availability issues (19%), improved on-premises cloud (11%), data sovereignty regulation change (11%), higher than expected cost (10%), latency issues (8%) and security breaches (8%).

And it’s not that these IT decision makers were abandoning public cloud for private cloud. Rather it’s that cloud environments are constantly evolving for each organization, and that many have a hybrid cloud that incorporates both private and public cloud. A majority of 451's survey respondents (58%) said they are “moving toward a hybrid IT environment that leverages both on-premises systems and off-premises cloud/hosted resources in an integrated fashion.”

https://www.networkworld.com

Thursday, 18 October 2018

7 cloud services to ease machine learning

One of the last computing chores to be sucked into the cloud is data analysis. Perhaps it’s because scientists are naturally good at programming and so they enjoy having a machine on their desks. Or maybe it’s because the lab equipment is hooked up directly to the computer to record the data. Or perhaps it’s because the data sets can be so large that it’s time-consuming to move them. 

Whatever the reasons, scientists and data analysts have embraced remote computing slowly, but they are coming around. Cloud-based tools for machine learning, artificial intelligence, and data analysis are growing. Some of the reasons are the same ones that drove interest in cloud-based document editing and email. Teams can log into a central repository from any machine and do the work in remote locations, on the road, or maybe even at the beach. The cloud handles backups and synchronization, simplifying everything for the group.

But there are also practical reasons why the cloud is even better for data analysis. When the data sets are large, cloud users can spool up large jobs on rented hardware that accomplish the work much, much faster. There is no need to start your PC working and then go out to lunch only to come back to find out that the job failed after a few hours. Now you can push the button, spin up dozens of cloud instances loaded with tons of memory, and watch your code fail in a few minutes. Since the clouds now bill by the second, you can save time and money.

There are dangers too. The biggest is the amorphous worry about privacy. Some data analysis involves personal information from subjects who trusted you to protect them. We’ve grown accustomed to the security issues involved in locking data on a hard drive in your lab. It’s hard to know just what’s going on in the cloud.

It will be some time before we’re comfortable with the best practices used by the cloud providers but already people are recognizing that maybe the cloud providers can hire more security consultants than the grad student in the corner of a lab. It’s not like personal computers are immune from viruses or other backdoors. If the personal computer is connected to the Internet, well, you might say it’s already part of the cloud.

There are, thankfully, workarounds. The simplest is to anonymize data with techniques like replacing personal information with random IDs. This is not perfect, but it can go a long way to limiting the trouble that any hacker could cause after slipping through the cloud’s defenses.
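
As an illustration of that workaround, the sketch below replaces direct identifiers with salted one-way hashes before a data set is uploaded anywhere; the file, column names, and salt are placeholders, and real projects should weigh stronger techniques such as tokenization or differential privacy.

```python
import hashlib
import pandas as pd

SALT = "replace-with-a-long-random-secret"  # keep this secret out of the cloud

def pseudonymize(value: str) -> str:
    """Return a salted SHA-256 digest so the raw identifier never leaves the lab."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()

df = pd.read_csv("subjects.csv")              # placeholder input file
df["subject_id"] = df["subject_id"].astype(str).map(pseudonymize)
df = df.drop(columns=["name", "email"])       # drop direct identifiers entirely
df.to_csv("subjects_anonymized.csv", index=False)
```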

There are other interesting advantages. Groups can share or open source data sets to the general public, something that can generate wild combinations that we can only begin to imagine. Some of the cloud providers are curating their own data sets and donating storage costs to attract users (see AWS, Azure, GCP, and IBM for starters). If you like, you might try to correlate your product sales with the weather or sun spots or any of the other information in these public data sets. Who knows? There are plenty of weird correlations out there.

Here are seven different cloud-based machine learning services to help you find the correlations and signals in your data set.

Amazon SageMaker
Amazon created SageMaker to simplify the work of using its machine learning tools. Amazon SageMaker knits together the different AWS storage options (S3, Dynamo, Redshift, etc.) and pipes the data into Docker containers running the popular machine learning libraries (TensorFlow, MXNet, Chainer, etc.). All of the work can be tracked with Jupyter notebooks before the final models are deployed as APIs of their own. SageMaker moves your data into Amazon’s machines so you can concentrate on thinking about the algorithms and not the process. If you want to run the algorithms locally, you can always download the Docker images for simplicity.
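
A rough outline of that flow using the SageMaker Python SDK of the time is below. The S3 paths, container image, and IAM role are placeholders, and argument names have since changed across SDK versions, so treat it as a sketch rather than copy-and-paste code.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

# Point an estimator at a training container and some rented hardware.
# (The train_instance_* names are from the v1-era SDK; newer versions rename them.)
estimator = Estimator(
    image_name="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role=role,
    train_instance_count=1,
    train_instance_type="ml.m4.xlarge",
    sagemaker_session=session,
)

# Training data previously uploaded to S3 (placeholder bucket and prefix).
estimator.fit({"train": "s3://my-bucket/training-data/"})

# Deploy the trained model behind a real-time endpoint, i.e. an API of its own.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
```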

Azure Machine Learning
Microsoft has seen the future of machine learning and gone all-in on the Machine Learning Studio, a sophisticated and graphical tool for finding signals in your data. It’s like a spreadsheet for AI. There is a drag-and-drop interface for building up flowcharts for making sense of your numbers. The documentation says that “no coding is necessary” and this is technically true but you’ll still need to think like a programmer to use it effectively. You just won’t get as bogged down in structuring your code. But if you miss the syntax errors, the data typing, and the other joys of programming, you can import modules written in Python, R, or several other options.

The most interesting option is that Microsoft has added the infrastructure to take what you learn from the AI and turn the predictive model into a web service running in the Azure cloud. So you build your training set, create your model, and then in just a few clicks you’re delivering answers in JSON packets from your Azure service.
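
Calling one of those published Azure web services is just an HTTPS POST that returns JSON. In the sketch below the endpoint URL, API key, and payload schema are placeholders; the real schema is generated from the inputs of the experiment you deploy.

```python
import requests

ENDPOINT = "https://example.azureml.net/workspaces/<ws>/services/<svc>/execute"  # placeholder
API_KEY = "replace-with-the-service-api-key"

# Placeholder payload; the actual column names come from your experiment's input port.
payload = {
    "Inputs": {
        "input1": {"ColumnNames": ["feature1", "feature2"], "Values": [[1.0, 2.0]]}
    },
    "GlobalParameters": {},
}

resp = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # predictions come back as a JSON packet
```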

BigML
BigML is a hybrid dashboard for data analysis that can either be used in the BigML cloud or installed locally. The main interface is a dashboard that lists all of your files waiting for analysis by dozens of machine learning classifiers, clusterers, regressors, and anomaly detectors. You click and the results appear.

Lately the company has concentrated on new algorithms that enhance the ability of the stack to deliver useful answers. The new Fusion code can integrate the results from multiple algorithms to increase accuracy.

The service is priced by subscription, with a generous free tier on BigML’s own machines. You can also build out a private deployment on AWS, Azure, or GCP. If that’s still too public, the company will deploy it on your private servers.
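
The same point-and-click workflow can also be scripted through BigML's Python bindings. A minimal sketch, assuming credentials in the BIGML_USERNAME and BIGML_API_KEY environment variables, with the CSV file and field names as placeholders:

```python
from bigml.api import BigML

api = BigML()  # reads BIGML_USERNAME / BIGML_API_KEY from the environment

# Source -> dataset -> model, waiting for each resource to finish building.
source = api.create_source("churn.csv")        # placeholder CSV file
api.ok(source)
dataset = api.create_dataset(source)
api.ok(dataset)
model = api.create_model(dataset)
api.ok(model)

# Score a new record (the input field name is a placeholder).
prediction = api.create_prediction(model, {"monthly_minutes": 312})
api.ok(prediction)
print(prediction["object"]["output"])
```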

Databricks
The Databricks toolset is built by some of the developers of Apache Spark who took the open source analytics platform and added some dramatic speed enhancements, increasing throughput with some clever compression and indexing. The hybrid data store called Delta is a place where large amounts of data can be stored and then analyzed quickly. When new data arrives, it can be folded into the old storage for rapid re-analysis.

All of the standardized analytical routines from Apache Spark are ready to run on this data, but with some much-needed improvements to the Spark infrastructure, such as integrated notebooks for your analysis code.
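
Inside a Databricks notebook, where a SparkSession named `spark` is already defined, the Delta store behaves like any other Spark data source. A minimal sketch, with the input path, table path, and column name as placeholders:

```python
# Assumes a Databricks notebook, where `spark` (a SparkSession) is predefined.

events = spark.read.json("/mnt/raw/events/")           # placeholder source path

# Write into Delta; later batches can be appended and re-analyzed quickly.
events.write.format("delta").mode("append").save("/delta/events")

# Read the store back and run an ordinary Spark aggregation over it.
delta_events = spark.read.format("delta").load("/delta/events")
delta_events.groupBy("event_type").count().show()
```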

Databricks is integrated with both AWS and Azure and priced according to consumption and performance. Each computational engine is measured in Databricks Units. You’ll pay more for a faster model.

DataRobot
Many of the approaches here let you build a machine learning model in one click. DataRobot touts the ability to build hundreds of models simultaneously, also with just one click. When the models are done, you can pick through them and figure out which one does a better job of predicting and go with that. The secret is a “massively parallel processing engine,” in other words a cloud of machines doing the analysis.

DataRobot is expanding by implementing new algorithms and extending current ones. The company recently acquired Nutonian, whose Eureqa engine should enhance the automated machine learning platform’s ability to create time series and classification models. The system also offers a Python API for more advanced users.

DataRobot is available through the DataRobot Cloud or through an enterprise software edition that comes with an embedded engineer.

Google Cloud Machine Learning Engine
Google has invested heavily in TensorFlow, one of the standard open-source libraries for finding signals in data, and now you can experiment with TensorFlow in Google’s cloud. Some of the tools in the Google Cloud Machine Learning Engine are open source and essentially free for anyone who cares to download them and some are part of the commercial options in the Google Cloud Platform. This gives you the freedom to explore and avoid some lock-in because much of the code is open source and more or less ready to run on any Mac, Windows, or Linux box.

There are several different parts. The easiest place to begin may be the Colaboratory, which connects Jupyter notebooks with Google’s TensorFlow back end so you can sketch out your code and see it run. Google also offers the TensorFlow Research Cloud for scientists who want to experiment. When it’s appropriate, you can run your machine learning models on Google’s accelerated hardware with either GPUs or TPUs.
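
The Colaboratory route needs nothing more than a browser. The sketch below is a bare-bones tf.keras model that runs the same way in a Colab notebook as on a local machine, and can be pointed at GPUs or TPUs when more speed is needed.

```python
import tensorflow as tf

# A small, standard dataset that ships with TensorFlow.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))
```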

IBM Watson Studio
The brand name may have been born when a huge, hidden AI played Jeopardy but now Watson encompasses much of IBM’s push into artificial intelligence. The IBM Watson Studio is a tool for exploring your data and training models in the cloud or on-prem. In goes data and out comes beautiful charts and graphs on a dashboard ready for the boardroom.

The biggest difference may be the desktop version of the Watson Studio. You can use the cloud-based version to study your data and enjoy all of the power that comes with the elastic resources and centralized repository. Or you can do much the same thing from the firewalled privacy and convenience of your desktop.

A machine learning model in every cloud
While many people are looking to choose one dashboard for all of their AI research, there’s no reason why you can’t use more of the choices here. Once you’ve completed all of the pre-processing and data cleansing, you can feed the same CSV-formatted data into all of these services and compare the results to find the best choice. Some of these services already offer automated comparisons between algorithms. Why not take it a step further and use more than one?
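
A local baseline makes that kind of shoot-out easier to judge. The sketch below scores two scikit-learn models on the same CSV-formatted data you would feed the cloud services; the file and label column are placeholders.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# The same cleaned, CSV-formatted data you would upload to each cloud service.
df = pd.read_csv("cleaned_dataset.csv")             # placeholder file
X, y = df.drop(columns=["label"]), df["label"]      # placeholder label column

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```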

You can also take advantage of some of the open standards that are evolving. Jupyter notebooks, for instance, will generally run without too much modification. You can develop on one platform and then move much of this code with the data to test out any new or different algorithms on different platforms.

We’re a long way from standardization and there are spooky and unexplained differences between algorithms. Don’t settle for just one algorithm or one training method. Experiment with as many different modelling tools as you can manage.

https://www.infoworld.com

GitHub Actions to let developers do CI/CD in GitHub

GitHub has introduced a workflow tool called GitHub Actions to its popular code-sharing site, to allow continuous integration/continuous deployment (CI/CD) right from GitHub itself.

Using the tool, which is now in limited beta, developers can build, deploy, and update software projects on either GitHub or an external system without having to run code themselves. Workflows and infrastructure deployments can be expressed as code.

Actions adds customizable workflow capabilities to GitHub.com, so developers can build and share code containers to run a software development workflow, even across multiple clouds. Other examples of tasks that can be done with actions include packaging an NPM module or sending an SMS alert.

GitHub Actions is similar to the Apple Shortcuts task management app or the If This Then That (IFTTT) app for communication among devices and apps, but unlike those technologies GitHub Actions is powered by containers.

Developers can pair their tools and integrations with custom actions or those shared by the community.

GitHub Actions will be free for open source use but will require payment for commercial use.

https://www.infoworld.com

Friday, 5 October 2018

Cisco, SAP team up to ease cloud, container integration, management

Cisco today said it has teamed with SAP to make it easier for customers to manage high volumes of data from multi-cloud and distributed data center resources.
The companies announced that Cisco’s Container Platform will work with SAP’s Data Hub to take large data sets that may reside in public clouds, such as Amazon Web Services, Hadoop, Microsoft or Google, and integrate them with private-cloud or enterprise apps such as SAP S/4 HANA.
Cisco introduced its Kubernetes-based Container Platform in January and said it allows for self-service deployment and management of container clusters. SAP rolled out the Data Hub about a year ago, saying it provides visibility, orchestration and access to a broad range of data systems and assets while enabling the fast creation of powerful, organization-spanning data pipelines.
The Cisco Container Platform for SAP Data Hub will be available in two forms:
  • A stand-alone Cisco Container Platform: An SAP-certified, software-only option for customers to install on their choice of on-premise infrastructure.
  • Integrated Cisco Container Platform on Cisco HyperFlex: An SAP-certified software and hardware bundle for customers who want a preconfigured hyperconverged platform to quickly and easily run SAP Data Hub.
The idea is to let customers place data wherever they want – in cloud services or locally in the data center – and let legacy applications work in hybrid cloud environments without the need to lift and shift data, said Dave Cope, senior director of Cisco Cloud Platform & Solutions Group. This reduces complexity, helping customers quickly gain actionable insights and develop useful applications from their stored data.

Helping customers manage complex environments

Indeed, the package promises to help customers manage a complex environment and remove the limitations of data silos, analysts said.
“Data migration, underlying performance, control, and governance over the data, and tighter alignment between the application layer to the network layer,” said Stephen Elliot, program vice president with IDC. “Data migration to public clouds is difficult to execute correctly to reduce business risks.” 
"For large enterprise accounts that use SAP HANA and Cisco HyperFlex, or have SAP Data  Hub, this announcement offers certified capabilities between the two vendors," Elliot said. This is about enabling enterprises that want to move more SAP workloads to the cloud. It enables customers to optimize data migration from a private to public cloud.
For Cisco, the package continues the company’s strategy of developing products for the multi-cloud universe and supporting any of the cloud environments that work best for customers, Cope said. 
The SAP Data Hub on Cisco Container Platform will be available from Cisco in November 2018. It is expected to be available on the SAP App Center in Q1 2019.
The SAP/Cisco package was introduced as part of a larger SAP announcement that included a new version of the Data Hub which featured a new interface and additional management tools. 
https://www.networkworld.com

What is cloud computing? Everything you need to know now

Cloud computing has two meanings. The most common refers to running workloads remotely over the internet in a commercial provider’s data center, also known as the “public cloud” model. Popular public cloud offerings—such as Amazon Web Services (AWS), Salesforce’s CRM system, and Microsoft Azure—all exemplify this familiar notion of cloud computing. Today, most businesses take a multicloud approach, which simply means they use more than one public cloud service.

The second meaning of cloud computing describes how it works: a virtualized pool of resources, from raw compute power to application functionality, available on demand. When customers procure cloud services, the provider fulfills those requests using advanced automation rather than manual provisioning. The key advantage is agility: the ability to apply abstracted compute, storage, and network resources to workloads as needed and tap into an abundance of prebuilt services.

The public cloud lets customers gain new capabilities without investing in new hardware or software. Instead, they pay their cloud provider a subscription fee or pay for only the resources they use. Simply by filling in web forms, users can set up accounts and spin up virtual machines or provision new applications. More users or computing resources can be added on the fly—the latter in real time as workloads demand those resources thanks to a feature known as autoscaling.

Types of cloud computing defined
The array of available cloud computing services is vast, but most fall into one of the following categories.

SaaS (software as a service)
This type of public cloud computing delivers applications over the internet through the browser. The most popular SaaS applications for business can be found in Google’s G Suite and Microsoft’s Office 365; among enterprise applications, Salesforce leads the pack. But virtually all enterprise applications, including ERP suites from Oracle and SAP, have adopted the SaaS model. Typically, SaaS applications offer extensive configuration options as well as development environments that enable customers to code their own modifications and additions.

IaaS (infrastructure as a service)
At a basic level, IaaS public cloud providers offer storage and compute services on a pay-per-use basis. But the full array of services offered by all major public cloud providers is staggering: highly scalable databases, virtual private networks, big data analytics, developer tools, machine learning, application monitoring, and so on. Amazon Web Services was the first IaaS provider and remains the leader, followed by Microsoft Azure, Google Cloud Platform, and IBM Cloud.

PaaS (platform as a service)
PaaS provides sets of services and workflows that specifically target developers, who can use shared tools, processes, and APIs to accelerate the development, testing, and deployment of applications. Salesforce’s Heroku and Force.com are popular public cloud PaaS offerings; Pivotal’s Cloud Foundry and Red Hat’s OpenShift can be deployed on premises or accessed through the major public clouds. For enterprises, PaaS can ensure that developers have ready access to resources, follow certain processes, and use only a specific array of services, while operators maintain the underlying infrastructure.

FaaS (functions as a service)
FaaS, the cloud version of serverless computing, adds another layer of abstraction to PaaS, so that developers are completely insulated from everything in the stack below their code. Instead of futzing with virtual servers, containers, and application runtimes, they upload narrowly functional blocks of code, and set them to be triggered by a certain event (such as a form submission or uploaded file). All the major clouds offer FaaS on top of IaaS: AWS Lambda, Azure Functions, Google Cloud Functions, and IBM OpenWhisk. A special benefit of FaaS applications is that they consume no IaaS resources until an event occurs, reducing pay-per-use fees.
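
A function for one of these services is little more than a handler tied to an event. Below is a minimal sketch in the shape AWS Lambda expects for Python, assuming a hypothetical form-submission event delivered through an HTTP trigger.

```python
import json

def handler(event, context):
    """Triggered by an event such as a form submission routed through an HTTP trigger."""
    # The event layout depends on how the trigger is configured; this assumes a
    # JSON-encoded body, as delivered by an API gateway style integration.
    form = json.loads(event.get("body") or "{}")
    name = form.get("name", "anonymous")

    # No servers to manage: the function only consumes resources while it runs.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Thanks for the submission, {name}!"}),
    }
```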

Private cloud
A private cloud downsizes the technologies used to run IaaS public clouds into software that can be deployed and operated in a customer’s data center. As with a public cloud, internal customers can provision their own virtual resources to build, test, and run applications, with metering to charge back departments for resource consumption. For administrators, the private cloud amounts to the ultimate in data center automation, minimizing manual provisioning and management. VMware’s Software Defined Data Center stack is the most popular commercial private cloud software, while OpenStack is the open source leader.

Note, however, that the private cloud does not fully conform to the definition of cloud computing. Cloud computing is a service. A private cloud demands that an organization build and maintain its own underlying cloud infrastructure; only internal users of a private cloud experience it as a cloud computing service.

Hybrid cloud
A hybrid cloud is the integration of a private cloud with a public cloud. At its most developed, the hybrid cloud involves creating parallel environments in which applications can move easily between private and public clouds. In other instances, databases may stay in the customer data center and integrate with public cloud applications—or virtualized data center workloads may be replicated to the cloud during times of peak demand. The types of integrations between private and public cloud vary widely, but they must be extensive to earn a hybrid cloud designation.

Public APIs (application programming interfaces)
Just as SaaS delivers applications to users over the internet, public APIs offer developers application functionality that can be accessed programmatically. For example, in building web applications, developers often tap into Google Maps’s API to provide driving directions; to integrate with social media, developers may call upon APIs maintained by Twitter, Facebook, or LinkedIn. Twilio has built a successful business dedicated to delivering telephony and messaging services via public APIs. Ultimately, any business can provision its own public APIs to enable customers to consume data or access application functionality.
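
Consuming one of these public APIs usually takes only a few lines. For instance, sending a text through Twilio's Python helper library looks roughly like the sketch below; the credentials and phone numbers are placeholders.

```python
from twilio.rest import Client

# Placeholder credentials from the Twilio console.
account_sid = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
auth_token = "replace-with-your-auth-token"

client = Client(account_sid, auth_token)

# Send an SMS through Twilio's messaging API (numbers are placeholders).
message = client.messages.create(
    body="Your order has shipped.",
    from_="+15005550006",
    to="+15558675309",
)
print(message.sid)
```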

iPaaS (integration platform as a service)
Data integration is a key issue for any sizeable company, but particularly for those that adopt SaaS at scale. iPaaS providers typically offer prebuilt connectors for sharing data among popular SaaS applications and on-premises enterprise applications, though providers may focus more or less on B-to-B and e-commerce integrations, cloud integrations, or traditional SOA-style integrations. iPaaS offerings in the cloud from such providers as Dell Boomi, Informatica, MuleSoft, and SnapLogic also let users implement data mapping, transformations, and workflows as part of the integration-building process.

IDaaS (identity as a service)
The most difficult security issue related to cloud computing is the management of user identity and its associated rights and permissions across private data centers and public cloud sites. IDaaS providers maintain cloud-based user profiles that authenticate users and enable access to resources or applications based on security policies, user groups, and individual privileges. The ability to integrate with various directory services (Active Directory, LDAP, etc.) is essential. Okta is the clear leader in cloud-based IDaaS; CA, Centrify, IBM, Microsoft, Oracle, and Ping provide both on-premises and cloud solutions.

Collaboration platforms
Collaboration solutions such as Slack, Microsoft Teams, and HipChat have become vital messaging platforms that enable groups to communicate and work together effectively. Basically, these solutions are relatively simple SaaS applications that support chat-style messaging along with file sharing and audio or video communication. Most offer APIs to facilitate integrations with other systems and enable third-party developers to create and share add-ins that augment functionality.

Vertical clouds
Key providers in such industries as financial services, health care, retail, life sciences, and manufacturing provide PaaS clouds to enable customers to build vertical applications that tap into industry-specific, API-accessible services. Vertical clouds can dramatically reduce the time to market for vertical applications and accelerate domain-specific B-to-B integrations. Most vertical clouds are built with the intent of nurturing partner ecosystems.

Other cloud considerations
The most widely accepted definition of cloud computing means that you run your workloads on someone else’s servers, but this is not the same as outsourcing. Virtual cloud resources and even SaaS applications must be configured and maintained by the customer. Consider these factors when planning a cloud initiative.

Cloud computing security
Objections to the public cloud generally begin with cloud security, although the major public clouds have proven themselves much less susceptible to attack than the average enterprise data center.

Of greater concern is the integration of security policy and identity management between customers and public cloud providers. In addition, government regulation may forbid customers from allowing sensitive data off premises. Other concerns include the risk of outages and the long-term operational costs of public cloud services.

Multicloud management
The bar to qualify as a multicloud adopter is low: A customer just needs to use more than one public cloud service. However, depending on the number and variety of cloud services involved, managing multiple clouds can become quite complex from both a cost optimization and technology perspective.

In some cases, customers subscribe to multiple cloud services simply to avoid dependence on a single provider. A more sophisticated approach is to select public clouds based on the unique services they offer and, in some cases, integrate them. For example, developers might want to use Google’s TensorFlow machine learning service on Google Cloud Platform to build machine-learning-enabled applications, but prefer Jenkins hosted on the CloudBees platform for continuous integration.

To control costs and reduce management overhead, some customers opt for cloud management platforms (CMPs) and/or cloud service brokers (CSBs), which let you manage multiple clouds as if they were one cloud. The problem is that these solutions tend to limit customers to such common-denominator services as storage and compute, ignoring the panoply of services that make each cloud unique.

Edge computing
You often see edge computing described as an alternative to cloud computing. But it is not. Edge computing is about moving local computing to local devices in a highly distributed system, typically as a layer around a cloud computing core. There is typically a cloud involved to orchestrate all the devices and take in their data, then analyze it or otherwise act on it.

Benefits of cloud computing
The cloud’s main appeal is to reduce the time to market of applications that need to scale dynamically. Increasingly, however, developers are drawn to the cloud by the abundance of advanced new services that can be incorporated into applications, from machine learning to internet of things (IoT) connectivity.

Although businesses sometimes migrate legacy applications to the cloud to reduce data center resource requirements, the real benefits accrue to new applications that take advantage of cloud services and “cloud native” attributes. The latter include microservices architecture, Linux containers to enhance application portability, and container management solutions such as Kubernetes that orchestrate container-based services. Cloud-native approaches and solutions can be part of either public or private clouds and help enable highly efficient devops-style workflows.

Cloud computing, public or private, has become the platform of choice for large applications, particularly customer-facing ones that need to change frequently or scale dynamically. More significantly, the major public clouds now lead the way in enterprise technology development, debuting new advances before they appear anywhere else. Workload by workload, enterprises are opting for the cloud, where an endless parade of exciting new technologies invite innovative use.

https://www.infoworld.com

Wednesday, 3 October 2018

Eclipse takes over all Java EE reference components

The Eclipse Foundation now has received all Java EE (Enterprise Edition) reference implementation components from Oracle, as part of the foundation’s takeover of the enterprise Java platform.

Oracle has contributed 100 percent of the Java EE and GlassFish application server components to the foundation. GlassFish has served as a reference implementation of Java EE, which has been renamed Jakarta EE under Eclipse’s jurisdiction. The foundation said that it now has all the components in hand, and they have been published to GitHub repositories. This means that progress on the individual projects under Eclipse’s enterprise Java effort is now largely under the control of the projects themselves.

The foundation also noted several other milestones that have been reached in the past couple of weeks:
  • Java EE Technology Compatibility Kits (TCKs) have been contributed and now are available in open source. This move provides transparency in that vendors, customers, and the community now can see actual tests being performed and gain insight into the process.
  • Eclipse will be able to ship Eclipse Glassfish as Java EE 8-compatible.
  • Builds for Enterprise Eclipse for Java (EE4J) projects now are running on Eclipse infrastructure. EE4J is the open source initiative for enterprise Java at Eclipse.
  • IBM, Oracle, Payara, Red Hat, and Tomitribe have committed to three years of funding for Jakarta EE, ranging from $25,000 to $300,000 each per year. This will fund creation of a dedicated team and marketing activities.
Eclipse agreed to take jurisdiction over enterprise Java last year, after Oracle sought to divest itself of the platform and turn it over to an open source organization.
https://www.infoworld.com