Sunday, 28 October 2018

OpenStack Foundation releases software platform for edge computing

The OpenStack Foundation, which stewards the open-source Infrastructure-as-a-Service (IaaS) platform originally created by NASA and Rackspace, has announced the initial release of StarlingX, a platform for edge computing.

StarlingX is designed for remote edge environments, offering remote host and node configuration, service management, and software updates. It can also warn operators if there are any issues with the servers or the network.

The foundation says the platform is optimized for low-latency, high-performance applications in edge network scenarios and is primarily aimed at carrier networking, industrial Internet of Things (IIoT), and Internet of Things (IoT).

StarlingX is based on key technologies Wind River developed for its Titanium Cloud product. In May of this year, Intel, which owns Wind River, announced plans to turn over Titanium Cloud to OpenStack and deliver the StarlingX platform.

StarlingX is controlled through RESTful APIs and has been integrated with a number of popular open-source projects, including OpenStack, Ceph, and Kubernetes.
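
As a rough illustration of what control through RESTful APIs looks like in practice, the sketch below queries a host-inventory endpoint over HTTP. The host name, port, path, and token handling are assumptions made for the example, not StarlingX's documented API.

```python
# Hypothetical sketch of calling a StarlingX-style REST endpoint.
# The URL, port, path, and auth header are illustrative assumptions only.
import requests

BASE_URL = "http://controller-0:6385/v1"   # assumed system-inventory endpoint
TOKEN = "example-keystone-token"           # normally obtained from Keystone

resp = requests.get(
    f"{BASE_URL}/ihosts",
    headers={"X-Auth-Token": TOKEN},
    timeout=10,
)
resp.raise_for_status()
for host in resp.json().get("ihosts", []):
    print(host.get("hostname"), host.get("availability"))
```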

The software handles everything from hardware configuration to host recovery and everything in between, including configuration, host, service, and inventory management, along with live migration of workloads.

“When it comes to edge, the debates on applicable technologies are endless. And to give answers, it is crucial to be able to blend together and manage all the virtual machine (VM) and container-based workloads and bare-metal environments, which is exactly what you get from StarlingX,” wrote Glenn Seiler, vice president of product management and strategy at Wind River, in a blog post announcing StarlingX’s availability.

“Beyond being an integration project, StarlingX also delivers new services to fill the gaps in the open-source ecosystem to fulfill the strict requirements of edge use cases and scenarios throughout all the segments of the industry,” he added.

StarlingX fills an essential need in edge computing
StarlingX is something of a missing link in the new field of edge computing. There are hardware providers such as Vapor IO and Schneider Electric, but they use their own home-brewed software stacks. With StarlingX, companies that assemble their own hardware configurations now have an open-source software option.

And even though it’s a 1.0 product, StarlingX is built on some very mature technology, starting with Wind River and extending to OpenStack. OpenStack is the world’s most widely deployed open-source infrastructure software, used in thousands of private and public cloud deployments and running across more than 10 million physical CPU cores.

https://www.networkworld.com

Humanization Is Key to Making AI Projects Successful

Artificial intelligence is routinely touted at tech conferences and elsewhere as the “Next Big Thing” that will transform the customer experience and companies’ ability to better sell and market their wares. But skeptical and cautionary notes were also sounded, even by vendors, at the Connected Enterprise conference (Oct. 22-25) sponsored by Constellation Research.

“There are a lot of misconceptions about what AI can do in the enterprise. I would focus on really picking a specific problem,” said Inhi Cho Suh, general manager of Watson Customer Engagement at IBM.

For customers of IBM’s Watson AI supercomputer services, Suh said it’s important to focus on precise algorithms for small sets of data. “The language of business is incredibly unique,” said Suh. “Ask the marketing team or the supply team for the definition of ‘customer’ and ‘order,’ and you might get different answers.”

Esteban Kolsky, principal and founder of Think Jar, a research and advisory firm focused on customer strategies, agreed. “You don’t get to good AI without good data; no one has.”

Another key point is that AI systems evolve. “The big thing with AI is you have to exercise it often. If you use it infrequently, it’s hard to coach it to do what you want. It has to learn,” said Marge Breya, chief marketing officer at MicroStrategy.

Jennifer Hewit, who leads the Cognitive and Digital Services department at Credit Suisse, used AI to deploy a new kind of virtual IT service desk at the company, called Amelia. The project rolled out slowly, which she said was a deliberate strategy to see how it worked.

When Amelia went live in December of 2017, it proved to be about 23 percent effective at answering employee questions. As with chatbots, an inquiry gets bumped up to a live agent if the virtual help isn’t effective. “One thing we learned was not to let the system fake knowing. That was huge,” said Hewit.

Amelia was initially designed as an avatar but is now voice only. “We took that [avatar] down because she looked too much like a robot,” said Hewit. The focus is on the common, simple problems tech support deals with, such as stuck email or a password that needs to be reset. In the past year, the staff at Credit Suisse has helped train the system, which is now 85 percent effective at answering questions and serves 76,000 users in 40 countries.

Training Human Beings

While it’s well-known that AI systems need to be taught or learn from their mistakes, these systems are also training the humans who use them, warned Liza Lichtinger, owner of Future Design Station, a company that does research in human computer interaction.

“The language we deliver to devices is rewiring and remapping people’s brains, and that projects into their social interactions,” said Lichtinger.

Recently Lichtinger was doing some consulting work for a company bringing out a new virtual agent for personalized health care. In one crisis scenario the app responded: “Is the victim alive?”

“I jumped when I heard that. Suddenly it was a ‘victim’ not a patient. That changes our paradigm of how we see humans. It just shows that companies aren’t always sure about language and the messaging they’re sending people,” she said.

As these AI systems get more sophisticated, they will pick up on visual cues thanks to the inclusion of biometric data. “This new area into social signalling is going gangbusters at Stanford University where it’s about capturing how engaged you are looking at specific content,” said Lisa Hammitt, global vice president of artificial intelligence at Visa. “Ethics has to come to the forefront as we look at how we are personalizing the experience and trying to predict intent.”

Hammitt said Visa has developed a data bill of rights that is known internally as “rules for being less creepy.”

“You have to expose what the algorithm is doing. If it says you are a karate expert but you hate karate, you have to let people see that and be able to update it,” said Hammitt.

http://www.eweek.com

Friday, 26 October 2018

Mirantis Rides Kubernetes to Supercharge Open Source Edge Ecosystem

Mirantis is fed up with the slow development pace of open source edge software and infrastructure and thinks that Kubernetes is the answer. The company is using the container orchestration platform as the basis for its Mirantis Cloud Platform Edge (MCP Edge).

Boris Renski, co-founder and chief marketing officer at Mirantis, said that mobile operators are racing to deploy their 5G networks, but the real race will be in deploying edge networks to support new use cases. However, the lack of maturity in the edge ecosystem is forcing those operators to look at prepackaged edge platforms that often rely on proprietary technology.

“These operators have already spent a lot of time and money on virtualizing their networks,” Renski explained. “Why would they then want to take a prepackaged edge solution from Ericsson, Nokia, or Huawei that would just lock them into a single vendor? They are effectively forsaking everything that they have learned and their investments into virtualizing their core.”

Mirantis’ MCP Edge platform integrates Kubernetes, OpenStack, and Mirantis’ DriveTrain infrastructure manager. This allows operators to deploy a combination of container, virtual machine (VM), and bare metal points of presence (POP) that are connected by a unified management plane.

“It’s basically a Kubernetes distro that is purpose built for service provider edge deployments,” Renski said. “We are specifically targeting the infrastructure substrate that [edge workloads] would run on at an aggregation location.”

The platform builds on Mirantis’ Cloud Platform (MCP) that it launched last year. That integrated cloud platform supports VMs using OpenStack, containers using Kubernetes, and bare metal, all on the same cloud. The edge product will run alongside the core MCP platform.

The company based the edge platform on Kubernetes due to its lower footprint when compared to something like OpenStack. Renski explained that this size advantage is crucial for edge deployments where resources will be more constrained. “OpenStack is just too heavy to use in a deployment with just a few nodes,” Renski said.

Something Tangible
Renski cited the recently launched Akraino Project as an example of the glacial pace at which the industry is moving toward an open source edge platform.

The Linux Foundation launched the Akraino Project in February using source code from AT&T. The project is an open source software stack that can support carrier availability and performance needs in cloud services optimized for edge computing systems and applications. The Linux Foundation opened up the seed code for the project in late August to allow the open source community to begin digging into the platform and narrow down potential use cases.

“Akraino is out there, but good luck trying to download something to work with,” Renski said. He added that Mirantis would love to have its MCP Edge platform become part of something broader like Akraino, or OPNFV, or ETSI, “but the first step is to build this and get it out there.”

“We feel very strongly that while what we are releasing might not be ideal for the edge, it’s something that is tangible,” Renski said. “It’s important for folks that are producing functions that can run on the edge to have something tangible that they can build against.”

Developers can download a demo version of the offering from Mirantis’ website as a virtual appliance. That version can support the deployment of a Kubernetes-based, six-node edge POP that can run containers and VMs.

“Users can experiment with running applications on it or run tests against it to see how it performs,” Renski said.

Mirantis is not alone in tapping Kubernetes to bolster edge deployments.

The Cloud Native Computing Foundation (CNCF), which is housed within the Linux Foundation and itself hosts the Kubernetes project, last month formed a new working group focused on using Kubernetes to manage IoT and edge networking deployments. The Kubernetes IoT Edge Working Group, formed with the Eclipse Foundation, is using Kubernetes as a control plane and common infrastructure set to support edge use cases.

There are also proprietary efforts, such as that from IoTium, which offers edge-cloud infrastructure built on remotely managed Kubernetes.

https://www.sdxcentral.com

Thursday, 25 October 2018

Oracle announces new solutions to make cloud more secure

To help ensure customers’ data is secure from the core of the infrastructure to the edge of the cloud, Oracle has announced new cloud security technologies. Building on the self-securing and self-patching capabilities of Oracle Autonomous Database, and on the integration of machine learning and intelligent automation to remediate threats, these new cloud services allow customers to improve the security of applications deployed on the next generation of Oracle Cloud Infrastructure. The new cloud services include a Web Application Firewall (WAF) to protect against attacks on web traffic, Distributed Denial-of-Service (DDoS) protection to stop outside parties from disrupting running applications, an integrated Cloud Access Security Broker (CASB) that monitors and enforces secure configurations, and a Key Management Service (KMS) that allows customers to control the encryption of their data.

Emerging technologies such as cloud, artificial intelligence, and IoT enable organizations to drive new innovations and reduce costs. With that opportunity, however, comes increased risk, including expanded attack surfaces. Security teams rely on manual processes and disparate tools that introduce human error and take an excessive amount of time to accurately detect and respond to threats and outages. Oracle has built integrated layers of defense that are designed to secure users, apps, data, and infrastructure.

“Organizations are facing constant security threats from sophisticated actors who want to attack their applications and access their sensitive data,” said Don Johnson, senior vice president, product development, Oracle Cloud Infrastructure. “The new solutions build on Oracle’s existing, strong security heritage and give customers always-on capabilities that make it easier than ever to achieve end-to-end security. These new security layers include highly automated detective, preventive, responsive, and predictive security controls that help mitigate data breaches, address regulatory compliance, and reduce overall risk.”

To help customers combat today’s sophisticated threats and protect their data, Oracle has introduced the following automated security solutions:

• Web Application Firewall (WAF). The native WAF is designed to protect next-generation Oracle Cloud Infrastructure applications against botnets, application attacks, and DDoS attacks. The platform can automatically respond to threats by blocking them and alerting security operations teams for further investigation.
• Distributed Denial of Service (DDoS) Protection. As part of the next generation of Oracle Cloud Infrastructure, all Oracle data centers get automated detection and mitigation of high-volume, Layer 3/4 DDoS attacks. This helps ensure the availability of Oracle network resources even when under sustained attack.
• Cloud Access Security Broker (CASB). Keeping a cloud environment secure requires constant monitoring and enforcement to ensure that no one has set up an insecure network or left data unprotected. Oracle CASB constantly checks OCI environments to help make sure that corporate security practices are being followed. It comes with preconfigured policies and controls so that customers can deploy applications faster while reducing security and operational risk. CASB also leverages machine learning-based behavioral analytics to predict threats.
• Key Management Service (KMS). Oracle Key Management enables enterprises to encrypt data using keys that they control, and offers centralized key management and key lifecycle monitoring capabilities. The solution delivers partitions in highly available, certified Hardware Security Modules that are isolated per customer. It is ideal for organizations that need to verify, for regulatory compliance and security governance purposes, that their data is encrypted where it is stored (see the sketch below for the general idea).
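
To make the customer-controlled encryption idea concrete, here is a minimal sketch, assuming a generic envelope-style workflow rather than Oracle's actual KMS API (whose SDK calls and endpoints are not shown here). It uses Python's cryptography library to create a data key, encrypt a record client-side, and decrypt it with the same key; storing that key in an HSM-backed service is assumed, not implemented.

```python
# Minimal sketch of customer-controlled, envelope-style encryption.
# This is NOT the Oracle KMS API; key handling and names are illustrative only.
from cryptography.fernet import Fernet

# In a real deployment the key would be created and held by the KMS (backed
# by an HSM partition); here it is generated locally for illustration.
data_key = Fernet.generate_key()
cipher = Fernet(data_key)

record = b"customer-sensitive-record"
ciphertext = cipher.encrypt(record)       # this is what gets stored in the cloud
plaintext = cipher.decrypt(ciphertext)    # only possible with the customer's key

assert plaintext == record
```
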
http://www.csoonline.in

Wednesday, 24 October 2018

GitHub Releases New Workflow Tools, 'Octoverse' Report

GitHub held its Universe 2018 conference at the Palace of Fine Arts in San Francisco Oct. 16, and it was quite a newsy event for the little gang of about 31 million developers who use the company’s 96 million repositories of open source code each day.

Those numbers are correct. That’s how large and in charge open source software has been for more than a generation, and remains here in the waning months of 2018.

This event was largely about helping devs build workflows that are: a) easy to do; b) realistic; and c) efficient. The company introduced some futuristic features, including GitHub Actions and GitHub Connect, to advance development workflows and break down barriers between teams.

GitHub also released new security tools with the GitHub Security Advisory API, new ways to learn across teams with GitHub Learning Lab for organizations, and other items.

“As a developer, you spend too much time configuring workflows—or get locked into inflexible tools as the industry evolves around you,” GitHub Senior Vice-President of Technology Jason Warner wrote in a blog post. “We’re bringing the same tools you use while writing software to the rest of your development workflow, allowing you to focus on what matters most: code.”

Users can choose the developer tools, languages and deployment platforms they need most, supported by the ecosystem of GitHub Apps and integrations using the REST and GraphQL APIs, Warner said.
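
For instance, a minimal GraphQL call against GitHub's API endpoint looks like the sketch below; the personal access token is assumed to come from an environment variable, and the query simply asks for the authenticated user's login.

```python
# Query the GitHub GraphQL API for the authenticated user's login.
# GITHUB_TOKEN is assumed to be a personal access token set in the environment.
import os
import requests

query = "{ viewer { login } }"

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": query},
    headers={"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["data"]["viewer"]["login"])
```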

The company on Oct. 16 also released its "State of the Octoverse" report, which illustrates what the GitHub community can do in a year, such as creating 2.9 billion lines of code and promoting teamwork across time zones.

http://www.eweek.com

Tuesday, 23 October 2018

What is a private cloud?

Private cloud is a well-defined term that government standards groups and the commercial cloud industry have pretty much agreed upon, and while some think its use is waning, recent analysis indicates that spending on private cloud is still growing at a breakneck pace.

A study by IDC projects that sales from private-cloud investment hit $4.6 billion in the second quarter of 2018 alone, which is a 28.2 percent increase from the same period in 2017.

So why are organizations attracted to private cloud?

What is a private cloud?
There are four types of cloud – public, community, hybrid, and private – according to the National Institute of Standards and Technology.

NIST says that private cloud has some unique characteristics that set it apart from the rest: “The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.”

That’s what sets private cloud apart, but it also shares five characteristics with the other types of cloud, NIST says.

The first, on-demand self-service, means that end users can provision compute resources themselves without asking IT for help.

The second, broad network access, requires that the resources in the cloud be accessible via most every type of device, from workstations and laptops to tablets and phones.

The third, resource pooling, which makes for more efficient overall use of the compute resources, means various tenants share resources that are dynamically assigned and reassigned. In a private cloud this means that different divisions of an organization share resources, but those resources are exclusively available to that one organization. They are not shared with third parties, as is the case with multi-tenant services.

The fourth, rapid elasticity, enables ramping capacity up or down as needed and releasing resources for use by others when the need passes.

Finally, measured service ensures that providers and users can measure how much of various resources – storage, processing, bandwidth, numbers of user accounts – are used, so they can be allocated in a way that optimizes use of the resources.

Virtualization is just a part of private cloud

Just utilizing virtualization by throwing a hypervisor on a server does not constitute private cloud computing. While virtualization is a key component of cloud computing, it is not a cloud by itself.

Virtualization technology allows organizations to pool and allocate resources, which are both part of NIST's definition. But other qualities, such as self-service and the ability to scale those resources, are needed for it to technically be considered a cloud environment.

A private cloud – compared to public or hybrid clouds – refers specifically to resources used by a single organization, or when an organization's cloud-based resources are completely isolated.

Private cloud economics
One of the biggest misconceptions about private cloud is that the cloud will save money. It can and often does, but it doesn’t inherently do so.

The up-front costs can be considerable. For example, automation technology, an important part of a private-cloud network, can be a significant investment for many IT organizations. The result can be the ability to reallocate resources more efficiently, and it may allow some organizations to reduce their overall capital expenditures for new hardware, which can also save money. But overall savings are not assured.

Gartner analysts say the primary driving benefit of adopting a private cloud model should not be cost savings, but rather increased agility and dynamic scalability, which can improve time-to-market for businesses that make use of the technology.

Private cloud can be in the public cloud
Many people associate private cloud with being located in an organization's own on-premises data center and public cloud with coming from a third-party service provider. But as NIST notes, while a private cloud may be owned, managed and operated by a private organization, its infrastructure may be located off premises.

Many providers sell off-premises private clouds, meaning that while the physical resources are located in a third-party facility, they are dedicated to a single customer. They are not shared, as they are in a public cloud, with multi-tenant pooling of resources among multiple customers.  "Private-cloud computing is defined by privacy, not location, ownership or management responsibility," says Gartner analyst Tom Bittman.

When dealing with cloud providers, be wary of security definitions. Some vendors may, for example, outsource their data-center operations to a colocation facility where they might not dedicate hardware to each customer. Or they could pool resources among customers but claim they guarantee privacy by separating them using VPNs. Investigate the details of off-premises private-cloud offerings, Bittman advises.

Private cloud is more than IaaS
Infrastructure as a service is a big reason for adopting private cloud architectures, but it is by no means the only one. Software as a service and platform as a service are also important, although Bittman says IaaS is the fastest-growing segment.

"IaaS only provides the lowest-level data-center resources in an easy-to-consume way, and doesn't fundamentally change how IT is done," he says. Platform as a service (PaaS) is where organizations can create customized applications built to run on cloud infrastructure. PaaS comes in public or private flavors as well, having the application development service hosted either in an on-premises data center or in a dedicated environment from a provider.

Private cloud isn’t always private
Private cloud is the natural first step toward a cloud network for many organizations. It provides access to the benefits of the cloud – agility, scalability, efficiency – without some of the security concerns, perceived or real, that come with using the public cloud. But Bittman predicts that as the cloud market continues to evolve, organizations will warm to the idea of using public cloud resources. Service-level agreements and security precautions will mature, and the impact of outages and downtime will be minimized.

Eventually, Gartner predicts, the majority of private cloud deployments will become hybrid clouds, meaning they will leverage public cloud resources. That means your private cloud today may be a hybrid cloud tomorrow. "By starting with a private cloud, IT is positioning itself as the broker of all services for the enterprise, whether they are private, public, hybrid or traditional," Bittman says. "A private cloud that evolves to hybrid or even public could retain ownership of the self-service, and, therefore, the customer and the interface. This is a part of the vision for the future of IT that we call 'hybrid IT.'"

Cloud repatriation
When businesses move workloads and resources to the public cloud, then move them back to a private cloud or a non-cloud environment, that’s called cloud repatriation.

According to a 2017 survey by 451 Research, 39% of respondents said they moved at least some data or applications out of the public cloud, the top reason being performance and availability issues. A 451 blog about the research said many of the respondents’ reasons “matched the reasons we know businesses ultimately decide to shift to the public cloud in the first place.”

The top six reasons cited by the survey respondents were performance/availability issues (19%), improved on-premises cloud (11%), data sovereignty regulation change (11%), higher than expected cost (10%), latency issues (8%) and security breaches (8%).

And it’s not that these IT decision makers were abandoning public cloud for private cloud. Rather, it’s that cloud environments are constantly evolving for each organization, and many have a hybrid cloud that incorporates both private and public cloud. A majority of 451’s survey respondents (58%) said they are “moving toward a hybrid IT environment that leverages both on-premises systems and off-premises cloud/hosted resources in an integrated fashion.”

https://www.networkworld.com

Thursday, 18 October 2018

7 cloud services to ease machine learning

One of the last computing chores to be sucked into the cloud is data analysis. Perhaps it’s because scientists are naturally good at programming and so they enjoy having a machine on their desks. Or maybe it’s because the lab equipment is hooked up directly to the computer to record the data. Or perhaps it’s because the data sets can be so large that it’s time-consuming to move them. 

Whatever the reasons, scientists and data analysts have embraced remote computing slowly, but they are coming around. Cloud-based tools for machine learning, artificial intelligence, and data analysis are growing. Some of the reasons are the same ones that drove interest in cloud-based document editing and email. Teams can log into a central repository from any machine and do the work in remote locations, on the road, or maybe even at the beach. The cloud handles backups and synchronization, simplifying everything for the group.

But there are also practical reasons why the cloud is even better for data analysis. When the data sets are large, cloud users can spool up large jobs on rented hardware that accomplish the work much, much faster. There is no need to start your PC working and then go out to lunch only to come back to find out that the job failed after a few hours. Now you can push the button, spin up dozens of cloud instances loaded with tons of memory, and watch your code fail in a few minutes. Since the clouds now bill by the second, you can save time and money.

There are dangers too. The biggest is the amorphous worry about privacy. Some data analysis involves personal information from subjects who trusted you to protect them. We’ve grown accustomed to the security issues involved in locking data on a hard drive in your lab. It’s hard to know just what’s going on in the cloud.

It will be some time before we’re comfortable with the best practices used by the cloud providers, but already people are recognizing that maybe the cloud providers can hire more security consultants than the grad student in the corner of a lab. It’s not like personal computers are immune from viruses or other backdoors. If the personal computer is connected to the Internet, well, you might say it’s already part of the cloud.

There are, thankfully, workarounds. The simplest is to anonymize data with techniques like replacing personal information with random IDs. This is not perfect, but it can go a long way to limiting the trouble that any hacker could cause after slipping through the cloud’s defenses.
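
A minimal sketch of that idea in Python: swap direct identifiers for random IDs before the data leaves your environment, and keep the lookup table (if you need one at all) on premises. The field names are illustrative.

```python
# Replace direct identifiers with random pseudonyms before uploading.
# The mapping stays local; only the pseudonymized rows go to the cloud.
import secrets

pseudonyms = {}  # local lookup: real identifier -> random ID

def pseudonymize(record, id_field="email"):
    real_id = record[id_field]
    if real_id not in pseudonyms:
        pseudonyms[real_id] = secrets.token_hex(8)
    cleaned = dict(record)
    cleaned[id_field] = pseudonyms[real_id]
    return cleaned

rows = [
    {"email": "alice@example.com", "purchase": 42.00},
    {"email": "bob@example.com", "purchase": 13.50},
    {"email": "alice@example.com", "purchase": 7.25},
]
print([pseudonymize(r) for r in rows])
```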

There are other interesting advantages. Groups can share or open source data sets to the general public, something that can generate wild combinations that we can only begin to imagine. Some of the cloud providers are curating their own data sets and donating storage costs to attract users (see AWS, Azure, GCP, and IBM for starters). If you like, you might try to correlate your product sales with the weather or sun spots or any of the other information in these public data sets. Who knows? There are plenty of weird correlations out there.

Here are seven different cloud-based machine learning services to help you find the correlations and signals in your data set.

Amazon SageMaker
Amazon created SageMaker to simplify the work of using its machine learning tools. Amazon SageMaker knits together the different AWS storage options (S3, Dynamo, Redshift, etc.) and pipes the data into Docker containers running the popular machine learning libraries (TensorFlow, MXNet, Chainer, etc.). All of the work can be tracked with Jupyter notebooks before the final models are deployed as APIs of their own. SageMaker moves your data into Amazon’s machines so you can concentrate on thinking about the algorithms and not the process. If you want to run the algorithms locally, you can always download the Docker images for simplicity.
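
A rough sketch of that flow with the 2018-era SageMaker Python SDK is below; the bucket names, IAM role, and container image are placeholders, and parameter names vary between SDK versions, so treat it as an outline rather than copy-paste code.

```python
# Sketch of launching a SageMaker training job with the Python SDK.
# Bucket, role, and image values are placeholders; parameter names differ
# between SDK versions, so check the docs for the release you have installed.
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_name="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    train_instance_count=1,
    train_instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/model-output",
)

# Each channel maps a name to an S3 prefix that SageMaker mounts for training.
estimator.fit({"train": "s3://my-bucket/training-data"})
```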

Azure Machine Learning
Microsoft has seen the future of machine learning and gone all-in on the Machine Learning Studio, a sophisticated and graphical tool for finding signals in your data. It’s like a spreadsheet for AI. There is a drag-and-drop interface for building up flowcharts for making sense of your numbers. The documentation says that “no coding is necessary” and this is technically true but you’ll still need to think like a programmer to use it effectively. You just won’t get as bogged down in structuring your code. But if you miss the syntax errors, the data typing, and the other joys of programming, you can import modules written in Python, R, or several other options.

The most interesting option is that Microsoft has added the infrastructure to take what you learn from the AI and turn the predictive model into a web service running in the Azure cloud. So you build your training set, create your model, and then in just a few clicks you’re delivering answers in JSON packets from your Azure service.
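
Consuming such a service is just an HTTP POST that returns JSON. A minimal sketch, assuming a placeholder scoring URL, API key, and input schema standing in for whatever your deployed model expects:

```python
# Call a deployed Azure ML web service; the URL, key, and payload shape below
# are placeholders for whatever your own deployment expects.
import requests

scoring_url = "https://example.azureml.net/score"   # your service's scoring URL
api_key = "YOUR-API-KEY"

payload = {"data": [[5.1, 3.5, 1.4, 0.2]]}          # input shape depends on your model

resp = requests.post(
    scoring_url,
    json=payload,
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())   # JSON predictions back from the Azure-hosted model
```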

BigML
BigML is a hybrid dashboard for data analysis that can either be used in the BigML cloud or installed locally. The main interface is a dashboard that lists all of your files waiting for analysis by dozens of machine learning classifiers, clusterers, regressors, and anomaly detectors. You click and the results appear.

Lately the company has concentrated on new algorithms that enhance the ability of the stack to deliver useful answers. The new Fusion code can integrate the results from multiple algorithms to increase accuracy.

BigML is priced by subscription, with a generous free tier on BigML’s own machines. You can also build out a private deployment on AWS, Azure, or GCP. If that’s still too public, the company will deploy it on your private servers.

Databricks
The Databricks toolset is built by some of the developers of Apache Spark who took the open source analytics platform and added some dramatic speed enhancements, increasing throughput with some clever compression and indexing. The hybrid data store called Delta is a place where large amounts of data can be stored and then analyzed quickly. When new data arrives, it can be folded into the old storage for rapid re-analysis.

All of the standardized analytical routines from Apache Spark are ready to run on this data, but with some much-needed improvements to the Spark infrastructure, such as integrated notebooks for your analysis code.
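
A small PySpark sketch of the Delta pattern described above, with illustrative paths and column names: append newly arrived records to a Delta table, then re-read the combined data for a quick aggregate.

```python
# Fold new records into a Delta table and re-analyze them (paths illustrative).
# Assumes a Databricks-style environment where the Delta format is available.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()

# Newly arrived data, e.g. dropped off by an ingestion job.
new_events = spark.read.json("/mnt/landing/new_events.json")

# Append it to the existing Delta table.
new_events.write.format("delta").mode("append").save("/delta/events")

# Re-read the combined table and run a quick aggregate.
events = spark.read.format("delta").load("/delta/events")
events.groupBy("event_type").count().show()
```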

Databricks is integrated with both AWS and Azure and priced according to consumption and performance. Each computational engine is measured in Databricks Units (DBUs). You’ll pay more for a faster model.

DataRobot
Many of the approaches here let you build a machine learning model in one click. DataRobot touts the ability to build hundreds of models simultaneously, also with just one click. When the models are done, you can pick through them, figure out which one does the best job of predicting, and go with that. The secret is a “massively parallel processing engine,” in other words a cloud of machines doing the analysis.

DataRobot is expanding by implementing new algorithms and extending current ones. The company recently acquired Nutonian, whose Eureqa engine should enhance the automated machine learning platform’s ability to create time series and classification models. The system also offers a Python API for more advanced users.

DataRobot is available through the DataRobot Cloud or through an enterprise software edition that comes with an embedded engineer.

Google Cloud Machine Learning Engine
Google has invested heavily in TensorFlow, one of the standard open-source libraries for finding signals in data, and now you can experiment with TensorFlow in Google’s cloud. Some of the tools in the Google Cloud Machine Learning Engine are open source and essentially free for anyone who cares to download them and some are part of the commercial options in the Google Cloud Platform. This gives you the freedom to explore and avoid some lock-in because much of the code is open source and more or less ready to run on any Mac, Windows, or Linux box.

There are several different parts. The easiest place to begin may be the Colaboratory, which connects Jupyter notebooks with Google’s TensorFlow back end so you can sketch out your code and see it run. Google also offers the TensorFlow Research Cloud for scientists who want to experiment. When it’s appropriate, you can run your machine learning models on Google’s accelerated hardware with either GPUs or TPUs.
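
Because the tooling is standard TensorFlow, the same code runs on a laptop, in a Colaboratory notebook, or on Google's managed training service. A tiny Keras sketch, trained on random data just to show the shape of the workflow:

```python
# Tiny Keras model trained on random data; the same script can run locally,
# in Colaboratory, or be packaged for Google's managed training service.
import numpy as np
import tensorflow as tf

x = np.random.rand(256, 4).astype("float32")
y = np.random.randint(0, 3, size=(256,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))
```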

IBM Watson Studio
The brand name may have been born when a huge, hidden AI played Jeopardy, but now Watson encompasses much of IBM’s push into artificial intelligence. The IBM Watson Studio is a tool for exploring your data and training models in the cloud or on-prem. In goes the data, and out come beautiful charts and graphs on a dashboard ready for the boardroom.

The biggest difference may be the desktop version of the Watson Studio. You can use the cloud-based version to study your data and enjoy all of the power that comes with the elastic resources and centralized repository. Or you can do much the same thing from the firewalled privacy and convenience of your desktop.

A machine learning model in every cloud
While many people are looking to choose one dashboard for all of their AI research, there’s no reason why you can’t use more than one of the options here. Once you’ve completed all of the pre-processing and data cleansing, you can feed the same CSV-formatted data into all of these services and compare the results to find the best choice. Some of these services already offer automated comparisons between algorithms. Why not take it a step further and use more than one?
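
The same head-to-head spirit works locally too. A scikit-learn sketch, with a placeholder CSV path and column names, that scores two different model families on the same data with cross-validation:

```python
# Compare two model families on the same CSV using cross-validation.
# The file path and column names are placeholders for your own data set.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("my_data.csv")
X, y = df.drop(columns=["label"]), df["label"]

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(n_estimators=200)),
]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```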

You can also take advantage of some of the open standards that are evolving. Jupyter notebooks, for instance, will generally run without too much modification. You can develop on one platform and then move much of this code with the data to test out any new or different algorithms on different platforms.

We’re a long way from standardization, and there are spooky and unexplained differences between algorithms. Don’t settle for just one algorithm or one training method. Experiment with as many different modeling tools as you can manage.

https://www.infoworld.co