Saturday, 31 March 2018

Essential skills and traits of elite data scientists

Data scientists continue to be in high demand, with companies in virtually every industry looking to get the most value from their burgeoning information resources.

“As organizations begin to fully capitalize on the use of their internal data assets and examine the integration of hundreds of third-party data sources, the role of the data scientist will continue to expand in relevance,” says Greg Boyd, director at consulting firm Protiviti.

“In the past, the teams responsible for data were relegated to the back rooms of the IT organization, performing the critical database tasks to keep the various corporate systems fed with the data ‘fuel’ [that] allowed corporate executives to report out on operations activities and deliver financial results,” Boyd says.

This role is important, but the rising stars of the business are those savvy data scientists who can not only manipulate vast amounts of data with sophisticated statistical and visualization techniques, but also apply solid business acumen to derive forward-looking insights, Boyd says. These insights help predict potential outcomes and mitigate potential threats to the business.

So what does it take to be a data science whiz? Here are some important attributes and skills, according to IT leaders, industry analysts, data scientists, and others.

Critical thinking
Data scientists need to be critical thinkers, to be able to apply objective analysis of facts on a given topic or problem before formulating opinions or rendering judgments.

“They need to understand the business problem or decision being made and be able to 'model' or 'abstract' what is critical to solving the problem, versus what is extraneous and can be ignored,” says Anand Rao, global artificial intelligence and innovation lead for data and analytics at consulting firm PwC. “This skill more than anything else determines the success of a data scientist,” Rao says.

A data scientist needs to have experience but also have the ability to suspend belief, adds Jeffry Nimeroff, CIO at Zeta Global, which provides a cloud-based marketing platform.

“This trait captures the idea of knowing what to expect when working in any area, but also knowing that experience and intuition are imperfect,” Nimeroff says. “Experience provides benefits but is not without risk if we get too complacent. This is where the suspense of belief is important.”

It’s not about looking at things with the wide eyes of a novice, Nimeroff says, but instead stepping back and being able to assess a problem or situation from multiple points of view.

Coding
Top-notch data scientists know how to write code and are comfortable handling a variety of programming tasks.

“The language of choice in data science is moving towards Python, with a substantial following for R as well,” Rao says. In addition, there are a number of other languages in use such as Scala, Clojure, Java and Octave.

“To be really successful as a data scientist, the programming skills need to comprise both computational aspects — dealing with large volumes of data, working with real-time data, cloud computing, unstructured data, as well as statistical aspects — [and] working with statistical models like regression, optimization, clustering, decision trees, random forests, etc.,” Rao says.
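
For readers newer to the field, here is a minimal, illustrative Python sketch of the statistical side Rao describes, fitting one of the models he names (a random forest) on synthetic data with the widely used scikit-learn library; the dataset and parameters are stand-ins, not a real business problem.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real business data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# One of the statistical models Rao mentions: a random forest classifier.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("Holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))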

The rise of big data since the late 1990s has demanded that more and more data scientists understand and be able to code in languages such as Python, C++ or Java, says Celeste Fralick, chief data scientist at security software company McAfee.

If a data scientist doesn’t understand how to code, it helps to be surrounded by people who do. “Teaming a developer with a data scientist can prove to be very fruitful,” Fralick says.

Math
Data science is probably not a good career choice for people who don’t like or are not proficient at mathematics.

“In our work with global organizations, we engage with clients looking to develop complex financial or operational models,” Boyd says. “In order for these models to be statistically relevant, large volumes of data are required. The role of data scientist is to leverage their deep expertise in mathematics to develop statistical models which may be used to develop or shift key business strategies.”

The data scientist whiz is one who excels at mathematics and statistics while also having the ability to collaborate closely with line-of-business executives and communicate what is actually happening in the “black box” of complex equations, providing reassurance that the business can trust the outcomes and recommendations, Boyd says.

Machine learning, deep learning, AI
Industries are moving extremely fast in these areas because of increased compute power, connectivity, and the huge volumes of data being collected, Fralick says. “A data scientist needs to stay in front of the curve in research, as well as understand what technology to apply when,” she says. “Too many times a data scientist will apply something ‘sexy’ and new, when the actual problem they are solving is much less complex.”

Data scientists need to have a deep understanding of the problem to be solved, and the data itself will speak to what’s needed, Fralick says. “Being aware of the computational cost to the ecosystem, interpretability, latency, bandwidth, and other system boundary conditions — as well as the maturity of the customer — itself helps the data scientist understand what technology to apply,” she says. That’s true as long as they understand the technology.

Also valuable are statistical skills. Most employers do not consider these skills, Fralick says, because today’s automated tools and open source software are so readily available. “However, understanding statistics is a critical competency to comprehending the assumptions these tools and software make,” she says.

It’s not enough to understand the functional interfaces to the machine learning algorithms, says Trevor Schulze, CIO at data storage provider Micron Technology. “To select the appropriate algorithm for the job, a successful data scientist needs to understand the statistics within the methods and the proper data preparation techniques to maximize overall performance of any model,” he says.
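
As a small illustration of Schulze's point about data preparation, the sketch below (again using scikit-learn, and not tied to any particular employer's workflow) runs the same distance-based algorithm with and without feature scaling; because k-NN relies on distances, unscaled features with large ranges tend to dominate the calculation and drag accuracy down.

from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Small public dataset used purely for illustration.
X, y = load_wine(return_X_y=True)

# Same algorithm, with and without a basic preparation step (feature scaling).
raw = KNeighborsClassifier()
scaled = make_pipeline(StandardScaler(), KNeighborsClassifier())

print("Unscaled k-NN accuracy:", cross_val_score(raw, X, y, cv=5).mean())
print("Scaled k-NN accuracy:  ", cross_val_score(scaled, X, y, cv=5).mean())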

Skills in computer science are also important, Schulze says. Because data science is mainly done at the keyboard, strong fundamentals in software engineering are helpful.

Communication
The importance of communication skills bears repeating. Virtually nothing in technology today is performed in a vacuum; there’s always some integration between systems, applications, data and people. Data science is no different, and being able to communicate with multiple stakeholders using data is a key attribute.

“The 'storytelling' ability through data translates what is a mathematical result into an actionable insight or intervention,” says Rao. “Being at the intersection of business, technology, and data, data scientists need to be adept at telling a story to each of the stakeholders.”

That includes communicating about the business benefits of data to business executives; about technology and computational resources; about the challenges with data quality, privacy, and confidentiality; and about other areas of interest to the organization.

Being a good communicator includes the ability to distill challenging technical information into a form that is complete, accurate, and easy to present, Nimeroff says. “A data scientist must remember that their execution yields results that can and will be used to support directional action by the business,” he says. “So, being able to ensure that the audience understands and appreciates everything that is being presented to them — including the problem, the data, the success criteria, and the results — is paramount.”

A good data scientist must have the business savvy and inquisitiveness to adequately interview the business stakeholders to understand the problem and identify which data is likely to be relevant, Schulze says.

In addition, data scientists need to be able to explain algorithms to business leaders. “Communicating how an algorithm arrived at a prediction is a critical skill to gain leaders’ trust in predictive models being part of their business processes,” Schulze says.

Data architecture
It is imperative that the data scientist understand what is happening to the data from inception to model to business decision.

“To not understand the architecture can have serious impact on sample size inferences and assumptions, often leading to incorrect results and decisions,” Fralick says.

Even worse, things can change within the architecture. Without understanding its impact on models to begin with, a data scientist might end up “on a firestorm of model redo’s or suddenly inaccurate models without understanding why,” Fralick says.

While Hadoop gave big data legs by delivering the code to the data and not vice versa, Fralick says, understanding the complexities of the data flow or data pipeline is critical to ensuring good fact-based decision-making.

Risk analysis, process improvement, systems engineering
A sharp data scientist needs to understand the concepts of analyzing business risk, making improvements in processes, and how systems engineering works.

“I’ve never known an excellent data scientist without these” skills, Fralick says. “They all play hand-in-hand, both inwardly focused to the data scientist but outwardly to the customer.”

Inwardly, the data scientist should remember the second half of the title — scientist — and follow good scientific theory, Fralick says.

Building in risk analyses at the start of model development can mitigate risks. “Outwardly, these are all skills that data scientists require to probe the customer about what problem they are trying to solve,” she says.

Connecting spending to process improvement, and comprehending inherent company risks and other systems that can affect data or the results of a model, can lead to greater customer satisfaction with the data scientist’s efforts, Fralick says.

Problem solving and good business intuition
In general, the traits great data scientists exhibit are the same traits that are exhibited by any good problem solver, Nimeroff says. “They look at the world from many perspectives, they look to understand what they are supposed to be doing before pulling all the tools out of their tool belt, they work in a rigorous and complete manner, and they can smoothly explain the results of their execution,” Nimeroff says.

When evaluating technology professionals for roles such as data scientists, Nimeroff looks for these traits. “The approach yields far more successes than failures, and also ensures that potential upside is maximized because critical thinking is brought to the forefront.”

Finding a great data scientist involves finding someone who has somewhat contradictory skill sets: intelligence to handle data processing and create useful models; and an intuitive understanding of the business problem they’re trying to solve, the structure and nuances of the data, and how the models work, says Lee Barnes, head of Paytronix Data Insights at business software provider Paytronix Systems.

“The first of these is the easiest to find; most people with good math skills and a degree in math, statistics, engineering, or other science-based subjects are likely to have the intellectual horsepower to do it,” Barnes says. “The second is much harder to find. It is surprising how many people we interview that have built complex models, but when pushed on why they think the model worked or why they chose the approach they did, they don’t have a good answer.”

These people are likely to be able to explain how accurate a model was, “but without understanding why and how it works, it’s hard to have a lot of confidence in their models,” Barnes says. “Someone with this deeper understanding and intuition for what they are doing is a true data science whiz, and will likely have a successful career in this field.”

https://www.cio.com/

Robot will crawl through pipes to help decommission nuclear facility

There are miles of pipes at a closed uranium enrichment plant in Piketon, Ohio, that no living creature can safely enter.

So the Department of Energy (DOE) will use a couple of custom robots.

Robots have found an important calling working in radioactive environments. In the wake of the Fukushima disaster, teams of Japanese roboticists have created a small army of robots capable of surviving, if only for a few minutes, inside the compromised reactor cores.

One of those robots recently transmitted the first photos of nuclear debris from the site.

The job of helping decommission the Piketon facility, which is operated by the Department of Energy and has been closed since 2000, will fall to a pair of customized autonomous robots developed at the Robotics Institute at Carnegie Mellon University.

Specifically, the autonomous robots will be used to identify uranium deposits on pipe walls.

That work has previously fallen to humans working on scaffolding outside the pipes. Due to the sensitive nature and inherent health risks of the work, it's a costly endeavor.

DOE officials estimate the robots could save tens of millions of dollars at Piketon, and save perhaps $50 million at a similar uranium enrichment plant in Paducah, Kentucky.

"This will transform the way measurements of uranium deposits are made from now on," says William "Red" Whittaker, robotics professor and director of the Field Robotics Center at CMU.

The tetherless robot he helped develop is called RadPiper. It will maneuver through pipes 30 inches and greater in diameter atop flexible tracks.

Like many driverless cars, the robot is equipped with a LiDAR as well as a fisheye camera to detect and maneuver around obstacles, such as closed valves.

RadPiper measures radioactivity with a "disc-collimated sensing instrument" that uses a sodium iodide sensor to count gamma rays.

Like a post-apocalyptic Roomba, it returns to its launch point after navigating each section of pipe.

DOE has paid CMU $1.4 million to develop the robots as part of what Carnegie Mellon calls the Pipe Crawling Activity Measurement System.

It's a great example of laboratory robotics making it into the real world -- a process that's proven particularly slow when it comes to bringing robotics developed by grad school researchers to commercial markets.

www.zdnet.com

Hybrid cloud: How organizations are using Microsoft's on-premises cloud platform

Microsoft’s on-premises Azure cloud platform, Azure Stack, has now been embedded in real-world, core business environments, with early adopters validating business use cases that require secured, hosted environments.  Here are some of the current uses of Azure Stack that are deployed in enterprises.

Azure Stack in healthcare
Healthcare organizations have been prime candidates for Azure Stack, as they fit the model of having large (extremely large!) sets of data and customers, and also face regulatory policies and protections aimed at securing the data being transacted.  Azure Stack fits the mold of providing healthcare organizations the cloud scale they wish to achieve, in a protected, managed and secured environment.

Beyond simply providing cloud-scale operations of Azure on-premises, Azure Stack has also given healthcare providers the ability to leverage Azure (public) for application development and the flexibility to host applications in the on-premises, secured environment.  Public cloud provides a wonderful platform for application development, allowing an organization to code, develop, test, roll back, retest and start all over on platform systems.

The organization doesn’t need to buy hardware for an application-development cycle and then sit on the investment for weeks or months until the next large development and redeployment cycle.  Cloud provides an organization the ability to burst during peak development times, then completely de-allocate all systems and configurations until resources are needed again.

Since the code development doesn’t involve sensitive and protected patient data, open development in a public cloud doesn’t compromise the organization’s ability to develop in a shared cloud environment.  Once the application is developed and tested, it can simply be moved to Azure Stack on-premises using exactly the same system configuration states, settings, template builds and models, ensuring that application dev/test validation will work on-premises as it did in Azure public-cloud test cycles.
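
Both environments speak the Azure Resource Manager API, which is what makes that portability possible. As a rough sketch only, and assuming a service principal plus a locally stored ARM template, the Azure SDK for Python of that era could target public Azure or an Azure Stack stamp simply by swapping the management endpoint; every identifier, name and URL below is a placeholder, and exact call shapes vary between SDK versions.

import json

from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient

# All IDs, secrets, names and the endpoint below are placeholders.
credentials = ServicePrincipalCredentials(
    client_id="<app-id>", secret="<app-secret>", tenant="<tenant-id>")

# Public Azure uses the SDK's default endpoint; pointing base_url at an
# Azure Stack stamp's ARM endpoint targets the on-premises environment instead.
client = ResourceManagementClient(
    credentials, "<subscription-id>",
    base_url="https://management.local.azurestack.external")

# The very same ARM template that was validated in public Azure.
with open("app-template.json") as f:
    template = json.load(f)

client.deployments.create_or_update(
    "app-rg", "app-deployment",
    {"mode": "Incremental", "template": template, "parameters": {}})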

Azure Stack in government
Another core sector where Azure Stack has taken hold, with initial production deployments, is government, both international government entities and specific defense-contract enterprises.

Microsoft has an isolated Azure instance for U.S. government entities, but not for other governments around the globe.

Azure Stack has filled this need by providing protected Azure cloud services to international government entities, allowing them to take advantage of Azure resources both for open and publicly accessible resources, as well as for private content and resources managed by Azure Stack.

Additionally, in the U.S. an initial group of defense-contract entities are leveraging Azure Stack for high-performance, tightly managed, and secured cloud-scale platform environments. These defense contractors are leveraging Azure (public) and Azure Stack (on-premises) just as healthcare organizations do.

Azure Stack fulfills the needs of government entities in a couple of ways. One is project-based scale-up: organizations provision cloud resources on an as-needed basis, then deallocate them once the project is completed, minimizing the long-term costs of buying, maintaining, supporting or leasing equipment. The other is secure operations for key workloads: organizations can apply security policies and practices to workloads in Azure Stack that they may not be able to apply as easily in the public Azure Government cloud.

Azure Stack in isolated environments
Some organizations do application development with users that do not have reliable Internet connectivity.  One such enterprise’s development team is in a part of the world where Internet connectivity is spotty at best.  The model has been to work on standalone systems using containers, where the containers can be ported across when Internet connectivity is working or the containers are saved to physical media that is shipped to a location where the code and content is needed.

The challenge for the organization has been the security of code development. With multiple standalone systems and with containers being moved around, the organization has been unable to guarantee the integrity of the code or prevent leakage of intellectual property.

With Azure Stack, all users are connected within the walls of the remote site.  All content being created remains isolated in the secured and encrypted Azure Stack environment.  When content is created and needs to be ported, that can be accomplished via secured connections between Azure Stack (on-premises) and Azure (public).

These transfers are logged, tracked and tightly managed, providing the organization a seamless way to move intellectual property around, yet retain integrity and security of the content being developed.

Azure Stack provides real-world solutions to business cases that we anticipated would be important as the product was being developed two-and-a-half years ago, and that are now available for use by enterprises and government entities around the world.

https://www.networkworld.com

Friday, 30 March 2018

Avaya, Pyrios Partner to Proliferate Cloud Presence

The cloud is a global phenomenon, with Gartner projecting substantial growth in the coming years and valuing the public cloud services market at $411.4 billion by 2020. Cloud services providers are jockeying for position in this ripe market, many by moving products into new markets.

Avaya announced a partnership with Pyrios bringing ‘Powered by Avaya’ to New Zealand. The Software-as-a-Service stack provides flexibility fit for a medium-sized business seeking cloud migration for unified communications or the contact center.

Delivered on a subscription-based model, the offering lets businesses select a fully hosted, hybrid-cloud or on-premises deployment. Pyrios will offer systems integration expertise in developing communications solutions, and partners can enhance the package with value-added services to provide a complete cloud solution.

Aimed at the midmarket, ‘Powered by Avaya’ joins the Avaya Breeze, Oceana and Equinox platforms seeking to ride the wave of transformation. Gartner noted an increase in market size of nearly 20 percent globally in 2017, with more to come. Mature markets like New Zealand serve as fertile ground.

Pyrios Chief Executive Officer, Robyn O’Reilly, explained, “As a nation, New Zealand is among the leaders in the adoption of cloud-based services; from day-to-day applications right through to specialized solutions. Much of this can be attributed to the densely-populated midmarket, which inspires innovation and adaptability as organizations devise advanced strategies to compete in the fast-moving technology climate.”

The cloud is a global phenomenon, and those capable of providing the keys to future forward technologies like artificial intelligence, Internet of Things (IoT), big data, machine learning and more will be those with the last laugh.

Has your organization migrated communications to cloud?

http://unified-communications.tmcnet.com

ZaiLab Putting the Cloud into Cloud Contact Center

Back in the early years of this decade, when cloud computing really reached the beginning of its hype cycle, countless vendors were promoting their cloud solutions and services. Yet, several obstacles stood in the way of progress, including security, interoperability and standardization, connectivity and access, service quality, and of course long-term cost models.

These obstacles have since been overcome – to the degree they no longer present a deterrent to cloud adoption.  Standards have improved and there is plenty of ongoing interop work between vendors across industries, and APIs have become a common language in the tech space.  Connectivity has improved dramatically and we’re now on the cusp of a major leap with 5G deployments looming.  All of these have driven consistency in service quality that has helped increase adoption of cloud services.

Security has become an even bigger issue, though not one that’s unique to cloud providers; it impacts every provider, business, and individual user on a daily basis.  Security risk has, one might say, become a technology standard, but one that cloud providers are often better equipped to handle than most businesses.

“Cloud providers using good tools can do a better job of protecting data  than you would do yourself,” explained John Thielens to me several years ago when he was chief architect of cloud services at Axway.  He is now CEO at Cleo, but added back then that, “Cloud makes sense, especially for smaller companies, from a security perspective, provided their providers are transparent about their security.”

The maturation of cloud services has brought about an entirely new business mindset – and a host of thriving new providers delivering cloud-based communications, contact center, networking and IT, and even security services.  While the industry has matured and large enterprises, too, are moving increasingly into the cloud, the model still holds tremendous value for the SMB market, where IT resources are at a minimum.

Many of these new cloud-based providers are taking the approach that simplicity is key – that the value of cloud is muted when there are too many deployment requirements, from minimum licenses, edge devices, and so forth, which still require in-house IT support.  Businesses are looking for simple to deploy and simple to manage services at an affordable cost.

“The rapid-deployment, low-cost cloud model never really made it a decade ago, because there were too many contractual requirements with the large contact center players who were moving to a cloud model,” explains Fokion Natsis, chief sales and strategy officer at ZaiLab.  “There were all these solutions out there, but none of them were really providing what business wanted – they weren’t true cloud solutions.”

Natsis spent almost six years at what was formerly Interactive Intelligence, which has since been acquired by Genesys, so he has a solid understanding of what businesses are looking for in a cloud solution.

“We’re trying to change the model at ZaiLab,” he says.  “We’re bringing an enterprise-grade contact center to market, fully hosted on Amazon’s cloud, on a pure consumption model.”

Part of the model includes speed of deployment, which the company feels is often complicated by vendors in order to meet specific revenue targets. ZaiLab’s record for a 10-seat deployment is five minutes, 35 seconds.

The key is there are no setup fees, no implementation costs, and no ongoing maintenance.  Billing is done on a per-second basis for voice, and per-message, per-email, per-web session billing for other interactions. 
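
As a back-of-the-envelope illustration of what that consumption model means in practice, consider the sketch below; the rates are invented for the example and are not ZaiLab's actual pricing.

# Illustrative consumption-billing arithmetic; all rates are made up.
RATE_PER_VOICE_SECOND = 0.0005   # hypothetical $ per second of talk time
RATE_PER_MESSAGE = 0.01          # hypothetical $ per chat/SMS message
RATE_PER_EMAIL = 0.01            # hypothetical $ per email handled
RATE_PER_WEB_SESSION = 0.02      # hypothetical $ per web session

def monthly_bill(voice_seconds, messages, emails, web_sessions):
    """Sum usage across channels; no seat licenses, setup or maintenance fees."""
    return (voice_seconds * RATE_PER_VOICE_SECOND
            + messages * RATE_PER_MESSAGE
            + emails * RATE_PER_EMAIL
            + web_sessions * RATE_PER_WEB_SESSION)

# Example: a 10-seat team's modest month of interactions.
print(f"${monthly_bill(250_000, 3_000, 1_500, 800):,.2f}")   # -> $186.00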

If you’re wondering about the cost model, one customer in the insurance market thought it had been under-invoiced upon receiving its first bill and called ZaiLab to verify.  To the customer’s delight, the two-thirds reduction in cost was, indeed, accurate.

The model is ideal for the SMB market (2-200 seats), which has always faced a barrier to getting into a contact center deployment, yet is very much in need of the same capabilities afforded to larger competitors.  ZaiLab can scale to meet any business’ needs, but larger companies tend to have different requirements and in-house teams to manage software, making the model less attractive for them.

Part of the secret is AI-based routing.  Voice has always been the channel that requires immediate response, but ZaiLab treats all interactions equally, sending all inbound channels into the same virtual waiting room, where the system performs optimized routing based on agents’ skills.  It’s all designed to create the best customer experience within an omnichannel environment. 

“We really look at all available metrics,” explains Natsis.  “That includes SLA, agent idle time, customer feedback, customer history and other more common data.”
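
To make the idea concrete, here is a toy sketch of skills- and metrics-based routing over a single mixed-channel queue; the fields and weights are invented for illustration and are not ZaiLab's actual scoring model.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skills: set
    idle_seconds: int        # how long the agent has been waiting for work
    csat: float              # rolling customer-feedback score, 0..1

@dataclass
class Interaction:
    channel: str             # "voice", "email", "chat", "web" -- all queued together
    required_skills: set

def score(agent, interaction):
    # Agents lacking a required skill are ineligible; otherwise blend the
    # kinds of metrics mentioned above (the weights here are arbitrary).
    if not interaction.required_skills <= agent.skills:
        return float("-inf")
    return 1.0 * agent.csat + 0.01 * agent.idle_seconds

def route(agents, interaction):
    return max(agents, key=lambda a: score(a, interaction))

agents = [Agent("Ada", {"claims", "en"}, idle_seconds=120, csat=0.92),
          Agent("Ben", {"claims", "en", "es"}, idle_seconds=30, csat=0.88)]
print(route(agents, Interaction("email", {"claims", "en"})).name)   # -> Ada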

In addition, ZaiLab is investing in the agent community.  The company has already created a training facility to certify home-based agents in South Africa, and is now in talks with the U.S. Chamber of Commerce to introduce a similar project here.  While most people are afraid of AI and fear it will cause job loss due to a need for fewer agents, ZaiLab sees it as an opportunity to use the training and certification process to create opportunities, especially for new workers, pensioners, retired military personnel, and others who may otherwise have a hard time finding employment.

“AI is a very loosely used term today, and we are nowhere near what it really is,” Natsis says.  “We want to use AI to actually create jobs, not reduce them.”

http://callcenterinfo.tmcnet.com

Thursday, 29 March 2018

Skip containers and do serverless computing instead

Normally, mainstream enterprises are slow to embrace cutting-edge technologies, with startups and other early adopters setting the pace on everything from public cloud to NoSQLs. Serverless computing, however, just might be different.

Serverless, first popularized by AWS Lambda, has seen “astonishing” growth of over 300 percent year over year, according to AWS chief Andy Jassy. Ironically, that growth may be driven by the “laggards,” as Redmonk analyst James Governor calls them, rather than the techno-hipsters.

Containers are hot, but maybe not for you
Over the last few years, nothing has been as hot as containers. Indeed, containers are so hot they’ve broken the scale ETR uses to measure CIO intent to purchase enterprise technology, registering “the strongest buying intention score ever recorded in [its] six-year history.” The reason is simple: Containers make developers much more productive. As Chenxi Wang writes, containers let developers “deploy, replicate, move, and back up a workload even more quickly and easily than you can do so using virtual machines.”

That’s big.

As great as they are, containers have a built-in deficiency: They’re not nearly easy enough, as Governor highlights:

Containers can help with IT cost reduction, but the main driver of adoption is velocity and the efficient management of infrastructure. The problem with container infrastructures is that this efficient management also calls for highly skilled developers and operators. Talent is a scarce resource. Even if you can afford the people, they may prefer to work for cooler companies.

“Cooler companies” refers to every company but yours. Not really, of course, but most large, successful enterprises may be cool with financial analysts but less so with developers. For these companies, serverless—the hottest thing since, well, containers—could be the answer.

Serverless is cooler than cool in the mainstream
“Serverless” refers to services like AWS Lambda that offer developers a way to focus on writing application logic rather than server infrastructure. Yes, this means a developer must trust that AWS, Microsoft, or Google get that infrastructure right, but the upside to embracing these cloud back ends is huge. As such, Stackery told Governor, “Serverless is being driven by mainstream enterprises. We see them leapfrogging containers so they can take something off the shelf and move quickly.”
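
To make "focus on writing application logic" concrete, a complete serverless deployment unit can be as small as the Python handler below; the event fields assume a hypothetical API Gateway request, and there is no server, container or process-management code anywhere in it.

# A complete serverless "deployment unit": one Python handler for AWS Lambda.
# AWS invokes handler(event, context) on demand and handles all scaling.
import json

def handler(event, context):
    # Hypothetical payload: an API Gateway request carrying a JSON body.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }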

In other words, they’d love to get into containers, but they may lack the expertise. So they’re borrowing that expertise from Amazon or another serverless vendor and skipping the container revolution.

For those enterprises less willing to trust their application infrastructure to a cloud vendor, some have hoped to bring serverless “in house,” running it on-premises in a corporate datacenter, just as some hope to ape the benefits of public cloud computing in so-called private clouds in their datacenters. It’s a nice theory. Unfortunately, it doesn’t work. Not for most companies, anyway.

Indeed, the minute that you bring serverless in house, you start to “negate the very advantage you started with,” argues AWS evangelist Mackenzie Kosut. Instead, he says, companies should “spend more time on developing your application and business logic, less time managing systems.” Or, as AWS vice president of cloud architecture Adrian Cockcroft puts it, if you “want to move quickly and cheaply,” you need to stop fixating on servers and instead entrust that to a cloud partner like AWS, Microsoft, or Google.

Of course, there will always be companies that want to get deep into their systems. For such companies, containers are revelatory in how much control and power they gain over their infrastructure.

Yet for most developers, contends Octopus engineer Pawel Pabich, “containers are a distraction.” This is an amazing statement given how important containers have been. But it smells like truth. Developers are the new kingmakers, as the saying goes, but not everyone has the über-developers on their payroll necessary to tame containers to their needs. For such “laggards,” serverless will do just fine.

https://www.infoworld.com

What’s new in Kubernetes 1.10

The latest version of the container orchestration system Kubernetes, 1.10, moves some storage, DNS, and authentication features to beta status. Kubernetes 1.10 is also the first release under a new issue-lifecycle management strategy for the product.

Where to download Kubernetes

Kubernetes can be obtained directly from source at the releases page of its official GitHub repository. Kubernetes is also available by way of the upgrade process provided by the various vendors that supply Kubernetes distributions.

Current version: New features in Kubernetes 1.10

The beta release of the Container Storage Interface (alpha as of Kubernetes 1.9) provides an easier way to add volume plug-ins to Kubernetes, something that previously required recompiling the Kubernetes binary. The kubectl CLI, used to perform common maintenance and administrative tasks in Kubernetes, can now accept binary plug-ins that perform authentication against third-party services such as cloud providers and Active Directory.

“Non-shared storage,” or the ability to mount local storage volumes as persistent Kubernetes volumes, is now also beta. The APIs for persistent volumes now have additional checks to make sure persistent volumes that are in use aren’t deleted. The native DNS provider in Kubernetes can now be swapped with CoreDNS, a CNCF-managed DNS project with a modular architecture, although the swap can only be accomplished when a Kubernetes cluster is first set up.
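
For a sense of what the local-volume feature looks like in practice, here is a hedged sketch that registers a node-local disk as a PersistentVolume using the official Kubernetes Python client; the node name, path and storage class are placeholders, and a matching StorageClass still has to be defined separately.

# Hedged sketch: registering a local disk as a PersistentVolume with the
# official Kubernetes Python client. Node name, path and storage class are
# placeholders for this example.
from kubernetes import client, config

config.load_kube_config()

pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="local-pv-node1"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "100Gi"},
        access_modes=["ReadWriteOnce"],
        persistent_volume_reclaim_policy="Retain",
        storage_class_name="local-storage",
        local=client.V1LocalVolumeSource(path="/mnt/disks/ssd1"),
        node_affinity=client.V1VolumeNodeAffinity(
            required=client.V1NodeSelector(node_selector_terms=[
                client.V1NodeSelectorTerm(match_expressions=[
                    client.V1NodeSelectorRequirement(
                        key="kubernetes.io/hostname",
                        operator="In",
                        values=["node-1"])])]))))

client.CoreV1Api().create_persistent_volume(body=pv)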

The Kubernetes project is now also moving to an automated issue-lifecycle management process, to ensure stale issues don’t stay open for too long.

https://www.infoworld.com

LF Deep Learning Foundation Debuts to Advance AI Usage

The Linux Foundation is continuing to expand its scope, announcing the launch of the LF Deep Learning Foundation on March 26.

The goal of the LF Deep Learning Foundation is to make it easier to adopt and deploy artificial intelligence and machine learning methodologies for industry-specific use cases, including cyber-security threat detection, network automation and image recognition. The LF Deep Learning Foundation is backed by Amdocs, AT&T, B.Yond, Baidu, Huawei, Nokia, Tech Mahindra, Tencent, Univa and ZTE. 

The initial project at the core of the LF Deep Learning Foundation is Acumos, which was announced in November 2017, though few details were publicly disclosed at the time. The Acumos project integrates code contributed by AT&T and Tech Mahindra to enable organizations to more easily deploy AI models. 

There are multiple key components that will help to make AI use more widespread, Arpit Joshipura, general manager of networking and orchestration at the Linux Foundation, told eWEEK. 

AI frameworks, such as TensorFlow, PaddlePaddle and Caffe, are needed but can be difficult to consume, Joshipura said. That's why there are various forms of data wrappers available to help get both public and private data into the different AI models. He noted that understanding what data can be shared and how it can be used is also a challenge.

"All the different components need to come together with shared best practices for different verticals, and that's where Acumos comes in," Joshipura said. "Acumos as a project brings the algorithms, data and the compute together, creating templates in a marketplace along with an app store that is vertical specific for a given use case that can be shared by peers across a community." 

With Acumos, multiple industry verticals can benefit from AI templates that will enable more rapid adoption, according to Joshipura. Rather than each individual company re-creating the data flow model for a deep learning model, he said the Acumos templates will enable organizations to reuse approaches that have already been proven.

LF Deep Learning

The Linux Foundation doesn't see Acumos as a stand-alone effort, but rather as the beginning of a broader AI initiative that will emerge over time. To help facilitate a governance framework for how different AI projects can join Acumos at the Linux Foundation, Joshipura said that a decision was made to create the LF Deep Learning Foundation. The LF Deep Learning Foundation is set to be an umbrella organization for multiple AI projects. 

"So as more deep learning projects come in, we will already have a governance model that is scoped out," he said.

The Linux Foundation in recent years has created multiple efforts that have become umbrella projects, or foundations of their own. One example is the LF Networking project, which was created as an umbrella effort for networking in January 2018. The Cloud Native Computing Foundation is another example; it was originally created with just the Kubernetes project and is now home to 17 projects.

http://www.eweek.com

TLS 1.3 Encryption Standard Moves Forward, Improving Internet Security

After years of development and 28 drafts, the Internet Engineering Task Force has approved Transport Layer Security 1.3 as a proposed internet standard. The new standard aims to provide improved security and cryptographic assurances for the internet.

"TLS allows client/server applications to communicate over the Internet in a way that is designed to prevent eavesdropping, tampering, and message forgery," the IETF announcement message for TLS 1.3 states.

TLS 1.3 is the successor to TLS 1.2, which was formally defined in August 2008 and is supported in most major web browsers and servers. Many browsers and servers, however, still also run the older TLS 1.1 protocol, which was defined in April 2006 as an update to TLS 1.0, the 1999 successor to the Secure Sockets Layer (SSL) 3.0 web encryption protocol.

Among the multiple improvements in TLS 1.3 is increased speed of operation. A core promise of the new encryption standard is that encrypted traffic will be handled by servers and browsers as fast as unencrypted traffic. When TLS 1.2 was first defined, only a small fraction of web traffic was encrypted. In 2018, encrypted HTTPS traffic accounts for over 50 percent of web traffic according to multiple reports including one from Cisco.

TLS 1.3 is also more secure than its predecessors as it removes support for older, less secure cryptographic algorithms.

"The list of supported symmetric algorithms has been pruned of all algorithms that are considered legacy," the TLS 1.3 draft standard states. "Those that remain all use Authenticated Encryption with Associated Data (AEAD) algorithms."

The removal of older protocols is an important step in limiting multiple types of attacks that have been reported by researchers in recent years. Attacks such as POODLE, FREAK and Logjam are primary reasons why SSLv3.0 and TLS 1.0 are not considered to be safe. Those attacks made use of older, less-secure cryptographic algorithms to exploit HTTPS. 
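
Administrators who want to see what their own servers actually negotiate can check with nothing more than the Python standard library; "TLSv1.3" will only appear once both the local OpenSSL build and the remote server support the new protocol.

# Check which TLS version and cipher suite a server negotiates.
import socket
import ssl

def negotiated(hostname, port=443):
    """Return the TLS version and cipher suite negotiated with a server."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.version(), tls.cipher()

version, cipher = negotiated("www.example.com")
print(version)   # e.g. 'TLSv1.2' or 'TLSv1.3'
print(cipher)    # (cipher suite name, protocol, secret bits)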

Going a step further, all of the public-key exchange methods supported in TLS 1.3 support forward secrecy.

"The goal of forward secrecy is to protect the secrecy of past sessions so that a session stays secret going forward," Patrick Crowley, engineering lead for Cisco's Stealwatch, wrote in a blog post. "With TLS 1.2 and earlier versions, a bad actor who discovered a server’s private key could use it to decrypt network traffic that had been sent earlier."

The TLS 1.3 specification also serves to remove a potential attack vector by encrypting all messages after the initial "ServerHello handshake" is made to initiate an encrypted data stream.

"The newly introduced EncryptedExtension message allows various extensions previously sent in clear in the ServerHello to also enjoy confidentiality protection from active attackers," the IETF standard draft states.

Deployment

Although TLS 1.3 is so far only a proposed internet standard, browser and infrastructure operators have already taken steps to implement earlier drafts of the protocol. Among the vendors that have long supported TLS 1.3 is Cloudflare, which began implementing draft support in September 2016.

"There are over 10 interoperable implementations of the protocol from different sources written in different languages," the IETF TLS 1.3 announcement states. " The major web browser vendors and TLS libraries vendors have draft implementations or have indicated they will support the protocol in the future."

http://www.eweek.com

InfluxData Expands in EMEA with Time Series Database Tools

InfluxData, maker of an open source platform built specifically for metrics, events and other time series data that empowers developers to build next-generation monitoring, analytics and IoT applications, today announced its continued expansion in EMEA to meet growing global demand for its time series database metrics and events platform, including the hiring of two key EMEA executives.

InfluxData has appointed former Canonical and Rackspace executive Rob Gillam as its new EMEA Sales Director. In addition, the company has named Dean Sheehan, a former Apcera and iWave executive, as its new Senior Director of Pre- and Post-Sales. The EMEA regional expansion comes on the heels of last month’s $35 million funding round to expand worldwide Sales, Marketing and R&D and fuel the company’s rapid acceleration internationally, where it has seen a significant rise in revenue and growth.

"InfluxData has experienced a dramatic increase in demand from EMEA enterprises for our purpose-built time series platform," said Evan Kaplan, InfluxData CEO. "The demand is driven by accelerating enterprise investment in IoT and DevOps monitoring and control applications. We are pleased to add Rob and Dean to our executive team to help us rapidly expand our EMEA presence. We will rely on their expertise and experience as we work to support our existing and new customers in the region."

According to DB-Engines' latest results, InfluxData is the overwhelming worldwide market leader, with its InfluxDB ranking nearly three times higher in user popularity than the nearest competitor. To date the company's EMEA operations account for 40 percent of total revenue, with projections for significant growth. InfluxData is also seeing synergies with important industry organizations in the region, such as its work around IoT and the Eclipse IoT Initiative. Due to popular demand, the company has also expanded its InfluxDays event series to London, set for June 14, its first InfluxDays event overseas.

The InfluxData Platform provides a comprehensive set of tools and services to accumulate metrics and events data, analyze the data, and act on the data via powerful visualizations and notifications. InfluxData's unique features enable customers to quickly build:

- Monitoring, alerting and notification applications supporting their DevOps initiatives

- IoT applications supporting millions of events per second, providing new business value around predictive maintenance and real-time alerting and control

- Real-time analytics applications that are focused on streaming data and anomaly detection

InfluxData has rapidly built its developer and customer base across industries -- including manufacturing, financial services, energy, and telecommunications -- by delivering the fastest-growing open source platform that enables customers to derive better business insights, data-driven real-time actions, and a consolidated single view of their entire infrastructure -- from applications to microservices, and from systems to sensors.
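
For developers, the basic accumulate-and-analyze loop looks roughly like the following sketch, which uses the open source influxdb Python client against an InfluxDB 1.x server; the host, database and measurement names are placeholders.

# Minimal sketch of the accumulate/analyze loop with the influxdb Python
# client. Host, database and measurement names are placeholders.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086)
client.create_database("sensors")
client.switch_database("sensors")

# Accumulate: write a time-stamped metric point.
client.write_points([{
    "measurement": "temperature",
    "tags": {"site": "factory-1", "device": "sensor-42"},
    "fields": {"celsius": 21.7},
}])

# Analyze: ask for a downsampled view of the last hour.
result = client.query(
    'SELECT MEAN("celsius") FROM "temperature" WHERE time > now() - 1h GROUP BY time(5m)')
for point in result.get_points():
    print(point)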

More than 400 customers, including Cisco Systems, Coupa Software, IBM, Houghton Mifflin Harcourt, Nordstrom, and Tesla, have selected InfluxData as their modern data platform for metrics and events. InfluxData is pioneering the shift to time series in a modern metrics and events platform, and is making it possible for customers to become data-driven and take on digital transformation initiatives.

https://www.newsfactor.com

Wednesday, 28 March 2018

Microsoft Officially Launches Azure Databricks Analytics Service

Months after initially announcing Azure Databricks in November 2017 at the Connect 2017 conference in New York City, Microsoft officially made the cloud service available on March 22.

Based on the Apache Spark framework for big data processing, Azure Databricks is aimed at organizations looking for a running start on their large-scale data analytics and artificial intelligence (AI) projects. It offers developers and data scientists a streamlined setup process and integrates with the company's other cloud-based services, including Azure SQL Data Warehouse and Cosmos DB.

The service was built in collaboration with Databricks, a company founded by the team behind Apache Spark, noted Rohan Kumar, corporate vice president of Azure Data at Microsoft, and Ali Ghodsi, CEO of Databricks, in a joint blog post. In 2016, Databricks made waves by claiming to be the first company to enable end-to-end, enterprise-grade security on Apache Spark with its Databricks Enterprise Security (DBES) framework. 

Continuing the big data theme, Microsoft also announced the general availability of an integration of Azure Event Hubs, a telemetry data-ingestion and event-streaming service, with Apache Spark. A new connector, which supports the Spark Core, Spark Streaming and Structured Streaming processing engines for Spark versions 2.1 through 2.3, allows developers to use the big data processing platform as the basis of large-scale analytics and machine learning applications.
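
In PySpark, using the connector looks roughly like the sketch below; the format name and option key follow the open source azure-event-hubs-spark connector's documented usage but should be treated as assumptions, and the connection string is a placeholder.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("eventhubs-demo").getOrCreate()

# Placeholder connection string; the option key follows the connector's docs.
eh_conf = {"eventhubs.connectionString": "<event-hubs-connection-string>"}

stream = (spark.readStream
          .format("eventhubs")   # format name registered by the connector package
          .options(**eh_conf)
          .load())

# Event payloads arrive in a binary `body` column; cast them to strings.
query = (stream.select(col("body").cast("string").alias("event"))
         .writeStream
         .format("console")
         .start())

query.awaitTermination()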

Microsoft also recently launched Azure Database for MySQL and Azure Database for PostgreSQL, cloud-delivered offerings based on the community versions of the open source databases.

"The GA [general availability] milestone means that, starting today, these services are bringing the community versions of MySQL and PostgreSQL with built-in high availability, a 99.9 percent availability SLA, elastic scaling for performance, and industry leading security and compliance to Azure," stated Tobias Ternstrom, principal group program manager of Azure Data at Microsoft, in a March 20 announcement. Ternstrom's team is also working on the official release of an Azure-branded MariaDB database service in the coming months, he added.

Elsewhere on the Azure cloud, Microsoft kicked off a public beta of Azure DNS Private Zones on March 23. The capability provides name resolution services to virtual networks in Azure without having to configure and manage a custom domain name server (DNS).

For administrators who like to keep a close eye on their Azure services, an updated alert platform consolidates notifications into a streamlined experience and delivers alerts faster than before. It is now capable of up-to-the-minute monitoring of various metrics and can issue alerts in less than five minutes. The so-called "next generation of Azure alerts" also allows administrators to set rules involving multi-dimensional metrics for more precise alerting.

Finally, Microsoft is warning customers using the Access Control Service for authentication that the service will shut down on Nov. 7, or a little over seven months from today. Users will want to get started on their migrations to Azure Active Directory or another system as soon as possible, since significant code changes will be required for most migrations, according to Microsoft. 

http://www.eweek.com

IBM Now Offering Cloud-Based Security for Mainframes

Amid all the talk, videos and live demos at IBM Think 2018 around artificial intelligence, quantum computing and dozens of other cool technologies, the host company also spent some quality time on good, old-fashioned security.

IBM on March 20 unveiled four new cloud services for mainframe-level data protection—solutions already trusted by the world's largest financial institutions and banks—for implementation in the IBM Cloud. These include what IBM claims is the first cloud hardware security module solution built with the industry's highest cryptographic standards (FIPS 140-2 Level 4 certified technology) offered by a public cloud provider.

With these new services, IBM aims to simplify how enterprises can securely bridge to the public cloud by helping to address their needs throughout the journey, from accelerating the migration of existing workloads to the cloud to modernizing and extending existing apps to delivering tools to build next-gen cloud native apps.

Cloud Services with Mainframe-Level Data Protection

Cloud adoption has been increasing at a rapid pace for several years, but security and data concerns still remain barriers to adoption. Check out these facts:

  • According to a recent study from the Ponemon Institute, only 40 percent of all data stored in the cloud is secured with encryption and key management solutions. According to the Breach Level Index, of the nearly 10 billion records breached since 2013, only 4 percent of the stolen data was encrypted and therefore rendered useless to the hackers;
  • A second Ponemon Institute survey pointed out that, when it comes to the security of privileged users, 80 percent of threats are internal, and that 58 percent of IT operations and security managers believe their organizations are unnecessarily granting access to individuals beyond their roles and responsibilities.

The new IBM Cloud Hyper Protect product line includes four new services that are made possible by bringing IBM Z into IBM’s global public cloud data centers. Through the IBM Cloud catalog, developers can gain easy access to industry-leading security capabilities to modernize their applications in the IBM Cloud. This includes:
  • IBM Cloud Hyper Protect Crypto Services are designed to enable developers to infuse security with data encryption and key management capabilities into their modern applications. These new services bring the capability of IBM Z to the IBM Cloud through the same state-of-the-art cryptographic technology relied upon by leading banks and financial institutions. This service supports secure key operations and random number generation via IBM Z cryptographic hardware. This is the industry’s first and only Cloud HSM Solution built with FIPS 140-2 Level 4 certified technology offered by a public cloud provider, and is the same technology that is the backbone of IBM’s Enterprise Blockchain Platform solution.
  • IBM Cloud Hyper Protect DBaaS is designed to enable enterprises to protect cloud-native database services, such as MongoDB – EE, with data stores that are security-rich and private. This is ideal for highly regulated industries that are responsible for sensitive personal data (SPI) such as credit card numbers, financial data, social security numbers and more.
  • IBM Cloud Hyper Protect Containers are designed to enable enterprises to deploy container-based applications and microservices, supported through the IBM Cloud Container service, that handle sensitive data within a security-rich Secure Service Container environment on the IBM Z/LinuxONE platform. This environment is built with IBM LinuxONE systems that offer extreme security, designed for EAL5+ isolation, and Secure Service Container technology designed to prevent privileged access from malicious users and cloud admins.
  • IBM Cloud Hyper Protect Developer Starter Kits are designed to enable iOS developers to safeguard credentials, services and data using the Hyper Protect cloud services when building enterprise apps on IBM Cloud. This complements the high level of security of Apple devices.
http://www.eweek.com

IBM launches private IoT analytics cloud platform

IBM has launched the latest effort to bring the nature of the cloud to the on-premises data center with Cloud Private for Data. It's an integrated data science, engineering and development platform designed to help companies gain insights from data sources such as IoT, online commerce, and mobile data.

Cloud Private for Data builds on IBM Cloud Private, a private cloud platform IBM introduced in November that brought Kubernetes into the data center. Cloud Private for Data expands on that greatly, adding IBM Streams for data ingestion, IBM Data Science Experience, Information Analyzer, Information Governance Catalogue, Data Stage, Db2, and Db2 Warehouse. All run on the Kubernetes platform, allowing services to be deployed “in minutes,” IBM claimed, and to scale up or down automatically as needed.

IBM said the solution is meant to provide a data infrastructure layer for AI behind firewalls. In the future, the Cloud Private for Data will run on all clouds, as well as be available in industry-specific solutions for financial services, healthcare, manufacturing, and others.

“Whether they are aware of it or not, every company is on a journey to AI as the ultimate driver of business transformation,” Rob Thomas, general manager of IBM Analytics, said in a statement. “But for them to get there, they need to put in place an information architecture for collecting, managing, and analyzing their data. With today’s announcements, we are planning to bring the AI destination closer and give access to powerful machine learning and data science technologies that can turn data into game-changing insight.”

IBM debuts Data Science Elite team

Additionally, IBM debuted its Data Science Elite team, a no-charge consultancy to help enterprises in their machine learning and artificial intelligence (AI) strategies.

Described as a “global team of data scientists, machine learning engineers, and decision optimization engineers,” the Data Science Elite Team was assembled to help clients with particular use cases. So far, IBM has assigned 30 people to the Data Science Elite Team, but the company plans to expand that to 200 over the next few years.

https://www.networkworld.com

Nimbus Data Previews World’s Largest (100TB) Solid-State Drive

Flash drive maker Nimbus Data may be a smaller storage provider than such major players as SanDisk/WD, Samsung and Toshiba, but it is now ahead of them all in one key category: drive capacity.

Nimbus this week introduced its ExaDrive DC100, the largest-capacity (100TB) solid-state drive ever produced. In addition to having more than three times the capacity of the closest competitor (32TB from Samsung and SanDisk/WD), Nimbus claims the ExaDrive draws 85 percent less power per terabyte, reducing the total cost of ownership per terabyte by 42 percent compared to competing enterprise SSDs.

Storage admins can do their own math to see if that type of performance actually happens when using such a large SSD, but this is the stake in the ground from Irvine, Calif.-based Nimbus.

“As flash memory prices decline, capacity, energy efficiency, and density will become the critical drivers of cost reduction and competitive advantage,” Thomas Isakovich, CEO and founder of Nimbus Data, told eWEEK in a March 19 media advisory. “The ExaDrive DC100 meets these challenges for both data center and edge applications, offering unmatched capacity in an ultra-low power design.”

Optimized for Capacity and Efficiency, Not Speed

While existing SSDs focus on speed, the DC100 is optimized for capacity and efficiency, Isakovich said. With its patent-pending multiprocessor architecture, the DC100 supports much greater capacity than monolithic flash controllers.

Using 3D NAND, each DC100 provides enough flash capacity to store 20 million songs, 20,000 HD movies, or 2,000 iPhones (32GB each)  worth of data in a device small enough to fit into a back pocket. Inside data centers, a single rack of DC100 SSDs can store more than 100PB of content. Data centers can reduce power and cooling costs by a whopping 85 percent per terabyte, enabling more workloads to move to flash, Isakovich said.

Designed in the same 3.5-inch form factor and SATA interface used by hard disk drives, the DC100 is plug-and-play compatible with hundreds of storage and server platforms. The unit’s low-power (0.1 watts/TB) and portability also make it well-suited for edge and IoT applications, Isakovich said.

Good for High Number of Use Cases

As for speed specs, the DC100 achieves up to 100,000 IOPS (read or write) and up to 500 MBps throughput. This type of balanced read/write performance works just fine for a wide range of workloads, from big data and machine learning to rich content and cloud infrastructure, Isakovich said. It wouldn't be optimal for, say, high-frequency stock trading or processing human genome data.

The ExaDrive DC series includes both 100 TB and 50 TB models. It is currently sampling to strategic customers and will be generally available this summer, Isakovich said. He said pricing will be similar to existing enterprise SSDs on a per-terabyte basis, but specific numbers were not released.

http://www.eweek.com

Microsoft Announces Project Denali SSD Storage Specification Effort

Open source hardware is hitting its stride with new initiatives designed to lower data center hardware costs and give IT departments more flexibility in how they configure systems. 

A new study released here March 20 at the Open Compute Project Summit found that revenue generated from OCP equipment in 2017 reached $1.2 billion. That figure, compiled by IHS Markit, doesn’t include revenue from project board members Facebook, Goldman Sachs, Intel, Microsoft, and Rackspace. The Open Compute Project Foundation was formed in 2011 by Facebook, Intel and Rackspace. 

Most of these companies got involved in the open source hardware movement because they were building their own “white label” cloud servers, storage and network components to reduce their infrastructure costs, rather than buying brand-name equipment.

Executives from several major vendors took to the stage to discuss current and future “open compute” hardware and network efforts and detail the benefits of ongoing projects. 

Kushagra Vaid, general manager of Microsoft’s Azure Cloud Hardware Infrastructure group, announced Project Denali, a specification for standardizing solid-state drive (SSD) firmware interfaces. The goal is to give enterprises and cloud computing providers more flexibility in how they use flash storage in large-scale data centers.

With Denali, Microsoft says, cloud service providers like itself will be able to better optimize their workloads by making management of parts of the drive more accessible, and to reduce the cost of SSD deployments.

“Today’s SSDs aren’t designed to be cloud-friendly,” said Vaid. “We want to drive every ounce of efficiency in the drive with 24-7 availability. We aren’t exposing the capabilities of these flash devices and that means we can’t take advantage of innovation.”

Rather than adhering to the generic design of today’s SSDs, Vaid said, Denali will give cloud providers new storage options to handle high-performance workloads more efficiently at scale.
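Microsoft has framed Denali as splitting responsibilities that today live entirely inside the SSD's firmware: the host takes over data placement decisions while the drive keeps handling the raw flash media. The Python sketch below is not the Denali interface itself, just a minimal illustration of that host-managed idea, with a host-side mapping layer deciding where logical blocks land and a simulated device that only knows how to program, read and erase its flash bands.

    # Illustrative sketch of host-managed flash placement (not the Denali spec).
    class FlashDevice:
        """Simulated drive: knows only how to program, read and erase flash bands."""
        def __init__(self, bands: int, pages_per_band: int):
            self.media = {b: [None] * pages_per_band for b in range(bands)}

        def program(self, band: int, page: int, data: bytes) -> None:
            assert self.media[band][page] is None, "flash pages are written once per erase"
            self.media[band][page] = data

        def read(self, band: int, page: int) -> bytes:
            return self.media[band][page]

        def erase(self, band: int) -> None:
            self.media[band] = [None] * len(self.media[band])

    class HostFTL:
        """Host-side translation layer: owns placement and the logical-to-physical map."""
        def __init__(self, device: FlashDevice, pages_per_band: int):
            self.dev = device
            self.pages_per_band = pages_per_band
            self.l2p = {}                 # logical block address -> (band, page)
            self.band, self.page = 0, 0   # append-only write pointer

        def write(self, lba: int, data: bytes) -> None:
            self.dev.program(self.band, self.page, data)
            self.l2p[lba] = (self.band, self.page)
            self.page += 1
            if self.page == self.pages_per_band:   # band full: advance to the next one
                self.band, self.page = self.band + 1, 0

        def read(self, lba: int) -> bytes:
            band, page = self.l2p[lba]
            return self.dev.read(band, page)

    dev = FlashDevice(bands=4, pages_per_band=8)
    ftl = HostFTL(dev, pages_per_band=8)
    ftl.write(42, b"hello")
    print(ftl.read(42))

In a real cloud deployment the host layer would also schedule garbage collection and wear leveling across bands, which is exactly the kind of control Vaid argues providers cannot exercise with today's generic drives.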

Jason Waxman, general manager of the Datacenter Solutions Group at Intel, said that with the growth of the Internet, social media and different media types including video, a new generation of hyperscale data centers is needed. “We believe this next level of hyperscale performance will drive the next $1 billion wave of OCP products. We believe 80 percent of all workloads in 2025 will be deployed in hyperscale data centers.”

He said that from a hardware perspective what’s needed is technology that allows networks to be virtualized and that accelerates cloud computing. Intel has already contributed 75 product designs to the OCP and is planning more contributions, including Intel Rack Scale Design and silicon photonics, which Waxman said are optimized for hyperscale performance.

Rack Scale Design uses virtualization, cloud computing and other software-oriented approaches while allowing workloads to run on bare metal for faster performance. He said Intel’s “disaggregation” approach separates out pools of accelerators, storage and other resources that can be refreshed independently.

Waxman also referenced the recently discovered Meltdown and Spectre security flaws in processors made by Intel and others. “We have microcode patches for all the Intel processors of the last five years and additional hardware fixes. That was just the start,” said Waxman.

“Thank you to the community because it’s important we understand the impact on the community. All this infrastructure is great, but without a security mindset it won’t mean a lot,” he added.

Another speaker, Omar Baldonado, engineering director at Facebook, explained why the social media giant is committed to advancing hardware standards. Facebook was a cofounder of OCP seven years ago.

“Facebook was built on open source software and we want to continue to leverage the hardware we need,” said Baldonado. “The reason we’re still here [with OCP] is that we want to move fast and innovate and believe the best way to do that is to work with the Open Compute community and the companies you represent.”

Over the years Facebook has contributed server, storage and network design specifications to the OCP. “We have a good problem: billions of people use our applications, post and watch videos, text and do video chats all over the world,” said Baldonado. “That’s only part of the story, though. The traffic from Facebook to the Internet is growing, but it’s dwarfed by the internal [Facebook] traffic.”

Baldonado said Facebook takes a three-pronged approach to advancing its hardware infrastructure. First, it develops hardware components with the help of hardware vendors. “Then we scale out that solution around the world. When we hit scaling limits, we innovate.” Those innovations, in turn, are contributed back to the OCP.

http://www.eweek.com

VMware Makes Big Move into the Office with BI-Driven Workspace

VMware, whose software resides in virtually every data center in the world and which also wants to become a player inside enterprise office PCs, released new features for its Workspace ONE platform March 21.

The Palo Alto, Calif.-based virtualization software maker claims that this is the first and only “intelligence-driven digital workspace” designed to improve user experience and enable predictive security across an IT environment.

New feature No. 1 is Workspace ONE Intelligence, a cloud-based service that aggregates and correlates data about users, apps, networks and endpoints.

Feature No. 2 is the Workspace ONE Trust Network, which combines data and analytics from Workspace ONE with a new network of trusted security partner solutions to deliver predictive and automated security.

Lastly, feature No. 3 is Workspace ONE AirLift, a new Windows 10 co-management app designed to help organizations modernize their approach to PC lifecycle management (PCLM). 

A New Intelligence-Driven Digital Workspace

Most IT companies contend that business intelligence in the form of new-gen apps and cloud services is foundational to a smart, automated and secure enterprise. However, organizations have struggled to gain visibility across all end users, devices and applications, because the data is spread across many systems and tools. This lack of visibility most often results in poor user experience, greater operational costs and a lack of proper security controls, VMware said.

Workspace ONE Intelligence aggregates and correlates data about users, apps, networks and endpoints, and it features a decision engine that uses that data to provide actionable recommendations and automation.

Capabilities of Workspace ONE Intelligence include:

  • Integrated Insights brings actionable information and recommendations for the entire digital workspace, across all endpoints, apps, networks and user experience, into one comprehensive view. Integrated Insights pinpoints what’s working and what’s not in the environment, including application performance, and offers tangible recommendations that IT and development teams can easily act on.
  • Insights-Driven Automation, powered by a decision engine, helps customers rapidly automate remediation across their entire digital workspace instead of piecing together decisions across several stand-alone tools. With the decision engine, IT can create rules to automate and optimize common tasks, such as remediating vulnerable Windows 10 endpoints with a critical patch or setting conditional access controls to apps and services at the group or individual level. Automating alerts, notifications and remediation steps improves employee self-service and cuts the time spent on issues, such as battery changes or helpdesk tickets, that get in the way of productivity. Organizations can also extend these rules into workflows with third-party services such as ServiceNow or Slack; an illustrative rule sketch follows this list.
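Workspace ONE's rules are built in the product's own console, and VMware has not published the syntax here, so the following is a purely hypothetical sketch of the kind of automation rule the decision engine describes. The field names, patch ID and webhook URL are all invented for illustration and are not Workspace ONE APIs.

    # Hypothetical automation rule, illustrating the pattern described above.
    # Field names, the patch ID and the Slack webhook are invented, not Workspace ONE syntax.
    RULE = {
        "name": "Remediate vulnerable Windows 10 endpoints",
        "condition": {"platform": "Windows 10", "missing_patch": "KB-EXAMPLE-0000"},
        "actions": [
            {"type": "install_patch", "patch_id": "KB-EXAMPLE-0000"},
            {"type": "restrict_access", "scope": "corporate_apps"},
            {"type": "notify", "channel": "https://hooks.slack.com/services/EXAMPLE"},
        ],
    }

    def evaluate(device: dict, rule: dict) -> list:
        """Return the actions to run if the device matches the rule's condition."""
        cond = rule["condition"]
        matches = (device.get("platform") == cond["platform"]
                   and cond["missing_patch"] in device.get("missing_patches", []))
        return rule["actions"] if matches else []

    laptop = {"platform": "Windows 10", "missing_patches": ["KB-EXAMPLE-0000"]}
    for action in evaluate(laptop, RULE):
        print("queue:", action["type"])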

Connecting Security Silos to Improve Data Visibility

When organizations are forced to manage endpoint remediation and access management with multiple disconnected solutions, visibility across the ecosystem is limited and vulnerabilities are bound to slip through the cracks.

This is why VMware launched the Workspace ONE Trust Network, which combines security capabilities currently available in Workspace ONE with those of VMware’s new security partner network. By extending Workspace ONE Intelligence and its application programming interfaces (APIs), VMware security partners will be able to share and correlate threat data with Workspace ONE, giving joint customers deeper insight into their digital workspaces.

Carbon Black, CrowdStrike, Cylance, Lookout, McAfee, Netskope and Symantec are the initial partners who will integrate their security solutions with Workspace ONE as part of the Workspace ONE Trust Network.

Updating Windows 10 Management

To deliver an effective workspace for all their employees, companies are now preparing to transition from legacy Windows management models to an updated approach. VMware claims that Workspace ONE offers the only unified endpoint management (UEM) platform with integrated intelligence that supports all stages of the Windows 10 PC lifecycle, from onboarding to retirement, and provides a modern approach for any management task.

So VMware added Workspace ONE AirLift, which enables co-management of Windows 10 PCs alongside Microsoft System Center Configuration Manager (SCCM). AirLift’s coexistence with SCCM allows customers to speed up and de-risk transition efforts by migrating PCLM tasks, such as device onboarding, patching, software distribution and remote user support, to a more cost-efficient, secure, cloud-based management model.

This enables organizations to move to the new model without replacing SCCM or requiring costly PC and SCCM server upgrades.

Additional Workspace ONE features include:

  • Simplified Mac Adoption: The new Workspace ONE client for macOS delivers a consistent experience as employees switch between OS platforms. Using Workspace ONE, employees can access all apps from their macOS device, including virtual Windows apps.
  • Extended Security for O365 Apps: Workspace ONE now integrates the Microsoft Graph APIs to give IT O365-specific security features, such as new Data Loss Prevention (DLP) controls, as well as continuous device-risk monitoring that cuts off O365 access instantly if risk increases (a rough sketch of this kind of Graph query follows the list). Customers can ensure vital business data is secure while increasing O365 adoption and delighting employees with simple access that bridges the gap between the O365 ecosystem and other work apps.
  • VMware Boxer with Intelligent Workflows Empowers Employee Mobile Moments: New mobile flows surface context-based actions in VMware Boxer secure email. This enables users to complete tasks across many business applications, such as Salesforce, Jira or Concur, without leaving the Boxer email app.
  • VMware Cloud on Azure VDI Beta: VMware announced a beta for VMware virtual desktops on Azure infrastructure in partnership with Microsoft. Horizon Cloud on Azure VDI will expand upon VMware’s previously announced support for published applications on Azure.
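VMware’s Graph integration happens inside Workspace ONE itself, so the snippet below is only a rough, assumed illustration of the device-risk signal Microsoft Graph can expose: it lists Intune-managed devices and flags the ones whose compliance state has slipped. The OAuth scope and the client-side filtering are assumptions; the endpoint is Graph’s standard managedDevices collection.

    # Rough illustration of pulling a device-risk signal from Microsoft Graph.
    # Assumes an OAuth 2.0 token with DeviceManagementManagedDevices.Read.All;
    # this is not how Workspace ONE itself consumes the Graph APIs.
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"

    def noncompliant_devices(token: str) -> list:
        """List managed devices whose complianceState is anything but 'compliant'."""
        resp = requests.get(
            f"{GRAPH}/deviceManagement/managedDevices",
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        devices = resp.json().get("value", [])
        return [d for d in devices if d.get("complianceState") != "compliant"]

    # Example use, given a valid ACCESS_TOKEN:
    # for d in noncompliant_devices(ACCESS_TOKEN):
    #     print(d.get("deviceName"), d.get("complianceState"))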
http://www.eweek.com