Thursday, 14 February 2019

IBM Taps Kubernetes to Unleash Watson Across Clouds

IBM is using Kubernetes to help unleash its Watson artificial intelligence (AI) platform to work across any cloud environment, including private, public, or hybrid multi-cloud environments. This expansion also includes support for cloud ecosystems powered by IBM rivals like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

The move will see Watson applications like Watson Assistant and Watson OpenScale integrated with IBM’s Cloud Private (ICP) for Data and run as microservices using Kubernetes. This will allow for those microservices to be portable across the different infrastructure types and cloud ecosystems.

For IBM, the move allows it to broaden the reach of its Watson AI platform. It will also allow organizations to use the Watson platform to help analyze and manage data across all of their data sources.

IBM CEO Ginni Rometty during a keynote at this week’s IBM Think event said the move makes Watson “the most open, scalable AI for business in the world.”

IBM launched its ICP platform in late 2017. It’s built on a Kubernetes-based container architecture and solidified IBM’s embrace of Kubernetes.

The vendor announced the ICP for Data extension early last year. It allows companies to glean insight from their data resources on the way to supporting enterprise AI services. IBM has since been gradually layering different Watson AI capabilities onto the platform, including Watson Speech-to-Text and Watson Assistant last year.

The Watson Assistant platform helps developers and non-technical users create conversational AI products, ranging from simple chatbots to complex enterprise-grade products for customer service. Watson OpenScale is IBM’s open AI platform for managing multiple AI instances.

The move could also boost IBM’s unique cloud positioning. A recent Synergy Research Group report found that IBM had lost market share among cloud infrastructure service providers during the fourth quarter of last year compared with the previous year. This made IBM the only cloud provider among the market’s five largest providers to post such a loss.

However, John Dinsdale, chief analyst at SRG, noted that IBM has a slightly different focus than its rivals “as it remains the strong leader in the hosted private cloud services segment of the market.”

Banco Santander Deal
In addition to expanding the reach of its Watson platform, IBM this week also announced a five-year, $700 million deal with Banco Santander to help the company update its IT architecture toward a hybrid cloud environment. The deal will see Santander use Watson to improve customer service and employee productivity.

Santander will work with IBM to enhance the bank’s recently created Cloud Competence Center. It will also use IBM’s DevOps and API Connect platforms to help develop, iterate, and launch new or upgraded applications.

IBM is working with a handful of banks on similar migrations, including ICBC Argentina, Lloyds Banking Group, and Royal Bank of Canada.

https://www.sdxcentral.com

Monday, 4 February 2019

Career Shift from a Tester to Business Analyst – A Step by Step Guide

A testing professional is required to thoroughly test the developed software to ensure that it meets the end customer’s requirements.

A Business Analyst is also responsible for verifying whether the software built and delivered meets the end customer’s requirements. This overlap between the two roles makes it easier for a tester to switch to a business analyst role.

If a BA and a tester switch roles, each can apply their skill set in ways that benefit the project. When it comes to testing the software system, the tester and the BA work as two sides of the same coin.

Why Business Analysis?
A testing professional has a thorough knowledge and understanding of software and how to improve it, along with an eye for minute detail. This skill set opens the door for a tester into many roles in the IT industry today.

With a good understanding of the development lifecycle and process, a tester can choose to become a release manager, automation engineer, QA strategist, solution architect, senior manager, or, of course, business analyst.

Having said that, a switch to business analysis is one of the more promising options in today’s market. Business analysis is a broader role than testing or any of the other roles mentioned above.

It’s a promising career avenue and a lucrative one too. A tester who loves to travel across the globe can really enjoy a challenging and satisfying BA role. A business analyst can further climb the ladder to become a lead/senior business analyst, consultant, product owner, or product manager, roles that carry quite a bit of glamour.

I strongly recommend business analysis as a career-switch option for testing professionals who have excellent analytical, documentation, and communication skills, enjoy customer interaction, like a pinch of glamour in their work profile, and, of course, love being globetrotters.

View at: https://www.softwaretestinghelp.com/career-shift-from-tester-to-ba/

Wednesday, 23 January 2019

Open-Source Metasploit Framework 5.0 Improves Security Testing

Among the most widely used tools by security researchers is the open-source Metasploit Framework, which has now been updated with the new 5.0 release.

Metasploit Framework is a penetration testing technology that provides security researchers with a variety of tools and capabilities to validate the security of a given application or infrastructure deployment. With Metasploit, researchers can also test exploits against targets to see if they are at risk, in an attempt to penetrate the defensive measures that are in place. The 5.0 release introduces multiple new and enhanced capabilities, including automation APIs, evasion modules, and usability improvements.

"As the first major Metasploit release since 2011, Metasploit 5.0 brings many new features, as well as a fresh release cadence," Brent Cook, senior manager at Rapid7, wrote in a blog post. 

The Metasploit project celebrated its 15th anniversary in 2018 and iterates on major version numbers infrequently; Metasploit 5.0 is the first major version change since Metasploit 4 was released in 2011. Between major versions, however, a steady stream of exploit modules and incremental improvements is continuously added to the framework.

The Metasploit project itself was created by HD Moore; commercial development moved to Rapid7 when it acquired the project in 2009. Rapid7 provides Metasploit Pro, the commercially supported version of the Metasploit Framework.

Metasploit 5.0 Features

Among the core new features in Metasploit 5.0 is the extensibility of the framework's database back end, which can now be run as a REST web service. By exposing the database as a web service, multiple external tools can pull from the same data store and interact with each other.

"This release adds a common web service framework to expose both the database and the automation APIs," the release notes for Metasploit 5.0 state. "This framework supports advanced authentication and concurrent operations." 

Evasion

Metasploit has had different types of evasion capabilities since at least the 3.0 release in 2006. Evasion refers to the ability to get around, bypass or "evade" a target's existing defenses, which could include antivirus, firewall, intrusion prevention system (IPS), or other technologies and security configurations. With the evasion modules capability in Metasploit 5.0, researchers can now more easily create and test their own evasion module payloads.

"The purpose of the evasion module type is to allow developers to build executables specifically to evade antivirus, and hopefully this creates a better pentesting experience for the users," Wei Chen, lead security engineer at Rapid7, wrote in the GitHub code commit for the evasion module.

Usability

Metasploit 5.0 now also brings improved usability for security researchers to test multiple targets at scale.

"While Metasploit has supported the concept of scanners that can target a subnet or network range, using an exploit module was limited to only one host at a time," Cook wrote. "With Metasploit 5.0, any module can now target multiple hosts in the same way by setting RHOSTS to a range of IPs or referencing a host’s file with the file:// option."
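
Metasploit's own RHOSTS parsing is richer than this (it also accepts CIDR notation and file:// references), but as a rough illustration of what "a range of IPs" expands to, here is a hedged Python sketch; the `expand_rhosts_range` helper is a hypothetical name, not part of Metasploit:

```python
import ipaddress

def expand_rhosts_range(spec: str):
    """Expand a 'start-end' IPv4 range into a list of individual hosts.

    Illustrative only: real RHOSTS parsing in Metasploit handles more
    formats (CIDR blocks, hostnames, file:// host lists) than this sketch.
    """
    start_s, end_s = spec.split("-")
    start = ipaddress.IPv4Address(start_s.strip())
    end = ipaddress.IPv4Address(end_s.strip())
    # IPv4 addresses convert cleanly to/from integers, so a range is a loop
    return [str(ipaddress.IPv4Address(n)) for n in range(int(start), int(end) + 1)]

print(expand_rhosts_range("10.0.0.1-10.0.0.4"))
# → ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.4']
```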

Usability also gets a boost with improved performance, including faster startup and searching capabilities than in previous versions of Metasploit. Additionally, with Metasploit 5.0, researchers are now able to write and use modules in any of three programming languages: Go, Python and Ruby. Overall, development for Metasploit 5.0 benefited from an updated process that included a stable branch that is used by Rapid7 and other distributions for everyday use and an unstable branch where new development can be rapidly added before it’s ready for broader consumption. 

"The takeaway is that Metasploit now has a more mature development process that we hope to continue leveraging in the future to enable even bigger improvements to the code base," Cook wrote.

https://www.eweek.com

Sunday, 20 January 2019

AWS Unveils New Data Backup Service

Amazon Web Services announced AWS Backup, a centralized service for customers to back up their data across both AWS’ public cloud as well as their on-premises data centers.

The company said enterprises are having to deal with data located in multiple services such as databases, block storage, object storage, and file systems. While all of these services in AWS provide backup capabilities, customers often create custom scripts to automate scheduling, enforce retention policies, and consolidate backup activity to better meet their business and regulatory compliance requirements.

AWS Backup removes the need for custom scripts by providing a centralized place to manage backups. Using the AWS Management Console, customers can create a policy that defines how frequently backups are created and how long they are stored.
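
Such a policy amounts to a schedule plus a retention rule. As a hedged sketch of what defining one programmatically might look like (the plan name, vault name, and schedule below are illustrative assumptions; `create_backup_plan` is the relevant boto3 call):

```python
# A backup plan document in the shape boto3's AWS Backup client expects.
# All names and the schedule here are hypothetical examples.
backup_plan = {
    "BackupPlanName": "daily-35day-retention",
    "Rules": [
        {
            "RuleName": "daily-backups",
            "TargetBackupVaultName": "Default",
            # AWS cron expression: run every day at 05:00 UTC
            "ScheduleExpression": "cron(0 5 * * ? *)",
            # Retention: delete recovery points after 35 days
            "Lifecycle": {"DeleteAfterDays": 35},
        }
    ],
}

# With AWS credentials configured, the plan would be registered via boto3:
#   import boto3
#   client = boto3.client("backup")
#   client.create_backup_plan(BackupPlan=backup_plan)
```

Resources (EBS volumes, RDS databases, and so on) are then assigned to the plan, after which backups run on the defined schedule without custom scripting.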

Bill Vass, VP of storage, automation, and management services at AWS, said in a statement that many customers want one place to go for backups versus having to do it across multiple, individual services. “Today, we are proud to make AWS Backup available with support for block storage volumes, databases, and file systems, and over time, we plan to support additional AWS services,” said Vass.

Initially, AWS Backup is integrated with Amazon DynamoDB, Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), Amazon Relational Database Service (Amazon RDS), and AWS Storage Gateway.

Customers can also back up their on-premises application data through the AWS Backup integration with AWS Storage Gateway.

Ever since AWS announced its AWS Outposts offering in November 2018, it’s been making a concerted effort to include its customers’ on-premises data centers in its offerings. AWS Outposts helps customers connect their on-premises environments to AWS services in the public cloud.

The company’s recent purchase of TSO Logic is one example of its effort to help its customers across hybrid clouds. TSO Logic’s software is designed to find the optimal spot to place workloads, whether on public or private clouds.

Amazon’s move into the backup space may hit some other backup vendors in the wallet, including Rubrik, Commvault, Veeam, and Kaseya.

https://www.sdxcentral.com

Friday, 11 January 2019

TriggerMesh Clears Serverless Bridge Between AWS Lambda, Knative

TriggerMesh launched an open source project that stitches together Amazon Web Services’ Lambda serverless architecture with the Knative open source set of serverless components to bridge the portability gap between the two platforms. That gap between serverless platforms has been one of the main barriers to broader serverless adoption.

The TriggerMesh bridge is in the form of its Knative Lambda Runtime (KLR, which is pronounced “clear”) project. The high-level focus of the project is to provide portability of AWS Lambda functions to Knative-enabled clusters and serverless cloud infrastructure without needing to rewrite the serverless functions.

“We just opened up every Lambda function to run on Knative,” said Mark Hinkle, co-founder of TriggerMesh. “That’s a huge ecosystem.”

The KLR platform uses a combination of the AWS Lambda custom runtime API and the Knative Build system. The KLRs are constructed as Knative build templates that can run a Lambda function in a Kubernetes cluster installed with Knative. The custom AWS runtime interface provides a clone of the Lambda cloud environment where the function runs.

“With these templates you can run your AWS Lambda functions as is in a Knative-powered Kubernetes cluster,” added TriggerMesh’s other co-founder Sebastian Goasguen.
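
In other words, a function written against Lambda's standard Python handler signature should run unchanged under such a runtime. A minimal example of that signature (the handler body itself is illustrative):

```python
def handler(event, context):
    """A minimal AWS Lambda-style Python function.

    Lambda invokes a function with an event payload and a context object;
    portability layers like KLR aim to run a function with this same
    signature unchanged on a Knative-enabled Kubernetes cluster.
    """
    name = (event or {}).get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}"}

# Invoked locally the same way a Lambda runtime would call it:
print(handler({"name": "Knative"}, None))
# → {'statusCode': 200, 'body': 'Hello, Knative'}
```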

TriggerMesh launched last November with a serverless management platform that runs on top of Knative. This allows developers to automate the deployment and management of serverless and functions-as-a-service (FaaS) across different cloud platforms.

Goasguen said that work on KLR started after AWS unveiled its custom runtime API at its recent re:Invent conference.

“We saw this as a way to run Lambda on Knative clusters,” Goasguen said. “We worked hard during Christmas and the New Year and announced the project.”

Goasguen admitted that there is still work to be done with the KLR platform. He cited the need to add support for other programming languages and the need for more thorough testing to make sure it’s production ready.

The other missing piece is eventing. Events are what trigger a function to run, and eventing is currently still siloed to specific cloud platforms. Goasguen said he hopes to see work to expand eventing beyond those silos over the next six months.

Knative Portability
The KLR platform targets a significant pain point that continues to haunt the serverless space, which is that AWS’ Lambda is the most used platform for serverless deployments but is only compatible with AWS infrastructure. This ties developers that use Lambda-based serverless functions to AWS.

This model is also used by other large cloud providers and their respective hosted serverless platforms. These include Microsoft’s Azure Functions and Google’s Cloud Functions.

But, a number of efforts have popped up that allow developers to move their serverless functions between cloud providers. Knative is one of those projects.

Knative was launched last year under the notion of using Kubernetes as a portability layer for serverless. It provides a set of components that allows for the building and deployment of container-based serverless applications that can be transported between cloud providers. Basically, Knative is using the market momentum behind Kubernetes to provide an established platform on which to support serverless deployments that can run across different public clouds.

Knative’s ability to support portability across cloud platforms was mentioned several times during keynote speeches at the recent KubeCon + CloudNativeCon North America 2018 event in Seattle.

“This portability is really important and what is behind the industry aligning behind Knative,” explained Aparna Sinha, group product manager for Kubernetes at Google, during her keynote address at the KubeCon event.

Jason McGee, vice president and CTO for IBM’s Cloud Platform, told attendees that Knative was an important project in unifying the dozens of serverless platforms that have flooded the market.

“That fragmentation, I think, holds us all back from being able to really leverage functions as part of the design of our applications,” McGee said during his keynote. “I think Knative is an important catalyst for helping us come together to bring functions and applications into our common cloud native stack in a way that will allow us to move forward and collaborate together on this common platform.”

https://www.sdxcentral.com

Saturday, 29 December 2018

Serverless and Knative Underline Cloud Native Evolution

Serverless computing formed an interesting subplot at the recent KubeCon + CloudNativeCon North America 2018 event in Seattle, where a number of keynotes and panels were dedicated to how these systems will shape the evolution of cloud native.

Most of the attention, not surprisingly, centered on the Knative platform that relies on Kubernetes as an orchestration layer for serverless workloads. The platform was developed by Google, Pivotal, IBM, SAP, and Red Hat, and launched at the Google Next event in July.

Knative is an open source set of components that allows for the building and deployment of container-based serverless applications that can be transported between cloud providers. It’s focused on orchestrating source-to-container builds; routing and managing traffic during deployment; auto-scaling workloads; and binding services to event ecosystems.

It’s basically a way to use Kubernetes to liberate management of serverless platforms from specific cloud providers. Many of the current serverless platforms are based on and tied to a specific cloud platform, which can lead to vendor lock-in for an organization adopting one of those platforms. Those include AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions. Knative can break this lock-in by providing a platform that can be accessed regardless of the underlying cloud.

“This portability is really important and what is behind the industry aligning behind Knative,” explained Aparna Sinha, group product manager for Kubernetes at Google, during her keynote address at the KubeCon event.

Jason McGee, vice president and CTO for IBM’s Cloud Platform, told attendees that Knative was an important project in unifying the dozens of serverless platforms that have flooded the market.

“That fragmentation, I think, holds us all back from being able to really leverage functions as part of the design of our applications,” McGee said during his keynote. “I think Knative is an important catalyst for helping us come together to bring functions and applications into our common cloud native stack in a way that will allow us to move forward and collaborate together on this common platform.”

He added that Knative also teaches Kubernetes how to deal with building and serving applications and functions, which makes it an important piece in the cloud native landscape.

Maturation Needed
Despite the growing hype, most also took time to mention that serverless platforms, and more specifically Knative itself, remain relatively immature. Modern serverless platforms themselves are less than five years old, and Knative only recently released its 0.2 version.

Dan Berg, a distinguished engineer at IBM’s Cloud Kubernetes Service, told SDxCentral in an interview that while interest around Knative has surpassed expectations, maturity of the platform remains a significant challenge to broader adoption.

“I think maturity is where Knative needs to really evolve over the next year,” Berg said. “The interest is there, but it’s just still too early.”

That maturation is expected, with some already predicting that Knative was in line to become the serverless platform of choice to run on Kubernetes.

“Knative will almost certainly become the standard plumbing for functions-as-a-service on Kubernetes,” wrote James Governor, analyst and co-founder at RedMonk, in a blog post shortly after the platform was announced.

https://www.sdxcentral.com

Monday, 24 December 2018

SIG-Auth Bolstering Security Authorization in Kubernetes

Today’s topics include Kubernetes security authentication moving forward with SIG-Auth, and Elastifile providing scalable file storage for Google Cloud.

One of the primary Special Interest Groups within Kubernetes is SIG-Auth, whose members are tasked with looking at authorization security issues. At the KubeCon + CloudNativeCon NA 2018 in Seattle last week, SIG-Auth leaders outlined how the group works and its current and future priorities for the Kubernetes project.

"SIG-Auth is responsible for designing and maintaining parts of Kubernetes, mostly inside the control plane, that have to deal with authorization and security policy," said Google Software Engineer Mike Danese.

He said SIG-Auth has multiple subprojects detailed in the group's GitHub repository. Those subprojects include audit, encryption at rest, authenticators, node identity/isolation, policy, certificates and service accounts.

Over the course of 2018, SIG-Auth added a number of security authorization features to Kubernetes, including better node isolation, protection of specific labels and self-deletion, and better audit capabilities.

Elastifile, a provider of enterprise-grade, scalable file storage for the public cloud, announced on Dec. 11 a fully managed, scalable file storage service for Google Cloud Platform. Through its tight integration with Google Cloud infrastructure, Elastifile Cloud File Service makes it easy to deploy, manage, and scale enterprise file storage in the public cloud.

According to CEO Erwan Menard, the software runs on any server and can use any type of flash media, including 3D NAND and TLC. He also said Elastifile brings flash performance to all enterprise applications while reducing the Capex and Opex of virtualized data centers, and simplifies the adoption of hybrid cloud by extending file systems across on-premises and cloud deployments.

http://www.eweek.com