Wednesday, 23 January 2019

Open-Source Metasploit Framework 5.0 Improves Security Testing

Among the tools most widely used by security researchers is the open-source Metasploit Framework, which has now been updated with the new 5.0 release.

The Metasploit Framework is a penetration testing platform that provides security researchers with a variety of tools and capabilities to validate the security of a given application or infrastructure deployment. With Metasploit, researchers can also test exploits against targets to see whether they are at risk, attempting to penetrate whatever defensive measures are in place. The 5.0 release of Metasploit introduces multiple new and enhanced capabilities, including automation APIs, evasion modules and usability improvements.

"As the first major Metasploit release since 2011, Metasploit 5.0 brings many new features, as well as a fresh release cadence," Brent Cook, senior manager at Rapid7, wrote in a blog post. 

The Metasploit project celebrated its 15th anniversary in 2018 and iterates on major version numbers infrequently; the 5.0 update is the first major version change since Metasploit 4 was released in 2011. Even so, a steady stream of exploit modules and incremental improvements is continuously added to the framework.

The Metasploit project itself was created by HD Moore, with commercial efforts moving to Rapid7 when it acquired the project in 2009. Rapid7 provides the commercially supported Metasploit Pro version of the Metasploit Framework.

Metasploit 5.0 Features

Among the core new features in Metasploit 5.0 is the extensibility of the framework's database back end, which can now be run as a REST web service. With the database exposed as a web service, multiple external tools can pull from the same data store and interact with one another.

"This release adds a common web service framework to expose both the database and the automation APIs," the release notes for Metasploit 5.0 states. "This framework supports advanced authentication and concurrent operations." 

Evasion

Metasploit has had different types of evasion capabilities since at least the 3.0 release in 2006. Evasion refers to the ability to get around, bypass or "evade" a target's existing defenses, which could include antivirus, a firewall, an intrusion prevention system (IPS), or other technologies and security configurations. With the new evasion module type in Metasploit 5.0, researchers can more easily create and test their own evasion payloads.

"The purpose of the evasion module type is to allow developers to build executables specifically to evade antivirus, and hopefully this creates a better pentesting experience for the users," Wei Chen, lead security engineer at Rapid7, wrote in the GitHub code commit for the evasion module.

Usability

Metasploit 5.0 also brings usability improvements that make it easier for security researchers to test multiple targets at scale.

"While Metasploit has supported the concept of scanners that can target a subnet or network range, using an exploit module was limited to only one host at a time," Cook wrote. "With Metasploit 5.0, any module can now target multiple hosts in the same way by setting RHOSTS to a range of IPs or referencing a host’s file with the file:// option."

Usability also gets a boost from improved performance, including faster startup and faster searching than in previous versions of Metasploit. Additionally, with Metasploit 5.0, researchers can now write and use modules in any of three programming languages: Go, Python and Ruby. Overall, development for Metasploit 5.0 benefited from an updated process that includes a stable branch, which Rapid7 and other distributions use for everyday work, and an unstable branch where new development can be added rapidly before it is ready for broader consumption.
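For a sense of what a non-Ruby module looks like, the following Python sketch mirrors the general shape of the external modules shipped in the framework's repository: a metadata dictionary describing the module and its options, plus a callback the framework invokes with the resolved options. The field values are illustrative, and the sketch is meant to show the pattern rather than serve as a complete, tested module.

    #!/usr/bin/env python3
    # Minimal sketch of a Metasploit 5.0 external module written in Python.
    # It depends on the 'metasploit' helper package the framework provides
    # to external modules, so it is run through the framework, not directly.
    from metasploit import module

    metadata = {
        'name': 'Example banner check',
        'description': 'Illustrative single-target check written in Python.',
        'authors': ['example author'],
        'date': '2019-01-23',
        'license': 'MSF_LICENSE',
        'references': [],
        'type': 'single_scanner',
        'options': {
            'rhost': {'type': 'address', 'description': 'Target address',
                      'required': True, 'default': None},
            'rport': {'type': 'port', 'description': 'Target port',
                      'required': True, 'default': 80},
        },
    }

    def run(args):
        # args carries the resolved option values, including the target host.
        module.log('Checking {}:{}'.format(args['rhost'], args['rport']), level='info')
        # ... probe the service and report findings here ...

    if __name__ == '__main__':
        module.run(metadata, run)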

"The takeaway is that Metasploit now has a more mature development process that we hope to continue leveraging in the future to enable even bigger improvements to the code base," Cook wrote.

https://www.eweek.com

Sunday, 20 January 2019

AWS Unveils New Data Backup Service

Amazon Web Services announced AWS Backup, a centralized service for customers to back up their data both across AWS' public cloud and in their on-premises data centers.

The company said enterprises have to deal with data located in multiple services such as databases, block storage, object storage, and file systems. While all of these AWS services provide backup capabilities, customers often create custom scripts to automate scheduling, enforce retention policies, and consolidate backup activity to better meet their business and regulatory compliance requirements.

AWS Backup removes the need for custom scripts by providing a centralized place to manage backups. Using the AWS Management Console, customers can create a policy that defines how frequently backups are created and how long they are stored.
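That policy can also be defined programmatically. The sketch below uses the AWS SDK for Python (boto3) and assumes credentials, a region and the necessary IAM permissions are already in place; the vault name, plan name, schedule and retention period are illustrative values, not details from the announcement.

    import boto3

    backup = boto3.client("backup")

    # A vault to hold the recovery points (name is illustrative).
    backup.create_backup_vault(BackupVaultName="example-vault")

    # A plan describing how often backups run and how long they are kept.
    plan = backup.create_backup_plan(
        BackupPlan={
            "BackupPlanName": "example-daily-plan",
            "Rules": [
                {
                    "RuleName": "daily-35-day-retention",
                    "TargetBackupVaultName": "example-vault",
                    "ScheduleExpression": "cron(0 5 * * ? *)",  # every day at 05:00 UTC
                    "Lifecycle": {"DeleteAfterDays": 35},
                }
            ],
        }
    )
    print(plan["BackupPlanId"])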

Bill Vass, VP of storage, automation, and management services at AWS, said in a statement that many customers want one place to go for backups versus having to do it across multiple, individual services. “Today, we are proud to make AWS Backup available with support for block storage volumes, databases, and file systems, and over time, we plan to support additional AWS services,” said Vass.

Initially, AWS Backup is integrated with Amazon DynamoDB, Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), Amazon Relational Database Service (Amazon RDS), and AWS Storage Gateway.
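Resources from those services are attached to a plan through a backup selection. Continuing the sketch above, and again treating the plan ID, resource ARNs and IAM role as placeholders:

    import boto3

    backup = boto3.client("backup")

    # Attach specific resources to an existing plan.
    backup.create_backup_selection(
        BackupPlanId="example-plan-id",  # the ID returned by create_backup_plan
        BackupSelection={
            "SelectionName": "example-selection",
            "IamRoleArn": "arn:aws:iam::123456789012:role/ExampleBackupRole",
            "Resources": [
                "arn:aws:ec2:us-east-1:123456789012:volume/vol-0123456789abcdef0",  # an EBS volume
                "arn:aws:rds:us-east-1:123456789012:db:example-database",            # an RDS instance
            ],
        },
    )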

Customers can also back up their on-premises application data through the AWS Backup integration with AWS Storage Gateway.

Ever since AWS announced its AWS Outposts offering in November 2018, it has been making a concerted effort to include its customers' on-premises data centers in its offerings. AWS Outposts helps customers connect their on-premises environments to AWS' services in the public cloud.

The company’s recent purchase of TSO Logic is one example of its effort to help its customers across hybrid clouds. TSO Logic’s software is designed to find the optimal spot to place workloads, whether on public or private clouds.

Amazon’s move into the backup space may hit some other backup vendors in the wallet, including Rubrik, Commvault, Veeam, and Kaseya.

https://www.sdxcentral.com

Friday, 11 January 2019

TriggerMesh Clears Serverless Bridge Between AWS Lambda, Knative

TriggerMesh launched an open source project that stitches together Amazon Web Services’ Lambda serverless architecture with the Knative open source set of serverless components to bridge the portability gap between the two platforms. That gap between serverless platforms has been one of the main barriers to broader serverless adoption.

The TriggerMesh bridge takes the form of its Knative Lambda Runtime (KLR, which is pronounced "clear") project. The high-level focus of the project is to provide portability of AWS Lambda functions to Knative-enabled clusters and serverless cloud infrastructure without needing to rewrite the serverless functions.

“We just opened up every Lambda function to run on Knative,” said Mark Hinkle, co-founder of TriggerMesh. “That’s a huge ecosystem.”

The KLR platform uses a combination of the AWS Lambda custom runtime API and the Knative Build system. The KLRs are constructed as Knative build templates that can run a Lambda function in a Kubernetes cluster with Knative installed. The custom AWS runtime interface provides a clone of the Lambda cloud environment in which the function runs.
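That runtime interface is essentially a small HTTP contract: a runtime process polls its environment for the next invocation, runs the handler, and posts the result back. Below is a minimal sketch of that loop in Python, using the documented Lambda Runtime API paths, with the handler itself left as a placeholder.

    import json
    import os
    import urllib.request

    # The environment supplies the host:port of the runtime API
    # (in Lambda itself, via the AWS_LAMBDA_RUNTIME_API variable).
    api = os.environ["AWS_LAMBDA_RUNTIME_API"]
    base = "http://{}/2018-06-01/runtime/invocation".format(api)

    def handler(event):
        # Placeholder business logic.
        return {"echo": event}

    while True:
        # 1. Long-poll for the next invocation event.
        with urllib.request.urlopen(base + "/next") as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.loads(resp.read())

        # 2. Run the function and post the result back for that request ID.
        result = json.dumps(handler(event)).encode()
        urllib.request.urlopen(
            urllib.request.Request(base + "/" + request_id + "/response", data=result))

A build template that reproduces this environment inside a container is what lets an unmodified Lambda function execute on a Knative-powered Kubernetes cluster.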

“With these templates you can run your AWS Lambda functions as is in a Knative-powered Kubernetes cluster,” added TriggerMesh’s other co-founder Sebastian Goasguen.

TriggerMesh launched last November with a serverless management platform that runs on top of Knative. This allows developers to automate the deployment and management of serverless and functions-as-a-service (FaaS) across different cloud platforms.

Goasguen said that work on KLR started after AWS unveiled its custom runtime API at its recent re:Invent conference.

“We saw this as a way to run Lambda on Knative clusters,” Goasguen said. “We worked hard during Christmas and the New Year and announced the project.”

Goasguen admitted that there is still work to be done with the KLR platform. He cited the need to add support for other programming languages and the need for more thorough testing to make sure it’s production ready.

The other missing piece is eventing. Events are what trigger a function to run, and eventing is currently still siloed to specific cloud platforms. Goasguen said he hopes to see work over the next six months to expand eventing beyond those silos.

Knative Portability

The KLR platform targets a significant pain point that continues to haunt the serverless space: AWS Lambda is the most widely used platform for serverless deployments, but it is compatible only with AWS infrastructure. This ties developers who use Lambda-based serverless functions to AWS.

This model is also used by other large cloud providers and their respective hosted serverless platforms. These include Microsoft’s Azure Functions and Google’s Cloud Functions.

But a number of efforts have popped up that allow developers to move their serverless functions between cloud providers. Knative is one of those projects.

Knative was launched last year under the notion of using Kubernetes as a portability layer for serverless. It provides a set of components that allows for the building and deployment of container-based serverless applications that can be transported between cloud providers. Basically, Knative is using the market momentum behind Kubernetes to provide an established platform on which to support serverless deployments that can run across different public clouds.

Knative’s ability to support portability across cloud platforms was mentioned several times during keynote speeches at the recent KubeCon + CloudNativeCon North America 2018 event in Seattle.

“This portability is really important and what is behind the industry aligning behind Knative,” explained Aparna Sinha, group product manager for Kubernetes at Google, during her keynote address at the KubeCon event.

Jason McGee, vice president and CTO for IBM’s Cloud Platform, told attendees that Knative was an important project in unifying the dozens of serverless platforms that have flooded the market.

“That fragmentation, I think, holds us all back from being able to really leverage functions as part of the design of our applications,” McGee said during his keynote. “I think Knative is an important catalyst for helping us come together to bring functions and applications into our common cloud native stack in a way that will allow us to move forward and collaborate together on this common platform.”

https://www.sdxcentral.com