Sunday, 31 March 2019

6 Digital Transformation Challenges Enterprises Need To Overcome

Digital transformation strategies have many moving parts. As a moving target, the process is difficult to pin down, which makes the amount of attention it receives unsurprising.

In a piece in the Harvard Business Review last year on why so many digital transformation projects fail, Thomas H. Davenport and George Westerman defined digital transformation as follows: "Digital transformation is an ongoing process of changing the way you do business. It requires foundational investments in skills, projects, infrastructure, and often, in cleaning up IT systems. It requires mixing people, machines and business processes, with all of the messiness that entails."

Identifying Digital Transformation Problems
Herein lies the problem. There are so many technologies, so many processes, and so many departments and people to pull into the process that somewhere along the line it is all going to break down. Despite this, Eric Hanson, VP of market intelligence at Fuze, said that transformation is a challenge IT leaders want to take on not only because of its wide-ranging tech benefits, but also because of the impact it could have on their organizations as they usher in the next generation of workers.

However, as new technology is integrated and the workforce changes around it, CIOs and other IT decision-makers face the challenge of tech adoption, which goes beyond being solely an IT initiative. These are business transformation projects that require executive sponsorship and cross-functional execution from HR to facilities to marketing and IT. This is not a change that affects a small group of workers; it impacts everyone.

Adoption requires buy-in from every single employee, each with different work and learning preferences, including those working outside the company’s physical space. In today’s increasingly distributed workforce, rolling out digital technologies becomes an even larger challenge as leaders face offices with different corporate cultures and processes.

To minimize friction around change and accelerate business outcomes, organizations should identify ways to internally promote the benefits of change, set clear expectations, and provide access to ongoing training with options for self-service and live sessions. So what are the challenges? After contacting a number of organizations, we were able to identify six major ones.


1. ‘Blind’ Challenge
Rob Maille, co-founder and head of strategy and customer experience at CommerceCX, argues that when people think about a digital transformation there are three common challenges enterprises must overcome, which impact the planning and cost of the transformation. The first is starting the digital transformation blindly, the second is adding unnecessary technology and the third is believing it is a one-and-done process.

Starting blindly, however, will kill the process from the start. Eager teams looking to improve their business often overlook this groundwork. Enterprises must identify where the company currently is in the transformation journey and what is needed before starting. A good place to start is with customer journey maps, collecting user or customer data through observation, research and interviews. The journey maps are meant to inform design thinking and overall business strategy.

2. Short-Term View Challenge
For Michael Graham, CEO of Epilogue Systems, one of the most enduring challenges is ensuring that planning for digital transformation adoption extends beyond the first three to five months. When it does not, adoption becomes simply a project task, which threatens to undermine the years of work, millions of dollars invested and organizational disruption endured.

As they near the end of a digital transformation initiative, many enterprises also make the mistake of not planning for project fatigue. Digital transformation projects involve so many people (internal and external) over such a long period of time that exhaustion — staff exhaustion, budget exhaustion, time exhaustion — is inevitable. Once key internal players reach exhaustion, and external players like systems integrators and project management consultants are gone, end users are left to live with the changes when the company reaches its go-live date. “Without the dedicated resources in place before go-live to ensure end users are properly equipped…adoption will be irrevocably hurt,” warns Graham.

3. Culture Challenge
Shaping organizational culture is a crucial, and often undervalued, factor in enabling successful digital transformation, said Melissa Henley, director of customer experience at Laserfiche.

Changing culture will always be more difficult than changing technology, and that's why it's important to proactively address the changes needed to instill a digital culture. Digital transformation affects every area of the business and requires teams to coordinate and collaborate like never before. To successfully lead digital transformation, leaders must be intentional in building a digital culture, including changing legacy technology and structures that hinder transformation.

4. Aligning Business and IT Challenge
No project should begin until the goals of both IT and the business have been aligned, said John Mullen, North America CEO at Capgemini. One of the biggest obstacles companies face is a lack of connectivity between business and technology, even though the two are now indistinguishable. Businesses need to understand that it’s no longer a question of if they are going to use technology to transform for their customers, but rather how, when and how quickly they can do it at scale.

Another issue that plagues businesses is the fear factor of the transformation process. Companies need leadership who will embrace the risk-taking required to transform their operations and adopt an ongoing culture of innovation. While companies are making progress on evolving their customer experience, they are struggling to transform their back-end operations. "It’s imperative that organizations gain alignment between IT and the business. It is no longer possible for the CIO and the IT organization to operate separately from other C-Suite leaders," he said. 

In addition to alignment at the top, companies also need to include and engage their employees every step of the way in the transformation agenda. Companies will achieve digital leadership if they succeed in balancing the technology with retraining their workforce in new skills.

5. Technology Integration Challenge
Culturally, organizations have been built around certain technologies, with specific policies and procedures developed to support them, said Jeff Looman, vice president of engineering at FileShadow. Integrating new technologies causes delays as employees work through acceptance, training and getting accustomed to new data management techniques. Multiple data silos may result in redundant work and confusion between disparate groups over how to collaborate effectively, and different approaches to data storage make cohesive blending laborious.

6. The Data Challenge
For MemSQL CEO Nikita Shamgunov, data is the principal challenge. The data economy has shifted into a “decision economy” where the business that makes the best insight-driven decisions faster than its peers will gain a competitive edge. There are two data challenges in digital transformation journeys:

  • Infrastructure - No matter what the next digital initiative is — AI, multi-cloud, streaming analytics or other emerging technologies — having the right infrastructure built to scale these modern applications will be key to digital transformation success.
  • Information Management and Security - As enterprises advance in their digital transformation journeys, data centralization will be key to ensuring that the enterprise’s digital foundation is scalable for its business without compromising data management simplicity and security.

https://www.cmswire.com

Microsoft Rolls Out Edge Pay-As-You-Go Hardware, Compute, and Storage

Microsoft jumped onboard the compute-and-storage-box-for-data-at-the-edge train with its aptly named Azure Data Box Edge, which is now generally available.

It’s a 1U rack-mountable device designed for edge locations, but customers don’t buy the hardware. Instead, it has a pay-as-you-go model like other Azure cloud services. And it comes with local compute, a built-in storage gateway (with the ability to automatically transfer data between the local device and Azure cloud storage), and an Intel Arria field programmable gate array (FPGA) built for machine learning.

It’s also cloud-managed, which is another key edge consideration. Customers can order the device and manage these capabilities remotely from the Azure portal.

If this all sounds familiar, it is. Amazon Web Services (AWS) launched its edge storage and compute device, called AWS Snowball Edge, in 2016. It also includes on-board storage and compute, and it allows customers to transfer data between the edge and the AWS public cloud.

Microsoft previewed Data Box Edge at Ignite in September. Since then, customers including retail giant Kroger and location information software company Esri have trialed the edge device in different scenarios, wrote Dean Paron, general manager of Azure Data Box, in a blog post. “Data Box Edge can be racked alongside your existing enterprise hardware or live in non-traditional environments from factory floors to retail aisles,” he wrote.

For example, Sunrise Technology, a wholly owned division of Kroger, plans to use the edge server to improve its retail platform with new features such as at-shelf product recommendations, guided shopping, and other personalized shopping capabilities. Additionally, the device’s live video analytics can help store employees identify out-of-stock items more quickly.

And Esri is looking to use the edge device to help first responders in disconnected environments. The goal is to improve response effectiveness at wildfires, hurricanes, and other disasters. “Data Box Edge will allow teams in the field to collect imagery captured from the air or ground and turn it into actionable information that provides updated maps,” Paron wrote. “The teams in the field can use updated maps to coordinate response efforts even when completely disconnected from the command center.”

https://www.sdxcentral.com/

Thursday, 14 February 2019

IBM Taps Kubernetes to Unleash Watson Across Clouds

IBM is using Kubernetes to help unleash its Watson artificial intelligence (AI) platform to work across any cloud environment, including private, public, or hybrid multi-cloud environments. This expansion also includes support for cloud ecosystems powered by IBM rivals like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

The move will see Watson applications like Watson Assistant and Watson OpenScale integrated with IBM’s Cloud Private (ICP) for Data and run as microservices using Kubernetes. This will allow for those microservices to be portable across the different infrastructure types and cloud ecosystems.

For IBM, the move allows it to broaden the reach of its Watson AI platform. It will also allow organizations to use the Watson platform to help analyze and manage data across all of their data sources.

IBM CEO Ginni Rometty said during a keynote at this week’s IBM Think event that the move makes Watson “the most open, scalable AI for business in the world.”

IBM launched its ICP platform in late 2017. It’s built on a Kubernetes-based container architecture and solidified IBM’s embrace of Kubernetes.

The vendor early last year announced the ICP for Data extension. It allows companies to glean insight from their data resources on their way to supporting enterprise AI services. IBM has since been slowly layering in different Watson AI capabilities onto the platform, including Watson Speech-to-Text and Watson Assistant last year.

The Watson Assistant platform helps developers and non-technical users create conversational AI products, ranging from simple chatbots to complex enterprise-grade products for customer service. Watson OpenScale is IBM’s open AI platform for managing multiple AI instances.

The move could also boost IBM’s unique cloud positioning. A recent Synergy Research Group report found that IBM had lost market share among cloud infrastructure service providers during the fourth quarter of last year compared with the previous year. This made IBM the only cloud provider among the market’s five largest providers to post such a loss.

However, John Dinsdale, chief analyst at SRG, noted that IBM has a slightly different focus than its rivals “as it remains the strong leader in the hosted private cloud services segment of the market.”

Banco Santander Deal
In addition to expanding the reach of its Watson platform, IBM this week also announced a five-year, $700 million deal with Banco Santander to help the company update its IT architecture toward a hybrid cloud environment. The deal will see Santander use Watson to improve customer service and employee productivity.

Santander will work with IBM to enhance the bank’s recently created Cloud Competence Center. It will also use IBM’s DevOps and API Connect platforms to help develop, iterate, and launch new or upgraded applications.

IBM is working with a handful of banks on similar migrations, including ICBC Argentina, Lloyds Banking Group, and Royal Bank of Canada.

https://www.sdxcentral.com

Monday, 4 February 2019

Career Shift from a Tester to Business Analyst – A Step by Step Guide

A testing professional is required to thoroughly test the software developed, to ensure that it meets the end requirements of the customer.

A Business Analyst is likewise responsible for verifying whether the software built and delivered meets the end customer’s requirements. This overlap between the two roles makes it easier for a tester to switch to a business analyst role.

When a BA and a tester switch roles, each can apply a skill set that benefits the project itself. When it comes to testing the software system, the tester and the BA work as two sides of the same coin.

Why Business Analysis?
A testing professional has a thorough knowledge and understanding of software and how to improve it, along with attention to minute detail. This skill set opens the door for a tester into many roles in the IT industry today.

With a good understanding of the development lifecycle and process, they can choose to become a release manager, automation engineer, QA strategist, solution architect, senior manager and, of course, a business analyst.

Having said that, a career switch into business analysis is the most promising of these options today. Business analysis is a much broader role than testing or any of the other roles mentioned above.

It’s a promising career avenue and a lucrative one too. A tester who loves to travel the globe can really enjoy a challenging and satisfying BA role. A business analyst can climb the ladder further to become a lead or senior business analyst, consultant, product owner or product manager, roles which are quite glamorous.

I strongly recommend business analysis as a career switch option for testing professionals if they have excellent analytical, documentation and communication skills, enjoy customer interaction, like a pinch of glamour in their work profile and, of course, love being a globetrotter.

View at: https://www.softwaretestinghelp.com/career-shift-from-tester-to-ba/

Wednesday, 23 January 2019

Open-Source Metasploit Framework 5.0 Improves Security Testing

Among the most widely used tools by security researchers is the open-source Metasploit Framework, which has now been updated with the new 5.0 release.

Metasploit Framework is a penetration testing technology that provides security researchers with a variety of tools and capabilities to validate the security of a given application or infrastructure deployment. With Metasploit, researchers can also test exploits against targets to see if they are at risk, in an attempt to penetrate the defensive measures that are in place. The 5.0 release introduces multiple new and enhanced capabilities, including automation APIs, evasion modules and usability improvements.

"As the first major Metasploit release since 2011, Metasploit 5.0 brings many new features, as well as a fresh release cadence," Brent Cook, senior manager at Rapid7, wrote in a blog post. 

The Metasploit project celebrated its 15th anniversary in 2018 and iterates on major version numbers infrequently. The Metasploit 5.0 update is the first major version change since Metasploit 4 was released in 2011. While major version numbers have not iterated frequently, a steady stream of exploit modules and incremental improvements are continuously added to Metasploit.

The Metasploit project itself was created by HD Moore, with commercial efforts moving to Rapid7 after it acquired the project in 2009. Rapid7 provides the commercially supported Metasploit Pro version of the Metasploit Framework.

Metasploit 5.0 Features

Among the core new features in Metasploit 5.0 is the extensibility of the framework's database back end, which can now be run as a REST web service. By exposing the database as a web service, multiple external tools can pull from the same data store and interact with each other.

"This release adds a common web service framework to expose both the database and the automation APIs," the release notes for Metasploit 5.0 states. "This framework supports advanced authentication and concurrent operations." 

Evasion

Metasploit has had different types of evasion capabilities since at least the 3.0 release in 2006. Evasion refers to the ability to get around, bypass or "evade" a target's existing defenses, which could include antivirus, firewall, intrusion prevention system (IPS), or other technologies and security configurations. With the evasion modules capability in Metasploit 5.0, researchers can now more easily create and test their own evasion module payloads.

"The purpose of the evasion module type is to allow developers to build executables specifically to evade antivirus, and hopefully this creates a better pentesting experience for the users," Wei Chen, lead security engineer at Rapid7, wrote in the GitHub code commit for the evasion module.

Usability

Metasploit 5.0 now also brings improved usability for security researchers to test multiple targets at scale.

"While Metasploit has supported the concept of scanners that can target a subnet or network range, using an exploit module was limited to only one host at a time," Cook wrote. "With Metasploit 5.0, any module can now target multiple hosts in the same way by setting RHOSTS to a range of IPs or referencing a host’s file with the file:// option."

Usability also gets a boost with improved performance, including faster startup and searching capabilities than in previous versions of Metasploit. Additionally, with Metasploit 5.0, researchers are now able to write and use modules in any of three programming languages: Go, Python and Ruby. Overall, development for Metasploit 5.0 benefited from an updated process that included a stable branch that is used by Rapid7 and other distributions for everyday use and an unstable branch where new development can be rapidly added before it’s ready for broader consumption. 

"The takeaway is that Metasploit now has a more mature development process that we hope to continue leveraging in the future to enable even bigger improvements to the code base," Cook wrote.

https://www.eweek.com

Sunday, 20 January 2019

AWS Unveils New Data Backup Service

Amazon Web Services announced AWS Backup, a centralized service for customers to back up their data across both AWS’ public cloud and their on-premises data centers.

The company said enterprises have to deal with data spread across multiple services such as databases, block storage, object storage, and file systems. While all of these AWS services provide backup capabilities, customers often create custom scripts to automate scheduling, enforce retention policies, and consolidate backup activity to better meet their business and regulatory compliance requirements.

AWS Backup removes the need for custom scripts by providing a centralized place to manage backups. Using the AWS Management Console, customers can create a policy that defines how frequently backups are created and how long they are stored.
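
The same policy can also be defined programmatically. Below is a minimal sketch using boto3's AWS Backup client; the plan name, cron schedule, retention period, IAM role ARN and volume ARN are illustrative placeholders, not values from the announcement.

# Minimal sketch: defining a backup policy with the AWS Backup API via boto3.
# The names, schedule, retention period and ARNs below are placeholders.
import boto3

backup = boto3.client("backup")

# The plan says how frequently backups are created and how long they are kept.
plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "daily-35-day-retention",
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 * * ? *)",  # every day at 05:00 UTC
        "Lifecycle": {"DeleteAfterDays": 35},
    }],
})

# The selection says which resources the plan applies to.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "ebs-volumes",
        "IamRoleArn": "arn:aws:iam::123456789012:role/BackupRole",
        "Resources": ["arn:aws:ec2:us-east-1:123456789012:volume/vol-0abc1234"],
    },
)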

Bill Vass, VP of storage, automation, and management services at AWS, said in a statement that many customers want one place to go for backups versus having to do it across multiple, individual services. “Today, we are proud to make AWS Backup available with support for block storage volumes, databases, and file systems, and over time, we plan to support additional AWS services,” said Vass.

Initially, AWS Backup is integrated with Amazon DynamoDB, Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), Amazon Relational Database Service (Amazon RDS), and AWS Storage Gateway.

Customers can also back up their on-premises application data through the AWS Backup integration with AWS Storage Gateway.

Ever since AWS announced its AWS Outposts offering in November 2018, it’s been making a concerted effort to include its customers’ on-premises data centers in its offerings. AWS Outposts helps customers connect their on-premises environments to AWS’ services in the public cloud.

The company’s recent purchase of TSO Logic is one example of its effort to help its customers across hybrid clouds. TSO Logic’s software is designed to find the optimal spot to place workloads, whether on public or private clouds.

Amazon’s move into the backup space may hit some other backup vendors in the wallet, including Rubrik, Commvault, Veeam, and Kaseya.

https://www.sdxcentral.com

Friday, 11 January 2019

TriggerMesh Clears Serverless Bridge Between AWS Lambda, Knative

TriggerMesh launched an open source project that stitches together Amazon Web Services’ Lambda serverless architecture with the Knative open source set of serverless components to bridge the portability gap between the two platforms. That gap between serverless platforms has been one of the main barriers to broader serverless adoption.

The TriggerMesh bridge takes the form of its Knative Lambda Runtime (KLR, which is pronounced “clear”) project. The high-level focus of the project is to provide portability of AWS Lambda functions to Knative-enabled clusters and serverless cloud infrastructure without needing to rewrite the serverless functions.

“We just opened up every Lambda function to run on Knative,” said Mark Hinkle, co-founder of TriggerMesh. “That’s a huge ecosystem.”

The KLR platform uses a combination of the AWS Lambda custom runtime API and the Knative Build system. The KLRs are constructed as Knative build templates that can run a Lambda function in a Kubernetes cluster installed with Knative. The custom AWS runtime interface provides a clone of the Lambda cloud environment where the function runs.

“With these templates you can run your AWS Lambda functions as is in a Knative-powered Kubernetes cluster,” added TriggerMesh’s other co-founder Sebastian Goasguen.
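
To make the “as is” claim concrete, the function below is an ordinary Lambda handler written against the standard Python handler signature. Nothing in it is Knative-specific; KLR's pitch is that code like this can be deployed to a Knative-enabled cluster without modification. The handler and event shape are illustrative, not taken from the project's documentation.

# An ordinary AWS Lambda handler in Python. The event shape is illustrative.
import json

def handler(event, context):
    # A typical API-style response; KLR aims to run such functions unchanged
    # on a Knative-enabled Kubernetes cluster.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, " + name + "!"}),
    }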

TriggerMesh launched last November with a serverless management platform that runs on top of Knative. This allows developers to automate the deployment and management of serverless and functions-as-a-service (FaaS) across different cloud platforms.

Goasguen said that work on KLR started after AWS unveiled its custom runtime API at its recent re:Invent conference.

“We saw this as a way to run Lambda on Knative clusters,” Goasguen said. “We worked hard during Christmas and the New Year and announced the project.”

Goasguen admitted that there is still work to be done with the KLR platform. He cited the need to add support for other programming languages and the need for more thorough testing to make sure it’s production ready.

The other missing piece is eventing. Events are what trigger a function to run, and eventing is currently still siloed to specific cloud platforms. Goasguen said he hopes to see work to expand eventing beyond those silos over the next six months.

Knative Portability
The KLR platform targets a significant pain point that continues to haunt the serverless space, which is that AWS’ Lambda is the most used platform for serverless deployments but is only compatible with AWS infrastructure. This ties developers that use Lambda-based serverless functions to AWS.

This model is also used by other large cloud providers and their respective hosted serverless platforms. These include Microsoft’s Azure Functions and Google’s Cloud Functions.

But, a number of efforts have popped up that allow developers to move their serverless functions between cloud providers. Knative is one of those projects.

Knative was launched last year under the notion of using Kubernetes as a portability layer for serverless. It provides a set of components that allows for the building and deployment of container-based serverless applications that can be transported between cloud providers. Basically, Knative is using the market momentum behind Kubernetes to provide an established platform on which to support serverless deployments that can run across different public clouds.

Knative’s ability to support portability across cloud platforms was mentioned several times during keynote speeches at the recent KubeCon + CloudNativeCon North America 2018 event in Seattle.

“This portability is really important and what is behind the industry aligning behind Knative,” explained Aparna Sinha, group product manager for Kubernetes at Google, during her keynote address at the KubeCon event.

Jason McGee, vice president and CTO for IBM’s Cloud Platform, told attendees that Knative was an important project in unifying the dozens of serverless platforms that have flooded the market.

“That fragmentation, I think, holds us all back from being able to really leverage functions as part of the design of our applications,” McGee said during his keynote. “I think Knative is an important catalyst for helping us come together to bring functions and applications into our common cloud native stack in a way that will allow us to move forward and collaborate together on this common platform.”

https://www.sdxcentral.com