Thursday, 28 November 2019

How cloud providers' performance differs

Not all public cloud service providers are the same when it comes to network performance.

Each one’s connectivity approach varies, which causes geographical discrepancies in network performance and predictability. As businesses consider moving to the cloud, especially software-defined wide-area networks (SD-WAN) and multi-cloud, it’s important to understand what each public cloud service provider brings to the table and how they compare.

In 2018, ThousandEyes first conducted a benchmark study assessing three major public cloud providers: Amazon Web Services (AWS), Microsoft Azure (Azure), and Google Cloud Platform (GCP). The study gathered data on network performance and connectivity architecture to guide businesses in the planning stage.

This year’s study offers a more comprehensive view of the competition, with two more providers added to the list: Alibaba Cloud and IBM Cloud. It compares 2018 and 2019 data to show changes that took place year-over-year and what triggered them.

ThousandEyes periodically collected bi-directional network performance metrics—such as latency, packet loss and jitter—from 98 user vantage points in global data centers across all five public cloud providers over a four-week period. Additionally, it looked at network performance from leading U.S. broadband internet service providers (ISPs), including AT&T, Verizon, Comcast, CenturyLink, Cox, and Charter.

The network management company then analyzed more than 320 million data points to create the benchmark. Here are the results.

Inconsistencies among providers
In its initial study, ThousandEyes revealed that some cloud providers rely heavily on the public internet to carry user traffic while others don’t. In this year’s study, the cloud providers generally showed similar performance in bi-directional network latency.

However, ThousandEyes found architectural and connectivity differences have a big impact on how traffic travels between users and certain cloud hosting regions. AWS and Alibaba mostly rely on the internet to transport user traffic. Azure and GCP use their private backbone networks. IBM is different from the rest and takes a hybrid approach.

ThousandEyes also tested whether AWS Global Accelerator outperforms the internet. AWS Global Accelerator launched in November 2018, offering users the option to route traffic over the AWS private backbone network for a fee instead of over the default public internet. Although performance did improve in some regions around the world, there were other instances where the internet was faster and more reliable than AWS Global Accelerator.

Broadband ISPs that businesses use to connect to each cloud also showed inconsistencies, even in the mature U.S. market. When ThousandEyes evaluated network performance from the six U.S. ISPs, it recorded sub-optimal routing results, with up to 10 times the expected network latency in some cases.

Location, location, location

The 2019 study closely examined the performance toll cloud providers pay in China, a notoriously challenging region for online businesses. Cloud providers commonly experience packet loss when traffic crosses China’s content-filtering firewall, even providers based in the region such as Alibaba. For businesses with customers in China, ThousandEyes recommends Hong Kong as a hosting region, since Alibaba Cloud traffic experienced the least packet loss there, followed by Azure and IBM.

In other parts of the world, Latin America and Asia showed the highest performance variations for all cloud providers. For example, network latency from Rio de Janeiro to GCP’s São Paulo hosting region was six times higher than to other providers’ regions because of a suboptimal reverse path. But across North America and Western Europe, all five cloud providers demonstrated comparable, robust network performance.

The study’s results confirm that location is a major factor; user-to-hosting-region performance data should therefore be considered when selecting a public cloud provider.

Multi-cloud connectivity

In 2018, ThousandEyes discovered extensive connectivity between the backbone networks of AWS, GCP, and Azure. An interesting finding in this year’s study shows multi-cloud connectivity was erratic when IBM and Alibaba Cloud were added to the list.

ThousandEyes found IBM and Alibaba Cloud don’t have fully established, direct connectivity with other providers. That’s because they typically use ISPs to connect their clouds to other providers. AWS, Azure, and GCP, on the other hand, peer directly with each other and don’t require third-party ISPs for multi-cloud communication.

With multi-cloud initiatives on the rise, network performance should be included as a metric in evaluating multi-cloud connectivity since it appears to be inconsistent across providers and geographical boundaries.

ThousandEyes’ comprehensive performance benchmark can serve as a guide for businesses deciding which public cloud provider best meets their needs. To err on the side of caution, however, businesses selecting public cloud connectivity should also weigh the unpredictable nature of the internet: how it affects performance, creates risk, and increases operational complexity. Businesses should address those challenges by gathering their own network intelligence on a case-by-case basis. Only then will they benefit fully from what cloud providers have to offer.


Saturday, 23 November 2019

Edge security: There’s lots of attack surfaces to worry about

The problem of edge security isn’t unique – many of the issues being dealt with are the same ones that have been facing the general IT sector for decades.

But the edge adds its own wrinkles to those problems, making them, in many cases, more difficult to address. Yet, by applying basic information security precautions, most edge deployments can be substantially safer.

The most common IoT vulnerability occurs because many sensors and edge computing devices are running some kind of built-in web server to allow for remote access and management. This is an issue because many end-users don’t – or, in some cases, can’t – change default login and password information, nor are they able to seal them off from the Internet at large. There are dedicated gray-market search sites out there to help bad actors find these unsecured web servers, and they can even be found with a little creative Googling, although Joan Pepin, CISO at security and authentication vendor Auth0, said that the search giant has taken steps recently to make that process more difficult.

“There’s definitely a market opportunity for a company to do better at the device management level, not having thousands of little web servers with the default username and password,” she said.

One issue with solving that problem is the heterogeneous nature of the IIoT and edge computing worlds – any given deployment might use one company’s silicon, running in another company’s boxes, which are running another company’s software, connecting to several other companies’ sensors. Full-stack solutions – which would include edge devices, sensors, and all the various types of software and connectivity solutions required – are not common.

“Given existing platforms, there’s a lot of viable attack vectors and increased exposure of both the endpoint and the edge devices,” said Yaniv Karta, CTO of app security and penetration-testing vendor SEWORKS.

Worse, some of the methods currently used to secure all or part of an edge deployment can increase the exposure of the IoT network. VPNs, used to secure traffic while in transit, can be vulnerable to man-in-the-middle attacks under certain circumstances. Older industrial protocols like CANbus simply weren’t designed to protect against modern infosec threats, and even LP-WAN protocols used to connect sensors to the edge can be vulnerable if encryption keys are compromised.

The industry currently considers this fragmentation something of an advantage, said Karta, mostly from a flexibility standpoint. The ability to use equipment and software from a wide array of different vendors without too much difficulty in tying those systems together is attractive to some customers. The fact that companies generally have to use a middleware layer of some type to tie all the disparate elements of their deployments together, however, makes for yet another attack surface.

What’s to be done?

It’s not rocket science, according to Pepin. Most of the same fundamental principles that apply to securing cloud or data center or userland environments apply to the edge as well.

“For example, you should not be running any unnecessary services on your devices, whether that’s a server, a laptop, an IoT device.” She joked that the industrial IoT, in a way, is a dream situation for IT pros – potentially hundreds of thousands of endpoints, but no users at the end of them to mess things up.

Tortuga Logic CEO Jason Oberg agreed that better fundamentals are needed to help secure the edge, as well as authentication and encryption for the code that edge devices are running. One way to promote better security will be new industry standards.

“I think there will be some working groups around best practices,” he said. “I do think there will be a large initiative to build security into the hardware, and that’s already happening, because I think people realize it’s a heavily hardware/software-driven issue.”

End-to-end encryption is another technique that could prove useful against edge attackers, argued Pepin. While there’s a performance cost to encryption, there are standards and software out there that are designed to make that cost a minimal one, even on smaller and less capable devices.

“If all these devices are encrypting data over the wire … everything is running over secure protocols like TLS, and you’re not running random listening ports and whatnot, it’s the same security model,” she said, also citing the Blowfish cipher as well-suited for edge and IIoT deployments. “If [a smartphone], which fits easily in my hand, can do that type of encryption and not impact my user experience, then, certainly, an IoT device can perform the same types of encryption and not affect the user experience.”
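
To make the "secure protocols like TLS" point concrete, here is a minimal device-side sketch in Python that sends a sensor reading over a TLS-encrypted socket instead of in plaintext. The collector hostname, port, and payload format are hypothetical placeholders, and a real deployment would also manage certificates appropriate to its environment.

```python
import json
import socket
import ssl

# Hypothetical collector endpoint -- replace with your own gateway.
COLLECTOR_HOST = "telemetry.example.com"
COLLECTOR_PORT = 8883


def send_reading(sensor_id: str, value: float) -> None:
    """Send one sensor reading over a TLS-wrapped TCP connection."""
    context = ssl.create_default_context()  # verifies the server certificate
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=COLLECTOR_HOST) as tls_sock:
            payload = json.dumps({"sensor": sensor_id, "value": value}).encode()
            tls_sock.sendall(payload)


if __name__ == "__main__":
    send_reading("pump-17", 42.5)
```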

IBM aims at hybrid cloud, enterprise security

IBM is taking aim at the challenging concept of securely locking-down company applications and data spread across multiple private and public clouds and on-premises locations.

IBM is addressing this challenge with its Cloud Pak for Security, which features open-source technology for hunting threats, automation capabilities to speed response to cyberattacks, and the ability to integrate customers’ existing point-product security-system information for better operational safekeeping – all under one roof.

IBM Cloud Paks are bundles of Red Hat’s Kubernetes-based OpenShift Container Platform along with Red Hat Linux and a variety of connecting technologies to let enterprise customers deploy and manage containers on their choice of infrastructure, be it private or public clouds, including AWS, Microsoft Azure, Google Cloud Platform, Alibaba and IBM Cloud.

Cloud Pak for Security is the latest of the six Cloud Paks available today, the others being Data, Application, Integration, Automation, and Multicloud Management. The Paks also incorporate containerized IBM middleware designed to let customers quickly spin up enterprise-ready containers, the company said.

The Cloud Paks are part of a massive Big Blue effort to develop an advanced cloud ecosystem with the technology it acquired with its $43 billion buy of Red Hat in July. The Paks will ultimately include IBM’s DB2, WebSphere, API Connect, Watson Studio, Cognos Analytics and more.

“The infrastructure is evolving in such a way that the traditional perimeter is going away and in the security domain, customers have a plethora of point-vendor solutions and now cloud-vendor security offerings to help manage this disparate environment,” said Chris Meenan, Director, Offering Management and Strategy, IBM Security.

Protecting this fragmented IT environment requires security teams to undertake complex integrations and continuously switch between different screens and point products. More than half of security teams say they struggle to integrate data with disparate security and analytic tools and combine that data across their on-premises and cloud environments to spot advanced threats, Meenan said.

A foundational capability of Cloud Pak for Security is that it can, from a single containerized dashboard, connect to, gather, and view information from existing third-party tools and data sources, including multiple security-information and event-management software platforms, endpoint-detection systems, threat-intelligence services, and identity and cloud repositories, IBM said. Cloud Pak Connectors are included for integration with security tools from vendors including IBM, Carbon Black (now part of VMware), Tenable, Elastic, BigFix, and Splunk, as well as public-cloud setups from IBM, AWS, and Microsoft Azure.

The big deal here is that the tool lets security teams connect all data sources to uncover hidden threats and make better risk-based decisions, while leaving the data where it resides rather than moving it into the platform for analysis, Meenan said.

“There’s a ton of security data out there, and the last thing we wanted to do was force customers to build another data lake of information,” Meenan said. “Cloud Pak lets customers access data at rest on a variety of security systems, and search and query those systems, all via a common open-source federated framework.”

For example, the system supports Structured Threat Information Expression (STIX), an open-source language used to exchange cyber-threat intelligence. The platform also includes other open-source technology IBM co-developed through the OASIS Open Cybersecurity Alliance.
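
For illustration, a STIX 2.1 indicator is simply structured JSON that any compliant tool can consume. The Python sketch below assembles one by hand using only the standard library; the indicator name, pattern, and IP address are hypothetical examples rather than output from Cloud Pak for Security.

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

# Timestamps in STIX use RFC 3339 format in UTC.
now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

# Illustrative STIX 2.1 indicator for a (made-up) suspicious IP address.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspicious command-and-control address",
    "pattern": "[ipv4-addr:value = '203.0.113.7']",
    "pattern_type": "stix",
    "valid_from": now,
}

# Serialized as JSON, the object can be exchanged between tools that
# understand STIX, regardless of vendor.
print(json.dumps(indicator, indent=2))
```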

The open source technology and the ability to easily gather and exchange data from multiple sources should be a very attractive feature for customers, analysts said.

“The main takeaway is their ability to federate security-related data from a broad variety of sources, and provide flexible/open access to that," said Martin Kuppinger, founder and principal analyst at KuppingerCole. "They federate, not replicate, the data, avoiding having yet another data lake. And the data can be consumed in a flexible manner by apps you build on IBM Security Cloud Pak but also by external services. With security data commonly being spread across many systems, this simplifies building integrated security solutions and better tackling the challenges in managing complex attacks. IBM successfully managed to launch this offering with a very broad and comprehensive partner ecosystem – it is not just a promise, but they deliver.”

Once the data is gathered and analyzed, the platform lets security teams orchestrate and automate their response to hundreds of common security scenarios, IBM said. Via the Cloud Pak’s support for Red Hat Ansible automation technology, customers can quickly define actions such as segmenting a multicloud domain or locking down a server, Meenan said.

The platform helps customers formalize security processes, orchestrate actions and automate responses across the enterprise, letting companies react faster and more efficiently while arming themselves with information needed for increasing regulatory scrutiny, IBM said.

The Security Cloud Pak is a platform on which Big Blue will develop future applications, Meenan said, "to address new challenges and risks such as insider security threats, all designed in realistic ways for customer to deploy without having to rip and replace anything."

Kuppinger said the security Pak will have immediate value for larger businesses running their own security operations/cyber-defense centers.

“The biggest challenge for IBM might be education – it is a new approach. However, the offering distinguishes clearly from other approaches, providing obvious benefits and adding value to existing infrastructures, not replacing these. Thus, it is clearly more than yet another product, but something really innovative that adds value.”

Wednesday, 20 November 2019

IBM Kicks Up Kubernetes Compatibility With Open Source

On the first day of KubeCon, IBM kicked up its Kubernetes compatibility by announcing two new open source projects — Kui and Iter8 — along with advancements to the existing open source projects Tekton and Razee. IBM has a long history of contributing open source code, and today’s announcement comes as the vendor quickly moves to integrate its legacy software portfolio with Red Hat’s Kubernetes-focused OpenShift platform.

The mass migration to containerize and orchestrate applications in hybrid- and multi-cloud environments has seen Kubernetes emerge as the de facto container orchestration platform for cloud-native applications. And as such, vendors are scrambling to re-orient their operations to accommodate the burgeoning Kubernetes ecosystem. 

Despite an increasing number of cloud-native applications being deployed in Kubernetes environments, this Kubernetes adoption and sprawl has added a layer of operational complexity. James Governor, analyst and co-founder of RedMonk, noted in a recent blog post that “Kubernetes is extremely sophisticated, but with great sophistication comes great complexity.”

Because these environments require their own console or command-line interfaces (CLIs) such as kubectl or helm, for example, developers need something that can streamline management actions like monitoring, analyzing, and troubleshooting into a single tool. 

Open Source Kui
Kui is IBM’s pitch to minimize console switching for developers working in multi-cloud environments through a single tool. The vendor claims that navigating complex data through familiar CLIs and visualizations will allow developers to “seamlessly interact with multiple tools in order to minimize context switching and get more done in a single place.”

IBM is moving quickly to tie its legacy software portfolio into its newly-acquired Red Hat assets. It’s making those services available through Cloud Paks that rely on Red Hat’s Kubernetes-based OpenShift platform to use that software across any public or private cloud environment.

And IBM has already begun introducing Kui into select IBM Cloud offerings. According to the blog post, the recently released IBM Cloud Pak for Multi-cloud Management features a Kui-based Visual Web Terminal to visually orchestrate and navigate the results of commands.

Iter8 for Istio
IBM also introduced a new tool to the Istio ecosystem. It’s called Iter8, and it uses Istio APIs to perform comparative analytics. 

Istio acts as the control plane for management of the service mesh. It can handle the deployment of Envoy sidecars — where Envoy sits next to a running container pod — and coordinate that deployment through the container orchestration layer working with platforms like Kubernetes or Apache Mesos.

With Iter8, developers can compare versions of applications and perform extended behavioral analysis of microservices to gain a greater understanding of the impact a version update might have on other microservices in the environment, IBM says.

Iter8’s announcement is another tip of the hat to Red Hat as one of the founding developers of the Knative platform.

Tekton
Tekton, a Google-born project, was presented earlier this year to the Linux Foundation’s Continuous Delivery Foundation (CDF) as the basis for a continuous integration/continuous delivery (CI/CD) platform for deployments to Kubernetes, virtual machines (VMs), bare metal, and mobile use cases.

This open source effort, which was initially developed within the Knative ecosystem before being spun out into its own project, now enables developers to configure and run CI/CD pipelines within a Kubernetes cluster.

IBM is announcing the integration of Tekton into the IBM Cloud Continuous Delivery service to leverage the built-in scaling, reliability, and extensibility of Kubernetes and to modernize the CD control plane. Tekton provides specifications for pipelines, workflows, source code access, and other primitives.

Razee
Razee, a multi-cluster, continuous delivery tool that allows developers to manage applications in their Kubernetes-based cluster deployments, hit the open source scene earlier this year as part of IBM’s push to more deeply integrate OpenShift with IBM’s Cloud Private platform and middleware services.

After offering Razee internally within its cloud platform, which allowed for the tracking of changes across Kubernetes clusters — as promised — IBM has successfully integrated Razee to run on top of Red Hat OpenShift. 

“We’re working toward full support and certification to help clients use Razee to automate deployment of their clusters running on their preferred Kubernetes platform, Red Hat OpenShift,” the vendor wrote in a blog post. 

IBM also announced support for Razee with the IBM Cloud DevOps ToolChains to help users build and push applications from a single cloud service, speeding time to deployment.

https://www.sdxcentral.com/

GitHub makes CodeQL free for research and open source

CodeQL, a semantic code analysis engine and query tool for finding security vulnerabilities across a codebase, has been made available for free by GitHub for anyone to use in research or to analyze open source code.

CodeQL queries code as if it were data. Developers can use CodeQL to write a query that finds all variants of a vulnerability, and then share that query with other developers. For example, a developer could create a query that describes a cross-site scripting bug class, then use that query to find every instance of that bug class across a codebase. CodeQL also can be used to find zero days, variants of critical vulnerabilities, and defects such as buffer overflows or SQL injection issues.

CodeQL was developed several years ago by Semmle, which was acquired by GitHub in September. Prior to making CodeQL available for free for open source code, Semmle provided it as a commercially available service. It is still available under a commercial license for private code repositories.

Features of CodeQL include:

  • Libraries for control and data flow analysis, taint tracking, and threat model exploration. Languages supported include C/C++, C#, Java, JavaScript, Python, and others. One language currently not supported is Rust.
  • CodeQL plug-ins to IDEs.
  • The LGTM query console, which can be used to write CodeQL in a browser and query a portfolio for vulnerabilities.
  • The ability to run out-of-the-box queries or custom queries on multiple codebases.

How to access CodeQL

CodeQL can be tried out in the LGTM query console at LGTM.com.         

Tuesday, 19 November 2019

Why do we Test? What is the Purpose of Software Testing?

To answer the above question(s), let us look at the nature of software testing. The software testing group is a service provider. Software testers provide valuable information and insights into the state of the system.

This information contributes towards reducing the ambiguity about the system. For example, when deciding whether to release a product, the decision makers would need to know the state of the product including aspects such as the conformance of the product to requirements, the usability of the product, any known risks, the product’s compliance to any applicable regulations, etc.

Software testing enables making objective assessments regarding the degree of conformance of the system to stated requirements and specifications.

Testing verifies that the system meets the different requirements, including functional, performance, reliability, security, usability, and so on. This verification is done to ensure that we are building the system right.

In addition, testing validates that the system being developed is what the user needs. In essence, validation is performed to ensure that we are building the right system. Apart from helping make decisions, the information from software testing helps with risk management.
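
As a small, hypothetical illustration of the difference: the pytest-style checks below verify that an implementation conforms to a stated requirement (building the system right), while validation would mean confirming with users that the requirement itself, a 10% discount at $100, is what they actually need (building the right system).

```python
# Hypothetical requirement: "A 10% discount is applied to orders of $100 or more."


def apply_discount(order_total: float) -> float:
    """Return the payable amount after any applicable discount."""
    return order_total * 0.9 if order_total >= 100 else order_total


# Verification: the implementation matches the stated requirement.
def test_discount_applied_at_threshold():
    assert apply_discount(100.0) == 90.0


def test_no_discount_below_threshold():
    assert apply_discount(99.99) == 99.99
```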

Software testing contributes to improving the quality of the product. You would notice that we have not mentioned anything about defects/bugs up until now.

While finding defects/bugs is one of the purposes of software testing, it is not the sole purpose. It is important for software testing to verify and validate that the product meets the stated requirements/specifications.

Quality improvements help the organization to reduce post-release costs of support and service, while generating customer goodwill that could translate into greater revenue opportunities.

Also, in situations where products need to ensure compliance with regulatory requirements, software testing can safeguard the organization from legal liabilities by verifying compliance.

Sunday, 17 November 2019

Google Cloud launches TensorFlow Enterprise

Google Cloud has introduced TensorFlow Enterprise, a cloud-based TensorFlow machine learning service that includes enterprise-grade support and managed services.

Based on Google’s popular, open source TensorFlow machine learning library, TensorFlow Enterprise is positioned to help machine learning researchers accelerate the creation of machine learning and deep learning models and ensure the reliability of AI applications. Workloads in Google Cloud can be scaled and compatibility-tested. Key features of TensorFlow Enterprise include:

  • Enterprise-grade support, with long-term version support for TensorFlow. For certain versions, security patches and select bug fixes will be provided for as long as three years. These versions will be supported on Google Cloud, with patches and fixes accessible in the TensorFlow code repo. Also, “white-glove” service will be offered to cutting-edge customers, featuring engineer-to-engineer assistance from TensorFlow and Google Cloud teams at Google.
  • Cloud-scale performance, with Google Cloud providing a range of compute options for training and deploying models. Deep Learning VMs, now generally available, and Deep Learning Containers, in beta, are featured. Both products are available for Nvidia GPUs and Google’s Cloud TPUs. TensorFlow Enterprise optimizations, meanwhile, have improved data read speeds by as much as three times.
  • Managed services, with enterprises able to leverage cloud services such as Kubernetes Engine and AI Platform.

Users can get the benefits of TensorFlow Enterprise by using the TensorFlow Enterprise Distribution on AI Platform Notebooks, AI Platform Deep Learning Containers, and the AI Platform Deep Learning VM Image.
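
For context, code written for the TensorFlow Enterprise Distribution is ordinary TensorFlow. The minimal Keras sketch below would run unchanged on an AI Platform Notebook or Deep Learning VM; the Enterprise value comes from the supported, patched runtime and the managed Google Cloud infrastructure underneath it rather than from any new API.

```python
import tensorflow as tf

# A small Keras classifier on MNIST; nothing here is specific to
# TensorFlow Enterprise -- the same code runs on the open source release.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=64)
```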

How to access TensorFlow Enterprise

A free trial of TensorFlow Enterprise is available on the Google Cloud website. The service is currently in a beta stage. 


Saturday, 16 November 2019

Igniting Passion And Diversity In STEM

It wasn’t until my first job out of college—one in the wireless business—that I developed a passion for technology and saw how STEM impacts everything we do. This was the spark that led me to fall in love with the network engineering side of wireless, and the more immersed I became in the industry, the more exposed to and interested I became in other areas of technology.

Now, as the father of a teenage daughter who’s interested in STEM subjects and potentially even computer science, I want her to find her own opportunities and discover where her passions lie, and I want to ensure she has the resources and encouragement to pursue them.

In the U.S., there simply aren’t enough people pursuing STEM to meet growing technology demands. According to the Smithsonian Science Education Center, "78 percent of high school graduates don't meet benchmark readiness for one or more college courses in mathematics, science or English." And then there are barriers to STEM advancement like four- or six-year degree requirements for many jobs—which are remarkably difficult for most people to afford. So it’s not that surprising when people like Nasdaq vice chairman Bruce Aust say, “By 2020, there will be one million more computing jobs than there will be graduates to fill them, resulting in a $500 billion opportunity gap.”

What’s clear is we need to make it easier for people to experiment with STEM early in life, then create accessible and alternative opportunities to pursue their dreams. Equally important, we need to find ways to dramatically advance gender diversity in STEM fields to accelerate innovation around the world. 

Fostering Excitement Around STEM Takes a Village

The Washington Alliance for Better Schools (WABS)—an organization whose board I serve on—partners with school districts around Western Washington State and is an example of families, teachers, schools, and public- and private-sector businesses uniting to develop meaningful STEM education and advancement opportunities, because everyone involved can benefit. Hands-on learning and vocational programs like its After School STEM Academy are a great way to help students connect the dots of scientific principles in a fun way. And WABS’ 21st Century Community Learning Centers leverage Title IV funds to help students meet state and local academic standards—from homework tutoring to leadership opportunities that can turn into summer internships or jobs.

As students’ interests in STEM grow, it creates a fantastic opportunity for businesses to see passions play out through hackathons, group ideation, and other challenges. Recently, for the second consecutive year, T-Mobile’s Changemaker Challenge initiative—in partnership with Ashoka—called on youth aged 13 to 23 from the U.S. and Puerto Rico to submit big ideas for how they would drive change in their communities. T-Mobile received 428 entries—a 28% increase over last year—133 of them in the ‘Tech for Good’ category. Interestingly, one quarter of all the tech entries were focused on STEM projects and, even more interestingly, 63% of all technology category applications were from young women. We saw submissions ranging from apps to robots to video games—all with the goal of changing the world for good. Next up, we’ll announce the Top 30 teams, and each of them will receive a trip to T-Mobile’s HQ for the three-day Changemaker Challenge Lab to supercharge their projects along with some seed funding. Three category winners will pitch their ideas to T-Mobile leadership for a chance to win the $10,000 grand prize. To say that these young people’s ideas are inspiring is an understatement!

Accelerating Innovation Through Gender Diversity and Inner-Sourcing

Women aren’t typically well represented in many STEM-focused industries. Gender diversity is crucial to designing and building innovative solutions around the world, including T-Mobile’s products and services. At least half of our customers are female, and of the more than 50,000 employees who make up T-Mobile, 42% identify as female. If our product and technology employees don’t represent the diversity in our community, we stand to lose relevance in the market. By making diversity and inclusion a thoughtful, premeditated, sustained, and structural part of our recruitment and retainment of employees—including network engineers, software developers, data scientists, and other STEM professions—we’re able to foster a stronger company culture and build more innovative, customer experience obsessed products and services.

Let’s not forget that plenty of STEM-related jobs don’t include “engineer”, “developer”, or “scientist” in the job title across fields that intersect technology and digital customer experiences. One way we’ve cultivated the right talent at T-Mobile is “inner-sourcing” existing employees. For instance, through our Team of Pros program (TOPs), we provide opportunities for our frontline retail and customer care employees to apply for a 6 to 9-month program in a product management capacity to learn and work directly with engineering teams to ensure a tight coupling between what customers really want and the products, apps, training, and troubleshooting resources we design and develop. This is a great opportunity for our frontline employees to pivot into full-time STEM-related roles within T-Mobile corporate, without the need to pursue a formal technology-oriented education.

Championing STEM to Create a Better World

We live in a world where technology is omnipresent; however, connected, collaborative, and continuous STEM education isn’t equally accessible, and gender diversity is not well represented. To address pervasive global issues like climate change, resource inequality, economic stagnation, disease prevention, and others, we need diverse people who understand technical processes and technologies to work together to develop effective solutions. For those of us fortunate enough to reach a level of financial stability in STEM fields, we owe it to the future of our world to give back by leading and inspiring today’s and the next generation of technology leaders.

https://bit.ly/2KsMpPL

Edge computing best practices

Data processing, analytics, and storage increasingly are taking place at the network edge, close to where users and devices need access to the information. Not surprisingly, edge computing is becoming a key component of IT strategy at a growing number of organizations.

A recent report from Grand View Research predicted the global edge computing market will reach $3.24 billion by 2025, expanding at a “phenomenal” compound annual growth rate (CAGR) of 41% during the forecast period.

One of the biggest contributors to the rise of edge computing is the ongoing growth of the Internet of Things (IoT). The vast amounts of data created by IoT devices might cause delays and latency, Grand View says, and edge computing solutions can help enhance data processing power, which further aids in avoiding delays. Data processing takes place close to the source of the data, which makes it more feasible for business users to gain real-time insights from the data IoT devices are gathering.

Also helping to boost the edge market is the presence of high-connectivity networks in regions such as North America.

Edge computing is used in a variety of industries such as manufacturing, IT and telecommunications, and healthcare. The healthcare and life sciences sector is estimated to see the highest CAGR between 2017 and 2025, Grand View says, because the storage capabilities and real-time computing offered by edge computing tools enable the delivery of reliable healthcare services in less time. The decision-making process is enhanced as network failures and delays are avoided.

Supporting edge computing can be challenging for organizations because it involves a lot of moving parts and a change in thinking from the current IT environment dominated by data centers and cloud-based services. Here are some best practices to consider when building a strategy for the edge.

Create a long-term edge computing vision

Edge computing involves a lot of different components, and it requires building an infrastructure with the capacity and bandwidth to ingest, transform, analyze, and act on enormous volumes of data in real time, says Matt Kimball, senior analyst, data center, at global technology analyst and advisory firm Moor Insights & Strategy.

On the networking side alone, it means deploying connections from devices to the cloud and to data centers. While companies might have a desire to ramp up their edge infrastructure as soon as possible in order to support IoT and other remote computing efforts, all of this is not going to happen overnight.

“Think big, act small – meaning map out the long-term vision for edge deployments” but don’t be in a rush to implement edge technologies all over the place right away, Kimball says.

The speed at which edge technologies can be rolled out varies based on industry, deployment model, and other factors, Kimball says. But given the rapid pace of innovation in the edge market, “it’s easy to get swayed by technology that is very cutting edge but maybe doesn’t contribute to an organization’s needs,” he says. “So, map out the vision and execute in small steps that are manageable.”

As part of planning the edge strategy, develop a business plan that will help secure a budget.

“Most organizations say that cost is a top concern – even above data security,” says Jennifer Cooke, research director, datacenter trends and strategies, at research firm International Data Corp. (IDC). “Obtaining budget is difficult and requires a solid plan for how edge IT is going to drive value for the business. Because cost is such a high concern, pay-per-use offerings will become increasingly sought after.”

Address cultural issues: Edge computing involves IT and operations

Putting processing power at the edge involves not just IT, but operational technology (OT) as well, and these are two separate organizations with different cultures and personalities, Kimball says.

“The OT folks are different,” Kimball says. “These are equally technical folks – in many cases, more technical – but focused on things like making sure a water treatment plant is operating properly through Supervisory Control And Data Acquisition (SCADA) process control systems.”

These are the systems that make sure valves open at certain times, for example, and environmental conditions are within specified ranges, Kimball says. It’s “IT for the industrial environment. So, processes, tools, and the kinds of technologies deployed and managed are different between the two organizations,” he says.

Bridging the two into one group that manages from the core data center out to the field or shop floor is a big challenge, but one that needs to be addressed. “Culture matters. If an organization can’t converge IT and OT at the organizational level, the convergence of technology will fall short,” Kimball says.

IT and operational teams must be equal partners, says Daniel Newman, principal analyst and founding partner at Futurum Research, a research and analyst firm. While edge computing is mainly driven by operational teams today, IT teams are responsible for managing these systems in more than two-thirds of enterprises, Newman notes in a 2018 study.

For edge computing to grow and increase its overall business value, IT must become more of a strategic collaborator with operational teams. It's not only managing edge computing resources, but also being involved in the long-term strategy, budgeting, and sourcing to ensure these systems are in line with larger, enterprise-wide strategic and transformational initiatives, Newman says.

Find partners to help with edge computing technology deployments

Many organizations say they lack the internal skills to support IT at the edge, Cooke says. “For this reason, we believe that many edge buildouts will happen through partnerships with colocation providers as well as vertical industry solutions through integrators,” she says.

IDC finds that many organizations are looking for a “one-stop solution” for delivering IT service at the edge. “Systems integrators with vertical market expertise will be sought after to help organizations along their edge journey,” Cooke says.

For example, a retail business might want to implement a solution, but is not interested in putting all the pieces together itself. Or it might want to derive insights from data on site at the edge and build the infrastructure to accomplish this, which can be complex.

“Beyond the software tools to analyze data, the solution needs connectivity as well as compute and storage infrastructure,” Cooke says. “Considerations such as controlling the physical environment [including temperature and humidity], physical security, and protection of equipment are important considerations as well.” An expert partner can help with all of this.

Don’t forget about edge computing security

As with any other aspect of IT, edge computing comes with its own set of cyber security threats and vulnerabilities. The InfoSec Institute, an organization that provides training for information security and IT professionals, in August 2018 noted a number of security issues related to the edge.

These risks include weak passwords for access to devices, which makes them easy targets for attackers; insecure communications, with data collected and transmitted by devices largely unencrypted and unauthenticated; physical security risks, because security is commonly acknowledged to be a low priority in the development of IoT and other edge devices; and poor service visibility, with security teams unaware of the services running on certain devices.

“It’s a top-of-mind issue,” Kimball says. “Not just security on the [device]. But security of the data that’s transmitted, security of the servers that sit on the edge and perform the data transformation and analysis, and security of the data as it travels from the edge to the cloud to the core data center.”

InfoSec Institute recommends actions such as expanding corporate password policies to include testing and enforcing strong passwords on edge devices; encrypting the data sent by devices or using virtual private networking (VPN) to encrypt traffic in transit between devices and their destination; taking steps to provide devices with physical security protections; and identifying and securing services provided by devices, including analysis of network logs to identify traffic from unknown devices within an organization’s network perimeter.
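
As a simple sketch of that last recommendation, the Python snippet below scans an exported connection log for source addresses that are not in a device inventory. The file names, CSV column, and log format are hypothetical; real environments will differ.

```python
import csv

# Hypothetical inputs: a text file of known device IPs (one per line) and a
# connection log exported as CSV with a "src_ip" column.
INVENTORY_FILE = "known_devices.txt"
CONNECTION_LOG = "edge_connections.csv"


def load_known_devices(path: str) -> set:
    """Read the inventory of known device IP addresses."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}


def find_unknown_sources(log_path: str, known: set) -> set:
    """Return source IPs seen in the log that are not in the inventory."""
    unknown = set()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            src = row.get("src_ip", "")
            if src and src not in known:
                unknown.add(src)
    return unknown


if __name__ == "__main__":
    known_devices = load_known_devices(INVENTORY_FILE)
    for ip in sorted(find_unknown_sources(CONNECTION_LOG, known_devices)):
        print(f"Unknown device traffic from {ip}")
```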

Companies need to have a security strategy in place to properly secure both IoT and edge computing systems, Newman says, from a physical and logical perspective. That includes data that is processed and remains at the edge.

Prepare for rapid IoT growth: Edge computing scalability required

For some sectors such as manufacturing, healthcare, utilities, and municipal government, the growth of the IoT will likely be dramatic over the coming years in terms of the number of connected devices and the volumes of data gathered and processed, so companies will need to build scalability into their edge computing plans.

“Not only are we anticipating an increase in the overall percentage of data generated at the edge being processed at the edge, but we see an ongoing increase in the volume of data being created throughout the enterprise, and particularly in the intelligent edge of the future,” according to a 2018 Futurum report on the edge.

As edge computing expands to support operational IoT devices and data, the implementation of edge computing will make it easier to derive value from new IoT-based data sources, the report says. Without planning for the scalability of storage, data analytics, network connectivity, and other functions, companies will not be able to reap the full benefits of the edge or IoT.

shorturl.at/luPWZ

Friday, 15 November 2019

Bitcoin, blockchain jobs remain unfilled as interest drops off

While the number of jobs related to blockchain and cryptocurrencies such as bitcoin has skyrocketed in the past four years, the number of searches for those jobs has drastically dropped recently, according to job search site Indeed.
Over the past year, growth in the share of cryptocurrency- and blockchain-related job postings per million has slowed on Indeed, increasing just 26%. At the same time, the share of searches per million for jobs in the field has decreased by 53%.
A year ago, Indeed's data similarly showed interest in blockchain development skills, including the creation of cryptocurrencies, had waned as bitcoin's value - and the hype around it - fell off.

"We've previously covered how bitcoin's volatility seems to correlate with job seeker interest, and the change in bitcoin price this year might be why job searches have declined," said Allison Cavin, a writer for Seen, Indeed's tech hiring platform.
The mismatch between the number of jobs being created and the number of qualified candidates to fill them has always been lopsided. According to Indeed.com, in the four-year period between September 2015 and September 2019, the share of cryptocurrency jobs per million grew by 1,457%. In that same time period, the share of searches per million increased by only 469%.
Bitcoin's value has been on a roller coaster ride in the past two years. In 2018, the cryptocurrency's price plummeted from nearly $19,500 in February to around $3,600 by the end of last year. Over the past year, however, bitcoin's value jumped to more than $12,000 before settling back to about $9,200 today. The volatility seems to be turning potential job seekers off.
"For the first time, the number of jobs per million exceeded the number of searches per million," Cavin wrote. It could be reasonable to assume that if bitcoin drops dramatically again, a candidate looking for a blockchain role would run into less competition than they would after a large increase."
An April report from management consulting firm Janco Associates showed blockchain positions remained unfilled as a dearth of qualified IT workers persisted, and those who do have the skills remain in high demand. The shortage of qualified candidates with blockchain and cryptocurrency experience has also led to companies poaching talent from each other.
"With 20,600 new IT jobs created in the first three months of 2019, the market is tight," Janco Associates CEO Victor Janulaitis said at the time. "There is a skills shortage, some projects are missing key early benchmark dates due to lack of staffing.
Last year, the job of developing blockchain distributed ledgers for businesses was ranked first among the top 20 fastest-growing job skills by freelance employment website Upwork. LinkedIn also ranked blockchain developer as the No. 1 emerging job.
The most promising jobs include more than just developers and engineers, according to research by BusinessStudent.com — a site that reviews business schools and their courses.
According to a mid-year salary survey from Janco Associates, blockchain development and management positions remain in high demand.
"From coding smart contracts to designing user interfaces for cryptocurrency apps to building decentralized applications (dApps) that communicate with the blockchain, there's no shortage of work to be done in the bitcoin field—and the tech jobs in our top five prove it," Cavin wrote.
For a better chance at landing a blockchain-related job, candidates should become familiar with basic cryptography, P2P networks and a language like C++, Java, Python or JavaScript (along with certain crypto soft skills).
"To stand out, learn new blockchain development languages like Hyperledger, Bitcoin Script, Ethereum's Solidity, the Ripple protocol or even languages currently in development like Rholang to stay ahead of the curve," Cavin wrote.

Thursday, 14 November 2019

Native or non-native? Choosing a cloud integration solution

Data integration solutions come in one of three categories. First, there are old-school data integration solutions created in the 1990s out of the EAI movement, now expanded to include public cloud computing domains. Second, there are newer cloud-based iPaaS (integration platforms as a service) solutions built from the ground up as on-demand integration servers hosted on the open Internet but existing outside of public cloud providers. Third, there are data integration solutions that exist within public clouds, which are typically more primitive. However, these native services are easily deployable from inside a public cloud provider.

The old school data integration solution is typically not desirable for cloud computing, considering that cloud-based integration is often an afterthought. That said, many of these solutions are already installed and are very difficult to unseat without a tremendous amount of cost and risk.

My general advice is to leave them where they are, as long as they work up to expectations. Although they may not be desirable for integration with public cloud-based applications and data stores, due to their on-premises enterprise focus, replacing them with iPaaS or cloud-native data integration solutions is cost ineffective.

iPaaS solutions have been built with data integration in the public cloud in mind, but can also handle on-premises and traditional data integration. These have come into the market in the last 10 years and are purpose-built for hybrid and multicloud problem domains that include internal systems as well.

For the most part this is the sweet spot for data integration, cloud or not. The on-demand approach means that you likely don’t have to deal with hardware and software configurations, the software is updated automatically and continuously improved, and the connectors and adaptors are purpose-built for cloud-based systems and databases.

Cloud-native integration solutions usually focus on specific patterns, such as integrating data streams and raw messaging. Although there are connectors and adaptors for most native systems that produce and consume data, integration with on-premises systems or other public cloud providers is typically not supported without a great deal of custom development.

All three types of solutions have their purpose. Leave old school data integration in place unless there is a compelling reason to remove it. Keep in mind that it can act as an on-premises data access gateway for the iPaaS tech as well. You can expect to replace it at some point, but pushing it five years down the road means that better solutions will be available.

Cloud-native data integration largely focuses on tactical solutions that are much more primitive in nature. They do provide tools for intracloud integration, but for larger strategic integration platforms or general purpose data integration, they fall short by design.

That leaves iPaaS as the data integration tool of choice for most cases. The technology is mature and works to expectations. Enough said.         

https://www.infoworld.com/

Forrester: Edge computing is about to bloom

The next calendar year will be the one that propels edge computing into the enterprise technology limelight for good, according to a set of predictions from Forrester Research.

While edge computing is primarily an IoT-related phenomenon, Forrester said that addressing the need for on-demand compute and real-time app engagements will also play a role in driving the growth of edge computing in 2020.

What it all boils down to, in some ways, is that form factors will shift sharply away from traditional rack, blade or tower servers in the coming year, depending on where the edge technology is deployed. An autonomous car, for example, won’t be able to run a traditionally constructed server.

It’ll also mean that telecom companies will begin to feature a lot more heavily in the cloud and distributed-computing markets. Forrester said that CDNs and colocation vendors could become juicy acquisition targets for big telecom, which missed the boat on cloud computing to a certain extent, and is eager to be a bigger part of the edge. They’re also investing in open-source projects like Akraino, an edge software stack designed to support carrier availability.

But the biggest carrier impact on edge computing in 2020 will undoubtedly be the growing availability of 5G network coverage, Forrester says. While that availability will still mostly be confined to major cities, that should be enough to prompt reconsideration of edge strategies by businesses that want to take advantage of capabilities like smart, real-time video processing, 3D mapping for worker productivity and use cases involving autonomous robots or drones.

Beyond the carriers, there’s a huge range of players in edge computing, all of which have their eyes firmly on the future. Operational-device makers in every field from medicine to utilities to heavy industry will need custom edge devices for connectivity and control, huge cloud vendors will look to consolidate their hold over that end of the market, and AI/ML startups will look to enable brand-new levels of insight and functionality.

What’s more, the average edge-computing implementation will often use many of them at the same time, according to Forrester, which noted that integrators who can pull products and services from many different vendors into a single system will be highly sought-after in the coming year. Multivendor solutions are likely to be much more popular than single-vendor, in large part because few individual companies have products that address all parts of the edge and IoT stacks.

Friday, 8 November 2019

Why the Rust language is on the rise

You’ve probably never written anything in Rust, the open source, systems-level programming language created by Mozilla, but you likely will at some point. Developers crowned Rust their “most loved” language in Stack Overflow’s 2019 developer survey, while Redmonk’s semi-annual language rankings saw Rust get within spitting distance of the top 20 (ranking #21).

This, despite Rust users “find[ing] difficulty and frustration with the language’s highly touted features for memory safety and correctness.”

Most developers don’t normally travel into systems programming territory. Application developers, for example, tend not to need to get close to the underlying hardware. They also likely don’t need to build platforms upon which other software will run, a core definitional element of systems programming.
For those developers who do work with lower-level programming languages like C or C++, Rust is a revelation, something I first covered in 2015. Fast forward a few years, however, and Rust just keeps getting better.
Asked to detail Rust’s major selling points, developer David Barsky offers the following:
  • Performant. Rust is able to replace C/C++ in the spaces where those languages have typically thrived. For example: “For latency-sensitive network services, Rust’s lack of runtime garbage collection results in almost non-existent tail latencies.”
  • Reliable. “Its type system and borrow checker—a static, compile-time garbage collector—prevents whole classes of bugs that are accepted as ‘normal’ in Python, Java, and C++.”
  • Developer productivity. “Cargo, the build tool and package manager, is one of the best build systems and package managers I’ve used.” Rust also comes with excellent built-in documentation, and great, built-in unit, integration, and documentation testing.
Barsky’s experience seems similar to Scott’s. Coming from higher-level programming languages (Java, Ruby on Rails), Scott says his experience with C was less-than-pleasant: “C was awful because I was constantly running into memory issues, segfaults, etc. And I more or less felt like I was fighting with the code the whole time.”
Rust, by contrast, was “systems programming with guard rails.” Scott explains:
Then I tried Rust (it had just turned 1.0), and it felt like systems programming with guard rails. All the things I needed to do low-level systems programming, but with a lot of help to debug and to make the code safe – like the borrow checker and compiler, and then later on tooling like the linters (“clippy”). It had offered a lot of the familiar aspects of functional and object-oriented programming, and just seemed to fit with my mental model of how I wanted to build systems.
As co-founder of Oso, Scott couldn’t avoid lower-level programming. Oso, with a mission to “make back-end infrastructure security invisible for developers and simple for ops,” needs the performance that a systems-level language offers. “We can’t use a garbage-collected language like Go, because the performance wouldn’t be consistent enough for what we do, since we sit on the critical path of customer traffic,” Scott said. 
All of which sounds great, until we return to the potential problem of sourcing developer talent well-versed in a relatively new language. However, accessible talent may be Rust’s best feature of all.

Rust programmers wanted

A critical component of learning something new is having people willing to help with the transition. Here Rust shines. As Barsky puts it,
The Rust community is full of passionate, kind, and intelligent people. It has a strongly-enforced code of conduct, which means that rude or harassing behavior is not tolerated. Anecdotally, it has some of the highest concentrations of LGBTQA people I’ve seen in any tech community.
This community is a big reason that, according to Scott, developers can pick up Rust in a few months. Rust “requires a bit of a change of mindset,” he says. “You need to do more work up front reasoning about things like types and lifetimes.” But once you get there “it pays dividends down the line.”
Small wonder, then, that so many developers love Rust. The upside is big and the downside is minimized by Rust’s welcoming and inclusive community.