Sunday, 21 November 2021

Edge computing: The architecture of the future

As technology extends deeper into every aspect of business, the tip of the spear is often some device at the outer edge of the network, whether a connected industrial controller, a soil moisture sensor, a smartphone, or a security cam.

This ballooning internet of things is already collecting petabytes of data, some of it processed for analysis and some of it immediately actionable. So an architectural problem arises: You don’t want to connect all those devices and stream all that data directly to some centralized cloud or company data center. The latency and data transfer costs are too high.

That’s where edge computing comes in. It provides the “intermediating infrastructure and critical services between core datacenters and intelligent endpoints,” as the research firm IDC puts it. In other words, edge computing provides a vital layer of compute and storage physically close to IoT endpoints, so that control devices can respond with low latency – and edge analytics processing can reduce the amount of data that needs to be transferred to the core.
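To make that data-reduction idea concrete, here is a minimal, hypothetical sketch of the kind of aggregation an edge node might perform before anything crosses the wire to the core. The field names and threshold are invented for illustration:

```python
from statistics import mean

def summarize_window(readings, alert_threshold=80.0):
    """Reduce a window of raw sensor readings to a compact summary.

    Only the summary (plus any alert-worthy raw values) travels to the
    core, cutting transfer volume while keeping low-latency alerts local.
    """
    alerts = [r for r in readings if r > alert_threshold]
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "alerts": alerts,  # raw values worth forwarding immediately
    }

# Five raw readings collapse to one small summary record.
window = [21.5, 22.0, 21.8, 95.2, 22.1]
summary = summarize_window(window)
```

Instead of streaming every reading upstream, the edge node ships one summary per window and only escalates the anomalous values.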

In “Proving the value of analytics on the edge,” CIO contributor Bob Violino offers three case studies that illustrate the benefits of edge architecture. Two involve transportation: One centers on the collection and processing of telematics from fleets of freight vehicles to improve safety; the other focuses on real-time collection of traffic data in Las Vegas to improve the city's traffic control. The third is an epic edge case: Adding analytics processing to satellites that capture geospatial imagery, cutting the amount of data transferred to the ground. 

Edge architecture is also shaking up one of the original IoT areas, medical devices. Processing medical IoT data at the edge at scale is a relatively new idea, explains Computerworld contributor Mary K. Pratt in "The cutting edge of healthcare: How edge computing will transform medicine." With the healthcare industry facing a fresh wave of data emanating from wearable health monitors, allocating edge compute power to process those petabytes will become increasingly imperative.

InfoWorld's Martin Heller takes a different tack in "How to choose a cloud IoT platform." All the major clouds offer platforms for IoT asset management -- cataloging devices, monitoring them, updating them, etc. Also, they provide edge "zones," appliances, and various on-prem cloud choices that can serve as edge computing nodes. And of course, the big clouds offer all the analytics options you could want for processing IoT data.

Unfortunately, you can't escape the fact that the more you physically distribute your compute and storage, the more you increase your attack surface area. That's one concern examined in "Securing the edge: 4 trends to watch" by CSO contributor Jaikumar Vijayan. Another trend is even more obvious: Escalating alarm over the inherent vulnerabilities of IoT devices themselves, which together raise the ante for edge security. One positive development Vijayan identifies is the accelerated shift to SASE (secure access service edge), which integrates SD-WAN and security into a single edge solution (see the guide "Who's selling SASE and what do you get?").

Security is only one of the liabilities raised in Network World's "Edge computing: 5 potential pitfalls." Complexity is the leading villain -- there are so many choices of technologies and providers that enterprises often turn to partners for planning and implementation.

But that's true of many emerging areas of technology. Edge computing is exciting because it signals a shift in the way enterprises view the IT estate: If we're really going to transform the enterprise, then appropriate technology must be deployed in every corner of the business, with streaming data feeding continuous optimization. Edge computing provides a framework for that vision.

www.networkworld.com

Saturday, 13 November 2021

How to automate QA testing of SaaS and low-code applications

Quality assurance automation engineers test applications developed in-house, from legacy monoliths to cloud-native applications that leverage microservices. A typical mission-critical application requires a combination of unit testing at the code level, code review, API tests, automated user experience testing, security testing, and performance testing. The best devops practice is to automate running these tests and then select an optimal subset for continuous testing inside CI/CD (continuous integration and continuous delivery) pipelines.
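The "optimal subset" step above can be sketched in a few lines. This is a hypothetical example, with an invented mapping from source files to the test suites that cover them; a real pipeline would derive that mapping from coverage data:

```python
# Hypothetical mapping from source modules to the test suites that
# cover them; a real pipeline would build this from coverage reports.
TEST_MAP = {
    "billing.py": ["test_billing_unit", "test_billing_api"],
    "auth.py": ["test_auth_unit", "test_security_scan"],
    "ui/cart.js": ["test_cart_ux"],
}

def select_tests(changed_files, always_run=("test_smoke",)):
    """Pick the subset of tests a CI/CD pipeline should run for a commit."""
    selected = set(always_run)
    for path in changed_files:
        selected.update(TEST_MAP.get(path, []))
    return sorted(selected)

# A commit touching only auth.py triggers its tests plus the smoke suite.
tests = select_tests(["auth.py"])
```

The point is not the mapping itself but the practice: every commit runs a fast, relevant slice of the full automated suite inside the pipeline.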

But what about applications, workflows, integrations, data visualizations, and user experiences configured using SaaS (software-as-a-service) platforms, low-code development tools, or no-code platforms that empower citizen developers? Just because there’s less or no coding involved, does that automatically imply that workflows function as required, data processing meets the business requirements, security configurations align with company policies, and performance meets user expectations?

So, what should be tested? How can these apps be tested without access to the underlying source code? Where should IT prioritize testing, especially considering many devops organizations are understaffed in QA engineers? 

Start by defining and implementing agile acceptance testing

Low-code and no-code require testing the business logic

Use low-code testing platforms and machine learning

That’s a high bar for deployment frequency and testing practices, and one you should hope other SaaS and low-code platforms aim for. This level of testing, complemented with the development team’s test automation efforts, helps reduce deployment risks, especially for applications requiring high reliability.

https://www.infoworld.com/

Friday, 18 June 2021

Deal with supply chain issues using cloud computing

Is something you need out of stock? From bicycle parts to a single chip needed to complete a new car or truck, supply chain disruptions are killing many businesses—as well as impacting the consumers who depend on those businesses.

In 2020, the retail sector experienced drastic global inventory distortion due to the pandemic. The estimated value for out-of-stock items was $1.14 trillion. The value of inventory distortion costs in the global retail industry in 2020 was $580 billion at the store level, $512 billion for the supply chain, and $677 billion for the manufacturer, according to the IHL Group.

This is truly a classic dependency problem. If you’re building a product with 1,000 parts and all are available except one, that product doesn’t get sold. The 999 successful supply chain events are overshadowed by the one failure that did not allow a company to gain revenue for that product.

Some of this is self-inflicted. By leveraging technology correctly, many of these problems can be avoided, or at least better understood and mitigated proactively. I’m often taken aback by the number of businesses that are wholly dependent on a supply chain and have not automated ways of dealing with management, disruption, or optimization.

To run a supply chain effectively you need near-perfect information that reflects the current supply chain status. This means that you can see the delivery status of a needed part or product as well as the supply chain that links to that part or product, and any nesting supply chains that deal with them.

In other words, if a component can’t be shipped because of a weather event where the factory is located, that’s known in real time. The business understands that the part will be late and can proactively and automatically find alternatives. This occurs without humans having to view, analyze, or correct the processes. It’s fully automated.
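A toy sketch of that automated fallback, with invented supplier names, fields, and a mock real-time status feed, might look like this:

```python
def pick_supplier(part, suppliers, status):
    """Choose the cheapest supplier that can currently ship a part.

    `status` mimics a real-time feed ("ok" or "disrupted") keyed by
    supplier name; all names and fields here are illustrative.
    """
    candidates = [s for s in suppliers
                  if part in s["parts"] and status.get(s["name"]) == "ok"]
    # Prefer the cheapest available alternative; None if nothing ships.
    return min(candidates, key=lambda s: s["price"], default=None)

suppliers = [
    {"name": "FactoryA", "parts": {"chip-7"}, "price": 4.10},
    {"name": "FactoryB", "parts": {"chip-7"}, "price": 4.55},
]
# A weather event takes FactoryA offline; the system reroutes automatically.
feed = {"FactoryA": "disrupted", "FactoryB": "ok"}
choice = pick_supplier("chip-7", suppliers, feed)
```

Real systems layer forecasting, lead times, and contract terms on top, but the core loop is the same: status feed in, sourcing decision out, no human in the path.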

Now, consider the power of public clouds to provide data storage, centralized integration of remote systems, predictive analytics, and even full-blown supply chain automation software as a service. The power is available and cheap, but enterprises are not taking advantage. It’s about time to change that.

If there is any silver lining from the pandemic (besides working from home and not having to commute), it’s that we have seen how vulnerable our supply chains are. Even something small can disrupt them. Many of these disruptions are avoidable, yet they can easily lead to bankruptcy, or at least shrink the business.

The businesses that learn to automate and master their supply chains using public cloud technology will expand quickly. Events such as the pandemic will become opportunities for growth. We saw some businesses explode in 2020: those that had processes in place and automation to manage their supply chains effectively. Most of these success stories also leveraged cloud.  

https://www.infoworld.com/

Saturday, 24 April 2021

New AI tool tracks evolution of COVID-19 conspiracy theories on social media

A new machine-learning program accurately identifies COVID-19-related conspiracy theories on social media and models how they evolved over time—a tool that could someday help public health officials combat misinformation online.

“A lot of machine-learning studies related to misinformation on social media focus on identifying different kinds of conspiracy theories,” said Courtney Shelley, a postdoctoral researcher in the Information Systems and Modeling Group at Los Alamos National Laboratory and co-author of the study that was published last week in the Journal of Medical Internet Research.

“Instead, we wanted to create a more cohesive understanding of how misinformation changes as it spreads. Because people tend to believe the first message they encounter, public health officials could someday monitor which conspiracy theories are gaining traction on social media and craft factual public information campaigns to preempt widespread acceptance of falsehoods.”

The study, titled “Thought I’d Share First,” used publicly available, anonymized Twitter data to characterize four COVID-19 conspiracy theory themes and provide context for each through the first five months of the pandemic.

The four themes the study examined were that 5G cell towers spread the virus; that the Bill and Melinda Gates Foundation engineered or has otherwise malicious intent related to COVID-19; that the virus was bioengineered or was developed in a laboratory; and that the COVID-19 vaccines, which were then all still in development, would be dangerous.

“We began with a dataset of approximately 1.8 million tweets that contained COVID-19 keywords or were from health-related Twitter accounts,” said Dax Gerts, a computer scientist also in Los Alamos’ Information Systems and Modeling Group and the study’s co-author. “From this body of data, we identified subsets that matched the four conspiracy theories using pattern filtering, and hand labeled several hundred tweets in each conspiracy theory category to construct training sets.”
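The pattern-filtering step described here can be illustrated with a few keyword rules. These patterns are invented for illustration and are not the study's actual filters:

```python
import re

# Illustrative keyword patterns for the four themes; the study's real
# filters were more elaborate.
THEMES = {
    "5g": re.compile(r"\b5g\b", re.I),
    "gates": re.compile(r"\bgates\b", re.I),
    "lab_origin": re.compile(r"\b(lab|bioengineer\w*)\b", re.I),
    "vaccine": re.compile(r"\bvaccin\w*\b", re.I),
}

def match_themes(tweet):
    """Return the conspiracy-theory subsets a tweet falls into."""
    return [name for name, pat in THEMES.items() if pat.search(tweet)]

hits = match_themes("5G towers and the vaccine are connected!")
```

Tweets matched this way would then be hand-labeled, since keyword filters alone cannot distinguish someone spreading a theory from someone debunking it.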

Using the data collected for each of the four theories, the team built random forest machine-learning, or artificial intelligence (AI), models that categorized tweets as COVID-19 misinformation or not.
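A minimal version of that pipeline, assuming scikit-learn is available and using a tiny invented training set in place of the study's labeled tweets, looks like this:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Tiny hand-labeled set standing in for the study's training tweets.
texts = [
    "5g towers are spreading the virus",   # misinformation
    "bill gates engineered the pandemic",  # misinformation
    "wash your hands and wear a mask",     # factual
    "vaccine trials are in phase three",   # factual
]
labels = [1, 1, 0, 0]  # 1 = misinformation, 0 = not

# Bag-of-words features feed a random forest, as in the study.
vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

pred = clf.predict(vec.transform(["5g spreads the virus"]))[0]
```

The real models were trained on hundreds of labeled tweets per theme; with a corpus that size, the forest's per-feature votes become a reasonable misinformation signal.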

“This allowed us to observe the way individuals talk about these conspiracy theories on social media, and observe changes over time,” said Gerts.

The study showed that misinformation tweets contain more negative sentiment when compared to factual tweets and that conspiracy theories evolve over time, incorporating details from unrelated conspiracy theories as well as real-world events.

For example, Bill Gates participated in a Reddit “Ask Me Anything” in March 2020, which highlighted Gates-funded research to develop injectable invisible ink that could be used to record vaccinations. Immediately after, there was an increase in the prominence of words associated with vaccine-averse conspiracy theories suggesting the COVID-19 vaccine would secretly microchip individuals for population control.

Furthermore, the study found that a supervised learning technique could be used to automatically identify conspiracy theories, and that an unsupervised learning approach (dynamic topic modeling) could be used to explore changes in word importance among topics within each theory.
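The unsupervised half of that finding—watching word importance shift between time slices—can be approximated crudely with term frequencies. This is a stand-in for dynamic topic modeling, not the study's method, and the tweets are invented:

```python
from collections import Counter

def top_terms(tweets, k=2, stop=("the", "a", "in", "is")):
    """Rank words by frequency within one time slice of tweets.

    Comparing rankings across slices shows how word importance shifts,
    which is the intuition behind dynamic topic modeling.
    """
    words = [w for t in tweets for w in t.lower().split() if w not in stop]
    return [w for w, _ in Counter(words).most_common(k)]

march = ["the 5g towers", "5g is dangerous", "towers everywhere"]
april = ["microchip in the vaccine", "vaccine microchip plot"]
shift = (top_terms(march), top_terms(april))
```

Run per month, a comparison like this surfaces exactly the kind of drift the study describes: vaccine-microchip vocabulary surging after the Gates AMA.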

“It’s important for public health officials to know how conspiracy theories are evolving and gaining traction over time,” said Shelley. “If not, they run the risk of inadvertently publicizing conspiracy theories that might otherwise ‘die on the vine.’ So, knowing how conspiracy theories are changing and perhaps incorporating other theories or real-world events is important when strategizing how to counter them with factual public information campaigns.”

https://www.lanl.gov/discover/news-release-archive/2021/April/0419-ai-tool-tracks-conspiracy-theories.php

Friday, 12 February 2021

How the data-center workforce is evolving

The COVID-19 pandemic has profoundly affected many areas of IT, including the data center, where changes to the infrastructure--particularly adoption of cloud services--are bringing about the need for new skill sets among workers who staff them.

Perhaps no technology industry benefitted more from the pandemic than cloud computing; the location independence of cloud services makes them ideal for a world where the majority of line-of-business as well as IT workers are no longer in the office.

But does that mean businesses will rely on infrastructure as a service (IaaS) and no longer need their own on-premises data centers and data-center IT teams? Analysts and futurists have been asking this question for about a decade, but now cloud, already strong before the pandemic, has gone through an inflection point and brought new immediacy to the issue.

The answer is that data centers are not going anywhere anytime soon, but they will look fundamentally different. That's good news for people currently working in data centers and those considering careers there, because adoption of cloud and other changes will create a wave of new opportunities.

Uptime Institute predicts that data-center staff requirements will grow globally from about 2 million full-time employees in 2019 to nearly 2.3 million by 2025. Growth in expected demand will mainly come from cloud and colocation data centers. Enterprise data centers will continue to employ a large number of staff, but cloud data-center staff will outnumber enterprise data-center staff after 2025, Uptime says.

On the hiring side, finding the right talent remains difficult for many organizations. In 2020, 50% of data center owners or operators globally reported having difficulty finding qualified candidates for open jobs, compared to 38% in 2018, according to Uptime Institute.

For IT pros looking to be part of the new data center, here are some of the top roles and in-demand skills to develop.

Technical architect

The role of the technical architect has grown in importance because applications are no longer deployed in technology silos. In the past, each application had its own servers, storage, and security. Modern data centers are built on disaggregated infrastructure where resources are shared across multiple applications.

This requires new infrastructure design skills to ensure application performance remains high as the underlying technology is being shared across a broad set of applications. And it requires high-level domain knowledge of network, storage, servers, virtualization, and other infrastructure.

Data-center architect

The challenging job of data-center architect requires specific knowledge of the physical data center--an understanding of power, cooling, real estate, cost structure, and other factors essential to designing data centers. Architects help determine the layout of the facility as well as its physical security. The internal design involving racks, flooring and wiring is also part of this role. If done poorly, the job can have an enormous negative impact on the workflows of the technical staff.

Cloud management

There is no single cloud provider, and an emerging and continually evolving enterprise role is selecting and managing cloud services--private, public and hybrid. The attributes of cloud providers vary, with some being strong in specific regions while others may be better suited than competitors to provide specific services, for example. In some cases, third-party cloud services are inappropriate, making private cloud the best answer, as is often the case when strict data privacy is called for.

Cloud services need to be constantly monitored and optimized to ensure businesses are not overspending in some areas and underspending in others. At the same time, cost optimization cannot be allowed to result in performance issues. This role requires the skills to properly evaluate cloud offerings and provide ongoing management.

AI and ML

Data volumes are now massive and getting larger by the day, and with the rise of edge computing, more data will reside in more places. Artificial intelligence and machine learning are required to facilitate effective data management. There's a wide range of jobs in this area across the spectrum of the AI lifecycle, including training AI systems, modeling, programming and providing human-in-the-loop participation to ensure AI goals are being met.

Data analytics

The future data center will be driven by analyzing massive amounts of data. Expect this trend to continue as more data is being generated by IoT endpoints, video systems, robots--almost everything we do. Data-center operations teams will make critical decisions based on the analysis of this data. Businesses today have a shortage of people with analytic skills, particularly those who understand how to use AI/ML to accelerate the analysis.

Software skills

Many IT engineers, particularly those who work with network infrastructure, are hardware centric. Sure, they may know how to hunt and peck on a command-line interface, but that's not really a software skill. Most network engineers have never executed even basic software functions like making an API call. Using APIs can make many tasks much easier than trying to write a script to parse a CLI.

Not all network engineers need to become programmers--although those who want to should focus on languages such as Python and Ruby--but all should become software power-users and understand how to use APIs and SDKs to perform administrative tasks. All modern network infrastructure has been designed to be managed through APIs, many of them cloud based. The days of being a CLI jockey are over, and an unwillingness to admit it is the biggest threat to today's data-center engineers.
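For engineers who have never made an API call, the gap is smaller than it looks. Here is a hedged sketch using only the Python standard library; the endpoint path and bearer token are invented, not any specific vendor's API:

```python
import urllib.request

def list_interfaces(host, token):
    """Build a REST request of the kind a modern network OS exposes.

    The /api/v1/interfaces path and bearer-token header are
    illustrative; consult your platform's API reference for the
    real endpoints.
    """
    return urllib.request.Request(
        f"https://{host}/api/v1/interfaces",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
    )

# The caller would send it with urllib.request.urlopen(req) and parse
# the JSON response -- no screen-scraping of CLI output required.
req = list_interfaces("switch1.example.com", "demo-token")
```

Compare this with writing a script to parse `show interfaces` text: the API returns structured data directly, which is the point the article is making.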

Data-center security

There are multiple avenues for jobs in data-center security, given that this discipline refers to both physical and cyber security. Data centers house sensitive and proprietary data, and breaches can have disastrous consequences for an organization. Physical security was once done with badge readers and keypads, but there has been a wealth of innovation, including AI-enabled cameras, fingerprint scanners, iris readers and facial-recognition systems. This promises to be an exciting area to work in over the next decade.   

Cyber security has also evolved as security-information and event-management tools transition to ML-based systems that enable security professionals to see things they never could before. Also, many advanced organizations are adopting zero-trust models to isolate application traffic from other systems. Through the use of microsegmentation, secure zones can be created, minimizing the "blast radius" of a breach.

Data-center networking

The role of the network in the data center has changed significantly over the past decade. The traditional multi-tier architectures that were optimized for North-South traffic flows have shifted to leaf-spine networks that are designed for higher volumes of East-West traffic. Also, software-defined networking (SDN) systems are being used to provision virtual-fabric overlays of the physical underlay. This brings greater automation, traffic visibility, and cost effectiveness to the data-center network.

Network engineers who work in data centers need to become familiar with new concepts associated with network fabrics such as Linux-based operating systems, open-source network platforms, VxLAN tunnels and Ethernet VPNs. These all increase the scalability, elasticity and resiliency of the network while simplifying network operations. Also, most data-center platforms are now open by design, making vendor interoperability much easier and breaking the lock-in customers experienced in the past.

Another aspect of data center networking that's changed is cloud connectivity. Historically, network engineers were concerned with the network inside the data center, which is a highly controlled environment.

The rise of cloud and edge computing dictates that the network extend outside the physical confines of customer premises, across the wide area to the cloud provider. It's imperative that the network function as if it is a single, continuous fabric across all cloud locations. There are a number of ways to do this, including SD-WAN, SASE and direct cloud connects.

Jobs outside the data center

What, if any, are the jobs for data center professionals if they want to transition out of that environment but make use of their current skills? Unfortunately, those skills don't translate well. You don't see many mainframe engineers or PBX administrators around anymore.

However, while the future lies with the jobs outlined here, it will take a long time for legacy data centers to transition. After all, businesses do often adopt an "if it ain't broke, don't fix it" mentality when it comes to mission-critical systems. So for those unable or unwilling to reskill, it may be necessary to seek employers in verticals that tend to be on the slower end of technology adoption--state and local government, regional banks, and specialty retail are some examples.

The future of data centers lies in distributed clouds, and that changes the skill sets needed to run them. Data centers are certainly not going away, but they will look much different in the future, and that should be exciting for all.

https://www.networkworld.com/

Wednesday, 10 February 2021

Top metrics for effective multicloud management

When it comes to effectively managing a multicloud environment, there are a ton of network and application metrics that enterprise customers should be watching.

Among enterprises, the trend is toward multicloud environments, which can include workloads running on-premises and in public clouds run by multiple cloud providers such as AWS, Microsoft Azure, IBM/Red Hat, Google Cloud Platform and others. Gartner predicts by 2021, more than 75% of midsize and large organizations will have adopted some form of a multicloud and/or hybrid IT strategy. Likewise, IDC predicts that by 2022, more than 90% of enterprises worldwide will be relying on a mix of on-premises/dedicated private clouds, multiple public clouds, and legacy platforms to meet their infrastructure needs.

"As enterprises increasingly embrace multicloud, they must consider what KPIs [key performance indicators] will best measure their success in managing multicloud environments," said Briana Frank, director of product management with IBM Cloud.

"There's a variety of KPIs that can help evaluate success, including financial metrics, such as cost, return on investment, and rate of change in the reduction of total cost of ownership," Frank said. "However, enterprises should go beyond financial metrics alone and also consider measures of other critical causes for concern, such as downtime caused by outages and security breaches," Frank added. 

Useful KPIs in a multicloud setting include those that measure costs and billing, network and application performance, said Roy Ritthaler, vice president of product marketing in VMware's cloud management business unit.

Financial KPIs

Optimizing multicloud costs and eliminating waste are key goals for IT and lines of business, and metrics should include budget tracking and detailed spend analysis, Ritthaler said.

"They should also provide capabilities to uncover hidden costs, flag anomalies, reallocate cloud spend for showback and chargeback, and provide proactive recommendations to purchase and exchange reservations and savings plans," Ritthaler said.

Common multicloud KPIs include looking at the cost of all untagged resources, such as databases that might be quietly consuming resources. Tagged resources can include myriad items such as the owner of a resource, the environment it's operating in, and the project name, for example. The idea is to most effectively identify, manage, and support resources.

Other financial KPIs include a look at the percentage of infrastructure running on demand and the percentage of total bill charged back, Ritthaler said.
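Two of those financial KPIs—cost of untagged resources and share of on-demand spend—can be computed directly from a billing export. The record fields below are invented for illustration; real exports differ by provider:

```python
def financial_kpis(resources):
    """Compute two common multicloud cost KPIs from a billing export.

    `resources` is a list of dicts with illustrative fields (cost,
    tags, pricing model); real provider exports vary.
    """
    total = sum(r["cost"] for r in resources)
    untagged = sum(r["cost"] for r in resources if not r.get("tags"))
    on_demand = sum(r["cost"] for r in resources
                    if r.get("pricing") == "on-demand")
    return {
        "untagged_cost": untagged,
        "pct_on_demand": 100 * on_demand / total,
    }

bill = [
    {"cost": 100.0, "tags": {"owner": "data-team"}, "pricing": "reserved"},
    {"cost": 60.0, "tags": {}, "pricing": "on-demand"},  # untagged database
    {"cost": 40.0, "tags": {"owner": "web"}, "pricing": "on-demand"},
]
kpis = financial_kpis(bill)
```

A high untagged-cost number is a direct measure of how much spend cannot be allocated for showback or chargeback, which is why it tops many KPI lists.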

Security and network KPIs

Visualizing the security posture of the connectivity in a multicloud network is absolutely necessary, as there are too many components to consider individually, Ritthaler said.

"Identifying bad actors or bad security posture is handled in a single tool for monitoring and guarding," Ritthaler said. "Capabilities include monitoring traffic to detect vulnerabilities and to design app-centric security and generating recommended firewall rules to implement application security."

Some common KPIs include measuring security incidents per month by team, the number of security lapses, and the time to remediate security violations measured in hours, Ritthaler said.
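The time-to-remediate KPI in particular is easy to compute once violations carry open and resolve timestamps. A minimal sketch, with invented records:

```python
from datetime import datetime

def remediation_hours(violations):
    """Average time-to-remediate, in hours, across resolved violations."""
    durations = [(v["resolved"] - v["opened"]).total_seconds() / 3600
                 for v in violations if v.get("resolved")]
    return sum(durations) / len(durations)

violations = [
    {"opened": datetime(2021, 2, 1, 9), "resolved": datetime(2021, 2, 1, 13)},
    {"opened": datetime(2021, 2, 2, 9), "resolved": datetime(2021, 2, 2, 11)},
]
mttr = remediation_hours(violations)  # mean of 4h and 2h
```

Tracked per team per month, a number like this makes the "security incidents per month by team" KPI actionable rather than merely descriptive.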

Security is critical, but so is getting a handle on access policies, managing QoS, and providing consistent measuring capabilities across such a diverse group of systems, said Nabil Bukhari, chief technology officer with Extreme Networks. "Applications are the star of the multicloud show, but getting a handle on network performance metrics – like latency and packet loss – is important too," Bukhari said.

Additional network-related KPIs include measuring response time, bandwidth usage and throughput.

Network monitoring must include end-to-end network visibility across physical and virtual environments, and it's built from flow-based traffic monitoring using NetFlow, sFlow and SNMP device monitoring, Ritthaler said.

Application performance KPIs

On the applications side, each application will have its own performance metrics to monitor, and it's important to have tools that can tie these applications to the infrastructure that they're running on, to deliver an end-to-end picture of the infrastructure.

"Many organizations already have multiple application performance monitoring tools. However, these tools do not provide the end-to-end visibility that you need to troubleshoot over different teams," Ritthaler said. "Being able to consolidate those tools into a single solution brings teams together when problems occur."

Ritthaler said some common application KPIs include user experience measurements from APM (application performance management) packages; configured versus used resource consumption (such as CPU and memory); response times; and connection counts.

"Applications tend to spread out over multiple clouds; the path of least resistance will typically be taken. Documentation will fall behind, and this is where automatic discovery is necessary," Ritthaler said. "Application discovery works over the multicloud architecture to make organizations aware of where their applications are running, who they are talking to, what application dependencies there are, and how they are secured."

Compliance KPIs

Compliance is another area that requires management attention, experts say.  

"Compliance KPIs require continuously inspecting cloud-resource configurations and benchmarking them against cloud, industry and custom security and compliance standards. It should track compliance score, violations, and resolution progress," Ritthaler said. 

Customers need to safeguard the health of hybrid and public cloud services, using logs and metrics to proactively monitor and troubleshoot issues with native cloud services as they occur and to suggest remediation steps, Ritthaler said.

https://www.networkworld.com/

Thursday, 21 January 2021

Scientists count elephants from space with satellites and computer smarts

People like to talk about landmarks that can be "seen from space," from the pyramids to the Great Wall of China. But what about something much smaller, like elephants? They can be spotted, too, with the help of satellites and an algorithm trained to look for them.

A team led by researchers with the University of Oxford and the University of Bath in the UK developed a method for counting African elephants using imagery from Maxar satellites, opening up a new way to monitor vulnerable and endangered animals.

"For the first time, scientists have successfully used satellite cameras coupled with deep learning to count animals in complex geographical landscapes," said the University of Bath in a statement Tuesday.

The satellite images could offer an effective alternative to surveillance done by humans in aircraft, which can be an expensive and challenging way of counting elephants. 

The space method has "comparable accuracy to human detection capabilities," according to a Maxar statement. Satellites can also easily cover a tremendous amount of ground. 

The research team published a paper on the elephant detection work in the journal Remote Sensing in Ecology and Conservation in late December.  

Researchers have used satellites for wildlife monitoring projects before, as when NASA located a secret penguin colony. Satellites have been used to collect data on whales, which are fairly easy to spot against blue water. What makes the elephant project so innovative is that the method can pick out elephants from a diverse landscape of grass and woodlands.   
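The detection approach can be pictured as a detector sliding over image tiles and counting where it fires. The toy below substitutes a brightness threshold for the study's trained CNN, and the 4x4 "scene" is invented, but the windowed-counting structure is the same:

```python
def count_detections(image, patch=2, threshold=0.8):
    """Slide a window over a 2-D image and count patches the detector fires on.

    The real study ran a deep learning model on Maxar imagery; the
    mean-brightness test here is only a stand-in for that detector.
    """
    rows, cols = len(image), len(image[0])
    count = 0
    for r in range(0, rows, patch):
        for c in range(0, cols, patch):
            vals = [image[r + dr][c + dc]
                    for dr in range(patch) for dc in range(patch)]
            if sum(vals) / len(vals) > threshold:
                count += 1
    return count

# 4x4 toy scene: one bright 2x2 blob (an "elephant") against grass.
scene = [
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.9, 0.9],
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]
n = count_detections(scene)
```

The hard part the researchers solved is exactly what this toy dodges: making the per-patch decision reliable against grass, shadow, and woodland rather than a uniform background.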

There are an estimated 40,000 to 50,000 African elephants left in the wild and they are listed as "vulnerable" on the IUCN Red List of Threatened Species. The population is under pressure from habitat loss and poaching.

"Accurate monitoring is essential if we're to save the species," said University of Bath computer scientist Olga Isupova, creator of the algorithm that detects the elephants. "We need to know where the animals are and how many there are." 

The team hopes the system will be adaptable to smaller animals as satellite resolution continues to improve. It might not be ready to track mice just yet, but elephants are an excellent start.

https://www.cnet.com/