Sunday, 13 December 2020

What is neuromorphic computing?

As the name suggests, neuromorphic computing uses a model that's inspired by the workings of the brain.

The brain makes a really appealing model for computing: unlike most supercomputers, which fill rooms, the brain is compact, fitting neatly in something the size of, well... your head. 

Brains also need far less energy than most supercomputers: your brain uses about 20 watts, whereas the Fugaku supercomputer needs 28 megawatts -- or to put it another way, a brain needs about 0.00007% of Fugaku's power supply. While supercomputers need elaborate cooling systems, the brain sits in a bony housing that keeps it neatly at 37°C. 

True, supercomputers make specific calculations at great speed, but the brain wins on adaptability. It can write poetry, pick a familiar face out of a crowd in a flash, drive a car, learn a new language, take good decisions and bad, and so much more. And with traditional models of computing struggling, harnessing techniques used by our brains could be the key to vastly more powerful computers in the future.

Why do we need neuromorphic systems?

Most hardware today is based on the von Neumann architecture, which separates out memory and computing. Because von Neumann chips have to shuttle information back and forth between the memory and CPU, they waste time (computations are held back by the speed of the bus between the compute and memory) and energy -- a problem known as the von Neumann bottleneck.

By cramming more transistors onto these von Neumann processors, chipmakers have for a long time been able to keep adding to the amount of computing power on a chip, following Moore's Law. But the difficulty of shrinking transistors any further, along with their energy requirements and the heat they throw out, means that without a change in chip fundamentals, that progress won't continue for much longer.

As time goes on, von Neumann architectures will make it harder and harder to deliver the increases in compute power that we need.

To keep up, a new type of non-von Neumann architecture will be needed. Both quantum computing and neuromorphic systems have been claimed as the solution, and it's neuromorphic, brain-inspired computing that's likely to be commercialised sooner. 

As well as potentially overcoming the von Neumann bottleneck, a neuromorphic computer could channel the brain's workings to address other problems. While von Neumann systems are largely serial, brains use massively parallel computing. Brains are also more fault-tolerant than computers -- both advantages researchers are hoping to model within neuromorphic systems.

First, to understand neuromorphic technology, it makes sense to take a quick look at how the brain works. 

Messages are carried to and from the brain via neurons, a type of nerve cell. If you step on a pin, pain receptors in the skin of your foot pick up the damage and trigger something known as an action potential -- basically, a signal to activate -- in the neuron that's connected to the foot. The action potential causes the neuron to release chemicals across a gap called a synapse, and the signal is passed on from neuron to neuron in this way until the message reaches the brain. Your brain then registers the pain, at which point messages are sent from neuron to neuron until the signal reaches your leg muscles -- and you move your foot.

An action potential can be triggered either by lots of inputs arriving at once (spatial) or by input that builds up over time (temporal). These mechanisms, plus the brain's huge interconnectivity -- a single neuron may connect to 10,000 others through its synapses -- mean the brain can transfer information quickly and efficiently.

Neuromorphic computing models the way the brain works through spiking neural networks. Conventional computing is based on transistors that are either on or off, one or zero. Spiking neural networks can convey information in the same temporal and spatial ways the brain does, and so can produce a much richer range of outputs than a simple one or zero. Neuromorphic systems can be either digital or analogue, with the role of synapses played by either software or memristors.
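
To make the temporal side of that concrete, here is a minimal leaky integrate-and-fire neuron in plain Python. It is an illustrative sketch only, not the neuron model of any particular neuromorphic chip, and the constants and input pattern are invented.

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- an illustrative sketch,
# not any specific chip's neuron model. Constants are arbitrary.

def lif_neuron(input_currents, threshold=1.0, leak=0.9):
    """Integrate inputs over time; emit a spike (1) when the membrane
    potential crosses the threshold, then reset. Otherwise emit 0."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current   # temporal integration with leak
        if potential >= threshold:
            spikes.append(1)                     # information is carried in spike timing
            potential = 0.0                      # reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak input repeated over time can eventually trigger a spike (temporal
# summation), just as many simultaneous inputs could (spatial summation).
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # -> [0, 0, 0, 1, 0, 0, 1]
```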

Memristors could also come in handy in modelling another useful element of the brain: synapses' ability to store information as well as transmit it. Memristors can store a range of values, rather than just the traditional one and zero, allowing them to mimic the way the strength of the connection between two neurons can vary. Changing those weights in artificial synapses is one way of allowing these brain-based systems to learn.
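
As a rough illustration of that idea, the sketch below treats an artificial synapse as a stored, continuously variable weight that is nudged up or down depending on activity. The update rule and numbers are invented for illustration and are not a model of any real memristor device.

```python
# Illustrative only: an artificial synapse as a bounded analogue weight.
# A memristor-like element stores a continuum of values, not just 0 or 1,
# and learning nudges the stored value up or down.

def update_weight(weight, pre_spike, post_spike, rate=0.05):
    """Strengthen the connection when both sides fire together,
    weaken it slightly otherwise (a crude Hebbian-style rule)."""
    if pre_spike and post_spike:
        weight += rate * (1.0 - weight)   # potentiation, bounded at 1.0
    elif pre_spike or post_spike:
        weight -= rate * weight           # depression, bounded at 0.0
    return weight

w = 0.5
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 0), (1, 1)]:
    w = update_weight(w, pre, post)
print(round(w, 3))   # the weight drifts as a continuous value between 0 and 1
```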

Along with memristive technologies, including phase change memory, resistive RAM, spin-transfer torque magnetic RAM, and conductive bridge RAM, researchers are also looking for other new ways to model the brain's synapse, such as using quantum dots and graphene.

What uses could neuromorphic systems be put to?

For compute-heavy tasks, edge devices like smartphones currently have to hand off processing to a cloud-based system, which processes the query and feeds the answer back to the device. With neuromorphic systems, that query wouldn't have to be shunted back and forth; it could be conducted within the device itself. 

But perhaps the biggest driving force for investments in neuromorphic computing is the promise it holds for AI.

Current generation AI tends to be heavily rules-based, trained on datasets until it learns to generate a particular outcome. But that's not how the human brain works: our grey matter is much more comfortable with ambiguity and flexibility.

It's hoped that the next generation of artificial intelligence could deal with a few more brain-like problems, including constraint satisfaction, where a system has to find the optimum solution to a problem with a lot of restrictions. 
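
To give a sense of the problem shape, here is a tiny, conventional backtracking solver for a constraint-satisfaction problem. It is only a sketch of what "a problem with a lot of restrictions" means; the variables and constraint are invented, and neuromorphic hardware targets far larger instances than this.

```python
# A toy constraint-satisfaction problem solved by conventional backtracking.
# Purely illustrative of the problem class discussed in the text.

def solve(variables, domains, constraint, assignment=None):
    """Assign a value to every variable so that `constraint` holds for every
    pair of assigned variables; return None if no assignment exists."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(constraint(var, value, other, assignment[other])
               for other in assignment):
            result = solve(variables, domains, constraint,
                           {**assignment, var: value})
            if result is not None:
                return result
    return None

# Example: colour three mutually adjacent regions so that neighbours differ.
regions = ["A", "B", "C"]
colours = {r: ["red", "green", "blue"] for r in regions}
print(solve(regions, colours, lambda v1, c1, v2, c2: c1 != c2))
```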

Neuromorphic systems are also likely to help develop better AIs because they're better suited to other types of problems, such as probabilistic computing, where systems have to cope with noisy and uncertain data. Other capabilities, such as causality and non-linear thinking, are relatively immature in neuromorphic computing systems, but once they're more established they could vastly expand the uses AIs can be put to.

Are there neuromorphic computer systems available today?

Yep, academics, startups and some of tech's big names are already making and using neuromorphic systems.

Intel has a neuromorphic chip, called Loihi, and has combined 64 of them into an 8-million-neuron system called Pohoiki Beach (it's expecting that to reach 100 million neurons in the near future). At the moment, Loihi chips are being used by researchers, including at the Telluride Neuromorphic Cognition Engineering Workshop, where they're being used in the creation of artificial skin and in the development of powered prosthetic limbs.

IBM also has its own neuromorphic system, TrueNorth, launched in 2014 and last seen with 64 million neurons and 16 billion synapses. While IBM has been comparatively quiet on how TrueNorth is developing, it did recently announce a partnership with the US Air Force Research Laboratory to create a 'neuromorphic supercomputer' known as Blue Raven. While the lab is still exploring uses for the technology, one option could be creating smarter, lighter, less energy-demanding drones.

Neuromorphic computing started off in a research lab (Carver Mead's at Caltech), and some of the best-known systems are still found in academic institutions. The EU-funded Human Brain Project (HBP), a 10-year project that's been running since 2013, was set up to advance understanding of the brain through six areas of research, including neuromorphic computing.

The HBP has led to two major neuromorphic initiatives, SpiNNaker and BrainScaleS. In 2018, a million-core SpiNNaker system went live, at the time the largest neuromorphic supercomputer, and the University of Manchester hopes eventually to scale it up to model a billion neurons. BrainScaleS has similar aims to SpiNNaker, and its architecture is now on its second generation, BrainScaleS-2.

What are the challenges to using neuromorphic systems?

Shifting from von Neumann to neuromorphic computing isn't going to come without substantial challenges.

Computing norms -- how data is encoded and processed, for example -- have all grown up around the von Neumann model, and so will need to be reworked for a world where neuromorphic computing is more common. One example is dealing with visual input: conventional systems understand it as a series of individual frames, while a neuromorphic processor would encode such information as changes in a visual field over time. 
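
The difference is easy to show in a sketch: instead of re-transmitting every frame, a change-based (event-style) encoding records only which pixels changed and by how much. The pixel values and threshold below are invented for illustration.

```python
# Illustrative contrast between frame-based and change-based (event-style)
# encoding of visual input. Pixel values are invented for the example.

def frame_to_events(previous_frame, current_frame, threshold=10):
    """Emit an event only for pixels whose brightness changed by at least
    `threshold`, instead of re-transmitting the whole frame."""
    events = []
    for i, (old, new) in enumerate(zip(previous_frame, current_frame)):
        change = new - old
        if abs(change) >= threshold:
            events.append((i, change))    # (pixel index, signed change)
    return events

frame_t0 = [100, 100, 100, 100]
frame_t1 = [100, 140, 100, 60]            # only two pixels actually changed
print(frame_to_events(frame_t0, frame_t1))  # -> [(1, 40), (3, -40)]
```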

Programming languages will also need to be rewritten from the ground up. There are challenges on the hardware side as well: new generations of memory, storage and sensor tech will need to be created to take full advantage of neuromorphic devices.

Neuromorphic technology could even require a fundamental change in how hardware and software are developed, because of how tightly different elements, such as memory and processing, are integrated in neuromorphic hardware.

Do we know enough about the brain to start making brain-like computers?

One side effect of the increasing momentum behind neuromorphic computing is likely to be improvements in neuroscience: as researchers try to recreate our grey matter in electronics, they may uncover insights into the brain's inner workings that help biologists understand it better.

And similarly, the more we learn about the human brain, the more avenues are likely to open up for neuromorphic computing researchers. For example, glial cells -- the brain's support cells -- don't figure highly in most neuromorphic designs, but as more information comes to light about how these cells are involved in information processing, computer scientists are starting to examine whether they should figure in neuromorphic designs too.

And of course, one of the more interesting questions about the increasingly sophisticated work to model the human brain in silicon is whether researchers may eventually end up recreating -- or creating -- consciousness in machines.

https://www.zdnet.com/

Friday, 4 December 2020

Intel Machine Programming Tool Detects Bugs in Code

Intel unveiled ControlFlag – a machine programming research system that can autonomously detect errors in code. Even in its infancy, this novel, self-supervised system shows promise as a powerful productivity tool to assist software developers with the labor-intensive task of debugging. In preliminary tests, ControlFlag trained on over 1 billion unlabeled lines of production-quality code and learned to detect novel defects.

In a world increasingly run by software, developers continue to spend a disproportionate amount of time fixing bugs rather than coding. It’s estimated that of the $1.25 trillion that software development costs the IT industry every year, 50 percent is spent debugging code.

Debugging is expected to take an even bigger toll on developers and the industry at large. As we progress into an era of heterogeneous architectures — one defined by a mix of purpose-built processors to manage the massive sea of data available today — the software required to manage these systems becomes increasingly complex, creating a higher likelihood for bugs. In addition, it is becoming difficult to find software programmers who have the expertise to correctly, efficiently and securely program across diverse hardware, which introduces another opportunity for new and harder-to-spot errors in code.

When fully realized, ControlFlag could help alleviate this challenge by automating the tedious parts of software development, such as testing, monitoring and debugging. This would not only enable developers to do their jobs more efficiently and free up more time for creativity, but it would also address one of the biggest price tags in software development today.

How It Works: ControlFlag’s bug detection capabilities are enabled by machine programming, a fusion of machine learning, formal methods, programming languages, compilers and computer systems.

ControlFlag specifically operates through a capability known as anomaly detection. As humans existing in the natural world, we learn through observation to consider certain patterns “normal”. Similarly, ControlFlag learns from verified examples to detect normal coding patterns, identifying anomalies in code that are likely to cause a bug. Moreover, ControlFlag can detect these anomalies regardless of programming language.
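
Intel hasn't published ControlFlag's internals here, but the general idea of anomaly detection over coding patterns can be sketched very simply: learn how often each pattern appears in a large, trusted corpus, then flag patterns that are statistically rare. The patterns, counts and threshold below are invented for illustration.

```python
# A toy illustration of anomaly detection over coding patterns -- not
# Intel's ControlFlag, just the general idea: patterns that are rare
# relative to a large trusted corpus get flagged for review.

from collections import Counter

# Pretend these pattern strings were mined from verified production code.
corpus_patterns = (
    ["if (x == 0)"] * 500 +
    ["if (x != 0)"] * 480 +
    ["if (x = 0)"] * 2          # a suspicious assignment-in-condition
)

counts = Counter(corpus_patterns)
total = sum(counts.values())

def looks_anomalous(pattern, min_frequency=0.01):
    """Flag a pattern whose relative frequency in the corpus falls below
    `min_frequency`, i.e. it deviates from what is 'normal'."""
    return counts.get(pattern, 0) / total < min_frequency

for candidate in ["if (x == 0)", "if (x = 0)"]:
    print(candidate, "->", "anomalous" if looks_anomalous(candidate) else "typical")
```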

A key benefit of ControlFlag’s unsupervised approach to pattern recognition is that it can intrinsically learn to adapt to a developer’s style. Given limited input on the control tools the program should evaluate, ControlFlag can identify stylistic variations in programming language, similar to the way readers recognize the difference between full words and contractions in English.

The tool learns to identify and tag these stylistic choices and can customize error identification and solution recommendations based on its insights, which minimizes the chance that ControlFlag flags as an error code that is simply a stylistic deviation between two developer teams.

Intel has even started evaluating the use of ControlFlag internally to identify bugs in its own software and firmware product development. It is a key element of Intel’s Rapid Analysis for Developers project, which aims to accelerate developer velocity by providing expert assistance.

https://newsroom.intel.com/

Saturday, 21 November 2020

With COVID-19 hanging on, migration to the cloud accelerates

With the COVID-19 pandemic showing no signs of abating, migration to the cloud is expected to accelerate as enterprises choose to let someone else worry about their server gear.

In its global IT outlook for 2021 and beyond, IDC predicts the continued migration of enterprise IT equipment out of on-premises data centers and into data centers operated by cloud service providers (such as AWS and Microsoft) and colocation specialists (such as Equinix and Digital Realty).

The research firm expects that by the end of 2021, 80% of enterprises will put a mechanism in place to shift to cloud-centric infrastructure and applications twice as fast as before the pandemic. CIOs must accelerate the transition to a cloud-centric IT model to maintain competitive parity and to make the organization more digitally resilient, the firm said.

"The COVID-19 pandemic highlighted that the ability to rapidly adapt and respond to unplanned/foreseen business disruptions will be a clearer determiner of success in our increasingly digitalized economy," said Rick Villars, IDC group vice president for worldwide research, in a statement. "A large percentage of a future enterprise's revenue depends upon the responsiveness, scalability, and resiliency of its infrastructure, applications, and data resources."

In this new normal, the most important thing enterprises can do is seek opportunities to leverage new technologies to take advantage of competitive/industry disruptions and extend capabilities for business acceleration.

Additional IDC predictions include:

Edge becomes a top priority: Reactions to changed workforce and operations practices during the pandemic will be the dominant accelerators for 80% of edge-driven investments and business model changes in most industries through 2023.

The intelligent digital workspace: By 2023, 75% of global 2000 companies will commit to providing technical parity to a workforce that is hybrid by design rather than by circumstance, enabling them to work together separately and in real time.

The pandemic's IT legacy: Through 2023, coping with technical debt accumulated during the pandemic will shadow 70% of CIOs, causing financial stress, inertial drag on IT agility, and "forced march" migrations to the cloud.

Resiliency is central to the next normal: In 2022, enterprises focused on digital resiliency will adapt to disruption and extend services to respond to new conditions 50% faster than ones fixated on restoring existing business/IT resiliency levels.

A shift towards autonomous IT operations: Thanks to AI/ML advances in analytics, an emerging cloud ecosystem will be the underlying platform for all IT and business automation initiatives by 2023.

Opportunistic AI expansion: By 2023, one quarter of global 2000 companies will acquire at least one AI software start-up to ensure ownership of differentiated skills and IP out of competitive necessity.

Relationships are under review: By 2024, 80% of enterprises will overhaul relationships with suppliers, providers, and partners to better execute digital strategies.

Sustainability becomes a factor: By 2025, 90% of global 2000 companies will mandate reusable materials in IT hardware supply chains, carbon neutrality targets for providers' facilities, and lower energy use as prerequisites for doing business.

People still matter: Through 2023, half of enterprises' hybrid workforce and business automation efforts will be delayed or will fail outright due to underinvestment in building IT/Sec/DevOps teams with the right tools/skills. Enterprises will turn to new ways to find the talent they need.

https://www.networkworld.com/

Sunday, 1 November 2020

Machine learning in network management has promise, challenges

As part of the trend toward more automation and intelligence in enterprise networks, artificial intelligence and machine learning are increasingly in demand, because the ability to programmatically identify network problems and provide instant diagnoses of complex issues is a powerful one.

Applying AI and ML to network management can enable the consolidation of input from multiple management platforms for central analysis. Rather than IT staff manually combing through reports from diverse devices and applications, machine learning can make quick, automated diagnoses of problems.

Gartner senior director and analyst Josh Chessman laid out the problem for the IT worker that machine learning is designed to solve: “I’ve got all these monitoring tools, and they’re all telling me something’s wrong, but they’re not telling me where it is. The biggest strength with this stuff today is that it can identify ‘you’ve got 26 events from seven different tools, and they’re all about a network problem.’”
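
A heavily simplified sketch of that kind of correlation is shown below: alerts from different tools are grouped when they arrive close together in time and reference the same network element, so many raw events surface as one probable incident. The tool names, fields and time window are invented for illustration.

```python
# Toy event correlation in the spirit of ML-assisted network monitoring:
# cluster alerts that are close in time and mention the same device.
# All tool names, fields and thresholds are invented for illustration.

from collections import defaultdict

alerts = [
    {"tool": "netflow",   "device": "core-sw-1", "time": 100, "msg": "high latency"},
    {"tool": "snmp",      "device": "core-sw-1", "time": 103, "msg": "interface errors"},
    {"tool": "synthetic", "device": "core-sw-1", "time": 105, "msg": "probe timeout"},
    {"tool": "apm",       "device": "app-srv-9", "time": 400, "msg": "slow responses"},
]

def correlate(alerts, window=60):
    """Group alerts by device, then split each group wherever the gap
    between consecutive alerts exceeds `window` seconds."""
    by_device = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["time"]):
        by_device[alert["device"]].append(alert)

    incidents = []
    for device, items in by_device.items():
        current = [items[0]]
        for alert in items[1:]:
            if alert["time"] - current[-1]["time"] <= window:
                current.append(alert)
            else:
                incidents.append((device, current))
                current = [alert]
        incidents.append((device, current))
    return incidents

for device, grouped in correlate(alerts):
    tools = sorted({a["tool"] for a in grouped})
    print(f"{device}: {len(grouped)} alerts, tools={tools}")
```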

It’s difficult to say how rapidly enterprises are buying AI and ML systems, but analysts say adoption is in the early stages.

One sticking point is confusion about what, exactly, AI and ML mean. Those imagining AI as being able to effortlessly identify attempted intruders, and to analyze and optimize traffic flows will be disappointed. The use of the term AI to describe what’s really happening with new network management tools is something of an overstatement, according to Mark Leary, research director at IDC.

“Vendors, when they talk about their AI/ML capabilities, if you get an honest read from them, they’re talking about machine learning, not AI,” he said.

There isn’t a hard-and-fast definitional split between the two terms. Broadly, they both describe the same concept—algorithms that can read data from multiple sources and adjust their outputs accordingly. AI is most accurately applied to more robust expressions of that idea than to a system that can identify the source of a specific problem in an enterprise computing network, according to experts.

“We’re probably overusing the term AI, because some of these things, like predictive maintenance, have been in the field for a while now,” said Jagjeet Gill, a principal in Deloitte’s strategy practice.

Another sticking point for a lot of ML systems is cross-compatibility. Much of what’s on the market currently takes the form of a vendor adding a new feature to one of its existing products. That’s handy for all-Cisco shops, for example, but can be a problem in a multi-vendor environment. “A lot of vendors are adding AIops because it’s kind of a buzzword,” said Chessman. “It doesn’t give you a lot of visibility into products from other vendors.”

There are vendor-agnostic ML systems for network management out there—Moogsoft and BigPanda are two of the bigger names in the field—but it’s more common to find ML features bundled with specific vendors’ products. “So take Netscout. They’ve got some ML, and it does a good job, but it’s focused on Netscout [products],” Chessman said.

Regardless of the hurdles the technology has to overcome, ML is likely to make many IT professionals’ jobs a lot easier, according to Peter Suh, the head of Accenture’s North American network practice. “Having those types of tools and solutions is going to be good,” he said. “It’ll help you walk through what’s going on on the network at any given time.”

While ML is also a potential step toward full network automation, which might result in the loss of jobs for IT staff, that’s not likely to happen in the immediate future, according to Gartner’s Chessman. What’s more probable is that ML will help free up IT staff to work on more revenue-generating activities, rather than putting out fires, he said. “Full automation is still years and years away.”

https://www.networkworld.com/

Sunday, 4 October 2020

IBM to open-source space junk collision avoidance

Space is already a pretty messy place, with tens of thousands of manmade objects, the majority of them unpowered, hurtling around the planet. As space exploration ramps up on the heels of privatization and aided by miniaturization, the debris field is only going to grow.

That's a pretty big problem. So-called anthropogenic space objects (ASOs), the man-made kind, travel at speeds of up to 8,000 meters per second, meaning a collision involving even a tiny fragment and a satellite or crewed vehicle could be devastating.

All of this makes it extremely important for space agencies and private space companies to be able to anticipate the trajectories of manmade objects long before launch and to plan accordingly. Unfortunately, that's not very easy to do, and as the quantity of space junk increases, it's only going to get more difficult.

Enter the Space Situational Awareness (SSA) project, an open-source venture between IBM and Dr. Moriba Jah at the University of Texas at Austin to determine where ASOs are (orbit determination) and where they will be in the future (orbit prediction).

Some explanation is required here. Current methods for orbit prediction rely on physics-based models that in turn require extremely precise information about ASOs. The problem is that the location data available about ASOs comes from terrestrial-based sensors and tends to be imperfect. Factors like space weather further complicate the picture.

The idea behind SSA is that machine learning can create models that learn when physical models incorrectly predict an ASO's future location. Physics models, according to this strategy, are plenty good when it comes to orbital dynamics, but to maximize effectiveness they need to learn how and when they get it wrong and to account for that variability.

The data used for the project comes from United States Strategic Command (USSTRATCOM) via the space-track.org website. The team used the IBM Cloud Bare Metal Server with 16 Intel Xeon Processors, 120 GB RAM, and two Nvidia Tesla V100 GPUs (each with 16GB of RAM) to run the physical models to predict the orbits of all ASOs in low earth orbit and train ML models to learn the physics model error. As a result, the team was able to predict the future orbits of the ASOs. 
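
The SSA code itself isn't reproduced here, but the residual-learning idea can be sketched with standard tools: run the physics model, measure how far its predictions fall from the observed positions, and train a regression model to predict that error so it can be subtracted from future physics predictions. The features, numbers and model choice below are synthetic and purely illustrative.

```python
# Sketch of learning a physics model's error rather than the orbit itself.
# All data here is synthetic; the real project uses USSTRATCOM observations.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Invented features describing each propagation, e.g. time since the last
# observation and a crude space-weather index.
X = rng.uniform(0, 1, size=(500, 2))

# Pretend the physics model's position error grows with both features.
physics_error_km = 5.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

model = GradientBoostingRegressor().fit(X, physics_error_km)

physics_prediction_km = 7000.0                 # synthetic physics-model output
predicted_error_km = model.predict([[0.8, 0.5]])[0]
corrected_prediction_km = physics_prediction_km - predicted_error_km
print(round(predicted_error_km, 2), round(corrected_prediction_km, 2))
```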

https://www.zdnet.com/

Monday, 28 September 2020

Microsoft's Underwater Data Center Makes Environmental Strides

At the beginning of this summer, with no fanfare and little publicity, Redmond, Wash.-based Microsoft hauled its shipping container-sized underwater data center, consisting of 864 servers at a depth of 117 feet, from the seabed off the coast of the Orkney Islands to the northeast of Scotland. 

Microsoft’s Project Natick

The experiment, called Project Natick, aimed to find out whether it would be economical and better for the environment to place data centers under water. The first conclusions from the project are starting to trickle in and they appear to be positive.

The retrieval of the center represented the final phase of a years-long effort, which was itself Phase 2 of a wider project that started in 2015 off the west coast of America where the company sank a data center to the seabed for 105 days to find out if computing was possible underwater given the extreme environment.

The team hypothesized that a sealed container on the ocean floor could provide ways to improve the overall reliability of data centers. On land, corrosion from oxygen and humidity, temperature fluctuations and bumps and jostles from people who replace broken components are all variables that can contribute to equipment failure.

The Northern Isles experiment, according to the company, has confirmed its hypothesis, which could have major implications for data centers on land.

Lessons learned from Project Natick are also informing Microsoft’s data center sustainability strategy around energy, waste and water, said Ben Cutler, a project manager in Microsoft’s Special Projects research group who leads Project Natick.

What is more, he added, the proven reliability of underwater data centers has prompted discussions with a Microsoft team in Azure that’s looking to serve customers who need to deploy and operate tactical and critical data centers anywhere in the world. “We are populating the globe with edge devices, large and small,” said William Chappell, vice president of mission systems for Azure, in a statement about the project. “To learn how to make data centers reliable enough not to need human touch is a dream of ours.”

The Underwater Advantage

Even without really understanding the science behind data centers, it is easy to see the attraction. Apart from the fact that more than half the world’s population lives within 120 miles of the coast, the temperature of the water, which should keep the centers cool, makes for energy-efficient data centers that can use heat-exchange plumbing in much the same way submarines do.

There is also the advantage of geography. As the location of data becomes increasingly important for regulators, placing data centers in coastal waters within geographical boundaries will be easier than finding sites on land and will help solve that problem. Placing them in the waters off big cities will also make information retrieval and the use of the web and video streaming quicker. “We are now at the point of trying to harness what we have done as opposed to feeling the need to go and prove out some more,” Cutler said in a statement. “We have done what we need to do. Natick is a key building block for the company to use if it is appropriate.”

Early conversations are already taking place about the potential future of Project Natick centered on how to scale up underwater data centers to power the full suite of Microsoft Azure cloud services, which may require linking together a dozen or more vessels the size of the Northern Isles. But it is a major step forward and could see more centers under the sea.

According to an Energy Innovation paper earlier this year, servers and cooling systems on average account for the greatest shares of direct electricity use in data centers, at 43% of consumed power each, followed by storage drives and network devices.

Energy Innovation is a San Francisco-based nonpartisan energy and environmental policy firm. It delivers research and original analysis to policymakers to help them make informed choices on energy policy.

The substantial electricity use of data centers also gives rise to concerns over their carbon dioxide emissions. It is not yet possible to accurately estimate total CO2 emissions, due to a lack of data on the locations of the vast majority of global data centers and on their emissions intensities, according to EI, but the total is likely to be substantial. This is also not the first time companies have tried to make data centers more energy efficient.

Increasing Demand for Energy


Randy Cozzens is EVP and head of energy, utilities, and chemicals at New York City-based Capgemini North America. He points out that resolving data center energy use is a key issue for enterprises, given that data centers play an integral role in the evolving digital society, with increasing consumer demand and appetite for an always-on, no-latency internet experience.

The increase in demand over the past 20 years naturally led to an increase in data centers and the added energy they use to operate. However, there have been active initiatives implemented to make data centers more energy efficient, he said. These include energy monitoring software, power-saving modes, server cooling systems, and virtualization. In addition, the rapid shift for many organizations to move their data to the cloud will potentially reduce the world's overall energy emission from data centers. “Cloud data centers are typically built to use less energy than standard data centers and are incentivized to reduce energy outputs by operating on solar or wind power,” he said.

“As IT continues to prioritize sustainable initiatives within its data centers, and as pivots to the cloud increase across industry sectors, the amount of energy being used by the world's data centers has the potential to decrease and become less of a threat to the environment.”

Competitive Edge

The problem is only going to get worse, Akram TariqKhan of India-based ecommerce site YourLibaas told us: with the demand for cloud computing increasing at an exponential pace, data centers are going to be disastrous for the environment. “As a heavy user of cloud servers, I can share how we end up buying excessive capacity only because the fierce competition has ended up the industry offering dirt-cheap prices,” he said.

“This leads to a spiral effect with unused capacity leading to an increased negative impact on the environment. Amazon's spot instances offer unused capacity at cheaper prices for temporary projects attempting to resolve this issue.”

Data Center Advantage
Tina Nikolovska, founder of TeamStage, points to four ways that data centers, and the digital workplaces they enable, can help the environment, ways that come up every time this subject does. Globally, they can be summarized as reducing the carbon footprint, one of the critical arguments in favor of digital workplaces.

1. No Commuting Carbon Footprint
There are enormous amounts of money and gas spent on commuting — whether using public transportation or self-owned cars. With less commuting, less pollution gets released into the atmosphere. If we are speaking about the trend of digitalization of workspaces, we can significantly reduce vehicle-produced gas emissions.

2. Digitalized Data Means No Paper
The only logical trend to follow remote work is the no-paper policy. Keeping data in print is unnecessary and obsolete, with the expanding cloud storage market and improved cybersecurity. Again, the global effect of paper reduction can be huge for ecology.

3. Less General Waste
General waste, including the single-use items that offices get through in abundance, like coffee cups, water cups, paper towels, straws, etc., can be dramatically reduced by adopting the digital workspace. But it is more than general waste; it is also the quantities of work clothes, shoes, make-up, and grooming products that are consumed, with markets grossing billions of dollars on (often non-ethical) production. This is not to say that everyone will lock themselves behind their doors and reappear in a few years looking like cavemen (and women). 

4. Working From Home
Working from home makes another type of consumer, where less is spent on small, single-use items and pleasures, and more on long-term comfort items. Think about the millions of dollars spent on coffees and lunches, and how most people are now cooking their food and preparing home-made coffee. It seems that the trend is to invest in a better coffee machine (that lasts for years) and drink the coffee from a regular, washable cup, instead of a plastic one. Multiply that with millions of workers around the globe, and the result is significantly reduced waste.

Digital Workplace Data Centers
So why does any of this matter? Boston-based Brightcove’s CEO, Jeff Ray, believes that this could be a tipping point issue for technology. “We are living through a pivotal moment in history where the world is undergoing a rapid digital transformation. 2020 has become video’s evolutionary moment and streaming over OTT devices is one area where we will continue to see growth," he said. "Consumers and businesses — many of whom are facing economic challenges due to the pandemic — are seeing the value of cutting the cable cord and subscribing to streaming services instead." That means data, and lots of it.

If video is increasingly important for companies, as it is the main way to connect with their audiences right now, then not having the capability to store the data it contains will hinder the development of digital workplaces. Without high-touch moments with clients, companies will have to be more creative and interact in new ways to build the future.

Even with the return of live sports and greater easing of restrictions in Q3, there is no going back to our previous consumption and working habits. This digital transformation will result in an evolution of how we do business and video will become an inherent part of workflows now and in the future.

"Forward thinking companies know the future of work involves a hybrid model," continued Ray. "Physical events will have to have a digital experience to reach attendees that are unable to attend in person. Hybrid events increase ROI and expand reach. It is no longer an either or, it is both. Companies have seen the vast opportunities ahead with virtual connection, there is no going back. If companies are not embracing video now, in the future they will have bigger problems than today. Don’t get left behind.”

https://www.cmswire.com/

Tuesday, 22 September 2020

Edge computing: The next generation of innovation

Like other hot new areas of enterprise tech, edge computing is a broad architectural concept rather than a specific set of solutions. Primarily, edge computing is applied to low-latency situations where compute power must be close to the action, whether that activity is industrial IoT robots flinging widgets or sensors continuously taking the temperature of vaccines in production. The research firm Frost & Sullivan predicts that by 2022, 90 percent of industrial enterprises will employ edge computing.

Edge computing is a form of distributed computing that extends beyond the data center mothership. When you think about it, how else should enterprises invest in the future? Yes, we know that a big chunk of that investment will go to the big public cloud providers – but hardware and software that enterprises own and operate isn’t going away. So why not physically distribute it where the business needs it most?

Augmenting the operational systems of a company’s business on location – where manufacturing or healthcare or logistical operations reside – using the awesome power of modern servers can deliver all kinds of business value. Typically, edge computing nodes collect gobs of data from instrumented operational systems, process it, and send only the results to the mothership, vastly reducing data transmission costs. Embedded in those results are opportunities for process improvement, supply chain optimization, predictive analytics, and more.
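
In code, the "send only the results" pattern is not exotic. Here is a sketch of an edge node that summarizes a batch of raw sensor readings locally and ships only the aggregate upstream; the field names, acceptable range and values are invented for illustration.

```python
# Illustrative edge-node pattern: process raw readings locally and send
# only a compact summary to the central system. Names and values invented.

import json
import statistics

def summarize_readings(readings_c):
    """Reduce a batch of raw temperature readings to the few numbers the
    central system actually needs."""
    return {
        "count": len(readings_c),
        "mean_c": round(statistics.mean(readings_c), 2),
        "max_c": max(readings_c),
        "out_of_range": sum(1 for r in readings_c if not 2.0 <= r <= 8.0),
    }

raw_batch = [4.1, 4.3, 4.0, 9.2, 4.2] * 1000          # thousands of raw samples...
payload = json.dumps(summarize_readings(raw_batch))    # ...one small upstream message
print(payload, f"({len(payload)} bytes instead of {len(raw_batch)} readings)")
```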

CIO, Computerworld, CSO, InfoWorld, and Network World have joined forces to examine edge computing from five different perspectives. These articles help demonstrate that this emerging, complex area is attracting some of the most intriguing new thinking and technology development today.

The many sides of the edge

Edge computing may be relatively new on the scene, but it’s already having a transformational impact. In “4 essential edge-computing use cases,” Network World’s Ann Bednarz unpacks four examples that highlight the immediate, practical benefits of edge computing, beginning with an activity about as old-school as it gets: freight train inspection. Automation via digital cameras and onsite image processing not only vastly reduces inspection time and cost, but also helps improve safety by enabling problems to be identified faster. Bednarz goes on to pinpoint edge computing benefits in the hotel, retail, and mining industries.

CIO contributing editor Stacy Collett trains her sights on the gulf between IT and those in OT (operational technology) who concern themselves with core, industry-specific systems – and how best to bridge that gap. Her article “Edge computing’s epic turf war” illustrates that improving communication between IT and OT, and in some cases forming hybrid IT/OT groups, can eliminate redundancies and spark creative new initiatives.

One frequent objection on the OT side of the house is that IoT and edge computing expose industrial systems to unprecedented risk of malicious attack. CSO contributing writer Bob Violino addresses that problem in “Securing the edge: 5 best practices.” One key recommendation is to implement zero trust security, which mandates persistent authentication and micro-segmentation, so a successful attack in one part of an organization can be isolated rather than spreading to critical systems.

Computerworld contributing writer Keith Shaw examines the role of 5G in “Edge computing and 5G give business apps a boost.” One of 5G’s big selling points is its low latency, a useful attribute for connecting IoT devices. But as IDC research director Dave McCarthy explains in the article, the reduction in latency won’t help you when you’re connecting to a far-flung data center. On the other hand, if you deploy “edge computing into the 5G network, it minimizes this physical distance, greatly improving response times,” he says.

In case you’re wondering, the hyperscale cloud providers aren’t taking this edge stuff lying down. In “Amazon, Google, and Microsoft take their clouds to the edge,” InfoWorld contributing editor Isaac Sacolick digs into the early-stage edge computing offerings now available from the big three, including mini-clouds deployed in various localities as well as their existing on-prem offerings (such as AWS Outposts or Azure Stack) that are fully managed by the provider. Sacolick writes that “the unique benefit public cloud edge computing offers is the ability to extend underlying cloud architecture and services.”

The crazy variety of edge computing offerings and use cases covers such a wide range, it begins to sound like, well, computing. As many have noted, the “big cloud” model is reminiscent of the old mainframe days, when customers tapped into centralized compute and storage through terminals rather than browsers. Edge computing recognizes that not everything can or should be centralized. And the inventive variations on that simple notion are playing a key role in shaping the next generation of computing.

https://www.networkworld.com/

Sunday, 30 August 2020

Open standards vs. open source: A basic explanation

What are open standards, exactly? You’ve probably heard the term thrown around, but why does it matter to your business? How does it relate to open source? What’s the difference?

Take a common example. Have you ever noticed that Wi-Fi seems to work the same with any router, phone or computer? We tend to take these types of standards for granted, but they bring huge benefits to our daily lives.

Imagine if there were no standards like Wi-Fi. Every business might have its own form of wireless technology. If your favorite coffee shop had a router made by Company X, and you owned a computer made by Company Y, you might have to find another coffee shop to check your email.

Even if each business had a functioning form of wireless internet, a lack of standards would make interoperability nearly impossible. Customers of every company would suffer.

Have you ever wondered how competing businesses all across the world somehow converge on one format for these things?

The answer is often open standards.

What are open standards?

An open standard is a standard that is freely available for adoption, implementation and updates. A few famous examples of open standards are XML, SQL and HTML.

Businesses within an industry share open standards because this allows them to bring huge value to both themselves and to customers. Standards are often jointly managed by a foundation of stakeholders. There are typically rules about what kind of adjustments or updates users can make, to ensure that the standard maintains interoperability and quality.

What is open source?

What is open source, then? The term may sound similar to open standards, but in reality it is fundamentally different.

At its core, open source code is created to be freely available, and most licenses allow for the redistribution and modification of the code by anyone, anywhere, with attribution. In many cases the license further dictates that any updates from contributors will also become free and open to the community. This allows a decentralized community of developers to collaborate on a project and jointly benefit from the resulting software.

How open standards and open source help prevent vendor lock-in

Both open source and open standards can help protect clients from vendor lock-in, but they do it in different ways.

Let’s start with an example of an open standard. A business might buy a PDF reader and editor from a vendor. Over time, the team could create a huge number of PDF documents. Maybe these documents become a valuable asset for the company. Since the PDF format is an open standard, the business would have no problem switching from one PDF software to another. There is no concern that it would be unable to access its documents. Even if the PDF reader software isn’t open source, the PDF format is an open standard. Everyone uses this format.

Now, let’s instead take a look at the benefits of open source. Imagine that a business had spent millions of dollars writing internal software code for a proprietary operating system. That business would no longer have the option of changing vendors. It would be stuck with that operating system, unless it wanted to make a significant investment re-writing that code to run on a different system.

Open source software could have prevented that issue. Because open source software does not belong to any particular business, clients are not locked in to any particular provider.

In both of these examples, the client would be able to avoid vendor lock-in. In one case this is because a piece of closed software followed a common open standard. In the other case, it is because the software itself belonged to an open source community.

While these are fundamentally different things, both help foster innovation while also providing more options to customers. 

https://www.ibm.com/

Friday, 21 August 2020

Importance of Software Engineering

Software engineering is the study and practice of engineering to build, design, develop, maintain, and retire software. There are different areas of software engineering, and it serves many functions throughout the application lifecycle. Effective software engineering requires software engineers to be educated in software engineering best practices, disciplined, and cognizant of how their company develops software, the role the software will fulfill, and how it will be maintained.

Software engineering is entering a new era, as CIOs and digital leaders now understand the importance of software engineering and the impact – both good and bad – it can have on the bottom line.

Vendors, IT staff, and even departments outside of IT need to be aware that software engineering is increasing in its impact – it is affecting almost all aspects of your daily business.

The Importance of Software Engineers

Software engineers of all kinds, full-time staff, vendors, contracted workers, or part-time workers, are important members of the IT community.

What do software engineers do? Software engineers apply the principles of software engineering to the design, development, maintenance, testing, and evaluation of software. There is much discussion about the degree of education and or certification that should be required for software engineers.

According to StackOverflow Survey 2018, software engineers are lifelong learners; almost 90% of all developers say they have taught themselves a new language, framework, or tool outside of their formal education.

Software engineers are well versed in the software development process, though they typically need input from IT leaders regarding software requirements and what the end result needs to be. Regardless of formal education, all software engineers should work within a specific set of best practices for software engineering so that others can do some of this work at the same time.

Software engineering almost always includes a vast amount of teamwork. Designers, writers, coders, testers, various team members, and the entire IT team need to understand the code.

Software engineers should understand how to work with several common computer languages, including Visual Basic, Python, Java, C, and C++. According to Stackoverflow, for the sixth year in a row, JavaScript is the most commonly used programming language. Python has risen in the ranks, surpassing C# this year, much like it surpassed PHP last year. Python has a solid claim to being the fastest-growing major programming language.

Software engineering is important because specific software is needed in almost every industry, in every business, and for every function. It becomes more important as time goes on – if something breaks within your application portfolio, a quick, efficient, and effective fix needs to happen as soon as possible.

Whatever you need software engineering to do – it is something that is vitally important and that importance just keeps growing. When you work with software engineers, you need to have a check and balance system to see if they are living up to their requirements and meeting KPIs.

Thursday, 20 August 2020

How Natural Language Processing Is Changing Data Analytics

Natural language processing (NLP) is the process by which computers understand and process natural human language. If you use Google Search, Alexa, Siri, or Google Assistant, you’ve already seen it at work. The advantage of NLP is that it allows users to make queries without first having to translate them into “computer-speak.”

NLP has the potential to make both business and consumer applications easier to use. Software developers are already incorporating it in more applications than ever, including machine translation, speech recognition, sentiment analysis, chatbots, market intelligence, text classification, and spell checking.

This technology can be especially useful within data analytics, which analyzes data to help business leaders, researchers, and others gain insights that assist them in making effective decisions. As we’ll see below, NLP can support data analytics efforts in multiple ways, such as solving major global problems and helping more people, even those not trained in data processing, use these systems.

Managing Big Data

With the help of NLP, users can analyze more data than ever, including for critical processes like medical research. This technology is especially important now, as researchers attempt to find a vaccine for COVID-19.

In a recent article, the World Economic Forum (WEF) points out that NLP can help researchers tackle COVID-19 by going through vast amounts of data that would be impossible for humans to analyze. “Machines can find, evaluate, and summarise the tens of thousands of research papers on the new coronavirus, to which thousands are added every week….” In addition, this technology can help track the spread of the virus by detecting new outbreaks.

According to the WEF article, NLP can aid the research process when data analysts “[train] machines to analyze a user question in a full sentence, then to read the tens of thousands of scholarly articles in the database, rank them and generate answer snippets and summaries.” For example, a researcher may use the question, “Is COVID-19 seasonal?” and the system reviews the data and returns relevant responses.
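
The systems the WEF describes are far more sophisticated, but the retrieval step can be sketched with standard tools: vectorize the papers and the question, then rank papers by similarity and return the best match as a candidate snippet. The toy abstracts below are invented.

```python
# Toy sketch of ranking research text against a natural-language question
# using TF-IDF similarity -- standard retrieval, not any specific system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Transmission of the coronavirus shows seasonal variation in temperate climates.",
    "A trial of candidate vaccines measured antibody response over eight weeks.",
    "Hand hygiene compliance in hospitals before and during the outbreak.",
]
question = "Is COVID-19 seasonal?"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(abstracts + [question])
doc_vectors, question_vector = matrix[:len(abstracts)], matrix[len(abstracts)]

scores = cosine_similarity(question_vector, doc_vectors).ravel()
best = scores.argmax()
print(f"best match (score {scores[best]:.2f}):", abstracts[best])
```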

Solving Problems

In addition to pressing health problems, NLP used in conjunction with artificial intelligence (AI) can help professionals solve other global challenges, such as clean energy, global hunger, improving education, and natural disasters. For example, according to a Council Post appearing on Forbes, “Huge companies like Google are setting their sights on flood prevention, utilizing AI to predetermine areas of risk and notify people in impacted areas.”

Enabling More Professionals

According to an InformationWeek article, “With natural language search capabilities, users don’t have to understand SQL or Boolean search, so the act of searching is easier.” As the quality of insights depends on knowing how to “ask the right questions,” this skill may soon become essential for business operators, managers, and administrative staff.

For example, anyone within a company could use NLP to query a BI system with a question like, “What was the inventory turnover rate last fiscal year compared to this fiscal year?” The system would convert each phrase to numeric information, search for the needed data, and return it in natural language format. Such queries allow any employee in any department to gain critical insights to help them make informed decisions.
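
Production natural-language BI tools rely on far richer language understanding, but a deliberately naive sketch shows the shape of the translation step: recognise a phrase, map it to a computation over the data, and phrase the result back in plain language. The table, column names and matching rule below are invented.

```python
# Deliberately naive sketch of turning a recognised phrase into a query
# over tabular data. Real NLP-to-BI systems are far more sophisticated.

import pandas as pd

sales = pd.DataFrame({
    "fiscal_year": [2019, 2019, 2020, 2020],
    "inventory_turnover": [5.1, 4.9, 6.2, 6.0],
})

def answer(question):
    """Map a known phrase to a computation and phrase the result back."""
    if "inventory turnover" in question.lower():
        by_year = sales.groupby("fiscal_year")["inventory_turnover"].mean()
        return ", ".join(f"FY{year}: {value:.1f}" for year, value in by_year.items())
    return "Sorry, I can't answer that yet."

print(answer("What was the inventory turnover rate last fiscal year compared to this fiscal year?"))
```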

Creating a Data-Driven Culture

In the past, business intelligence (BI) powered by data analytics required trained data professionals to correctly input queries and understand results. But NLP is changing that dynamic, resulting in what some experts are calling “data democratization”: the ability for more people to have access to data sets formerly reserved only for those with the advanced skills needed to interpret it.

The more people within a company who know how to gather insights based on data, the more that company can benefit from a data-driven culture, which is one that relies on hard evidence rather than guesswork, observation, or theories to make decisions. Such a culture can be nurtured in any industry, including healthcare, manufacturing, finance, retail, or logistics.

For example, a retail marketing manager might want to determine the demographics of customers who spend the most per purchase and target those customers with special offers or loyalty rewards. A manufacturing shift leader might want to test different methods within its operations to determine which one yields the greatest efficiency. With NLP, the commands needed to get this information can be executed by anyone in the business.

In Summary

NLP is not yet widespread. According to the InformationWeek article, “A few BI and analytics vendors are offering NLP capabilities but they're in the minority for now. More will likely enter the market soon to stay competitive.”

As it becomes more prevalent, NLP will enable humans to interact with computers in ways not possible before. This new type of collaboration will allow improvements in a wide variety of human endeavors, including business, philanthropy, health, and communication.

These advancements will become even more useful as computers learn to recognize context and even nonverbal human cues like body language and facial expressions. In other words, conversations with computers are likely to continue becoming more and more human.

https://www.kdnuggets.com/2020/08/natural-language-processing-changing-data-analytics.html

Monday, 10 August 2020

Software Architecture Guide

 What is architecture?

People in the software world have long argued about a definition of architecture. For some it's something like the fundamental organization of a system, or the way the highest level components are wired together. My thinking on this was shaped by an email exchange with Ralph Johnson, who questioned this phrasing, arguing that there was no objective way to define what was fundamental, or high level and that a better view of architecture was the shared understanding that the expert developers have of the system design.

A second common style of definition for architecture is that it's “the design decisions that need to be made early in a project”, but Ralph complained about this too, saying that it was more like the decisions you wish you could get right early in a project.

His conclusion was that “Architecture is about the important stuff. Whatever that is”. On first blush, that sounds trite, but I find it carries a lot of richness. It means that the heart of thinking architecturally about software is to decide what is important, (i.e. what is architectural), and then expend energy on keeping those architectural elements in good condition. For a developer to become an architect, they need to be able to recognize what elements are important, recognizing what elements are likely to result in serious problems should they not be controlled.

Why does architecture matter?

Architecture is a tricky subject for the customers and users of software products - as it isn't something they immediately perceive. But a poor architecture is a major contributor to the growth of cruft - elements of the software that impede the ability of developers to understand the software. Software that contains a lot of cruft is much harder to modify, leading to features that arrive more slowly and with more defects.

This situation is counter to our usual experience. We are used to something that is "high quality" as something that costs more. For some aspects of software, such as the user-experience, this can be true. But when it comes to the architecture, and other aspects of internal quality, this relationship is reversed. High internal quality leads to faster delivery of new features, because there is less cruft to get in the way.

While it is true that we can sacrifice quality for faster delivery in the short term, before the build up of cruft has an impact, people underestimate how quickly the cruft leads to an overall slower delivery. While this isn't something that can be objectively measured, experienced developers reckon that attention to internal quality pays off in weeks not months.

https://martinfowler.com/architecture/

Monday, 27 July 2020

How to protect algorithms as intellectual property

Ogilvy is in the midst of a project that converges robotic process automation and Microsoft Vision AI to solve a unique business problem for the advertising, marketing and PR firm. Yuri Aguiar is already thinking about how he will protect the resulting algorithms and processes from theft.

“I doubt it is patent material, but it does give us a competitive edge and reduces our time-to-market significantly,” says Aguiar, chief innovation and transformation officer. “I look at algorithms as modern software modules. If they manage proprietary work, they should be protected as such.”

Intellectual property theft has become a top concern of global enterprises. As of February 2020, the FBI had about 1,000 investigations involving China alone for attempted theft of US-based technology spanning just about every industry. It’s not just nation-states who look to steal IP; competitors, employees and partners are often culprits, too.

Security teams routinely take steps to protect intellectual property like software, engineering designs, and marketing plans. But how do you protect IP when it's an algorithm and not a document or database? Proprietary analytics are becoming an important differentiator as companies implement digital transformation projects. Luckily, laws are changing to include algorithms among the IP that can be legally protected.

Patent and classify algorithms as trade secrets
For years, in-house counsel rightly insisted that companies couldn’t patent an algorithm. Traditional algorithms simply told a computer what to do, but AI and machine learning require a set of algorithms that enable software to update and “learn” from previous outcomes without the need for programmer intervention, which can produce competitive advantage.

“People are getting more savvy about what they want to protect,” and guidelines have changed to accommodate them, says Mary Hildebrand, chair and founder of the privacy and cybersecurity practice at Lowenstein Sandler. “The US Patent Office issued some new guidelines and made it far more feasible to patent an algorithm and the steps that are reflected in the algorithm.”

Patents have a few downsides and tradeoffs. “If you just protect an algorithm, it doesn’t stop a competitor from figuring out another algorithm that takes you through the same steps,” Hildebrand says.

What’s more, when a company applies for a patent, it also must disclose and make public what is in the application. “You apply for a patent, spend money to do that, and there’s no guarantee you’re going to get it,” says David Prange, co-head of the trade secrets sub-practice at Robins Kaplan LLP in Minneapolis.

Many companies opt to classify an algorithm as a trade secret as a first line of defense. Trade secrets don’t require any federal applications or payments, “but you have to be particularly vigilant in protecting it,” Prange adds.

To defend against a possible lawsuit over ownership of an algorithm, companies must take several actions to maintain secrecy beginning at conception.

Take a zero-trust approach
As soon as an algorithm is conceived, a company could consider it a trade secret and take reasonable steps to keep it a secret, Hildebrand says. “That would mean, for example, knowing about it would be limited to a certain number of people, or employees with access to it would sign a confidentiality agreement.” Nobody would be permitted to take the algorithm home overnight, and it must be kept in a safe place. “Those are very common-sense steps, but it’s also very important if you’re compelled to prove that something is a trade secret.”

On the IT front, best practices for protecting algorithms are rooted in the principles of a zero-trust approach, says Doug Cahill, vice president and group director of cybersecurity at Enterprise Strategy Group. Algorithms deemed trade secrets “should be stored in a virtual vault,” he says. “The least amount of users should be granted access to the vault with the least amount of privileges required to do their job. Access to the vault should require a second factor of authentication and all access and use should be logged and monitored.”
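
As a rough illustration of those principles (least privilege, a second authentication factor, and full audit logging), here is a minimal sketch in Python. The policy, roles, and helper function are hypothetical assumptions made up for the example, not part of any particular vault product.

import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("vault-audit")

# Least privilege: only these (hypothetical) roles may touch the algorithm,
# and only with the actions listed here.
VAULT_POLICY = {"ml-engineer": {"read"}, "model-auditor": {"read"}}

def request_access(user: str, role: str, action: str, mfa_verified: bool) -> bool:
    """Grant access only if the role permits the action and a second factor
    has been verified; log every attempt, granted or not."""
    granted = mfa_verified and action in VAULT_POLICY.get(role, set())
    audit_log.info("time=%s user=%s role=%s action=%s mfa=%s granted=%s",
                   datetime.now(timezone.utc).isoformat(), user, role,
                   action, mfa_verified, granted)
    return granted

# A denied attempt (no second factor) followed by a granted one.
request_access("alice", "ml-engineer", "read", mfa_verified=False)
request_access("alice", "ml-engineer", "read", mfa_verified=True)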

Confidentiality agreements for all
Companies should ensure that every employee with access to the project or algorithm signs a confidentiality agreement. Hildebrand recalls one inventor who met with three potential partners who he believed were all representing the same company. He thought that he was covered by a confidentiality agreement signed by the company. It turned out that one of them was an independent consultant who hadn’t signed anything and ran away with the IP. The inventor lost trade-secret status for his invention. Hildebrand always counsels clients going into those meetings to make sure everyone in the room has signed.

Another reason to take signed confidentiality agreements seriously: “Engineers and scientists in particular love to talk to their peers about what they’re working on,” which is fine when they’re working in teams and learning from one another, Hildebrand says, but it’s not OK when they go out to dinner with competitors or discuss their research at the neighborhood BBQ.

Small teams and need-to-know access
Consider who really needs to have first-hand knowledge of the project or algorithm, Prange says. In smaller companies, people wear more hats and may need to know more, but in larger, more diversified companies, fewer people need to know everything. Even with a small group having access, “maybe use two-factor authentication, limit whether you can work on things outside the company or the physical building. Or you lock down computers so you can’t use thumb drives,” he adds.

Educate lines of business on protecting algorithms
IT leaders must educate lines of business so they understand what it is they need to protect and the investments the company is making, Prange says. For instance, “Salespeople like to know a lot about their products. Educate them on what aspects of the product are confidential.”

Don’t let departing employees take algorithms with them
Make sure employees know what they can’t take with them when they leave for another job. “Whenever there’s an employee working in a sensitive area or has access to sensitive information, they should be put through an exit interview to understand what they have and to emphasize that they have these signed obligations” that prohibit them from using the information in their next job, Prange says.

Partnerships should be treated the same way, Prange adds. “We see a lot of cases where a company is in a joint development relationship and it sours or fizzles out, and one or both of the companies may independently move on. Then suddenly there’s a dispute when one hits the market with the information they were sharing.”

Establish proof you own an algorithm
“Tried and true tactics will clearly be employed to gain access to algorithms, including socially engineered spear-phishing attacks to steal developer credentials via bogus login and password reset pages to gain access to the systems that store such intellectual property,” Cahill says.

It’s hard to protect against someone with the intention of taking an algorithm or process, Prange says. “You can have all kinds of restrictions, but if someone has the intent, they’re going to do it — but that doesn’t mean you don’t do anything.”

To help prove ownership of an algorithm and prevent theft or sabotage, IBM and others have been working on ways to embed digital watermarks into the deep neural networks in AI, similar to the multimedia concept of watermarking digital images. The IBM team’s method, unveiled in 2018, allows applications to verify the ownership of neural network services with API queries, which is essential to protect against attacks that might, for instance, fool an algorithm in an autonomous car into driving past a stop sign.

The two-step process involves an embedding stage, where the watermark is applied to the machine learning model, and a detection stage, where it’s extracted to prove ownership.
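
To make those two stages concrete, here is a minimal sketch of the general trigger-set idea in Python. It is not IBM’s published technique; the data, model size, trigger labels, and verification threshold are all assumptions invented for the example.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Embedding stage: ordinary task data (label is 1 when x0 + x1 > 0) plus a
# secret trigger set of unusual inputs whose owner-chosen labels contradict
# the task rule, so only a model trained on them answers this way.
X_task = rng.normal(size=(500, 2))
y_task = (X_task.sum(axis=1) > 0).astype(int)
X_trigger = rng.normal(loc=6.0, scale=0.2, size=(20, 2))
y_trigger = np.zeros(20, dtype=int)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(np.vstack([X_task, X_trigger]), np.concatenate([y_task, y_trigger]))

# Detection stage: query the suspect model on the trigger set and treat high
# agreement with the owner's labels as evidence of ownership.
def verify_ownership(predict, X_trig, y_trig, threshold=0.9):
    return (predict(X_trig) == y_trig).mean() >= threshold

print("watermark detected:", verify_ownership(model.predict, X_trigger, y_trigger))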

The concept does have a few caveats. It doesn’t work on offline models, and it can’t protect against infringement through “prediction API” attacks that extract the parameters of machine learning models by sending queries and analyzing the responses.

Researchers at KDDI Research and the National Institute of Informatics also introduced a method for watermarking deep learning models in 2017.

Another problem with many watermark solutions is that current designs have not been able to address piracy attacks, in which third parties falsely claim model ownership by embedding their own watermarks into already-watermarked models.

In February 2020, researchers at The University of Chicago unveiled “null embedding,” a way to build piracy-resistant watermarks into deep neural networks (DNNs) at a model’s initial training. It builds strong dependencies between the model’s normal classification accuracy and the watermark, and as a result, attackers can’t remove an embedded watermark or add a new pirate watermark to an already-watermarked model.  These concepts are in the early stages of development.

https://www.csoonline.com/

Monday, 25 May 2020

What is project scope? Defining and outlining project success

Clearly defining your project’s scope helps to effectively manage stakeholder expectations and ensures that all of the project’s elements are aligned with the objectives — increasing the chances of success. Here’s what you need to know about defining project scope.

Project scope definition
Project scope is a detailed outline of all aspects of a project, including all related activities, resources, timelines, and deliverables, as well as the project’s boundaries. A project scope also outlines key stakeholders, processes, assumptions, and constraints, as well as what the project is about, what is included, and what isn’t. All of this essential information is documented in a scope statement.

The project scope statement  
The project scope statement is a key document that provides all stakeholders with a clear understanding of why the project was initiated and defines its key goals. Most project scope statements will include these elements.

  • A project statement of work (SoW), which is a detailed breakdown of all work to be performed by a project team and any important elements that may impact the outcome
  • Constraints that might limit or negatively impact the outcome of the project, including resources, procurement issues, timing, or lack of information
  • Scope exclusions, which can be anything that will not be part of the project or its deliverables
  • Milestones that provide the exact date that something will be delivered or completed
  • The final deliverables that will be provided to the customer at the end of the project — for example, a report, a software feature, any process insights or analysis, or any product or service that a customer needs
  • Acceptance criteria that spell out exactly how success will be measured
  • Final approval whereby the customer will sign off on the scope statement, confirming that all parameters have been included and the document is complete and accurate

Key steps for defining your project scope
Properly defining the scope of a project is the key to successfully managing your project. Here are the steps you can follow to define your project scope.

  • Work with key stakeholders to define and create a scope statement by identifying what is within scope, and out of scope. Collaborating with stakeholders helps to ensure essential things do not fall through the cracks.
  • Identify, document, and communicate assumptions. Assumptions are those elements that relate to the project that are assumed to be true for the duration of the project. Assumptions are necessary to provide an estimate of the cost and schedule to deliver the project’s scope during the planning phase of a project.
  • Gain buy-in for the scope statement with the stakeholders who are most impacted to ensure that everyone is on the same page.
Project scope example
Let’s say you are a project manager defining the scope for a content marketing project. A very simple project scope statement might include the following.

Introduction
This content marketing project is being undertaken for XYZ company for the purpose of creating an article to be posted on their site to build brand awareness.

Project Scope
This project will include research, content strategy, writing the article, and publishing it on XYZ’s website under the XYZ blog. It will also include sharing the article on social media for the month of April 2020. All activities will be conducted by Joe Smith of ABC company.

Project Deliverables
Project deliverables will include one well-researched written article of up to 1,000 words to be delivered by email to Jane@XYZ.com no later than ___ date.    

Project Acceptance Criteria
Jane at XYZ company will review and approve the final article version before publishing.

Project Exclusions
This project will not include payment to external vendors for research or outsourced services.

Project Constraints
Constraints may include communication delays, changes in scope, or technical difficulties.

Once the project scope statement is complete and approved, and a project is underway, the project scope will need to be carefully managed to avoid scope creep.

What is scope creep?
Scope creep refers to changes that occur after a project has started and that were not defined or anticipated in the scope statement. When scope creep occurs, it can negatively impact the project timeline, deliverable quality, resources, budget, and other aspects. Managing the scope of your project can help avoid unwelcome surprises.

Project scope management
In addition to the ongoing review and monitoring of project activities, there are steps that should be undertaken to manage the scope of the project to avoid scope creep.

  • Identify whether there are any changes to the requirements for your project. This is a vital step since these changes directly affect the project goals and all related activities.
  • Identify how the changes will impact the project. Before you can make adjustments to the scope of the project, you need to understand where and how changes impact the outcome.
  • Gain approval for changes before proceeding with a change in activities or direction.
  • Implement the approved changes in a timely manner to reduce delays and risks.
Project scope template
                         [Project Title] – Project Scope                                   
Introduction
The Introduction provides a high-level overview of the project.

Project Scope
State the scope of the project, including what it does and does not cover. This helps clarify what is part of the project and avoids confusion among project team members and stakeholders.

Project Deliverables
State the planned deliverables for the project.         

Project Acceptance Criteria
Define the acceptance criteria. What objectives will be met, and how will success be measured?

Project Exclusions
State what is not included in the scope of this project.

Project Constraints
Provide any constraints on the project, such as hard dates, staff or equipment limitations, financial or budget constraints, or technical limitations.

By developing a solid understanding of a project’s purpose and clearly defining, documenting, and managing your project scope, you can ensure that you are well-positioned to deliver a successful project without having to deal with scope creep.

Saturday, 23 May 2020

Is COVID-19 a ‘Forcing Function’ for Cloud Native?

The ongoing COVID-19 pandemic is causing significant ripples throughout organizations, which have had to adapt to a rapidly upended economic market where anything that is not essential is being tossed aside. However, for those in the midst of a digital transformation journey, such tossing can’t be done so haphazardly.

During a panel discussion at this week’s Software Circus digital event, Kelsey Hightower, staff developer advocate at Google Cloud, used the term “forcing function” to describe how the current health crisis is forcing organizations to make technology decisions. He explained that organizations have typically only made hard decisions when an outside force required immediate actions.

“I think for COVID-19, it was a big forcing function,” Hightower said. “You don’t get to ask for six months, you don’t get to ask for an 18-month delay, it’s not up to you, actually. It is what it is and you have no choice. So once you take the choices off the table I think that’s what forces people to innovate.”

Hightower added that while some people might do their best work while procrastinating, the COVID-19 situation means “nope, you got to pick one. No one’s going into the office so you have to choose. … This is not a decision you can make in the next 18 months. Buy one now and then learn how to use it. So you need that forcing function.”

Jamie Dobson, CEO of Container Solutions, concurred, adding that organizations will find this sort of timing pressure a necessary evil.

“Without that forcing function, there is no rabbit hole to go through. You will not find the chaos. Nobody does this. Nobody moves to cloud native unless there’s a good [reason],” Dobson said.

Hightower did note that while organizations are feeling this pressure to decide, it should be less a decision on how or why to innovate toward cloud native and more on just what tools they should be using to make that pivot.

“Most companies are not really struggling with the ability to innovate,” Hightower said. “A lot of the stuff that they’re going to use are tools and there was innovation that went into producing those tools. The innovative thing that we’re asking companies to do is just pick a tool, literally pick one of the 10 and as soon as you pick one then that will be the most innovative thing that some companies do in a long time. Literally picking something. Not building the thing. Not actually knowing how to actually leverage it 100%. Sometimes the biggest hurdle for most companies is just the picking part.”

Saturday, 9 May 2020

Why didn’t COVID-19 break the internet?

Just a few months into its fifty-first year, the internet has proven its flexibility and survivability. 

In the face of a rapid world-wide traffic explosion from private, public and government entities requiring employees to work from home to help curb the spread of the coronavirus, some experts were concerned the bandwidth onslaught might bring the internet to its knees. All indications are that while there have been hot spots, the internet infrastructure has held its own so far – a silver lining of sorts in a dreadful situation.

Evidence of the increased traffic is manifold:

  • Video on Verizon’s network is up 41%, VPN usage is up 65%, and there’s been a tenfold increase in collaboration tool usage, said Andrés Irlando, senior vice president and president at Verizon’s public sector division.
  • Downstream traffic has increased up to 20% and upstream traffic up to 40% during the last two months, according to Cox Communications CTO Kevin Hart. “To keep ahead of the traffic we have been executing on our long-term plan that stays 12-18 months ahead of demand curves. We’ve had to scramble to stay ahead but 99% of our nodes are healthy,” he said.
  • The DE-CIX (the Deutsche Commercial Internet Exchange) in Frankfurt set a new world record for data throughput in early March, hitting more than 9.1 terabits per second. Never before has so much data been exchanged at peak times at an Internet Exchange, the DE-CIX stated.

How is the internet handling this situation?

First, what does the internet look like? It consists of access links that move traffic from individual connected devices to high-bandwidth routers that move traffic from its source over the best available path toward its destination using TCP/IP. The core that it travels through is made up of individual high-speed fiber-optic networks that peer with each other to create the internet backbone.

The individual core networks are privately owned by Tier 1 internet service providers (ISP), giant carriers whose networks are tied together. These providers include AT&T, CenturyLink, Cogent Communications, Deutsche Telekom, GTT Communications, NTT Communications, Sprint, Tata Communications, Telecom Italia Sparkle, Telia Carrier, and Verizon. 

These backbone ISPs connect their networks at peering points, neutrally owned facilities with high-speed switches and routers that move traffic among the peers. These are often owned by third parties, sometimes non-profits, that help unify the backbone.

The backbone infrastructure relies on the fastest routers, which can deliver 100Gbps trunk speeds. Internet equipment is made by a variety of vendors including Cisco, Extreme, Huawei, Juniper, and Nokia.
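
One simple way to see this structure in practice is to trace the routers a packet crosses on the way to a destination. The sketch below merely wraps the standard traceroute utility, so it assumes a Unix-like host where that tool is installed; the destination host is only an example.

import subprocess

def show_path(destination: str) -> None:
    """Print the routers a packet crosses on its way to the destination."""
    result = subprocess.run(
        ["traceroute", "-m", "20", destination],   # cap the trace at 20 hops
        capture_output=True, text=True, check=False)
    print(result.stdout or result.stderr)

# The hops typically pass from the local access network into a Tier 1
# backbone and across one or more peering points before reaching the target.
show_path("example.com")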

Cisco said it has been analyzing traffic statistics with major carriers across Asia, Europe, and the Americas, and its data shows that typically, the most congested point in the network occurs at inter-provider peering points, Jonathan Davidson, senior vice president and general manager of Cisco’s Mass-Scale Infrastructure Group, wrote in a blog on March 26.

“Our analysis at these locations shows an increase in traffic of 10% to [41%] over normal levels. In every country [with peering points in Hong Kong, Italy and France and Russia seeing the biggest traffic jumps], traffic spiked with the decision to shut down non-essential businesses and keep people at home. Since then, traffic has remained stable or has experienced a slight uptick over the days that followed,” Davidson stated.

While overall the story has been positive, the situation hasn’t been perfect. There have been a variety of outages, according to traffic watchers at ThousandEyes, which writes weekly reports on outages among ISPs, cloud providers and conferencing services. Globally, the number of ISP outages hit a record high of 250 during the week of April 20-26, 124 of them in the U.S. That was the highest number since the end of March, but two issues – fiber cuts in CenturyLink’s network and a broad Tata Communications outage – helped push that number up. Typically, though, these problems have not been caused by networks being overwhelmed with traffic.

Resilient by design
Network planning, traffic engineering and cutting-edge equipment can take most of the credit for the internet’s ability to adjust in times of need.

“IP was built to last through any sort of disaster, and the core was built to live through almost anything,” Davidson said. “Over the years there has been a tremendous amount of infrastructure and CAPEX spending to build out this massive network. We are no longer in the days of the wild west of years ago; the internet is a critical resource and the expectations are much higher.”

Indeed, the principle of over-building capacity is one of the key reasons the internet has performed so well. “Network capacity is critical. Our network team and engineers have been able to keep the same amount of capacity or headroom on our networks during this crisis,” said Verizon’s Irlando. “We continue to augment capacity and connectivity.”

“There was some anxiety as traffic began to ramp up at the start. We’ve seen a 35% increase in internet traffic – but ultimately the networks have handled it quite well,” said Andrew Dugan, chief technology officer at CenturyLink.

Internet planning actually took into account the demands a pandemic would place on the network, Dugan said. “CenturyLink and other providers began developing pandemic plans more than a decade ago, and we knew that part of the response would rely significantly on our infrastructure,” he said.

People who build large IP networks engineer them for unexpected congestion, he said.  Dugan pointed to three factors that are helping the internet successfully support the increased volume of traffic:

  • Networks are built with redundancy to handle fiber cuts and equipment failures. This means creating capacity headroom to support sudden disasters.
  • Network monitoring helps operators anticipate where congestion is occurring, allowing them to move traffic to less congested paths (a simplified version of this check is sketched after this list).
  • ISPs have been building out networks for years to account for increasing demand, and planning specifications help prevent networks from reaching capacity.
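
As a rough illustration of the monitoring point above, the sketch below compares link utilization against a planned headroom threshold and flags links whose traffic should be shifted. The link names and utilization figures are invented for the example.

# Planned headroom: operators aim to act well before links are saturated.
CAPACITY_HEADROOM = 0.80

# Invented link names and utilization figures, for illustration only.
link_utilization = {
    "access-to-core-1": 0.55,
    "core-peering-frankfurt": 0.91,
    "core-peering-ashburn": 0.62,
}

for link, utilization in link_utilization.items():
    if utilization > CAPACITY_HEADROOM:
        print(f"{link}: {utilization:.0%} utilized, shift traffic to a less congested path")
    else:
        print(f"{link}: {utilization:.0%} utilized, within planned headroom")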

When building fiber backbones, ISPs often bury the cabling to protect it from storms and accidents that can take down above-ground power grids. Since much of the cost of deploying the cable is in the labor to dig the trenches, while they’re at it, most ISPs install more fiber strands  than they have a current use for, according to ISP OTELCO. This so-called dark fiber can take the form of additional cables or cables with unused fiber strands within them that optical switches can light up quickly as the need arises.

“We had some infrastructure segments that ran hot,” Dugan said about the COVID-19 spike in traffic on CenturyLink’s network, “but we are fiber-based so we quickly were able to add capacity.” And ISPs are adding more fiber all the time, which “is key to ensuring networks can meet growing demands and provide support in times of crisis, like these,” Dugan said.

The shifting last mile
Fiber may be commonplace in the largest internet backbones, but it is much less so in the last-mile connections that reach homes. While fiber is the fastest home internet option by far, availability is still scattered in the US, according to Broadbandnow. Due to the high cost of installing fiber service directly to homes, ISP connections are still predominantly served by coax cable TV services, even in major cities. Chicago, for example, only has 21% fiber availability as of 2020. Dallas has about 61%, and that's actually high compared to other major metros in the US, the company stated.

When work-at-home orders came down in March, the source of internet traffic shifted dramatically. Rather than coming from business sites connected by high-bandwidth links, suddenly significant amounts of traffic were coming from private homes, dumping more traffic onto the access networks during what would otherwise be off-peak hours.

It was a significant enough issue that AT&T CEO Randall Stephenson noted it during the company’s first-quarter financial call with analysts. “What we are seeing is the volumes of network usage moving out of urban and into suburban areas… and we are seeing heavy, heavy volume on the networks out of homes,” Stephenson said. Work-at-home employees, students doing online classwork and online shopping added to the load.

But as CenturyLink’s Dugan noted, the work from home activity is generally happening during the day while peak internet usage continues to occur in the evening when people generally consume video and gaming. This has helped balance out the additional internet use.

In addition, traffic engineering may be able to find less congested routes if the traffic load gets too great. When that’s not possible, providers have to look elsewhere. For example, U.S. Cellular boosted its mobile broadband capacity in six states by borrowing wireless spectrum for 60 days from other carriers who owned the licenses for those spectrum bands.

AI and automation help dodge issues
Other attributes have helped the internet’s performance as well. For example, AT&T said its artificial intelligence is helping remotely troubleshoot problems with customer equipment and identify issues before they become problems. “We’ve expedited deployments of new AI capabilities in certain markets that will allow us to balance the traffic load within a sector and across sectors to help avoid overloading specific cells and improve the experience,” AT&T stated.

Increased use of automation has also had an impact by enabling network engineers to quickly manage traffic, Dugan said. “Service providers who invested in software-defined networking prior to the coronavirus crisis may have been more responsive to changing traffic patterns than ones that are still using legacy or hybrid networks,” Dugan said.

Going forward, Verizon’s Irlando said he doesn’t think current internet traffic levels are the new normal. “No one knows the future, but we will not have 90% of America working from home,” he said.

One indication of remote-worker impact comes from a Gartner survey of 317 CFOs and finance leaders in March that said 74% of businesses will move at least 5% of their previously on-site workforce to permanently remote positions post-COVID-19.

Cox’s Hart says the situation underscores the need to continue investing in the backbone. He says his company will spend $10 billion over the next five years to build out network capacity, improve access, drive higher speeds and improve latency and security.

Internet access isn’t universal
There is one overarching problem the COVID-19 crisis is shining a light on: the digital divide. For an estimated 3.7 billion people worldwide, internet access is either unavailable or too expensive, and that is palpable when connectivity to the outside world becomes essential. 

“Internet access has become increasingly vital to our health, safety, and economic and societal survival. As cities and countries across the globe ask their citizens to stay at home, billions of us are fortunate enough to be able to heavily rely on the internet to fill the gaps in our work and life,” wrote Tae Yoo, senior vice president of Cisco Corporate Affairs in a blog about the digital divide.

“There is no silver bullet on how to solve this problem,” said Dan Rabinovitsj, vice president of connectivity at Facebook. “It’s going to take a lot of investment and innovation from network operators to drive costs out of the ecosystem so that they can pour more money back into the network,” he said. “Infrastructure is having its moment right now, everyone is depending on it,” Rabinovitsj said.

“The internet is moving from huge to absolutely massive. It’s moving from being critical to being essential to economies, businesses and governments,” Cisco’s Davidson said. “As a result of COVID-19, we’re getting a glimpse of what the internet of the future is today,” Davidson said.