Monday, 27 July 2020

How to protect algorithms as intellectual property

Ogilvy is in the midst of a project that combines robotic process automation and Microsoft Vision AI to solve a unique business problem for the advertising, marketing and PR firm. Yuri Aguiar is already thinking about how he will protect the resulting algorithms and processes from theft.

“I doubt it is patent material, but it does give us a competitive edge and reduces our time-to-market significantly,” says Aguiar, chief innovation and transformation officer. “I look at algorithms as modern software modules. If they manage proprietary work, they should be protected as such.”

Intellectual property theft has become a top concern of global enterprises. As of February 2020, the FBI had about 1,000 investigations involving China alone for attempted theft of US-based technology spanning just about every industry. It’s not just nation-states who look to steal IP; competitors, employees and partners are often culprits, too.

Security teams routinely take steps to protect intellectual property like software, engineering designs, and marketing plans. But how do you protect IP when it's an algorithm and not a document or database? Proprietary analytics are becoming an important differentiator as companies implement digital transformation projects. Luckily, laws are changing to include algorithms among the IP that can be legally protected.

Patent and classify algorithms as trade secrets
For years, in-house counsel rightly insisted that companies couldn’t patent an algorithm. Traditional algorithms simply told a computer what to do, but AI and machine learning require a set of algorithms that enable software to update and “learn” from previous outcomes without programmer intervention, which can produce a competitive advantage.

“People are getting more savvy about what they want to protect,” and guidelines have changed to accommodate them, says Mary Hildebrand, chair and founder of the privacy and cybersecurity practice at Lowenstein Sandler. “The US Patent Office issued some new guidelines and made it far more feasible to patent an algorithm and the steps that are reflected in the algorithm.”

Patents have a few downsides and tradeoffs. “If you just protect an algorithm, it doesn’t stop a competitor from figuring out another algorithm that takes you through the same steps,” Hildebrand says.

What’s more, when a company applies for a patent, it also must disclose and make public what is in the application. “You apply for a patent, spend money to do that, and there’s no guarantee you’re going to get it,” says David Prange, co-head of the trade secrets sub-practice at Robins Kaplan LLP in Minneapolis.

Many companies opt to classify an algorithm as a trade secret as a first line of defense. Trade secrets don’t require any federal applications or payments, “but you have to be particularly vigilant in protecting it,” Prange adds.

To defend against a possible lawsuit over ownership of an algorithm, companies must take several actions to maintain secrecy beginning at conception.

Take a zero-trust approach
As soon as an algorithm is conceived, a company could consider it a trade secret and take reasonable steps to keep it a secret, Hildebrand says. “That would mean, for example, knowing about it would be limited to a certain number of people, or employees with access to it would sign a confidentiality agreement.” Nobody would be permitted to take the algorithm home overnight, and it must be kept in a safe place. “Those are very common-sense steps but it’s also very important if you’re called upon to prove that something is a trade secret.”

On the IT front, best practices for protecting algorithms are rooted in the principles of a zero-trust approach, says Doug Cahill, vice president and group director of cybersecurity at Enterprise Strategy Group. Algorithms deemed trade secrets “should be stored in a virtual vault,” he says. “The least amount of users should be granted access to the vault with the least amount of privileges required to do their job. Access to the vault should require a second factor of authentication and all access and use should be logged and monitored.”
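
Cahill’s description maps naturally onto a simple access-control check. The sketch below is a minimal, hypothetical illustration, not a product recommendation: an access-control list names the few users allowed to read a vaulted artifact, a placeholder second-factor check gates the request, and every attempt is logged for monitoring. All names, the artifact, and the OTP check are illustrative stand-ins.

    import hmac, json, logging

    # Hypothetical ACL: the fewest users, with the fewest privileges (read-only here).
    VAULT_ACL = {
        "asmith": {"artifacts": {"ranking-model-v3"}, "privileges": {"read"}},
    }

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    audit = logging.getLogger("vault.audit")

    def verify_second_factor(user: str, otp: str) -> bool:
        """Stand-in for a real TOTP or hardware-token check."""
        expected = "123456"  # a real system derives this per user and time window
        return hmac.compare_digest(otp, expected)

    def fetch_algorithm(user: str, artifact: str, otp: str) -> bytes:
        entry = VAULT_ACL.get(user)
        allowed = (entry is not None
                   and artifact in entry["artifacts"]
                   and "read" in entry["privileges"]
                   and verify_second_factor(user, otp))
        # Log every attempt, allowed or denied, so access can be monitored.
        audit.info(json.dumps({"user": user, "artifact": artifact, "allowed": allowed}))
        if not allowed:
            raise PermissionError(f"{user} denied access to {artifact}")
        return b"<encrypted algorithm bytes>"  # stand-in for the vaulted secret

In production this logic would live behind a secrets manager or hardware security module rather than application code, but the moving parts are the same: deny by default, require a second factor, and keep an audit trail.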

Confidentiality agreements for all
Companies should ensure that every employee with access to the project or algorithm signs a confidentiality agreement. Hildebrand recalls one inventor who met with three potential partners who he believed were all representing the same company. He thought he was covered by a confidentiality agreement signed by the company. It turned out that one of them was an independent consultant who hadn’t signed anything and ran off with the IP. The inventor lost trade-secret status for his invention. Hildebrand always counsels clients going into such meetings to make sure everyone in the room has signed.

Another reason to take signed confidentiality agreements seriously: “Engineers and scientists in particular love to talk to their peers about what they’re working on,” which is fine when they’re working in teams and learning from one another, Hildebrand says, but it’s not OK when they go out to dinner with competitors or discuss their research at the neighborhood BBQ.

Small teams and need-to-know access
Consider who really needs to have first-hand knowledge of the project or algorithm, Prange says. In smaller companies, people wear more hats and may need to know more, but in larger, more diversified companies, fewer people need to know everything. Even with a small group having access, “maybe use two-factor authentication, limit whether you can work on things outside the company or the physical building. Or you lock down computers so you can’t use thumb drives,” he adds.

Educate lines of business on protecting algorithms
IT leaders must educate lines of business so they understand what it is they need to protect and investments the company is making, Prange says. For instance, “Salespeople like to know a lot about their products. Educate them on what aspects of the product are confidential.”

Don’t let departing employees take algorithms with them
Make sure employees know what they can’t take with them when they leave for another job. “Whenever there’s an employee working in a sensitive area or has access to sensitive information, they should be put through an exit interview to understand what they have and to emphasize that they have these signed obligations” that prohibit them from using the information in their next job, Prange says.

Partnerships should be treated the same way, Prange adds. “We see a lot of cases where a company is in a joint development relationship and it sours or fizzles out, and one or both of the companies may independently move on. Then suddenly there’s a dispute when one hits the market with the information they were sharing.”

Establish proof you own an algorithm
“Tried and true tactics will clearly be employed to gain access to algorithms, including socially engineered spear-phishing attacks to steal developer credentials via bogus login and password reset pages to gain access to the systems that store such intellectual property,” Cahill says.

It’s hard to protect against someone with the intention of taking an algorithm or process, Prange says. “You can have all kinds of restrictions, but if someone has the intent, they’re going to do it — but that doesn’t mean you don’t do anything.”

To help prove ownership of an algorithm and prevent theft or sabotage, IBM and others have been working on ways to embed digital watermarks into the deep neural networks of AI, similar to the multimedia concept of watermarking digital images. The IBM team’s method, unveiled in 2018, allows applications to verify the ownership of neural-network services with API queries, which is essential to protect against attacks that might, for instance, fool an algorithm in an autonomous car into driving past a stop sign.

The two-step process involves an embedding stage, where the watermark is applied to the machine learning model, and a detection stage, where it’s extracted to prove ownership.
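
IBM hasn’t published its method as a simple code listing, but the general trigger-set idea behind neural-network watermarking can be sketched in a few lines. In the toy illustration below, everything is a stand-in: the embedding stage trains the model to answer a secret set of trigger inputs with owner-chosen labels, and the detection stage queries a suspect model’s prediction API and counts how many triggers come back as expected.

    import random

    def make_trigger_set(n=10, seed=7):
        """The owner's secret trigger inputs and the labels chosen for them."""
        rng = random.Random(seed)
        return [([rng.random() for _ in range(4)], rng.randrange(2)) for _ in range(n)]

    class ToyModel:
        """Stand-in for a DNN: memorizes training pairs, otherwise uses a fixed rule."""
        def __init__(self):
            self.memory = {}
        def train(self, x, y):
            self.memory[tuple(x)] = y
        def predict(self, x):
            return self.memory.get(tuple(x), int(sum(x) > 2))

    def embed_watermark(model, triggers):
        for x, y in triggers:              # embedding stage: fit the triggers
            model.train(x, y)

    def verify_ownership(predict_api, triggers, threshold=0.9):
        """Detection stage: query the suspect model's API with the secret triggers."""
        hits = sum(predict_api(x) == y for x, y in triggers)
        return hits / len(triggers) >= threshold

    triggers = make_trigger_set()
    model = ToyModel()
    embed_watermark(model, triggers)
    print(verify_ownership(model.predict, triggers))  # True for the watermarked model

An unrelated model would match the random trigger labels only about half the time, well below the threshold, which is what lets the owner argue the suspect model is theirs.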

The concept does have a few caveats. It doesn’t work on offline models, and it can’t protect against infringement through “prediction API” attacks that extract the parameters of machine learning models by sending queries and analyzing the responses.
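
To see why extraction attacks sidestep watermarks, consider a deliberately simple case. The toy below assumes the provider’s model is linear: querying the origin reveals the bias and querying each basis vector reveals each weight, giving the attacker a functionally perfect clone without ever touching the watermarked parameters directly. Real attacks target far richer models and need many more queries, but the principle is the same.

    # The provider's hidden parameters; the attacker sees only prediction_api().
    SECRET_W, SECRET_B = [2.0, -1.0, 0.5], 3.0

    def prediction_api(x):
        return sum(w * xi for w, xi in zip(SECRET_W, x)) + SECRET_B

    dim = len(SECRET_W)
    stolen_b = prediction_api([0.0] * dim)            # query the origin -> bias
    stolen_w = [prediction_api([1.0 if j == i else 0.0 for j in range(dim)]) - stolen_b
                for i in range(dim)]                  # one query per weight
    print(stolen_w, stolen_b)  # [2.0, -1.0, 0.5] 3.0 -- a perfect functional copy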

Researchers at KDDI Research and the National Institute of Informatics also introduced a method of watermarking deep learning models in 2017.

Another problem with many watermark solutions is that current designs have not been able to address piracy attacks, in which third parties falsely claim model ownership by embedding their own watermarks into already-watermarked models.

In February 2020, researchers at the University of Chicago unveiled “null embedding,” a way to build piracy-resistant watermarks into deep neural networks (DNNs) during a model’s initial training. It builds strong dependencies between the model’s normal classification accuracy and the watermark, so attackers can’t remove an embedded watermark or add a new pirate watermark to an already-watermarked model. These concepts are in the early stages of development.

https://www.csoonline.com/

Monday, 25 May 2020

What is project scope? Defining and outlining project success

Clearly defining your project’s scope helps to effectively manage stakeholder expectations and ensures that all of the project’s elements are aligned with the objectives — increasing the chances of success. Here’s what you need to know about defining project scope.

Project scope definition
Project scope is a detailed outline of all aspects of a project, including all related activities, resources, timelines, and deliverables, as well as the project’s boundaries. It also identifies key stakeholders, processes, assumptions, and constraints: what the project is about, what is included, and what isn’t. All of this essential information is documented in a scope statement.

The project scope statement  
The project scope statement is a key document that provides all stakeholders with a clear understanding of why the project was initiated and defines its key goals. Most project scope statements will include these elements.

  • A project statement of work (SoW), which is a detailed breakdown of all work to be performed by a project team and any important elements that may impact the outcome
  • Constraints that might limit or negatively impact the outcome of the project, including resources, procurement issues, timing, or lack of information
  • Scope exclusions, which can be anything that will not be part of the project or its deliverables
  • Milestones that provide the exact date that something will be delivered or completed
  • The final deliverables that will be provided to the customer at the end of the project — for example, a report, a software feature, any process insights or analysis, or any product or service that a customer needs
  • Acceptance criteria that spell out exactly how success will be measured
  • Final approval, whereby the customer signs off on the scope statement, confirming that all parameters have been included and the document is complete and accurate

Key steps for defining your project scope
Properly defining the scope of a project is key to successfully managing it. Here are the steps you can follow to define your project scope.

  • Work with key stakeholders to define and create a scope statement by identifying what is in scope and what is out of scope. Collaborating with stakeholders helps ensure essential items do not fall through the cracks.
  • Identify, document, and communicate assumptions. Assumptions are the elements of the project that are presumed to hold true for its duration; they are necessary for estimating the cost and schedule of delivering the project’s scope during the planning phase.
  • Gain buy-in for the scope statement from the stakeholders who are most impacted to ensure that everyone is on the same page.
Project scope example
Let’s say you are a project manager defining the scope for a content marketing project. A very simple project scope statement might include the following.

Introduction
This content marketing project is being undertaken for XYZ company for the purpose of creating an article to be posted on their site to create brand awareness.

Project Scope
This project will include research, content strategy, writing the article, and publishing it on XYZ’s website under the XYZ blog. It will also include sharing the article on social media for the month of April 2020. All activities will be conducted by Joe Smith of ABC company.

Project Deliverables
Project deliverables will include one well-researched written article of up to 1,000 words to be delivered by email to Jane@XYZ.com no later than ___ date.    

Project Acceptance Criteria
Jane at XYZ company will review and approve the final article version before publishing.

Project Exclusions
This project will not include payment to external vendors for research or outsourced services.

Project Constraints
Constraints may include communication delays, changes in scope, or technical difficulties.

Once the project scope statement is complete and approved, and a project is underway, the project scope will need to be carefully managed to avoid scope creep.

What is scope creep?
Scope creep refers to a scenario in which changes occur after the project has started and were not defined or anticipated in the scope statement. When scope creep occurs, it can negatively impact the project timeline, deliverable quality, resources, budget, and other aspects. Managing the scope of your project can help avoid unwelcome surprises.

Project scope management
In addition to the ongoing review and monitoring of project activities, there are steps that should be undertaken to manage the scope of the project to avoid scope creep.

  • Identify whether there are any changes to the requirements for your project. This is a vital step, since such changes directly affect the project goals and all related activities.
  • Identify how the changes will impact the project. Before you can adjust the project’s scope, you need to understand where and how changes affect the outcome.
  • Gain approval for changes before proceeding with a change in activities or direction.
  • Implement the approved changes in a timely manner to reduce delays and risks.
Project scope template
                         [Project Title] – Project Scope                                   
Introduction
The Introduction provides a high-level overview of the project.

Project Scope
State the scope of the project, including what the project does and does not cover. This helps clarify what is included and avoids confusion among project team members and stakeholders.

Project Deliverables
State the planned deliverables for the project.         

Project Acceptance Criteria
Define the acceptance criteria. What objectives will be met, and how will success be measured?

Project Exclusions
State what is not included in the scope of this project.

Project Constraints
Provide any constraints on the project, such as hard dates, staff or equipment limitations, financial or budget constraints, or technical limitations.

By developing a solid understanding of a project’s purpose and clearly defining, documenting, and managing your project scope, you can ensure that you are well-positioned to deliver a successful project without having to deal with scope creep.

Saturday, 23 May 2020

Is COVID-19 a ‘Forcing Function’ for Cloud Native?

The ongoing COVID-19 pandemic is causing significant ripples throughout organizations, which have had to adapt rapidly to an upended economic market where anything that is not essential is being tossed aside. However, for those in the midst of a digital transformation journey, such tossing can’t be done so haphazardly.

During a panel discussion at this week’s Software Circus digital event, Kelsey Hightower, staff developer advocate at Google Cloud, used the term “forcing function” to describe how the current health crisis is forcing organizations to make technology decisions. He explained that organizations have typically only made hard decisions when an outside force required immediate actions.

“I think for COVID-19, it was a big forcing function,” Hightower said. “You don’t get to ask for six months, you don’t get to ask for an 18-month delay, it’s not up to you, actually. It is what it is and you have no choice. So once you take the choices off the table I think that’s what forces people to innovate.”

Hightower added that while some people might do their best work while procrastinating, the COVID-19 situation means “nope, you got to pick one. No one’s going into the office so you have to choose. … This is not a decision you can make in the next 18 months. Buy one now and then learn how to use it. So you need that forcing function.”

Jamie Dobson, CEO of Container Solutions, concurred, adding that organizations will find this sort of timing pressure a necessary evil.

“Without that forcing function, there is no rabbit hole to go through. You will not find the chaos. Nobody does this. Nobody moves to cloud native unless there’s a good [reason],” Dobson said.

Hightower did note that while organizations are feeling this pressure to decide, it should be less a decision on how or why to innovate toward cloud native and more on just what tools they should be using to make that pivot.

“Most companies are not really struggling with the ability to innovate,” Hightower said. “A lot of the stuff that they’re going to use are tools and there was innovation that went into producing those tools. The innovative thing that we’re asking companies to do is just pick a tool, literally pick one of the 10 and as soon as you pick one then that will be the most innovative thing that some companies do in a long time. Literally picking something. Not building the thing. Not actually knowing how to actually leverage it 100%. Sometimes the biggest hurdle for most companies is just the picking part.”

Saturday, 9 May 2020

Why didn’t COVID-19 break the internet?

Just a few months into its fifty-first year, the internet has proven its flexibility and survivability. 

In the face of a rapid worldwide traffic explosion from private, public and government entities requiring employees to work from home to help curb the spread of the coronavirus, some experts were concerned the bandwidth onslaught might bring the internet to its knees. All indications are that while there have been hot spots, the internet infrastructure has held its own so far – a silver lining of sorts in a dreadful situation.

Evidence of the increased traffic is manifold:

  • Video on Verizon’s network is up 41%, VPN usage is up 65%, and there’s been a tenfold increase in collaboration tool usage, said Andrés Irlando, senior vice president and president at Verizon’s public sector division.
  • Downstream traffic has increased up to 20% and upstream traffic up to 40% during the last two months, according to Cox Communications CTO Kevin Hart. “To keep ahead of the traffic we have been executing on our long-term plan that stays 12-18 months ahead of demand curves. We’ve had to scramble to stay ahead but 99% of our nodes are healthy,” he said.
  • The DE-CIX (the Deutsche Commercial Internet Exchange) in Frankfurt set a new world record for data throughput in early March, hitting more than 9.1 terabits per second. Never before has so much data been exchanged at peak times at an internet exchange, the DE-CIX stated.

How is the internet handling this situation?

First, what does the internet look like? It consists of access links that carry traffic from individual connected devices to high-bandwidth routers, which move that traffic from its source over the best available path toward its destination using TCP/IP. The core it travels through is made up of individual high-speed fiber-optic networks that peer with each other to create the internet backbone.

The individual core networks are privately owned by Tier 1 internet service providers (ISP), giant carriers whose networks are tied together. These providers include AT&T, CenturyLink, Cogent Communications, Deutsche Telekom, GTT Communications, NTT Communications, Sprint, Tata Communications, Telecom Italia Sparkle, Telia Carrier, and Verizon. 

These backbone ISPs connect their networks at peering points, neutrally owned facilities with high-speed switches and routers that move traffic among the peers. These are often owned by third parties, sometimes non-profits, that help unify the backbone.

The backbone infrastructure relies on the fastest routers, which can deliver 100Gbps trunk speeds. Internet equipment is made by a variety of vendors including Cisco, Extreme, Huawei, Juniper, and Nokia.

Cisco said it has been analyzing traffic statistics with major carriers across Asia, Europe, and the Americas, and its data shows that typically, the most congested point in the network occurs at inter-provider peering points, Jonathan Davidson, senior vice president and general manager of Cisco’s Mass-Scale Infrastructure Group, wrote in a blog on March 26.

“Our analysis at these locations shows an increase in traffic of 10% to [41%] over normal levels. In every country [with peering points in Hong Kong, Italy and France and Russia seeing the biggest traffic jumps], traffic spiked with the decision to shut down non-essential businesses and keep people at home. Since then, traffic has remained stable or has experienced a slight uptick over the days that followed,” Davidson stated.

While the story has been positive overall, the situation hasn’t been perfect. There have been a variety of outages, according to traffic watchers at ThousandEyes, which writes weekly reports on outages among ISPs, cloud providers and conferencing services. Globally, the number of ISP outages hit a record high of 250 during the week of April 20-26, 124 of them in the U.S. That is the highest count since the end of March, though two issues – fiber cuts in CenturyLink’s network and a broad Tata Communications outage – helped push the number up. Typically, though, these problems have not been caused by networks being overwhelmed with traffic.

Resilient by design
Network planning, traffic engineering and cutting-edge equipment can take most of the credit for the internet’s ability to adjust in times of need.

“IP was built to last through any sort of disaster, and the core was built to live through almost anything,” Davidson said. “Over the years there has been a tremendous amount of infrastructure and CAPEX spending to build out this massive network. We are no longer in the days of the wild west of years ago; the internet is a critical resource and the expectations are much higher.”

Indeed, the principle of over-building capacity is one of the key reasons the internet has performed so well. “Network capacity is critical. Our network team and engineers have been able to keep the same amount of capacity or headroom on our networks during this crisis,” said Verizon’s Irlando. “We continue to augment capacity and connectivity.”

“There was some anxiety as traffic began to ramp up at the start. We’ve seen a 35% increase in internet traffic – but ultimately the networks have handled it quite well,” said Andrew Dugan, chief technology officer at CenturyLink.

Internet planning actually took into account the demands a pandemic would place on the network, Dugan said. “CenturyLink and other providers began developing pandemic plans more than a decade ago, and we knew that part of the response would rely significantly on our infrastructure,” he said.

People who build large IP networks engineer them for unexpected congestion, he said.  Dugan pointed to three factors that are helping the internet successfully support the increased volume of traffic:

  • Networks are built with redundancy to handle fiber cuts and equipment failures. This means creating capacity headroom to support sudden disasters.
  • Network monitoring helps operators anticipate where congestion is occurring, allowing them to move traffic to less congested paths.
  • ISPs have been building out networks for years to account for increasing demand, and planning specifications help prevent networks from reaching capacity.

When building fiber backbones, ISPs often bury the cabling to protect it from storms and accidents that can take down above-ground power grids. Since much of the cost of deploying the cable is in the labor to dig the trenches, most ISPs install more fiber strands than they currently need while they’re at it, according to ISP OTELCO. This so-called dark fiber can take the form of additional cables, or cables with unused fiber strands within them, that optical switches can light up quickly as the need arises.

“We had some infrastructure segments that ran hot,” Dugan said about the COVID-19 spike in traffic on CenturyLink’s network, “but we are fiber-based so we quickly were able to add capacity.” And ISPs are adding more fiber all the time, which “is key to ensuring networks can meet growing demands and provide support in times of crisis, like these,” Dugan said.

The shifting last mile
Fiber may be commonplace in the largest internet backbones, but it is much less so in the last-mile connections that reach homes. While fiber is the fastest home internet option by far, availability is still scattered in the US, according to BroadbandNow. Due to the high cost of installing fiber service directly to homes, ISP connections are still predominantly served by coax cable-TV services, even in major cities. Chicago, for example, has only 21% fiber availability as of 2020. Dallas has about 61%, and that’s actually high compared to other major metros in the US, the company stated.

When work-at-home orders came down in March, the source of internet traffic shifted dramatically. Rather than coming from business sites connected by high-bandwidth links, suddenly significant amounts of traffic were coming from private homes, dumping more traffic onto access networks during what would otherwise be off-peak hours.

It was a significant enough issue that AT&T CEO Randall Stephenson noted it during the company’s first-quarter financial call with analysts. “What we are seeing is the volumes of network usage moving out of urban and into suburban areas… and we are seeing heavy, heavy volume on the networks out of homes,” Stephenson said. Work-at-home employees, students doing online classwork and online shopping added to the load.

But as CenturyLink’s Dugan noted, the work from home activity is generally happening during the day while peak internet usage continues to occur in the evening when people generally consume video and gaming. This has helped balance out the additional internet use.

In addition, traffic engineering may be able to find less congested routes if the traffic load gets too great. When that’s not possible, providers have to look elsewhere. For example, U.S. Cellular boosted its mobile broadband capacity in six states by borrowing wireless spectrum for 60 days from other carriers who owned the licenses for those spectrum bands.

AI and automation help dodge issues
Other attributes have helped the internet’s performance as well. For example, AT&T said its artificial intelligence is helping remotely troubleshoot problems with customer equipment and identify issues before they become problems. “We’ve expedited deployments of new AI capabilities in certain markets that will allow us to balance the traffic load within a sector and across sectors to help avoid overloading specific cells and improve the experience,” AT&T stated.

Increased use of automation has also had an impact by enabling network engineers to quickly manage traffic, Dugan said. “Service providers who invested in software-defined networking prior to the coronavirus crisis may have been more responsive to changing traffic patterns than ones that are still using legacy or hybrid networks,” Dugan said.

Going forward, Verizon’s Irlando said he doesn’t think current internet traffic levels are the new normal. “No one knows the future, but we will not have 90% of America working from home,” he said.

One indication of remote-worker impact comes from a Gartner survey of 317 CFOs and finance leaders in March that said 74% of businesses will move at least 5% of their previously on-site workforce to permanently remote positions post-COVID-19.

Cox’s Hart says the situation underscores the need to continue investing in the backbone. He says his company will spend $10 billion over the next five years to build out network capacity, improve access, drive higher speeds and improve latency and security.

Internet access isn’t universal
There is one overarching problem the COVID-19 crisis is shining a light on: the digital divide. For an estimated 3.7 billion people worldwide, internet access is either unavailable or too expensive, and that is palpable when connectivity to the outside world becomes essential. 

“Internet access has become increasingly vital to our health, safety, and economic and societal survival. As cities and countries across the globe ask their citizens to stay at home, billions of us are fortunate enough to be able to heavily rely on the internet to fill the gaps in our work and life,” wrote Tae Yoo, senior vice president of Cisco Corporate Affairs in a blog about the digital divide.

“There is no silver bullet for solving this problem,” said Dan Rabinovitsj, vice president of connectivity at Facebook. “It’s going to take a lot of investment and innovation from network operators to drive costs out of the ecosystem so that they can pour more money back into the network,” he said. “Infrastructure is having its moment right now; everyone is depending on it,” Rabinovitsj said.

“The internet is moving from huge to absolutely massive. It’s moving from being critical to being essential to economies, businesses and governments,” Cisco’s Davidson said. “As a result of COVID-19, we’re getting a glimpse of what the internet of the future is today,” Davidson said.   

Saturday, 25 April 2020

Coronavirus lockdown, lack of broadband could lead to 'education breakdown'

Larissa Rosa, an English-as-a-second-language teacher at Public School 7 Samuel Stern in the East Harlem neighborhood of New York, has for the last five weeks taught remote classes from her apartment in Manhattan. But she's increasingly worried that too many of her students are being left behind as they're unable to connect to the online sessions.

The coronavirus pandemic has forced a lockdown of millions of people around the world, and New York, where schools have been shut down since March 16, has been one of the major epicenters of COVID-19 cases, with more than 145,000 confirmed cases as of Thursday afternoon. As a result, teachers and students have resorted to distance learning with online classes. 

But Rosa said at least 45 of the roughly 400 students at her school haven't logged on once. There are many reasons why students may not be showing up, such as parents working or families that are dealing with the virus, but one of the biggest issues she hears from families is a lack of broadband access. 

"These are already students who were not at grade level," Rosa said. "I just worry that they're falling further behind. And it doesn't look like anyone is trying to fix this."

Since March, when governors across the country began declaring public health emergencies and issuing shelter-in-place orders, 47 states and the District of Columbia have closed schools due to the coronavirus, according to Education Week. All told, at least 124,000 US public and private schools across the country have closed their doors, affecting 55 million students. And as many as 38.6 million students won't be going back to school until at least the fall.

Districts have scrambled to replace their in-person instruction with some form of online learning. Some schools are offering live video streams, while others post assignments online and expect students to access the content on their own.

But as the weeks drag on, it's become clear that not all students have access to broadband, exacerbating an existing equity problem in American education. The result is that millions of students throughout the country aren't getting the same educational opportunity as their peers. 

Wednesday, 15 April 2020

Coding together apart: Software development after COVID-19

Pandemics are not the “new normal” for the human race. As with practically every other type of disaster, we’ve survived them countless times in the past.

But there’s no doubt we’re living and working through an emergency situation right now. As we try to avoid exposing ourselves to the novel coronavirus, we must also prevent our working lives from stalling out completely. For most professionals, remote collaboration will be our primary fallback method until normality returns.

Tech marketing is in crisis
Working remotely can be awkward when your productivity depends on being able to share a physical space with others for at least a few hours per day or week. Tech marketers have been hit particularly hard because a large part of their annual cycle is aligned with conferences and other industry events, most of which have been canceled, postponed, or made entirely virtual.

Indeed, I’ve noticed far fewer tech product releases in the past few months than in a normal spring season. This goes against the pattern I’ve seen practically every year since I entered the IT field in the mid-1980s. Typically, a burst of launches grabs everyone’s attention from late February through early June, until an equal or larger batch of vendor announcements in the fall takes the spotlight.

Right now in the midst of the global COVID-19 emergency, it’s difficult to get any new product launch noticed unless your new product has a clear role in helping humanity cope with the pandemic. However, those sorts of offerings pretty much by definition have a short shelf life and will almost certainly be forgotten or discarded when the emergency wanes in the next few months.

Software developers have embraced distance coding
Though tech marketing seems to have ground to a halt, software developers aren’t letting sheltering in place crimp their productivity. Many software vendors I’ve spoken to during the past few months say that their locked-down coders are working as hard as ever. If anything, this current crisis may be the tipping point in the advent of a new normal for software development practices.

Physical distancing spares coders from wasting their time in pointless meetings and can make them ever more effective multitaskers. If work-from-home coders prove themselves to be just as effective as they were in shared offices, their employers may let them continue when the crisis subsides. After all, office space is expensive, and needing less of it is a great way to keep overhead low.

In a practical sense, programming teams rarely need to occupy the same physical office as long as they can hammer out code, test it thoroughly, and deploy it in a devops pipeline. But programming is a creative human endeavor, and there are often more face-to-face meetings and conversations in coding projects than people realize.

While they fend off cabin fever, programmers will have to find the right set of collaboration tools to suit their needs. They’ll need to look beyond Zoom, Slack, and Microsoft Teams, which have received more than their fair share of attention in the trade and popular press in the past month. Suddenly back in vogue, these collaborative software tools were not designed to facilitate structured interactions among coders working on common projects.

Live code collaboration comes to devops
The opportunity for live, real-time collaboration is an obvious advantage of in-person team arrangements, though its importance is debatable in the modern world of virtual collaboration.

If today’s work-from-home coders need strong code collaboration tools, there are many on the market. However, only a handful provide the strong, real-time collaboration one would enjoy in a shared brick-and-mortar office. For a good roundup on the live collaboration features in today’s leading coding tools, check out Serdar Yegulalp’s recent InfoWorld article in which he dissects such offerings as AWS Cloud9, Codeanywhere, CodeSandbox, Codeshare, Floobits, Teletype, and Microsoft’s Visual Studio Live Share.

Available as web-based services or add-ons to existing editors, these tools enable real-time sharing and collaboration on cloud-hosted coding projects. Typically, users can share project environments with multiple team members. Users can edit files together in real time, invite others to join them in active tabs, and follow them between tabs as they switch files.

Typically, coders can watch each other’s typing, as the tools often provide visual cues that indicate who wrote which lines of code. Many also offer a text chat and/or video chat pane within the development environment. Users can often share running cloud-hosted web application servers with each other.

Just as important, users can often share out workspaces that use various repositories for source control and project governance. This is an absolutely essential feature for development teams who need their live code collaboration tooling to plug into their enterprise devops pipelines. More often than not, remote coding teams will rely on public and private Git repositories as the pivot points in their collaborative workflows.

In the post-pandemic days to come, we’ll probably recognize that this work-from-home crisis tipped enterprise development practices more firmly toward the new paradigm being called “GitOps.” Under GitOps, devops teams store and manage every application artifact in a Git repository, such as GitHub. This generally includes all policies, code, configuration, and events that are integral to an application’s design, as well as machine learning models that are vital to deployed artificial intelligence applications.
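
At its core, GitOps reduces to a reconcile loop: an agent repeatedly pulls the repository that holds the desired state and applies whatever changed. The sketch below is a bare-bones illustration of that loop under stated assumptions; the repository URL is hypothetical, and apply_desired_state() is a stand-in for whatever deployment tool a team actually uses (in a Kubernetes shop, typically an operator such as Argo CD or Flux driving kubectl apply).

    import subprocess, time

    REPO = "https://example.com/acme/app-config.git"  # hypothetical config repo
    CLONE_DIR = "/tmp/app-config"

    def sync_repo():
        """Clone on first run, then fast-forward to the latest committed state."""
        probe = subprocess.run(["git", "-C", CLONE_DIR, "rev-parse"],
                               capture_output=True)
        if probe.returncode != 0:
            subprocess.run(["git", "clone", REPO, CLONE_DIR], check=True)
        else:
            subprocess.run(["git", "-C", CLONE_DIR, "pull", "--ff-only"], check=True)

    def apply_desired_state(path):
        """Stand-in: a real agent diffs the manifests in Git against the live
        system and applies the difference."""
        print(f"reconciling running system against manifests in {path}")

    while True:  # the reconcile loop at the heart of GitOps
        sync_repo()
        apply_desired_state(CLONE_DIR)
        time.sleep(60)

The design point is that Git, not a human operator, is the source of truth: rolling back a deployment becomes reverting a commit, and every change to the running system has an author, a review, and a history.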

Coding’s brick-and-mortar days are coming to a close
As we return to post-pandemic normality, I also expect that live code collaboration will become the norm. In our new socially distanced world, code-collaboration tools will make it possible to build and deploy any type of application without the need for two or more humans to physically co-locate.

Abetted by no-code tools, this new hermetic world of software development will enable coders to retreat to their homes or another safe place to do their work, should pandemics or other disasters make it too dangerous to venture outside.

What is Power over Ethernet (PoE)?

Power over Ethernet has become one of those checklist items many enterprises rely on to bring electricity over existing data cables to Wi-Fi access points, firewalls, IP phones and other infrastructure throughout their networks.

PoE’s use has grown substantially since the IEEE standardized it in 2003, and it will only increase in the coming years as new applications develop. In fact, the Dell’Oro Group says that PoE port shipments will total over 624 million over the next five years.

“There are a number of drivers for the current PoE technology. For example, if you look at WLAN [wireless LAN] access points, you have an increased number of [wireless-spectrum] bands and higher speeds, which require higher power,” said Sameh Boujelbene, senior research director for Ethernet switch market research at Dell’Oro. “The new generation of IP phones is adding telepresence features. If you look at surveillance cameras, you have zooming features, you have added analytics. All these new features require higher power.”

What is Power over Ethernet?
PoE is the delivery of electrical power to networked devices over the same data cabling that connects them to the LAN. This simplifies the devices themselves by eliminating the need for an electric plug and power converter, and makes it unnecessary to have separate AC electric wiring and sockets installed near each device.

In the case of replacing legacy phone systems with IP phones, the need for separate dedicated DC power cables is eliminated. When networks are expanded or reconfigured, as long as data cable is pulled to the devices, they will have power.

The original IEEE PoE standard (802.3af-2003) specifies how to deliver up to 15.4W of DC power per switch port to each device at distances up to 100 meters (328 feet) over Category 3, 5, 5e and 6 Ethernet cables. The standard sets 15.4W as the maximum but provides for only 12.95W to reach the devices, because power dissipates in the cable over distance. That loss doesn’t affect network performance of 10/100/1000Mbps Ethernet links to the devices.

Over time newer devices required more power, so a new standard, PoE+ (IEEE 802.3at), was created in 2009, bumping the maximum power to 30W with 25.5W reaching devices.

The latest standard, 802.3bt, pushes the maximum power from the source switch to 90W, with 71.3W available to devices. It is expected to be the last PoE standard, according to David Tremblay, Ethernet Alliance PoE Subcommittee chair and system architect at Aruba Networks, an HPE company.
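
Those three standards boil down to a small lookup table. The sketch below encodes the wattage figures cited above and picks the lowest-power standard whose delivered wattage covers a given device; the example device draws at the bottom are illustrative assumptions, not vendor specifications.

    # (max watts at the switch port, watts delivered after cable loss), per the
    # figures above for each IEEE PoE standard.
    POE_STANDARDS = {
        "802.3af":        (15.4, 12.95),
        "802.3at (PoE+)": (30.0, 25.5),
        "802.3bt":        (90.0, 71.3),
    }

    def pick_standard(device_watts: float) -> str:
        """Return the lowest-power standard whose delivered wattage covers the device."""
        for name, (_port_w, delivered_w) in sorted(POE_STANDARDS.items(),
                                                   key=lambda kv: kv[1][1]):
            if device_watts <= delivered_w:
                return name
        raise ValueError(f"{device_watts}W exceeds PoE limits; use dedicated power")

    print(pick_standard(6.0))   # e.g., a basic IP phone       -> 802.3af
    print(pick_standard(20.0))  # e.g., a multi-radio Wi-Fi AP -> 802.3at (PoE+)
    print(pick_standard(60.0))  # e.g., a PTZ camera or light  -> 802.3bt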

Benefits
The driving ideas behind PoE were to eliminate the need for electrical outlet installation, especially in remote or hard to reach locations. PoE also promised to:

  • Reduce deployment costs by up to $1,000 per device.
  • Reduce the need for AC power adaptors.
  • Simplify installation by letting customers employ a single Cat5/5e/6 cable for both data and power.
  • Offer customers centralized power backup and management.
  • Make it possible to repurpose copper from legacy phone networks.
  • Enable moving PoE devices without the network seeing any down time.

“Energy saving is a big part of PoE in particular, but the standard is really focused on energy efficiency as it uses all 4 pairs of [wires in] Ethernet Cat 5 cabling whereas previous versions of the standard used two,” Tremblay said.

The latest standard maintains a power-signature level that allows lighting and IoT applications to be powered with PoE while delivering acceptable standby performance when needed.

Another benefit, Tremblay said, is that PoE in combination with analytics software can let facilities-management teams determine what areas of buildings are unoccupied and save electricity by remotely turning off lights and HVAC devices.

An important and growing benefit of PoE is in deploying Wi-Fi access points. These devices are often placed in locations where it would be difficult to extend traditional electric lines, such as behind ceiling panels, Boujelbene said. 

The growth of wireless in buildings, offices and places like sports arenas fuels the need for PoE, Tremblay said. “PoE makes wireless rollouts so much more tangible.”

PoE and IoT
Using PoE in wireless rollouts may be the technology’s primary application, but many think it will find a home in the internet of things, where wired IoT devices can receive power from their network connection.

Versa Technology wrote a blog post about the use of PoE and IoT by the city of San Diego, Calif., which is using Ethernet cabling to deliver power to thousands of interconnected LED streetlights that are part of the city’s IoT network.

Such lighting systems have low power requirements, making them cheap to run. Because the PoE streetlights are integrated with the city’s IoT network, they can be monitored and controlled remotely, and the smart lamps are fitted with motion sensors that conserve energy by adjusting lighting to the needs of each space. The system saves the city $250,000 or more per year, Versa stated.

IP security cameras, which are often placed in difficult-to-access locations, are another key PoE application target.

The big challenge: Interoperability
The single greatest challenge for PoE is assuring interoperability.

The Ethernet Alliance’s Power over Ethernet (PoE) Certification Program can help enable faster PoE installations and avoid interoperability issues, Tremblay said. Ethernet vendors including Analog Devices, Cisco, HPE, Huawei, Microsemi, and Texas Instruments are part of the certification program.

But as new classes of devices are developed, industry players need to forge new partnerships with companies offering certified equipment, the Dell’Oro Group said. “With the diversity of applications come interoperability problems, which dictate the need for testing and certification,” Boujelbene said.

Certified products range from component-level evaluation boards, to power-sourcing enterprise switches, to midspan PoE power sources. Details of certified products are available via the program’s public registry.