Sunday, 4 October 2020

IBM to open-source space junk collision avoidance

Space is already a pretty messy place, with tens of thousands of manmade objects, the majority of them unpowered, hurtling around the planet. As space exploration ramps up on the heels of privatization and aided by miniaturization, the debris field is only going to grow.

That's a pretty big problem. These man-made objects, known as anthropogenic space objects (ASOs), travel at speeds of up to 8,000 meters per second, meaning a collision involving even a tiny fragment and a satellite or crewed vehicle could be devastating.

All of this makes it extremely important for space agencies and private space companies to be able to anticipate the trajectories of manmade objects long before launch and to plan accordingly. Unfortunately, that's not very easy to do, and as the quantity of space junk increases, it's only going to get more difficult.

Enter the Space Situational Awareness (SSA) project, an open-source venture between IBM and Dr. Moriba Jah at the University of Texas at Austin to determine where ASOs are (orbit determination) and where they will be in the future (orbit prediction).

Some explanation is required here. Current methods for orbit prediction rely on physics-based models that in turn require extremely precise information about ASOs. The problem is that the location data available for ASOs comes from ground-based sensors and tends to be imperfect. Factors like space weather further complicate the picture.

The idea behind SSA is that machine learning can create models that learn when physical models incorrectly predict an ASO's future location. Physics models, according to this strategy, are plenty good when it comes to orbital dynamics, but to maximize effectiveness they need to learn how and when they get it wrong and to account for that variability.

The data used for the project comes from United States Strategic Command (USSTRATCOM) via the space-track.org website. The team used the IBM Cloud Bare Metal Server with 16 Intel Xeon Processors, 120 GB RAM, and two Nvidia Tesla V100 GPUs (each with 16GB of RAM) to run the physical models to predict the orbits of all ASOs in low earth orbit and train ML models to learn the physics model error. As a result, the team was able to predict the future orbits of the ASOs. 
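
The error-correction idea can be illustrated with a minimal sketch (synthetic data and hypothetical feature names, not the SSA project's actual code): a physics propagator produces a predicted position, and a regression model is trained on historical tracking data to predict the propagator's error, which is then added back to future predictions.

```python
# Minimal sketch of learning a physics model's prediction error
# (synthetic data, hypothetical feature names; not the SSA project's code).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Each row stands in for an ASO at prediction time: a few orbital elements,
# a space-weather index, and the prediction horizon in hours.
X = rng.normal(size=(5000, 6))

# Pretend physics-model output and later-observed along-track position (km).
physics_pred = 7000 + 50 * X[:, 0]
observed = physics_pred + 5 * X[:, 1] - 2 * X[:, 5] + rng.normal(scale=0.5, size=5000)

# The ML target is the physics model's error, not the position itself.
residual = observed - physics_pred
error_model = GradientBoostingRegressor().fit(X, residual)

# At prediction time, correct the physics output with the learned error.
corrected = physics_pred + error_model.predict(X)
print("physics-only RMSE :", np.sqrt(np.mean((observed - physics_pred) ** 2)))
print("ML-corrected RMSE :", np.sqrt(np.mean((observed - corrected) ** 2)))
```

The design choice mirrors the strategy described above: the physics model stays responsible for orbital dynamics, while the learned model only accounts for the systematic ways in which it tends to be wrong.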

https://www.zdnet.com/

Monday, 28 September 2020

Microsoft's Underwater Data Center Makes Environmental Strides

At the beginning of this summer, with no fanfare and little publicity, Redmond, Wash.-based Microsoft hauled its shipping-container-sized underwater data center, containing 864 servers, up from a depth of 117 feet on the seabed off the coast of the Orkney Islands, to the northeast of Scotland.

Microsoft’s Project Natick

The experiment, called Project Natick, aimed to find out whether it would be economical and better for the environment to place data centers under water. The first conclusions from the project are starting to trickle in and they appear to be positive.

The retrieval of the center represented the final phase of a years-long effort, which was itself Phase 2 of a wider project that started in 2015 off the west coast of America where the company sank a data center to the seabed for 105 days to find out if computing was possible underwater given the extreme environment.

The team hypothesized that a sealed container on the ocean floor could provide ways to improve the overall reliability of data centers. On land, corrosion from oxygen and humidity, temperature fluctuations and bumps and jostles from people who replace broken components are all variables that can contribute to equipment failure.

The Northern Isles experiment, according to the company, has confirmed its hypothesis, which could have major implications for data centers on land.

Lessons learned from Project Natick are also informing Microsoft’s data center sustainability strategy around energy, waste and water, said Ben Cutler, a project manager in Microsoft’s Special Projects research group who leads Project Natick.

What is more, he added, the proven reliability of underwater data centers has prompted discussions with a Microsoft team in Azure that’s looking to serve customers who need to deploy and operate tactical and critical data centers anywhere in the world. “We are populating the globe with edge devices, large and small,” said William Chappell, vice president of mission systems for Azure, in a statement about the project. “To learn how to make data centers reliable enough not to need human touch is a dream of ours.”

The Underwater Advantage

Even without really understanding the science behind data centers, it is easy to see the attraction. More than half the world’s population lives within 120 miles of a coast, and the surrounding seawater keeps the centers cool, making for energy-efficient facilities that can use heat-exchange plumbing in much the same way submarines do.

There is also the advantage of geography. As the location of data becomes increasingly important to regulators, keeping data centers inside geographical boundaries will be easier to do off the coast than on land. And placing them in the waters off big cities would make information retrieval, web use, and video streaming quicker. “We are now at the point of trying to harness what we have done as opposed to feeling the need to go and prove out some more,” Cutler said in a statement. “We have done what we need to do. Natick is a key building block for the company to use if it is appropriate.”

Early conversations are already taking place about the potential future of Project Natick centered on how to scale up underwater data centers to power the full suite of Microsoft Azure cloud services, which may require linking together a dozen or more vessels the size of the Northern Isles. But it is a major step forward and could see more centers under the sea.

According to an Energy Innovation paper published earlier this year, servers and cooling systems account for the greatest shares of direct electricity use in data centers, at roughly 43% of consumed power each, followed by storage drives and network devices.
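
As a rough back-of-envelope reading of those shares (an illustration based on the quoted percentages, not a figure from the paper): if servers, storage drives, and network devices together make up the IT load, the cooling share implies a power usage effectiveness (PUE) well above 1.

```python
# Back-of-envelope PUE estimate from the quoted shares (illustrative assumptions only).
servers, cooling = 0.43, 0.43
storage_and_network = 1.0 - servers - cooling    # remaining ~14% of facility power

it_load = servers + storage_and_network          # assume these constitute the IT load
pue = 1.0 / it_load                              # total facility power / IT power, with total = 1
print(f"Implied PUE ~ {pue:.2f}")                # ~1.75
```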

Energy Innovation is a San Francisco-based nonpartisan energy and environmental policy firm. It delivers research and original analysis to policymakers to help them make informed choices on energy policy.

The substantial electricity use of data centers also gives rise to concerns over their carbon dioxide emissions. It is not yet possible to estimate total CO2 emissions accurately, according to EI, due to a lack of data on the locations of the vast majority of global data centers and the emissions intensity of the electricity they consume, but the total is likely to be substantial. This is not the first time companies have tried to make data centers more energy efficient, however.

Increasing Demand for Energy


Randy Cozzens is EVP and head of energy, utilities, and chemicals at New York City-based Capgemini North America. He points out that resolving this is a key issue for enterprises, given that they play an integral role in the evolving digital society, with consumers increasingly demanding an always-on, low-latency internet experience.

The increase in demand over the past 20 years has naturally led to an increase in data centers and the added energy they use to operate. However, active initiatives have been implemented to make data centers more energy efficient, he said. These include energy-monitoring software, power-saving modes, server cooling systems, and virtualization. In addition, the rapid shift of many organizations' data to the cloud will potentially reduce the world's overall energy use and emissions from data centers. “Cloud data centers are typically built to use less energy than standard data centers and are incentivized to reduce energy outputs by operating on solar or wind power,” he said.

“As IT continues to prioritize sustainable initiatives within its data centers, and as pivots to the cloud increase across industry sectors, the amount of energy being used by the world's data centers has the potential to decrease and become less of a threat to the environment.”

Competitive Edge

The problem is only going to get worse too, Akram TariqKhan of India-based ecommerce site YourLibaas told us. With demand for cloud computing increasing at an exponential pace, he argues, the consequences for the environment could be disastrous. “As a heavy user of cloud servers, I can share how we end up buying excessive capacity only because the fierce competition has ended up the industry offering dirt-cheap prices,” he said.

“This leads to a spiral effect with unused capacity leading to an increased negative impact on the environment. Amazon's spot instances offer unused capacity at cheaper prices for temporary projects attempting to resolve this issue.”

Data Center Advantage
Tina Nikolovska, founder of TeamStage, points to four ways that data centers can help the environment, ways that are mentioned every time this subject comes up. Broadly, they can be summarized as reducing the carbon footprint, one of the critical arguments in favor of digital workplaces.

1. No Commuting Carbon Footprint
There are enormous amounts of money and gas spent on commuting — whether using public transportation or self-owned cars. With less commuting, less pollution gets released into the atmosphere. If we are speaking about the trend of digitalization of workspaces, we can significantly reduce vehicle-produced gas emissions.

2. Digitalized Data Means No Paper
The only logical trend to follow remote work is the no-paper policy. Keeping data in print is unnecessary and obsolete, with the expanding cloud storage market and improved cybersecurity. Again, the global effect of paper reduction can be huge for ecology.

3. Less General Waste
General waste, including the single-use items that offices go through in abundance, like coffee cups, water cups, paper towels, and straws, can be dramatically reduced by adopting the digital workspace. But it is about more than general waste; it also covers the work clothes, shoes, make-up, and grooming products that are generously consumed, with markets grossing billions of dollars on (often non-ethical) production. This is not to say that everyone will lock themselves behind their doors and reappear in a few years looking like cavemen (and cavewomen).

4. Working From Home
Working from home creates another type of consumer, one who spends less on small, single-use items and pleasures, and more on long-term comfort items. Think about the millions of dollars spent on coffees and lunches, and how most people are now cooking their own food and preparing home-made coffee. The trend seems to be to invest in a better coffee machine (one that lasts for years) and to drink coffee from a regular, washable cup instead of a plastic one. Multiply that by millions of workers around the globe, and the result is significantly reduced waste.

Digital Workplace Data Centers
So why does any of this matter? Boston-based Brightcove’s CEO, Jeff Ray, believes that this could be a tipping point issue for technology. “We are living through a pivotal moment in history where the world is undergoing a rapid digital transformation. 2020 has become video’s evolutionary moment and streaming over OTT devices is one area where we will continue to see growth," he said. "Consumers and businesses — many of whom are facing economic challenges due to the pandemic — are seeing the value of cutting the cable cord and subscribing to streaming services instead." That means data, and lots of it.

If video is increasingly important for companies, as it is the main way to connect with their audiences right now, then not having the capability to store the data it generates will hinder the development of digital workplaces. Without high-touch moments with clients, companies will have to be more creative and interact in new ways to build the future.

Even with the return of live sports and greater easing of restrictions in Q3, there is no going back to our previous consumption and working habits. This digital transformation will result in an evolution of how we do business and video will become an inherent part of workflows now and in the future.

"Forward thinking companies know the future of work involves a hybrid model," continued Ray. "Physical events will have to have a digital experience to reach attendees that are unable to attend in person. Hybrid events increase ROI and expand reach. It is no longer an either or, it is both. Companies have seen the vast opportunities ahead with virtual connection, there is no going back. If companies are not embracing video now, in the future they will have bigger problems than today. Don’t get left behind.”

https://www.cmswire.com/

Tuesday, 22 September 2020

Edge computing: The next generation of innovation

Like other hot new areas of enterprise tech, edge computing is a broad architectural concept rather than a specific set of solutions. Primarily, edge computing is applied to low-latency situations where compute power must be close to the action, whether that activity is industrial IoT robots flinging widgets or sensors continuously taking the temperature of vaccines in production. The research firm Frost & Sullivan predicts that by 2022, 90 percent of industrial enterprises will employ edge computing.

Edge computing is a form of distributed computing that extends beyond the data center mothership. When you think about it, how else should enterprises invest in the future? Yes, we know that a big chunk of that investment will go to the big public cloud providers – but hardware and software that enterprises own and operate isn’t going away. So why not physically distribute it where the business needs it most?

Augmenting the operational systems of a company’s business on location – where manufacturing or healthcare or logistical operations reside – using the awesome power of modern servers can deliver all kinds of business value. Typically, edge computing nodes collect gobs of data from instrumented operational systems, process it, and send only the results to the mothership, vastly reducing data transmission costs. Embedded in those results are opportunities for process improvement, supply chain optimization, predictive analytics, and more.
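
A minimal sketch of that pattern follows (the sensor source and central endpoint are hypothetical placeholders, not any particular vendor's API): the edge node summarizes raw readings locally and ships only the aggregate upstream.

```python
# Minimal edge-aggregation sketch: process raw readings locally, send only the summary.
# The sensor source and central endpoint below are hypothetical placeholders.
import json
import statistics
import urllib.request

def read_temperatures():
    """Stand-in for polling local instrumented equipment."""
    return [2.1, 2.3, 2.2, 2.4, 2.2]  # e.g., vaccine storage temperatures in degrees C

readings = read_temperatures()
summary = {
    "sensor": "cold-chain-unit-7",
    "count": len(readings),
    "mean": statistics.mean(readings),
    "max": max(readings),
}

# Only the few bytes of summary cross the network, not the raw stream.
req = urllib.request.Request(
    "https://central.example.com/ingest",   # hypothetical mothership endpoint
    data=json.dumps(summary).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment when a real endpoint exists
```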

CIO, Computerworld, CSO, InfoWorld, and Network World have joined forces to examine edge computing from five different perspectives. These articles help demonstrate that this emerging, complex area is attracting some of the most intriguing new thinking and technology development today.

The many sides of the edge

Edge computing may be relatively new on the scene, but it’s already having a transformational impact. In “4 essential edge-computing use cases,” Network World’s Ann Bednarz unpacks four examples that highlight the immediate, practical benefits of edge computing, beginning with an activity about as old-school as it gets: freight train inspection. Automation via digital cameras and onsite image processing not only vastly reduces inspection time and cost, but also helps improve safety by enabling problems to be identified faster. Bednarz goes on to pinpoint edge computing benefits in the hotel, retail, and mining industries.

CIO contributing editor Stacy Collett trains her sights on the gulf between IT and those in OT (operational technology) who concern themselves with core, industry-specific systems – and how best to bridge that gap. Her article “Edge computing’s epic turf war” illustrates that improving communication between IT and OT, and in some cases forming hybrid IT/OT groups, can eliminate redundancies and spark creative new initiatives.

One frequent objection on the OT side of the house is that IoT and edge computing expose industrial systems to unprecedented risk of malicious attack. CSO contributing writer Bob Violino addresses that problem in “Securing the edge: 5 best practices.” One key recommendation is to implement zero trust security, which mandates persistent authentication and micro-segmentation, so a successful attack in one part of an organization can be isolated rather than spreading to critical systems.
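
The "authenticate every request" part of zero trust can be illustrated with a generic sketch (not the article's specific guidance): each call is verified with a signed token instead of being trusted because it originates inside the plant network.

```python
# Sketch of persistent authentication: verify a signed token on every request,
# rather than trusting traffic because it comes from inside the network.
import hashlib
import hmac

SHARED_KEY = b"rotate-me-regularly"   # in practice, per-device keys from a secrets manager

def sign(message: bytes) -> str:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def handle_request(message: bytes, signature: str) -> bool:
    # Constant-time comparison; reject anything that fails, regardless of source.
    if not hmac.compare_digest(sign(message), signature):
        return False
    # ...process the command within this one, narrowly scoped segment only...
    return True

# Usage: the sender signs, the edge service verifies on every single call.
msg = b"set_conveyor_speed=0.8"
assert handle_request(msg, sign(msg))
```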

Computerworld contributing writer Keith Shaw examines the role of 5G in “Edge computing and 5G give business apps a boost.” One of 5G’s big selling points is its low latency, a useful attribute for connecting IoT devices. But as IDC research director Dave McCarthy explains in the article, the reduction in latency won’t help you when you’re connecting to a far-flung data center. On the other hand, if you deploy “edge computing into the 5G network, it minimizes this physical distance, greatly improving response times,” he says.
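
The physical-distance point can be made concrete with simple speed-of-light arithmetic (illustrative distances, not figures from the article): even a perfect network cannot beat propagation delay to a far-away region.

```python
# Round-trip propagation delay at roughly the speed of light in fiber.
# Distances are illustrative, not from the article.
FIBER_KM_PER_MS = 200.0   # ~200 km of fiber per millisecond, one way

for label, one_way_km in [("nearby 5G edge site", 20), ("far-flung cloud region", 2000)]:
    rtt_ms = 2 * one_way_km / FIBER_KM_PER_MS
    print(f"{label}: ~{rtt_ms:.1f} ms round trip before any processing")
# nearby 5G edge site: ~0.2 ms; far-flung cloud region: ~20.0 ms
```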

In case you’re wondering, the hyperscale cloud providers aren’t taking this edge stuff lying down. In “Amazon, Google, and Microsoft take their clouds to the edge,” InfoWorld contributing editor Isaac Sacolick digs into the early-stage edge computing offerings now available from the big three, including mini-clouds deployed in various localities as well as their existing on-prem offerings (such as AWS Outposts or Azure Stack) that are fully managed by the provider. Sacolick writes that “the unique benefit public cloud edge computing offers is the ability to extend underlying cloud architecture and services.”

The crazy variety of edge computing offerings and use cases covers such a wide range, it begins to sound like, well, computing. As many have noted, the “big cloud” model is reminiscent of the old mainframe days, when customers tapped into centralized compute and storage through terminals rather than browsers. Edge computing recognizes that not everything can or should be centralized. And the inventive variations on that simple notion are playing a key role in shaping the next generation of computing.

https://www.networkworld.com/

Sunday, 30 August 2020

Open standards vs. open source: A basic explanation

What are open standards, exactly? You’ve probably heard the term thrown around, but why does it matter to your business? How does it relate to open source? What’s the difference?

Take a common example. Have you ever noticed that Wi-Fi seems to work the same with any router, phone or computer? We tend to take these types of standards for granted, but they bring huge benefits to our daily lives.

Imagine if there were no standards like Wi-Fi. Every business might have its own form of wireless technology. If your favorite coffee shop had a router made by Company X, and you owned a computer made by Company Y, you might have to find another coffee shop to check your email.

Even if each business had a functioning form of wireless internet, a lack of standards would make interoperability nearly impossible. Customers of every company would suffer.

Have you ever wondered how competing businesses all across the world somehow converge on one format for these things?

The answer is often open standards.

What are open standards?

An open standard is a standard that is freely available for adoption, implementation and updates. A few famous examples of open standards are XML, SQL and HTML.

Businesses within an industry share open standards because this allows them to bring huge value to both themselves and to customers. Standards are often jointly managed by a foundation of stakeholders. There are typically rules about what kind of adjustments or updates users can make, to ensure that the standard maintains interoperability and quality.
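
A small illustration of why that interoperability matters (a generic sketch using XML, one of the open standards named above; the data is invented): a document produced by one vendor's tool can be read by any standards-compliant parser, here Python's built-in one.

```python
# Because XML is an open standard, a document exported by one tool can be read
# by any compliant parser; this sketch uses Python's standard-library parser.
import xml.etree.ElementTree as ET

exported = """<orders>
  <order id="1001"><customer>Acme</customer><total currency="USD">42.50</total></order>
  <order id="1002"><customer>Globex</customer><total currency="USD">17.00</total></order>
</orders>"""

root = ET.fromstring(exported)
for order in root.findall("order"):
    customer = order.findtext("customer")
    total = float(order.findtext("total"))
    print(order.get("id"), customer, total)
```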

What is open source?

What is open source, then? The term may sound similar to open standards, but in reality it is fundamentally different.

At its core, open source code is created to be freely available, and most licenses allow for the redistribution and modification of the code by anyone, anywhere, with attribution. In many cases the license further dictates that any updates from contributors will also become free and open to the community. This allows a decentralized community of developers to collaborate on a project and jointly benefit from the resulting software.

How open standards and open source help prevent vendor lock-in

Both open source and open standards can help protect clients from vendor lock-in, but they do it in different ways.

Let’s start with an example of an open standard. A business might buy a PDF reader and editor from a vendor. Over time, the team could create a huge number of PDF documents. Maybe these documents become a valuable asset for the company. Since the PDF format is an open standard, the business would have no problem switching from one PDF software to another. There is no concern that it would be unable to access its documents. Even if the PDF reader software isn’t open source, the PDF format is an open standard. Everyone uses this format.

Now, let’s instead take a look at the benefits of open source. Imagine that a business had spent millions of dollars writing internal software code for a proprietary operating system. That business would no longer have the option of changing vendors. It would be stuck with that operating system, unless it wanted to make a significant investment re-writing that code to run on a different system.

Open source software could have prevented that issue. Because open source software does not belong to any particular business, clients are not locked in to any particular provider.

In both of these examples, the client would be able to avoid vendor lock-in. In one case this is because a piece of closed software followed a common open standard. In the other case, it is because the software itself belonged to an open source community.

While these are fundamentally different things, both help foster innovation while also providing more options to customers. 

https://www.ibm.com/

Friday, 21 August 2020

Importance of Software Engineering

Software engineering is the study and practice of engineering as applied to building, designing, developing, maintaining, and retiring software. There are different areas of software engineering, and it serves many functions throughout the application lifecycle. Effective software engineering requires engineers who are educated in software engineering best practices, disciplined, and cognizant of how their company develops software, the role the software will fulfill, and how it will be maintained.

Software engineering is entering a new era, as CIOs and digital leaders now understand the importance of software engineering and the impact – both good and bad – it can have on your bottom line.

Vendors, IT staff, and even departments outside of IT need to be aware that software engineering is increasing in its impact – it is affecting almost all aspects of your daily business.

The Importance of Software Engineers

Software engineers of all kinds, full-time staff, vendors, contracted workers, or part-time workers, are important members of the IT community.

What do software engineers do? Software engineers apply the principles of software engineering to the design, development, maintenance, testing, and evaluation of software. There is much discussion about the degree of education and/or certification that should be required of software engineers.

According to the 2018 Stack Overflow survey, software engineers are lifelong learners; almost 90% of all developers say they have taught themselves a new language, framework, or tool outside of their formal education.

Software engineers are well versed in the software development process, though they typically need input from IT leaders regarding software requirements and what the end result needs to be. Regardless of formal education, all software engineers should work within a specific set of best practices for software engineering so that others can take on some of the work at the same time.

Software engineering almost always includes a vast amount of teamwork. Designers, writers, coders, testers, various team members, and the entire IT team need to understand the code.

Software engineers should understand how to work with several common computer languages, including Visual Basic, Python, Java, C, and C++. According to Stack Overflow, for the sixth year in a row, JavaScript is the most commonly used programming language. Python has risen in the ranks, surpassing C# this year, much like it surpassed PHP last year. Python has a solid claim to being the fastest-growing major programming language.

Software engineering is important because specific software is needed in almost every industry, in every business, and for every function. It becomes more important as time goes on – if something breaks within your application portfolio, a quick, efficient, and effective fix needs to happen as soon as possible.

Whatever you need software engineering to do – it is something that is vitally important and that importance just keeps growing. When you work with software engineers, you need to have a check and balance system to see if they are living up to their requirements and meeting KPIs.

Thursday, 20 August 2020

How Natural Language Processing Is Changing Data Analytics

Natural language processing (NLP) is the process by which computers understand and process natural human language. If you use Google Search, Alexa, Siri, or Google Assistant, you’ve already seen it at work. The advantage of NLP is that it allows users to make queries without first having to translate them into “computer-speak.”

NLP has the potential to make both business and consumer applications easier to use. Software developers are already incorporating it in more applications than ever, including machine translation, speech recognition, sentiment analysis, chatbots, market intelligence, text classification, and spell checking.

This technology can be especially useful within data analytics, which analyzes data to help business leaders, researchers, and others gain insights that assist them in making effective decisions. As we’ll see below, NLP can support data analytics efforts in multiple ways, such as solving major global problems and helping more people, even those not trained in data processing, use these systems.

Managing Big Data

With the help of NLP, users can analyze more data than ever, including for critical processes like medical research. This technology is especially important now, as researchers attempt to find a vaccine for COVID-19.

In a recent article, the World Economic Forum (WEF) points out that NLP can help researchers tackle COVID-19 by going through vast amounts of data that would be impossible for humans to analyze. “Machines can find, evaluate, and summarise the tens of thousands of research papers on the new coronavirus, to which thousands are added every week….” In addition, this technology can help track the spread of the virus by detecting new outbreaks.

According to the WEF article, NLP can aid the research process when data analysts “[train] machines to analyze a user question in a full sentence, then to read the tens of thousands of scholarly articles in the database, rank them and generate answer snippets and summaries.” For example, a researcher might ask the question, “Is COVID-19 seasonal?” and the system reviews the data and returns relevant responses.
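
A drastically simplified sketch of the "rank the papers for a question" step follows (using generic TF-IDF retrieval and invented example texts, not the actual systems described by the WEF): score each document against the question and return the best matches.

```python
# Toy version of question-driven document ranking (TF-IDF retrieval, illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = [
    "Transmission of the coronavirus shows seasonal variation in temperate climates.",
    "A study of protein structure in unrelated bacteriophages.",
    "Humidity and temperature appear to influence seasonal spread of respiratory viruses.",
]
question = "Is COVID-19 seasonal?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(papers)
query_vector = vectorizer.transform([question])

scores = cosine_similarity(query_vector, doc_vectors).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {papers[idx]}")
```

Production research tools add neural language models on top of retrieval to generate the answer snippets themselves, but the retrieve-and-rank step above is the common starting point.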

Solving Problems

In addition to pressing health problems, NLP used in conjunction with artificial intelligence (AI) can help professionals solve other global challenges, such as clean energy, global hunger, improving education, and natural disasters. For example, according to a Council Post appearing on Forbes, “Huge companies like Google are setting their sights on flood prevention, utilizing AI to predetermine areas of risk and notify people in impacted areas.”

Enabling More Professionals

According to an InformationWeek article, “With natural language search capabilities, users don’t have to understand SQL or Boolean search, so the act of searching is easier.” As the quality of insights depends on knowing how to “ask the right questions,” this skill may soon become essential for business operators, managers, and administrative staff.

For example, anyone within a company could use NLP to query a BI system with a question like, “What was the inventory turnover rate last fiscal year compared to this fiscal year?” The system would convert each phrase to numeric information, search for the needed data, and return it in natural language format. Such queries allow any employee in any department to gain critical insights to help them make informed decisions.
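
What happens behind such a query can be sketched in a highly simplified form (hypothetical table and column names; real NLP-to-query systems use far more sophisticated language parsing): the natural-language question is mapped to a structured computation over the data.

```python
# Toy mapping from a natural-language question to a structured computation.
# Table and column names are hypothetical; real systems are far richer.
import pandas as pd

sales = pd.DataFrame({
    "fiscal_year": [2019, 2019, 2020, 2020],
    "cost_of_goods_sold": [500_000, 520_000, 610_000, 640_000],
    "avg_inventory": [125_000, 130_000, 120_000, 118_000],
})

question = "What was the inventory turnover rate last fiscal year compared to this fiscal year?"

if "inventory turnover" in question.lower():
    # Inventory turnover = cost of goods sold / average inventory.
    yearly = sales.groupby("fiscal_year").sum()
    turnover = yearly["cost_of_goods_sold"] / yearly["avg_inventory"]
    print(turnover)  # one figure per fiscal year; the BI layer would phrase this in plain language
```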

Creating a Data-Driven Culture

In the past, business intelligence (BI) powered by data analytics required trained data professionals to correctly input queries and understand results. But NLP is changing that dynamic, resulting in what some experts are calling “data democratization”: the ability for more people to have access to data sets formerly reserved only for those with the advanced skills needed to interpret it.

The more people within a company who know how to gather insights based on data, the more that company can benefit from a data-driven culture, which is one that relies on hard evidence rather than guesswork, observation, or theories to make decisions. Such a culture can be nurtured in any industry, including healthcare, manufacturing, finance, retail, or logistics.

For example, a retail marketing manager might want to determine the demographics of customers who spend the most per purchase and target those customers with special offers or loyalty rewards. A manufacturing shift leader might want to test different methods within its operations to determine which one yields the greatest efficiency. With NLP, the commands needed to get this information can be executed by anyone in the business.

In Summary

NLP is not yet widespread. According to the InformationWeek article, “A few BI and analytics vendors are offering NLP capabilities but they're in the minority for now. More will likely enter the market soon to stay competitive.”

As it becomes more prevalent, NLP will enable humans to interact with computers in ways not possible before. This new type of collaboration will allow improvements in a wide variety of human endeavors, including business, philanthropy, health, and communication.

These advancements will become even more useful as computers learn to recognize context and even nonverbal human cues like body language and facial expressions. In other words, conversations with computers are likely to continue becoming more and more human.

https://www.kdnuggets.com/2020/08/natural-language-processing-changing-data-analytics.html

Monday, 10 August 2020

Software Architecture Guide

What is architecture?

People in the software world have long argued about a definition of architecture. For some it's something like the fundamental organization of a system, or the way the highest level components are wired together. My thinking on this was shaped by an email exchange with Ralph Johnson, who questioned this phrasing, arguing that there was no objective way to define what was fundamental, or high level and that a better view of architecture was the shared understanding that the expert developers have of the system design.

A second common style of definition for architecture is that it's “the design decisions that need to be made early in a project”, but Ralph complained about this too, saying that it was more like the decisions you wish you could get right early in a project.

His conclusion was that “Architecture is about the important stuff. Whatever that is”. On first blush, that sounds trite, but I find it carries a lot of richness. It means that the heart of thinking architecturally about software is to decide what is important, (i.e. what is architectural), and then expend energy on keeping those architectural elements in good condition. For a developer to become an architect, they need to be able to recognize what elements are important, recognizing what elements are likely to result in serious problems should they not be controlled.

Why does architecture matter?

Architecture is a tricky subject for the customers and users of software products - as it isn't something they immediately perceive. But a poor architecture is a major contributor to the growth of cruft - elements of the software that impede the ability of developers to understand the software. Software that contains a lot of cruft is much harder to modify, leading to features that arrive more slowly and with more defects.

This situation is counter to our usual experience. We are used to something that is "high quality" as something that costs more. For some aspects of software, such as the user-experience, this can be true. But when it comes to the architecture, and other aspects of internal quality, this relationship is reversed. High internal quality leads to faster delivery of new features, because there is less cruft to get in the way.

While it is true that we can sacrifice quality for faster delivery in the short term, before the build up of cruft has an impact, people underestimate how quickly the cruft leads to an overall slower delivery. While this isn't something that can be objectively measured, experienced developers reckon that attention to internal quality pays off in weeks not months.

https://martinfowler.com/architecture/