Saturday, 24 November 2018

DNS over HTTPS seeks to make internet use more private

Unauthorized interception of DNS traffic provides enough information to ascertain internet users’ thoughts, desires, hopes and dreams. Not only is there concern for privacy from nearby nosy neighbors, but governments and corporations could use that information to learn about individuals’ internet behavior and use it to profile them and their organizations for political purposes or to target them with ads.

Efforts like the DNS Privacy Project aim to raise awareness of this issue and provide pointers to resources to help mitigate these threats.

The IETF has been working on the problem as well. It formed the DNS PRIVate Exchange (DPRIVE) working group to define the problems and evaluate options to mitigate the security threats. A related IETF effort has been to create methods whereby DNS can be carried over HTTP (DOH). Even though DNS queries could take place over HTTP in the clear, that wouldn’t solve the unencrypted privacy issue. Therefore, the protocol development has focused on DNS Queries over HTTPS (also referred to as DOH), which was standardized as RFC 8484 in October 2018.

(While this article addresses DNS over HTTPS, the IETF’s primary published proposed standard for securing DNS traffic is “Specification for DNS over Transport Layer Security (TLS)” (DOT) (RFC 7858). Since much DNS traffic is carried in UDP messages, the IETF also published “DNS over Datagram Transport Layer Security (DTLS)” (RFC 8094). The IETF DPRIVE working group has also published “Usage Profiles for DNS over TLS and DNS over DTLS” (RFC 8310).)

How DNS over HTTPS works
DOH uses a direct connection between the end-user and the web server’s interface. Since the DNS query and response take place over a web-based HTTP interface, the response can be returned in traditional DNS wireformat or, on many services, in JSON notation. The JSON form differs from the traditional DNS query and resource record format and lends itself to simpler integration with web-based applications.

DOH could be implemented as a local proxy service running on the end-user’s computer that listens for DNS queries on TCP or UDP port 53. This local proxy service converts the DNS queries into an HTTPS connection to the DOH service. In the case of DNS over HTTPS, the connection is made using TCP port 443. (When DNS over TLS is used, TCP port 853 is employed instead.)
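
As a rough illustration of the local-proxy approach, the Python sketch below (standard library only) listens on the loopback address for plain UDP DNS queries and forwards each one, unmodified, in DNS wireformat over HTTPS to a public DOH resolver. It listens on port 5353 to avoid needing root privileges, and it assumes a DOH endpoint that accepts RFC 8484 wireformat POSTs (the cloudflare-dns.com URL is used here as an example); a production proxy would add caching, TCP support, and proper error handling.

import socket
import urllib.request

DOH_URL = "https://cloudflare-dns.com/dns-query"   # assumed DOH endpoint accepting wireformat POSTs
LISTEN_ADDR = ("127.0.0.1", 5353)                  # 5353 avoids needing root; a real proxy would use port 53

def forward_to_doh(query):
    # POST the raw DNS query bytes to the DOH server; the reply is raw DNS wireformat too.
    req = urllib.request.Request(
        DOH_URL,
        data=query,
        headers={"Content-Type": "application/dns-message",
                 "Accept": "application/dns-message"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()

def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)
    print("Listening for plain DNS queries on %s:%d" % LISTEN_ADDR)
    while True:
        query, client = sock.recvfrom(4096)         # one traditional UDP DNS query
        try:
            answer = forward_to_doh(query)          # same query, carried over HTTPS (TCP 443)
            sock.sendto(answer, client)
        except Exception as exc:
            print("DOH forward failed:", exc)

if __name__ == "__main__":
    main()

Pointing a test client at the listener (for example, dig @127.0.0.1 -p 5353 example.com) should return normal answers while the upstream traffic travels over TCP port 443.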

DOH can also be implemented in the user’s web browser. When the browser needs to resolve a new URL, it connects to the pre-configured DOH service over TCP port 443 and retrieves the response containing the resulting IP address.

DOH is of significant interest to content providers because they want to help preserve the privacy of their user and subscriber populations. Content providers desire greater control over DNS for their clients: guaranteeing that their clients are provided accurate information about IP addresses, mitigating man-in-the-middle attacks, and providing a faster service regardless of the client’s operating system or location.

The terms DNS over HTTP (DOH), DNS over HTTPS (DOH), and DNS over TLS (DOT) are often used interchangeably, but it is important to distinguish among HTTP, HTTPS, and TLS underlying this web-based DNS function.

While DOH can contribute to internet privacy, it’s also important to recognize there are other ways to address the problem.

DOH alternatives
In the interest of completeness, other methods that function like DOH have also been proposed and are in use. For example, DNS over HTTP can also use HTTP/2. HTTP/2 is an optimized version of HTTP that allows for multiplexed streams for simultaneous fetches, request prioritization, header compression and server push. In this case, the web resolver could use the HTTP/2 Server Push method to push DNS updates to the client, proactively notifying clients that an update has occurred. This could be more immediate than the historical approach of waiting for the DNS record’s TTL to expire.

DNS can also work over the QUIC protocol. Quick UDP Internet Connections (QUIC) is an optimized transport-layer protocol that provides the reliability of TCP with multiplexed connections and performance optimizations. Although DNS over QUIC is currently an IETF draft, there is interest in leveraging the QUIC protocol because of its performance improvements for web servers.

There are also other, non-IETF methods for encrypting DNS queries. DNSCrypt is a method of using encryption to secure traditional DNS messages between an end-user and a resolver. DNSCrypt can carry DNS messages over UDP or TCP, typically on port 443. The current version 2 of the DNSCrypt protocol specification is documented publicly. DNSCurve is a similar method, but it uses elliptic curve cryptography with Curve25519 (the X25519 algorithm) to secure DNS. DNSCurve has been in development since 2009.

Implementations of DOH
Momentum is building for DOH solutions, and there are now implementation examples proving that these methods work. A list of publicly available DOH servers provides links to those services, and the DNS Privacy Project provides a list of test servers. Here are some DOH implementations that illustrate the current state of DOH and provide a place to test approaches for improving privacy for public-facing web applications.

Google

Google operates its global public DNS service over IPv4 (8.8.8.8, 8.8.4.4) and IPv6 (2001:4860:4860::8888, 2001:4860:4860::8844), and now also offers DOH. It has one method available for programmatic API access and another that works with a human-friendly web browser interface. An example of API access is a URL like the following, which contains the query you want to make.

https://dns.google.com/resolve?type=AAAA&name=hoggnet.com
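
The same resolve endpoint can be called from a script. The short Python sketch below (standard library only) queries it and prints the answers from the JSON reply; the Status and Answer field names follow Google’s published JSON format, but treat the sketch as illustrative rather than a definitive client.

import json
import urllib.parse
import urllib.request

def doh_json_query(name, rtype="AAAA"):
    # Build the query string and fetch the JSON answer from Google's resolve endpoint.
    params = urllib.parse.urlencode({"name": name, "type": rtype})
    with urllib.request.urlopen("https://dns.google.com/resolve?" + params, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

result = doh_json_query("hoggnet.com", "AAAA")
print("Status:", result.get("Status"))               # 0 means NOERROR
for answer in result.get("Answer", []):
    print(answer.get("name"), answer.get("type"), answer.get("data"))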

With the Google DOH web interface, you can enter the FQDN you would like to resolve, the type of DNS resource record (A, AAAA, CNAME, NS, MX, etc.), the EDNS client IP address (RFC 7871) and whether you want to use DNSSEC, then click the “Resolve” button. The system then shows you the JSON output of your DNS over HTTPS query, and the web interface provides a RESTful link at the bottom of the page. Following is a URL that you can use with the web interface.

https://dns.google.com/query?type=AAAA&name=hoggnet.com

There is a Docker container available on Docker Hub that runs a small DNS server performing queries over HTTPS via Google's DNS API. Furthermore, there is an implementation of Google’s DOH system for CoreDNS, which is used in Kubernetes environments.

CloudFlare

CloudFlare operates a public DNS service that has a user-favorable privacy policy. CloudFlare also operates a public DOH service which is available over IPv4 (1.1.1.1, 1.0.0.1) and IPv6 (2606:4700:4700::1111, 2606:4700:4700::1001). The CloudFlare DOH service can operate using either DNS wireformat or JSON, and CloudFlare also offers its “cloudflared” DOH client proxy, as well as an Android application that uses its DOH service. Here is an example of how to use curl to query the DOH interface:

curl 'https://cloudflare-dns.com/dns-query?ct=application/dns-json&name=www.hoggnet.com'
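
Besides the JSON form shown in the curl example, the same endpoint accepts RFC 8484 wireformat requests, where the raw DNS query is base64url-encoded into a dns parameter. The Python sketch below builds a minimal A-record query by hand and sends it; it is illustrative only and assumes the cloudflare-dns.com endpoint behaves as documented.

import base64
import struct
import urllib.request

def build_query(name, qtype=1):
    # DNS header: ID 0, recursion-desired flag set, one question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)   # QTYPE, QCLASS=IN
    return header + question

query = build_query("www.hoggnet.com")                 # qtype 1 = A record
dns_param = base64.urlsafe_b64encode(query).rstrip(b"=").decode()   # RFC 8484: padding omitted
req = urllib.request.Request(
    "https://cloudflare-dns.com/dns-query?dns=" + dns_param,
    headers={"Accept": "application/dns-message"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    wire_answer = resp.read()                          # raw DNS response bytes
print("Received", len(wire_answer), "bytes of DNS wireformat response")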

Quad9

Quad9 operates its public DNS service over IPv4 (9.9.9.9, 149.112.112.112) and IPv6 (2620:fe::fe, 2620:fe::9). Quad9’s service also supports DNS over TLS on TCP port 853 at dns.quad9.net.

CleanBrowsing

CleanBrowsing operates a family-friendly public DNS service using IPv4 (185.228.168.168, 185.228.168.169), IPv6 (2a0d:2a00:1::, 2a0d:2a00:2::), and also using DOH with its family filter, adult filter and security filter.

Facebook

Facebook operates a DOH proxy service and has published a set of Python 3 scripts that create a DOH stub resolver and a DOH client, and can proxy using HTTP/2.

Firefox

Mozilla has implemented DOH in Firefox version 62 and newer. The function is known as Trusted Recursive Resolver (TRR), and it is disabled by default but can be easily turned on. Open Firefox, browse to about:config, and search for trr. It will display the different options to enable TRR (network.trr). Change network.trr.mode from 0 to 2 and set network.trr.uri to the URL of your DOH server (e.g. https://mozilla.cloudflare-dns.com/dns-query), then browse to some sites. Mozilla has partnered with CloudFlare on its TRR and DNS over HTTPS integration. To observe how it works, go to the about:networking page and click on the DNS section to see which queries Firefox has made, and check whether the TRR column shows true or false. Mozilla has published an illustrated explanation of how TRR works.

Tenta

The Tenta browser supports secure DNS over TLS and DNSSEC, along with decentralized DNS. Tenta’s DNS service likewise supports DNS over TLS and DNSSEC and offers a Golang interface.

Curl

The curl utility can be used to test DOH services, as shown above, but there are also extensions to curl that make it work directly with DOH. There is also a libcurl-using application that performs DOH: a small stand-alone tool that issues requests for the A and AAAA records of a given host name from a given DOH URI.

Android 9

Android 9 “Pie” has a “Private DNS” setting (“Select Private DNS Mode”) that can use DOT to add privacy for DNS traffic on mobile devices.

Stubby

The Getdns group offers Stubby as an open source application that operates as a stub resolver using DNS over TLS.  Stubby uses a YAML configuration file, and there are examples of how to configure this for specific DNS privacy servers.

PowerDNS

PowerDNS offers a DOH interface and its code is available on GitHub.

pfSense

pfSense supports configuring DNS over TLS on pfSense security devices, and there is a video showing how to configure and test it.

Other DOH Clients

There are also numerous DOH and DOT clients and proxies that are available.  Among these are Daniel’s dns2doh, Frank’s doh-proxy (server-side proxy), Travis’s jDnsProxy, Daniel’s PHP DOH client, Star’s Golang implementation of DOH, Pawel’s Dingo (DNS client in Go), a Python implementation of DOH,  and a DOH C++ client.

DOH doesn’t solve internet privacy on its own, but it is one mechanism in a holistic approach that helps improve privacy. An eavesdropper along the traffic path from the user to the web server could still observe the initial connection to a site’s IP address before HTTPS encryption is established. Once the HTTPS session is in place, the eavesdropper would only observe the encrypted TCP port 443 packets.

DOH complements DNS security measures such as DNSSEC and DNS-Based Authentication of Named Entities (DANE), which can provide validation of the Certificate Authority (CA) used for a service. Using TLS for web connections with a validated public certificate, for example from Let’s Encrypt, can help fortify users’ connections.

The field of DNS security is rapidly evolving. Vendors such as Infoblox (with ActiveTrust Cloud) and Cisco (with Umbrella, formerly OpenDNS) have selected DNSCrypt/DNSCurve as their DNS privacy method. Some organizations, like Quad9, support both DNSCrypt and DOH/DOT, thus hedging their bets on which may prove dominant. There is speculation about the extent to which DOH will affect the use of traditional DNS infrastructure, because these DOH methods are new and have yet to gain widespread adoption.

Regardless of how the future plays out and which method dominates the industry, organizations should be cognizant of the information they disclose through their wide-open DNS communications. These DOH methods are free to test out, so organizations can determine which approach may provide added privacy for individuals’ and organizations’ public web applications.

https://www.networkworld.com

Wednesday, 14 November 2018

Maxta MxIQ Uses Analytics to Improve Storage Availability Across Clouds

Hyperconverged storage software maker Maxta, which is growing a niche clientele in a hot segment that includes Nutanix and others, has launched an analytics-insight platform to go with its frontline product.
The Santa Clara, Calif.-based company claims that its new MxIQ data analytics product, released Nov. 6, gives users real-time visibility into their private cloud and multi-cloud environments to eliminate storage and data-movement issues before they occur and also obtain insights into key performance metrics.
Maxta MxIQ combines configuration metrics with visibility into capacity, performance and system health trends across data centers and clouds to provide a granular overview of customers’ IT environments.  In this way, better storage-management decisions can be made—on the fly, if necessary.
MxIQ aggregates metrics and trends across the entire Maxta customer base to identify potential impacts on system health and availability to enable proactive technical support.  Metadata is collected via pre-installed agents on customers’ servers and transmitted securely to the MxIQ cloud-based service where correlated issues are analyzed and resolved.
Using MxIQ, users and/or reseller partners gain insight into:
  • Whether an SSD or hard drive is about to fail, in advance of it happening
  • Compatibility issues of newly installed components, based on other customers’ deployments
  • Tailored recommendations to improve performance and optimize resources
  • Whether performance issues are related to storage, compute or networking
  • Capacity needs with alerts to add drives to servers
  • Performance and capacity trends over the past month, year or more 
With insight not only into their own system capacity, performance and health information, but also into what is learned from other Maxta customers, MxIQ gives users the ability to predict future trends so that CIOs and IT administrators can plan how best to allocate their resources. Historical data trends for capacity and performance are available, as well as metadata concerning cluster configuration, licensing information, VM inventory and logs.
MxIQ is an integrated component of Maxta Hyperconvergence software, which offers customers the freedom and flexibility to choose or change server hardware and hypervisors so they can avoid vendor lock-in. Maxta says it supports every major server brand, multiple hypervisors, and containers using OpenShift Container Management Platform. It is also easy to run mixed workloads on the same cluster by natively optimizing application performance and resiliency policies in each cluster on a per application basis, Maxta said. 
Basic MxIQ reporting capabilities are included with Maxta Hyperconvergence software with optional value-added data analytics available at an additional cost. 
http://www.eweek.com

Friday, 9 November 2018

Cisco Continues Kubernetes Kraze With AWS Hybrid Software

Cisco continued its Kubernetes craze today with software built for Amazon Web Services (AWS) that allows customers to deploy and monitor containerized applications across private data centers and in AWS.
The Cisco Hybrid Solution for Kubernetes on AWS will be available in December. Customers can buy a subscription for a software-only version, or they can purchase a bundled product with the software running on top of its HyperFlex hyperconverged infrastructure.
Cisco earlier this year launched its container management play based on Kubernetes, aptly named Container Platform.
The new product integrates Container Platform with Amazon Elastic Container Service for Kubernetes (EKS). “And the main piece of integration is the identity and access piece [with AWS Identity and Access Management] that makes it much easier to deploy and manage services across both environments,” said Kip Compton, SVP for the Cloud Platform and Solutions Group at Cisco. This integration enables consistent networking, security, management, monitoring, and identity across data centers and the AWS public cloud.
The move also continues Cisco’s push into containers and more specifically Kubernetes. In May, Cisco enabled Kubernetes support for its CloudCenter management tool and its AppDynamics monitoring platform. Its hybrid cloud product with Google is also based on Kubernetes.
“Containers is the way that modern applications are being built,” Compton said. “So we’ve responded to that by working across our entire portfolio to essentially container-enable our entire portfolio. As our customers continue to embrace and grow their use of containers you’ll see us continue to deepen our container support and focus on things like networking, security, analytics, and management.”
Jumping on the Container Train
This also represents a broader industry shift as traditional infrastructure vendors try to boost their container cred.
In addition to its hosted on-premises VMware Kubernetes Engine (VKE) software, VMware this week added Kubernetes support to its hybrid cloud stack and is buying Heptio, a startup whose founders co-created Kubernetes.
VMware also announced new integration with IBM Cloud’s managed Kubernetes service. And late last month IBM reached a $34 billion deal to buy Red Hat to bolster its own hybrid cloud and container push.
“Companies are really thinking about the future of application development and modern application architecture with Kubernetes and containers, and thinking through how do you make money off of this when you’re at the data center core, the network core,” said IDC analyst Stephen Elliot. “If you’re not doing these types of partnerships, these types of acquisitions, you’re likely going to fall behind. The reality is these vendors understand that it’s a multi-cloud world and they’re trying to help companies navigate their migration from private to public.”

https://www.sdxcentral.com

Thursday, 8 November 2018

The ‘born in the cloud’ advantage is real, but not absolute

You often see companies, especially new ones, state that they are “born in the cloud.” But what does that mean? It means that the company was founded at a time when all of its IT assets have always been, and still are, in the cloud. It has never owned physical servers, and it has never needed to understand what a data center is.

Such “born in the cloud” companies were very rare when the cloud was new; it seemed that cloud computing’s real purpose was for startups. But fast-forward a decade to today, these companies are no longer startups, yet they are still using cloud computing for all their IT needs.

These “born in the cloud” companies have typically been disrupters, innovating in their industries. They used cloud computing as a force multiplier, letting them pivot quickly, fail fast, and expand at the “speed of need.”

But now that the cloud is so established and gaining adoption at legacy companies, is there still an advantage today of being born in the cloud?

Generally speaking, yes. These companies have not gone through the pain of migrating to the cloud but have all the advantages that cloud computing provides. Thus, they can sit back and watch their larger and older competitors struggle with application modernization, data centralization, and security as they move to the cloud and deal with hybrid environments of cloud and on-premises—things a “born in the cloud” company never has to do.

However, there are downsides for “born in the cloud” companies: cloud provider lock-in, perception, and cost. Indeed, many “born in the cloud” companies have migrated off of a particular cloud provider due to perception issues, such as the perception that data could be compromised by being hosted with a public cloud provider, or for financial reasons, such as discovering that it’s cheaper to use your own hardware and software in some cases.

There is no guarantee that cloud will be cheaper and better, and many “born in the cloud” companies have discovered that fact, even as “born in the data center” companies are discovering the value of the cloud. Having a single approach is rarely the right strategy.

Still, it’s true that it is better to have been born in the cloud than to be born in the data center and have to move to the cloud. The “born in the cloud” companies do have an advantage, especially over traditional companies that just can’t get things going around cloud computing fast enough. Even if you weren’t born in the cloud, do your best to act as if you were.

https://www.infoworld.com

Monday, 5 November 2018

IBM to move Watson Health to a hybrid cloud

After announcing plans to acquire open source software provider Red Hat this week, IBM now plans to move its Watson Health cognitive services to a hybrid cloud model.

Watson, the IBM supercomputer that uses artificial intelligence (AI) to analyze natural language and perform data analytics, has been used to identify medical data sources, generate hypotheses, recommend patient treatments to physicians or match patients to clinical trials.

The Veterans Administration has also used Watson for genomics as part of its precision oncology program, which primarily looks for possible new treatments for stage 4 cancer patients who have exhausted other options.

The artificial intelligence engine has been used to comb through massive data stores from published medical literature, patient medical records and physician notes to help researchers identify new uses for drugs; connect patients to clinical drug trials; and offer up potential treatment options based on previous outcomes of patients with similar health profiles.

The Watson Health service has been primarily offered by setting up on-premise services in hospitals and other healthcare and research facilities that are closely monitored by IBM staff.

Better healthcare from a hybrid cloud?
"It has become apparent to us that the right answer for healthcare, like so many other industries, is a hybrid cloud because some institutions want their data on [premise] and yet they want to be able to hook to other data sets, public clouds and do big time AI and analytics on the public side," said John Kelly, who took over the IBM Watson Health division last week.

The buyout of Red Hat will become integral to IBM's hybrid cloud strategy, Kelly said, as many of its clients' private clouds run on Red Hat Linux, as do public clouds, like the one offered by IBM. Red Hat brings with it software to connect the two and enable data to be passed back and forth, Kelly said.

"We became convinced that this hybrid model, where you can move data seamlessly back and forth, where you can move analytics and AI seamlessly back and forth, is the right answer. And our clients are telling us that's the right answer," Kelly said.

Offering a hybrid cloud to healthcare and insurance provider customers of Watson Health will also reduce the need for IBM onsite services, since Watson's AI engine will be exposed through different user interfaces, Kelly said.

IBM will provide services to go to user sites, move their data to a private cloud, connect them to IBM's public cloud, and then move that data to a HIPAA-compliant cloud for healthcare use.

Cynthia Burghard, a research director for IDC Health Insights, said "in theory" the more flexibility organizations have as to where they house their workloads, the better it is for them.

"What is confusing to me is that as far as I know the analytic applications that IBM has for healthcare payers and providers are on premise and have not been re-architected for the cloud, so I don't really know what IBM is moving to a cloud infrastructure," Burghard said.

An IBM spokesman said IBM's Watson Health service has 11,000 clinical measures that are part of its analytics suite. Some are on-premise and others are on IBM's cloud, depending on the client's preference and the market segment (payers, government agencies, and hospital systems, for instance). Traditionally, however, most of those services did begin on premise, he said.

http://www.computerworld.in

Saturday, 3 November 2018

How the cloud is driving the enterprise database

Each of the big three cloud providers—Amazon, Microsoft, and Google—recently announced earnings, with Amazon Web Services (AWS) and Microsoft Azure each touting impressive revenue gains. (Google was largely silent on its Google Cloud Platform business.) For AWS, cloud revenue grew 46 percent to put the company on a $27 billion run rate. Microsoft’s growth slowed to 76 percent, but that’s on an estimated $7.7 billion run rate in 2018, with the company’s hybrid cloud story selling well to enterprises.

One thing that’s pumping up those numbers no matter which of the big clouds you analyze: databases.

The money is in (cloud) data
Yes, databases. Given the pull of data gravity, data sat in on-premises database servers for decades, and companies like Oracle and IBM printed money selling them. As more applications move to the cloud, so too is their data and, by extension, the databases in which the data resides.

So much so, in fact, that Gartner is now projecting that 75 percent of all databases will live on a cloud platform by 2023. As Gartner analyst Merv Adrian has pointed out, this shift to the cloud isn’t a zero-sum game of pushing workloads from on-premises deployments to the cloud. Rather, the cloud is growing the overall pie: The database market “grew by nearly 13 percent from 2016 to 2017, to $38.8 billion— its first double-digit year-over-year growth in five years. And change continues—growth is coming from the cloud,” he said.

This transition to cloud is all the more impressive given that old database habits die hard. “The database has the most inertia [of all enterprise software],” said Dremio CMO (and former MongoDB executive) Kelly Stirman. “It’s the hardest thing to move because it has state. And it has the most valuable asset, the data itself.” Or, as Adrian once told me, “The greatest force in legacy DBMS is inertia.”

That inertia helps to explain why Oracle remains a database force despite its inability to make a dent in the cloud, where it continues to lose what little market share it has managed to muster. Meanwhile, the Big 3—AWS, Microsoft Azure, and Google Cloud Platform—keep printing cloudy cash on the backs of their superior database investments.

Where database innovation happens: the cloud
As Bloomberg has highlighted, four of the world’s biggest R&D spenders are Amazon, Microsoft, Google, and Apple, most of them major public cloud operators. Although not all of these companies’ R&D expenditures are cloud-related, much of that spending is, as Deloitte analyst and InfoWorld columnist David Linthicum has posited, leading to a “forced march” for the industry to the public cloud. Why? Because “most enterprises will move to technology where the real or perceived innovation does occur.”

Today that is public cloud and, by extension, the databases that public cloud providers keep introducing or improving.

This isn’t going unnoticed. According to DB-Engines’ comprehensive ranking of database popularity, the cloud databases keep soaring up the charts. No, none of them threatens to displace Oracle or Microsoft SQL Server anytime soon in terms of overall enterprise adoption, but for the new, innovative workloads that will increasingly differentiate enterprises? Those are going to be all cloud, all of the time.

Of course, there may be a downside.

Just as Oracle once built up a seemingly impregnable fortress of data, so too are the public cloud vendors building services that capture more and more enterprise data, potentially presaging decades of self-inflicted enterprise lock-in. The more enterprises choose to embrace those services and associated databases, the more their data will live in those clouds, and the harder it will be to leave. For now, however, enterprises seem too determined to embrace the fast-paced innovation of the clouds to consider the slow-paced exit they may later want to take.

https://www.infoworld.com

Friday, 2 November 2018

Cray introduces a multi-CPU supercomputer design

Supercomputer maker Cray announced what it calls its last supercomputer architecture before entering the era of exascale computing. It is code-named “Shasta,” and the Department of Energy, already a regular customer of supercomputing, said it will be the first to deploy it, in 2020.

The Shasta architecture is unique in that it will be the first server (unless someone beats Cray to it) to support multiple processor types. Users will be able to deploy a mix of x86, GPU, ARM and FPGA processors in a single system.

Up to now, servers either came with x86 or, in a few select cases, ARM processors, with GPUs and FPGAs as add-in cards plugged into PCI Express slots. This will be the first case of fully native onboard processors, and I hardly expect Cray to be alone in using this design.

Also beefing up the system is the use of three distinct interconnects. Shasta will feature a new Cray-designed interconnect technology called Slingshot, which the company claims is both faster and more flexible than other interconnect protocols, along with Intel’s Omni-Path technology and Mellanox’s InfiniBand.

There has been an effort to improve interconnect technology, since communication between processors and memory is often the source of slowdown. Processors, while not growing at the rate of Moore’s Law anymore, are still left waiting to hear from other processors and memory, so expanding the interconnects has been a growing effort.

Slingshot is a high-speed, purpose-built supercomputing interconnect that Cray claims will offer up to five times more bandwidth per node than existing interconnects and is designed for data-centric computing.  

Slingshot will feature Ethernet compatibility, advanced adaptive routing, first-of-a-kind congestion control, and sophisticated quality-of-service capabilities. Support for both IP-routed and remote memory operations will broaden the range of applications beyond traditional modeling and simulation. Reduction in the network diameter from five hops in the current Cray XC generation of supercomputers to three will reduce latency and power while improving sustained bandwidth and reliability.

Cray is looking beyond just the HPC market with Shasta, though. It’s targeting modeling, simulation, AI and analytics workloads — all data-centric enterprise workloads — and says the design of Shasta allows it to run diverse workloads and workflows all on one system, all at the same time. Shasta’s hardware and software designs are meant to tackle the bottlenecks and other manageability issues that emerge as systems scale up.

Slingshot’s architecture is designed for applications that deal with massive amounts of data and need to run across large numbers of processors, like AI, big data and analytics, while providing synchronization across all processors.

One sign that Cray is targeting the enterprise is that Shasta offers the option of industry-standard 19-inch cabinets instead of Cray’s custom supercomputer cabinets, and it supports Ethernet, the data center standard for interconnectivity, along with the standard supercomputer interconnects.

A supercomputer company pushing down into the enterprise will certainly force HPE, Dell, Cisco and the white-box vendors to up their game quite a bit.

https://www.networkworld.com