Sunday, 29 April 2018

Gmail Update Helps Protect Businesses from Phishing, BEC Threats

The new release of Gmail, which became available on April 25, is being aimed primarily at Google’s G Suite customers, but individuals with Gmail accounts will see the same improvements. The changes include a refreshed interface along with side panels that offer access to other features, such as the calendar and a to-do list. 

Other, more important features aren’t so obvious. The new security features mostly run in the background and won’t appear until you need them. Still other features are less obvious because they’re not actually part of Gmail yet.

Google said in its announcement that they will appear in the future. Gmail users can take a look at the new features that are available now by going to Gmail’s settings and clicking on the option to try the new Gmail.

The most important features for business allow an email sender to control what happens to the message after it’s delivered. There’s an information rights management feature that allows the sender to prevent the email from being forwarded, printed, downloaded or copied. This effectively closes a significant security hole that affects Gmail, as well as a number of other mail clients. 

Also important are phishing protections that are intended to flag business email compromises (BEC) as well as spoofing attacks. The updated Gmail will also flag untrusted senders. Suspected phishing emails are flagged with a prominent red notice that the email is suspicious. While such a notice won’t eliminate the need for training, it will help employees determine when they should stop and think before clicking on a link in an email. 

The phishing protections will look at the contents of shortened URLs to see if they lead to malicious sites. They will flag spoofed domains and domain names that are intended to look like your company’s domain, which is an important tactic in email spoofing. Gmail will also flag unauthenticated emails to help fight spear phishing. 

Google says that its Artificial Intelligence-driven filtering is good enough that more than 99 percent of suspected BEC emails are either sent to the spam folder or flagged as suspicious. Considering that BEC is a serious and growing problem for businesses, this could make a huge difference. 

But the ability to flag most if not all phishing emails is critical, since virtually all recent successful data breaches have started with a phishing email that either yielded credentials or loaded malware onto a network. Of course, once Gmail’s protections get into full swing, attackers will move on to other tactics, but perhaps those AI capabilities will flag those as well. 

Google is also adding the ability to give messages an expiration date and to revoke previously sent messages. There will also be a feature that can require two-factor authentication by the recipient of a message, meaning the recipient must respond to a verification message sent to their mobile device before they can open the protected email. Google said that this way, even if a person’s Gmail account had been hijacked, a message protected by the extra authentication still couldn’t be viewed. 

Most of the other changes to Gmail are convenience items, such as a “nudge” feature that reminds you to answer emails that might have scrolled off the screen. There’s also an automatic reply feature that lets you click on a button to agree to a question, or to select another simple preconfigured reply when all that’s required is a quick response or an acknowledgement that the email has been received. 

Such features can save a busy user hours of time when added up, as can the ability to prioritize some emails, while putting others into a “snooze” mode so that you can deal with them at a more convenient time, while also removing them from the list of messages in your inbox. 

Other conveniences are the ability to open attachments without having to open an email, and the ability to respond to some types of emails simply by hovering your mouse pointer over them. You can also use an email to set up a meeting in the calendar. 

Google gives you three choices of how the Gmail screen looks, and you can change them whenever you want. The differences are fairly minor, mostly affecting how densely emails are displayed on your screen, and none of the options looks markedly different from the previous Gmail screen. 

As nice as the new screen and the convenience features are, it’s the security that really matters. Google has found specific ways to overcome the social engineering that’s one of the biggest security threats these days. 

By providing a real capability to flag phishing and BEC emails, Gmail makes users less likely to fall for those schemes, which reduces the chances that companies will be successfully attacked through phishing and other malicious messages. 

Google has also shown that it’s possible to create an email system with the means to defend users and organizations from the most serious security threats they face every day. By finding ways to combat phishing and BEC, Google has given companies another line of protection, which is especially important for companies using Google’s G Suite. 

But not every company is using G Suite and changing from one email system to another is not a trivial matter. What needs to happen is for other email providers, notably Microsoft, to emulate what Google has done in terms of email security. While employee training is still important in the fight against breaches, having automated tools will go a long way in at least blunting the threats.

http://www.eweek.com

Google Mimics Microsoft With Direct Kubernetes Link Into Its Cloud

Google launched a new framework that uses a pair of platforms to make it easier to connect a Kubernetes cluster to Google Cloud Platform (GCP) services. The move follows a similar launch by Microsoft last year.

The Kubernetes Service Catalog and the Google Cloud Platform Service Broker form the new superstructure. Together they help to connect to GCP services from a GCP-hosted Kubernetes cluster or an on-premises Kubernetes cluster.

The service catalog is installed into an existing Kubernetes or Google Kubernetes Engine (GKE) cluster. Once installed, the service catalog uses the service broker to provide access to GCP services. These include Google Cloud Storage, its BigQuery enterprise data warehouse, and Cloud SQL database manager.

It basically provides a direct connection between a running Kubernetes cluster and supporting services running outside of that cluster.

Both are based on the work of the Kubernetes Service Catalog SIG and on the Open Service Broker API. The service catalog allows a developer to request services from a catalog without having to deal with provisioning. The SIG has also been working on integrating the Open Service Broker API with the Kubernetes ecosystem.

The Open Service Broker API project was launched by the Cloud Foundry Foundation in late 2016 as an effort to create a standard set of APIs for connecting applications to cloud-platform services. Founding members included Fujitsu, Google, IBM, Pivotal, Red Hat, and SAP.

Chris Crall, product manager at Google Cloud, said in a blog post that the combination allows developers to work in environments they are familiar with to create service instances and to connect those instances.

“With two commands you can create the service instance and set the security policy to give your application access to the resource,” Crall wrote. “You don’t need to know how to create or manage the services to use them in your application.”
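As a rough illustration of that workflow (not Google's published example), here is a sketch using the official Kubernetes Python client to create a ServiceInstance and a ServiceBinding through the Service Catalog's servicecatalog.k8s.io/v1beta1 API. The class and plan names are placeholders; the real names come from the catalog the GCP Service Broker publishes.

```python
# Hedged sketch: provisioning a broker-backed service through the Kubernetes
# Service Catalog (servicecatalog.k8s.io/v1beta1) with the official Python client.
# The class and plan names below are placeholders, not real GCP catalog entries.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

GROUP, VERSION, NS = "servicecatalog.k8s.io", "v1beta1", "default"

# Ask the broker to provision a service instance.
instance = {
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "ServiceInstance",
    "metadata": {"name": "demo-instance", "namespace": NS},
    "spec": {
        "clusterServiceClassExternalName": "example-storage-class",  # placeholder
        "clusterServicePlanExternalName": "example-plan",            # placeholder
    },
}
api.create_namespaced_custom_object(GROUP, VERSION, NS, "serviceinstances", instance)

# Bind the instance; the broker writes credentials into the named Secret,
# which the application can then mount or read.
binding = {
    "apiVersion": f"{GROUP}/{VERSION}",
    "kind": "ServiceBinding",
    "metadata": {"name": "demo-binding", "namespace": NS},
    "spec": {
        "instanceRef": {"name": "demo-instance"},
        "secretName": "demo-credentials",
    },
}
api.create_namespaced_custom_object(GROUP, VERSION, NS, "servicebindings", binding)
```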

Kubernetes grew out of Google’s internal Borg platform. Google released it to the open source community and in 2015 handed the project to the newly formed Cloud Native Computing Foundation (CNCF), where it currently resides.

Microsoft Already Did It
Microsoft launched a similar platform last year with its Open Service Broker for Azure. It uses a specific implementation of the Open Service Broker API targeted at Azure services.

It provides similar features but is tied to various Azure cloud services. It can be run through Kubernetes, Cloud Foundry, OpenShift in Azure, Azure Stack, or on-premises.

https://www.sdxcentral.com

A corporate guide to addressing IoT security concerns

The Internet of Things (IoT) promises benefits for companies, including rich supplies of data that can help them more effectively serve their customers. There’s also a lot to be worried about.

Because so many devices, products, assets, vehicles, buildings, etc. will be connected, there is a possibility that hackers and other cyber criminals will try to exploit weaknesses.

“In IoT ecosystems, where myriad device types, applications and people are linked via a variety of connectivity mechanisms, the attack vector or surface is potentially limitless,” says Laura DiDio, principal analyst at research and consulting firm ITIC.

“Any point in the network — from the network edge/perimeter to corporate servers and main line-of-business applications to an end-user device to the transmission mechanisms [is] vulnerable to attack. Any and all of these points can be exploited.”

As a result, IoT security ranks as a big concern for many companies. Research firm 451 Research recently conducted an online survey of more than 600 IT decision-makers worldwide and found that 55% rated IoT security as their top priority when asked to rank which technologies or processes their organizations considered for existing or planned IoT initiatives. The very nature of IoT makes it particularly challenging to protect against attacks, the report says.

What can enterprises do to strengthen the security of their IoT environments? Here are some suggested best practices from industry experts.

Identify, track, and manage endpoint devices
Without knowing which devices are connected and tracking their activity, ensuring security of these endpoints is difficult if not impossible.

“This is a critical area,” says Ruggero Contu, research director at Gartner Inc. “One key concern for enterprises is to gain full visibility of smart connected devices. This is a requirement to do with both operational and security aspects.”

For some organizations, “this discovery and identification is about asset management and less about security,” says Robert Westervelt, research director of the Data Security Practice at International Data Corp. (IDC). “This is the area that network access control and orchestration vendors are positioning their products to address, with the added component of secure connectivity and monitoring for signs of potential threats.”

Companies should take a thorough inventory of everything on the IoT network and search for forgotten devices that may contain back doors or open ports, DiDio says.
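As a rough illustration of that inventory step, the hypothetical sketch below compares devices seen on the network against a known-asset list and flags anything unrecognized for follow-up; the static "discovered" list stands in for whatever scanner or NAC feed an organization actually uses.

```python
# Minimal asset-inventory sketch: flag IoT endpoints that aren't in the
# approved inventory. The "discovered" list is a stand-in for output from
# a real scanner or NAC feed; the inventory file path is a placeholder.
import json

APPROVED_INVENTORY_FILE = "iot_inventory.json"  # hypothetical path

def load_inventory(path):
    """Known devices keyed by lowercase MAC address."""
    with open(path) as f:
        return json.load(f)  # e.g. {"aa:bb:cc:dd:ee:01": {"type": "camera"}}

def audit(discovered, inventory):
    """Return devices seen on the network but missing from the inventory."""
    return [d for d in discovered if d["mac"].lower() not in inventory]

if __name__ == "__main__":
    inventory = load_inventory(APPROVED_INVENTORY_FILE)
    discovered = [
        {"mac": "aa:bb:cc:dd:ee:01", "ip": "10.1.4.20"},
        {"mac": "de:ad:be:ef:00:42", "ip": "10.1.4.99"},  # unknown device
    ]
    for device in audit(discovered, inventory):
        print(f"Unapproved device {device['mac']} at {device['ip']} - investigate")
```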

Patch and remediate security flaws as they’re discovered
Patching is one of the foundational concepts of good IT security hygiene, says John Pironti, president of consulting firm IP Architects and an expert on IoT.

“If a security-related patch exists for an IoT device, that is the vendor’s acknowledgement of a weakness in its devices, and the patch is the remediation,” Pironti says. “Once the patch is available, the accountability for the issue transfers from the vendor to the organization using the device.”

It might make sense to use vulnerability and configuration management, and this would be provided in some cases by vulnerability-scanner products, Westervelt says. Then do the patching and remediation. “Configuration management may be an even bigger issue opening weaknesses than patching for some enterprises,” he says.

It’s important to remember that IoT patch management is often difficult, Contu says. “This is why it is important to do a full asset-discovery to identify where organizations are potentially vulnerable,” he says. “There is as a result the need to seek out alternative measures and models to apply security, given [that] patching is not always possible.” Monitoring network traffic is one way to compensate for the inability to apply patches, Contu says.

Prioritize security of the most valuable IoT infrastructure
Not all data in the IoT world is created equal. “It is important to take a risk-based approach to IoT security to ensure high-value assets are addressed first to try and protect them based on their value and importance to the organization [that] is using them,” Pironti says.

In the case of IoT devices, an organization might have to contend with exponentially more devices than it did with traditional IT gear, Pironti says. “It is often not realistic to believe that all of these devices can be patched in short periods of time,” he says.

Pen test IoT hardware and software before deploying
If hiring a service provider or consulting firm to handle this, be specific about what type of penetration testing is needed.

“The pen testers I speak to do network penetration tests along with ensuring the integrity of network segmentations,” Westervelt says. “Some environments will require an assessment of their wireless infrastructure. I believe application penetration testing is a slightly lower priority within IoT for now, with exception for certain use cases.”

Penetration testing should be part of a broader risk assessment program, Contu says. “We expect an increasing demand for security certification [related to] these activities,” he says.

If an actual IoT-related attack occurs, be ready to act immediately. “Construct a security response plan and issue guidance and governance around it,” DiDio says. “Put together a chain of responsibility and command in the event of a successful penetration.”

Know how IoT interacts with data to ID anomalies, protect personal information
You might want to focus on secure sensor-data collection and aggregation, Westervelt says. This could require both cyber security and physical anti-tampering capabilities, depending on where the device will be deployed and the device’s risk profile.

“It may require hardware and/or software encryption – depending on the sensitivity of the data being collected – and PKI [public key infrastructure] to validate device, sensors and other components,” Westervelt says.

“Other IoT devices like point-of-sale systems may require whitelisting, operating-system restrictions and possibly anti-malware, depending on the device functionality.”

Don’t use default security settings
In some cases, organizations will choose security settings according to their unique security posture.

“If a network security appliance is being implemented in a critical juncture, some organizations may choose to deploy it in passive mode only,” Westervelt says. “Remember that with industrial processes – where we are seeing IoT sensors and devices being deployed – there may be no tolerance for false positives. Blocking something important could cause an explosion or even trigger a shutdown of industrial machinery, which can be extremely costly.”

Changing the security settings can also apply to the actual devices connected via IoT. For example, there’s been a distributed denial-of-service attack that arose from the compromise of millions of video cameras configured with default settings.
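One small, hypothetical illustration of that advice: sweep the device inventory for entries still configured with factory defaults before they go on the network. The credential list here is a stand-in for a real vendor-default database.

```python
# Hypothetical sketch: flag inventoried devices still using factory defaults.
# The defaults list is illustrative; real checks use vendor-maintained lists.
KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "1234"), ("root", "root")}

devices = [
    {"name": "lobby-camera-01", "username": "admin", "password": "admin"},
    {"name": "hvac-sensor-07", "username": "svc_hvac", "password": "R7!xq9#2"},
]

for device in devices:
    if (device["username"], device["password"]) in KNOWN_DEFAULTS:
        print(f"{device['name']}: factory-default credentials - change before deployment")
```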

Provide secure remote access
Remote-access weaknesses have long been a favorite target of attackers, and within IoT a lot of organizations are looking for ways to provide contractors with remote access to certain devices, Westervelt says.

“Organizations must ensure that any solution that provides remote access is properly configured when implemented, and other mechanisms are in place to monitor, grant and revoke remote access,” Westervelt says. “In some high-risk scenarios, if remote access software is being considered, it should be thoroughly checked for vulnerabilities.”

Segment networks to enable secure communication among devices
Segmenting IoT devices within networks enables organizations to limit the impact of those devices if they are found to be acting maliciously, Pironti says.

“Once malicious behavior is identified from an IoT device, it can be isolated from communicating with other devices on the network until it can be investigated and the situation remediated,” he says.

When segmenting IoT devices, it is important to implement an inspection element or layer between the IoT network segment and other network segments to create a common inspection point, Pironti says. At this point, decisions can be made about what kinds of traffic can pass between networks, and traffic can be inspected in a meaningful, focused way.

This allows organizations to direct inspection activities at specific traffic types and behaviors that are typical to the IoT devices instead of trying to account for all traffic types, Pironti says.
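As a loose sketch of such an inspection point, the logic boils down to an allow-list keyed by segment pair; the segments, protocols and ports below are invented examples, not a recommended policy.

```python
# Hypothetical inspection-point sketch: allow only expected traffic types
# between the IoT segment and other segments. Segments/ports are examples.
ALLOWED_FLOWS = {
    # (source segment, destination segment): set of (protocol, port) pairs
    ("iot", "telemetry"): {("tcp", 8883)},   # e.g. MQTT over TLS
    ("mgmt", "iot"): {("tcp", 443)},         # management console to devices
}

def permit(src_segment, dst_segment, protocol, port):
    """Return True if the flow matches the inter-segment allow-list."""
    allowed = ALLOWED_FLOWS.get((src_segment, dst_segment), set())
    return (protocol, port) in allowed

# An IoT device trying to reach the corporate file-share segment is dropped,
# while its expected telemetry traffic is forwarded.
print(permit("iot", "corp-files", "tcp", 445))   # False -> drop and alert
print(permit("iot", "telemetry", "tcp", 8883))   # True  -> forward
```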

Remember people and policies
IoT is not just about securing devices and networks. It’s also crucial to consider the human element in securing the IoT ecosystem, DiDio says.

“Security is 50 percent devices and protection, tracking and authentication mechanisms and 50 percent the responsibility of the humans who administer and oversee the IoT ecosystem,” she says. “It is imperative that all stakeholders from the C-level executives to the IT departments, security administrators, and the end users themselves must fully participate in defending and securing the IoT ecosystem from attacks.”

In addition, review and update the existing corporate computer security policy and procedures. “If the company policy is more than a year old, it’s outdated and needs revision to account for IoT deployments,” DiDio says. “Make sure that the corporate computer security policy and procedures clearly specify and articulate the penalties for first, second and third infractions. These may include everything from warnings for a first-time offense up to termination for repeat offenses.”

https://www.networkworld.com

Cisco reinforces storage with new switches, mgmt. software

Cisco this week fortified its storage family with two new 32G Fibre Channel switches and software designed to help customers manage and troubleshoot their SANs.

The new switches, the 48-port MDS 9148T and the 96-port MDS 9396T, feature a technology called Auto Zone that detects any new storage servers or devices that log into a SAN and automatically zones them, with no manual configuration required.

The idea is to eliminate the cycles spent provisioning new devices and avert the errors that typically occur when manually configuring complex zones. Even when a host or storage hardware is upgraded or a faulty component is replaced, the switch automatically detects the change and zones the device into the SAN, Cisco said.

The switches also support a number of features typically found only in higher-end boxes, according to Adarsh Viswanathan, Cisco senior manager of storage product management and marketing. These include component redundancy, HVAC/HVDC power options and smaller failure domains to ensure higher reliability.

The switches also support NVMe over Fibre Channel (FC-NVMe) to help customers moving toward all-flash storage environments. NVMe was developed for SSDs by a consortium of vendors including Intel, Samsung, SanDisk, Dell and Seagate, and is designed as a standard controller technology for PCI-Express interfaces between CPUs and flash storage.

The switches fill out Cisco’s existing MDS storage-fabric switch line, which includes the MDS 9132T 32-port 32G Fibre Channel Switch and the MDS 9396S 16G Multilayer Fabric Switch. The MDS family also includes the much larger MDS 9700 Director line of switches.

On the software side, Cisco enhanced its MDS Diagnostics (version 11) suite to help customers spot and fix SAN-wide problems. For example, new features called HBA Diagnostics and Read Diagnostic Parameter let MDS switches initiate tests to validate and confirm normal operations on edge-connected devices.

Another feature, Link Cable Beacon, helps administrators more easily identify physical ports by flashing lights to help them identify where specific devices are attached, reducing downtime, Cisco says. 

The new switches also support telemetry, a feature found in Cisco’s higher-end switches. Telemetry lets customers analyze SAN operations more efficiently by giving administrators a more sophisticated view of the enterprise Fibre Channel fabric. The idea is to spot problems and solve them in real time.

Cisco research says data-center storage installed capacity will grow nearly five-fold between 2015 and 2020, from 382 exabytes to 1.8 zettabytes.

A 2016 report from Dell’Oro Group noted that 32G Fibre Channel revenue is expected to exceed $1 billion by 2020, as 32Gbps Fibre Channel switch port shipments are expected to account for more than half the market by then. Brocade, Cisco, Broadcom/Emulex and Cavium/QLogic are some of the main players in that arena.

https://www.networkworld.com

Google's Partner Interconnect connects SMBs to its data centers

If you are a large-scale enterprise, Google has a service called Dedicated Interconnect that offers 10Gbps connections between your data center and one of theirs. But what if you are a smaller firm and don’t need that kind of bandwidth and the expense that goes with it?

Google now has you covered. The cloud giant recently announced Google Cloud Partner Interconnect, a means of establishing a direct connection between an SMB data center, with emphasis on the medium-sized business, and Google's hybrid cloud platform. The company did this in concert with 23 ISP partners around the globe.

Instead of the 10Gbps full circuits, Partner Interconnect allows users to select partial circuits from 50Mbps to 10Gbps.

Participating partners include, but are not limited to, AT&T and Verizon in North America, NTT and Softbank in Japan, BT and Orange in EMEA, Macquarie and Megaport in Australia, plus global providers such as Digital Realty and Equinix.

“Getting up and running with Partner Interconnect is easy, as our partners have already set up and certified the infrastructure with Google Cloud. This provides a turnkey solution that minimizes the effort needed to bring up network connectivity from your data center to GCP,” said John Veizades, Google Cloud product manager, in the blog post announcing the service.

Dedicated and Partner Interconnect mean businesses can connect their data center directly to Google Cloud services without having to use the public internet, reducing latency and securing data transfers at the same time. Google also offers Cloud VPN for customers using the public internet for whatever reason.

Google's growing interest in hybrid cloud
Google started out as a pure cloud player, but this is the second major effort, following Dedicated Interconnect, that shows Google’s growing interest in hybrid cloud installations. Google has finally acknowledged that hybrid is the dominant style, with most enterprises going with a mix of on-premises and cloud infrastructure, and it has adjusted accordingly.

The company has made other hybrid deals recently, most notably a deal with Cisco last October to help connect on-premises services to Google Cloud, plus a similar partnership with Nutanix announced last June to allow on-premises and cloud deployments to be managed as a unified service.

Google says general availability for Partner Interconnect will come in the next few weeks.

https://www.networkworld.com

Friday, 27 April 2018

Percona Updates MongoDB Server, Enhances MySQL Security

Open-source server specialist Percona is branching out yet again.

After establishing a reputation for MySQL server solutions, the Raleigh, N.C., technology firm released Percona Server for the popular open-source MongoDB database in 2015. The following year, it rolled out a second version of the product for MongoDB 3.2. Today, the company is at it again with Percona Server for MongoDB 3.6.

On April 24, coinciding with the Percona Live Conference in Santa Clara, Calif. (April 23-25), the company announced the general availability of the solution featuring updates and enhancements found in MongoDB Community Edition 3.6.

Percona's bet on MongoDB appears to have paid off. Two years ago, Percona Server for MongoDB had been downloaded 30,000 times since its initial launch in the fall of 2015; today, the company reports that the software has been downloaded 300,000 times.

New features include retryable writes, which allow write operations to be retried automatically when transient network issues crop up, and causal consistency capabilities that provide reliable read operations on secondary nodes. Security gets an upgrade with updated access controls and improved network listening features.
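To show how those two features surface in application code, here is a minimal PyMongo sketch against a MongoDB 3.6-compatible server such as Percona Server for MongoDB 3.6; the connection string, database and collection names are placeholders.

```python
# Sketch: retryable writes and a causally consistent session against a
# 3.6-compatible server. Host, database and collection names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://db01.example.com:27017/?replicaSet=rs0",
                     retryWrites=True)  # writes retried on transient network errors

orders = client.shop.orders

# Causally consistent session: reads on secondaries observe this session's writes.
with client.start_session(causal_consistency=True) as session:
    orders.insert_one({"order_id": 1001, "status": "new"}, session=session)
    doc = orders.find_one({"order_id": 1001}, session=session)
    print(doc["status"])
```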

Meanwhile, the company is also rolling out a new version of its MySQL server product, with three new features that help safeguard business data. Percona Server for MySQL 5.7.21 features encryption for InnoDB general tablespaces, binary log file encryption and a Vault keyring plug-in.

"One of the key challenges for enterprises is how to secure their data to meet compliance objectives.  While encryption alone is useful, it is often very complex operationally to implement," said Peter Zaitsev, co-founder and CEO of Percona. "The new features of Percona Server for MySQL not only provide a pathway for organizations to encrypt their data to meet ever tighter compliance requirements, but also provides a method to centrally manage the keys used to encrypt their databases using Hashicorp Vault."

Percona DBA Service, a new database managed service, offers monitoring, reporting and security assessments, among other essentials, for both cloud-based and on-premises databases. Cloud support includes Amazon Aurora, Amazon Relational Database Service (RDS), Google Cloud and Microsoft Azure.

This summer, Percona is expanding its support services slate with another popular open-source database, PostgreSQL.

Providing support for PostgreSQL, whose profile has been rising of late, is a natural next step for Percona. The database has attracted some big-name brands, including IMDB.com, Fujitsu and Etsy.com. "Because of its feature set, PostgreSQL is also a popular choice for enterprises that are looking to move away from proprietary databases such as Oracle, Microsoft SQL Server, or DB2 to reduce costs or decrease operational complexity," Zaitsev explained.

Percona Support for PostgreSQL joins the company's other support offerings for MySQL, MongoDB and MariaDB, allowing customers to consult with database experts via phone, email, chat and the web when it launches on July 1. As a bonus, customers running a mix of open-source databases in their environments will be able to consolidate their enterprise support services with one vendor, Zaitsev added.

Finally, Percona announced two major new partnerships.

The company has teamed with Microsoft to make its software available in the Azure Marketplace, a move that helps simplify deployments in Azure instances, asserted Percona. In addition, the company formed an alliance with San Francisco-based Mesosphere to bring Percona Server for MongoDB to Mesosphere's container-driven DC/OS (Data Center Operating System) platform.

http://www.eweek.com

HPE partners with Portworx for easier Kubernetes deployment

Hewlett Packard Enterprise (HPE) has partnered with Kubernetes container vendor Portworx to provide a reference configuration for enterprises to launch stateful container workloads on Kubernetes.

Containers are a lightweight form of virtualization: only what an application needs is loaded, rather than a full operating system as in a traditional virtualized environment. Docker popularized containers, but its orchestration efforts have been steamrolled by Kubernetes, which was developed by Google. Google simply had far more resources to bring to bear than Docker, a startup that has relied on venture funding.

One of the big changes as containers have evolved is adopting the stateful condition. Initially they were stateless, meaning the data was erased from memory when the container was shut down at the completion of its workload. Stateful applications, on the other hand, are services that require retention of data, usually through a connection to a back-end database so they have persistent storage.

HPE's and Portworx's new solution
This new solution pairs HPE’s Synergy composable system with Portworx’s PX-Enterprise cloud-native storage platform to deploy a scale-out container platform on bare metal. One of Portworx’s target markets is DevOps teams; developers can deploy, in 30 minutes, a container environment that is elastic and scalable to many nodes.

“Running enterprise container workloads at scale requires compute and storage that are highly flexible, scalable and available,” said McLeod Glass, vice president of product management at HPE, in a statement. “Together Portworx and HPE deliver a fully integrated cloud-native storage layer on top of HPE Synergy’s composable infrastructure, enabling scalable data and compute services for containers on a Kubernetes cluster. This will vastly simplify the customer’s ability to deliver stateful container services through deployment automation and running native container storage on HPE’s composable systems.”

What are HPE Synergy and Portworx PX-Enterprise?
Both products are pretty new. HPE introduced Synergy in December 2015, while Portworx first introduced its PX-Enterprise product in June 2016. Synergy is actually a hardware platform, what HPE calls "composable" hardware that combines compute, storage, and network equipment in one chassis, along with management software that can quickly configure the hardware needed to run an application.

Synergy stores configurations for particular applications as templates and deploys them through an app called Composer. The hardware is configured for the app and the OS images are deployed, all without any human intervention.

PX-Enterprise is an easy-to-deploy container data-services platform that provides persistence, replication, snapshots, encryption, and secure distributed storage for containers. It works both on premises and in the cloud and can span both.

Containers are becoming a hot commodity, and Kubernetes has the lead. According to a Portworx Annual Container Adoption survey of 491 IT professionals, 43 percent said they use Kubernetes, with 32 percent using it as their primary orchestration tool. Docker Swarm was a distant second at 30 percent. The survey was nearly a year ago, however, and things have likely changed.

https://www.networkworld.com

IBM blockchain alliance tracks jewelry from the mine to the mall

If you're checking out engagement rings or earrings in one of Helzberg Diamonds' 200 stores later this year, your phone should be able to tell you exactly where that jewelry came from.

Through an alliance called TrustChain, IBM, Helzberg and companies involved with mining and refining will be able to track gems and precious metals from their origins all the way to the mall. It's based on a technology called blockchain that's outgrown its origin -- the way to record transactions with the bitcoin cryptocurrency -- into a tool to cement all kinds of transactions and digital data into a shared, tamperproof, permanent record.

The result in this case is a way for a host of companies to track finished jewelry in a store back through its entire history, said Jason Kelley, IBM's general manager of blockchain services. That'll let you be sure an item has the history and value you think it has -- particularly important if you want to sell it yourself. You and law enforcement also can have some faith that your necklace isn't tied to sordid aspects of the trade such as corruption, terrorist financing, slavery and other human rights abuses.

"We have a need in our industry for truth and greater trust," Kelley tells me.

Blockchain, designed to hard-wire that trust into digital records of what's going on, has the potential to revolutionize not just commerce but also voting, home buying, hiring, online advertising and other fields. But there's a long way between today's reality and that promise. Blockchain is more likely to be seen in pilot projects than the actual systems that govern our lives, and integrating it isn't simple.

The TrustChain alliance includes Asahi Refining and LeachGarner, which refine and supply precious metals; Richline Group, which makes jewelry; and Helzberg Diamonds, which sells the products. Blockchain networks offer more utility with more members in an alliance, though, so Kelley says alliance members plan to expand.

TrustChain has graduated from its pilot stage, but getting it to work isn't simple. At Richline, more than 100 people from accounting, information technology and business operations were involved, according to Kelley.

The hard part of blockchain
"The technology is pretty easy," Kelley says. "What's tough is getting the businesses and the business processes, the different players and personalities, together to execute in a single consortium."

The benefits can be significant, though. Blockchain also could help remove uncertainty that clouds other transactions. Is that really fair trade coffee? Is that a counterfeit Louis Vuitton purse? Did those running shoes come from a sweatshop? No wonder there are blockchain programs to assure the authenticity and provenance of diamonds.

A lot of blockchain work involves digital assets -- data that can be recorded directly on a blockchain. It's trickier with physical assets like diamonds, pharmaceuticals, high-end fashion products and jewelry, though. There have to be procedures to ensure nobody switches out the genuine article and switches in a fake that assumes its identity on the blockchain.

Molecular verification for blockchain
Authentication options are an active area of improvement, though, for example with a "molecular watermark in a certain place" in a product, Kelley says. Naturally, IBM is working on this verification technology.

"Blockchain doesn't solve that -- the original recording of the physical asset," Kelley says. "We can take a picture at molecular level to verify an object is what we think it is. That is then put into the blockchain."

So the trust isn't perfect. But to be useful, blockchain doesn't have to be perfect. It just has to be better.

https://www.cnet.com

What’s new with Eclipse’s Jakarta EE Java

The Eclipse Foundation, which has taken over development of enterprise Java, plans two releases of the GlassFish Java application server this year, including one that will pass through Eclipse’s new enterprise Java specification process. The rollouts are the first steps in the foundation’s efforts to advance the enterprise Java platform, which, going forward, will emphasize microservices and cloud deployments.
GlassFish historically has served as a reference implementation of Java EE (Enterprise Edition), which is being relabeled Jakarta EE. GlassFish will serve as the reference implementation of Jakarta EE as well. In the third quarter of this year, Eclipse GlassFish 5.1 will debut, becoming the first release of a project from the Eclipse Enterprise for Java (EE4J) top-level project.

Cloud native Java

Eclipse’s roadmap for GlassFish is part of a multifaceted announcement today detailing both development plans and the organization’s overall vision for Jakarta. Key goals and areas of focus, based on feedback from developers and stakeholders, include:
  • Enhanced support for microservices architecture. The existing Eclipse MicroProfile community will help take the lead on this. Jakarta will have a microservices-first outlook and a simpler consumption model where enterprises can use the best of the platform without having to use all of it.
  • A move to “cloud native Java,” with better integrations with technologies including Docker containers and Kubernetes container orchestration. Some integrations have to happen at the JVM level. The Eclipse Jakarta community is expected to work with OpenJDK and the Eclipse OpenJ9 VM team on this effort.
  • Provision of production-quality reference implementations.
  • Building of a vibrant developer community.
  • Establishing Eclipse as an open source “center of gravity” to attract other technologies in realms such as cloud-friendly Java, microservices, and Docker and Kubernetes integration.
Eclipse’s emphasis on cloud support and microservices echoes the plans Oracle had previously outlined for enterprise Java. As part of Eclipse’s takeover of the project, the organization is endeavoring to make community participation easier. Although the previous Java Community Process under Oracle had engaged the Java community, participation in open source projects such as GlassFish and the Jersey web services API required signing an Oracle Contributor Agreement—a barrier to some Java developers, Eclipse said. New processes for Jakarta EE specifications and development will be “open,” vendor-neutral, and provide a level playing field for all participants, the organization said.

Jakarta EE compatibility testing

Technology Compatibility Kits (TCKs), to verify compliance with Jakarta EE platform specifications, could arrive as soon as 2018. These are intended to be more open and less arduous than before. The TCKs will be available under an open source license for the first time. Previously, TCKs had been available only to Java EE licensees, who had to pay for them. Certifying compatible implementations will require a Jakarta EE trademark license. Whether Eclipse will charge for the license is still to be determined.
https://www.infoworld.com

Juniper multicloud management software targets enterprise data centers

Facing the reality that many enterprise data-center managers now work in a hybrid cloud environment, Juniper Networks is set to release Contrail Enterprise Multicloud, a software package designed to monitor and manage workloads and servers deployed across networking and cloud infrastructure from multiple vendors.

Enterprises are moving to the cloud for operational efficiency and cost optimization, but at the moment most big companies are operating hybrid environments, which has added to the complexity of managing computing infrastructure.

Juniper is competing with a variety of networking and multicloud orchestration tools from major data center players, including VMware's NSX, Cisco's ACI, and HPE's OneSphere. What's more, Juniper does not have as big a presence in the data center as some of its rivals, particularly Cisco.

Juniper unveiled Contrail Enterprise Multicloud at its NXTWORK event in December last year, and is now set to come out with the software in a staggered release schedule over the next five months.

The package as a whole is designed to give data center managers end-to-end policy and control capability for any workload across multi-vendor networking environments and server deployments in different cloud services, Juniper said. It's meant to act as a single point of management for both software overlays -- software-defined network services -- and underlays, or in other words the traditional networking equipment infrastructure.

Managing policy across clouds
"From a single platform, enterprises can manage policy across multiple public clouds and private clouds," said Bikash Koley, Juniper CTO, in an email response to questions about the new product. "The integrated underlay capabilities mean enterprises can also manage their IP fabric, switches, routers and their bare-metal compute from the same platform," Koley said.

Juniper's journey to multivendor and multicloud management software began in earnest when it acquired Contrail Systems in late 2012, when it was on the hunt for software-defined networking technology. Since then, it's developed Contrail into a full-blown network management platform. It developed Contrail Cloud for telecom companies, and recently contributed the code for OpenContrail, the open source version of the software-defined networking (SDN) platform, to the Linux Foundation. (The software is now called Tungsten Fabric.)

For its part, Contrail Enterprise Multicloud offers real-time infrastructure performance monitoring for data center networking devices in addition to cloud infrastructure and application monitoring.

The software maps software-defined overlay services to cloud-specific environments. It uses IP routing across clouds to provide connectivity across application components, independently of which cloud execution environment they use, Juniper said.

The physical data center fabric is programmed and controlled via networking, configuration and telemetry protocols.

For virtualized workloads in private clouds, the Contrail vRouter connects virtual machines and containers to virtual networks, according to a blog post by Jacopo Pianigiani, a Juniper product manager.

Contrail Enterprise Multicloud also provides for monitoring and management of tenants in public clouds, including AWS and Azure, through public cloud APIs.

"Customers can manage network policy for workloads running on bare metal servers, in virtual machines or in containers, and across both public and private cloud environments," Koley said. "Workflows—from provisioning to troubleshooting to maintenance—can be executed across a heterogeneous environment."

The software, however, is not meant to be a multi-vendor element management system that, for example, would allow network administrators to get down to the nitty-gritty of changing configuration settings on any network device from any given vendor.

"This isn’t about manipulating point configuration on devices but rather pushing intent-based workflows down into the supporting infrastructure," Koley said.

Contrail Enterprise Multicloud is licensed as a subscription based on the number of devices or nodes that are deployed. With the general availability of  Contrail Enterprise Multicloud, Juniper is also releasing a 5-step multicloud migration framework document as a guide for data center managers moving to the cloud (or multiple clouds) and offering related professional services. It's also offering bundles of QFX Series Switches with Contrail Enterprise Multicloud software; pricing depends on configuration of switches and deployed devices.

The release schedule calls for Contrail Enterprise Multicloud with fabric management capabilities for QFX series, along with a unified dashboard called Contrail Command, to be available by the end of June. Contrail Enterprise Multicloud software for managing public clouds as a fabric connecting workloads across private and public clouds is slated to be released in the third quarter.

https://www.networkworld.com

Thursday, 26 April 2018

Google Cloud Platform adds more managed database services

Google Cloud Platform is rounding out its stable of managed database services as it onboards more large enterprises.

Managed database services are increasingly popular as enterprises aim to abstract the underlying infrastructure and connect with databases via application programming interfaces.

Dominic Preuss, director of product management at Google Cloud, said that the latest additions to the database roster cover the four largest asks from enterprise customers.

"Every enterprise has many database technologies as well as programming languages. These companies are replatforming on more managed services," said Preuss. "We are laser focused on enterprise use cases."

Managed database services are offered by rivals Amazon Web Services, which has an extensive lineup, as well as Microsoft Azure, IBM Cloud and a bevy of others.

The managed database additions include:
  • Commit timestamps for Cloud Spanner across multiple regions. Commit timestamps let enterprises determine the ordering of mutations and build change logs.
  • Cloud Bigtable replication beta is rolling out and will be available to all customers by May 1. A replicated Cloud Bigtable database provides more availability by enabling use across zones in a region.
  • Cloud Memorystore for Redis beta. On May 9, Google Cloud will offer a Redis managed service. Preuss noted that Redis has become a popular enterprise option for moving apps to in-memory architectures (see the connection sketch after this list).
  • Cloud SQL for PostgreSQL, which is now generally available. Preuss said that Google added more availability, replication and performance to PostgreSQL, which has 99.95 percent availability.
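Because Memorystore speaks the standard Redis protocol, application code can use an ordinary Redis client. The sketch below uses redis-py with a placeholder instance IP to illustrate a simple cache-aside pattern in front of a slower data store.

```python
# Sketch: cache-aside reads against a Memorystore for Redis instance.
# The host IP is a placeholder for the instance's internal address.
import redis

cache = redis.Redis(host="10.0.0.3", port=6379)

def get_profile(user_id, load_from_db):
    """Return a cached profile, falling back to the database loader."""
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached.decode("utf-8")
    value = load_from_db(user_id)      # e.g. a Cloud SQL query
    cache.setex(key, 300, value)       # cache for five minutes
    return value

print(get_profile(42, lambda uid: f"user-{uid}-from-db"))
```
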
Preuss said that Google Cloud Platform chose those aforementioned database services due to requests by large enterprises, its own services unit and systems integrators. He added that Google Cloud will continue to add database managed services.
"There are other areas we're investigating," said Preuss. "Whatever enterprises are asking for we will go build. This extension gets us to the majority of use cases."
https://www.zdnet.com

Wednesday, 25 April 2018

Azure IoT Hub goes basic for cheaper telemetry deployments

If we’re to build a massively scalable internet of things, we’re going to need tools that can handle hundreds of thousands of devices and a massive throughput of data. That’s hard to deliver with on-premises systems, but it’s a job eminently suited to the scale and scalability of the public cloud.
While there’s a Windows for IoT on the hardware side, the heart of Microsoft’s IoT platform is its Azure cloud, with a suite of tools and services that can build massive-scale industrial IoT applications. One key element of that suite is IoT Hub, a routing service that sits between your devices and gateways and your back-end cloud services.

Inside Azure IoT Hub

Azure IoT Hub manages messaging connections to and from your devices, either directly for devices with IP connectivity or via gateways for hardware that uses proprietary or low-power protocols. It sits in Azure, behind edge computing services, providing a management layer and the ability to ingest significant amounts of data from a large number of connected devices.
Microsoft markets it as a key component of an industrial IoT platform, supporting the applications, services, and hardware that go into automating industrial applications, whether on a single site or distributed around the world. Connections are secure, and you can provide declarative rules to route messages to specific applications and services running on Azure. There’s a close relationship between IoT Hub and Event Grid: messages routed by IoT Hub are a source of events for Event Grid’s publish and subscribe service.
The result is a useful tool, but it’s overkill for many IoT scenarios because of the overhead of its built-in management tool. Although complex devices need support for high-level management tools and the ability to reconfigure applications on the fly, many IoT devices are very simple microcontrollers with limited storage, where applications are loaded as firmware.

Introducing the IoT Hub Basic service tier

That’s led to Microsoft’s recent launch of the new Basic service tier for IoT Hub, designed to support simpler devices, working only with device-to-cloud connections.
Much of Azure IoT Hub’s message routing functionality is available in the new Basic tier. However, many management functions have been dropped because they’re not needed by microcontrollers. For example, you can’t use it to configure or update your devices, so there’s no cloud-to-device messaging or support for complex management techniques like digital twins. If you’re deploying hardware only to deliver telemetry to a management application, that shouldn’t be a problem; you’re using microcontroller-based IoT hardware to integrate with existing sensors or adding new sensors to a process.
With a proliferation of low-end IoT chipsets built around wireless communications, and with low-cost cellular connections specifically targeted at IoT scenarios, it’s clear that telemetry is a key scenario for industrial IoT applications. Microsoft’s own Azure IoT Starter Kit is a Wi-Fi-connected sensor board, ideal for building your first IoT Hub application. Arduino-style devices like this are easy to program, easy to deploy, and cheap enough to treat as disposable. If you need to deploy new code, program up a new device and swap it out. It’s likely to be as quick as deploying new firmware over a wireless connection.
If you drill into the available APIs, you’ll also see that there’s a significant difference in the capabilities IoT Hub Basic applications have for interacting with your devices. The Basic tier only offers tools for adding and removing devices, as well as handling incoming data. More complex device interactions need the Standard tier subscription, reinforcing Microsoft’s focus on Basic as a device-to-cloud platform.
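To illustrate the device-to-cloud pattern the Basic tier targets, here is a minimal telemetry sender sketched with Microsoft's azure-iot-device Python SDK; the connection string is a placeholder for the one IoT Hub issues when the device is registered, and the payload is invented.

```python
# Minimal device-to-cloud telemetry sketch for an IoT Hub Basic deployment.
# The connection string below is a placeholder issued when the device is
# registered with the hub; the sensor reading is simulated.
import json, random, time
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=sensor01;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

try:
    for _ in range(10):  # send ten readings, one per minute
        reading = {"deviceId": "sensor01", "temperature": 20 + random.random() * 5}
        client.send_message(Message(json.dumps(reading)))  # device-to-cloud only
        time.sleep(60)
finally:
    client.disconnect()
```
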
Befitting its focus on telemetry, the IoT Hub Basic tier is significantly cheaper than the IoT Hub Standard tier. The price per unit per month for the Basic B1 version is $10 for 400,000 messages per day. If you’re looking at sending significant amounts of data, the B3 offering is a bargain: $500 a month for 300,000 messages a day for each connected device, versus $2,500 a month for the Standard tier.
For industrial telemetry, where you’re using devices to monitor production lines, that can be a significant boost to thin profit margins. Reducing the cost to these levels should also make implementing an industrial IoT program more attractive, letting you take advantage of machine learning-based predictive maintenance, as well as enhanced monitoring of production processes.
Although there’s no free option with the IoT Hub Basic tier, you can use the IoT Hub Standard free tier for prototyping. Once you go into production, you’ll switch to Basic, because the free Standard version gives you only 8,000 messages per day per unit.
Massive IoT deployments can be expensive, even when taking advantage of options like IoT Hub Basic. You can use the Standard free tier to understand what your options are, with a small number of small messages, before scaling up to a full-scale deployment, finally registering different devices on different plans as you gain a picture of message density and complexity. Azure lets you mix and match plans, so where you need a high-resolution view of an operation, you can place a handful of devices on IoT Hub Basic B3 subscriptions. Meanwhile, normal operations and lower-resolution measurements can deploy on IoT Hub Basic B1.
You can manage costs further up the stack by using IoT Hub to feed into Azure Event Grid and then into serverless Azure Functions. The resulting on-demand architecture treats messages from devices as events and processes them appropriately, working with pre-trained machine learning models and with functions that handle alerts to implement an effective industrial control system for only a few hundred dollars a month of per-second Azure billings.
https://www.infoworld.com

Data could one day be stored on molecules

Billions of terabytes of data could be stored in one small flask of liquid, a group of scientists believe. The team from Brown University says soon it will be able to figure out a chemical-derived way of storing and manipulating mass-data by loading it onto molecules and then dissolving the molecules into liquids.

If the method is successful, large-scale, synthetic molecule storage in liquids could one day replace hard drives. It would be a case of the traditional engineering that we’ve always pursued for storage being replaced by chemistry in our machines and data centers.

The U.S. Department of Defense’s Defense Advanced Research Projects Agency (DARPA) has awarded the Brown team $4.1 million to work out how to move the concept forward.

“The aim of this project is to come up with a new form of storage that is many times more compact than what we currently have,” says Brenda Rubenstein, assistant professor of chemistry at the university, in a press release.

The researchers say they’re getting there, but they need to figure out a way to get beyond their current, small proof of concept: an 81-pixel black-and-white image loaded onto 25 unique molecules.

They claim that if they can encode millions of distinct data permutations onto the molecules, and then store the synthesis in liquids, massive amounts of data could be stored in a relatively small amount of liquid. It would be “dizzying quantities,” the researchers say.

They plan to use a technique called an Ugi reaction to achieve it. That’s a chemical way of getting multiple components onto one molecule. It’s currently used in pharmaceutical development, the school explains. A mass spectrometer then reads out the molecular data results.

Space-saving isn’t the only advantage to molecule-level data storage, though. One of the beauties of using liquids over traditional storage is that it’s three-dimensional, the researchers explain. That depth lends itself well to modern computations. It’s better for things such as image recognition and search algorithms, tasks that aren’t as two-dimensional as traditional number crunching.

Not the only chemical medium being researched for data storage
Natural DNA is also a contender for mass storage in a tiny space. I wrote about it a few years ago. In those experiments the scientists put forward the longevity possibilities of the minuscule medium — DNA survived 45,000 years in a Siberian-discovered femur bone, those researchers point out.

Reliability, though, has been an issue in the DNA data storage experiments. But that may change.

Data density improvements, though, are where the scientists are headed in general. And the aforementioned synthetic molecule data storage and DNA exploration may be just the tip of the iceberg in this shift to chemistry.

Changes in methods of coding, too, could significantly diminish the amount of room needed for data. I wrote about “beyond binary” last year. That’s a four-symbol code that proponents say is much more efficient than the two-digit ones and zeroes we use today.

The students in that case want to use dyes triggered by light to achieve it. Chemistry, again, rather than engineering.

https://www.networkworld.com

Couchbase Mobile Secures Data Between Cloud, Edge

NoSQL database and data management software maker Couchbase isn’t well known yet for its new enterprise mobility platform, but it’s already seen some market traction and is moving ahead to the second version of the product.

The Mountain View, Calif.-based company, which also makes what it terms the “world’s first engagement database” focused on UX (user experience), launched Couchbase Mobile 2.0 on April 12. This latest release—a mobile version of the engagement DB—offers enterprise developers SQL queries, full-text search, synchronization and end-to-end security, among other features, to mobilize business applications for better customer experiences and a more effective workforce.

“Mobile first” and “offline first” strategies within the enterprise continue to grow in importance, Couchbase Chief Architect of Mobile Products Wayne Carter told eWEEK. With the enterprise mobile application market expected to reach $6.7 billion by 2022, according to IDC, it is clear that mobile technology continues to revolutionize the way businesses now operate, and that customers, employees and partners are increasingly engaging with businesses through applications.

“Developers can use this platform to share data between individual edge devices, and not have to have an internet connection in between,” Carter said. “This is really about the needs enterprises have when mobilizing their critical processes on the edge.”
Couchbase Mobile extends the Couchbase Data Platform to the edge, securely managing and syncing data from any cloud to all edge devices. Using Couchbase Mobile, organizations can build applications with synchronization from the cloud to edge with guaranteed data availability and millisecond query execution time, irrespective of network connectivity or speed, Carter said.
Built-in enterprise-grade security with end-to-end encryption and data access control from the cloud to the edge helps safeguard data while a consistent programming model for building web and mobile apps simplifies development. With the flexibility to add capacity at every tier, enterprises can easily scale up and down as demand fluctuates, the company said.
The platform includes:
  • Couchbase Lite, an embedded NoSQL database for managing data locally on the device; and
  • Sync Gateway, a secure web gateway that orchestrates data synchronization between Couchbase Lite and Couchbase Server.
New features and benefits in Couchbase Mobile 2.0 include:
SQL query and full-text search: Users can query and search JSON documents stored in Couchbase Lite. Developers can create a mobile app with rich query capabilities and search experiences that mirror those of Google, Yahoo and others.
Data change events: Developers can listen and react to data and query change events to create a richer, more reactive, and more engaging app experience.
Replication over WebSocket: Based on WebSocket, replication is designed to be fast and efficient.
Automatic conflict resolution: With Couchbase Lite 2.0, data conflicts are detected and automatically resolved.
On-device replicas: An on-device replica enables data recovery on the edge.
“Couchbase Mobile 2.0 was designed in partnership with our most innovative Fortune 500 and Global 2000 customers to meet the demanding needs of their business-critical mobile applications. These types of applications require guaranteed 100 percent data availability, millisecond query execution time, end-to-end security, and device side data recovery – regardless of network connectivity and speed,” Carter said.
Couchbase customers include Amadeus, AT&T, Carrefour, Cisco, Comcast, Concur, Disney, Dixons Carphone, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Rakuten/Viber, Tesco, Tommy Hilfiger, Verizon, Wells Fargo and hundreds of other household names.
http://www.eweek.com