Friday, 8 November 2019

IBM launches blockchain-based supply chain service with AI, IoT integration


IBM this week launched a new supply chain service based on its blockchain platform and open-source software from recently acquired Red Hat. The service allows developers and third-party apps to integrate legacy corporate data systems onto a distributed ledger.

Through the use of open APIs, the new Sterling Supply Chain Suite allows distributors, manufacturers and retailers to integrate their own data and networks – as well as those of their suppliers – onto a Hyperledger-based blockchain to track and trace products and parts. Among the data that can be integrated are feeds from IoT sensor systems that provide real-time shipment locations.

"This is the first move from IBM in what we anticipate to be a significant investment in the reinvention of supply chains by global organizations in the coming decades," an IBM spokesperson said via email.

Through APIs, the IBM Sterling Supply Chain Suite ties into legacy infrastructure such as Warehouse Management Systems (WMS), ERP systems, Order Management Systems and commerce applications.
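To make the integration pattern concrete, here is a minimal, hypothetical sketch of pushing a legacy ERP order event into a supply-chain network API. The endpoint URL, payload shape and authentication are illustrative assumptions, not IBM's actual Sterling interfaces.

interface ErpOrder {
  orderId: string;
  sku: string;
  quantity: number;
  shippedAt: string; // ISO 8601 timestamp from the legacy system
}

// Hypothetical shared event format for the supply-chain network.
interface SupplyChainEvent {
  type: "SHIPMENT_CREATED";
  reference: string;
  payload: Record<string, unknown>;
}

async function publishOrderEvent(order: ErpOrder, apiToken: string): Promise<void> {
  // Normalize the legacy ERP record into the shared event format.
  const event: SupplyChainEvent = {
    type: "SHIPMENT_CREATED",
    reference: order.orderId,
    payload: { sku: order.sku, quantity: order.quantity, shippedAt: order.shippedAt },
  };

  // Post the event to a hypothetical supply-chain network endpoint.
  const response = await fetch("https://example.com/supply-chain/v1/events", {
    method: "POST",
    headers: { "Content-Type": "application/json", "Authorization": `Bearer ${apiToken}` },
    body: JSON.stringify(event),
  });

  if (!response.ok) {
    throw new Error(`Event rejected with status ${response.status}`);
  }
}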

Because the new suite falls under the "Sterling" Order Management (SOM) brand name, which IBM acquired from AT&T in 2010, it already has an existing user base of more than 7,000 customers who have an additional 500,000 trading partners, according to Inhi Cho Suh, general manager of IBM's Watson Customer Engagement business unit.

"The complex, global nature of our omni-channel operations presents a significant supply chain challenge that could be turned into a business opportunity if the right technology is applied," said Juan Andres Pro Dios, CIO of El Corte Ingles, Europe's largest department store conglomerate. "The IBM Sterling Supply Chain Suite provides open development capabilities that let us quickly tailor solutions to meet our unique business needs. This allows us to embrace operational complexity while optimizing ... performance and improving omni-channel customer experiences."

IBM integrated Watson's AI capability to offer applications – among them, Order Optimizer and Supply Chain Insights – that can produce real-time alerts and recommendations through its Supply Chain Business Assistant (SCBA). SCBA, for example, can enable faster responses to anomalies like supply chain disruptions.

Simon Ellis, a research director for IDC, said IBM may not be alone in promoting a multi-tenant cloud network for supply chains, but it has advanced AI and blockchain as components of that service more than other vendors.

The new service, Ellis said, is a solid foray into the supply chain market.

"I think companies can leverage this with some other supply chain apps they already have so they don't need to rip and replace stuff," Ellis said. "The value of any blockchain will be square of the number of users it has, so how you make those connections [is] important, and this certainly moves it forward."

Current IBM Sterling SOM clients include companies in distribution, industrial manufacturing, retail and financial services: Adidas, AmerisourceBergen, Fossil, Greenworks, Home Depot, Lenovo, Li & Fung, Misumi, Parker Hannifin, Scotiabank, and Whirlpool Corporation.

Outdoor sports retailer REI, for example, is using Watson Order Optimizer in its supply chain to factor in the various goals it has throughout the year, such as product margin, shipping speed and fulfillment costs, and to match those against inventory in its three distribution centers and 155 stores.

"For us, the one thing we discovered was in existing supply chain networks...the majority of the industry was on point-to-point interactions through EDI systems and paper," Suh said. "Clients want to digitize...and understand the state of where their goods and services might be across multiple parties...so we added a blockchain shared ledger capability on top of our existing network.

"So any customers and their partners in their broader ecosystem have visibility into transactions and interactions they have," Suh continued. "Those transactions could be around invoicing, shipping, delivery – and then the combination of that shared ledger allows you to have a trusted understanding of who those partners are."

Once a customer is logged into the Sterling Supply Chain Suite service, it has its own dashboard that allows it to search the status of a purchase order or product inventory. Users can also quickly onboard trading partners by choosing an "add new partner" icon and then filling out fields that include company name, contact communication protocol (email, for example), and which transactions and data sets they're allowed to view.

"Then you click 'OK' and the other party gets a notice that they click on and they're onboarded," Suh said. "It's pretty fast."

IBM had already launched supply chain network pilots for food, general cargo shipping and even the diamond trade to track products through its cloud-based Hyperledger blockchain platform. The new supply chain network will enable greater integration with existing enterprise ERP and database systems, Suh said.

IBM, for example, has already created an SAP connection to the Sterling Supply Chain service.

"We also created an open framework for applications and ISVs to be able to connect into and expand," Suh said. "It's live and in production."

Five Positive Use Cases for Facial Recognition

While negative headlines around facial recognition tend to dominate the media landscape, the technology is creating positive impacts on a daily basis, even if those stories are often drowned out by the negative noise. It is the mission of industry leaders in computer vision, biometric and facial recognition technologies to help the public see how this technology can solve a range of human problems.

In fact, the industry as a whole is tasked with advocating for clear and sensible regulation, all while applying guiding principles to the design, development and distribution of the technologies they are pursuing. AI solutions are solving real-world problems, with a special focus on deploying this technology for good. 

In this eWEEK Data Points article, Dan Grimm, VP of Computer Vision and GM of SAFR, a RealNetworks company, uses his own industry information to describe five use cases on the socially beneficial side of facial recognition.

Data Point No. 1: Facial Recognition for School Safety 
With school security a top priority for parents, teachers and communities, ensuring a safe space is vitally important. It can be difficult to always monitor who’s coming and going, and school administrators need a way to streamline secure entry onto campus property. 

K-12 schools are using facial recognition for secure access: only authorized individuals, such as teachers and staff, can gain entry to the building. This not only helps keep students safe but also makes it easier for parents, faculty and staff to enter school grounds during off-peak hours.

Facial recognition is being used to alert staff when threats, concerns or strangers are present on school grounds. Any number of security responses can be configured for common if-this-then-that scenarios, including initiating building lockdowns and notifying law enforcement, when needed. 
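The "if-this-then-that" responses described above amount to a small rule table. The sketch below is illustrative only – the event fields, thresholds and actions are assumptions, not any vendor's actual API.

type WatchlistStatus = "authorized" | "unknown" | "flagged";

interface RecognitionEvent {
  status: WatchlistStatus;
  location: string;   // e.g. "main entrance"
  confidence: number; // 0..1 match confidence from the recognition engine
}

// Each rule pairs a condition ("if this") with a response ("then that").
const responseRules: Array<{
  matches: (e: RecognitionEvent) => boolean;
  respond: (e: RecognitionEvent) => void;
}> = [
  {
    matches: (e) => e.status === "flagged" && e.confidence > 0.9,
    respond: (e) => console.log(`Initiating lockdown and notifying law enforcement (${e.location})`),
  },
  {
    matches: (e) => e.status === "unknown",
    respond: (e) => console.log(`Alerting staff to an unrecognized visitor at ${e.location}`),
  },
];

function handleEvent(event: RecognitionEvent): void {
  for (const rule of responseRules) {
    if (rule.matches(event)) rule.respond(event);
  }
}

handleEvent({ status: "unknown", location: "main entrance", confidence: 0.97 });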

Data Point No. 2: Facial Recognition for Health Care 
As our population grows, so does the need for more efficient healthcare. Plain and simple, there isn't time in busy physician offices for mistakes or delays. Facial recognition is revolutionizing the healthcare industry, whether through AI-powered screenings and diagnoses or secure access.

Healthcare professionals are using facial recognition technologies in some patient screening procedures. For example, the technology is being used to identify changes to facial features over time, which in some cases represent symptoms of illnesses that might otherwise require extensive tests to diagnose--or worse, go unnoticed.

Data Point No. 3: Facial Recognition for Disaster Response and Recovery 
When first responders arrive on the scene of an emergency, they're looked to as calming forces amid the chaos. Every moment is critical: each second could spell the difference between favorable and unfavorable outcomes.

A first responder outfitted with a facial recognition bodycam could quickly scan a disaster site for matches to a database of victims. Knowing victims' names immediately enables first responders to deliver more efficient care, improve outcomes and bring faster peace of mind to family members awaiting news of their loved ones.

In critical-care situations, knowing the blood type of each resident in a disaster zone as they are identified by first responders could, in turn, save more lives. This application would require victims' family members to provide photos and blood type information so that emergency responders could scan the disaster area for the blood types needed.

Data Point No. 4: Facial Recognition for Assisting the Blind
In our media-driven world, it can be challenging for blind persons to gain access to information. Finding ways to translate visual information into aural cues to make data more easily accessible has the potential to be life changing. 

Facial recognition apps tuned to facial expressions help blind persons read body language. An app equipped with this technology lets a user "see" a smile by pointing a mobile phone outward: when someone nearby smiles, the phone vibrates. That is a transformative experience for someone who has never seen a smile and otherwise has to rely on other senses to tell whether the people around them are smiling.

Another mobile app is geared toward greater situational awareness for the blind, announcing physical obstacles like a chair or a dog along the way, as well as reading exit signs and currency values when shopping. This not only enables blind persons to navigate their surroundings more efficiently, but also gives them greater control and confidence to go about their everyday lives without the usual hurdles.

Data Point No. 5: Facial Recognition for Missing Persons 
From runaways to victims of abduction and child trafficking, it's believed that tens of thousands of kids go missing every year. That number is unacceptable, especially in our digitally connected world. It is up to us, as technology entrepreneurs, to find new ways to work with local authorities to protect our most vulnerable demographic.

Facial recognition is addressing the missing persons crisis in India. In New Delhi, police reportedly traced nearly 3,000 missing children within four days of launching a new facial recognition system. Using a custom database, the system matched previous images of missing kids with about 45,000 current images of kids around the city.

Because children tend to change in appearance significantly as they mature, facial recognition technology has also been used with images of missing children to identify them years -- or even decades -- later. Parents and guardians provide local authorities with the last known photos they have of their children, and police match those against a missing persons database. Police can then search local shelters, homeless encampments and abandoned homes with this advanced technology, giving parents hope long after investigations have seemingly stalled.

Thursday, 7 November 2019

Software Quality Assurance

SOFTWARE QUALITY ASSURANCE (SQA) is a set of activities for ensuring quality in software engineering processes; it ultimately results in, or at least gives confidence in, the quality of software products.

Definition by ISTQB

  • quality assurance: Part of quality management focused on providing confidence that quality requirements will be fulfilled.

SQA Activities

SQA includes the following activities:
  • Process definition 
  • Process training
  • Process implementation
  • Process audit

SQA Processes

SQA includes the following processes:
  • Project Management
  • Project Estimation
  • Configuration Management
  • Requirements Management
  • Software Design
  • Software Development [Refer to SDLC]
  • Software Testing [Refer to STLC]
  • Software Deployment
  • Software Maintenance
  • etc.
Software Quality Assurance encompasses the entire software development life cycle and the goal is to ensure that the development and maintenance processes are continuously improved to produce products that meet specifications. Note that the scope of Quality is NOT limited to just Software Testing. For example, how well the requirements are stated and managed matters a lot!
 
Once the processes have been defined and implemented, Quality Assurance has the responsibility of identifying weaknesses in the processes and correcting those weaknesses to continually improve them.
 
Capability Maturity Model Integration (CMMI) and ISO 9000 are some quality management systems that are widely used.
 
The process of Software Quality Control (SQC) is also governed by Software Quality Assurance (SQA). Read Differences between Software Quality Assurance and Software Quality Control.
 
SQA is generally shortened to just QA.

Enterprises tap edge computing for IoT analytics

IoT needs edge computing. The world is on pace to hit 41.6 billion connected IoT devices generating 79.4 zettabytes of data in 2025, according to research firm IDC. To make the most of that data, enterprises are investing in compute, storage and networking gear at the edge, including IoT gateways and hyperconverged infrastructure.

Moving processing and analysis to the edge can enable new IoT capabilities – by reducing latency for critical applications, for example – and can improve the speed of alerts while easing network loads.

We talked to IoT early adopters in three different industries to find out how they’re advancing their IoT deployments by building up their edge computing infrastructure. Here’s what we learned.

Managed service provides benefits of edge computing, reduced load on IT staff

SugarCreek is preparing for the next generation of food manufacturing, where high-definition cameras and analytics can work together to quickly mitigate contamination or other processing issues. The only way to handle that automation in a timely fashion, though, is to beef up the company’s edge computing, according to SugarCreek CIO Todd Pugh.

Putting analytics, servers and storage together at the edge to process data from the cameras and IoT sensors on the equipment eliminates the need “to send command and control to the cloud or a centralized data center,” which can take 40 milliseconds to get from one spot to another, Pugh says. “That’s too long to interpret the data and then do something about it without impacting production.” That type of decision-making needs to happen in real time, he says.

Edge computing can be taxing on an IT department, though, with resources distributed across sites. In SugarCreek’s case, six manufacturing plants span the Midwest U.S. SugarCreek plans to move from its internally managed Lenovo edge-computing infrastructure to the recently launched VMware Cloud on Dell EMC managed service. SugarCreek beta-tested the service for Dell EMC and VMware when it was code-named Project Dimension.

SugarCreek already uses edge computing for local access to file and print services and Microsoft Active Directory; to store video from indoor and outdoor surveillance cameras; and to aggregate temperature and humidity sensors to assess how well a machine is running.

Having this data at the edge versus interacting with the data center in real time, which Pugh calls "financially impractical," reduces overall bandwidth demands. For instance, the company can store 30 days' worth of high-definition video without chewing up limited bandwidth. Other data, such as that generated by the sensors, is gathered and then forwarded at regular intervals back to the data center.
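A minimal sketch of that pattern – keep raw readings at the edge and forward only periodic summaries – might look like the following. The summary endpoint, field names and 15-minute interval are illustrative assumptions, not SugarCreek's actual setup.

interface SensorReading {
  machineId: string;
  temperatureC: number;
  humidityPct: number;
  at: number; // epoch milliseconds
}

const buffer: SensorReading[] = [];

function record(reading: SensorReading): void {
  buffer.push(reading); // retained at the edge, not streamed to the data center
}

async function forwardSummary(): Promise<void> {
  if (buffer.length === 0) return;
  // Compute a compact summary of the readings collected in this window.
  const summary = {
    count: buffer.length,
    avgTemperatureC: buffer.reduce((s, r) => s + r.temperatureC, 0) / buffer.length,
    avgHumidityPct: buffer.reduce((s, r) => s + r.humidityPct, 0) / buffer.length,
    window: { from: buffer[0].at, to: buffer[buffer.length - 1].at },
  };
  buffer.length = 0; // clear the local buffer once the batch is summarized

  // Forward only the summary to the data center.
  await fetch("https://example.com/datacenter/sensor-summaries", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(summary),
  });
}

// Send a summary every 15 minutes instead of every raw reading.
setInterval(forwardSummary, 15 * 60 * 1000);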

The managed service will ready SugarCreek for its more advanced surveillance and analytics plans. VMware Cloud on Dell EMC includes an on-premises Dell EMC VxRail hyperconverged infrastructure, VMWare vSphere, vSAN, and NSX SD-WAN.

“The cloud service is fully managed by VMware and if a hard drive fails, Dell comes in and takes care of it” rather than having specialized IT at each site or making an IT team member travel in case of issues, Pugh says, which helps when working with “pretty tight resources."

Implementing edge computing in this way also will enable the team to do at the edge anything it can do at the main data centers. “We’ll be able to secure the edge and, using microsegmentation, treat it like it’s just another data center,” he says. “Switching to a managed service at the edge will allow my people to concentrate on making bacon faster and better rather than worrying about compute power and maintenance.”

Homegrown hyperconverged infrastructure keeps IoT systems on track

Edge computing is helping keep the Wabtec Corp. fleet of 18,000 locomotives on track.

A network of IoT sensors, software embedded in more than 20 computers used to control the locomotive, and human/machine interfaces all send information to be processed in an onboard “mini data center” that handles data acquisition, algorithms and storage. The thousands of messages that come from each locomotive assist the company in “getting ahead of and visibility into 80 percent of failures that occur,” according to Glenn Shaffer, prognostics leader for Wabtec Global Services. That has led him to refer to the edge as “a diagnostic utopia.”

Wabtec (which recently merged with GE Transportation) is not new to data aggregation using wireless sensors, though. The rail transport company first started using a version of IoT (before it was called IoT) on its locomotives in 2000 but found capabilities constrained by the expense of satellite communications, which at the time was the only option to transmit information back and forth to the data center. Also, trains travel through a multitude of climates, terrain and obstructions (such as tunnels), making connections unreliable.

With edge computing, though, the information generated onboard now can be analyzed, reacted upon, and stored within the confines of the locomotive and without exhausting costly bandwidth. Wabtec’s homegrown rugged mini data center can sense critical failures and respond in real time.

For example, the custom infrastructure monitors parts such as cylinders, gauges their wear, maps that against the demands of upcoming routes such as an intense climb up a mountain, and schedules maintenance before the part has a chance to fail, according to John Reece, Wabtec Freight Global Services CIO.

Similarly, if the onboard mini data center receives an alert that an axle is beginning to lock up, torque can automatically be redistributed to the other axles, preventing a costly breakdown that would require a crane to be brought in to move the vehicle. “Some things fail fast on a locomotive, requiring decisions that run at the millisecond level, so we have to act quickly,” Reece says.

While edge computing is perfectly suited to such “fast fails,” Wabtec also relies on cloud resources for more comprehensive monitoring of the locomotive environment, Reece says. For instance, once failing parts are detected and mitigated onboard, maintenance shops are alerted via an edge-attached cell modem so they can order parts and schedule appropriate technicians to perform the repair work. Logistics teams receive a heads-up so they can alert customers to delays, conduct re-routes or assign replacement locomotives.

Maintenance shops, which Shaffer considers part of Wabtec's edge-computing strategy as well because of the computing capacity placed there, also serve as great outposts for full-volume data uploads. Once a locomotive pulls in, techs link the mini data center to the cloud via a high-speed connection and upload all the stored data. That data is used to conduct fuel performance analyses, lifecycle management and to develop predictive/prescriptive analytics via a big-data platform.

The Wabtec team is careful not to overload the onboard system with unnecessary data, minimizing the number of sensors and leaving some insight, such as the status of windshield wipers, to humans. Even as 5G wireless connections come into play as well as the emergence of autonomous trains, Reece says it will be important to be discriminating about where sensors are placed, what data is collected onboard, and how it is processed at the edge. Already IT operates on a philosophy of updating compute power 10x the current state “and it still gets obsolete quickly.” Storage, he finds, has the same issue. “Connectivity along the routes will never be 100 percent reliable, and there’s a risk associated with bogging down the system at the edge where these decisions get made,” he says.

Edge computing complements public cloud resources

Evoqua Water Technologies, a provider of mission-critical water-treatment solutions, is a veteran of IoT technology. For more than a decade it has relied on sensors attached to and embedded in its equipment to remotely monitor its purifying and filtration systems, collect data, and then leverage any insights internally and externally for customers.

“Data transmission was very, very expensive, leading us to only send what was important,” says Scott Branum, senior manager of digital solutions at Evoqua. If the equipment were running correctly, data from the sensors would only be sent once a day. However, if an alarm went off, all relevant data would be relayed to the data center. This methodology is how Evoqua controlled its cellular costs.

More recently, Evoqua has migrated to edge computing, embedding a small Linux-based gateway device from Digi International to its water treatment systems. While data generated from sensors and other inputs eventually flows from that compute and storage gateway via cellular connectivity to a data processing platform in the Microsoft Azure cloud, some business logic is enacted at the edge.

“We are taking various points of data and aggregating them via proprietary algorithms so business rules can be triggered as necessary,” Branum says. For instance, if a catastrophic incident is detected, analytics at the edge instruct the system to shut itself down based on predefined rules. “There are some things that happen where we can’t wait to take action, and we certainly can’t wait until data is transmitted once a day and then analyzed,” he says.
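As a rough illustration of such edge-side business rules, the sketch below shows a threshold check that shuts equipment down locally and a softer anomaly path that alerts the off-site team. The thresholds, field names and controller interface are assumptions, not Evoqua's actual system.

interface PumpTelemetry {
  pressurePsi: number;
  vibrationMm: number;
}

interface Controller {
  shutdown(reason: string): void;
  notifyTechTeam(message: string): void;
}

const PRESSURE_LIMIT_PSI = 120;
const VIBRATION_LIMIT_MM = 3.5;

function evaluateAtEdge(t: PumpTelemetry, controller: Controller): void {
  if (t.pressurePsi > PRESSURE_LIMIT_PSI) {
    // Catastrophic condition: act immediately at the edge, no round trip to the cloud.
    controller.shutdown(`pressure ${t.pressurePsi} psi exceeds limit`);
  } else if (t.vibrationMm > VIBRATION_LIMIT_MM) {
    // Anomaly: keep running, but alert the off-site tech team with context.
    controller.notifyTechTeam(`vibration of ${t.vibrationMm} mm detected in pump`);
  }
}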

The edge computing setup is also programmed to detect anomalies in equipment performance, pinpoint the issue, and alert an off-site tech team, without involving the data center. “We’re not just sending out labor to check on a piece of equipment and see how it’s running; we’re getting business intelligence noting a vibration was detected in a specific pump as well as possible solutions,” Branum says. Not only is a technician’s time put to better use on value-added activities, but also the appropriate skill set can be deployed based on the issue at hand.

Branum’s team keeps a close eye on inputs and fine-tunes sensor data to avoid false alarms. “We spend so much time on the front end thinking how we are going to use the edge,” he says. “There hasn’t been a deployment yet that we haven’t had to refine. If the market – our customers – tells us there is no value in certain functionality and we are only creating nuisance alarms, we change it.”

Outside of immediate decisions that need to be made at the edge, data is sent to the data center for deeper analysis such as cost–benefit reports and lifecycle planning. Branum says using the public cloud rather than a private data center has helped reduce development costs and keep up with industry standards on security. “We are trying to design an edge with Digi International and a centralized data processing platform with Azure that can scale together over time,” he says.

Monday, 4 November 2019

The age of agile: transforming QA

In the age of agile, quality assurance (QA) has evolved from ‘test everything’ to ‘test as fast as you can’.
As teams race to adopt agile and DevOps and the need for speed overtakes quality, QA is in danger of being left behind. Traditionally, software quality assurance (SQA) is defined as a planned and systematic approach to evaluating the quality of, and adherence to, software product standards, processes, and procedures. This systematic approach looks quite different in Agile and non-Agile environments.
How can traditional QA survive?
QA needs to change. It needs to know what it can test and when – but, more importantly, what it cannot do given the limits of time and resources. It has to transform from staid, steady, find-the-bugs ways of working and become fully integrated into the product lifecycle, from the first ideas through delivery of that first viable product and beyond, into ongoing support and growth.
Sogeti uses a QA transformation process, built on years of experience, that provides an approach to measure, manage and integrate processes to deliver financial and timely benefits on a consistent basis. It also enables teams to build quality metrics and assessment models to measure the impact of newly adopted technology on the overall transformation process.
QA transformation approach
  • focus on increasing the level of test automation
  • adopt smart solutions and build an appropriate strategy for them
  • build and elevate new QA and testing skills and embed them in Agile teams
  • develop a strategy for testing AI – incorporating analytics, self-learning and RPA.

Plugging the QA talent gap
Currently there is a talent shortage. Automation engineers fluent in every tool ever created, with a background in software development, networks, marketing, finance, and object and functional programming, are few and far between – especially when they also need to be experts in agile, DevOps and mobile.
Smaller QA teams can’t test at the speed and scale of the software being developed. For example, the World Quality Report revealed that 42% of respondents listed a lack of professional test expertise in agile teams as a challenge in applying testing to agile development.
Platforms are diversifying and changing quickly – look how much mobile devices have changed in the last 10 years. Add IoT and connected devices such as Amazon’s Alexa family or Google Home, and the range of platforms that needs testing stretches even the biggest teams and organisations, let alone the rest of the QA industry. These circumstances demand that dedicated teams broaden their skills, and QA teams must diversify their intake of new members to reflect this.
Adopting RPA
Demand for RPA grows at a rate of 20 to 30 percent every quarter, and it’s no mystery as to why. It empowers modern workers to automate tedious knowledge work through a non-technical user interface — click, drag and drop. Business users can simply tell one of these trained software robots to do something, and it will do it.
RPA can enable high-quality performance from your IT systems. It reduces the amount of end-to-end testing that would be required when building out custom APIs, integrations and ETL logic to dissolve information silos.
RPA’s ability to automate dull, repetitive tasks is key to modern businesses. Its ability to remove drudge work, reduce errors and free staff to focus on more important and rewarding activities is key to QA’s survival.
QA teams will need to learn to test RPA scripts and products, and in many cases writing the RPA scripts will fall to people in agile test teams – after all, they are already doing this. It’s automation, but instead of proving functionality is correct, it takes the correct functionality and automates it. How long before UAT stops being about the users and shifts to making sure the bots can do their jobs?
Adopting automation
Automation drives the ‘test early, test often, test everywhere’ strategy in DevOps and Agile enterprises. Automation is not a replacement for internal testing; rather, it augments the strategy by taking on lower-priority tests (smoke and regression testing, for example) that can be carried out without a human touch.
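As a minimal illustration, a smoke test like the sketch below can run unattended on every build. It uses Node's built-in test runner; the URLs and page markup are placeholders for whatever the team actually deploys.

import test from "node:test";
import assert from "node:assert/strict";

// Placeholder base URL; point this at the environment under test.
const BASE = "https://example.com";

test("health endpoint responds", async () => {
  const response = await fetch(`${BASE}/health`);
  assert.equal(response.ok, true);
});

test("login page serves the sign-in form", async () => {
  const html = await (await fetch(`${BASE}/login`)).text();
  assert.match(html, /<form[^>]*id="sign-in"/);
});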
Automation approaches vary from team to team. Smaller teams, for example, should start with an off-the-shelf enterprise solution, allowing them to get up and running straight away with support to handle software bugs along the way. Transforming SDLCs without transforming the testing approach is just a futile attempt at optimisation.
Testing often seemed to be an afterthought, a last-minute protocol in legacy SDLCs. Agile and DevOps put testing at the forefront, bringing in the ‘shift-left’ mindset. But manual testing takes away the essence of these modern methodologies, making testing an inhibitor rather than a facilitator in delivering quality products at speed.
Automated testing upholds the efficiency of Agile sprints and empowers collaboration in DevOps teams. As enterprises adopt an ‘Agile+DevOps’ approach to software development, automated testing becomes the key to successful and timely deployment.
Real-world testing
Real-world testing puts software in front of real users on personal devices in their home environments. This allows brands to uncover edge use cases and functional problems that only exist in the real world.
This method can be started quickly, and has strong scalability benefits, meaning that agile teams can add more testers on-demand to account for peak periods.  Real-world testing is a valuable solution for brands looking to expand test coverage or those that don’t have the resources to replicate real-world scenarios themselves.
QA transformation requires a four-factor change including people, processes, technology and tools. By making QA omnipresent in an SDLC, enterprises can guarantee quality releases at a faster pace with significant decrease in post-production defects.

What is JSON? A better format for data exchange

JavaScript Object Notation (JSON) is a schema-less, text-based representation of structured data that is based on key-value pairs and ordered lists. Although JSON is derived from JavaScript, it is supported either natively or through libraries in most major programming languages. JSON is commonly, but not exclusively, used to exchange information between web clients and web servers.

Over the last 15 years, JSON has become ubiquitous on the web. Today it is the format of choice for almost every publicly available web service, and it is frequently used for private web services as well.

The popularity of JSON has also resulted in native JSON support by many databases. Relational databases like PostgreSQL and MySQL now ship with native support for storing and querying JSON data. NoSQL databases like MongoDB and Neo4j also support JSON, though MongoDB uses a slightly modified, binary version of JSON (BSON) behind the scenes.

In this article, we’ll take a quick look at JSON and discuss where it came from, its advantages over XML, its drawbacks, when you should use it, and when you should consider alternatives. But first, let’s dive into the nitty gritty of what JSON looks like in practice.

JSON example
Here’s an example of data encoded in JSON:

{
  "firstName": "Jonathan",
  "lastName": "Freeman",
  "loginCount": 4,
  "isWriter": true,
  "worksWith": ["Spantree Technology Group", "InfoWorld"],
  "pets": [
    {
      "name": "Lilly",
      "type": "Raccoon"
    }
  ]
}
The structure above clearly defines some attributes of a person. It includes a first and last name, the number of times the person has logged in, whether this person is a writer, a list of companies the person works with, and a list of the person’s pets (only one, in this case). A structure like the one above may be passed from a server to a web browser or a mobile application, which will then perform some action such as displaying the data or saving it for later reference.

JSON is a generic data format with a minimal number of value types: strings, numbers, booleans, lists, objects, and null. Although the notation is a subset of JavaScript, these types are represented in all common programming languages, making JSON a good candidate to transmit data across language gaps.
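For example, the JSON document above maps directly onto ordinary types in a typed language. A TypeScript description might look like this (the interface names are just illustrative):

interface Pet {
  name: string;
  type: string;
}

interface Person {
  firstName: string;
  lastName: string;
  loginCount: number;  // JSON number
  isWriter: boolean;   // JSON boolean
  worksWith: string[]; // JSON array of strings
  pets: Pet[];         // JSON array of objects
}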

JSON files
JSON data is stored in files that end with the .json extension. In keeping with JSON’s human-readable ethos, these are simply plain text files and can be easily opened and examined. As the SQLizer blog explains, this is also a key to JSON’s wider interoperability, as just about every language you can name can read and process plain text files, and they’re easy to send over the Internet.
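For instance, loading a .json file is nothing more than reading a text file and parsing it. A short Node.js sketch, assuming the example above is saved as person.json:

import { readFileSync } from "node:fs";

const text = readFileSync("person.json", "utf8"); // just a plain text file on disk
const person = JSON.parse(text);                  // now an ordinary object
console.log(person.firstName);                    // "Jonathan"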

Why should I use JSON? 
To understand the usefulness and importance of JSON, we’ll have to understand a bit about the history of interactivity on the web. 

In the early 2000s, interactivity on the web began to transform. At the time, the browser served mainly as a dumb client to display information, and the server did all of the hard work to prepare the content for display. When a user clicked on a link or a button in the browser, a request would be sent to the server, the server would prepare the information needed as HTML, and the browser would render the HTML as a new page. This pattern was sluggish and inefficient, requiring the browser to re-render everything on the page even if only a section of the page had changed.

Because full-page reloads were costly, web developers looked to newer technologies to improve the overall user experience. Meanwhile, the capability of making web requests in the background while a page was being shown, which had recently been introduced in Internet Explorer 5, was proving to be a viable approach to loading data incrementally for display. Instead of reloading the entire contents of the page, clicking the refresh button would trigger a web request that would load in the background. When the contents were loaded, the data could be manipulated, saved, and displayed on the page using JavaScript, the universal programming language in browsers.
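The same background-request pattern is still how incremental page updates work today. The sketch below uses the modern fetch API rather than the original XMLHttpRequest; the URL and element ID are illustrative.

async function refreshMessages(): Promise<void> {
  const response = await fetch("/api/messages"); // background request, no page reload
  const messages: { text: string }[] = await response.json();

  const list = document.getElementById("message-list");
  if (list) {
    // Re-render only this list, not the whole page.
    list.innerHTML = messages.map((m) => `<li>${m.text}</li>`).join("");
  }
}

refreshMessages();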

REST vs. SOAP: The JSON connection
Originally, this data was transferred in XML format (see below for an example) using a messaging protocol called SOAP (Simple Object Access Protocol). But XML was verbose and difficult to manage in JavaScript. JavaScript already had objects, which are a way of expressing data within the language, so Douglas Crockford took a subset of that expression as a specification for a new data interchange format and dubbed it JSON. JSON was much easier for people to read and for browsers to parse.

Over the course of the ’00s, another Web services technology, called Representational State Transfer, or REST, began to overtake SOAP for the purpose of transferring data. One of the big advantages of programming using REST APIs is that you can use multiple data formats — not just XML, but JSON and HTML as well. As web developers came to prefer JSON over XML, so too did they come to favor REST over SOAP. As Kostyantyn Kharchenko put it on the Svitla blog, “In many ways, the success of REST is due to the JSON format because of its easy use on various platforms.”

Today, JSON is the de facto standard for exchanging data between web and mobile clients and back-end services.

JSON vs. XML
As noted above, the main alternative to JSON is XML. However, XML is becoming less and less common in new systems, and it’s easy to see why. Below is a version of the data you saw above, this time in XML:

<?xml version="1.0"?>
<person>
  <first_name>Jonathan</first_name>
  <last_name>Freeman</last_name>
  <login_count>4</login_count>
  <is_writer>true</is_writer>
  <works_with_entities>
    <works_with>Spantree Technology Group</works_with>
    <works_with>InfoWorld</works_with>
  </works_with_entities>
  <pets>
    <pet>
      <name>Lilly</name>
      <type>Raccoon</type>
    </pet>
  </pets>
</person>
In addition to being more verbose (exactly twice as verbose in this case), XML also introduces some ambiguity when parsing into a JavaScript-friendly data structure. Converting XML to a JavaScript object can take from tens to hundreds of lines of code and ultimately requires customization based on the specific object being parsed. Converting JSON to a JavaScript object takes one line of code and doesn’t require any prior knowledge about the object being parsed.
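For example, parsing JSON really is a one-liner with the built-in parser:

// The JSON from the example above, held as a string:
const jsonText = '{"firstName":"Jonathan","pets":[{"name":"Lilly","type":"Raccoon"}]}';

const person = JSON.parse(jsonText); // one line: text in, object out
console.log(person.pets[0].name);    // "Lilly"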

Limitations of JSON
Although JSON is a relatively concise, flexible data format that is easy to work with in many programming languages, there are some drawbacks to the format. Here are the five main limitations: 

  1. No schema. On the one hand, that means you have total flexibility to represent the data in any way you want. On the other, it means you could accidentally create misshapen data very easily.
  2. Only one number type: the IEEE-754 double-precision floating-point format. That’s quite a mouthful, but it simply means that you cannot take advantage of the diverse and nuanced number types available in many programming languages.
  3. No date type. This omission means developers must resort to using string representations of dates, leading to formatting discrepancies, or must represent dates in the form of milliseconds since the epoch (January 1, 1970).
  4. No comments. This makes it impossible to annotate fields inline, requiring additional documentation and increasing the likelihood of misunderstanding.
  5. Verbosity. While JSON is less verbose than XML, it isn’t the most concise data interchange format. For high-volume or special-purpose services, you’ll want to use more efficient data formats.

When should I use JSON?
If you’re writing software that communicates with a browser or native mobile application, you should use JSON as the data format. Using a format like XML is an out-of-date choice and a red flag to front-end and mobile talent you’d otherwise like to attract.

In the case of server-to-server communication, you might be better off using a serialization framework like Apache Avro or Apache Thrift. JSON isn’t a bad choice here, and still might be exactly what you need, but the answer isn’t as clear as for web and mobile communication.

If you’re using NoSQL databases, you’re pretty much stuck with whatever the database gives you. In relational databases that support JSON as a type, a good rule of thumb is to use it as little as possible. Relational databases have been tuned for structured data that fits a particular schema. While most now support more flexible data in the form of JSON, you can expect a performance hit when querying for properties within those JSON objects.

JSON is the ubiquitous, de facto format for sending data between web servers and browsers and mobile applications. Its simple design and flexibility make it easy to read and understand, and in most cases, easy to manipulate in the programming language of your choice. The lack of a strict schema enables flexibility of the format, but that flexibility sometimes makes it difficult to ensure that you’re reading and writing JSON properly.

JSON parser
The part of an application’s code that transforms data stored as JSON into a format the application can use is called a parser. JavaScript, as you’d expect, includes a native parser, the JSON.parse() method.

You may have to do a little more work to use JSON in strongly typed languages like Scala or Elm, but the widespread adoption of JSON means there are libraries and utilities to help you through all of the hardest parts.

The json.org website includes a comprehensive list of code libraries you can use to parse, generate, and manipulate JSON, in languages as diverse as Python, C#, and COBOL.

JSON utilities
If you're looking to manipulate or examine JSON-encoded data directly, without writing code yourself, there are a number of online utilities that can help you. All have programmatic equivalents in the code libraries linked above, but you can cut and paste JSON code into these browser-based tools to help you understand JSON better or perform quick-and-dirty analysis:

JSON Formatter: JSONLint will format and validate arbitrary JSON code.
JSON Viewer: Stack.hu has a site that will create an interactive tree to help you understand the structure of your JSON code. 
JSON Beautifier: If you want to “pretty print” your JSON code, with syntax coloring and the like, Prettydiff can help you out. 
JSON Converter: Need to quickly move data from a JSON format into something else? Convertcsv.com has tools that can convert JSON to CSV (which can then be opened in Excel) or XML.
JSON tutorial

Ready to dive in and learn more about how to work with JSON in your interactive applications? The Mozilla Developer Network has a great tutorial that will get you started with JSON and JavaScript. If you’re ready to move on to other languages, check out tutorials on using JSON with Java (from Baeldung), with Python (from DataCamp), or with C# (from Software Testing Help). Good luck!

Google Dex language simplifies array math for machine learning

Engineers at Google have unveiled Dex, a prototype functional language designed for array processing. Array processing is a cornerstone of the math used in machine learning applications and other computationally intensive work.
The chief goal for the Dex language, according to a paper released by Google researchers, is to allow programmers to work efficiently and concisely with arrays using a compact, functional syntax.
Dex, patterned after the Haskell and ML family of languages, uses type information to make writing code for processing arrays both succinct and explicit. Introductory Dex examples show how the type system works with both regular values (integers and reals) and arrays. Other examples show how to express common problems such as estimating pi or plotting a Mandelbrot fractal.
Like Python or the R language, Dex can run prewritten programs from the CLI, interactively in a REPL, or by way of a notebook-style interface. The current prototype supports all three modes.
Dex uses the LLVM language-compiler framework, which powers many general-purpose languages like Rust and Swift. LLVM is also proving useful for constructing domain-specific languages, or DSLs—languages designed to ease the handling of a deliberately small set of tasks. Other LLVM-powered DSL projects for computational work include DLVM, a compiler for DSLs used in neural networks; and Triton, an intermediate language and compiler used for tiled neural networks.
Dex is still early-stage experimental and not officially supported by Google in any capacity. The largest still-missing piece is integration with other languages, where Dex could be used for offloading computationally intensive work (for instance, as libraries like Numba do for Python). The Dex project, which is licensed under the BSD 3-clause license, welcomes contributions and collaboration.