Wednesday, 14 December 2022

What is SASE? A cloud service that marries SD-WAN with security

 Secure Access Service Edge (SASE) is a network architecture that combines software-defined wide area networking (SD-WAN) and security functionality into a unified cloud service that promises simplified WAN deployments, improved efficiency and security, and application-specific bandwidth policies.

First outlined by Gartner in 2019, SASE (pronounced “sassy”) has quickly evolved from a niche, security-first SD-WAN alternative into a popular WAN sector that analysts project will grow to become a $10-billion-plus market within the next couple of years.

Market research firm Dell’Oro Group forecasts that the SASE market will triple by 2026, topping $13 billion. Gartner is more bullish, predicting that the SASE market will grow at a 36% CAGR between 2020 and 2025, reaching $14.7 billion.

What is SASE?

SASE consolidates SD-WAN with a suite of security services to help organizations safely accommodate an expanding edge that includes branch offices, public clouds, remote workers and IoT networks.

While some SASE vendors offer hardware appliances to connect edge users and devices to nearby points of presence (PoPs), most vendors handle the connections through software clients or virtual appliances. SASE is typically consumed as a single service, but there are a number of moving parts, so some SASE offerings piece together services from various partners.

On the networking side, the key features of SASE are WAN optimization, content delivery network (CDN), caching, SD-WAN, SaaS acceleration, and bandwidth aggregation. The vendors that make the WAN side of SASE work include SD-WAN providers, carriers, content-delivery networks, network-as-a-service (NaaS) providers, bandwidth aggregators and networking equipment vendors.

The security features of SASE can include encryption, multifactor authentication, threat protection, data leak prevention (DLP), DNS, Firewall-as-a-Service (FWaaS), Secure Web Gateway (SWG), and Zero Trust Network Access (ZTNA). The security side of SASE relies on a range of providers, including cloud-access security brokers, cloud secure web gateways providers, zero-trust network access providers, and more.

The feature set will vary from vendor to vendor, and the top SASE vendors are investing in advanced capabilities, such as support for 5G for WAN links, advanced behavior- and context-based security capabilities, and integrated AIOps for troubleshooting and automatic remediation.

Ideally, all these capabilities are offered as a unified SASE service by a single service provider, even if certain components are white labeled from other providers.

What are the benefits of SASE?

Because it is billed as a unified service, SASE promises to cut complexity and cost. Enterprises deal with fewer vendors, the amount of hardware required in branch offices and other remote locations declines, and the number of agents on end-user devices also decreases.

SASE removes management burdens from IT’s plate, while also offering centralized control for things that must remain in-house, such as setting user policies. IT executives can set policies centrally via cloud-based management platforms, and the policies are enforced at distributed PoPs close to end users. Thus, end users receive the same access experience regardless of what resources they need, and where they and the resources are located.

SASE also simplifies the authentication process by applying appropriate policies for whatever resources the user seeks, based on the initial sign-in. SASE also supports zero-trust networking, which controls access based on user, device and application, not location and IP address.

Security is increased because policies are enforced equally regardless of where users are located. As new threats arise, the service provider addresses how to protect against them, with no new hardware requirements for the enterprise.

More types of end users – employees, partners, contractors, customers – can gain access without the risk that traditional security – such as VPNs and DMZs – might be compromised and become a beachhead for potential attacks on the enterprise.

SASE providers can supply varying qualities of service, so each application gets the bandwidth and network responsiveness it needs. With SASE, enterprise IT staff have fewer chores related to deployment, monitoring and maintenance, and can be assigned higher-level tasks.

What are the SASE challenges?

Organizations thinking about deploying SASE need to address several potential challenges. For starters, some features could come up short initially because they are implemented by providers with backgrounds in either networking or security, but might lack expertise in the area that is not their strength.

Another issue to consider is whether the convenience of an all-in-one service meets the organization’s needs better than a collection of best-in-breed tools.

SASE offerings from a vendor with a history of selling on-premises hardware may not be designed with a cloud-native mindset. Similarly, legacy hardware vendors may lack experience with the in-line proxies needed by SASE, so customers may run into unexpected cost and performance problems.

Some traditional vendors may also lack experience in evaluating user contexts, which could limit their ability to enforce context-dependent policies. Due to SASE’s complexity, providers may have a feature list that they say is well integrated, but which is really a number of disparate services that are poorly stitched together.

Because SASE promises to deliver secure access to the edge, the global footprint of the service provider is important. Building out a global network could prove too costly for some SASE providers. This could lead to uneven performance across locations because some sites may be located far from the nearest PoP, introducing latency.

SASE transitions can also put a strain on personnel. Turf wars could flare up as SASE cuts across networking and security teams. Changing vendors to adopt SASE could also require retraining IT staff to handle the new technology.

What is driving the adoption of SASE?

The key drivers for SASE include supporting hybrid clouds, remote and mobile workers, and IoT devices, as well as finding affordable replacements for expensive technologies like MPLS and IPsec VPNs.

As part of digital transformation efforts, many organizations are seeking to break down tech siloes, eliminate outdated technologies like VPNs, and automate mundane networking and security chores. SASE can help with all of those goals, but you’ll need to make sure vendors share a vision for the future of SASE that aligns with your own.

According to Gartner, there are currently more traditional data-center functions hosted outside the enterprise data center than in it – in IaaS providers’ clouds, in SaaS applications, and in cloud storage. The needs of IoT and edge computing will only increase this dependence on cloud-based resources, yet typical WAN security architectures remain tailored to on-premises enterprise data centers.

In a post-COVID, hybrid work economy, this poses a major problem. The traditional WAN model requires that remote users connect via VPNs, with firewalls at each location or on individual devices. Traditional models also force users to authenticate to centralized security that grants access but may also route traffic through that central location.

This model does not scale. Moreover, this legacy architecture was already showing its age before COVID hit, but today its complexity and delay undermine competitiveness.

With SASE, end users and devices can authenticate and gain secure access to all the resources they are authorized to reach, and users are protected by security services located in clouds close to them. Once authenticated, they have direct access to the resources, addressing latency issues.

What is the SASE architecture?

Traditionally, the WAN was composed of stand-alone infrastructure, often requiring a heavy investment in hardware. SD-WAN didn’t replace this, but rather augmented it, removing non-mission-critical and/or non-time-sensitive traffic from expensive links.

In the short term, SASE might not replace traditional services like MPLS, which will endure for certain types of mission-critical traffic, but on the security side, tools such as IPsec VPNs will likely give way to cloud-delivered alternatives.

Other networking and security functions will be decoupled from underlying infrastructure, creating a WAN that is cloud-first, defined and managed by software, and run over a global network that, ideally, is located near enterprise data centers, branches, devices, and employees.

With SASE, customers can monitor the health of the network and set policies for their specific traffic requirements. Because traffic from the internet first goes through the provider’s network, SASE can detect dangerous traffic and intervene before it reaches the enterprise network. For example, DDoS attacks can be mitigated within the SASE network, saving customers from floods of malicious traffic.

What are the core security features of SASE?

The key security features that SASE provides include:  

- Firewall as a Service (FWaaS)

In today’s distributed environment, both users and computing resources are located at the edge of the network. A flexible, cloud-based firewall delivered as a service can protect these edges. This functionality will become increasingly important as edge computing grows and IoT devices get smarter and more powerful.

Delivering FWaaS as part of the SASE platform makes it easier for enterprises to manage the security of their network, set uniform policies, spot anomalies, and quickly make changes.

- Cloud Access Security Broker (CASB)

As corporate systems move away from on-premises to SaaS applications, authentication and access become increasingly important. CASBs are used by enterprises to make sure their security policies are applied consistently even when the services themselves are outside their sphere of control.

With SASE, the same portal employees use to get to their corporate systems is also a portal to all the cloud applications they are allowed to access, including CASB. Traffic doesn't have to be routed outside the system to a separate CASB service.

- Secure Web Gateway (SWG)

Today, network traffic is rarely limited to a pre-defined perimeter. Modern workloads typically require access to outside resources, but there may be compliance reasons to deny employees access to certain sites. In addition, companies want to block access to phishing sites and botnet command-and-control servers. Even innocuous web sites may be used maliciously by, say, employees trying to exfiltrate sensitive corporate data.

SWGs protect companies from these threats. SASE vendors that offer this capability should be able to inspect encrypted traffic at cloud scale. Bundling SWG with other network security services improves manageability and allows for a more uniform set of security policies.

- Zero Trust Network Access (ZTNA)

Zero Trust Network Access provides enterprises with granular visibility and control of users and systems accessing corporate applications and services.

A core element of ZTNA is that security is based on identity, rather than, say, IP address. This makes it more adaptable for a mobile workforce, but requires additional levels of authentication, such as multi-factor authentication and behavioral analytics.

What other technologies may be part of SASE?

In addition to those four core security capabilities, various vendors offer a range of additional features.

These include web application and API protection, remote browser isolation, DLP, DNS, unified threat protection, and network sandboxes. Two features many enterprises will find attractive are network privacy protection and traffic dispersion, which make it difficult for threat actors to find enterprise assets by tracking their IP addresses or eavesdrop on traffic streams.

Other optional capabilities include Wi-Fi-hotspot protection, support for legacy VPNs, and protection for offline edge-computing devices or systems.

Centralized access to network and security data can allow companies to run holistic behavior analytics and spot threats and anomalies that otherwise wouldn't be apparent in siloed systems. When these analytics are delivered as a cloud-based service, it will be easier to include updated threat data and other external intelligence.

The ultimate goal of bringing all these technologies together under the SASE umbrella is to give enterprises flexible and consistent security, better performance, and less complexity – all at a lower total cost of ownership.

Enterprises should be able to get the scale they need without having to hire a correspondingly large number of network and security administrators.

Who are the top SASE providers?

The leading SASE vendors include both established networking incumbents and well-funded startups. Many telcos and carriers also either offer their own SASE solutions (which they have typically gained through acquisitions) or resell and/or white-label services from pure-play SASE providers. Top vendors, in alphabetical order, include:

  • Akamai
  • Broadcom
  • Cato Networks
  • Cisco
  • Cloudflare
  • Forcepoint
  • Fortinet
  • HPE
  • Netskope
  • Palo Alto Networks
  • Perimeter 81
  • Proofpoint
  • Skyhigh Security
  • Versa
  • VMware
  • Zscaler

How to adopt SASE

Enterprises that must support a large, distributed workforce, a complicated edge with far-flung devices, and hybrid/multi-cloud applications should have SASE on their radar. For those with existing WAN investments, the logical first step is to investigate your WAN provider’s SASE services or preferred partners.

On the other hand, if your existing WAN investments are sunk costs that you’d prefer to walk away from, SASE offers a way to outsource and consolidate both WAN and security functions.

Over time, the line between SASE and SD-WAN will blur, so choosing one over the other won’t necessarily lock you into a particular path, aside from the constraints that vendors might erect.

For most enterprises, however, SASE will be part of a hybrid WAN/security approach. Traditional networking and security systems will handle pre-existing connections between data centers and branch offices, while SASE will be used to handle new connections, devices, users, and locations.

SASE isn't a cure-all for network and security issues, nor is it guaranteed to prevent future disruptions, but it will allow companies to respond faster to disruptions or crises and to minimize their impact on the enterprise. In addition, SASE will allow companies to be better positioned to take advantage of new technologies, such as edge computing, 5G and mobile AI.

https://www.networkworld.com/

Saturday, 19 November 2022

How intelligent automation will change the way we work

Automation in the workplace is nothing new — organizations have used it for centuries, points out Rajendra Prasad, global automation lead at Accenture and co-author of The Automation Advantage. In recent decades, companies have flocked to robotic process automation (RPA) as a way to streamline operations, reduce errors, and save money by automating routine business tasks.

Now organizations are turning to intelligent automation to automate key business processes to boost revenues, operate more efficiently, and deliver exceptional customer experiences. Intelligent automation is a smarter version of RPA that makes use of machine learning, artificial intelligence (AI) and cognitive technologies such as natural language processing to handle more complex processes, guide better business decisions, and shed light on new opportunities, said Prasad.

For example, Newsweek has automated many aspects of managing its presence on social media, a crucial channel for broadening its reach and reputation, said Mark Muir, head of social media at the news magazine. Newsweek staffers used to manage every aspect of its social media postings by hand, which involved selecting and sharing each new story to its social pages, figuring out what content to recycle, and testing different strategies. By moving to a more automated approach, the company now spends much less time on these processes.

“We use Echobox’s automation to help determine which content should be shared to our social media and to optimize how and when it is posted so that the largest possible audience will see it,” Muir said. “Automating in this way has created more time for us to focus on our readers and find new ways to engage our audience.”

Industry watchers predict that intelligent automation will usher in a workplace where AI not only frees up human workers’ time for more creative work but also helps them set strategies and drive innovation. Most companies are not fully there yet but do have numerous opportunities for business process automation throughout the organization.

Business processes that are ripe for automation

Ravi Vasantraj, global delivery head at IT services provider Mphasis, cites several characteristics that make business processes good candidates for automation:

  • Processes that deal with structured data, digital or non-digital, and follow definitive steps
  • Processes with seasonal spikes that can’t be fulfilled by a manual workforce, such as policy renewals, premium adjustments, claims payments in insurance, and so on
  • Processes with stringent service level agreements that need quick turnarounds, such as transactions posting, order fulfillment, etc.

Many companies are automating contract management, added Doug Barbin, managing principal and chief growth officer at Schellman, a provider of attestation and compliance services. “If you consider all the steps needed to draft, send, redline, and execute contracts via email, the use of technology to manage the content, coordinate change approval, and automate the signing process, the savings in time and reduction of errors is significant,” he said.

Beyond contracts, anything that reduces manual interaction for sales is an opportunity. For example, companies are providing chatbots to automate the ability to answer key questions and connect prospects to sales, according to Barbin.

UMC, a mechanical services contractor in Seattle, has automated many of its sales processes, said Bob Frey, director of sales operations. “We’ve automated various sales stages so we can track sales through our pipelines,” he said. “We are able to track what stages the different sales are in. We do this using Unanet CRM by Cosential that’s designed specifically for the construction industry.”

Schellman’s Barbin cites security as another area where automation is making inroads. “In cybersecurity, the mundane often resides in compliance and the need to test controls in an increasingly complex environment,” he said. “There is an entire segment of compliance automation tools that are being built to collect data and perform initial analysis before triaging and passing to [a human] assessor.”

In addition, more organizations are automating the procure-to-pay process in finance and the hire-to-retire process in human resources, said Wayne Butterfield, global lead for intelligent automation solutions at ISG (Information Services Group), a research and advisory firm.

“There are large numbers of tasks in every organization across just about every function that can be automated,” he said. “The question is: What is the technology needed to automate them, and does it make sense from a value realization perspective?”

The contact center is a huge opportunity, not only because of the large number of people completing similar activities with every contact but because of the positive impact it can have on customer experience and agent efficiency, Butterfield said. For example, companies can use automated virtual agents to handle the more routine customer requests, such as balance inquiries, bill payment, or change of address requests. This enables human agents to handle the more complicated customer inquiries that require creative problem solving. Handing these routine tasks off to automated virtual agents shortens the time it takes to resolve customer issues.

Where intelligent automation is taking us

In coming years, the architecture of work will change and become more event-driven, with business processes controlled with intelligent automation and work broken down into discrete tasks that are performed via automation, assigned to a worker, or interactively executed between a robot assistant and a worker, said Maureen Fleming, program vice president for intelligent process automation research at IDC.

“There will be far fewer task workers using enterprise applications on a constant basis; task work will increasingly be delivered to workers via automation,” she said. “Employees will spend more time digitally enabling themselves by learning how to develop using low-code tools. And employees will spend more time planning, proactively identifying and resolving problems, making decisions, creating, etc. — in other words, performing knowledge work and/or creative work.”

Prasad said that in the years to come, automation will increasingly be viewed as an indispensable co-worker with a vital role to play in companies’ successes, creating opportunities to reinvent individual processes, transform customer and employee experiences, and drive revenue growth.

“Intelligent automation promises to usher in a new era in business, one where companies are more efficient and effective than ever before and able to meet the needs of customers, employees, and society in new and powerful ways,” he said.

Automation pitfalls to watch out for

As organizations automate their business processes, there are many potential hazards to avoid.

“The main one is ignoring your people and underestimating that,” Butterfield said. “Although the outcome is driven by using a technology, everything up to the actual automation of a process is generally very people-focused. A lack of change management will unfortunately cause many issues in the long term. Organizations need to keep their people aligned with their overall goals.”

Security, mainly authentication, is also a key concern, Barbin said. “Any automation, API [application programming interface] or other, requires some means to pass access credentials,” he said. “If the systems that automate and contain those credentials are compromised, access to the connected systems could be too.” To help minimize that risk, Barbin suggests using SAML 2.0 and other technologies that take stored passwords out of the systems.

Another pitfall is selecting only one technology as the automation tool of choice. Typically organizations need multiple technologies to get the best results, said IDC’s Fleming.

And when companies decide to automate a business process previously carried out by a person or a team of people, it’s natural to receive some pushback, Newsweek’s Muir said. “Some of our journalists had initially struggled with the idea of letting an algorithm make choices that were previously weighed up and decided by a human,” he said. “There can be a bit of fear around AI and algorithms and a perceived lack of control when processes are suddenly automated.”

Organizations also need to establish clear strategies for business process automation, according to Vasantraj. “Automating the processes without understanding the ROI [return on investment] could lead to business loss, or automations built with multiple user interventions may not yield any benefit at all,” he said.

Take it slow, plan carefully, and listen to your people

Scaling intelligent automation is one of the biggest challenges for organizations, said Accenture’s Prasad. Therefore, it’s crucial that companies be clear about the strategic intent behind this initiative from the outset and ensure that it’s embedded into their entire modernization journeys, from cloud adoption to data-led transformation.

“Intelligent automation is not a race to be the first to implement the latest technology,” he said. “Success depends on understanding people’s needs, introducing new technologies in a way that is helpful and involves minimal disruption, and addressing issues related to new skills, roles, and job content.”

In other words, focusing on people is just as important as focusing on technology, Prasad said. Investments in intelligent automation must be “people first” — designed to elevate human strengths and supported by investments in skills, change management, experience, organization, and culture.

Butterfield agreed that strategic thinking is critical. “My advice would be to start small and think strategically,” he said. “Understand the shape and type of problems you are trying to automate or improve before you move to a technology solution. Work with your people, and ensure you use their tribal knowledge to understand why they do something.”

However, Butterfield cautions that organizations should avoid relying on people’s opinions on how long things take and how many actions they are able to complete in a given timeframe. “Such reliance often causes your business cases to be inaccurate, as they include the agent’s local management bias versus hard data and facts,” he said.

Muir’s advice is to let the results speak for themselves. Once an organization has introduced AI and automation to a process, it should let any time gains and increases in performance be key factors in objectively determining whether the project was a success. “In our experience, using Echobox proved the quantifiable value of automation to our organization, which made it easier for our teams to embrace it,” he said.

“Another piece of advice would be to find a balance that works for your team or your business when it comes to how much automation you use,” Muir said. For businesses that want to dip their toes into automation but are hesitant to automate 100% of their processes and relinquish manual control, there are often ways to just partially automate tasks, he added. “Take a realistic look at where you’re regularly spending time and talent on repetitive, manual tasks and explore how you can automate those parts of your workday.”

How workers can keep pace with automation

Rather than push back, employees should embrace automation and the opportunities it creates for them to provide high-value contributions versus management of administrative tasks, Barbin said.

“For security operations, for instance, leveraging automation allows those watching the networks for attackers to focus on high-priority threats and incidents, keeping up with a faster-moving landscape,” he said. “For compliance, they can move from managing a single US framework, [such as] SOC 2, to global compliance requirements, all from a single management plane.”

IDC’s Fleming noted that most organizations try to upskill and shift workers into new roles when their current roles are automated. They also consolidate new responsibilities into an existing role. And they tend to hire internal candidates for open jobs.  “When offered an opportunity to learn how to develop for automation, process improvement, etc., employees should embrace that opportunity,” she said. “Employees should look for internal upskilling programs as well as external ones.”

Newsweek’s Muir agreed that employees need to remain open to learning about new technologies and keep an open mind about how they can be leveraged. “Technology changes fast, and the tools and systems we use today may not be the same ones five years from now,” he said.

https://www.computerworld.com/

Tuesday, 15 November 2022

Recession in the US may cool off attrition in IT sector along with revenues

After two years of bumper profits and mind-boggling salary hikes, TCS, Infosys, Wipro and other Indian IT companies are treading a cautious path as wages and a likely slowdown in demand add to margin woes. However, an economic slowdown in the US might not be all that bad for Indian IT majors. Experts believe that a potential slowdown could have a positive effect on spiralling wage costs and attrition.

A rapid shift towards digitalisation due to the Covid pandemic in the last two years proved to be a big boon for the Indian IT sector. Giants like TCS, Infosys and Wipro rely predominantly on the US and European markets, which contribute 80-90% of their revenues.

Recession and high attrition rates – a double whammy for IT majors

Now, with talks of recession in the US and Europe gaining momentum, these IT companies are already under stress. The stress due to economic slowdown in the US and Europe has reflected in the FY23 earnings guidance of these IT companies.

However, an economic slowdown in the US might not be all that bad for Indian IT majors. Slower revenue growth could curb wage hikes and slow down attrition too, say experts.

“Indian IT companies source a lion’s share of their revenue from the US and Europe. Both these geographies face looming macro pressures in the form of one of the highest inflationary pressures and a slowdown in GDP growth,” said a Motilal Oswal report.

After clocking 19% revenue growth in FY22, the Indian IT sector is headed for two years of moderation, according to a Crisil report.

“Revenue growth is expected to moderate to 12-13% this fiscal and 9-10% in the next, [due to] an expected tightening in corporate capital spends because of inflationary headwinds,” the report stated.

“An economic slowdown in the US and EU could prove to be the inflection point for a cool down in wage hikes and attrition rates as well,” Dhananjay Sinha, head of strategy research and chief economist at JM Financial, told Business Insider India.

Attrition levels remain elevated – another source of margin stress

With attrition levels remaining elevated (Infosys is the worst affected, with an attrition rate of 28.4% in Q1 FY23), research firms suggest that margins will remain stressed, too.

“The companies had reduced their margin guidance at the start of FY23, but we believe continued pressure due to elevated attrition levels is likely to result in margins dropping near the lower end of guidance,” stated a report by ICICI Securities.

Wage hikes and attrition rates could simmer down come December

Sinha explained that wage hikes and attrition rates could simultaneously simmer down by the December quarter this year. The cool down in wages across the IT sector could also help solve the attrition headache for IT companies, he said. With startups facing a funding crunch, too, there could be fewer exit routes for IT executives.

Within the industry, Sinha said that Infosys could lead the pack, as its decision to cut variable pay to 70% has shown it is ready to control costs. Media reports suggested that Wipro delayed payouts for certain employee categories, a sign that companies are beginning to feel the pressure.

However, in contrast, TCS rolled out 100% variable pay days after Infosys.

An economic slowdown in the US is already showing signs of spillover in the Big Tech revenues – Amazon Web Services, Microsoft Azure and Google Cloud, the world’s top cloud platforms, reported a 7% decline in revenue.

This could have a direct impact on TCS, Infosys and Wipro – according to media reports, the revenues of these IT companies could be impacted by up to 33%.

“A weakening macro environment may translate into lower IT spends and slower growth for Indian IT companies,” stated a report by Motilal Oswal.

Courtesy: https://www.businessinsider.in/

Tuesday, 1 November 2022

Why Wasm is the future of cloud computing

Wasm may just be the most important emerging technology that you’ve never heard of.

Short for WebAssembly, Wasm was developed for the web. However, Wasm technology has expanded beyond the web browser. Now organizations are starting to run Wasm on the server side. For example, my company, SingleStore, is using it in our database.

Some think Wasm will replace container technology and the ubiquitous JavaScript.

Whether or not you believe that, Wasm is clearly making an impact on cloud computing. 

Wasm is cross-platform: Making it safer and simpler to bring cloud components together

People use all different kinds of languages to write software. Getting those languages to interact with each other is difficult. Wasm provides a framework in which you can write in whatever language you want. Then it produces a common, simulated machine format.

That format allows components written in various languages—like Rust, C/C++, and Go—to talk to each other. Wasm also provides the ability for server-side systems like databases to embed components from different languages without requiring you to know or care how that module was produced.

Think of Wasm as a universal plugin format. Say you would like to augment your system’s capabilities with a component developed by a third party. Wasm lets you bring the new component into your system without the risks that typically come with integrating add-ons. For example, an external component might crash the system or work in an unexpected way. Wasm mitigates these problems by creating an extremely safe framework for disparate systems and components to interact together.
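
As a rough sketch of that plugin idea, the C fragment below exports a single function that a host could load as a Wasm module; the function name, the build command in the comment, and the notion of a host calling it are illustrative assumptions, not a description of any particular product.

    /* plugin.c - a minimal sketch of a Wasm "plugin" written in C.
     *
     * Assuming a recent clang with the wasm32 target installed, a freestanding
     * module like this can be built with something along the lines of:
     *
     *   clang --target=wasm32 -nostdlib -Wl,--no-entry -Wl,--export-all \
     *         -o plugin.wasm plugin.c
     *
     * A host application (a database, a proxy, an editor) can then load
     * plugin.wasm with any Wasm runtime and call the exported function,
     * without caring that the module was originally written in C. */

    /* A trivial scoring function exported from the module. The host sees only
     * the Wasm-level signature: two i32 parameters, one i32 result. */
    int score(int clicks, int shares) {
        return clicks + 3 * shares;
    }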

The cloud is a big driver of Wasm’s expansion. Wasm is a good match for cloud because it’s virtualized and can work in any environment that supports the Wasm runtime. Also, cloud systems are typically composed of many services pieced together and connected in different ways. That can get complicated. But the more you can simplify your cloud environment, the easier it is for various aspects of cloud systems to work together correctly.

Wasm is secure: Lowering risk with its approach to running code and representing functions

In most language runtimes, functions have addresses. Those addresses are executable points in memory. If you are just looking at memory as a bunch of bytes, a function may be indistinguishable from the rest of the memory. This opens the door for people to find the function and inject code into it, or call a function in a privileged way so the function does something that it’s not supposed to do. Wasm’s design eliminates those problems.

Wasm represents functions in a way that is not exploitable. It also runs the code in a sandbox, which mitigates common security problems associated with running untrusted code. Because Wasm encapsulates the program memory in a safe area, nothing can get outside of it and access other places that might affect the host that’s running the program or compromise security.

And with Wasm’s capability-based security model, hosts have complete control over what kinds of privileged operations the Wasm program can run. For example, hosts must explicitly grant access to directories if file access is a requirement.
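
To illustrate what that looks like from the guest’s point of view, here is a small, hedged example in plain C: assuming it is compiled with a WASI-enabled toolchain (such as the wasi-sdk) and run under a Wasm runtime, the file open below only succeeds if the host has explicitly granted access to the directory. The file name is made up for illustration.

    #include <stdio.h>

    /* notes.c - ordinary C, no Wasm-specific APIs.
     *
     * Compiled with a WASI-enabled toolchain, fopen() is translated into WASI
     * calls. The module has no ambient file-system access: unless the host
     * grants a directory (for example, a runtime option that pre-opens the
     * current directory), fopen() simply fails. That is the capability-based
     * model described above. */
    int main(void) {
        FILE *f = fopen("notes.txt", "r");   /* hypothetical file name */
        if (f == NULL) {
            /* The host did not grant access to this path. */
            printf("no capability for notes.txt\n");
            return 1;
        }

        char line[256];
        while (fgets(line, sizeof line, f) != NULL) {
            printf("%s", line);
        }
        fclose(f);
        return 0;
    }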

Wasm is fast: Eliminating what is not needed and enabling greater speed and efficiency

Clearly, Wasm isn’t the first technology people have used to bring things together in a safer, more simplified way. However, Wasm is much faster than some of those other technologies.

Compilers can generate Wasm programs by leveraging the LLVM back end, compiling down to the LLVM intermediate representation. LLVM, originally short for Low Level Virtual Machine, is an abstracted machine that many languages already compile down to. As a result of this approach, and thanks to many years of community effort around the LLVM project, Wasm programs can be compiled to highly optimized machine code.

At SingleStore, we created the Wasm Space Program—a virtual real-time universe inside a database—to demonstrate how fast and lightweight Wasm is. In this simulation, spaceships use different strategies to replenish energy and fight other spacecraft in a vast, real-time “universe.” That involves a vast amount of data, with more than one million ships in the system and nearly three million database updates per second.

Traditionally, integrating that data and assembling it on a mid-tier layer would require you to pull up a lot of data to the mid-tier. That could introduce a huge amount of lag, and require some complex caching to achieve a real-time response. Rather than taking that approach, each spaceship’s strategy has been written in Wasm, and loaded into the database as a UDF. Each second, each of the spaceships’ strategy functions is invoked to decide on its next move.

There’s nothing on the front end—a JavaScript program running in the browser—that understands these strategies, or anything about the state of the universe. Its job is simply to issue SQL queries directly to the database and graphically present the information that is returned. The database maintains all of the state  information, and because Wasm has allowed the compute to be right next to the data, it’s a lot faster. No mid-tier was even necessary.

But Wasm isn’t all fun and games. You can use it to address countless other applications and use cases. For example, you could use Wasm for sentiment analysis. The kind of complex logic required for sentiment analysis isn’t something that can easily be expressed in a database SQL dialect. So, in order to do this, you usually need to implement it in a more sophisticated language and then bring the data to it by downloading each row of data. Then you need to push the sentiment analysis rating back into the database. That means a round trip for every row in the database you use. If you have millions of rows, that creates a lot of network traffic. But with the way SingleStore has integrated Wasm, you are already in the database, so you don’t incur that overhead.
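
As a concrete, deliberately naive sketch, the C function below shows the shape of logic that could be compiled to Wasm and loaded into a database as a UDF; the word lists, the function name, and the surrounding registration step are illustrative assumptions rather than SingleStore’s actual API.

    #include <string.h>

    /* sentiment.c - a toy sentiment score, sketched as a candidate Wasm UDF.
     * Returns +1 for each "positive" word found and -1 for each "negative"
     * word, so the sign of the result is a crude rating. A real UDF would use
     * a proper model; this only shows the shape of code that would be
     * compiled to Wasm and pushed next to the data, instead of pulling every
     * row over the network and writing the score back. */
    int sentiment_score(const char *text) {
        static const char *positive[] = { "great", "love", "excellent" };
        static const char *negative[] = { "awful", "hate", "broken" };
        int score = 0;

        for (size_t i = 0; i < sizeof positive / sizeof positive[0]; i++) {
            if (strstr(text, positive[i]) != NULL) {
                score++;
            }
        }
        for (size_t i = 0; i < sizeof negative / sizeof negative[0]; i++) {
            if (strstr(text, negative[i]) != NULL) {
                score--;
            }
        }
        return score;
    }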

Wasm is getting better all the time: Creating standards makes it even more powerful

Wasm is already very capable. And with the new technologies and standards that are on the way, Wasm will let you do even more.

For example, the W3C WebAssembly Community Group, with help from members of organizations such as the Bytecode Alliance (of which SingleStore is a member), is currently working on standardizing the WebAssembly System Interface (WASI). WASI will provide a standard set of APIs and services that can be used when Wasm modules are running on the server. Many standard proposals are still in progress, such as garbage collection, network I/O, and threading, so you can’t always map the things that you’re doing in other programming languages to Wasm. But eventually, WASI aims to provide a full standard that will help to achieve that. In many ways, the goals of WASI are similar to POSIX.

Wasm as it now stands also doesn’t address the ability to link or communicate with other Wasm modules. But the Wasm community, with support from members of the computing industry, is working on the creation of something called the component model. This aims to create a dynamic linking infrastructure around Wasm modules, defining how components start up and communicate with each other (similar to a traditional OS’s process model).

Additionally, an emerging standard IDL syntax, called WIT (for WebAssembly Interface Types), will allow people to describe their Wasm interfaces in a language-agnostic way. As a result, binding generators will be able to take what’s in the IDL and compile code that will allow both the Wasm host and the guest to communicate data back and forth in a common way.

Wasm is the future: Providing a faster, more secure, and more efficient way to bring things together

Wasm, though more lightweight, may not replace containers any time soon. But you can expect Wasm to become part of a whole lot of software going forward.

Whether on the server or on the edge, Wasm lets you create custom logic that runs much closer to the data than it could before—and you can do it securely, efficiently, and with greater flexibility.

And now with SingleStore, you can compile your existing programs to Wasm, push them into the database, and run them there. That means you may not have to rewrite that code and put it somewhere the data is not. With Wasm technology, you can have the best of both worlds.

https://www.infoworld.com/

Wednesday, 26 October 2022

Java 20 begins to take shape

Java 20, the next planned version of standard Java, has its first feature proposal: Pattern matching for switch statements and expressions will be previewed for the fourth time in the forthcoming Java SE (Standard Edition) release.

While OpenJDK’s JDK (Java Development Kit) 20 web page still lists no features for the planned release, the Java Enhancement Proposal (JEP) index cites a fourth preview of pattern matching for switch as targeted for JDK 20. JDK 20 is due next March.

Pattern matching for switch statements and expressions is viewed as a mechanism to enable concise and safe expression of complex data-oriented queries. Previously previewed in JDK 17, JDK 18, and JDK 19, the fourth preview would enable a continued co-evolution with Record Patterns, also included as a preview in JDK 19, allowing for continued refinements based on experience and feedback.

The main changes in pattern matching for switch since the third preview include simplified grammar for switch labels and support for inference of type arguments for generic patterns and record patterns in switch statements and expressions.

Record Patterns has been designated for a second preview but no specific targeted release of Java SE has been set for it yet. In addition to pattern matching and Record Patterns, other possible features for JDK 20 include universal generics and string templates.

JDK 20 is set to be a short-term feature release, with only six months of Premier-level support from Oracle. JDK 21, due in September 2023, will be a Long-Term Support release, backed by multiple years of support.

https://www.infoworld.com/

Condensers promise to accelerate Java programs

Project Leyden, an ambitious effort to improve startup time, performance, and footprint of Java programs, is set to offer condensers. A condenser is code that runs between compile time and run time and transforms the original program into a new, faster, and potentially smaller program.

In an online paper published October 13, Mark Reinhold, chief architect of the Java platform group at Oracle, said a program’s startup and warmup times and footprint could be improved by temporarily shifting some of its computation to a point either later in run time or backward to a point earlier than run time. Performance could further be boosted by constraining some computation related to Java’s dynamic features, such as class loading, class redefinition, and reflection, thus enabling better code analysis and even more optimization.

Project Leyden will implement these shifting, constraining, and optimizing transformations as condensers, Reinhold said. Also, new language features will be investigated to allow developers to shift computation themselves, enabling further condensation. However, the Java Platform Specification will need to evolve to support these transformations. The JDK’s tools and formats for code artifacts such as JAR files will also need to be extended to support condensers.

The condensation model offers developers greater flexibility, Reinhold said. Developers can choose which condensers to apply and in so doing choose whether and how to accept constraints that limit Java’s natural dynamism. The condensation model also gives Java implementations considerable freedom. As long as a condenser preserves program meaning and does not impose constraints except those accepted by the developer, an implementation will have wide latitude for optimizing the result.

Improving startup and warmup time and footprint is best done by identifying computation that can simply be eliminated, Reinhold said. Failing that, computation can be shifted backward or forward in time. This concept of shifting computation in time is not new. Java implementations already have many features to shift computation. For example, compile-time constant folding shifts computation backward in time from run time to compile time, and garbage collection shifts the reclamation of memory forward in time. Other computation-shifting mechanisms are optional, including ahead-of-time compilation and class-data sharing.

Project Leyden was under discussion for more than two years before beginning to move forward earlier this year. The project is sponsored by the HotSpot virtual machine and core libraries groups within the Java development domain.

https://www.infoworld.com/

Saturday, 1 October 2022

Why the C programming language still rules

The C programming language has been alive and kicking since 1972, and it still reigns as one of the fundamental building blocks of our software-studded world. But what about the dozens of newer languages that have emerged over the last few decades? Some were explicitly designed to challenge C’s dominance, while others chip away at it as a byproduct of their own popularity.

It's hard to beat C for performance, bare-metal compatibility, and ubiquity. Still, it’s worth seeing how it stacks up against some of the big-name language competition.

C vs. C++

C is frequently compared to C++, the language that—as the name indicates—was created as an extension of C. The differences between C++ and C could be characterized as extensive, or excessive, depending on whom you ask.

While still being C-like in its syntax and approach, C++ provides many genuinely useful features that aren’t available natively in C: namespaces, templates, exceptions, automatic memory management, and so on. Projects that demand top-tier performance—like databases and machine learning systems—are frequently written in C++, using those features to wring every drop of performance out of the system.

Further, C++ continues to expand far more aggressively than C. The forthcoming C++ 23 brings even more to the table, including modules, coroutines, and a modularized standard library for faster compilation and more succinct code. By contrast, the next planned version of the C standard, C2x, adds little and focuses on retaining backward compatibility.

The thing is, all of the pluses in C++ can also work as minuses. Big ones. The more C++ features you use, the more complexity you introduce and the more difficult it becomes to tame the results. Developers who confine themselves to a subset of C++ can avoid many of its worst pitfalls. But some shops want to guard against that complexity altogether. The Linux kernel development team, for instance, eschews C++, and while it's now eyeing Rust as a language for future kernel additions, the majority of Linux will still be written in C.

Picking C over C++ is a way for developers and those who maintain their code to embrace enforced minimalism and avoid tangling with the excesses of C++. Of course, C++ has a rich set of high-level features for good reason. But if minimalism is a better fit for current and future projects—and project teams—then C makes more sense.

C vs. Java

After decades, Java remains a staple of enterprise software development—and a staple of development generally. Java syntax borrows a great deal from C and C++. Unlike C, though, Java doesn’t by default compile to native code. Instead, Java's JIT (just-in-time) compiler compiles Java code to run in the target environment. The JIT engine optimizes routines at runtime based on program behavior, allowing for many classes of optimization that aren’t possible with ahead-of-time compiled C. Under the right circumstances, JIT-compiled Java code can approach or even exceed the performance of C.

And, while the Java runtime automates memory management, it's possible to work around that. For example, Apache Spark optimizes in-memory processing in part by using "unsafe" parts of the Java runtime to directly allocate and manage memory and avoid the overhead of the JVM's garbage collection system.

Java's “write once, run anywhere” philosophy also makes it possible for Java programs to run with relatively little tweaking for a target architecture. By contrast, although C has been ported to a great many architectures, any given C program may still require customization to run properly on, say, Windows versus Linux.

This combination of portability and strong performance, along with a massive ecosystem of software libraries and frameworks, makes Java a go-to language and runtime for building enterprise applications. Where it falls short of C is an area where the language was never meant to compete: running close to the metal, or working directly with hardware.

C code is compiled into machine code, which is executed by the processor directly. Java is compiled into bytecode, which is intermediate code that the JVM interpreter then converts to machine code. Further, although Java’s automatic memory management is a blessing in most circumstances, C is better suited for programs that must make optimal use of limited memory resources, because of its small initial footprint.

C vs. C# and .NET

Nearly two decades after their introduction, C# and .NET remain major parts of the enterprise software world. It has been said that C# and .NET were Microsoft’s response to Java—a managed code compiler system and universal runtime—and so many comparisons between C and Java also hold up for C and C#/.NET.

Like Java (and to some extent Python), .NET offers portability across a variety of platforms and a vast ecosystem of integrated software. These are no small advantages given how much enterprise-oriented development takes place in the .NET world. When you develop a program in C#, or any other .NET language, you are able to draw on a universe of tools and libraries written for the .NET runtime. 

Another Java-like .NET advantage is JIT optimization. C# and .NET programs can be compiled ahead of time, as C is, but they’re mainly just-in-time compiled by the .NET runtime and optimized with runtime information. JIT compilation allows all sorts of in-place optimizations for a running .NET program that can’t be done in C.

Like C (and Java, to a degree), C# and .NET provide various mechanisms for accessing memory directly. Heap, stack, and unmanaged system memory are all accessible via .NET APIs and objects. And developers can use the unsafe mode in .NET to achieve even greater performance.

None of this comes for free, though. Managed objects and unsafe objects cannot be arbitrarily exchanged, and marshaling between them incurs a performance cost. Therefore, maximizing the performance of .NET applications means keeping movement between managed and unmanaged objects to a minimum.

When you can’t afford to pay the penalty for managed versus unmanaged memory, or when the .NET runtime is a poor choice for the target environment (e.g., kernel space) or may not be available at all, then C is what you need. And unlike C# and .NET, C unlocks direct memory access by default.

C vs. Go

Go syntax owes much to C—curly braces as delimiters and statements terminated with semicolons are just two examples. Developers proficient in C can typically leap right into Go without much difficulty, even taking into account new Go features like namespaces and package management.

Readable code was one of Go’s guiding design goals: Make it easy for developers to get up to speed with any Go project and become proficient with the codebase in short order. C codebases can be hard to grok, as they are prone to turning into a rat’s nest of macros and #ifdefs specific to both a project and a given team. Go’s syntax, and its built-in code formatting and project management tools, are meant to keep those kinds of institutional problems at bay.

Go also features extras like goroutines and channels, language-level tools for handling concurrency and message passing between components. C would require such things to be hand-rolled or supplied by an external library, but Go provides them out-of-the-box, making it far easier to construct software that needs them.

Where Go differs most from C under the hood is in memory management. Go objects are automatically managed and garbage-collected by default. For most programming jobs, this is tremendously convenient. But it also means that any program that requires deterministic handling of memory will be harder to write.

Go does include the unsafe package for circumventing some of Go’s type handling safeties, such as reading and writing arbitrary memory with a Pointer type. But unsafe comes with a warning that programs written with it “may be non-portable and are not protected by the Go 1 compatibility guidelines.”

Go is well-suited for building programs like command-line utilities and network services, because they rarely need such fine-grained manipulations. But low-level device drivers, kernel-space operating system components, and other tasks that demand exacting control over memory layout and management are best created in C.

C vs. Rust

In some ways, Rust is a response to the memory management conundrums created by C and C++, and to many other shortcomings of these languages, as well. Rust compiles to native machine code, so it’s considered on a par with C as far as performance. Memory safety by default, though, is Rust’s main selling point.

Rust’s syntax and compilation rules help developers avoid common memory management blunders. If a program has a memory management issue that violates Rust’s rules, it simply won’t compile. Newcomers to the language—especially those coming from a language like C, which provides plenty of room for such bugs—spend the first phase of their Rust education learning how to appease the compiler. But Rust proponents argue that this near-term pain has a long-term payoff: safer code that doesn’t sacrifice speed.

Rust's tooling also improves on C. Project and component management are part of the toolchain supplied with Rust by default, which is the same as with Go. There is a default, recommended way to manage packages, organize project folders, and handle a great many other things that in C are ad-hoc at best, with each project and team handling them differently.

Still, what is touted as an advantage in Rust may not seem like one to a C developer. Rust’s compile-time safety features can’t be disabled, so even the most trivial Rust program must conform to Rust’s memory safety strictures. C may be less safe by default, but it is much more flexible and forgiving when necessary.
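
The trade-off is easy to see in a few lines. The C fragment below compiles, yet it returns a pointer to a local variable, which is undefined behavior; the equivalent Rust code would be rejected at compile time because the reference would outlive the value it borrows. The function names are made up for illustration.

    #include <stdio.h>

    /* dangle.c - compiles cleanly, but returns the address of a stack variable
     * that no longer exists once the function returns. Most C compilers will
     * at best emit a warning; Rust's borrow checker rejects the equivalent
     * code outright. */
    int *make_counter(void) {
        int count = 0;
        return &count;          /* dangling pointer: undefined behavior */
    }

    int main(void) {
        int *p = make_counter();
        printf("%d\n", *p);     /* reads stack memory that is no longer valid */
        return 0;
    }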

Another possible drawback is the size of the Rust language. C has relatively few features, even when taking into account the standard library. The Rust feature set is sprawling and continues to grow. As with C++, the larger feature set means more power, but also more complexity. C is a smaller language, but that much easier to model mentally, so perhaps better suited to projects where Rust would be too much.

C vs. Python

These days, whenever the talk is about software development, Python always seems to enter the conversation. After all, Python is “the second best language for everything,” and unquestionably one of the most versatile, with thousands of third-party libraries available.

What Python emphasizes, and where it differs most from C, is favoring speed of development over speed of execution. A program that might take an hour to put together in another language—like C—might be assembled in Python in minutes. On the flip side, that program might take seconds to execute in C, but a minute to run in Python. (As a good rule of thumb, Python programs generally run an order of magnitude slower than their C counterparts.) But for many jobs on modern hardware, Python is fast enough, and that has been key to its uptake.

Another major difference is memory management. Python programs are fully memory-managed by the Python runtime, so developers don’t have to worry about the nitty-gritty of allocating and freeing memory. But here again, developer ease comes at the cost of runtime performance. Writing C programs requires scrupulous attention to memory management, but the resulting programs are often the gold standard for pure machine speed.

Under the skin, though, Python and C share a deep connection: the reference Python runtime is written in C. This allows Python programs to wrap libraries written in C and C++. Significant chunks of the Python ecosystem of third-party libraries, such as for machine learning, have C code at their core. In many cases, it isn't a question of C versus Python, but more a question of which parts of your application should be written in C and which in Python.
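
A minimal sketch of that division of labor: a hot loop written in C and built as a shared library, which Python can load through its standard ctypes module. The file names and the function name are illustrative.

    /* fastsum.c - the performance-sensitive piece, kept in C.
     *
     * Built as a shared library (for example: cc -shared -fPIC -o libfastsum.so
     * fastsum.c), it can be loaded from Python with the standard ctypes module:
     *
     *   from ctypes import CDLL, POINTER, c_double, c_size_t
     *   lib = CDLL("./libfastsum.so")
     *   lib.sum_array.restype = c_double
     *   lib.sum_array.argtypes = [POINTER(c_double), c_size_t]
     *
     * The orchestration stays in Python; only the tight loop lives in C. */
    #include <stddef.h>

    double sum_array(const double *values, size_t n) {
        double total = 0.0;
        for (size_t i = 0; i < n; i++) {
            total += values[i];
        }
        return total;
    }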

If speed of development matters more than speed of execution, and if most of the performant parts of the program can be isolated into standalone components (as opposed to being spread throughout the code), either pure Python or a mix of Python and C libraries make a better choice than C alone. Otherwise, C still rules.

C vs. Carbon

Another recent challenger to both C and C++ is Carbon, a new language that is currently under heavy development.

Carbon's goal is to be a modern alternative to C and C++, with a straightforward syntax, modern tooling and code-organization techniques, and solutions to problems C and C++ programmers have long faced. It's also meant to provide interoperation with C++ codebases, so existing code can be migrated incrementally. All this is a welcome effort, since C and C++ have historically had primitive tooling and processes compared to more recently developed languages.

So what's the downside? Right now Carbon is an experimental project, not remotely ready for production use. There isn't even a working compiler; just an online code explorer. It's going to be a while before Carbon becomes a practical alternative to C or C++, if it ever does.

Courtesy: https://www.infoworld.com/

Tuesday, 23 August 2022

Microsoft urges Windows users to run patch for DogWalk zero-day exploit

 Microsoft has confirmed that a high-severity, zero-day security vulnerability is actively being exploited by threat actors and is advising all Windows and Windows Server users to apply its latest monthly Patch Tuesday update as soon as possible.

The vulnerability, known as CVE-2022-34713 or DogWalk, allows attackers to exploit a weakness in the Windows Microsoft Support Diagnostic Tool (MSDT). By using social engineering or phishing, attackers can trick users into visiting a fake website or opening a malicious document or file and ultimately gain remote code execution on compromised systems.

DogWalk affects all Windows versions under support, including the latest client and server releases, Windows 11 and Windows Server 2022.

The vulnerability was first reported in January 2020, but at the time Microsoft said it didn’t consider the exploit to be a security issue. This is the second time in recent months that Microsoft has been forced to change its position on a known exploit, having initially rejected reports that another Windows MSDT zero-day, known as Follina, posed a security threat. A patch for that exploit was released in June’s Patch Tuesday update.

Charl van der Walt, head of security research at Orange Cyberdefense, said that Microsoft could perhaps be criticised for failing to consider how frequently and easily files with apparently innocent extensions are used to deliver malicious payloads. But he also noted that with several thousand vulnerabilities reported each year, it’s to be expected that Microsoft’s risk-based triage approach to assessing vulnerabilities won’t be infallible.

“If everything is urgent, then nothing is urgent,” he said. “The security community has long stopped believing vulnerabilities and threats will be eradicated any time soon, so the challenge now becomes the development of a kind of agility that can perceive changes in the threat landscape and adapt accordingly.”

https://www.computerworld.com/

Thursday, 7 July 2022

What is Podman? The container engine replacing Docker

Podman is a container engine—a tool for developing, managing, and running containers and container images. Containers are standardized, self-contained software packages that hold all the elements necessary to run anywhere without the need for customization, including application code and supporting libraries. Container-based applications have revolutionized software development over the past decade, making distributed and cloud-based systems easy to deploy and maintain.

Podman is a project from Red Hat that is open source and free to download. It is a relative newcomer to the containerization scene, with version 1.0 released in 2019. Podman has since made great strides, and its rise has been helped along by the gradual decline of Docker, the project that in many ways created the world of containers as we know it today.

Podman and Kubernetes

If you're even slightly familiar with container-based development, you'll know the name Kubernetes. As containerized applications grew more complex, developers needed tools that could coordinate containers that interacted with each other while running on different virtual machines, or even on different physical machines. Such a tool is called a container orchestration platform, and Kubernetes is by far the most prominent example. Kubernetes can work with any container that meets the Open Container Initiative (OCI) image specification, which Podman's containers do.

One of the important features of Kubernetes is the concept of a pod, an ephemeral grouping of one or more containers that is the smallest unit of computing that Kubernetes can manage. Podman is also centered on the idea of a pod, as its name implies. A Podman pod also includes one or more containers, which are grouped together in a single namespace, network, and security context. This similarity makes Podman and Kubernetes a natural fit, and from the beginning one of Red Hat's goals was to have Podman users orchestrate containers with Kubernetes.

Podman vs. Docker

The other big name from the world of containers that you've almost certainly heard is Docker. Docker wasn't the first container engine but in many ways it has come to define containerization. Much of how Docker works is the de facto standard for container-based development—enough so that many people use "Docker" as a shorthand for containers.

While Docker and Podman occupy a similar space in the container ecosystem, they are not the same, and they have different philosophies and approaches to how they work. Docker, for instance, is an all-in-one platform with tools for specific tasks, whereas Podman collaborates with other projects for certain purposes—it relies on Buildah to build container images, for example.

There are also architectural differences: Docker has no native concept of pods, for instance. Another important difference is that Docker relies on a continuously running background daemon program to create images and run containers, whereas Podman launches containers and pods as separate child processes. This aspect of Docker's design has important implications for security, which we'll discuss shortly.

Docker commands on Podman

By design and necessity, Podman and Docker are overall compatible. Part of that compatibility can be attributed to adherence to open standards. Because both engines work with containers that conform to the OCI standard, you can create a container with Docker and modify it in Podman, or vice versa, then deploy either container onto Kubernetes.

When Podman rolled out in 2019, Docker was so dominant that its command-line interface had become a part of many developers' programming routines and muscle memory. In order to make a potential move to Podman more seamless, Podman's creators made sure that its commands and syntax mirrored Docker's as much as possible. They went so far as to make it possible to set an alias that re-routes Docker commands to Podman.

Better security with rootless containers

With Podman and Docker working so similarly in so many ways, why would you choose one over the other? Well, one important reason is security. Remember how Docker relies on a daemon to do much of its ongoing work? That daemon runs as root, which makes it a potential entry point for attackers. This isn't an insurmountable obstacle to secure computing, but it does mean that you have to put some thought into navigating Docker security issues.

In some situations, you'll want to run a container with root privileges on its host machine, and Podman lets you do that. But if you would rather keep your containers safely restricted to user space, you can do that as well, by running what's called a rootless container. A rootless container has no more privileges than the user that launched it; within the container, that user has root privileges. You can also use command-line flags to add privileges to your containers in a granular way.

What about performance?

One area where Docker has a leg up on Podman, at least according to some, is performance. There's little concrete information on the subject, but it's not hard to find frustrated developers on Hacker News, Stack Overflow, and Reddit complaining about Podman's performance, especially when it's running rootless. Some Swedish university students ran a benchmark suite on several different container platforms and found Podman lacking, though that test admittedly used an older, pre-1.0 version of Podman. In short, the evidence is anecdotal, but Podman does get dinged for its performance.

Will Podman replace Docker?

From the discussion so far, it may not sound like any great vibe shift is in the works to replace Docker with Podman. But a major change is coming that will displace Docker from one of its longtime niches: Kubernetes itself.

Kubernetes and Docker have for years been the twin giants of the container world. But their coexistence was always somewhat uneasy. The rise of Kubernetes came after Docker was well established in its niche—indeed, you could say that Kubernetes became popular in part because Docker wasn't up to the task of managing all the containers that needed to be coordinated in a large, distributed application.

Docker (the company) developed its own container orchestration platform in 2015, dubbed Swarm, that was designed to play to Docker's strengths. Swarm was launched with great fanfare, but never quite caught up to Kubernetes. While Swarm still has devotees, Kubernetes has become the de facto standard for container orchestration, just as Docker became the de facto standard for other aspects of the container ecosystem.

Additionally, Docker never quite played nice with Kubernetes in terms of its container runtime, the low-level component of the container engine that, among other tasks, works with the underlying operating system (OS) kernel and mounts individual container images. Both Docker and Kubernetes conform to the OCI image spec, which Kubernetes relies on for the images it runs as containers. But Kubernetes also relies on container runtimes compatible with a standardized plugin API called the Container Runtime Interface (CRI), which Docker has never gotten around to implementing.

For a long time, Docker's popularity forced Kubernetes to use Dockershim, a CRI-compliant layer that was an intermediary between Kubernetes and the Docker daemon. This was always something of a hack, however, and earlier this year, Kubernetes jettisoned support for Dockershim. (Podman, by contrast, uses the compatible CRI-O runtime from the Cloud Native Computing Foundation.)

This is part of a larger story about Docker trying and failing to become an enterprise company. In short, Docker was never fully able to break away from Kubernetes. Kubernetes, meanwhile, no longer needs Docker to the extent it once did.

Whether Podman will replace Docker is unclear, but it will definitely be one of the contenders. It helps that Podman is not a flagship product looking to be monetized, but rather a single open source technology offering from a much larger company. We can expect Podman and Kubernetes to remain intertwined for some time to come.

Which container engine should you use?

Hopefully, this discussion gives you a sense of the factors to help you choose between these two container engines. Podman is based on a more secure architecture, while Docker has a deeper history. Podman is native to Kubernetes, whereas Docker also works with Docker Swarm. Docker includes all the functionality you need for many container-related tasks. Podman is modular and lets you experiment with different tools for different purposes.

With that said, the "Podman vs. Docker" question is on some level a false choice. Both platforms create images that conform to the OCI spec, and both are driven by many of the same commands, so you can move seamlessly between the two. You may, for instance, want to use Docker for local development, then use Podman to deploy the containers you built inside Kubernetes.

https://www.infoworld.com/

Healthcare AI in a year: 3 trends to watch

Between the COVID-19 pandemic, a mental health crisis, rising healthcare costs, and aging populations, industry leaders are rushing to develop healthcare-specific artificial intelligence (AI) applications. One signal comes from the venture capital market: over 40 startups have raised significant funding—$20M or more—to build AI solutions for the industry. But how is AI actually being put to use in healthcare?

The “2022 AI in Healthcare Survey” queried more than 300 respondents from across the globe to better understand the challenges, triumphs, and use cases defining healthcare AI. Now in its second year, the survey produced results that did not change significantly from the first, but they do point to some interesting trends foreshadowing how the pendulum will swing in years to come. While parts of this evolution are positive (the democratization of AI), other aspects come with less excitement (a much larger attack surface). Here are the three trends enterprises need to know.

1. Ease of use and democratization of AI with no-code tools

Gartner estimates that by 2025, 70% of new applications developed by enterprises will use no-code or low-code technologies (up from less than 25% in 2020). While low-code can simplify workloads for programmers, no-code solutions, which require no data science intervention, will have the biggest impact on the enterprise and beyond. That’s why it’s exciting to see a clear shift in AI use from technical titles to the domain experts themselves.

For healthcare, this means more than half (61%) of respondents from the AI in Healthcare Survey identified clinicians as their target users, followed by healthcare payers (45%), and health IT companies (38%). This, paired with significant developments and investments in healthcare-specific AI applications and availability of open source technologies, is indicative of wider industry adoption.

This is significant: putting these tools in the hands of healthcare workers, much as common office applications like Excel or Photoshop are today, will change AI for the better. In addition to making the technology more accessible, it also enables more accurate and reliable results, since a medical professional—not a software professional—is now in the driver’s seat. These changes are not happening overnight, but the uptick in domain experts as primary users of AI is a big step forward.

2. Growing sophistication of tools, and the growing utility of text

Additional encouraging findings involved advances in AI tools and a desire among users to drill down on specific models. When asked what technologies they plan to have in place by the end of 2022, technical leaders from the survey cited data integration (46%), BI (44%), natural language processing (NLP) (43%), and data annotation (38%). Text is now the data type most likely to be used in AI applications, and the emphasis on NLP and data annotation indicates an uptick in more sophisticated AI technologies.
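As a rough sketch of what a text-centric healthcare model can look like, the snippet below trains a tiny classifier on a few invented clinical notes using scikit-learn; the notes, labels, and model choice are purely illustrative.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy, invented clinical notes and labels (1 = follow-up needed).
    notes = [
        "patient reports chest pain and shortness of breath",
        "routine checkup, no complaints",
        "persistent cough and fever for three days",
        "annual physical, all vitals normal",
    ]
    labels = [1, 0, 1, 0]

    # Turn free text into features, then fit a simple classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(notes, labels)

    print(model.predict(["patient has a fever and chest pain"]))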

These tools enable important activities like clinical decision support, drug discovery, and medical policy assessment. After living through two years of the pandemic, it’s clear how crucial progress in these areas is, as we develop new vaccines and uncover how to better support healthcare system needs in the wake of a mass event. And by these examples, it’s also evident that healthcare’s use of AI varies greatly from other industries, requiring a different approach. 

As such, it should come as no surprise that technical leaders and respondents from mature organizations both cited the availability of healthcare-specific models and algorithms as the most important requirement for evaluating locally installed software libraries or SaaS solutions. As seen by the venture capital landscape, existing libraries on the market, and the demand from AI users, healthcare-specific models will only grow in coming years. 

3. Security & safety concerns grow

All the AI progress made over the past year has also opened up a range of new attack vectors. When asked what types of software respondents are using to build their AI applications, the most popular selections were locally installed commercial software (37%) and open source software (35%). Most notable was a 12% decline in the use of cloud services (30%) from last year’s survey, most likely due to privacy concerns around data sharing.

Additionally, a majority of respondents (53%) chose to rely on their own data to validate models, rather than on third-party or software vendor metrics. Respondents from mature organizations (68%) signaled a clear preference for using in-house evaluation and for tuning their models themselves. Again, with stringent controls and procedures around healthcare data handling, it’s obvious why AI users would want to keep operations in-house when possible. 

But regardless of software preferences or how users validate models, escalating security threats to healthcare are likely to have a substantial impact. While other critical infrastructure services face challenges, healthcare breaches have ramifications beyond reputational and financial loss. The loss of data or tampering with hospital devices can be the difference between life and death. 

AI is poised for even more significant growth as developers and investors work to get the technology in the hands of everyday users. But as AI becomes more widely available, and as models and tools improve, security, safety, and ethics will take center stage as an important area to keep tabs on. It will be interesting to see how these areas of AI in healthcare evolve this year, and what it means for the future of the industry. 

https://www.cio.com/

What is RPA? A revolution in business process automation

 What is robotic process automation?

Robotic process automation (RPA) is an application of technology, governed by business logic and structured inputs, aimed at automating business processes. Using RPA tools, a company can configure software, or a “robot,” to capture and interpret applications for processing a transaction, manipulating data, triggering responses, and communicating with other digital systems. RPA scenarios range from generating an automatic response to an email to deploying thousands of bots, each programmed to automate jobs in an ERP system.
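To make that concrete, here is a minimal, hypothetical sketch in Python of the simplest scenario mentioned above—generating an automatic response to an email—with the messages, addresses, and rule all invented for illustration:

    # A minimal, rules-based "bot": structured input in, canned response out.
    # The inbox contents and the reply logic are invented for illustration.
    inbox = [
        {"from": "customer@example.com", "subject": "Invoice request"},
        {"from": "partner@example.com", "subject": "Meeting agenda"},
    ]

    def auto_reply(message):
        # Business logic: a simple keyword rule triggers a canned response.
        if "invoice" in message["subject"].lower():
            return ("billing@example.com",
                    "Your invoice request has been forwarded to billing.")
        return None

    for msg in inbox:
        reply = auto_reply(msg)
        if reply:
            print(f"Reply to {msg['from']} (cc {reply[0]}): {reply[1]}")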

Many CIOs are turning to RPA to streamline enterprise operations and reduce costs. Businesses can automate mundane rules-based business processes, enabling business users to devote more time to serving customers or other higher-value work. Others see RPA as a stopgap en route to intelligent automation (IA) via machine learning (ML) and artificial intelligence (AI) tools, which can be trained to make judgments about future outputs.

What are the benefits of RPA?

RPA provides organizations with the ability to reduce staffing costs and human error. Intelligent automation specialist Kofax says the principle is simple: Let human employees work on what humans excel at while using robots to handle tasks that get in the way.

Bots are typically low-cost and easy to implement, requiring no custom software or deep systems integration. Such characteristics are crucial as organizations pursue growth without adding significant expenditures or friction among workers.

When properly configured, software robots can increase a team’s capacity for work by 35% to 50%, according to Kofax. For example, simple, repetitive tasks such as copying and pasting information between business systems can be accelerated by 30% to 50% when completed using robots. Automating such tasks can also improve accuracy by eliminating opportunities for human error, such as transposing numbers during data entry.
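A sketch of that copy-and-paste scenario in Python, assuming a hypothetical CSV export from one system and a hypothetical REST endpoint on another (the file name, URL, and field names are placeholders):

    import csv
    import requests  # third-party HTTP client

    # Read records exported from one business system...
    with open("export.csv", newline="") as f:          # placeholder file
        rows = list(csv.DictReader(f))

    # ...and re-enter them into another system via its API, instead of by hand.
    for row in rows:
        resp = requests.post(
            "https://erp.example.com/api/invoices",    # placeholder endpoint
            json={"id": row["id"], "amount": row["amount"]},
            timeout=10,
        )
        resp.raise_for_status()  # surface errors instead of silent typos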

Enterprises can also supercharge their automation efforts by injecting RPA with cognitive technologies such as ML, speech recognition, and natural language processing, automating higher-order tasks that in the past required the perceptual and judgment capabilities of humans.

Such RPA implementations, in which upwards of 15 to 20 steps may be automated, are part of a value chain known as intelligent automation (IA).

For a deeper look at the benefits of RPA, see “Why bots are poised to disrupt the enterprise” and “Robotic process automation is a killer app for cognitive computing.”

What are the top RPA tools?

The RPA market consists of a mix of new, purpose-built tools and older tools that have added new features to support automation. Some were originally business process management (BPM) tools. Some vendors position their tools as “workflow automation” or “work process management.” Overall, the RPA software market is expected to grow from $2.4 billion in 2021 to $6.5 billion by 2025, according to Forrester research.

Some of the top RPA tools vendors include:

  • Appian
  • Automation Anywhere
  • AutomationEdge
  • Blue Prism
  • Cyclone Robotics
  • Datamatics
  • EdgeVerve Systems
  • HelpSystems
  • IBM
  • Kofax
  • Kryon
  • Laiye
  • Microsoft
  • NICE
  • Nintex
  • NTT-AT
  • Pegasystems
  • Samsung SDS
  • Servicetrace
  • WorkFusion

https://www.cio.com/

Wednesday, 6 July 2022

What is data science? The ultimate guide

 Data science is the field of applying advanced analytics techniques and scientific principles to extract valuable information from data for business decision-making, strategic planning and other uses. It's increasingly critical to businesses: The insights that data science generates help organizations increase operational efficiency, identify new business opportunities and improve marketing and sales programs, among other benefits. Ultimately, they can lead to competitive advantages over business rivals.

Data science incorporates various disciplines -- for example, data engineering, data preparation, data mining, predictive analytics, machine learning and data visualization, as well as statistics, mathematics and software programming. It's primarily done by skilled data scientists, although lower-level data analysts may also be involved. In addition, many organizations now rely partly on citizen data scientists, a group that can include business intelligence (BI) professionals, business analysts, data-savvy business users, data engineers and other workers who don't have a formal data science background.

This comprehensive guide to data science further explains what it is, why it's important to organizations, how it works, the business benefits it provides and the challenges it poses. You'll also find an overview of data science applications, tools and techniques, plus information on what data scientists do and the skills they need. Throughout the guide, there are hyperlinks to related TechTarget articles that delve more deeply into the topics covered here and offer insight and expert advice on data science initiatives.

Why is data science important?

Data science plays an important role in virtually all aspects of business operations and strategies. For example, it provides information about customers that helps companies create stronger marketing campaigns and targeted advertising to increase product sales. It aids in managing financial risks, detecting fraudulent transactions and preventing equipment breakdowns in manufacturing plants and other industrial settings. It helps block cyber attacks and other security threats in IT systems.

From an operational standpoint, data science initiatives can optimize management of supply chains, product inventories, distribution networks and customer service. On a more fundamental level, they point the way to increased efficiency and reduced costs. Data science also enables companies to create business plans and strategies that are based on informed analysis of customer behavior, market trends and competition. Without it, businesses may miss opportunities and make flawed decisions.

Data science is also vital in areas beyond regular business operations. In healthcare, its uses include diagnosis of medical conditions, image analysis, treatment planning and medical research. Academic institutions use data science to monitor student performance and improve their marketing to prospective students. Sports teams analyze player performance and plan game strategies via data science. Government agencies and public policy organizations are also big users.

Data science process and lifecycle

Data science projects involve a series of data collection and analysis steps. In an article that describes the data science process, Donald Farmer, principal of analytics consultancy TreeHive Strategy, outlined these six primary steps:

  • Identify a business-related hypothesis to test.
  • Gather data and prepare it for analysis.
  • Experiment with different analytical models.
  • Pick the best model and run it against the data.
  • Present the results to business executives.
  • Deploy the model for ongoing use with fresh data.
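In code, steps 2 through 5 of that process might look something like the following minimal scikit-learn sketch, with a synthetic dataset standing in for real business data and deployment left out:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split

    # Step 2: gather data and prepare it for analysis (synthetic stand-in here).
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Step 3: experiment with different analytical models.
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(random_state=0),
    }
    scores = {name: cross_val_score(m, X_train, y_train).mean()
              for name, m in candidates.items()}

    # Step 4: pick the best model and run it against held-out data.
    best = max(scores, key=scores.get)
    model = candidates[best].fit(X_train, y_train)

    # Step 5: present the results (here, simply printed).
    print(best, model.score(X_test, y_test))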

Farmer said the process does make data science a scientific endeavor. However, he wrote that in corporate enterprises, data science work "will always be most usefully focused on straightforward commercial realities" that can benefit the business. As a result, he added, data scientists should collaborate with business stakeholders on projects throughout the analytics lifecycle.

Benefits of data science

In an October 2020 webinar organized by Harvard University's Institute for Applied Computational Science, Jessica Stauth, managing director for data science in the Fidelity Labs unit at Fidelity Investments, said there's "a very clear relationship" between data science work and business results. She cited potential business benefits that include higher ROI, sales growth, more efficient operations, faster time to market and increased customer engagement and satisfaction.

Generally speaking, one of data science's biggest benefits is to empower and facilitate better decision-making. Organizations that invest in it can factor quantifiable, data-based evidence into their business decisions. Ideally, such data-driven decisions will lead to stronger business performance, cost savings and smoother business processes and workflows.

The specific business benefits of data science vary depending on the company and industry. In customer-facing organizations, for example, data science helps identify and refine target audiences. Marketing and sales departments can mine customer data to improve conversion rates and create personalized marketing campaigns and promotional offers that produce higher sales.

In other cases, the benefits include reduced fraud, more effective risk management, more profitable financial trading, increased manufacturing uptime, better supply chain performance, stronger cybersecurity protections and improved patient outcomes. Data science also enables real-time analysis of data as it's generated -- read about the benefits that real-time analytics provides, including faster decision-making and increased business agility, in another article by Farmer.

What do data scientists do and what skills do they need?

The primary role of data scientists is analyzing data, often large amounts of it, in an effort to find useful information that can be shared with corporate executives, business managers and workers, as well as government officials, doctors, researchers and many others. Data scientists also create AI tools and technologies for deployment in various applications. In both cases, they gather data, develop analytical models and then train, test and run the models against the data.

As a result, data scientists must possess a combination of data preparation, data mining, predictive modeling, machine learning, statistical analysis and mathematics skills, as well as experience with algorithms and coding -- for example, programming skills in languages such as Python, R and SQL. Many are also tasked with creating data visualizations, dashboards and reports to illustrate analytics findings.
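As a small example of the reporting side of the role, a few lines of Python with the matplotlib library can turn model results into a chart; the model names and scores below are invented.

    import matplotlib.pyplot as plt

    # Invented accuracy scores for three candidate models.
    models = ["logistic_regression", "random_forest", "gradient_boosting"]
    accuracy = [0.81, 0.88, 0.86]

    # A simple bar chart for a report or dashboard.
    plt.bar(models, accuracy)
    plt.ylabel("Accuracy")
    plt.title("Model comparison on validation data")
    plt.tight_layout()
    plt.savefig("model_comparison.png")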

https://www.techtarget.com/searchenterpriseai/definition/data-science