Saturday, 11 March 2023

Can Ageing Be Prevented? Retro Biosciences Says 'Yes, We'll Add 10 Years to Your Life!'

When a startup called Retro Biosciences eased out of stealth mode in mid-2022, it announced it had secured $180 million to bankroll an audacious mission: to add 10 years to the average human life span. It had set up its headquarters in a raw warehouse space near San Francisco just the year before, bolting shipping containers to the concrete floor to quickly make lab space for the scientists who had been enticed to join the company.

Retro said that it would “prize speed” and “tighten feedback loops” as part of an “aggressive mission” to stall aging, or even reverse it. But it was vague about where its money had come from. At the time, it was a “mysterious startup,” according to press reports, “whose investors remain anonymous.”

Now MIT Technology Review can reveal that the entire sum was put up by Sam Altman, the 37-year-old startup guru and investor who is CEO of OpenAI. 

Altman spends nearly all his time at OpenAI, an artificial intelligence company whose chatbots and electronic art programs have been convulsing the tech sphere with their human-like capabilities. 

But Altman’s money is a different matter. He says he’s emptied his bank account to fund two other very different but equally ambitious goals: limitless energy and extended life span.

One of those bets is on the fusion power startup Helion Energy, into which he’s poured more than $375 million, he told CNBC in 2021. The other is Retro, to which Altman cut checks totaling $180 million the same year. 

“It’s a lot. I basically just took all my liquid net worth and put it into these two companies,” Altman says.

Altman’s investment in Retro hasn’t been previously reported. It is among the largest ever by an individual into a startup pursuing human longevity.

Altman has long been a prominent figure in the Silicon Valley scene, where he previously ran the startup incubator Y Combinator in San Francisco. But his profile has gone global with OpenAI’s release of ChatGPT, software that’s able to write poems and answer questions.

The AI breakthrough, according to Fortune, has turned the seven-year-old company into “an unlikely member of the club of tech superpowers.” Microsoft committed to investing $10 billion, and Altman, with 1.5 million Twitter followers, is consolidating a reputation as a heavy hitter whose creations seem certain to alter society in profound ways.  

Altman does not appear on the Forbes billionaires list, but that doesn’t mean he isn’t extremely wealthy. His wide-ranging investments have included early stakes in companies like Stripe and Airbnb. 

“I have been an early-stage tech investor in the greatest bull market in history,” he says.

Young Blood

About eight years ago, Altman became interested in so-called “young blood” research. These were studies in which scientists sewed young and old mice together so that they shared one blood system. The surprise: the old mice seemed to be partly rejuvenated.

A grisly experiment, but in a way, remarkably simple. Altman was head of Y Combinator at the time, and he tasked his staff with looking into the progress being made by anti-aging scientists.

“It felt like, all right, this was a result I didn’t expect and another one I didn’t expect,” he says. “So there’s something going on where … maybe there is a secret here that is going to be easier to find than we think.” 

In 2018, Y Combinator launched a special course for biotech companies, inviting those with “radical anti-aging schemes” to apply, but before long, Altman moved away from Y Combinator to focus on his growing role at OpenAI. 

Then, in 2020, researchers in California showed they could achieve an effect similar to young blood by replacing the plasma of old mice with salt water and albumin. That suggested the real problem lay in the old blood. Simply by diluting it (and the toxins in it), medicine might get one step closer to a cure for aging.

The new company would need a lot of money—enough to keep it afloat at least seven or eight years while it carried out research, ran into setbacks, and overcame them. It would also need to get things done quickly. Spending at many biotech startups is decided on by a board of directors, but at Retro, CEO Joe Betts-LaCroix has all the decision-making power. “We have no bureaucracy,” he says. “I am the bureaucracy.”

https://tinyurl.com/4zsukek9

https://retro.bio/announcement/

Saturday, 18 February 2023

Why We're All Obsessed With the Mind-Blowing ChatGPT AI Chatbot

There's a new AI bot in town: ChatGPT. Even if you aren't into artificial intelligence, pay attention, because this one is a big deal.

The tool, from a power player in artificial intelligence called OpenAI, lets you type natural-language prompts. ChatGPT then offers conversational, if somewhat stilted, responses. The bot remembers the thread of your dialogue, using previous questions and answers to inform its next responses. It derives its answers from huge volumes of information on the internet.

ChatGPT is a big deal. The tool seems pretty knowledgeable in areas where there's good training data for it to learn from. It's not omniscient or smart enough to replace all humans yet, but it can be creative, and its answers can sound downright authoritative. A few days after its launch, more than a million people were trying out ChatGPT.

But be careful, OpenAI warns. ChatGPT has all kinds of potential pitfalls, some easy to spot and some more subtle.

"It's a mistake to be relying on it for anything important right now," OpenAI Chief Executive Sam Altman tweeted. "We have lots of work to do on robustness and truthfulness." Here's a look at why ChatGPT is important and what's going on with it.

And it's becoming big business. In January, Microsoft pledged to invest billions of dollars into OpenAI. A modified version of the technology behind ChatGPT is now powering Microsoft's new Bing challenge to Google search and, eventually, it'll power the company's effort to build new AI co-pilot smarts into every part of your digital life.

Bing uses OpenAI technology to process search queries, compile results from different sources, summarize documents, generate travel itineraries, answer questions and generally just chat with humans. That's a potential revolution for search engines, but it's been plagued with problems like factual errors and unhinged conversations.

What is ChatGPT?

ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You can ask it countless questions and often will get an answer that's useful.

For example, you can ask it encyclopedia questions like, "Explain Newton's laws of motion." You can tell it, "Write me a poem," and when it does, say, "Now make it more exciting." You can also ask it to write a computer program that'll show you all the different ways you can arrange the letters of a word.

Here's the catch: ChatGPT doesn't exactly know anything. It's an AI that's trained to recognize patterns in vast swaths of text harvested from the internet, then further trained with human assistance to deliver more useful, better dialog. The answers you get may sound plausible and even authoritative, but they might well be entirely wrong, as OpenAI warns.

Chatbots have been of interest for years to companies looking for ways to help customers get what they need and to AI researchers trying to tackle the Turing Test. That's the famous "Imitation Game" that computer scientist Alan Turing proposed in 1950 as a way to gauge intelligence: Can a human conversing with a human and with a computer tell which is which?

But chatbots have a lot of baggage, as companies have tried with limited success to use them instead of humans to handle customer service work. A study of 1,700 Americans, sponsored by a company called Ujet, whose technology handles customer contacts, found that 72% of people found chatbots to be a waste of time.

ChatGPT has rapidly become a widely used tool on the internet. UBS analyst Lloyd Walmsley estimated in February that ChatGPT had reached 100 million monthly users the previous month, accomplishing in two months what took TikTok about nine months and Instagram two and a half years. The New York Times, citing internal sources, said 30 million people use ChatGPT daily.

What kinds of questions can you ask?

You can ask anything, though you might not get an answer. OpenAI suggests a few categories, like explaining physics, asking for birthday party ideas, and getting programming help.

I asked it to write a poem, and it did, though I don't think any literature experts would be impressed. I then asked it to make it more exciting, and lo, ChatGPT pumped it up with words like battlefield, adrenaline, thunder, and adventure.

One wacky example shows how ChatGPT is willing to just go for it in domains where people would fear to tread: a command to write "a folk song about writing a rust program and fighting with lifetime errors."

ChatGPT's expertise is broad, and its ability to follow a conversation is notable. When I asked it for words that rhymed with "purple," it offered a few suggestions, then when I followed up "How about with pink?" it didn't miss a beat. (Also, there are a lot more good rhymes for "pink.")

When I asked, "Is it easier to get a date by being sensitive or being tough?" GPT responded, in part, "Some people may find a sensitive person more attractive and appealing, while others may be drawn to a tough and assertive individual. In general, being genuine and authentic in your interactions with others is likely to be more effective in getting a date than trying to fit a certain mold or persona."

You don't have to look far to find accounts of the bot blowing people's minds. Twitter is awash with users displaying the AI's prowess at generating art prompts and writing code. Some have even proclaimed "Google is dead," along with the college essay. We'll talk more about that below.

CNET writer David Lumb has put together a list of some useful ways ChatGPT can help, but more keep cropping up. One doctor says he's used it to persuade a health insurance company to pay for a patient's procedure.

Who built ChatGPT and how does it work?

ChatGPT is the brainchild of OpenAI, an artificial intelligence research company. Its mission is to develop a "safe and beneficial" artificial general intelligence system or to help others do so. OpenAI has 375 employees, Altman tweeted in January. "OpenAI has managed to pull together the most talent-dense researchers and engineers in the field of AI," he also said in a January talk.

It's made splashes before, first with GPT-3, which can generate text that can sound like a human wrote it, and then with DALL-E, which creates what's now called "generative art" based on text prompts you type in.

GPT-3, and the GPT 3.5 update on which ChatGPT is based, are examples of AI technology called large language models. They're trained to create text based on what they've seen, and they can be trained automatically — typically with huge quantities of computer power over a period of weeks. For example, the training process can find a random paragraph of text, delete a few words, ask the AI to fill in the blanks, compare the result to the original, and then reward the AI system for coming as close as possible. Repeating over and over can lead to a sophisticated ability to generate text.
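The fill-in-the-blank loop described above can be sketched as a toy. This is purely illustrative: the `mask_word` and `reward` functions here are invented for the example and bear no relation to OpenAI's actual training code, which works statistically over enormous corpora with neural networks.

```rust
// Toy sketch of the fill-in-the-blank objective: hide one word,
// let a "model" guess it, and reward exact matches.
fn mask_word(sentence: &str, index: usize) -> (String, String) {
    let mut words: Vec<&str> = sentence.split_whitespace().collect();
    let answer = words[index].to_string(); // remember the hidden word
    words[index] = "[MASK]";
    (words.join(" "), answer)
}

// Reward: 1.0 for an exact match, 0.0 otherwise. Real training uses a
// smooth loss over the model's probability for the correct token.
fn reward(guess: &str, answer: &str) -> f64 {
    if guess == answer { 1.0 } else { 0.0 }
}

fn main() {
    let (masked, answer) = mask_word("the cat sat on the mat", 1);
    println!("{masked}"); // prints "the [MASK] sat on the mat"
    assert_eq!(reward("cat", &answer), 1.0);
    assert_eq!(reward("dog", &answer), 0.0);
}
```

Repeating this guess-and-score cycle over billions of passages is what gradually shapes the model's ability to generate text.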

It's not totally automated. Humans evaluate ChatGPT's initial results in a process called fine-tuning. Human reviewers apply guidelines that OpenAI's models then generalize from. In addition, OpenAI used a Kenyan firm that paid people up to $3.74 per hour to review thousands of snippets of text for problems like violence, sexual abuse, and hate speech, Time reported, and that data was built into a new AI component designed to screen such materials from ChatGPT answers and OpenAI training data.

ChatGPT doesn't actually know anything the way you do. It's just able to take a prompt, find relevant information in its oceans of training data, and convert that into plausible-sounding paragraphs of text. "We are a long way away from the self-awareness we want," said computer scientist and internet pioneer Vint Cerf of the large language model technology ChatGPT and its competitors use.

Is ChatGPT free?

Yes, for the moment at least, but in January OpenAI added a paid version that responds faster and keeps working even during peak usage times when others get messages saying, "ChatGPT is at capacity right now."

You can sign up on a waiting list if you're interested. OpenAI's Altman has warned that ChatGPT's "compute costs are eye-watering," which he estimated at a few cents per response. OpenAI charges for DALL-E art once you exceed a basic free level of usage.

But OpenAI seems to have found some customers, likely for its GPT tools. It's told potential investors that it expects $200 million in revenue in 2023 and $1 billion in 2024, according to Reuters.

What are the limits of ChatGPT?

As OpenAI emphasizes, ChatGPT can give you wrong answers and can give "a misleading impression of greatness," Altman said. Sometimes, helpfully, it'll specifically warn you of its own shortcomings. For example, when I asked it who wrote the phrase "the squirming facts exceed the squamous mind," ChatGPT replied, "I'm sorry, but I am not able to browse the internet or access any external information beyond what I was trained on." (The phrase is from Wallace Stevens' 1942 poem Connoisseur of Chaos.)

ChatGPT was willing to take a stab at the meaning of that expression once I typed it in directly, though: "a situation in which the facts or information at hand are difficult to process or understand." It sandwiched that interpretation between two caveats: that it's hard to judge without more context, and that this is just one possible interpretation.

ChatGPT's answers can look authoritative but be wrong.

"If you ask it a very well-structured question, with the intent that it gives you the right answer, you'll probably get the right answer," said Mike Krause, data science director at a different AI company, Beyond Limits. "It'll be well articulated and sound like it came from some professor at Harvard. But if you throw it a curveball, you'll get nonsense."

The journal Science banned ChatGPT text in January. "An AI program cannot be an author. A violation of these policies will constitute scientific misconduct no different from altered images or plagiarism of existing works," Editor in Chief H. Holden Thorp said.

The software developer site StackOverflow banned ChatGPT answers to programming questions. Administrators cautioned, "because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers."

You can see for yourself how artful a BS artist ChatGPT can be by asking the same question multiple times. I asked twice whether Moore's Law, which tracks the computer chip industry's progress in increasing the number of data-processing transistors, is running out of steam, and I got two different answers. One pointed optimistically to continued progress, while the other pointed more grimly to the slowdown and the belief "that Moore's Law may be reaching its limits."

Both ideas are common in the computer industry itself, so this ambiguous stance perhaps reflects what human experts believe.

With other questions that don't have clear answers, ChatGPT often won't be pinned down. 

The fact that it offers an answer at all, though, is a notable development in computing. Computers are famously literal, refusing to work unless you follow exact syntax and interface requirements. Large language models are revealing a more human-friendly style of interaction, not to mention an ability to generate answers that are somewhere between copying and creativity.

Will ChatGPT help students cheat better?

Yes, but as with many other technology developments, it's not a simple black-and-white situation. Decades ago, students could copy encyclopedia entries and use calculators, and more recently, they've been able to use search engines and Wikipedia. ChatGPT offers new abilities for everything from helping with research to doing your homework for you outright. Many ChatGPT answers already sound like student essays, though often with a tone that's stuffier and more pedantic than a writer might prefer.

Google programmer Kenneth Goodman tried ChatGPT on a number of exams. It scored 70% on the United States Medical Licensing Examination and 70% on a bar exam for lawyers, got nine of 15 questions right on another legal test, the Multistate Professional Responsibility Examination, scored 78% on the multiple-choice section of New York state's high school chemistry exam, and ranked in the 40th percentile on the Law School Admission Test.

High school teacher Daniel Herman concluded ChatGPT already writes better than most students today. He's torn between admiring ChatGPT's potential usefulness and fearing its harm to human learning: "Is this moment more like the invention of the calculator, saving me from the tedium of long division, or more like the invention of the player piano, robbing us of what can be communicated only through human emotion?"

Dustin York, an associate professor of communication at Maryville University, hopes educators will learn to use ChatGPT as a tool and realize it can help students think critically.

"Educators thought that Google, Wikipedia, and the internet itself would ruin education, but they did not," York said. "What worries me most are educators who may actively try to discourage the acknowledgment of AI like ChatGPT. It's a tool, not a villain."

Can teachers spot ChatGPT use?

Not with 100% certainty, but there's technology to spot AI help. The companies that sell tools to high schools and universities to detect plagiarism are now expanding to detecting AI, too.

One, Coalition Technologies, offers an AI content detector on its website. Another, Copyleaks, released a free Chrome extension designed to spot ChatGPT-generated text with a technology that's 99% accurate, CEO Alon Yamin said. But it's a "never-ending cat and mouse game" to try to catch new techniques to thwart the detectors, he said.

Copyleaks performed an early test of student assignments uploaded to its system by schools. "Around 10% of student assignments submitted to our system include at least some level of AI-created content," Yamin said.

OpenAI launched its own detector for AI-written text in February. But one plagiarism-detecting company, CrossPlag, said it spotted only two of 10 AI-generated passages in its test. "While detection tools will be essential, they are not infallible," the company said.

Researchers at Pennsylvania State University studied the plagiarism issue using OpenAI's earlier GPT-2 language model. It's not as sophisticated as GPT-3.5, but its training data is available for closer scrutiny. The researchers found GPT-2 plagiarized information not just word for word at times, but also paraphrased passages and lifted ideas without citing its sources. "The language models committed all three types of plagiarism, and ... the larger the dataset and parameters used to train the model, the more often plagiarism occurred," the university said.

Can ChatGPT write software?

Yes, but with caveats. ChatGPT can retrace steps humans have taken, and it can generate actual programming code. "This is blowing my mind," said one programmer in February, showing on Imgur the sequence of prompts he used to write software for a car repair center. "This would've been an hour of work at least, and it took me less than 10 minutes."

You just have to make sure it's not bungling programming concepts or using software that doesn't work. The StackOverflow ban on ChatGPT-generated software is there for a reason.

But there's enough software on the web that ChatGPT really can work. One developer, Cobalt Robotics Chief Technology Officer Erik Schluntz, tweeted that ChatGPT provides useful enough advice that, over three days, he hadn't opened StackOverflow once to look for advice.

Another, Gabe Ragland of AI art site Lexica, used ChatGPT to write website code built with the React tool.

ChatGPT can parse regular expressions (regex), a powerful but complex system for spotting particular patterns, for example, dates in a bunch of text or the name of a server in a website address. "It's like having a programming tutor on hand 24/7," tweeted programmer James Blackwell about ChatGPT's ability to explain regex.

Here's one impressive example of its technical chops: ChatGPT can emulate a Linux computer, delivering correct responses to command-line input.

What's off-limits?

ChatGPT is designed to weed out "inappropriate" requests, a behavior in line with OpenAI's mission "to ensure that artificial general intelligence benefits all of humanity."

If you ask ChatGPT itself what's off limits, it'll tell you: any questions "that are discriminatory, offensive, or inappropriate. This includes questions that are racist, sexist, homophobic, transphobic, or otherwise discriminatory or hateful." Asking it to engage in illegal activities is also a no-no.

Is this better than Google search?

Asking a computer a question and getting an answer is useful, and often ChatGPT delivers the goods.

Google often supplies you with its suggested answers to questions and links to websites that it thinks will be relevant. Often ChatGPT's answers far surpass what Google will suggest, so it's easy to imagine ChatGPT as a rival.

But you should think twice before trusting ChatGPT. As when using Google and other sources of information like Wikipedia, it's best practice to verify information from original sources before relying on it.

Vetting the veracity of ChatGPT answers takes some work because it just gives you some raw text with no links or citations. But it can be useful and in some cases thought provoking. You may not see something directly like ChatGPT in Google search results, but Google has built large language models of its own and uses AI extensively already in search.

That said, Google is keen to tout its deep AI expertise. Even so, ChatGPT triggered a "code red" emergency within Google, according to The New York Times, and drew Google co-founders Larry Page and Sergey Brin back into active work. Microsoft, meanwhile, has built ChatGPT's underlying technology into its rival search engine, Bing. Clearly ChatGPT and other tools like it have a role to play when we're looking for information.

So ChatGPT, while imperfect, is doubtless showing the way toward our tech future.

https://www.cnet.com/

Wednesday, 15 February 2023

How Rust went from a side project to the world’s most-loved programming language

Many software projects emerge because—somewhere out there—a programmer had a personal problem to solve.

That’s more or less what happened to Graydon Hoare. In 2006, Hoare was a 29-year-old computer programmer working for Mozilla, the open-source browser company. Returning home to his apartment in Vancouver, he found that the elevator was out of order; its software had crashed. This wasn’t the first time it had happened, either. 

Hoare lived on the 21st floor, and as he climbed the stairs, he got annoyed. “It’s ridiculous,” he thought, “that we computer people couldn’t even make an elevator that works without crashing!” Many such crashes, Hoare knew, are due to problems with how a program uses memory. The software inside devices like elevators is often written in languages like C++ or C, which are famous for allowing programmers to write code that runs very quickly and is quite compact. The problem is those languages also make it easy to accidentally introduce memory bugs—errors that will cause a crash. Microsoft estimates that 70% of the vulnerabilities in its code are due to memory errors from code written in these languages.

Most of us, if we found ourselves trudging up 21 flights of stairs, would just get pissed off and leave it there. But Hoare decided to do something about it. He opened his laptop and began designing a new computer language, one that he hoped would make it possible to write small, fast code without memory bugs. He named it Rust, after a group of remarkably hardy fungi that are, he says, “over-engineered for survival.”

Seventeen years later, Rust has become one of the hottest new languages on the planet—maybe the hottest. There are 2.8 million coders writing in Rust, and companies from Microsoft to Amazon regard it as key to their future. The chat platform Discord used Rust to speed up its system, Dropbox uses it to sync files to your computer, and Cloudflare uses it to process more than 20% of all internet traffic. 

When the coder discussion board Stack Overflow conducts its annual poll of developers around the world, Rust has been rated the most “loved” programming language for seven years running. Even the US government is avidly promoting software in Rust as a way to make its processes more secure. The language has become, like many successful open-source projects, a barn-raising: there are now hundreds of die-hard contributors, many of them volunteers. Hoare himself stepped aside from the project in 2013, happy to turn it over to those other engineers, including a core team at Mozilla.

It isn’t unusual for someone to make a new computer language. Plenty of coders create little ones as side projects all the time. But it’s meteor-strike rare for one to take hold and become part of the pantheon of well-known languages alongside, say, JavaScript or Python or Java. How did Rust do it?

To grasp what makes Rust so useful, it’s worth taking a peek beneath the hood at how programming languages deal with computer memory.

You could, very crudely, think of the dynamic memory in a computer as a chalkboard. As a piece of software runs, it’s constantly writing little bits of data to the chalkboard, keeping track of which one is where, and erasing them when they’re no longer needed. Different computer languages manage this in different ways, though. An older language like C or C++ is designed to give the programmer a lot of power over how and when the software uses the chalkboard. That power is useful: with so much control over dynamic memory, a coder can make the software run very quickly. That’s why C and C++ are often used to write “bare metal” code, the sort that interacts directly with hardware. Machines that don’t have an operating system like Windows or Linux, including everything from dialysis machines to cash registers, run on such code. (It’s also used for more advanced computing: at some point an operating system needs to communicate with hardware. The kernels of Windows, Linux, and MacOS are all significantly written in C.)

But as speedy as they are, languages like C and C++ come with a trade-off. They require the coder to keep careful track of what memory is being written to, and when to erase it. And if you accidentally forget to erase something? You can cause a crash: the software later on might try to use a space in memory it thinks is empty when there’s really something there. Or you could give a digital intruder a way to sneak in. A hacker might discover that a program isn’t cleaning up its memory correctly—information that should have been wiped (passwords, financial info) is still hanging around—and sneakily grab that data. As a piece of C or C++ code gets bigger and bigger, it’s possible for even the most careful coder to make lots of memory mistakes, filling the software with bugs.

“In C or C++ you always have this fear that your code will just randomly explode,” says Mara Bos, cofounder of the drone firm Fusion Engineering and head of Rust’s library team.

In the ’90s, a new set of languages like Java, JavaScript, and Python became popular. These took a very different approach. To relieve stress on coders, they automatically managed the memory by using “garbage collectors,” components that would periodically clean up the memory as a piece of software was running. Presto: you could write code that didn’t have memory mistakes. But the downside was a loss of that fine-grained control. Your programs also performed more sluggishly (because garbage collection takes up crucial processing time). And software written in these languages used much more memory. So the world of programming became divided, roughly, into two tribes. If software needed to run fast or on a tiny chip in an embedded device, it was more likely to be written in C or C++. If it was a web app or mobile-phone app—an increasingly big chunk of the world of code—then you used a newer, garbage-collected language.

With Rust, Hoare aimed to create a language that split the difference between these approaches. It wouldn’t require programmers to manually figure out where in memory they were putting data; Rust would do that. But it would impose many strict rules on how data could be used or copied inside a program. You’d have to learn those coding rules, which would be more onerous than the ones in Python or JavaScript. Your code would be harder to write, but it’d be “memory safe”—no fears that you’d accidentally inserted lethal memory bugs. Crucially, Rust would also offer “concurrency safety.” Modern programs do multiple things at once—concurrently, in other words—and sometimes those different threads of code try to modify the same piece of memory at nearly the same time. Rust’s memory system would prevent this.
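Those rules are concrete in today's Rust: every value has a single owner, and handing it to another variable or function moves it, after which the original binding is unusable. A minimal sketch (the `shout` function is invented for illustration):

```rust
// Passing a String by value moves ownership into the function;
// the caller can no longer use the original binding afterward.
fn shout(s: String) -> String {
    s.to_uppercase()
}

fn main() {
    let greeting = String::from("hello");
    let loud = shout(greeting); // ownership of `greeting` moves here
    // println!("{greeting}"); // compile error: value used after move
    assert_eq!(loud, "HELLO");
    println!("{loud}"); // prints "HELLO"
}
```

Because the compiler knows exactly one owner is responsible for each value, it can free memory at the right moment without a garbage collector and reject use-after-move mistakes before the program ever runs.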

When he first opened his laptop to begin designing Rust, Hoare was already a 10-year veteran of software, working full time at Mozilla. Rust was just a side project at first. Hoare beavered away at it for a few years, and when he showed it to other coders, reaction was mixed. “Some enthusiasm,” he told me in an email. “A lot of eye-rolls and ‘This will never work’ or ‘This will never be usable.’”

Executives at Mozilla, though, were intrigued. Rust, they realized, could help them build a better browser engine. Browsers are notoriously complex pieces of software with many opportunities for dangerous memory bugs.

One employee who got involved was Patrick Walton, who’d joined Mozilla after deciding to leave his PhD studies in programming languages. He remembers Brendan Eich, the inventor of JavaScript, pulling him into a meeting at Mozilla: “He said, ‘Why don’t you come into this room where we’re going to discuss design decisions for Rust?’” Walton thought Rust sounded fantastic; he joined Hoare and a growing group of engineers in developing the language. Many, like Mozilla engineers Niko Matsakis and Felix Klock, had academic experience researching memory and coding languages.

In 2009, Mozilla decided to officially sponsor Rust. The language would be open source, and accountable only to the people making it, but Mozilla was willing to bootstrap it by paying engineers. A Rust group took over a conference room at the company; Dave Herman, cofounder of Mozilla Research, dubbed it “the nerd cave” and posted a sign outside the door. Over the next 10 years, Mozilla employed over a dozen engineers to work on Rust full time, Hoare estimates.

“Everyone really felt like they were working on something that could be really big,” Walton recalls. That excitement extended outside Mozilla’s building, too. By the early 2010s, Rust was attracting volunteers from around the world, from every nook of tech. Some worked for big tech firms. One major contributor was a high school student in Germany. At a Mozilla conference in British Columbia in 2010, Eich stood up to say there’d be a talk on an experimental language, and “don’t attend unless you’re a real programming language nerd,” Walton remembers. “And of course, it filled the room.”

Through the early 2010s, Mozilla engineers and Rust volunteers worldwide gradually honed Rust’s core—the way it is designed to manage memory. They created an “ownership” system so that a piece of data is owned by only one variable at a time; this greatly reduces the chances of memory problems. Rust’s compiler—which takes the lines of code you write and turns them into the software that runs on a computer—would rigorously enforce the ownership rules: if a coder violated them, the compiler would refuse to turn the code into a runnable program.
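Alongside ownership, the compiler enforces borrowing rules: at any moment a value may have many shared (read-only) references or exactly one mutable reference, never both. A small example of code the compiler accepts, with the rejected variant left as a comment:

```rust
// Any number of shared (&) borrows may coexist, but mutating the
// value requires that no shared borrows are still in use.
fn total(v: &[i32]) -> i32 {
    v.iter().sum()
}

fn main() {
    let mut data = vec![1, 2, 3];
    let r1 = &data;
    let r2 = &data; // multiple shared borrows: fine
    assert_eq!(total(r1) + total(r2), 12);
    // let m = &mut data; // compile error if r1 or r2 were used after this
    data.push(4); // allowed: the shared borrows above are no longer in use
    assert_eq!(total(&data), 10);
}
```

This "shared XOR mutable" discipline is also what rules out data races at compile time: two threads can never hold simultaneous mutable access to the same memory.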

Many of the tricks Rust employed weren’t new ideas: “They’re mostly decades-old research,” says Manish Goregaokar, who runs Rust’s developer-tools team and worked for Mozilla in those early years. But the Rust engineers were adept at finding these well-honed concepts and turning them into practical, usable features.

As the team improved the memory-management system, Rust had increasingly little need for its own garbage collector—and by 2013, the team had removed it. Programs written in Rust would now run even faster: no periodic halts while the computer performed cleanup. There are, Hoare points out, some software engineers who would argue that Rust still possesses elements that are a bit like garbage collection—its “reference counting” system, part of how its memory-ownership mechanics work. But either way, Rust’s performance had become remarkably efficient. It dove closer to the metal, down to where C and C++ were—yet it was memory safe.
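The reference counting Hoare alludes to lives in the standard library’s `Rc` type, which tracks how many owners a value has; this toy example shows the count rising and falling:

```rust
use std::rc::Rc;

fn main() {
    // Rc<T> keeps a per-value owner count instead of relying on a
    // tracing garbage collector.
    let shared = Rc::new(vec![1, 2, 3]);
    let alias = Rc::clone(&shared);           // bump the count; no deep copy
    assert_eq!(Rc::strong_count(&shared), 2); // two owners now
    drop(alias);                              // one owner gives up its claim
    assert_eq!(Rc::strong_count(&shared), 1);
    // When the last Rc is dropped, the Vec is freed immediately --
    // deterministic cleanup, with no GC pauses.
}
```

Unlike a tracing collector, the cleanup here happens at a known point in the program, which is why removing the garbage collector didn’t cost Rust its memory safety.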

Removing garbage collection “led to a leaner and meaner language,” says Steve Klabnik, a coder who got involved with Rust in 2012 and wrote documentation for it for the next 10 years.

Along the way, the Rust community was also building a culture that was known for being unusually friendly and open to newcomers. “No one ever calls you a noob,” says Nell Shamrell-Harrington, a principal engineer at Microsoft who at the time worked on Rust at Mozilla. “No question is considered a stupid question.” 

Part of this, she says, is that Hoare had very early on posted a “code of conduct,” prohibiting harassment, that anyone contributing to Rust was expected to adhere to. The community embraced it, and that, longtime Rust community members say, drew queer and trans coders to get involved in Rust in higher proportions than you’d find with other languages. Even the error messages that the compiler creates when the coder makes a mistake are unusually solicitous; they describe the error, and also politely suggest how to fix it. 

......

http://surl.li/ewows

Wednesday, 14 December 2022

What is SASE? A cloud service that marries SD-WAN with security

 Secure Access Service Edge (SASE) is a network architecture that combines software-defined wide area networking (SD-WAN) and security functionality into a unified cloud service that promises simplified WAN deployments, improved efficiency and security, and application-specific bandwidth policies.

First outlined by Gartner in 2019, SASE (pronounced “sassy”) has quickly evolved from a niche, security-first SD-WAN alternative into a popular WAN sector that analysts project will grow to become a $10-billion-plus market within the next couple of years.

Market research firm Dell’Oro Group forecasts that the SASE market will triple by 2026, topping $13 billion. Gartner is more bullish, predicting that the market will grow at a 36% CAGR between 2020 and 2025 to reach $14.7 billion.

What is SASE?

SASE consolidates SD-WAN with a suite of security services to help organizations safely accommodate an expanding edge that includes branch offices, public clouds, remote workers and IoT networks.

While some SASE vendors offer hardware appliances to connect edge users and devices to nearby points of presence (PoPs), most vendors handle the connections through software clients or virtual appliances. SASE is typically consumed as a single service, but there are a number of moving parts, so some SASE offerings piece together services from various partners.

On the networking side, the key features of SASE are WAN optimization, content delivery network (CDN), caching, SD-WAN, SaaS acceleration, and bandwidth aggregation. The vendors that make the WAN side of SASE work include SD-WAN providers, carriers, content-delivery networks, network-as-a-service (NaaS) providers, bandwidth aggregators and networking equipment vendors.

The security features of SASE can include encryption, multifactor authentication, threat protection, data leak prevention (DLP), DNS, Firewall-as-a-Service (FWaaS), Secure Web Gateway (SWG), and Zero Trust Network Access (ZTNA). The security side of SASE relies on a range of providers, including cloud-access security brokers, cloud secure web gateways providers, zero-trust network access providers, and more.

The feature set will vary from vendor to vendor, and the top SASE vendors are investing in advanced capabilities, such as support for 5G for WAN links, advanced behavior- and context-based security capabilities, and integrated AIOps for troubleshooting and automatic remediation.

Ideally, all these capabilities are offered as a unified SASE service by a single service provider, even if certain components are white labeled from other providers.

What are the benefits of SASE?

Because it is billed as a unified service, SASE promises to cut complexity and cost. Enterprises deal with fewer vendors, the amount of hardware required in branch offices and other remote locations declines, and the number of agents on end-user devices also decreases.

SASE removes management burdens from IT’s plate, while also offering centralized control for things that must remain in-house, such as setting user policies. IT executives can set policies centrally via cloud-based management platforms, and the policies are enforced at distributed PoPs close to end users. Thus, end users receive the same access experience regardless of what resources they need, and where they and the resources are located.

SASE also simplifies the authentication process by applying appropriate policies for whatever resources the user seeks, based on the initial sign-in. SASE also supports zero-trust networking, which controls access based on user, device and application, not location and IP address.

Security is increased because policies are enforced equally regardless of where users are located. As new threats arise, the service provider addresses how to protect against them, with no new hardware requirements for the enterprise.

More types of end users – employees, partners, contractors, customers – can gain access without the risk that traditional security – such as VPNs and DMZs – might be compromised and become a beachhead for potential attacks on the enterprise.

SASE providers can supply varying qualities of service, so each application gets the bandwidth and network responsiveness it needs. With SASE, enterprise IT staff have fewer chores related to deployment, monitoring and maintenance, and can be assigned higher-level tasks.

What are the SASE challenges?

Organizations thinking about deploying SASE need to address several potential challenges. For starters, some features could come up short initially because they are implemented by providers with backgrounds in either networking or security, but might lack expertise in the area that is not their strength.

Another issue to consider is whether the convenience of an all-in-one service meets the organization’s needs better than a collection of best-in-breed tools.

SASE offerings from a vendor with a history of selling on-premises hardware may not be designed with a cloud-native mindset. Similarly, legacy hardware vendors may lack experience with the in-line proxies needed by SASE, so customers may run into unexpected cost and performance problems.

Some traditional vendors may also lack experience in evaluating user contexts, which could limit their ability to enforce context-dependent policies. Due to SASE’s complexity, providers may have a feature list that they say is well integrated, but which is really a number of disparate services that are poorly stitched together.

Because SASE promises to deliver secure access to the edge, the global footprint of the service provider is important. Building out a global network could prove too costly for some SASE providers. This could lead to uneven performance across locations because some sites may be located far from the nearest PoP, introducing latency.

SASE transitions can also put a strain on personnel. Turf wars could flare up as SASE cuts across networking and security teams. Changing vendors to adopt SASE could also require retraining IT staff to handle the new technology.

What is driving the adoption of SASE?

The key drivers for SASE include supporting hybrid clouds, remote and mobile workers, and IoT devices, as well as finding affordable replacements for expensive technologies like MPLS and IPsec VPNs.

As part of digital transformation efforts, many organizations are seeking to break down tech siloes, eliminate outdated technologies like VPNs, and automate mundane networking and security chores. SASE can help with all of those goals, but you’ll need to make sure vendors share a vision for the future of SASE that aligns with your own.

According to Gartner, more traditional data-center functions are now hosted outside the enterprise data center than in it – in IaaS providers’ clouds, in SaaS applications, and in cloud storage. The needs of IoT and edge computing will only increase this dependence on cloud-based resources, yet typical WAN security architectures remain tailored to on-premises enterprise data centers.

In a post-COVID, hybrid work economy, this poses a major problem. The traditional WAN model requires that remote users connect via VPNs, with firewalls at each location or on individual devices. Traditional models also force users to authenticate to centralized security that grants access but may also route traffic through that central location.

This model does not scale. Moreover, this legacy architecture was already showing its age before COVID hit, but today its complexity and delay undermine competitiveness.

With SASE, end users and devices can authenticate and gain secure access to all the resources they are authorized to reach, and users are protected by security services located in clouds close to them. Once authenticated, they have direct access to the resources, addressing latency issues.

What is the SASE architecture?

Traditionally, the WAN consisted of stand-alone infrastructure, often requiring a heavy investment in hardware. SD-WAN didn’t replace this, but rather augmented it, removing non-mission-critical and/or non-time-sensitive traffic from expensive links.

In the short term, SASE might not replace traditional services like MPLS, which will endure for certain types of mission-critical traffic, but on the security side, tools such as IPsec VPNs will likely give way to cloud-delivered alternatives.

Other networking and security functions will be decoupled from underlying infrastructure, creating a WAN that is cloud-first, defined and managed by software, and run over a global network that, ideally, is located near enterprise data centers, branches, devices, and employees.

With SASE, customers can monitor the health of the network and set policies for their specific traffic requirements. Because traffic from the internet first goes through the provider’s network, SASE can detect dangerous traffic and intervene before it reaches the enterprise network. For example, DDoS attacks can be mitigated within the SASE network, saving customers from floods of malicious traffic.

What are the core security features of SASE?

The key security features that SASE provides include:  

- Firewall as a Service (FWaaS)

In today’s distributed environment, both users and computing resources are located at the edge of the network. A flexible, cloud-based firewall delivered as a service can protect these edges. This functionality will become increasingly important as edge computing grows and IoT devices get smarter and more powerful.

Delivering FWaaS as part of the SASE platform makes it easier for enterprises to manage the security of their network, set uniform policies, spot anomalies, and quickly make changes.

- Cloud Access Security Broker (CASB)

As corporate systems move away from on-premises to SaaS applications, authentication and access become increasingly important. CASBs are used by enterprises to make sure their security policies are applied consistently even when the services themselves are outside their sphere of control.

With SASE, the same portal employees use to get to their corporate systems is also the portal to all the cloud applications they are allowed to access, with CASB functionality built in. Traffic doesn't have to be routed outside the system to a separate CASB service.

- Secure Web Gateway (SWG)

Today, network traffic is rarely limited to a pre-defined perimeter. Modern workloads typically require access to outside resources, but there may be compliance reasons to deny employees access to certain sites. In addition, companies want to block access to phishing sites and botnet command-and-control servers. Even innocuous web sites may be used maliciously by, say, employees trying to exfiltrate sensitive corporate data.

SWGs protect companies from these threats. SASE vendors that offer this capability should be able to inspect encrypted traffic at cloud scale. Bundling SWG in with other network security services improves manageability and allows for a more uniform set of security policies.

- Zero Trust Network Access (ZTNA)

Zero Trust Network Access provides enterprises with granular visibility and control of users and systems accessing corporate applications and services.

A core element of ZTNA is that security is based on identity, rather than, say, IP address. This makes it more adaptable for a mobile workforce, but requires additional levels of authentication, such as multi-factor authentication and behavioral analytics.
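The identity-based decision at the heart of ZTNA can be sketched in a few lines; everything here (the struct, field names, and the `allow` function) is a hypothetical illustration, not any vendor’s API:

```rust
// Hypothetical zero-trust access check: the decision hinges on who the
// user is and the state of their device -- never on a source IP address.
struct AccessRequest<'a> {
    user: &'a str,
    mfa_passed: bool,     // additional authentication factor, as ZTNA requires
    device_trusted: bool, // e.g. a device-posture check passed
    app: &'a str,
}

// `entitlements` maps users to the applications they may reach.
fn allow(req: &AccessRequest, entitlements: &[(&str, &str)]) -> bool {
    req.mfa_passed
        && req.device_trusted
        && entitlements.iter().any(|&(u, a)| u == req.user && a == req.app)
}

fn main() {
    let entitlements = [("alice", "payroll")];
    let req = AccessRequest {
        user: "alice",
        mfa_passed: true,
        device_trusted: true,
        app: "payroll",
    };
    // Granted only because identity, second factor, and device posture all
    // check out -- the request's network location never enters the decision.
    println!("access granted: {}", allow(&req, &entitlements));
}
```

Note that nothing in the check references an IP address, which is what makes the model portable to a mobile workforce.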

What other technologies may be part of SASE?

In addition to those four core security capabilities, various vendors offer a range of additional features.

These include web application and API protection, remote browser isolation, DLP, DNS, unified threat protection, and network sandboxes. Two features many enterprises will find attractive are network privacy protection and traffic dispersion, which make it difficult for threat actors to find enterprise assets by tracking their IP addresses or eavesdrop on traffic streams.

Other optional capabilities include Wi-Fi-hotspot protection, support for legacy VPNs, and protection for offline edge-computing devices or systems.

Centralized access to network and security data can allow companies to run holistic behavior analytics and spot threats and anomalies that otherwise wouldn't be apparent in siloed systems. When these analytics are delivered as a cloud-based service, it will be easier to include updated threat data and other external intelligence.

The ultimate goal of bringing all these technologies together under the SASE umbrella is to give enterprises flexible and consistent security, better performance, and less complexity – all at a lower total cost of ownership.

Enterprises should be able to get the scale they need without having to hire a correspondingly large number of network and security administrators.

Who are the top SASE providers?

The leading SASE vendors include both established networking incumbents and well-funded startups. Many telcos and carriers also either offer their own SASE solutions (which they have typically gained through acquisitions) or resell and/or white-label services from pure-play SASE providers. Top vendors, in alphabetical order, include:

  • Akamai
  • Broadcom
  • Cato Networks
  • Cisco
  • Cloudflare
  • Forcepoint
  • Fortinet
  • HPE
  • Netskope
  • Palo Alto Networks
  • Perimeter 81
  • Proofpoint
  • Skyhigh Security
  • Versa
  • VMware
  • Zscaler

How to adopt SASE

Enterprises that must support a large, distributed workforce, a complicated edge with far-flung devices, and hybrid/multi-cloud applications should have SASE on their radar. For those with existing WAN investments, the logical first step is to investigate your WAN provider’s SASE services or preferred partners.

On the other hand, if your existing WAN investments are sunk costs that you’d prefer to walk away from, SASE offers a way to outsource and consolidate both WAN and security functions.

Over time, the line between SASE and SD-WAN will blur, so choosing one over the other won’t necessarily lock you into a particular path, aside from the constraints that vendors might erect.

For most enterprises, however, SASE will be part of a hybrid WAN/security approach. Traditional networking and security systems will handle pre-existing connections between data centers and branch offices, while SASE will be used to handle new connections, devices, users, and locations.

SASE isn't a cure-all for network and security issues, nor is it guaranteed to prevent future disruptions, but it will allow companies to respond faster to disruptions or crises and to minimize their impact on the enterprise. In addition, SASE will allow companies to be better positioned to take advantage of new technologies, such as edge computing, 5G and mobile AI.

https://www.networkworld.com/

Saturday, 19 November 2022

How intelligent automation will change the way we work

Automation in the workplace is nothing new — organizations have used it for centuries, points out Rajendra Prasad, global automation lead at Accenture and co-author of The Automation Advantage. In recent decades, companies have flocked to robotic process automation (RPA) as a way to streamline operations, reduce errors, and save money by automating routine business tasks.

Now organizations are turning to intelligent automation to automate key business processes to boost revenues, operate more efficiently, and deliver exceptional customer experiences. Intelligent automation is a smarter version of RPA that makes use of machine learning, artificial intelligence (AI) and cognitive technologies such as natural language processing to handle more complex processes, guide better business decisions, and shed light on new opportunities, said Prasad.

For example, Newsweek has automated many aspects of managing its presence on social media, a crucial channel for broadening its reach and reputation, said Mark Muir, head of social media at the news magazine. Newsweek staffers used to manage every aspect of its social media postings manually, which involved selecting and sharing each new story to its social pages, figuring out what content to recycle, and testing different strategies. By moving to a more automated approach, the company now spends much less time on these processes.

“We use Echobox’s automation to help determine which content should be shared to our social media and to optimize how and when it is posted so that the largest possible audience will see it,” Muir said. “Automating in this way has created more time for us to focus on our readers and find new ways to engage our audience.”

Industry watchers predict that intelligent automation will usher in a workplace where AI not only frees up human workers’ time for more creative work but also helps them set strategies and drive innovation. Most companies are not fully there yet but do have numerous opportunities for business process automation throughout the organization.

Business processes that are ripe for automation

Ravi Vasantraj, global delivery head at IT services provider Mphasis, cites several characteristics that make business processes good candidates for automation:
  • Processes that deal with structured, digital or non-digital data having definitive steps
  • Processes with seasonal spikes that can’t be fulfilled by a manual workforce, such as policy renewals, premium adjustments, claims payments in insurance, and so on
  • Processes with stringent service level agreements that need quick turnarounds, such as transactions posting, order fulfillment, etc.

Many companies are automating contract management, added Doug Barbin, managing principal and chief growth officer at Schellman, a provider of attestation and compliance services. “If you consider all the steps needed to draft, send, redline, and execute contracts via email, the use of technology to manage the content, coordinate change approval, and automate the signing process, the savings in time and reduction of errors is significant,” he said.

Beyond contracts, anything that reduces manual interaction for sales is an opportunity. For example, companies are providing chatbots to automate the ability to answer key questions and connect prospects to sales, according to Barbin.

UMC, a mechanical services contractor in Seattle, has automated many of its sales processes, said Bob Frey, director of sales operations. “We’ve automated various sales stages so we can track sales through our pipelines,” he said. “We are able to track what stages the different sales are in. We do this using Unanet CRM by Cosential that’s designed specifically for the construction industry.”

Schellman’s Barbin cites security as another area where automation is making inroads. “In cybersecurity, the mundane often resides in compliance and the need to test controls in an increasingly complex environment,” he said. “There is an entire segment of compliance automation tools that are being built to collect data and perform initial analysis before triaging and passing to [a human] assessor.”

In addition, more organizations are automating the procure-to-pay process in finance and the hire-to-retire process in human resources, said Wayne Butterfield, global lead for intelligent automation solutions at ISG (Information Services Group), a research and advisory firm.

“There are large numbers of tasks in every organization across just about every function that can be automated,” he said. “The question is: What is the technology needed to automate them, and does it make sense from a value realization perspective?”

The contact center is a huge opportunity, not only because of the large number of people completing similar activities with every contact but because of the positive impact it can have on customer experience and agent efficiency, Butterfield said. For example, companies can use automated virtual agents to handle the more routine customer requests, such as balance inquiries, bill payment, or change of address requests. This enables human agents to handle the more complicated customer inquiries that require creative problem solving. Handing these routine tasks off to automated virtual agents shortens the time it takes to resolve customer issues.
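The routing split Butterfield describes can be sketched as a simple triage function; the request categories come from the text above, but the names and structure are illustrative, not any contact-center product’s API:

```rust
#[derive(Debug, PartialEq)]
enum Handler {
    VirtualAgent, // automated agent takes routine requests
    HumanAgent,   // people take the cases that need creative problem solving
}

// Illustrative triage: send routine request types to a bot, everything
// else to a person.
fn route(request_type: &str) -> Handler {
    match request_type {
        "balance inquiry" | "bill payment" | "change of address" => Handler::VirtualAgent,
        _ => Handler::HumanAgent,
    }
}

fn main() {
    println!("{:?}", route("bill payment"));    // handled automatically
    println!("{:?}", route("billing dispute")); // escalated to a human
}
```

In a real deployment the classification would come from a natural-language model rather than exact string matching, but the division of labor is the same.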

Where intelligent automation is taking us

In coming years, the architecture of work will change and become more event-driven, with business processes controlled with intelligent automation and work broken down into discrete tasks that are performed via automation, assigned to a worker, or interactively executed between a robot assistant and a worker, said Maureen Fleming, program vice president for intelligent process automation research at IDC.

“There will be far fewer task workers using enterprise applications on a constant basis; task work will increasingly be delivered to workers via automation,” she said. “Employees will spend more time digitally enabling themselves by learning how to develop using low-code tools. And employees will spend more time planning, proactively identifying and resolving problems, making decisions, creating, etc. — in other words, performing knowledge work and/or creative work.”

Prasad said that in the years to come, automation will come to be viewed as an indispensable co-worker with a vital role to play in companies’ successes, reinventing individual processes, transforming customer and employee experiences, and driving revenue growth.

“Intelligent automation promises to usher in a new era in business, one where companies are more efficient and effective than ever before and able to meet the needs of customers, employees, and society in new and powerful ways,” he said.

Automation pitfalls to watch out for

As organizations automate their business processes, there are many potential hazards to avoid.

“The main one is ignoring your people and underestimating that,” Butterfield said. “Although the outcome is driven by using a technology, everything up to the actual automation of a process is generally very people-focused. A lack of change management will unfortunately cause many issues in the long term. Organizations need to keep their people aligned with their overall goals.”

Security, mainly authentication, is also a key concern, Barbin said. “Any automation, API [application programming interface] or other, requires some means to pass access credentials,” he said. “If the systems that automate and contain those credentials are compromised, access to the connected systems could be too.” To help minimize that risk, Barbin suggests using SAML 2.0 and other technologies that take stored passwords out of the systems.

Another pitfall is selecting only one technology as the automation tool of choice. Typically organizations need multiple technologies to get the best results, said IDC’s Fleming.

And when companies decide to automate a business process previously carried out by a person or a team of people, it’s natural to receive some pushback, Newsweek’s Muir said. “Some of our journalists had initially struggled with the idea of letting an algorithm make choices that were previously weighed up and decided by a human,” he said. “There can be a bit of fear around AI and algorithms and a perceived lack of control when processes are suddenly automated.”

Organizations also need to establish clear strategies for business process automation, according to Vasantraj. “Automating the processes without understanding the ROI [return on investment] could lead to business loss, or automations built with multiple user interventions may not yield any benefit at all,” he said.

Take it slow, plan carefully, and listen to your people

Scaling intelligent automation is one of the biggest challenges for organizations, said Accenture’s Prasad. Therefore, it’s crucial that companies be clear about the strategic intent behind this initiative from the outset and ensure that it’s embedded into their entire modernization journeys, from cloud adoption to data-led transformation.

“Intelligent automation is not a race to be the first to implement the latest technology,” he said. “Success depends on understanding people’s needs, introducing new technologies in a way that is helpful and involves minimal disruption, and addressing issues related to new skills, roles, and job content.”

In other words, focusing on people is just as important as focusing on technology, Prasad said. Investments in intelligent automation must be “people first” — designed to elevate human strengths and supported by investments in skills, change management, experience, organization, and culture.

Butterfield agreed that strategic thinking is critical. “My advice would be to start small and think strategically,” he said. “Understand the shape and type of problems you are trying to automate or improve before you move to a technology solution. Work with your people, and ensure you use their tribal knowledge to understand why they do something.”

However, Butterfield cautions that organizations should avoid relying on people’s opinions on how long things take and how many actions they are able to complete in a given timeframe. “Such reliance often causes your business cases to be inaccurate, as they include the agent’s local management bias versus hard data and facts,” he said.

Muir’s advice is to let the results speak for themselves. Once an organization has introduced AI and automation to a process, it should let any time gains and increases in performance be key factors in objectively determining whether the project was a success. “In our experience, using Echobox proved the quantifiable value of automation to our organization, which made it easier for our teams to embrace it,” he said.

“Another piece of advice would be to find a balance that works for your team or your business when it comes to how much automation you use,” Muir said. For businesses that want to dip their toes into automation but are hesitant to automate 100% of their processes and relinquish manual control, there are often ways to just partially automate tasks, he added. “Take a realistic look at where you’re regularly spending time and talent on repetitive, manual tasks and explore how you can automate those parts of your workday.”

How workers can keep pace with automation

Rather than push back, employees should embrace automation and the opportunities it creates for them to provide high-value contributions versus management of administrative tasks, Barbin said.

“For security operations, for instance, leveraging automation allows those watching the networks for attackers to focus on high-priority threats and incidents, keeping up with a faster-moving landscape,” he said. “For compliance, they can move from managing a single US framework, [such as] SOC 2, to global compliance requirements, all from a single management plane.”

IDC’s Fleming noted that most organizations try to upskill and shift workers into new roles when their current roles are automated. They also consolidate new responsibilities into an existing role. And they tend to hire internal candidates for open jobs.  “When offered an opportunity to learn how to develop for automation, process improvement, etc., employees should embrace that opportunity,” she said. “Employees should look for internal upskilling programs as well as external ones.”

Newsweek’s Muir agreed that employees need to remain open to learning about new technologies and keep an open mind about how they can be leveraged. “Technology changes fast, and the tools and systems we use today may not be the same ones five years from now,” he said.

https://www.computerworld.com/

Tuesday, 15 November 2022

Recession in the US may cool off attrition in IT sector along with revenues

After two years of bumper profits and mind-boggling salary hikes, TCS, Infosys, Wipro and other Indian IT companies are treading a cautious path as wages and a likely slowdown in demand add to margin woes. However, an economic slowdown in the US might not be all that bad for Indian IT majors. Experts believe that a potential slowdown could have a positive effect on spiralling wage costs and attrition.

A rapid shift towards digitalisation due to the Covid pandemic in the last two years proved to be a big boon for the Indian IT sector. Giants like TCS, Infosys and Wipro rely predominantly on the US and European markets, which contribute 80-90% of their revenues.

Recession and high attrition rates – a double whammy for IT majors

Now, with talks of recession in the US and Europe gaining momentum, these IT companies are already under stress. The stress from the economic slowdown in the US and Europe is reflected in the FY23 earnings guidance of these IT companies.

Slower revenue growth could curb wage hikes and slow down attrition too, say experts.

“Indian IT companies source a lion’s share of their revenue from the US and Europe. Both these geographies face looming macro pressures in the form of one of the highest inflationary pressures and a slowdown in GDP growth,” said a Motilal Oswal report.

After clocking 19% revenue growth in FY22, the Indian IT sector is headed for two years of moderation, according to a Crisil report.

“Revenue growth is expected to moderate to 12-13% this fiscal and 9-10% in the next, [due to] an expected tightening in corporate capital spends because of inflationary headwinds,” the report stated.

“An economic slowdown in the US and EU could prove to be the inflection point for a cool down in wage hikes and attrition rates as well,” Dhananjay Sinha, head of strategy research and chief economist at JM Financial, told Business Insider India.

Attrition levels remain elevated – another source of margin stress

With attrition levels remaining elevated – Infosys is the worst affected, with an attrition rate of 28.4% in Q1 FY23 – research firms suggest that margins will remain stressed, too.

“The companies had reduced their margin guidance at the start of FY23, but we believe continued pressure due to elevated attrition levels is likely to result in margins dropping near the lower end of guidance,” stated a report by ICICI Securities.

Wage hikes and attrition rates could simmer down come December

Sinha explained that wage hikes and attrition rates could simultaneously simmer down by the December quarter this year. A cool-down in wages across the IT sector could also help solve the attrition headache for IT companies, he said. With startups facing a funding crunch, too, there could be fewer exit routes for IT executives.

Within the industry, Sinha said that Infosys could lead the pack, as its decision to cut variable pay to 70% has shown it is ready to control costs. Media reports suggested that Wipro delayed payouts for certain employee categories, a sign that companies are beginning to feel the pressure.

In contrast, TCS rolled out 100% variable pay days after Infosys.

An economic slowdown in the US is already showing signs of spillover into Big Tech revenues – Amazon Web Services, Microsoft Azure and Google Cloud, the world’s top cloud platforms, reported a 7% decline in revenue.

This could have a direct impact on TCS, Infosys and Wipro – according to media reports, the revenues of these IT companies could be impacted by up to 33%.

“A weakening macro environment may translate into lower IT spends and slower growth for Indian IT companies,” stated a report by Motilal Oswal.

Courtesy: https://www.businessinsider.in/

Tuesday, 1 November 2022

Why Wasm is the future of cloud computing

Wasm may just be the most important emerging technology that you’ve never heard of.

Shorthand for WebAssembly, Wasm was originally developed for the web. But the technology has since expanded beyond the browser, and organizations are now starting to run Wasm on the server side. For example, my company, SingleStore, is using it in our database.

Some think Wasm will replace container technology and the ubiquitous JavaScript.

Whether or not you believe that, Wasm is clearly making an impact on cloud computing. 

Wasm is cross-platform: Making it safer and simpler to bring cloud components together

People write software in many different languages, and getting those languages to interact with each other is difficult. Wasm provides a framework in which you can write in whatever language you want and compile it to a common, simulated machine format.

That format allows components written in various languages—like Rust, C/C++, and Go—to talk to each other. Wasm also provides the ability for server-side systems like databases to embed components from different languages without requiring you to know or care how that module was produced.
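To make this concrete, here is a minimal sketch of such a component written in Rust. The function name and the C-style signature are illustrative, not tied to any particular host; built with a Wasm target (for example, `cargo build --target wasm32-wasip1`), the same source produces a module that any Wasm-capable host can load without knowing it came from Rust.

```rust
// A minimal component function. The `extern "C"` ABI and `#[no_mangle]`
// keep the symbol visible to a host that loads the compiled module.
#[no_mangle]
pub extern "C" fn word_count(text_ptr: *const u8, len: usize) -> u32 {
    // Reconstruct the string from the raw pointer and length the host passed in.
    let bytes = unsafe { std::slice::from_raw_parts(text_ptr, len) };
    match std::str::from_utf8(bytes) {
        Ok(s) => s.split_whitespace().count() as u32,
        Err(_) => 0, // invalid UTF-8: report zero rather than trap
    }
}

fn main() {
    // Exercise the function natively; under Wasm, the host would call it instead.
    let text = "wasm is a universal plugin format";
    let n = word_count(text.as_ptr(), text.len());
    println!("{n} words"); // prints "6 words"
}
```

The host only sees a pointer, a length, and a returned integer, which is exactly why it doesn't need to know which language produced the module.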

Think of Wasm as a universal plugin format. Say you would like to augment your system’s capabilities with a component developed by a third party. Wasm lets you bring the new component into your system without the risks that typically come with integrating add-ons. For example, an external component might crash the system or behave in an unexpected way. Wasm mitigates these problems by providing an extremely safe framework in which disparate systems and components can interact.

The cloud is a big driver of Wasm’s expansion. Wasm is a good match for cloud because it’s virtualized and can work in any environment that supports the Wasm runtime. Also, cloud systems are typically composed of many services pieced together and connected in different ways. That can get complicated. But the more you can simplify your cloud environment, the easier it is for various aspects of cloud systems to work together correctly.

Wasm is secure: Lowering risk with its approach to running code and representing functions

In most language runtimes, functions have addresses. Those addresses are executable points in memory. If you are just looking at memory as a bunch of bytes, a function may be indistinguishable from the rest of the memory. This opens the door for people to find the function and inject code into it, or call a function in a privileged way so the function does something that it’s not supposed to do. Wasm’s design eliminates those problems.

Wasm represents functions in a way that is not exploitable. It also runs the code in a sandbox, which mitigates common security problems associated with running untrusted code. Because Wasm encapsulates the program memory in a safe area, nothing can get outside of it and access other places that might affect the host that’s running the program or compromise security.

And with Wasm’s capability-based security model, hosts have complete control over what kinds of privileged operations the Wasm program can run. For example, hosts must explicitly grant access to directories if file access is a requirement.

Wasm is fast: Eliminating what is not needed and enabling greater speed and efficiency

Clearly, Wasm isn’t the first technology people have used to bring things together in a safer, more simplified way. However, Wasm is much faster than some of those other technologies.

Compilers can generate Wasm programs by leveraging the LLVM back end, compiling down to the LLVM intermediate representation. LLVM (originally short for “low level virtual machine”) is an abstract machine that many languages already compile down to. As a result of this approach, and thanks to many years of community effort around the LLVM project, Wasm programs can be compiled to highly optimized machine code.

At SingleStore, we created the Wasm Space Program—a virtual real-time universe inside a database—to demonstrate how fast and lightweight Wasm is. In this simulation, spaceships use different strategies to replenish energy and fight other spacecraft in a vast, real-time “universe.” That involves a vast amount of data, with more than one million ships in the system and nearly three million database updates per second.

Traditionally, integrating that data and assembling it on a mid-tier layer would require you to pull a lot of data up to the mid-tier. That could introduce a huge amount of lag and require some complex caching to achieve a real-time response. Rather than taking that approach, each spaceship’s strategy is written in Wasm and loaded into the database as a UDF. Each second, each spaceship’s strategy function is invoked to decide its next move.
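A strategy function of this kind can be sketched as a pure function over a ship's local state. The struct fields and action codes below are invented for illustration; they are not the actual Wasm Space Program API:

```rust
// Illustrative per-ship state, as a strategy UDF might receive it.
// Field names and units are assumptions made for this sketch.
pub struct ShipState {
    pub energy: i64,
    pub nearest_enemy_dist: i64,
    pub nearest_energy_dist: i64,
}

/// Returns an action code: 0 = hold, 1 = move toward energy, 2 = attack.
pub fn decide(state: &ShipState) -> u8 {
    if state.energy < 100 {
        1 // low on energy: head for the nearest energy node
    } else if state.nearest_enemy_dist <= 1 {
        2 // enemy adjacent and energy to spare: attack
    } else {
        0 // otherwise hold position
    }
}

fn main() {
    let s = ShipState { energy: 50, nearest_enemy_dist: 5, nearest_energy_dist: 2 };
    println!("action = {}", decide(&s)); // low energy, so action 1
}
```

Because a function like this compiles to a small, sandboxed Wasm module, the database can safely invoke millions of them per second right next to the row data.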

There’s nothing on the front end—a JavaScript program running in the browser—that understands these strategies, or anything about the state of the universe. Its job is simply to issue SQL queries directly to the database and graphically present the information that is returned. The database maintains all of the state information, and because Wasm has allowed the compute to be right next to the data, it’s a lot faster. No mid-tier was even necessary.

But Wasm isn’t all fun and games. You can use it to address countless other applications and use cases. For example, you could use Wasm for sentiment analysis. The kind of complex logic required for sentiment analysis isn’t something that can easily be expressed in a database SQL dialect. So, in order to do this, you usually need to implement it in a more sophisticated language and then bring the data to it by downloading each row of data. Then you need to push the sentiment analysis rating back into the database. That means a round trip for every row in the database you use. If you have millions of rows, that creates a lot of network traffic. But with the way SingleStore has integrated Wasm, you are already in the database, so you don’t incur that overhead.
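As a sketch, a toy sentiment scorer of the kind described might look like this in Rust. The word lists and function name are illustrative, and a real UDF would use the host's binding conventions rather than a plain function:

```rust
// A toy word-list sentiment scorer: logic that is awkward to express
// in SQL but trivial in a general-purpose language. Compiled to Wasm
// and loaded as a UDF, it would run next to each row instead of
// requiring a network round trip per row.
pub fn sentiment_score(text: &str) -> i32 {
    const POSITIVE: &[&str] = &["good", "great", "fast", "love"];
    const NEGATIVE: &[&str] = &["bad", "slow", "broken", "hate"];
    text.split_whitespace()
        // Strip punctuation from each word and normalize case.
        .map(|w| w.trim_matches(|c: char| !c.is_alphanumeric()).to_lowercase())
        .map(|w| {
            if POSITIVE.contains(&w.as_str()) { 1 }
            else if NEGATIVE.contains(&w.as_str()) { -1 }
            else { 0 }
        })
        .sum()
}

fn main() {
    println!("{}", sentiment_score("Great product, but slow support.")); // prints "0"
}
```

In the round-trip approach the article describes, this function would live in an external service and every row would travel across the network twice; as an in-database Wasm UDF, only the scores move.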

Wasm is getting better all the time: Creating standards makes it even more powerful

Wasm is already very capable. And with the new technologies and standards that are on the way, Wasm will let you do even more.

For example, the W3C WebAssembly Community Group, with help from members of organizations such as the Bytecode Alliance (of which SingleStore is a member), is currently working on standardizing the WebAssembly System Interface (WASI). WASI will provide a standard set of APIs and services that can be used when Wasm modules are running on the server. Many standards proposals are still in progress, such as garbage collection, network I/O, and threading, so you can’t always map the things that you’re doing in other programming languages to Wasm. But eventually, WASI aims to provide a full standard that will help to achieve that. In many ways, the goals of WASI are similar to those of POSIX.

Wasm as it now stands also doesn’t address the ability to link or communicate with other Wasm modules. But the Wasm community, with support from members of the computing industry, is working on the creation of something called the component model. This aims to create a dynamic linking infrastructure around Wasm modules, defining how components start up and communicate with each other (similar to a traditional OS’s process model).

Additionally, an emerging standard IDL syntax, called WIT (for WebAssembly Interface Types), will allow people to describe their Wasm interfaces in a language-agnostic way. As a result, binding generators will be able to take what’s in the IDL and compile code that will allow both the Wasm host and the guest to communicate data back and forth in a common way.
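As an illustration, a hypothetical WIT interface for the sentiment-scoring use case discussed earlier might look like the following. The package and interface names are invented, and the syntax tracks the still-evolving WIT proposal, so details may change:

```wit
// Hypothetical WIT description of a sentiment-scoring component.
package example:analytics;

interface scorer {
  // A binding generator would turn this into host- and guest-side glue
  // code in whatever languages each side is written in.
  sentiment-score: func(text: string) -> s32;
}

world sentiment {
  export scorer;
}
```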

Wasm is the future: Providing a faster, more secure, and more efficient way to bring things together

Wasm, though more lightweight than containers, may not replace them any time soon. But you can expect Wasm to become part of a whole lot of software going forward.

Whether on the server or on the edge, Wasm lets you create custom logic that runs much closer to the data than it could before—and you can do it securely, efficiently, and with greater flexibility.

And now with SingleStore, you can compile your existing programs to Wasm, push them into the database, and run them there. That means you may not have to rewrite that code or put it somewhere the data is not. With Wasm technology, you can have the best of both worlds.

Courtesy: https://www.infoworld.com/