Wednesday, 13 December 2023

Google's Gemini continues the dangerous obfuscation of AI technology

Until this year, it was possible to learn a lot about artificial intelligence technology simply by reading research documentation published by Google and other AI leaders with each new program they released. Open disclosure was the norm for the AI world. 

All that changed in March of this year, when OpenAI elected to announce its latest program, GPT-4, with very little technical detail. The research paper provided by the company obscured just about every important detail of GPT-4 that would allow researchers to understand its structure and to attempt to replicate its effects. 

Last week, Google continued that obfuscation approach, announcing the formal release of its newest generative AI program, Gemini, which was first unveiled in May and developed in conjunction with its DeepMind unit. The Google and DeepMind researchers offered a blog post with no technical specifications and an accompanying technical report that is almost equally devoid of relevant technical detail. 

Much of the blog post and the technical report is given over to a raft of benchmark scores, with Google boasting of beating OpenAI's GPT-4 on most measures and surpassing its own former top neural network, PaLM. 

Neither the blog nor the technical paper includes key details customary in years past, such as how many neural net "parameters," or "weights," the program has -- a key aspect of its design and function. Instead, Google refers to three versions of Gemini in three different sizes: "Ultra," "Pro," and "Nano." The paper does disclose that Nano comes in two parameter counts, 1.8 billion and 3.25 billion, but fails to disclose the counts for the other two sizes. 
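
For readers unfamiliar with the jargon, a parameter (or weight) is one of the trainable numbers inside a neural network, and the parameter count is simply their total. A toy calculation in Python makes the term concrete; the layer sizes below are invented for illustration and have nothing to do with Gemini's undisclosed architecture:

# Toy illustration of what a "parameter count" measures. The layer sizes are
# made up for this example; Gemini's actual architecture is not public.

def dense_layer_params(n_in: int, n_out: int) -> int:
    """Weights plus biases for one fully connected layer."""
    return n_in * n_out + n_out

# A tiny two-layer network: 1,024 inputs -> 4,096 hidden units -> 1,024 outputs.
total = dense_layer_params(1024, 4096) + dense_layer_params(4096, 1024)
print(f"{total:,} parameters")  # 8,393,728 -- Gemini Nano's smaller variant has 1.8 billion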

Numerous other technical details are absent, just as with the GPT-4 technical paper from OpenAI. In the absence of technical details, online debate has focused on whether the boasting of benchmarks means anything. 

OpenAI researcher Rowan Zellers wrote on X (formerly Twitter) that Gemini is "super impressive," and added, "I also don't have a good sense on how much to trust the dozen or so text benchmarks that all the LLM papers report on these days." 

Tech news site TechCrunch's Kyle Wiggers reports anecdotes of poor performance by Google's Bard chatbot, now enhanced with Gemini. He cites posts on X by people asking Bard for things such as movie trivia or vocabulary suggestions and reporting its failures. 

The sudden swing to secrecy by Google and OpenAI is becoming a major ethical issue for the tech industry because no one outside the vendors -- OpenAI and its partner Microsoft, or, in this case, Google and its Google Cloud unit -- knows what is going on in the black box in their computing clouds. 

Google's lack of disclosure, while not surprising given its commercial battle with OpenAI and its partner Microsoft for market share, is made more striking by one very large omission: model cards. 

Model cards are a form of standard disclosure used in AI to report the details of neural networks, including the potential harms of the program (hate speech, etc.). While the GPT-4 report from OpenAI omitted most details, it at least made a nod to the practice with a "GPT-4 System Card" section in the paper, which it said was inspired by model cards.

Google doesn't even go that far, omitting anything resembling model cards. The omission is particularly strange given that model cards were invented at Google by a team that included Margaret Mitchell and Timnit Gebru, the former co-leads of the company's Ethical AI team. 

Instead of model cards, the report offers a brief, rather bizarre passage about the deployment of the program with vague language about having model cards at some point.

If Google puts question marks next to model cards in its own technical disclosure, one has to wonder what the future of oversight and safety is for neural networks.

https://www.zdnet.com/

Six of the most popular Android password managers are leaking data

Several mobile password managers are leaking user credentials due to a vulnerability discovered in the autofill functionality of Android apps. 

The credential-stealing flaw, dubbed AutoSpill, was reported by a team of researchers from the International Institute of Information Technology Hyderabad at last week's Black Hat Europe 2023 conference.

The vulnerability comes into play when Android calls a login page via WebView. (WebView is an Android component that makes it possible to view web content without opening a web browser.) When that happens, WebView allows Android apps to display the content of the web page in question. 

That's all well and good -- unless a password manager is added to the mix: The credentials shared with WebView can also be shared with the app that originally called for the username and password. If the originating app is trusted, everything should be OK. If that app isn't trusted, things could go very wrong.

The affected password managers are 1Password, LastPass, Enpass, Keeper, and Keepass2Android. If the credentials were shared via a JavaScript injection method, DashLane and Google Smart Lock are also affected by the vulnerability.

Because of the nature of this vulnerability, neither phishing nor malicious in-app code is required.

One thing to keep in mind is that the researchers tested this on less-than-current hardware and software.

Specifically, they tested on these three devices: Poco F1, Samsung Galaxy Tab S6 Lite, and Samsung Galaxy A52. The versions of Android used in their testing were Android 10 (with the December 2020 security patch), Android 11 (with the January 2022 security patch), and Android 12 (with the April 2022 security patch). 

As these tested devices -- as well as the OS and security patches -- were out of date, it's hard to know with any certainty whether the vulnerability would affect newer versions of Android. 

However, even if you are using a device other than what the group tested with, it doesn't mean this vulnerability should be shrugged off. Rather, it should serve as a reminder to always keep your Android OS and installed apps up to date. The WebView component has long been under scrutiny, and it should always be kept current. For that, you can open the Google Play Store on your device, search for WebView, tap About this app, and compare the latest version with the version installed on your device. If they are not the same, you'll want to update.

One of your best means of keeping Android secure is to make sure it is always as up-to-date as possible. Check daily for OS and app updates and apply all that are available.

https://www.zdnet.com/

Monday, 25 September 2023

Smartphone Showdown: 15 Years of Android vs. iPhone

 "I'm going to destroy Android, because it's a stolen product," Steve Jobs says in author Walter Isaacson's 2011 biography of the late Apple co-founder.

Jobs' fury around Google and its smartphone software is well documented, and the many lawsuits involving Apple and various Android partners showed that Jobs was serious about his allegations of theft. But the reality is that both Apple and Google have taken inspiration from each other for years and that neither company would be where it is today without the work of the other.

So as Android celebrates its 15th birthday (since the launch of the first Android-based phone, the T-Mobile G1), let's take a look back at the journey the companies have taken to becoming the most dominant forces in the tech world -- and how their competition pushed them to innovate. 

Smartphones have arguably changed the world more than any other invention in human history, from radically altering how we interact with one another to creating a whole new category of companies that deal in various mobile technologies. And though Jobs may have been outspokenly vitriolic about Android in the early days, it's clear that ideas and inspiration have echoed back and forth between Apple and Google in the years since.

The last 15 years of competition between the two companies have often felt like siblings bickering at playtime, falling out over who had which toy first or crying to the parents when the other one took something that wasn't theirs. Most siblings will argue to some extent throughout their lives, but history is also rife with pairings that, through spirited competition, pushed each sibling to succeed. 

The two companies' volleying back and forth pushed them ahead in the game, and allowed them to fight off other challengers, like the once-dominant BlackBerry, as well as Nokia and its short-lived Symbian platform. Even tech giant Microsoft and its Windows Phone failed to thrive in the face of the heated competition from Apple and Google.

But though the relationship today between the iPhone maker and the Android purveyor hardly matches the friendly, familial rivalry of tennis's Williams sisters, that wasn't always the case. Let's take a look back.

Beginnings

Android began as its own company (Android Inc.) back in 2003, and it wasn't acquired by Google until 2005. Apple, meanwhile, had already found mobile success with the iPod; the iPhone began development in secret in 2004, and Jobs was reportedly approached to become Google's CEO. 

Jobs didn't take the role, but Google found a CEO in Eric Schmidt, who in 2006 became part of Apple's board of directors. "There was so much overlap that it was almost as if Apple and Google were a single company," journalist Steven Levy wrote in his 2011 book In the Plex: How Google Thinks, Works, and Shapes Our Lives. Things didn't stay as cozy, however. 

In January 2007 Apple unveiled the first iPhone, and in November 2007 Google showed off two prototypes. One, a Blackberry-esque phone that made use of hardware buttons and scroll wheels, had been in the prototype phase for some time. The more recent prototype was dominated by a large touchscreen and appeared to be much more like the iPhone.

That didn't go down well with Jobs, who threatened the destruction of Android using "every penny of Apple's $40 billion in the bank." The first Android phone, the T-Mobile G1, combined elements of both those prototypes, with a touchscreen that slid out to reveal a physical keyboard. Schmidt left Apple's board of directors in 2009 due to potential conflicts of interest, and so began a series of lawsuits involving Apple and various Google partners over alleged infringement of phone-related patents. 

The most notable of the Google partners was Samsung, which Apple accused of infringing a number of patents, including patents related to basic functions like tap to zoom and slide to unlock. These legal battles raged for years, with Apple claiming that "it is a fact that Samsung blatantly copied our design" and Samsung pushing back. The long dispute finally came to an end in 2018, when both sides agreed to settle out of court.

Despite the competing claims made during those long courtroom struggles, if we look at the development not just of the software but of the phones that run it, it seems clear both sides continued to liberally borrow ideas from each other. 

Features like picture-in-picture, live voicemail, lock screen customization and live translation were all found on the Android operating system before eventually making their way to iOS. And though the use of widgets to customize your home screen was long held as a differentiator for Android, that feature too eventually found its way to iOS. 

On the other hand, Android's Nearby Share feature is remarkably similar to Apple's AirDrop, and Android phones didn't get features like "do not disturb" or the ability to take screenshots until some time after the iPhone had them. 

Apple removed the 3.5mm headphone jack from the iPhone in September 2016, and I distinctly remember that at Google's launch event for the Pixel the following month, chuckles went round the room when the exec on stage proclaimed, "Yes, it has a headphone jack." Still, Google itself went on to ditch the headphone jack, with the Pixel 2. 

Sometimes it's difficult, if not impossible, to say whether these companies are copying each other's ideas or simply coming up with the same conclusions after paying attention to consumer trends, rumors in the press and the general evolution of supporting technologies. 

Rumors that Apple would remove the physical home button on the iPhone X were circling long before the phone was officially unveiled in September 2017. Are they the same rumors Samsung responded to when it "beat Apple to the punch" and removed the home button from its Galaxy S8 earlier that same year? Or did both sides simply arrive at such a big design decision independently? 

It's impossible to pick a side in this argument -- and somewhat reductive to even try. And regardless, you wind up with the same thing: Phones and software from different manufacturers that seem to evolve in unison. 

Today

In 2023, Android is by far the dominant smartphone platform, with 70.8% market share globally against Apple's 28.4% (according to information from Statista). But Google's focus has always been on getting the Android operating system onto as many devices as possible, from phones costing less than $50 to those costing over $1,500. Apple, meanwhile, offers iOS only on its own devices, and those devices come at a hefty premium, so it's fair to expect that iOS won't be as widespread. 

Google's business model is primarily one of a service provider, though, and not a hardware manufacturer. It makes its money chiefly from selling advertisements across all its platforms, and so it typically benefits from a mass market approach. Android itself is free for companies to use -- hence the large number of installs. But to use Google services (Gmail, YouTube, Chrome and so on, along with access to the Google Play Store) companies must pay license fees to Google. Still, the free use of Android is why you'll find the operating system on phones from Samsung, Motorola, OnePlus, Oppo, Nothing and a huge variety of other brands -- and yes, on Google's own Pixel phones. 

Apple, however, is a closed shop. Only iPhones can run iOS, and Apple has every intention of keeping it that way. It has full control over how that software works on its phones (and charges developers accordingly for apps sold in its own App Store) and how it can be best optimized for the hardware. That's why Apple phones typically perform better than many high-end Android phones, despite the hardware often being less high-spec on paper. Android by its nature has to take more of a "one size fits all" approach, where each new version has to run well on a huge variety of devices, with different screen sizes and under-the-hood components. 

Android struggled with the arrival of tablets, as software designed for 4-inch phones suddenly had to stretch to fit screens much larger in size. Android 3.0 Honeycomb was primarily designed for tablets, but various issues meant it didn't hang around for long, and some of its features were simply absorbed into future versions. Apple takes a different approach: Though at first it used iOS for both devices, now it keeps iOS solely for its phones, optimizing for the smaller screen sizes, with the newer iPadOS as the software for its tablets. 

Yet it's still clear to see the ways the two operating systems have converged over the years. Though Android was always the more customizable of the two, Apple eventually introduced home-screen widgets, customizable lock screens and even the ability to create icon themes to transform the look of your device. 

Meanwhile, Google worked hard to limit the problems caused by fragmentation and has arguably taken more of an "Apple" approach in its own line of devices. Like Apple's iPhones, the phones in the more recent Pixel range -- including the excellent Pixel 7 Pro -- were designed to show off "the best of Google," with processors produced in house (as Apple does with the chips for its iPhones) and software optimized for the Pixel phone it'll run on. 

Though Android may be ahead in terms of numbers of users, Google has clearly seen that Apple is leading the way in terms of a more premium, refined hardware experience, and the Pixel series is Google's answer. Having reviewed both the Pixel 6 Pro and Pixel 7 Pro myself, I can say with certainty that they're the most Apple-like experience you can get from an Android phone. 

The future 

"We are at an interesting crossroads for Android," says Ben Woods, industry analyst at CCS Insight. "Although its success in volume terms is undisputed, it is increasingly losing share to Apple in the premium smartphone space." Google's Pixel phones are some of the best Android phones around, but sales of the devices are a fraction of what Apple sees with the iPhone. 

It's a different story when you look at Android partners, chiefly Samsung, which is jostling with Apple for the position of No. 1 phone manufacturer in the world -- a title that seems to frequently slip from one of the companies to the other. But Samsung has a much wider catalog of products, with unit sales being bolstered by a larger number of phones at lower price points. In the premium segment, Apple still rules, and that's showing no sign of slowing down. 

But Android is increasingly betting on longer-term success from its innovation with foldable phones. Samsung is now multiple generations into its Galaxy Z Flip and Z Fold devices, with Google's own Pixel Fold joining the party earlier this year, along with foldables from the likes of Oppo, Motorola and soon OnePlus. Apple has yet to launch a foldable device, and it remains to be seen whether that's simply because its take on the genre isn't ready, or because it believes foldables are a fad that'll pass (like 3D displays or curving designs). 

Rather than looking toward more-experimental innovations like foldable displays, Apple has instead continued to refine its existing hardware, equipping its latest iPhone 15 Pro series with titanium designs and improved cameras. And Apple's approach also includes pulling people into the wider Apple ecosystem, with iPhones syncing seamlessly with other Apple products, including Apple Watches, iPads, Macs, HomePods and Apple TV. 

With each new iPhone customer comes an opportunity for Apple to sell additional products from its own catalog, along with services like iCloud storage, Apple Music, Apple Fitness or subscriptions to its Apple TV streaming service. Though Google offers products like this to some extent, it has yet to offer the sort of cohesive package Apple does, which could make Google's offerings less enticing for new customers and tempt Android users to jump ship to Apple. 

Still, Android's proliferation across devices at lower price points will continue to make it a popular choice for people on tighter budgets. And its presence on a huge number of devices from third-party manufacturers means it's where we'll see more innovation that seeks to answer the question of what role the smartphone plays in our lives. 

With smartphone shipments expected to hit their lowest point in a decade, more companies will be looking for ways to use new, exciting technologies to capture an audience's attention and present a product that serves up new ways of doing things. We'll see this from Android and its partners and from Apple with the iPhone, its software and its peripheral devices, including new tech like Apple's Vision Pro headset. 

We'll also see a bigger focus from all sides on sustainability: Apple, for instance, went to great lengths during its iPhone 15 launch event in September to flex its green credentials. While Samsung is making larger efforts in sustainability and smaller companies like Fairphone are using planet-friendly features as primary selling points, other manufacturers have yet to make sustainability a key part of their business model. It's likely, then, that as consumers increasingly look toward sustainable options, the next major competition in the smartphone industry could be who can make the greenest product.

There's no question that the development of both the software and hardware side of iOS and Android smartphones has at times happened almost in tandem, with one side launching a feature and the other responding in "me too!" fashion. And like the Williams sisters using their sporting rivalry to reach stratospheric new heights in tennis, Apple and Android will need to continue to embrace that spirit of competition to find new ways to succeed in an increasingly difficult market.

https://www.cnet.com/

Monday, 31 July 2023

Cryptography may offer a solution to the massive AI-labeling problem

The White House wants big AI companies to disclose when content has been created using artificial intelligence, and very soon the EU will require some tech platforms to label their AI-generated images, audio, and video with “prominent markings” disclosing their synthetic origins. 

There’s a big problem, though: identifying material that was created by artificial intelligence is a massive technical challenge. The best options currently available—detection tools powered by AI, and watermarking—are inconsistent, impermanent, and sometimes inaccurate. (In fact, just this week OpenAI shuttered its own AI-detecting tool because of high error rates.)

But another approach has been attracting attention lately: C2PA. Launched two years ago, it’s an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as “provenance” information. 

The developers of C2PA often compare the protocol to a nutrition label, but one that says where the content came from and who—or what—created it. 

The project, part of the nonprofit Joint Development Foundation, was started by Adobe, Arm, Intel, Microsoft, and Truepic, which formed the Coalition for Content Provenance and Authenticity (from which C2PA gets its name). Over 1,500 companies are now involved in the project through the closely affiliated open-source community, Content Authenticity Initiative (CAI), including ones as varied and prominent as Nikon, the BBC, and Sony.

Recently, as interest in AI detection and regulation has intensified, the project has been gaining steam; Andrew Jenks, the chair of C2PA, says that membership has increased 56% in the past six months. The major media platform Shutterstock has joined as a member and announced its intention to use the protocol to label all its AI-generated content, including content created with its DALL-E-powered AI image generator. 

Sejal Amin, chief technology officer at Shutterstock, told MIT Technology Review in an email that the company is protecting artists and users by “supporting the development of systems and infrastructure that create greater transparency to easily identify what is an artist’s creation versus AI-generated or modified art.”

What is C2PA and how is it being used?

Microsoft, Intel, Adobe, and other major tech companies started working on C2PA in February 2021, hoping to create a universal internet protocol that would allow content creators to opt into labeling their visual and audio content with information about where it came from. (At least for the moment, this does not apply to text-based posts.) 

Crucially, the project is designed to be adaptable and functional across the internet, and the base computer code is accessible and free to anyone. 

Truepic, which sells content verification products, has demonstrated how the protocol works on a deepfake video created with Revel.ai. When a viewer hovers over a little icon at the top right corner of the screen, a box of information about the video appears that includes the disclosure that it “contains AI-generated content.” 

Adobe has also already integrated C2PA, which it calls content credentials, into several of its products, including Photoshop and Adobe Firefly. “We think it’s a value-add that may attract more customers to Adobe tools,” Andy Parsons, senior director of the Content Authenticity Initiative at Adobe and a leader of the C2PA project, says. 

C2PA is secured through cryptography, which relies on a series of codes and keys to protect information from being tampered with and to record where the information came from. More specifically, it works by encoding provenance information through a set of hashes that cryptographically bind to each pixel, says Jenks, who also leads Microsoft’s work on C2PA. 
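
The C2PA specification defines its own manifest and signing formats, but the underlying idea can be sketched in a few lines of Python: hash the content, wrap the hash in a claim about its origin, and sign the claim so that later tampering with either becomes detectable. The sketch below is a conceptual illustration only, using the third-party cryptography package rather than anything from C2PA itself:

# Conceptual sketch of cryptographic provenance binding -- not the C2PA spec.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_provenance_claim(content: bytes, creator: str, tool: str, signing_key):
    """Hash the content, wrap the hash in a provenance claim, and sign the claim."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,  # who -- or what -- produced the content
        "tool": tool,        # e.g. an AI image generator
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return claim, signing_key.sign(payload)

key = Ed25519PrivateKey.generate()  # in practice, a key tied to a vendor's certificate
claim, signature = make_provenance_claim(b"<image bytes>", "example.org", "hypothetical-generator", key)

# Anyone holding the public key can confirm that neither the claim nor the
# content it describes has changed since signing; verify() raises an error otherwise.
key.public_key().verify(signature, json.dumps(claim, sort_keys=True).encode())

If even one pixel of the underlying image changes, the recomputed hash no longer matches the signed claim, and that is the property the protocol builds on.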

C2PA offers some critical benefits over AI detection systems, which use AI to spot AI-generated content but which generators can, in turn, learn to evade. It’s also a more standardized and, in some instances, more easily viewable system than watermarking, the other prominent technique used to identify AI-generated content. The protocol can work alongside watermarking and AI detection tools as well, says Jenks. 

The value of provenance information 

Adding provenance information to media to combat misinformation is not a new idea, and early research seems to show that it could be promising: one project from a master’s student at the University of Oxford, for example, found evidence that users were less susceptible to misinformation when they had access to provenance information about the content. Indeed, in OpenAI’s update about its AI detection tool, the company said it was focusing on other “provenance techniques” to meet disclosure requirements.

That said, provenance information is far from a fix-all solution. C2PA is not legally binding, and without required internet-wide adoption of the standard, unlabeled AI-generated content will exist, says Siwei Lyu, a director of the Center for Information Integrity and professor at the University at Buffalo in New York. “The lack of over-board binding power makes intrinsic loopholes in this effort,” he says, though he emphasizes that the project is nevertheless important.

What’s more, since C2PA relies on creators to opt in, the protocol doesn’t really address the problem of bad actors using AI-generated content. And it’s not yet clear just how helpful the provision of metadata will be when it comes to the media fluency of the public. Provenance labels do not necessarily mention whether the content is true or accurate. 

Ultimately, the coalition’s most significant challenge may be encouraging widespread adoption across the internet ecosystem, especially by social media platforms. The protocol is designed so that a photo, for example, would have provenance information encoded from the time a camera captured it to when it found its way onto social media. But if the social media platform doesn’t use the protocol, it won’t display the photo’s provenance data.

The major social media platforms have not yet adopted C2PA. Twitter had signed on to the project but dropped out after Elon Musk took over. (Twitter also stopped participating in other volunteer-based projects focused on curbing misinformation.)  

C2PA “[is] not a panacea, it doesn’t solve all of our misinformation problems, but it does put a foundation in place for a shared objective reality,” says Parsons. “Just like the nutrition label metaphor, you don’t have to look at the nutrition label before you buy the sugary cereal.

“And you don’t have to know where something came from before you share it on Meta, but you can. We think the ability to do that is critical given the astonishing abilities of generative media.”

https://www.technologyreview.com/

Wednesday, 28 June 2023

Dust uses large language models on internal data to improve team productivity

Dust is a new AI startup based in France that is working on improving team productivity by breaking down internal silos, surfacing important knowledge and providing tools to build custom internal apps. At its core, Dust is using large language models (LLMs) on internal company data to give new superpowers to team members.

The company was co-founded by Gabriel Hubert and Stanislas Polu, who have known each other for more than a decade. Their first startup was called Totems and was acquired by Stripe in 2015. After that, they both spent a few years working for Stripe before parting ways.

Stanislas Polu joined OpenAI, where he spent three years working on LLMs’ reasoning capabilities while Gabriel Hubert became the head of product at Alan.

They teamed up once again to create Dust. Unlike many AI startups, Dust isn’t focused on creating new large language models. Instead, the company wants to build applications on top of LLMs developed by OpenAI, Cohere, AI21, etc.

The team first worked on a platform that can be used to design and deploy large language model apps. It has since focused its efforts on one use case in particular — centralizing and indexing internal data so that it can be used by LLMs.

From an internal ChatGPT to next-gen software

There are a handful of connectors that constantly fetch internal data from Notion, Slack, GitHub and Google Drive. This data is then indexed and can be used for semantic search queries. When a user wants to do something with a Dust-powered app, Dust finds the relevant internal data, feeds it to an LLM as context and returns an answer.
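
That flow is what is often called retrieval-augmented generation. A minimal sketch of the pattern, assuming hypothetical embed() and complete() helpers that stand in for whichever embedding and LLM APIs a product like Dust actually calls (these are not Dust's real interfaces):

# Minimal retrieval-augmented generation sketch; embed() and complete() are
# hypothetical stand-ins for real embedding and LLM APIs.
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def answer(question, documents, embed, complete, top_k=3):
    """Rank internal documents against the question, then ask the LLM with that context."""
    # A real system would pre-compute and index document embeddings rather than
    # embedding every document on every query.
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda doc: cosine(embed(doc), q_vec), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    prompt = (
        "Answer the question using only the internal context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return complete(prompt)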

For example, let’s say you just joined a company and you’re working on a project that was started a while back. If your company fosters communication transparency, you will want to find information in existing internal data. But the internal knowledge base might not be up to date. Or it might be hard to find the reason why something is done this way, as it’s been discussed in an archived Slack channel.

Dust isn’t just a better internal search tool, as it doesn’t just return search results. It can find information across multiple data sources and format answers in a way that is much more useful to you. It can be used as a sort of internal ChatGPT, but it could also be used as the basis of new internal tools.

“We’re convinced that natural language interface is going to disrupt software,” Gabriel Hubert told me. “In five years’ time, it would be disappointing if you still have to go and click on edit, settings, preferences, to decide that your software should behave differently. We see a lot more of our software adapting to your individual needs, because that’s the way you are, but also because that’s the way your team is — because that’s the way your company is.”

The company is working with design partners on several ways to implement and package the Dust platform. “We think there are a lot of different products that can be created in this area of enterprise data, knowledge workers and models that could be used to support them,” Polu told me.

It’s still early days for Dust, but the startup is exploring an interesting problem. There are many challenges ahead when it comes to data retention, hallucination and all of the issues that come with LLMs. Maybe hallucination will become less of an issue as LLMs evolve. Maybe Dust will end up creating its own LLM for data privacy reasons.

Dust has raised $5.5 million (€5 million) in a seed round led by Sequoia with XYZ, GG1, Seedcamp, Connect, Motier Ventures, Tiny Supercomputer, and AI Grant. Several business angels also participated, such as Olivier Pomel from Datadog, Julien Codorniou, Julien Chaumond from Hugging Face, Mathilde Collin from Front, Charles Gorintin and Jean-Charles Samuelian-Werve from Alan, Eléonore Crespo and Romain Niccoli from Pigment, Nicolas Brusson from BlaBlaCar, Howie Liu from Airtable, Matthieu Rouif from PhotoRoom, Igor Babuschkin and Irwan Bello.

If you take a step back, Dust is betting that LLMs will greatly change how companies work. A product like Dust works even better in a company that fosters radical transparency instead of information retention, written communication instead of endless meetings, and autonomy instead of top-down management.

If LLMs deliver on their promise and greatly improve productivity, companies that adopt these values will gain an unfair advantage, as Dust will unlock a lot of untapped potential for their knowledge workers.

https://techcrunch.com/

Tuesday, 6 June 2023

Governments worldwide grapple with regulation to rein in AI dangers

Ever since generative AI exploded into public consciousness with the launch of ChatGPT at the end of last year, calls to regulate the technology to stop it from causing undue harm have risen to fever pitch around the world. The stakes are high — just last week, technology leaders signed an open public letter saying that if government officials get it wrong, the consequence could be the extinction of the human race.

While most consumers are just having fun testing the limits of large language models such as ChatGPT, a number of worrying stories have circulated about the technology making up supposed facts (also known as "hallucinating") and making inappropriate suggestions to users, as when an AI-powered version of Bing told a New York Times reporter to divorce his spouse.

Tech industry insiders and legal experts also note a raft of other concerns, including the ability of generative AI to enhance the attacks of threat actors on cybersecurity defenses, the possibility of copyright and data-privacy violations — since large language models are trained on all sorts of information — and the potential for discrimination as humans encode their own biases into algorithms. 

Possibly the biggest area of concern is that generative AI programs are essentially self-learning, demonstrating increasing capability as they ingest data, and that their creators don't know exactly what is happening within them. This may mean, as ex-Google AI leader Geoffrey Hinton has said, that humanity may just be a passing phase in the evolution of intelligence and that AI systems could develop their own goals that humans know nothing about.

All this has prompted governments around the world to call for protective regulations. But, as with most technology regulation, there is rarely a one-size-fits-all approach, with different governments looking to regulate generative AI in a way that best suits their own political landscape.

Countries make their own regulations

“[When it comes to] tech issues, even though every country is free to make its own rules, in the past what we have seen is there’s been some form of harmonization between the US, EU, and most Western countries,” said Sophie Goossens, a partner at law firm Reed Smith who specializes in AI, copyright, and IP issues. “It's rare to see legislation that completely contradicts the legislation of someone else.”

While the details of the legislation put forward by each jurisdiction might differ, there is one overarching theme that unites all governments that have so far outlined proposals: how the benefits of AI can be realized while minimizing the risks it presents to society. Indeed, EU and US lawmakers are drawing up an AI code of conduct to bridge the gap until legislation is formally passed.

Generative AI is an umbrella term for any kind of automated process that uses algorithms to produce, manipulate, or synthesize data, often in the form of images or human-readable text. It’s called generative because it creates something that didn’t previously exist. It's not a new technology, and conversations around regulation are not new either.

Generative AI has arguably been around (in a very basic chatbot form, at least) since the mid-1960s, when an MIT professor created ELIZA, an application programmed to use pattern matching and language substitution methodology to issue responses fashioned to make users feel like they were talking to a therapist. But generative AI's recent advent into the public domain has allowed people who might not have had access to the technology before to create sophisticated content on just about any topic, based off a few basic prompts.

As generative AI applications become more powerful and prevalent, there is growing pressure for regulation.

“The risk is definitely higher because now these companies have decided to release extremely powerful tools on the open internet for everyone to use, and I think there is definitely a risk that technology could be used with bad intentions,” Goossens said.

First steps toward AI legislation

Although discussions by the European Commission around an AI regulatory act began in 2019, the UK government was one of the first to announce its intentions, publishing a white paper in March this year that outlined five principles it wants companies to follow: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

In an effort to avoid what it called “heavy-handed legislation,” however, the UK government has called on existing regulatory bodies to use current regulations to ensure that AI applications adhere to guidelines, rather than draft new laws.

Since then, the European Commission has published the first draft of its AI Act, which was delayed due to the need to include provisions for regulating the more recent generative AI applications. The draft legislation includes requirements for generative AI models to reasonably mitigate against foreseeable risks to health, safety, fundamental rights, the environment, democracy, and the rule of law, with the involvement of independent experts.

The legislation proposed by the EU would forbid the use of AI when it could become a threat to safety, livelihoods, or people’s rights, with stipulations around the use of artificial intelligence becoming less restrictive based on the perceived risk it might pose to someone coming into contact with it — for example, interacting with a chatbot in a customer service setting would be considered low risk. AI systems that present such limited and minimal risks may be used with few requirements. AI systems posing higher levels of bias or risk, such as those used for government social-scoring systems and biometric identification systems, will generally not be allowed, with few exceptions.

However, even before the legislation had been finalized, ChatGPT in particular had already come under scrutiny from a number of individual European countries for possible GDPR data protection violations. The Italian data regulator initially banned ChatGPT over alleged privacy violations relating to the chatbot’s collection and storage of personal data, but reinstated use of the technology after Microsoft-backed OpenAI, the creator of ChatGPT, clarified its privacy policy and made it more accessible, and offered a new tool to verify the age of users.

Other European countries, including France and Spain, have filed complaints about ChatGPT similar to those issued by Italy, although no decisions relating to those grievances have been made.

Differing approaches to regulation

All regulation reflects the politics, ethics, and culture of the society you’re in, said Martha Bennett, vice president and principal analyst at Forrester, noting that in the US, for instance, there’s an instinctive reluctance to regulate unless there is tremendous pressure to do so, whereas in Europe there is a much stronger culture of regulation for the common good.

“There is nothing wrong with having a different approach, because yes, you do not want to stifle innovation,” Bennett said. Alluding to the comments made by the UK government, Bennett said it is understandable to not want to stifle innovation, but she doesn’t agree with the idea that by relying largely on current laws and being less stringent than the EU AI Act, the UK government can provide the country with a competitive advantage — particularly if this comes at the expense of data protection laws.

“If the UK gets a reputation of playing fast and loose with personal data, that’s also not appropriate,” she said.

While Bennett believes that differing legislative approaches can have their benefits, she notes that AI regulations implemented by the Chinese government would be completely unacceptable in North America or Western Europe.

Under Chinese law, AI firms will be required to submit security assessments to the government before launching their AI tools to the public, and any content generated by generative AI must be in line with the country’s core socialist values. Failure to comply with the rules will result in providers being fined, having their services suspended, or facing criminal investigations.

The challenges to AI legislation

Although a number of countries have begun to draft AI regulations, such efforts are hampered by the reality that lawmakers constantly have to play catchup to new technologies, trying to understand their risks and rewards.

“If we refer back to most technological advancements, such as the internet or artificial intelligence, it’s like a double-edged sword, as you can use it for both lawful and unlawful purposes,” said Felipe Romero Moreno, a principal lecturer at the University of Hertfordshire’s Law School whose work focuses on legal issues and regulation of emerging technologies, including AI.

AI systems may also do harm inadvertently, since humans who program them can be biased, and the data the programs are trained with may contain bias or inaccurate information. “We need artificial intelligence that has been trained with unbiased data,” Romero Moreno said. “Otherwise, decisions made by AI will be inaccurate as well as discriminatory.”

Accountability on the part of vendors is essential, he said, stating that users should be able to challenge the outcome of any artificial intelligence decision and compel AI developers to explain the logic or the rationale behind the technology’s reasoning. (A recent example of a related case is a class-action lawsuit filed by a US man who was rejected for a job because AI video software judged him to be untrustworthy.)

Tech companies need to make artificial intelligence systems auditable so that they can be subject to independent and external checks from regulatory bodies — and users should have access to legal recourse to challenge the impact of a decision made by artificial intelligence, with final oversight always being given to a human, not a machine, Romero Moreno said.

Copyright a major issue for AI apps

Another major regulatory issue that needs to be navigated is copyright. The EU’s AI Act includes a provision that would make creators of generative AI tools disclose any copyrighted material used to develop their systems.

“Copyright is everywhere, so when you have a gigantic amount of data somewhere on a server, and you’re going to use that data in order to train a model, chances are that at least some of that data will be protected by copyright,” Goossens said, adding that the most difficult issues to resolve will be around the training sets on which AI tools are developed.

When this problem first arose, lawmakers in countries including Japan, Taiwan, and Singapore made an exception for copyrighted material that found its way into training sets, stating that copyright should not stand in the way of technological advancements.

However, Goossens said, a lot of these copyright exceptions are now almost seven years old. The issue is further complicated by the fact that in the EU, while these same exceptions exist, anyone who is a rights holder can opt out of having their data used in training sets.

Because there is currently no incentive to have your data included, huge swathes of rights holders are opting out, meaning the EU is a less desirable jurisdiction for AI vendors to operate from.

In the UK, an exception currently exists for research purposes, but the plan to introduce an exception that includes commercial AI technologies was scrapped, with the government yet to announce an alternative plan.

What’s next for AI regulation?

So far, China is the only country that has passed laws and launched prosecutions relating to generative AI — in May, Chinese authorities detained a man in Northern China for allegedly using ChatGPT to write fake news articles.

Elsewhere, the UK government has said that regulators will issue practical guidance to organizations, setting out how to implement the principles outlined in its white paper over the next 12 months, while the EU Commission is expected to vote imminently to finalize the text of its AI Act.

By comparison, the US still appears to be in the fact-finding stages, although President Joe Biden and Vice President Kamala Harris recently met with executives from leading AI companies to discuss the potential dangers of AI.

Last month, two Senate committees also met with industry experts, including OpenAI CEO Sam Altman. Speaking to lawmakers, Altman said regulation would be “wise” because people need to know if they’re talking to an AI system or looking at content — images, videos, or documents — generated by a chatbot.

“I think we’ll also need rules and guidelines about what is expected in terms of disclosure from a company providing a model that could have these sorts of abilities we’re talking about,” Altman said.

This is a sentiment Forrester’s Bennett agrees with, arguing that the biggest danger generative AI presents to society is the ease with which misinformation and disinformation can be created.

“[This issue] goes hand in hand with ensuring that providers of these large language models and generative AI tools are abiding by existing rules around copyright, intellectual property, personal data, etc. and looking at how we make sure those rules are really enforced,” she said.

Romero Moreno argues that education holds the key to tackling the technology’s ability to create and spread disinformation, particularly among young people or those who are less technologically savvy. Pop-up notifications that remind users that content might not be accurate would encourage people to think more critically about how they engage with online content, he said, adding that something like the current cookie disclaimer messages that show up on web pages would not be suitable, as they are often long and convoluted and therefore rarely read.

Ultimately, Bennett said, irrespective of what final legislation looks like, regulators and governments across the world need to act now. Otherwise we’ll end up in a situation where the technology has been exploited to such an extreme that we’re fighting a battle we can never win.

https://www.computerworld.com/

Saturday, 1 April 2023

Italy orders ChatGPT blocked citing data protection concerns

Two days after an open letter called for a moratorium on the development of more powerful generative AI models so regulators can catch up with the likes of ChatGPT, Italy’s data protection authority has just put out a timely reminder that some countries do have laws that already apply to cutting edge AI: it has ordered OpenAI to stop processing people’s data locally with immediate effect.

The Italian DPA said it’s concerned that the ChatGPT maker is breaching the European Union’s General Data Protection Regulation (GDPR), and is opening an investigation.

Specifically, the Garante said it has issued the order to block ChatGPT over concerns OpenAI has unlawfully processed people’s data as well as over the lack of any system to prevent minors from accessing the tech.

The San Francisco-based company has 20 days to respond to the order, backed up by the threat of some meaty penalties if it fails to comply. (Reminder: Fines for breaches of the EU’s data protection regime can scale up to 4% of annual turnover, or €20 million, whichever is greater.)

It’s worth noting that since OpenAI does not have a legal entity established in the EU, any data protection authority is empowered to intervene, under the GDPR, if it sees risks to local users. (So where Italy steps in, others may follow.)

Suite of GDPR issues

The GDPR applies whenever EU users’ personal data is processed. And it’s clear OpenAI’s large language model has been crunching this kind of information, since it can, for example, produce biographies of named individuals in the region on-demand (we know; we’ve tried it). Although OpenAI declined to provide details of the training data used for the latest iteration of the technology, GPT-4, it has disclosed that earlier models were trained on data scraped from the Internet, including forums such as Reddit. So if you’ve been reasonably online, chances are the bot knows your name.

Moreover, ChatGPT has been shown producing completely false information about named individuals, apparently making up details its training data lacks. That potentially raises further GDPR concerns, since the regulation provides Europeans with a suite of rights over their data, including the right to rectification of errors. It’s not clear how/whether people can ask OpenAI to correct erroneous pronouncements about them generated by the bot, for example.

The Garante‘s statement also highlights a data breach the service suffered earlier this month, when OpenAI admitted a conversation history feature had been leaking users’ chats, and said it may have exposed some users’ payment information.

Data breaches are another area the GDPR regulates with a focus on ensuring entities that process personal data are adequately protecting the information. The pan-EU law also requires companies to notify relevant supervisory authorities of significant breaches within tight time-periods.

Overarching all this is the big(ger) question of what legal basis OpenAI has relied upon for processing Europeans’ data in the first place. In other words, the lawfulness of this processing.

The GDPR allows for a number of possibilities — from consent to public interest — but the scale of processing to train these large language models complicates the question of legality. As the Garante notes (pointing to the “mass collection and storage of personal data”), data minimization is another big focus of the regulation, which also contains principles requiring transparency and fairness. Yet, at the least, the (now) for-profit company behind ChatGPT does not appear to have informed people whose data it has repurposed to train its commercial AIs. That could be a pretty sticky problem for it.

If OpenAI has processed Europeans’ data unlawfully, DPAs across the bloc could order the data to be deleted, although whether that would force the company to retrain models trained on data unlawfully obtained is one open question as an existing law grapples with cutting edge tech.

On the flip side, Italy may have just banned all machine learning by, er, accident… 

“[T]he Privacy Guarantor notes the lack of information to users and all interested parties whose data is collected by OpenAI but above all the absence of a legal basis that justifies the mass collection and storage of personal data, for the purpose of ‘training’ the algorithms underlying the operation of the platform,” the DPA wrote in its statement today [which we’ve translated from Italian using AI].

“As evidenced by the checks carried out, the information provided by ChatGPT does not always correspond to the real data, thus determining an inaccurate processing of personal data,” it added.

The authority added that it is concerned about the risk of minors’ data being processed by OpenAI since the company is not actively preventing people under the age of 13 from signing up to use the chatbot, such as by applying age verification technology.

Risks to children’s data is an area where the regulator has been very active, recently ordering a similar ban on the virtual friendship AI chatbot, Replika, over child safety concerns. In recent years, it has also pursued TikTok over underage usage, forcing the company to purge over half-a-million accounts it could not confirm did not belong to kids.

So if OpenAI can’t definitively confirm the age of any users it’s signed up in Italy, it could, at the very least, be forced to delete their accounts and start again with a more robust sign-up process.

OpenAI was contacted for a response to the Garante‘s order.

Lilian Edwards, an expert in data protection and Internet law at Newcastle University who has been ahead of the curve in conducting research on the implications of “algorithms that remember,” told TechCrunch: “What’s fascinating is that it more or less copy-pasted Replika in the emphasis on access by children to inappropriate content. But the real time-bomb is denial of lawful basis, which should apply to ALL or at least many machine learning systems, not just generative AI.”

She pointed to the pivotal ‘right to be forgotten’ case involving Google search, where a challenge was brought to its consentless processing of personal data by an individual in Spain. But while European courts established a right for individuals to ask search engines to remove inaccurate or outdated information about them (balanced against a public interest test), Google’s processing of personal data in that context (internet search) did not get struck down by EU regulators over the lawfulness of processing point, seemingly because it was providing a public utility. But also, ultimately, because Google ended up providing rights of erasure and rectification to EU data subjects.

“Large language models don’t offer those remedies and it’s not entirely clear they would, could or what the consequences would be,” Edwards added, suggesting that enforced retraining of models may be one potential fix.

Or, well, that technologies like ChatGPT may simply have broken data protection law…

https://techcrunch.com/

Saturday, 11 March 2023

Skills-based hiring continues to rise as degree requirements fade

More employers are leaving behind college degree requirements and embracing a skills-based hiring approach that emphasizes strong work backgrounds, certifications, assessments, and endorsements. And soft skills are becoming a key focus of hiring managers, even over hard skills.

Large companies, including Boeing, Walmart, and IBM, have signed on to varying skills-based employment projects, such as Rework America Alliance, the Business Roundtable’s Multiple Pathways programs, and the campaign to Tear the Paper Ceiling, pledging to implement skills-based practices, according to McKinsey & Co.

“So far, they’ve removed degree requirements from certain job postings and have worked with other organizations to help workers progress from lower- to higher-wage jobs,” McKinsey said in a November report.

Skills-based hiring helps companies find and attract a broader pool of candidates who are better suited to fill positions for the long term, and it opens up opportunities to non-traditional candidates, including women and minorities, according to McKinsey.

At Google, a four-year degree is not required for almost any role at the company — and a computer science degree isn't required for most software engineering or product manager positions. “Our focus is on demonstrated skills and experience, and this can come through degrees or it can come through relevant experience,” said Tom Dewaele, Google’s vice president of people experience.

Similarly, Bank of America has refocused its hiring to use a skills-based approach. “We recognize that prospective talent think they need a degree to work for us, but that is not the case,” said Christie Gragnani-Woods, a Bank of America global talent acquisition executive. “We are dedicated to recruiting from a diverse talent pool to provide an equal opportunity for all to find careers in financial services, including those that don’t require a degree.”

Hard skills, such as cybersecurity and software development, are still in peak demand, but organizations are finding soft skills can be just as important, according to Jamie Kohn, a research director in Gartner’s human resources practice.

Soft skills, which are often innate, include adaptability, leadership, communications, creativity, problem-solving or critical thinking, good interpersonal skills, and the ability to collaborate with others.

“Also, people don’t learn all their [hard] skills at college,” Kohn said. “They haven’t for some time, but there’s definitely a surge in self-taught skills or taking online courses. You may have a history major who’s a great programmer. That’s not at all unusual anymore. Companies that don’t consider that are missing out by requiring specific degrees.”

A lessening of 'degree discrimination'

From 2000 through 2020, “degree discrimination” cost employees who were skilled through alternative routes 7.4 million jobs, according to Opportunity@Work, a Washington-based nonprofit promoting workers who are skilled through alternative routes. Alternative routes include skills learned on the job, in the military, through training programs, or at community colleges, for example.

“They are among our country’s greatest under-valued resources — the invisible casualties of America’s broken labor market — where low-wage work is often equated with low-skill work and the lack of a degree is presumed to be synonymous with a lack of skills,” Opportunity@Work explains on its site.

Over the past few years, however, job postings with a degree requirement have dropped from 51% of jobs in 2017 to 44% in 2021, according to the Burning Glass Institute.

Much of the recent shift to skills-based hiring is due to the dearth of tech talent created by the Great Resignation and a growing number of digital transformation projects. While the US unemployment rate hovers around 3.5%, in technology fields, it’s less than half that (1.5%).

While many IT occupations have also seen degree requirements vanish, there remain three where bachelor's degrees are still blocking the more than 70 million workers who have skills gained through alternatives to college, according to Opportunity@Work:

  • Computer & Information Systems Managers: 698,000 workers hold such jobs today — and 19% of them are alternatively trained. Yet, 94% of those jobs require a bachelor's degree.
  • Computer Programmers: 481,000 workers fill these jobs today, 21% of whom are alternatively trained. But 76% of those jobs require a bachelor's degree.
  • Computer Support Specialists: 539,000 workers now have these jobs, with 45% of them alternatively trained. And still, 45% of those jobs require a bachelor's degree.

As many as 70% of organizations have rolled out some kind of workplace technology education in the past year, according to a survey of HR professionals and workers by digital consulting agency West Monroe.

“With this figure in mind, it will be imperative for these organizations to assess their workforce and invest in teaching their workers new skills instead of taking the time, effort and cost to fill a new position,” West Monroe said.

While the cost and time it takes to acquire skills in software development, Java, Python, big data, risk management, and algorithms are high, so is their longevity.

“The payoff for skills in this group is often as long as a person’s entire career,” the Burning Glass Institute stated in a report this month. “Historically, these are the skills that are ripe for reskilling and redeploying talent for the long term.”

Other skills such as risk management and project management also stand out as being particularly durable, yet costly to develop — but they’re not typically as expensive to hire for, according to Burning Glass Institute.

Skills that can be built on an as-needed basis — because the time to learn them is generally low but the return on investment is high — include Salesforce, data structures, data analysis, visual design, SAS (software), and cost estimation, the report said.

Many organizations are already implementing internal programs to upskill new and existing employees.

According to research firm IDC, 60% of Global 2000 corporations have or will have a citizen developer training ecosystem. A significant number of those developers will come not from IT, but from business units looking to digitize processes with low-code or no-code software tools.

While citizen developers may have little coding knowledge, they’re generally tech-savvy; they’ve worked with spreadsheets and databases, or they’re intimately familiar with corporate technology because they're customer service representatives or business analysts.

“We have seen a surge in demand for particularly digital and tech-related skills,” Kohn said. "A lot of companies have accelerated their digital transformation. So, there’s a huge demand and not enough talent going around."

The change isn't just in private industry

Skills-based hiring practices aren't limited to the private sector. Last year, the White House announced new limits on the use of educational requirements. Over the past year, five governors removed most college degree requirements for entry-level state jobs.

In January, Pennsylvania Gov. Josh Shapiro announced that his first executive order would ensure 92% of state government jobs no longer require a four-year college degree. The move opened up 65,000 state jobs that previously required a college degree and meant candidates were free to compete for those positions based on skills, relevant experience, and merit. Shapiro’s move followed similar actions in other states, such as Colorado, Utah and Maryland. In Utah’s case, 98% of its civil servant jobs will no longer require a college degree.

“Degrees have become a blanketed barrier-to-entry in too many jobs,” Utah Gov. Spencer Cox said in a statement. “Instead of focusing on demonstrated competence, the focus too often has been on a piece of paper. We are changing that.”

And just this week, Alaska Gov. Mike Dunleavy ordered a review of which state jobs could have four-year college degree requirements eliminated as a way to tackle the public sector’s recruitment and retention crisis.

Relying too much on academic degrees is a significant factor in the “over-speccing” of job requirements for tech positions, according to CompTIA, a nonprofit association for the IT industry and its workers. CompTIA's research has found that a notable segment of HR professionals is unaware of the concept of over-speccing when creating job postings.

In 2022, 61% of all employer job postings for tech positions nationally listed a four-year degree or higher as a requirement. In Pennsylvania, a degree was required in 62% of postings for tech jobs; in Utah, 59%; and in Maryland, 69%.

“That’s not to say a degree doesn’t play some role later in the process,” Kohn said. “Hiring managers are still skeptical of candidates who don’t have a traditional technology background. The difference is they’re allowing people with different backgrounds to get a foot in the door.”

For example, a marketing professional with data analytics skills might not be able to land an IT role. “They may be a great fit for it," Kohn said, "but they just don’t have the background companies traditionally look for."

https://www.computerworld.com/

Artificial intelligence helps solve networking problems

With the public release of ChatGPT and Microsoft’s $10-billion investment into OpenAI, artificial intelligence (AI) is quickly gaining mainstream acceptance. For enterprise networking professionals, this means there is a very real possibility that AI traffic will affect their networks in major ways, both positive and negative.

As AI becomes a core feature in mission-critical software, how should network teams and networking professionals adjust to stay ahead of the trend?

Andrew Coward, GM of Software Defined Networking at IBM, argues that the enterprise has already lost control of its networks. The shift to the cloud has left the traditional enterprise network stranded, and AI and automation are required if enterprises hope to regain control.

“The center of gravity has shifted from the corporate data center to a hybrid multicloud environment, but the network was designed for a world where all traffic still flows to the data center. This means that many of the network elements that dictate traffic flow and policy are now beyond the reach and control of the enterprise’s networking teams,” Coward said.

Recent research from Enterprise Management Associates (EMA) supports Coward’s observations. According to EMA’s 2022 Network Management Megatrends report, while 99% of enterprises have adopted at least one public-cloud service and 72% have a multi-cloud strategy, only 18% of the 400 IT organizations surveyed believed that their existing tools are effective at monitoring public clouds.   

AI can help monitor networks.

AI is stressing networks in both obvious and nonobvious ways. It’s no secret that organizations that use cloud-based AI tools, such as OpenAI, IBM Watson, or AWS DeepLens, must accommodate heavy traffic between cloud and enterprise data centers to train the tools. Training AI and keeping it current requires shuttling massive amounts of data back and forth.  

What’s less obvious is that AI enters the enterprise through side doors, sneaking in through capabilities built into other tools. AI adds intelligence to everything from content creation tools to anti-spam engines to video surveillance software to edge devices, and many of those tools constantly communicate over the WAN to enterprise data centers. This can create traffic surges and latency issues, among a range of other problems.

On the positive side of the ledger, AI-powered traffic-management and monitoring tools are starting to help resource-constrained network teams cope with the complexity and fragility of multi-cloud, distributed networks. At the same time, modern network services such as SD-WAN, SASE, and 5G also now rely on AI for such things as intelligent routing, load balancing, and network slicing.

But as AI takes over more network functions, is it wise for enterprise leaders to trust this technology?

Is it wise to trust AI for mission-critical networking?

The professionals who will be tasked with using AI to enable next-generation networking are understandably skeptical of the many overheated claims of AI vendors.

“Network operations manage what many perceive to be a complex, fragile environment. So, many teams are fearful of using AI to drive decision-making because of potential network disruptions,” said Jason Normandin, a netops product manager for Broadcom Software.

Operations teams that don’t understand, or don’t have access to, the underlying AI model’s logic will be hard to win over. “To ensure buy-in from network operations teams, it is critical to keep human oversight over the AI-enabled devices and systems,” Normandin said.

To trust AI, networking professionals require “explainable AI,” or AI that is not a black box but that reveals its inner workings. “Building trust in AI as a reliable companion starts with understanding its capabilities and limitations and testing it in a controlled environment before deployment,” said Dr. Adnan Masood, Chief AI Architect at digital transformation company UST.

Explainable and interpretable AI allows network teams to understand how AI arrives at its decisions, while key metrics allow network teams to track its performance. “Continuously monitoring AI’s performance and gathering feedback from team members is also an important way to build trust,” Masood added. “Trust in AI is not about blind-faith but rather understanding its capabilities and using it as a valuable tool to enhance your team’s performance.”

Broadcom’s Normandin notes that while networking experts may be reluctant to “give up the wheel” to AI, there is a middle way. “Recommendation engines can be a good compromise between manual and fully automated systems,” he said. “Such solutions let human experts ultimately make decisions of their own while offering users to rate recommendations provided. This approach enables a continuous training feedback loop, giving the opportunity to dynamically improve the models by using operators’ input.”
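
To make that recommendation-engine pattern concrete, here is a minimal Python sketch of a human-in-the-loop loop: a model proposes a remediation, an operator decides whether to apply it and rates the suggestion, and the ratings feed back into how future candidates are ranked. The class names, actions, and scoring weights are hypothetical illustrations, not any vendor's implementation.

# Minimal sketch of a human-in-the-loop recommendation loop for network
# operations; class names, actions, and weights are hypothetical.
class RecommendationEngine:
    def __init__(self):
        self.feedback = {}  # action -> list of operator ratings (0.0 to 1.0)

    def recommend(self, candidates):
        """Pick the action with the best blend of model confidence and past operator ratings."""
        def combined(item):
            action, model_score = item
            ratings = self.feedback.get(action, [])
            operator_score = sum(ratings) / len(ratings) if ratings else 0.5
            return 0.5 * model_score + 0.5 * operator_score
        return max(candidates, key=combined)[0]

    def rate(self, action, rating):
        """Record the operator's rating so future recommendations shift accordingly."""
        self.feedback.setdefault(action, []).append(rating)

engine = RecommendationEngine()
candidates = [("reroute via backup link", 0.7), ("restart edge router", 0.6)]
print(engine.recommend(candidates))            # model's first suggestion; a human still applies it
engine.rate("reroute via backup link", 0.2)    # poor outcome, so downweight that action
print(engine.recommend(candidates))            # the feedback loop now favors the other action

The model never acts on its own in this pattern; it only re-ranks its suggestions as operators keep scoring them.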

AI can assist network support with natural-language chat.

As enterprise networks become more complicated, distributed, and congested, AI is helping resource-strapped network teams keep up. “The need for instantaneous, elastic connectivity across the enterprise is no longer just an option; it is table stakes for a successful business,” Coward from IBM said. “That’s why the industry is looking to apply AI and intelligent automation solutions to the network.”

The fact is that AI-powered tools are already spreading throughout cloud and enterprise networks, and the number of tools that feature AI will continue to rise for the foreseeable future. Enterprise networking has been one of the sectors most aggressively adopting AI and automation. AI is currently being used for a wide range of network functions, including performance monitoring, alarm suppression, root-cause analysis, and anomaly detection.

For instance, Cisco’s Meraki Insight analyzes network performance issues and helps with troubleshooting; Juniper’s Mist AI automates network configuration and handles optimization; and IBM’s Watson AIOps automates IT operations and improves service delivery.
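
As an illustration of the simplest of those functions, anomaly detection, the sketch below flags latency samples that deviate sharply from a rolling baseline. Production AIOps tools use far richer models than this; the window, threshold, and data are illustrative only.

# Minimal baseline-and-deviation anomaly detector for a stream of latency
# samples (in milliseconds); window, threshold, and data are illustrative.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value   # deviates sharply from the rolling baseline
        history.append(value)

latency_ms = [20 + (i % 3) for i in range(40)] + [95] + [21] * 10
for idx, val in detect_anomalies(latency_ms):
    print(f"sample {idx}: {val} ms is anomalous")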

AI is also being used to improve customer experiences. “AI’s ability to adapt and learn the client-to-cloud connection as it changes will make AI ideal for the most dynamic network use cases,” said Bob Friday, Chief AI Officer at Juniper Networks. Friday said that as society becomes more mobile, the wireless user experience gets ever more complex. That’s a problem because wireless networks are now critical to the daily lives of employees, especially in the age of work-from-home, which forces IT to support users in environments over which IT has little to no control.

This is why AI-powered support is one of the most popular early use cases.

“AI is enabling the next era of search and chatbots,” Friday said. “The end goal is an environment where users enjoy steady, consistent performance and no longer need to spend precious IT resources on mountains of support tickets.”

Chatbots and virtual assistants built with Natural Language Processing (NLP) and Natural Language Understanding (NLU) can understand questions that users ask in their own words. The system responds with specific insights and recommendations based on observations made across the LAN, WLAN, and WAN.

“Where this client-to-cloud insight and automation simply was not possible just a few years ago, today’s chatbots can utilize NLP capabilities to provide context and meaning to user inputs, allowing AI to come up with the best response,” Friday said. “This far surpasses the simple ‘yes’ or ‘no’ responses that originally came from traditional chatbots. With better NLP capabilities, chatbots can progress to become more intuitive, to the point where users will have a hard time telling the difference between a bot and a human.”
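
A toy version of the question-to-insight mapping such assistants perform is sketched below. The keyword matching merely stands in for the NLP/NLU models production assistants actually use, and the intent names and telemetry strings are hypothetical.

# Toy network-support assistant: map a question to an intent, then answer from
# telemetry. Keyword matching stands in for real NLP/NLU; data is hypothetical.
TELEMETRY = {
    "wifi": "3 access points on floor 2 report channel utilization above 80%.",
    "vpn": "VPN gateway latency is normal (35 ms median over the last hour).",
    "wan": "wan-link-3 has dropped 2% of packets since 09:00; the backup link is idle.",
}
INTENTS = {
    "wifi": ("wifi", "wireless", "wlan", "access point"),
    "vpn": ("vpn", "tunnel", "remote access"),
    "wan": ("wan", "branch", "packet loss"),
}

def answer(question):
    text = question.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return TELEMETRY[intent]
    return "No matching insight; please rephrase or open a ticket."

print(answer("Why is the wireless so slow on floor 2?"))
print(answer("Is the VPN having issues today?"))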

The early stages of this vision are already underway. AI is currently being used to help Fortune 500 companies accomplish such things as managing end-to-end user connectivity and enabling the delivery of new 5G services.

Gap turns to AI-powered operations and support.

Retail giant Gap’s in-store WLAN networks were originally designed to accommodate a handful of mobile devices. Now these networks are used not only for employee connections to centralized resources but also to connect shoppers’ devices and an increasing array of retail IoT devices across thousands of stores.

“Wireless in retail is really tough,” said Snehal Patel, global network architect for Gap Inc. As more clients connected to Gap WLANs, a string of problems emerged. “Stores need enough wireless capacity to support innovation, and the network operations team needs better visibility into issues when they arise,” Patel said.

Gap’s IT team searched for a WLAN technology that would leverage the scale and resiliency of public clouds, but the team also wanted a platform that included tools like AI and automation that would enable their networks to scale to meet future demand.

Gap eventually settled on a set of tools from Juniper. It deployed Mist AI, an AI-powered network operations and support platform; Marvis VNA, a virtual network assistant designed to work with Mist AI; and Juniper’s SD-WAN service.

Gap’s operations team can now ask Marvis questions, and not only will it tell them what’s wrong with the network, but it will also recommend the next steps to remediate the problem.

“Before Mist, we spent a lot more time troubleshooting,” Patel said. Now, Mist continuously measures baseline performance, and if there’s a deviation, Marvis helps the operations team identify the problem. With enhanced visibility into network health and root-cause analysis of network issues, Gap has reduced technical-staff visits to stores by 85%.

DISH taps AI to scale 5G for enterprise customers.

Another Fortune 500 company that has adopted AI to modernize networking is DISH Network, which has deployed AI to enable new 5G services. DISH was seeing increasing demand for enterprise 5G services but was having a hard time optimizing its infrastructure to meet that demand.

Enterprise customers were seeking 5G services to enable new use cases, such as smart cities, agricultural drone networks, and smart factories. However, those use cases require secure, private, low-latency, stable connections over shared resources.

DISH knew that it needed to modernize its networking stack, and it sought tools that would help it deliver private 5G networks to enterprise customers on demand and with guaranteed SLAs. This was not possible using legacy tools.

DISH turned to IBM for help. IBM’s AI-powered automation and network orchestration software and services enable DISH to bring 5G network orchestration to both business and operations platforms. Intent-driven orchestration, a software-powered automation process, and AI now underpin DISH’s cloud-native 5G network architecture.

DISH also intends to use IBM Cloud Pak for Network Automation, an AI and machine-learning-powered network automation and orchestration software suite, to unlock new revenue streams, such as the on-demand delivery of private 5G network services.

Cloud Pak automates the complicated, cumbersome process of creating 5G network slices, which can then be provisioned as private networks. By automating the process, DISH can create enterprise-class private networks on 5G slices as soon as demand materializes, complete with SLAs.

 AI-powered advanced network slicing allows DISH to offer 5G services that are customized to each business. Businesses are able to set service levels for each device on their network, so, for example, an autonomous vehicle can receive a very low-latency connection, while an HD video camera can be allocated high bandwidth. 
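
A per-device service-level definition for such a slice might be modeled roughly as below, mirroring the autonomous-vehicle and camera example above. This is not IBM Cloud Pak's actual schema or API; every field name and number is hypothetical.

# Illustrative per-device service levels on a private 5G slice; this is not
# IBM Cloud Pak's schema, and all names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class DeviceSLA:
    device_class: str
    max_latency_ms: int
    min_bandwidth_mbps: int

@dataclass
class NetworkSlice:
    name: str
    isolation: str            # e.g., "private" for a dedicated enterprise slice
    device_slas: list

factory_slice = NetworkSlice(
    name="acme-smart-factory",
    isolation="private",
    device_slas=[
        DeviceSLA("autonomous_vehicle", max_latency_ms=10, min_bandwidth_mbps=5),
        DeviceSLA("hd_camera", max_latency_ms=100, min_bandwidth_mbps=50),
    ],
)

for sla in factory_slice.device_slas:
    print(f"{sla.device_class}: <= {sla.max_latency_ms} ms, >= {sla.min_bandwidth_mbps} Mbps")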

“Our 5G build is unique in that we are truly creating a network of networks where each enterprise can custom-tailor a network slice or group of slices to achieve their specific business needs,” said Marc Rouanne, chief network officer, DISH Wireless. IBM’s orchestration solutions leverage AI, automation, and machine learning to not only make these private 5G slices possible, but also to ensure they adapt over time as customer use evolves.

How IT pros should prepare for AI.

As AI, machine learning, and automation power an increasing array of networking software and gear, how should individual network professionals prepare to deal with their new artificial colleagues?

While few professionals will miss the mundane, repetitive chores that AI excels at, many also worry that AI will eventually displace them entirely.

“While AI is developing exponentially, it is inevitable network teams will be exposed to AI-enabled devices and systems,” Broadcom’s Normandin said. “As network experts are not meant to become AI specialists, a cultural change is probably more likely to happen than anything else.”

Masood of UST agrees that a cultural change is in order. “Network teams are rapidly evolving from just managing networks to managing networks with a brain,” he said. “Within the context of networking, these teams will need to develop the ability to work collaboratively with data scientists, software engineers, and other experts to build, deploy, and maintain AI systems in production.”

https://www.networkworld.com/

Can Ageing be Prevented? Retro Biosciences says 'Yes, we increase your life by 10 years!'

When a startup called Retro Biosciences eased out of stealth mode in mid-2022, it announced it had secured $180 million to bankroll an audacious mission: to add 10 years to the average human life span. It had set up its headquarters in a raw warehouse space near San Francisco just the year before, bolting shipping containers to the concrete floor to quickly make lab space for the scientists who had been enticed to join the company.

Retro said that it would “prize speed” and “tighten feedback loops” as part of an “aggressive mission” to stall aging, or even reverse it. But it was vague about where its money had come from. At the time, it was a “mysterious startup,” according to press reports, “whose investors remain anonymous.”

Now MIT Technology Review can reveal that the entire sum was put up by Sam Altman, the 37-year-old startup guru and investor who is CEO of OpenAI. 

Altman spends nearly all his time at OpenAI, an artificial intelligence company whose chatbots and electronic art programs have been convulsing the tech sphere with their human-like capabilities. 

But Altman’s money is a different matter. He says he’s emptied his bank account to fund two other very different but equally ambitious goals: limitless energy and extended life span.

One of those bets is on the fusion power startup Helion Energy, into which he’s poured more than $375 million, he told CNBC in 2021. The other is Retro, to which Altman cut checks totaling $180 million the same year. 

“It’s a lot. I basically just took all my liquid net worth and put it into these two companies,” Altman says.

Altman’s investment in Retro hasn’t been previously reported. It is among the largest ever by an individual into a startup pursuing human longevity.

Altman has long been a prominent figure in the Silicon Valley scene, where he previously ran the startup incubator Y Combinator in San Francisco. But his profile has gone global with OpenAI’s release of ChatGPT, software that’s able to write poems and answer questions.

The AI breakthrough, according to Fortune, has turned the seven-year-old company into “an unlikely member of the club of tech superpowers.” Microsoft committed to investing $10 billion, and Altman, with 1.5 million Twitter followers, is consolidating a reputation as a heavy hitter whose creations seem certain to alter society in profound ways.  

Altman does not appear on the Forbes billionaires list, but that doesn’t mean he isn’t extremely wealthy. His wide-ranging investments have included early stakes in companies like Stripe and Airbnb. 

 “I have been an early-stage tech investor in the greatest bull market in history,” he says. 

Young Blood

About eight years ago, Altman became interested in so-called “young blood” research. These were studies in which scientists sewed young and old mice together so that they shared one blood system. The surprise: the old mice seemed to be partly rejuvenated.

A grisly experiment, but in a way, remarkably simple. Altman was head of Y Combinator at the time, and he tasked his staff with looking into the progress being made by anti-aging scientists.

“It felt like, all right, this was a result I didn’t expect and another one I didn’t expect,” he says. “So there’s something going on where … maybe there is a secret here that is going to be easier to find than we think.” 

In 2018, Y Combinator launched a special course for biotech companies, inviting those with “radical anti-aging schemes” to apply, but before long, Altman moved away from Y Combinator to focus on his growing role at OpenAI. 

Then, in 2020, researchers in California showed they could achieve an effect similar to young blood by replacing the plasma of old mice with salt water and albumin. That suggested the real problem lay in the old blood. Simply by diluting it (and the toxins in it), medicine might get one step closer to a cure for aging.

The new company would need a lot of money—enough to keep it afloat at least seven or eight years while it carried out research, ran into setbacks, and overcame them. It would also need to get things done quickly. Spending at many biotech startups is decided on by a board of directors, but at Retro, CEO Joe Betts-LaCroix has all the decision-making power. “We have no bureaucracy,” he says. “I am the bureaucracy.”

https://tinyurl.com/4zsukek9

https://retro.bio/announcement/