Saturday 4 May 2024

ChatGPT’s AI ‘memory’ can remember the preferences of paying customers

In February, OpenAI announced Memory, a feature that allows ChatGPT to store queries, prompts, and other customizations more permanently. At the time, it was only available to a “small portion” of users, but it is now available to paying ChatGPT Plus subscribers outside of Europe and Korea.

ChatGPT’s Memory works in two ways to make the chatbot’s responses more personalized. The first lets you explicitly tell ChatGPT to remember certain details; the second has it learn from your conversations over time, much as the algorithms in other apps learn from your activity. Memory brings ChatGPT a step closer to being a genuinely useful AI assistant: once it remembers your preferences, it can apply them without needing a reminder.

As The Verge’s David Pierce pointed out, some users may find it creepy for chatbots to know them this well, so OpenAI says users will always have control over what ChatGPT retains (and what is used for additional training).

OpenAI writes in a blog post that, in a change from the earlier test, ChatGPT will now tell users when memories are updated. Users can manage what ChatGPT remembers by reviewing what the chatbot has taken from conversations and even making it “forget” unwanted details.

A screenshot of ChatGPT’s memory feature.
Image: OpenAI

OpenAI’s examples of uses for Memory include:

You’ve explained that you prefer meeting notes to have headlines, bullets and action items summarized at the bottom. ChatGPT remembers this and recaps meetings this way.

You’ve told ChatGPT you own a neighborhood coffee shop. When brainstorming messaging for a social post celebrating a new location, ChatGPT knows where to start. 

You mention that you have a toddler and that she loves jellyfish. When you ask ChatGPT to help create her birthday card, it suggests a jellyfish wearing a party hat. 

As a kindergarten teacher with 25 students, you prefer 50-minute lessons with follow-up activities. ChatGPT remembers this when helping you create lesson plans.

ChatGPT could always recall details during active conversations; for example, if you ask the chatbot to draft an email, you can immediately follow it up with “make it more professional,” and it will remember that you were talking about email. But if you wait long enough or start a new conversation, that all goes out the window. 

OpenAI did not say why Memory will not be available in Europe or Korea. The company says Memory will roll out to subscribers to ChatGPT Enterprise and Teams, as well as custom GPTs on the GPT Store, but did not specify when. 

https://www.theverge.com/

Wednesday 13 December 2023

Google's Gemini continues the dangerous obfuscation of AI technology

Until this year, it was possible to learn a lot about artificial intelligence technology simply by reading research documentation published by Google and other AI leaders with each new program they released. Open disclosure was the norm for the AI world. 

All that changed in March of this year, when OpenAI elected to announce its latest program, GPT-4, with very little technical detail. The research paper provided by the company obscured just about every important detail of GPT-4 that would allow researchers to understand its structure and to attempt to replicate its effects. 

Last week, Google continued that new obfuscation approach, announcing the formal release of its newest generative AI program, Gemini, developed in conjunction with its DeepMind unit, which was first unveiled in May. The Google and DeepMind researchers offered a blog post devoid of technical specifications, and an accompanying technical report almost completely devoid of any relevant technical details. 

Much of the blog post and the technical report cite a raft of benchmark scores, with Google boasting of beating out OpenAI's GPT-4 on most measures and beating Google's former top neural network, PaLM. 

Neither the blog nor the technical paper includes key details customary in years past, such as how many neural net "parameters," or "weights," the program has, a key aspect of its design and function. Instead, Google refers to three versions of Gemini in three different sizes: "Ultra," "Pro," and "Nano." The paper does disclose that Nano is trained with two different weight counts, 1.8 billion and 3.25 billion, while failing to disclose the weights of the other two sizes. 

Numerous other technical details are absent, just as with the GPT-4 technical paper from OpenAI. In the absence of technical details, online debate has focused on whether the boasting of benchmarks means anything. 

OpenAI researcher Rowan Zellers wrote on X (formerly Twitter) that Gemini is "super impressive," and added, "I also don't have a good sense on how much to trust the dozen or so text benchmarks that all the LLM papers report on these days." 

Tech news site TechCrunch's Kyle Wiggers reports anecdotes of poor performance by Google's Bard chatbot, now enhanced by Gemini. He cites posts on X by people asking Bard questions such as movie trivia or vocabulary suggestions and reporting the failures. 

The sudden swing to secrecy by Google and OpenAI is becoming a major ethical issue for the tech industry because no one knows, outside the vendors -- OpenAI and its partner Microsoft, or, in this case, Google's Google Cloud unit -- what is going on in the black box in their computing clouds. 

Google's lack of disclosure, while not surprising given its commercial battle with OpenAI, and partner Microsoft, for market share, is made more striking by one very large omission: model cards. 

Model cards are a form of standard disclosure used in AI to report on the details of neural networks, including potential harms of the program (hate speech, etc.). While the GPT-4 report from OpenAI omitted most details, it at least made a nod to the practice with a "GPT-4 System Card" section in the paper, which it said was inspired by model cards.
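For readers who have never seen one, the sketch below shows roughly what a model card covers, loosely following the section headings proposed in the original research ("Model Cards for Model Reporting," Mitchell et al., 2019). It is an illustrative skeleton only: every field value is a placeholder, not a description of any real model.

```python
# Minimal, illustrative model card skeleton, loosely following the section
# headings of "Model Cards for Model Reporting" (Mitchell et al., 2019).
# All field values are placeholders, not details of any real model.

from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    model_details: dict = field(default_factory=dict)            # who built it, version, size
    intended_use: dict = field(default_factory=dict)             # primary and out-of-scope uses
    factors: list = field(default_factory=list)                  # conditions performance may vary across
    metrics: dict = field(default_factory=dict)                  # how performance is measured
    training_data: str = ""                                      # description of the training corpus
    evaluation_data: str = ""                                     # description of the evaluation sets
    ethical_considerations: list = field(default_factory=list)   # known or potential harms
    caveats_and_recommendations: list = field(default_factory=list)


card = ModelCard(
    model_details={"name": "example-llm", "version": "1.0", "parameters": "undisclosed"},
    intended_use={"primary": "text generation", "out_of_scope": "high-stakes decisions"},
    factors=["language", "domain", "prompt length"],
    metrics={"benchmarks": ["example benchmark A", "example benchmark B"]},
    training_data="Not disclosed by the vendor.",
    evaluation_data="Public benchmark suites.",
    ethical_considerations=["may produce hate speech", "may state false information as fact"],
    caveats_and_recommendations=["do not deploy without human review"],
)

print(json.dumps(asdict(card), indent=2))
```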

Google doesn't even go that far, omitting anything resembling model cards. The omission is particularly strange given that model cards were invented at Google by a team that included Margaret Mitchell, formerly co-lead of Ethical AI at Google, and former co-lead Timnit Gebru. 

Instead of model cards, the report offers a brief, rather bizarre passage about the deployment of the program with vague language about having model cards at some point.

If Google puts question marks next to model cards in its own technical disclosure, one has to wonder what the future of oversight and safety is for neural networks.

https://www.zdnet.com/

Six of the most popular Android password managers are leaking data

Several mobile password managers are leaking user credentials due to a vulnerability discovered in the autofill functionality of Android apps. 

The credential-stealing flaw, dubbed AutoSpill, was reported by a team of researchers from the International Institute of Information Technology Hyderabad at last week's Black Hat Europe 2023 conference.

The vulnerability comes into play when Android calls a login page via WebView. (WebView is an Android component that makes it possible to view web content without opening a web browser.) When that happens, WebView allows Android apps to display the content of the web page in question. 

That's all well and good -- unless a password manager is added to the mix: the credentials shared with WebView can also be shared with the app that originally called for the username and password. If the originating app is trusted, everything should be OK. If that app isn't trusted, things could go very wrong.
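A deliberately simplified sketch of the trust decision at the heart of AutoSpill may help. This is conceptual Python, not Android framework code: the class and function names are invented for illustration, and the real autofill flow is considerably more involved.

```python
# Conceptual sketch of the AutoSpill issue: when a login page is rendered inside
# a WebView, an autofill service that fills based only on the page it sees may
# hand the website's credentials to the host app itself. All names here are
# invented for illustration; this is not Android framework code.

VAULT = {"accounts.example.com": ("alice", "s3cret")}

class AutofillRequest:
    def __init__(self, host_package, webview_domain=None):
        self.host_package = host_package      # app that triggered autofill
        self.webview_domain = webview_domain  # domain shown inside its WebView, if any

def naive_fill(request):
    # Fills whenever the vault has credentials for the page in the WebView,
    # regardless of which app is hosting that WebView.
    if request.webview_domain in VAULT:
        return VAULT[request.webview_domain]   # credentials end up with the host app
    return None

def stricter_fill(request, trusted_associations):
    # Only fills if the host app is known to be associated with that domain,
    # roughly the kind of check that would prevent the leak.
    if trusted_associations.get(request.host_package) == request.webview_domain:
        return VAULT.get(request.webview_domain)
    return None

malicious = AutofillRequest("com.shady.flashlight", "accounts.example.com")
print(naive_fill(malicious))                                                   # ('alice', 's3cret') -- the leak
print(stricter_fill(malicious, {"com.example.bank": "accounts.example.com"}))  # None
```

The second function hints at the kind of host-to-domain association check that would stop an untrusted app from harvesting credentials meant for the website.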

The affected password managers are 1Password, LastPass, Enpass, Keeper, and Keepass2Android. If the credentials were shared via a JavaScript injection method, DashLane and Google Smart Lock are also affected by the vulnerability.

Because of the nature of this vulnerability, neither phishing nor malicious in-app code is required.

One thing to keep in mind is that the researchers tested this on less-than-current hardware and software.

Specifically, they tested on these three devices: Poco F1, Samsung Galaxy Tab S6 Lite, and Samsung Galaxy A52. The versions of Android used in their testing were Android 10 (with the December 2020 security patch), Android 11 (with the January 2022 security patch), and Android 12 (with the April 2022 security patch). 

As these tested devices -- as well as the OS and security patches -- were out of date, it's hard to know with any certainty whether the vulnerability would affect newer versions of Android. 

However, even if you are using a device other than what the group tested with, it doesn't mean this vulnerability should be shrugged off. Rather, it should serve as a reminder to always keep your Android OS and installed apps up to date. The WebView component has long been under scrutiny, and it should always be kept current. To check, open the Google Play Store on your device, search for WebView, tap About this app, and compare the latest version with the version installed on your device. If they are not the same, you'll want to update.

One of your best means of keeping Android secure is to make sure it is always as up-to-date as possible. Check daily for OS and app updates and apply all that are available.

https://www.zdnet.com/

Monday 25 September 2023

Smartphone Showdown: 15 Years of Android vs. iPhone

 "I'm going to destroy Android, because it's a stolen product," Steve Jobs says in author Walter Isaacson's 2011 biography of the late Apple co-founder.

Jobs' fury around Google and its smartphone software is well documented, and the many lawsuits involving Apple and various Android partners showed that Jobs was serious about his allegations of theft. But the reality is that both Apple and Google have taken inspiration from each other for years and that neither company would be where it is today without the work of the other.

So as Android celebrates its 15th birthday (since the launch of the first Android-based phone, the T-Mobile G1), let's take a look back at the journey the companies have taken to becoming the most dominant forces in the tech world -- and how their competition pushed them to innovate. 

Smartphones have arguably changed the world more than any other invention in human history, from radically altering how we interact with one another to creating a whole new category of companies that deal in various mobile technologies. And though Jobs may have been outspokenly vitriolic about Android in the early days, it's clear that ideas and inspiration have echoed back and forth between Apple and Google in the years since.

The last 15 years of competition between the two companies have often felt like siblings bickering at playtime, falling out over who had which toy first or crying to the parents when the other one took something that wasn't theirs. Most siblings will argue to some extent throughout their lives, but history is also rife with pairings that, through spirited competition, pushed each sibling to succeed. 

The two companies' volleying back and forth pushed them ahead in the game, and allowed them to fight off other challengers, like the once-dominant BlackBerry, as well as Nokia and its short-lived Symbian platform. Even tech giant Microsoft and its Windows Phone failed to thrive in the face of the heated competition from Apple and Google.

But though the relationship today between the iPhone maker and the Android purveyor hardly matches the friendly, familial rivalry of tennis's Williams sisters, that wasn't always the case. Let's take a look back.

Beginnings

Android began as its own company (Android Inc.) back in 2003, and it wasn't acquired by Google until 2005. Meanwhile, Apple had already found success with a mobile product in the iPod, the iPhone began development in secret in 2004, and Jobs was reportedly approached to become Google's CEO. 

Jobs didn't take the role, but Google found a CEO in Eric Schmidt, who in 2006 became part of Apple's board of directors. "There was so much overlap that it was almost as if Apple and Google were a single company," journalist Steven Levy wrote in his 2011 book In the Plex: How Google Thinks, Works, and Shapes Our Lives. Things didn't stay as cozy, however. 

In January 2007 Apple unveiled the first iPhone, and in November 2007 Google showed off two prototypes. One, a BlackBerry-esque phone that made use of hardware buttons and scroll wheels, had been in the prototype phase for some time. The more recent prototype was dominated by a large touchscreen and looked much more like the iPhone.

That didn't go down well with Jobs, who threatened the destruction of Android using "every penny of Apple's $40 billion in the bank." The first Android phone, the T-Mobile G1, combined elements of both those prototypes, with a touchscreen that slid out to reveal a physical keyboard. Schmidt left Apple's board of directors in 2009 due to potential conflicts of interest, and so began a series of lawsuits involving Apple and various Google partners over alleged infringement of phone-related patents. 

The most notable of the Google partners was Samsung, which Apple accused of infringing a number of patents, including patents related to basic functions like tap to zoom and slide to unlock. These legal battles raged for years, with Apple claiming that "it is a fact that Samsung blatantly copied our design" and Samsung pushing back. The long dispute finally came to an end in 2018, when both sides agreed to settle out of court.

Despite the competing claims made during those long courtroom struggles, if we look at the development not just of the software but of the phones that run it, it seems clear both sides continued to liberally borrow ideas from each other. 

Features like picture-in-picture, live voicemail, lock screen customization and live translation were all found on the Android operating system before eventually making their way to iOS. And though the use of widgets to customize your home screen was long held as a differentiator for Android, that feature too eventually found its way to iOS. 

On the other hand, Android's Nearby Share feature is remarkably similar to Apple's AirDrop, and Android phones didn't get features like "do not disturb" or the ability to take screenshots until some time after the iPhone had them. 

Apple removed the 3.5mm headphone jack from the iPhone in September 2016, and I distinctly remember that at Google's launch event for the Pixel the following month, chuckles went round the room when the exec on stage proclaimed, "Yes, it has a headphone jack." Still, Google itself went on to ditch the headphone jack, with the Pixel 2. 

Sometimes it's difficult, if not impossible, to say whether these companies are copying each other's ideas or simply coming up with the same conclusions after paying attention to consumer trends, rumors in the press and the general evolution of supporting technologies. 

Rumors that Apple would remove the physical home button on the iPhone X were circling long before the phone was officially unveiled in September 2017. Are they the same rumors Samsung responded to when it "beat Apple to the punch" and removed the home button from its Galaxy S8 earlier that same year? Or did both sides simply arrive at such a big design decision independently? 

It's impossible to pick a side in this argument -- and somewhat reductive to even try. And regardless, you wind up with the same thing: Phones and software from different manufacturers that seem to evolve in unison. 

Today

In 2023, Android is by far the dominant smartphone platform, with 70.8% market share globally against Apple's 28.4% (according to information from Statista). But Google's focus has always been on getting the Android operating system onto as many devices as possible, from phones costing less than $50 to those costing over $1,500. Apple, meanwhile, offers iOS only on its own devices, and those devices come at a hefty premium, so it's fair to expect that iOS won't be as widespread. 

Google's business model is primarily one of a service provider, though, and not a hardware manufacturer. It makes its money chiefly from selling advertisements across all its platforms, and so it typically benefits from a mass market approach. Android itself is free for companies to use -- hence the large number of installs. But to use Google services (Gmail, YouTube, Chrome and so on, along with access to the Google Play Store) companies must pay license fees to Google. Still, the free use of Android is why you'll find the operating system on phones from Samsung, Motorola, OnePlus, Oppo, Nothing and a huge variety of other brands -- and yes, on Google's own Pixel phones. 

Apple, however, is a closed shop. Only iPhones can run iOS, and Apple has every intention of keeping it that way. It has full control over how that software works on its phones (and charges developers accordingly for apps sold in its own App Store) and how it can be best optimized for the hardware. That's why Apple phones typically perform better than many high-end Android phones, despite the hardware often being less high-spec on paper. Android by its nature has to take more of a "one size fits all" approach, where each new version has to run well on a huge variety of devices, with different screen sizes and under-the-hood components. 

Android struggled with the arrival of tablets, as software designed for 4-inch phones suddenly had to stretch to fit screens much larger in size. Android 3.0 Honeycomb was primarily designed for tablets, but various issues meant it didn't hang around for long, and some of its features were simply absorbed into future versions. Apple takes a different approach: Though at first it used iOS for both devices, now it keeps iOS solely for its phones, optimizing for the smaller screen sizes, with the newer iPadOS as the software for its tablets. 

Yet it's still clear to see the ways the two operating systems have converged over the years. Though Android was always the more customizable of the two, Apple eventually introduced home-screen widgets, customizable lock screens and even the ability to create icon themes to transform the look of your device. 

Meanwhile, Google worked hard to limit the problems caused by fragmentation and has arguably taken more of an "Apple" approach in its own line of devices. Like Apple's iPhones, the phones in the more recent Pixel range -- including the excellent Pixel 7 Pro -- were designed to show off "the best of Google," with processors produced in house (as Apple does with the chips for its iPhones) and software optimized for the Pixel phone it'll run on. 

Though Android may be ahead in terms of numbers of users, Google has clearly seen that Apple is leading the way in terms of a more premium, refined hardware experience, and the Pixel series is Google's answer. Having reviewed both the Pixel 6 Pro and Pixel 7 Pro myself, I can say with certainty that they're the most Apple-like experience you can get from an Android phone. 

The future 

"We are at an interesting crossroads for Android," says Ben Woods, industry analyst at CCS Insight. "Although its success in volume terms is undisputed, it is increasingly losing share to Apple in the premium smartphone space." Google's Pixel phones are some of the best Android phones around, but sales of the devices are a fraction of what Apple sees with the iPhone. 

It's a different story when you look at Android partners, chiefly Samsung, which is jostling with Apple for the position of No. 1 phone manufacturer in the world -- a title that seems to frequently slip from one of the companies to the other. But Samsung has a much wider catalog of products, with unit sales being bolstered by a larger number of phones at lower price points. In the premium segment, Apple still rules, and that's showing no sign of slowing down. 

But Android is increasingly betting on longer-term success from its innovation with foldable phones. Samsung is now multiple generations into its Galaxy Z Flip and Z Fold devices, with Google's own Pixel Fold joining the party earlier this year, along with foldables from the likes of Oppo, Motorola and soon OnePlus. Apple has yet to launch a foldable device, and it remains to be seen whether that's simply because its take on the genre isn't ready, or because it believes foldables are a fad that'll pass (like 3D displays or curving designs). 

Rather than looking toward more-experimental innovations like foldable displays, Apple has instead continued to refine its existing hardware, equipping its latest iPhone 15 Pro series with titanium designs and improved cameras. And Apple's approach also includes pulling people into the wider Apple ecosystem, with iPhones syncing seamlessly with other Apple products, including Apple Watches, iPads, Macs, HomePods and Apple TV. 

With each new iPhone customer comes an opportunity for Apple to sell additional products from its own catalog, along with services like iCloud storage, Apple Music, Apple Fitness or subscriptions to its Apple TV streaming service. Though Google offers products like this to some extent, it has yet to offer the sort of cohesive package Apple does, which could make Google's offerings less enticing for new customers and tempt Android users to jump ship to Apple. 

Still, Android's proliferation across devices at lower price points will continue to make it a popular choice for people on tighter budgets. And its presence on a huge number of devices from third-party manufacturers means it's where we'll see more innovation that seeks to answer the question of what role the smartphone plays in our lives. 

With smartphone shipments expected to hit their lowest point in a decade, more companies will be looking for ways to use new, exciting technologies to capture an audience's attention and present a product that serves up new ways of doing things. We'll see this from Android and its partners and from Apple with the iPhone, its software and its peripheral devices, including new tech like Apple's Vision Pro headset. 

We'll also see a bigger focus from all sides on sustainability: Apple, for instance, went to great lengths during its iPhone 15 launch event in September to flex its green credentials. While Samsung is making larger efforts in sustainability and smaller companies like Fairphone are using planet-friendly features as primary selling points, other manufacturers have yet to make sustainability a key part of their business model. It's likely, then, that as consumers increasingly look toward sustainable options, the next major competition in the smartphone industry could be who can make the greenest product.

There's no question that the development of both the software and hardware side of iOS and Android smartphones has at times happened almost in tandem, with one side launching a feature and the other responding in "me too!" fashion. And like the Williams sisters using their sporting rivalry to reach stratospheric new heights in tennis, Apple and Android will need to continue to embrace that spirit of competition to find new ways to succeed in an increasingly difficult market.

https://www.cnet.com/

Monday 31 July 2023

Cryptography may offer a solution to the massive AI-labeling problem

The White House wants big AI companies to disclose when content has been created using artificial intelligence, and very soon the EU will require some tech platforms to label their AI-generated images, audio, and video with “prominent markings” disclosing their synthetic origins. 

There’s a big problem, though: identifying material that was created by artificial intelligence is a massive technical challenge. The best options currently available—detection tools powered by AI, and watermarking—are inconsistent, impermanent, and sometimes inaccurate. (In fact, just this week OpenAI shuttered its own AI-detecting tool because of high error rates.)

But another approach has been attracting attention lately: C2PA. Launched two years ago, it’s an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as “provenance” information. 

The developers of C2PA often compare the protocol to a nutrition label, but one that says where the content came from and who—or what—created it. 

The project, part of the nonprofit Joint Development Foundation, was started by Adobe, Arm, Intel, Microsoft, and Truepic, which formed the Coalition for Content Provenance and Authenticity (from which C2PA gets its name). Over 1,500 companies are now involved in the project through the closely affiliated open-source community, Content Authenticity Initiative (CAI), including ones as varied and prominent as Nikon, the BBC, and Sony.

Recently, as interest in AI detection and regulation has intensified, the project has been gaining steam; Andrew Jenks, the chair of C2PA, says that membership has increased 56% in the past six months. The major media platform Shutterstock has joined as a member and announced its intention to use the protocol to label all its AI-generated content, including images produced by its DALL-E-powered AI image generator. 

Sejal Amin, chief technology officer at Shutterstock, told MIT Technology Review in an email that the company is protecting artists and users by “supporting the development of systems and infrastructure that create greater transparency to easily identify what is an artist’s creation versus AI-generated or modified art.”

What is C2PA and how is it being used?

Microsoft, Intel, Adobe, and other major tech companies started working on C2PA in February 2021, hoping to create a universal internet protocol that would allow content creators to opt into labeling their visual and audio content with information about where it came from. (At least for the moment, this does not apply to text-based posts.) 

Crucially, the project is designed to be adaptable and functional across the internet, and the base computer code is accessible and free to anyone. 

Truepic, which sells content verification products, has demonstrated how the protocol works with a deepfake video created with Revel.ai. When a viewer hovers over a little icon at the top right corner of the screen, a box of information about the video appears that includes the disclosure that it “contains AI-generated content.” 

Adobe has also already integrated C2PA, which it calls content credentials, into several of its products, including Photoshop and Adobe Firefly. “We think it’s a value-add that may attract more customers to Adobe tools,” Andy Parsons, senior director of the Content Authenticity Initiative at Adobe and a leader of the C2PA project, says. 

C2PA is secured through cryptography, which relies on a series of codes and keys to protect information from being tampered with and to record where the information came from. More specifically, it works by encoding provenance information through a set of hashes that cryptographically bind to each pixel, says Jenks, who also leads Microsoft’s work on C2PA. 
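The general idea of cryptographically binding provenance to content can be illustrated with a few lines of Python. This is a simplified stand-in, not the actual C2PA manifest or signature format: a shared-secret HMAC takes the place of a proper certificate-based signature, purely to show the tamper-evidence property.

```python
# Simplified illustration of hash-based provenance binding. Real C2PA manifests
# use standardized claim structures and certificate-based signatures; here a
# shared-secret HMAC stands in for a digital signature just to show how any
# edit to the content invalidates the provenance claim.

import hashlib, hmac, json

SIGNING_KEY = b"demo-key-not-for-real-use"

def make_manifest(content: bytes, generator: str) -> dict:
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,                       # who or what produced the content
        "contains_ai_generated_content": True,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify(content: bytes, manifest: dict) -> bool:
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    )
    return sig_ok and claim["content_sha256"] == hashlib.sha256(content).hexdigest()

image = b"\x89PNG...fake image bytes..."
manifest = make_manifest(image, generator="ExampleImageModel v1")
print(verify(image, manifest))              # True: untouched content checks out
print(verify(image + b"edited", manifest))  # False: any edit breaks the binding
```

Any change to the content or the manifest breaks verification, which is what makes a provenance claim trustworthy rather than just another piece of removable metadata.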

C2PA offers some critical benefits over AI detection systems, which use AI to spot AI-generated content and which generative models can, in turn, learn to evade. It’s also a more standardized and, in some instances, more easily viewable system than watermarking, the other prominent technique used to identify AI-generated content. The protocol can work alongside watermarking and AI detection tools as well, says Jenks. 

The value of provenance information 

Adding provenance information to media to combat misinformation is not a new idea, and early research seems to show that it could be promising: one project from a master’s student at the University of Oxford, for example, found evidence that users were less susceptible to misinformation when they had access to provenance information about the content. Indeed, in OpenAI’s update about its AI detection tool, the company said it was focusing on other “provenance techniques” to meet disclosure requirements.

That said, provenance information is far from a fix-all solution. C2PA is not legally binding, and without required internet-wide adoption of the standard, unlabeled AI-generated content will exist, says Siwei Lyu, a director of the Center for Information Integrity and professor at the University at Buffalo in New York. “The lack of over-board binding power makes intrinsic loopholes in this effort,” he says, though he emphasizes that the project is nevertheless important.

What’s more, since C2PA relies on creators to opt in, the protocol doesn’t really address the problem of bad actors using AI-generated content. And it’s not yet clear just how helpful the provision of metadata will be when it comes to the media fluency of the public. Provenance labels do not necessarily mention whether the content is true or accurate. 

Ultimately, the coalition’s most significant challenge may be encouraging widespread adoption across the internet ecosystem, especially by social media platforms. The protocol is designed so that a photo, for example, would have provenance information encoded from the time a camera captured it to when it found its way onto social media. But if the social media platform doesn’t use the protocol, it won’t display the photo’s provenance data.

The major social media platforms have not yet adopted C2PA. Twitter had signed on to the project but dropped out after Elon Musk took over. (Twitter also stopped participating in other volunteer-based projects focused on curbing misinformation.)  

C2PA “[is] not a panacea, it doesn’t solve all of our misinformation problems, but it does put a foundation in place for a shared objective reality,” says Parsons. “Just like the nutrition label metaphor, you don’t have to look at the nutrition label before you buy the sugary cereal.

“And you don’t have to know where something came from before you share it on Meta, but you can. We think the ability to do that is critical given the astonishing abilities of generative media.”

https://www.technologyreview.com/

Wednesday 28 June 2023

Dust uses large language models on internal data to improve team productivity

Dust is a new AI startup based in France that is working on improving team productivity by breaking down internal silos, surfacing important knowledge and providing tools to build custom internal apps. At its core, Dust is using large language models (LLMs) on internal company data to give new superpowers to team members.

The company was co-founded by Gabriel Hubert and Stanislas Polu, who have known each other for more than a decade. Their first startup was called Totems and was acquired by Stripe in 2015. After that, they both spent a few years working for Stripe before parting ways.

Stanislas Polu joined OpenAI, where he spent three years working on LLMs’ reasoning capabilities while Gabriel Hubert became the head of product at Alan.

They teamed up once again to create Dust. Unlike many AI startups, Dust isn’t focused on creating new large language models. Instead, the company wants to build applications on top of LLMs developed by OpenAI, Cohere, AI21, etc.

The team first worked on a platform that can be used to design and deploy large language model apps. It has since focused its efforts on one use case in particular: centralizing and indexing internal data so that it can be used by LLMs.

From an internal ChatGPT to next-gen software

There are a handful of connectors that constantly fetch internal data from Notion, Slack, GitHub and Google Drive. This data is then indexed and can be used for semantic search queries. When a user wants to do something with a Dust-powered app, Dust will find the relevant internal data, use it as the context of an LLM and return an answer.

For example, let’s say you just joined a company and you’re working on a project that was started a while back. If your company fosters communication transparency, you will want to find information in existing internal data. But the internal knowledge base might not be up to date. Or it might be hard to find the reason why something is done this way, as it’s been discussed in an archived Slack channel.

Dust isn’t just a better internal search tool, as it doesn’t just return search results. It can find information across multiple data sources and format answers in a way that is much more useful to you. It can be used as a sort of internal ChatGPT, but it could also be used as the basis of new internal tools.
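A toy sketch of the retrieve-then-prompt flow described above looks roughly like the following. Nothing here reflects Dust's actual code: the document snippets are made up, and the bag-of-words "embedding" is a crude stand-in for the semantic index and embedding model a real system would use.

```python
# Toy illustration of retrieval-augmented prompting: index internal documents,
# find the ones most relevant to a question, and pass them to an LLM as context.
# The "embedding" is a crude bag-of-words vector; a real system would use a
# semantic embedding model and a vector store.

import math
from collections import Counter

DOCS = {
    "notion/project-kickoff": "The billing revamp started in 2022 to replace the legacy invoicing system.",
    "slack/#eng-archive":     "We chose per-seat pricing because usage-based billing confused customers.",
    "gdrive/roadmap-2024":    "Q3 goal: migrate all invoicing to the new billing service.",
}

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

INDEX = {doc_id: embed(text) for doc_id, text in DOCS.items()}

def retrieve(question: str, k: int = 2):
    q = embed(question)
    return sorted(INDEX, key=lambda d: cosine(q, INDEX[d]), reverse=True)[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(f"[{d}] {DOCS[d]}" for d in retrieve(question))
    return f"Answer using only this internal context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Why did we choose per-seat pricing for billing?"))
# The assembled prompt would then be sent to an LLM (OpenAI, Cohere, etc.) for the final answer.
```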

“We’re convinced that natural language interface is going to disrupt software,” Gabriel Hubert told me. “In five years’ time, it would be disappointing if you still have to go and click on edit, settings, preferences, to decide that your software should behave differently. We see a lot more of our software adapting to your individual needs, because that’s the way you are, but also because that’s the way your team is — because that’s the way your company is.”

The company is working with design partners on several ways to implement and package the Dust platform. “We think there are a lot of different products that can be created in this area of enterprise data, knowledge workers and models that could be used to support them,” Polu told me.

It’s still early days for Dust, but the startup is exploring an interesting problem. There are many challenges ahead when it comes to data retention, hallucination and all of the issues that come with LLMs. Maybe hallucination will become less of an issue as LLMs evolve. Maybe Dust will end up creating its own LLM for data privacy reasons.

Dust has raised $5.5 million (€5 million) in a seed round led by Sequoia with XYZ, GG1, Seedcamp, Connect, Motier Ventures, Tiny Supercomputer, and AI Grant. Several business angels also participated, such as Olivier Pomel from Datadog, Julien Codorniou, Julien Chaumond from Hugging Face, Mathilde Collin from Front, Charles Gorintin and Jean-Charles Samuelian-Werve from Alan, ElĂ©onore Crespo and Romain Niccoli from Pigment, Nicolas Brusson from BlaBlaCar, Howie Liu from Airtable, Matthieu Rouif from PhotoRoom, Igor Babuschkin and Irwan Bello.

If you take a step back, Dust is betting that LLMs will greatly change how companies work. A product like Dust works even better in a company that fosters radical transparency instead of information retention, written communication instead of endless meetings, autonomy instead of top-down management.

If LLMs deliver on their promise and greatly improve productivity, companies that adopt these values will gain an unfair advantage, because tools like Dust will unlock a lot of untapped potential for knowledge workers.

https://techcrunch.com/

Tuesday 6 June 2023

Governments worldwide grapple with regulation to rein in AI dangers

Ever since generative AI exploded into public consciousness with the launch of ChatGPT at the end of last year, calls to regulate the technology to stop it from causing undue harm have risen to fever pitch around the world. The stakes are high — just last week, technology leaders signed an open public letter saying that if government officials get it wrong, the consequence could be the extinction of the human race.

While most consumers are just having fun testing the limits of large language models such as ChatGPT, a number of worrying stories have circulated about the technology making up supposed facts (also known as "hallucinating") and making inappropriate suggestions to users, as when an AI-powered version of Bing told a New York Times reporter to divorce his spouse.

Tech industry insiders and legal experts also note a raft of other concerns, including the ability of generative AI to enhance the attacks of threat actors on cybersecurity defenses, the possibility of copyright and data-privacy violations — since large language models are trained on all sorts of information — and the potential for discrimination as humans encode their own biases into algorithms. 

Possibly the biggest area of concern is that generative AI programs are essentially self-learning, demonstrating increasing capability as they ingest data, and that their creators don't know exactly what is happening within them. This may mean, as ex-Google AI leader Geoffrey Hinton has said, that humanity may just be a passing phase in the evolution of intelligence and that AI systems could develop their own goals that humans know nothing about.

All this has prompted governments around the world to call for protective regulations. But, as with most technology regulation, there is rarely a one-size-fits-all approach, with different governments looking to regulate generative AI in a way that best suits their own political landscape.

Countries make their own regulations

“[When it comes to] tech issues, even though every country is free to make its own rules, in the past what we have seen is there’s been some form of harmonization between the US, EU, and most Western countries,” said Sophie Goossens, a partner at law firm Reed Smith who specializes in AI, copyright, and IP issues. “It's rare to see legislation that completely contradicts the legislation of someone else.”

While the details of the legislation put forward by each jurisdiction might differ, there is one overarching theme that unites all governments that have so far outlined proposals: how the benefits of AI can be realized while minimizing the risks it presents to society. Indeed, EU and US lawmakers are drawing up an AI code of conduct to bridge the gap until legislation has been formally passed.

Generative AI is an umbrella term for any kind of automated process that uses algorithms to produce, manipulate, or synthesize data, often in the form of images or human-readable text. It’s called generative because it creates something that didn’t previously exist. It's not a new technology, and conversations around regulation are not new either.

Generative AI has arguably been around (in a very basic chatbot form, at least) since the mid-1960s, when an MIT professor created ELIZA, an application programmed to use pattern matching and language substitution to issue responses fashioned to make users feel like they were talking to a therapist. But generative AI's recent arrival in the public domain has allowed people who might not have had access to the technology before to create sophisticated content on just about any topic, based on a few basic prompts.
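For a sense of how simple that early approach was, a few lines of pattern matching and pronoun substitution are enough to reproduce the flavor of ELIZA. This is an illustrative miniature, not the original 1960s program.

```python
# Miniature ELIZA-style responder: match a pattern, swap pronouns, and reflect
# the user's words back as a question. An illustrative toy, not the original.

import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel ignored by my team"))  # Why do you feel ignored by your team?
print(respond("I am worried about AI"))      # How long have you been worried about AI?
```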

As generative AI applications become more powerful and prevalent, there is growing pressure for regulation.

“The risk is definitely higher because now these companies have decided to release extremely powerful tools on the open internet for everyone to use, and I think there is definitely a risk that technology could be used with bad intentions,” Goossens said.

First steps toward AI legislation

Although discussions by the European Commission around an AI regulatory act began in 2019, the UK government was one of the first to announce its intentions, publishing a white paper in March this year that outlined five principles it wants companies to follow: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

In an effort to avoid what it called “heavy-handed legislation,” however, the UK government has called on existing regulatory bodies to use current regulations to ensure that AI applications adhere to guidelines, rather than draft new laws.

Since then, the European Commission has published the first draft of its AI Act, which was delayed due to the need to include provisions for regulating the more recent generative AI applications. The draft legislation includes requirements for generative AI models to reasonably mitigate against foreseeable risks to health, safety, fundamental rights, the environment, democracy, and the rule of law, with the involvement of independent experts.

The legislation proposed by the EU would forbid the use of AI when it could become a threat to safety, livelihoods, or people’s rights, with stipulations around the use of artificial intelligence becoming less restrictive based on the perceived risk it might pose to someone coming into contact with it — for example, interacting with a chatbot in a customer service setting would be considered low risk. AI systems that present such limited and minimal risks may be used with few requirements. AI systems posing higher levels of bias or risk, such as those used for government social-scoring systems and biometric identification systems, will generally not be allowed, with few exceptions.

However, even before the legislation had been finalized, ChatGPT in particular had already come under scrutiny from a number of individual European countries for possible GDPR data protection violations. The Italian data regulator initially banned ChatGPT over alleged privacy violations relating to the chatbot’s collection and storage of personal data, but reinstated use of the technology after Microsoft-backed OpenAI, the creator of ChatGPT, clarified its privacy policy and made it more accessible, and offered a new tool to verify the age of users.

Other European countries, including France and Spain, have filed complaints about ChatGPT similar to those issued by Italy, although no decisions relating to those grievances have been made.

Differing approaches to regulation

All regulation reflects the politics, ethics, and culture of the society you’re in, said Martha Bennett, vice president and principal analyst at Forrester, noting that in the US, for instance, there’s an instinctive reluctance to regulate unless there is tremendous pressure to do so, whereas in Europe there is a much stronger culture of regulation for the common good.

“There is nothing wrong with having a different approach, because yes, you do not want to stifle innovation,” Bennett said. Alluding to the comments made by the UK government, Bennett said it is understandable not to want to stifle innovation, but she doesn't agree with the idea that by relying largely on current laws and being less stringent than the EU AI Act, the UK government can provide the country with a competitive advantage, particularly if this comes at the expense of data protection laws.

“If the UK gets a reputation of playing fast and loose with personal data, that’s also not appropriate,” she said.

While Bennett believes that differing legislative approaches can have their benefits, she notes that AI regulations implemented by the Chinese government would be completely unacceptable in North America or Western Europe.

Under Chinese law, AI firms will be required to submit security assessments to the government before launching their AI tools to the public, and any content generated by generative AI must be in line with the country’s core socialist values. Failure to comply with the rules will result in providers being fined, having their services suspended, or facing criminal investigations.

The challenges to AI legislation

Although a number of countries have begun to draft AI regulations, such efforts are hampered by the reality that lawmakers constantly have to play catchup to new technologies, trying to understand their risks and rewards.

“If we refer back to most technological advancements, such as the internet or artificial intelligence, it’s like a double-edged sword, as you can use it for both lawful and unlawful purposes,” said Felipe Romero Moreno, a principal lecturer at the University of Hertfordshire’s Law School whose work focuses on legal issues and regulation of emerging technologies, including AI.

AI systems may also do harm inadvertently, since humans who program them can be biased, and the data the programs are trained with may contain bias or inaccurate information. “We need artificial intelligence that has been trained with unbiased data,” Romero Moreno said. “Otherwise, decisions made by AI will be inaccurate as well as discriminatory.”

Accountability on the part of vendors is essential, he said, stating that users should be able to challenge the outcome of any artificial intelligence decision and compel AI developers to explain the logic or rationale behind the technology’s reasoning. (A recent example of a related case is a class-action lawsuit filed by a US man who was rejected for a job because AI video software judged him to be untrustworthy.)

Tech companies need to make artificial intelligence systems auditable so that they can be subject to independent and external checks from regulatory bodies — and users should have access to legal recourse to challenge the impact of a decision made by artificial intelligence, with final oversight always being given to a human, not a machine, Romero Moreno said.

Copyright a major issue for AI apps

Another major regulatory issue that needs to be navigated is copyright. The EU’s AI Act includes a provision that would make creators of generative AI tools disclose any copyrighted material used to develop their systems.

“Copyright is everywhere, so when you have a gigantic amount of data somewhere on a server, and you’re going to use that data in order to train a model, chances are that at least some of that data will be protected by copyright,” Goossens said, adding that the most difficult issues to resolve will be around the training sets on which AI tools are developed.

When this problem first arose, lawmakers in countries including Japan, Taiwan, and Singapore made an exception for copyrighted material that found its way into training sets, stating that copyright should not stand in the way of technological advancements.

However, Goossens said, a lot of these copyright exceptions are now almost seven years old. The issue is further complicated by the fact that in the EU, while these same exceptions exist, anyone who is a rights holder can opt out of having their data used in training sets.

Currently, because there is no incentive to have your data included, huge swathes of people are opting out, meaning the EU is a less desirable jurisdiction for AI vendors to operate from.

In the UK, an exception currently exists for research purposes, but the plan to introduce an exception that includes commercial AI technologies was scrapped, with the government yet to announce an alternative plan.

What’s next for AI regulation?

So far, China is the only country that has passed laws and launched prosecutions relating to generative AI — in May, Chinese authorities detained a man in Northern China for allegedly using ChatGPT to write fake news articles.

Elsewhere, the UK government has said that regulators will issue practical guidance to organizations, setting out how to implement the principles outlined in its white paper over the next 12 months, while the EU Commission is expected to vote imminently to finalize the text of its AI Act.

By comparison, the US still appears to be in the fact-finding stages, although President Joe Biden and Vice President Kamala Harris recently met with executives from leading AI companies to discuss the potential dangers of AI.

Last month, two Senate committees also met with industry experts, including OpenAI CEO Sam Altman. Speaking to lawmakers, Altman said regulation would be “wise” because people need to know if they’re talking to an AI system or looking at content — images, videos, or documents — generated by a chatbot.

“I think we’ll also need rules and guidelines about what is expected in terms of disclosure from a company providing a model that could have these sorts of abilities we’re talking about,” Altman said.

This is a sentiment Forrester’s Bennett agrees with, arguing that the biggest danger generative AI presents to society is the ease with which misinformation and disinformation can be created.

“[This issue] goes hand in hand with ensuring that providers of these large language models and generative AI tools are abiding by existing rules around copyright, intellectual property, personal data, etc. and looking at how we make sure those rules are really enforced,” she said.

Romero Moreno argues that education holds the key to tackling the technology’s ability to create and spread disinformation, particularly among young people or those who are less technologically savvy. Pop-up notifications that remind users that content might not be accurate would encourage people to think more critically about how they engage with online content, he said, adding that something like the current cookie disclaimer messages that show up on web pages would not be suitable, as they are often long and convoluted and therefore rarely read.

Ultimately, Bennett said, irrespective of what final legislation looks like, regulators and governments across the world need to act now. Otherwise we’ll end up in a situation where the technology has been exploited to such an extreme that we’re fighting a battle we can never win.

https://www.computerworld.com/