Friday, 3 January 2025

What is quantum computing?

Quantum computing is an emerging field of computer science that harnesses the unique properties of quantum mechanics to solve problems beyond the reach of even the most powerful classical computers. 

The field of quantum computing spans a range of disciplines, including quantum hardware and quantum algorithms. While still in development, quantum technology may one day solve complex problems that supercomputers can’t solve, or can’t solve fast enough.

By taking advantage of quantum physics, fully realized quantum computers could process massively complicated problems orders of magnitude faster than modern machines. Challenges that might take a classical computer thousands of years to complete could be reduced to a matter of minutes.

Quantum mechanics, the study of matter and energy at the subatomic scale, reveals unique and fundamental natural principles. Quantum computers harness these phenomena to compute probabilistically.

Four key principles of quantum mechanics

Understanding quantum computing requires understanding these four key principles of quantum mechanics:
  • Superposition: Superposition is the state in which a quantum particle or system can represent not just one possibility, but a combination of multiple possibilities.
  • Entanglement: Entanglement is the process in which multiple quantum particles become correlated more strongly than classical probability allows.
  • Decoherence: Decoherence is the process in which quantum particles and systems can decay, collapse or change, converting into single states measurable by classical physics.
  • Interference: Interference is the phenomenon in which entangled quantum states can interact to make some measurement outcomes more likely and others less likely.
Qubits

While classical computers rely on binary bits (zeros and ones) to store and process data, quantum computers can encode even more data at once using quantum bits, or qubits, in superposition.

A qubit can behave like a bit and store either a zero or a one, but it can also be a weighted combination of zero and one at the same time. When combined, qubits in superposition can scale exponentially. Two qubits can compute with four pieces of information, three can compute with eight, and four can compute with sixteen.
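
The doubling described above is easy to see in a short sketch. This is plain Python, not quantum code: it simply counts the 2^n amplitudes needed to describe n qubits (the function name is illustrative).

```python
# Plain-Python sketch: the state of n qubits is described by 2**n complex
# amplitudes, so the information a register can hold doubles per added qubit.

def statevector_size(num_qubits: int) -> int:
    """Number of amplitudes needed to describe num_qubits qubits."""
    return 2 ** num_qubits

for n in range(1, 5):
    print(n, "qubit(s):", statevector_size(n), "amplitudes")
# 2 qubits give 4, 3 give 8, 4 give 16, matching the scaling described above.
```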

However, each qubit can only output a single bit of information at the end of the computation. Quantum algorithms work by storing and manipulating information in a way inaccessible to classical computers, which can provide speedups for certain problems.

As silicon chip and semiconductor development has scaled over the years, it is distinctly possible that we might soon reach a material limit on the computing power of classical computers. Quantum computing could provide a path forward for certain important problems.

With leading institutions such as IBM, Microsoft, Google and Amazon joining eager startups such as Rigetti and IonQ in investing heavily in this exciting new technology, quantum computing is estimated to become a USD 1.3 trillion industry by 2035.

What are qubits?

Generally, qubits are created by manipulating and measuring quantum particles (the smallest known building blocks of the physical universe), such as photons, electrons, trapped ions and atoms. Qubits can also be built from engineered systems that behave like a quantum particle, as in superconducting circuits.

To manipulate such particles reliably, qubits must be kept extremely cold to minimize noise and prevent the inaccurate results and errors caused by unintended decoherence.

There are many different types of qubits used in quantum computing today, with some better suited for different types of tasks.

Key principles of quantum computing

When discussing quantum computers, it is important to understand that quantum mechanics is not like traditional physics. The behaviors of quantum particles often appear to be bizarre, counterintuitive or even impossible. Yet the laws of quantum mechanics dictate the order of the natural world.

Describing the behaviors of quantum particles presents a unique challenge. Most common-sense paradigms for the natural world lack the vocabulary to communicate the surprising behaviors of quantum particles.

To understand quantum computing, it is important to understand a few key terms:

  • Superposition
  • Entanglement
  • Decoherence
  • Interference
Superposition

A qubit itself isn't very useful. But it can place the quantum information it holds into a state of superposition, which represents a combination of all possible configurations of the qubit. Groups of qubits in superposition can create complex, multidimensional computational spaces. Complex problems can be represented in new ways in these spaces.

This superposition of qubits gives quantum computers their inherent parallelism, allowing them to process many inputs simultaneously.

Entanglement

Entanglement is the ability of qubits to correlate their state with other qubits. Entangled systems are so intrinsically linked that when quantum processors measure a single entangled qubit, they can immediately determine information about other qubits in the entangled system.

When a quantum system is measured, its state collapses from a superposition of possibilities into a binary state, which can be registered like binary code as either a zero or a one.
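
This collapse can be illustrated with a toy model. The sketch below is plain Python rather than anything resembling real quantum hardware, and the amplitude values are made up for illustration: the squared amplitudes give the probability of each classical outcome (the Born rule), and repeated measurements converge on those probabilities.

```python
import random

# Toy model of measurement collapse. The amplitudes are illustrative; their
# squared magnitudes (the Born rule) must sum to 1.
amp0, amp1 = 0.8, 0.6                  # 0.8**2 + 0.6**2 = 0.64 + 0.36 = 1.0
p0, p1 = abs(amp0) ** 2, abs(amp1) ** 2

def measure() -> int:
    """Collapse the superposition to a single classical bit, 0 or 1."""
    return random.choices([0, 1], weights=[p0, p1])[0]

shots = [measure() for _ in range(10_000)]
print(shots.count(0) / len(shots))     # ~0.64 over many repetitions
```

Each individual measurement is random, but the statistics over many shots reflect the underlying amplitudes, which is why quantum programs are typically run many times.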

Decoherence

Decoherence is the process in which a system in a quantum state collapses into a nonquantum state. It can be triggered intentionally, by measuring a quantum system, or unintentionally, by environmental factors. Decoherence is what allows quantum computers to provide measurements and interact with classical computers.

Interference

An environment of entangled qubits placed into a state of collective superposition structures information in a way that looks like waves, with amplitudes associated with each outcome. These amplitudes become the probabilities of the outcomes of a measurement of the system. These waves can build on each other when many of them peak at a particular outcome, or cancel each other out when peaks and troughs interact. Amplifying a probability or canceling out others are both forms of interference.
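
Interference can be seen even with a single qubit. The pure-Python toy below (a sketch, not real quantum code) models a qubit as a pair of amplitudes and applies the Hadamard gate twice: the first application splits the state into two "paths," and the second makes them recombine, with the contributions to |1⟩ cancelling and those to |0⟩ adding.

```python
import math

# Toy interference: a qubit as a pair of amplitudes (for |0> and |1>).
s = 1 / math.sqrt(2)

def hadamard(state):
    """Apply the Hadamard gate to a (a0, a1) amplitude pair."""
    a0, a1 = state
    return (s * (a0 + a1), s * (a0 - a1))

state = (1.0, 0.0)          # start in a definite |0>
state = hadamard(state)     # equal superposition: (s, s) -- two "paths"
state = hadamard(state)     # the |1> contributions cancel (destructive),
                            # the |0> contributions add (constructive)
print(abs(state[0]) ** 2, abs(state[1]) ** 2)   # 1.0 and 0.0, up to rounding
```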

How the principles work together

To better understand quantum computing, consider that two counterintuitive ideas can both be true. The first is that objects that can be measured—qubits in superposition with defined probability amplitudes—behave randomly. The second is that objects too distant to influence each other—entangled qubits—can still behave in ways that, though individually random, are somehow strongly correlated.

A computation on a quantum computer works by preparing a superposition of computational states. A quantum circuit, prepared by the user, uses operations to generate entanglement, leading to interference between these different states, as governed by an algorithm. Many possible outcomes are cancelled out through interference, while others are amplified. The amplified outcomes are the solutions to the computation.
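
The prepare/interfere/amplify workflow above can be sketched for a tiny four-state search, in the spirit of Grover-style amplitude amplification. This is a classical toy, not a real quantum implementation, and the marked index is arbitrary, chosen only for illustration.

```python
import math

N = 4          # four basis states, i.e. a two-qubit register
marked = 2     # hypothetical "solution" index, chosen for illustration

# 1. Prepare a uniform superposition over all N states.
state = [1 / math.sqrt(N)] * N

# 2. Oracle: flip the sign of the marked state's amplitude.
state[marked] = -state[marked]

# 3. Interference step: invert every amplitude about the mean. The unmarked
#    amplitudes cancel out, while the marked amplitude is amplified.
mean = sum(state) / N
state = [2 * mean - a for a in state]

probs = [round(abs(a) ** 2, 3) for a in state]
print(probs)   # all the probability lands on the marked outcome
```

For N = 4 a single iteration suffices; larger search spaces need roughly √N repetitions of the oracle-plus-interference step.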

Classical computing versus quantum computing

Quantum computing is built on the principles of quantum mechanics, which describe how subatomic particles behave differently from objects at the macro level. But because quantum mechanics provides the foundational laws for our entire universe, on a subatomic level every system is a quantum system.

For this reason, we can say that while conventional computers are also built on top of quantum systems, they fail to take full advantage of the quantum mechanical properties during their calculations. Quantum computers take better advantage of quantum mechanics to conduct calculations that even high-performance computers cannot.

What is a classical computer?

From antiquated punch-card adders to modern supercomputers, traditional (or classical) computers essentially function in the same way. These machines generally perform calculations sequentially, storing data by using binary bits of information. Each bit represents either a 0 or 1.

When combined into binary code and manipulated by using logic operations, we can use computers to create everything from simple operating systems to the most advanced supercomputing calculations.

What is a quantum computer?

Quantum computers function similarly to classical computers, but instead of bits, quantum computing uses qubits. Qubits are engineered systems, built from subatomic particles, atoms, superconducting electric circuits or other quantum systems, that store data in a set of amplitudes applied to both 0 and 1, rather than in just one of two states (0 or 1). This quantum mechanical concept is called superposition. Through a process called quantum entanglement, those amplitudes can apply to multiple qubits simultaneously.

The difference between quantum and classical computing

Classical computing
  • Used by common, multipurpose computers and devices.
  • Stores information in bits with a discrete number of possible states, 0 or 1.
  • Processes data logically and sequentially.

Quantum computing
  • Used by specialized, experimental hardware based on quantum mechanics.
  • Stores information in qubits as 0, 1 or a superposition of 0 and 1.
  • Processes data with quantum logic at parallel instances, relying on interference.

Quantum processors do not perform calculations the same way classical computers do. Unlike classical computers, which must compute every step of a complicated calculation sequentially, quantum circuits built from logical qubits can work across enormous solution spaces simultaneously, improving efficiency by many orders of magnitude for certain problems.

Quantum computers have this capability because they are probabilistic, finding the most likely solution to a problem, while traditional computers are deterministic, requiring laborious computations to determine a specific singular outcome of any inputs.

While traditional computers commonly provide singular answers, probabilistic quantum machines typically provide ranges of possible answers. This range might make quantum seem less precise than traditional computation; however, for the kinds of incredibly complex problems quantum computers might one day solve, this way of computing could potentially save hundreds of thousands of years of traditional computations.

While fully realized quantum computers would be far superior to classical computers for certain kinds of problems, such as those involving large datasets or tasks like prime factorization, quantum computing is not ideal for every, or even most, problems.

More at: https://www.ibm.com/

IBM announces 50-fold quantum speed improvement

IBM launched its most advanced quantum computer yet last week at its inaugural quantum developer conference. It features nearly twice the gates of last year’s quantum utility demonstration – and a 50-fold speed increase.

Last year, in a paper published in Nature, IBM announced a breakthrough demonstration of quantum computing that can produce accurate results beyond those of classical computers. IBM calls this “utility scale.”

“We’re specifically referring to how quantum computers can now serve as scientific tools to explore new classes of problems in chemistry, physics, materials, and other fields that are beyond the reach of brute-force classical computing techniques,” says Tushar Mittal, head of product for quantum services at IBM. “Put simply, quantum computers are now better at running quantum circuits than a classical supercomputer is at exactly simulating them.”

That computer, Eagle, had a total of 127 superconducting qubits and 2,880 two-qubit gates, and took 112 hours to complete the quantum utility experiment. Today, IBM’s newest quantum chip, the 156-qubit Heron quantum processor, can handle circuits of up to 5,000 gates, and the same experiment was completed in 2.2 hours.

“This circuit itself is mainly used for benchmarking right now, but could be used for calculating expectation values for materials science problems,” Mittal says.

And there’s another improvement. Last year’s experiment used custom circuits and software. Now, IBM customers can run the same experiments using IBM’s quantum computing software development kit, Qiskit.

Up until now, the users were computational scientists exploring how these quantum circuits can be used for specific scientific domains, Mittal says. That’s starting to change. “At the 5,000-gate operations scale, we are also starting to see the emergence of quantum working in line with classical computing to calculate the properties of systems that are relevant to chemistry,” he says.

Today, researchers, scientists, and quantum developers are beginning to leverage quantum computing to help solve complex problems. For example, Cleveland Clinic is exploring this technology to simulate molecular bonds, which is key to solving pharmaceutical problems.

“We are pushing through traditional scientific boundaries using cutting-edge technology such as Qiskit to advance research and find new treatments for patients around the globe,” says Lara Jehi, chief research information officer at Cleveland Clinic, in a statement.

“The work with Cleveland Clinic is already beginning to yield results,” says Mittal. The secret sauce, he says, is that the Cleveland Clinic combined classical and quantum computing in one workflow, which produced results not possible with quantum alone.

“Enterprises can use our utility-scale systems now,” he says. “However, our ultimate goal is that developers now use these existing quantum computers to search for heuristic quantum advantages, much like the early days of GPUs being employed to find speedups in high-performance computing.”

But quantum advantage – where quantum computers are cheaper, faster, or more accurate than traditional computers – is still a few years away, he says.

IBM also demonstrated its generative AI-powered Qiskit Code Assistant, first announced a month ago, which is now in private preview. The assistant, which is built on top of IBM’s Granite gen AI models, helps users build quantum circuits, or migrate old quantum code to the latest version of Qiskit.

The latest announcement is important because it couples progress on the hardware side with that of the software, says Heather West, research manager in the infrastructure systems, platforms, and technology group at IDC.

“Not only has IBM introduced a method for efficiently scaling their systems in a modular fashion, they are also introducing the software that is needed to help optimize the circuits that will run on the hardware,” she says.

But we’re not at the end goal yet, she adds. “Like all other quantum hardware vendors, IBM is still trying to solve the error correction issues that plague the systems,” she says.

These issues are preventing quantum computers from being able to solve some of the most complex problems. “Once this issue is resolved, enterprises will be able to use the technology for more than just small scale experimentation,” she says.

https://www.networkworld.com

Saturday, 4 May 2024

ChatGPT’s AI ‘memory’ can remember the preferences of paying customers

In February, OpenAI announced the Memory feature, which allows ChatGPT to store queries, prompts, and other customizations more permanently. At the time, it was only available to a “small portion” of users, but now it’s available for paying ChatGPT Plus subscribers outside of Europe or Korea.

ChatGPT’s Memory works in two ways to make the chatbot’s responses more personalized. The first is by letting you tell ChatGPT to remember certain details, and the second is by learning from your conversations, much as recommendation algorithms in other apps do. Memory brings ChatGPT closer to being a better AI assistant: once it remembers your preferences, it can apply them without needing a reminder.

As The Verge’s David Pierce pointed out, some users can find it creepy if chatbots know them in this way, so OpenAI has said users will always have control over what ChatGPT retains (and what is used for additional training).

OpenAI writes in a blog post that in a change from the earlier test, now ChatGPT will tell users when memories are updated. Users can manage what ChatGPT remembers by reviewing what the chatbot took from conversations and even making ChatGPT “forget” unwanted details.

OpenAI’s examples of uses for Memory include:

  • You’ve explained that you prefer meeting notes to have headlines, bullets and action items summarized at the bottom. ChatGPT remembers this and recaps meetings this way.
  • You’ve told ChatGPT you own a neighborhood coffee shop. When brainstorming messaging for a social post celebrating a new location, ChatGPT knows where to start.
  • You mention that you have a toddler and that she loves jellyfish. When you ask ChatGPT to help create her birthday card, it suggests a jellyfish wearing a party hat.
  • As a kindergarten teacher with 25 students, you prefer 50-minute lessons with follow-up activities. ChatGPT remembers this when helping you create lesson plans.

ChatGPT could always recall details during active conversations; for example, if you ask the chatbot to draft an email, you can immediately follow it up with “make it more professional,” and it will remember that you were talking about email. But if you wait long enough or start a new conversation, that all goes out the window. 

OpenAI did not say why Memory will not be available in Europe or Korea. The company says Memory will roll out to subscribers to ChatGPT Enterprise and Teams, as well as custom GPTs on the GPT Store, but did not specify when. 

https://www.theverge.com/

Wednesday, 13 December 2023

Google's Gemini continues the dangerous obfuscation of AI technology

Until this year, it was possible to learn a lot about artificial intelligence technology simply by reading research documentation published by Google and other AI leaders with each new program they released. Open disclosure was the norm for the AI world. 

All that changed in March of this year, when OpenAI elected to announce its latest program, GPT-4, with very little technical detail. The research paper provided by the company obscured just about every important detail of GPT-4 that would allow researchers to understand its structure and to attempt to replicate its effects. 

Last week, Google continued that new obfuscation approach, announcing the formal release of its newest generative AI program, Gemini, developed in conjunction with its DeepMind unit, which was first unveiled in May. The Google and DeepMind researchers offered a blog post devoid of technical specifications, and an accompanying technical report almost completely devoid of any relevant technical details. 

Much of the blog post and the technical report cite a raft of benchmark scores, with Google boasting of beating out OpenAI's GPT-4 on most measures and beating Google's former top neural network, PaLM. 

Neither the blog nor the technical paper includes key details customary in years past, such as how many neural net "parameters," or "weights," the program has, a key aspect of its design and function. Instead, Google refers to three versions of Gemini in three different sizes: "Ultra," "Pro," and "Nano." The paper does disclose that Nano is trained with two different weight counts, 1.8 billion and 3.25 billion, while failing to disclose the weights of the other two sizes. 

Numerous other technical details are absent, just as with the GPT-4 technical paper from OpenAI. In the absence of technical details, online debate has focused on whether the boasting of benchmarks means anything. 

OpenAI researcher Rowan Zellers wrote on X (formerly Twitter) that Gemini is "super impressive," and added, "I also don't have a good sense on how much to trust the dozen or so text benchmarks that all the LLM papers report on these days." 

Tech news site TechCrunch's Kyle Wiggers reports anecdotes of poor performance by Google's Bard chatbot, now enhanced by Gemini. He cites posts on X by people asking Bard questions such as movie trivia or vocabulary suggestions and reporting the failures. 

The sudden swing to secrecy by Google and OpenAI is becoming a major ethical issue for the tech industry because no one knows, outside the vendors -- OpenAI and its partner Microsoft, or, in this case, Google's Google Cloud unit -- what is going on in the black box in their computing clouds. 

Google's lack of disclosure, while not surprising given its commercial battle with OpenAI, and partner Microsoft, for market share, is made more striking by one very large omission: model cards. 

Model cards are a form of standard disclosure used in AI to report on the details of neural networks, including potential harms of the program (hate speech, etc.) While the GPT-4 report from OpenAI omitted most details, it at least made a nod to model cards with a "GPT-4 System Card" section in the paper, which it said was inspired by model cards.

Google doesn't even go that far, omitting anything resembling model cards. The omission is particularly strange given that model cards were invented at Google by a team that included Margaret Mitchell, formerly co-lead of Ethical AI at Google, and former co-lead Timnit Gebru. 

Instead of model cards, the report offers a brief, rather bizarre passage about the deployment of the program with vague language about having model cards at some point.

If Google puts question marks next to model cards in its own technical disclosure, one has to wonder what the future of oversight and safety is for neural networks.

https://www.zdnet.com/

Six of the most popular Android password managers are leaking data

Several mobile password managers are leaking user credentials due to a vulnerability discovered in the autofill functionality of Android apps. 

The credential-stealing flaw, dubbed AutoSpill, was reported by a team of researchers from the International Institute of Information Technology Hyderabad at last week's Black Hat Europe 2023 conference.

The vulnerability comes into play when Android calls a login page via WebView. (WebView is an Android component that makes it possible to view web content without opening a web browser.) When that happens, WebView allows Android apps to display the content of the web page in question. 

That's all fine and good -- unless a password manager is added to the mix: the credentials shared with WebView can also be shared with the app that originally called for the username and password. If the originating app is trusted, everything should be OK. If that app isn't trusted, things could go very wrong.

The affected password managers are 1Password, LastPass, Enpass, Keeper, and Keepass2Android. If the credentials were shared via a JavaScript injection method, DashLane and Google Smart Lock are also affected by the vulnerability.

Because of the nature of this vulnerability, neither phishing nor malicious in-app code is required.

One thing to keep in mind is that the researchers tested this on less-than-current hardware and software.

Specifically, they tested on these three devices: Poco F1, Samsung Galaxy Tab S6 Lite, and Samsung Galaxy A52. The versions of Android used in their testing were Android 10 (with the December 2020 security patch), Android 11 (with the January 2022 security patch), and Android 12 (with the April 2022 security patch). 

As these tested devices -- as well as the OS and security patches -- were out of date, it's hard to know with any certainty whether the vulnerability would affect newer versions of Android. 

However, even if you are using a device other than those the group tested, this vulnerability shouldn't be shrugged off. Rather, it should serve as a reminder to always keep your Android OS and installed apps up to date. The WebView component has long been under scrutiny, and it should always be kept current. To check, open the Google Play Store on your device, search for WebView, tap About this app, and compare the latest version with the version installed on your device. If they are not the same, you'll want to update.

One of your best means of keeping Android secure is to make sure it is always as up-to-date as possible. Check daily for OS and app updates and apply all that are available.

https://www.zdnet.com/

Monday, 25 September 2023

Smartphone Showdown: 15 Years of Android vs. iPhone

"I'm going to destroy Android, because it's a stolen product," Steve Jobs says in author Walter Isaacson's 2011 biography of the late Apple co-founder.

Jobs' fury around Google and its smartphone software is well documented, and the many lawsuits involving Apple and various Android partners showed that Jobs was serious about his allegations of theft. But the reality is that both Apple and Google have taken inspiration from each other for years and that neither company would be where it is today without the work of the other.

So as Android celebrates its 15th birthday (since the launch of the first Android-based phone, the T-Mobile G1), let's take a look back at the journey the companies have taken to becoming the most dominant forces in the tech world -- and how their competition pushed them to innovate. 

Smartphones have arguably changed the world more than any other invention in human history, from radically altering how we interact with one another to creating a whole new category of companies that deal in various mobile technologies. And though Jobs may have been outspokenly vitriolic about Android in the early days, it's clear that ideas and inspiration have echoed back and forth between Apple and Google in the years since.

The last 15 years of competition between the two companies have often felt like siblings bickering at playtime, falling out over who had which toy first or crying to the parents when the other one took something that wasn't theirs. Most siblings will argue to some extent throughout their lives, but history is also rife with pairings that, through spirited competition, pushed each sibling to succeed. 

The two companies' volleying back and forth pushed them ahead in the game, and allowed them to fight off other challengers, like the once-dominant BlackBerry, as well as Nokia and its short-lived Symbian platform. Even tech giant Microsoft and its Windows Phone failed to thrive in the face of the heated competition from Apple and Google.

But though the relationship today between the iPhone maker and the Android purveyor hardly matches the Williams sisters' friendly, familial rivalry, that wasn't always the case. Let's take a look back.

Beginnings

Android began as its own company (Android Inc.) back in 2003, and it wasn't acquired by Google until 2005. Meanwhile, Apple had already found success with mobile products in the form of the iPod; the iPhone began development in secret in 2004, and Jobs was reportedly approached to become Google's CEO. 

Jobs didn't take the role, but Google found a CEO in Eric Schmidt, who in 2006 became part of Apple's board of directors. "There was so much overlap that it was almost as if Apple and Google were a single company," journalist Steven Levy wrote in his 2011 book In the Plex: How Google Thinks, Works, and Shapes Our Lives. Things didn't stay as cozy, however. 

In January 2007 Apple unveiled the first iPhone, and in November 2007 Google showed off two prototypes. One, a Blackberry-esque phone that made use of hardware buttons and scroll wheels, had been in the prototype phase for some time. The more recent prototype was dominated by a large touchscreen and appeared to be much more like the iPhone.

That didn't go down well with Jobs, who threatened the destruction of Android using "every penny of Apple's $40 billion in the bank." The first Android phone, the T-Mobile G1, combined elements of both those prototypes, with a touchscreen that slid out to reveal a physical keyboard. Schmidt left Apple's board of directors in 2009 due to potential conflicts of interest, and so began a series of lawsuits involving Apple and various Google partners over alleged infringement of phone-related patents. 

The most notable of the Google partners was Samsung, which Apple accused of infringing a number of patents, including patents related to basic functions like tap to zoom and slide to unlock. These legal battles raged for years, with Apple claiming that "it is a fact that Samsung blatantly copied our design" and Samsung pushing back. The long dispute finally came to an end in 2018, when both sides agreed to settle out of court.

Despite the competing claims made during those long courtroom struggles, if we look at the development not just of the software but of the phones that run it, it seems clear both sides continued to liberally borrow ideas from each other. 

Features like picture-in-picture, live voicemail, lock screen customization and live translation were all found on the Android operating system before eventually making their way to iOS. And though the use of widgets to customize your home screen was long held as a differentiator for Android, that feature too eventually found its way to iOS. 

On the other hand, Android's Nearby Share feature is remarkably similar to Apple's AirDrop, and Android phones didn't get features like "do not disturb" or the ability to take screenshots until some time after the iPhone had them. 

Apple removed the 3.5mm headphone jack from the iPhone in September 2016, and I distinctly remember that at Google's launch event for the Pixel the following month, chuckles went round the room when the exec on stage proclaimed, "Yes, it has a headphone jack." Still, Google itself went on to ditch the headphone jack, with the Pixel 2. 

Sometimes it's difficult, if not impossible, to say whether these companies are copying each other's ideas or simply coming up with the same conclusions after paying attention to consumer trends, rumors in the press and the general evolution of supporting technologies. 

Rumors that Apple would remove the physical home button on the iPhone X were circling long before the phone was officially unveiled in September 2017. Are they the same rumors Samsung responded to when it "beat Apple to the punch" and removed the home button from its Galaxy S8 earlier that same year? Or did both sides simply arrive at such a big design decision independently? 

It's impossible to pick a side in this argument -- and somewhat reductive to even try. And regardless, you wind up with the same thing: Phones and software from different manufacturers that seem to evolve in unison. 

Today

In 2023, Android is by far the dominant smartphone platform, with 70.8% market share globally against Apple's 28.4% (according to information from Statista). But Google's focus has always been on getting the Android operating system onto as many devices as possible, from phones costing less than $50 to those costing over $1,500. Apple, meanwhile, offers iOS only on its own devices, and those devices come at a hefty premium, so it's fair to expect that iOS won't be as widespread. 

Google's business model is primarily one of a service provider, though, and not a hardware manufacturer. It makes its money chiefly from selling advertisements across all its platforms, and so it typically benefits from a mass market approach. Android itself is free for companies to use -- hence the large number of installs. But to use Google services (Gmail, YouTube, Chrome and so on, along with access to the Google Play Store) companies must pay license fees to Google. Still, the free use of Android is why you'll find the operating system on phones from Samsung, Motorola, OnePlus, Oppo, Nothing and a huge variety of other brands -- and yes, on Google's own Pixel phones. 

Apple, however, is a closed shop. Only iPhones can run iOS, and Apple has every intention of keeping it that way. It has full control over how that software works on its phones (and charges developers accordingly for apps sold in its own App Store) and how it can be best optimized for the hardware. That's why Apple phones typically perform better than many high-end Android phones, despite the hardware often being less high-spec on paper. Android by its nature has to take more of a "one size fits all" approach, where each new version has to run well on a huge variety of devices, with different screen sizes and under-the-hood components. 

Android struggled with the arrival of tablets, as software designed for 4-inch phones suddenly had to stretch to fit much larger screens. Android 3.0 Honeycomb was designed primarily for tablets, but various issues meant it didn't hang around for long, and some of its features were simply absorbed into future versions. Apple takes a different approach: Though at first it used iOS for both devices, it now keeps iOS solely for its phones, optimizing for the smaller screen sizes, with the newer iPadOS as the software for its tablets. 

Yet it's still clear to see the ways the two operating systems have converged over the years. Though Android was always the more customizable of the two, Apple eventually introduced home-screen widgets, customizable lock screens and even the ability to create icon themes to transform the look of your device. 

Meanwhile, Google worked hard to limit the problems caused by fragmentation and has arguably taken more of an "Apple" approach in its own line of devices. Like Apple's iPhones, the phones in the more recent Pixel range -- including the excellent Pixel 7 Pro -- were designed to show off "the best of Google," with processors produced in house (as Apple does with the chips for its iPhones) and software optimized for the Pixel phone it'll run on. 

Though Android may be ahead in terms of numbers of users, Google has clearly seen that Apple is leading the way in terms of a more premium, refined hardware experience, and the Pixel series is Google's answer. Having reviewed both the Pixel 6 Pro and Pixel 7 Pro myself, I can say with certainty that they're the most Apple-like experience you can get from an Android phone. 

The future 

"We are at an interesting crossroads for Android," says Ben Woods, industry analyst at CCS Insight. "Although its success in volume terms is undisputed, it is increasingly losing share to Apple in the premium smartphone space." Google's Pixel phones are some of the best Android phones around, but sales of the devices are a fraction of what Apple sees with the iPhone. 

It's a different story when you look at Android partners, chiefly Samsung, which is jostling with Apple for the position of No. 1 phone manufacturer in the world -- a title that frequently slips from one company to the other. But Samsung has a much wider catalog of products, with unit sales bolstered by a larger number of phones at lower price points. In the premium segment, Apple still rules, and that's showing no sign of slowing down. 

But Android is increasingly betting on longer-term success from its innovation with foldable phones. Samsung is now multiple generations into its Galaxy Z Flip and Z Fold devices, with Google's own Pixel Fold joining the party earlier this year, along with foldables from the likes of Oppo, Motorola and soon OnePlus. Apple has yet to launch a foldable device, and it remains to be seen whether that's simply because its take on the genre isn't ready, or because it believes foldables are a fad that'll pass (like 3D displays or curving designs). 

Rather than looking toward more-experimental innovations like foldable displays, Apple has instead continued to refine its existing hardware, equipping its latest iPhone 15 Pro series with titanium designs and improved cameras. And Apple's approach also includes pulling people into the wider Apple ecosystem, with iPhones syncing seamlessly with other Apple products, including Apple Watches, iPads, Macs, HomePods and Apple TV. 

With each new iPhone customer comes an opportunity for Apple to sell additional products from its own catalog, along with services like iCloud storage, Apple Music, Apple Fitness or subscriptions to its Apple TV streaming service. Though Google offers products like this to some extent, it has yet to offer the sort of cohesive package Apple does, which could make Google's offerings less enticing for new customers and tempt Android users to jump ship to Apple. 

Still, Android's proliferation across devices at lower price points will continue to make it a popular choice for people on tighter budgets. And its presence on a huge number of devices from third-party manufacturers means it's where we'll see more innovation that seeks to answer the question of what role the smartphone plays in our lives. 

With smartphone shipments expected to hit their lowest point in a decade, more companies will be looking for ways to use new, exciting technologies to capture an audience's attention and present a product that serves up new ways of doing things. We'll see this from Android and its partners and from Apple with the iPhone, its software and its peripheral devices, including new tech like Apple's Vision Pro headset. 

We'll also see a bigger focus from all sides on sustainability: Apple, for instance, went to great lengths during its iPhone 15 launch event in September to flex its green credentials. While Samsung is making larger efforts in sustainability and smaller companies like Fairphone are using planet-friendly features as primary selling points, other manufacturers have yet to make sustainability a key part of their business model. It's likely, then, that as consumers increasingly look toward sustainable options, the next major competition in the smartphone industry could be who can make the greenest product.

There's no question that the development of both the software and hardware side of iOS and Android smartphones has at times happened almost in tandem, with one side launching a feature and the other responding in "me too!" fashion. And like the Williams sisters using their sporting rivalry to reach stratospheric new heights in tennis, Apple and Android will need to continue to embrace that spirit of competition to find new ways to succeed in an increasingly difficult market.

https://www.cnet.com/

Monday, 31 July 2023

Cryptography may offer a solution to the massive AI-labeling problem

The White House wants big AI companies to disclose when content has been created using artificial intelligence, and very soon the EU will require some tech platforms to label their AI-generated images, audio, and video with “prominent markings” disclosing their synthetic origins. 

There’s a big problem, though: identifying material that was created by artificial intelligence is a massive technical challenge. The best options currently available—detection tools powered by AI, and watermarking—are inconsistent, impermanent, and sometimes inaccurate. (In fact, just this week OpenAI shuttered its own AI-detecting tool because of high error rates.)

But another approach has been attracting attention lately: C2PA. Launched two years ago, it’s an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as “provenance” information. 

The developers of C2PA often compare the protocol to a nutrition label, but one that says where the content came from and who—or what—created it. 

The project, part of the nonprofit Joint Development Foundation, was started by Adobe, Arm, Intel, Microsoft, and Truepic, which formed the Coalition for Content Provenance and Authenticity (from which C2PA gets its name). Over 1,500 companies are now involved in the project through the closely affiliated open-source community, Content Authenticity Initiative (CAI), including ones as varied and prominent as Nikon, the BBC, and Sony.

Recently, as interest in AI detection and regulation has intensified, the project has been gaining steam; Andrew Jenks, the chair of C2PA, says that membership has increased 56% in the past six months. The major media platform Shutterstock has joined as a member and announced its intention to use the protocol to label all its AI-generated content, including its DALL-E-powered AI image generator. 

Sejal Amin, chief technology officer at Shutterstock, told MIT Technology Review in an email that the company is protecting artists and users by “supporting the development of systems and infrastructure that create greater transparency to easily identify what is an artist’s creation versus AI-generated or modified art.”

What is C2PA and how is it being used?

Microsoft, Intel, Adobe, and other major tech companies started working on C2PA in February 2021, hoping to create a universal internet protocol that would allow content creators to opt into labeling their visual and audio content with information about where it came from. (At least for the moment, this does not apply to text-based posts.) 

Crucially, the project is designed to be adaptable and functional across the internet, and the base computer code is accessible and free to anyone. 

Truepic, which sells content verification products, has demonstrated how the protocol works with a deepfake video with Revel.ai. When a viewer hovers over a little icon at the top right corner of the screen, a box of information about the video appears that includes the disclosure that it “contains AI-generated content.” 

Adobe has also already integrated C2PA, which it calls content credentials, into several of its products, including Photoshop and Adobe Firefly. “We think it’s a value-add that may attract more customers to Adobe tools,” Andy Parsons, senior director of the Content Authenticity Initiative at Adobe and a leader of the C2PA project, says. 

C2PA is secured through cryptography, which relies on a series of codes and keys to protect information from being tampered with and to record where the information came from. More specifically, it works by encoding provenance information through a set of hashes that cryptographically bind to each pixel, says Jenks, who also leads Microsoft’s work on C2PA. 
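The core idea described above -- hashing the content and cryptographically signing that hash together with the provenance claims -- can be illustrated with a small sketch. This is not the actual C2PA manifest format: real C2PA uses certificate-based digital signatures and a binary (JUMBF/CBOR) structure, whereas this toy version stands in an HMAC with a hypothetical `SECRET_KEY` and a JSON payload purely to show why tampering with either the content or the claims invalidates the label.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration; C2PA actually uses
# X.509 certificate-based signatures, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def make_manifest(content: bytes, claims: dict) -> dict:
    """Bind provenance claims to content: hash the content bytes,
    then sign the claims and the hash together."""
    content_hash = hashlib.sha256(content).hexdigest()
    payload = json.dumps(
        {"claims": claims, "content_sha256": content_hash},
        sort_keys=True,
    ).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "content_sha256": content_hash,
            "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Any change to the content bytes or the claims breaks verification."""
    if hashlib.sha256(content).hexdigest() != manifest["content_sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(
        {"claims": manifest["claims"],
         "content_sha256": manifest["content_sha256"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"\x89PNG...raw image bytes..."
manifest = make_manifest(image, {"generator": "AI image model",
                                 "disclosure": "contains AI-generated content"})
assert verify_manifest(image, manifest)            # untampered: passes
assert not verify_manifest(image + b"\x00", manifest)  # edited content: fails
```

The point of the sketch is the binding: a viewer (like the hover-over icon in the Truepic demo) can check that the claims really belong to these exact bytes, rather than trusting a free-floating label.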

C2PA offers some critical benefits over AI detection systems, which use AI to spot AI-generated content and can in turn learn to get better at evading detection. It’s also a more standardized and, in some instances, more easily viewable system than watermarking, the other prominent technique used to identify AI-generated content. The protocol can work alongside watermarking and AI detection tools as well, says Jenks. 

The value of provenance information 

Adding provenance information to media to combat misinformation is not a new idea, and early research seems to show that it could be promising: one project from a master’s student at the University of Oxford, for example, found evidence that users were less susceptible to misinformation when they had access to provenance information about the content. Indeed, in OpenAI’s update about its AI detection tool, the company said it was focusing on other “provenance techniques” to meet disclosure requirements.

That said, provenance information is far from a fix-all solution. C2PA is not legally binding, and without required internet-wide adoption of the standard, unlabeled AI-generated content will exist, says Siwei Lyu, a director of the Center for Information Integrity and professor at the University at Buffalo in New York. “The lack of over-board binding power makes intrinsic loopholes in this effort,” he says, though he emphasizes that the project is nevertheless important.

What’s more, since C2PA relies on creators to opt in, the protocol doesn’t really address the problem of bad actors using AI-generated content. And it’s not yet clear just how helpful the provision of metadata will be when it comes to the media fluency of the public. Provenance labels do not necessarily mention whether the content is true or accurate. 

Ultimately, the coalition’s most significant challenge may be encouraging widespread adoption across the internet ecosystem, especially by social media platforms. The protocol is designed so that a photo, for example, would have provenance information encoded from the time a camera captured it to when it found its way onto social media. But if the social media platform doesn’t use the protocol, it won’t display the photo’s provenance data.

The major social media platforms have not yet adopted C2PA. Twitter had signed on to the project but dropped out after Elon Musk took over. (Twitter also stopped participating in other volunteer-based projects focused on curbing misinformation.)  

C2PA “[is] not a panacea, it doesn’t solve all of our misinformation problems, but it does put a foundation in place for a shared objective reality,” says Parsons. “Just like the nutrition label metaphor, you don’t have to look at the nutrition label before you buy the sugary cereal.

“And you don’t have to know where something came from before you share it on Meta, but you can. We think the ability to do that is critical given the astonishing abilities of generative media.”

https://www.technologyreview.com/