
The Alternate View

Will Quantum Computing Improve AI?
by John G. Cramer

In recent years, two of the most transformative technologies in active development have been artificial intelligence (AI) and quantum computing. Both of these technologies have independently made great strides. Therefore, one is tempted to think that if the new deep learning AI technology could be implemented on a top-of-the-line quantum computer, a groundbreaking improvement in AI speed and power, and perhaps even self-awareness, might result.

In this column we will pursue that idea, but we find that it isn’t so easy. We’ll start by reviewing the present state of quantum computing and of artificial intelligence. (Here we will focus exclusively on “gated” quantum computing and on neural-network-based AI. We will not consider the alternative of D-Wave-type “adiabatic relaxation” quantum computing.)

A quantum computer is a device, first suggested in 1982 by Richard Feynman, that uses the manipulation of quantum states to process information and solve problems. Feynman himself was particularly interested in applying such a device to problems involving the quantum mechanics of interacting systems of particles, but later quantum computing enthusiasts seem to be more focused on database searches and cryptography.

A quantum computer is a completely new kind of data-processing device that is today emerging from the academic and industrial laboratories of the planet, ready or not, and is finding its initial applications in the real world. AI, meanwhile, is already in everyday use. For example, I subscribe to a financial predictor system that uses deep-learning AI to identify stocks that are likely to increase significantly in value in the coming year, and it seems to work.

A conventional computer has a memory in which bits of information are stored in definite binary states of 1 or 0 and has a processor that operates on this stored information. For example, the memory may contain two numbers stored in binary representation. A program specifying the locations of these numbers is run that calls for them to be multiplied together and the result stored in another location. With the execution of the program by the processor, the numbers are fetched, multiplied, and their product in binary representation stored in the memory.
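That conventional scheme is easy to sketch in a few lines of Python (the "memory" dictionary here is just an illustrative stand-in for addressed storage):

```python
# Two numbers stored in binary representation at named locations.
memory = {"a": 0b0110, "b": 0b0101}  # 6 and 5

# The "program": fetch both numbers, multiply, store the product.
memory["product"] = memory["a"] * memory["b"]

print(bin(memory["product"]))  # 0b11110, i.e. 30 in binary
```

Every value involved is in a definite binary state at every step, which is precisely what a quantum computer gives up.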

A quantum computer differs from this computing scheme in one important way: the definite-state memory units, the 0-or-1 bits in the conventional computer, are replaced by indefinite-state qubits (short for quantum bits). A qubit is an element of the quantum system that is in a quantum state that is a superposition of 1 and 0, and that remains indefinite until it “collapses” to a definite 1 or 0. Like other quantum states, two or more qubits may be entangled, so that a measurement performed on one member of an entangled system nonlocally affects the quantum states of the other members. The quantum states of an entangled system of qubits are interdependent and cannot be described separately.
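A qubit can be modeled as a normalized pair of complex amplitudes over the states 0 and 1. The following toy simulation (assuming NumPy is available; real qubits are physical quantum systems, not arrays) shows the Born rule: the squared amplitudes give the odds of collapsing to a definite 0 or 1.

```python
import numpy as np

# A qubit in an equal superposition of |0> and |1>.
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Born rule: |amplitude|^2 gives the probability of each measurement outcome.
probs = np.abs(qubit) ** 2
print(probs)  # [0.5 0.5]

# Measurement "collapses" the qubit to a definite 0 or 1 with those odds.
rng = np.random.default_rng(0)
outcome = rng.choice([0, 1], p=probs)
```

Until that measurement is made, the qubit carries both amplitudes at once.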

Of course, a quantum computer needs more than just qubits for its operation. Each qubit must be well enough insulated from the random scrambling effects of environmental noise (called decoherence) that the coherent state of the quantum system is preserved for at least long enough to set up a calculation, perform it, and read out the results. Because of this decoherence problem, a major demand of quantum computer architecture involves isolating the system from its environment and correcting errors that inevitably arise from decoherence.

A quantum computer must have the ability to initialize any qubit in a specified state and to measure the state of any specific qubit. It must have “universal quantum gates,” logical elements capable of arranging any desired logical relationship between the states of qubits. It must also have a processor capable of interlinking such quantum gates to establish rules and boundary conditions for their inter-relationships.

In a quantum computation, the arrangement of quantum gates is set by the programmer to connect the qubits in a logical pattern, according to a program or algorithm. After an interval, the qubits assigned to the final result are read out. If this is done properly, the quantum computer can be made to perform calculations in a way that is qualitatively different from calculations performed on a conventional computer. In fact, computations easily performed with a quantum computer in some cases would require a computation time with a conventional computer greater than the age of the Universe.
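As a minimal sketch of gates acting on qubits (again a toy statevector simulation with NumPy, not real hardware), a Hadamard gate followed by a CNOT turns two independent qubits into an entangled Bell state:

```python
import numpy as np

# Two standard gates as matrices: Hadamard (creates superposition)
# and CNOT (entangles a control qubit with a target qubit).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to the first qubit, then CNOT.
state = np.array([1, 0, 0, 0], dtype=float)   # |00>
state = CNOT @ (np.kron(H, I) @ state)        # (|00> + |11>)/sqrt(2)
print(np.round(state, 3))  # [0.707 0.    0.    0.707]
```

The resulting state cannot be written as two separate qubit states: measuring either qubit immediately fixes the outcome of the other, which is the entanglement described above.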

*   *   *

My first exposure to artificial intelligence occurred in the mid-1990s, when I was working on physics experiment NA49 at CERN. During my stay, I attended a presentation by CERN’s IT Group, at which they discussed using the neural network technology, a new and then-emerging form of artificial intelligence, to provide an improved “trigger” for large experiments like ours.

A key problem with such experiments is that they flood the experimenters with detailed data (track coordinates, counter hits, energy-dependent pulse heights, etc.) from far more events than can physically be recorded. Therefore, some "trigger," usually a large, fast electronic system of counters and gates, must select the most interesting events for recording from a stream that may contain a large fraction of uninteresting ones.

The IT Group presenter suggested using a three-layer neural network, with several thousand connected nodes that form input, hidden, and output layers, to do the triggering and event selection. This net-trigger would “learn” by being trained on a large volume of old events known to be good and then used to select new incoming events for recording. He suggested that subtle patterns in the data might be selected by such a trigger, patterns that would be too obscure for normal triggering to recognize.
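The structure the presenter described can be sketched as a forward pass through one hidden layer (a hypothetical toy in Python with NumPy; the layer sizes and random weights are illustrative, not CERN's actual design):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy three-layer net: input layer (detector readings), one hidden
# layer, and a single output node giving a "record this event?" score.
n_in, n_hidden = 64, 32
W1 = rng.normal(size=(n_hidden, n_in)) * 0.1  # input -> hidden weights
W2 = rng.normal(size=(1, n_hidden)) * 0.1     # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def trigger_score(event_features):
    hidden = np.tanh(W1 @ event_features)     # hidden-layer activations
    return sigmoid(W2 @ hidden)[0]            # score in (0, 1)

event = rng.normal(size=n_in)                 # one event's detector data
keep = trigger_score(event) > 0.5             # record only high-scoring events
```

Training would adjust the weights in W1 and W2 so that known-good events score high; the single hidden layer is the design point I questioned.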

I raised my hand and objected that a single hidden layer in such a neural net seemed very unlikely to be enough. The human brain, which they were attempting to emulate, must contain the equivalent of hundreds of hidden layers in its neural network structure. He answered that there was no evidence that more than one hidden layer was needed, and that they were using the same layer structure as other AI investigators.

Back at our experiment, we discussed attempting to trigger our data collection with a neural network and decided against it, on the basis that it would be costly, the trigger we were using worked well enough, and it would be a diversion of effort with an uncertain payoff.

That was almost thirty years ago. In the intervening time, the neural-network architecture has blossomed. The favored "deep learning" architecture does indeed use a large number of hidden layers and has become the leading AI technique. A typical "deep" neural net has a hundred or so hidden layers of nodes, each node having weighted connections to all of the nodes of the next-forward layer. For example, GPT-3, OpenAI's implementation introduced in June 2020, used a neural network structure with 96 hidden layers, alternating between "bottleneck" layers having 12,288 nodes and larger "feed-forward" layers having 4 × 12,288 nodes. Their newer GPT-4 AI system currently in operation has about ten times more elements than GPT-3 in its structure.
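As a back-of-envelope check on those figures, the standard transformer accounting (roughly 4d² attention weights plus 8d² feed-forward weights per layer, for layer width d, ignoring the smaller embedding tables) reproduces GPT-3's well-known parameter count:

```python
# Rough transformer weight count: attention (~4*d^2) plus the 4x-wide
# feed-forward block (~8*d^2), per layer, for 96 layers of width 12,288.
layers, d = 96, 12_288
params_per_layer = 4 * d**2 + 8 * d**2
total = layers * params_per_layer
print(f"{total / 1e9:.0f} billion")  # 174 billion
```

That lands within rounding of GPT-3's published figure of about 175 billion parameters, which gives a sense of the scale a quantum implementation would have to match.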

Such a network is “trained” with a very large number of inputs and expected responses, with the weight-parameters of the many internal connections adjusted as training progresses and the system “learns.” The result is a system that typically requires a large “farm” of processors and that, from verbal, text, or graphical inputs, can produce surprisingly human-like and reliable answers.
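The weight-adjustment step at the heart of that training can be shown at toy scale (a one-hidden-layer net fit to a single target by gradient descent, assuming NumPy; deep nets do the same thing with billions of weights and examples):

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=3)              # one input
target = 1.0                        # its expected response
W1 = rng.normal(size=(4, 3)) * 0.5  # input -> hidden weights
w2 = rng.normal(size=4) * 0.5       # hidden -> output weights
lr = 0.1                            # learning rate

for _ in range(100):
    h = np.tanh(W1 @ x)             # forward pass through the hidden layer
    y = w2 @ h                      # network's current answer
    err = y - target                # how wrong the current weights are
    # Backpropagation: nudge each weight against its share of the error.
    grad_w2 = err * h
    grad_W1 = np.outer(err * w2 * (1 - h**2), x)
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

# After training, the output has moved very close to the target.
```

Scaled up, this loop is what consumes the large processor farms mentioned above.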

*   *   *

So the question is: in the not-too-distant future, when many-qubit quantum computers with minimal decoherence become widely available, can we make better, faster, smarter AIs by implementing something like the deep-learning architecture with qubits instead of bits? Unfortunately, the answer seems to be that it will not be that easy and may not produce the anticipated AI payoff. First, the current AI architecture requires orders of magnitude more neural nodes than quantum computers are likely to supply in the foreseeable future.

Further, there is a basic conflict between the modes of data input and output in classical vs. quantum computing. Quantum computers work best when they are operating on an input of quantum information, i.e., information on the overall state of a quantum system. However, the problems that will need to be resolved by AI almost universally involve the classical information of everyday life, so there is a mismatch.

Further, the user of a quantum computer has access to the final results of a calculation only through what has been described as a “very thin straw.” In a typical quantum calculation, the system is initialized in a quantum state of qubits that have been interconnected by programmable quantum gates, and then it simultaneously explores many different paths, like exploring a maze, until a final optimum solution is achieved and the wave function describing the answer emerges as the final quantum state.

But reading out that final state is difficult. It involves destroying that “answer” by collapsing the final-state qubits to 1s and 0s in a selected way and reading those out. In the process, much of the information contained in the final state solution is lost.
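The "thin straw" can be made concrete with the same toy statevector model used above (NumPy assumed): two final states that differ only by a relative phase produce identical readout statistics, so that part of the "answer" never comes out.

```python
import numpy as np

rng = np.random.default_rng(2)

# A final state carrying a relative phase between |0> and |1>...
state = np.array([1, 1j], dtype=complex) / np.sqrt(2)

# ...but a standard-basis readout only samples 0s and 1s from |amplitude|^2.
probs = np.abs(state) ** 2
shots = rng.choice([0, 1], size=1000, p=probs)
print(probs)  # [0.5 0.5]

# A different state, (|0> - i|1>)/sqrt(2), yields the SAME statistics:
# the phase information is lost in the collapse.
other = np.array([1, -1j], dtype=complex) / np.sqrt(2)
assert np.allclose(np.abs(other) ** 2, probs)
```

Choosing a different measurement basis recovers some of the phase information, but only at the cost of other information, which is the point of the shadow analogy that follows.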

Here is an analogy. Suppose that you have a 3D object with a complicated shape that you wish to understand. You cannot look directly at the object or rotate it, but you can project its shadow on a wall. You can select the front wall, the side wall, or the ceiling for projecting the shadow, but you have to choose only one of these, not all three. As should be clear, a single shadow does not contain enough information to deduce the 3D shape of the object. That is the essence of the problem with reading out a quantum state. It is not at all clear that such a process is compatible with deep-learning artificial intelligence.

*   *   *

On the basis of the above considerations, it is my guess that AI and quantum computing will both continue to evolve and to improve. They will be applied to a growing range of important problems in weather forecasting, drug discovery, science, medicine, finance, and many other areas. They will have profound impacts on our everyday lives. However, at least in the foreseeable future they will not be joined into the new breakthrough technology of quantum artificial intelligence.

Over the years there have been suggestions by notables like Roger Penrose that quantum effects in the human brain could be responsible for intelligence and self-awareness. I'm inclined to doubt that, because of the strong decoherence produced by body-temperature thermal noise. So, in summary, the science-fiction-based vision that an AI + quantum computing synthesis will happen soon is, in my opinion, unfortunately a mirage.

*   *   *

References:

  1. M. Klusch, J. Lässig, D. Müssig, A. Macaluso, and F. K. Wilhelm, "Quantum Artificial Intelligence: A Brief Survey," arXiv:2408.10726v1 [quant-ph] (2024).
  2. D. Castelvecchi, "The AI-quantum computing mash-up: will it revolutionize science?," Nature, January 2, 2024.
  3. J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, "Quantum machine learning," Nature 549, 195-202 (2017); https://doi.org/10.1038/nature23474.
  4. J. Ngadiuba and M. Pierini, "Hunting anomalies with an AI trigger," CERN Courier, August 31, 2021.

 

John’s new third hard SF novel, Fermi’s Question, and its prequel, his second hard SF novel Einstein’s Bridge, are available as eBooks from Baen Books at: https://www.baen.com/einstein-s-bridge.html. His first hard SF novel Twistor is available online at:  https://www.amazon.com/Twistor-John-Cramer/dp/048680450X. John’s 2016 nonfiction book describing his transactional interpretation of quantum mechanics, The Quantum Handshake—Entanglement, Nonlocality, and Transactions, (Springer, January 2016) is available online as a hardcover or eBook at: https://www.amazon.com/dp/3319246402. Electronic reprints of “The Alternate View” are currently available online at:  http://www.npl.washington.edu/av.

Copyright © 2024 John G. Cramer
