
Cyber Basics: Quantum Computing

It must’ve sounded like something from Star Trek when D-Wave Systems unveiled what it billed as the first commercial quantum computer in 2007. The name “quantum computer” probably seems like science fiction. How can a computer be “quantum”? And what does it mean to be “quantum,” anyway? Why would I ever give up my normal computer for a quantum computer? Quantum computers are progressing rapidly, and for certain kinds of problems they may soon outpace our normal computers.

 

Computing Gets Weird

We owe much of the progress in computing to the theoretical work done by Alan Turing in the 1930s and 1940s. His work on Turing Machines and Universal Turing Machines—these were models based on logic, not actual machines—laid the groundwork for our digital computers. Indeed, many computers today still follow the basic idea of the Universal Turing Machine, a machine that can compute virtually any well-defined problem.

What does it mean to “compute,” anyway? In Turing’s time, computers were people who performed calculations on given data according to a given set of rules. Turing showed that this process of computation could be automated to a great extent; the Turing Machine was his formal model for doing so. A given Turing Machine is built to solve one specific kind of problem, and it is very good at that problem alone. A Universal Turing Machine can compute anything that any other Turing Machine can compute. This theoretical result underlies modern computing.
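
To make the idea a bit more concrete, here is a minimal sketch in Python of a Turing Machine: a tape of symbols, a read/write head, a current state, and a table of rules. The particular machine and rule table below are my own illustrative example (it simply flips every bit on the tape), not anything taken from Turing’s work.

    def run_turing_machine(tape, rules, state="start", halt_state="halt"):
        """Run a one-tape Turing Machine until it reaches the halt state."""
        tape = list(tape)
        head = 0
        while state != halt_state:
            symbol = tape[head] if head < len(tape) else "_"   # "_" marks a blank cell
            write, move, state = rules[(state, symbol)]        # rule: what to write, where to move, next state
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += 1 if move == "R" else -1
        return "".join(tape)

    # Rule table: (current state, symbol read) -> (symbol to write, head move, next state)
    flip_rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),   # blank cell: nothing left to flip, so halt
    }

    print(run_turing_machine("010011", flip_rules))   # prints 101100_

Swapping in a different rule table gives a different machine; a Universal Turing Machine is, roughly, a machine whose rule table can read and run any other machine’s rule table.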

Computers have come a long way since Turing’s time. We have gone from machines that filled entire rooms to smartphones with exponentially more computational power. But, as with every scientific and technological enterprise, computing has taken a new turn in recent years.

First proposed by physicist Richard Feynman in 1982, quantum computing is a new type of computing that relies on the “weirdness” of quantum mechanics to perform computations. British physicist David Deutsch followed up on Feynman’s proposition in 1985. He suggested that a machine could be constructed that stored its information in quantum bits, or qubits. Researchers are now attempting to build such a machine.

 

Disentangling Qubits

Imagine a light switch. The switch can be in one of two states at any given time: on or off. In the “on” state, light flows; in the “off” state, it does not. Normal computers represent information in a similar way: data is stored as a series of binary digits, or bits, each of which holds either a zero or a one at any given time. When a programmer reads a given bit, the answer is unambiguously a zero or a one. A programmer can read many bits in a row with essentially no errors, and if a bit does flip erroneously, it’s a simple matter to detect and fix. Quantum computing throws all this out the virtual window.
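
As a quick illustration (my own example, not the article’s), here is how a small register of classical bits behaves in Python: a handful of bits holds exactly one definite value out of everything it could possibly represent, and reading it is unambiguous.

    # A classical register of n bits sits in exactly one of its 2**n possible
    # states at any given time. (The particular bit pattern here is arbitrary.)
    bits = [1, 0, 1, 1, 0, 0, 1, 0]                  # one definite state: each bit is 0 or 1
    value = int("".join(str(b) for b in bits), 2)    # reading the bits gives one unambiguous number
    print(f"{len(bits)} bits -> one value out of {2 ** len(bits)} possibilities: {value}")
    # prints: 8 bits -> one value out of 256 possibilities: 178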

[Figure: binary diagram]

While bits are the fundamental units of information in normal computers, qubits form the basis of quantum computing. Unlike bits, qubits can represent zero, one, or both simultaneously. It’s like having the light switch in the “on” state and the “off” state at the same time. Is the light flowing or not? Is the switch broken? We don’t know for sure until we inspect the light. That is, we have to observe the light switch to determine which state it is actually in at that moment.

The same idea applies to qubits. According to quantum mechanics, a qubit’s information is accessed by measuring it, and measurement is where the difficulty lies. Whenever we measure a qubit, its delicate quantum state is disturbed and the information it holds decoheres: the qubit loses its ability to represent multiple values and is reduced to a normal bit holding a single value. In effect, the act of measuring a qubit changes the qubit, sometimes with random results. So we go from having a light switch whose nature is rather confusing to a light switch we understand fairly well.
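
The following toy simulation in Python (my own sketch, not a real quantum program) captures that idea: a simulated qubit is described by two amplitudes, and “measuring” it collapses the superposition into an ordinary 0 or 1.

    import random
    from math import sqrt

    class SimulatedQubit:
        """A toy single qubit: two amplitudes whose squares give the odds of reading 0 or 1."""

        def __init__(self, amp0, amp1):
            self.amp0, self.amp1 = amp0, amp1        # amplitudes for the 0-state and the 1-state

        def measure(self):
            p0 = abs(self.amp0) ** 2                 # probability of reading a 0
            outcome = 0 if random.random() < p0 else 1
            # Measurement collapses the superposition: from here on, it is an ordinary bit.
            self.amp0, self.amp1 = (1, 0) if outcome == 0 else (0, 1)
            return outcome

    q = SimulatedQubit(1 / sqrt(2), 1 / sqrt(2))     # "on and off at once": a 50/50 superposition
    print(q.measure())                               # randomly 0 or 1
    print(q.measure())                               # the same answer again, because the state collapsed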

Things get more exciting when two qubits are paired. These “entangled” qubits can represent four values at once: 00, 01, 10, and 11. Even better, because the qubits are entangled, operating on one of them effectively operates on all of them. Each additional qubit doubles the amount of information the machine can hold in superposition, so the quantum computer becomes exponentially more powerful as qubits are added. Entanglement is also what promises to make quantum computers so fast: for certain problems, a quantum computer with around 50 qubits could far outpace a normal computer in computational power and speed. With qubits, a quantum computer can represent and process more information than a normal computer, and entanglement allows us to string together several qubits and operate on them all simultaneously, which is much faster than working through the same information one step at a time.
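
Here is a rough sketch in Python (again my own illustration, with made-up function names) of both points: the description of n qubits doubles with every qubit you add, and measuring an ideal entangled pair always yields perfectly correlated results.

    import random

    # Describing n qubits requires 2**n amplitudes, so each extra qubit doubles
    # the amount of information the machine can hold in superposition.
    def amplitudes_needed(n_qubits):
        return 2 ** n_qubits

    for n in (1, 2, 10, 50):
        print(f"{n} qubits -> {amplitudes_needed(n):,} amplitudes to describe")

    # An idealized entangled pair (a "Bell pair"): the only possible readings are
    # 00 or 11, each half the time, so measuring one qubit fixes the other.
    def measure_bell_pair():
        return random.choice(["00", "11"])

    print([measure_bell_pair() for _ in range(5)])    # e.g. ['00', '11', '11', '00', '11']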

So we should just create as many qubits as possible and entangle them all, right? Well, not exactly. Quantum computing benefits from the weirdness of quantum mechanics but also suffers from its constraints. The truth is that we still can’t control quantum systems well enough to build large-scale quantum computers. The main issue stems from how qubits are accessed: the highly sensitive nature of qubits has plagued researchers for decades, though some progress toward error correction has been made recently.

[Figure: qubit diagram]

 

Change Is an Essential Process

Interest in quantum computing has taken off since 2007, especially in the corporate world. Microsoft, Google, and IBM are among the leading pioneers in quantum computing research, while much of the commercially available hardware so far, notably D-Wave’s, has relied on an approach called “quantum annealing.” In a sense, current quantum computers are like individual Turing Machines in that they are designed for very specific tasks. Instead of trying to build a quantum computer with the theoretical power of a Universal Turing Machine, researchers want something simpler: highly efficient machines dedicated to specific, specialized tasks. Once that goal is met, researchers may turn their collective attention to developing a quantum equivalent of the Universal Turing Machine.

The specific tasks solved by quantum computers tend to be mathematical. For example, quantum computers are expected to be very good at factoring, the process of breaking very large numbers into their prime factors. This is great news for researchers, but it’s bad news for security. Most security schemes nowadays use encryption and cryptography techniques that rely heavily on the difficulty of factoring very large numbers or on closely related problems. The theory is that these numbers are so large that no human or normal computer can factor them quickly enough to break the security. However, as Peter Shor showed in 1994, a sufficiently powerful quantum computer could easily break common security schemes. This means that today’s security measures may become obsolete sooner than expected. Earlier this year, in fact, the NSA released a report warning users to begin upgrading or switching their security schemes. This comes in the wake of news that the NSA is itself trying to build a quantum computer to break common security schemes! In response, researchers (including those at the NSA, though more secretively) are attempting to develop “quantum-resistant” algorithms. For now, quantum computing isn’t quite ready to break our security. But the threat is real, and quantum computers grow more powerful every year.
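
To see why factoring matters, here is a small Python sketch (my own example, not part of any real cryptosystem): classical trial division finds small factors instantly, but its cost grows steeply with the size of the factors, which is exactly what today’s encryption counts on and what Shor’s algorithm undermines.

    # Classical factoring by trial division: test divisors one by one.
    # Fine for small numbers, hopeless for the 300+ digit numbers used in practice.
    def trial_division(n):
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(trial_division(15))                    # [3, 5], found instantly
    print(trial_division(999_983 * 1_000_003))   # two roughly 7-digit primes: still manageable
    # Each extra digit in the factors multiplies the work; for real key sizes,
    # every known classical method takes astronomically long, while Shor's
    # algorithm on a large quantum computer would finish in polynomial time.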

Companies like D-Wave and IBM are pushing quantum computing research forward. D-Wave recently released a commercial quantum computer with more than 1,000 qubits! This system is highly specialized, however, and its quality is still controversial. For its part, IBM made its own quantum computer publicly available over the internet in May 2016. Rather than trying to maximize the number of qubits in its system, as D-Wave has, IBM has focused on creating a system with strong error correction. IBM has claimed it will be able to scale this proof-of-concept machine up to 50 or 100 qubits within 10 years. More controversially, IBM has also claimed that its new system, when scaled, will be the quantum equivalent of a Universal Turing Machine.

These are exciting times for computing. Normal computers are getting smaller and faster, and quantum computers are becoming more stable and reliable. With significant improvements, quantum computers may one day help with complex problems like machine learning and natural language processing; we may, for example, one day have real-time language translation. In the sciences, quantum computing may prove useful for simulating quantum systems themselves and nailing down the particulars of quantum mechanics. Granted, quantum mechanics seems like something that should be restricted to physics classrooms, but there is real promise for future developments. Quantum computing may help us unlock some of the more resilient secrets of nature. Although we’re not quite there yet, we get a little closer every year to making science fiction a reality.