Quantum gravity from quantum error-correcting codes?

The lessons we learned from the Ryu-Takayanagi formula, the firewall paradox and the ER=EPR conjecture have convinced us that quantum information theory can become a powerful tool to sharpen our understanding of various problems in high-energy physics. But many of the concepts utilized so far rely on entanglement entropy and its generalizations, quantities developed by von Neumann more than 60 years ago. We live in the 21st century. Why don't we use more modern concepts, such as the theory of quantum error-correcting codes?

In a recent paper with Daniel Harlow, Fernando Pastawski and John Preskill, we have proposed a toy model of the AdS/CFT correspondence based on quantum error-correcting codes. Fernando has already written about how this research project started after a fateful visit by Daniel to Caltech and John's remarkable prediction in 1999. In this post, I hope to write an introduction which may serve as a reader's guide to our paper, explaining why I'm so fascinated by the beauty of the toy model.

This is certainly a challenging task because I need to make it accessible to everyone while explaining the real physics behind the paper. My personal philosophy is that a toy model must be as simple as possible while capturing the key properties of the system of interest. In this post, I will try to extract some key features of the AdS/CFT correspondence and construct a toy model which captures them. This post may be a bit technical compared to other recent posts, but anyway, let me give it a try…

Bulk locality paradox and quantum error-correction

The AdS/CFT correspondence says that there is some kind of correspondence between quantum gravity on (d+1)-dimensional asymptotically-AdS space and d-dimensional conformal field theory on its boundary. But how are they related?

The AdS-Rindler reconstruction tells us how to "reconstruct" a bulk operator from boundary operators. Consider a bulk operator \phi and a boundary region A on a hyperbolic space (in other words, a negatively-curved plane). On a fixed time-slice, the causal wedge of A is the bulk region enclosed by the geodesic line of A (the bulk curve of minimal length anchored at the endpoints of A). The AdS-Rindler reconstruction says that \phi can be represented by some integral of local boundary operators supported on A if and only if \phi is contained inside the causal wedge of A. Of course, there are multiple regions A, B, C, … whose causal wedges contain \phi, and the reconstruction should work for any such region.

fig_Rindler

The Rindler-wedge reconstruction

That a bulk operator in the causal wedge can be reconstructed by local boundary operators, however, leads to a rather perplexing paradox in the AdS/CFT correspondence. Consider a bulk operator \phi at the center of a hyperbolic space, and split the boundary into three pieces, A, B, C. Then the geodesic line for the union BC encloses the bulk operator, that is, \phi is contained inside the causal wedge of BC. So, \phi can be represented by local boundary operators supported on BC. But the same argument applies to AB and CA, implying that the bulk operator \phi corresponds to local boundary operators which are supported inside AB, BC and CA simultaneously. Such an operator would have to commute with every local operator on A, on B and on C separately, and the only operators with this property are multiples of the identity. It would seem then that the bulk operator \phi must correspond to an identity operator times a complex phase. In fact, similar arguments apply to any bulk operator, and thus all the bulk operators must correspond to identity operators on the boundary. Then, the AdS/CFT correspondence seems so boring…

fig_paradox

The bulk operator at the center is contained inside causal wedges of BC, AB, AC. Does this mean that the bulk operator corresponds to an identity operator on the boundary?

Almheiri, Dong and Harlow have recently proposed an intriguing way of resolving this paradox: the AdS/CFT correspondence can be viewed as a quantum error-correcting code. Their idea is as follows. Instead of \phi corresponding to a single boundary operator, \phi may correspond to different operators in different regions, say O_{AB}, O_{BC}, O_{CA} living in AB, BC, CA respectively. Even though O_{AB}, O_{BC}, O_{CA} are different boundary operators, they may act equivalently inside a certain low-energy subspace on the boundary.

This situation resembles the so-called quantum secret-sharing code. The quantum information at the center of the bulk cannot be accessed from any single party A, B or C because \phi does not have a representation on A, B, or C alone. It can be accessed only if multiple parties cooperate and perform joint measurements. It seems that a quantum secret is shared among three parties, and the AdS/CFT correspondence somehow realizes a three-party quantum secret-sharing code!

Entanglement wedge reconstruction?

Recently, causal wedge reconstruction has been further generalized to the notion of entanglement wedge reconstruction. Imagine we split the boundary into four pieces A, B, C, D such that A, C are larger than B, D. Then the union of the geodesic lines for A and for C is not the minimal curve for the union AC; shorter arcs connecting the endpoints of A and C form the global geodesic line for AC. The entanglement wedge of AC is the bulk region enclosed by this global geodesic line. And entanglement wedge reconstruction predicts that \phi can be represented as an integral of local boundary operators on AC if and only if \phi is inside the entanglement wedge of AC [1].

fig_reconstruction

Causal wedge vs entanglement wedge.

Building a minimal toy model; the five-qubit code

Okay, now let's try to construct a toy model which admits causal and entanglement wedge reconstructions of bulk operators. Because I want a simple toy model, I make the rather bold assumption that the bulk consists of a single qubit while the boundary consists of five qubits, denoted by A, B, C, D, E.

fig_minimal

Reconstruction of a bulk operator in the “minimal” model.

What does causal wedge reconstruction teach us in this minimal setup of one bulk qubit and five boundary qubits? First, we split the boundary system into two pieces, ABC and DE, and observe that the bulk operator \phi is contained inside the causal wedge of ABC. From the rotational symmetries, we know that the bulk operator \phi must have representations on ABC, BCD, CDE, DEA, EAB. Next, we split the boundary system into four pieces, AB, C, D and E, and observe that the bulk operator \phi is contained inside the entanglement wedge of AB and D. So the bulk operator \phi must also have representations on ABD, BCE, CDA, DEB, EAC. In summary, we have the following:

  • The bulk operator must have a representation on a boundary region R if and only if R contains three or more qubits.

This is the property I want my toy model to possess.

What kinds of physical systems have such a property? Luckily, we quantum information theorists know the answer: the five-qubit code. The five-qubit code, proposed here and here, encodes one logical qubit into five-qubit entangled states and corrects any single-qubit error. We can view the five-qubit code as a quantum encoding isometry from one-qubit states to five-qubit states:

\alpha | 0 \rangle + \beta | 1 \rangle \rightarrow \alpha | \tilde{0} \rangle + \beta | \tilde{1} \rangle

where | \tilde{0} \rangle and | \tilde{1} \rangle are the basis states for the logical qubit. In quantum coding theory, logical Pauli operators \bar{X} and \bar{Z} are Pauli operators which act like Pauli X (bit flip) and Z (phase flip) on the logical qubit spanned by | \tilde{0} \rangle and | \tilde{1} \rangle. In the five-qubit code, for any set R of three qubits, representations of the logical Pauli X and Z operators, \bar{X}_{R} and \bar{Z}_{R}, can be found on R. While \bar{X}_{R} and \bar{X}_{R'} are different operators for R \not= R', they act in exactly the same manner on the codeword subspace spanned by | \tilde{0} \rangle and | \tilde{1} \rangle. This is exactly the property I was looking for.
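If you would like to see this property explicitly, here is a small numerical sketch in Python with numpy (it is only an illustrative check, not code from our paper). It builds the codespace projector of the five-qubit code from its stabilizer generators and verifies that a weight-three Pauli operator acts on the codeword subspace exactly like the weight-five logical X:

```python
import numpy as np
from functools import reduce

# Single-qubit Paulis
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
Y = 1j * X @ Z
pauli = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def op(s):
    """Tensor product of single-qubit Paulis, e.g. 'XZZXI'."""
    return reduce(np.kron, [pauli[c] for c in s])

# Stabilizer generators of the five-qubit code: cyclic shifts of XZZXI
gens = ['XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ']
proj = reduce(np.matmul, [(np.eye(32) + op(g)) / 2 for g in gens])  # codespace projector

logical_X = op('XXXXX')            # the standard weight-five logical X
small_X = op('XZZXI') @ logical_X  # multiply by a stabilizer: the product acts trivially on the
                                   # first and fourth qubits, so it lives on only three qubits

# Both operators act identically on the codeword subspace
assert np.allclose(proj @ logical_X @ proj, proj @ small_X @ proj)
print("a weight-3 operator represents logical X on the code subspace")
```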

Holographic quantum error-correcting codes

We just found possibly the smallest toy model of the AdS/CFT correspondence: the five-qubit code! The remaining task is to construct a larger model. For this goal, we view the encoding isometry of the five-qubit code as a six-leg tensor. The holographic quantum code is a network of such six-leg tensors covering a hyperbolic space, where each tensor has one open leg. These open legs in the bulk are interpreted as logical input legs of a quantum error-correcting code, while the open legs on the boundary are identified as the outputs where the quantum information is encoded. Then the entire tensor network can be viewed as an encoding isometry.

The six-leg tensor has some nice properties. Imagine we inject some Pauli operator into one of the six legs of the tensor. Then, for any choice of three of the remaining legs, there always exists a Pauli operator acting on them which counteracts the effect of the injection. An example is shown below:

fig_pushing

In other words, if an operator is injected into one tensor leg, one can "push" it onto three other tensor legs.
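The reason the pushing works is that the six-leg tensor is a "perfect" tensor: across any cut into three legs and three legs it is proportional to a unitary, so an operator entering on one side can always be traded for an operator acting on the other side. Here is another short numerical sketch (again only an illustrative check, not code from our paper) which builds the six-leg tensor from the encoding isometry of the five-qubit code and verifies this for every balanced cut:

```python
import numpy as np
from functools import reduce
from itertools import combinations

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
pauli = {'I': I2, 'X': X, 'Z': Z}
op = lambda s: reduce(np.kron, [pauli[c] for c in s])

# Codespace projector of the five-qubit code
gens = ['XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ']
proj = reduce(np.matmul, [(np.eye(32) + op(g)) / 2 for g in gens])

# Encoding isometry V: one logical qubit -> five physical qubits
zero_L = proj @ np.eye(32)[0]          # project |00000> onto the codespace
zero_L /= np.linalg.norm(zero_L)
one_L = op('XXXXX') @ zero_L           # apply logical X to get the other codeword
V = np.stack([zero_L, one_L], axis=1)  # shape (32, 2)

# View V as a six-leg tensor (five physical legs + one logical leg)
T = V.reshape((2,) * 6)
for legs in combinations(range(6), 3):
    rest = [l for l in range(6) if l not in legs]
    M = np.transpose(T, axes=list(legs) + rest).reshape(8, 8)
    MM = M @ M.conj().T
    assert np.allclose(MM, MM[0, 0] * np.eye(8))  # proportional to a unitary
print("every 3|3 cut of the six-leg tensor is proportional to a unitary")
```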

Finally, let's demonstrate causal wedge reconstruction of bulk logical operators. Pick an arbitrary open tensor leg in the bulk and inject some Pauli operator into it. We can "push" it onto three tensor legs, which are then injected into neighboring tensors. By repeatedly pushing operators toward the boundary of the network, we eventually obtain some representation of the operator living on a piece of the boundary, a region A, and the bulk operator is contained inside the causal wedge of A. (Here, the length of a curve can be defined as the number of tensor legs it cuts.) You can also push operators toward the boundary through different choices of tensor legs, which lead to different representations of the logical operator. You can even obtain a rather exotic representation which is supported non-locally over two disjoint pieces of the boundary, realizing entanglement wedge reconstruction.

fig_example

Causal wedge and entanglement wedge reconstruction.

What’s next?

This post is already pretty long and I need to wrap it up…

Shor's quantum factoring algorithm is a revolutionary invention which opened a whole new research avenue of quantum information science. It is often forgotten, but the first quantum error-correcting code is another important invention by Peter Shor (and independently by Andrew Steane), one which enabled a proof that quantum computation can be performed fault-tolerantly. The theory of quantum error-correcting codes has found interesting applications in condensed matter physics, such as the study of topological phases of matter. Perhaps, then, quantum coding theory will also find applications in high-energy physics.

Indeed, many interesting open problems are awaiting us. Is entanglement wedge reconstruction a generic feature of tensor networks? How do we describe black holes by quantum error-correcting codes? Can we build a fast scrambler by tensor networks? Is entanglement a wormhole (or maybe a perfect tensor)? Can we resolve the firewall paradox by holographic quantum codes? Can the physics of quantum gravity be described by tensor networks? Or can the theory of quantum gravity provide us with novel constructions of quantum codes?

I feel that now is the time for quantum information scientists to jump into the research of black holes. We don't know if we will be burned by a firewall or not… but it is worth trying.



[1] Whether entanglement wedge reconstruction is possible in the AdS/CFT correspondence still remains controversial. In the spirit of the Ryu-Takayanagi formula, which relates entanglement entropy to the length of a global geodesic line, entanglement wedge reconstruction seems natural. But the idea that a bulk operator can be reconstructed non-locally from boundary operators on two separate pieces A and C sounds rather exotic. In our paper, we constructed a toy model of tensor networks which allows both causal and entanglement wedge reconstruction in many cases. For details, please see the paper.
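For readers who have not seen it, the Ryu-Takayanagi formula referred to in this footnote takes a particularly simple form in three bulk dimensions, where the minimal surface anchored on a boundary region A is just the geodesic line \gamma_A:

S_A = \frac{\mathrm{Length}(\gamma_A)}{4 G_N}

Here S_A is the entanglement entropy of A and G_N is Newton's constant. This is the sense in which the global geodesic line, rather than the causal wedge, seems like the natural boundary for reconstruction.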

Putting back the pieces of a broken hologram

It is Monday afternoon and the day seems to be a productive one, if not yet quite memorable. As I revise some notes on my desk, Beni Yoshida walks into my office to remind me that the high-energy physics seminar is about to start. I hesitate, somewhat apprehensive of the near-certain frustration of being lost during the first few minutes of a talk in an unfamiliar field. I normally avoid such a situation, but in my email I find John’s forecast for an accessible talk by Daniel Harlow and a title with three words I can cling onto. “Quantum error correction” has driven my curiosity for the last seven years. The remaining acronyms in the title will become much more familiar in the four months to come.

Most of you are probably familiar with holograms, these shiny flat films representing a 3D object from essentially any desired angle. I find it quite remarkable how all the information of a 3D object can be printed on an essentially 2D film. True, the colors are not represented as faithfully as in a traditional photograph, but it looks as though we have taken a photograph from every possible angle! The speaker’s main message that day seemed even more provocative than the idea of holography itself. Even if the hologram is broken into pieces, and some of these are lost, we may still use the remaining pieces to recover parts of the 3D image or even the full thing given a sufficiently large portion of the hologram. The 3D object is not only recorded in 2D, it is recorded redundantly!

Left to right: Beni Yoshida, Aleksander Kubica, Aidan Chatwin-Davies and Fernando Pastawski discussing holographic codes.

Halfway through Daniel's exposition, Beni and I exchange a knowing glance. We recognize a familiar pattern from our latest project, a pattern which has gained the moniker of "cleaning lemma" within the quantum information community and which can be thought of as a quantitative analog of reconstructing the 3D image from pieces of the hologram. Daniel makes connections using a language that we are familiar with. Beni and I discuss what we have understood and how to make it more concrete as we stride back through campus. We scribble diagrams on the whiteboard and string words such as tensor, encoder, MERA and negative curvature into our discussion. An image from the web gives us some intuition on the latter. We are onto something. We have a model. It is simple. It is new. It is exciting.

Poincare projection of a regular pentagon tiling of negatively curved space.

Food has not come our way so we head to my apartment as we enthusiastically continue our discussion. I can only provide two avocados and some leftover pasta but that is not important, we are sharing the joy of insight. We arrange a meeting with Daniel to present our progress. By Wednesday, Beni and I introduce the holographic pentagon code at the group meeting. The core of a new project is already there, but we need some help to navigate the high-energy waters. Who better to guide us in such an endeavor than our mentor, John Preskill, who recognized the importance of quantum information in holography as early as 1999 and has repeatedly proven himself a master of both trades.

“I feel that the idea of holography has a strong whiff of entanglement—for we have seen that in a profoundly entangled state the amount of information stored locally in the microscopic degrees of freedom can be far less than we would naively expect. For example, in the case of the quantum error-correcting codes, the encoded information may occupy a small ‘global’ subspace of a much larger Hilbert space. Similarly, the distinct topological phases of a fractional quantum Hall system look alike locally in the bulk, but have distinguishable edge states at the boundary.”
-J. Preskill, 1999

As Beni puts it, the time for using modern quantum information tools in high-energy physics has come. By this he means quantum error correction and maybe tensor networks. First privately, then more openly, we continue to sharpen and shape our project. Through conferences, Skype calls and emails, we further our discussion and progressively shape ideas. Many speculations mature to conjectures and fall victim to counterexamples. Some stand the test of simulations or are even promoted to theorems by virtue of mathematical proofs.

Beni Yoshida presenting our work at a quantum entanglement conference in Puerto Rico.

I publicly present the project for the first time at a select quantum information conference in Australia. Two months later, after a particularly intense writing, revising and editing process, the article is almost complete. As we finalize the text and relabel the figures, Daniel and Beni unveil our work to quantum entanglement experts in Puerto Rico. The talks are a hit and it is time to let all our peers read about it.

You are invited to do so and Beni will even be serving a reader’s guide in an upcoming post.

Quantum Frontiers salutes Terry Pratchett.

I blame British novels for my love of physics. Philip Pullman introduced me to elementary particles; Jasper Fforde, to the possibility that multiple worlds exist; Diana Wynne Jones, to questions about space and time.

So began the personal statement in my application to Caltech’s PhD program. I didn’t mention Sir Terry Pratchett, but he belongs in the list. Pratchett wrote over 70 books, blending science fiction with fantasy, humor, and truths about humankind. Pratchett passed away last week, having completed several novels after doctors diagnosed him with early-onset Alzheimer’s. According to the San Francisco Chronicle, Pratchett “parodie[d] everything in sight.” Everything in sight included physics.

http://www.lookoutmountainbookstore.com/

Terry Pratchett continues to influence my trajectory through physics: This cover has a cameo in a seminar I’m presenting in Maryland this March.

Pratchett set many novels on the Discworld, a pancake of a land perched atop four elephants, which balance on the shell of a turtle that swims through space. Discworld wizards quantify magic in units called thaums. Units impressed their importance upon me in week one of my first high-school physics class. We define one meter as “the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second.” Wizards define one thaum as “the amount of magic needed to create one small white pigeon or three normal-sized billiard balls.”

Wizards study the thaum in a High-Energy Magic Building reminiscent of Caltech's Lauritsen-Downs Building. To split the thaum, the wizards built a Thaumatic Resonator. Particle physicists in our world have smashed atoms into smaller particles, among them mesons and baryons. Discworld wizards discovered that the thaum consists of resons. Mesons and baryons consist of quarks, seemingly elementary particles that we believe cannot be split. Quarks fall into six types, called flavors: up, down, charmed, strange, top (or truth), and bottom (or beauty). Resons, too, consist of quarks. The Discworld's quarks have the flavors up, down, sideways, sex appeal, and peppermint.

Reading about the Discworld since high school, I’ve wanted to grasp Pratchett’s allusions. I’ve wanted to do more than laugh at them. In Pyramids, Pratchett describes “ideas that would make even a quantum mechanic give in and hand back his toolbox.” Pratchett’s ideas have given me a hankering for that toolbox. Pratchett nudged me toward training as a quantum mechanic.

Pratchett hasn’t only piqued my curiosity about his allusions. He’s piqued my desire to create as he did, to do physics as he wrote. While reading or writing, we build worlds in our imaginations. We visualize settings; we grow acquainted with characters; we sense a plot’s consistency or the consistency of a system of magic. We build worlds in our imaginations also when doing and studying physics and math. The Standard Model is a system that encapsulates the consistency of our knowledge about particles. We tell stories about electrons’ behaviors in magnetic fields. Theorems’ proofs have logical structures like plots’. Pratchett and other authors trained me to build worlds in my imagination. Little wonder I’m training to build worlds as a physicist.

Around the time I graduated from college, Diana Wynne Jones passed away. So did Brian Jacques (another British novelist) and Madeleine L’Engle. L’Engle wasn’t British, but I forgave her because her Time Quartet introduced me to dimensions beyond three. As I completed one stage of intellectual growth, creators who’d led me there left.

Terry Pratchett has joined Jones, Jacques, and L’Engle. I will probably create nothing as valuable as his Discworld, let alone a character in the Standard Model toward which the Discworld steered me.

But, because of Terry Pratchett, I have to try.

A detective with a quantum helper

Have you ever wanted to be incredibly perceptive and make far-reaching deductions about people? I have always been fascinated by spy stories, and how the main character in them notices tiny details of his surroundings to navigate life-or-death situations. This skill seems out of reach for us normal people; you have to be “a high-functioning sociopath” to memorize all existing data on behavior, clothes choices and forensic science. Of course I’m referring to:
Small details help Sherlock figure out what the woman did to meet such a sad end
Yet in the not-too-distant future, a computer may help you become a brilliant detective (or a scheming villain) yourself! The first step is noticing the details, which is known in machine learning as the classification task. Here is a pioneering piece of work that somewhat resembles the picture above, only it's done by a computer:
A computer spits out a sentence (read down) describing what's in the picture. Work by a Stanford group.

The task for the computer here was to produce a verbal description of the image. There are thousands of words in the vocabulary, and a computer has to try them in different combinations to make a sensible sentence. There is no way a computer can be given an exhaustive list of correct sentences with examples of images for each; that kind of list would be a database bigger than the Earth (as one can see just by counting the number of combinations). So to train the computer to use language as in the picture above, one only has a limited set of examples – maybe a few thousand pictures with descriptions. Yet we as humans are capable of learning from just a few examples, by noticing the repeating patterns. So the computer can do the same! The score next to each word above is an estimate, based on those few thousand examples, of how relevant the word "tennis" or "woman" is to what's in the box on the image. The algorithm produces possible sentences, scores them, and then selects the sentence with the highest total score.
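To make that last step concrete, here is a toy sketch in Python of the selection rule just described: each candidate sentence is scored word by word against relevance estimates, and the highest-scoring sentence is kept. The words, scores and candidate sentences below are made up for illustration; they are not taken from the Stanford system.

```python
# Toy caption selection: the relevance scores below are invented for
# illustration; a real system would estimate them from training examples.
relevance = {"woman": 0.9, "tennis": 0.8, "playing": 0.6,
             "racket": 0.5, "banana": 0.05, "a": 0.1, "the": 0.1}

candidates = [
    "a woman playing tennis",
    "a woman eating a banana",
    "the tennis racket",
]

def score(sentence):
    """Sum the relevance of each word in a candidate sentence."""
    return sum(relevance.get(word, 0.0) for word in sentence.split())

best = max(candidates, key=score)
print(best)  # -> "a woman playing tennis"
```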

Once the classification task is done, one needs to use all the collected information to make a prediction – just as Sherlock is able to point out the most probable motive in the first picture, we want to predict a piece of very personal information: how to start up a conversation with that tennis player.

Humans are actually good at classification tasks: with luck, we can notice and type into our cellphones all the details the predictor will need, like brand of clothing, hair color, height… though computers recently became better than humans at facial expression recognition, so we don't have to trust ourselves on that anymore. Finally, when all the data is collected, most humans will still offer you only generic advice on conversation starters, which means we are very bad at prediction tasks. We don't notice the hidden dependencies between brand of clothes and sense of humor. But such information may not hide from the all-seeing eye of the machine learning algorithm! So expect your cellphone to give you dating advice within 10 years…

Now how do quantum computers come into play? Well, if you look at your search results, they are still pretty irrelevant most of the time. Imagine you used them as conversation starters – you'd embarrass yourself 9 times out of 10! To do better, a certain company needs more memory and processing power. Yet the most advanced deep learning routines remain out of reach, simply because there are exponentially many hidden dependencies one would need to try and reject before the algorithm finds the right predictor. So a certain company turns to us, quantum computing people, as we deal with exponentially hard problems notoriously well! And indeed, quantum algorithms make some machine learning routines exponentially faster – see this Quantum Machine Learning article, as well as a talk by Seth Lloyd, for technical details. Some anonymous stock trader is already trying to intimidate their fellow quants (quantitative analysts) by calling their top trading system "Quantum machine learning". I think we should appreciate their sense of humor and invest in their algorithm as soon as Quantiacs.com opens such functionality. Or we could invest in Teagan from Caltech – her code recently won the futures contest on the same website.