About Thomas

I am a professor in the department of Computing and Mathematical Sciences (CMS) at the California Institute of Technology, where I am also a member of the Institute for Quantum Information and Matter (IQIM). My research is in quantum complexity theory and cryptography.

Interaction + Entanglement = Efficient Proofs of Halting

A couple of weeks ago my co-authors Zhengfeng Ji (UTS Sydney), Henry Yuen (University of Toronto), and Anand Natarajan and John Wright (both at Caltech’s IQIM, with John soon moving to UT Austin) and I posted a manuscript on the arXiv preprint server entitled

MIP*=RE

The magic of the single-letter formula quickly took effect, and our posting received some attention on the blogosphere (see links below). Within computer science, complexity theory is at an advantage in its ability to capture powerful statements in few letters: who has not heard of P, NP, and, for readers of this blog, BQP and QMA? (In contrast, I am under no illusion: my vague attempt at a more descriptive title has, by the time you reach this line, all but vanished from the reader’s memory.)

Even accounting for this popularity however, it is a safe bet that fewer of our readers have heard of MIP* or RE. Yet we are promised that the above-stated equality has great consequences for physics (“Tsirelson’s problem” in the study of nonlocality) and mathematics (“Connes’ embedding problem” in the theory of von Neumann algebras). How so — how can complexity-theoretic alphabet soup have any consequence for, on the one hand, physical reality, and on the other, abstract mathematics?

The goal of this post and the next one is to help the interested reader grasp the significance of interactive proofs (that lie between the symbols MIP*) and undecidability (that lies behind RE) for quantum mechanics.

The bulk of the present post is an almost identical copy of a post I wrote for my personal blog. To avoid accusations of self-plagiarism, I will substantiate it with a little picture and a story, see below. The post gives a very personal take on the research that led to the aforementioned result. In the next post, my co-author Henry Yuen has offered to give a more scientific introduction to the result and its significance.

Before proceeding, it is important to make it clear that the research described in this post and the next has not been refereed or thoroughly vetted by the community. This process will take place over the coming months, and we should wait until it is completed before placing too much weight on the results. As an author, I am proud of my work; yet I am aware that due process must run its course before the claims can be made official. As such, these posts only represent my opinion (and Henry’s) and not necessarily that of the wider scientific community.

For more popular introductions to our result, see the blog posts of Scott Aaronson, Dick Lipton, and Gil Kalai and reporting by Davide Castelvecchi for Nature and Emily Conover for Science.

Now for the personal post…and the promised picture. Isn’t it beautiful? The design is courtesy of Tony Metger and Alexandru Gheorghiu, the first a visiting student and the second a postdoctoral scholar at Caltech’s IQIM. While Tony and Andru came up with the idea, the execution is courtesy of the bakery employee, who graciously implemented the custom design (apparently writing equations on top of cakes is not common enough to be part of the standard offerings, so they had to go for the custom option). Although it is unclear whether the executioner grasped the full depth of the signs they were copying, note how perfect the execution: not a single letter is out of place! Thanks to Tony, Andru, and the anonymous chef for the tasty souvenir.

Now for the story. In an earlier post on my personal research blog, I had reported on the beautiful recent result by Natarajan and Wright showing the astounding power of multi-prover interactive proofs with quantum provers sharing entanglement: in letters, \text{NEEXP} \subseteq \text{MIP}^\star. In the remainder of this post I will describe our follow-up work with Ji, Natarajan, Wright, and Yuen. In this post I will tell the story from a personal point of view, with all the caveats that this implies: the “hard science” will be limited (but there could be a hint as to how “science”, to use a big word, “progresses”, to use an ill-defined one; see also the upcoming post by Henry Yuen for more), the story is far too long, and it might be mostly of interest to me only. It’s a one-sided story, but that has to be. (In particular below I may at times attribute credit in the form “X had this idea”. This is my recollection only, and it is likely to be inaccurate. Certainly I am ignoring a lot of important threads.) I wrote this because I enjoyed recollecting some of the best moments in the story just as much as some of the hardest; it is fun to look back and find meanings in ideas that initially appeared disconnected. Think of it as an example of how different lines of work can come together in unexpected ways; a case for open-ended research. It’s also an antidote against despair that I am preparing for myself: whenever I feel I’ve been stuck on a project for far too long, I’ll come back to this post and ask myself if it’s been 14 years yet — if not, then press on.

It likely comes as a surprise to me only that I am no longer fresh out of the cradle. My academic life started in earnest some 14 years ago, when in the Spring of 2006 I completed my Masters thesis in Computer Science under the supervision of Julia Kempe, at Orsay in France. I had met Julia the previous term: her class on quantum computing was, by far, the best-taught and most exciting course in the Masters program I was attending, and she had gotten me instantly hooked. Julia agreed to supervise my thesis, and suggested that I look into some interesting recent result by Stephanie Wehner that linked the study of entanglement and nonlocality in quantum mechanics to complexity-theoretic questions about interactive proof systems (specifically, this was Stephanie’s paper showing that \text{XOR-MIP}^\star \subseteq \text{QIP}(2)).

At the time the topic was very new. It had been initiated the previous year with a beautiful paper by Cleve et al. (that I have recommended to many a student since!) It was a perfect fit for me: the mathematical aspects of complexity theory and quantum computing connected to my undergraduate background, while the relative concreteness of quantum mechanics (it is a physical theory after all) spoke to my desire for real-world connection (not “impact” or even “application” — just “connection”). Once I got myself up to speed in the area (which consisted of three papers: the two I already mentioned, together with a paper by Kobayashi and Matsumoto where they studied interactive proofs with quantum messages), Julia suggested looking into the “entangled-prover” class \text{MIP}^\star introduced in the aforementioned paper by Cleve et al. Nothing was known about this class! Nothing besides the trivial inclusion of single-prover interactive proofs, IP, and the containment in…ALL, the trivial class that contains all languages.

Yet the characterization MIP=NEXP of its classical counterpart by Babai et al. in the 1990s had led to one of the most productive lines of work in complexity of the past few decades, through the PCP theorem and its use from hardness of approximation to efficient cryptographic schemes. Surely, studying \text{MIP}^\star had to be a productive direction? In spite of its well-established connection to classical complexity theory, via the formalism of interactive proofs, this was a real gamble. The study of entanglement from the complexity-theoretic perspective was entirely new, and bound to be fraught with difficulty; very few results were available and the existing lines of work, from the foundations of non-locality to more recent endeavors in device-independent cryptography, provided little other starting point than strong evidence that even the simplest examples came with many unanswered questions. But my mentor was fearless, and far from a novice when it came to breaking ground in new areas, having done pioneering work in areas ranging from quantum random walks to Hamiltonian complexity through adiabatic computation. Surely this would lead to something?

It certainly did. More sleepless nights than papers, clearly, but then the opposite would only indicate dullness. Julia’s question led to far more unexpected consequences than I, or I believe she, could have imagined at the time. I am writing this post to celebrate, in a personal way, the latest step in 15 years of research by dozens of researchers: today my co-authors and I uploaded to the quant-ph arXiv what we consider a complete characterization of the power of entangled-prover interactive proof systems by proving the equality \text{MIP}^\star = \text{RE}, the class of all recursively enumerable languages (a complete problem for RE is the halting problem). Without going too much into the result itself (if you’re interested, look for an upcoming post here that goes into the proof a bit more), and since this is a more personal post, I will continue on with some personal thoughts about the path that got us there.

When Julia & I started working on the question, our main source of inspiration was the result by Cleve et al. showing that the non-local correlations of entanglement had interesting consequences when seen through the lens of interactive proof systems in complexity theory. Since the EPR paper, a lot of work in understanding entanglement had already been accomplished in the Physics community, most notably by Mermin, Peres, Bell, and more recently the works in device-independent quantum cryptography by Acin, Pironio, Scarani and many others, stimulated by Ekert’s proposal for quantum key distribution and Mayers and Yao’s idea for “device-independent cryptography”. By then we certainly knew that “spooky action-at-a-distance” did not entail any faster-than-light communication, and indeed was not really “action-at-a-distance” in the first place but merely “correlation-at-a-distance”. What Cleve et al. recognized is that these “spooky correlations-at-a-distance” were sufficiently special so as to not only give numerically different values in “Bell inequalities”, the tool invented by Bell to evidence non-locality in quantum mechanics, but also have some potentially profound consequences in complexity theory.

In particular, examples such as the “Magic Square game” demonstrated that enough correlation could be gained from entanglement so as to defeat basic proof systems whose soundness relied only on the absence of communication between the provers, an assumption that until then had been wrongly equated with the assumption that any computation performed by the provers could be modeled entirely locally. I think that the fallacy of this implicit assumption came as a surprise to complexity theorists, who may still not have entirely internalized it. Yet the perfect quantum strategy for the Magic Square game provides a very concrete “counter-example” to the soundness of the “clause-vs-variable” game for 3SAT. Indeed this game, a reformulation by Aravind and Cleve-Mermin of a Bell Inequality discovered by Mermin and Peres in 1990, can be easily re-framed as a 3SAT system of equations that is not satisfiable, and yet is such that the associated two-player clause-vs-variable game has a perfect quantum strategy. It is this observation, made in the paper by Cleve et al., that gave the first strong hint that the use of entanglement in interactive proof systems could make many classical results in the area go awry.
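
To make the classical impossibility concrete, here is a small brute-force check (my own illustration in Python, not code from any of the papers) of one standard parity formulation of the magic square constraints: fill a 3×3 grid with ±1 entries so that every row has product +1 and every column has product -1. The row constraints force the product of all nine entries to be +1 while the column constraints force it to be -1, so no classical assignment exists; entangled provers, on the other hand, can win the associated two-player game with probability 1.

```python
# Brute-force check that the "magic square" constraint system has no classical
# (deterministic) solution: a 3x3 grid of +/-1 entries in which every row
# multiplies to +1 and every column multiplies to -1.  (One standard parity
# formulation of the Mermin-Peres magic square.)
from itertools import product

def satisfies(grid):
    rows_ok = all(grid[r][0] * grid[r][1] * grid[r][2] == +1 for r in range(3))
    cols_ok = all(grid[0][c] * grid[1][c] * grid[2][c] == -1 for c in range(3))
    return rows_ok and cols_ok

solutions = sum(
    satisfies([entries[0:3], entries[3:6], entries[6:9]])
    for entries in product([+1, -1], repeat=9)
)
print("classical solutions:", solutions)  # prints 0: the system is unsatisfiable
```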

By importing the study of non-locality into complexity theory Cleve et al. immediately brought it into the realm of asymptotic analysis. Complexity theorists don’t study fixed objects, they study families of objects that tend to have a uniform underlying structure and whose interesting properties manifest themselves “in the limit”. As a result of this new perspective focus shifted from the study of single games or correlations to infinite families thereof. Some of the early successes of this translation include the “unbounded violations” that arose from translating asymptotic separations in communication complexity to the language of Bell inequalities and correlations (e.g. this paper). These early successes attracted the attention of some physicists working in foundations as well as some mathematical physicists, leading to a productive exploration that combined tools from quantum information, functional analysis and complexity theory.

The initial observations made by Cleve et al. had pointed to \text{MIP}^\star as a possibly interesting complexity class to study. Rather amazingly, nothing was known about it! They had shown that under strong restrictions on the verifier’s predicate (it should be an XOR of two answer bits), a collapse took place: by the work of Hastad, XOR-MIP equals NEXP, but \text{XOR-MIP}^\star is included in EXP. This seemed very fortuitous (the inclusion is proved via a connection with semidefinite programming that seems tied to the structure of XOR-MIP protocols): could entanglement induce a collapse of the entire, unrestricted class? We thought (at this point mostly Julia thought, because I had no clue) that this ought not to be the case, and so we set out to show that the equality \text{MIP}^\star=\text{NEXP}, which would directly parallel Babai et al.’s characterization MIP=NEXP, holds. We tried to show this by introducing techniques to “immunize” games against entanglement: modify an interactive proof system so that its structure makes it “resistant” to the kind of “nonlocal powers” that can be used to defeat the clause-vs-variable game (witness the Magic Square). This was partially successful, and led to one of the papers I am most proud of — I am proud of it because I think it introduced elementary techniques (such as the use of the Cauchy-Schwarz inequality — inside joke — more seriously, basic things such as “prover-switching”, “commutation tests”, etc.) that are now routine manipulations in the area. The paper was a hard sell! It’s good to remember the first rejections we received. They were not unjustified: the main point of criticism was that we were only able to establish a hardness result for exponentially small completeness-soundness gap. A result for such a small gap in the classical setting follows directly from a very elementary analysis based on the Cook-Levin theorem. So then why did we have to write so many pages (and so many applications of Cauchy-Schwarz!) to arrive at basically the same result (with a ^\star)?

Eventually we got lucky and the paper was accepted to a conference. But the real problem, of establishing any non-trivial lower bound on the class \text{MIP}^\star with constant (or, in the absence of any parallel repetition theorem, inverse-polynomial) completeness-soundness gap, remained. By that time I had transitioned from a Masters student in France to a graduate student in Berkeley, and the problem (pre-)occupied me during some of the most difficult years of my Ph.D. I fully remember spending my first year entirely thinking about this (oh and sure, that systems class I had to pass to satisfy the Berkeley requirements), and then my second year — yet, getting nowhere. (I checked the arXiv to make sure I’m not making this up: two full years, no posts.) I am forever grateful to my fellow student Anindya De for having taken me out of the cycle of torture by knocking on my door with one of the most interesting questions I have studied, that led me into quantum cryptography and quickly resulted in an enjoyable paper. It was good to feel productive again! (Though the paper had fun reactions as well: after putting it on the arXiv we quickly heard from experts in the area that we had solved an irrelevant problem, and that we better learn about information theory — which we did, eventually leading to another paper, etc.) The project had distracted me and I set interactive proofs aside; clearly, I was stuck.

About a year later I visited IQC in Waterloo. I don’t remember in what context the visit took place. What I do remember is a meeting in the office of Tsuyoshi Ito, at the time a postdoctoral scholar at IQC. Tsuyoshi asked me to explain our result with Julia. He then asked a very pointed question: the bedrock for the classical analysis of interactive proof systems is the “linearity test” of Blum-Luby-Rubinfeld (BLR). Is there any sense in which we could devise a quantum version of that test?
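
For readers who have not seen it, the classical BLR test is delightfully simple: query the function at two uniformly random points and at their XOR, and check that the three answers are consistent with linearity. Here is a minimal sketch (mine, for illustration only), with a toy corrupted function thrown in for comparison.

```python
# A minimal sketch of the classical Blum-Luby-Rubinfeld (BLR) linearity test:
# pick random x, y and check f(x) XOR f(y) == f(x XOR y).  Linear functions
# pass every trial; functions far from linear fail a constant fraction of trials.
import random

def blr_test(f, n, trials=10_000):
    failures = 0
    for _ in range(trials):
        x, y = random.getrandbits(n), random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            failures += 1
    return failures / trials          # estimated rejection probability

n = 16
a = random.getrandbits(n)
def linear(x):                        # f(x) = <a, x> mod 2: a genuinely linear function
    return bin(a & x).count("1") % 2
def corrupted(x):                     # the same function with some values flipped
    return linear(x) ^ (x % 7 == 0)

print("linear   :", blr_test(linear, n))     # 0.0
print("corrupted:", blr_test(corrupted, n))  # noticeably above 0
```

Linear functions pass every trial, and the BLR analysis guarantees that any function passing with high probability must be close to some linear function; it is this self-correction guarantee that any "quantum" version of the test would somehow have to reproduce.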

What a question! This was great. At first it seemed fruitless: in what sense could one argue that quantum provers apply a “linear function”? Sure, quantum mechanics is linear, but that is beside the point. The linearity is a property of the prover’s answers as a function of their question. So what to make of the quantum state, the inherent randomness, etc.?

It took us a few months to figure it out. Once we got there however, the answer was relatively simple — the prover should be making a question-independent measurement that returns a linear function that it applies to its question in order to obtain the answer returned to the verifier — and it opened the path to our subsequent paper showing that the inclusion of NEXP in \text{MIP}^\star indeed holds. Tsuyoshi’s question about linearity testing had allowed us to make the connection with PCP techniques; from there to MIP=NEXP there was only one step to make, which is to analyze multi-linearity testing. That step was suggested by my Ph.D. advisor, Umesh Vazirani, who was well aware of the many pathways towards the classical PCP theorem, since the theorem had been obtained in great part by his former student Sanjeev Arora. It took a lot of technical work, yet conceptually a single question from my co-author had sufficed to take me out of a 3-year slumber.

This was in 2012, and I thought we were done. For some reason the converse inclusion, of \text{MIP}^\star in NEXP, seemed to resist our efforts, but surely it couldn’t resist much longer. Navascues et al. had introduced a hierarchy of semidefinite programs that seemed to give the right answer (technically they could only show convergence to a relaxation, the commuting value, but that seemed like a technicality; in particular, the values coincide when restricted to finite-dimensional strategies, which is all we computer scientists cared about). There were no convergence bounds on the hierarchy, yet at the same time commutative SDP hierarchies were being used to obtain very strong results in combinatorial optimization, and it seemed like it would only be a matter of time before someone came up with an analysis of the quantum case. (I had been trying to solve a related “dimension reduction problem” with Oded Regev for years, and we were making no progress; yet it seemed someone ought to!)
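
To give a flavor of what the lowest level of such a hierarchy looks like, here is a tiny sketch (my own, using the cvxpy package, so the import is an assumption about your environment and the snippet is not taken from any of the papers discussed). It computes a level-1 relaxation of the maximum quantum value of the CHSH correlation expression: the solver returns Tsirelson's bound {2\sqrt{2}\approx 2.83}, whereas classical (local deterministic) strategies cannot exceed 2.

```python
# Level-1 relaxation (in the spirit of the Navascues-Pironio-Acin hierarchy) of the
# CHSH expression <A0 B0> + <A0 B1> + <A1 B0> - <A1 B1>: a 5x5 moment matrix over
# the monomials {1, A0, A1, B0, B1}, constrained only to be PSD with unit diagonal.
import cvxpy as cp

G = cp.Variable((5, 5), symmetric=True)   # index 0 = identity, 1-2 = A0, A1, 3-4 = B0, B1
constraints = [G >> 0] + [G[i, i] == 1 for i in range(5)]
chsh = G[1, 3] + G[1, 4] + G[2, 3] - G[2, 4]

problem = cp.Problem(cp.Maximize(chsh), constraints)
problem.solve()
print(problem.value)   # ~2.828, i.e. 2*sqrt(2): Tsirelson's bound, vs. 2 classically
```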

In Spring 2014, during an open questions session at a workshop at the Simons Institute in Berkeley, Dorit Aharonov suggested that I ask the question of the possible inclusion of QMA-EXP, the exponential-sized-proofs analogue of QMA, in \text{MIP}^\star. This would be a stronger result than the inclusion of NEXP (under assumptions); wouldn’t it also be a more natural “fully quantum” analogue of MIP=NEXP? Dorit’s suggestion was motivated by research on the “quantum PCP theorem”, that aims to establish similar hardness results in the realm of the local Hamiltonian problem; see e.g. this post for the connection. I had no idea how to approach the question — I also didn’t really believe the answer could be positive — but what can you do, if Dorit asks you something… So I reluctantly went to the board and asked the question. Joe Fitzsimons was in the audience, and he immediately picked it up! Joe had the fantastic idea of using quantum error-correction, or more specifically secret-sharing, to distribute a quantum proof among the provers. His enthusiasm overcame my skepticism, and we eventually showed the desired inclusion. Maybe \text{MIP}^\star was bigger than \text{NEXP} after all.

Our result, however, had a similar deficiency as the one with Julia, in that the completeness-soundness gap was exponentially small. Obtaining a result with a constant gap took a few more years of work and the fantastic energy and insights of a Ph.D. student at MIT, Anand Natarajan. Anand is the first person I know of to have had the courage to dive into the most technical aspects of the analysis of the aforementioned results, while also bringing in the insights of a “true quantum information theorist” that were supported by Anand’s background in Physics and upbringing in the group of Aram Harrow at MIT. (In contrast I think of myself more as a “raw” mathematician; I don’t really understand quantum states other than as positive-semidefinite matrices…not that I understand math either of course; I suppose I’m some kind of a half-baked mish-mash.) Anand had many ideas but one of the most beautiful ones led to what he poetically called the “Pauli braiding test”, a “truly quantum” analogue of the BLR linearity test that amounts to doing two linearity tests in conjugate bases and piecing the results together into a robust test for {n}-qubit entanglement (I wrote about our work on this here).

At approximately the same time, Zhengfeng Ji had another wonderful idea that was in some sense orthogonal to our work. (My interpretation of) Zhengfeng’s idea is that one can see an interactive proof system as a computation (verifier-prover-verifier) and use Kitaev’s circuit-to-Hamiltonian construction to transform the entire computation into a “quantum CSP” (in the same sense that the local Hamiltonian problem is a quantum analogue of classical constraint satisfaction problems (CSP)) that could then itself be verified by a quantum multi-prover interactive proof system…with exponential gains in efficiency! Zhengfeng’s result implied an exponential improvement in complexity compared to the result by Julia and myself, showing inclusion of NEEXP, instead of NEXP, in \text{MIP}^\star. However, Zhengfeng’s technique suffered from the same exponentially small completeness-soundness gap as we had, so that the best lower bound on \text{MIP}^\star per se remained NEXP.

Both works led to follow-ups. With Natarajan we promoted the Pauli braiding test into a “quantum low-degree test” that allowed us to show the inclusion of QMA-EXP into \text{MIP}^\star, with constant gap, thereby finally answering the question posed by Aharonov 4 years after it was asked. (I should also say that by then all results on \text{MIP}^\star started relying on a sequence of parallel repetition results shown by Bavarian, Yuen, and others; I am skipping this part.) In parallel, with Ji, Fitzsimons, and Yuen we showed that Ji’s compression technique could be “iterated” an arbitrary number of times. In fact, by going back to “first principles” and representing verifiers uniformly as Turing machines we realized that the compression technique could be used iteratively to (up to small caveats) give a new proof of the fact (first shown by Slofstra using an embedding theorem for finitely presented groups) that the zero-gap version of \text{MIP}^\star contains the halting problem. In particular, the entangled value is uncomputable! This was not the first time that uncomputability crops up in a natural problem in quantum computing (e.g. the spectral gap paper), yet it still surprises when it shows up. Uncomputable! How can anything be uncomputable!

As we were wrapping up our paper Henry Yuen realized that our “iterated compression of interactive proof systems” was likely optimal, in the following sense. Even a mild improvement of the technique, in the form of a slower closing of the completeness-soundness gap through compression, would yield a much stronger result: undecidability of the constant-gap class \text{MIP}^\star. It was already known, by work of Navascues et al., Fritz, and others, that such a result would have consequences which, if not surprising, certainly seemed to take us out of our depth. In particular, undecidability of any language in \text{MIP}^\star would imply a negative resolution to a series of equivalent conjectures in functional analysis, from Tsirelson’s problem to Connes’ Embedding Conjecture through Kirchberg’s QWEP conjecture. While we liked our result, I don’t think that we believed it could resolve any conjecture(s) in functional analysis.

So we moved on. At least I moved on, I did some cryptography for a change. But Anand Natarajan and his co-author John Wright did not stop there. They had the last major insight in this story, which underlies their recent STOC best paper described in the previous post. Briefly, they were able to combine the two lines of work, by Natarajan & myself on low-degree testing and by Ji et al. on compression, to obtain a compression that is specially tailored to the existing \text{MIP}^\star protocol for NEXP and compresses that protocol without reducing its completeness-soundness gap. This then let them show Ji’s result that \text{MIP}^\star contains NEEXP, but this time with constant gap! The result received well-deserved attention. In particular, it is the first in this line of works to not suffer from any caveats (such as a closing gap, or randomized reductions, or some kind of “unfair” tweak on the model that one could attribute the gain in power to), and it implies an unconditional separation between MIP and \text{MIP}^\star.

As they were putting the last touches on their result, suddenly something happened, which is that a path towards a much bigger result opened up. What Natarajan & Wright had achieved is a one-step gapless compression. In our iterated compression paper we had observed that iterated gapless compression would lead to \text{MIP}^\star=\text{RE}, implying negative answers to the aforementioned conjectures. So then?

I suppose it took some more work, but in some way all the ideas had been laid out in the previous 15 years of work in the complexity of quantum interactive proof systems; we just had to put it together. And so a decade after the characterization QIP = PSPACE of single-prover quantum interactive proof systems, we have arrived at a characterization of quantum multiprover interactive proof systems, \text{MIP}^\star = \text{RE}. With one author in common between the two papers: congratulations Zhengfeng!

Even though we just posted a paper, in a sense there is much more left to do. I am hopeful that our complexity-theoretic result will attract enough interest from the mathematics community, and especially operator algebraists, for whom Connes’ embedding problem is a central question, that some of them will be willing to devote time to understanding the result. I also recognize that much effort is needed on our own side to make it accessible in the first place! I don’t doubt that eventually complexity theory will not be needed to obtain the purely mathematical consequences; yet I am hopeful that some of the ideas may eventually find their way into the construction of interesting mathematical objects (such as, who knows, a non-hyperlinear group).

That was a good Masters project…thanks Julia!

The Quantum Wave in Computing

Summer is a great time for academics. Imagine: three full months off! Hit the beach. Tune that golf pitch. Hike the Sierras. Go on a cruise. Watch soccer with the brasileiros (there have been better years for that one). Catch the sunset by the Sydney Opera House. Take a nap.


A visiting researcher taking full advantage of the Simons Institute’s world-class relaxation facilities. And yes, I bet you he’s proving a theorem at the same time.

Think that’s outrageous? We have it even better. Not only do we get to travel the globe worry-free, but we prove theorems while doing it. For some of us summer is the only time of year when we manage to prove theorems. Ideas accumulate during the year, blossom during the conferences and workshops that mark the start of the summer, and hatch during the few weeks that many of us set aside as “quiet time” to finally “wrap things up”.

I recently had the pleasure of contributing to the general well-being of my academic colleagues by helping to co-organize (with Andrew Childs, Ignacio Cirac, and Umesh Vazirani) a 2-month long program on “Challenges in Quantum Computation” at the Simons Institute in Berkeley. In this post I report on the program and describe one of the highlights discussed during it: Mahadev’s very recent breakthrough on classical verification of quantum computation.

Challenges in Quantum Computation

The Simons Institute has been in place on the UC Berkeley campus since the Fall of 2013, and in fact one of their first programs was on “Quantum Hamiltonian Complexity”, in Spring 2014 (see my account of one of the semester’s workshops here). Since then the institute has been hosting a pair of semester-long programs at a time, in all areas of theoretical computer science and neighboring fields. Our “summer cluster” had a slightly different flavor: shorter, smaller, it doubled up as the prelude to a full semester-long program scheduled for Spring 2020 (provisional title: The Quantum Wave in Computing, a title inspired by Umesh Vazirani’s recent tutorial at STOC’18 in Los Angeles) — (my interpretation of) the idea being that the ongoing surge in experimental capabilities supports a much broader overhaul of some of the central questions of computer science, from the more applied (such as programming languages and compilers) to the most theoretical (such as which complexity classes play the most central role).

This summer’s program hosted a couple dozen participants at a time. Some stayed for the full 2 months, while others visited for shorter times. The Simons Institute is a fantastic place for collaborative research. The three-story building is entirely devoted to us. There are pleasant yet not-too-comfortable shared offices, but the highlight is the two large communal rooms meant for organized and spontaneous discussion. Filled with whiteboards, bright daylight, comfy couches, a constant supply of tea, coffee, and cookies, and eager theorists!

After a couple weeks of settling down the program kicked off with an invigorating workshop. Our goal for the workshop was to frame the theoretical questions raised by the sudden jump in the capabilities of experimental quantum devices that we are all witnessing. There were talks describing progress in experiments (superconducting qubits, ion traps, and cold atoms were represented), suggesting applications for the new devices (from quantum simulation & quantum chemistry to quantum optimization and machine learning through “quantum supremacy” and randomness generation), and laying the theoretical framework for trustworthy interaction with the quantum devices (interactive proofs, testing, and verifiable delegation). We had an outstanding line-up of speakers. All talks (except the panel discussions, unfortunately) were recorded, and you can watch them here.

The workshop was followed by five additional weeks of “residency”, that allowed long-term participants to digest and develop the ideas presented during the workshop. In my experience these few additional weeks, right after the workshop, make all the difference. It is the same difference as between a quick conference call and a leisurely afternoon at the whiteboard: while the former may feel productive and bring the adrenaline levels up, the latter is more suited to in-depth exploration and unexpected discoveries.

There would be much to say about the ideas discussed during the workshop and following weeks. I will describe a single one of these ideas — in my opinion, one of the most outstanding ideas to have emerged at the interface of quantum computing and theoretical computer science in recent years! The result, “Classical Verification of Quantum Computations”, is by Urmila Mahadev, a Ph.D. student at UC Berkeley (I think she just graduated). Urmila gave a wonderful talk on her result at the workshop, and I highly recommend watching the recorded video. In the remainder of this post I’ll provide an overview of the result. I also wrote a slightly more technical introduction that eager readers will find here.

A cryptographic leash on quantum systems

Mahadev’s result is already famous: announced on the blog of Scott Aaronson, it has earned her a long-standing $25 prize, awarded for “solving the problem of proving the results of an arbitrary quantum computation to a classical skeptic”. Or, in complexity-theoretic lingo, for showing that “every language in the class BQP admits an interactive protocol where the prover is in BQP and the verifier is in BPP”. What does this mean?

Verifying quantum computations in the high complexity regime

On his blog Scott Aaronson traces the question back to a talk given by Daniel Gottesman in 2004. An eloquent formulation appears in a subsequent paper by Dorit Aharonov and Umesh Vazirani, aptly titled “Is Quantum Mechanics Falsifiable? A computational perspective on the foundations of Quantum Mechanics”.

Here is the problem. As readers of this blog are well aware, Feynman’s idea of a quantum computer, and the subsequent formalization by Bernstein and Vazirani of the Quantum Turing Machine, laid the theoretical foundation for the construction of computing devices whose inner functioning is based on the laws of quantum physics. Most readers also probably realize that we currently believe that these quantum devices will have the ability to efficiently solve computational problems (the class of which is denoted BQP) that are thought to be beyond the reach of classical computers (represented by the class BPP). A prominent example is factoring, but there are many others. The most elementary example is arguably Feynman’s original proposal: a quantum computer can be used to simulate the evolution of any quantum mechanical system “in real time”. In contrast, the best classical simulations available can take exponential time to converge even on concrete examples of practical interest. This places a computational impediment to scientific progress: the work of many physicists, chemists, and biologists, would be greatly sped up if only they could perform simulations at will.

So this hypothetical quantum device claims (or will likely claim) that it has the ability to efficiently solve computational problems for which there is no known efficient classical algorithm. Not only this but, as is widely believed in complexity-theoretic circles (a belief recently strengthened by the proof of an oracle separation between BQP and PH by Raz and Tal), for some of these problems, even given the answer, there does not exist a classical proof that the answer is correct. The quantum device’s claim cannot be verified! This seems to place the future of science at the mercy of an ingenious charlatan with good enough design & marketing skills, who would convince us that they are providing the solution to exponentially complex problems by throwing stardust in our eyes. (Wait, did this happen already?)

Today is the most exciting time in quantum computing since the discovery of Shor’s algorithm for factoring: while we’re not quite ready to run that particular algorithm yet, experimental capabilities have ramped up to the point where we are just about to probe the “high-complexity” regime of quantum mechanics, by making predictions that cannot be emulated, or even verified, using the most powerful classical supercomputers available. What confidence will we have that the predictions have been obtained correctly? Note that this question is different from the question of testing the validity of the theory of quantum mechanics itself. The result presented here assumes the validity of quantum mechanics. What it offers is a method to test, assuming the correctness of quantum mechanics, that a device performs the calculation that it claims to have performed. If the device has supra-quantum powers, all bets are off. Even assuming the correctness of quantum mechanics, however, the device may, intentionally or not (e.g. due to faulty hardware), mislead the experimentalist. This is the scenario that Mahadev’s result aims to counter.

Interactive proofs

The first key idea is to use the power of interaction. The situation can be framed as follows: given a certain computation, such that a device (henceforth called “prover”) has the ability to perform the computation, but another entity, the classical physicist (henceforth called “verifier”) does not, is there a way for the verifier to extract the right answer from the prover with high confidence — given that the prover may not be trusted, and may attempt to use its superior computing power to mislead the verifier instead of performing the required computation?

The simplest scenario would be one where the verifier can execute the computation herself, and check the prover’s outcome. The second simplest scenario is one where the verifier cannot execute the computation, but there is a short proof that the prover can provide that allows her to fully certify the outcome. These two scenarios correspond to problems in BPP and NP respectively; an example of the latter is factoring. As argued earlier, not all quantum computations (BQP) are believed to fall within these two classes. Both direct computation and proof verification are ruled out. What can we do? Use interaction!

The framework of interactive proofs originates in complexity theory in the 1980s. An interactive proof is a protocol through which a verifier (typically a computationally bounded entity, such as the physicist and her classical laptop) interacts with a more powerful, but generally untrusted, prover (such as the experimental quantum device). The goal of the protocol is for the verifier to certify the validity of a certain computational statement.

Here is a classical example (the expert — or impatient — reader may safely skip this). The example is for a problem that lies in co-NP, but is not known to lie in NP. Suppose that both the verifier and prover have access to two graphs, {G} and {H}, such that the verifier wishes to raise an “ACCEPT” flag if and only if the two graphs are not isomorphic. In general this is a hard decision to make, because the verifier would have to check all possible mappings from one graph to the other, of which there are exponentially many. Here is how the verifier can extract the correct answer by interacting with a powerful, untrusted prover. First, the verifier flips a fair coin. If the coin comes up heads, she selects a random relabeling of the vertices of {G}. If the coin comes up tails, she selects a random relabeling of the vertices of {H}. The verifier then sends the relabeled graph to the prover, and asks the prover to guess which graph the verifier has hidden. If the prover provides the correct answer (easy to check), the verifier concludes that the graphs were not isomorphic. Otherwise, she concludes that they were isomorphic. It is not hard to see that, if the graphs are indeed not isomorphic, the prover always has a means to correctly identify the hidden graph, and convince the verifier to make the right decision. But if the graphs are isomorphic, then there is no way for the prover to distinguish the random relabelings (since the distributions obtained by randomly relabeling each graph are identical), and so the verifier makes the right decision with probability 1/2. Repeating the protocol a few times, with a different choice of relabeling each time, quickly drives the probability of making an error to {0}.
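
To make this concrete, here is a toy simulation of the protocol (my own sketch, with an unrealistically small instance and a brute-force stand-in for the all-powerful prover; the function names and the two example graphs are mine).

```python
# Toy simulation of the graph non-isomorphism interactive proof.
# The verifier hides one of the two graphs behind a random relabeling;
# an all-powerful prover (here: brute force) tries to tell which one it received.
import random
from itertools import permutations

def relabel(edges, perm):
    """Apply a vertex relabeling to an edge set (edges = set of frozenset pairs)."""
    return frozenset(frozenset({perm[u], perm[v]}) for u, v in (tuple(e) for e in edges))

def isomorphic(e1, e2, n):
    """Brute-force isomorphism check: the unbounded prover's subroutine."""
    return any(relabel(e1, p) == e2 for p in permutations(range(n)))

def run_protocol(G, H, n, rounds=20):
    """Return True iff the verifier accepts 'G and H are not isomorphic'."""
    for _ in range(rounds):
        bit = random.randint(0, 1)                        # verifier's secret coin
        perm = list(range(n)); random.shuffle(perm)
        scrambled = relabel(G if bit == 0 else H, perm)   # message sent to the prover
        guess = 0 if isomorphic(G, scrambled, n) else 1   # prover's best strategy
        if guess != bit:
            return False   # prover caught: only possible when G and H are isomorphic
    return True

n = 4
G = frozenset(map(frozenset, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # 4-cycle
H = frozenset(map(frozenset, [(0, 1), (1, 2), (2, 0), (0, 3)]))   # triangle plus a pendant edge
print(run_protocol(G, H, n))   # True: the prover always identifies the hidden graph
```

With non-isomorphic graphs the prover identifies the hidden graph in every round; had the graphs been isomorphic, any prover strategy would be caught in each round with probability 1/2, so twenty rounds drive the probability of fooling the verifier below one in a million.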

A deep result from the 1990s exactly characterizes the class of computational problems (languages) that a classical polynomial-time verifier can decide in this model: IP = PSPACE. In words, any problem whose solution can be found in polynomial space has an interactive proof in which the verifier only needs polynomial time. Now observe that PSPACE contains NP, and much more: in fact PSPACE contains BQP as well (and even QMA)! (See this nice recent article in Quanta for a gentle introduction to these complexity classes, and more.) Thus any problem that can be decided on a quantum computer can also be decided without a quantum computer, by interacting with a powerful entity, the prover, that can convince the verifier of the right answer without being able to lead her into error (in spite of the prover’s greater power).

Are we not done? We’re not! The problem is that the result PSPACE = IP, even when specialized to BQP, requires (for all we know) a prover whose power matches that of PSPACE (almost: see e.g. this recent result for a slightly more efficient prover). And as much as our experimental quantum device inches towards the power of BQP, we certainly wouldn’t dare ask it to perform a PSPACE-hard computation. So even though in principle there do exist interactive proofs for BQP-complete languages, these interactive proofs require a prover whose computational power goes much beyond what we believe is physically achievable. But that’s useless (for us): back to square zero.

Interactive proofs with quantum provers

Prior to Mahadev’s result, a sequence of beautiful results in the late 2000s introduced a clever extension of the model of interactive proofs by allowing the verifier to make use of a very limited quantum computer. For example, the verifier may have the ability to prepare single qubits in two possible bases of her choice, one qubit at a time, and send them to the prover. Or the verifier may have the ability to receive single qubits from the prover, one at a time, and measure them in one of two bases of her choice. In both cases it was shown that the verifier could combine such limited quantum capacity with the possibility to interact with a quantum polynomial-time prover to verify arbitrary polynomial-time quantum computations. The idea for the protocols crucially relied on the ability of the verifier to prepare qubits in a way that any deviation by the prover from the prescribed honest behavior would be detected (e.g. by encoding information in mutually unbiased bases unknown to the prover). For a decade the question remained open: can a completely classical verifier certify the computation performed by a quantum prover?

Mahadev’s result brings a positive resolution to this question. Mahadev describes a protocol with the following properties. First, as expected, for any quantum computation, there is a quantum prover that will convince the classical verifier of the right outcome for the computation. This property is called completeness of the protocol. Second, no prover can convince the classical verifier to accept a wrong outcome. This property is called soundness of the protocol. In Mahadev’s result the latter property comes with a twist: soundness holds provided the prover cannot break post-quantum cryptography. In contrast, the earlier results mentioned in the previous paragraph obtained protocols that were sound against an arbitrarily powerful prover. The additional cryptographic assumption gives Mahadev’s result a “win-win” flavor: either the protocol is sound, or someone in the quantum cloud has figured out how to break an increasingly standard cryptographic assumption (namely, post-quantum security of the Learning With Errors problem) — in all cases, a verified quantum feat!

In the remainder of this post I will give a high-level overview of Mahadev’s protocol and its analysis. For more detail, see the accompanying blog post.

The protocol is constructed in two steps. The first step builds on insights from works preceding this one. This step reduces the problem of verifying the outcome of an arbitrary quantum computation to a seemingly much simpler problem, that nevertheless encapsulates all the subtlety of the verification task. The problem is the following — in keeping with the terminology employed by Mahadev, I’ll call it the qubit commitment problem. Suppose that a prover claims to have prepared a single-qubit state of its choice; call it {| \psi \rangle} ({| \psi \rangle} is not known to the verifier). Suppose the verifier challenges the prover for the outcome of a measurement performed on {| \psi \rangle}, either in the computational basis (the eigenbasis of the Pauli Z), or in the Hadamard basis (the eigenbasis of the Pauli X). Which basis to use is the verifier’s choice, but of course only one basis can be asked. Does there exist a protocol that guarantees that, at the end of the protocol, the verifier will be able to produce a bit that matches the true outcome of a measurement of {| \psi \rangle} in the chosen basis? (More precisely, it should be that the verifier’s final bit has the same distribution as the outcome of a measurement of {| \psi \rangle} in the chosen basis.)

The reduction that accomplishes this first step combines Kitaev’s circuit-to-Hamiltonian construction with some gadgetry from perturbation theory, and I will not describe it here. An important property of the reduction is that it is ultimately sufficient that the verifier has the guarantee that the measurement outcomes she obtains in either case, computational or Hadamard, are consistent with measurement outcomes for the correct measurements performed on some quantum state. In principle the state does not need to be related to anything the prover does (though of course the analysis will eventually define that state from the prover), it only needs to exist. Specifically, we wish to rule out situations where e.g. the prover claims that both outcomes are deterministically “0”, a claim that would violate the uncertainty principle. (For the sake of the argument, let’s ignore that in the case of a single qubit the space of outcomes allowed by quantum mechanics can be explicitly mapped out — in the actual protocol, the prover commits to {n} qubits, not just one.)
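
As a quick sanity check of why such a joint claim is inconsistent with quantum mechanics, the following numerical experiment (my own sketch, not part of the protocol) searches over random single-qubit states and confirms that the sum of the two probabilities of obtaining outcome 0, in the computational basis and in the Hadamard basis, never exceeds {1 + 1/\sqrt{2} \approx 1.71}; in particular the two probabilities cannot both equal 1.

```python
# Numerical check that no single-qubit state yields outcome 0 with certainty in both
# the computational (Z) basis and the Hadamard (X) basis: the maximum of
# Pr[Z outcome 0] + Pr[X outcome 0] over all states is 1 + 1/sqrt(2), not 2.
import numpy as np

rng = np.random.default_rng(0)
plus = np.array([1, 1]) / np.sqrt(2)                     # |+>, the outcome-0 state in the X basis

best = 0.0
for _ in range(100_000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)   # random single-qubit state
    psi /= np.linalg.norm(psi)
    p_z0 = abs(psi[0]) ** 2                              # Pr[outcome 0 in the Z basis]
    p_x0 = abs(plus.conj() @ psi) ** 2                   # Pr[outcome 0 in the X basis]
    best = max(best, p_z0 + p_x0)

print(best, "<=", 1 + 1 / np.sqrt(2))                    # numerically approaches 1 + 1/sqrt(2)
```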

Committing to a qubit

The second step of the protocol construction introduces a key idea. In order to accomplish the sought-after commitment, the verifier is going to engage in an initial commitment phase with the prover. In this phase, the prover is required to provide classical information to the verifier, that “commits” it to a specific qubit. This committed qubit is the state on which the prover will later perform the measurement asked by the verifier. The classical information obtained in the commitment phase will bind the prover to reporting the correct outcome, for both of the verifier’s basis choices — or risk being caught cheating.

How does this work? Commitments to bits, or even qubits, are an old story in cryptography. The standard method for committing to a bit {b} is based on the use of a one-way permutation {f}, together with a hardcore predicate {h} for {f} (i.e. an efficiently computable function {h: \{0,1\}^n\rightarrow \{0,1\}} such that given {f(x)}, it is hard to predict {h(x)}). The construction goes as follows. The committer selects a uniformly random string {r} and sends {(y,m)=(f(r),h(r)\oplus b)}. To unveil the commitment {b}, it is enough to reveal a string {r} such that {f(r)=y}; the receiver can then compute {h(r)} and decode {b=h(r)\oplus m}. The point is that since {f} is a permutation, the value {y} uniquely “commits” the sender to an {r}, and thus to a {b}; however, given {y=f(r)} for an unknown {r} the hardcore predicate {h(r)} looks uniformly random, thus {(y,m)} reveals no information about {b} to the receiver.
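
Here is a toy walk-through of that commit/unveil mechanic (my own sketch). The stand-in for the one-way permutation is discrete exponentiation modulo a tiny prime, and the stand-in for the hardcore predicate is an arbitrary fixed bit parity; both are of course completely insecure at this size and are chosen only to make the two phases runnable.

```python
# Toy commit/unveil mechanics for the scheme commit(b) = (f(r), h(r) XOR b).
# f and h below are insecure stand-ins for a one-way permutation and a hardcore
# predicate; only the structure of the two phases matters here.
import random

p, g = 1019, 2            # toy prime and generator -- NOT secure parameters

def f(r):                 # stand-in "one-way permutation": r -> g^r mod p on {1, ..., p-1}
    return pow(g, r, p)

def h(r):                 # stand-in "hardcore predicate": parity of a fixed subset of bits of r
    return bin(r & 0b101010101).count("1") % 2

def commit(b):
    r = random.randrange(1, p)
    return (f(r), h(r) ^ b), r          # (commitment (y, m), opening information r)

def unveil(commitment, r):
    y, m = commitment
    assert f(r) == y, "opening does not match the commitment"
    return h(r) ^ m                     # receiver recovers b = h(r) XOR m

b = 1
com, r = commit(b)
print(unveil(com, r) == b)              # True: the committed bit is recovered
```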

What is new in Mahadev’s scheme is not only that the commitment is to a qubit, instead of a bit, but even more importantly that the commitment is provided by classical information, which is necessary to obtain a classical protocol. (Commitments to qubits, using qubits, can be obtained by combining the quantum one-time pad with the commitment scheme described above.) To explain how this is achieved we’ll need a slightly more advanced cryptographic primitive: a pair of injective trapdoor one-way functions {f_0,f_1:\{0,1\}^n\rightarrow\{0,1\}^n}. This means that it is easy to evaluate both functions on any input, but that given a value {y} in their common range, it is hard to find a preimage of {y} under either function — except if one is given the trapdoor information. (Note that this is an over-simplification of the actual primitive used by Mahadev, which has additional properties, including that of being “claw-free”.)

The commitment phase of the protocol works as follows. Starting from a state {| \psi \rangle=\alpha| 0 \rangle+\beta| 1 \rangle} of its choice, the prover is supposed to perform the following steps. First, the prover creates a uniform superposition over the common domain of {f_0} and {f_1}. Then it evaluates either function, {f_0} or {f_1}, in an additional register, by controlling on the qubit of {| \psi \rangle}. Finally, the prover measures the register that contains the image of {f_0} or {f_1}. This achieves the following sequence of transformations:

\displaystyle \begin{array}{rcl} \alpha| 0 \rangle+\beta| 1 \rangle &\mapsto& (\alpha| 0 \rangle + \beta| 1 \rangle) \otimes \Big(2^{-n/2} \sum_{x\in\{0,1\}^n} | x \rangle\Big) \\ &\mapsto & 2^{-n/2} \sum_x \big(\alpha | 0 \rangle| x \rangle| f_0(x) \rangle + \beta | 1 \rangle| x \rangle| f_1(x) \rangle\big)\\ &\mapsto & \big(\alpha| 0 \rangle| x_0 \rangle+\beta| 1 \rangle| x_1 \rangle\big)| y \rangle\;, \end{array}

where {y\in\{0,1\}^n} is the measured image. The string {y} is called the prover’s commitment string. The prover is required to report it to the verifier.
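
To watch the three displayed steps happen, here is a small state-vector simulation (my own sketch, not code from the paper). The pair {f_0(x)=x}, {f_1(x)=x\oplus s} is a toy stand-in: it has the injective-functions-with-a-common-range structure that the computation needs, but it is trivially invertible and in no way claw-free; the shift {s} plays the role of the trapdoor.

```python
# State-vector simulation of the commitment phase with toy functions
# f0(x) = x and f1(x) = x XOR s.  Watch the state collapse to
# alpha|0>|x0> + beta|1>|x1>, tensored with the measured image |y>.
import numpy as np

n, N = 3, 8                      # 3-bit registers
s = 0b101                        # "trapdoor": the claw for y is (y, y XOR s)
f = [lambda x: x, lambda x: x ^ s]

alpha, beta = 0.6, 0.8           # the prover's qubit alpha|0> + beta|1>

# Registers: qubit (dim 2) x input register (dim N) x image register (dim N).
state = np.zeros((2, N, N), dtype=complex)
for b, amp in [(0, alpha), (1, beta)]:
    for x in range(N):           # uniform superposition over the domain,
        img = f[b](x)            # then evaluate f_b into the image register
        state[b, x, img] += amp / np.sqrt(N)

# Measure the image register: sample an outcome y and renormalize.
probs = np.sum(np.abs(state) ** 2, axis=(0, 1))
y = np.random.default_rng(0).choice(N, p=probs)
post = state[:, :, y]
post /= np.linalg.norm(post)

x0, x1 = y, y ^ s                # preimages of y (known to the verifier via the trapdoor s)
print("amplitude on |0, x0>:", post[0, x0])   # ~ alpha
print("amplitude on |1, x1>:", post[1, x1])   # ~ beta
print("support size:", np.count_nonzero(np.abs(post) > 1e-9))   # 2
```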

In what sense is {y} a commitment to the state {| \psi \rangle}? The key point is that, once it has measured {y}, the prover has “lost control” over its qubit — it has effectively handed over that control to the verifier. For example, the prover no longer has the ability to perform an arbitrary rotation on its qubit. Why? The prover knows {y} (it had to report it to the verifier) but not {x_0} and {x_1} (this is the claw-free assumption on the pair {(f_0,f_1)}). What this means — though of course it has to be shown — is that the prover can no longer recover the state {| \psi \rangle}! It does not have the ability to “uncompute” {x_0} and {x_1}. Thus the qubit has been “set in cryptographic stone”. In contrast, the verifier can use the trapdoor information to recover {x_0} and {x_1}. This gives her extra leverage on the prover. This asymmetry, introduced by cryptography, is what eventually allows the verifier to extract a truthful measurement outcome from the prover (or detect lying).

It is such a wonderful idea! It stuns me every time Urmila explains it. Proving it is of course rather delicate. In this post I make an attempt at going into the idea in a little more depth. The best resource remains Urmila’s paper, as well as her talk at the Simons Institute.

Open questions

What is great about this result is not that it closes a decades-old open question, but that by introducing a truly novel idea it opens up a whole new field of investigation. Some of the ideas that led to the result were already fleshed out by Mahadev in her work on homomorphic encryption for quantum circuits, and I expect many more results to continue building on these ideas.

An obvious outstanding question is whether the cryptography is needed at all: could there be a scheme achieving the same result as Mahadev’s, but without computational assumptions on the prover? It is known that if such a scheme exists, it is unlikely to have the property of being blind, meaning that the prover learns nothing about the computation that the verifier wishes it to execute (aside from an upper bound on its length); see this paper for “implausibility” results in this direction. Mahadev’s protocol relies on “post-hoc” verification, and is not blind. Urmila points out that it is likely the protocol could be made blind by composing it with her protocol for homomorphic encryption. Could there be a different protocol, that would not go through post-hoc verification, but instead directly guide the prover through the evaluation of a universal circuit on an encrypted input, gate by gate, as did some previous works?