Peeking into the world of quantum intelligence

Intelligent beings have the ability to receive, process, and store information and, based on the processed information, to predict what will happen in the future and act accordingly.

An illustration of receiving, processing, and storing information. Based on the processed information, one can make predictions about the future.
[Credit: Claudia Cheng]

We, as intelligent beings, receive, process, and store classical information. The information comes from vision, hearing, smell, and tactile sensing. The data is encoded as analog classical information in the electrical pulses sent through our nerve fibers. Our brain processes this information classically through neural circuits (at least that is our current understanding, but one should check out this blog post). We then store this processed classical information in our hippocampus, which allows us to retrieve it later and combine it with future information that we obtain. Finally, we use the stored classical information to make predictions about the future (imagining or predicting the outcomes of performing a certain action) and choose the action that would most likely be in our favor.

Such abilities have enabled us to make remarkable accomplishments: soaring in the sky by constructing accurate models of how air flows around objects, or building weak forms of intelligent beings capable of holding basic conversations and playing different board games. Instead of receiving, processing, and storing classical information, one could imagine some form of quantum intelligence that deals with quantum information instead of classical information. These quantum beings can receive quantum information through quantum sensors built up from tiny photons and atoms. They would then process this quantum information with quantum-mechanical evolutions (such as quantum computers), and store the processed qubits in a quantum memory (protected with a surface code or a toric code).

A caricature of human intelligence (dating to long before 1950), artificial intelligence (which began in the 1950s), and the emergence of quantum intelligence.
[Credit: Claudia Cheng]

It is natural to wonder what a world of quantum intelligence would be like. While we have never encountered such a strange creature in the real world (yet), the mathematics of quantum mechanics, machine learning, and information theory allow us to peek into what such a fantastic world would be like. The physical world we live in is intrinsically quantum. So one may imagine that a quantum being is capable of making more powerful predictions than a classical being. Maybe he/she/they could better predict events that happened further away, such as telling us how a distant black hole was engulfing another? Or perhaps he/she/they could improve our lives, for example by presenting us with an entirely new approach for capturing energy from sunlight?

One may be skeptical about finding quantum intelligent beings in nature (and rightfully so). But it may not be so absurd to synthesize a weak form of quantum (artificial) intelligence in an experimental lab, or to enhance our classical human intelligence with quantum devices to approximate a quantum-mechanical being. Many famous companies, like Google, IBM, Microsoft, and Amazon, as well as many academic labs and startups, have been building better quantum machines/computers day by day. By combining the concepts of machine learning on classical computers with these quantum machines, the future of us interacting with some form of quantum (artificial) intelligence may not be so distant.

Before that day comes, could we peek into the world of quantum intelligence? And could we better understand how much more powerful quantum intelligence could be than classical intelligence?

A cartoon depiction of me (Left), Richard Kueng (Middle), and John Preskill (Right).
[Credit: Claudia Cheng]

In a recent publication [1], my advisor John Preskill, my good friend Richard Kueng, and I made some progress toward these questions. We consider a quantum-mechanical world in which classical beings can obtain classical information by measuring the world (performing POVM measurements), while quantum beings can retrieve quantum information through quantum sensors and store the data in a quantum memory. We study how much better quantum beings can learn from the physical world than classical beings, where the goal is to accurately predict the outcomes of unseen events (with the focus on the number of interactions with the physical world rather than the computation time). We cast these problems in a rigorous mathematical framework and utilize high-dimensional probability and quantum information theory to understand the respective prediction power of the two kinds of beings. Rigorously, one refers to a classical/quantum being as a classical/quantum model, algorithm, protocol, or procedure, because the actions of these beings are the center of the mathematical analysis.

Formally, we consider the task of learning an unknown physical evolution described by a CPTP map \mathcal{E} that takes an n-qubit state to an m-qubit state. The classical model can select an arbitrary classical input to the CPTP map and measure the output state with some POVM. The quantum model can access the CPTP map coherently and obtain quantum data from each access, which is equivalent to composing multiple CPTP maps with quantum computations in order to learn about the map. The task is to predict a property of the output state \mathcal{E}(\lvert x \rangle\!\langle x \rvert), given by \mathrm{Tr}(O \mathcal{E}(\lvert x \rangle\!\langle x \rvert)), for a new classical input x \in \{0, 1\}^n. The goal is to achieve the task while accessing \mathcal{E} as few times as possible (i.e., with as few interactions or experiments in the physical world as possible). We denote the numbers of interactions needed by the classical and quantum models as N_{\mathrm{C}} and N_{\mathrm{Q}}, respectively.
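
To make the two access models concrete, here is a minimal numerical sketch (my own toy illustration, not the construction analyzed in [1]): a single-qubit depolarizing channel plays the role of \mathcal{E}, the classical input x \in \{0, 1\} selects the state \lvert x \rangle\!\langle x \rvert, and the classical model estimates \mathrm{Tr}(Z \mathcal{E}(\lvert x \rangle\!\langle x \rvert)) by repeating the experiment and averaging Z-basis measurement outcomes. The channel, the observable, the shot count, and the helper names are all arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli-Z observable and the two computational-basis states of a single qubit.
Z = np.diag([1.0, -1.0])
ket = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

def depolarizing(rho, p=0.3):
    """A toy CPTP map E: mix the input state with the maximally mixed state."""
    return (1 - p) * rho + p * np.eye(2) / 2

def classical_experiment(x, shots=1000):
    """Classical access model: feed the classical input x into E, measure the
    output in the Z basis (a POVM), and average the +/-1 outcomes to estimate
    Tr(Z E(|x><x|))."""
    rho_out = depolarizing(np.outer(ket[x], ket[x]))
    prob_0 = float(np.real(rho_out[0, 0]))            # Born rule for outcome |0>
    outcomes = rng.choice([1, -1], size=shots, p=[prob_0, 1 - prob_0])
    return outcomes.mean()

for x in (0, 1):
    exact = np.trace(Z @ depolarizing(np.outer(ket[x], ket[x]))).real
    print(f"x = {x}: estimate {classical_experiment(x):+.3f}, exact {exact:+.3f}")
```

A quantum model would instead route each output qubit into a quantum memory and process many outputs coherently before measuring; that extra flexibility is precisely what the bounds below constrain.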

In general, quantum models can learn from fewer interactions with the physical world (fewer experiments) than classical models, because coherently stored quantum information facilitates better synthesis of the information obtained across experiments. Nevertheless, in [1], we show that there is a fundamental limit to how much more efficient quantum models can be. In order to achieve a prediction error

\mathbb{E}_{x \sim \mathcal{D}} |h(x) -  \mathrm{Tr}(O \mathcal{E}(\lvert x \rangle\!\langle x \rvert))| \leq \mathcal{O}(\epsilon),

where h(x) is the hypothesis learned from the classical/quantum model and \mathcal{D} is an arbitrary distribution over the input space \{0, 1\}^n, we found that the speed-up N_{\mathrm{C}} / N_{\mathrm{Q}} is upper bounded by m / \epsilon, where m > 0 is the number of qubits each experiment provides (the output number of qubits in the CPTP map \mathcal{E}), and \epsilon > 0 is the desired prediction error (smaller \epsilon means we want to predict more accurately).
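
To put numbers to the bound (my own worked instance, not an example from the paper): if each experiment yields m = 1 output qubit and we target an average prediction error of \epsilon = 0.01, then N_{\mathrm{C}} / N_{\mathrm{Q}} can be at most of order m / \epsilon = 100; demanding \epsilon = 0.001 only raises that ceiling to 1000. The possible savings grow as the accuracy requirement sharpens, but only polynomially, never exponentially, in this average-case setting.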

In contrast, when we want to accurately predict all unseen events, we prove that quantum models could use exponentially fewer experiments than classical models. We give a construction for predicting properties of quantum systems showing that quantum models could substantially outperform classical models. These rigorous results show that quantum intelligence shines when we seek stronger prediction performance.

We have only scratched the surface of what is possible with quantum intelligence. As the future unfolds, I am hopeful that we will discover more that can be done only by quantum intelligence, through mathematical analysis, rigorous numerical studies, and physical experiments.

Further information:

  • A classical model that can be used to accurately predict properties of quantum systems is the classical shadow formalism [2] that we proposed a year ago; a minimal single-qubit sketch in code appears after this list. In many tasks, this model can be shown to be one of the strongest rivals that quantum models have to surpass.
  • Even if a quantum model only receives and stores classical data, the ability to process the data using a quantum-mechanical evolution can still be advantageous [3]. However, obtaining a large advantage will be harder in this case, as the computational power in data can slightly boost classical machines/intelligence [3].
  • Another nice paper by Dorit Aharonov, Jordan Cotler, and Xiao-Liang Qi [4] also proved advantages of quantum models over classical ones in some classification tasks.
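
For readers who want to see the flavor of the classical shadow formalism in code, here is a minimal single-qubit sketch: measure the unknown state in a uniformly random Pauli basis, form the snapshot 3\lvert s \rangle\!\langle s \rvert - I from the observed post-measurement state \lvert s \rangle, and average \mathrm{Tr}(O \cdot \text{snapshot}) to estimate \mathrm{Tr}(O \rho). The full protocol in [2] tensors such snapshots across many qubits and adds median-of-means estimation; the test state, observable, and helper names below are mine.

```python
import numpy as np

rng = np.random.default_rng(1)

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def eigbasis(P):
    """Eigenvectors of a Pauli matrix as columns, ordered (+1 eigenvalue, -1)."""
    vals, vecs = np.linalg.eigh(P)
    return vecs[:, np.argsort(vals)[::-1]]

BASES = {"X": eigbasis(X), "Y": eigbasis(Y), "Z": eigbasis(Z)}

def shadow_estimate(rho, O, shots=20000):
    """Single-qubit classical shadows: measure rho in a uniformly random Pauli
    basis, form the snapshot 3|s><s| - I from the outcome state |s>, and
    average Tr(O @ snapshot) over shots to estimate Tr(O @ rho)."""
    total = 0.0
    for _ in range(shots):
        V = BASES[rng.choice(list(BASES))]              # random measurement basis
        probs = np.real([V[:, k].conj() @ rho @ V[:, k] for k in (0, 1)])
        probs = np.clip(probs, 0, None)
        k = rng.choice(2, p=probs / probs.sum())        # Born-rule outcome
        s = V[:, k]
        snapshot = 3 * np.outer(s, s.conj()) - I2
        total += np.real(np.trace(O @ snapshot))
    return total / shots

# Example: the state |+><+| has Tr(X rho) = 1 exactly.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_plus = np.outer(plus, plus.conj())
print(shadow_estimate(rho_plus, X))   # should print a number close to 1.0
```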

References:

[1] Huang, Hsin-Yuan, Richard Kueng, and John Preskill. “Information-Theoretic Bounds on Quantum Advantage in Machine Learning.” Physical Review Letters 126: 190505 (2021). https://doi.org/10.1103/PhysRevLett.126.190505

[2] Huang, Hsin-Yuan, Richard Kueng, and John Preskill. “Predicting many properties of a quantum system from very few measurements.” Nature Physics 16: 1050-1057 (2020). https://doi.org/10.1038/s41567-020-0932-7

[3] Huang, Hsin-Yuan, et al. “Power of data in quantum machine learning.” Nature Communications 12: 1-9 (2021). https://doi.org/10.1038/s41467-021-22539-9

[4] Aharonov, Dorit, Jordan Cotler, and Xiao-Liang Qi. “Quantum Algorithmic Measurement.” arXiv preprint arXiv:2101.04634 (2021).

What matters to me, and why?

Students at my college asked every Tuesday. They gathered in a white, windowed room near the center of campus. “We serve,” read advertisements, “soup, bread, and food for thought.” One professor or visitor would discuss human rights, family,  religion, or another pepper in the chili of life.

I joined occasionally. I listened by the window, in the circle of chairs that ringed the speaker. Then I ventured from college into physics.

The questions “What matters to you, and why?” have chased me through physics. I ask experimentalists and theorists, professors and students: Why do you do science? Which papers catch your eye? Why have you devoted to quantum information more years than many spouses devote to marriages?

One physicist answered with another question. Chris Jarzynski works as a professor at the University of Maryland. He studies statistical mechanics—how particles typically act and how often particles act atypically; how materials shine, how gases push back when we compress them, and more.

“How,” Chris asked, “should we quantify precision?”

Chris had in mind nonequilibrium fluctuation theorems. Out-of-equilibrium systems have large-scale properties, like temperature, that change significantly.1 Examples include white-bean soup cooling at a “What matters” lunch. The soup’s temperature drops to room temperature as the system approaches equilibrium.

Steaming soup

Nonequilibrium. Tasty, tasty nonequilibrium.

Some out-of-equilibrium systems obey fluctuation theorems. Fluctuation theorems are equations derived in statistical mechanics. Imagine a DNA molecule floating in a watery solution. Water molecules buffet the strand, which twitches. But the strand’s shape doesn’t change much. The DNA is in equilibrium.

You can grab the strand’s ends and stretch them apart. The strand will leave equilibrium as its length changes. Imagine pulling the strand to some predetermined length. You’ll have exerted energy.

How much? The amount will vary if you repeat the experiment. Why? This trial began with the DNA curled this way; that trial began with the DNA curled that way. During this trial, the water batters the molecule more; during that trial, less. These discrepancies block us from predicting how much energy you’ll exert. But suppose you pick a number W. We can form predictions about the probability that you’ll have to exert an amount W of energy.

How do we predict? Using nonequilibrium fluctuation theorems.

Fluctuation theorems matter to me, as Quantum Frontiers regulars know. Why? Because I’ve written enough fluctuation-theorem articles to test even a statistical mechanic’s patience. More seriously, why do fluctuation theorems matter to me?

Fluctuation theorems fill a gap in the theory of statistical mechanics. Fluctuation theorems relate nonequilibrium processes (like the cooling of soup) to equilibrium systems (like room-temperature soup). Physicists can model equilibrium. But we know little about nonequilibrium. Fluctuation theorems bridge from the known (equilibrium) to the unknown (nonequilibrium).

Bridge - theory

Experiments take place out of equilibrium. (Stretching a DNA molecule changes the molecule’s length.) So we can measure properties of nonequilibrium processes. We can’t directly measure properties of equilibrium processes, which we can’t perform experimentally. But we can measure an equilibrium property indirectly: We perform nonequilibrium experiments, then plug our data into fluctuation theorems.

Bridge - exprmt

Which equilibrium property can we infer about? A free-energy difference, denoted by ΔF. Every equilibrated system (every room-temperature soup) has a free energy F. F represents the energy that the system can exert, such as the energy available to stretch a DNA molecule. Imagine subtracting one system’s free energy, F1, from another system’s free energy, F2. The subtraction yields a free-energy difference, ΔF = F2 – F1. We can infer the value of a ΔF from experiments.
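
The fluctuation theorem most relevant to this bridging is one that Chris himself discovered, Jarzynski’s equality:

\langle e^{-W / k_B T} \rangle = e^{-\Delta F / k_B T},

where W is the work exerted in one trial, T is the temperature, k_B is Boltzmann’s constant, and the angle brackets denote an average over many repetitions of the nonequilibrium process (many stretchings of the DNA). Measure the work in each trial, average the exponentials, take a logarithm, and out pops the equilibrium ΔF.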

How should we evaluate those experiments? Which experiments can we trust, and which need repeating?

Those questions mattered little to me, before I met Chris Jarzynski. Bridging equilibrium with nonequilibrium mattered to me, and bridging theory with experiment. Not experimental nitty-gritty.

I deserved a dunking in white-bean soup.

Suppose you performed infinitely many trials—stretched a DNA molecule infinitely many times. In each trial, you measured the energy exerted. You processed your data, then substituted into a fluctuation theorem. You could infer the exact value of ΔF.

But we can’t perform infinitely many trials. Imprecision mars our inference about ΔF. How does the imprecision relate to the number of trials performed?2

Chris and I adopted an information-theoretic approach. We quantified precision with a parameter \delta. Suppose you want to estimate ΔF to within that precision. How many trials should you expect to need? We bounded the number N_\delta of trials, using an entropy. The bound tightens an earlier estimate of Chris’s. If you perform N_\delta trials, you can estimate ΔF with a percent error that we quantified. We illustrated our results by modeling a gas.
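
Here is a minimal numerical sketch of the finite-trial problem, assuming (purely for illustration) that each trial’s work is Gaussian, in units where k_B T = 1. For a Gaussian work distribution, Jarzynski’s equality fixes the exact ΔF, so we can watch the estimator’s typical error shrink as the number of trials grows. This is not the entropy-based bound from our paper, only a picture of the imprecision that the bound controls.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy model (units with k_B * T = 1): the work exerted in each pull of the
# DNA strand is Gaussian with mean mu and standard deviation sigma.
mu, sigma = 5.0, 1.0
dF_exact = mu - sigma**2 / 2   # what Jarzynski's equality gives for a Gaussian

def estimate_dF(n_trials):
    """Plug n_trials simulated work values into Jarzynski's equality:
    dF_estimate = -ln( average of exp(-W) )."""
    W = rng.normal(mu, sigma, size=n_trials)
    return -np.log(np.mean(np.exp(-W)))

for n in (10, 100, 1000, 10000):
    errors = [abs(estimate_dF(n) - dF_exact) for _ in range(200)]
    print(f"N = {n:>6} trials: typical error in dF ~ {np.mean(errors):.3f}")
```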

I’d never appreciated the texture and richness of precision. But richness precision has: A few decimal places distinguish Albert Einstein’s general theory of relativity from Isaac Newton’s 17th-century mechanics. Particle physicists calculate constants of nature to many decimal places. Such a calculation earned a nod on physicist Julian Schwinger’s headstone. Precision serves as the bread and soup of much physics. I’d sniffed the importance of precision, but not tasted it, until questioned by Chris Jarzynski.

Schwinger headstone

The questioning continues. My college has discontinued its “What matters” series. But I ask scientist after scientist—thoughtful human being after thoughtful human being—“What matters to you, and why?” Asking, listening, reading, calculating, and self-regulating sharpen my answers to those questions. My answers often squish beneath the bread knife in my cutlery drawer of criticism. Thank goodness that repeating trials can reduce our errors.

Bread knife

1Or large-scale properties that will change. Imagine connecting the ends of a charged battery with a wire. Charge will flow from terminal to terminal, producing a current. You can measure, every minute, how quickly charge is flowing: You can measure how much current is flowing. The current won’t change much, for a while. But the current will die off as the battery nears depletion. A large-scale property (the current) appears constant but will change. Such a capacity to change characterizes nonequilibrium steady states (NESSes). NESSes form our second example of nonequilibrium states. Many-body localization forms a third, quantum example.

2Readers might object that scientists have tools for quantifying imprecision. Why not apply those tools? Because ΔF equals a logarithm, which is nonlinear. Other authors’ proposals appear in references 1-13 of our paper. Charlie Bennett addressed a related problem with his “acceptance ratio.” (Bennett also blogged about evil on Quantum Frontiers last month.)

Life, cellular automata, and mentoring

One night last July, IQIM postdoc Ning Bao emailed me a photo. He’d found a soda can that read, “Share a Coke with Patrick.”

Ning and I were co-mentoring two Summer Undergraduate Research Fellows, or SURFers. One mentee received Ning’s photo: Caltech physics major Patrick Rall.

“Haha,” Patrick emailed back. “I’ll share a Coke.”

Patrick, Ning, and I shared the intellectual equivalent of a six-pack last summer. We shared papers, meals, frustrations, hopes, late-night emails (from Patrick and Ning), 7-AM emails (from me), and webcomic strips. Now a senior, Patrick is co-authoring a paper about his SURF project.

The project grew from the question “What would happen if we quantized Conway’s Game of Life?” (For readers unfamiliar with the game, I’ll explain below.) Lessons we learned about the Game of Life overlapped with lessons I learned about life, as a first-time mentor. The soda fountain of topics contained the following flavors.

Patrick + Coke

Update rules: Till last spring, I’d been burrowing into two models for out-of-equilibrium physics. PhD students burrow as no prairie dogs can. But, given five years in Caltech’s grassland, I wanted to explore. I wanted an update.

Ning and I had trespassed upon quantum game theory months earlier. Consider a nonquantum game, such as the Prisoner’s Dilemma or an election. Suppose that players have physical systems, such as photons (particles of light), that occupy superposed or entangled states. These quantum resources can change the landscape of the game’s possible outcomes. These changes clarify how we can harness quantum mechanics to process, transmit, and secure information.

How might quantum resources change Conway’s Game of Life, or GoL? British mathematician John Conway invented the game in 1970. Imagine a square board divided into smaller squares, or cells. On each cell sits a white or a black tile. Black represents a living organism; white represents a lack thereof.

Conway modeled population dynamics with an update rule. If prairie dogs overpopulate a field, some die from overcrowding. If a black cell borders more than three black neighbors, a white tile replaces the black. If separated from its pack, a prairie dog dies from isolation. If a black tile borders too few black neighbors, we exchange the black for a white. Mathematics columnist Martin Gardner detailed the rest of Conway’s update rule in this 1970 article.

Updating the board repeatedly evolves the population. Black and white shapes might flicker and undulate. Space-ship-like shapes can glide across the board. A simple update rule can generate complex outcomes—including, I found, frustrations, hopes, responsibility for another human’s contentment, and more meetings than I’d realized could fit in one summer.
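
For the curious, here is a minimal sketch of the classical update rule (the standard rule, with the board wrapped around at the edges), not any of the quantized versions discussed below; the glider pattern and board size are arbitrary choices of mine.

```python
import numpy as np

def step(board):
    """One update of Conway's Game of Life (live = 1, dead = 0).
    A live cell survives with two or three live neighbours; a dead cell
    comes alive with exactly three; every other cell becomes or stays dead."""
    neighbours = sum(
        np.roll(np.roll(board, dr, axis=0), dc, axis=1)
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return ((neighbours == 3) | ((board == 1) & (neighbours == 2))).astype(int)

# A glider on a small periodic board: it reappears, shifted diagonally,
# every four updates.
board = np.zeros((8, 8), dtype=int)
board[1, 2] = board[2, 3] = board[3, 1] = board[3, 2] = board[3, 3] = 1
for _ in range(4):
    board = step(board)
print(board)
```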

Prairie dogs

Modeled by Conway’s Game of Life. And by PhD students.

Initial conditions: The evolution depends on the initial state, on how you distribute white and black tiles when preparing the board. Imagine choosing the initial state randomly from all the possibilities. White likely mingles with about as much black. The random initial condition might not generate eye-catchers such as gliders. The board might fade to, and remain, one color.*

Enthusiasm can fade as research drags onward. Project Quantum GoL has continued gliding due to its initial condition: The spring afternoon on which Ning, Patrick, and I observed the firmness of each other’s handshakes; Patrick walked Ning and me through a CV that could have intimidated a postdoc; and everyone tried to soothe everyone else’s nerves but occasionally avoided eye contact.

I don’t mean that awkwardness sustained the project. The awkwardness faded, as exclamation points and smiley faces crept into our emails. I mean that Ning and I had the fortune to entice Patrick. We signed up a bundle of enthusiasm, creativity, programming skills, and determination. That determination perpetuated the project through the summer and beyond. Initial conditions can determine a system’s evolution.

Long-distance correlations:  “Sure, I’d love to have dinner with you both! Thank you for the invitation!”

Lincoln Carr, a Colorado School of Mines professor, visited in June. Lincoln’s group, I’d heard, was exploring quantum GoLs.** He studies entanglement (quantum correlations) in many-particle systems. When I reached out, Lincoln welcomed our SURF group to collaborate.

I relished coordinating his visit with the mentees. How many SURFers could say that a professor had visited for his or her sake? When I invited Patrick to dinner with Lincoln, Patrick lit up like a sunrise over grasslands.

Our SURF group began skyping with Mines every Wednesday. We brainstorm, analyze, trade code, and kvetch with Mines student Logan Hillberry and colleagues. They offer insights about condensed matter; Patrick, about data processing and efficiency; I, about entanglement theory; and Ning, about entropy and time evolution.

We’ve learned together about long-range entanglement, about correlations between far-apart quantum systems. Thank goodness for skype and email that correlate far-apart research groups. Everyone would have learned less alone.

Long-distance correlations between quantum states and between research groups

Time evolution: Logan and Patrick simulated quantum systems inspired by Conway’s GoL. Each researcher coded a simulation, or mathematical model, of a quantum system. They agreed on a nonquantum update rule; Logan quantized it in one way (constructed one quantum analog of the rule); and Patrick quantized the rule another way. They chose initial conditions, let their systems evolve, and waited.

In July, I noticed that Patrick brought a hand-sized green spiral notepad to meetings. He would synopsize his progress, and brainstorm questions, on the notepad before arriving. He jotted suggestions as we talked.

The notepad began guiding meetings in July. Patrick now steers discussions, ticking items off his agenda. The agenda I’ve typed remains minimized on my laptop till he finishes. My agenda contains few points absent from his, and his contains points not in mine.

Patrick and Logan are comparing their results. Behaviors of their simulations, they’ve found, depend on how they quantized their update rule. One might expect the update rule to determine a system’s evolution. One might expect the SURF program’s template to determine how research and mentoring skills evolve. But how we implement update rules matters.

Caltech’s 2015 quantum-information-theory Summer Undergraduate Research Fellows and mentors

Life: I’ve learned, during the past six months, about Conway’s Game of Life, simulations, and many-body entanglement. I’ve learned how to suggest references and experts when I can’t answer a question. I’ve learned that editing SURF reports by hand costs me less time than editing electronically. I’ve learned where Patrick and his family vacation, that he’s studying Chinese, and how undergrads regard on-campus dining. Conway’s Game of Life has expanded this prairie dog’s view of the grassland more than expected.

I’ll drink a Coke to that.

Glossary: Conway’s GoL is a cellular automaton. A cellular automaton consists of a board whose tiles change according to some update rule. Different cellular automata correspond to different board shapes, to boards of different dimensions, to different types of tiles, and to different update rules.

*Reversible cellular automata have greater probabilities (than the GoL has) of updating random initial states through dull-looking evolutions.

**Others have pondered quantum variations on Conway’s GoL.

Quantum Information: Episode II: The Tools’ Applications

Monday dawns. Headlines report that “Star Wars: Episode VII” has earned more money, during its opening weekend, than I hope to earn in my lifetime. Trading the newspaper for my laptop, I learn that a friend has discovered ThinkGeek’s BB-8 plushie. “I want one!” she exclaims in a Facebook post. “Because BB-8 definitely needs to be hugged.”

BB-8 plays sidekick to Star Wars hero Poe Dameron. The droid has a spherical body covered with metallic panels and lights. Mr. Gadget and Frosty the Snowman could have spawned such offspring. BB-8 has captured viewers’ hearts, and its chirps have captured cell-phone ringtones.

BB-8

ThinkGeek’s BB-8 plushie

Still, I scratch my head over my friend’s Facebook post. Hugged? Why would she hug…

Oh. Oops.

I’ve mentally verbalized “BB-8” as “BB84.” BB84 denotes an application of quantum theory to cryptography. Cryptographers safeguard information from eavesdroppers and tampering. I’ve been thinking that my friend wants to hug a safety protocol.

Charles Bennett and Gilles Brassard invented BB84 in 1984. Imagine wanting to tell someone a secret. Suppose I wish to coordinate, with a classmate, the purchase of a BB-8 plushie for our friend the droid-hugger. Suppose that the classmate and I can communicate only via a public channel on which the droid-hugger eavesdrops.

Cryptographers advise me to send my classmate a key. A key is a random string of letters, such as CCCAAACCABACA. I’ll encode my message with the string, with which my classmate will decode the message.

I have to transmit the key via the public channel. But the droid-hugger eavesdrops on the public channel. Haven’t we taken one step forward and one step back? Why would the key secure our information?

Because quantum-information science enables me to transmit the key without the droid-hugger’s obtaining it. I won’t transmit random letters; I’ll transmit quantum states. That is, I’ll transmit physical systems, such as photons (particles of light), whose properties encode quantum information.

A nonquantum letter has a value, such as A or B or C.  Each letter has one and only one value, regardless of whether anyone knows what value the letter has. You can learn the value by measuring (looking at) the letter. We can’t necessarily associate such a value with a quantum state. Imagine my classmate measuring a state I send. Which value the measurement device outputs depends on chance and on how my classmate looks at the state.

If the droid-hugger intercepts and measures the state, she’ll change it. My classmate and I will notice such changes. We’ll scrap our key and repeat the BB84 protocol until the droid-hugger quits eavesdropping.
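
Here is a bare-bones simulation of the idea, using the standard names Alice for the sender and Bob for the receiver, with no real photons and with the eavesdropper modeled crudely (whenever she measures in the wrong basis, the disturbed qubit hands Bob a coin flip); the function names and parameters are mine. The point it illustrates: without the droid-hugger, the sifted keys always agree; with her, roughly a quarter of the sifted bits disagree, which is how my classmate and I would catch her.

```python
import numpy as np

rng = np.random.default_rng(3)

def bb84(n_bits=20, eavesdrop=False):
    """Toy BB84: Alice encodes random bits in random bases (0 = Z, 1 = X),
    Bob measures in random bases of his own, and the two keep only the
    positions where their bases happened to match (the 'sifted' key)."""
    alice_bits = rng.integers(0, 2, n_bits)
    alice_bases = rng.integers(0, 2, n_bits)
    bob_bases = rng.integers(0, 2, n_bits)

    in_transit = alice_bits.copy()
    if eavesdrop:
        # The droid-hugger measures each qubit in a random basis. Whenever her
        # basis differs from Alice's, the state is disturbed, so Bob's later
        # result is a coin flip even if his own basis matches Alice's.
        eve_bases = rng.integers(0, 2, n_bits)
        disturbed = eve_bases != alice_bases
        in_transit[disturbed] = rng.integers(0, 2, disturbed.sum())

    # Bob's result is reliable only where his basis matches Alice's.
    bob_results = np.where(
        bob_bases == alice_bases, in_transit, rng.integers(0, 2, n_bits)
    )
    keep = bob_bases == alice_bases
    return alice_bits[keep], bob_results[keep]

a_key, b_key = bb84(eavesdrop=False)
print("no eavesdropper, sifted keys agree:", np.array_equal(a_key, b_key))
a_key, b_key = bb84(n_bits=2000, eavesdrop=True)
print("with eavesdropper, error rate:", np.mean(a_key != b_key))  # roughly 0.25
```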

BB84 launched quantum cryptography, the safeguarding of information with quantum physics. Today’s quantum cryptographers rely on BB84 as you rely, when planning a holiday feast, on a carrot-cake recipe that passed your brother’s taste test on his birthday. Quantum cryptographers construct protocols dependent on lines like “The message sender and receiver are assumed to share a key distributed, e.g., via the BB84 protocol.”

BB84 has become a primitive task, a solved problem whose results we invoke in more-complicated problems. Other quantum-information primitives include (warning: jargon ahead) entanglement distillation, entanglement dilution, quantum data compression, and quantum-state merging. Quantum-information scientists solved many primitive problems during the 1990s and early 2000s. You can apply those primitives, even if you’ve forgotten how to prove them.

Caveman

A primitive task, like quantum-entanglement distillation

Those primitives appear to darken quantum information’s horizons. The spring before I started my PhD, an older physicist asked me why I was specializing in quantum information theory. Haven’t all the problems been solved? he asked. Isn’t quantum information theory “dead”?

Imagine discovering how to power plasma blades with kyber crystals. Would you declare, “Problem solved” and relegate your blades to the attic? Or would you apply your tool to defending freedom?

Primitive quantum-information tools are unknotting problems throughout physics and beyond—in computer science; chemistry; optics (the study of light); thermodynamics (the study of work, heat, and efficiency); and string theory. My advisor has tracked how uses of “entanglement,” a quantum-information term, have swelled in high-energy-physics papers.

A colleague of that older physicist views quantum information theory as a toolkit, a perspective, a lens through which to view science. During the 1700s, the calculus invented by Isaac Newton and Gottfried Leibniz revolutionized physics. Emmy Noether (1882—1935) recast physics in terms of symmetries and conservation laws. (If the forces acting on a system don’t change in time, for example, the system doesn’t gain or lose energy. A constant force is invariant under, or symmetric with respect to, the progression of time. This symmetry implies that the system’s energy is conserved.) We can cast physics instead (jargon ahead) in terms of the minimization of a free energy or an action.

Quantum information theory, this physicist predicted, will revolutionize physics as calculus, symmetries, conservation, and free energy have. Quantum-information tools such as entropies, entanglement, and qubits will bleed into subfields of physics as Lucasfilm has bled into the fanfiction, LEGO, and Halloween-costume markets.

BB84, and the solution of other primitives, have not killed quantum information. They’ve empowered it to spread—thankfully, to this early-career quantum information scientist. Never mind BB-8; I’d rather hug BB84. Perhaps I shall. Engineers have realized technologies that debuted on Star Trek; quantum mechanics has secured key sharing; bakers have crafted cakes shaped like the Internet; and a droid’s popularity rivals R2D2’s. Maybe next Monday will bring a BB84 plushie.

Plushie

The author hugging the BB84 paper and a plushie. On my wish list: a combination of the two.

Discourse in Delft

A camel strolled past, yards from our window in the Applied-Sciences Building.

I hadn’t expected to see camels at TU Delft, aka the Delft University of Technology, in Holland. I breathed, “Oh!” and turned to watch until the camel followed its turbaned leader out of sight. Nelly Ng, the PhD student with whom I was talking, followed my gaze and laughed.

Nelly works in Stephanie Wehner’s research group. Stephanie—a quantum cryptographer, information theorist, thermodynamicist, and former Caltech postdoc—was kind enough to host me for half of August. I arrived at the same time as TU Delft’s first-year undergrads. My visit coincided with their orientation. The orientation involved coffee hours, team-building exercises, and clogging the cafeteria whenever the Wehner group wanted lunch.

And, as far as I could tell, a camel.

Not even a camel could unseat Nelly’s and my conversation. Nelly, postdoc Mischa Woods, and Stephanie are the Wehner-group members who study quantum and small-scale thermodynamics. I study quantum and small-scale thermodynamics, as Quantum Frontiers stalwarts might have tired of hearing. The four of us exchanged perspectives on our field.

Mischa knew more than Nelly and I about clocks; Nelly knew more about catalysis; and I knew more about fluctuation relations. We’d read different papers. We’d proved different theorems. We explained the same phenomena differently. Nelly and I—with Mischa and Stephanie, when they could join us—questioned and answered each other almost perpetually, those two weeks.

We talked in our offices, over lunch, in the group discussion room, and over tea at TU Delft’s Quantum Café. We emailed. We talked while walking. We talked while waiting for Stephanie to arrive so that she could talk with us.

The site of many a tête-à-tête.

The copiousness of the conversation drained me. I’m an introvert, formerly “the quiet kid” in elementary school. Early some mornings in Delft, I barricaded myself in the visitors’ office. Late some nights, I retreated to my hotel room or to a canal bank. I’d exhausted my supply of communication; I had no more words for anyone. Which troubled me, because I had to finish a paper. But I regret not one discussion, for three reasons.

First, we relished our chats. We laughed together, poked fun at ourselves, commiserated about calculations, and confided about what we didn’t understand.

We helped each other understand, second. As I listened to Mischa or as I revised notes about a meeting, a camel would stroll past a window in my understanding. I’d see what I hadn’t seen before. Mischa might be explaining which quantum states represent high-quality clocks. Nelly might be explaining how a quantum state ξ can enable a state ρ to transform into a state σ. I’d breathe, “Oh!” and watch the mental camel follow my interlocutor through my comprehension.

Nelly’s, Mischa’s, and Stephanie’s names appear in the acknowledgements of the paper I’d worried about finishing. The paper benefited from their explanations and feedback.

Third, I left Delft with more friends than I’d had upon arriving. Nelly, Mischa, and I grew to know each other, to trust each other, to enjoy each other’s company. At the end of my first week, Nelly invited Mischa and me to her apartment for dinner. She provided pasta; I brought apples; and Mischa brought a sweet granola-and-seed mixture. We tasted and enjoyed more than we would have separately.

Dinner with Nelly and Mischa.

I’ve written about how Facebook has enhanced my understanding of, and participation in, science. Research involves communication. Communication can challenge us, especially many of us drawn to science. Let’s shoulder past the barrier. Interlocutors point out camels—and hot-air balloons, and lemmas and theorems, and other sources of information and delight—that I wouldn’t spot alone.

With gratitude to Stephanie, Nelly, Mischa, the rest of the Wehner group (with whom I enjoyed talking), QuTech and TU Delft.

During my visit, Stephanie and Delft colleagues unveiled the “first loophole-free Bell test.” Their paper sent shockwaves (AKA camels) throughout the quantum community. Scott Aaronson explains the experiment here.

Reporting from the ‘Frontiers of Quantum Information Science’

What am I referring to with this title? It is similar to the name of this blog–but that’s not where this particular title comes from–although there is a common denominator. Frontiers of Quantum Information Science was the theme for the 31st Jerusalem winter school in theoretical physics, which takes place annually at the Israel Institute for Advanced Studies, located on the Givat Ram campus of the Hebrew University of Jerusalem. The school took place from December 30, 2013 through January 9, 2014, but some of the attendees are still trickling back to their home institutions. The common denominator is that our very own John Preskill was the director of this school, with Michael Ben-Or and Patrick Hayden as co-directors. John mentioned in a previous post, and reiterated during his opening remarks, that this is the first time the IIAS has chosen quantum information as the topic for its prestigious advanced school–another sign of quantum information’s emergence as an important subfield of physics. In this blog post, I’m going to do my best to recount these festivities while John protects his home from forest fires, prepares a talk for the Simons Institute’s workshop on Hamiltonian complexity, teaches his quantum information course, and celebrates birthday number 60+1.

The school was mainly targeted at physicists, but it was diversely represented. Proof of the value of this diversity came in an interaction between a computer scientist and a physicist, which led to one of the school’s most memorable moments. Both of my most memorable moments started with the talent show (I was surprised that so many talents were on display at a physics conference…). Anyway, towards the end of the show, Mateus Araújo Santos, a PhD student in Vienna, entered the stage and mentioned that he could channel “the ghost of Feynman” to serve as an oracle for NP-complete decision problems. After making this claim, people obviously turned to Scott Aaronson, hoping that he’d be able to break the oracle. However, in order for this to happen, we had to wait until Scott’s third lecture, about linear optics and boson sampling, the next day. You can watch Scott bombard the oracle with decision problems from 1:00-2:15 in the video of his third lecture.

Scott Aaronson grilling the oracle with a string of NP-complete decision problems! From 1:00-2:15 during this video.

The other most memorable moment was when John briefly danced Gangnam style during Soonwon Choi‘s talent show performance. Unfortunately, I thought I had this on video, but the video didn’t record. If anyone has video evidence of this, then please share!

The complementarity (not incompatibility) of reason and rhyme

Shortly after learning of the Institute for Quantum Information and Matter, I learned of its poetry.

I’d been eating lunch with a fellow QI student at the Perimeter Institute for Theoretical Physics. Perimeter’s faculty includes Daniel Gottesman, who earned his PhD at what became Caltech’s IQIM. Perhaps as Daniel passed our table, I wondered whether a liberal-arts enthusiast like me could fit in at Caltech.

“Have you seen Daniel Gottesman’s website?” my friend replied. “He’s written a sonnet.”

Quill

He could have written equations with that quill.

Digesting this news with my chicken wrap, I found the website after lunch. The sonnet concerned quantum error correction, the fixing of mistakes made during computations by quantum systems. After reading Daniel’s sonnet, I found John Preskill’s verses about Daniel. Then I found more verses of John’s.

To my Perimeter friend: You win. I’ll fit in, no doubt.

Exhibit A: the latest edition of The Quantum Times, the newsletter for the American Physical Society’s QI group. On page 10, my enthusiasm for QI bubbles over into verse. Don’t worry if you haven’t heard all the terms in the poem. Consider them guidebook entries, landmarks to visit during a Wikipedia trek.

If you know the jargon, listen to it with a newcomer’s ear. Does anyone other than me empathize with frustrated lattices? Or describe speeches accidentally as “monotonic” instead of as “monotonous”? Hearing jargon outside its natural habitat highlights how not to explain research to nonexperts. Examining names for mathematical objects can reveal properties that we never realized those objects had. Inviting us to poke fun at ourselves, the confrontation of jargon sprinkles whimsy onto the meringue of physics.

No matter your familiarity with physics or poetry: Enjoy. And fifty points if you persuade Physical Review Letters to publish this poem’s sequel.

Quantum information

By Nicole Yunger Halpern

If “CHSH” rings a bell,
you know QI’s fared, lately, well.
Such promise does this field portend!
In Neumark fashion, let’s extend
this quantum-information spring:
dilation, growth, this taking wing.

We span the space of physics types
from spin to hypersurface hype,
from neutron-beam experiment
to Bohm and Einstein’s discontent,
from records of a photon’s path
to algebra and other math
that’s more abstract and less applied—
of platforms’ details, purified.

We function as a refuge, too,
if lattices can frustrate you.
If gravity has got your goat,
momentum cutoffs cut your throat:
Forget regimes renormalized;
our states are (mostly) unit-sized.
Velocities stay mostly fixed;
results, at worst, look somewhat mixed.

Though factions I do not condone,
the action that most stirs my bones
is more a spook than Popov ghosts; 1
more at-a-distance, less quark-close.

This field’s a tot—cacophonous—
like cosine, not monotonous.
Cacophony enlivens thought:
We’ve learned from noise what discord’s not.

So take a chance on wave collapse;
enthuse about the CP maps;
in place of “part” and “piece,” say “bit”;
employ, as yardstick, Hilbert-Schmidt;
choose quantum as your nesting place,
of all the fields in physics space.

1 With apologies to Ludvig Faddeev.

Steampunk quantum

A dark-haired man leans over a marble balustrade. In the ballroom below, his assistants tinker with animatronic elephants that trumpet and with potions for improving black-and-white photographs. The man is an inventor near the turn of the 20th century. Cape swirling about him, he watches technology wed fantasy.

Welcome to the steampunk genre. A stew of science fiction and Victorianism, steampunk has invaded literature, film, and the Wall Street Journal. A few years after James Watt improved the steam engine, protagonists build animatronics, clone cats, and time-travel. At sci-fi conventions, top hats and blast goggles distinguish steampunkers from superheroes.

The closest the author has come to dressing steampunk.

I’ve never read steampunk other than H. G. Wells’s The Time Machine—and other than the scene recapped above. The scene features in The Wolsenberg Clock, a novel by Canadian poet Jay Ruzesky. The novel caught my eye at an Ontario library.

In Ontario, I began researching the intersection of QI with thermodynamics. Thermodynamics is the study of energy, efficiency, and entropy. Entropy quantifies uncertainty about a system’s small-scale properties, given large-scale properties. Consider a room of air molecules. Knowing that the room has a temperature of 75°F, you don’t know whether some molecule is skimming the floor, poking you in the eye, or elsewhere. Ambiguities in molecules’ positions and momenta endow the gas with entropy. Whereas entropy suggests lack of control, work is energy that accomplishes tasks.

This single-shot life

The night before defending my Master’s thesis, I ran out of shampoo. I ran out late enough that I wouldn’t defend from beneath a mop like Jack Sparrow’s; but, belonging to the Luxuriant Flowing-Hair Club for Scientists (technically, if not officially), I’d have to visit Shopper’s Drug Mart.

The author’s unofficially Luxuriant Flowing Scientist Hair

Before visiting Shopper’s Drug Mart, I had to defend my thesis. The thesis, as explained elsewhere, concerns epsilons, the mathematical equivalents of seed pearls. The thesis also concerns single-shot information theory.

Ordinary information theory emerged in 1948, midwifed by American engineer Claude E. Shannon. Shannon calculated how efficiently we can pack information into symbols when encoding long messages. Consider encoding this article in the fewest possible symbols. Because “the” appears many times, you might represent “the” by one symbol. Longer strings of symbols suit misfits like “luxuriant” and “oobleck.” The longer the article, the fewer encoding symbols you need per encoded word. The encoding-to-encoded ratio decreases, toward a number called the Shannon entropy, as the message grows infinitely long.

Claude Shannon
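
Here is a minimal sketch of the quantity Shannon identified, computed for a short message by treating each character as drawn independently from the message’s own letter frequencies (Shannon’s full theory, and real compressors, exploit far more structure); the example text and function name are mine.

```python
import math
from collections import Counter

def entropy_bits_per_symbol(message):
    """Empirical Shannon entropy of a message, in bits per symbol:
    H = -sum over symbols of p(symbol) * log2 p(symbol)."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

text = ("the longer the article, the fewer encoding symbols you need "
        "per encoded word")
H = entropy_bits_per_symbol(text)
print(f"{H:.2f} bits per character, versus 8 bits for naive ASCII")
print(f"so a character-by-character code could shrink a long message like "
      f"this one to about {H / 8:.0%} of its original size")
```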

We don’t send infinitely long messages, excepting teenagers during phone conversations. How efficiently can we encode just one article or sentence? The answer involves single-shot information theory, or—to those stuffing long messages into the shortest possible emails to busy colleagues—“1-shot info.” Pioneered within the past few years, single-shot theory concerns short messages and single trials, the Twitter to Shannon’s epic. Like articles, quantum states can form messages. Hence single-shot theory blended with quantum information in my thesis.


A Public Lecture on Quantum Information

Sooner or later, most scientists are asked to deliver a public lecture about their research specialties. When successful, lecturing about science to the lay public can give one a feeling of deep satisfaction. But preparing the lecture is a lot of work!

Caltech sponsors the Earnest C. Watson lecture series (named after the same Earnest Watson mentioned in my post about Jane Werner Watson), which attracts very enthusiastic audiences to Beckman Auditorium nine times a year. I gave a Watson lecture on April 3 about Quantum Entanglement and Quantum Computing, which is now available from iTunes U and also on YouTube:

I did a Watson lecture once before, in 1997. That occasion precipitated some big changes in my presentation style. To prepare for the lecture, I acquired my first laptop computer and learned to use PowerPoint. This was still the era when a typical physics talk was handwritten on transparencies and displayed using an overhead projector, so I was sort of a pioneer. And I had many anxious moments in the late 1990s worrying about whether my laptop would be able to communicate with the projector — that can still be a problem even today, but was a more common problem then.

I invested an enormous amount of time in preparing that 1997 lecture, an investment still yielding dividends today. Aside from figuring out what computer to buy (an IBM ThinkPad) and how to do animation in PowerPoint, I also learned to draw using Adobe Illustrator under the tutelage of Caltech’s digital media expert Wayne Waller. And apart from all that technical preparation, I had to figure out the content of the lecture!

That was when I first decided to represent a qubit as a box with two doors, which contains a ball that can be either red or green, and I still use some of the drawings I made then.

Entanglement, illustrated with balls in boxes.

This choice of colors was unfortunate, because people with red-green color blindness cannot tell the difference. I still feel bad about that, but I don’t have editable versions of the drawings anymore, so fixing it would be a big job …

I also asked my nephew Ben Preskill (then 10 years old, now a math PhD candidate at UC Berkeley), to make a drawing for me illustrating weirdness.

The desire to put weirdness to work has driven the emergence of quantum information science.

I still use that, for sentimental reasons, even though it would be easier to update.

The turnout at the lecture was gratifying (you can’t really see the audience with the spotlight shining in your eyes, but I sensed that the main floor of the Auditorium was mostly full), and I have gotten a lot of positive feedback (including from the people who came up to ask questions afterward — we might have been there all night if the audio-visual staff had not forced us to go home).

I did make a few decisions about which I have had second thoughts. I was told I had the option of giving a 45 minute talk with a public question period following, or a 55 minute talk with only a private question period, and I opted for the longer talk. Maybe I should have pushed back and insisted on allowing some public questions even after the longer talk — I like answering questions. And I was told that I should stay in the spotlight, to ensure good video quality, so I decided to stand behind the podium the whole time to curb my tendency to pace across the stage. But maybe I would have seemed more dynamic if I had done some pacing.

I got some gentle criticism from my wife, Roberta, who suggested I could modulate my voice more. I have heard that before, particularly in teaching evaluations that complain about my “soporific” tone. I recall that Mike Freedman once commented after watching a video of a public lecture I did at the KITP in Santa Barbara — he praised its professionalism and “newscaster quality”. But that cuts two ways, doesn’t it? Paul Ginsparg listened to a podcast of that same lecture while doing yardwork, and then sent me a compliment by email, with a characteristic Ginspargian twist. Noting that my sentences were clear, precise, and grammatical, Paul asked: “is this something that just came naturally at some early age, or something that you were able to acquire at some later stage by conscious design (perhaps out of necessity, talks on quantum computing might not go over as well without the reassuring smoothness)?”

Another criticism stung more. To illustrate the monogamy of entanglement, I used a slide describing the frustration of Bob, who wants to entangle with both Alice and Carrie, but finds that he can increase his entanglement with Carrie only by sacrificing some of his entanglement with Alice.

Entanglement is monogamous. Bob is frustrated to find that he cannot be fully entangled with both Alice and Carrie.

This got a big laugh. But I used the same slide in a talk at the APS Denver meeting the following week (at a session celebrating the 100th anniversary of Niels Bohr’s atomic model), and a young woman came up to me after that talk to complain. She suggested that my monogamy metaphor was offensive and might discourage women from entering the field!

After discussing the issue with Roberta, I decided to address the problem by swapping the gender roles. The next day, during the question period following Stephen Hawking’s Public Lecture, I spoke about Betty’s frustration over her inability to entangle fully with both Adam and Charlie. But is that really an improvement, or does it reflect negatively on Betty’s morals? I would appreciate advice about this quandary in the comments.
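
For readers who want to see the monogamy quantitatively, here is a small numerical sketch that uses Wootters’ concurrence as the entanglement yardstick for each pair of qubits; the three-qubit family of states and the helper functions are my own illustration, not the example from the slides. As Bob’s entanglement with Carrie grows, his entanglement with Alice must shrink: the squared concurrences trade off against each other, in line with the Coffman-Kundu-Wootters monogamy inequality.

```python
import numpy as np

Y = np.array([[0, -1j], [1j, 0]])

def concurrence(rho):
    """Wootters concurrence of a two-qubit state rho: C = max(0, l1-l2-l3-l4),
    where the l's are the square roots of the eigenvalues of
    rho (Y x Y) rho* (Y x Y), sorted in decreasing order."""
    YY = np.kron(Y, Y)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def reduced(psi, keep):
    """Reduced density matrix of the qubits listed in `keep` (psi: 3-qubit vector)."""
    psi = psi.reshape(2, 2, 2)
    trace_out = [axis for axis in range(3) if axis not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(trace_out, trace_out))
    return rho.reshape(2 ** len(keep), 2 ** len(keep))

# One excitation shared among Bob (qubit 0), Alice (qubit 1), Carrie (qubit 2):
# |psi(t)> = ( |100> + cos(t)|010> + sin(t)|001> ) / sqrt(2)
for t in np.linspace(0, np.pi / 2, 5):
    psi = np.zeros(8, dtype=complex)
    psi[0b100] = 1 / np.sqrt(2)
    psi[0b010] = np.cos(t) / np.sqrt(2)
    psi[0b001] = np.sin(t) / np.sqrt(2)
    c_ba = concurrence(reduced(psi, (0, 1)))   # Bob-Alice entanglement
    c_bc = concurrence(reduced(psi, (0, 2)))   # Bob-Carrie entanglement
    print(f"Bob-Alice: {c_ba:.2f}   Bob-Carrie: {c_bc:.2f}   "
          f"sum of squares: {c_ba**2 + c_bc**2:.2f}")
```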

In case you watch the video, there are a couple of things you should know. First, in his introduction, Tom Soifer quotes from a poem about me, but neglects to name the poet. It is former Caltech postdoc Patrick Hayden. And second, toward the end of the lecture I talk about some IQIM outreach activities, but neglect to name our Outreach Director Spiros Michalakis, without whose visionary leadership these things would not have happened.

The most touching feedback I received came from my Caltech colleague Oskar Painter. I joked in the lecture about how mild-mannered IQIM scientists can unleash the superpower of quantum information at a moment’s notice.

Mild-mannered professor unleashes the superpower of quantum information.

After watching the video, Oskar shot me an email:

“I sent a link to my son [Ewan, age 11] and daughter [Quinn, age 9], and they each watched it from beginning to end on their iPads, without interruption.  Afterwards, they had a huge number of questions for me, and were dreaming of all sorts of “quantum super powers” they imagined for the future.”