Happy 200th birthday, Carnot’s theorem!

In Kenneth Grahame’s 1908 novel The Wind in the Willows, a Mole meets a Water Rat who lives on a River. The Rat explains how the River permeates his life: “It’s brother and sister to me, and aunts, and company, and food and drink, and (naturally) washing.” As the River plays many roles in the Rat’s life, so does Carnot’s theorem play many roles in a thermodynamicist’s.

Nicolas Léonard Sadi Carnot lived in France at the turn of the 19th century. His father named him Sadi after the 13th-century Persian poet Saadi Shirazi. Said father led a colorful life himself,1 working as a mathematician, engineer, and military commander for and before the Napoleonic Empire. Sadi Carnot studied in Paris at the École Polytechnique, whose members populate a “Who’s Who” list of science and engineering.

As Carnot grew up, the Industrial Revolution was humming. Steam engines were producing reliable energy on vast scales; factories were booming; and economies were transforming. France’s old enemy Britain enjoyed two advantages. The first consisted of inventors: Englishmen Thomas Savery and Thomas Newcomen invented the steam engine. Scotsman James Watt then improved upon Newcomen’s design until rendering it practical. The second advantage was fuel: northern Britain contained loads of coal that industrialists could mine to power her engines. France had less coal. So if you were a French engineer during Carnot’s lifetime, you should have cared about engines’ efficiencies—how effectively engines used fuel.2

Carnot proved a fundamental limitation on engines’ efficiencies. His theorem governs engines that draw energy from heat—rather than from, say, the motional energy of water cascading down a waterfall. In Carnot’s argument, a heat engine interacts with a cold environment and a hot environment. (Many car engines fall into this category: the hot environment is burning gasoline. The cold environment is the surrounding air into which the car dumps exhaust.) Heat flows from the hot environment to the cold. The engine siphons off some heat and converts it into work. Work is coordinated, well-organized energy that one can directly harness to perform a useful task, such as turning a turbine. In contrast, heat is the disordered energy of particles shuffling about randomly. Heat engines transform random heat into coordinated work.

In The Wind in the Willows, Toad drives motorcars likely powered by internal combustion, rather than by a steam engine of the sort that powered the Industrial Revolution.

An engine’s efficiency is the bang we get for our buck—the upshot we gain, compared to the cost we spend. Running an engine costs the heat that flows between the environments: the more heat flows, the more the hot environment cools, so the less effectively it can serve as a hot environment in the future. An analogous statement concerns the cold environment. So a heat engine’s efficiency is the work produced, divided by the heat spent.

Carnot upper-bounded the efficiency achievable by every heat engine of the sort described above. Let T_{\rm C} denote the cold environment’s temperature; and T_{\rm H}, the hot environment’s. The efficiency can’t exceed 1 - \frac{ T_{\rm C} }{ T_{\rm H} }. What a simple formula for such an extensive class of objects! Carnot’s theorem governs not only many car engines (Otto engines), but also the Stirling engine that competed with the steam engine, its cousin the Ericsson engine, and more.
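
For readers who like plugging in numbers, here’s a minimal sketch of the bound in Python (my own illustration; the 1,000-kelvin and 300-kelvin figures below are rough, hypothetical values):

```python
def carnot_efficiency(t_cold: float, t_hot: float) -> float:
    """Carnot's upper bound on a heat engine's efficiency.

    Temperatures must be absolute (e.g., in kelvins) and satisfy
    0 < t_cold < t_hot.
    """
    if not 0 < t_cold < t_hot:
        raise ValueError("Require 0 < t_cold < t_hot (absolute temperatures).")
    return 1 - t_cold / t_hot

# Rough, illustrative numbers: fuel burning at ~1,000 K, ambient air at ~300 K.
print(carnot_efficiency(t_cold=300, t_hot=1000))  # 0.7: at most 70% efficiency
```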

In addition to generality and simplicity, Carnot’s bound boasts practical and fundamental significance. Capping engine efficiencies caps the output one can expect of a machine, factory, or economy. The cap also spares engineers from wasting their time daydreaming about more-efficient engines.

More fundamentally than these applications, Carnot’s theorem encapsulates the second law of thermodynamics. The second law helps us understand why time flows in only one direction. And what’s deeper or more foundational than time’s arrow? People often cast the second law in terms of entropy, but many equivalent formulations express the law’s contents. The formulations share a flavor often synopsized with “You can’t win.” Just as we can’t grow younger, we can’t beat Carnot’s bound on engines. 

Video courtesy of FQxI

One might expect no engine to achieve the greatest efficiency imaginable: 1 - \frac{ T_{\rm C} }{ T_{\rm H} }, called the Carnot efficiency. This expectation is incorrect in one way and correct in another. Carnot did design an engine that could operate at his eponymous efficiency: an eponymous engine. A Carnot engine can manifest as the thermodynamicist’s favorite physical system: a gas in a box topped by a movable piston. The gas undergoes four strokes, or steps, to perform work. The strokes form a closed cycle, returning the gas to its initial conditions.3 

Steampunk artist Todd Cahill beautifully illustrated the Carnot cycle for my book. The gas performs useful work because a teapot sits atop the piston. Pushing the piston upward, the gas lifts the teapot. You can find a more detailed description of Carnot’s engine in Chapter 4 of the book, but I’ll recap the cycle here.

The gas expands during stroke 1, pushing the piston and so outputting work. Maintaining contact with the hot environment, the gas remains at the temperature T_{\rm H}. The gas then disconnects from the hot environment. Yet the gas continues to expand throughout stroke 2, lifting the teapot further. Forfeiting energy, the gas cools. It ends stroke 2 at the temperature T_{\rm C}.

The gas contacts the cold environment throughout stroke 3. The piston pushes on the gas, compressing it. At the end of the stroke, the gas disconnects from the cold environment. The piston continues compressing the gas throughout stroke 4, performing more work on the gas. This work warms the gas back up to T_{\rm H}.

In summary, Carnot’s engine begins hot, performs work, cools down, has work performed on it, and warms back up. The gas performs more work on the piston than the piston performs on it. Therefore, the teapot rises (during strokes 1 and 2) more than it descends (during strokes 3 and 4). 
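
To make the strokes concrete, here’s a toy calculation (my own sketch, with made-up parameters, not an excerpt from the book) for an ideal gas. It checks that the heat flows during strokes 1 and 3 yield exactly the Carnot efficiency:

```python
from math import log

# Hypothetical parameters: one mole of a monatomic ideal gas running
# between environments at 500 K (hot) and 300 K (cold).
R, gamma = 8.314, 5 / 3
n, t_hot, t_cold = 1.0, 500.0, 300.0
v1, v2 = 1.0, 2.0  # volumes at the start and end of stroke 1 (arbitrary units)

# Strokes 2 and 4 are adiabatic: T * V**(gamma - 1) stays constant.
# That condition fixes the volumes bounding stroke 3.
v3 = v2 * (t_hot / t_cold) ** (1 / (gamma - 1))
v4 = v1 * (t_hot / t_cold) ** (1 / (gamma - 1))

# Isothermal heat flows: absorbed from the hot environment (stroke 1),
# shed into the cold environment (stroke 3).
q_hot = n * R * t_hot * log(v2 / v1)
q_cold = n * R * t_cold * log(v3 / v4)

efficiency = (q_hot - q_cold) / q_hot  # net work output over heat drawn in
print(efficiency)          # 0.4
print(1 - t_cold / t_hot)  # 0.4, the Carnot efficiency
```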

At what cost, if the engine operates at the Carnot efficiency? The engine mustn’t waste heat. One wastes heat by roiling up the gas unnecessarily—by expanding or compressing it too quickly. The gas must stay in equilibrium, a calm, quiescent state. One can keep the gas quiescent only by running the cycle infinitely slowly. The cycle will take an infinitely long time, outputting zero power (work per unit time). So one can achieve the perfect efficiency only in principle, not in practice, and only by sacrificing power. Again, you can’t win.

Efficiency trades off with power.

Carnot’s theorem may sound like the Eeyore of physics, all negativity and depression. But I view it as a companion and backdrop as rich, for thermodynamicists, as the River is for the Water Rat. Carnot’s theorem curbs diverse technologies in practical settings. It captures the second law, a foundational principle. The Carnot cycle provides intuition, serving as a simple example on which thermodynamicists try out new ideas, such as quantum engines. Carnot’s theorem also provides what physicists call a sanity check: whenever a researcher devises a new (for example, quantum) heat engine, they can confirm that the engine obeys Carnot’s theorem, to help confirm their proposal’s accuracy. Carnot’s theorem also serves as a school exercise and a historical tipping point: the theorem initiated the development of thermodynamics, which continues to this day. 

So Carnot’s theorem is practical and fundamental, pedagogical and cutting-edge—brother and sister, and aunts, and company, and food and drink. I just wouldn’t recommend trying to wash your socks in Carnot’s theorem.

1To a theoretical physicist, working as a mathematician and an engineer amounts to leading a colorful life.

2People other than Industrial Revolution–era French engineers should care, too.

3A cycle doesn’t return the hot and cold environments to their initial conditions, as explained above.

My favorite rocket scientist

Whenever someone protests, “I’m not a rocket scientist,” I think of my friend Jamie Rankin. Jamie is a researcher at Princeton University, and she showed me her lab this June. When I first met Jamie, she was testing instruments to be launched on NASA’s Parker Solar Probe. The spacecraft has approached closer to the sun than any of its predecessors. It took off in August 2018—fittingly, to my mind, as I’d completed my PhD a few months earlier and met Jamie near the beginning of my PhD.

During my first term of Caltech courses, I noticed Jamie in one of my classes. She seemed sensible and approachable, so I invited her to check our answers against each other on homework assignments. Our homework checks evolved into studying together for qualifying exams—tests of basic physics knowledge, which serve as gateways to a PhD. The studying gave way to eating lunch together on weekends. After a quiet morning at my desk, I’d bring a sandwich to a shady patch of lawn in front of Caltech’s institute for chemical and biological research. (Pasadena lawns are suitable for eating on regardless of the season.) Jamie would regale me—as her token theorist friend—with tales of suiting up to use clean rooms; of puzzling out instrument breakages; and of working for the legendary Ed Stone, who’d headed NASA’s Jet Propulsion Laboratory (JPL).1

The Voyager probes were constructed at JPL during the 1970s. I’m guessing you’ve heard of Voyager, given how the project captured the public’s imagination. I heard about it on an educational audiotape when I was little. The probes sent us data about planets far out in our solar system. For instance, Voyager 2 was the first spacecraft to approach Neptune, as well as the first to approach four planets past Earth (Jupiter, Saturn, Uranus, and Neptune). But the probes’ mission still hasn’t ended. In 2012, Voyager 1 became the first human-made object to enter interstellar space. Both spacecraft continue to transmit data. They also carry Golden Records, disks that encode sounds from Earth—a greeting to any intelligent aliens who find the probes.

Jamie published the first PhD thesis about data collected by Voyager. She now serves as Deputy Project Scientist for Voyager, despite her early-career status. The news didn’t surprise me much; I’d known for years how dependable and diligent she is.

A theorist intrudes on Jamie’s Princeton lab

As much as I appreciated those qualities in Jamie, though, what struck me more was her good-heartedness. In college, I found fellow undergrads to be interested and interesting, energetic and caring, open to deep conversations and self-evaluation—what one might expect of Dartmouth. At Caltech, I found grad students to be candid, generous, and open-hearted. Would you have expected as much from the tech school’s tech school—the distilled essence of the purification of concentrated Science? I didn’t. But I appreciated what I found, and Jamie epitomized it.

The back of the lab coat I borrowed

Jamie moved to Princeton after graduating. I’d moved to Harvard, and then I moved to NIST. We fell out of touch; the pandemic prevented her from attending my wedding, and we spoke maybe once a year. But, this June, I visited Princeton for the annual workshop of the Institute for Robust Quantum Simulation. We didn’t eat sandwiches on a lawn, but we ate dinner together, and she showed me around the lab she’d built. (I never did suit up for a clean-room tour at Caltech.)

In many ways, Jamie Rankin remains my favorite rocket scientist.


1Ed passed away between the drafting and publishing of this post. He oversaw my PhD class’s first-year seminar course. Each week, one faculty member would present to us about their research over pizza. Ed had landed the best teaching gig, I thought: continual learning about diverse, cutting-edge physics. So I associate Ed with intellectual breadth, curiosity, and the scent of baked cheese.

Let gravity do its work

One day, early this spring, I found myself in a hotel elevator with three other people. The cohort consisted of two theoretical physicists, one computer scientist, and what appeared to be a normal person. I pressed the elevator’s 4 button, as my husband (the computer scientist) and I were staying on the hotel’s fourth floor. The button refused to light up.

“That happened last time,” the normal person remarked. He was staying on the fourth floor, too.

The other theoretical physicist pressed the 3 button.

“Should we press the 5 button,” the normal person continued, “and let gravity do its work?”

I took a moment to realize that he was suggesting we ascend to the fifth floor and then induce the elevator to fall under gravity’s influence to the fourth. We were reaching floor three, so I exchanged a “have a good evening” with the other physicist, who left. The door shut, and we began to ascend.

“As it happens,” I remarked, “he’s an expert on gravity.” The other physicist was Herman Verlinde, a professor at Princeton.

Such is a side effect of visiting the Simons Center for Geometry and Physics. The Simons Center graces the Stony Brook University campus, which was awash in daffodils and magnolia blossoms last month. The Simons Center derives its name from hedge-fund manager Jim Simons (who passed away during the writing of this article). He produced landmark math and physics research before earning his fortune on Wall Street as a quant. Simons supported his early loves by funding the Simons Center and other scientific initiatives. The center reminded me of the Perimeter Institute for Theoretical Physics, down to the café’s linen napkins, so I felt at home.

I was participating in the Simons Center workshop “Entanglement, thermalization, and holography.” It united researchers from quantum information and computation, black-hole physics and string theory, quantum thermodynamics and many-body physics, and nuclear physics. We were to share our fields’ approaches to problems centered on thermalization, entanglement, quantum simulation, and the like. I presented about the eigenstate thermalization hypothesis, which elucidates how many-particle quantum systems thermalize. The hypothesis fails, I argued, if a system’s dynamics conserve quantities (analogous to energy and particle number) that can’t be measured simultaneously. Herman Verlinde discussed the ER=EPR conjecture.

My PhD advisor, John Preskill, blogged about ER=EPR almost exactly eleven years ago. Read his blog post for a detailed introduction. Briefly, ER=EPR posits an equivalence between wormholes and entanglement. 

The ER stands for Einstein–Rosen, as in Einstein–Rosen bridge. Sean Carroll provided the punchiest explanation I’ve heard of Einstein–Rosen bridges. He served as the scientific advisor for the 2011 film Thor. Sean suggested that the film feature a wormhole, a connection between two black holes. The filmmakers replied that wormholes were passé. So Sean suggested that the film feature an Einstein–Rosen bridge. “What’s an Einstein–Rosen bridge?” the filmmakers asked. “A wormhole.” So Thor features an Einstein–Rosen bridge.

EPR stands for Einstein–Podolsky–Rosen. The three authors published a quantum paradox in 1935. Their EPR paper galvanized the community’s understanding of entanglement.

ER=EPR is a conjecture that entanglement is closely related to wormholes. As Herman said during his talk, “You probably need entanglement to realize a wormhole.” On a stronger reading, any two maximally entangled particles are connected by a wormhole. The idea crystallized in a paper by Juan Maldacena and Lenny Susskind. They drew on work by Mark Van Raamsdonk (who masterminded the workshop behind this Quantum Frontiers post) and Brian Swingle (who’s appeared in further posts).

Herman presented four pieces of evidence for the conjecture, as you can hear in the video of his talk. One piece emerges from the AdS/CFT duality, a parallel between certain space-times (called anti–de Sitter, or AdS, spaces) and quantum theories that have a certain symmetry (called conformal field theories, or CFTs). A CFT, being quantum, can contain entanglement. One entangled state is called the thermofield double. Suppose that a quantum system is in a thermofield double and you discard half the system. The remaining half looks thermal—we can attribute a temperature to it. Evidence indicates that, if a CFT has a temperature, then it parallels an AdS space that contains a black hole. So entanglement appears connected to black holes via thermality and temperature.
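
For the mathematically inclined: a thermofield double of a system with energy eigenstates | n \rangle and eigenvalues E_n is (a standard definition, not specific to Herman’s talk)

| {\rm TFD} \rangle = \frac{1}{ \sqrt{Z} } \sum_n e^{-\beta E_n / 2} \, | n \rangle \otimes | n \rangle, \qquad Z = \sum_n e^{-\beta E_n}.

Discard either half, and the other half is left in the canonical state e^{-\beta H} / Z—the thermal-looking remainder mentioned above.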

Despite the evidence—and despite the eleven years since John’s publication of his blog post—ER=EPR remains a conjecture. Herman remarked, “It’s more like a slogan than anything else.” His talk’s abstract contains more hedging than a suburban yard. I appreciated the conscientiousness, a college acquaintance having once observed that I spoke carefully even over sandwiches with a friend.

A “source of uneasiness” about ER=EPR, to Herman, is measurability. We can’t check whether a quantum state is entangled via any single measurement. We have to prepare many identical copies of the state, measure the copies, and process the outcome statistics. In contrast, we seem able to conclude that a space-time is connected without measuring multiple copies of the space-time. We can check that a hotel’s first floor is connected to its fourth, for instance, by riding in an elevator once.

Or by riding an elevator to the fifth floor and descending by one story. My husband, the normal person, and I took the stairs instead of falling. The hotel fixed the elevator within a day or two, but who knows when we’ll fix on the truth value of ER=EPR?

With thanks to the conference organizers for their invitation, to the Simons Center for its hospitality, to Jim Simons for his generosity, and to the normal person for inspiration.

The rain in Portugal

My husband taught me how to pronounce the name of the city where I’d be presenting a talk late last July: Aveiro, Portugal. Having studied Spanish, I pronounced the name as Ah-VEH-roh, with a v partway to a hard b. But my husband had studied Portuguese, so he recommended Ah-VAI-roo.

His accuracy impressed me when I heard the name pronounced by the organizer of the conference I was participating in—Theory of Quantum Computation, or TQC. Lídia del Rio grew up in Portugal and studied at the University of Aveiro, so I bow to her in matters of Portuguese pronunciation. I bow to her also for organizing one of the world’s largest annual quantum-computation conferences (with substantial help—fellow quantum physicist Nuriya Nurgalieva shared the burden). Lídia also cofounded Quantum, a journal that’s risen from a Gedankenexperiment to a go-to venue in six years. So she gives the impression of being able to manage anything.

Aveiro architecture

Watching Lídia open TQC gave me pause. I met her in 2013, the summer before beginning my PhD at Caltech. She was pursuing her PhD at ETH Zürich, which I was visiting. Lídia took me dancing at an Argentine-tango studio one evening. Now, she’d invited me to speak at an international conference that she was coordinating.

Lídia and me in Zürich as PhD students
Lídia opening TQC

Lídia wasn’t the only one to give me pause; so did the three other invited speakers. I’d met every one of them when each of us was a grad student or a postdoc.

Richard Küng described classical shadows, a technique for extracting information about quantum states via measurements. Suppose we wish to infer diverse properties of a quantum state \rho (diverse observables’ expectation values). We have to measure many copies of \rho—some number n of copies. The community expected n to grow exponentially with the system’s size—for instance, with the number of qubits in a quantum computer’s register. We can get away with far fewer copies, Richard and collaborators showed, by randomizing our measurements.
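
Here’s a minimal single-qubit sketch of the idea in Python (my own toy version; the actual protocol, due to Huang, Kueng, and Preskill, handles many qubits and comes with rigorous error bounds). Each copy is measured in a random Pauli basis, and inverting the measurement channel turns the outcomes into an unbiased estimate of the state:

```python
import numpy as np

# Columns of each matrix are the eigenvectors of X, Y, and Z.
PAULI_BASES = {
    "X": np.array([[1, 1], [1, -1]]) / np.sqrt(2),
    "Y": np.array([[1, 1], [1j, -1j]]) / np.sqrt(2),
    "Z": np.eye(2, dtype=complex),
}

def shadow_estimate(rho, num_copies, rng):
    """Estimate a single-qubit state from randomized Pauli measurements."""
    estimate = np.zeros((2, 2), dtype=complex)
    for _ in range(num_copies):
        basis = PAULI_BASES[rng.choice(list(PAULI_BASES))]
        probs = np.real(np.diag(basis.conj().T @ rho @ basis)).clip(min=0)
        outcome = rng.choice(2, p=probs / probs.sum())
        ket = basis[:, outcome].reshape(2, 1)
        # Inverting the measurement channel gives the "classical shadow."
        estimate += 3 * (ket @ ket.conj().T) - np.eye(2)
    return estimate / num_copies

rng = np.random.default_rng(seed=0)
rho = np.array([[0.8, 0.3], [0.3, 0.2]])  # a toy single-qubit state
shadow = shadow_estimate(rho, num_copies=10_000, rng=rng)

sigma_x = np.array([[0, 1], [1, 0]])
print(np.trace(sigma_x @ shadow).real)  # ~0.6, the true value of Tr(X rho)
```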

Richard postdocked at Caltech while I was a grad student there. Two properties of his stand out in my memory: his describing, during group meetings, the math he’d been exploring and the Austrian accent in which he described that math.

Did this restaurant’s owners realize that quantum physicists were descending on their city? I have no idea.

Also while I was a grad student, Daniel Stilck França visited Caltech. Daniel’s TQC talk conveyed skepticism about whether near-term quantum computers can beat classical computers in optimization problems. Near-term quantum computers are NISQ (noisy, intermediate-scale quantum) devices. Daniel studied how noise (particularly, local depolarizing noise) propagates through NISQ circuits. Imagine a quantum computer suffering a 1% error rate. Such a computer loses its advantage over classical competitors after about 10 layers of gates, Daniel concluded. Nor does he expect error mitigation—a bandaid en route to the sutures of quantum error correction—to help much.
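
A cartoon of the effect (my own back-of-the-envelope illustration, not Daniel’s actual analysis): if each of n qubits suffers depolarizing noise of strength p during each layer, the circuit’s coherent “signal” shrinks by roughly a factor of (1 - p) per qubit per layer:

```python
# Back-of-the-envelope decay of a NISQ circuit's output signal under
# local depolarizing noise (illustrative model, not Daniel's calculation).
p, num_qubits = 0.01, 50
for depth in (1, 5, 10, 20):
    signal = (1 - p) ** (num_qubits * depth)
    print(f"depth {depth:2d}: surviving signal ~ {signal:.4f}")
# depth  1: surviving signal ~ 0.6050
# depth  5: surviving signal ~ 0.0811
# depth 10: surviving signal ~ 0.0066
# depth 20: surviving signal ~ 0.0000
# After ~10 layers, under 1% of the global signal survives: little for a
# quantum computer to leverage against classical competitors.
```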

I’d coauthored a paper with the fourth invited speaker, Adam Bene Watts. He was a PhD student at MIT, and I was a postdoc. At the time, he resembled the 20th-century entanglement guru John Bell. Adam still resembles Bell, but he’s moved to Canada.

Adam speaking at TQC
From a 2021 Quantum Frontiers post of mine. I was tickled to see that TQC’s organizers used the photo from my 2021 post as Adam’s speaker photo.

Adam distinguished what we can compute using simple quantum circuits but not using simple classical ones. His results fall under the heading of complexity theory, about which one can rarely prove anything. Complexity theorists cling to their jobs by assuming conjectures widely expected to be true. Atop the assumptions, or conditions, they construct “conditional” proofs. Adam proved unconditional claims in complexity theory, thanks to the simplicity of the circuits he compared.

In my estimation, the talks conveyed cautious optimism: according to Adam, we can prove modest claims unconditionally in complexity theory. According to Richard, we can spare ourselves trials while measuring certain properties of quantum systems. Even Daniel’s talk inspired more optimism than he intended: a few years ago, the community couldn’t predict how noisy short-depth quantum circuits could perform. So his defeatism, rooted in evidence, marks an advance.

Aveiro nurtures optimism, I expect most visitors would agree. Sunshine drenches the city, and the canals sparkle—literally sparkle, as though devised by Elsa at a higher temperature than usual. Fresh fruit seems to wend its way into every meal.1 Art nouveau flowers scale the architecture, and fanciful designs pattern the tiled sidewalks.

What’s more, quantum information theorists of my generation were making good. Three riveted me in their talks, and another co-orchestrated one of the world’s largest quantum-computation gatherings. To think that she’d taken me dancing years before ascending to the global stage.

My husband and I made do, during our visit, by cobbling together our Spanish, his Portuguese, and occasional English. Could I hold a conversation with the Portuguese I gleaned? As adroitly as a NISQ circuit could beat a classical computer. But perhaps we’ll return to Portugal, and experimentalists are doubling down on quantum error correction. I remain cautiously optimistic.

1As do eggs, I was intrigued to discover. Enjoyed a hardboiled egg at breakfast? Have a fried egg on your hamburger at lunch. And another on your steak at dinner. And candied egg yolks for dessert.

This article takes its title from a book by former US Poet Laureate Billy Collins. The title alludes to a song in the musical My Fair Lady, “The Rain in Spain.” The song has grown so famous that I don’t think twice upon hearing the name. “The rain in Portugal” did lead me to think twice—and so did TQC.

With thanks to Lídia and Nuriya for their hospitality. You can submit to TQC2024 here.

What geckos have to do with quantum computing

When my brother and I were little, we sometimes played video games on weekend mornings, before our parents woke up. We owned a 3DO console, which ran the game Gex. Gex is named after its main character, a gecko. Stepping into Gex’s shoes—or toe pads—a player can clamber up walls and across ceilings. 

I learned this month how geckos clamber, at the 125th Statistical Mechanics Conference at Rutgers University. (For those unfamiliar with the field: statistical mechanics is a sibling of thermodynamics, the study of energy.) Joel Lebowitz, a legendary mathematical physicist and nonagenarian, has organized the conference for decades. This iteration included a talk by Kanupriya (Kanu) Sinha, an assistant professor at the University of Arizona. 

Kanu studies open quantum systems, or quantum systems that interact with environments. She often studies a particle that can be polarized: a particle containing electric charge that can be distributed unevenly across it. An example is the water molecule. As encoded in its chemical symbol, H2O, a water molecule consists of two hydrogen atoms and one oxygen atom. The oxygen attracts the molecule’s electrons more strongly than the hydrogen atoms do. So the molecule’s oxygen end carries a negative charge, and the hydrogen ends carry positive charges.1

The red area represents the oxygen, and the gray areas represent the hydrogen atoms. Image from the American Chemical Society.

When certain quantum particles are polarized, we can control their positions using lasers. After all, a laser consists of light—an electromagnetic field—and electric fields influence electrically charged particles’ movements. This control enables optical tweezers—laser beams that can place certain polarizable atoms wherever an experimentalist wishes. Such atoms can form a quantum computer, as John Preskill wrote in a blog post on Quantum Frontiers earlier this month.

Instead of placing polarizable atoms in an array that will perform a quantum computation, you can place the atoms in an outline of the Eiffel Tower. Image from Antoine Browaeys’s lab.

A tweezered atom’s environment consists not only of a laser, but also everything else around, including dust particles. Undesirable interactions with the environment deplete an atom of its quantum properties. Quantum information stored in the atom leaks into the environment, threatening a quantum computer’s integrity. Hence the need for researchers such as Kanu, who study open quantum systems.

Kanu illustrated the importance of polarizable particles in environments, in her talk, through geckos. A gecko’s toe pads contain tiny hairs that polarize temporarily. The electric charges therein can be attracted to electric charges in a wall. We call this attraction the van der Waals force. So Gex can clamber around for a reason related to why certain atoms suit quantum computing.
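
Quantitatively (a textbook scaling, not a result from Kanu’s talk): the London–van der Waals attraction between two neutral, polarizable particles a distance r apart carries an energy

V(r) = - \frac{ C_6 }{ r^6 },

while a polarizable particle at height z above a flat surface feels V(z) = - C_3 / z^3, with coefficients set by the polarizabilities. Each hair’s attraction is feeble, but a gecko’s toe pads contain so many hairs that the sum supports the lizard.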

Kanu explaining how geckos stick.

Winter break offers prime opportunities for kicking back with one’s siblings. Even if you don’t play Gex (and I doubt whether you do), behind your game of choice may lie more physics than expected.

1Water molecules are polarized permanently, whereas Kanu studies particles that polarize temporarily.

The power of awe

Mid-afternoon, one Saturday late in September, I forgot where I was. I forgot that I was visiting Seattle for the second time; I forgot that I’d just finished co-organizing a workshop partially about nuclear physics for the first time. I’d arrived at a crowded doorway in the Chihuly Garden and Glass museum, and a froth of blue was towering above the onlookers in front of me. Glass tentacles, ranging from ultramarine through turquoise to clear, extended from the froth. Golden conch shells, starfish, and mollusks rode the waves below. The vision drove everything else from my mind for an instant.

Much had been weighing on my mind that week. The previous day had marked the end of a workshop hosted by the Inqubator for Quantum Simulation (IQuS, pronounced eye-KWISS) at the University of Washington. I’d co-organized the workshop with IQuS member Niklas Mueller, NIST physicist Alexey Gorshkov, and nuclear theorist Raju Venugopalan (although Niklas deserves most of the credit). We’d entitled the workshop “Thermalization, from Cold Atoms to Hot Quantum Chromodynamics.” Quantum chromodynamics describes the strong force that binds together a nucleus’s constituents, so I call the workshop “Journey to the Center of the Atom” to myself.

We aimed to unite researchers studying thermal properties of quantum many-body systems from disparate perspectives. Theorists and experimentalists came; and quantum information scientists and nuclear physicists; and quantum thermodynamicists and many-body physicists; and atomic, molecular, and optical physicists. Everyone cared about entanglement, equilibration, and what else happens when many quantum particles crowd together and interact. 

We quantum physicists crowded together and interacted from morning till evening. We presented findings to each other, questioned each other, coagulated in the hallways, drank tea together, and cobbled together possible projects. The week electrified us like a chilly ocean wave but also wearied me like an undertow. Other work called for attention, and I’d be presenting four more talks at four more workshops and campus visits over the next three weeks. The day after the workshop, I worked in my hotel half the morning and then locked away my laptop. I needed refreshment, and little refreshes like art.

Strongly interacting physicists

Chihuly Garden and Glass, in downtown Seattle, succeeded beyond my dreams: the museum drew me into somebody else’s dreams. Dale Chihuly grew up in Washington state during the mid-twentieth century. He studied interior design and sculpture before winning a Fulbright Fellowship to learn glass-blowing techniques in Murano, Italy. After that, Chihuly transformed the world. I’ve encountered glass sculptures of his in Pittsburgh; Florida; Boston; Jerusalem; Washington, DC; and now Seattle—and his reach dwarfs my travels. 

Chihuly chandelier at the Renwick Gallery in Washington, DC

After the first few encounters, I began recognizing sculptures as Chihuly’s before checking their name plates. Every work by his team reflects his style. Tentacles, bulbs, gourds, spheres, and bowls evidence what I never expected glass to do but what, having now seen it, I’m glad it does.

This sentiment struck home a couple of galleries beyond the Seaforms. The exhibit Mille Fiori drew inspiration from the garden cultivated by Chihuly’s mother. The name means A Thousand Flowers, although I spied fewer flowers than what resembled grass, toadstools, and palm fronds. Visitors feel like grasshoppers amongst the red, green, and purple stalks, which dwarfed some of us. The narrator of Jules Verne’s Journey to the Center of the Earth must have felt similarly, encountering mastodons and dinosaurs underground. I circled the garden before registering how much my mind had lightened. Responsibilities and cares felt miles away—or, to a grasshopper, backyards away. Wonder does wonders.

Mille Fiori

Near the end of the path around the museum, a theater plays documentaries about Chihuly’s projects. The documentaries include interviews with the artist, and several quotes reminded me of the science I’d been trained to seek out: “I really wanted to take glass to its glorious height,” Chihuly said, “you know, really make something special.” “Things—pieces got bigger, pieces got taller, pieces got wider.” He felt driven to push each art form as far as the glass would permit his team to take it. Similarly, my PhD advisor John Preskill encouraged me to “think big.” What physics is worth doing—what would create an impact?

How did a boy from Tacoma, Washington, impact not only fellow blown-glass artists—not only artists—not only an exhibition here and there in his home country—but experiences across the globe, including that of a physicist one weekend in September?

One idea from the IQuS workshop caught my eye. Some particle colliders accelerate heavy ions to high energies and then smash the ions together. Examples include lead ions, studied at CERN in Geneva, and gold ions, studied at Brookhaven National Laboratory. After a collision, the matter expands and cools. Nuclear physicists don’t understand how the matter cools; models predict cooling times longer than those observed. This mismatch has persisted across decades of experiments. The post-collision matter evades attempts at computer simulation; it’s literally a hot mess. Can recent advances in many-body physics help?

The exhibit Persian Ceiling at Chihuly Garden and Glass. Doesn’t it look like it could double as an artist’s rendering of a heavy-ion collision?

Martin Savage, the director of IQuS, hopes so. He hopes that IQuS will impact nuclear physics across the globe. Every university and its uncle boasts a quantum institute nowadays, but IQuS seems to me to have carved out a niche for itself. IQuS has grown up in the bosom of the Institute for Nuclear Theory at the University of Washington, which has guided nuclear theory for decades. IQuS is smashing that history together with the future of quantum simulators. IQuS doesn’t strike me as just another glass bowl in the kitchen of quantum science. A bowl worthy of Chihuly? I don’t know, but I’d like to hope so.

I left Chihuly Garden and Glass with respect for the past week and energy for the week ahead. Whether you find it in physics or in glass or in both—or in plunging into a dormant Icelandic volcano in search of the Earth’s core—I recommend the occasional dose of awe.

Participants in the final week of the workshop

With thanks to Martin Savage, IQuS, and the University of Washington for their hospitality.

Astrobiology meets quantum computation?

The origin of life appears to share little with quantum computation, apart from the difficulty of achieving it and its potential for clickbait. Yet similar notions of complexity have recently garnered attention in both fields. Each topic’s researchers expect only special systems to generate high values of such complexity, or complexity at high rates: organisms, in one community, and quantum computers (and perhaps black holes), in the other. 

Each community appears fairly unaware of its counterpart. This article is intended to introduce the two. Below, I review assembly theory from origin-of-life studies, followed by quantum complexity. I’ll then compare and contrast the two concepts. Finally, I’ll suggest that origin-of-life scientists can quantize assembly theory using quantum complexity. The idea is a bit crazy, but, well, so what?

Assembly theory in origin-of-life studies

Imagine discovering evidence of extraterrestrial life. How could you tell that you’d found it? You’d have detected a bunch of matter—a bunch of particles, perhaps molecules. What about those particles could evidence life?

This question motivated Sara Imari Walker and Lee Cronin to develop assembly theory. (Most of my assembly-theory knowledge comes from Sara, about whom I wrote this blog post years ago and with whom I share a mentor.) Assembly theory governs physical objects, from proteins to self-driving cars. 

Imagine assembling a protein from its constituent atoms. First, you’d bind two atoms together. Then, you might bind another two atoms together. Eventually, you’d bind two pairs together. Your sequence of steps would form an algorithm for assembling the protein. Many algorithms can generate the same protein. One algorithm has the fewest steps. That number of steps is called the protein’s assembly number.
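
As a toy illustration, here’s a brute-force computation of an analogous quantity for strings (my own sketch; real assembly theory joins molecular fragments, not letters, and uses cleverer algorithms). Anything already built can be reused in later joins:

```python
import itertools

def assembly_index(target: str) -> int:
    """Fewest join operations needed to build `target` from its characters,
    where anything already built may be reused in later joins."""
    start = frozenset(target)  # the individual characters

    def reachable(available, joins_left):
        if target in available:
            return True
        if joins_left == 0:
            return False
        for x, y in itertools.product(available, repeat=2):
            joined = x + y
            # Prune: keep only fragments that appear inside the target.
            if joined in target and joined not in available:
                if reachable(available | {joined}, joins_left - 1):
                    return True
        return False

    joins = 0
    while not reachable(start, joins):  # iterative deepening
        joins += 1
    return joins

print(assembly_index("banana"))  # 4: join ba, na, nana, then ba + nana
```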

Different natural processes tend to create objects that have different assembly numbers. Stars form low-assembly-number objects by fusing two hydrogen atoms together into helium. Similarly, random processes have high probabilities of forming low-assembly-number objects. For example, geological upheavals can bring a shard of iron near a lodestone. The iron will stick to the magnetized stone, forming a two-component object.

My laptop has an enormous assembly number. Why can such an object exist? Because of information, Sara and Lee emphasize. Human beings amassed information about materials science, Boolean logic, the principles of engineering, and more. That information—which exists only because organisms exist—helped engender my laptop.

If any object has a high enough assembly number, Sara and Lee posit, that object evidences life. Absent life, natural processes have too low a probability of randomly throwing together molecules into the shape of a computer. How high is “high enough”? Approximately fifteen, experiments by Lee’s group suggest. (Why do those experiments point to the number fifteen? Sara’s group is working on a theory for predicting the number.)

In summary, assembly number quantifies complexity in origin-of-life studies, according to Sara and Lee. The researchers propose that only living beings create high-assembly-number objects.

Quantum complexity in quantum computation

Quantum complexity defines a stage in the equilibration of many-particle quantum systems. Consider a clump of N quantum particles isolated from its environment. The clump will be in a pure quantum state | \psi(0) \rangle at a time t = 0. The particles will interact, evolving the clump’s state into a time-dependent state | \psi(t) \rangle.

Quantum many-body equilibration is more complicated than the equilibration undergone by your afternoon pick-me-up as it cools.

The interactions will equilibrate the clump internally. One stage of equilibration centers on local observables O. They’ll come to have expectation values \langle \psi(t) | O | \psi(t) \rangle approximately equal to thermal expectation values {\rm Tr} ( O \, \rho_{\rm th} ), for a thermal state \rho_{\rm th} of the clump. During another stage of equilibration, the particles correlate through many-body entanglement. 

The longest known stage centers on the quantum complexity of | \psi(t) \rangle. The quantum complexity is the minimal number of basic operations needed to prepare | \psi(t) \rangle from a simple initial state. We can define “basic operations” in many ways. Examples include quantum logic gates that act on two particles. Another example is an evolution for one time step under a Hamiltonian that couples together at most k particles, for some k independent of N. Similarly, we can define “a simple initial state” in many ways. We could count as simple only the N-fold tensor product | 0 \rangle^{\otimes N} of our favorite single-particle state | 0 \rangle. Or we could call any N-fold tensor product simple, or any state that contains at-most-two-body entanglement, and so on. These choices don’t affect the quantum complexity’s qualitative behavior, according to string theorists Adam Brown and Lenny Susskind.

How quickly can the quantum complexity of | \psi(t) \rangle grow? Fast growth stems from many-body interactions, long-range interactions, and random coherent evolutions. (Random unitary circuits exemplify random coherent evolutions: each gate is chosen according to the Haar measure, which we can view roughly as uniformly random.) At most, quantum complexity can grow linearly in time. Random unitary circuits achieve this rate. Black holes may; they scramble information quickly. The greatest possible complexity of any N-particle state scales exponentially in N, according to a counting argument.
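
Here’s the flavor of the counting argument (a standard back-of-the-envelope estimate; the details vary by author). A circuit of T gates, each drawn from a fixed gate set of g elements and applied to one of at most \binom{N}{2} qubit pairs, can prepare at most roughly

\left[ g \binom{N}{2} \right]^T

distinct states. But covering the set of N-qubit pure states, even at fixed resolution, requires a number of states exponential in the dimension 2^N. Equating the two counts forces T to grow exponentially in N for most states.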

A highly complex state | \psi(t) \rangle looks simple from one perspective and complicated from another. Human scientists can easily measure only local observables O. Such observables’ expectation values \langle \psi(t) | O | \psi(t) \rangle  tend to look thermal in highly complex states, \langle \psi(t) | O | \psi(t) \rangle \approx {\rm Tr} ( O \, \rho_{\rm th} ), as implied above. The thermal state has the greatest von Neumann entropy, - {\rm Tr} ( \rho \log \rho), of any quantum state \rho that obeys the same linear constraints as | \psi(t) \rangle (such as having the same energy expectation value). Probed through simple, local observables O, highly complex states look highly entropic—highly random—similarly to a flipped coin.

Yet complex states differ from flipped coins significantly, as revealed by subtler analyses. An example underlies the quantum-supremacy experiment published by Google’s quantum-computing group in 2019. Experimentalists initialized 53 qubits (quantum two-level systems) in a tensor product. The state underwent many gates, which prepared a highly complex state. Then, the experimentalists measured the z-component \sigma_z of each qubit’s spin, randomly obtaining a -1 or a 1. One trial yielded a 53-bit string. The experimentalists repeated this process many times, using the same gates in each trial. From all the trials’ bit strings, the group inferred the probability p(s) of obtaining a given string s in the next trial. The distribution \{ p(s) \} resembles the uniformly random distribution…but differs from it subtly, as revealed by a cross-entropy analysis. Classical computers can’t easily generate \{ p(s) \}; hence the Google group’s claiming to have achieved quantum supremacy/advantage. Quantum complexity differs from simple randomness, that difference is difficult to detect, and the difference can evidence quantum computers’ power.
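
Here’s a toy version of that cross-entropy analysis in Python (my own illustration with 12 qubits instead of 53; Google’s analysis involved far more care). Samples drawn from the ideal distribution score near 1 on the linear cross-entropy benchmark; uniformly random bit strings score near 0:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 12           # qubits (toy size, so we can store the ideal distribution)
dim = 2 ** n

# Deep random circuits yield output probabilities that are approximately
# Porter-Thomas distributed: exponentially distributed weights.
ideal = rng.exponential(size=dim)
ideal /= ideal.sum()

def linear_xeb(samples):
    """Linear cross-entropy benchmark: 2^n * (mean ideal prob of samples) - 1."""
    return dim * ideal[samples].mean() - 1

quantum_like = rng.choice(dim, size=50_000, p=ideal)  # sampling the ideal dist
spoof = rng.integers(dim, size=50_000)                # uniformly random strings
print(linear_xeb(quantum_like))  # ~1: statistics bear the circuit's fingerprint
print(linear_xeb(spoof))         # ~0: indistinguishable from coin flips
```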

A fridge that holds one of Google’s quantum computers.

Comparison and contrast

Assembly number and quantum complexity resemble each other as follows:

  1. Each function quantifies the fewest basic operations needed to prepare something.
  2. Only special systems (organisms) can generate high assembly numbers, according to Sara and Lee. Similarly, only special systems (such as quantum computers and perhaps black holes) can generate high complexity quickly, quantum physicists expect.
  3. Assembly number may distinguish products of life from products of abiotic systems. Similarly, quantum complexity helps distinguish quantum computers’ computational power from classical computers’.
  4. High-assembly-number objects are highly structured (think of my laptop). Similarly, high-complexity quantum states are highly structured in the sense of having much many-body entanglement.
  5. Organisms generate high assembly numbers, using information. Similarly, using information, organisms have created quantum computers, which can generate quantum complexity quickly.

Assembly number and quantum complexity differ as follows:

  1. Classical objects have assembly numbers, whereas quantum states have quantum complexities.
  2. In the absence of life, random natural processes have low probabilities of producing high-assembly-number objects. That is, randomness appears to keep assembly numbers low. In contrast, randomness can help quantum complexity grow quickly.
  3. Highly complex quantum states look very random, according to simple, local probes. High-assembly-number objects do not.
  4. Only organisms generate high assembly numbers, according to Sara and Lee. In contrast, abiotic black holes may generate quantum complexity quickly.

Another feature shared by assembly-number studies and quantum computation merits its own paragraph: the importance of robustness. Suppose that multiple copies of a high-assembly-number (or moderate-assembly-number) object exist. Not only does my laptop exist, for example, but so do many other laptops. To Sara, such multiplicity signals the existence of some stable mechanism for creating that object. The multiplicity may provide extra evidence for life (including life that’s discovered manufacturing), as opposed to an unlikely sequence of random forces. Similarly, quantum computing—the preparation of highly complex states—requires stability. Decoherence threatens quantum states, necessitating quantum error correction. Quantum error correction differs from Sara’s stable production mechanism, but both evidence the importance of robustness to their respective fields.

A modest proposal

One can generalize assembly number to quantum states, using quantum complexity. Imagine finding a clump of atoms while searching for extraterrestrial life. The atoms need not have formed molecules, so the clump can have a low classical assembly number. However, the clump can be in a highly complex quantum state. We could detect the state’s complexity only (as far as I know) using many copies of the state, so imagine finding many clumps of atoms. Preparing highly complex quantum states requires special conditions, such as a quantum computer. The clump might therefore evidence organisms who’ve discovered quantum physics. Using quantum complexity, one might extend the assembly number to identify quantum states that may evidence life. However, quantum complexity, or a high rate of complexity generation, alone may not evidence life—for example, if achievable by black holes. Fortunately, a black hole seems unlikely to generate many identical copies of a highly complex quantum state. So we seem to have a low probability of mistakenly attributing a highly complex quantum state, sourced by a black hole, to organisms (atop our low probability of detecting any complex quantum state prepared by anyone other than us).

Would I expect a quantum assembly number to greatly improve humanity’s search for extraterrestrial life? I’m no astrobiology expert (NASA videos notwithstanding), but I’d expect probably not. Still, astrobiology requires chemistry, which requires quantum physics. Quantum complexity seems likely to find applications in the assembly-number sphere. Besides, doesn’t juxtaposing the search for extraterrestrial life and the understanding of life’s origins with quantum computing sound like fun? And a sense of fun distinguishes certain living beings from inanimate matter about as straightforwardly as assembly number does.

With thanks to Jim Al-Khalili, Paul Davies, the From Physics to Life collaboration, and UCLA for hosting me at the workshop that spurred this article.

The Book of Mark, Chapter 2

Late in the summer of 2021, I visited a physics paradise in a physical paradise: the Kavli Institute for Theoretical Physics (KITP). The KITP sits at the edge of the University of California, Santa Barbara like a bougainvillea bush at the edge of a yard. I was eating lunch outside the KITP one afternoon, across the street from the beach. PhD student Arman Babakhani, whom a colleague had just introduced me to, had joined me.

The KITP’s Kohn Hall

What physics was I working on nowadays? Arman wanted to know.

Thermodynamic exchanges. 

The world consists of physical systems exchanging quantities with other systems. When a rose blooms outside the Santa Barbara mission, it exchanges pollen with the surrounding air. The total amount of pollen across the rose-and-air whole remains constant, so we call the amount a conserved quantity. Quantum physicists usually analyze conservation of particles, energy, and magnetization. But quantum systems can conserve quantities that participate in uncertainty relations. Such quantities are called incompatible, because you can’t measure them simultaneously. The x-, y-, and z-components of a qubit’s spin are incompatible.
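
In symbols (a textbook relation, added here for concreteness): the spin components obey the commutation relations

[ S_x, \, S_y ] = i \hbar S_z,

plus cyclic permutations. No quantum state assigns sharp values to all three components at once, which is why we call the components incompatible.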

The Santa Barbara mission…
…and its roses

Exchanging and conserving incompatible quantities, systems can violate thermodynamic expectations. If one system is much larger than the other, we expect the smaller system to thermalize; yet incompatibility invalidates derivations of the thermal state’s form. Incompatibility reduces the thermodynamic entropy produced by exchanges. And incompatibility can raise the average amount of entanglement in the pair of systems—the total system.

If the total system conserves incompatible quantities, what happens to the eigenstate thermalization hypothesis (ETH)? Last month’s blog post overviewed the ETH, a framework for understanding how quantum many-particle systems thermalize internally. That post labeled Mark Srednicki, a professor at the KITP, a high priest of the ETH. I want, I told Arman, to ask Mark what happens when you combine the ETH with incompatible conserved quantities.

I’ll do it, Arman said.

Soon after, I found myself in the fishbowl. High up in the KITP, a room filled with cushy seats overlooks the ocean. The circular windows lend the room its nickname. Arrayed on the armchairs and couches were Mark, Arman, Mark’s PhD student Fernando Iniguez, and Mark’s recent PhD student Chaitanya Murthy. I posed my question, and the conversation took off.

Mark was frustrated about not being able to answer the question. I was delighted to have stumped him. Over the next several weeks, the group continued meeting, and we emailed out notes for everyone to criticize. I particularly enjoyed watching Mark and Chaitanya interact. They’d grown so intellectually close throughout Chaitanya’s PhD studies, they reminded me of an old married couple. One of them had to express only half an idea for the other to realize what he’d meant and to continue the thread. Neither had any qualms about challenging the other, yet they trusted each other’s judgment.1

In vintage KITP fashion, we’d nearly completed a project by the time Chaitanya and I left Santa Barbara. Physical Review Letters published our paper this year, and I’m as proud of it as a gardener of the first buds from her garden. Here’s what we found.

Southern California spoiled me for roses.

Incompatible conserved quantities conflict with the ETH and the ETH’s prediction of internal thermalization. Why? For three reasons. First, when inferring thermalization from the ETH, we assume that the Hamiltonian lacks degeneracies (that no energy equals any other). But incompatible conserved quantities force degeneracies on the Hamiltonian.2 

Second, when inferring from the ETH that the system thermalizes, we assume that the system begins in a microcanonical subspace. That’s an eigenspace shared by the conserved quantities (other than the Hamiltonian)—usually, an eigenspace of the total particle number or the total spin’s z-component. But, if incompatible, the conserved quantities share no eigenbasis, so they might not share eigenspaces, so microcanonical subspaces won’t exist in abundance.

Third, let’s focus on a system of N qubits. Say that the Hamiltonian conserves the total spin components S_x, S_y, and S_z. The Hamiltonian obeys the Wigner–Eckart theorem, which sounds more complicated than it is. Suppose that the qubits begin in a state | s_\alpha, \, m \rangle labeled by a spin quantum number s_\alpha and a magnetic spin quantum number m. Let a particle hit the qubits, acting on them with an operator \mathcal{O} . With what probability (amplitude) do the qubits end up with quantum numbers s_{\alpha'} and m'? The answer is \langle s_{\alpha'}, \, m' | \mathcal{O} | s_\alpha, \, m \rangle. The Wigner–Eckart theorem dictates this probability amplitude’s form. 
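
For readers who want the form (a hedged, textbook statement of the theorem, not the notation of our paper): if \mathcal{O}^{(k)}_q denotes the q-component of an operator that transforms as a rank-k spherical tensor, the theorem factorizes the amplitude as

\langle s_{\alpha'}, \, m' | \mathcal{O}^{(k)}_q | s_\alpha, \, m \rangle = \langle s_\alpha, \, m; \, k, \, q | s_{\alpha'}, \, m' \rangle \, \langle s_{\alpha'} \| \mathcal{O}^{(k)} \| s_\alpha \rangle.

The first factor is a Clebsch–Gordan coefficient, fixed entirely by the symmetry; only the second, “reduced” matrix element depends on the operator’s details.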

| s_\alpha, \, m \rangle and | s_{\alpha'}, \, m' \rangle are Hamiltonian eigenstates, thanks to the conservation law. The ETH is an ansatz for the form of \langle s_{\alpha'}, \, m' | \mathcal{O} | s_\alpha, \, m \rangle—of the elements of matrices that represent operators \mathcal{O} relative to the energy eigenbasis. The ETH butts heads with the Wigner–Eckart theorem, which also predicts the matrix element’s form.

The Wigner–Eckart theorem wins, being a theorem—a proved claim. The ETH is, as the H in the acronym relates, only a hypothesis.

If conserved quantities are incompatible, we have to kiss the ETH and its thermalization predictions goodbye. But must we set ourselves adrift entirely? Can we cling to no buoy from physics’s best toolkit for quantum many-body thermalization?

No, and yes, respectively. Our clan proposed a non-Abelian ETH for Hamiltonians that conserve incompatible quantities—or, equivalently, that have non-Abelian symmetries. The non-Abelian ETH depends on s_\alpha and on Clebsch–Gordan coefficients—conversion factors between total-spin eigenstates | s_\alpha, \, m \rangle and product states | s_1, \, m_1 \rangle \otimes | s_2, \, m_2 \rangle.

Using the non-Abelian ETH, we proved that many systems thermalize internally, despite conserving incompatible quantities. Yet the incompatibility complicates the proof enormously, extending it from half a page to several pages. Also, under certain conditions, incompatible quantities may alter thermalization. According to the conventional ETH, time-averaged expectation values \overline{ \langle \mathcal{O} \rangle }_t come to equal thermal expectation values \langle \mathcal{O} \rangle_{\rm th} to within O( N^{-1} ) corrections, as I explained last month. The correction can grow polynomially larger in the system size, to O( N^{-1/2} ), if conserved quantities are incompatible. Our conclusion holds under an assumption that we argue is physically reasonable.

So incompatible conserved quantities do alter the ETH, yet another thermodynamic expectation. Physicist Jae Dong Noh began checking the non-Abelian ETH numerically, and more testing is underway. And I’m looking forward to returning to the KITP this fall. Tales do say that paradise is a garden.

View through my office window at the KITP

1Not that married people always trust each other’s judgment.

2The reason is Schur’s lemma, a group-theoretic result. Appendix A of this paper explains the details.

The Book of Mark

Mark Srednicki doesn’t look like a high priest. He’s a professor of physics at the University of California, Santa Barbara (UCSB); and you’ll sooner find him in khakis than in sacred vestments. Humor suits his round face better than channeling divine wrath would; and I’ve never heard him speak in tongues—although, when an idea excites him, his hands rise to shoulder height of their own accord, as though halfway toward a priestly blessing. Mark belongs less on a ziggurat than in front of a chalkboard. Nevertheless, he called himself a high priest.

Specifically, Mark jokingly called himself a high priest of the eigenstate thermalization hypothesis, a framework for understanding how quantum many-body systems thermalize internally. The eigenstate thermalization hypothesis has an unfortunate number of syllables, so I’ll call it the ETH. The ETH illuminates closed quantum many-body systems, such as a clump of N ultracold atoms. The clump can begin in a pure product state | \psi(0) \rangle, then evolve under a chaotic1 Hamiltonian H. The time-t state | \psi(t) \rangle will remain pure; its von Neumann entropy will always vanish. Yet entropy grows according to the second law of thermodynamics. Breaking the second law amounts almost to enacting a miracle, according to physicists. Does the clump of atoms deserve consideration for sainthood?

No—although the clump’s state remains pure, a small subsystem’s state does not. A subsystem consists of, for example, a few atoms. They’ll entangle with the other atoms, which serve as an effective environment. The entanglement will mix the few atoms’ state, whose von Neumann entropy will grow.

The ETH predicts this growth. The ETH is an ansatz about H and an operator O—say, an observable of the few-atom subsystem. We can represent O as a matrix relative to the energy eigenbasis. The matrix elements have a certain structure, if O and H satisfy the ETH. Suppose that the operators do and that H lacks degeneracies—that no two energy eigenvalues equal each other. We can prove that O thermalizes: Imagine measuring the expectation value \langle \psi(t) | O | \psi(t) \rangle at each of many instants t. Averaging over instants produces the time-averaged expectation value \overline{ \langle O \rangle_t }.
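
That “certain structure,” roughly (in the notation standard since Srednicki’s work, suppressing some details), is

\langle E_m | O | E_n \rangle = O(\bar{E}) \, \delta_{mn} + e^{-S(\bar{E})/2} \, f_O(\bar{E}, \omega) \, R_{mn},

wherein \bar{E} := (E_m + E_n)/2, \omega := E_m - E_n, S denotes the thermodynamic entropy, O(\bar{E}) and f_O are smooth functions, and the R_{mn} are erratic numbers of order one. The off-diagonal elements are exponentially small in the entropy; they’re what damp fluctuations about the thermal average.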

Another average is the thermal average—the expectation value of O in the appropriate thermal state. If H conserves just itself,2 the appropriate thermal state is the canonical state, \rho_{\rm can} := e^{-\beta H}/ Z. The average energy \langle \psi(0) | H | \psi(0) \rangle defines the inverse temperature \beta, and Z normalizes the state. Hence the thermal average is \langle O \rangle_{\rm th}  :=  {\rm Tr} ( O \rho_{\rm can} )

The time average approximately equals the thermal average, according to the ETH: \overline{ \langle O \rangle_t }  =  \langle O \rangle_{\rm th} + O \big( N^{-1} \big). The correction is small in the total number N of atoms. Through the lens of O, the atoms thermalize internally. Local observables tend to satisfy the ETH, and we can easily observe only local observables. We therefore usually observe thermalization, consistently with the second law of thermodynamics.

I agree that Mark Srednicki deserves the title high priest of the ETH. He and Joshua Deutsch independently dreamed up the ETH in 1994 and 1991, respectively. Since numericists reexamined it in 2008, studies and applications of the ETH have exploded like a desert religion. Yet Mark had never encountered the question I posed about it in 2021. Next month’s blog post will share the good news about that question.

1Nonintegrable.

2Apart from trivial quantities, such as projectors onto eigenspaces of H.

Let the great world spin

I first heard the song “Fireflies,” by Owl City, shortly after my junior year of college. During the refrain, singer Adam Young almost whispers, “I’d like to make myself believe / that planet Earth turns slowly.” Goosebumps prickled along my neck. Yes, I thought, I’ve studied Foucault’s pendulum.

Léon Foucault practiced physics in France during the mid-1800s. During one of his best-known experiments, he hung a pendulum from high up in a building. Imagine drawing a wide circle on the floor, around the pendulum’s bob.1

Pendulum bob and encompassing circle, as viewed from above.

Imagine pulling the bob out to a point above the circle, then releasing the pendulum. The bob will swing back and forth, tracing out a straight line across the circle.

You might expect the bob to keep swinging back and forth along that line, and to do nothing more, forever (or until the pendulum has spent all its energy on pushing air molecules out of its way). After all, the only forces acting on the bob seem to be gravity and the tension in the pendulum’s wire. But the line rotates; its two tips trace out the circle.

How long the tips take to trace the circle depends on your latitude. At the North and South Poles, the tips take one day.
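
Quantitatively (a standard result; the sample numbers are mine, not Foucault’s): at latitude \lambda, the line completes a full turn in a time

T = \frac{ T_{\rm sidereal} }{ \sin \lambda } \approx \frac{ 23.93 \text{ hours} }{ \sin \lambda }.

At the poles, \sin \lambda = 1, so T is one day. In Paris, at latitude 48.85^\circ, T \approx 31.8 hours. At the equator, the line doesn’t rotate at all.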

Why does the line rotate? Because the pendulum dangles from a building on the Earth’s surface. As the Earth rotates, so does the building, which pushes the pendulum. You’ve experienced such a pushing if you’ve ridden in a car. Suppose that the car is zipping along at a constant speed, in an unchanging direction, on a smooth road. With your eyes closed, you won’t feel like you’re moving. The only forces you can sense are gravity and the car seat’s preventing you from sinking into the ground (analogous to the wire tension that prevents the pendulum bob from crashing into the floor). If the car turns a bend, it pushes you sidewise in your seat. This push is called a centrifugal force. The pendulum feels a centrifugal force because the Earth’s rotation is an acceleration like the car’s. The pendulum also feels another force—a Coriolis force—because it’s not merely sitting, but moving on the rotating Earth.

We can predict the rotation of Foucault’s pendulum by assuming that the Earth rotates, then calculating the centrifugal and Coriolis forces induced, and then calculating how those forces will influence the pendulum’s motion. Upon debuting in 1851, the pendulum evidenced the Earth’s rotation as nothing else had before. You can imagine the stir created by the pendulum when Foucault demonstrated it at the Observatoire de Paris and at the Panthéon monument. Copycat pendulums popped up across the world. One ended up next to my college’s physics building, as shown in this video. I reveled in understanding that pendulum’s motion, junior year.

My professor alluded to a grander Foucault pendulum in Paris. It hangs in what sounded like a temple to the Enlightenment—beautiful in form, steeped in history, and rich in scientific significance. I’m a romantic about the Enlightenment; I adore the idea of creating the first large-scale organizational system for knowledge. So I hungered to make a pilgrimage to Paris.

I made the pilgrimage this spring. I was attending a quantum-chaos workshop at the Institut Pascal, an interdisciplinary institute in a suburb of Paris. One quiet Saturday morning, I rode a train into the city center. The city houses a former priory—a gorgeous, 11th-century, white-stone affair of the sort for which I envy European cities. For over 200 years, the former priory has housed the Musée des Arts et Métiers, a museum of industry and technology. In the priory’s chapel hangs Foucault’s pendulum.2

A pendulum of Foucault’s own—the one he exhibited at the Panthéon—used to hang in the chapel. That pendulum broke in 2010; but still, the pendulum swinging today is all but a holy relic of scientific history. Foucault’s pendulum! Demonstrating that the Earth rotates! And in a jewel of a setting—flooded with light from stained-glass windows and surrounded by Gothic arches below a painted ceiling. I flitted around the little chapel like a pollen-happy bee for maybe 15 minutes, watching the pendulum swing, looking at other artifacts of Foucault’s, wending my way around the carved columns.

Almost alone. A handful of visitors trickled in and out. They contrasted with my visit, the previous weekend, to the Louvre. There, I’d witnessed a Disney World–esque line of tourists waiting for a glimpse of the Mona Lisa, camera phones held high. Nobody was queueing up in the musée’s chapel. But this was Foucault’s pendulum! Demonstrating that the Earth rotates!

I confess to capitalizing on the lack of visitors to take a photo with Foucault’s pendulum and Foucault’s Pendulum, though.

Shortly before I’d left for Paris, a librarian friend had recommended Umberto Eco’s novel Foucault’s Pendulum. It occupied me during many a train ride to or from the center of Paris.

The rest of the museum could model for a steampunk advertisement. I found automata, models of the steam engines that triggered the Industrial Revolution, and a phonograph of Thomas Edison’s. The gadgets, many formed from brass and dark wood, contrast with the priory’s light-toned majesty. Yet the priory shares its elegance with the inventions, many of which gleam and curve in decorative flutes.

The grand finale at the Musée des Arts et Métiers.

I tore myself away from the Musée des Arts et Métiers after several hours. I returned home a week later and heard the song “Fireflies” again not long afterward. The goosebumps returned, stronger than before. Thanks to Foucault, I can make myself believe that planet Earth turns.

With thanks to Kristina Lynch for tolerating my many, many, many questions throughout her classical-mechanics course.

This story’s title refers to a translation of Goethe’s Faust. In the translation, the demon Mephistopheles tells the title character, “You let the great world spin and riot; / we’ll nest contented in our quiet” (to within punctuational and other minor errors, as I no longer have the text with me). A prize-winning 2009 novel is called Let the Great World Spin; I’ve long wondered whether Faust inspired its title.

1Why isn’t the bottom of the pendulum called the alice?

2After visiting the musée, I learned that my classical-mechanics professor had been referring to the Foucault pendulum that hangs in the Panthéon, rather than to the pendulum in the musée. The musée still contains the pendulum used by Foucault in 1851, whereas the Panthéon has only a copy, so I’m content. Still, I wouldn’t mind making a pilgrimage to the Panthéon. Let me know if more thermodynamic workshops take place in Paris!