The enigma of Robert Hooke

In 1675, Robert Hooke published the “true mathematical and mechanical form” for the shape of an ideal arch.  However, Hooke wrote the theory as an anagram,

abcccddeeeeefggiiiiiiiillmmmmnnnnnooprrsssttttttuuuuuuuux.

Its solution was never published in his lifetime.  What was the secret hiding in this series of letters?


An excerpt from Hooke’s manuscript “A description of helioscopes, and some other instruments”.

The arch is one of the fundamental building blocks of architecture.  Used in bridges, cathedrals, doorways, and more, arches lend an aesthetic quality to the structures they inhabit.  Their key utility comes from their ability to support weight above an empty space, by distributing the load onto the abutments at their feet.  A dome functions much like an arch, except that a dome takes on a three-dimensional shape whereas an arch is two-dimensional.  Paradoxically, while serving as the backbone of many edifices, arches and domes are themselves extremely delicate: a single misplaced component along the curve, or an improper shape in the design, would spell doom for the entire structure.

The Romans employed the rounded arch and dome (in the shape of a semicircle or hemisphere) in their bridges and pantheons.  Gothic architecture favored the pointed arch and the ribbed vault.  However, neither of these arch forms was adequate for the progressively grander structures and more ambitious cathedrals sought in the 17th century.  Following the Great Fire of London in 1666, a massive rebuilding effort was under way.  Among the new public buildings, the most prominent was to be St. Paul’s Cathedral with its signature dome.  A modern theory of arches was sorely needed: what is the perfect shape for an arch or dome?

Christopher Wren, the chief architect of St. Paul’s Cathedral, consulted Hooke on the dome’s design.  To quote from the cathedral’s website [1]:

The two half-sections [of the dome] in the study employ a formula devised by Robert Hooke in about 1671 for calculating the curve of a parabolic dome and reducing its thickness.  Hooke had explored this curve, the three-dimensional equivalent of the ‘hanging chain’, or catenary arch: the shape of a weighted chain which, when inverted, produces the ideal profile for a self-supporting arch.  He thought that such a curve derived from the equation y = x³.

A figure from Wren's design of St. Paul's Cathedral. (Courtesy of the British Museum)


How did Hooke come upon the shape for the dome?  It wasn’t until after Hooke’s death that his executor provided the unencrypted solution to the anagram [2]

Ut pendet continuum flexile, sic stabit contiguum rigidum inversum

which translates to

As hangs a flexible cable so, inverted, stand the touching pieces of an arch.
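The claim can be checked mechanically.  Here is a quick sketch, using footnote [2]’s convention that ‘u’ and ‘v’ count as the same letter in Latin:

```python
from collections import Counter

# Hooke's published anagram (1675)
anagram = "abcccddeeeeefggiiiiiiiillmmmmnnnnnooprrsssttttttuuuuuuuux"

# The solution revealed after his death
solution = "Ut pendet continuum flexile, sic stabit contiguum rigidum inversum"

# Lowercase, identify v with u, and drop spaces and punctuation
letters = [c for c in solution.lower().replace("v", "u") if c.isalpha()]

print(Counter(letters) == Counter(anagram))  # True: a perfect anagram
```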

In other words, the ideal shape of an arch is exactly that of a freely hanging rope, only upside down.  Hooke understood that the building materials could withstand only compression forces and not tensile forces, in direct contrast to a rope that could resist tension but would buckle under compression.  The mathematics describing the arch and the cable are in fact identical, save for a minus sign.  Consequently, you could perform a real-time simulation of an arch using a piece of string!
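For reference (this is a modern gloss, not part of the story): the equation of the hanging chain was worked out only after Hooke’s time, by Leibniz, Huygens, and Johann Bernoulli in 1691.  It is the catenary, not a cubic:

```latex
% Hanging chain (catenary); the constant a sets the scale and equals
% the horizontal tension divided by the weight per unit length
y = a \cosh\frac{x}{a} = \frac{a}{2}\left(e^{x/a} + e^{-x/a}\right)
% The ideal self-supporting arch is the same curve reflected: y \to -y
```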

Bonus:  Hooke published the anagram in his book describing helioscopes, simply to “fill up the vacancy of the ensuing page” [3].  On that very page, among other claims, Hooke also wrote the anagram “ceiiinosssttuu” regarding “the true theory of elasticity”.  Can you solve this riddle?

[1] https://www.stpauls.co.uk/history-collections/the-collections/architectural-archive/wren-office-drawings/5-designs-for-the-dome-c16871708
[2] Written in Latin, the ‘u’ and ‘v’ are the same letter.
[3] In truth, Hooke was likely trying to avoid being scooped by his contemporaries, notably Isaac Newton.

This article was inspired by my visit to the Huntington Library.  I would like to thank Catherine Wehrey for the illustrations and help with the research.

Beware global search and replace!

I’m old enough to remember when cutting and pasting were really done with scissors and glue (or Scotch tape). When I was a graduate student in the late 1970s, few physicists typed their own papers, and if they did they left gaps in the text, to be filled in later with handwritten equations. The gold standard of technical typing was the IBM Correcting Selectric II typewriter. Among its innovations was the correction ribbon, which allowed one to remove a typo with the touch of a key. But it was especially important for scientists that the Selectric could type mathematical characters, including Greek letters.

IBM Selectric typeballs


It wasn’t easy. Many different typeballs were available, to support various fonts and special characters. Typing a displayed equation or in-line equation usually involved swapping back and forth between typeballs to access all the needed symbols. Most physics research groups had staff who knew how to use the IBM Selectric and spent much of their time typing manuscripts.

Though the IBM Selectric was used by many groups, typewriters have unique personalities, as forensic scientists know. I had a friend who claimed he had learned to recognize telltale differences among documents produced by various IBM Selectric machines. That way, whenever he received a referee report, he could identify its place of origin.

Manuscripts did not evolve through 23 typeset versions in those days, as one of my recent papers did. Editing was arduous and frustrating, particularly for a lowly graduate student like me, who needed to beg Blanche to set aside what she was doing for Steve Weinberg and devote a moment or two to working on my paper.

It was tremendously liberating when I learned to use TeX in 1990 and started typing my own papers. (Not LaTeX in those days, but Plain TeX embellished by a macro for formatting.) That was a technological advance that definitely improved my productivity. An earlier generation had felt the same way about the Xerox machine.

But as I was reminded a few days ago, while technological advances can be empowering, they can also be dangerous when used recklessly. I was editing a very long document, and decided to make a change. I had repeatedly used $x$ to denote an n-bit string, and thought it better to use $\vec x$ instead. I was walking through the paper with the replace button, changing each $x$ to $\vec x$ where the change seemed warranted. But I slipped once, and hit the “Replace All” button instead of “Replace.” My computer curtly informed me that it had made the replacement 1011 times. Oops …

This was a revocable error. There must have been a way to undo it (though it was not immediately obvious how). Or I could have closed the file without saving, losing some recent edits but limiting the damage.

But it was late at night and I was tired. I panicked, immediately saving and LaTeXing the file. It was a mess.

Okay, no problem, all I had to do was replace every \vec x with x and everything would be fine. Except that in the original replacement I had neglected to specify “Match Case.” In 264 places $X$ had become $\vec x$, and the new replacement did not restore the capitalization. It took hours to restore every $X$ by hand, and there are probably a few more that I haven’t noticed yet.

Which brings me to the cautionary tale of one of my former graduate students, Robert Navin. Rob’s thesis had two main topics, scattering off vortices and scattering off monopoles. On the night before the thesis due date, Rob made a horrifying discovery. The crux of his analysis of scattering off vortices concerned the singularity structure of a certain analytic function, and the chapter about vortices made many references to the poles of this function. What Rob realized at this late stage is that these singularities are actually branch points, not poles!

What to do? It’s late and you’re tired and your thesis is due in a few hours. Aha! Global search and replace! Rob replaced every occurrence of “pole” in his thesis by “branch point.” Problem solved.

Except … Rob had momentarily forgotten about that chapter on monopoles. Which, when I read the thesis, had been transformed into a chapter on monobranch points. His committee accepted the thesis, but requested some changes …
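Modern tools make Rob’s fix safer.  A word boundary in a regular expression (a sketch in Python; the snippet of “thesis” text is invented for illustration) leaves “monopoles” untouched:

```python
import re

thesis = "The poles of this function matter. See also the chapter on monopoles."

# The naive global replacement from the story:
naive = thesis.replace("pole", "branch point")
# 'monopoles' becomes 'monobranch points' -- the mishap

# \b anchors the match at word boundaries, so 'monopoles' is left alone:
careful = re.sub(r"\bpole(s?)\b", r"branch point\1", thesis)
print(careful)
# The branch points of this function matter. See also the chapter on monopoles.
```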

Rob Navin no longer does physics, but has been very successful in finance. I’m sure he’s more careful now.

Quantum Information meets Quantum Matter

“Quantum Information meets Quantum Matter”: it sounds like the beginning of a perfect romance story. It is probably not the kind that makes an Oscar-winning movie, but it does get many physicists excited, physicists including Bei, Duanlu, Xiao-Gang, and me. In fact, we find the story so compelling that we decided to write a book about it. It all started one day in 2011, when Bei popped the question “Do you want to write a book about it?” during one of our conversations.

This idea quickly sparked enthusiasm among the rest of us, who have all been working in this interdisciplinary area and have witnessed its rising power. In fact, Xiao-Gang had had the same idea of writing such a book for some time. So here we are, four years later, having posted the first version of the book on arXiv last week.  (arXiv link)

The book is a condensed matter book on strongly interacting many-body systems, with a special focus on the emergence of topological order. This is an exciting topic, with new developments every day. We are not trying to cover the whole picture, but rather to present just one perspective on the story: the quantum information perspective. Quantum information ideas, like entanglement, quantum circuits, and quantum codes, are becoming ever more popular in condensed matter studies and have led to many important developments. On the other hand, they are not usually taught in condensed matter courses or covered by condensed matter books. Therefore, we feel that writing a book may help bridge the gap.

We keep the writing self-contained, requiring minimal background in quantum information and condensed matter. The first part introduces concepts in quantum information that are going to be useful in the later study of condensed matter systems. (It is by no means a well-rounded introduction to quantum information and should not be read that way.) The second part moves on to one major topic of condensed matter theory, local Hamiltonians and their ground states, and contains an introduction to the most basic concepts in condensed matter theory, like locality, gap, and universality. The third part then focuses on the emergence of topological order, first presenting a historical and intuitive picture and then building a more systematic approach based on entanglement and quantum circuits. With this framework established, the fourth part studies some interesting topological phases in 1D and 2D, with the help of the tensor network formalism. Finally, the fifth part concludes with an outlook on where this miraculous encounter of quantum information and condensed matter may take us: the unification of information and matter.

We hope that, with such a structure, the book is accessible both to condensed matter students and researchers interested in this quantum information approach and to quantum information people interested in condensed matter topics. Of course, the book is also limited by the perspective we are taking. Compared to a standard condensed matter book, we are missing even the most elementary ingredient: the free fermion. Therefore, this book is not to be read as a standard textbook on condensed matter theory. On the other hand, by presenting a new approach, we hope to bring readers to the frontiers of current research.

The most important thing I want to say here is: this arXiv version is NOT the final version. We posted it so that we can gather feedback from our colleagues. Therefore, it is not yet ready for junior students to read in order to learn the subject. On the other hand, if you are a researcher in a related field, please send us criticisms, comments, suggestions, or whatever comes to your mind. We will be very grateful for that! (One thing we have already learned (thanks, Burak!) is that we forgot to put in the references on conditional mutual information. That will be corrected in a later version, together with everything else.) The final version will be published by Springer as part of their “Quantum Information Science and Technology” series.

I guess it is quite obvious that me writing on the blog of the Institute for Quantum Information and Matter (IQIM) about this book titled “Quantum Information meets Quantum Matter” (QIQM) is not a simple coincidence. The romance story between the two emerged in the past decade or so and has been growing at a rate much beyond expectations. Our book is merely an attempt to record some aspects of the beginning. Let’s see where it will take us.

Kitaev, Moore, Read share Dirac Medal!

Since its founding 30 years ago, the Dirac Medal has been one of the most prestigious honors in theoretical physics. Particle theorists and string theorists have claimed most of the medals, but occasionally other fields break through, as when Haldane, Kane, and Zhang shared the 2012 Dirac Medal for their pioneering work on topological insulators. I was excited to learn today that the 2015 Dirac Medal has been awarded to Alexei Kitaev, Greg Moore, and Nick Read “for their interdisciplinary contributions which introduced  concepts of conformal field theory and non-abelian quasiparticle statistics in condensed matter systems and  applications of these ideas to quantum computation.”

Left to right: Alexei Kitaev, Greg Moore and Nicholas Read.


I have written before about the exciting day in April 1997 when Alesha and I met, and I heard for the first time about the thrilling concept of a topological quantum computer. I’ll take the liberty of drawing a quote from that post, which seems particularly relevant today:

Over coffee at the Red Door Cafe that afternoon, we bonded over our shared admiration for a visionary paper by Greg Moore and Nick Read about non-abelian anyons in fractional quantum Hall systems, though neither of us fully understood the paper (and I still don’t). Maybe, we mused together, non-abelian anyons are not just a theorist’s dream … It was the beginning of a beautiful friendship.

As all physics students know, fundamental particles in three spatial dimensions come in two varieties, bosons and fermions, but in two spatial dimensions more exotic possibilities abound, dubbed “anyons” by Wilczek. Anyons have an exotic spin, a fraction of an electron’s spin, and corresponding exotic statistics — when one anyon is carried around another, their quantum state picks up a nontrivial topological phase. (I had some fun discussions with Frank Wilczek in 1981 as he was developing the theory of anyons. In some of his writings Frank has kindly credited me for suggesting to him that a robust spin-statistics connection should hold in two dimensions, so that fractional spin is necessarily accompanied by fractional statistics. The truth is that my understanding of this point was murky at best back then.) Not long after Wilczek’s paper, Bert Halperin recognized the relevance of anyons to the strange fractional quantum Hall states that had recently been discovered; these support particle-like objects carrying a fraction of the electron’s electric charge, which Halperin recognized to be anyons.

Non-abelian anyons are even more exotic. In a system with many widely separated non-abelian anyons, there are a vast number of different ways for the particles to “fuse” together, giving rise to many possible quantum states, all of which are in principle distinguishable but in practice are hard to tell apart. Furthermore, by “braiding” the anyons (performing a sequence of particle exchanges, so that the world lines of the anyons trace out a braid in three-dimensional spacetime), the collective state can be manipulated, coherently processing the quantum information encoded in the system.
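As a toy illustration (my own, not from the post): the simplest non-abelian example is the “Fibonacci” anyon τ, whose single fusion rule is τ × τ = 1 + τ. The number of ways n such anyons can fuse to the vacuum grows like the Fibonacci numbers, which is why a handful of anyons can encode many qubits’ worth of information:

```python
def vacuum_fusion_channels(n):
    """Number of distinct ways n Fibonacci anyons can fuse to the vacuum.

    Track the total charge (vacuum '1' or anyon 'tau') after fusing in
    one anyon at a time, using the rule tau x tau = 1 + tau.
    """
    one, tau = 0, 1                  # after the first anyon, total charge is tau
    for _ in range(n - 1):
        one, tau = tau, one + tau    # 1 x tau -> tau;  tau x tau -> 1 or tau
    return one

print([vacuum_fusion_channels(n) for n in range(2, 9)])
# [1, 1, 2, 3, 5, 8, 13] -- Fibonacci growth of the fusion space
```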

Others (including me) had mused about non-abelian anyons before Moore and Read came along, but no one had proposed a plausible story for how such exotic objects would arise in a realistic laboratory setting. As collaborators, Moore and Read complemented one another perfectly. Greg was, and is, one of the world’s leading experts on conformal field theory. Nick was, and is, one of the world’s leading experts on the fractional quantum Hall effect. Together, they realized that one of the already known fractional quantum Hall states (at filling factor 5/2) is a good candidate for a topological phase supporting non-abelian anyons. This was an inspired guess, most likely correct, though we still don’t have smoking gun experimental evidence 25 years later. Their paper is a magical and rare combination of mathematical sophistication with brilliant intuition.

Alexei arrived at his ideas about non-abelian anyons coming from a different direction, though I suspect he drew inspiration from the earlier deep contributions of Moore and Read. He was trying to imagine a physical system that could store and process a quantum state reliably. Normally quantum systems are very fragile — just looking at the system alters its state. To prevent a quantum computer from making errors, we need to isolate the information processed by the computer from the environment. A system of non-abelian anyons has just the right properties to make this possible; it carries lots of information, but the environment can’t read (or damage) that information when it looks at the particles one at a time. That’s because the information is not encoded in the individual particles, but instead in subtle collective properties shared by many particles at once.

Alexei and I had inspiring discussions about topological quantum computing when we first met at Caltech in April 1997, which continued at a meeting in Torino, Italy that summer, where we shared a bedroom. I was usually asleep by the time he came to bed, because he was staying up late, typing his paper.

Alexei did not think it important to publish his now renowned 1997 paper in a journal — he was content for the paper to be accessible on the arXiv. But after a few years I started to get worried … in my eyes Alexei was becoming an increasingly likely Nobel Prize candidate. Would it cause a problem if his most famous paper had never been published? Just to be safe, I arranged for it to appear in Annals of Physics in 2003, where I was on the editorial board at the time. Frank Wilczek, then the editor, was delighted by this submission, which has definitely boosted the journal’s impact factor! (“Fault-tolerant quantum computation by anyons” has 2633 citations as of today, according to Google Scholar.) Nobelists are ineligible for the Dirac Medal, but some past medalists have proceeded to greater glory. It could happen again, right?

Alesha and I have now been close friends and collaborators for 18 years, but I have actually known Greg and Nick even longer. I taught at Harvard for a few years in the early 1980s, at a time when an amazingly talented crew of physics graduate students roamed the halls, of whom Andy Cohen, Jacques Distler, Ben Grinstein, David Kaplan, Aneesh Manohar, Ann Nelson, and Phil Nelson among others all made indelible impressions. But there was something special about Greg. The word that comes to mind is intensity. Few students exhibit as much drive and passion for physics as Greg did in those days. He’s calmer now, but still pretty intense. I met Nick a few years later when we tried to recruit him to the Caltech faculty. Luring him to southern California turned out to be a lost cause because he didn’t know how to drive a car. I suppose he’s learned by now?* Whenever I’ve spoken to Nick in the years since then, I’ve always been dazzled by his clarity of thought.

Non-abelian anyons are at a pivotal stage, with lots of experimental hints supporting their existence, but still no ironclad evidence. I feel confident this will change in the next few years. These are exciting times!

And guess what? This occasion gives me another opportunity to dust off one of my poems!

Anyon, Anyon

Anyon, anyon, where do you roam?
Braid for a while before you go home.

Though you’re condemned just to slide on a table,
A life in 2D also means that you’re able
To be of a type neither Fermi nor Bose
And to know left from right — that’s a kick, I suppose.

You and your buddy were made in a pair
Then wandered around, braiding here, braiding there.
You’ll fuse back together when braiding is through
We’ll bid you adieu as you vanish from view.

Alexei exhibits a knack for persuading
That someday we’ll crunch quantum data by braiding,
With quantum states hidden where no one can see,
Protected from damage through top-ology.

Anyon, anyon, where do you roam?
Braid for a while, before you go home.

*Note added: Nick confirms, “Yes, I’ve had a driving license since 1992, and a car since 1994!”

Bits, bears, and beyond in Banff

Another conference about entropy. Another graveyard.

Last year, I blogged about the University of Cambridge cemetery visited by participants in the conference “Eddington and Wheeler: Information and Interaction.” We’d lectured each other about entropy, a quantification of decay, of the march of time. Then we marched to an overgrown graveyard, where scientists who’d lectured about entropy decades earlier were decaying.

This July, I attended the conference “Beyond i.i.d. in information theory.” The acronym “i.i.d.” stands for “independent and identically distributed,” which requires its own explanation. The conference took place at BIRS, the Banff International Research Station, in Canada. Locals pronounce “BIRS” as “burrs,” the spiky plant bits that stick to your socks when you hike. (I had thought that one pronounces “BIRS” as “beers,” over which participants in quantum conferences debate about the Measurement Problem.) Conversations at “Beyond i.i.d.” dinner tables ranged from mathematical identities to the hiking for which most tourists visit Banff to the bears we’d been advised to avoid while hiking. So let me explain the meaning of “i.i.d.” in terms of bear attacks.


The BIRS conference center. Beyond here, there be bears.

Suppose that, every day, exactly one bear attacks you as you hike in Banff. Every day, you have a probability p_1 of facing down a black bear, a probability p_2 of facing down a grizzly, and so on. These probabilities form a distribution {p_i} over the set of possible events (of possible attacks). We call the type of attack that occurs on a given day a random variable. The distribution associated with each day equals the distribution associated with any other day. Hence the variables are identically distributed. The Monday distribution doesn’t affect the Tuesday distribution, and so on, so the variables are independent.

Information theorists quantify the efficiencies with which i.i.d. tasks can be performed. Suppose that your mother expresses concern about your hiking. She asks you to report which bear harassed you on which day. You compress your report into the fewest possible bits, or units of information. Consider the limit as the number of days approaches infinity, called the asymptotic limit. The number of bits required per day approaches a function, called the Shannon entropy H_S, of the distribution:

Number of bits required per day → H_S({p_i}).

The Shannon entropy describes many asymptotic properties of i.i.d. variables. Similarly, the von Neumann entropy H_vN describes many asymptotic properties of i.i.d. quantum states.
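A minimal sketch of this bookkeeping (the three-outcome bear distribution is invented for illustration):

```python
import math

def shannon_entropy(p):
    """Shannon entropy H_S({p_i}) of a probability distribution, in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical daily attack distribution: black bear, grizzly, other
p = [0.5, 0.25, 0.25]
print(shannon_entropy(p))  # 1.5 bits per day, in the asymptotic limit
```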

But you don’t hike for infinitely many days. The rate of black-bear attacks ebbs and flows. If you stumbled into grizzly land on Friday, you’ll probably avoid it, and have a lower grizzly-attack probability, on Saturday. Into how few bits can you compress a set of nonasymptotic, non-i.i.d. variables?

We answer such questions in terms of ɛ-smooth α-Rényi entropies, the sandwiched Rényi relative entropy, the hypothesis-testing entropy, and related beasts. These beasts form a zoo diagrammed by conference participant Philippe Faist. I wish I had his diagram on a placemat.
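To pin down one member of the zoo concretely (a sketch; the ɛ-smoothing is omitted): the α-Rényi entropy generalizes Shannon’s and recovers it in the limit α → 1:

```python
import math

def renyi_entropy(p, alpha):
    """alpha-Renyi entropy of a distribution, in bits, for alpha != 1."""
    return math.log2(sum(x ** alpha for x in p)) / (1 - alpha)

p = [0.5, 0.25, 0.25]
print(renyi_entropy(p, 0.5))       # alpha < 1: max-entropy-like quantities
print(renyi_entropy(p, 2))         # collision entropy
print(renyi_entropy(p, 1 + 1e-9))  # approaches the Shannon entropy, 1.5 bits
```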

Entropy zoo

“Beyond i.i.d.” participants define these entropies, generalize the entropies, probe the entropies’ properties, and apply the entropies to physics. Want to quantify the efficiency with which you can perform an information-processing task or a thermodynamic task? An entropy might hold the key.

Many highlights distinguished the conference; I’ll mention a handful.  If the jargon upsets your stomach, skip three paragraphs to Thermodynamic Thursday.

Aram Harrow introduced a resource theory that resembles entanglement theory but whose agents pay to communicate classically. Why, I interrupted him, define such a theory? The backstory involves a wager against quantum-information pioneer Charlie Bennett (more precisely, against an opinion of Bennett’s). For details, and for a quantum version of The Princess and the Pea, watch Aram’s talk.

Graeme Smith and colleagues “remove[d] the . . . creativity” from proofs that certain entropic quantities satisfy subadditivity. Subadditivity is a property that facilitates proofs and that offers physical insights into applications. Graeme & co. designed an algorithm for checking whether entropic quantity Q satisfies subadditivity. Just add water; no innovation required. How appropriate, conference co-organizer Mark Wilde observed. BIRS has the slogan “Inspiring creativity.”

Patrick Hayden applied one-shot entropies to AdS/CFT and emergent spacetime, enthused about elsewhere on this blog. Debbie Leung discussed approximations to Haar-random unitaries. Gilad Gour compared resource theories.


Conference participants graciously tolerated my talk about thermodynamic resource theories. I closed my eyes to symbolize the ignorance quantified by entropy. Not really; the photo didn’t turn out as well as hoped, despite the photographer’s goodwill. But I could have closed my eyes to symbolize entropic ignorance.

Thermodynamics and resource theories dominated Thursday. Thermodynamics is the physics of heat, work, entropy, and stasis. Resource theories are simple models for transformations, like from a charged battery and a Tesla car at the bottom of a hill to an empty battery and a Tesla atop a hill.


My advisor’s Tesla. No wonder I study thermodynamic resource theories.

Philippe Faist, diagrammer of the Entropy Zoo, compared two models for thermodynamic operations. I introduced a generalization of resource theories for thermodynamics. Last year, Joe Renes of ETH and I broadened thermo resource theories to model exchanges of not only heat, but also particles, angular momentum, and other quantities. We calculated work in terms of the hypothesis-testing entropy. Though our generalization won’t surprise Quantum Frontiers diehards, the magic tricks in my presentation might.

At twilight on Thermodynamic Thursday, I meandered down the mountain from the conference center. Entropies hummed in my mind like the mosquitoes I slapped from my calves. Rising from scratching a bite, I confronted the Banff Cemetery. Half-wild greenery framed the headstones that bordered the gravel path I was following. Thermodynamicists have associated entropy with the passage of time, with deterioration, with a fate we can’t escape. I seem unable to escape from brushing past cemeteries at entropy conferences.

Not that I mind, I thought while scratching the bite in Pasadena. At least I escaped attacks by Banff’s bears.

Cemetery

With thanks to the conference organizers and to BIRS for the opportunity to participate in “Beyond i.i.d. 2015.”