Toward a Coherent US Government Strategy for QIS

In an upbeat recent post, Spiros reported some encouraging news about quantum information science from the US National Science and Technology Council. Today I’ll chime in with some further perspective and background.

The Interagency Working Group on Quantum Information Science (IWG on QIS), which began its work in late 2014, was charged “to assess Federal programs in QIS, monitor the state of the field, provide a forum for interagency coordination and collaboration, and engage in strategic planning of Federal QIS activities and investments.” The IWG recently released a well-crafted report, Advancing Quantum Information Science: National Challenges and Opportunities. The report recommends that “quantum information science be considered a priority for Federal coordination and investment.”

All the major US government agencies supporting QIS were represented on the IWG, which was co-chaired by officials from DOE, NSF, and NIST:

  • Steve Binkley, who heads the Advanced Scientific Computing Research (ASCR) program in the Department of Energy Office of Science,
  • Denise Caldwell, who directs the Physics Division of the National Science Foundation,
  • Carl Williams, Deputy Director of the Physical Measurement Laboratory at the National Institute of Standards and Technology.

Denise and Carl have been effective supporters of QIS over many years of government service. Steve has recently emerged as another eloquent advocate for the field’s promise and importance.

At our request, the three co-chairs fielded questions about the report, with the understanding that their responses would be broadly disseminated. Their comments reinforced the message of the report — that all cognizant agencies favor a “coherent, all-of-government approach to QIS.”

Science funding in the US differs from that in much of the rest of the world. QIS is a prime example — for over 20 years, various US government agencies, each with its own mission, goals, and culture, have had a stake in QIS research. By providing more options for supporting innovative ideas, the existence of diverse autonomous funding agencies can be a blessing. But it can also be bewildering for scientists seeking support, and it poses challenges for formulating and executing effective national science policy. It’s significant that many different agencies worked together in the IWG, and were able to align with a shared vision.

“I think that everybody in the group has the same goals,” Denise told us. “The nation has a tremendous opportunity here. This is a terrifically important field for all of us involved, and we all want to see it succeed.” Carl added, “All of us believe that this is an area in which the US must be competitive, it is very important for both scientific and technological reasons … The differences [among agencies] are minor.”

Asked about the timing of the IWG and its report, Carl noted the recent trend toward “emerging niche applications” of QIS such as quantum sensors, and Denise remarked that government agencies are responding to a plea from industry for a cross-disciplinary work force broadly trained in QIS. At the same time, Denise emphasized, the IWG recognizes that “there are still many open basic science questions that are important for this field, and we need to focus investment onto these basic science questions, as well as look at investments or opportunities that lead into the first applications.”

DOE’s FY2017 budget request includes $10M to fund a new QIS research program, coordinated with NIST and NSF. Steve explained the thinking behind that request: “There are problems in the physical science space, spanned by DOE Office of Science programs, where quantum computation would be a useful tool. This is the time to start making investments in that area.” Asked about the longer term commitment of DOE to QIS research, Steve was cautious. “What it will grow into over time is hard to tell — we’re right at the beginning.”

What can the rest of us in the QIS community do to amplify the impact of the report? Carl advised: “All of us should continue getting the excitement of the field out there, [and point to] the potential long term payoffs, whether they be in searches for dark matter or building better clocks or better GPS systems or better sensors. Making everybody aware of all the potential is good for our economy, for our country, and for all of us.”

Taking an even longer view, Denise reminded us that effective advocacy for QIS can get young people “excited about a field they can work in, where they can get jobs, where they can pursue science — that can be critically important. If we all think back to our own beginning careers, at some point in time we got excited about science. And so whatever one can do to excite the next generation about science and technology, with the hope of bringing them into studying and developing careers in this field, to me this is tremendously valuable.”

All of us in the quantum information science community owe a debt to the IWG for their hard work and eloquent report, and to the agencies they represent for their vision and support. And we are all fortunate to be participating in the early stages of a new quantum revolution. As the IWG report makes clear, the best is yet to come.

LIGO: Playing the long game, and winning big!

Wow. What a day! And what a story!

Kip Thorne in 1972, around the time MTW was completed.

It is hard for me to believe, but I have been on the Caltech faculty for nearly a third of a century. And when I arrived in 1983, interferometric detection of gravitational waves was already a hot topic of discussion here. At Kip Thorne’s urging, Ron Drever had been recruited to Caltech and was building the 40-meter prototype interferometer (which is still operating as a testbed for future detection technologies). Kip and his colleagues, spurred by Vladimir Braginsky’s insights, had for several years been actively studying the fundamental limits of quantum measurement precision, and how these might impact the search for gravitational waves.

I decided to bone up a bit on the subject, so naturally I pulled down from my shelf the “telephone book” — Misner, Thorne, and Wheeler’s mammoth Gravitation — and browsed Chapter 37 (Detection of Gravitational Waves), for which Kip had been the lead author. The chapter brimmed over with enthusiasm for the subject, but to my surprise interferometers were hardly mentioned. Instead the emphasis was on mechanical bar detectors. These had been pioneered by Joseph Weber, whose efforts in the 1960s had first aroused Kip’s interest in detecting gravitational waves, and by Braginsky.

I sought Kip out for an explanation, and with characteristic clarity and patience he told how his views had evolved. He had realized in the 1970s that a strain sensitivity of order 10^{-21} would be needed for a good chance at detection, and after many discussions with colleagues like Drever, Braginsky, and Rai Weiss, he had decided that kind of sensitivity would not be achievable with foreseeable technology using bars.

Ron Drever, who built Caltech’s 40-meter prototype interferometer in the 1980s.

We talked about what would be needed — a kilometer scale detector capable of sensing displacements of 10^{-18} meters. I laughed. As he had many times by then, Kip explained why this goal was not completely crazy: if enough light is stored in the interferometer, bouncing back and forth many times as a waveform passes, such a tiny displacement becomes detectable. Immediately after the discussion ended I went to my desk and did some crude calculations. The numbers kind of worked, but I shook my head, unconvinced. This was going to be a huge undertaking. Success seemed unlikely. Poor Kip!
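
For the record, here is the arithmetic behind that conversation (my reconstruction of the crude calculation, not Kip’s notes): strain is a fractional change in length, so a strain sensitivity of 10^{-21} over an arm of kilometer scale corresponds to a displacement

    \Delta L = h \, L \approx 10^{-21} \times 10^{3}~{\rm m} = 10^{-18}~{\rm m},

about one-thousandth the diameter of a proton. No wonder I laughed.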

I’ve never been involved in LIGO, but Kip and I remained friends, and every now and then he would give me the inside scoop on the latest developments (most memorably while walking the streets of London for hours on a beautiful spring evening in 1991). From afar I followed the forced partnership between Caltech and MIT that was forged in the 1980s, and the painful transition from a small project under the leadership of Drever-Thorne-Weiss (great scientists but lacking much needed management expertise) to a large collaboration under a succession of strong leaders, all based at Caltech.

Vladimir Braginsky, who realized that quantum effects limit the sensitivity of gravitational wave detectors.

During 1994-95, I co-chaired a committee formulating a long-range plan for Caltech physics, and we spent more time talking about LIGO than any other issue. Part of our concern was whether a small institution like Caltech could absorb such a large project, which was growing explosively and straining Institute resources. And we also worried about whether LIGO would ultimately succeed. But our biggest worry of all was different — could Caltech remain at the forefront of gravitational wave research so that if and when LIGO hit paydirt we would reap the scientific benefits?

A lot has changed since then. After searching for years we made two crucial new faculty appointments: theorist Yanbei Chen (2007), who provided seminal ideas for improving sensitivity, and experimentalist Rana Adhikari (2006), a magician at the black art of making an interferometer really work. Alan Weinstein transitioned from high energy physics to become a leader of LIGO data analysis. We established a world-class numerical relativity group, now led by Mark Scheel. Staff scientists like Stan Whitcomb also had an essential role, as did longtime Project Manager Gary Sanders. LIGO Directors Robbie Vogt, Barry Barish, Jay Marx, and now Dave Reitze have provided effective and much needed leadership.

Rai Weiss, around the time he conceived LIGO in an amazing 1972 paper.

My closest connection to LIGO arose during the 1998-99 academic year, when Kip asked me to participate in a “QND reading group” he organized. (QND stands for Quantum Non-Demolition, Braginsky’s term for measurements that surpass the naïve quantum limits on measurement precision.) At that time we envisioned that Advanced LIGO would turn on in 2008, yet there were still many questions about how it would achieve the sensitivity required to ensure detection. I took part enthusiastically, and learned a lot, but never contributed any ideas of enduring value. The discussions that year did have positive outcomes, however, leading for example to a seminal paper by Kimble, Levin, Matsko, Thorne, and Vyatchanin on improving precision through squeezing of light. By the end of the year I had gained a much better appreciation of the strength of the LIGO team, and had accepted that Advanced LIGO might actually work!

I once asked Vladimir Braginsky why he spent years working on bar detectors for gravitational waves, while at the same time realizing that fundamental limits on quantum measurement would make successful detection very unlikely. Why wasn’t he trying to build an interferometer already in the 1970s? Braginsky loved to be asked questions like this, and his answer was a long story, told with many dramatic flourishes. The short answer is that he viewed interferometric detection of gravitational waves as too ambitious. A bar detector was something he could build in his lab, while an interferometer of the appropriate scale would be a long-term project involving a much larger, technically diverse team.

Joe Weber, whose audacious belief that gravitational waves are detectable on earth inspired Kip Thorne and many others.

Kip’s chapter in MTW ends with section 37.10 (“Looking toward the future”), which concludes with this juicy quote (written almost 45 years ago):

“The technical difficulties to be surmounted in constructing such detectors are enormous. But physicists are ingenious; and with the impetus provided by Joseph Weber’s pioneering work, and with the support of a broad lay public sincerely interested in pioneering in science, all obstacles will surely be overcome.”

That’s what we call vision, folks. You might also call it cockeyed optimism, but without optimism great things would never happen.

Optimism alone is not enough. For something like the detection of gravitational waves, we needed technical ingenuity, wise leadership, lots and lots of persistence, the will to overcome adversity, and ultimately the efforts of hundreds of hard working, talented scientists and engineers. Not to mention the courage displayed by the National Science Foundation in supporting such a risky project for decades.

I have never been prouder than I am today to be part of the Caltech family.

Wouldn’t you like to know what’s going on in my mind?

I suppose most theoretical physicists who (like me) are comfortably past the age of 60 worry about their susceptibility to “crazy-old-guy syndrome.” (Sorry for the sexism, but all the victims of this malady I know are guys.) It can be sad when a formerly great scientist falls far out of the mainstream and seems to be spouting nonsense.

Matthew Fisher is only 55, but reluctance to be seen as a crazy old guy might partially explain why he has kept pretty quiet about his passionate pursuit of neuroscience over the past three years. That changed two months ago when he posted a paper on the arXiv about Quantum Cognition.

Neuroscience has a very seductive pull, because it is at once very accessible and very inaccessible. While a theoretical physicist might think and write about a brane even without having or seeing a brane, everybody’s got a brain (some scarecrows excepted). On the other hand, while it’s not too hard to write down and study the equations that describe a brane, it is not at all easy to write down the equations for a brain, let alone solve them. The brain is fascinating because we know so little about it. And … how can anyone with a healthy appreciation for Gödel’s Theorem not be intrigued by the very idea of a brain that thinks about itself?

(Almost) everybody’s got a brain.

The idea that quantum effects could have an important role in brain function is not new, but is routinely dismissed as wildly implausible. Matthew Fisher begs to differ. And those who read his paper (as I hope many will) are bound to conclude: This old guy’s not so crazy. He may be onto something. At least he’s raising some very interesting questions.

My appreciation for Matthew and his paper was heightened further this Wednesday, when Matthew stopped by Caltech for a lunch-time seminar and one of my interminable dinner-time group meetings. I don’t know whether my brain is performing quantum information processing (and neither does Matthew), but just the thought that it might be is lighting me up like a zebrafish.

Following Matthew, let’s take a deep breath and ask ourselves: What would need to be true for quantum information processing to be important in the brain? Presumably we would need ways to (1) store quantum information for a long time, (2) transport quantum information, (3) create entanglement, and (4) have entanglement influence the firing of neurons. After a three-year quest, Matthew has interesting things to say about all of these issues. For details, you should read the paper.

Matthew argues that the only plausible repositories for quantum information in the brain are the Phosphorus-31 nuclear spins in phosphate ions. Because these nuclei are spin-1/2, they have no electric quadrupole moments, and hence correspondingly long coherence times — of order a second. That may not be long enough, but phosphate ions can be bound with calcium ions into objects called Posner clusters, each containing six P-31 nuclei. The phosphorus nuclei in Posner clusters might have coherence times greatly enhanced by motional narrowing, perhaps as long as weeks or even longer.
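
To spell out the logic (standard nuclear physics, not anything specific to Matthew’s paper): for nuclear spins in a wet, noisy environment, the dominant decoherence mechanism is usually the coupling of the nuclear electric quadrupole moment to fluctuating electric field gradients. The quadrupole interaction is a rank-2 tensor in spin space, and by the Wigner-Eckart theorem the matrix elements of a rank-k tensor within a spin-I multiplet vanish unless k \le 2I; hence

    \langle I, m' | T^{(2)}_q | I, m \rangle = 0 \quad {\rm for}\ I = 1/2.

A spin-1/2 nucleus like P-31 simply cannot couple to electric field gradients, leaving only much weaker magnetic couplings to cause decoherence.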

Where energy is being consumed in a cell, ATP sometimes releases diphosphate ions (what biochemists call pyrophosphate), which are later broken into two separate phosphate ions, each with a single P-31 qubit. Matthew argues that the breakup of the diphosphate, catalyzed by a suitable enzyme, will occur at an enhanced rate when these two P-31 qubits are in a spin singlet rather than a spin triplet. The reason is that the enzyme has to grab ahold of the diphosphate molecule and stop its rotation in order to break it apart, which is much easier when the molecule has even rather than odd orbital angular momentum; therefore due to Fermi statistics the spin state of the P-31 nuclei must be antisymmetric. Thus wherever ATP is consumed there is a plentiful source of entangled qubit pairs.
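
To spell out the statistics argument: the two P-31 nuclei are identical spin-1/2 fermions, so the total wavefunction, a product of orbital and spin factors, must change sign when they are exchanged:

    \Psi = \psi_{\rm orbital} \otimes \chi_{\rm spin} \ \to\ -\Psi \quad {\rm under\ exchange}.

Even orbital angular momentum L is symmetric under exchange (a factor (-1)^L = +1), so the spin factor must be antisymmetric, which for two spin-1/2 particles means the singlet (|\!\uparrow\downarrow\rangle - |\!\downarrow\uparrow\rangle)/\sqrt{2}.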

If the phosphate molecules remain unbound, this entanglement will decay in about a second, but it is a different story if the phosphate ions group together quickly enough into Posner clusters, allowing the entanglement to survive for a much longer time. If the two members of an entangled qubit pair are snatched up by different Posner clusters, the clusters may then be transported into different cells, distributing the entanglement over relatively long distances.

(a) Two entangled Posner clusters. Each dot is a P-31 nuclear spin, and each dashed line represents a singlet pair. (b) Many entangled Posner clusters. [From Fisher 2015]

What causes a neuron to fire is a complicated story that I won’t attempt to wade into. Suffice it to say that part of the story may involve the chemical binding of a pair of Posner clusters which then melt if the environment is sufficiently acidic, releasing calcium ions and phosphate ions which enhance the firing. The melting rate depends on the spin state of the six P-31 nuclei within the cluster, so that entanglement between clusters in different cells may induce nonlocal correlations among different neurons, which could be quite complex if entanglement is widely distributed.

This scenario raises more questions than it answers, but these are definitely scientific questions inviting further investigation and experimental exploration. One thing that is far from clear at this stage is whether such quantum correlations among neurons (if they exist at all) would be easy to simulate with a classical computer. Even if that turns out to be so, these potential quantum effects involving many neurons could be fabulously interesting. IQIM’s mission is to reach for transformative quantum science, particularly approaches that take advantage of synergies between different fields of study. This topic certainly qualifies.* It’s going to be great fun to see where it leads.

If you are a young and ambitious scientist, you may be contemplating the dilemma: Should I pursue quantum physics or neuroscience? Maybe, just maybe, the right answer is: Both.

*Matthew is the only member of the IQIM faculty who is not a Caltech professor, though he once was.

Beware global search and replace!

I’m old enough to remember when cutting and pasting were really done with scissors and glue (or Scotch tape). When I was a graduate student in the late 1970s, few physicists typed their own papers, and if they did they left gaps in the text, to be filled in later with handwritten equations. The gold standard of technical typing was the IBM Correcting Selectric II typewriter. Among its innovations was the correction ribbon, which allowed one to remove a typo with the touch of a key. But it was especially important for scientists that the Selectric could type mathematical characters, including Greek letters.

IBM Selectric typeballs

It wasn’t easy. Many different typeballs were available, to support various fonts and special characters. Typing a displayed equation or in-line equation usually involved swapping back and forth between typeballs to access all the needed symbols. Most physics research groups had staff who knew how to use the IBM Selectric and spent much of their time typing manuscripts.

Though the IBM Selectric was used by many groups, typewriters have unique personalities, as forensic scientists know. I had a friend who claimed he had learned to recognize telltale differences among documents produced by various IBM Selectric machines. That way, whenever he received a referee report, he could identify its place of origin.

Manuscripts did not evolve through 23 typeset versions in those days, as one of my recent papers did. Editing was arduous and frustrating, particularly for a lowly graduate student like me, who needed to beg Blanche to set aside what she was doing for Steve Weinberg and devote a moment or two to working on my paper.

It was tremendously liberating when I learned to use TeX in 1990 and started typing my own papers. (Not LaTeX in those days, but Plain TeX embellished by a macro for formatting.) That was a technological advance that definitely improved my productivity. An earlier generation had felt the same way about the Xerox machine.

But as I was reminded a few days ago, while technological advances can be empowering, they can also be dangerous when used recklessly. I was editing a very long document, and decided to make a change. I had repeatedly used $x$ to denote an n-bit string, and thought it better to use $\vec x$ instead. I was walking through the paper with the replace button, changing each $x$ to $\vec x$ where the change seemed warranted. But I slipped once, and hit the “Replace All” button instead of “Replace.” My computer curtly informed me that it had made the replacement 1011 times. Oops …

This was a revocable error. There must have been a way to undo it (though it was not immediately obvious how). Or I could have closed the file without saving, losing some recent edits but limiting the damage.

But it was late at night and I was tired. I panicked, immediately saving and LaTeXing the file. It was a mess.

Okay, no problem, all I had to do was replace every $\vec x$ with $x$ and everything would be fine. Except that in the original replacement I had neglected to specify “Match Case.” In 264 places $X$ had become $\vec x$, and the new replacement did not restore the capitalization. It took hours to restore every $X$ by hand, and there are probably a few more that I haven’t noticed yet.
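
If you are tempted to script your way out of a mess like this, here is a minimal sketch in Python of both the trap and the escape (hypothetical, of course; my editor’s replace dialog is not a Python program):

    import re

    tex = r"Let $x$ be an $n$-bit string, and let $X$ be a random variable."

    # The careless version: a case-insensitive replace-all clobbers $X$ too,
    # just as forgetting "Match Case" did.
    careless = re.sub(r"\$x\$", r"$\\vec x$", tex, flags=re.IGNORECASE)

    # The careful version: case-sensitive, so capital $X$ survives.
    careful = re.sub(r"\$x\$", r"$\\vec x$", tex)

    print(careless)  # ... $\vec x$ ... $\vec x$ ...  (capitalization lost)
    print(careful)   # ... $\vec x$ ... $X$ ...       (capitalization preserved)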

Which brings me to the cautionary tale of one of my former graduate students, Robert Navin. Rob’s thesis had two main topics, scattering off vortices and scattering off monopoles. On the night before the thesis due date, Rob made a horrifying discovery. The crux of his analysis of scattering off vortices concerned the singularity structure of a certain analytic function, and the chapter about vortices made many references to the poles of this function. What Rob realized at this late stage is that these singularities are actually branch points, not poles!

What to do? It’s late and you’re tired and your thesis is due in a few hours. Aha! Global search and replace! Rob replaced every occurrence of “pole” in his thesis by “branch point.” Problem solved.

Except … Rob had momentarily forgotten about that chapter on monopoles. Which, when I read the thesis, had been transformed into a chapter on monobranch points. His committee accepted the thesis, but requested some changes …

Rob Navin no longer does physics, but has been very successful in finance. I’m sure he’s more careful now.

Kitaev, Moore, Read share Dirac Medal!

Since its founding 30 years ago, the Dirac Medal has been one of the most prestigious honors in theoretical physics. Particle theorists and string theorists have claimed most of the medals, but occasionally other fields break through, as when Haldane, Kane, and Zhang shared the 2012 Dirac Medal for their pioneering work on topological insulators. I was excited to learn today that the 2015 Dirac Medal has been awarded to Alexei Kitaev, Greg Moore, and Nick Read “for their interdisciplinary contributions which introduced concepts of conformal field theory and non-abelian quasiparticle statistics in condensed matter systems and applications of these ideas to quantum computation.”

Left to right: Alexei Kitaev, Greg Moore, and Nick Read.

I have written before about the exciting day in April 1997 when Alesha and I met, and I heard for the first time about the thrilling concept of a topological quantum computer. I’ll take the liberty of drawing a quote from that post, which seems particularly relevant today:

Over coffee at the Red Door Cafe that afternoon, we bonded over our shared admiration for a visionary paper by Greg Moore and Nick Read about non-abelian anyons in fractional quantum Hall systems, though neither of us fully understood the paper (and I still don’t). Maybe, we mused together, non-abelian anyons are not just a theorist’s dream … It was the beginning of a beautiful friendship.

As all physics students know, fundamental particles in three spatial dimensions come in two varieties, bosons and fermions, but in two spatial dimensions more exotic possibilities abound, dubbed “anyons” by Wilczek. Anyons have an exotic spin, a fraction of an electron’s spin, and corresponding exotic statistics — when one anyon is carried around another, their quantum state picks up a nontrivial topological phase. (I had some fun discussions with Frank Wilczek in 1981 as he was developing the theory of anyons. In some of his writings Frank has kindly credited me for suggesting to him that a robust spin-statistics connection should hold in two dimensions, so that fractional spin is necessarily accompanied by fractional statistics. The truth is that my understanding of this point was murky at best back then.) Not long after Wilczek’s paper, Bert Halperin recognized the relevance of anyons to the strange fractional quantum Hall states that had recently been discovered; these support particle-like objects carrying a fraction of the electron’s electric charge, which Halperin recognized to be anyons.
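
In symbols, and in textbook language rather than Frank’s original notation: when two identical abelian anyons are exchanged, the wavefunction acquires a phase

    \psi \ \to\ e^{i\theta}\, \psi,

where \theta = 0 for bosons and \theta = \pi for fermions, while in two dimensions any value of \theta in between is allowed; carrying one anyon all the way around another produces the square of the exchange phase, e^{2i\theta}.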

Non-abelian anyons are even more exotic. In a system with many widely separated non-abelian anyons, there are a vast number of different ways for the particles to “fuse” together, giving rise to many possible quantum states, all of which are in principle distinguishable but in practice are hard to tell apart. Furthermore, by “braiding” the anyons (performing a sequence of particle exchanges, so the world lines of the anyons trace out a braid in three-dimensional spacetime), this state can be manipulated, coherently processing the quantum information encoded in the system.
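
Schematically, a collection of non-abelian anyons spans a degenerate space of states, and each braiding operation acts on that space as a unitary matrix; the matrices for different exchanges need not commute, which is what “non-abelian” means:

    |\psi\rangle \ \to\ U(\sigma)\, |\psi\rangle, \qquad U(\sigma_1)\, U(\sigma_2) \neq U(\sigma_2)\, U(\sigma_1).

For the Ising-type anyons relevant to the Moore-Read state, 2n anyons with fixed total charge span a space of dimension 2^{n-1}, so the amount of hidden information grows exponentially with the number of particles.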

Others (including me) had mused about non-abelian anyons before Moore and Read came along, but no one had proposed a plausible story for how such exotic objects would arise in a realistic laboratory setting. As collaborators, Moore and Read complemented one another perfectly. Greg was, and is, one of the world’s leading experts on conformal field theory. Nick was, and is, one of the world’s leading experts on the fractional quantum Hall effect. Together, they realized that one of the already known fractional quantum Hall states (at filling factor 5/2) is a good candidate for a topological phase supporting non-abelian anyons. This was an inspired guess, most likely correct, though we still don’t have smoking gun experimental evidence 25 years later. Their paper is a magical and rare combination of mathematical sophistication with brilliant intuition.

Alexei arrived at his ideas about non-abelian anyons coming from a different direction, though I suspect he drew inspiration from the earlier deep contributions of Moore and Read. He was trying to imagine a physical system that could store and process a quantum state reliably. Normally quantum systems are very fragile — just looking at the system alters its state. To prevent a quantum computer from making errors, we need to isolate the information processed by the computer from the environment. A system of non-abelian anyons has just the right properties to make this possible; it carries lots of information, but the environment can’t read (or damage) that information when it looks at the particles one at a time. That’s because the information is not encoded in the individual particles, but instead in subtle collective properties shared by many particles at once.

Alexei and I had inspiring discussions about topological quantum computing when we first met at Caltech in April 1997, which continued at a meeting in Torino, Italy that summer, where we shared a bedroom. I was usually asleep by the time he came to bed, because he was staying up late, typing his paper.

Alexei did not think it important to publish his now renowned 1997 paper in a journal — he was content for the paper to be accessible on the arXiv. But after a few years I started to get worried … in my eyes Alexei was becoming an increasingly likely Nobel Prize candidate. Would it cause a problem if his most famous paper had never been published? Just to be safe, I arranged for it to appear in Annals of Physics in 2003, where I was on the editorial board at the time. Frank Wilczek, then the editor, was delighted by this submission, which has definitely boosted the journal’s impact factor! (“Fault-tolerant quantum computation by anyons” has 2633 citations as of today, according to Google Scholar.) Nobelists are ineligible for the Dirac Medal, but some past medalists have proceeded to greater glory. It could happen again, right?

Alesha and I have now been close friends and collaborators for 18 years, but I have actually known Greg and Nick even longer. I taught at Harvard for a few years in the early 1980s, at a time when an amazingly talented crew of physics graduate students roamed the halls, of whom Andy Cohen, Jacques Distler, Ben Grinstein, David Kaplan, Aneesh Manohar, Ann Nelson, and Phil Nelson among others all made indelible impressions. But there was something special about Greg. The word that comes to mind is intensity. Few students exhibit as much drive and passion for physics as Greg did in those days. He’s calmer now, but still pretty intense. I met Nick a few years later when we tried to recruit him to the Caltech faculty. Luring him to southern California turned out to be a lost cause because he didn’t know how to drive a car. I suppose he’s learned by now?* Whenever I’ve spoken to Nick in the years since then, I’ve always been dazzled by his clarity of thought.

Non-abelian anyons are at a pivotal stage, with lots of experimental hints supporting their existence, but still no ironclad evidence. I feel confident this will change in the next few years. These are exciting times!

And guess what? This occasion gives me another opportunity to dust off one of my poems!

Anyon, Anyon

Anyon, anyon, where do you roam?
Braid for a while before you go home.

Though you’re condemned just to slide on a table,
A life in 2D also means that you’re able
To be of a type neither Fermi nor Bose
And to know left from right — that’s a kick, I suppose.

You and your buddy were made in a pair
Then wandered around, braiding here, braiding there.
You’ll fuse back together when braiding is through
We’ll bid you adieu as you vanish from view.

Alexei exhibits a knack for persuading
That someday we’ll crunch quantum data by braiding,
With quantum states hidden where no one can see,
Protected from damage through top-ology.

Anyon, anyon, where do you roam?
Braid for a while, before you go home.

*Note added: Nick confirms, “Yes, I’ve had a driving license since 1992, and a car since 1994!”

20 years of qubits: the arXiv data

Editor’s Note: The preceding post on Quantum Frontiers inspired the always curious Paul Ginsparg to do some homework on usage of the word “qubit” in papers posted on the arXiv. Rather than paraphrase Paul’s observations I will quote his email verbatim, so you can experience its Ginspargian style.

fig has total # uses of qubit in arxiv (divided by 10) per month, and
total # docs per month:
an impressive 669394 total in 29587 docs.

the graph starts at 9412 (dec '94), but that is illusory since qubit
only shows up in v2 of hep-th/9412048, posted in 2004.
the actual first was quant-ph/9503016 by bennett/divicenzo/shor et al
(posted 23 Mar '95) where they carefully attribute the term to
schumacher ("PRA, to appear '95") and jozsa/schumacher ("J. Mod Optics
'94"), followed immediately by quant-ph/9503017 by deutsch/jozsa et al
(which no longer finds it necessary to attribute term)

[neither of schumacher's first two articles is on arxiv, but otherwise
probably have on arxiv near 100% coverage of its usage and growth, so
permits a viral epidemic analysis along the lines of kaiser's "drawing
theories apart"  of use of Feynman diagrams in post wwII period].

ever late to the party, the first use by j.preskill was
quant-ph/9602016, posted 21 Feb 1996

#articles by primary subject area as follows (hep-th is surprisingly
low given the firewall connection...):

quant-ph 22096
cond-mat.mes-hall 3350
cond-mat.supr-con 880
cond-mat.str-el 376
cond-mat.mtrl-sci 250
math-ph 244
hep-th 228
physics.atom-ph 224
cond-mat.stat-mech 213
cond-mat.other 200
physics.optics 177
cond-mat.quant-gas 152
physics.gen-ph 120
gr-qc 105
cond-mat 91
cs.CC 85
cs.IT 67
cond-mat.dis-nn 55
cs.LO 49
cs.CR 43
physics.chem-ph 33
cs.ET 25
physics.ins-det 21
math.CO,nlin.CD 20
physics.hist-ph,physics.bio-ph,math.OC 19
hep-ph 18
cond-mat.soft,cs.DS,math.OA 17
cs.NE,cs.PL,math.QA 13
cs.AR,cs.OH 12
physics.comp-ph 11
math.LO 10
physics.soc-ph,physics.ed-ph,cs.AI 9
math.ST,physics.pop-ph,cs.GT 8
nlin.AO,astro-ph,cs.DC,cs.FL,q-bio.GN 7
nlin.PS,math.FA,cs.NI,math.PR,q-bio.NC,physics.class-ph,math.GM,
physics.data-an 6
nlin.SI,math.CT,q-fin.GN,cs.LG,q-bio.BM,cs.DM,math.GT 5
math.DS,physics.atm-clus,q-bio.PE 4
math.DG,math.CA,nucl-th,q-bio.MN,math.HO,stat.ME,cs.MS,q-bio.QM,
math.RA,math.AG,astro-ph.IM,q-bio.OT 3
stat.AP,cs.CV,math.SG,cs.SI,cs.SE,cs.SC,cs.DB,stat.ML,physics.med-ph,
math.RT 2
cs.CL,cs.CE,q-fin.RM,chao-dyn,astro-ph.CO,q-fin.ST,math.NA,
cs.SY,math.MG,physics.plasm-ph,hep-lat,math.GR,cs.MM,cs.PF,math.AC,
nucl-ex 1

Who named the qubit?

Perhaps because my 40th wedding anniversary is just a few weeks away, I have been thinking about anniversaries lately, which reminded me that we are celebrating the 20th anniversary of a number of milestones in quantum information science. In 1995 Cirac and Zoller proposed, and Wineland’s group first demonstrated, the ion trap quantum computer. Quantum error-correcting codes were invented by Shor and Steane, entanglement concentration and purification were described by Bennett et al., and there were many other fast-breaking developments. It was an exciting year.

But the event that moved me to write a blog post is the 1995 appearance of the word “qubit” in an American Physical Society journal. When I was a boy, two-level quantum systems were called “two-level quantum systems.” Which is a descriptive name, but a mouthful and far from euphonious. Think of all the time I’ve saved in the past 20 years by saying “qubit” instead of “two-level quantum system.” And saying “qubit” not only saves time, it also conveys the powerful insight that a quantum state encodes a novel type of information. (Alas, the spelling was bound to stir controversy, with the estimable David Mermin a passionate holdout for “qbit”. Give it up, David, you lost.)

Ben Schumacher. Thanks for the qubits, Ben!

For the word “qubit” we know whom to thank: Ben Schumacher. He introduced the word in his paper “Quantum Coding” which appeared in the April 1995 issue of Physical Review A. (History is complicated, and in this case the paper was actually submitted in 1993, which allowed another paper by Jozsa and Schumacher to be published earlier even though it was written and submitted later. But I’m celebrating the 20th anniversary of the qubit now, because otherwise how will I justify this blog post?)

In the acknowledgments of the paper, Ben provided some helpful background on the origin of the word:

The term “qubit” was coined in jest during one of the author’s many intriguing and valuable conversations with W. K. Wootters, and became the initial impetus for this work.

I met Ben (and other luminaries of quantum information theory) for the first time at a summer school in Torino, Italy in 1996. After reading his papers my expectations were high, all the more so after Sam Braunstein warned me that I would be impressed: “Ben’s a good talker,” Sam assured me. I was not disappointed.

(I also met Asher Peres at that Torino meeting. When I introduced myself Asher asked, “Isn’t there someone with a similar name in particle theory?” I had no choice but to come clean. I particularly remember that conversation because Asher told me his secret motivation for studying quantum entanglement: it might be important in quantum gravity!)

A few years later Ben spent his sabbatical year at Caltech, which gave me an opportunity to compose a poem for the introduction to Ben’s (characteristically brilliant) talk at our physics colloquium. This poem does homage to that famous 1995 paper in which Ben not only introduced the word “qubit” but also explained how to compress a quantum state to the minimal number of qubits from which the original state can be recovered with a negligible loss of fidelity, thus formulating and proving the quantum version of Shannon’s famous source coding theorem, and laying the foundation for many subsequent developments in quantum information theory.
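
In symbols, the content of that theorem: n copies of a quantum source described by a density operator \rho can be compressed into roughly n S(\rho) qubits, where

    S(\rho) = - {\rm Tr}\, \rho \log_2 \rho

is the von Neumann entropy, with the fidelity of recovery approaching 1 as n \to \infty, in precise analogy to Shannon’s classical compression rate H(X).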

Sometimes when I recite a poem I can sense the audience’s appreciation. But in this case there were only a few nervous titters. I was going for edgy but might have crossed the line into bizarre. Since then I’ve (usually) tried to be more careful.

(For reading the poem, it helps to know that the quantum state appears to be random when it has been compressed as much as possible.)

On Quantum Compression (in honor of Ben Schumacher)

Ben.
He rocks.
I remember
When
He showed me how to fit
A qubit
In a small box.

I wonder how it feels
To be compressed.
And then to pass
A fidelity test.

Or does it feel
At all, and if it does
Would I squeal
Or be just as I was?

If not undone
I’d become as I’d begun
And write a memorandum
On being random.
Had it felt like a belt
Of rum?

And might it be predicted
That I’d become addicted,
Longing for my session
Of compression?

I’d crawl
To Ben again.
And call,
“Put down your pen!
Don’t stall!
Make me small!”

Celebrating Theoretical Physics at Caltech’s Burke Institute

Editor’s Note: Yesterday and today, Caltech is celebrating the inauguration of the Walter Burke Institute for Theoretical Physics. John Preskill made the following remarks at a dinner last night honoring the board of the Sherman Fairchild Foundation.

This is an exciting night for me and all of us at Caltech. Tonight we celebrate physics. Especially theoretical physics. And in particular the Walter Burke Institute for Theoretical Physics.

Some of our dinner guests are theoretical physicists. Why do we do what we do?

I don’t have to convince this crowd that physics has a profound impact on society. You all know that. We’re celebrating this year the 100th anniversary of general relativity, which transformed how we think about space and time. It may be less well known that two years later Einstein laid the foundations of laser science. Einstein was a genius for sure, but I don’t think he envisioned in 1917 that we would use his discoveries to play movies in our houses, or print documents, or repair our vision. Or see an awesome light show at Disneyland.

And where did this phone in my pocket come from? Well, the story of the integrated circuit is fascinating, prominently involving Sherman Fairchild, and other good friends of Caltech like Arnold Beckman and Gordon Moore. But when you dig a little deeper, at the heart of the story are two theorists, Bill Shockley and John Bardeen, with an exceptionally clear understanding of how electrons move through semiconductors. Which led to transistors, and integrated circuits, and this phone. And we all know it doesn’t stop here. When the computers take over the world, you’ll know who to blame.

Incidentally, while Shockley was a Caltech grad (BS class of 1932), John Bardeen, one of the great theoretical physicists of the 20th century, grew up in Wisconsin and studied physics and electrical engineering at the University of Wisconsin at Madison. I suppose that in the 1920s Wisconsin had no pressing need for physicists, but think of the return on the investment the state of Wisconsin made in the education of John Bardeen.1

So, physics is a great investment, of incalculable value to society. But … that’s not why I do it. I suppose few physicists choose to do physics for that reason. So why do we do it? Yes, we like it, we’re good at it, but there is a stronger pull than just that. We honestly think there is no more engaging intellectual adventure than struggling to understand Nature at the deepest level. This requires attitude. Maybe you’ve heard that theoretical physicists have a reputation for arrogance. Okay, it’s true, we are arrogant, we have to be. But it is not that we overestimate our own prowess, our ability to understand the world. In fact, the opposite is often true. Physics works, it’s successful, and this often surprises us; we wind up being shocked again and again by the “unreasonable effectiveness of mathematics in the natural sciences.” It’s hard to believe that the equations you write down on a piece of paper can really describe the world. But they do.

And to display my own arrogance, I’ll tell you more about myself. This occasion has given me cause to reflect on my own 30+ years on the Caltech faculty, and what I’ve learned about doing theoretical physics successfully. And I’ll tell you just three principles, which have been important for me, and may be relevant to the future of the Burke Institute. I’m not saying these are universal principles – we’re all different and we all contribute in different ways, but these are principles that have been important for me.

My first principle is: We learn by teaching.

Why do physics at universities, at institutions of higher learning? Well, not all great physics is done at universities. Excellent physics is done at industrial laboratories and at our national laboratories. But the great engine of discovery in the physical sciences is still our universities, and US universities like Caltech in particular. Granted, US preeminence in science is not what it once was — it is a great national asset to be cherished and protected — but world-changing discoveries are still flowing from Caltech and other great universities.

Why? Well, when I contemplate my own career, I realize I could never have accomplished what I have as a research scientist if I were not also a teacher. And it’s not just because the students and postdocs have all the great ideas. No, it’s more interesting than that. Most of what I know about physics, most of what I really understand, I learned by teaching it to others. When I first came to Caltech 30 years ago I taught advanced elementary particle physics, and I’m still reaping the return from what I learned those first few years. Later I got interested in black holes, and most of what I know about that I learned by teaching general relativity at Caltech. And when I became interested in quantum computing, a really new subject for me, I learned all about it by teaching it.2

Part of what makes teaching so valuable for the teacher is that we’re forced to simplify, to strip down a field of knowledge to what is really indispensable, a tremendously useful exercise. Feynman liked to say that if you really understand something you should be able to explain it in a lecture for freshmen. Okay, he meant Caltech freshmen. They’re smart, but they don’t know all the sophisticated tools we use in our everyday work. Whether you can explain the core idea without all the peripheral technical machinery is a great test of understanding.

And of course it’s not just the teachers, but also the students and the postdocs who benefit from the teaching. They learn things faster than we do and often we’re just providing some gentle steering; the effect is to amplify greatly what we could do on our own. All the more so when they leave Caltech and go elsewhere to change the world, as they so often do, like those who are returning tonight for this Symposium. We’re proud of you!

My second principle is: The two-trick pony has a leg up.

I’m a firm believer that advances are often made when different ideas collide and a synthesis occurs. I learned this early, when as a student I was fascinated by two topics in physics, elementary particles and cosmology. Nowadays everyone recognizes that particle physics and cosmology are closely related, because when the universe was very young it was also very hot, and particles were colliding at very high energies. But back in the 1970s, the connection was less widely appreciated. By knowing something about cosmology and about particle physics, by being a two-trick pony, I was able to think through what happens as the universe cools, which turned out to be my ticket to becoming a Caltech professor.

It takes a community to produce two-trick ponies. I learned cosmology from one set of colleagues and particle physics from another set of colleagues. I didn’t know either subject as well as the real experts. But I was a two-trick pony, so I had a leg up. I’ve tried to be a two-trick pony ever since.

Another great example of a two-trick pony is my Caltech colleague Alexei Kitaev. Alexei studied condensed matter physics, but he also became intensely interested in computer science, and learned all about that. Back in the 1990s, perhaps no one else in the world combined so deep an understanding of both condensed matter physics and computer science, and that led Alexei to many novel insights. Perhaps most remarkably, he connected ideas about error-correcting codes, which protect information from damage, with ideas about novel quantum phases of matter, leading to radical new suggestions about how to operate a quantum computer using exotic particles we call anyons. These ideas had an invigorating impact on experimental physics and may someday have a transformative effect on technology. (We don’t know that yet; it’s still way too early to tell.) Alexei could produce an idea like that because he was a two-trick pony.3

Which brings me to my third principle: Nature is subtle.

Yes, mathematics is unreasonably effective. Yes, we can succeed at formulating laws of Nature with amazing explanatory power. But it’s a struggle. Nature does not give up her secrets so readily. Things are often different than they seem on the surface, and we’re easily fooled. Nature is subtle.4

Perhaps there is no greater illustration of Nature’s subtlety than what we call the holographic principle. This principle says that, in a sense, all the information that is stored in this room, or any room, is really encoded entirely and with perfect accuracy on the boundary of the room, on its walls, ceiling and floor. Things just don’t seem that way, and if we underestimate the subtlety of Nature we’ll conclude that it can’t possibly be true. But unless our current ideas about the quantum theory of gravity are on the wrong track, it really is true. It’s just that the holographic encoding of information on the boundary of the room is extremely complex and we don’t really understand in detail how to decode it. At least not yet.
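
The quantitative version of the holographic principle (the standard statement, in units where k_B = 1): the maximum entropy that can be stored in a region is proportional to the area A of its boundary, not to its volume,

    S_{\rm max} = \frac{A}{4 \ell_P^2}, \qquad \ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}~{\rm m},

which is the Bekenstein-Hawking entropy of a black hole whose horizon just fills the boundary.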

This holographic principle, arguably the deepest idea about physics to emerge in my lifetime, is still mysterious. How can we make progress toward understanding it well enough to explain it to freshmen? Well, I think we need more two-trick ponies. Except maybe in this case we’ll need ponies who can do three tricks or even more. Explaining how spacetime might emerge from some more fundamental notion is one of the hardest problems we face in physics, and it’s not going to yield easily. We’ll need to combine ideas from gravitational physics, information science, and condensed matter physics to make real progress, and maybe completely new ideas as well. Some of our former Sherman Fairchild Prize Fellows are leading the way at bringing these ideas together, people like Guifre Vidal, who is here tonight, and Patrick Hayden, who very much wanted to be here.5 We’re very proud of what they and others have accomplished.

Bringing ideas together is what the Walter Burke Institute for Theoretical Physics is all about. I’m not talking about only the holographic principle, which is just one example, but all the great challenges of theoretical physics, which will require ingenuity and synthesis of great ideas if we hope to make real progress. We need a community of people coming from different backgrounds, with enough intellectual common ground to produce a new generation of two-trick ponies.

Finally, it seems to me that an occasion as important as the inauguration of the Burke Institute should be celebrated in verse. And so …

Who studies spacetime stress and strain
And excitations on a brane,
Where particles go back in time,
And physicists engage in rhyme?

Whose speedy code blows up a star
(Though it won’t quite blow up so far),
Where anyons, which braid and roam
Annihilate when they get home?

Who makes math and physics blend
Inside black holes where time may end?
Where do they do all this work?
The Institute of Walter Burke!

We’re very grateful to the Burke family and to the Sherman Fairchild Foundation. And we’re confident that your generosity will make great things happen!

  1. I was reminded of this when I read about a recent proposal by the current governor of Wisconsin. 
  2. And by the way, I put my lecture notes online, and thousands of people still download them and read them. So even before MOOCs – massive open online courses – the Internet was greatly expanding the impact of our teaching. Handwritten versions of my old particle theory and relativity notes are also online here.
  3. Okay, I admit it’s not quite that simple. At that same time I was also very interested in both error correction and in anyons, without imagining any connection between the two. It helps to be a genius. But a genius who is also a two-trick pony can be especially awesome. 
  4. We made that the tagline of IQIM. 
  5. Patrick can’t be here for a happy reason, because today he and his wife Mary Race welcomed a new baby girl, Caroline Eleanor Hayden, their first child. The Burke Institute is not the only good thing being inaugurated today. 

Bell’s inequality 50 years later

This is a jubilee year.* In November 1964, John Bell submitted a paper to the obscure (and now defunct) journal Physics. That paper, entitled “On the Einstein Podolsky Rosen Paradox,” changed how we think about quantum physics.

The paper was about quantum entanglement, the characteristic correlations among parts of a quantum system that are profoundly different than correlations in classical systems. Quantum entanglement had first been explicitly discussed in a 1935 paper by Einstein, Podolsky, and Rosen (hence Bell’s title). Later that same year, the essence of entanglement was nicely and succinctly captured by Schrödinger, who said, “the best possible knowledge of a whole does not necessarily include the best possible knowledge of its parts.” Schrödinger meant that even if we have the most complete knowledge Nature will allow about the state of a highly entangled quantum system, we are still powerless to predict what we’ll see if we look at a small part of the full system. Classical systems aren’t like that — if we know everything about the whole system then we know everything about all the parts as well. I think Schrödinger’s statement is still the best way to explain quantum entanglement in a single vigorous sentence.
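
A concrete example, the standard one: a Bell pair of qubits

    |\Phi^+\rangle = \frac{1}{\sqrt{2}} \left( |00\rangle + |11\rangle \right)

is a pure state, so our knowledge of the whole is as complete as quantum mechanics allows; yet the reduced state of either qubit alone is \rho = I/2, maximally mixed, so every local measurement is a coin flip. Complete knowledge of the whole, complete ignorance of the parts.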

To Einstein, quantum entanglement was unsettling, indicating that something is missing from our understanding of the quantum world. Bell proposed thinking about quantum entanglement in a different way, not just as something weird and counter-intuitive, but as a resource that might be employed to perform useful tasks. Bell described a game that can be played by two parties, Alice and Bob. It is a cooperative game, meaning that Alice and Bob are both on the same side, trying to help one another win. In the game, Alice and Bob receive inputs from a referee, and they send outputs to the referee, winning if their outputs are correlated in a particular way which depends on the inputs they receive.

But under the rules of the game, Alice and Bob are not allowed to communicate with one another between when they receive their inputs and when they send their outputs, though they are allowed to use correlated classical bits which might have been distributed to them before the game began. For a particular version of Bell’s game, if Alice and Bob play their best possible strategy then they can win the game with a probability of success no higher than 75%, averaged uniformly over the inputs they could receive. This upper bound on the success probability is Bell’s famous inequality.**

Classical and quantum versions of Bell’s game. If Alice and Bob share entangled qubits rather than classical bits, then they can win the game with a higher success probability.

There is also a quantum version of the game, in which the rules are the same except that Alice and Bob are now permitted to use entangled quantum bits (“qubits”) which were distributed before the game began. By exploiting their shared entanglement, they can play a better quantum strategy and win the game with a higher success probability, better than 85%. Thus quantum entanglement is a useful resource, enabling Alice and Bob to play the game better than if they shared only classical correlations instead of quantum correlations.
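
For the curious, here are the numbers for the CHSH version of the game mentioned in the second footnote below: the referee sends independent random bits x to Alice and y to Bob, who reply with bits a and b, and they win when

    a \oplus b = x \wedge y.

The best classical strategy is simply to always answer a = b = 0, which wins on three of the four input pairs, giving the 75% bound; sharing a Bell pair and measuring along suitably chosen axes achieves

    P_{\rm win} = \cos^2(\pi/8) = \frac{2 + \sqrt{2}}{4} \approx 85.4\%.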

And experimental physicists have been playing the game for decades, winning with a success probability that violates Bell’s inequality. The experiments indicate that quantum correlations really are fundamentally different than, and stronger than, classical correlations.

Why is that such a big deal? Bell showed that a quantum system is more than just a probabilistic classical system, which eventually led to the realization (now widely believed though still not rigorously proven) that accurately predicting the behavior of highly entangled quantum systems is beyond the capacity of ordinary digital computers. Therefore physicists are now striving to scale up the weirdness of the microscopic world to larger and larger scales, eagerly seeking new phenomena and unprecedented technological capabilities.

1964 was a good year. Higgs and others described the Higgs mechanism, Gell-Mann and Zweig proposed the quark model, Penzias and Wilson discovered the cosmic microwave background, and I saw the Beatles on the Ed Sullivan show. Those developments continue to reverberate 50 years later. We’re still looking for evidence of new particle physics beyond the standard model, we’re still trying to unravel the large scale structure of the universe, and I still like listening to the Beatles.

Bell’s legacy is that quantum entanglement is becoming an increasingly pervasive theme of contemporary physics, important not just as the source of a quantum computer’s awesome power, but also as a crucial feature of exotic quantum phases of matter, and even as a vital element of the quantum structure of spacetime itself. 21st century physics will advance not only by probing the short-distance frontier of particle physics and the long-distance frontier of cosmology, but also by exploring the entanglement frontier, by elucidating and exploiting the properties of increasingly complex quantum states.

Sometimes I wonder how the history of physics might have been different if there had been no John Bell. Without Higgs, Brout and Englert and others would have elucidated the spontaneous breakdown of gauge symmetry in 1964. Without Gell-Mann, Zweig could have formulated the quark model. Without Penzias and Wilson, Dicke and collaborators would have discovered the primordial black-body radiation at around the same time.

But it’s not obvious which contemporary of Bell, if any, would have discovered his inequality in Bell’s absence. Not so many good physicists were thinking about quantum entanglement and hidden variables at the time (though David Bohm may have been one notable exception, and his work deeply influenced Bell). Without Bell, the broader significance of quantum entanglement would have unfolded quite differently and perhaps not until much later. We really owe Bell a great debt.

*I’m stealing the title and opening sentence of this post from Sidney Coleman’s great 1981 lectures on “The magnetic monopole 50 years later.” (I’ve waited a long time for the right opportunity.)

**I’m abusing history somewhat. Bell did not use the language of games, and this particular version of the inequality, which has since been extensively tested in experiments, was derived by Clauser, Horne, Shimony, and Holt in 1969.

When I met with Steven Spielberg to talk about Interstellar

Today I had the awesome and eagerly anticipated privilege of attending a screening of the new film Interstellar, directed by Christopher Nolan. One can’t help but be impressed by Nolan’s fertile visual imagination. But you should know that Caltech’s own Kip Thorne also had a vital role in this project. Indeed, were there no Kip Thorne, Interstellar would never have happened.

On June 2, 2006, I participated in an unusual one-day meeting at Caltech, organized by Kip and the movie producer Lynda Obst (Sleepless in Seattle, Contact, The Invention of Lying, …). Lynda and Kip, who have been close since being introduced by their mutual friend Carl Sagan decades ago, had conceived a movie project together, and had collaborated on a “treatment” outlining the story idea. The treatment adhered to a core principle that was very important to Kip — that the movie be scientifically accurate. Though the story indulged in some wild speculations, at Kip’s insistence it steered clear of any flagrant violation of the firmly established laws of Nature. This principle of scientifically constrained speculation intrigued Steven Spielberg, who was interested in directing.

The purpose of the meeting was to brainstorm about the story and the science behind it with Spielberg, Obst, and Thorne. A remarkable group assembled, including physicists (Andrei Linde, Lisa Randall, Savas Dimopoulos, Mark Wise, as well as Kip), astrobiologists (Frank Drake, David Grinspoon), planetary scientists (Alan Boss, John Spencer, Dave Stevenson), and psychologists (Jay Buckey, James Carter, David Musson). As we all chatted and got acquainted, I couldn’t help but feel that we were taking part in the opening scene of a movie about making a movie. Spielberg came late and left early, but spent about three hours with us; he even brought along his Dad (an engineer).

Though the official release of Interstellar is still a few days away, you may already know from numerous media reports (including the cover story in this week’s Time Magazine) the essential elements of the story, which involves traveling through a wormhole seeking a new planet for humankind, a replacement for the hopelessly ravaged earth. The narrative evolved substantially as the project progressed, but traveling through a wormhole to visit a distant planet was already central to the original story.

Inevitably, some elements of the Obst/Thorne treatment did not survive in the final film. For one, Stephen Hawking was a prominent character in the original story; he joined the mission because of his unparalleled expertise at wormhole traversal, and Stephen’s ALS symptoms eased during prolonged weightlessness, only to recur upon return to earth gravity. Also, gravitational waves played a big part in the treatment; in particular the opening scene depicted LIGO scientists discovering the wormhole by detecting the gravitational waves emanating from it.

There was plenty to discuss to fill our one-day workshop, including:

* the rocket technology needed for the trip,
* the strong but stretchy materials that would allow the ship to pass through the wormhole without being torn apart by tidal gravity,
* how to select a crew psychologically fit for such a dangerous mission,
* what exotic life forms might be found on other worlds,
* how to communicate with an advanced civilization which resides in a higher-dimensional bulk rather than the three-dimensional brane to which we’re confined,
* how to build a wormhole that stays open rather than pinching off and crushing those who attempt to pass through,
* and whether a wormhole could enable travel backward in time.

Spielberg was quite engaged in our discussions. Upon his arrival I immediately shot off a text to my daughter Carina: “Steven Spielberg is wearing a Brown University cap!” (Carina was a Brown student at the time, as Spielberg’s daughter had been.) Steven assured us of his keen interest in the project, noting wryly that “Aliens have been very good to me,” and he mentioned some of his favorite space movies, which included some I had also enjoyed as a kid, like Forbidden Planet and (the original) The Day the Earth Stood Still. In one notable moment, Spielberg asked the group “Who believes that intelligent life exists elsewhere in the universe?” We all raised our hands. “And who believes that the earth has been visited by extraterrestrial civilizations?” No one raised a hand. Steven seemed struck by our unanimity on both questions.

I remember tentatively suggesting that the extraterrestrials had mastered M-theory, thus attaining computational power far beyond the comprehension of earthlings, and that they themselves were really advanced robots, constructed by an earlier generation of computers. Like many of the fun story ideas floated that day, this one had no apparent impact on the final version of the film.

Spielberg later brought in Jonah Nolan to write the screenplay. When Spielberg had to abandon the project because his DreamWorks production company broke up with Paramount Pictures (which owned the story), Jonah’s brother Chris Nolan eventually took over the project. Jonah and Chris Nolan transformed the story, but continued to consult extensively with Kip, who became an Executive Producer and says he is pleased with the final result.

Of the many recent articles about Interstellar, one of the most interesting is this one in Wired by Adam Rogers, which describes how Kip worked closely with the visual effects team at Double Negative to ensure that wormholes and rapidly rotating black holes are accurately depicted in the film (though liberties were taken to avoid confusing the audience). The images produced by sophisticated ray tracing computations were so surprising that at first Kip thought there must be a bug in the software, though eventually he accepted that the calculations are correct, and he is still working hard to more fully understand the results.

I can’t give away the ending of the movie, but I can safely say this: When it’s over you’re going to have a lot of questions. Fortunately for all of us, Kip’s book The Science of Interstellar will be available the same day the movie goes into wide release (November 7), so we’ll all know where to seek enlightenment.

In fact on that very same day we’ll be treated to the release of The Theory of Everything, a biopic about Stephen and Jane Hawking. So November 7 is going to be an unforgettable Black Hole Day. Enjoy!