The weak shall inherit the quasiprobability.

Justin Dressel’s office could understudy for the archetype of a physicist’s office. A long, rectangular table resembles a lab bench. Atop the table perches a tesla coil. A larger tesla coil perches on Justin’s desk. Rubik’s cubes and other puzzles surround a computer and papers. In front of the desk hangs a whiteboard.

A puzzle filled the whiteboard in August. Justin had written a model for a measurement of a quasiprobability. I introduced quasiprobabilities here last Halloween. Quasiprobabilities are to probabilities as ebooks are to books: Ebooks resemble books but can respond to touchscreen interactions through sounds and animation. Quasiprobabilities resemble probabilities but behave in ways that probabilities don’t.


A tesla coil of Justin Dressel’s

 

Let p denote the probability that any given physicist keeps a tesla coil in his or her office. p ranges between zero and one. Quasiprobabilities can dip below zero. They can assume nonreal values, dependent on the imaginary number i = \sqrt{-1}. Probabilities describe nonquantum phenomena, like tesla-coil collectors,1 and quantum phenomena, like photons. Quasiprobabilities appear nonclassical.2,3

We can infer the tesla-coil probability by observing many physicists’ offices:

\text{Prob(any given physicist keeps a tesla coil in his/her office)} = \frac{ \text{\# physicists who keep tesla coils in their offices} }{ \text{\# physicists} } \, .

We can infer quasiprobabilities from weak measurements, Justin explained. You can measure the number of tesla coils in an office by shining light on the office, correlating the light’s state with the tesla-coil number, and capturing the light on photographic paper. The correlation needn’t affect the tesla coils. But observing a quantum state changes the state, by the Uncertainty Principle heralded by Heisenberg.

We could observe a quantum system weakly. We’d correlate our measurement device (the analogue of light) with the quantum state (the analogue of the tesla-coil number) unreliably. Imagine shining a dull light on an office for a brief duration. Shadows would obscure our photo. We’d have trouble inferring the number of tesla coils. But the dull, brief light burst would affect the office less than a strong, long burst would.
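The trade-off can be seen in a toy simulation of my own (the coupling strength and noise level are invented numbers, not from any experiment): model a weak measurement as a readout that shifts only slightly with the system’s value and is swamped by noise. One shot reveals little; averaging many gentle shots recovers the value.

```python
import random
import statistics

COUPLING = 0.05   # how faintly the probe couples to the system (invented number)
NOISE = 1.0       # readout noise, much larger than the signal (invented number)

def weak_measure(true_value):
    """One weak measurement: the pointer shifts by only a small
    fraction of the true value, and the shift is buried in noise."""
    return COUPLING * true_value + random.gauss(0.0, NOISE)

random.seed(42)
true_value = 3.0  # the quantity we want to infer

# A single shot is noise-dominated and nearly useless...
single_shot = weak_measure(true_value)

# ...but averaging many gentle shots recovers the value.
shots = [weak_measure(true_value) for _ in range(200_000)]
estimate = statistics.mean(shots) / COUPLING
print(single_shot, estimate)
```

Each shot disturbs the system only a little (the coupling is small), yet the ensemble average still betrays the underlying value.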

Justin explained how to infer a quasiprobability from weak measurements. He’d explained on account of an action that others might regard as weak: I’d asked for help.


Chaos had seized my attention a few weeks earlier. Chaos is a branch of math and physics that involves phenomena we can’t predict, like weather. I had forayed into quantum chaos for reasons I’ll explain in later posts. I was studying a function F(t) that can flag chaos in cold atoms, black holes, and superconductors.

I’d derived a theorem about F(t). The theorem involved a UFO of a mathematical object: a probability amplitude that resembled a probability but could assume nonreal values. I presented the theorem to my research group, which was kind enough to provide feedback.

“Is this amplitude physical?” John Preskill asked. “Can you measure it?”

“I don’t know,” I admitted. “I can tell a story about what it signifies.”

“If you could measure it,” he said, “I might be more excited.”

You needn’t study chaos to predict that private clouds drizzled on me that evening. I was grateful to receive feedback from thinkers I respected, to learn of a weakness in my argument. Still, scientific works are creative works. Creative works carry fragments of their creators. A weakness in my argument felt like a weakness in me. So I took the step that some might regard as weak—by seeking help.

 


Some problems, one should solve alone. If you wake me at 3 AM and demand that I solve the Schrödinger equation that governs a particle in a box, I should be able to comply (if you comply with my demand for justification for the need to solve the Schrödinger equation at 3 AM).4 One should struggle far into problems before seeking help.

Some scientists extend this principle into a ban on assistance. Some students avoid asking questions for fear of revealing that they don’t understand. Some boast about passing exams and finishing homework without the need to attend office hours. I call their attitude “scientific machismo.”

I’ve all but lived in office hours. I’ve interrupted lectures with questions every few minutes. I didn’t know if I could measure that probability amplitude. But I knew three people who might know. Twenty-five minutes after I emailed them, Justin replied: “The short answer is yes!”


I visited Justin the following week, at Chapman University’s Institute for Quantum Studies. I sat at his bench-like table, eyeing the nearest tesla coil, as he explained. Justin had recognized my probability amplitude from studies of the Kirkwood-Dirac quasiprobability. Experimentalists infer the Kirkwood-Dirac quasiprobability from weak measurements. We could borrow these experimentalists’ techniques, Justin showed, to measure my probability amplitude.
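For readers who like to compute: here is a minimal single-qubit sketch of the Kirkwood-Dirac quasiprobability (my own toy example, not the measurement protocol from the paper). Given a state ρ and two bases {|a⟩} and {|b⟩}, the entries ⟨b|a⟩⟨a|ρ|b⟩ sum to one, like probabilities, yet can be nonreal.

```python
import numpy as np

# Toy illustration: the Kirkwood-Dirac quasiprobability of a state rho
# over two bases {|a>} and {|b>} has entries
#   Q(a, b) = <b|a> <a|rho|b>.
# The entries sum to 1, like probabilities, but can be negative or nonreal.

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# The state |psi> = (|0> + i|1>)/sqrt(2):
psi = (ket0 + 1j * ket1) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

Q = np.array([[(b.conj() @ a) * (a.conj() @ rho @ b)
               for b in (plus, minus)]
              for a in (ket0, ket1)])

print(Q)          # entries like (0.25 - 0.25j): nonreal!
print(Q.sum())    # yet the entries sum to 1, as a distribution's should
```

Swap in a state diagonal in either basis, and every entry becomes an ordinary nonnegative probability; the quasi-ness appears only when the state and the two bases fail to mesh.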

The borrowing grew into a measurement protocol. The theorem grew into a paper. I plunged into quasiprobabilities and weak measurements, following Justin’s advice. John grew more excited.

The meek might inherit the Earth. But the weak shall measure the quasiprobability.

With gratitude to Justin for sharing his expertise and time; and to Justin, Matt Leifer, and Chapman University’s Institute for Quantum Studies for their hospitality.

Chapman’s community was gracious enough to tolerate a seminar from me about thermal states of quantum systems. You can watch the seminar here.

1Tesla-coil collectors consist of atoms described by quantum theory. But we can describe tesla-coil collectors without quantum theory.

2Readers foreign to quantum theory can interpret “nonclassical” roughly as “quantum.”

3Debate has raged about whether quasiprobabilities govern classical phenomena.

4I should be able also to recite the solutions from memory.

Happy Halloween from…the discrete Wigner function?

Do you hope to feel a breath of cold air on the back of your neck this Halloween? I’ve felt one literally: I earned my Masters in the icebox called “Ontario,” at the Perimeter Institute for Theoretical Physics. Perimeter’s colloquia1 take place in an auditorium blacker than a Quentin Tarantino film. Aephraim Steinberg presented a colloquium one air-conditioned May.

Steinberg experiments on ultracold atoms and quantum optics2 at the University of Toronto. He introduced an idea that reminds me of biting into an apple whose coating you’d thought consisted of caramel, then tasting blood: a negative (quasi)probability.

Probabilities usually range from zero upward. Consider Shirley Jackson’s short story The Lottery. Villagers in a 20th-century American village prepare slips of paper. The number of slips equals the number of families in the village. One slip bears a black spot. Each family receives a slip. Each family has a probability p > 0  of receiving the marked slip. What happens to the family that receives the black spot? Read Jackson’s story—if you can stomach more than a Tarantino film.

Jackson peeled off skin to reveal the offal of human nature. Steinberg’s experiments reveal the offal of Nature. I’d expect humaneness of Jackson’s villagers and nonnegativity of probabilities. But what looks like a probability and smells like a probability might be hiding its odor with Special-Edition Autumn-Harvest Febreeze.


A quantum state resembles a set of classical3 probabilities. Consider a classical system that has too many components for us to track them all. Consider, for example, the cold breath on the back of your neck. The breath consists of air molecules at some temperature T. Suppose we measured the molecules’ positions and momenta. We’d have some probability p_1 of finding this particle here with this momentum, that particle there with that momentum, and so on. We’d have a probability p_2 of finding this particle there with that momentum, that particle here with this momentum, and so on. These probabilities form the air’s state.

We can tell a similar story about a quantum system. Consider the quantum light prepared in a Toronto lab. The light has properties analogous to position and momentum. We can represent the light’s state with a mathematical object similar to the air’s probability density.4 But this probability-like object can sink below zero. We call the object a quasiprobability, denoted by \mu.

If a \mu sinks below zero, the quantum state it represents encodes entanglement. Entanglement is a correlation stronger than any achievable with nonquantum systems. Quantum information scientists use entanglement to teleport information, encrypt messages, and probe the nature of space-time. I usually avoid this cliché, but since Halloween is approaching: Einstein called entanglement “spooky action at a distance.”


Eugene Wigner and others defined quasiprobabilities shortly before Shirley Jackson wrote The Lottery. Quantum opticians use these \mu’s, because quantum optics and quasiprobabilities involve continuous variables. Examples of continuous variables include position: An air molecule can sit at this point (e.g., x = 0) or at that point (e.g., x = 1) or anywhere between the two (e.g., x = 0.001). The possible positions form a continuous set. Continuous variables model quantum optics as they model air molecules’ positions.

Information scientists use continuous variables less than we use discrete variables. A discrete variable assumes one of just a few possible values, such as 0 or 1, or trick or treat.


How a quantum-information theorist views Halloween.

Quantum-information scientists study discrete systems, such as electron spins. Can we represent discrete quantum systems with quasiprobabilities \mu as we represent continuous quantum systems? You bet your barmbrack.

Bill Wootters and others have designed quasiprobabilities for discrete systems. Wootters stipulated that his \mu have certain properties. The properties appear in this review.  Most physicists label properties “1,” “2,” etc. or “Prop. 1,” “Prop. 2,” etc. The Wootters properties in this review have labels suited to Halloween.
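As a toy illustration (using one common single-qubit convention for the phase-point operators; Wootters’ construction is more general, and conventions vary), the discrete Wigner function assigns a quasiprobability to each of four phase-space points, and those values can dip below zero:

```python
import numpy as np

# One common single-qubit convention (an assumption for illustration):
# phase-point operators
#   A(a, b) = (1/2) [ I + (-1)^a Z + (-1)^b X + (-1)^(a+b) Y ],
# and discrete Wigner function W(a, b) = (1/2) Tr[rho A(a, b)].

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def discrete_wigner(rho):
    W = np.empty((2, 2))
    for a in (0, 1):
        for b in (0, 1):
            A = 0.5 * (I + (-1)**a * Z + (-1)**b * X + (-1)**(a + b) * Y)
            W[a, b] = 0.5 * np.trace(rho @ A).real
    return W

# A pure qubit state polarized along -(1,1,1)/sqrt(3) on the Bloch sphere:
rho = 0.5 * (I - (X + Y + Z) / np.sqrt(3))

W = discrete_wigner(rho)
print(W)          # one entry is negative
print(W.sum())    # yet the four entries still sum to 1
```

For states diagonal in the Z basis, every entry of W stays nonnegative; negativity appears only for states, like the one above, that sit askew to all the measured bases.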


Seeing (quasi)probabilities sink below zero feels like biting into an apple that you think has a caramel coating, then tasting blood. Did you eat caramel apples around age six? Caramel apples dislodge baby teeth. When baby teeth fall out, so does blood. Tasting blood can mark growth—as does the squeamishness induced by a colloquium that spooks a student. Who needs haunted mansions when you have negative quasiprobabilities?

 

For nonexperts:

1Weekly research presentations attended by a department.

2Light.

3Nonquantum (basically).

4Think “set of probabilities.”

What matters to me, and why?

Students at my college asked every Tuesday. They gathered in a white, windowed room near the center of campus. “We serve,” read advertisements, “soup, bread, and food for thought.” One professor or visitor would discuss human rights, family,  religion, or another pepper in the chili of life.

I joined occasionally. I listened by the window, in the circle of chairs that ringed the speaker. Then I ventured from college into physics.

The questions “What matters to you, and why?” have chased me through physics. I ask experimentalists and theorists, professors and students: Why do you do science? Which papers catch your eye? Why have you devoted to quantum information more years than many spouses devote to marriages?

One physicist answered with another question. Chris Jarzynski works as a professor at the University of Maryland. He studies statistical mechanics—how particles typically act and how often particles act atypically; how materials shine, how gases push back when we compress them, and more.

“How,” Chris asked, “should we quantify precision?”

Chris had in mind nonequilibrium fluctuation theorems. Out-of-equilibrium systems have large-scale properties, like temperature, that change significantly.1 Examples include white-bean soup cooling at a “What matters” lunch. The soup’s temperature drops to room temperature as the system approaches equilibrium.


Nonequilibrium. Tasty, tasty nonequilibrium.

Some out-of-equilibrium systems obey fluctuation theorems. Fluctuation theorems are equations derived in statistical mechanics. Imagine a DNA molecule floating in a watery solution. Water molecules buffet the strand, which twitches. But the strand’s shape doesn’t change much. The DNA is in equilibrium.

You can grab the strand’s ends and stretch them apart. The strand will leave equilibrium as its length changes. Imagine pulling the strand to some predetermined length. You’ll have exerted energy.

How much? The amount will vary if you repeat the experiment. Why? This trial began with the DNA curled this way; that trial began with the DNA curled that way. During this trial, the water batters the molecule more; during that trial, less. These discrepancies block us from predicting how much energy you’ll exert. But suppose you pick a number W. We can form predictions about the probability that you’ll have to exert an amount W of energy.

How do we predict? Using nonequilibrium fluctuation theorems.

Fluctuation theorems matter to me, as Quantum Frontiers regulars know. Why? Because I’ve written enough fluctuation-theorem articles to test even a statistical mechanic’s patience. More seriously, why do fluctuation theorems matter to me?

Fluctuation theorems fill a gap in the theory of statistical mechanics. Fluctuation theorems relate nonequilibrium processes (like the cooling of soup) to equilibrium systems (like room-temperature soup). Physicists can model equilibrium. But we know little about nonequilibrium. Fluctuation theorems bridge from the known (equilibrium) to the unknown (nonequilibrium).


Experiments take place out of equilibrium. (Stretching a DNA molecule changes the molecule’s length.) So we can measure properties of nonequilibrium processes. We can’t directly measure properties of equilibrium processes, because we can’t perform a truly equilibrium process experimentally. But we can measure an equilibrium property indirectly: We perform nonequilibrium experiments, then plug our data into fluctuation theorems.


Which equilibrium property can we infer? A free-energy difference, denoted by ΔF. Every equilibrated system (every room-temperature soup) has a free energy F. F represents the energy that the system can exert, such as the energy available to stretch a DNA molecule. Imagine subtracting one system’s free energy, F1, from another system’s free energy, F2. The subtraction yields a free-energy difference, ΔF = F2 – F1. We can infer the value of ΔF from experiments.

How should we evaluate those experiments? Which experiments can we trust, and which need repeating?

Those questions mattered little to me, before I met Chris Jarzynski. Bridging equilibrium with nonequilibrium mattered to me, and bridging theory with experiment. Not experimental nitty-gritty.

I deserved a dunking in white-bean soup.


Suppose you performed infinitely many trials—stretched a DNA molecule infinitely many times. In each trial, you measured the energy exerted. You processed your data, then substituted into a fluctuation theorem. You could infer the exact value of ΔF.

But we can’t perform infinitely many trials. Imprecision mars our inference about ΔF. How does the imprecision relate to the number of trials performed?2
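The question can be made concrete with a toy sketch of my own (not the model from the paper): I assume the Jarzynski equality, \langle e^{-W/kT} \rangle = e^{-\Delta F / kT}, and draw work values from a Gaussian distribution whose exact ΔF is known, so we can watch the finite-trial estimate converge.

```python
import math
import random

def estimate_delta_F(work_samples, kT=1.0):
    """Estimate a free-energy difference from measured work values
    via the Jarzynski equality: dF = -kT * ln< exp(-W/kT) >."""
    avg = sum(math.exp(-w / kT) for w in work_samples) / len(work_samples)
    return -kT * math.log(avg)

# Toy model (an assumption for illustration): work values drawn from a
# Gaussian with mean 2.0 and standard deviation 1.0, in units of kT.
# For Gaussian work, the exact answer is dF = <W> - sigma^2/(2 kT) = 1.5.
random.seed(0)
for n in (10, 1_000, 100_000):
    works = [random.gauss(2.0, 1.0) for _ in range(n)]
    print(n, estimate_delta_F(works))
```

With ten trials, the estimate wanders far from the exact value; with a hundred thousand, it settles close. Quantifying how fast that convergence happens, as a function of the number of trials, is precisely the question at hand.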

Chris and I adopted an information-theoretic approach. We quantified precision with a parameter \delta. Suppose you want to estimate ΔF with some precision. How many trials should you expect to need to perform? We bounded the number N_\delta of trials, using an entropy. The bound tightens an earlier estimate of Chris’s. If you perform N_\delta trials, you can estimate ΔF with a percent error that we estimated. We illustrated our results by modeling a gas.

I’d never appreciated the texture and richness of precision. But richness precision has: A few decimal places distinguish Albert Einstein’s general theory of relativity from Isaac Newton’s 17th-century mechanics. Particle physicists calculate constants of nature to many decimal places. Such a calculation earned a nod on physicist Julian Schwinger’s headstone. Precision serves as the bread and soup of much physics. I’d sniffed the importance of precision, but not tasted it, until questioned by Chris Jarzynski.


The questioning continues. My college has discontinued its “What matters” series. But I ask scientist after scientist—thoughtful human being after thoughtful human being—“What matters to you, and why?” Asking, listening, reading, calculating, and self-regulating sharpen my answers to those questions. My answers often squish beneath the bread knife in my cutlery drawer of criticism. Thank goodness that repeating trials can reduce our errors.


1Or large-scale properties that will change. Imagine connecting the ends of a charged battery with a wire. Charge will flow from terminal to terminal, producing a current. You can measure, every minute, how quickly charge is flowing: You can measure how much current is flowing. The current won’t change much, for a while. But the current will die off as the battery nears depletion. A large-scale property (the current) appears constant but will change. Such a capacity to change characterizes nonequilibrium steady states (NESSes). NESSes form our second example of nonequilibrium states. Many-body localization forms a third, quantum example.

2Readers might object that scientists have tools for quantifying imprecision. Why not apply those tools? Because ΔF equals a logarithm, which is nonlinear. Other authors’ proposals appear in references 1-13 of our paper. Charlie Bennett addressed a related problem with his “acceptance ratio.” (Bennett also blogged about evil on Quantum Frontiers last month.)

Carbon copy

The anticipatory excitement of summer vacation endures in the teaching profession like nowhere outside childhood schooldays. Undoubtedly, it ranks high on the list of things that keep teachers teaching. My excitement was high as the summer of 2015 started out the same way as the three previous years at Caltech: I would show up, find a place to set up, and wait for orders from scientist David Boyd. Upon arrival in Dr. Yeh’s lab, I was surprised to find all the equipment and my workspace very much untouched since last year. I was happy to find it this way, because it likely meant I could continue exactly where I had left off last summer. Later, I realized that David’s time since I left had been devoted to developing a revolutionary new process for making graphene in large sheets at low temperatures. He had not had time to mess with my stuff, including the stepper motor I had been working on the previous summer.

So, I place my glorified man purse in a bottom drawer, log into my computer, and wait. After maybe a half hour I hear footsteps set to a rhythm defined only by someone with purpose, and I’m sure it’s David. He peeks into the little office where I’m seated and, with a brief welcoming phrase, informs me that the goal for the summer is to wrap graphene around a thin copper wire using, what he refers to as, “your motor.” The motor is a stepper motor from an experiment David ran several years back. I wired and set up the track and motor last year for a proposed experiment, never realized, involving the growth of graphene strips. Because of the limited time I spend each summer at Caltech (8 weeks), that experiment came to a halt when I left and was to be continued this year. Instead, the focus veered from growing graphene strips to growing a two- to three-layer coating of graphene around a copper wire. The procedure remains the same; however, the substrate onto which the graphene grows changes. When growing graphene strips, the substrate is a 25-micron-thick copper foil, and after growth the graphene needs to be removed from the copper substrate. In our experiment we used a copper wire with an average thickness of 154 microns, and since the goal is to acquire a copper wire with graphene wrapped around it, there’s no need to remove the graphene.

Worth mentioning is the great research effort concerning the removal and transfer of graphene from copper to more useful substrates. After graphene growth, the challenge shifts to separating the graphene sheet from the copper substrate without damaging the graphene. Next, the graphene is transferred to various substrates for fabrication and other purposes. Current techniques for removing graphene from copper often damage the graphene, degrading the remarkable electrical properties that have earned graphene great attention from R&D groups globally. A surprisingly simple new technique employs water to harmlessly remove graphene from copper. This technique has been shown to be effective on graphene grown by plasma-enhanced chemical vapor deposition (PECVD). PECVD is the technique employed by scientist David Boyd, and is the focus of his paper published in Nature Communications in March of 2015.

So, David wants me to do something that has never been done before: grow graphene around a copper wire using a translation stage. The technique is to attach an Evenson cavity to the stage of a stepper-motor/threaded-rod apparatus and very slowly move the plasma along a strip of copper wire. If successful, this could have far-reaching implications for copper wire, including, but certainly not limited to, corrosion prevention and thermal dissipation, thanks to the high thermal conductivity exhibited by graphene. With David granting me free rein in his lab, and Ph.D. candidate Chen-Chih Hsu agreeing to help, I felt I had all the tools to give it a go.

Setting up this experiment is similar to growing graphene on copper foil using PECVD, with a couple of modifications. First, prior to pumping the quartz tube down to a near vacuum, we place a single copper wire into the tube instead of thin copper foil. Also, special care is taken when setting up the translation stage, ensuring that the Evenson cavity, attached to the stage, travels perfectly parallel to the quartz tube so as not to create a bind between the cavity and tube during travel. For the first trial, we decide to grow along a 5 cm section of copper wire at a translation speed of 25 microns per second, a very slow speed made possible by the stepper-motor apparatus. Per usual, after growth we check the sample using Raman spectroscopy. The graph shown here is the actual Raman spectrum taken in the lab immediately after growth. As the sample is scanned, the graph develops from right to left. We’re not expecting to see anything of much interest; however, hope and excitement steadily rise as the computer monitor shows a well-defined 2D-peak (right peak), a G-peak (middle peak), and a D-peak (left peak) with a height indicative of high defect density. Not the greatest of Raman spectra if we were shooting for defect-free monolayer graphene, but this is a very strong indication that we have 2-3 layer graphene on the copper wire. How could this be? Chen-Chih and I looked at each other incredulously. We quickly checked several locations along the wire and found the same result. We did it! Not only did we do it, but we did it on our first try! OK, now we can party. Streamers popped up into the air, a DJ with a turntable slid out from one of the walls, a perfectly synchronized kick line of cabaret dancers pranced about… okay, back to reality: we had a high-five and a back-and-forth “wow, that’s so cool!”

We knew, before we even reported our success to David and eventually Professor Yeh, that both would immediately ask for the exact parameters of the experiment and whether the results were reproducible. So we set off to try to grow again. Unfortunately, the second run did not yield a copper wire coated with graphene. The third trial did not yield graphene, and neither did the fourth or fifth. We were, however, finding that multilayer graphene was growing at the tips of the copper wire, but not in the middle sections. Our hypothesis at that point was that the existence of three edges at the tips of the wire aided the growth of graphene, compared with only two edges in the wire’s midsection (we are still not sure if this is the whole story).

In an effort to repeat the experiment and pin down the parameters for growth, an issue with the experimental setup needed to be addressed: we lacked control over the exact mixture of the gases employed for CVD (chemical vapor deposition). In the initial setup, that lack of control was acceptable, because the goal was only to discover whether growing graphene around a copper wire was possible. Now that we knew it was possible, attaining reproducible results required a deeper understanding of the process and, therefore, more precise control in our setup. Dr. Boyd agreed and ordered two leak valves, providing greater control over the exact recipe of gases used for CVD. With this improved control, the hope is to be able to control and, therefore, determine the exact gas mixture yielding the much-needed parameters for reliable graphene growth on a copper wire.

Unfortunately, the leak valves were delivered on my last day at Caltech before I returned to my regular teaching gig. Fortunately, I will be returning this summer (2016) to continue the search for the elusive parameters. If we succeed, David Boyd’s and Chen-Chih’s names will once again show up in a prestigious journal (Nature, Science, one of those…) and, just maybe, mine will make it there too. For the first time ever.

 

LIGO: Playing the long game, and winning big!

Wow. What a day! And what a story!

Kip Thorne in 1972, around the time MTW was completed.

It is hard for me to believe, but I have been on the Caltech faculty for nearly a third of a century. And when I arrived in 1983, interferometric detection of gravitational waves was already a hot topic of discussion here. At Kip Thorne’s urging, Ron Drever had been recruited to Caltech and was building the 40-meter prototype interferometer (which is still operating as a testbed for future detection technologies). Kip and his colleagues, spurred by Vladimir Braginsky’s insights, had for several years been actively studying the fundamental limits of quantum measurement precision, and how these might impact the search for gravitational waves.

I decided to bone up a bit on the subject, so naturally I pulled down from my shelf the “telephone book” — Misner, Thorne, and Wheeler’s mammoth Gravitation — and browsed Chapter 37 (“Detection of Gravitational Waves”), for which Kip had been the lead author. The chapter brimmed over with enthusiasm for the subject, but to my surprise interferometers were hardly mentioned. Instead the emphasis was on mechanical bar detectors. These had been pioneered by Joseph Weber, whose efforts in the 1960s had first aroused Kip’s interest in detecting gravitational waves, and by Braginsky.

I sought Kip out for an explanation, and with characteristic clarity and patience he told how his views had evolved. He had realized in the 1970s that a strain sensitivity of order 10^{-21} would be needed for a good chance at detection, and after many discussions with colleagues like Drever, Braginsky, and Rai Weiss, he had decided that kind of sensitivity would not be achievable with foreseeable technology using bars.

Ron Drever, who built Caltech’s 40-meter prototype interferometer in the 1980s.

We talked about what would be needed — a kilometer scale detector capable of sensing displacements of 10^{-18} meters. I laughed. As he had many times by then, Kip told why this goal was not completely crazy, if there is enough light in an interferometer, which bounces back and forth many times as a waveform passes. Immediately after the discussion ended I went to my desk and did some crude calculations. The numbers kind of worked, but I shook my head, unconvinced. This was going to be a huge undertaking. Success seemed unlikely. Poor Kip!

I’ve never been involved in LIGO, but Kip and I remained friends, and every now and then he would give me the inside scoop on the latest developments (most memorably while walking the streets of London for hours on a beautiful spring evening in 1991). From afar I followed the forced partnership between Caltech and MIT that was forged in the 1980s, and the painful transition from a small project under the leadership of Drever-Thorne-Weiss (great scientists but lacking much needed management expertise) to a large collaboration under a succession of strong leaders, all based at Caltech.

Vladimir Braginsky, who realized that quantum effects limit the sensitivity of gravitational wave detectors.

During 1994-95, I co-chaired a committee formulating a long-range plan for Caltech physics, and we spent more time talking about LIGO than any other issue. Part of our concern was whether a small institution like Caltech could absorb such a large project, which was growing explosively and straining Institute resources. And we also worried about whether LIGO would ultimately succeed. But our biggest worry of all was different — could Caltech remain at the forefront of gravitational wave research so that if and when LIGO hit paydirt we would reap the scientific benefits?

A lot has changed since then. After searching for years we made two crucial new faculty appointments: theorist Yanbei Chen (2007), who provided seminal ideas for improving sensitivity, and experimentalist Rana Adhikari (2006), a magician at the black art of making an interferometer really work. Alan Weinstein transitioned from high energy physics to become a leader of LIGO data analysis. We established a world-class numerical relativity group, now led by Mark Scheel. Staff scientists like Stan Whitcomb also had an essential role, as did longtime Project Manager Gary Sanders. LIGO Directors Robbie Vogt, Barry Barish, Jay Marx, and now Dave Reitze have provided effective and much needed leadership.

Rai Weiss, around the time he conceived LIGO in an amazing 1972 paper.

My closest connection to LIGO arose during the 1998-99 academic year, when Kip asked me to participate in a “QND reading group” he organized. (QND stands for Quantum Non-Demolition, Braginsky’s term for measurements that surpass the naïve quantum limits on measurement precision.) At that time we envisioned that Advanced LIGO would turn on in 2008, yet there were still many questions about how it would achieve the sensitivity required to ensure detection. I took part enthusiastically, and learned a lot, but never contributed any ideas of enduring value. The discussions that year did have positive outcomes, however, leading, for example, to a seminal paper by Kimble, Levin, Matsko, Thorne, and Vyatchanin on improving precision through squeezing of light. By the end of the year I had gained a much better appreciation of the strength of the LIGO team, and had accepted that Advanced LIGO might actually work!

I once asked Vladimir Braginsky why he spent years working on bar detectors for gravitational waves, while at the same time realizing that fundamental limits on quantum measurement would make successful detection very unlikely. Why wasn’t he trying to build an interferometer already in the 1970s? Braginsky loved to be asked questions like this, and his answer was a long story, told with many dramatic flourishes. The short answer is that he viewed interferometric detection of gravitational waves as too ambitious. A bar detector was something he could build in his lab, while an interferometer of the appropriate scale would be a long-term project involving a much larger, technically diverse team.

Joe Weber, whose audacious belief that gravitational waves are detectable on earth inspired Kip Thorne and many others.

Kip’s chapter in MTW ends with section 37.10 (“Looking toward the future”) which concludes with this juicy quote (written almost 45 years ago):

“The technical difficulties to be surmounted in constructing such detectors are enormous. But physicists are ingenious; and with the impetus provided by Joseph Weber’s pioneering work, and with the support of a broad lay public sincerely interested in pioneering in science, all obstacles will surely be overcome.”

That’s what we call vision, folks. You might also call it cockeyed optimism, but without optimism great things would never happen.

Optimism alone is not enough. For something like the detection of gravitational waves, we needed technical ingenuity, wise leadership, lots and lots of persistence, the will to overcome adversity, and ultimately the efforts of hundreds of hard working, talented scientists and engineers. Not to mention the courage displayed by the National Science Foundation in supporting such a risky project for decades.

I have never been prouder than I am today to be part of the Caltech family.

Some like it cold.

When I reached IBM’s Watson research center, I’d barely seen Aaron in three weeks. Aaron is an experimentalist pursuing a physics PhD at Caltech. I eat dinner with him and other friends, most Fridays. The group would gather on a sidewalk in the November dusk, those three weeks. Light would spill from a lamppost, and we’d tuck our hands into our pockets against the chill. Aaron’s wife would shake her head.

“The fridge is running,” she’d explain.

Aaron cools down mechanical devices to near absolute zero. Absolute zero is the lowest temperature possible,1 lower than outer space’s temperature. Cold magnifies certain quantum behaviors. Researchers observe those behaviors in small systems, such as nanoscale devices (devices about 10^-9 meters long). Aaron studies few-centimeter-long devices. Offsetting the devices’ size with cold might coax them into exhibiting quantum behaviors.

The cooling sounds as effortless as teaching a cat to play fetch. Aaron lowers his fridge’s temperature in steps. Each step involves checking for leaks: A mix of two fluids—two types of helium—cools the fridge. One type of helium costs about $800 per liter. Lose too much helium, and you’ve lost your shot at graduating. Each leak requires Aaron to warm the fridge, then re-cool it. He hauled helium and pampered the fridge for ten days, before the temperature reached 10 millikelvin (0.01 kelvin above absolute zero). He then worked like…well, like a grad student to check for quantum behaviors.

Aaron came to mind at IBM.

“How long does cooling your fridge take?” I asked Nick Bronn.

Nick works at Watson, IBM’s research center in Yorktown Heights, New York. Watson has sweeping architecture frosted with glass and stone. The building reminded me of Fred Astaire: decades-old, yet classy. I found Nick outside the cafeteria, nursing a coffee. He had sandy hair, more piercings than I, and a mandate to build a quantum computer.


IBM Watson

“Might I look around your lab?” I asked.

“Definitely!” Nick fished out an ID badge; grabbed his coffee cup; and whisked me down a wide, window-paneled hall.

Different researchers, across the world, are building quantum computers from different materials. IBMers use superconductors: their qubits are tiny circuits made of superconducting metal. The circuits function only at low temperatures, so IBM has seven closet-sized fridges. Different teams use different fridges to tackle different challenges in quantum computing.

Nick found a fridge that wasn’t running. He climbed half-inside, pointed at metallic wires and canisters, and explained how they work. I wondered how his cooling process compared to Aaron’s.

“You push a button.” Nick shrugged. “The fridge cools in two days.”

IBM, I learned, has dry fridges. Aaron uses a wet fridge. Dry and wet fridges operate differently, though both require helium. Aaron’s wet fridge vibrates less, jiggling his experiment less. Jiggling transfers heat, and heat suppresses the quantum behaviors Aaron hopes to observe.

Heat and warmth manifest in many ways, in physics. Count Rumford, an 18th-century American-Brit, conjectured the relationship between heat and jiggling. He noticed that drilling holes into cannons immersed in water boiled the water. The drill bits rotated (moved in circles), transferring energy of movement to the cannons, which heated up. Heat enraptures me because it relates to entropy, a measure of disorderliness and ignorance. The flow of heat helps explain why time flows in just one direction.

A physicist friend of mine writes papers, he says, when catalyzed by “blinding rage.” He reads a paper by someone else, whose misunderstandings anger him. His wrath boils over into a research project.

Warmth manifests as the welcoming of a visitor into one’s lab. Nick didn’t know me from Fred Astaire, but he gave me the benefit of the doubt. He let me pepper him with questions and invited more questions.

Warmth manifests as a 500-word disquisition on fridges. I asked Aaron, via email, about how his cooling compares to IBM’s. I expected two sentences and a link to Wikipedia, since Aaron works 12-hour shifts. But he took pity on his theorist friend. He also warmed to his subject. Can’t you sense the zeal in “Helium is the only substance in the world that will naturally isotopically separate (neat!)”? No knowledge of isotopic separation required.

Many quantum scientists like it cold. But understanding, curiosity, and teamwork fire us up. Anyone under the sway of those elements of science likes it hot.

With thanks to Aaron and Nick. Thanks also to John Smolin and IBM Watson’s quantum-computing-theory team for their hospitality.

1In many situations. Some systems, like small magnets, can access negative temperatures.

Surprise Happens in Experiments

The discovery of high-temperature superconductivity in copper-oxide-based ceramics (cuprates) in 1986 created tremendous excitement in the scientific community. For the first time superconductivity, the ability of a material to conduct electricity with zero energy loss to heat, was possible at temperatures an order of magnitude higher than previously thought possible. Thus began the dream of room-temperature superconductivity, a dream that has been heavily pursued but remains unfulfilled to this day.

The difficulty in creating a room-temperature superconductor is that we still do not understand exactly how cuprate high-temperature superconductors work. We know that the superconductivity is born from removing or adding a proper amount of electrons to an insulating antiferromagnet. Moreover, the material passes through a mysterious region, usually called the pseudogap, in the transition from insulating antiferromagnet to superconductor. For decades, scientists have debated whether the pseudogap in cuprates is a continuous evolution into superconductivity or a competing phase of matter with distinct symmetry properties, and some believe that a better understanding of its nature and its relationship to superconductivity could help pave a path toward room-temperature superconductivity.

The compound that we are studying, strontium iridium oxide (Sr2IrO4), is a promising candidate for a new family of high-temperature superconductors. Recent experimental findings reveal great similarities between Sr2IrO4 and the cuprates. Sr2IrO4 is a novel insulator at room temperature and turns into an antiferromagnet below a critical temperature called the Néel temperature (TN). With a certain amount of electrons added or removed by introducing foreign atoms, Sr2IrO4 enters the pseudogap regime. At an even higher charge-carrier concentration and a lower temperature, Sr2IrO4 exhibits strong signatures of unconventional superconductivity. A summary of the evolution of Sr2IrO4 as a function of charge-carrier density and temperature, usually referred to as a phase diagram, is depicted in the cartoon below, which mimics that of the cuprates.


A cartoon showing similarities between Sr2IrO4 and cuprates.

Our experimental results on the multipolar order in Sr2IrO4 further strengthen the connection between Sr2IrO4 and the cuprates. On one hand, there has been growing experimental evidence in recent years for the presence of symmetry-breaking phases of matter in the pseudogap regime of cuprates. On the other hand, the discovery of multipolar order in Sr2IrO4, where the pseudogap phenomenon has also been observed, suggests a possible connection between the two. To establish the relationship between the multipolar order and the pseudogap in Sr2IrO4, one needs to compare the temperature scales at which each of them appears. So far, we have traced out a line in the Sr2IrO4 phase diagram bounding the multipolar-ordered phase, which breaks the 90° rotational symmetry of its high-temperature state. However, the onset temperature for the pseudogap in Sr2IrO4 remains unknown.


An artistic rendition of rotational anisotropy patterns both above and below the transition temperature T_Ω where the multipolar order appears, showing the 90° rotational symmetry breaking across T_Ω.

In retrospect, the scientific story reads as told above, as if our experiment fits perfectly into a void in the connections between Sr2IrO4 and the cuprates. In reality, this experiment was my first encounter with serendipity in scientific research. When we started, there were no experimental indications of a pseudogap or superconductivity in Sr2IrO4; we were simply planning to refine its antiferromagnetic structure based on its recently refined crystallographic structure. This joyful surprise made me aware of the importance of being sensitive to unexpected results, especially in a developing field. Another surprise to me was the technique we used in this study, rotational anisotropy optical second-harmonic generation. The technique is as simple as shining light of frequency ω onto the sample from a series of angles and collecting light of frequency 2ω reflected from the sample. The novelty of our setup is that we rotate the light around the sample, rather than rotating the sample as in the traditional version of the technique. Thanks precisely to this seemingly trivial change, we were able to probe a multipolar order that remains challenging for other, more sophisticated symmetry-sensitive techniques. To me, this experience is what is most valuable, and what I am happiest to share.

Although the dream of room-temperature superconductivity remains unfulfilled, cross comparisons between Sr2IrO4 and the cuprates could offer insight into the factors important for superconductivity, eventually advancing the journey toward that dream.

Please find more details in our paper and Caltech media.


Artist’s rendition of spatially segregated domains of multipolar order in the Sr2IrO4 crystal.

The Graphene Effect

Spyridon Michalakis, Eryn Walsh, Benjamin Fackrell, Jackie O'Sullivan

Lunch with Spiros, Eryn, and Jackie at the Athenaeum (left to right).

Sitting and eating lunch in the room where Einstein and many others of turbo-charged, ultra-powered acumen sat and ate lunch excites me. So I was thrilled when lunch was arranged for the teachers participating in IQIM’s Summer Research Internship at the famed Athenaeum on Caltech’s campus. Spyridon Michalakis (Spiros), Jackie O’Sullivan, Eryn Walsh and I were having lunch when I asked Spiros about one of the renowned “Millennium” problems in mathematical physics I heard he had solved. He told me about his 18-month epic journey (surely an extremely condensed version) to solve a problem pertaining to the quantum Hall effect. Understandably, within this journey lay many trials and tribulations, ranging from feelings of self-loathing and pessimistic resignation to the tragic disappointment of realizing that a victory celebration was much ado about nothing because the solution wasn’t correct: an unveiling of one’s true humanity, and of the lengths one can push oneself to find a solution. Three points struck me from this conversation. First, the necessity of a love for the pain that tends to accompany dogged determination toward a solution. Second, the idea that a person’s humanity is exposed, at least to some degree, when accepting a challenge of this caliber and then refusing to accept failure with an almost supernatural steadfastness. Last, the quantum Hall effect itself. The first two are ideas I often ponder as a teacher and student, and they probably lend themselves to a philosophical discussion, which I find very interesting but which will not be the focus of this post.

The Yeh research group, which I gratefully have been allowed to join the last three summers, researches (among other things) different applications of graphene encompassing the growth of graphene, high efficiency graphene solar cells, graphene component fabrication and strain engineering of graphene where, coincidentally for the latter, the quantum Hall effect takes center stage. The quantum Hall effect now had my attention and I felt it necessary to learn something, anything, about this recently recurring topic. The quantum Hall effect is something I had put very little thought into and if you are like I was, you’ve heard about it, but surely couldn’t explain even the basics to someone. I now know something on the subject and, hopefully, after reading this post you too will know something about the very basics of both the classical and the quantum Hall effect, and maybe experience a spark of interest regarding graphene’s fascinating ability to display the quantum Hall effect in a magnetic field-free environment.

Let’s start at the beginning with the Hall effect. Edwin Herbert Hall discovered the appropriately named effect in 1879. The Hall element in the diagram is a flat piece of conducting metal with a longitudinal current running through it. When a magnetic field is introduced normal to the Hall element, the charge carriers moving through the element experience a Lorentz force. If we think of the current as being conventional (the direction of flow of positive charge), then the electrons (negative charge carriers) are traveling in the opposite direction of the green arrow shown in the diagram. Referring to the diagram and using the right-hand rule, you can conclude that electrons build up along the long bottom edge of the Hall element, running parallel to the longitudinal current, leaving an opposing positively charged region along the long top edge. This separation of charge produces a transverse potential difference, labeled on the diagram as the Hall voltage (VH). Once the electric force from the charge build-up (acting toward the positively charged edge, perpendicular to both current and magnetic field) balances the Lorentz force, the result is a negative charge carrier with a straight-line trajectory opposite the green arrow. Essentially, the Hall conductance is the longitudinal current divided by the Hall voltage.
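To put rough numbers on this, here is a minimal sketch. The current, field, thickness, and carrier density below are illustrative values I have assumed for a copper Hall element; they are not numbers from Hall’s experiment or from the post.

```python
# Classical Hall effect: V_H = I*B/(n*q*t); Hall conductance G_H = I/V_H.
# Assumed illustrative numbers: a 0.1 mm thick copper element
# carrying 1 A in a 1 T field.
I = 1.0        # longitudinal current (A)
B = 1.0        # magnetic field normal to the element (T)
n = 8.5e28     # charge-carrier density of copper (1/m^3)
q = 1.602e-19  # elementary charge (C)
t = 1e-4       # element thickness (m)

V_H = I * B / (n * q * t)  # Hall voltage (V)
G_H = I / V_H              # Hall conductance (S)
print(f"V_H = {V_H:.2e} V, G_H = {G_H:.2e} S")
```

The Hall voltage for a metal comes out under a microvolt, which is why Hall measurements favor thin samples and materials with much lower carrier density.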

Now, let’s take a look at the quantum Hall effect. On February 5th, 1980, Klaus von Klitzing was investigating the Hall effect, in particular the Hall conductance of a two-dimensional electron gas (2DEG), at very low temperatures around 4 kelvin (about -452 degrees Fahrenheit). von Klitzing found that when a magnetic field is applied normal to the 2DEG, and the Hall conductance is graphed as a function of magnetic field strength, a staircase-looking graph emerges. The discovery that earned von Klitzing the Nobel Prize in 1985 was as unexpected as it is intriguing. For each step in the staircase, the value of the conductance is an integer multiple of e^2/h, where e is the elementary charge and h is Planck’s constant. Since conductance is the reciprocal of resistance, we can view this data as resistances h/(i e^2). When i (the integer that labels each plateau) equals one, h/(i e^2) is approximately 26,000 ohms and serves as a superior standard of electrical resistance, used worldwide to maintain and compare the unit of resistance.
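The plateau arithmetic is easy to reproduce yourself. A quick sketch using the standard CODATA values of h and e (the loop over the first few plateau integers i is mine, for illustration):

```python
# Quantum Hall plateaus: resistance h/(i*e^2) for plateau integer i.
h = 6.62607015e-34   # Planck's constant (J*s)
e = 1.602176634e-19  # elementary charge (C)

R_K = h / e**2  # von Klitzing constant, ~25,813 ohms

for i in range(1, 5):
    print(f"i = {i}: R = {R_K / i:,.1f} ohms")
```

The i = 1 plateau is the "approximately 26,000 ohms" quoted above; higher plateaus sit at integer fractions of it.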

Before discussing where graphene and the quantum Hall effect cross paths, let’s examine some extraordinary characteristics of graphene. Graphene is truly an amazing material for many reasons. We’ll look at size and scale things up a bit for fun. Graphene is one carbon atom thick, that’s 0.345 nanometers (0.000000000345 meters). Envision a one square centimeter sized graphene sheet, which is now regularly grown. Imagine, somehow, we could thicken the monolayer graphene sheet equal to that of a piece of printer paper (0.1 mm) while appropriately scaling up the area coverage. The graphene sheet that originally covered only one square centimeter would now cover an area of about 2900 meters by 2900 meters or roughly 1.8 miles by 1.8 miles. A paper thin sheet covering about 4 square miles. The Royal Swedish Academy of Sciences at nobelprize.org has an interesting way of scaling the tiny up to every day experience. They want you to picture a one square meter hammock made of graphene suspending a 4 kg cat, which represents the maximum weight such a sheet of graphene could support. The hammock would be nearly invisible, would weigh as much as one of the cat’s whiskers, and incredibly, would possess the strength to keep the cat suspended. If it were possible to make the exact hammock out of steel, its maximum load would be less than 1/100 the weight of the cat. Graphene is more than 100 times stronger than the strongest steel!
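The scale-up arithmetic above checks out; here is a short sketch using the thicknesses quoted in the text (uniform magnification of the sheet is the assumption):

```python
# Magnify a 1 cm^2, one-atom-thick graphene sheet uniformly until its
# thickness matches printer paper (0.1 mm), as described in the text.
thickness_graphene = 0.345e-9  # m (one carbon atom)
thickness_paper = 0.1e-3       # m
side = 1e-2                    # m (1 cm square sheet)

factor = thickness_paper / thickness_graphene  # magnification factor
side_scaled = side * factor                    # m
miles = side_scaled / 1609.34                  # 1609.34 m per mile

print(f"scale factor: {factor:,.0f}")
print(f"scaled side: {side_scaled:,.0f} m (~{miles:.1f} miles)")
```

The sheet indeed comes out around 2,900 meters, or about 1.8 miles, on a side.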

Graphene sheets possess many fascinating characteristics certainly not limited to mere size and strength. Experiments are being conducted at Caltech to study the electrical properties of graphene when draped over a field of gold nanoparticles; a discipline appropriately termed “strain engineering.” The peaks and valleys that form create strain in the graphene sheet, changing its electrical properties. The greater the curvature of the graphene over the peaks, the greater the strain. The electrons in graphene in regions experiencing strain behave as if they are in a magnetic field despite the fact that they are not. The electrons in regions experiencing the greatest strain behave as they would in extremely strong magnetic fields exceeding 300 tesla. For some perspective, the largest magnetic field ever created has been near 100 tesla and it only lasted for a few milliseconds. Additionally, graphene sheets under strain experience conductance plateaus very similar to those observed in the quantum Hall effect. This allows for great control of electrical properties by simply deforming the graphene sheet, effectively changing the amount of strain. The pseudo-magnetic field generated at room temperature by mere deformation of graphene is an extremely promising and exotic property that is bound to make graphene a key component in a plethora of future technologies.

Graphene and its incredibly fascinating properties make it very difficult to think of an area of technology where it won’t have a huge impact once incorporated. Caltech is at the forefront in research and development for graphene component fabrication, as well as the many aspects involved in the growth of high quality graphene. This summer I was involved in the latter and contributed a bit in setting up an experiment that will attempt to grow graphene in a unique way. My contribution included the set-up of the stepper motor (pictured to the right) and its controls, so that it would very slowly travel down the tube in an attempt to grow a long strip of graphene. If Caltech scientist David Boyd and graduate student Chen-Chih Hsu are able to grow the long strips of graphene, this will mark yet another landmark achievement for them and Caltech in graphene research, bringing all of us closer to technologies such as flexible electronics, synthetic nerve cells, 500-mile range Tesla cars and batteries that allow us to stream Netflix on smartphones for weeks on end.

Top 10 questions for your potential PhD adviser/group

Everyone in grad school has taken on the task of picking the perfect research group at some point.  Then some among us had the dubious distinction of choosing the perfect research group twice.  Luckily for me, a year of grad research taught me a lot and I found myself asking group members and PIs (principal investigators) very different questions.  And luckily for you, I wrote these questions down to share with future generations.  My background as an experimental applied physicist showed through initially, so I got Shaun Maguire and Spiros Michalakis to help make it applicable for theorists too, and most of them should be useful outside physics as well.

Questions to break that silence when your potential advisor asks “So, do you have any questions for me?”

1. Are you taking new students?
– 2a. if yes: How many are you looking to take?
– 2b. if no: Ask them about the department or other professors.  They’ve been there long enough to have opinions.  Alternatively, ask what kinds of questions they would suggest you ask other PIs
3. What is the procedure for joining the group?
4. (experimental) Would you have me TA?  (This is the nicest way I thought of to ask whether a PI can fund you with a research assistantship (RA), though sometimes they just like you to TA their class.)
4. (theory) Funding routes will often be covered by question 3 since TAs are the dominant funding method for theory students, unlike for experimentalists. If relevant, you can follow up with: How does funding for your students normally work? Do you have funding for me?
5. Do new students work for/report to other grad students, post docs, or you directly?
6. How do you like students to arrange time to meet with you?
7. How often do you have group meetings?
8. How much would you like students to prepare for them?
9. Would you suggest I take any specific classes?
10. What makes someone a good fit for this group?

And then for the high bandwidth information transfer.  Grill the group members themselves, and try to ask more than one group member if you can.

1. How much do you prepare for meetings with PI?
2. How long until people lead their own projects? (Equivalently: who’s working on which projects?)
3. How much do people on different projects communicate? (only group meeting or every day)
4. Is the PI hands on (how often PI wants to meet with you)?
5. Is the PI accessible (how easily can you meet with the PI if you want to)?
6. What is the average time to graduation? (if it’s important to you personally)
7. Does the group/subgroup have any bonding activities?
8. Do you think I should join this group?
9. What are people’s backgrounds?
10. What makes someone a good fit for this group?

Hope that helps.  If you have any other suggested questions, be sure to leave them in the comments.

My 10 biggest thrills

Wow!


Evidence for gravitational waves produced during cosmic inflation. BICEP2 results for the ratio r of gravitational wave perturbations to density perturbations, and the density perturbation spectral tilt n.

Like many physicists, I have been reflecting a lot the past few days about the BICEP2 results, trying to put them in context. Other bloggers have been telling you all about it (here, here, and here, for example); what can I possibly add?

The hoopla this week reminds me of other times I have been really excited about scientific advances. And I recall some wise advice I received from Sean Carroll: blog readers like lists.  So here are (in chronological order)…

My 10 biggest thrills (in science)

This is a very personal list — your results may vary. I’m not saying these are necessarily the most important discoveries of my lifetime (there are conspicuous omissions), just that, as best I can recall, these are the developments that really started my heart pounding at the time.

1) The J/Psi from below (1974)

I was a senior at Princeton during the November Revolution. I was too young to appreciate fully what it was all about — having just learned about the Weinberg-Salam model, I thought at first that the Z boson had been discovered. But by stalking the third floor of Jadwin I picked up the buzz. No, it was charm! The discovery of a very narrow charmonium resonance meant we were on the right track in two ways — charm itself confirmed ideas about the electroweak gauge theory, and the narrowness of the resonance fit in with the then recent idea of asymptotic freedom. Theory triumphant!

2) A magnetic monopole in Palo Alto (1982)

By 1982 I had been thinking about the magnetic monopoles in grand unified theories for a few years. We thought we understood why no monopoles seem to be around. Sure, monopoles would be copiously produced in the very early universe, but then cosmic inflation would blow them away, diluting their density to a hopelessly undetectable value. Then somebody saw one …. a magnetic monopole obediently passed through Blas Cabrera’s loop of superconducting wire, producing a sudden jump in the persistent current. On Valentine’s Day!

According to then-current theory, the monopole mass was expected to be about 10^16 GeV (10 million billion times heavier than a proton). Had Nature really been so kind as to bless us with this spectacular message from a staggeringly high energy scale? It seemed too good to be true.

It was. Blas never detected another monopole. As far as I know he never understood what glitch had caused the aberrant signal in his device.

3) “They’re green!” High-temperature superconductivity (1987)

High-temperature superconductors were discovered in 1986 by Bednorz and Mueller, but I did not pay much attention until Paul Chu found one in early 1987 with a critical temperature of 77 K. Then for a while the critical temperature seemed to be creeping higher and higher on an almost daily basis, eventually topping 130 K …. One wondered whether it might go up, up, up forever.

It didn’t. Today 138K still seems to be the record.

My most vivid memory is that David Politzer stormed into my office one day with a big grin. “They’re green!” he squealed. David did not mean that high-temperature superconductors would be good for the environment. He was passing on information he had just learned from Phil Anderson, who happened to be visiting Caltech: Chu’s samples were copper oxides.

4) “Now I have mine” Supernova 1987A (1987)

What was most remarkable and satisfying about the 1987 supernova in the nearby Large Magellanic Cloud was that the neutrinos released in a ten second burst during the stellar core collapse were detected here on earth, by gigantic water Cerenkov detectors that had been built to test grand unified theories by looking for proton decay! Not a truly fundamental discovery, but very cool nonetheless.

Soon after it happened some of us were loafing in the Lauritsen seminar room, relishing the good luck that had made the detection possible. Then Feynman piped up: “Tycho Brahe had his supernova, Kepler had his, … and now I have mine!” We were all silent for a few seconds, and then everyone burst out laughing, with Feynman laughing the hardest. It was funny because Feynman was making fun of his own gargantuan ego. Feynman knew a good gag, and I heard him use this line at a few other opportune times thereafter.

5) Science by press conference: Cold fusion (1989)

The New York Times was my source for the news that two chemists claimed to have produced nuclear fusion in heavy water using an electrochemical cell on a tabletop. I was interested enough to consult that day with our local nuclear experts Charlie Barnes, Bob McKeown, and Steve Koonin, none of whom believed it. Still, could it be true?

I decided to spend a quiet day in my office, trying to imagine ways to induce nuclear fusion by stuffing deuterium into a palladium electrode. I came up empty.

My interest dimmed when I heard that they had done a “control” experiment using ordinary water, had observed the same excess heat as with heavy water, and remained just as convinced as before that they were observing fusion. Later, Caltech chemist Nate Lewis gave a clear and convincing talk to the campus community debunking the original experiment.

6) “The face of God” COBE (1992)

I’m often too skeptical. When I first heard in the early 1980s about proposals to detect the anisotropy in the cosmic microwave background, I doubted it would be possible. The signal is so small! It will be blurred by reionization of the universe! What about the galaxy! What about the dust! Blah, blah, blah, …

The COBE DMR instrument showed it could be done, at least at large angular scales, and set the stage for the spectacular advances in observational cosmology we’ve witnessed over the past 20 years. George Smoot infamously declared that he had glimpsed “the face of God.” Overly dramatic, perhaps, but he was excited! And so was I.

7) “83 SNU” Gallex solar neutrinos (1992)

Until 1992 the only neutrinos from the sun ever detected were the relatively high energy neutrinos produced by nuclear reactions involving boron and beryllium — these account for just a tiny fraction of all neutrinos emitted. Fewer than expected were seen, a puzzle that could be resolved if neutrinos have mass and oscillate to another flavor before reaching earth. But it made me uncomfortable that the evidence for solar neutrino oscillations was based on the boron-beryllium side show, and might conceivably be explained just by tweaking the astrophysics of the sun’s core.

The Gallex experiment was the first to detect the lower energy pp neutrinos, the predominant type coming from the sun. The results seemed to confirm that we really did understand the sun and that solar neutrinos really oscillate. (More compelling evidence, from SNO, came later.) I stayed up late the night I heard about the Gallex result, and gave a talk the next day to our particle theory group explaining its significance. The talk title was “83 SNU” — that was the initially reported neutrino flux in Solar Neutrino Units, later revised downward somewhat.

8) Awestruck: Shor’s algorithm (1994)

I’ve written before about how Peter Shor’s discovery of an efficient quantum algorithm for factoring numbers changed my life. This came at a pivotal time for me, as the SSC had been cancelled six months earlier, and I was growing pessimistic about the future of particle physics. I realized that observational cosmology would have a bright future, but I sensed that theoretical cosmology would be dominated by data analysis, where I would have little comparative advantage. So I became a quantum informationist, and have not regretted it.

9) The Higgs boson at last (2012)

The discovery of the Higgs boson was exciting because we had been waiting soooo long for it to happen. Unable to stream the live feed of the announcement, I followed developments via Twitter. That was the first time I appreciated the potential value of Twitter for scientific communication, and soon after I started to tweet.

10) A lucky universe: BICEP2 (2014)

Many past experiences prepared me to appreciate the BICEP2 announcement this past Monday.

I first came to admire Alan Guth’s distinctive clarity of thought in the fall of 1973 when he was the instructor for my classical mechanics course at Princeton (one of the best classes I ever took). I got to know him better in the summer of 1979 when I was a graduate student, and Alan invited me to visit Cornell because we were both interested in magnetic monopole production in the very early universe. Months later Alan realized that cosmic inflation could explain the isotropy and flatness of the universe, as well as the dearth of magnetic monopoles. I recall his first seminar at Harvard explaining his discovery. Steve Weinberg had to leave before the seminar was over, and Alan called as Steve walked out, “I was hoping to hear your reaction.” Steve replied, “My reaction is applause.” We all felt that way.

I was at a wonderful workshop in Cambridge during the summer of 1982, where Alan and others made great progress in understanding the origin of primordial density perturbations produced from quantum fluctuations during inflation (Bardeen, Steinhardt, Turner, Starobinsky, and Hawking were also working on that problem, and they all reached a consensus by the end of the three-week workshop … meanwhile I was thinking about the cosmological implications of axions).

I also met Andrei Linde at that same workshop, my first encounter with his mischievous grin and deadpan wit. (There was a delegation of Russians, who split their time between Xeroxing papers and watching the World Cup on TV.) When Andrei visited Caltech in 1987, I took him to Disneyland, and he had even more fun than my two-year-old daughter.

During my first year at Caltech in 1984, Mark Wise and Larry Abbott told me about their calculations of the gravitational waves produced during inflation, which they used to derive a bound on the characteristic energy scale driving inflation, a few times 10^16 GeV. We mused about whether the signal might turn out to be detectable someday. Would Nature really be so kind as to place that mass scale below the Abbott-Wise bound, yet high enough (above 10^16 GeV) to be detectable? It seemed unlikely.

Last week I caught up with the rumors about the BICEP2 results by scanning my Twitter feed on my iPad, while still lying in bed during the early morning. I immediately leapt up and stumbled around the house in the dark, mumbling to myself over and over again, “Holy Shit! … Holy Shit! …” The dog cast a curious glance my way, then went back to sleep.

Like millions of others, I was frustrated Monday morning, trying to follow the live feed of the discovery announcement broadcast from the hopelessly overtaxed Center for Astrophysics website. I was able to join in the moment, though, by following on Twitter, and I indulged in a few breathless tweets of my own.

Many of Andrew Lange's friends have been thinking a lot about him these past few days. Andrew had been the leader of the BICEP team (current senior team members John Kovac and Chao-Lin Kuo were Caltech postdocs under him in the mid-2000s). One day in September 2007 he sent me an unexpected email, with the subject heading “the bard of cosmology.” Having discovered on the Internet a poem I had written to introduce a seminar by Craig Hogan, Andrew wrote:

“John,

just came across this – I must have been out of town for the event.

l love it.

it will be posted prominently in our lab today (with “LISA” replaced by “BICEP”, and remain our rallying cry till we detect the B-mode.

have you set it to music yet?

a”

I lifted a couplet from that poem for one of my tweets (while rumors were swirling prior to the official announcement):

We’ll finally know how the cosmos behaves
If we can detect gravitational waves.

Assuming the BICEP2 measurement r ~ 0.2 is really a detection of primordial gravitational waves, we have learned that the characteristic mass scale during inflation is an astonishingly high 2 × 10^16 GeV. Were it a factor of 2 smaller, the signal would have been far too small to detect in current experiments. This time, Nature really is on our side, eagerly revealing secrets about physics at a scale far, far beyond what we will ever explore using particle accelerators. We feel lucky.
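For readers who want to see where that number comes from: in standard slow-roll inflation the tensor-to-scalar ratio r fixes the energy scale of the inflationary potential V through a simple quarter-power relation (the numerical coefficient below is the commonly quoted one; a sketch of the arithmetic, not a precision statement):

$$
V^{1/4} \;\approx\; 1.06 \times 10^{16} \ {\rm GeV} \times \left( \frac{r}{0.01} \right)^{1/4}.
$$

Plugging in r ~ 0.2 gives $(0.2/0.01)^{1/4} = 20^{1/4} \approx 2.1$, hence $V^{1/4} \approx 2 \times 10^{16}$ GeV. And because r scales as the fourth power of the energy scale, halving that scale would shrink the signal by a factor of $2^4 = 16$, to r ~ 0.01, far below what current experiments could detect.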

We physicists can never quite believe that the equations we scrawl on a notepad actually have something to do with the real universe. You would think we’d be used to that by now, but we’re not — when it happens we’re amazed. In my case, never more so than this time.

The BICEP2 paper, a historic document (if the result holds up), ends just the way it should:

“We dedicate this paper to the memory of Andrew Lange, whom we sorely miss.”