Inflation on the back of an envelope

Last Monday was an exciting day!

After following the BICEP2 announcement via Twitter, I had to board a transcontinental flight, so I had 5 uninterrupted hours to think about what it all meant. Without Internet access or references, and having not thought seriously about inflation for decades, I wanted to reconstruct a few scraps of knowledge needed to interpret the implications of r ~ 0.2.

I did what any physicist would have done … I derived the basic equations without worrying about niceties such as factors of 3 or 2 \pi. None of what I derived was at all original —  the theory has been known for 30 years — but I’ve decided to turn my in-flight notes into a blog post. Experts may cringe at the crude approximations and overlooked conceptual nuances, not to mention the missing references. But some mathematically literate readers who are curious about the implications of the BICEP2 findings may find these notes helpful. I should emphasize that I am not an expert on this stuff (anymore), and if there are serious errors I hope better informed readers will point them out.

By tradition, careless estimates like these are called “back-of-the-envelope” calculations. There have been times when I have made notes on the back of an envelope, or a napkin or place mat. But in this case I had the presence of mind to bring a notepad with me.

Notes from a plane ride

According to inflation theory, a nearly homogeneous scalar field called the inflaton (denoted by \phi)  filled the very early universe. The value of \phi varied with time, as determined by a potential function V(\phi). The inflaton rolled slowly for a while, while the dark energy stored in V(\phi) caused the universe to expand exponentially. This rapid cosmic inflation lasted long enough that previously existing inhomogeneities in our currently visible universe were nearly smoothed out. What inhomogeneities remained arose from quantum fluctuations in the inflaton and the spacetime geometry occurring during the inflationary period.

Gradually, the rolling inflaton picked up speed. When its kinetic energy became comparable to its potential energy, inflation ended, and the universe “reheated” — the energy previously stored in the potential V(\phi) was converted to hot radiation, instigating a “hot big bang”. As the universe continued to expand, the radiation cooled. Eventually, the energy density in the universe came to be dominated by cold matter, and the relic fluctuations of the inflaton became perturbations in the matter density. Regions that were more dense than average grew even more dense due to their gravitational pull, eventually collapsing into the galaxies and clusters of galaxies that fill the universe today. Relic fluctuations in the geometry became gravitational waves, which BICEP2 seems to have detected.

Both the density perturbations and the gravitational waves have been detected via their influence on the inhomogeneities in the cosmic microwave background. The 2.726 K photons left over from the big bang have a nearly uniform temperature as we scan across the sky, but there are small deviations from perfect uniformity that have been precisely measured. We won’t worry about the details of how the size of the perturbations is inferred from the data. Our goal is to achieve a crude understanding of how the density perturbations and gravitational waves are related, which is what the BICEP2 results are telling us about. We also won’t worry about the details of the shape of the potential function V(\phi), though it’s very interesting that we might learn a lot about that from the data.

Exponential expansion

Einstein’s field equations tell us how the rate at which the universe expands during inflation is related to energy density stored in the scalar field potential. If a(t) is the “scale factor” which describes how lengths grow with time, then roughly

\left(\frac{\dot a}{a}\right)^2 \sim \frac{V}{m_P^2}.

Here \dot a means the time derivative of the scale factor, and m_P = 1/\sqrt{8 \pi G} \approx 2.4 \times 10^{18} GeV is the Planck scale associated with quantum gravity. (G is Newton’s gravitational constant.) I’ve left out a factor of 3 on purpose, and I used the symbol ~ rather than = to emphasize that we are just trying to get a feel for the order of magnitude of things. I’m using units in which Planck’s constant \hbar and the speed of light c are set to one, so mass, energy, and inverse length (or inverse time) all have the same dimensions. 1 GeV means one billion electron volts, about the mass of a proton.

(To persuade yourself that this is at least roughly the right equation, you should note that a similar equation applies to an expanding spherical ball of radius a(t) with uniform mass density V. But in the case of the ball, the mass density would decrease as the ball expands. The universe is different — it can expand without diluting its mass density, so the rate of expansion \dot a / a does not slow down as the expansion proceeds.)

During inflation, the scalar field \phi and therefore the potential energy V(\phi) were changing slowly; it’s a good approximation to assume V is constant. Then the solution is

a(t) \sim a(0) e^{Ht},

where H, the Hubble constant during inflation, is

H \sim \frac{\sqrt{V}}{m_P}.

To explain the smoothness of the observed universe, we require at least 50 “e-foldings” of inflation before the universe reheated — that is, inflation should have lasted for a time at least 50 H^{-1}.
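To get a feel for the numbers, here is the estimate done in Python rather than on the notepad. It is only a sketch: I am borrowing the energy scale V^{1/4} \sim 2 \times 10^{16} GeV that we will arrive at later in the post, and the conversion 1 GeV^{-1} \approx 6.6 \times 10^{-25} s is a standard constant, not something derived above.

```python
# Rough numbers for the inflationary Hubble rate and the duration of 50 e-foldings,
# using the crude relation H ~ sqrt(V)/m_P. The scale V^(1/4) ~ 2e16 GeV is taken
# from the BICEP2-inspired estimate later in the post.

m_P       = 2.4e18      # reduced Planck mass, GeV
V_quarter = 2e16        # V^(1/4) in GeV (assumed; see below)
GeV_inv_s = 6.58e-25    # hbar/(1 GeV) in seconds (standard conversion)

V = V_quarter**4        # GeV^4
H = V**0.5 / m_P        # GeV

print(f"H ~ {H:.1e} GeV")                         # ~ 1.7e14 GeV
print(f"one e-folding ~ {GeV_inv_s / H:.1e} s")   # ~ 4e-39 s
print(f"50 e-foldings ~ {50 * GeV_inv_s / H:.1e} s")  # ~ 2e-37 s
```

So each e-folding lasted only about 10^{-38} seconds; all the inflation we need fits into an absurdly tiny fraction of a second.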

Slow rolling

During inflation the inflaton \phi rolls slowly, so slowly that friction dominates inertia — this friction results from the cosmic expansion. The speed of rolling \dot \phi is determined by

H \dot \phi \sim -V'(\phi).

Here V'(\phi) is the slope of the potential, so the right-hand side is the force exerted by the potential, which matches the frictional force on the left-hand side. The coefficient of \dot \phi has to be H on dimensional grounds. (Here I have blown another factor of 3, but let’s not worry about that.)
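To see the slow-roll equation do some work, here is a toy numerical integration. Combining H \dot\phi \sim -V' with H^2 \sim V/m_P^2, and measuring time in e-foldings N (so dN = H dt), gives d\phi/dN \sim -m_P^2 V'/V. The quadratic potential V = m^2 \phi^2/2 is my own illustrative choice, not something this post or the data singles out.

```python
# Toy slow-roll integration: d(phi)/dN ~ -m_P^2 * V'/V, for V = m^2 phi^2 / 2.
# Note that the inflaton mass m cancels out of V'/V. Units: m_P = 1.

m_P = 1.0
phi = 16.0 * m_P      # hypothetical super-Planckian starting value
dN  = 1e-3            # step size in e-foldings
N   = 0.0

def dV_over_V(phi):
    return 2.0 / phi  # V'/V for the quadratic potential

# Roll until epsilon = (m_P^2/2) (V'/V)^2 reaches ~1, a rough criterion
# for the end of slow roll.
while 0.5 * (m_P * dV_over_V(phi))**2 < 1.0:
    phi -= m_P**2 * dV_over_V(phi) * dN
    N   += dN

print(f"e-foldings of inflation: N ~ {N:.1f}")            # ~ 63.5
print(f"field at end of slow roll: phi ~ {phi:.2f} m_P")  # ~ 1.41 m_P
```

Notice that getting more than 50 e-foldings out of this toy model required starting the field well above the Planck scale, a point we will run into again below.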

Density perturbations

The trickiest thing we need to understand is how inflation produced the density perturbations which later seeded the formation of galaxies. There are several steps to the argument.

Quantum fluctuations of the inflaton

As the universe inflates, the inflaton field is subject to quantum fluctuations, where the size of the fluctuation depends on its wavelength. Due to inflation, the wavelength increases rapidly, like e^{Ht}, and once the wavelength gets large compared to H^{-1}, there isn’t enough time for the fluctuation to wiggle — it gets “frozen in.” Much later, long after the reheating of the universe, the oscillation period of the wave becomes comparable to the age of the universe, and then it can wiggle again. (We say that the fluctuations “cross the horizon” at that stage.) Observations of the anisotropy of the microwave background have determined how big the fluctuations are at the time of horizon crossing. What does inflation theory say about that?

Well, first of all, how big are the fluctuations when they leave the horizon during inflation? Then the wavelength is H^{-1} and the universe is expanding at the rate H, so H is the only thing the magnitude of the fluctuations could depend on. Since the field \phi has the same dimensions as H, we conclude that fluctuations have magnitude

\delta \phi \sim H.

From inflaton fluctuations to density perturbations

Reheating occurs abruptly when the inflaton field reaches a particular value. Because of the quantum fluctuations, some horizon volumes have larger than average values of \phi and some have smaller than average values; hence different regions reheat at slightly different times. The energy density in regions that reheat earlier starts to be reduced by expansion (“red shifted”) earlier, so these regions have a smaller than average energy density. Likewise, regions that reheat later start to red shift later, and wind up having larger than average density.

When we compare different regions of comparable size, we can find the typical (root-mean-square) fluctuations \delta t in the reheating time, knowing the fluctuations in \phi and the rolling speed \dot \phi:

\delta t \sim \frac{\delta \phi}{\dot \phi} \sim \frac{H}{\dot\phi}.

Small fractional fluctuations in the scale factor a right after reheating produce comparable small fractional fluctuations in the energy density \rho. The expansion rate right after reheating roughly matches the expansion rate H right before reheating, and so we find that the characteristic size of the density perturbations is

\delta_S\equiv\left(\frac{\delta \rho}{\rho}\right)_{hor} \sim \frac{\delta a}{a} \sim \frac{\dot a}{a} \delta t\sim \frac{H^2}{\dot \phi}.

The subscript hor serves to remind us that this is the size of density perturbations as they cross the horizon, before they get a chance to grow due to gravitational instabilities. We have found our first important conclusion: The density perturbations have a size determined by the Hubble constant H and the rolling speed \dot \phi of the inflaton, up to a factor of order one which we have not tried to keep track of. Insofar as the Hubble constant and rolling speed change slowly during inflation, these density perturbations have a strength which is nearly independent of the length scale of the perturbation. From here on we will denote this dimensionless scale of the fluctuations by \delta_S, where the subscript S stands for “scalar”.

Perturbations in terms of the potential

Putting together \dot \phi \sim -V' / H and H^2 \sim V/{m_P}^2 with our expression for \delta_S, we find

\delta_S^2 \sim \frac{H^4}{\dot\phi^2}\sim \frac{H^6}{V'^2} \sim \frac{1}{{m_P}^6}\frac{V^3}{V'^2}.

The observed density perturbations are telling us something interesting about the scalar field potential during inflation.
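If you would rather not push the symbols around by hand, a few lines of sympy confirm the algebra; Vp stands for V', and the substitutions are exactly the crude relations above.

```python
import sympy as sp

# Check: delta_S ~ H^2/phi_dot, with phi_dot ~ -V'/H and H ~ sqrt(V)/m_P,
# should reproduce delta_S^2 ~ V^3 / (m_P^6 V'^2).
H, V, Vp, m_P = sp.symbols('H V Vp m_P', positive=True)

phi_dot  = -Vp / H                               # slow-roll speed
delta_S2 = (H**2 / phi_dot)**2                   # (H^2 / phi_dot)^2
delta_S2 = delta_S2.subs(H, sp.sqrt(V) / m_P)    # eliminate H in favor of V

print(sp.simplify(delta_S2 - V**3 / (m_P**6 * Vp**2)))   # prints 0
```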

Gravitational waves and the meaning of r

The gravitational field as well as the inflaton field is subject to quantum fluctuations during inflation. We call these tensor fluctuations to distinguish them from the scalar fluctuations in the energy density. The tensor fluctuations have an effect on the microwave anisotropy which can be distinguished in principle from the scalar fluctuations. We’ll just take that for granted here, without worrying about the details of how it’s done.

While a scalar field fluctuation with wavelength \lambda and strength \delta \phi carries energy density \sim \delta\phi^2 / \lambda^2, a fluctuation of the dimensionless gravitational field h with wavelength \lambda and strength \delta h carries energy density \sim m_P^2 \delta h^2 / \lambda^2. Applying the same dimensional analysis we used to estimate \delta \phi at horizon crossing to the rescaled field m_P h, we estimate the strength \delta_T of the tensor fluctuations (the fluctuations of h) as

\delta_T^2 \sim \frac{H^2}{m_P^2}\sim \frac{V}{m_P^4}.

From observations of the CMB anisotropy we know that \delta_S\sim 10^{-5}, and now BICEP2 claims that the ratio

r = \frac{\delta_T^2}{\delta_S^2}

is about r\sim 0.2 at an angular scale on the sky of about one degree. The conclusion (being a little more careful about the O(1) factors this time) is

V^{1/4} \sim 2 \times 10^{16}~GeV \left(\frac{r}{0.2}\right)^{1/4}.

This is our second important conclusion: The energy density during inflation defines a mass scale, which turns out to be 2 \times 10^{16}~GeV for the observed value of r. This is a very interesting finding because this mass scale is not so far below the Planck scale, where quantum gravity kicks in, and is in fact pretty close to theoretical estimates of the unification scale in supersymmetric grand unified theories. If this mass scale were a factor of 2 smaller, then r would be smaller by a factor of 16, and hence much harder to detect.
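Here is that arithmetic as a short sanity check. The first estimate uses only the crude relations above; the second restores the O(1) factors using the standard slow-roll normalization and the measured scalar amplitude A_s \approx 2.2 \times 10^{-9}, numbers I am supplying from memory rather than deriving in these notes.

```python
import math

m_P     = 2.4e18      # reduced Planck mass, GeV
delta_S = 1e-5        # scalar perturbation amplitude at horizon crossing
r       = 0.2         # BICEP2's claimed tensor-to-scalar ratio

# Crude: delta_T^2 ~ V/m_P^4 and r = delta_T^2/delta_S^2, so V ~ r delta_S^2 m_P^4.
V_crude = r * delta_S**2 * m_P**4
print(f"crude:   V^(1/4) ~ {V_crude**0.25:.1e} GeV")      # ~ 5e15 GeV

# With the O(1) factors restored: V = (3/2) pi^2 A_s r m_P^4 (standard slow-roll result).
A_s = 2.2e-9
V_careful = 1.5 * math.pi**2 * A_s * r * m_P**4
print(f"careful: V^(1/4) ~ {V_careful**0.25:.1e} GeV")    # ~ 2e16 GeV
```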

Rolling, rolling, rolling, …

Using \delta_S^2 \sim H^4/\dot\phi^2, we can express r as

r = \frac{\delta_T^2}{\delta_S^2}\sim \frac{\dot\phi^2}{m_P^2 H^2}.

It is convenient to measure time in units of the number N = H t of e-foldings of inflation, in terms of which we find

\frac{1}{m_P^2} \left(\frac{d\phi}{dN}\right)^2\sim r;

Now, we know that for inflation to explain the smoothness of the universe we need N larger than 50, and if we assume that the inflaton rolls at a roughly constant rate during N e-foldings, we conclude that, while rolling, the change in the inflaton field is

\frac{\Delta \phi}{m_P} \sim N \sqrt{r}.

This is our third important conclusion — the inflaton field had to roll a long, long, way during inflation — it changed by much more than the Planck scale! Putting in the O(1) factors we have left out reduces the required amount of rolling by about a factor of 3, but we still conclude that the rolling was super-Planckian if r\sim 0.2. That’s curious, because when the scalar field strength is super-Planckian, we expect the kind of effective field theory we have been implicitly using to be a poor approximation because quantum gravity corrections are large. One possible way out is that the inflaton might have rolled round and round in a circle instead of in a straight line, so the field strength stayed sub-Planckian even though the distance traveled was super-Planckian.
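Numerically the rolling distance looks like this. The first line is the crude estimate above; the refined coefficient \sqrt{r/8} is what "putting in the O(1) factors" amounts to (it is the usual Lyth-bound statement, which I have not derived here).

```python
import math

N = 50     # e-foldings of inflation
r = 0.2    # tensor-to-scalar ratio

print(f"crude:   Delta_phi ~ N*sqrt(r)   = {N * math.sqrt(r):.0f} m_P")      # ~ 22 m_P
print(f"careful: Delta_phi ~ N*sqrt(r/8) = {N * math.sqrt(r / 8):.0f} m_P")  # ~ 8 m_P
```

Either way, the excursion is comfortably super-Planckian.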

Spectral tilt

As the inflaton rolls, the potential energy, and hence also the Hubble constant H, change during inflation. That means that both the scalar and tensor fluctuations have a strength which is not quite independent of length scale. We can parametrize the scale dependence in terms of how the fluctuations change per e-folding of inflation, which is equivalent to the change per logarithmic length scale and is called the “spectral tilt.”

To keep things simple, let’s suppose that the rate of rolling is constant during inflation, at least over the length scales for which we have data. Using \delta_S^2 \sim H^4/\dot\phi^2, and assuming \dot\phi is constant, we estimate the scalar spectral tilt as

-\frac{1}{\delta_S^2}\frac{d\delta_S^2}{d N} \sim - \frac{4 \dot H}{H^2}.

Using \delta_T^2 \sim H^2/m_P^2, we conclude that the tensor spectral tilt is half as big.

From H^2 \sim V/m_P^2, we find

\dot H \sim \frac{1}{2} \dot \phi \frac{V'}{V} H,

and using \dot \phi \sim -V'/H we find

-\frac{1}{\delta_S^2}\frac{d\delta_S^2}{d N} \sim \frac{V'^2}{H^2V}\sim m_P^2\left(\frac{V'}{V}\right)^2\sim \left(\frac{V}{m_P^4}\right)\left(\frac{m_P^6 V'^2}{V^3}\right)\sim \delta_T^2 \delta_S^{-2}\sim r.

Putting in the numbers more carefully we find a scalar spectral tilt of r/4 and a tensor spectral tilt of r/8.

This is our last important conclusion: A relatively large value of r means a significant spectral tilt. In fact, even before the BICEP2 results, the CMB anisotropy data already supported a scalar spectral tilt of about .04, which suggested something like r \sim .16. The BICEP2 detection of the tensor fluctuations (if correct) has confirmed that suspicion.
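The arithmetic of the last two paragraphs, spelled out with the same crude factors as before:

```python
r = 0.2
print(f"scalar tilt ~ r/4 = {r / 4:.3f}")   # ~ 0.05
print(f"tensor tilt ~ r/8 = {r / 8:.3f}")   # ~ 0.025

# Running the logic backwards from the pre-BICEP2 data:
scalar_tilt_observed = 0.04
print(f"implied r ~ 4 * (scalar tilt) = {4 * scalar_tilt_observed:.2f}")   # ~ 0.16
```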

Summing up

If you have stuck with me this far, and you haven’t seen this stuff before, I hope you’re impressed. Of course, everything I’ve described can be done much more carefully. I’ve tried to convey, though, that the emerging story seems to hold together pretty well. Compared to last week, we have stronger evidence now that inflation occurred, that the mass scale of inflation is high, and that the scalar and tensor fluctuations produced during inflation have been detected. One prediction is that the tensor fluctuations, like the scalar ones, should have a notable spectral tilt, though a lot more data will be needed to pin that down.

I apologize to the experts again, for the sloppiness of these arguments. I hope that I have at least faithfully conveyed some of the spirit of inflation theory in a way that seems somewhat accessible to the uninitiated. And I’m sorry there are no references, but I wasn’t sure which ones to include (and I was too lazy to track them down).

It should also be clear that much can be done to sharpen the confrontation between theory and experiment. A whole lot of fun lies ahead.

Added notes (3/25/2014):

Okay, here’s a good reference, a useful review article by Baumann. (I found out about it on Twitter!)

From Baumann’s lectures I learned a convenient notation. The rolling of the inflaton can be characterized by two “potential slow-roll parameters” defined by

\epsilon = \frac{m_P^2}{2}\left(\frac{V'}{V}\right)^2,\quad \eta = m_P^2\left(\frac{V''}{V}\right).

Both parameters are small during slow rolling, but the relationship between them depends on the shape of the potential. My crude approximation (\epsilon = \eta) would hold for a quadratic potential.

We can express the spectral tilt (as I defined it) in terms of these parameters, finding 2\epsilon for the tensor tilt, and 6 \epsilon - 2\eta for the scalar tilt. To derive these formulas it suffices to know that \delta_S^2 is proportional to V^3/V'^2, and that \delta_T^2 is proportional to H^2; we also use

3H\dot \phi = -V', \quad 3H^2 = V/m_P^2,

keeping factors of 3 that I left out before. (As a homework exercise, check these formulas for the tensor and scalar tilt.)

It is also easy to see that r is proportional to \epsilon; it turns out that r = 16 \epsilon. To get that factor of 16 we need more detailed information about the relative size of the tensor and scalar fluctuations than I explained in the post; I can’t think of a handwaving way to derive it.
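If you would like a computer to do the homework exercise, here is a sympy sketch. The quadratic potential V = m^2\phi^2/2 is my own choice (the text only asserts that \epsilon = \eta holds for it); the tilt formulas and r = 16\epsilon are quoted from the paragraphs above.

```python
import sympy as sp

phi, m, m_P = sp.symbols('phi m m_P', positive=True)
V = m**2 * phi**2 / 2                    # illustrative quadratic potential

eps = (m_P**2 / 2) * (sp.diff(V, phi) / V)**2
eta = m_P**2 * sp.diff(V, phi, 2) / V

print(sp.simplify(eps - eta))            # 0: epsilon = eta for this potential

scalar_tilt = 6*eps - 2*eta              # formulas quoted above
tensor_tilt = 2*eps
r = 16*eps

print(sp.simplify(scalar_tilt - r/4))    # 0: scalar tilt = r/4 when epsilon = eta
print(sp.simplify(tensor_tilt - r/8))    # 0: tensor tilt = r/8, whatever eta is
```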

We see, though, that the conclusion that the tensor tilt is r/8 does not depend on the details of the potential, while the relation between the scalar tilt and r does depend on the details. Nevertheless, it seems fair to claim (as I did) that, already before we knew the BICEP2 results, the measured nonzero scalar spectral tilt indicated a reasonably large value of r.

Once again, we’re lucky. On the one hand, it’s good to have a robust prediction (for the tensor tilt). On the other hand, it’s good to have a handle (the scalar tilt) for distinguishing among different inflationary models.

One last point is worth mentioning. We have set Planck’s constant \hbar equal to one so far, but it is easy to put the powers of \hbar back in using dimensional analysis (we’ll continue to assume the speed of light c is one). Since Newton’s constant G has the dimensions of length/energy, and the potential V has the dimensions of energy/volume, while \hbar has the dimensions of energy times length, we see that

\delta_T^2 \sim \hbar G^2V.

Thus the production of gravitational waves during inflation is a quantum effect, which would disappear in the limit \hbar \to 0. Likewise, the scalar fluctuation strength \delta_S^2 is also O(\hbar), and hence also a quantum effect.
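As a quick check that the powers of \hbar and G really do combine into something dimensionless, here is the bookkeeping, with E and L standing in for energy and length (and c = 1):

```python
import sympy as sp

E, L = sp.symbols('E L', positive=True)   # stand-ins for energy and length

hbar = E * L        # [hbar] = energy x length
G    = L / E        # [G]    = length / energy
V    = E / L**3     # [V]    = energy / volume

print(sp.simplify(hbar * G**2 * V))       # prints 1: dimensionless, as delta_T^2 must be
```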

Therefore the detection of primordial gravitational waves by BICEP2, if correct, confirms that gravity is quantized just like the other fundamental forces. That shouldn’t be a surprise, but it’s nice to know.

My 10 biggest thrills

Wow!

Evidence for gravitational waves produced during cosmic inflation. BICEP2 results for the ratio r of gravitational wave perturbations to density perturbations, and the density perturbation spectral tilt n.

Like many physicists, I have been reflecting a lot the past few days about the BICEP2 results, trying to put them in context. Other bloggers have been telling you all about it (here, here, and here, for example); what can I possibly add?

The hoopla this week reminds me of other times I have been really excited about scientific advances. And I recall some wise advice I received from Sean Carroll: blog readers like lists.  So here are (in chronological order)…

My 10 biggest thrills (in science)

This is a very personal list — your results may vary. I’m not saying these are necessarily the most important discoveries of my lifetime (there are conspicuous omissions), just that, as best I can recall, these are the developments that really started my heart pounding at the time.

1) The J/Psi from below (1974)

I was a senior at Princeton during the November Revolution. I was too young to appreciate fully what it was all about — having just learned about the Weinberg-Salam model, I thought at first that the Z boson had been discovered. But by stalking the third floor of Jadwin I picked up the buzz. No, it was charm! The discovery of a very narrow charmonium resonance meant we were on the right track in two ways — charm itself confirmed ideas about the electroweak gauge theory, and the narrowness of the resonance fit in with the then recent idea of asymptotic freedom. Theory triumphant!

2) A magnetic monopole in Palo Alto (1982)

By 1982 I had been thinking about the magnetic monopoles in grand unified theories for a few years. We thought we understood why no monopoles seem to be around. Sure, monopoles would be copiously produced in the very early universe, but then cosmic inflation would blow them away, diluting their density to a hopelessly undetectable value. Then somebody saw one …. a magnetic monopole obediently passed through Blas Cabrera’s loop of superconducting wire, producing a sudden jump in the persistent current. On Valentine’s Day!

According to then-current theory, the monopole mass was expected to be about 10^16 GeV (10 million billion times heavier than a proton). Had Nature really been so kind as to bless us with this spectacular message from a staggeringly high energy scale? It seemed too good to be true.

It was. Blas never detected another monopole. As far as I know he never understood what glitch had caused the aberrant signal in his device.

3) “They’re green!” High-temperature superconductivity (1987)

High-temperature superconductors were discovered in 1986 by Bednorz and Mueller, but I did not pay much attention until Paul Chu found one in early 1987 with a critical temperature above 77 K, the boiling point of liquid nitrogen. Then for a while the critical temperature seemed to be creeping higher and higher on an almost daily basis, eventually topping 130 K … one wondered whether it might go up, up, up forever.

It didn’t. Today 138 K still seems to be the record.

My most vivid memory is that David Politzer stormed into my office one day with a big grin. “They’re green!” he squealed. David did not mean that high-temperature superconductors would be good for the environment. He was passing on information he had just learned from Phil Anderson, who happened to be visiting Caltech: Chu’s samples were copper oxides.

4) “Now I have mine” Supernova 1987A (1987)

What was most remarkable and satisfying about the 1987 supernova in the nearby Large Magellanic Cloud was that the neutrinos released in a ten second burst during the stellar core collapse were detected here on earth, by gigantic water Cerenkov detectors that had been built to test grand unified theories by looking for proton decay! Not a truly fundamental discovery, but very cool nonetheless.

Soon after it happened some of us were loafing in the Lauritsen seminar room, relishing the good luck that had made the detection possible. Then Feynman piped up: “Tycho Brahe had his supernova, Kepler had his, … and now I have mine!” We were all silent for a few seconds, and then everyone burst out laughing, with Feynman laughing the hardest. It was funny because Feynman was making fun of his own gargantuan ego. Feynman knew a good gag, and I heard him use this line at a few other opportune times thereafter.

5) Science by press conference: Cold fusion (1989)

The New York Times was my source for the news that two chemists claimed to have produced nuclear fusion in heavy water using an electrochemical cell on a tabletop. I was interested enough to consult that day with our local nuclear experts Charlie Barnes, Bob McKeown, and Steve Koonin, none of whom believed it. Still, could it be true?

I decided to spend a quiet day in my office, trying to imagine ways to induce nuclear fusion by stuffing deuterium into a palladium electrode. I came up empty.

My interest dimmed when I heard that they had done a “control” experiment using ordinary water, had observed the same excess heat as with heavy water, and remained just as convinced as before that they were observing fusion. Later, Caltech chemist Nate Lewis gave a clear and convincing talk to the campus community debunking the original experiment.

6) “The face of God” COBE (1992)

I’m often too skeptical. When I first heard in the early 1980s about proposals to detect the anisotropy in the cosmic microwave background, I doubted it would be possible. The signal is so small! It will be blurred by reionization of the universe! What about the galaxy! What about the dust! Blah, blah, blah, …

The COBE DMR instrument showed it could be done, at least at large angular scales, and set the stage for the spectacular advances in observational cosmology we’ve witnessed over the past 20 years. George Smoot infamously declared that he had glimpsed “the face of God.” Overly dramatic, perhaps, but he was excited! And so was I.

7) “83 SNU” Gallex solar neutrinos (1992)

Until 1992 the only neutrinos from the sun ever detected were the relatively high energy neutrinos produced by nuclear reactions involving boron and beryllium — these account for just a tiny fraction of all neutrinos emitted. Fewer than expected were seen, a puzzle that could be resolved if neutrinos have mass and oscillate to another flavor before reaching earth. But it made me uncomfortable that the evidence for solar neutrino oscillations was based on the boron-beryllium side show, and might conceivably be explained just by tweaking the astrophysics of the sun’s core.

The Gallex experiment was the first to detect the lower energy pp neutrinos, the predominant type coming from the sun. The results seemed to confirm that we really did understand the sun and that solar neutrinos really oscillate. (More compelling evidence, from SNO, came later.) I stayed up late the night I heard about the Gallex result, and gave a talk the next day to our particle theory group explaining its significance. The talk title was “83 SNU” — that was the initially reported neutrino flux in Solar Neutrino Units, later revised downward somewhat.

8) Awestruck: Shor’s algorithm (1994)

I’ve written before about how Peter Shor’s discovery of an efficient quantum algorithm for factoring numbers changed my life. This came at a pivotal time for me, as the SSC had been cancelled six months earlier, and I was growing pessimistic about the future of particle physics. I realized that observational cosmology would have a bright future, but I sensed that theoretical cosmology would be dominated by data analysis, where I would have little comparative advantage. So I became a quantum informationist, and have not regretted it.

9) The Higgs boson at last (2012)

The discovery of the Higgs boson was exciting because we had been waiting soooo long for it to happen. Unable to stream the live feed of the announcement, I followed developments via Twitter. That was the first time I appreciated the potential value of Twitter for scientific communication, and soon after I started to tweet.

10) A lucky universe: BICEP2 (2014)

Many past experiences prepared me to appreciate the BICEP2 announcement this past Monday.

I first came to admire Alan Guth’s distinctive clarity of thought in the fall of 1973 when he was the instructor for my classical mechanics course at Princeton (one of the best classes I ever took). I got to know him better in the summer of 1979 when I was a graduate student, and Alan invited me to visit Cornell because we were both interested in magnetic monopole production in the very early universe. Months later Alan realized that cosmic inflation could explain the isotropy and flatness of the universe, as well as the dearth of magnetic monopoles. I recall his first seminar at Harvard explaining his discovery. Steve Weinberg had to leave before the seminar was over, and Alan called as Steve walked out, “I was hoping to hear your reaction.” Steve replied, “My reaction is applause.” We all felt that way.

I was at a wonderful workshop in Cambridge during the summer of 1982, where Alan and others made great progress in understanding the origin of primordial density perturbations produced from quantum fluctuations during inflation (Bardeen, Steinhardt, Turner, Starobinsky, and Hawking were also working on that problem, and they all reached a consensus by the end of the three-week workshop … meanwhile I was thinking about the cosmological implications of axions).

I also met Andrei Linde at that same workshop, my first encounter with his mischievous grin and deadpan wit. (There was a delegation of Russians, who split their time between Xeroxing papers and watching the World Cup on TV.) When Andrei visited Caltech in 1987, I took him to Disneyland, and he had even more fun than my two-year-old daughter.

During my first year at Caltech in 1984, Mark Wise and Larry Abbott told me about their calculations of the gravitational waves produced during inflation, which they used to derive a bound on the characteristic energy scale driving inflation, a few times 10^16 GeV. We mused about whether the signal might turn out to be detectable someday. Would Nature really be so kind as to place that mass scale below the Abbott-Wise bound, yet high enough (above 10^16 GeV) to be detectable? It seemed unlikely.

Last week I caught up with the rumors about the BICEP2 results by scanning my Twitter feed on my iPad, while still lying in bed during the early morning. I immediately leapt up and stumbled around the house in the dark, mumbling to myself over and over again, “Holy Shit! … Holy Shit! …” The dog cast a curious glance my way, then went back to sleep.

Like millions of others, I was frustrated Monday morning, trying to follow the live feed of the discovery announcement broadcast from the hopelessly overtaxed Center for Astrophysics website. I was able to join in the moment, though, by following on Twitter, and I indulged in a few breathless tweets of my own.

Many of Andrew Lange’s friends have been thinking a lot about him these past few days. Andrew had been the leader of the BICEP team (current senior team members John Kovac and Chao-Lin Kuo were Caltech postdocs under Andrew in the mid-2000s). One day in September 2007 he sent me an unexpected email, with the subject heading “the bard of cosmology.” Having discovered on the Internet a poem I had written to introduce a seminar by Craig Hogan, Andrew wrote:

“John,

just came across this – I must have been out of town for the event.

l love it.

it will be posted prominently in our lab today (with “LISA” replaced by “BICEP”, and remain our rallying cry till we detect the B-mode.

have you set it to music yet?

a”

I lifted a couplet from that poem for one of my tweets (while rumors were swirling prior to the official announcement):

We’ll finally know how the cosmos behaves
If we can detect gravitational waves.

Assuming the BICEP2 measurement r ~ 0.2 is really a detection of primordial gravitational waves, we have learned that the characteristic mass scale during inflation is an astonishingly high 2 X 10^16 GeV. Were it a factor of 2 smaller, the signal would have been far too small to detect in current experiments. This time, Nature really is on our side, eagerly revealing secrets about physics at a scale far, far beyond what we will every explore using particle accelerators. We feel lucky.

We physicists can never quite believe that the equations we scrawl on a notepad actually have something to do with the real universe. You would think we’d be used to that by now, but we’re not — when it happens we’re amazed. In my case, never more so than this time.

The BICEP2 paper, a historic document (if the result holds up), ends just the way it should:

“We dedicate this paper to the memory of Andrew Lange, whom we sorely miss.”

Summer of Science: Caltech InnoWorks 2013

The following post is a collaboration between visiting undergraduates Evan Giarta from Stanford University and Joy Hui from Harvard University. As mentors for the 2013 Caltech InnoWorks Academy, Evan and Joy agreed to share their experience with the audience of this blog.

Throughout modern history, science and mathematics have been the foundation for engineering new, world-advancing technologies. From the wheel and sailboat to the automobile and jumbo jet, the fields of science, technology, engineering and math (STEM) have helped our world and its people move faster, farther, and forward. Now, more than ever, products that were unimaginable a century ago are used every day in households and businesses all over the earth, products made possible by generations of scientists, mathematicians, engineers and technologists.

There is, however, some troublesome news regarding the state of our nation’s math and science education. For the past few years, education and news reports have ranked the United States behind other developed countries in science and mathematics. In fact, the proportion of students that score in the most advanced levels of math and science in the United States is significantly lower than that of several other nations. This reality brings to light a stark concern: If proficiency in science and math is necessary for the engineering of novel technologies and breakthrough discoveries, but is something the next generation will lack, what will become of the production industry, our economy, and global security? While the answer to this question might range from complete and utter chaos to little or no effect, it seems reasonable to try to avoid finding out. Rather, we, the collective community of scientists, technologists, engineers, and mathematicians, should seek to solve this problem: What can we do to restore the integrity and substance of the educational system, especially in science and math?

Although there is no easy answer to this question, there are ongoing efforts to address this important issue, one of which is the InnoWorks Academy. Founded in 2003 by a group of Duke undergraduates, the United InnoWorks Academy has developed summer programs that encourage underprivileged middle school students to explore the fields of science, technology, engineering, math, and medicine (STEM^2), free of charge. The academy is sponsored by a variety of businesses and organizations, including GlaxoSmithKline, Cisco, Project Performance Corporation, Do Something, St. Andrew’s School, University of Pennsylvania, University of California, Los Angeles, and many others. InnoWorks Academy was a winner of the 2007 BRICK Awards and received both the MIT Global Community Choice Award and the MIT IDEAS Challenge Grand Prize in 2011. In ten short years, the United InnoWorks Academy has successfully conducted over 50 summer programs for more than 2,000 students through the contributions of over 1,000 volunteers. Currently, InnoWorks has chapters at about a dozen universities and is scheduled to add three more chapters this coming summer.

In early August of 2013, the Caltech InnoWorks chapter hosted its second annual summer camp, and Joy and I (Evan) were privileged to be invited as program mentors. As people on the inside, we got a first-hand look at how the organization prepares and runs the week-long event and what the students themselves experience in the hands-on, interactive, and collaborative one-of-a-kind opportunity that is InnoWorks.

Since Joy and I kept a diary of each day’s activities, we thought you might like to see what our middle-schoolers experienced during that week. Here is a play-by-play of the first few days from Joy’s perspective.

Monday, August 5, 2013 was the first day of our camp! Before I go on with talking about the cool things we did that day, I’d like to introduce my team. The self-titled YOLOSWAG consisted of Michael, Chase, Evan, Phaelan and myself. Michael was the oldest, and a little shy at first, but he definitely started talking when he got comfortable. Chase was respectful and polite, and we hit it off immediately. Evan loved science and had lots of questions and knew a TON. Phaelan was the only other girl on the team, but she was very nice and friendly to the other students, and eager to help with anything she could. We all seem different, right? But wait, here’s the best part: we were all die-hard Percy Jackson (property of Rick Riordan) fans! We were definitely the best team.

Anyway, back to the actual stuff we did. One of the first things we saw was a cloud demo, essentially the creation of a cloud in a fish tank, with lots of dry ice and water. The demonstrator, Rob Usiskin, stuck a lot of dry ice in the (empty) fish tank and poured some water into the tank, turning the water into a fog, which turns out to be exactly what a cloud is! Add a bubble maker, a question of whether bubbles will float or sink on the cloud, and a room full of InnoWorks campers (about 40 of them), and you will get an hour of general excitement. See the pictures for yourself! (The bubbles float, by the way, even when the cloud is invisible!)

Following the demonstration, we made Soap Boats. We were apparently supposed to cut index cards into the shape of boats, and dab a little bit of soap on the bottom of the boat, and set the boat in the water. These “Soap Boats” were supposed to be propelled forward by the soap’s ability to decrease the surface tension of the water it touched. Ours, appropriately named “Titanic,” however, simply sat in the water until the water soaked through, and the Titanic sank for the second time in history. Many other teams’ boats fared about the same, but we certainly had a blast designing and naming our Soap Boat!

The last activity of the day was a secret message decoded with bioluminescence. Each team was given a vial of dried-up ostracods, which are sea creatures found glowing in the darkness of the deep sea. Then, each team crushed the ostracods and mixed the resulting powder with water to catalyze the bioluminescence. Every mentor had written a secret message on a slip of paper, folded it, and handed it to their team to decipher in the dark (mine said: YOLOSWAG for the win). The convenient cleaning closet provided said darkness; Spiros, our faculty mentor, suggested that it might also provide passage to Narnia. I don’t think we lost any of our students that day, so no Narnia-traveling was done, at least not by students. Nevertheless, it was a fun-filled and action-packed day, and a great start to an eventful week.

Tuesday was full of fun, hands-on activities. After a short lesson on the effects of air resistance and gravity on free-falling objects, we demonstrated the concept with a thin sheet of paper and a large textbook. As expected, when the two were placed side by side and dropped, air resistance caused the sheet of paper to hit the ground much later than the textbook. But when the sheet of paper was placed directly on top of the textbook, to the shock of the students, both items fell at exactly the same rate. Though the students were thorough in their knowledge of physical laws, the connection between conceptual understanding and real-life application had yet to be established. As a result, when asked to explain what happened and why, the best answer anyone could muster was simply, “SCIENCE!”

Following an exercise consisting of blow dryers and floating ping pong balls, the kids received a brief tutorial on how tornadoes are formed by air moving between high- and low-pressure regions and by gusts of vertically rising winds. Because of the forces it produces and the forces acting upon it, a tornado can then more or less sustain itself. To explore this concept further, students constructed water tornado machines by taping two soda bottles together at their openings. Laughter and wetness ensued. Some groups added small trinkets to their tornado machines to observe the water tornado’s effect on “debris”. One team in particular inserted duct-tape sharks and aptly renamed themselves Sharknado.

In the afternoon, the campers were presented with a lesson on how sailboats and aircraft move. Contrary to what most people intuit, the fastest way to sail a boat is not to run straight downwind, but to set the sail at a heading that produces a net force, which can be explained by Bernoulli’s Principle. It states that faster-moving fluid has lower pressure than slower-moving fluid, producing a net force from the slower-moving (higher-pressure) side toward the faster-moving (lower-pressure) side. Though in sailing this force is barely noticeable at first, over time it delivers a large impulse and moves the craft at considerable speeds. The same principle helps explain how a plane takes flight.

With the importance of good design in mind, students were tasked with prototyping the fastest water-bottle boat. Given a solar panel, an electric motor, various propellers, an empty bottle, tape, and other construction essentials, kids started with basic designs, then diversified in order to gain an edge on the other teams. Some teams boasted two-bottle designs, and others used one, each type having trade-offs in speed and stability. A few teams implemented style upgrades with graphics and colors, and even fewer attempted performance modifications with ballast and crude control systems. But with a tough deadline to meet, not all boats met their intended specifications. Nonetheless, the races commenced and each team’s innovation was tested in the torrents of Caltech’s Beckman Fountain. Some failed, but those that survived were rewarded accordingly.

By the end of the second day, much of the campers’ initial shyness had been replaced with conversation and budding new friendships. Lunch hour and break times allowed kids and mentors alike to hang out and enjoy themselves in California’s summer sun in between discovering the applications of science and math to engineering, medicine, and technology. These moments of discovery, however rare, are the reason we do what we do as we continue our research, studies, and work to improve the world we live in.

Oh, the Places You’ll Do Theoretical Physics!

I won’t run lab tests in a box.
I won’t run lab tests with a fox.
But I’ll prove theorems here or there.
Yes, I’ll prove theorems anywhere…

Physicists occupy two camps. Some—theorists—model the world using math. We try to predict experiments’ outcomes and to explain natural phenomena. Others—experimentalists—gather data using supermagnets, superconductors, the world’s coldest atoms, and other instruments deserving of superlatives. Experimentalists confirm that our theories deserve trashing or—for this we pray—might not model the world inaccurately.

Theorists, people say, can work anywhere. We need no million-dollar freezers. We need no multi-pound magnets.* We need paper, pencils, computers, and coffee. Though I would add “quiet,” colleagues would add “iPods.”

Theorists’ mobility reminds me of the book Green Eggs and Ham. Sam-I-am, the antagonist, drags the protagonist to spots as outlandish as our workplaces. Today marks the author’s birthday. Since Theodor Geisel stimulated imaginations, and since imagination drives physics, Quantum Frontiers is paying its respects. In honor of Oh, the Places You’ll Go!, I’m spotlighting places you can do theoretical physics. You judge whose appetite for exotica exceeds whose: Dr. Seuss’s or theorists’.

I’ve most looked out-of-place doing physics by a dirt road between sheep-populated meadows outside Lancaster, UK. Lancaster, the Wars of the Roses victor, is a city in northern England. The year after graduating from college, I worked at Lancaster University as a research assistant. I studied a crystal that resembles graphene, a material whose superlatives include “superstrong,” “supercapacitor,” and “superconductor.” From morning to evening, I’d submerge myself in math till it poured out my ears. Then I’d trek from “uni,” as Brits say, to the “city centre,” as they write.

The trek wound between trees; fields; and, because I was in England, puddles. Many evenings, a rose or a sunset would arrest me. Other evenings, physics would. I’d realize how to solve an equation, or that I should quit banging my head against one. Stepping off the road, I’d fish out a notebook and write. Amidst the puddles and lambs. Cyclists must have thought me the queerest sight since a cloudless sky.

A colleague loves doing theory in the sky. On planes, he explained, hardly anyone interrupts his calculations. And who minds interruptions by pretzels and coffee?

“A mathematician is a device for turning coffee into theorems,” some have said, and theoretical physicists live down the block from mathematicians in the neighborhood of science. Turn a Pasadena café upside-down and shake it, and out will fall theorists. Since Hemingway’s day, the romanticism has faded from the penning of novels in cafés. But many a theorist trumpets about an equation derived on a napkin.

Trumpeting filled my workplace in Oxford. One of Clarendon Lab’s few theorists, I neighbored lasers, circuits, and signs that read “DANGER! RADIATION.” Though radiation didn’t leak through our walls (I hope), what did leak through contributed more to that office’s eccentricity than radiation would. As early as 9:10 AM, the experimentalists next door blasted “Born to Be Wild” and Animal House tunes. If you can concentrate over there, you can concentrate anywhere.

One paper I concentrated on had a Crumple-Horn Web-Footed Green-Bearded Schlottz of an acknowledgements section. In a physics paper’s last paragraph, one thanks funding agencies and colleagues for support and advice. “The authors would like to thank So-and-So for insightful comments,” papers read. This paper referenced a workplace: “[One coauthor] is grateful to the Half Moon Pub.” Colleagues of the coauthor confirmed the acknowledgement’s aptness.

Though I’ve dwelled on theorists’ physical locations, our minds roost elsewhere. Some loiter in atoms; others, in black holes; some, on four-dimensional surfaces; others, in hypothetical universes. I hobnob with particles in boxes. As Dr. Seuss whisks us to a Bazzim populated by Nazzim, theorists tell of function spaces populated by Rényi entropies.

The next time you see someone standing in a puddle, or in a ditch, or outside Buckingham Palace, scribbling equations, feel free to laugh. You might be seeing a theoretical physicist. You might be seeing me. To me, physics has relevance everywhere. Scribbling there and here should raise eyebrows no more than any setting in a Dr. Seuss book.

The author would like to thank this emporium of Seussoria. And Java & Co.

*We need for them to confirm that our theories deserve trashing, but we don’t need them with us. Just as, when considering quitting school to break into the movie business, you need for your mother to ask, “Are you sure that’s a good idea, dear?” but you don’t need for her to hang on your elbow. Except experimentalists don’t say “dear” when crushing theorists’ dreams.