Decoding (the allure of) the apparent horizon

I took 32 hours to unravel why Netta Engelhardt’s talk had struck me.

We were participating in Quantum Information in Quantum Gravity III, a workshop hosted by the University of British Columbia (UBC) in Vancouver. Netta studies quantum gravity as a Princeton postdoc. She discussed a feature of black holes—an apparent horizon—I’d not heard of. After hearing of it, I had to grasp it. I peppered Netta with questions three times in the following day. I didn’t understand why, for 32 hours.

After 26 hours, I understood apparent horizons like so.

Imagine standing beside a glass sphere, an empty round shell. Imagine light radiating from a point source at the sphere’s center. Think of the point source as a minuscule flashlight. Light rays spill from the point source.

Which paths do the rays follow through space? They fan outward from the sphere’s center, hit the glass, and fan out more. Imagine turning your back to the sphere and looking outward. Light rays diverge as they pass you.

At least, rays diverge in flat space-time. We live in nearly flat space-time. We wouldn’t if we neighbored a supermassive object, like a black hole. Mass curves space-time, as described by Einstein’s theory of general relativity.


Imagine standing beside the sphere near a black hole. Let the sphere have roughly the black hole’s diameter—around 10 kilometers, according to astrophysical observations. You can’t see much of the sphere. So—imagine—you recruit your high-school-physics classmates. You array yourselves around the sphere, planning to observe light and compare observations. Imagine turning your back to the sphere. Light rays would converge, or flow toward each other. You’d know yourself to be far from Kansas.

Picture you, your classmates, and the sphere falling into the black hole. When would everyone agree that the rays switch from diverging to converging? Sometime after you passed the event horizon, the point of no return.1 Before you reached the singularity, the black hole’s center, where space-time warps infinitely. The rays would switch when you reached an in-between region, the apparent horizon.

Imagine pausing at the apparent horizon with your sphere, facing away from the sphere. Light rays would neither diverge nor converge; they’d point straight. Continue toward the singularity, and the rays would converge. Reverse away from the singularity, and the rays would diverge.


UBC near twilight

Rays diverged from the horizon beyond UBC at twilight. Twilight suits UBC as marble suits the Parthenon; and UBC’s twilight suits musing. You can reflect while gazing on reflections in glass buildings, or reflections in a pool by a rose garden. Your mind can roam as you roam paths lined by elms, oaks, and willows. I wandered while wondering why the sphere intrigued me.

Science thrives on instrumentation. Galileo improved the telescope, which unveiled Jupiter’s moons. Alexander von Humboldt measured temperatures and pressures with thermometers and barometers, charting South America during the 1700s. The Large Hadron Collider revealed the Higgs particle’s mass in 2012.

The sphere reminded me of a thermometer. As thermometers register temperature, so does the sphere register space-time curvature. Not that you’d need a sphere to distinguish a black hole from Kansas. Nor do you need a thermometer to distinguish Vancouver from a Brazilian jungle. But thermometers quantify the distinction. A sphere would sharpen your observations’ precision.

A sphere and a light source—free of supercolliders, superconductors, and superfridges. The instrument boasts not only profundity, but also simplicity.


Alexander von Humboldt

Netta proved a profound theorem about apparent horizons, with coauthor Aron Wall. Jacob Bekenstein and Stephen Hawking had studied event horizons during the 1970s. An event horizon’s area, Bekenstein and Hawking showed, is proportional to the black hole’s thermodynamic entropy. Netta and Aron proved a proportionality between another area and another entropy.

They calculated an apparent horizon’s area, A. The math that represents their black hole also represents a quantum system, by a duality called AdS/CFT. The quantum system can occupy any of several states. Different states encode different information about the black hole. Consider the information needed to describe, fully and only, the region outside the apparent horizon. Some quantum state \rho encodes this information. \rho encodes no information about the region behind the apparent horizon, closer to the black hole. How would you quantify this lack of information? With the von Neumann entropy S(\rho). This entropy is proportional to the apparent horizon’s area: S(\rho) \propto A.
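For readers who want to compute for themselves, here is a minimal Python sketch of the von Neumann entropy formula. The density matrices below are toy examples of my own choosing, not states from Netta and Aron’s paper.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), computed from rho's eigenvalues."""
    eigvals = np.linalg.eigvalsh(rho)       # rho is Hermitian
    eigvals = eigvals[eigvals > 1e-12]      # drop zeros; 0 log 0 = 0
    return -np.sum(eigvals * np.log(eigvals))

# A maximally mixed qubit hides one bit (ln 2 nats) of information;
# a pure state hides none.
print(von_neumann_entropy(np.eye(2) / 2))        # ~0.693 = ln 2
print(von_neumann_entropy(np.diag([1.0, 0.0])))  # 0.0
```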

Netta and Aron entitled their paper “Decoding the apparent horizon.” Decoding the apparent horizon’s allure took me 32 hours and took me to an edge of campus. But I didn’t mind. Edges and horizons suited my visit as twilight suits UBC. Where can we learn, if not at edges, as where quantum information meets other fields?

 

With gratitude to Mark van Raamsdonk and UBC for hosting Quantum Information in Quantum Gravity III; to Mark, the other organizers, and the “It from Qubit” Simons Foundation collaboration for the opportunity to participate; and to Netta Engelhardt for sharing her expertise.

1. Nothing that draws closer to a black hole than the event horizon can turn around and leave, according to general relativity. The black hole’s gravity pulls too strongly. Quantum mechanics implies that information leaves, though, in Hawking radiation.

Teacher Research at Caltech

The Yeh Lab Group’s research activities at Caltech have been instrumental in studying semiconductors and making two-dimensional materials such as graphene, as highlighted on a BBC Horizons show.

An emerging sub-field of semiconductor and two-dimensional-materials research is that of transition metal dichalcogenide (TMDC) monolayers. In particular, a monolayer of tungsten disulfide, a TMDC, is believed to exhibit interesting semiconductor properties when exposed to circularly polarized light. My role in the Yeh Lab, as a visiting high-school physics teacher intern for the summer of 2017, has been to help research and set up a vacuum chamber for studying tungsten disulfide samples under circularly polarized light.

What makes semiconductors unique is that their conductivity can be controlled by doping or by changes in temperature. Higher temperatures or doping can bridge the energy gap between the valence and conduction bands; in other words, electrons can be promoted into the conduction band and begin moving through the material. Like graphene, tungsten disulfide has a hexagonal, symmetric crystal structure. Monolayers of transition metal dichalcogenides with such a honeycomb structure have two energy valleys, and one valley can interact with the other. Circularly polarized light can be used to populate one valley rather than the other. This gives a degree of control over the electron population via the light’s polarization.

The Yeh Lab Group prides itself on making in-house the materials and devices needed for research. For example, in order to study high-temperature superconductors, the Yeh Group designed and built its own scanning tunneling microscope. When the group began researching graphene, instead of buying vast quantities of it, they pioneered new ways of fabricating it. This research topic has been no different: Wei-hsiang Lin, a Caltech graduate student, has been busy fabricating tungsten disulfide samples via chemical vapor deposition (CVD) using tungsten oxide and sulfur powder.


Wei-hsiang Lin’s area for using PLD to form the TMDC samples

The first portion of my assignment was spent learning about vacuum chambers and researching what to order so that we could mount our sample inside the chamber. One must determine how the electrical feedthroughs should be attached, how many are necessary, which vacuum pump will be used, and how many flanges and gaskets of each size must be purchased to prepare the vacuum chamber.

There were also a number of flanges and parts already in the lab that needed to be examined for possible use. After triple-checking the details, the order was placed with Kurt J. Lesker. With a sufficient amount of anti-seize lubricant and numerous nuts, washers, and bolts, we assembled the vacuum chamber that will hold the TMDC sample.


The original vacuum chamber



Fun in the lab



The prepped vacuum chamber


The second part of my assignment was spent researching how to set up the optics for our experiment and ordering the necessary equipment. Once the experiment is up and running, we will use a milliwatt broad-spectrum light source directed into a monochromator, which narrows the light down to specific wavelengths for testing. Ultimately we will evaluate the giant wavelength range of 300 nm through 1800 nm. After the monochromator, the light will be refocused by a plano-convex lens. Next, the light will pass through a linear polarizer and then a quarter-wave plate, emerging circularly polarized. Lastly, the light will be refocused by a biconvex lens into the vacuum chamber and onto a 1 mm by 1 mm area of the sample.
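As a sanity check on that polarization chain, here is a small Python sketch using standard Jones calculus. The angles and the assumption of ideal components are illustrative choices of mine, not measurements from our setup.

```python
import numpy as np

# Jones-calculus model of the chain: linear polarizer, then a quarter-wave
# plate with its fast axis 45 degrees from the polarization axis.
horizontal_polarizer = np.array([[1, 0],
                                 [0, 0]], dtype=complex)

def quarter_wave_plate(theta):
    """Jones matrix of a quarter-wave plate with fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    retarder = np.array([[1, 0], [0, 1j]])  # 90-degree retardance
    return rot @ retarder @ rot.T           # rot.T rotates back by -theta

light_in = np.array([1.0, 1.0]) / np.sqrt(2)   # diagonal linear polarization
after_polarizer = horizontal_polarizer @ light_in
out = quarter_wave_plate(np.pi / 4) @ after_polarizer

# The two output components have equal magnitude and a 90-degree relative
# phase: circularly polarized light.
print(out)
```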

We are excited to soon observe how tungsten disulfide responds to circularly polarized light. Does our sample resonate at the same wavelengths that the first labs found? Why or why not? What other unique properties are observed? How can they be explained? How is the Hall effect observed? What does this mean for the possible applications of semiconductors? How can the transfer of information from one valley to another be used in advanced electronics for communication? Then, similar exciting experimentation will take place with graphene under circularly polarized light.

I love the sharp contrast between the high-energy adolescent classroom and the quiet calm of the lab. I am grateful to be learning a different, new-to-me area of physics this summer. Yes, I remember studying polarization and semiconductors in high school and as an undergraduate. But it is completely different to set up an experiment from scratch and to be a part of groundbreaking research in these areas. And it is just fun to work with your hands and build research equipment at a world-leading research university. Sometimes science teachers can get bogged down with paperwork and meetings. I am grateful to have had this fabulous opportunity to work on applied science over the summer and to be re-energized in my love of physics. I look forward to meeting my new batch of students in a few short weeks and sharing with them my curiosity and joy in learning how the world works.

Two Views of the Eclipse

I am sure many of us are thinking about the eclipse.

It all starts with how far we are going to drive in order to see totality. My family and I are currently in Colorado, so we are relatively close to the path of darkness in Wyoming. I thought about trying to book a hotel room. But if you’d like to see the dusk in Lusk, here is what you get:

Let us just say that I became quite acquainted with small-town WY and any-ville NE before giving up. Driving 10 hours in a single day with my two children, ages 4 and 5, was not an option. So I will have to be content with 90% coverage.

90% coverage sounds like it is good enough… But when you think about the sun and its output, you realize that it won’t actually be very dark. The sun delivers about 1 kW of light and heat per square meter. Blocking 90% of that still leaves us with 100 W per square meter. Imagine a room lit by a square array of 100 W incandescent bulbs spaced one meter apart. Not so dark. Luckily, we have really dark eclipse glasses.

All things considered, it is a huge coincidence that the moon is just about the right size and distance from the earth to block the sun exactly: \frac{\text{sun radius}}{\text{sun-Earth distance}} = \frac{0.7\cdot 10^6~\text{km}}{150\cdot 10^6~\text{km}} \approx \frac{\text{lunar radius}}{\text{lunar-Earth distance}} = \frac{1.7\cdot 10^3~\text{km}}{385\cdot 10^3~\text{km}}.
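For anyone who wants to check the arithmetic, here is a two-line Python version, using only the numbers quoted above (each ratio is the body’s angular radius in radians):

```python
# Angular radii (radius / distance); radii and distances in km.
sun_radius, sun_distance = 0.7e6, 150e6
moon_radius, moon_distance = 1.7e3, 385e3

print(sun_radius / sun_distance)    # ~0.0047 rad, about 0.27 degrees
print(moon_radius / moon_distance)  # ~0.0044 rad, about 0.25 degrees
```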

On a more personal note, another coincidence of a lesser cosmic meaning is that my wife, Jocelyn Holland, a professor of comparative literature at UCSB and Caltech, has also done research on eclipses. She has recently published an essay that shows how, for nineteenth-century observers, and astronomers in particular, the unique darkness associated with the eclipse during totality shook their subjective experience of time. Readers might want to share their own personal experiences at the end of this blog so that we can see how a twenty-first century perspective compares.

As for Jocelyn’s paper, here is a redacted ‘poetry for scientists’ excerpt from it.

Eclipses are well-known objects of scientific study, but it is just as true that, throughout history, they have been perceived as the most supernatural of events, permitting superstition and fear to intrude. As a result, eclipses have frequently been used across cultures, in particular by the community of scientists and scholars, as an index of “enlightenment.” Astronomers in the nineteenth century – an epoch that witnessed several mathematical advances in the calculation of solar and lunar eclipses, as exemplified in the work of Friedrich Bessel – looked back at prior centuries with scorn, mocking the irrational fears of times past. The German astronomer August Ludwig Busch, in a text published shortly before a total eclipse in 1851, points out with some smugness that scarcely 200 years before then, in Germany, “the majority of the population threw itself upon its knees in desperation during a total eclipse,” and that the composure with which the next eclipse will be greeted is “the most certain proof how only science is able to conquer prejudices and superstition which prior centuries have gone through.”

Two solar eclipses were witnessed by Europeans in the mid-nineteenth century, on July 8th, 1842 and July 28th, 1851, when the first photographic image of an eclipse was made by Julius Berkowski (see below).

What Berkowski’s daguerreotype cannot convey, however, is a particular perception shared by both professional astronomers and amateur observers of these eclipses: that the darkness of the eclipse’s totality is unlike any darkness they had experienced before. As it turns out, this perception posed a challenge to their self-proclaimed enlightenment.

There was already a historical record in place describing the strange darkness of a total eclipse. As another nineteenth-century astronomer, Jacob Lehmann, phrased it, “How is it now to be explained, namely what several observers report during the eclipse of 1706, that the darkness at the time of the total occultation of the sun compares neither to night nor to dusk, but rather is of a particular kind. What is this particular kind?” The strange darkness of the eclipse presents a problem that one can state quite simply in temporal terms: it corresponds to no prior experience of natural light or time of day.

It might strike us as odd that August Ludwig Busch, the same astronomer who derided the superstition of prior generations, writes the following with reference to eclipses past, and in anticipation of the eclipse of 1851:

You will all remember the inexplicable melancholic frame of mind which one already experiences during large if not even total eclipses, when all objects appear in a dull, unusual light, there lies namely in the sight of great plains and far-spread drifts, upon which trees and rocks, although still illuminated by sunlight, still seem to cast no shadow, such a thing which causes mourning, that one is involuntarily overcome by horror. This feeling should occur more intensely in people when, during the total eclipse, a very peculiar darkness arrives which can be named neither night nor dusk.

August Ludwig Busch.

One can say that the perceived relationship between the quality of light and time of day is based on expectations that are so innate as to be taken as infallible until experience teaches otherwise. It is natural for us to use the available light in the sky as the basis for a measure of time when no time-keeping piece is on hand. The cyclical predictability of a steady increase and decrease in available light during the course of the day, however, in addition to all the nuances of how the midday light differs from dawn and twilight, is less than helpful in the rare event of an eclipse. The quality of light does not correspond to any experience of lived time. As a consequence, not only August Ludwig Busch, but also numerous other observers, attributed it to death, as if for lack of an alternative.

For all their claims of rationality, nineteenth-century observers were troubled by this darkness that conformed to no experienced time of day. It signaled to them, among other things, that time and light are out of joint. In short, as natural as it may be, a full solar eclipse has, historically, posed a real challenge: not to the predictability of mechanical time-keeping, but rather to a very human experience of time.

Topological qubits: Arriving in 2018?

Editor’s note: This post was prepared jointly by Ryan Mishmash and Jason Alicea.

Physicists appear to be on the verge of demonstrating proof-of-principle “usefulness” of small quantum computers.  Preskill’s notion of quantum supremacy spotlights a particularly enticing goal: use a quantum device to perform some computation—any computation, in fact—that falls beyond the reach of the world’s best classical computers.  Efforts along these lines are being vigorously pursued on many fronts, from academia to large corporations to startups.  IBM’s publicly accessible 16-qubit superconducting device, Google’s pursuit of a 7×7 superconducting-qubit array, and the recent synthesis of a 51-qubit quantum simulator using rubidium atoms are a few of many notable highlights.  While the number of qubits obtainable within such “conventional” approaches has steadily risen, synthesizing the first “topological qubit” remains an outstanding goal.  That ceiling may soon crumble, however—vaulting topological qubits into a fascinating new chapter in the quest for scalable quantum hardware.

Why topological quantum computing?

As quantum computing progresses from minimalist quantum supremacy demonstrations to attacking real-world problems, hardware demands will naturally steepen.  In, say, a superconducting-qubit architecture, a major source of overhead arises from quantum error correction needed to combat decoherence.  Quantum-error-correction schemes such as the popular surface-code approach encode a single fault-tolerant logical qubit in many physical qubits, perhaps thousands.  The number of physical qubits required for practical applications can thus rapidly balloon.
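To make that overhead concrete, here is a rough, hedged back-of-the-envelope in Python. The ~1% threshold, the (p/p_th)^{(d+1)/2} error-suppression rule, and the roughly 2d^2 physical qubits per distance-d surface-code patch are textbook-style rules of thumb, not numbers from any particular device.

```python
# Rough surface-code overhead estimate (rules of thumb, not exact figures).
p, p_th, target = 1e-3, 1e-2, 1e-12   # physical error rate, threshold, goal

d = 3
while (p / p_th) ** ((d + 1) / 2) > target:
    d += 2                            # surface-code distances are odd

# A distance-d patch uses roughly 2*d^2 physical qubits (data + ancilla).
print(d, "->", 2 * d ** 2, "physical qubits per logical qubit")
```

With these illustrative numbers the loop lands on d = 23, i.e. on the order of a thousand physical qubits per logical qubit, consistent with the “perhaps thousands” above.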

The dream of topological quantum computing (introduced by Kitaev) is to construct hardware inherently immune to decoherence, thereby mitigating the need for active error correction.  In essence, one seeks physical qubits that by themselves function as good logical qubits.  This lofty objective requires stabilizing exotic phases of matter that harbor emergent particles known as “non-Abelian anyons”.  Crucially, nucleating non-Abelian anyons generates an exponentially large set of ground states that cannot be distinguished from each other by any local measurement.  Topological qubits encode information in those ground states, yielding two key virtues:

(1) Insensitivity to local noise.  For reference, consider a conventional qubit encoded in some two-level system, with the 0 and 1 states split by an energy \hbar \omega.  Local noise sources—e.g., random electric and magnetic fields—cause that splitting to fluctuate stochastically in time, dephasing the qubit.  In practice one can engender immunity against certain environmental perturbations.  One famous example is the transmon qubit (see “Charge-insensitive qubit design derived from the Cooper pair box” by Koch et al.) used extensively at IBM, Google, and elsewhere.  The transmon is a superconducting qubit that cleverly suppresses the effects of charge noise by operating in a regime where Josephson couplings are sizable compared to charging energies.  Transmons remain susceptible, however, to other sources of randomness such as flux noise and critical-current noise.  By contrast, topological qubits embed quantum information in global properties of the system, building in immunity against all local noise sources.  Topological qubits thus realize “perfect” quantum memory.

(2) Perfect gates via braiding.  By exploiting the remarkable phenomenon of non-Abelian statistics, topological qubits further enjoy “perfect” quantum gates: Moving non-Abelian anyons around one another reshuffles the system among the ground states—thereby processing the qubits—in exquisitely precise ways that depend only on coarse properties of the exchange.

Disclaimer: Adjectives like “perfect” should come with the qualifier “up to exponentially small corrections”, a point that we revisit below.

Experimental status

The catch is that systems supporting non-Abelian anyons are not easily found in nature.  One promising topological-qubit implementation exploits exotic 1D superconductors whose ends host “Majorana modes”—novel zero-energy degrees of freedom that underlie non-Abelian-anyon physics.  In 2010, two groups (Lutchyn et al. and Oreg et al.) proposed a laboratory realization that combines semiconducting nanowires, conventional superconductors, and modest magnetic fields.

Since then, the materials-science progress on nanowire-superconductor hybrids has been remarkable.  Researchers can now grow extremely clean, versatile devices featuring various manipulation and readout bells and whistles.  These fabrication advances paved the way for experiments that have reported increasingly detailed Majorana characteristics: tunneling signatures including recent reports of long-sought quantized response, evolution of Majorana modes with system size, mapping out of the phase diagram as a function of external parameters, etc.  Alternate explanations are still being debated, though.  Perhaps the most likely culprits are conventional localized fermionic levels (“Andreev bound states”) that can imitate Majorana signatures under certain conditions; see in particular Liu et al.  Still, the collective experimental effort on this problem over the last 5+ years has provided mounting evidence for the existence of Majorana modes.  Revealing their prized quantum-information properties poses a logical next step.

Validating a topological qubit

Ideally one would like to verify both hallmarks of topological qubits noted above—“perfect” insensitivity to local noise and “perfect” gates via braiding.  We will focus on the former property, which can be probed in simpler device architectures.  Intuitively, noise insensitivity should imply long qubit coherence times.  But how do you pinpoint the topological origin of long coherence times, and in any case what exactly qualifies as “long”?

Here is one way to sharply address these questions (for more details, see our work in Aasen et al.).  As alluded to in our disclaimer above, logical 0 and 1 topological-qubit states aren’t exactly degenerate.  In nanowire devices they’re split by an energy \hbar \omega that is exponentially small in the separation distance L between Majorana modes divided by the superconducting coherence length \xi.  Correspondingly, the qubit states are not quite locally indistinguishable either, and hence not perfectly immune to local noise.  Now imagine pulling apart Majorana modes to go from a relatively poor to a perfect topological qubit.  During this process two things transpire in tandem: The topological qubit’s oscillation frequency, \omega, vanishes exponentially while the dephasing time T_2 becomes exponentially long.  That is,

\omega \propto e^{-L/\xi} \rightarrow 0, \qquad T_2 \propto e^{+L/\xi} \rightarrow \infty.

This scaling relation could in fact be used as a practical definition of a topologically protected quantum memory.  Importantly, mimicking this property in any non-topological qubit would require some form of divine intervention.  For example, even if one fine-tuned conventional 0 and 1 qubit states (e.g., resulting from the Andreev bound states mentioned above) to be exactly degenerate, local noise could still readily produce dephasing.
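To see the scaling in action, here is a toy Python sketch; the prefactors and units are invented placeholders, chosen only to make the trend visible.

```python
import numpy as np

# Toy illustration of the topological-qubit scaling relation: as the
# Majorana separation L grows (in units of the coherence length xi),
# the splitting omega dies off exponentially while T2 grows exponentially,
# so their product stays fixed.
xi = 1.0
for L in [1, 2, 4, 8]:
    omega = np.exp(-L / xi)   # qubit-state splitting, arbitrary units
    T2 = np.exp(+L / xi)      # dephasing time, arbitrary units
    print(f"L/xi = {L}: omega ~ {omega:.1e}, T2 ~ {T2:.1e}, "
          f"omega * T2 = {omega * T2:.1f}")
```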

As discussed in Aasen et al., this topological-qubit scaling relation can be tested experimentally via Ramsey-like protocols in a setup that might look something like the following:

Proposed device for testing the scaling relation, from Aasen et al.

This device contains two adjacent Majorana wires (orange rectangles) with couplings controlled by local gates (“valves” represented by black switches).  Incidentally, the design was inspired by a gate-controlled variation of the transmon pioneered in Larsen et al. and de Lange et al.  In fact, if only charge noise were present, we wouldn’t stand to gain much in the way of coherence times: both the transmon and topological qubit would yield exponentially long T_2 times.  But once again, other noise sources can efficiently dephase the transmon, whereas a topological qubit enjoys exponential protection from all sources of local noise.  Mathematically, this distinction occurs because the splitting for transmon qubit states is exponentially flat only with respect to variations in a “gate offset” n_g.  For the topological qubit, the splitting is exponentially flat with respect to variations in all external parameters (e.g., magnetic field, chemical potential, etc.), so long as Majorana modes still survive.  (By “exponentially flat” we mean constant up to exponentially small deviations.)  Plotting the energies of the qubit states in the two respective cases versus external parameters, the situation can be summarized as follows:

Energies of the qubit states versus external parameters, for the transmon and the topological qubit.

Outlook: Toward “topological quantum ascendancy”

These qubit-validation experiments constitute a small stepping stone toward building a universal topological quantum computer.  Explicitly demonstrating exponentially protected quantum information as discussed above would, nevertheless, go a long way toward establishing practical utility of Majorana-based topological qubits.  One might even view this goal as single-qubit-level “topological quantum ascendancy”.  Completion of this milestone would further set the stage for implementing “perfect” quantum gates, which requires similar capabilities albeit in more complex devices.  Researchers at Microsoft and elsewhere have their sights set on bringing a prototype topological qubit to life in the very near future.  It is not unreasonable to anticipate that 2018 will mark the debut of the topological qubit.  We could of course be off target.  There is, after all, still plenty of time in 2017 to prove us wrong.

Taming wave functions with neural networks

Note from Nicole Yunger Halpern: One sunny Saturday this spring, I heard Sam Greydanus present about his undergraduate thesis. Sam was about to graduate from Dartmouth with a major in physics. He had worked with quantum-computation theorist Professor James Whitfield. The presentation — about applying neural networks to quantum computation — so intrigued me that I asked him to share his research on Quantum Frontiers. Sam generously agreed; this is his story.

Wave functions in the wild


The wave function, \psi , is a mixed blessing. At first, it causes unsuspecting undergrads (me) some angst via the Schrödinger’s cat paradox. This angst morphs into full-fledged panic when they encounter concepts such as nonlocality and Bell’s theorem (which, by the way, is surprisingly hard to verify experimentally). The real trouble with \psi , though, is that it grows exponentially with the number of entangled particles in a system. We couldn’t even hope to write down the wave function of 100 entangled particles, much less perform computations on it… but there’s a lot to gain from doing just that.

The thing is, we (a couple of luckless physicists) love \psi . Manipulating wave functions can give us ultra-precise timekeeping, secure encryption, and polynomial-time factoring of integers (read: break RSA). Harnessing quantum effects can also produce better machine learning, better physics simulations, and even quantum teleportation.

Taming the beast

Though \psi grows exponentially with the number of particles in a system, most physical wave functions can be described with a lot less information. Two algorithms for doing this are the Density Matrix Renormalization Group (DMRG) and Quantum Monte Carlo (QMC).


Density Matrix Renormalization Group (DMRG). Imagine we want to learn about trees, but studying a full-grown, 50-foot tall tree in the lab is too unwieldy. One idea is to keep the tree small, like a bonsai tree. DMRG is an algorithm which, like a bonsai gardener, prunes the wave function while preserving its most important components. It produces a compressed version of the wave function called a Matrix Product State (MPS). One issue with DMRG is that it doesn’t extend particularly well to 2D and 3D systems.
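For concreteness, here is a minimal Python sketch of the truncation step at DMRG’s core: split the wave function across a bond, take a singular value decomposition, and keep only the largest singular values. The random state below compresses poorly; physical ground states, with their limited entanglement, are where the pruning shines.

```python
import numpy as np

# The "bonsai pruning" at the heart of DMRG, in miniature: reshape a wave
# function into a matrix across a bond, SVD it, and keep the chi largest
# singular values.
n_left, n_right, chi = 8, 8, 2       # Hilbert-space dims and bond dimension

rng = np.random.default_rng(0)
psi = rng.normal(size=(n_left, n_right))
psi /= np.linalg.norm(psi)           # a (random) normalized wave function

U, s, Vt = np.linalg.svd(psi, full_matrices=False)
psi_pruned = U[:, :chi] * s[:chi] @ Vt[:chi, :]   # the compressed version

print("discarded weight:", 1 - np.sum(s[:chi] ** 2))
```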


Quantum Monte Carlo (QMC). Another way to study the concept of “tree” in a lab (bear with me on this metaphor) would be to study a bunch of leaf, seed, and bark samples. Quantum Monte Carlo algorithms do this with wave functions, taking “samples” of a wave function (pure states) and using the properties and frequencies of these samples to build a picture of the wave function as a whole. The difficulty with QMC is that it treats the wave function as a black box. We might ask, “how does flipping the spin of the third electron affect the total energy?” and QMC wouldn’t have much of a physical answer.
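As a toy illustration of the sampling idea, here is a hedged Python sketch of Metropolis sampling from |\psi|^2, with a made-up wave function standing in for the real thing.

```python
import numpy as np

# Metropolis sampling of spin configurations with probability |psi(s)|^2.
rng = np.random.default_rng(1)

def psi(s):
    """A fake 'wave function' that favors aligned neighbors (illustrative)."""
    return np.exp(0.5 * np.sum(s[:-1] * s[1:]))

s = rng.choice([-1, 1], size=10)
samples = []
for _ in range(5000):
    i = rng.integers(len(s))
    s_new = s.copy()
    s_new[i] *= -1                                  # propose one spin flip
    if rng.random() < (psi(s_new) / psi(s)) ** 2:   # Metropolis accept/reject
        s = s_new
    samples.append(s.copy())

# Neighboring spins should come out positively correlated.
print(np.mean([np.mean(x[:-1] * x[1:]) for x in samples]))
```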

Brains \gg Brawn

Neural Quantum States (NQS). Some state spaces are far too large for even Monte Carlo to sample adequately. Suppose now we’re studying a forest full of different species of trees. If one type of tree vastly outnumbers the others, choosing samples from random trees isn’t an efficient way to map biodiversity. Somehow, we need to make the sampling process “smarter”. Last year, Google DeepMind used a technique called deep reinforcement learning to do just that – and achieved fame for defeating the world champion human Go player. A recent Science paper by Carleo and Troyer (2017) used the same technique to make QMC “smarter” and effectively compress wave functions with neural networks. This approach, called “Neural Quantum States (NQS)”, produced several state-of-the-art results.
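For the curious, here is a minimal Python sketch of the restricted-Boltzmann-machine form that the NQS paper uses to compress a wave function. The weights below are random placeholders; in the real method they are complex-valued and trained variationally.

```python
import numpy as np

# A neural quantum state a la Carleo & Troyer: a restricted Boltzmann
# machine mapping a spin configuration s (entries +/-1) to an amplitude,
#   psi(s) = exp(a . s) * prod_j 2 cosh(b_j + W_j . s).
n_spins, n_hidden = 6, 12
rng = np.random.default_rng(2)
a = rng.normal(scale=0.1, size=n_spins)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)             # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n_spins))  # couplings

def nqs_amplitude(s):
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

print(nqs_amplitude(rng.choice([-1, 1], size=n_spins)))
```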


The general idea of my thesis.

My thesis. My undergraduate thesis centered upon much the same idea. In fact, I had to abandon some of my initial work after reading the NQS paper. I then focused on using machine learning techniques to obtain MPS coefficients. Like Carleo and Troyer, I used neural networks to approximate  \psi . Unlike Carleo and Troyer, I trained my model to output a set of Matrix Product State coefficients which have physical meaning (MPS coefficients always correspond to a certain state and site, e.g. “spin up, electron number 3”).

Cool – but does it work?

Yes – for small systems. In my thesis, I considered a toy system of 4 spin-\frac{1}{2} particles interacting via the Heisenberg Hamiltonian. Solving this system is not difficult, so I was able to focus on fitting the two disparate parts – machine learning and Matrix Product States – together.
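For reference, here is a compact Python sketch of exactly diagonalizing that kind of toy system; the open boundary conditions and J = 1 normalization are my own illustrative choices.

```python
import numpy as np

# Exact diagonalization of a 4-site spin-1/2 Heisenberg chain,
# H = sum_i S_i . S_{i+1} (open boundaries, J = 1).
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i, n=4):
    """Embed a single-site operator at site i of an n-site chain."""
    mats = [op if j == i else I2 for j in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H = sum(site_op(s, i) @ site_op(s, i + 1)
        for i in range(3) for s in (sx, sy, sz))

# Negative for the antiferromagnet; the benchmark a trained model must hit.
print("ground-state energy:", np.linalg.eigvalsh(H)[0])
```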

Success! My model solved for ground states with arbitrary precision. Even more interestingly, I used it to automatically obtain MPS coefficients. Shown below, for example, is a visualization of my model’s coefficients for the GHZ state, compared with coefficients taken from the literature.


A visual comparison of a 4-site Matrix Product State for the GHZ state a) listed in the literature b) obtained from my neural network model. Colored squares correspond to real-valued elements of 2×2 matrices.

Limitations. The careful reader might point out that, according to the schema of my model (above), I still have to write out the full wave function. To scale my model up, I instead trained it variationally over a subspace of the Hilbert space (just as the authors of the NQS paper did). Results are decent for larger (10-20 particle) systems, but the training itself is still unstable. I’ll finish ironing out the details soon, so keep an eye on arXiv* :).

Outside the ivory tower


A quantum computer developed by the Joint Quantum Institute, U. Maryland.

Quantum computing is a field that’s poised to take on commercial relevance. Taming the wave function is one of the big hurdles we need to clear before this happens. Hopefully my findings will have a small role to play in making this happen.

On a more personal note, thank you for reading about my work. As a recent undergrad, I’m still new to research and I’d love to hear constructive comments or criticisms. If you found this post interesting, check out my research blog.

*arXiv is an online library for electronic preprints of scientific papers