It’s the beginning of another summer, and I’m looking forward to outdoor barbecues, swimming in lakes and pools, and sharing my home-made ice cream with friends and family. One thing that I won’t encounter this summer, but I did last year, is a Canada goose. In summer 2023, I ventured north from the University of Maryland – College Park to Waterloo, Canada, for a position at the University of Waterloo. The university houses the Institute for Quantum Computing (IQC), and the Perimeter Institute (PI) for Theoretical Physics is nearby. I spent my summer at these two institutions because I was accepted into the IQC’s Undergraduate School on Experimental Quantum Information Processing (USEQIP) and received an Undergraduate Research Award. I’ll detail my experiences in the program and the fun social activities I participated in along the way.
For my first two weeks in Waterloo, I participated in USEQIP. This program is an intense boot camp in quantum hardware. I learned about many quantum-computing platforms, including trapped ions, superconducting circuits, and nuclear magnetic resonance systems. There were interactive lab sessions where I built a low-temperature thermometer, assembled a quantum key distribution setup, and designed a nuclear-magnetic-resonance experiment on the quantum Zeno effect. We also toured the IQC’s numerous research labs and their nano-fabrication clean room. I learned a lot from these two weeks, and I settled into life in goose-filled Waterloo, trying to avoid goose poop on my daily walks around campus.
Once USEQIP ended, I began the work for my Undergraduate Research Award, joining Dr. Raymond Laflamme’s group. My job was to read Dr. Laflamme’s soon-to-be-published textbook about quantum hardware, which he co-wrote with graduate student Shayan Majidy and Dr. Chris Wilson. I read through the sections, checking for clarity and equation errors. I also worked through the textbook’s exercises to ensure they were appropriate for the book. Additionally, I contributed figures to the book.
The most challenging part of this work was completing the exercises. I would become frustrated with the complex problems, sometimes toiling over a single problem for over three hours. My frustrations were aggravated when I asked Shayan for help and discovered that what had cost me hours of bitter labor was, to him, a simple trick I had not seen. I had to remind myself that I had been asked to test-drive this textbook because I am the target audience for it. I offered an authentic undergraduate perspective on the material that would be valuable to the book’s development. Despite the challenges, I successfully completed my book review, and Shayan sent the textbook for publication at the beginning of August.
Afterward, I moved on to another project: the quantum thermodynamics research that I conduct with Dr. Nicole Yunger Halpern. My work with Dr. Yunger Halpern concerns systems with noncommuting charges. I run numerical calculations on these systems to understand how they thermalize internally. I enjoyed working at both the IQC and the Perimeter Institute with their wonderful office views and free coffee.
Midway through the summer, Dr. Laflamme’s former and current students celebrated his 60th birthday with a birthday conference. As one of his newest students, I had a wonderful time meeting many of his past students who’ve had exciting careers following their graduation from the group. During the birthday conference, we had six hours of talks daily, but these were not traditional research talks. The talks were on any topic the speaker wanted to share with the audience. I learned about how a senior data scientist at TD Bank uses machine learning, a museum exhibit organized by the University of Waterloo called Quantum: The Exhibition, and photonic quantum science at the Raman Research Institute. For the socializing portion, we played street hockey and enjoyed delicious sushi, sandwiches, and pastries. By coincidence, Dr. Laflamme’s birthday and mine are one day apart!
Outside of my work, I spent almost every weekend exploring Ontario. I beheld the majesty of Niagara Falls for the first time; I visited Canada’s wine country, Niagara-on-the-Lake; I met with friends and family in Toronto; I stargazed with the hope of seeing the aurora borealis (unfortunately, the Northern Lights did not appear). I also joined a women’s ultimate frisbee team, PPF (sorry, we can’t tell you what it stands for), during my stay in Canada. I had a blast getting to play while sharpening my skills for the collegiate ultimate frisbee season. Finally, my summer would not have been great without the friendships that I formed with my fellow USEQIP undergraduates. We shared more than just meals; we shared our hopes and dreams, and I am so lucky to have met such inspiring people.
Though my summer in Waterloo has come to an end now, I’ll never forget the incredible experiences I had.
This is the final part of a four-part series covering the recent Perspective on noncommuting charges. I’ve been posting one part every ~5 weeks leading up to my PhD thesis defence. You can find Part 1 here, Part 2 here, and Part 3 here.
In four months, I’ll embark on the adventure of a lifetime—fatherhood.
To prepare, I’ve been honing a quintessential father skill—storytelling. If my son inherits even a fraction of my tastes, he’ll soon develop a passion for film noir detective stories. And really, who can resist the allure of a hardboiled detective, a femme fatale, moody chiaroscuro lighting, and plot twists that leave you reeling? For the uninitiated, here’s a quick breakdown of the genre.
To sharpen my storytelling skills, I’ve decided to channel my inner noir writer and craft this final blog post—the opportunities for future work, as outlined in the Perspective—in that style.
Theft at the Quantum Frontier
Under the dim light of a flickering bulb, private investigator Max Kelvin leaned back in his creaky chair, nursing a cigarette. The steady patter of rain against the window was interrupted by the creak of the office door. In walked trouble. Trouble with a capital T.
She was tall, moving with a confident stride that barely masked the worry lines etched into her face. Her dark hair was pulled back in a tight bun, and her eyes were as sharp as the edges of the papers she clutched in her gloved hand.
“Mr. Kelvin?” she asked, her voice a low, smoky whisper.
“That’s what the sign says,” Max replied, taking a long drag of his cigarette, the ember glowing a fiery red. “What can I do for you, Miss…?”
“Doctor,” she corrected, her tone firm, “Shayna Majidy. I need your help. Someone’s about to scoop my research.”
Max’s eyebrows arched. “Scooped? You mean someone stole your work?”
“Yes,” Shayna said, frustration seeping into her voice. “I’ve been working on noncommuting charge physics, a topic recently highlighted in a Perspective article. But someone has stolen my paper. We need to find who did it before they send it to the local rag, The Ark Hive.”
Max leaned forward, snuffing out his cigarette and grabbing his coat in one smooth motion. “Alright, Dr. Majidy, let’s see where your work might have wandered off to.”
They started their investigation with Joey “The Ant” Guzman, an experimental physicist whose lab was a tangled maze of gleaming equipment. Superconducting qubits, quantum dots, ultracold atoms, quantum optics, and optomechanics cluttered the room, each device buzzing with the hum of cutting-edge science. Joey earned his nickname due to his meticulous and industrious nature, much like an ant in its colony.
Guzman was a prime suspect, Shayna had whispered as they approached. His experiments could validate the predictions of noncommuting charges. “The first test of noncommuting-charge thermodynamics was performed with trapped ions,” she explained, her voice low and tense. “But there’s a lot more to explore—decreased entropy production rates, increased entanglement, to name a couple. There are many platforms to test these results, and Guzman knows them all. It’s a major opportunity for future work.”
Guzman looked up from his work as they entered, his expression guarded. “Can I help you?” he asked, wiping his hands on a rag.
Max stepped forward, his eyes scanning the room. “A rag? I guess you really are a quantum mechanic.” He paused for laughter, but only silence answered. “We’re investigating some missing research,” he said, his voice calm but edged with intensity. “You wouldn’t happen to know anything about noncommuting charges, would you?”
Guzman’s eyes narrowed, a flicker of suspicion crossing his face. “Almost everyone is interested in that right now,” he replied cautiously.
Shayna stepped forward, her eyes boring into Guzman’s. “So what’s stopping you from doing experimental tests? Do you have enough qubits? Long enough decoherence times?”
Guzman shifted uncomfortably but kept his silence. Max took another drag of his cigarette, the smoke curling around his thoughts. “Alright, Guzman,” he said finally. “If you think of anything that might help, you know where to find us.”
As they left the lab, Max turned to Shayna. “He’s hiding something,” he said quietly. “But whether it’s your work or how noisy and intermediate-scale his hardware is, we need more to go on.”
Shayna nodded, her face set in grim determination. The rain had stopped, but the storm was just beginning.
Their next stop was the dimly lit office of Alex “Last Piece” Lasek, a puzzle enthusiast with a sudden obsession with noncommuting charge physics. The room was a chaotic labyrinth, papers strewn haphazardly, each covered with intricate diagrams and cryptic scrawlings. The stale aroma of old coffee and ink permeated the air.
Lasek was hunched over his desk, scribbling furiously, his eyes darting across the page. He barely acknowledged their presence as they entered. “Noncommuting charges,” he muttered, his voice a gravelly whisper, “they present a fascinating puzzle. They hinder thermalization in some ways and enhance it in others.”
“Last Piece Lasek, I presume?” Max’s voice sliced through the dense silence.
Lasek blinked, finally lifting his gaze. “Yeah, that’s me,” he said, pushing his glasses up the bridge of his nose. “Who wants to know?”
“Max Kelvin, private eye,” Max replied, flicking his card onto the cluttered desk. “And this is Dr. Majidy. We’re investigating some missing research.”
Shayna stepped forward, her eyes sweeping the room like a hawk. “I’ve read your papers, Lasek,” she said, her tone a blend of admiration and suspicion. “You live for puzzles, and this one’s as tangled as they come. How do you plan to crack it?”
Lasek shrugged, leaning back in his creaky chair. “It’s a tough nut,” he admitted, a sly smile playing at his lips. “But I’m no thief, Dr. Majidy. I’m more interested in solving the puzzle than in academic glory.”
As they exited Lasek’s shadowy lair, Max turned to Shayna. “He’s a riddle wrapped in an enigma, but he doesn’t strike me as a thief.”
Shayna nodded, her expression grim. “Then we keep digging. Time’s slipping away, and we’ve got to find the missing pieces before it’s too late.”
Their third stop was the office of Billy “Brass Knuckles,” a classical physicist infamous for his no-nonsense attitude and a knack for punching holes in established theories.
Max’s skepticism was palpable as they entered the office. “He’s a classical physicist; why would he give a damn about noncommuting charges?” he asked Shayna, raising an eyebrow.
Billy, overhearing Max’s question, let out a gravelly chuckle. “It’s not as crazy as it sounds,” he said, his eyes glinting with amusement. “Sure, the noncommutation of observables is at the core of quantum quirks like uncertainty, measurement disturbances, and the Einstein-Podolsky-Rosen paradox.”
Max nodded slowly, “Go on.”
“However,” Billy continued, leaning forward, “classical mechanics also deals with quantities that don’t commute, like rotations around different axes. So, how unique is noncommuting-charge thermodynamics to the quantum realm? What parts of this new physics can we find in classical systems?”
Shayna crossed her arms, a devious smile playing on her lips. “Wouldn’t you like to know?”
“Wouldn’t we all?” Billy retorted, his grin mirroring hers. “But I’m about to retire. I’m not the one sneaking around your work.”
Max studied Billy for a moment longer, then nodded. “Alright, Brass Knuckles. Thanks for your time.”
As they stepped out of the shadowy office and into the damp night air, Shayna turned to Max. “Another dead end?”
Max nodded and lit a cigarette, the smoke curling into the misty air. “Seems so. But the clock’s ticking, and we can’t afford to stop now.”
Their fourth suspect, Tony “Munchies” Munsoni, was a specialist in chaos theory and thermodynamics, with an insatiable appetite for both science and snacks.
“Another non-quantum physicist?” Max muttered to Shayna, raising an eyebrow.
Shayna nodded, a glint of excitement in her eyes. “The most thrilling discoveries often happen at the crossroads of different fields.”
Dr. Munsoni looked up from his desk as they entered, setting aside his bag of chips with a wry smile. “I’ve read the Perspective article,” he said, getting straight to the point. “I agree—every chaotic or thermodynamic phenomenon deserves another look under the lens of noncommuting charges.”
Max leaned against the doorframe, studying Munsoni closely.
“We’ve seen how they shake up the Eigenstate Thermalization Hypothesis, monitored quantum circuits, fluctuation relations, and Page curves,” Munsoni continued, his eyes alight with intellectual fervour. “There’s so much more to uncover. Think about their impact on diffusion coefficients, transport relations, thermalization times, out-of-time-ordered correlators, operator spreading, and quantum-complexity growth.”
Shayna leaned in, clearly intrigued. “Which avenue do you think holds the most promise?”
Munsoni’s enthusiasm dimmed slightly, his expression turning regretful. “I’d love to dive into this, but I’m swamped with other projects right now. Give me a few months, and then you can start grilling me.”
Max glanced at Shayna, then back at Munsoni. “Alright, Munchies. If you hear anything or stumble upon any unusual findings, keep us in the loop.”
As they stepped back into the dimly lit hallway, Max turned to Shayna. “I saw his calendar; he’s telling the truth. His schedule is too packed to be stealing your work.”
Shayna’s shoulders slumped slightly. “Maybe. But we’re not done yet. The clock’s ticking, and we’ve got to keep moving.”
Finally, they turned to a pair of researchers dabbling in the peripheries of quantum thermodynamics. One was Twitch Uppity, an expert on non-Abelian gauge theories. The other, Jada LeShock, specialized in hydrodynamics and heavy-ion collisions.
Max leaned against the doorframe, his voice casual but probing. “What exactly are non-Abelian gauge theories?” he asked (setting up the exposition for the Quantum Frontiers reader’s benefit).
Uppity looked up, his eyes showing the weary patience of someone who had explained this concept countless times. “Imagine different particles interacting, like magnets and electric charges,” he began, his voice steady. “We describe the rules for these interactions using mathematical objects called ‘fields.’ These rules are called field theories. Electromagnetism is one example. Gauge theories are a class of field theories where the laws of physics are invariant under certain local transformations. This means that a gauge theory includes more degrees of freedom than the physical system it represents. We can choose a ‘gauge’ to eliminate the extra degrees of freedom, making the math simpler.”
Max nodded slowly, his eyes fixed on Uppity. “Go on.”
“These transformations form what is called a gauge group,” Uppity continued, taking a sip of his coffee. “Electromagnetism is described by the gauge group U(1). Other interactions are described by more complex gauge groups. For instance, quantum chromodynamics, or QCD, uses an SU(3) symmetry and describes the strong force that binds particles in atomic nuclei. QCD is a non-Abelian gauge theory because its gauge group is noncommutative. This leads to many intriguing effects.”
“I see the noncommuting part,” Max stated, trying to keep up. “But, what’s the connection to noncommuting charges in quantum thermodynamics?”
“That’s the golden question,” Shayna interjected, excitement in her voice. “QCD is built on a non-Abelian group, so particle physics may exhibit phenomena related to noncommuting charges in thermodynamics.”
“May is the keyword,” Uppity replied. “In QCD, the symmetry is local, unlike the global symmetries described in the Perspective. An open question is how much noncommuting-charge quantum thermodynamics applies to non-Abelian gauge theories.”
Max turned his gaze to Jada. “How about you? What are hydrodynamics and heavy-ion collisions?” he asked, setting up more exposition.
Jada dropped her pencil and raised her head. “Hydrodynamics is the study of fluid motion and the forces acting on them,” she began. “We focus on large-scale properties, assuming that even if the fluid isn’t in equilibrium as a whole, small regions within it are. Hydrodynamics can explain systems in condensed matter and stages of heavy-ion collisions—collisions between large atomic nuclei at high speeds.”
“Where does the non-Abelian part come in?” Max asked, his curiosity piqued.
“Hydrodynamics researchers have identified specific effects caused by non-Abelian symmetries,” Jada answered. “These include non-Abelian contributions to conductivity, effects on entropy currents, and shortening neutralization times in heavy-ion collisions.”
“Are you looking for more effects due to non-Abelian symmetries?” Shayna asked, her interest clear. “A long-standing question is how heavy-ion collisions thermalize. Maybe the non-Abelian ETH would help explain this?”
Jada nodded, a faint smile playing on her lips. “That’s the hope. But as with all cutting-edge research, the answers are elusive.”
Max glanced at Shayna, his eyes thoughtful. “Let’s wrap this up. We’ve got some thinking to do.”
After hearing from each researcher, Max and Shayna found themselves back at the office. The dim light of the flickering bulb cast long shadows on the walls. Max poured himself a drink. He offered one to Shayna, who declined, her eyes darting around the room, betraying her nerves.
“So,” Max said, leaning back in his chair, the creak of the wood echoing in the silence. “Everyone seems to be minding their own business. Well…” Max paused, taking a slow sip of his drink, “almost everyone.”
Shayna’s eyes widened, a flicker of panic crossing her face. “I’m not sure who you’re referring to,” she said, her voice wavering slightly. “Did you figure out who stole my work?” She took a seat, her discomfort apparent.
Max stood up and began circling Shayna’s chair like a predator stalking its prey. His eyes were sharp, scrutinizing her every move. “I couldn’t help but notice all the questions you were asking and your eyes peeking onto their desks.”
Shayna sighed, her confident façade cracking under the pressure. “You’re good, Max. Too good… No one stole my work.” Shayna looked down, her voice barely above a whisper. “I read that Perspective article. It mentioned all these promising research avenues. I wanted to see what others were working on so I could get a jump on them.”
Max shook his head, a wry smile playing on his lips. “You tried to scoop the scoopers, huh?”
Shayna nodded, looking somewhat sheepish. “I guess I got a bit carried away.”
Max chuckled, pouring himself another drink. “Science is a tough game, Dr. Majidy. Just make sure next time you play fair.”
As Shayna left the office, Max watched the rain continue to fall outside. His thoughts lingered on the strange case, a world where the race for discovery was cutthroat and unforgiving. But even in the darkest corners of competition, integrity was a prize worth keeping…
That concludes my four-part series on our recent Perspective article. I hope you had as much fun reading them as I did writing them.
This is the third part of a four-part series covering the recent Perspective on noncommuting charges. I’ll post one part every ~5 weeks leading up to my PhD thesis defence. You can find Part 1 here and Part 2 here.
If Hamlet had been a system of noncommuting charges, his famous soliloquy may have gone like this…
To thermalize, or not to thermalize, that is the question:
Whether ’tis more natural for the system to suffer
The large entanglement of thermalizing dynamics,
Or to take arms against the ETH
And by opposing inhibit it. To die—to thermalize,
No more; and by thermalization to say we end
The dynamical symmetries and quantum scars
That complicate dynamics: ’tis a consummation
Devoutly to be wish’d. To die, to thermalize;
To thermalize, perchance to compute—ay, there’s the rub:
For in that thermalization our quantum information decoheres,
When our coherence has shuffled off this quantum coil,
Must give us pause—there’s the respect
That makes calamity of resisting thermalization.
Hamlet (the quantum steampunk edition)
In the original play, Hamlet grapples with the dilemma of whether to live or die. Noncommuting charges have a dilemma regarding whether they facilitate or impede thermalization. Among the five research opportunities highlighted in the Perspective article, resolving this debate is my favourite opportunity due to its potential implications for quantum technologies. A primary obstacle in developing scalable quantum computers is mitigating decoherence; here, thermalization plays a crucial role. If systems with noncommuting charges are shown to resist thermalization, they may contribute to quantum technologies that are more resistant to decoherence. Systems with noncommuting charges, such as spin systems and squeezed states of light, naturally occur in quantum computing models like quantum dots and optical approaches. This possibility is further supported by recent advances demonstrating that non-Abelian symmetric operations are universal for quantum computing (see references 1 and 2).
In this penultimate blog post of the series, I will review some results that argue both in favour of and against noncommuting charges hindering thermalization. This discussion includes content from Sections III, IV, and V of the Perspective article, along with a dash of some related works at the end—one I recently posted and another I recently found. The results I will review do not directly contradict one another because they arise from different setups. My final blog post will delve into the remaining parts of the Perspective article.
Arguments for hindering thermalization
The first argument supporting the idea that noncommuting charges hinder thermalization is that they can reduce the production of thermodynamic entropy. In their study, Manzano, Parrondo, and Landi explore a collisional model involving two systems, each composed of numerous subsystems. In each “collision,” one subsystem from each system is randomly selected to “collide.” These subsystems undergo a unitary evolution during the collision and are subsequently returned to their original systems. The researchers derive a formula for the entropy production per collision within a certain regime (the linear-response regime). Notably, one term of this formula is negative if and only if the charges do not commute. Since thermodynamic entropy production is a hallmark of thermalization, this finding implies that systems with noncommuting charges may thermalize more slowly. Two other extensions support this result.
The second argument stems from an essential result in quantum computing: every algorithm you want to run on a quantum computer can be broken down into gates acting on only one or two qubits (the building blocks of quantum computers). Marvian’s research reveals that this principle fails when dealing with charge-conserving unitaries. For instance, consider the charge as energy. Marvian’s results suggest that energy-preserving interactions between neighbouring qubits don’t suffice to construct all energy-preserving interactions across all qubits. The restrictions become more severe when dealing with noncommuting charges. Local interactions that preserve noncommuting charges impose stricter constraints on the system’s overall dynamics than commuting charges do. These constraints could potentially reduce chaos, which tends to lead to thermalization.
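To make “charge-conserving unitary” concrete, here is a toy check of my own (a minimal Python sketch, not anything from Marvian’s paper): take the charge to be the total Z component of two qubits, and compare a gate generated by an XX + YY coupling, which conserves that charge, with a single-qubit flip, which does not.

```python
import numpy as np
from scipy.linalg import expm

# Single-qubit operators
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

# The charge: total Z component of two qubits (a stand-in for a conserved quantity)
Q_total = np.kron(Z, I2) + np.kron(I2, Z)

theta = 0.3  # an arbitrary rotation angle

# A gate generated by an XX + YY ("hopping") interaction conserves the charge...
U_conserving = expm(-1j * theta * (np.kron(X, X) + np.kron(Y, Y)))
print(np.allclose(U_conserving @ Q_total, Q_total @ U_conserving))  # True

# ...whereas flipping a single qubit does not.
U_flip = np.kron(expm(-1j * theta * X), I2)
print(np.allclose(U_flip @ Q_total, Q_total @ U_flip))              # False
```

Marvian’s point is about which of these charge-conserving unitaries can be built by stitching together *local* charge-conserving pieces; the snippet only illustrates what membership in that class means.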
Adding to the evidence, we revisit the eigenstate thermalization hypothesis (ETH), which I discussed in my first post. The ETH essentially asserts that if an observable and Hamiltonian adhere to the ETH, the observable will thermalize. This means its expectation value stabilizes over time, aligning with the expectation value of the thermal state, albeit with some important corrections. Noncommuting charges cause all kinds of problems for the ETH, as detailed in these two posts by Nicole Yunger Halpern. Rather than reiterating Nicole’s succinct explanations, I’ll present the main takeaway: noncommuting charges undermine the ETH. This has led to the development of a non-Abelian version of the ETH by Murthy and collaborators. This new framework still predicts thermalization in many, but not all, cases. Under a reasonable physical assumption, the previously mentioned corrections to the ETH may be more substantial.
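For readers who want the statement in symbols, the standard textbook form of the ETH ansatz (not specific to the Perspective; notation follows common conventions) writes the matrix elements of an observable A in the energy eigenbasis as

A_{mn} = \mathcal{A}(\bar{E})\,\delta_{mn} + e^{-S(\bar{E})/2}\, f_A(\bar{E},\omega)\, R_{mn}, \qquad \bar{E} = \tfrac{E_m + E_n}{2}, \quad \omega = E_m - E_n,

where \mathcal{A} and f_A are smooth functions, S is the thermodynamic entropy, and R_{mn} is an erratic number of order one. This is the conventional, Abelian version; the non-Abelian ETH of Murthy and collaborators modifies this structure to account for the charges.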
If this story ended here, I would have needed to reference a different Shakespearean work. Fortunately, the internal conflict inherent in noncommuting charges aligns well with Hamlet. Noncommuting charges appear to impede thermalization in various aspects, yet paradoxically, they also seem to promote it in others.
Arguments for promoting thermalization
Among the many factors accompanying the thermalization of quantum systems, entanglement is one of the most studied. Last year, I wrote a blog post explaining how my collaborators and I constructed analogous models that differ in whether their charges commute. One of the paper’s results was that the model with noncommuting charges had higher average entanglement entropy. As a result of that blog post, I was invited to CBC’s “Quirks & Quarks” Podcast to explain, on national radio, whether quantum entanglement can explain the extreme similarities we see in identical twins who are raised apart. Spoilers for the interview: it can’t, but wouldn’t it be grand if it could?
Following up on that work, my collaborators and I introduced noncommuting charges into monitored quantum circuits (MQCs)—quantum circuits with mid-circuit measurements. MQCs offer a practical framework for exploring how, for example, entanglement is affected by the interplay between unitary dynamics and measurements. MQCs with no charges or with commuting charges have a weakly entangled phase (“area-law” phase) when the measurements are done often enough, and a highly entangled phase (“volume-law” phase) otherwise. However, in MQCs with noncommuting charges, this weakly entangled phase never exists. In its place, there is a critical phase marked by long-range entanglement. This finding supports our earlier observation that noncommuting charges tend to increase entanglement.
I recently looked at a different angle to this thermalization puzzle. It’s well known that most quantum many-body systems thermalize; some don’t. In those that don’t, what effect do noncommuting charges have? One paper that answers this question is covered in the Perspective. Here, Potter and Vasseur study many-body localization (MBL). Imagine a chain of spins that are strongly interacting. We can add a disorder term, such as an external field whose magnitude varies across sites on this chain. If the disorder is sufficiently strong, the system “localizes.” This implies that if we measured the expectation value of some property of each qubit at some time, it would maintain that same value for a while. MBL is one type of behaviour that resists thermalization. Potter and Vasseur found that noncommuting charges destabilize MBL, thereby promoting thermalizing behaviour.
In addition to the papers discussed in our Perspective article, I want to highlight two other studies that examine how systems can avoid thermalization. One mechanism is through the presence of “dynamical symmetries” (these are “spectrum-generating algebras” with a locality constraint). These are operators that act similarly to ladder operators for the Hamiltonian. For any observable that overlaps with these dynamical symmetries, the observable’s expectation value will continue to evolve over time and will not thermalize in accordance with the Eigenstate Thermalization Hypothesis (ETH). In my recent work, I demonstrate that noncommuting charges remove the non-thermalizing dynamics that emerge from dynamical symmetries.
Additionally, I came across a study by O’Dea, Burnell, Chandran, and Khemani, which proposes a method for constructing Hamiltonians that exhibit quantum scars. Quantum scars are unique eigenstates of the Hamiltonian that do not thermalize despite being surrounded by a spectrum of other eigenstates that do thermalize. Their approach involves creating a Hamiltonian with noncommuting charges and subsequently breaking the non-Abelian symmetry. When the symmetry is broken, quantum scars appear; if the non-Abelian symmetry is restored, the quantum scars vanish. These last three results suggest that noncommuting charges impede various types of non-thermalizing dynamics.
Unlike Hamlet, the narrative of noncommuting charges is still unfolding. I wish I could conclude with a dramatic finale akin to the duel between Hamlet and Laertes, Claudius’s poisoning, and the proclamation of a new heir to the Danish throne. However, that chapter is yet to be written. “To thermalize or not to thermalize?” We will just have to wait and see.
Imagine a billiard ball bouncing around on a pool table. High-school level physics enables us to predict its motion until the end of time using simple equations for energy and momentum conservation, as long as you know the initial conditions – how fast the ball is moving at launch, and in which direction.
What if you add a second ball? This makes things more complicated, but predicting the future state of this system would still be possible based on the same principles. What about if you had a thousand balls, or a million? Technically, you could still apply the same equations, but the problem would not be tractable in any practical sense.
Thermodynamics lets us make precise predictions about averaged (over all the particles) properties of complicated, many-body systems, like millions of billiard balls or atoms bouncing around, without needing to know the gory details. We can make these predictions by introducing the notion of probabilities. Even though the system is deterministic – we can in principle calculate the exact motion of every ball – there are so many balls in this system that the properties of the whole will be very close to the average properties of the balls. If you throw a six-sided die, the result is in principle deterministic and predictable, based on the way you throw it, but it’s in practice completely random to you – it could be 1 through 6, equally likely. But you know that if you cast a thousand dice, the average will be close to 3.5 – the average of all possibilities. Statistical physics enables us to calculate a probability distribution over the energies of the balls, which tells us everything about the average properties of the system. And because of entropy – the tendency of the system to go from ordered to disordered configurations – even if the probability distribution of the initial system is far from the one statistical physics predicts, after the system is allowed to bounce around and settle, the final distribution will be extremely close to a generic distribution that depends on average properties only. We call this the thermal distribution, and we call the process of the system mixing and settling into one of its most likely configurations thermalization.
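If you'd like to see the dice example in action, here is a minimal simulation (my own illustrative sketch in Python): a single roll is anyone's guess, the average of a thousand rolls is pinned near 3.5, and the fluctuations of that average shrink as the number of dice grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# One die: the outcome is effectively random to us, anywhere from 1 to 6.
print("one roll:", rng.integers(1, 7))

# A thousand dice: the average hugs 3.5, the mean over all possible outcomes.
print("average of 1000 rolls:", rng.integers(1, 7, size=1000).mean())

# The spread of the average shrinks roughly as 1/sqrt(N). This is why the bulk
# properties of ~10^23 particles look deterministic even though each particle's
# motion is unpredictable to us.
for n in (10, 1_000, 100_000):
    averages = [rng.integers(1, 7, size=n).mean() for _ in range(200)]
    print(f"N = {n:>6}: std of the average = {np.std(averages):.4f}")
```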
For a practical example – instead of billiard balls, consider a gas of air molecules bouncing around. The average energy of this gas is proportional to its temperature, which we can calculate from the probability distribution of energies. Being able to predict the temperature of a gas is useful for practical things like weather forecasting, cooling your home efficiently, or building an engine. The important properties of the initial state we needed to know – energy and number of particles – are conserved during the evolution, and we call them “thermodynamic charges”. They don’t actually need to be electric charges, although electric charge is a good example of a conserved quantity.
Let’s cross from the classical world – balls bouncing around – to the quantum one, which deals with elementary particles that can be entangled, or in a superposition. What changes when we introduce this complexity? Do systems even thermalize in the quantum world? Because of the above differences, we cannot in principle be sure that the mixing and settling of the system will happen just like in the classical cases of balls or gas molecules colliding.
It turns out that we can predict the thermal state of a quantum system using very similar principles and equations that let us do this in the classical case. Well, with one exception – what if we cannot simultaneously measure our critical quantities – the charges?
One of the quirks of quantum mechanics is that observing the state of the system can change it. Before the observation, the system might be in a quantum superposition of many states. After the observation, a definite classical value will be recorded on our instrument – we say that the system has collapsed to this state, and thus changed its state. There are certain observables that are mutually incompatible – we cannot know their values simultaneously, because observing one definite value collapses the system to a state in which the other observable is in a superposition. We call these observables noncommuting, because the order of observation matters – unlike in multiplication of numbers, which is a commuting operation you’re familiar with. 2 * 3 = 6, and also 3 * 2 = 6 – the order of multiplication doesn’t matter.
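Here is the same point in a few lines of Python (an illustrative sketch): multiplying numbers is order-independent, but multiplying the matrices that represent spin measurements along different axes is not.

```python
import numpy as np

print(2 * 3 == 3 * 2)  # True: numbers commute

# Pauli matrices represent spin observables along the x and z axes.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

print(np.array_equal(X @ Z, Z @ X))  # False: the order of these observables matters
print(X @ Z - Z @ X)                 # the commutator [X, Z] is nonzero
```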
Electron spin is a common example that entails noncommutation. In a simplified picture, we can think of spin as an axis of rotation of our electron in 3D space. Note that the electron doesn’t actually rotate in space, but it is a useful analogy – the property is “spin” for a reason. We can measure the spin along the x-, y-, or z-axis of a 3D coordinate system and obtain a definite positive or negative value, but this observation will result in a complete loss of information about spin in the other two perpendicular directions.
If we investigate a system that conserves the three spin components independently, we will be in a situation where the three conserved charges do not commute. We call them “non-Abelian” charges, because they enjoy a non-Abelian, that is, noncommuting, algebra. Will such a system thermalize, and if so, to what kind of final state?
This is precisely what we set out to investigate. Noncommutation of charges breaks the usual derivations of the thermal state, but researchers have managed to show that with non-Abelian charges, a subtly different non-Abelian thermal state (NATS) should emerge. Nicole Yunger Halpern at the Joint Center for Quantum Information and Computer Science (QuICS) at the University of Maryland and I collaborated with Amir Kalev from the Information Sciences Institute (ISI) at the University of Southern California, and with experimentalists from the University of Innsbruck (Florian Kranzl, Manoj Joshi, Rainer Blatt and Christian Roos), to observe thermalization in a non-Abelian system – and we’ve recently published this work in PRX Quantum.
The experimentalists used a device that can trap ions with electric fields, as well as manipulate and read out their states using lasers. Only select energy levels of these ions are used, which effectively makes them behave like electrons. The laser field can couple the ions in a way that approximates the Heisenberg Hamiltonian – an interaction that conserves the three total spin components individually. We thus construct the quantum system we want to study – multiple particles coupled with interactions that conserve noncommuting charges.
We conceptually divide the ions into a system of interest and an environment. The system of interest, which consists of two particles, is what we want to measure and compare to theoretical predictions. Meanwhile, the other ions act as the effective environment for our pair of ions – the environment ions interact with the pair in a way that simulates a large bath exchanging heat and spin.
If we start this total system in some initial state, and let it evolve under our engineered interaction for a long enough time, we can then measure the final state of the system of interest. To make the NATS distinguishable from the usual thermal state, I designed an initial state that is easy to prepare, and has the ions pointing in directions that result in high charge averages and relatively low temperature. High charge averages make the noncommuting nature of the charges more pronounced, and low temperature makes the state easy to distinguish from the thermal background. However, we also show that our experiment works for a variety of more-arbitrary states.
We let the system evolve from this initial state for as long as possible given experimental limitations, which was 15 ms. The experimentalists then used quantum state tomography to reconstruct the state of the system of interest. Quantum state tomography makes multiple measurements over many experimental runs to approximate the average quantum state of the system measured. We then check how close the measured state is to the NATS. We have found that it’s about as close as one can expect in this experiment!
And we know this because we have also implemented a different coupling scheme, one that doesn’t have non-Abelian charges. The expected thermal state in the latter case was reached within a distance that’s a little smaller than our non-Abelian case. This tells us that the NATS is almost reached in our experiment, and so it is a good, and the best known, thermal state for the non-Abelian system – we have compared it to competitor thermal states.
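For readers curious what “how close” means in practice: one compares two density matrices with a distance measure. The snippet below is only a generic sketch (the placeholder matrices are made up, and I'm using trace distance and Uhlmann fidelity as stand-ins for the measures actually reported in the paper).

```python
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    # Half the sum of the absolute eigenvalues of (rho - sigma).
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def fidelity(rho, sigma):
    # Uhlmann fidelity: (Tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2
    root = sqrtm(rho)
    return np.real(np.trace(sqrtm(root @ sigma @ root))) ** 2

# Placeholders standing in for the tomographically reconstructed state of the
# two-ion system and the predicted NATS (the real matrices come from the
# experiment and from theory, respectively).
rho_measured = np.diag([0.40, 0.30, 0.20, 0.10])
rho_nats     = np.diag([0.35, 0.30, 0.20, 0.15])

print("trace distance:", trace_distance(rho_measured, rho_nats))
print("fidelity:      ", fidelity(rho_measured, rho_nats))
```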
Working with the experimentalists directly has been a new experience for me. While I was focused on the theory and analyzing the tomography results they obtained, they needed to figure out practical ways to realize what we asked of them. I feel like each group has learned a lot about the tasks of the other. I have become well acquainted with the trapped-ion experiment and its capabilities and limitations. Overall, it has been great collaborating with the Austrian group.
Our result is exciting, as it’s the first experimental observation within the field of non-Abelian thermodynamics! This result was observed in a realistic, non-fine-tuned system that experiences non-negligible errors due to noise. So the system does thermalize after all. We have also demonstrated that the trapped ion experiment of our Austrian friends can be used to simulate interesting many-body quantum systems. With different settings and programming, other types of couplings can be simulated in different types of experiments.
The experiment also opened avenues for future work. The distance to the NATS was greater than the analogous distance to the Abelian system. This suggests that thermalization is inhibited by the noncommutation of charges, but more evidence is needed to justify this claim. In fact, our other recent paper in Physical Review B suggests the opposite!
As noncommutation is one of the core features that distinguishes classical and quantum physics, it is of great interest to unravel the fine differences non-Abelian charges can cause. But we also hope that this research can have practical uses. If thermalization is disrupted by noncommutation of charges, engineered systems featuring them could possibly be used to build quantum memory that is more robust, or maybe even reduce noise in quantum computers. We continue to explore noncommutation, looking for interesting effects that we can pin on it. I am currently working on verifying the workings of a hypothesis that explains when and why quantum systems thermalize internally.
This is the second part of a four-part series covering the recent Perspective on noncommuting charges. I’ll post one part every ~5 weeks leading up to my PhD thesis defence. You can find part 1 here.
Understanding a character’s origins enriches their narrative and motivates their actions. Take Batman as an example: without knowing his backstory, he appears merely as a billionaire who might achieve more by donating his wealth rather than masquerading as a bat to combat crime. However, with the context of his tragic past, Batman transforms into a symbol designed to instill fear in the hearts of criminals. Another example involves noncommuting charges. Without understanding their origins, the question “What happens when charges don’t commute?” might appear contrived or simply devised to occupy quantum information theorists and thermodynamicists. However, understanding the context of their emergence, we find that numerous established results unravel, for various reasons, in the face of noncommuting charges. In this light, noncommuting charges are much like Batman; their backstory adds to their intrigue and clarifies their motivation. Admittedly, noncommuting charges come with fewer costumes, outside the occasional steampunk top hat my advisor Nicole Yunger Halpern might sport.
In the early works I’m about to discuss, a common thread emerges: the initial breakdown of some well-understood derivations and the effort to establish a new derivation that accommodates noncommuting charges. These findings will illuminate, yet not fully capture, the multitude of results predicated on the assumption that charges commute. Removing this assumption is akin to pulling a piece from a Jenga tower, triggering a cascade of other results. Critics might argue, “If you’re merely rederiving known results, this field seems uninteresting.” However, the reality is far more compelling. As researchers diligently worked to reconstruct this theoretical framework, they have continually uncovered ways in which noncommuting charges might pave the way for new physics. That said, the exploration of these novel phenomena will be the subject of my next post, where we delve into the emerging physics. So, I invite you to stay tuned. Back to the history…
E.T. Jaynes’s 1957 formalization of the maximum entropy principle has a blink-and-you’ll-miss-it reference to noncommuting charges. Consider a quantum system, similar to the box discussed in Part 1, where our understanding of the system’s state is limited to the expectation values of certain observables. Our aim is to deduce a probability distribution for the system’s potential pure states that accurately reflects our knowledge without making unjustified assumptions. According to the maximum entropy principle, this objective is met by maximizing the entropy of the distribution, which serves as a measure of uncertainty. The resulting state is known as the generalized Gibbs ensemble. Jaynes noted that this reasoning, based on information theory for the generalized Gibbs ensemble, remains valid even when our knowledge is restricted to the expectation values of noncommuting charges. However, later scholars have highlighted that physically substantiating the generalized Gibbs ensemble becomes significantly more challenging when the charges do not commute. Due to this and other reasons, when the system’s charges do not commute, the generalized Gibbs ensemble is specifically referred to as the non-Abelian thermal state (NATS).
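To put a formula to it: maximizing the entropy subject to fixed expectation values of the charges Q_a yields the familiar exponential form (written here in the conventions I'm used to, with Lagrange multipliers \mu_a playing the role of generalized chemical potentials and \beta an inverse temperature):

\rho = \frac{1}{Z}\exp\!\left[-\beta\left(H - \sum_a \mu_a Q_a\right)\right], \qquad Z = \mathrm{Tr}\,\exp\!\left[-\beta\left(H - \sum_a \mu_a Q_a\right)\right].

This same expression, evaluated with noncommuting Q_a, is what the literature calls the NATS.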
For approximately 60 years, discussions about noncommuting charges remained dormant, apart from a few mentions here and there. This changed when two studies highlighted how noncommuting charges break commonplace thermodynamics derivations. The first of these, conducted by Matteo Lostaglio as part of his 2014 thesis, challenged expectations about a system’s free energy—a measure of the system’s capacity for performing work. Interestingly, one can define a free energy for each charge within a system. Imagine a scenario where a system with commuting charges comes into contact with an environment that also has commuting charges. We then evolve the system such that the total charges in both the system and the environment are conserved. This evolution alters the system’s information content and its correlation with the environment. This change in information content depends on a sum of terms. Each term depends on the average change in one of the environment’s charges and the change in the system’s free energy for that same charge. However, this neat distinction of terms according to each charge breaks down when the system and environment exchange noncommuting charges. In such cases, the terms cannot be cleanly attributed to individual charges, and the conventional derivation falters.
The second work delved into resource theories, a topic discussed at length in Quantum Frontiers blog posts. In short, resource theories are frameworks used to quantify how effectively an agent can perform a task subject to some constraints. For example, consider all allowed evolutions (those conserving energy and other charges) one can perform on a closed system. From these evolutions, what system can you not extract any work from? The answer is systems in thermal equilibrium. The method used to determine the thermal state’s structure also fails when the system includes noncommuting charges. Building on this result, three groups (one, two, and three) presented physically motivated derivations of the form of the thermal state for systems with noncommuting charges using resource-theory-related arguments. Ultimately, the form of the NATS was recovered in each work.
Just as re-examining Batman’s origin story unveils a deeper, more compelling reason behind his crusade against crime, diving into the history and implications of noncommuting charges reveals their untapped potential for new physics. Behind every mask—or theory—there can lie an untold story. Earlier, I hinted at how reevaluating results with noncommuting charges opens the door to new physics. A specific example, initially veiled in Part 1, involves the violation of the Onsager coefficients’ derivation by noncommuting charges. By recalculating these coefficients for systems with noncommuting charges, we discover that their noncommutation can decrease entropy production. In Part 3, we’ll delve into other new physics that stems from charges’ noncommutation, exploring how noncommuting charges, akin to Batman, can really pack a punch.
John Preskill, Richard P. Feynman Professor of Theoretical Physics at Caltech, has been named the 2024 John Stewart Bell Prize recipient. The prize honors John’s contributions in “the developments at the interface of efficient learning and processing of quantum information in quantum computation, and following upon long standing intellectual leadership in near-term quantum computing.” The committee cited John’s seminal work defining the concept of the NISQ (noisy intermediate-scale quantum) era, our joint work “Predicting Many Properties of a Quantum System from Very Few Measurements” proposing the classical shadow formalism, along with subsequent research that builds on classical shadows to develop new machine learning algorithms for processing information in the quantum world.
We are truly honored that our joint work on classical shadows played a role in John winning this prize. But as the citation implies, this is also a much-deserved “lifetime achievement” award. For the past two and a half decades, first at IQI and now at IQIM, John has cultivated a wonderful, world-class research environment at Caltech that celebrates intellectual freedom, while fostering collaborations between diverse groups of physicists, computer scientists, chemists, and mathematicians. John has said that his job is to shield young researchers from bureaucratic issues, teaching duties and the like, so that we can focus on what we love doing best. This extraordinary generosity of spirit has been responsible for seeding the world with some of the best minds in the field of quantum information science and technology.
It is in this environment that the two of us (Robert and Richard) met and first developed the rudimentary form of classical shadows — inspired by Scott Aaronson’s idea of shadow tomography. While the initial form of classical shadows is mathematically appealing and was appreciated by the theorists (it was a short plenary talk at the premier quantum information theory conference), it was deemed too abstract to be of practical use. As a result, when we submitted the initial version of classical shadows for publication, the paper was rejected. John not only recognized the conceptual beauty of our initial idea, but also pointed us towards a direction that blossomed into the classical shadows we know today. Applications range from enabling scientists to more efficiently understand engineered quantum devices, speeding up various near-term quantum algorithms, to teaching machines to learn and predict the behavior of quantum systems.
Congratulations John! Thank you for bringing this community together to do extraordinarily fun research and for guiding us throughout the journey.
This past summer, our quantum thermodynamics research group had the wonderful opportunity to visit the Dibner Rare Book Library in D.C. Located in a small corner of the Smithsonian National Museum of American History, tucked away behind flashier exhibits, the Dibner is home to thousands of rare books and manuscripts, some dating back many centuries.
Our advisor, Nicole Yunger Halpern, has a special connection to the Dibner, having interned there as an undergrad. She’s remained in contact with the head librarian, Lilla Vekerdy. For our visit, the two of them curated a large spread of scientific work related to thermodynamics, physics, and mathematics. The tomes ranged from a 1500s print of Euclid’s Elements to originals of Einstein’s manuscripts with hand-written notes in the margin.
The print of Euclid’s Elements was one of the standout exhibits. It featured a number of foldout nets of 3D solids, which had been cut and glued into the book by hand. Several hundred copies of this print are believed to have been made, each of them containing painstakingly crafted paper models. At the time, this technique was an innovation, resulting from printers’ explorations of the then-young art of large-scale book publication.
Another interesting exhibit was rough notes on ideal gases written by Planck, one of the fathers of quantum mechanics. Ideal gases are the prototypical model in statistical mechanics, capturing to high accuracy the behaviour of real gases within certain temperatures and pressures. The notes contained comparisons between Boltzmann, Ehrenfest, and Planck’s own calculations for classical and quantum ideal gases. Though the prose was in German, some results were instantly recognizable, such as the plot of the specific heat of a classical ideal gas, showing the stepwise jump as degrees of freedom freeze out.
Looking through these great physicists’ rough notes, scratched-out ideas, and personal correspondences was a unique experience, helping humanize them and place their work in historical context. Understanding the history of science doesn’t need to be just for historians; it can be useful for scientists themselves! Seeing how scientists persevered through unknowns, grappling with doubts and incomplete knowledge to generate new ideas, is inspiring. But when one only reads the final, polished result in a modern textbook, it can be difficult to appreciate this process of discovery. Another reason to study the historical development of scientific results is that core concepts have a way of arising time and again across science. Recognizing how these ideas have arisen in the past is insightful. Examining the creative processes of great scientists before us helps develop our own intuition and skillset.
Thanks to our advisor for this field trip – and make sure to check out the Dibner next time you’re in DC!
This is the first part in a four-part series covering the recent Perspective article on noncommuting charges. I’ll be posting one part every ~6 weeks leading up to my PhD thesis defence.
Thermodynamics problems have surprisingly many similarities with fairy tales. For example, most of them begin with a familiar opening. In thermodynamics, the phrase “Consider an isolated box of particles” serves a similar purpose to “Once upon a time” in fairy tales—both serve as a gateway to their respective worlds. Additionally, both have been around for a long time. Thermodynamics emerged in the Victorian era to help us understand steam engines, while Beauty and the Beast and Rumpelstiltskin, for example, originated about 4000 years ago. Moreover, each concludes with important lessons. In thermodynamics, we learn hard truths such as the futility of defying the second law, while fairy tales often impart morals like the risks of accepting apples from strangers. The parallels go on; both feature archetypal characters—such as wise old men and fairy godmothers versus ideal gases and perfect insulators—and simplified models of complex ideas, like portraying clear moral dichotomies in narratives versus assuming non-interacting particles in scientific models.1
Of all the ways thermodynamic problems are like fairy tales, one is most relevant to me: both have experienced modern reimaginings. Sometimes, all you need is a little twist to liven things up. In thermodynamics, noncommuting conserved quantities, or charges, have added a twist.
First, let me recap some of my favourite thermodynamic stories before I highlight the role that the noncommuting-charge twist plays. The first is the inevitability of the thermal state. For example, this means that, at most times, the state of most sufficiently small subsystems within the box will be close to a specific form (the thermal state).
The second is an apparent paradox that arises in quantum thermodynamics: How do the reversible processes inherent in quantum dynamics lead to irreversible phenomena such as thermalization? If you’ve been keeping up with Nicole Yunger Halpern‘s (my PhD co-advisor and fellow fan of fairy tales) recent posts on the eigenstate thermalization hypothesis (ETH) (part 1 and part 2), you already know the answer. The expectation value of a quantum observable is often a sum of terms with various phases. As time passes, these phases tend to experience destructive interference, leading to a stable expectation value over a longer period. This stable value tends to align with a thermal state’s. Thus, despite the apparent paradox, stationary dynamics in quantum systems are commonplace.
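A toy numerical illustration of this dephasing (my own sketch, not taken from Nicole's posts): take a random spectrum, a random initial state, and a random observable, and watch the expectation value settle toward the "diagonal-ensemble" value as the off-diagonal phases wash out.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                              # number of energy eigenstates in play

E = rng.normal(size=n)               # toy eigenenergies (hbar = 1)
c = rng.normal(size=n) + 1j * rng.normal(size=n)
c /= np.linalg.norm(c)               # initial-state amplitudes in the energy basis
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                    # a random Hermitian observable

def expectation(t):
    """<A>(t) for the state with amplitudes c_k exp(-i E_k t)."""
    psi_t = c * np.exp(-1j * E * t)
    return np.real(np.conj(psi_t) @ A @ psi_t)

for t in (0.0, 1.0, 10.0, 100.0, 1000.0):
    print(f"t = {t:>6}: <A> = {expectation(t):+.4f}")

# Long-time value: only the diagonal terms survive the destructive interference.
print("diagonal-ensemble value:", np.sum(np.abs(c) ** 2 * np.diag(A)))
```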
The third story is about how concentrations of one quantity can cause flows in another. Imagine a box of charged particles that’s initially outside of equilibrium such that there exist gradients in particle concentration and temperature across the box. The temperature gradient will cause a flow of heat (Fourier’s law) and charged particles (Seebeck effect) and the particle-concentration gradient will cause the same—a flow of particles (Fick’s law) and heat (Peltier effect). These movements are encompassed within Onsager’s theory of transport dynamics…if the gradients are very small. If you’re reading this post on your computer, the Peltier effect is likely at work for you right now by cooling your computer.
What do various derivations of the thermal state’s forms, the eigenstate thermalization hypothesis (ETH), and the Onsager coefficients have in common? Each concept is founded on the assumption that the system we’re studying contains charges that commute with each other (e.g. particle number, energy, and electric charge). It’s only recently that physicists have acknowledged that this assumption was even present.
This is important to note because not all charges commute. In fact, the noncommutation of charges leads to fundamental quantum phenomena, such as the Einstein–Podolsky–Rosen (EPR) paradox, uncertainty relations, and disturbances during measurement. This raises an intriguing question. How would the above-mentioned stories change if we introduce the following twist?
“Consider an isolated box with charges that do not commute with one another.”
This question is at the core of a burgeoning subfield that intersects quantum information, thermodynamics, and many-body physics. I had the pleasure of co-authoring a recent perspective article in Nature Reviews Physics that centres on this topic. Collaborating with me in this endeavour were three members of Nicole’s group: the avid mountain climber, Billy Braasch; the powerlifter, Aleksander Lasek; and Twesh Upadhyaya, known for his prowess in street basketball. Completing our authorship team were Nicole herself and Amir Kalev.
To give you a touchstone, let me present a simple example of a system with noncommuting charges. Imagine a chain of qubits, where each qubit interacts with its nearest and next-nearest neighbours, such as in the image below.
In this interaction, the qubits exchange quanta of spin angular momentum, forming what is known as a Heisenberg spin chain. This chain is characterized by three charges: the total spin components in the x, y, and z directions, which I’ll refer to as Qx, Qy, and Qz, respectively. The Hamiltonian H conserves these charges, satisfying [H, Qa] = 0 for each a, and the three charges are mutually non-commuting: [Qa, Qb] ≠ 0 for any pair a, b ∈ {x,y,z} with a ≠ b. It’s noteworthy that Hamiltonians can be constructed to transport various other kinds of noncommuting charges. I have discussed the procedure to do so in more detail here (to summarize that post: it essentially involves constructing a Koi pond).
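If you want to see this concretely, here is a minimal numerical sketch (my own illustrative code, not from the perspective article; the coupling strengths are arbitrary) that builds a small such chain and checks the commutation relations:

```python
# Minimal sketch: a short Heisenberg chain with nearest- and next-nearest-neighbour
# couplings. We check that the total spin components commute with H but not with
# each other. Coupling strengths are illustrative placeholders.
import numpy as np

# Single-qubit Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on_site(op, site, n):
    """Embed a single-qubit operator acting on `site` into an n-qubit Hilbert space."""
    mats = [op if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 5  # number of qubits (kept small so the 2^n-dimensional matrices stay manageable)
paulis = {'x': X, 'y': Y, 'z': Z}

# Total spin components Q_a = (1/2) * sum_j sigma_a^(j)
Q = {a: 0.5 * sum(op_on_site(p, j, n) for j in range(n)) for a, p in paulis.items()}

# Heisenberg couplings between nearest (distance 1) and next-nearest (distance 2) neighbours
H = np.zeros((2**n, 2**n), dtype=complex)
for dist, J in [(1, 1.0), (2, 0.5)]:  # illustrative coupling strengths
    for j in range(n - dist):
        for p in paulis.values():
            H += J * op_on_site(p, j, n) @ op_on_site(p, j + dist, n)

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(H, Q['x']), 0))       # True: H conserves Qx (likewise Qy, Qz)
print(np.allclose(comm(Q['x'], Q['y']), 0))  # False: the charges do not commute
```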
This is the first in a series of blog posts where I will highlight key elements discussed in the perspective article. Motivated by requests from peers for a streamlined introduction to the subject, I’ve designed this series specifically for a target audience: graduate students in physics. Additionally, I’m gearing up to defend my PhD thesis on noncommuting-charge physics next semester, and these blog posts will double as a fun way to prepare for that.
[1] This opening text was taken from the draft of my thesis.
Terry Rudolph, PsiQuantum & Imperial College London
During a recent visit to the wild western town of Pasadena I got into a shootout at high-noon trying to explain the nuances of this question to a colleague. Here is a more thorough (and less risky) attempt to recover!
tl;dr Photonic quantum computers can perform a useful computation orders of magnitude faster than a superconducting qubit machine. Surprisingly, this would still be true even if every physical timescale of the photonic machine was an order of magnitude longer (i.e. slower) than those of the superconducting one. But they won’t be.
SUMMARY
There is a misconception that the slow rate of entangled photon production from many current (“postselected”) experiments is somehow relevant to the logical speed of a photonic quantum computer. It isn’t, because those experiments don’t use an optical switch.
If we care about how fast we can solve useful problems then photonic quantum computers will eventually win that race. Not only because in principle their components can run faster, but because of fundamental architectural flexibilities which mean they need to do fewer things.
Unlike most quantum systems for which relevant physical timescales are determined by “constants of nature” like interaction strengths, the relevant photonic timescales are determined by “classical speeds” (optical switch speeds, electronic signal latencies etc). Surprisingly, even if these were slower – which there is no reason for them to be – the photonic machine can still compute faster.
In a simple world the speed of a photonic quantum computer would just be the speed at which it’s possible to make small (fixed-size) entangled states. GHz rates for such are plausible; the counterpart quantity for a superconducting machine is its much slower MHz code-cycle rate. But we want to leverage two unique photonic features: availability of long delays (e.g. optical fiber) and ease of nonlocal operations, and as such the overall story is much less simple.
If what floats your boat are really slow things, like cold atoms, ions etc., then the hybrid photonic/matter architecture outlined here is the way you can build a quantum computer with a faster logical gate speed than (say) a superconducting qubit machine. You should be all over it.
Magnifying the number of logical qubits in a photonic quantum computer by 100 could be done simply by making optical fiber 100 times less lossy. There are reasons to believe that such fiber is possible (though not easy!). This is just one example of the “photonics is different, photonics is different” mantra we should all chant every morning as we stagger out of bed.
The flexibility of photonic architectures means there is much more unexplored territory in quantum algorithms, compiling, error correction/fault tolerance, system architectural design and much more. If you’re a student you’d be mad to work on anything else!
Sorry, I realize that’s kind of an in-your-face list, some of which is obviously just my opinion! Let’s see if I can make it yours too 🙂
I am not going to reiterate all the standard stuff about how photonics is great because of how manufacturable it is, its high temperature operation, easy networking modularity blah blah blah. That story has been told many times elsewhere. But there are subtleties to understanding the eventual computational speed of a photonic quantum computer which have not been explained carefully before. This post is going to slowly lead you through them.
I will only be talking about useful, large-scale quantum computing – by which I mean machines capable of, at a minimum, implementing billions of logical quantum gates on hundreds of logical qubits.
PHYSICAL TIMESCALES
In a quantum computer built from matter – say superconducting qubits, ions, cold atoms, nuclear/electronic spins and so on, there is always at least one natural and inescapable timescale to point to. This typically manifests as some discrete energy levels in the system, the levels that make the two states of the qubit. Related timescales are determined by the interaction strengths of a qubit with its neighbors, or with external fields used to control it. One of the most important timescales is that of measurement – how fast can we determine the state of the qubit? This generally means interacting with the qubit via a sequence of electromagnetic fields and electronic amplification methods to turn quantum information classical. Of course, measurements in quantum theory are a pernicious philosophical pit – some people claim they are instantaneous, others that they don’t even happen! Whatever. What we care about is: How long does it take for a readout signal to get to a computer that records the measurement outcome as classical bits, processes them, and potentially changes some future action (control field) interacting with the computer?
For building a quantum computer from optical frequency photons there are no energy levels to point to. The fundamental qubit states correspond to a single photon being either “here” or “there”, but we cannot trap and hold them at fixed locations, so unlike, say, trapped atoms these aren’t discrete energy eigenstates. The frequency of the photons does, in principle, set some kind of timescale (by energy-time uncertainty), but it is far too small to be constraining. The most basic relevant timescales are set by how fast we can produce, control (switch) or detect the photons. While these depend on the bandwidth of the photons used – itself a very flexible design choice – typical components operate in GHz regimes. Another relevant timescale is that we can store photons in a standard optical fiber for tens of microseconds before its probability of getting lost exceeds (say) 10%.
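As a sanity check on that last figure, here is a back-of-the-envelope estimate assuming garden-variety telecom fiber (roughly 0.2 dB/km of attenuation and light travelling at about two-thirds of c; both are typical textbook values, not measurements from any particular machine):

```python
# Rough estimate of how long a photon can sit in standard telecom fiber before
# its loss probability exceeds ~10%.
import math

loss_db_per_km = 0.2              # typical attenuation near 1550 nm
v_group_km_per_s = 2.0e5          # ~2/3 of the speed of light, in km/s
survival_target = 0.90            # tolerate ~10% loss

allowed_loss_db = -10 * math.log10(survival_target)   # ~0.46 dB
max_length_km = allowed_loss_db / loss_db_per_km      # ~2.3 km
storage_time_us = max_length_km / v_group_km_per_s * 1e6

print(f"~{max_length_km:.1f} km of fiber, ~{storage_time_us:.0f} microseconds of storage")
```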
There is a long chain of things that need to be strung together to get from component-level physical timescales to the computational speed of a quantum computer built from them. The first step on the journey is to delve a little more into the world of fault tolerance.
TIMESCALES RELEVANT FOR FAULT TOLERANCE
The timescales of measurement are important because they determine the rate at which entropy can be removed from the system. All practical schemes for fault tolerance rely on performing repeated measurements during the computation to combat noise and imperfection. (Here I will only discuss surface-code fault tolerance, much of what I say though remains true more generally.) In fact, although at a high level one might think a quantum computer is doing some nice unitary logic gates, microscopically the machine is overwhelmingly just a device for performing repeated measurements on small subsets of qubits.
In matter-based quantum computers the overall story is relatively simple. There is a parameter d, the “code distance”, dependent primarily on the quality of your hardware, which is somewhere in the range of 20-40. It takes roughly d² qubits to make up a logical qubit, so let’s say 1000 of them per logical qubit. (We need to make use of an equivalent number of ancillary qubits as well.) Very roughly speaking, we repeat the following twice: each physical qubit gets involved in a small number (say 4-8) of two-qubit gates with neighboring qubits, and then some subset of qubits undergo a single-qubit measurement. Most of these gates can happen simultaneously, so (again, roughly!) the time for this whole process is the time for a handful of two-qubit gates plus a measurement. It is known as a code cycle and the time it takes we denote T_cc. For example, in superconducting qubits this timescale is expected to be about 1 microsecond, for ion-trap qubits about 1 millisecond. Although variations exist, let’s stick to considering a basic architecture which requires repeating this whole process on the order of d times in order to complete one logical operation (i.e., a logical gate). So, the time for a logical gate would be roughly d × T_cc; this sets the effective logical gate speed.
If you zoom out, each code cycle for a single logical qubit is therefore built up in a modular fashion from roughly d² copies of the same simple quantum process – a process that involves a handful of physical qubits and gates over a handful of time steps, and which outputs a classical bit of information – a measurement outcome. I have ignored the issue of what happens to those measurement outcomes. Some of them will be sent to a classical computer and processed (decoded) then fed back to control systems and so on. That sets another relevant timescale (the reaction time) which can be of concern in some approaches, but early generations of photonic machines – for reasons outlined later – will use long delay lines, and it is not going to be constraining.
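To put rough numbers on the matter-based case (purely illustrative; I have picked d = 25 from the middle of that 20-40 range):

```python
# Rough logical gate times for a matter-based surface-code machine:
# one logical gate takes on the order of d code cycles, each of duration T_cc.
d = 25  # illustrative code distance

for platform, T_cc in [("superconducting qubits", 1e-6), ("trapped ions", 1e-3)]:
    print(f"{platform}: T_cc = {T_cc:.0e} s  ->  logical gate ~ {d * T_cc:.1e} s")
```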
In a photonic quantum computer we also build up a single logical qubit code cycle from copies of some quantum stuff. In this case it is from copies of an entangled state of photons that we call a resource state. The number of entangled photons comprising one resource state depends a lot on how nice and clean they are; let’s fix it and say we need a 20-photon entangled state. (The noisier the method for preparing resource states the larger they will need to be). No sequence of gates is performed on these photons. Rather, photons from adjacent resource states get interfered at a beamsplitter and immediately detected – a process we call fusion. You can see a toy version in this animation:
Measurements destroy photons, so to ensure continuity from one time step to the next some photons in a resource state get delayed by one time step to fuse with a photon from the subsequent resource state – you can see the delayed photons depicted as lit up single blobs if you look carefully in the animation.
The upshot is that the zoomed out view of the photonic quantum computer is very similar to that of the matter-based one, we have just replaced the handful of physical qubits/gates of the latter with a 20-photon entangled state. (And in case it wasn’t obvious – building a bigger computer to do a larger computation means generating more of the resource states, it doesn’t mean using larger and larger resource states.)
If that was the end of the story it would be easy to compare the logical gate speeds for matter-based and photonic approaches. We would only need to answer the question “how fast can you spit out and measure resource states?”. Whatever the time for resource state generation, T_RSG, the time for a logical gate would be roughly d × T_RSG, and the photonic equivalent of T_cc would simply be T_RSG. (Measurements on photons are fast and so the fusion time becomes effectively negligible compared to T_RSG.) An easy argument could then be made that resource state generation at GHz rates is possible, therefore photonic machines are going to be orders of magnitude faster, and this article would be done! And while I personally do think it’s obvious that one day this is where the story will end, in the present day and age….
… there are two distinct ways in which this picture is far too simple.
FUNKY FEATURES OF PHOTONICS, PART I
The first over-simplification is based on facing up to the fact that building the hardware to generate a photonic resource state is difficult and expensive. We cannot afford to construct one resource state generator per resource state required at each time step. However, in photonics we are very fortunate that it is possible to store/delay photons in long lengths of optical fiber with very low error rates. This lets us use many resource states all produced by a single resource state generator in such a way that they can all be involved in the same code-cycle. So, for example, all resource states required for a single code cycle may come from a single resource state generator:
You can see an animation of how this works in the figure – a single resource state generator spits out resource states (depicted again as a 6-qubit hexagonal ring), and you can see a kind of spacetime 3d-printing of entanglement being performed. We call this game interleaving. In the toy example of the figure we see some of the qubits get measured (fused) immediately, some go into a short delay and some go into a longer one.
So now we have brought another timescale into the photonics picture, the length of time that some photons spend in the longest interleaving delay line. We would like to make this as long as possible, but the maximum time is limited by the loss in the delay (typically optical fiber) and the maximum loss our error correcting code can tolerate. A number to have in mind for this (in early machines) is a handful of microseconds – corresponding to a few km of fiber.
The upshot is that ultimately the temporal quantity that matters most to us in photonic quantum computing is:
What is the total number of resource states produced per second?
It’s important to appreciate we care only about the total rate of resource state production across the whole machine – so, if we take the total number of resource state generators we have built, and divide by T_RSG, we get this total rate of resource state generation, which we denote R_RSG. Note that this rate is distinct from any physical clock rate, as, e.g., 100 resource state generators running at 100MHz, or 10 resource state generators running at 1GHz, or 1 resource state generator running at 10GHz all yield the same total rate of resource state production.
The second most important temporal quantity is T_delay, the time of the longest low-loss delay we can use.
We then have that the total number of logical qubits in the machine is roughly:

number of logical qubits ≈ (R_RSG × T_delay) / d²
You can see this is proportional to R_RSG × T_delay, which is effectively the total number of resource states “alive” in the machine at any given instant of time, including all the ones stacked up in long delay lines. This is how we leverage optical fiber delays for a massive amplification of the entanglement our hardware has available to compute with.
The time it takes to perform a logical gate is determined both by R_RSG and by the total number of resource states that we need to consume for every logical qubit to undergo a gate. Even logical qubits that appear to not be part of a gate in that time step do, in fact, undergo a gate – the identity gate – because they need to be kept error free while they “idle”. As such the total number of resource states consumed in a logical time step is just (number of logical qubits) × d³, and the logical gate time of the machine is

logical gate time = (number of logical qubits) × d³ / R_RSG ≈ d × T_delay.
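Here are those two rough relations side by side in code (a sketch with illustrative numbers; real designs carry extra constant factors for ancillas, fusion failures and so on):

```python
# Standard (non-active-volume) photonic architecture, rough scaling.
# R_RSG: machine-wide resource-state generation rate; T_delay: longest low-loss delay;
# d: code distance. Both formulas are the back-of-the-envelope versions from the text.
d = 25                # illustrative code distance
R_RSG = 1e10          # resource states per second, e.g. 100 generators at 100 MHz each
T_delay = 4e-6        # seconds of fiber delay (roughly a kilometre of fiber)

n_logical = R_RSG * T_delay / d**2   # ~ number of logical qubits
gate_time = d * T_delay              # ~ logical gate time, in seconds

print(f"~{n_logical:.0f} logical qubits, logical gate time ~{gate_time * 1e6:.0f} microseconds")
```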
Because T_delay is expected to be about the same as T_cc for superconducting qubits (microseconds), the logical gate speeds are comparable.
At least they are, until…………
FUNKY FEATURES OF PHOTONICS, PART II
But wait! There’s more.
The second way in which unique features of photonics play havoc with the simple comparison to matter-based systems is in the exciting possibility of what we call an active-volume architecture.
A few moments ago I said:
Even logical qubits that seem to not be part of a gate in that time step undergo a gate – the identity gate – because they need to be kept error free while they “idle”. As such the total number of resource states consumed is just (number of logical qubits) × d³
and that was true. Until recently.
It turns out that there is a way of eliminating the majority of the resources expended on idling qubits! This is done by some clever tricks that make use of the possibility of performing a limited number of non-nearest-neighbor fusions between photons. It’s possible because photons are not stuck in one place anyway, and they can be passed around readily without interacting with other photons. (Their quantum crosstalk is exactly zero; they do really seem to despise each other.)
What previously was a large volume of resource states being consumed for “thumb-twiddling”, can instead all be put to good use doing non-trivial computational gates. Here is a simple quantum circuit with what we mean by the active volume highlighted:
Now, for any given computation the amount of active volume will depend very much on what you are computing. There are always many different circuits decomposing a given computation, some will use more active volume than others. This makes it impossible to talk about “what is the logical gate speed” completely independent of considerations about the computation actually being performed.
In this recent paper https://arxiv.org/abs/2306.08585 Daniel Litinski considers breaking elliptic curve cryptosystems on a quantum computer. In particular, he considers what it would take to run the relevant version of Shor’s algorithm on a superconducting qubit architecture with a microsecond code cycle – the answer is roughly that with 10 million physical superconducting qubits it would take about 4 hours (with an equivalent ion trap computer the time balloons to more than 5 months).
He then compares solving the same problem on a machine with an active volume architecture. Here is a subset of his results:
Recall that T_delay is the photonics parameter which is roughly equivalent to the code cycle time. Thus taking T_delay = 1 microsecond compares to the expected T_cc = 1 microsecond for superconducting qubits. Imagine we can produce resource states at a total rate of R_RSG = 3.5 THz. This could be 6000 resource state generators each producing resource states at around 600MHz, or 3500 generators producing them at 1GHz, for example. Then the same computation would take 58 seconds, instead of four hours, a speedup by a factor of more than 200!
Now, this whole blog post is basically about addressing confusions out there regarding physical versus computational timescales. So, for the sake of illustration, let me push a purely theoretical envelope: What if we can’t do everything as fast as in the example just stated? What if our rate of total resource state generation was 10 times slower, i.e. R_RSG = 350 GHz? And what if our longest delay is ten times longer, i.e. T_delay = 10 microseconds (so as to be much slower than the superconducting machine’s 1-microsecond code cycle)? Furthermore, for the sake of illustration, let’s consider a ridiculously slow machine that achieves this rate by building 350 billion resource state generators that can each produce resource states at only 1Hz. Yes, you read that right.
The fastest device in this ridiculous machine would only need to be a (very large!) slow optical switch operating at 100kHz (due to the chosen T_delay of 10 microseconds). And yet this ridiculous machine could still solve the problem that takes a superconducting qubit machine four hours, in less than 10 minutes.
To reiterate:
Despite all the “physical stuff going on” in this (hypothetical, active-volume) photonic machine running much slower than all the “physical stuff going on” in the (hypothetical, non-active-volume) superconducting qubit machine, we see the photonic machine can still do the desired computation 25 times faster!
Hopefully the fundamental murkiness of the titular question “what is the logical gate speed of a photonic quantum computer” is now clear! Put simply: Even if it did “fundamentally run slower” (it won’t), it would still be faster. Because it has less stuff to do. It’s worth noting that the 25x increase in speed is clearly not based on physical timescales, but rather on the efficient parallelization achieved through long-range connections in the photonic active-volume device. If we were to scale up the hypothetical 10-million-superconducting-qubit device by a factor of 25, it could potentially also complete computations 25 times faster. However, this would require a staggering 250 million physical qubits or more. Ultimately, the absolute speed limit of quantum computations is set by the reaction time, which refers to the time it takes to perform a layer of single-qubit measurements and some classical processing. Early-generation machines will not be limited by this reaction time, although eventually it will dictate the maximum speed of a quantum computation. But even in this distant-future scenario, the photonic approach remains advantageous. As classical computation and communication speed up beyond the microsecond range, slower physical measurements of matter-based qubits will hinder the reaction time, while fast single-photon detectors won’t face the same bottleneck.
In the standard photonic architecture we saw that the logical gate time, d × T_delay, would scale proportionally with T_delay – that is, adding long delays would slow the logical gate speed (while giving us more logical qubits). But remarkably the active-volume architecture allows us to exploit the extra logical qubits without incurring a big negative tradeoff. I still find this unintuitive and miraculous; it just seems to so massively violate Conservation of Trouble.
With all this in mind it is also worth noting as an aside that optical fibers made from (expensive!) exotic glasses or with funky core structures are theoretically calculated to be possible with up to 100 times less loss than conventional fiber – therefore allowing for an equivalent scaling of T_delay. How many approaches to quantum computing can claim that perhaps one day, by simply swapping out some strands of glass, they could instantaneously multiply the number of logical qubits in the machine from (say) 100 to 10000? Even a (more realistic) factor of 10 would be incredible.
Obviously for pedagogical reasons the above discussion is based around the simplest approaches to logic in both standard and active-volume architectures, but more detailed analysis shows that conclusions regarding total computational time speedup persist even after known optimizations for both approaches.
Now the reason I called the example above a “ridiculous machine” is that even I am not cruel enough to ask our engineers to assemble 350 billion resource state generators. Fewer resource state generators running faster is desirable from the perspective of both sweat and dollars.
We have arrived then at a simple conclusion: what we really need to know is “how fast and at what scale can we generate resource states, with as large a machine as we can afford to build”.
HOW FAST COULD/SHOULD WE AIM TO DO RESOURCE STATE GENERATION?
In the world of classical photonics – such as that used for telecoms, LIDAR and so on – very high speeds are often thrown around: pulsed lasers and optical switches readily run at 100’s of GHz for example. On the quantum side, if we produce single photons via a probabilistic parametric process then similarly high repetition rates have been achieved. (This is because in such a process there are no timescale constraints set by atomic energy levels etc.) Off-the-shelf single photon avalanche photodiode detectors can count photons at multiple GHz.
Seems like we should be aiming to generate resource states at 10’s of GHz right?
Well, yes, one day – one of the main reasons I believe the long-term future of quantum computing is ultimately photonic is because of the obvious attainability of such timescales. [Two others: it’s the only sensible route to a large-scale room temperature machine; eventually there is only so much you can fit in a single cryostat, so ultimately any approach will converge to being a network of photonically linked machines].
In the real world of quantum engineering there are a couple of reasons to slow things down: (i) it relaxes hardware tolerances, since it makes it easier to get things like path lengths aligned, synchronization working, electronics operating in easy regimes, etc.; (ii) in a similar way to how we use interleaving during a computation to drastically reduce the number of resource state generators we need to build, we can also use (shorter than T_delay) delays to reduce the amount of hardware required to assemble the resource states in the first place; and (iii) we want to use multiplexing.
Multiplexing is often misunderstood. The way we produce the requisite photonic entanglement is probabilistic. Producing the whole 20-photon resource state in a single step, while possible, would have very low probability. The way to obviate this is to cascade a couple of higher probability, intermediate, steps – selecting out successes (more on this in the appendix). While it has been known since the seminal work of Knill, Laflamme and Milburn two decades ago that this is a sensible thing to do, the obstacle has always been the need for a high performance (fast, low loss) optical switch. Multiplexing introduces a new physical “timescale of convenience” – basically dictated by latencies of electronic processing and signal transmission.
The brief summary therefore is: Yeah, everything internal to making resource states can be done at GHz rates, but multiple design flexibilities mean the rate of resource state generation is itself a parameter that should be tuned/optimized in the context of the whole machine, it is not constrained by fundamental quantum things like interaction energies, rather it is constrained by the speeds of a bunch of purely classical stuff.
I do not want to leave the impression that generation of entangled photons can only be done via the multistage probabilistic method just outlined. Using quantum dots, for example, people can already demonstrate generation of small photonic entangled states at GHz rates (see e.g. https://www.nature.com/articles/s41566-022-01152-2). Eventually, direct generation of photonic entanglement from matter-based systems will be how photonic quantum computers are built, and I should emphasize that it’s perfectly possible to use small resource states (say, 4 entangled photons) instead of the 20 proposed above, as long as they are extremely clean and pure. In fact, as the discussion above has hopefully made clear: for quantum computing approaches based on fundamentally slow things like atoms and ions, transduction of matter-based entanglement into photonic entanglement allows – by simply scaling to more systems – evasion of the extremely slow logical gate speeds they will face if they do not do so.
Right now, however, approaches based on converting the entanglement of matter qubits into photonic entanglement are not nearly clean enough, nor manufacturable at large enough scales, to be compatible with utility-scale quantum computing. And our present method of state generation by multiplexing has the added benefit of decorrelating many error mechanisms that might otherwise be correlated if many photons originate from the same device.
So where does all this leave us?
I want to build a useful machine. Let’s back-of-the-envelope what that means photonically. Consider we target a machine comprising (say) at least 100 logical qubits capable of billions of logical gates. (From thinking about active volume architectures I learn that what I really want is to produce as many “logical blocks” as possible, which can then be divvied up into computational/memory/processing units in funky ways, so here I’m really just spitballing an estimate to give you an idea).
Staring at the formula above,

number of logical qubits ≈ (R_RSG × T_delay) / d²,

and presuming d² ≈ 1000 and T_delay is going to be about 10 microseconds, we need to be producing resource states at a total rate of at least about 10 GHz. As I hope is clear by now, as a pure theoretician, I don’t give a damn if that means 10000 resource state generators running at 1MHz, 100 resource state generators running at 100MHz, or 10 resource state generators running at 1GHz. However, the fact this flexibility exists is very useful to my engineering colleagues – who, of course, aim to build the smallest and fastest possible machine they can, thereby shortening the time until we let them head off for a nice long vacation sipping mezcal margaritas on a warm tropical beach.
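In code, that spitball is just the earlier scaling run in reverse (same rough formula, same caveats):

```python
# Invert the earlier scaling: what total resource-state rate supports ~100 logical qubits?
target_logical_qubits = 100
d_squared = 1000          # resource states needed per logical qubit per code cycle
T_delay = 10e-6           # seconds

R_RSG_needed = target_logical_qubits * d_squared / T_delay
print(f"Need a total of ~{R_RSG_needed:.0e} resource states per second")  # ~1e+10, i.e. ~10 GHz
```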
None of these numbers should seem fundamentally indigestible, though I do not want to understate the challenge: all never-before-done large-scale engineering is extremely hard.
But regardless of the regime we operate in, logical gate speeds are not going to be the issue upon which photonics will be found wanting.
REAL-WORLD QUANTUM COMPUTING DESIGN
Now, I know this blog is read by lots of quantum physics students. If you want to impact the world, working in quantum computing really is a great way to do it. The foundation of everything around you in the modern world was laid in the 1940s and ’50s when early mathematicians, computer scientists, physicists and engineers figured out how we can compute classically. Today you have a unique opportunity to be part of laying the foundation of humanity’s quantum computing future. Of course, I want the best of you to work on a photonic approach specifically (I’m also very happy to suggest places for the worst of you to go work). Please appreciate, therefore, that these final few paragraphs are my very biased – though fortunately totally correct – personal perspective!
The broad features of the photonic machine described above – a network of stuff to make resource states, stuff to fuse them, and some interleaving modules – have been fixed now for several years (see the references).
Once we go down even just one level of detail, a myriad of very-much-not-independent questions arise: What is the best resource state? What series of procedures is optimal for creating that state? What is the best underlying topological code to target? What fusion network can build that code? What other things (like active volume) can exploit the ability for photons to be easily nonlocally connected? What types of encoding of quantum information into photonic states is best? What interferometers generate the most robust small entangled states? What procedures for systematically growing resource states from smaller entangled states are most robust or use the least amount of hardware? How can we best use measurements and classical feedforward/control to mitigate error accumulation?
Those sorts of questions cannot be meaningfully addressed without going down to another level of detail, one in which we do considerable modelling of the imperfect devices from which everything will be built – modelling that starts by detailed parameterization of about 40 component specifications (ranging over things like roughness of silicon photonic waveguide walls, stability of integrated voltage drivers, precision of optical fiber cutting robots… well, the list goes on and on). We then model errors of subsystems built from those components, verify against data, and proceed.
The upshot is none of these questions have unique answers! There just isn’t “one obviously best code” etc. In fact the answers can change significantly with even small variations in performance of the hardware. This opens a very rich design space, where we can establish tradeoffs and choose solutions that optimize a wide variety of practical hardware metrics.
In photonics there is also considerably more flexibility and opportunity than with most approaches on the “quantum side” of things. That is, the quantum aspects of the sources, the quantum states we use for encoding even single qubits, the quantum states we should target for the most robust entanglement, the topological quantum logical states we target and so on, are all “on the table” so to speak.
Exploring the parameter space of possible machines to assemble, while staying fully connected to component level hardware performance, involves both having a very detailed simulation stack, and having smart people to help find new and better schemes to test in the simulations. It seems to me there are far more interesting avenues for impactful research than more established approaches can claim. Right now, on this planet, there are only around 30 people engaged seriously in that enterprise. It’s fun. Perhaps you should join in?
REFERENCES
A surface code quantum computer in silicon, https://www.science.org/doi/10.1126/sciadv.1500707. Figure 4 is a clear depiction of the circuits for performing a code cycle appropriate to a generic 2d matter-based architecture.
APPENDIX

Here is a common misconception: Current methods of producing ~20 photon entangled states succeed only a few times per second, so generating resource states for fusion-based quantum computing is many orders of magnitude away from where it needs to be.
This misconception arises from considering experiments which produce photonic entangled states via single-shot spontaneous processes and extrapolating them incorrectly as having relevance to how resource states for photonic quantum computing are assembled.
Such single-shot experiments are hit by a “double whammy”. The first whammy is that the experiments produce some very large and messy state that only has a tiny amplitude in the component of the desired entangled state. Thus, on each shot, even in ideal circumstances, the probability of getting the desired state is very, very small. Because billions of attempts can be made each second (as mentioned, running these devices at GHz speeds is easy) it does occasionally occur. But only a small number of times per second.
The second whammy is that if you are trying to produce a 20-photon state, but each photon gets lost with probability 20%, then the probability of you detecting all the photons – even if you live in a branch of the multiverse where they have been produced – is reduced by a factor of 0.8²⁰ ≈ 0.01. Loss reduces the rate of production considerably.
Now, photonic fusion-based quantum computing could not be based on this type of entangled photon generation anyway, because the production of the resource states needs to be heralded, while these experiments only postselect onto the very tiny part of the total wavefunction with the desired entanglement. But let us put that aside, because the two whammies could, in principle, be showstoppers for production of heralded resource states, and it is useful to understand why they are not.
Imagine you can toss coins, and you need to generate 20 coins showing Heads. If you repeatedly toss all 20 coins simultaneously until they all come up heads you’d typically have to do so millions of times before you succeed. This is even more true if each coin also has a 20% chance of rolling off the table (akin to photon loss). But if you can toss 20 coins, set aside (switch out!) the ones that came up heads and re-toss the others, then after only a small number of steps you will have 20 coins all showing heads. This large gap is fundamentally why the first whammy is not relevant: To generate a large photonic entangled state we begin by probabilistically attempting to generate a bunch of small ones. We then select out the successes (multiplexing) and combine them to (again, probabilistically) generate a slightly larger entangled state. We repeat a few steps of this. This possibility has been appreciated for more than twenty years, but hasn’t been done at scale yet because nobody has had a good enough optical switch until now.
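If you would like to see the size of that gap rather than take my word for it, here is a toy simulation of the coin analogy (pure illustration; nothing photonic about it):

```python
# Toy version of the multiplexing argument: get 20 coins showing heads, where each
# toss also has a 20% chance of the coin "rolling off the table" (the loss analogue).
import random

N, p_heads, p_lost = 20, 0.5, 0.2

def all_at_once():
    """Re-toss all 20 coins every round until a single round yields 20 kept heads."""
    rounds = 0
    while True:
        rounds += 1
        if all(random.random() > p_lost and random.random() < p_heads for _ in range(N)):
            return rounds

def set_aside():
    """Keep successful coins between rounds (the multiplexing strategy)."""
    rounds, successes = 0, 0
    while successes < N:
        rounds += 1
        for _ in range(N - successes):
            if random.random() > p_lost and random.random() < p_heads:
                successes += 1
    return rounds

print("set-aside strategy, typical rounds:", sum(set_aside() for _ in range(100)) / 100)
# Don't try averaging all_at_once(): a single success typically takes
# about (1 / 0.4)**20, i.e. roughly 10**8 rounds.
```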
The second whammy is taken care of by the fact that for fault tolerant photonic fusion-based quantum computing there never is any need to make the resource state such that all photons are guaranteed to be there! The per-photon loss rate can be high (in principle 10’s of percent) – in fact the larger the resource state being built the higher it is allowed to be.
The upshot is that comparing this method of entangled photon generation with the methods which are actually employed is somewhat like a creation scientist claiming monkeys cannot have evolved from bacteria, because it is all so unlikely for suitable mutations to have happened simultaneously!
Acknowledgements
Very grateful to Mercedes Gimeno-Segovia, Daniel Litinski, Naomi Nickerson, Mike Nielsen and Pete Shadbolt for help and feedback.
By guest blogger Clarice D. Aiello, faculty at UCLA
Imagine using your cellphone to control the activity of your own cells to treat injuries and disease. It sounds like something from the imagination of an overly optimistic science fiction writer. But this may one day be a possibility through the emerging field of quantum biology.
Over the past few decades, scientists have made incredible progress in understanding and manipulating biological systems at increasingly small scales, from protein folding to genetic engineering. And yet, the extent to which quantum effects influence living systems remains barely understood.
Quantum effects are phenomena that occur between atoms and molecules that can’t be explained by classical physics. It has been known for more than a century that the rules of classical mechanics, like Newton’s laws of motion, break down at atomic scales. Instead, tiny objects behave according to a different set of laws known as quantum mechanics.
For humans, who can only perceive the macroscopic world, or what’s visible to the naked eye, quantum mechanics can seem counterintuitive and somewhat magical. Things you might not expect happen in the quantum world, like electrons “tunneling” through tiny energy barriers and appearing on the other side unscathed, or being in two different places at the same time in a phenomenon called superposition.
I am trained as a quantum engineer. Research in quantum mechanics is usually geared toward technology. However, and somewhat surprisingly, there is increasing evidence that nature – an engineer with billions of years of practice – has learned how to use quantum mechanics to function optimally. If this is indeed true, it means that our understanding of biology is radically incomplete. It also means that we could possibly control physiological processes by using the quantum properties of biological matter.
Quantumness in biology is probably real
Researchers can manipulate quantum phenomena to build better technology. In fact, you already live in a quantum-powered world: from laser pointers to GPS, magnetic resonance imaging and the transistors in your computer – all these technologies rely on quantum effects.
In general, quantum effects only manifest at very small length and mass scales, or when temperatures approach absolute zero. This is because quantum objects like atoms and molecules lose their “quantumness” when they uncontrollably interact with each other and their environment. In other words, a macroscopic collection of quantum objects is better described by the laws of classical mechanics. Everything that starts quantum dies classical. For example, an electron can be manipulated to be in two places at the same time, but it will end up in only one place after a short while – exactly what would be expected classically.
In a complicated, noisy biological system, it is thus expected that most quantum effects will rapidly disappear, washed out in what the physicist Erwin Schrödinger called the “warm, wet environment of the cell.” To most physicists, the fact that the living world operates at elevated temperatures and in complex environments implies that biology can be adequately and fully described by classical physics: no funky barrier crossing, no being in multiple locations simultaneously.
Chemists, however, have for a long time begged to differ. Research on basic chemical reactions at room temperature unambiguously shows that processes occurring within biomolecules like proteins and genetic material are the result of quantum effects. Importantly, such nanoscopic, short-lived quantum effects are consistent with driving some macroscopic physiological processes that biologists have measured in living cells and organisms. Research suggests that quantum effects influence biological functions, including regulating enzyme activity, sensing magnetic fields, cell metabolism and electron transport in biomolecules.
How to study quantum biology
The tantalizing possibility that subtle quantum effects can tweak biological processes presents both an exciting frontier and a challenge to scientists. Studying quantum mechanical effects in biology requires tools that can measure the short time scales, small length scales and subtle differences in quantum states that give rise to physiological changes – all integrated within a traditional wet lab environment.
In my work, I build instruments to study and control the quantum properties of small things like electrons. In the same way that electrons have mass and charge, they also have a quantum property called spin. Spin defines how the electrons interact with a magnetic field, in the same way that charge defines how electrons interact with an electric field. The quantum experiments I have been building since graduate school, and now in my own lab, aim to apply tailored magnetic fields to change the spins of particular electrons.
Research has demonstrated that many physiological processes are influenced by weak magnetic fields. These processes include stem cell development and maturation, cell proliferation rates, genetic material repair and countless others. These physiological responses to magnetic fields are consistent with chemical reactions that depend on the spin of particular electrons within molecules. Applying a weak magnetic field to change electron spins can thus effectively control a chemical reaction’s final products, with important physiological consequences.
Currently, a lack of understanding of how such processes work at the nanoscale level prevents researchers from determining exactly what strength and frequency of magnetic fields cause specific chemical reactions in cells. Current cellphone, wearable and miniaturization technologies are already sufficient to produce tailored, weak magnetic fields that change physiology, both for good and for bad. The missing piece of the puzzle is, hence, a “deterministic codebook” of how to map quantum causes to physiological outcomes.
In the future, fine-tuning nature’s quantum properties could enable researchers to develop therapeutic devices that are noninvasive, remotely controlled and accessible with a mobile phone. Electromagnetic treatments could potentially be used to prevent and treat disease, such as brain tumors, as well as in biomanufacturing, such as increasing lab-grown meat production.
A whole new way of doing science
Quantum biology is one of the most interdisciplinary fields to ever emerge. How do you build community and train scientists to work in this area?
Since the pandemic, my lab at the University of California, Los Angeles and the University of Surrey’s Quantum Biology Doctoral Training Centre have organized Big Quantum Biology meetings to provide an informal weekly forum for researchers to meet and share their expertise in fields like mainstream quantum physics, biophysics, medicine, chemistry and biology.
Research with potentially transformative implications for biology, medicine and the physical sciences will require working within an equally transformative model of collaboration. Working in one unified lab would allow scientists from disciplines that take very different approaches to research to conduct experiments that meet the breadth of quantum biology from the quantum to the molecular, the cellular and the organismal.
The existence of quantum biology as a discipline implies that traditional understanding of life processes is incomplete. Further research will lead to new insights into the age-old question of what life is, how it can be controlled and how to learn with nature to build better quantum technologies.
Clarice D. Aiello is a quantum engineer interested in how quantum physics informs biology at the nanoscale. She is an expert on nanosensors that harness room-temperature quantum effects in noisy environments. Aiello received a bachelor’s in physics from the Ecole Polytechnique, France; a master’s degree in physics from the University of Cambridge, Trinity College, UK; and a PhD in electrical engineering from the Massachusetts Institute of Technology. She held postdoctoral appointments in bioengineering at Stanford University and in chemistry at the University of California, Berkeley. Two months before the pandemic, she joined the University of California, Los Angeles, where she leads the Quantum Biology Tech (QuBiT) Lab.
***
The author thanks Nicole Yunger Halpern and Spyridon Michalakis for the opportunity to talk about quantum biology to the physics audience of this wonderful blog!