Showing posts with label Physicists.
Saturday, April 20, 2024
The Prof who Looked for Love and Ended up in Prison: The Supermodel Scam
This is the story of how particle physicist Paul Frampton ended up in prison for smuggling drugs after being scammed by someone who pretended to be a bikini model, and what happened next.
Saturday, February 26, 2022
Will the Big Bang repeat?
[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]
This video is about Roger Penrose’s idea for the beginning of the universe and its end, conformal cyclic cosmology, CCC for short. It’s a topic that a lot of you have asked for ever since Roger Penrose won the Nobel Prize in 2020. The reason I’ve put off talking about it is that I don’t enjoy criticizing other people’s ideas, especially if they’re people I personally know. And also, who am I to criticize a Nobel Prize winner, on YouTube of all places?
However, Penrose himself has been very outspoken about his misgivings about string theory and contemporary cosmology, in particular inflation, so in the end I think it’ll be okay if I tell you what I think about conformal cyclic cosmology. And that’s what we’ll talk about today.
First things first: what does conformal cyclic cosmology mean? I think we’re all good with the word cosmology; it’s a theory for the history of the entire universe. That it’s cyclic means it repeats in some sense. Penrose calls these cycles eons. Each starts with a big bang, but it doesn’t end with a big crunch.
A big crunch would happen when the expansion of the universe changes to a contraction and eventually all the matter is, well, crunched together. A big crunch is like a big bang in reverse. This does not happen in conformal cyclic cosmology. Rather, the history of the universe just kind of tapers out. Matter becomes more and more thinly diluted. And then there’s the word conformal. We need that to get from the thinly diluted end of one eon to the beginning of the next. But what does conformal mean?
A conformal rescaling is a stretching or shrinking that maintains all relative angles. Penrose uses that because you can use a conformal rescaling to make something that has infinite size into something that has finite size.
Here is a simple example of a conformal rescaling. Suppose you have an infinite two-dimensional plane. And suppose you have half of a sphere. Now from every point on the infinite plane, you draw a line to the center of the sphere. At the point where it pierces the sphere, you project that down onto a disk. That way you map every point of the infinite plane into the disk underneath the sphere. A famous example of a conformal rescaling is this image from Escher. Imagine that those bats are all the same size and once filled an infinite plane. In this image they are all squeezed into a finite area.
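To make that concrete, here is a minimal Python sketch of the projection just described. It is my own toy construction, assuming a unit hemisphere whose center sits at height 1 above the plane: every point of the infinite plane lands strictly inside the unit disk, with far-away points crowding toward the rim.

```python
import numpy as np

def plane_to_disk(x, y):
    """Map a point (x, y) of the infinite plane into the open unit disk.

    Toy construction: a unit hemisphere whose center sits at height 1
    above the origin of the plane. The line from (x, y, 0) to the center
    (0, 0, 1) pierces the hemisphere at horizontal distance r / sqrt(r^2 + 1),
    and that piercing point is dropped straight down onto the disk.
    """
    r2 = x**2 + y**2
    scale = 1.0 / np.sqrt(r2 + 1.0)  # shrinks distant points toward the rim
    return x * scale, y * scale

# Points arbitrarily far out on the plane land ever closer to the rim r = 1:
for r in [0.0, 1.0, 10.0, 1e6]:
    print(r, plane_to_disk(r, 0.0))
```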
Now in Penrose’s case, the infinite thing that you rescale is not just space, but space-time. You rescale them both and then you glue the end of our universe to a new beginning. Mathematically you can totally do that. But why would you? And what’s with the physics?
Let’s first talk about why you would want to do that. Penrose is trying to solve a big puzzle in our current theories for the universe. It’s the second law of thermodynamics: entropy increases. We see it increase. But if entropy increases, it must have been smaller in the past. Indeed, the universe must have started out with very small entropy, otherwise we just can’t explain what we see. That the early universe must have had small entropy is often called the Past Hypothesis, a term coined by the philosopher David Albert.
Our current theories work perfectly fine with the past hypothesis. But of course it would be better if one didn’t need it, if one instead had a theory from which one could derive it.
Penrose has attacked this problem by first finding a way to quantify the entropy in the gravitational field. He argued already in the 1970s that it’s encoded in the Weyl curvature tensor, which is, loosely speaking, part of the complete curvature tensor of space-time. This Weyl curvature tensor, according to Penrose, should be very small in the beginning of the universe. Then the entropy would be small and the past hypothesis would be explained. He calls this the Weyl Curvature Hypothesis.
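For readers who want the formula: in four space-time dimensions and with a common sign convention, the Riemann curvature tensor splits into the Weyl tensor plus pieces built from the Ricci tensor and the curvature scalar,

$$ R_{abcd} \;=\; C_{abcd} \;+\; \big(g_{a[c}R_{d]b} - g_{b[c}R_{d]a}\big) \;-\; \tfrac{1}{3}\,R\,g_{a[c}g_{d]b}\,. $$

The Ricci pieces are tied directly to matter through Einstein’s equations; the Weyl piece \(C_{abcd}\) describes the “free” part of the gravitational field, which is why Penrose uses it to quantify gravitational entropy.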
So, instead of the rather vague past hypothesis, we now have a mathematically precise Weyl Curvature Hypothesis. Like the entropy, the Weyl curvature would start out very small and then increase as the universe gets older. This goes along with the formation of bigger structures like stars and galaxies.
That leaves the question of how you get the Weyl curvature to be small. Here’s where the conformal rescaling kicks in. You take the end of a universe where the Weyl curvature is large, you rescale it, which makes it very small, and then you postulate that this is the beginning of a new universe.
Okay, so that explains why you may want to do that, but what’s with the physics? The reason why this rescaling works mathematically is that in a conformally invariant universe there’s no meaningful way to talk about time. It’s like if I show you a piece of the Koch snowflake and ask if that’s big or small. These pieces repeat infinitely often, so you can’t tell. In CCC it’s the same with time at the end of the universe.
But the conformal rescaling and gluing only works if the universe approaches conformal invariance towards the end of its life. This may or may not be the case. The universe contains massive particles, and massive particles are not conformally invariant. That’s because particles are also waves, and massive particles are waves with a particular wavelength. That’s the Compton wavelength, which is inversely proportional to the mass. This is a specific scale, so if you rescale the universe, it will not remain the same.
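For reference, the scale in question is

$$ \lambda_C = \frac{h}{mc} \qquad\text{(reduced: } \bar\lambda_C = \hbar/mc\text{)}, $$

about \(2.4\times10^{-12}\) m for the electron. A conformal rescaling stretches or shrinks this length, so a universe full of massive particles is not conformally invariant.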
However, the masses of the elementary particles all come from the Higgs field, so if you can somehow get rid of the Higgs at the end of the universe, then that would be conformally invariant and everything would work. Or maybe you can think of some other way to get rid of massive particles. And since no one really knows what may happen at the end of the universe anyway, ok, well, maybe it works somehow.
But we can’t test what will happen in a hundred billion years. So how could one test Penrose’s cyclic cosmology? Interestingly, this conformal rescaling doesn’t wash out all the details from the previous eon. Gravitational waves survive because they scale differently than the Weyl curvature. And those gravitational waves from the previous eon affect how matter moves after the big bang of our eon, which in turn leaves patterns in the cosmic microwave background. Indeed, rather specific patterns.
Roger Penrose first said one should look for rings. These rings would come from the collisions of supermassive black holes in the eon before ours. This is pretty much the most violent event one can think of and so should produce a lot of gravitational waves. However, the search for those signals remained inconclusive.
Penrose then found a better observational signature from the earlier eon which he called Hawking points. Supermassive black holes in the earlier eon evaporate and leave behind a cloud of Hawking radiation which spreads out over the whole universe. But at the end of the eon, you do the rescaling and you squeeze all that Hawking radiation together. That carries over into the next eon and makes a localized point with some rings around it in the CMB.
And these Hawking points are actually there. It’s not only Penrose and his people who have found them in the CMB. The thing is though that some cosmologists have argued they should also be there in the most popular model for the early universe, which is inflation. So, this prediction may not be wrong, but it’s maybe not a good way to tell Penrose’s model from others.
Penrose also says that this conformal rescaling requires that one introduces a new field which gives rise to a new particle. He has called this particle the “erebon”, named after Erebos, the Greek god of darkness. The erebons might make up dark matter. They are heavy particles with masses of about the Planck mass, so that’s much heavier than the particles astrophysicists typically consider for dark matter. But it’s not ruled out that dark matter particles might be so heavy, and indeed other astrophysicists have considered similar particles as candidates for dark matter.
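For scale, the Planck mass is

$$ m_P = \sqrt{\frac{\hbar c}{G}} \approx 2.2\times10^{-8}\ \text{kg} \approx 1.2\times10^{19}\ \text{GeV}/c^2, $$

roughly \(10^{19}\) times the mass of a proton and far above the GeV–TeV range of typical WIMP dark-matter candidates.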
Penrose’s erebons are ultimately unstable. Remember you have to get rid of all the masses at the end of the eon to get to conformal invariance. So Penrose predicts that dark matter should slowly decay. That decay however is so slow that it is hard to test. He has also predicted that there should be rings around the Hawking points in the CMB B-modes which is the thing that the BICEP experiment was looking for. But those too haven’t been seen – so far.
Okay, so that’s my brief summary of conformal cyclic cosmology, now what do I think about it. Mostly I have questions. The obvious thing to pick on is that actually the universe isn’t conformally invariant and that postulating all Higgs bosons disappear or something like that is rather ad hoc. But this actually isn’t my main problem. Maybe I’ve spent too much time among particle physicists, but I’ve seen far worse things. Unparticles, anybody?
One thing that gives me headaches is that it’s one thing to do a conformal rescaling mathematically. Understanding what this physically means is another thing entirely. You see, just because you can create an infinite sequence of eons doesn’t mean the duration of any eon is now finite. You can totally glue together infinitely many infinitely large space-times if you really want to. Saying that time becomes meaningless doesn’t really explain to me what this rescaling physically does.
Okay, but maybe that’s a rather philosophical misgiving. Here is a more concrete one. If the previous eon leaves information imprinted in the next one, then it isn’t obvious that the cycles repeat in the same way. Instead, I would think, they will generally end up with larger and larger fluctuations that will pass on larger and larger fluctuations to the next eon because that’s a positive feedback. If that was so, then Penrose would have to explain why we are in a universe that’s special for not having these huge fluctuations.
Another issue is that it’s not obvious you can extend these cosmologies back in time indefinitely. This is a problem also for “eternal inflation.” Eternal inflation is eternal really only into the future. It has a finite past. You can calculate this just from the geometry. In a recent paper, Kinney and Stein showed that a model of cyclic cosmology put forward by Ijjas and Steinhardt has the same problem. The cycles might go on infinitely, alright, but only into the future, not into the past. It’s not clear at the moment whether this is also the case for conformal cyclic cosmology. I don’t think anyone has looked at it.
Finally, I am not sure that CCC actually solves the problem it was supposed to solve. Remember we are trying to explain the past hypothesis. But a scientific explanation shouldn’t be more difficult than the thing you’re trying to explain. And CCC requires some assumptions, about the conformal invariance and the erebons, that at least to me don’t seem any better than the past hypothesis.
Having said that, I think Penrose’s point that the Weyl curvature in the early universe must have been small is really important and it hasn’t been appreciated enough. Maybe CCC isn’t exactly the right conclusion to draw from it, but it’s a mathematical puzzle that in my opinion deserves a little more attention.
Wednesday, September 29, 2021
[Guest Post] Brian Keating: How to Think Like a Nobel Prize Winner
[The following is an excerpt from Think Like a Nobel Prize Winner, Brian Keating’s newest book based on his interviews with 9 Nobel Prize winning physicists. The book isn’t a physics text, nor even a memoir like Keating’s first book Losing the Nobel Prize. Instead, it’s a self-help guide for technically minded individuals seeking to ‘level-up’ their lives and careers.]
When 2017 Nobel Prize winner Barry Barish told me he had suffered from the imposter syndrome, the hair stood up on the back of my neck. I couldn’t believe that one of the most influential figures in my life and career—as a scientist, as a father, and as a human—is mortal. He sometimes feels insecure, just like I do. Every time I’m teaching, in the back of my head, I am thinking, who am I to do this? I always struggled with math, and physics never came naturally to me. I got where I am because of my passion and curiosity, not my SAT scores. Society venerates the genius. Maybe that’s you, but it’s certainly not me.
I’ve always suffered from the imposter syndrome. Discovering that Barish did too, even after winning a Nobel Prize—the highest regard in our field and in society itself—immensely comforted me. If he was insecure about how he compared to Einstein, I wanted to comfort him: Einstein was in awe of Isaac Newton, saying Newton “... determined the course of Western thought, research, and practice like no one else before or since.” And compared to whom did Newton feel inadequate? Jesus Christ almighty!
The truth is, the imposter syndrome is just a normal, even healthy, dose of inadequacy. As such, we can never overcome or defeat it, nor should we try to. But we can manage it through understanding and acceptance. Hearing about Barry’s experience allowed me to do exactly that, and I hoped sharing that message would also help others manage better. This was the moment I decided to create this book.
This isn’t a physics book. These pages are not for aspiring Nobel Prize winners, mathematicians, or any of my fellow geeks, dweebs, or nerds. In fact, I wrote it specifically for nonscientists—for those who, because of the quotidian demands of everyday life, sometimes lose sight of the biggest-picture topics humans are capable of learning about and contributing to. Most of all, I hope that by humanizing science, by showing the craft of science as performed by its master practitioners, you, my reader, will see common themes emerge that will boost your creativity, stoke your imagination, and most of all, help overcome barriers like the imposter syndrome, thereby unlocking your full potential for out-of-this-universe success.
Though I didn’t write it for physicists, it’s appropriate to consider why the subjects of this book—who are all physicists—are good role models. Physicists are mental Swiss Army knives, or a cerebral SEAL Team Six. We dwell in uncertainty. We exist to solve problems.
We are not the best mathematicians (just ask a real mathematician). We’re not the best engineers. We also aren’t the best writers, speakers, or communicators—but no single group can simultaneously do all of these disparate tasks so well as the physicists I’ve compiled here. That’s what makes them worth listening to and learning from. I sure have.
The individuals in this book have balanced collaboration with competition. All scientists stand on the proverbial shoulders of giants of the past and present. Yet some of the most profound moments of inspiration do breathe magic into the equation of a single individual one unique time. There is a skill to knowing when to listen and when to talk, for you can’t do both at the same time. These scientists have navigated the challenging waters between focus and diversity, balancing intellectual breadth with depth, which are challenges we all face. Whether you’re a scientist or a salesman, you must “niche down” to solve problems. (Imagine trying to sell every car model made!)
I wrote this book for everyone who struggles to balance the mundane with the sublime—who is attending to the day-to-day hard work and labor of whatever craft they are in while also trying to achieve something greater in their profession or in life. I wanted to deconstruct the mental habits and tactics of some of society’s best and brightest minds in order to share their wisdom with readers—and also to show readers that they’re just like us. They struggle with compromise. They wrestle with perfection. And they aspire always to do something great. We can too.
By studying the habits and tactics of the world’s brightest, you can recognize common themes that apply to your life—even if the subject matter itself is as far removed from your daily life as a black hole is from a quark. Honestly, even though I am a physicist, the work done by most of the subjects in this book is no more similar to my daily work than it is to yours, and yet I learned much from them about issues common between us. These pages include enduring life lessons applicable to anyone eager to acquire the true keys to success!
HOW IT ALL BEGAN
A theme pops up throughout these interviews regarding the connection between teaching and learning. In the Russian language, the word for “scientist” translates into “one who was taught.” That is an awesome responsibility with many implications. If we were taught, we have an obligation to teach. But the paradox is this: To be a good teacher, you must also be a good student. You must study how people learn in order to teach effectively. And to learn, you must not only study but also teach. In that way, I also have a selfish motivation behind this book: I wanted to share everything I learned from these laureates in order to learn it even more durably. Mostly, however, I see this book as an extension of my duty as an educator. That’s also how the podcast Into the Impossible began.
I’ve always had an insatiable curiosity about learning and education, combined with the recognition that life is short and I want to extract as much wisdom as I can while I can.
As a college professor, I think of teachers as shortcuts in this endeavor. Teachers act as a sort of hack to reduce the amount of time otherwise required to learn something on one’s own, compressing and making the learning process as efficient as possible—but no more so. In other words, there is a value in wrestling with material that cannot be hacked away.
As part of my duty as an educator, I wanted to cultivate a collection of dream faculty comprised of minds I wish I had encountered in my life. The next best thing to having them as my actual teachers is to learn from their interviews in a way that distills their knowledge, philosophy, struggles, tactics, and habits.
I started doing just that at UC San Diego in 2018 and realized I was extremely privileged to have access to some of the greatest minds in human history, ranging from Pulitzer Prize winners and authors to CEOs, artists, and astronauts. As the codirector of the Arthur C. Clarke Center for Human Imagination, I had access to a wide variety of writers, thinkers, and inventors from all walks of life, courtesy of our guest-speaker series. The list of invited speakers is not at all limited to the sciences. The common denominator is conversations about human curiosity, imagination, and communication from a variety of vantage points.
I realized it would be a missed opportunity if only those people who attended our live events benefited from these world-class intellects. So we supplemented their visiting lectures with podcast interviews, during which we explored topics in more detail. I started referring to the podcast as the “university I wish I’d attended where you can wear your pajamas and don’t incur student-loan debt.”
The goal of the podcast is to interview the greatest minds for the greatest number of people. My very first guest was the esteemed physicist Freeman Dyson. I next interviewed science-fiction authors, such as Andy Weir and Kim Stanley Robinson; poets and artists, including Herbert Sigüenza and Rae Armantrout; astronauts, such as Jessica Meir and Nicole Stott; and many others. Along the way, I also started to collect a curated subset of interviews with Nobel Prize–winning physicists.
Then in February 2020, my friend Freeman Dyson died. Dyson was the prototype of a truly overlooked Nobel laureate. His contributions to our understanding of the fundamentals of matter and energy cannot be overstated, yet he was bypassed for the Nobel Prize he surely deserved. I was honored to host him for his winter visits to enjoy La Jolla’s sublime weather.
Freeman’s passing lent an incredible sense of urgency to my pursuits, forcing me to acknowledge that most prize-winning physicists are getting on in years. I don’t know how to say this any other way, but I started to feel sick to my stomach, thinking that I might miss an opportunity to talk to some of the most brilliant minds in history who, because of winning the Nobel Prize, have had an outsized influence on society and culture. So in 2020, I started reaching out to them. Most said yes, although sadly, both of the living female Nobel laureate physicists declined to be interviewed. I’m incredibly disappointed not to have female voices in this book, but it’s due to the reality of the situation and not for lack of trying.
A year later, I had this incredible collection of legacy interviews with some of the most celebrated minds on the planet. T.S. Eliot once said, “The Nobel is a ticket to one’s own funeral. No one has ever done anything after he got it.” No one proves that idea more wrong than the physicists in this book. It’s a rarefied group of individuals to learn from—especially when the focus is on life lessons instead of their research. It would be a dereliction of my intellectual duty not to preserve and share them.
HOW TO APPROACH THIS BOOK
These chapters are not transcripts. From the lengthy interviews I conducted with each laureate, I pulled all of the bits exemplifying traits worthy of emulation. Then, after each exchange, I added context or shared how I have been affected by that quote or idea. I have also edited for clarity, since spoken communication doesn’t always translate directly to the page.
All in all, I have done my best to maintain the authenticity of my exchanges with my guests. For example, you’ll notice that my questions don’t always relate to the take-away. Conversations often go in unexpected directions. I could’ve rephrased the questions for this book so they more accurately represented the laureates’ responses, but I didn’t want to misrepresent context. Still, any mistakes accidentally introduced are definitely mine, not theirs.
Each chapter contains a small box briefly explaining the laureate’s Prize-winning work—not because there will be a test at the end, but because it’s interesting context, and further, I know a lot of my readers will want to learn a bit of the fascinating science in these pages, considering the folks from whom you’ll be learning. Perhaps their work will ignite further curiosity in you. If that’s not you, feel free to skip these boxes. If you’re looking for more, I refer you to the laureates’ Nobel lectures at nobelprize.org. There, you will find their knowledge. But here, you will find examples of their wisdom—distilled and compressed into concentrated, actionable form.
Each interview ends with a handful of lightning-round questions designed to investigate more deeply, to provide you with insight into what these laureates are like as human beings. Often these questions reoccur.
Further, you’ll find several recurrent themes from interview to interview, including the power of curiosity, the importance of listening to your critics, and why it’s paramount to pursue goals that are “useless.” I truly hope you’ll enjoy going out of this Universe and the benefits it will accrue to your life and career!
Buy your copy of Think Like A Nobel Prize Winner here!
Saturday, April 03, 2021
Should Stephen Hawking have won the Nobel Prize?
[This is a transcript of the video embedded below.]
Stephen Hawking, who sadly passed away in 2018, has repeatedly joked that he might get a Nobel Prize if the Large Hadron Collider produces tiny black holes. For example, here is a recording of a lecture he gave in 2016:
“Some of the collisions might create micro black holes. These would radiate particles in a pattern that would be easy to recognize. So I might get a Nobel Prize after all.”
The British physicist and science writer Philip Ball, who attended this 2016 lecture, commented:
“I was struck by how unusual it was for a scientist to state publicly that their work warranted a Nobel… [It] gives a clue to the physicist’s elusive character: shamelessly self-promoting to the point of arrogance, and heedless of what others might think.”
I heard Hawking say pretty much exactly the same thing in a public lecture a year earlier in Stockholm. But I had an entirely different reaction. I didn’t think of his comment as arrogant. I thought he was explaining something which few people knew about. And I thought he was right in that, if the Large Hadron Collider had seen these tiny black holes decay, he almost certainly would have gotten a Nobel Prize. But I also thought that this was not going to happen. He was much more likely to win a Nobel Prize for something else. And he almost did.
Just exactly what might Hawking have won the Nobel Prize for, and should he have won it? That’s what we will talk about today.
In nineteen-seventy-four, Stephen Hawking published a calculation that showed black holes are not perfectly black, but they emit thermal radiation. This radiation is now called “Hawking radiation”. Hawking’s calculation shows that the temperature of a black hole is inversely proportional to the mass of the black hole. This means, the larger the black hole, the smaller its temperature, and the harder it is to measure the radiation. For the astrophysical black holes that we know of, the temperature is way, way too small to be measurable. So, the chances of him ever winning a Nobel Prize for black hole evaporation seemed very small.
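To put a number on that, here is a quick back-of-the-envelope check in Python using the standard formula for the Hawking temperature of a Schwarzschild black hole, \(T_H = \hbar c^3/(8\pi G M k_B)\); for a solar-mass black hole it comes out around \(6\times10^{-8}\) K, far below the 2.7 K of the cosmic microwave background that such a black hole sits in.

```python
import math

hbar  = 1.054571817e-34   # reduced Planck constant, J s
c     = 2.99792458e8      # speed of light, m/s
G     = 6.67430e-11       # Newton's constant, m^3 kg^-1 s^-2
k_B   = 1.380649e-23      # Boltzmann constant, J/K
M_sun = 1.989e30          # solar mass, kg

def hawking_temperature(mass_kg):
    """Hawking temperature of a Schwarzschild black hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(hawking_temperature(M_sun))        # ~6e-8 K for a solar-mass black hole
print(hawking_temperature(1e6 * M_sun))  # a million times colder for a supermassive one
```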
But, in the late nineteen-nineties, the idea came up that tiny black holes might be produced in particle collisions at the Large Hadron Collider. This is only possible if the universe has additional dimensions of space, so not just the three that we know of, but at least five. These additional dimensions of space would have to be curled up to small radii, because otherwise we would already have seen them.
Curled up extra dimensions. Haven’t we heard that before? Yes, because string theorists talk about curled up dimensions all the time. And indeed, string theory was the major motivation to consider this hypothesis of extra dimensions of space. However, I have to warn you that string theory does NOT tell you these extra dimensions should have a size that the Large Hadron Collider could probe. Even if they exist, they might be much too small for that.
Nevertheless, if you just assume that the extra dimensions have the right size, then the Large Hadron Collider could have produced tiny black holes. And since they would have been so small, they would have been really, really hot. So hot, indeed, they’d decay pretty much immediately. To be precise, they’d decay in a time of about ten to the minus twenty-three seconds, long before they’d reach a detector.
But according to Hawking’s calculation, the decay of these tiny black holes should proceed by a very specific pattern. Most importantly, according to Hawking, black holes can decay into pretty much any other particle. And there is no other particle decay which looks like this. So, it would have been easy to see black hole decays in the data. If they had happened. They did not. But if they had, it would almost certainly have gotten Hawking a Nobel Prize.
However, the idea that the Large Hadron Collider would produce tiny black holes was never very plausible. That’s because there was no reason the extra dimensions, in case they exist to begin with, should have just the right size for this production to be possible. The only reason physicists thought this would be the case was an argument from mathematical beauty called “naturalness”. I have explained the problems with this argument in an earlier video, so check this out for more.
So, yeah, I don’t think tiny black holes at the Large Hadron Collider was Hawking’s best shot at a Nobel Prize.
Are there other ways you could see black holes evaporate? Not really. Without these curled up extra dimensions, which do not seem to exist, we can’t make black holes ourselves. Without extra dimensions, the energy density that we’d have to reach to make black holes is way beyond our technological limitations. And the black holes that are produced in natural processes are too large, and then too cold to observe Hawking radiation.
One thing you *can* do, though, is simulate black holes with superfluids. This has been done by the group of Jeff Steinhauer in Israel. The idea is that you can use a superfluid to mimic the horizon of a black hole. If you remember, the horizon of a black hole is a boundary in space, from inside of which light cannot escape. In a superfluid, one does not trap light, but one traps sound waves instead. One can do this because the speed of sound in the superfluid depends on the density of the fluid. And since one can experimentally control this density, one can control the speed of sound.
If one then makes the fluid flow, there’ll be regions from within which the sound waves cannot escape because they’re just too slow. It’s like you’re trying to swim away from a waterfall. There’s a boundary beyond which you just can’t swim fast enough to get away. That boundary is much like a black hole horizon. And the superfluid has such a boundary, not for swimmers, but for sound waves.
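As a toy illustration of that analogy (my own sketch, not Steinhauer’s actual setup), here is a one-dimensional flow that speeds up downstream; the acoustic horizon sits where the flow first exceeds the speed of sound, beyond which upstream-directed sound can no longer make headway.

```python
import numpy as np

c_sound = 1.0                           # speed of sound in the fluid (arbitrary units)
x       = np.linspace(0.0, 10.0, 1001)  # position along the flow
v_flow  = 0.3 * x                       # flow speed, increasing downstream

# Sound trying to travel upstream moves at c_sound - v_flow in the lab frame;
# where that becomes negative, sound is dragged along and cannot escape.
trapped = v_flow > c_sound
horizon = x[np.argmax(trapped)] if trapped.any() else None
print("acoustic horizon at x =", horizon)  # here: where 0.3 * x = 1, i.e. x ~ 3.33
```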
You can also do this with a normal fluid, but you need the superfluid so that the sound has the right quantum properties, as it does in Hawking’s calculation. And in a series of really neat experiments, Steinhauer’s group has shown that these sound waves in the superfluid indeed have the properties that Hawking predicted. That’s because Hawking’s calculation applies to the superfluid in just exactly the same way it applies to real black holes.
Could Hawking have won a Nobel Prize for this? I don’t think so. That’s because mimicking a black hole with a superfluid is cool, but of course it’s not the real thing. These experiments are a type of quantum simulation, which means they demonstrate that Hawking’s calculation is correct. But the measurements on superfluids cannot demonstrate that Hawking’s prediction is correct for real black holes.
So, in all fairness, it never seemed likely Hawking would win a Nobel Prize for Hawking radiation. It’s just too hard to measure. But that wasn’t the only thing Hawking did in his career.
Before he worked on black hole evaporation, Hawking worked with Penrose on the singularity theorems. Penrose’s theorem showed that, in contrast to what most physicists believed at the time, black holes are a pretty much unavoidable consequence of stellar collapse. Before that, physicists thought black holes are mathematical curiosities that would not be produced in reality. It was only because of the singularity theorems that black holes began to be taken seriously. Eventually astronomers looked for them, and now we have solid experimental evidence that black holes exist. Hawking applied the same method to the early universe to show that the Big Bang singularity is likewise unavoidable, unless General Relativity somehow breaks down. And that is an absolutely amazing insight about the origin of our universe.
I made a video about the history of black holes two years ago in which I said that the singularity theorems are worth a Nobel Prize. And indeed, Penrose was one of the recipients of the 2020 Nobel Prize in physics. If Hawking had not died two years earlier, I believe he would have won the Nobel Prize together with Penrose. Or maybe the Nobel Prize committee just waited for him to die, so they wouldn’t have to think about just how to disentangle Hawking’s work from Penrose’s? We’ll never know.
Does it matter that Hawking did not win a Nobel Prize? Personally, I think of the Nobel Prize first and foremost as an opportunity to celebrate scientific discoveries. The people who we think might win this prize are highly deserving with or without an additional medal. And Hawking didn’t need a Nobel Prize; he’ll be remembered without it.
Stephen Hawking, who sadly passed away in 2018, has repeatedly joked that he might get a Nobel Prize if the Large Hadron Collider produces tiny black holes. For example, here is a recording of a lecture he gave in 2016:
“Some of the collisions might create micro black holes. These would radiate particles in a pattern that would be easy to recognize. So I might get a Nobel Prize after all.”The British physicist and science writer Phillip Ball, who attended this 2016 lecture, commented:
“I was struck by how unusual it was for a scientist to state publicly that their work warranted a Nobel… [It] gives a clue to the physicist’s elusive character: shamelessly self-promoting to the point of arrogance, and heedless of what others might think.”I heard Hawking say pretty much exactly the same thing in a public lecture a year earlier in Stockholm. But I had an entirely different reaction. I didn’t think of his comment as arrogant. I thought he was explaining something which few people knew about. And I thought he was right in that, if the Large Hadron Collider would have seen these tiny black holes decay, he almost certainly would have gotten a Nobel Prize. But I also thought that this was not going to happen. He was much more likely to win a Nobel Prize for something else. And he almost did.
Just exactly what might Hawking have won the Nobel Prize for, and should he have won it? That’s what we will talk about today.
In nineteen-seventy-four, Stephen Hawking published a calculation that showed black holes are not perfectly black, but they emit thermal radiation. This radiation is now called “Hawking radiation”. Hawking’s calculation shows that the temperature of a black hole is inversely proportional to the mass of the black hole. This means, the larger the black hole, the smaller its temperature, and the harder it is to measure the radiation. For the astrophysical black holes that we know of, the temperature is way, way too small to be measurable. So, the chances of him ever winning a Nobel Prize for black hole evaporation seemed very small.
But, in the late nineteen-nineties, the idea came up that tiny black holes might be produced in particle collisions at the Large Hadron Collider. This is only possible if the universe has additional dimensions of space, so not just the three that we know of, but at least five. These additional dimensions of space would have to be curled up to small radii, because otherwise we would already have seen them.
Curled up extra dimensions. Haven’t we heard that before? Yes, because string theorists talk about curled up dimensions all the time. And indeed, string theory was the major motivation to consider this hypothesis of extra dimensions of space. However, I have to warn you that string theory does NOT tell you these extra dimensions should have a size that the Large Hadron Collider could probe. Even if they exist, they might be much too small for that.
Nevertheless, if you just assume that the extra dimensions have the right size, then the Large Hadron Collider could have produced tiny black holes. And since they would have been so small, they would have been really, really hot. So hot, indeed, they’d decay pretty much immediately. To be precise, they’d decay in a time of about ten to the minus twenty-three seconds, long before they’d reach a detector.
But according to Hawking’s calculation, the decay of these tiny black holes should proceed by a very specific pattern. Most importantly, according to Hawking, black holes can decay into pretty much any other particle. And there is no other particle decay which looks like this. So, it would have been easy to see black hole decays in the data. If they had happened. They did not. But if they had, it would almost certainly have gotten Hawking a Nobel Prize.
However, the idea that the Large Hadron Collider would produce tiny black holes was never very plausible. That’s because there was no reason the extra dimensions, in case they exist to begin with, should have just the right size for this production to be possible. The only reason physicists thought this would be the case was an argument from mathematical beauty called “naturalness”. I have explained the problems with this argument in an earlier video, so check this out for more.
So, yeah, I don’t think tiny black holes at the Large Hadron Collider was Hawking’s best shot at a Nobel Prize.
Are there other ways you could see black holes evaporate? Not really. Without these curled up extra dimensions, which do not seem to exist, we can’t make black holes ourselves. Without extra dimensions, the energy density that we’d have to reach to make black holes is way beyond our technological limitations. And the black holes that are produced in natural processes are too large, and therefore too cold, for us to observe their Hawking radiation.
One thing you *can* do, though, is simulate black holes with superfluids. This has been done by the group of Jeff Steinhauer in Israel. The idea is that you can use a superfluid to mimic the horizon of a black hole. If you remember, the horizon of a black hole is a boundary in space from inside of which light cannot escape. In a superfluid, one does not trap light, but one traps sound waves instead. One can do this because the speed of sound in the superfluid depends on the density of the fluid. And since one can experimentally control this density, one can control the speed of sound.
If one then makes the fluid flow, there’ll be regions from within which the sound waves cannot escape because they’re just too slow. It’s like you’re trying to swim away from a waterfall. There’s a boundary beyond which you just can’t swim fast enough to get away. That boundary is much like a black hole horizon. And the superfluid has such a boundary, not for swimmers, but for sound waves.
You can also do this with a normal fluid, but you need the superfluid so that the sound has the right quantum properties, as it does in Hawking’s calculation. And in a series of really neat experiments, Steinhauer’s group has shown that these sound waves in the superfluid indeed have the properties that Hawking predicted. That’s because Hawking’s calculation applies to the superfluid in just exactly the same way it applies to real black holes.
Could Hawking have won a Nobel Prize for this? I don’t think so. That’s because mimicking a black hole with a superfluid is cool, but of course it’s not the real thing. These experiments are a type of quantum simulation, which means they demonstrate that Hawking’s calculation is correct. But the measurements on superfluids cannot demonstrate that Hawking’s prediction is correct for real black holes.
So, in all fairness, it never seemed likely Hawking would win a Nobel Prize for Hawking radiation. It’s just too hard to measure. But that wasn’t the only thing Hawking did in his career.
Before he worked on black hole evaporation, Hawking worked with Penrose on the singularity theorems. Penrose’s theorem showed that, in contrast to what most physicists believed at the time, black holes are a pretty much unavoidable consequence of stellar collapse. Before that, physicists thought black holes were mathematical curiosities that would not be produced in reality. It was only because of the singularity theorems that black holes began to be taken seriously. Eventually astronomers looked for them, and now we have solid experimental evidence that black holes exist. Hawking applied the same method to the early universe to show that the Big Bang singularity is likewise unavoidable, unless General Relativity somehow breaks down. And that is an absolutely amazing insight about the origin of our universe.
I made a video about the history of black holes two years ago in which I said that the singularity theorems are worth a Nobel Prize. And indeed, Penrose was one of the recipients of the 2020 Nobel Prize in physics. If Hawking had not died two years earlier, I believe he would have won the Nobel Prize together with Penrose. Or maybe the Nobel Prize committee just waited for him to die, so they wouldn’t have to think about just how to disentangle Hawking’s work from Penrose’s? We’ll never know.
Does it matter that Hawking did not win a Nobel Prize? Personally, I think of the Nobel Prize first and foremost as an opportunity to celebrate scientific discoveries. The people who we think might win this prize are highly deserving with or without an additional medal. And Hawking didn’t need a Nobel Prize; he’ll be remembered without it.
Saturday, August 15, 2020
Understanding Quantum Mechanics #5: Decoherence
[Note: This transcript will not make much sense without the graphics in the video.]
I know I promised I would tell you what it takes to solve the measurement problem in quantum mechanics. But then I remembered that almost one in two physicists believes that the problem does not exist to begin with. So, I figured I should first make sure everyone – even the physicists – understands why the measurement problem has remained unsolved, despite a century of effort. This also means that if you watch this video to the end, you will understand what half of physicists do not understand.
That about half of physicists do not understand the measurement problem is not just anecdotal evidence; that’s poll results from 2016. This questionnaire was sent to a little more than one thousand two hundred physicists, of which about twelve percent responded. That’s a decent response rate for a survey, but note that the sample may not be representative of the global community. While the questionnaire was sent to physicists of all research areas, forty-four percent of them were Danish.
With those caveats in mind: a stunning seventeen percent of the survey respondents said the measurement problem is a pseudoproblem. Even worse: twenty-nine percent erroneously think it has been solved by decoherence. So, this is what I want to explain today: What is decoherence and what does it have to do with quantum measurements? For this video, I will assume that you know the bra-ket notation for wave-functions. If you do not know it, please watch my earlier video.
In quantum mechanics, we describe a system by a wave-function that is a vector and can be expanded in a basis, which is a set of vectors of length one. The wave-function is usually denoted with the greek letter Psi. I will just label these basis vectors with numbers. A key feature of quantum mechanics is that the coefficients in the expansion of the wave-function, for which I used the letter a, can be complex numbers. Technically, there can be infinitely many basis-vectors, but that’s a complication we will not have to deal with here. We will just look at the simplest possible case, that of two basis vectors.
It is common to use basis vectors which describe possible measurement outcomes, and we will do the same. So, |1> and |2>, stand for two values of an observable that you could measure. The example that physicists typically have in mind for this are two different spin values of a particle, say +1 and -1. But the basis vectors could also describe something else that you measure, for example two different energy levels of an atom or two different sides of a detector, or what have you.
Once you have expanded the wave-function in a basis belonging to the measurement outcomes, then the square of the coefficient for a basis vector gives you the probability of getting the measurement outcome. This is Born’s rule. So if a coefficient was one over square root two, then the square is one half which means a fifty percent probability of finding this measurement outcome. Since the probabilities have to add up to 100%, this means the absolute squares of the coefficients have to add up to 1.
With these two basis vectors you can describe a superposition, which is a sum with factors in front of them. For more about superpositions, please watch my earlier video. The weird thing about quantum mechanics now is that if you have a state that is in a superposition of possible measurement outcomes, say, spin plus one and spin minus one, you never measure that superposition. You only measure either one or the other.
As example, let us use a superposition that is with equal probability in one of the possible measurement outcomes. Then the factor for each basis vector has to be the square root of one half. But this is quantum mechanics, so let us not forget that the coefficients are complex numbers. To take this into account, we will put in another factor here, which is a complex number with absolute value equal to one. We can write any such complex number as e to the I times theta, where theta is a real number.
The reason for doing this is that such a complex number does not change anything about the probabilities. See, if we ask what is the probability of finding this superposition in state |1>, then this would be (one over square root of two) times (e to the I theta) times the complex conjugate, which is (one over square root of two) times (e to the minus I theta). And that comes out to be one half, regardless of what theta is.
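If you prefer to see this with concrete numbers, here is a minimal sketch (my own illustration, not part of the transcript) of the two-state superposition with a phase factor, and the Born-rule probabilities it gives:

# Equal-weight superposition psi = (|1> + e^{i*theta} |2>) / sqrt(2)
import numpy as np

theta = 0.73                           # any real phase angle
a1 = 1 / np.sqrt(2)                    # coefficient of |1>
a2 = np.exp(1j * theta) / np.sqrt(2)   # coefficient of |2>, with the phase factor

p1 = abs(a1) ** 2                      # Born's rule: probability of outcome 1
p2 = abs(a2) ** 2                      # Born's rule: probability of outcome 2

print(p1, p2, p1 + p2)                 # approximately 0.5, 0.5, 1.0

Changing theta changes nothing in the output, which is exactly the point made above: the phase drops out of the probabilities.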
This theta is also called the “phase” of the wave-function because you can decompose the complex number into a sine and a cosine, and then it appears in the argument where a phase normally appears for an oscillation. There isn’t anything oscillating here, though, because there is no time-dependence. You could put another such complex number in front of the other coefficient, but this doesn’t change anything about the following.
Ok, so now we have this superposition that we never measure. The idea of decoherence is now to take into account that the superposition is not the only thing in our system. We prepare a state at some initial time, and then it travels to the detector. A detector is basically a device that amplifies a signal. A little quantum particle comes in one end and a number comes out on the other end. This necessarily means that the superposition which we want to measure interacts with many other particles, both along the way to the detector, and in the detector. This is what you want to describe with decoherence.
The easiest way to describe these constant bumps that the superposition has to endure is that each bump changes the phase of the state, so the theta, by a tiny little bit. To see what effect this has when you do a great many of these little bumps, we first have to calculate the density-matrix of the wave-function. It will become clear later why.
As I explained in my previous video, the density matrix, usually denoted with the greek letter rho, is the ket-bra product of the wave-function with itself. For the simple case of our superposition, the density matrix looks like this. It has a one over two in each entry because of all the square roots of two, and the off-diagonal elements also have this complex factor with the phase. The idea of decoherence is then to say that each time the particle bumps into some other particle, this phase randomly changes, and what you actually measure is the average over all those random changes.
So, understanding decoherence comes down to averaging this complex number. To see what goes on, it helps to draw the complex plane. Here is the complex plane. Now, every number with an absolute value of 1 lies on a circle of radius one around zero. On this circle, you therefore find all the numbers of the form e to the I times theta, with theta a real number. If you turn theta from 0 to 2π, you go once around the circle. That’s Euler’s formula, basically.
The whole magic of decoherence is in the following insight. If you randomly select points on this circle and average over them, then the average will not lie on the circle. Instead, it will converge to the middle of the circle, which is at zero. So, if you average over all the random kicks, you get zero. The easiest way to see this is to think of the random points as little masses and the average as the center of mass.
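Here is a quick numerical check of that claim (again my own illustration, not from the transcript):

# The average of many random points on the unit circle tends to the center, zero.
import numpy as np

rng = np.random.default_rng(0)
thetas = rng.uniform(0, 2 * np.pi, size=100_000)  # random phase kicks
average = np.exp(1j * thetas).mean()

print(abs(average))   # close to 0; it shrinks roughly like 1/sqrt(N) as N grows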
Now let us look at the density matrix again. We just learned that if we average over the random kicks, then these off-diagonal entries go to zero. Nothing happens with the diagonal entries. That’s decoherence.
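Putting the pieces together, here is a hedged sketch of the dephasing average itself, for the equal-weight superposition from above (my own illustration; the transcript only describes this in words):

import numpy as np

rng = np.random.default_rng(1)

def density_matrix(theta):
    """Pure-state density matrix rho = |psi><psi| for psi = (|1> + e^{i*theta}|2>)/sqrt(2)."""
    psi = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)
    return np.outer(psi, psi.conj())

# Average the density matrix over many random phase kicks.
thetas = rng.uniform(0, 2 * np.pi, size=10_000)
rho_avg = sum(density_matrix(t) for t in thetas) / len(thetas)

print(np.round(rho_avg, 3))              # diagonals ~0.5, off-diagonals ~0
print(np.trace(rho_avg @ rho_avg).real)  # purity ~0.5, no longer 1

That the purity, the trace of rho squared, drops from one to about one half is another way of seeing the point made next: no single wave-function reproduces the decohered matrix.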
The reason this is called “decoherence” is that the random changes to the phase destroy the ability of the state to make an interference pattern with itself. If you randomly shift around the phase of a wave, you don’t get any pattern. A state that has a well-defined phase and can interfere with itself, is called “coherent”. But the terminology isn’t the interesting bit. The interesting bit is what has happened with the density matrix.
This looks utterly unremarkable. It’s just a matrix with one over two’s on the diagonal. But what’s interesting about it is that there is no wave-function that will give you this density matrix. To see this, look again at the density matrix for an arbitrary wave-function in two dimensions. Now take for example this off-diagonal entry. If this entry is zero, then one of these coefficients has to be zero, but then one of the diagonal elements is also zero, which is not what the decohered density matrix looks like. So, the matrix that we got after decoherence no longer corresponds to a wave-function.
That’s why we use density matrices in the first place. Every wave-function gives you a density matrix. But not every density matrix gives you a wave-function. If you want to describe how a system loses coherence, you therefore need to use density matrices.
What does this density matrix after decoherence describe? It describes classical probabilities. The diagonal entries tell you the probability for each of the possible measurement outcomes, like in quantum mechanics. But all the quantum-ness of the system, which was in the ability of the wave-function to interfere with itself, has gone away with the off-diagonal entries.
So, decoherence converts quantum probabilities to classical probabilities. It therefore explains why we never observe any strange quantum behavior in every-day life. It’s because this quantum behavior goes away very quickly with all the many interactions that every particle constantly has, whether or not you measure them. Decoherence gives you the right classical probabilities.
But it does not tell you what happens with the system itself. To see this, keep in mind that the density matrix in general does not describe a collection of particles or a sequence of measurements. It might well just describe one single particle. And after you have measured the particle, it is with probability 1 either in one state, or in the other. But this would correspond to a density matrix which has one diagonal entry that is 1 and all other entries zero. The state after measurement is not in a fifty-fifty probability-state, that just isn’t a thing. So, decoherence does not actually tell you what happens with the system itself when you measure it. It merely gives you probabilities for what you observe.
This is why decoherence only partially solves the measurement problem. It tells you why we do not normally observe quantum effects for large objects. It does not tell you, however, how it happens that a particle ends up in one, and only one, possible measurement outcome.
The best way to understand a new subject is to actively engage with it, and as much as I love doing these videos, this is something you have to do yourself. A great place to start engaging with quantum mechanics on your own is Brilliant, who have been sponsoring this video. Brilliant offers interactive courses on a large variety of topics in science and mathematics. To make sense of what I just told you about density matrices, for example, have a look at their courses on linear algebra, probabilities, and on quantum objects.
To support this channel and learn more about Brilliant, go to brilliant.org/Sabine, and sign up for free. The first two-hundred people who go to that link will get twenty percent off the annual Premium subscription.
Saturday, May 09, 2020
A brief history of black holes
Today I want to talk about the history of black holes. But before I get to this, let me mention that all my videos have captions. You turn them on by clicking on “CC” in the YouTube toolbar.
Now about the black holes. The possibility that gravity can become so strong that it traps light appears already in Newtonian gravity, but black holes were not really discussed by scientists until it turned out that they are a consequence of Einstein’s theory of general relativity.
General Relativity is a set of equations for the curvature of space and time, called Einstein’s field equations. And black holes are one of the possible solutions to Einstein’s equations. This was first realized by Karl Schwarzschild in 1916. For this reason, black holes are also sometimes called the “Schwarzschild solution”.
Schwarzschild of course was not actually looking for black holes. He was just trying to understand what Einstein’s theory would say about the curvature of space-time outside an object that is to good precision spherically symmetric, like, say, our sun or planet earth. Now, outside these objects, there is approximately no matter, which is good, because in this case the equations become particularly simple and Schwarzschild was able to solve them.
What happens in Schwarzschild’s solution is the following. As I said, this solution only describes the outside of some distribution of matter. But you can ask then, what happens on the surface of that distribution of matter if you compress the matter more and more, that is, you keep the mass fixed but shrink the radius. Well, it turns out that there is a certain radius, at which light can no longer escape from the surface of the object, and also not from any location inside this surface. This dividing surface is what we now call the black hole horizon. It’s a sphere whose radius is now called the Schwarzschild radius.
Where the black hole horizon is, depends on the mass of the object, so every mass has its own Schwarzschild radius, and if you could compress the mass to below that radius, it would keep collapsing to a point and you’d make a black hole. But for most stellar objects, their actual radius is much larger than the Schwarzschild radius, so they do not have a horizon, because inside of the matter one has to use a different solution to Einstein’s equations. The Schwarzschild radius of the sun, for example, is a few miles*, whereas the actual radius of the sun is some hundred-thousand miles. The Schwarzschild radius of planet Earth is merely a few millimeters.
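As a quick check of those figures, here is a small sketch (my own addition, not part of the original post) that evaluates the Schwarzschild radius r_s = 2GM/c² for the sun and for Earth:

# Schwarzschild radius r_s = 2 * G * M / c^2
G = 6.67430e-11      # m^3 / (kg s^2)
c = 2.99792458e8     # m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

M_sun   = 1.989e30   # kg
M_earth = 5.972e24   # kg

print(schwarzschild_radius(M_sun))    # about 2950 meters, roughly 1.8 miles
print(schwarzschild_radius(M_earth))  # about 0.009 meters, roughly nine millimeters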
Now, it turns out that in Schwarzschild’s original solution, there is a quantity that goes to infinity as you approach the horizon. For this reason, physicists originally thought that the Schwarzschild solution makes no physical sense. However, it turns out that there is nothing physically wrong with that. If you look at any quantity that you can actually measure as you approach a black hole, none of them becomes infinitely large. In particular, the curvature just goes with the inverse of the square of the mass. I explained this in an earlier video. And so, physicists concluded, this infinity at the black hole horizon is a mathematical artifact and, indeed, it can be easily removed.
With that clarified, physicists accepted that there is nothing mathematically wrong with black holes, but then they argued that black holes would not occur in nature because there is no way to make them. The idea was that, since the Schwarzschild solution is perfectly spherically symmetric, the conditions that are necessary to make a black hole would just never happen.
But this too turned out to be wrong. Indeed, it was proved by Stephen Hawking and Roger Penrose in the 1960s that the very opposite is the case. Black holes are what you generally get in Einstein’s theory if you have a sufficient amount of matter that just collapses because it cannot build up sufficient pressure. And so, if a star runs out of nuclear fuel and has no new way to create pressure, a black hole will be the outcome. In contrast to what physicists thought previously, black holes are hard to avoid, not hard to make.
So this was the situation in the 1970s. Black holes had turned from mathematically wrong, to mathematically correct* but non-physical, to a real possibility. But there was at the time no way to actually observe a black hole. That’s because back then the main mode of astrophysical observation was using light. And black holes are defined by the very property that they do not emit light.
However, there are other ways of observing black holes. Most importantly, black holes influence the motion of stars in their vicinity, and the other stars are observable. From this one can infer the mass of the object that the stars orbit around and one can put a limit on the radius. Black holes also swallow material in their vicinity, and from the way that they swallow it, one can tell that the object has no hard surface. The first convincing observations that our own galaxy contains a black hole came in the late 1990s. About ten years later, there were so many observations that could only be explained by the existence of black holes that today basically no one who understands the science doubts black holes exist.
What makes this story interesting to me is how essential it was that Penrose and Hawking understood the mathematics of Einstein’s theory and could formally prove that black holes should exist. It was only because of this that black holes were taken seriously at all. Without that, maybe we’d never have looked for them to begin with. A friend of mine thinks that Penrose deserves a Nobel Prize for his contribution to the discovery of black holes. And I think that’s right.
* Unfortunately, a mistake in the spoken text.
Thursday, April 16, 2020
How Heisenberg Became Uncertain
I have decided that my YouTube channel lacks a history part because there is so much we can learn from the history of science. So, today I want to tell you a story. It’s the story of how Werner Heisenberg got the uncertainty principle named after him.
Heisenberg was born in 1901 in the German city of Würzburg. He went on to study physics in Munich. In 1923, Heisenberg was scheduled for his final oral examination to obtain his doctorate. He passed mathematics, theoretical physics, and astronomy just fine, but then he ran into trouble with experimental physics.
His examination in experimental physics was by Wilhelm Wien. That’s the guy who has Wien’s law named after him. Wien, as an experimentalist, had required that Heisenberg do a “Praktikum”, which is a series of exercises in physics experimentation; it’s lab work for beginners, basically. But the university lacked some equipment and Heisenberg was not interested enough to find out where to get it. So he just moved on to other things without looking much into the experiments he was supposed to do. That, as it turned out, was not a good idea.
When the day of Heisenberg’s experimental exam came, it did not go well. In their book “The Historical Development of Quantum Theory”, Mehra and Rechenberg recount:
“Wien was annoyed when he learned in the examination that Heisenberg had done so little in the experimental exercise given to him. He then began to ask [Heisenberg] questions to gauge his familiarity with the experimental setup; for instance, he wanted to know what the resolving power of the Fabry-Perot interferometer was... Wien had explained all this in one of his lectures on optics; besides, Heisenberg was supposed to study it anyway... But he had not done so and now tried to figure it out unsuccessfully in the short time available during the examination. Wien... asked about the resolving power of a microscope; Heisenberg did not know that either. Wien questioned him about the resolving power of telescopes, which [Heisenberg] also did not know.”
What happened next? Well, Wien wanted to fail Heisenberg, but the theoretical physicist Arnold Sommerfeld came to Heisenberg’s help. Heisenberg had excelled in the exam on theoretical physics, and so Sommerfeld put in a strong word in favor of giving Heisenberg his PhD. With that, Heisenberg passed the doctoral examination, though he got the lowest possible grade.
But this was not the end of the story. Heisenberg was so embarrassed about his miserable performance that he sat down to learn everything about telescopes and microscopes that he could find. This was in the early days of quantum mechanics and it led him to wonder if there is a fundamental limit to how well one can resolve structures with a microscope. He went about formulating a thought experiment which is now known as “Heisenberg’s Microscope.”
This thought experiment was about measuring a single electron, something which was actually not possible at the time. The smallest distance you can resolve with a microscope, let us call this Δ x, depends on both the wave-length of the light that you use, I will call that λ, and the opening angle of the microscope, ε. The smallest resolvable distance is proportional to the wave-length, so a smaller wave-length allows you to resolve smaller structures. And it is inversely proportional to the sine of the opening angle. A smaller opening angle makes the resolution worse.
But, said Heisenberg, if light is made of particles, that’s the photons, and I try to measure the position of an electron with light, then the photons will kick the electron. But you need some opening angle for the microscope to work, which means you don’t know exactly where the photon is coming from. Therefore, the act of measuring the position of the electron with a photon actually makes me less certain about where the electron is because I didn’t know where the photon came from.
Heisenberg estimated that the momentum that would be transferred from the photon to the electron is proportional to the energy of the photon, which means inversely proportional to the wavelength, and proportional to the sine of the opening angle. So if we call that momentum Δ p, we have Δ p is proportional to sine ε over λ. And the constant in front of this is Planck’s constant, because that gives you the relation between the energy and the wave-length of the photon.
Now you can see that if you multiply the two uncertainties, the one in position and the one in momentum of the electron, you find that it’s just Planck’s constant. This is Heisenberg’s famous uncertainty principle. The more you know about the position of the particle, the less you know about the momentum and the other way round.
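For readers who like to see the cancellation explicitly, here is a tiny symbolic sketch (my own addition, not from the post) of the two estimates and their product:

# Heisenberg's microscope estimates: the opening angle and the wavelength
# cancel in the product, leaving Planck's constant.
import sympy as sp

lam, eps, h = sp.symbols("lambda epsilon h", positive=True)

delta_x = lam / sp.sin(eps)       # resolution limit of the microscope
delta_p = h * sp.sin(eps) / lam   # momentum kick from the photon

print(sp.simplify(delta_x * delta_p))   # prints: h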
We know today that Heisenberg’s argument for microscopes is not quite correct but, remarkably enough, the conclusion is correct. Indeed, this uncertainty has nothing to do with microscopes in particular. Heisenberg’s uncertainty is far more than that: It’s a general property of nature. And it does not only hold for position and momenta but for many other pairs of quantities.
Many years later Heisenberg wrote about his insight: “So one might even assume, that in the work on the gamma-ray microscope and the uncertainty relation I used the knowledge which I had acquired by this poor examination.”
I like this story because it tells us that if there is something you don’t understand, then don’t be ashamed and run away from it, but dig into it. Maybe you will find that no one really understands it and leave your mark in science.
Friday, May 03, 2019
Graham Farmelo’s interview of Edward Witten. Transcript.
[I’ve meant for some while to try an automatic transcription software, and Graham Farmelo’s interview of Edward Witten (mentioned by Peter Woit) seemed a good occasion. I used an app called “Trint” which seems to work okay. But both the software and I have trouble with Farmelo’s British accent and with Witten’s mumbling. I have marked the places that I didn’t understand with [xxx]. Please leave me a comment in case you can figure out what’s being said. Also notify me of any blunders that I might have missed. Thanks!]
GF [00:00:06] A mind of the brilliance of Edward Witten’s comes along in mathematical physics about once every 50 years if we’re lucky. Since the late 1970s he’s been preeminent among the physicists who are trying to understand the underlying order of the universe. Or, as you might say, trying to discover the most fundamental equations of physics. More than that, by studying the mathematical qualities of nature, Witten became remarkably influential in pure mathematics. He is the only physicist ever to have won the coveted Fields Medal, which has much the same stature in mathematics as a Nobel Prize has in physics.
GF [00:00:46] My name is Graham Farmelo, author of “The universe speaks in numbers.” Witten is a central figure in my book and he’s been helpful to me. He is a reluctant interviewee, though, so I was pleased when he agreed to talk with me last August about some aspects of his career and the relationship between mathematics and physics. He was in a relaxed mood, sitting on a sofa in his office at the Institute for Advanced Study in Princeton, wearing his tennis clothes. As usual, he speaks quietly, so you’ll have to listen hard.
GF [00:01:20] He uses quite a few technical terms too. But if you’re not familiar with them I suggest that you just let them wash over you. The key thing is to get a sense of Witten’s thinking about the big picture. He is worth it.
GF [00:01:32] He gives us several illuminating insights into how he became interested in state-of-the-art mathematics while remaining a physicist to his fingertips. I began by asking him if he’d always been interested in mathematics and physics.
EW [00:01:47] When I was a kid I was very interested in astronomy. It was the period of the space race and everybody was interested in space. Then, when I was a little older, I was exposed to calculus by my father. And for a while I was very interested in math.
GF [00:02:02] You said for a while, so did that lapse?
EW [00:02:04] Yes, it did lapse for a few years, and the reason it lapsed, I think, was that after being exposed to calculus at the age of eleven it actually was quite a while before I was shown anything that was really more advanced. So I wasn’t really aware that there was much more interesting, more advanced math. Probably not the only reason, but certainly one reason that my interest lapsed.
GF [00:02:22] Yeah. Were you ever interested in any other subjects? I mean because you know you came on to study history and things like that. Did that really interest you comparably to math and physics?
EW [00:02:31] I guess there was a period when I imagined doing journalism or history or something, but at about the age of 21 or 22 I realized that it wasn’t going to work out well in my case.
GF [00:02:42] After studying modern languages he worked on George McGovern’s ill-fated presidential campaign and even studied economics for one semester before he finally turned to physics.
GF [00:02:53] Apparently he showed up at Princeton University wanting to do a Ph.D. in theoretical physics and they wisely took him on after he made short work of some preliminary exams. Boy did he learn quickly. One of the instructors tasked with teaching him in the lab told me that within three weeks Witten’s questions on the experiments went from basic to brilliant to Nobel level. As a postdoc at Harvard, Witten became acquainted with several of the theorist pioneers of the Standard Model, including Steven Weinberg, Shelly Glashow, Howard Georgi, and Sidney Coleman, who helped interest the young Witten in the mathematics of these new theories.
EW [00:03:33] The physicists I learned from most during those years were definitely Weinberg, Glashow, Georgi, and Coleman. And they were completely different. So Georgi and Glashow were doing model building, basically weak interaction model building, elaborations on the Standard Model. I found it fascinating but it was a little bit hard to find an entree there. If the world had been a little bit different, I might have made my career doing things like they were doing.
GF [00:04:01] Wow. This was the first time I’d heard Witten say that he was at first expecting to be like most other theorists and take his inspiration from the results of experiments, building so-called models of the real world. What, I wondered, led him to change direction and become so mathematical?
EW [00:04:19] Let me provide a little background for listeners. Up to and including the time I was a graduate student, for 20, 25 years, there had been constant waves of new discoveries in elementary particle physics: strange particles, muons, hadronic resonances, parity violation, CP violation, scaling in deep inelastic scattering, the charm particle, and I’m forgetting a whole bunch. But that’s enough to give you the idea. So that was over a period of over 20 years. So even after a lot of the big discoveries that was one every three years. Now, if experimental surprises and discoveries had continued like that, which at the time I think is what would have happened because it had been going on for a quarter century, then I would have expected to be involved in model building, or grappling with it, like colleagues such as Georgi and Glashow were doing. Most notably, however, it turned out that this period of constant surprise and turmoil was ending just while I was a graduate student, and therefore later on I had no successful directions.
GF [00:05:20] Do you remember being disappointed by that in any sense?
EW [00:05:23] Of course I was, you never stop being disappointed.
GF [00:05:27] Oh dear, oh it’s a hard life.
GF [00:05:31] You were disappointed by the drying up, so to speak, of the…
EW [00:05:33] There
have been important experimental discoveries since then. But the pace has not
been quite the same. Although they’ve been very important they’ve been a little
bit more abstract in what they teach us and definitely they’ve offered fewer
opportunities for model building than was the case in the 60s and 70s. I’d like
to just tell you a word or two about my interaction with the other physicists. There was Steve Weinberg and what I remember best from Weinberg. He was one of the
pioneers of a subject called current algebra which was an important part of
understanding the nuclear force. But he obviously thought most other physicists
didn't understand it properly and I was one of those. So whenever current
algebra was mentioned at a seminar or a discussion meeting he would always give
a short little speech explaining his understanding of it. In my case after
hearing those speeches the eight to 10 times [laughter] what Steve was telling
us.
EW [00:06:28] Then
there was Sidney Coleman. First of all Sidney was the only one who was
interested in strong coupling behavior of quantum field theories which is what
I’d become interested in as a graduate student with encouragement from my
advisor David Gross. So, he was really the only one I could interact with about
that. Others regarded strong coupling as a black box. So, maybe for your
listeners, I should explain that if you’re a student in physics they teach you
what to do when quantum effects are small, but no one tells you what to do when
quantum effects are big; there's no general answer. It’s a smorgasbord of
different methods that work for different problems, and there are a lot of problems that
are intractable. So, I'd become interested in that as a student, but I was mostly
beating my head against a brick wall because it is usually intractable, and
Sidney was the only one of the professors at Harvard interested in such
matters. So, apart from interacting with him about that, he also exposed me to a
number of mathematical topics I wouldn’t have known about otherwise but that
eventually were important in my work, topics most physicists didn’t know about.
And certainly I didn’t know about them.
GF [00:07:27] Yeah, can I ask were you consciously interested in advanced pure math at that time?
EW [00:07:32] Definitely
not.
GF [00:07:32] You
were not?
EW [00:07:32] No,
most definitely not. I got dragged into math gradually because, you see, the
Standard Model had been discovered, so the problems in physics were not exactly
the same as they had been before. But there were new problems that were opened
up by the Standard Model. For one thing, there is new math that came into
understanding the Standard Model. Just when I was finishing graduate school,
more or less, Polyakov and others introduced the Yang-Mills instanton, which has
proved to be important in understanding physics. It’s also had a lot of
mathematical applications.
GF [00:08:02] You can think of instantons as fleeting events that occur in space and time on
the subatomic scale. These events are predicted by the theories of the
subatomic world known as gauge theories. A key moment in this story is Witten’s
first meeting with the great mathematician Michael Atiyah at the Massachusetts
Institute of Technology. They would become the leaders of the trend towards a
more mathematical approach to our understanding of the world.
EW [00:08:32] So
Polyakov and others had discovered the Yang-Mills instanton and it was
important in physics and proved to have many other applications. And then
Atiyah was one of the mathematicians who discovered amazing mathematical
methods that could be used to solve the instanton equations. So he was
lecturing about that when he visited Cambridge, I think in the spring of
1977, but I could be off by a few months, and I was extremely interested. And so we
talked about it a lot. I probably made more of an effort to understand the math
involved than most of the other physicists did. Anyway this interaction surely
led to my learning all kinds of math I’d never heard of before, complex manifolds, sheaf cohomology groups.
GF [00:09:16] This
was news to you at that time?
EW [00:09:18] Definitely.
So I might tell you at an even more basic level the Atiyah-Singer index theorem
had been news to me a few months earlier when I heard about it from Sidney
Coleman.
GF [00:09:28] The
index theorem first proved by Michael Atiyah and his friend Isidore Singer
connects two branches of mathematics that had seemed unconnected: calculus,
that’s the mathematics of changing quantities, and topology, which concerns the
properties of objects that don’t change when they’re stretched, twisted, or
deformed in some way. Topology is now central to our understanding of
fundamental physics.
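[A standard special case, added here as an illustration rather than taken from the interview: the Chern-Gauss-Bonnet theorem. For a closed surface S with Gaussian curvature K,

  (1/2π) ∫_S K dA = χ(S),

so an integral, a quantity from calculus, computes the Euler characteristic, a purely topological invariant. The full Atiyah-Singer theorem generalizes this kind of statement to a wide class of differential operators.]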
EW [00:09:51] Like
other physics graduate students of the period, I had no inkling of any 20th
century math, really. So, I’d never heard of the names Atiyah and Singer or of
the concept of the index, or of the index theorem, until Albert Schwarz showed
that it was relevant to understanding instantons. And even then that paper
didn’t make an immediate splash. If Coleman hadn’t pointed it out, I’m not sure
how long it would have been before I knew about it. And then there was
progress in understanding instanton equations by Atiyah among others. The first
actually was Richard Ward, Penrose’s doctoral student. So, I got interested in that, but I was
interested in a sense in a narrow way, which is: what good would it be in
physics? And I learned the math, or some of the math, that the mathematicians were using.
But I was a little skeptical about the applicability for physics, and I wasn’t
really wrong, because the original program of Polyakov didn’t quite work out. The
details of the instanton equations that were beautifully elucidated by the
mathematicians were not in practice that helpful for things you can actually do
as a physicist. So, to sort of summarize what happened in the long run, Atiyah’s
work and that of his colleagues made me learn a lot of math I’d never heard of
before which turned out to be very important later but not per se for the
original reasons.
GF [00:11:10] When
did you start to become convinced that math was really going to be interesting?
EW [00:11:14] Well, that gradually happened in the
1980s, I guess. So, for example, one early episode, which was in 1981 or '82: I was
trying to understand the properties of what's called the vacuum, the quantum
ground state, in supersymmetric field theories, and it really had some behavior
that was hard to explain using standard physics ideas. And since I couldn't
understand it, I kept looking at simpler and simpler models, and they all had the
same puzzle. So finally I got to what seemed like the simplest possible model
in which you could ask the question, and it still had a puzzling behavior. But at a
certain point, I think when I was in a swimming pool in Aspen, Colorado, I
remembered that Raoul Bott, and actually Atiyah also, had given some lectures to physicists a couple of years earlier in Cargèse, and they had tried to explain something
called Morse theory to us. I’m sure there are, like me, many other physicists
who have never heard of Morse theory or aren't familiar with any of the questions it
addresses.
GF [00:12:11] Would you like to say what Morse theory is roughly speaking?
EW [00:12:14] Well,
if you’ve got a rubber ball floating in space, it’s got a lowest point, where
the elevation is lowest, and it’s got a highest point, where the elevation is
highest. So it’s got a maximum and a minimum. If you have a more complicated
surface, like for example a rubber inner tube, it’ll have saddle points of the height
function as well as a maximum and a minimum. And Morse theory relates the maxima
and minima and the saddle points of a function, such as the height function, to the
topology of the surface or topological manifold on which the function is defined.
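[A standard textbook example of what Witten describes, not from the interview: take the height function on an upright torus, the rubber inner tube. It has one minimum, two saddle points, and one maximum, and the alternating count reproduces the Euler characteristic of the surface:

  #minima − #saddles + #maxima = 1 − 2 + 1 = 0 = χ(torus),

while for the sphere, the rubber ball, the same count gives 1 − 0 + 1 = 2 = χ(sphere).]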
GF [00:12:48] Did you ever see that paper by Maxwell on that, which he spoke about, I think, in 1870?
EW [00:12:52] I’ve
not read that.
GF [00:12:53] Oh
I’ll show it to you later. It’s “On Hills and Dales”; he gave it in Liverpool, a very
thinly attended talk, erm, anyway.
EW [00:13:01] So
was he in fact describing the two-dimensional version of Morse theory?
GF [00:13:04] I can’t
go into detail, but the historians of Morse theory often refer to that. It was at a public meeting,
incidentally, in Liverpool.
EW [00:13:13] Actually
now that you mention it, I had heard that the “Hills and Dales” talk by Maxwell had something
to do with the beginnings of topology. And topology was just barely beginning in roughly that period.
GF [00:13:23] But
this was useful in physics. Your Aspen swimming pool revelation...
EW [00:13:28] Well,
it shed a little bit of light on the vacuum state in supersymmetric quantum
theories. So anyway, I developed that further. You know, at first that seemed
exceptional, but eventually there were too many of these exceptions to
completely ignore.
GF [00:13:42] Am I
right in saying, not to put words into your mouth, but it was the advent of string theory post Michael
Green and John Schwarz where these things started going front and center, is that
fair?
EW [00:13:50] After...
Following the first superstring revolution, as people call it, which came to
fruition in 1984 with the work of Green and Schwarz on the anomalies, after that
the sort of math that Atiyah and others had used for the instanton equations was
suddenly actually useful. Because to understand string theory, complex
manifolds and index theory, sheaf cohomology groups, all those funny things were
actually useful in doing basic things like constructing models of the elementary particles in string theory. I should give a slightly better explanation. In
physics there are the forces that we see for the elementary particles, that
means basically everything except gravity. Then there's gravity, which is so
weak that we only see it for macroscopic masses like the earth or the sun. Now,
we describe gravity by Einstein's theory, and then we describe the rest of it by
quantum field theory. It's difficult to combine the two together. Before 1984
you couldn't even make halfway reasonable models for elementary particles
that included all the forces together with gravity. The advance that Green and
Schwarz made with anomaly cancellation in 1984 made that possible. But to make
such models you needed to use a lot of the math that physicists had not used
previously but which was introduced by Atiyah and others when they solved the
instanton equations: you had to use complex manifolds, sheaf cohomology
groups, and things that were totally alien to the education of a physics
graduate student back in the days when I'd been a student. So those things were
useful even at a basic level in making a model of the elementary particles with
gravity. And if you wanted to understand it more deeply, you ended up using
still more math. After string theory was developed enough that you could use
it in an interesting way to make models of particle physics, it was clear that a
lot of previously unfamiliar math was important. I speak loosely when I say
previously unfamiliar, because obviously it was familiar to some people. First
of all, to the mathematicians. Secondly, in some areas, like Penrose had used
some of it in his twistor theory. But broadly speaking, unfamiliar to most physicists.
GF [00:15:46] So mathematics has gone down very well in physics, and physics has been very, very important for mathematicians in
mathematics; the two are working harmoniously alongside
each other. You go back to Leibniz, who used to talk about the pre-established
harmony between math and physics; that was one of Einstein's
favorite phrases. Is that something you regard as a fact of life, or is it
something you would regard as possibly explainable one day, or never to be
explained? Do you have any comment at all on that relationship?
EW [00:16:09] Well, the intimate tie between math and physics seems to be a fact of life. I can't
imagine what it would mean to explain it. The world only seems to be based on
theories that involve interesting math and a lot of interesting math is at
least partly inspired by the role that it plays in physics. Not all of course.
GF [00:16:25] But
does it inspire you when you see a piece of math that's very relevant to
physics, and vice versa, when you're helping mathematicians? Does that motivate
you in some way to think you're on the right track?
EW [00:16:35] Well
when something turns out to be beautiful, that does encourage you to believe that
it's on the right track.
GF [00:16:39] Classic
Dirac. But he took it, as he put it, almost to a religion. But I sense you
are a little bit more skeptical, if
that's the right word, or hard-nosed about it, I don't know.
EW [00:16:51] Having
discovered the Dirac equation, Dirac was entitled to carry its use to extremes, to put it that
way.
GF [00:16:58] Witten
has long been a leading pioneer of the string framework which seeks to give a
unified account of all the fundamental forces based on quantum mechanics and
special relativity. It describes the basic entities of nature in terms of tiny
pieces of string.
GF [00:17:14] Go
back to string theory. Do you see that as one among several candidates, or the
preeminent candidate, or what? I mean, what do you see as the status of that
framework in the landscape of mathematical physics?
EW [00:17:24] I'd
say that string/M-theory is the only really interesting direction we have
for going beyond the established framework of physics, by which I mean quantum
field theory at the quantum level and classical general relativity at the
macroscopic scale. So where we've made progress, that's been in the string/M-theory
framework, where a lot of interesting things have been
discovered. I'd say that there's a lot of interesting things we don't
understand at all.
GF [00:17:48] But
you’ve never been tempted down the other route? The other options are not...
EW [00:17:52] I’m
not even sure what you would mean by other routes.
GF [00:17:54] Loop
quantum gravity?
EW [00:17:56] Those
are just words. There aren’t any other routes.
GF [00:17:58] Okay,
all right, fair enough.
GF [00:18:01] So there we have it. The preternaturally
cautious Witten says that if we want to discover a unified theory of all the
fundamental forces, string theory is the only interesting way forward that’s
arisen.
GF [00:18:17] Where
we are now strikes me as being quite an unusual time in particle physics
because so many of us were looking forward to the Large Hadron Collider, huge
energy available, and finding the Higgs boson and maybe supersymmetry. And
yet it seems that we have gotten the Higgs particle just as we were hoping and
expecting. But nothing else that’s really stimulating. What are your views on where
we are now?
EW [00:18:39] My
generation grew up with a belief, a very, very strong belief, which by the way was
drummed into us by Steven Weinberg and by others: that when physics reached the
energy scale at which you can understand the weak interactions, you would not
only discover the mechanism of electroweak symmetry breaking, but you’d learn what
fixes its energy scale as being relatively low compared to the scale of gravity.
That’s what ultimately makes gravity so weak in ordinary terms. So, it came as a
big surprise that we reached the energy scale to study the W and the Z and even
the Higgs particle without finding a bigger mechanism behind it. That’s an
extremely shocking development in the context of the thinking that I grew up
with.
EW [00:19:22] There
is another shock which also occurred during that 40-year period, and which is possibly
comparable. This is the discovery of the acceleration of the
expansion of the universe. For decades physicists assumed that, because of the
gravitational attraction of matter, the expansion of the universe would be
slowing down, and tried to measure it. It turned out that the expansion is
actually speeding up. We don't know this for sure, but it seems quite likely that
this results from the effects of Einstein's cosmological constant, which is
incredibly small but non-zero. The two things, the very, very small but non-zero
cosmological constant, and the scale of the weak interactions, the scale of
elementary particle masses, which in human terms can seem like a lot of
energy but is very small compared to other energies in physics: the two
puzzles are analogous and they're both extremely bothersome. These two puzzles,
although primarily the one about gravity, which was discovered first, are perhaps
the main motivation for discussions of a cosmic landscape of vacua. Which is an
idea that used to make me extremely uncomfortable and unhappy, I guess because
of the challenge it poses to trying to understand the universe, and the possibly
unfortunate implications for our distant descendants tens of billions of years
from now. I guess I ultimately made my peace with it, recognizing that the
universe hadn't been created for our convenience.
GF [00:20:43] So you've come to terms with it.
EW [00:20:45] I've
come to terms with the landscape idea in the sense of not being upset about
it, as I was for many years.
GF [00:20:49] Really
upset?
EW [00:20:50] I
still would prefer to have a different explanation but it doesn't upset me
personally to the extent it used to.
GF [00:20:56] So
just to conclude, what would you say the principal challenges are now for
people looking at fundamental physics?
EW [00:21:01] I
think it's quite possible that new observations, either in astronomy or at
accelerators, will turn up new and more down-to-earth challenges. But with what
we have now, and also with my own personal inclinations, it's hard to avoid
answering in terms of cosmic challenges. I actually believe that string/M-theory
is on the right track toward a deeper explanation. But at a very
fundamental level it's not well understood. And I'm not even confident that we
have a good concept of what sort of thing is missing or where to find it. The
reason I'm not is that, in hindsight, it's clear that the view we might have given
in the 1980s of what was missing was too narrow. Instead of discovering what
we thought was missing, we broadened the picture in the 90s in
unexpected directions. And having lived through that, I feel it might happen
again.
EW [00:21:49] To
give you a slightly less cosmic answer: if you ask me where I think the most
likely direction is for another major theoretical upheaval, like what happened in the
80s and then again in the 90s, I've come to believe that the whole it-from-qubit
stuff, the relation between geometry and entanglement, is the most interesting
direction.
GF [00:22:12] It
from bit, that was a phrase coined by the late American theoretician John
Wheeler, who guessed that the stuff of nature, the "it", might
ultimately be built from bits of information. Perhaps the theory of
information is showing us the best way forward in fundamental physics. Witten
is usually wary of making strong pronouncements about the future of his
subject. So I was struck by his interest in this line of inquiry, now
extremely popular.
EW [00:22:39] I
feel that if, in my active career, there will be another real upheaval, that's
where it's most likely to be coming from.
EW [00:22:47] I
had a sense, both in the early 80s and in the early 90s, a couple
of years in advance of the big upheavals, of where they were most likely to come
from, and those two times it did turn out to be right. Then for a long, long time
I had no idea where another upheaval might come from. But in the last few years
I've become convinced that it's most likely to be the it-from-qubit stuff, of
which I have not been a pioneer. I was not one of the first to reach
the conclusion, or the suspicion, that I'm telling you right now. But anyway, it's
the view I've come to.
GF [00:23:20] There's
a famous book about the night thoughts of a quantum physicist. Are there night
thoughts of a string theorist, where you have a wonderful theory that's developing but you know you're unable to test it? Does that ever bother you?
EW [00:23:31] Of
course it bothers us, but we have to live with our existential condition. But
let's backtrack 34 years. So in the early 80s there were a lot of hints that
something important was happening in string theory, but once Green and Schwarz
discovered the anomaly cancellation and it became possible to make models of
elementary particle physics unified with gravity, from then on I thought the
direction was clear. But some senior physicists rejected it completely on the
grounds that it would supposedly be untestable. Or even that, if it could be cracked, it would
be too hard to understand. My view at the time was that when we reached the
energies of the W, Z and the Higgs particle we'd get all kinds of fantastic new
clues.
EW [00:24:11] So.
I found it very, very surprising that any colleagues would be so convinced that
you wouldn't be able to get important clues that would shed light on the
validity of a fundamental new theory that might in fact be valid. Now, if you
analyze that 34 years later, I'm tempted to say we were both a little bit wrong.
So the scale of clues that I thought would materialize from accelerators has
not come. In fact the most important clue possibly is that we've confirmed the
standard model without getting what we fully expected would come with it. And
as I told you earlier, that might be a clue concerning the landscape. I think
the flaw in the thinking of the critics, though, is that while it's a shame that
the period of incredible turmoil and constant experiment and discovery that
existed until roughly when I started graduate school hasn't continued, I think
that the progress which has been made in physics since 1984 is much greater
than it would have been if the naysayers had been heeded and string theory
hadn't been done in that period.
GF [00:25:11] And it's had this bonus of benefiting mathematics as well.
EW [00:25:14] Mathematics
and by now even in other areas of physics, because, for example, new ideas about
black hole thermodynamics have influenced areas of condensed metaphysics*, even in the study of quantum phase transitions, quantum chaos, and really other
areas.
GF [00:25:31] Well
let's hope we all live to see some revolutionary triumph that was completely
unexpected; that's the best one of all. Edward, thank you very much indeed.
EW [00:25:38] Sure
thing.
GF [00:25:43] I’m
always struck by the precision with which Edward expresses himself and by his
avoidance of fuzzy philosophical talk. He's plainly fascinated by the closeness
of the relationship between fundamental physics and pure mathematics. He isn't
prepared to go further than to say that their relationship is a fact of life. Yet no
one has done more to demonstrate that not only is mathematics unreasonably
effective in physics, but physics is unreasonably effective in mathematics.
GF [00:26:15] This,
Witten said, makes sense only if our modern theories are on the right track. One
last point. Amazingly, Witten is sometimes underestimated by physicists who
characterize him as a mathematician, someone who has only a passing interest in
physics. This is quite wrong. When I talked with the great theoretician Steven
Weinberg, he told me of his awe at Witten's physical intuition, and elsewhere he
said that Witten has "got more mathematical muscles in his head than I like to
think about." You can find out more about Witten and his work in my book
"The Universe Speaks in Numbers."
--
* Condensed matter physics. I am sure he says condensed matter physics. But really I think condensed metaphysics fits better.