
Wednesday, October 30, 2019

The crisis in physics is not only about physics

[Image: downward spiral]
In the foundations of physics, we have not seen progress since the mid 1970s when the standard model of particle physics was completed. Ever since then, the theories we use to describe observations have remained unchanged. Sure, some aspects of these theories have only been confirmed experimentally later. The last to-be-confirmed particle was the Higgs-boson, predicted in the 1960s, measured in 2012. But all shortcomings of these theories – the missing quantization of gravity, dark matter, the quantum measurement problem, and more – have been known for more than 80 years. And they are as unsolved today as they were then.

The major cause of this stagnation is that physics has changed, but physicists have not changed their methods. As physics has progressed, the foundations have become increasingly harder to probe by experiment. Technological advances have not kept the size and expense of the necessary experiments manageable. This is why, in physics today, we have collaborations of thousands of people operating machines that cost billions of dollars.

With fewer experiments, serendipitous discoveries become increasingly unlikely. And lacking those discoveries, the technological progress that would be needed to keep experiments economically viable never materializes. It’s a vicious cycle: Costly experiments result in a lack of progress. Lack of progress increases the cost of further experiments. This cycle must eventually lead to a dead end when experiments simply become too expensive to remain affordable. A $40 billion particle collider is such a dead end.

The only way to avoid being sucked into this vicious cycle is to choose carefully which hypotheses to put to the test. But physicists still operate by the “just look” idea as if this were the 19th century. They do not think about which hypotheses are promising because their education has not taught them to do so. Such self-reflection would require knowledge of the philosophy and sociology of science, and those are subjects physicists merely make dismissive jokes about. They believe they are too intelligent to have to think about what they are doing.

The consequence has been that experiments in the foundations of physics past the 1970s have only confirmed the already existing theories. None found evidence of anything beyond what we already know.

But theoretical physicists did not learn the lesson and still ignore the philosophy and sociology of science. I encounter this dismissive behavior personally pretty much every time I try to explain to a cosmologist or particle physicist that we need smarter ways to share information and make decisions in large, like-minded communities. If they react at all, they are insulted if I point out that social reinforcement – aka group-think – befalls us all, unless we actively take measures to prevent it.

Instead of examining the way that they propose hypotheses and revising their methods, theoretical physicists have developed a habit of putting forward entirely baseless speculations. Over and over again I have heard them justifying their mindless production of mathematical fiction as “healthy speculation” – entirely ignoring that this type of speculation has demonstrably not worked for decades and continues to not work. There is nothing healthy about this. It’s sick science. And, embarrassingly enough, that’s plain to see for everyone who does not work in the field.

This behavior is based on the hopelessly naïve, not to mention ill-informed, belief that science always progresses somehow, and that sooner or later certainly someone will stumble over something interesting. But even if that happened – even if someone found a piece of the puzzle – at this point we wouldn’t notice, because today any drop of genuine theoretical progress would drown in an ocean of “healthy speculation”.

And so, what we have here in the foundations of physics is a plain failure of the scientific method. All these wrong predictions should have taught physicists that just because they can write down equations for something does not mean this math is a scientifically promising hypothesis. String theory, supersymmetry, multiverses. There’s math for it, alright. Pretty math, even. But that doesn’t mean this math describes reality.

Physicists need new methods. Better methods. Methods that are appropriate to the present century.

And please spare me the complaints that I supposedly do not have anything better to suggest, because that is a false accusation. I have said many times that looking at the history of physics teaches us that resolving inconsistencies has been a reliable path to breakthroughs, so that’s what we should focus on. I may be on the wrong track with this, of course. But for all I can tell at this moment in history I am the only physicist who has at least come up with an idea for what to do.

Why don’t physicists have a hard look at their history and learn from their failure? Because the existing scientific system does not encourage learning. Physicists today can happily make a career by writing papers about things no one has ever observed, and never will observe. This continues to go on because there is nothing and no one that can stop it.

You may want to put this down as a minor worry because – $40 billion collider aside – who really cares about the foundations of physics? Maybe all these string theorists have been wasting tax-money for decades, alright, but in the grand scheme of things it’s not all that much money. I grant you that much. Theorists are not expensive.

But even if you don’t care what’s up with strings and multiverses, you should worry about what is happening here. The foundations of physics are the canary in the coal mine. It’s an old discipline and the first to run into this problem. But the same problem will sooner or later surface in other disciplines if experiments become increasingly expensive and recruit large fractions of the scientific community.

Indeed, we see this beginning to happen in medicine and in ecology, too.

Small-scale drug trials have pretty much run their course. These are good only for finding in-your-face correlations that hold for most people. Medicine, therefore, will increasingly have to rely on data collected from large groups over long periods of time to find increasingly personalized diagnoses and prescriptions. The studies which are necessary for this are extremely costly. They must be chosen carefully, for not many of them can be done. The study of ecosystems faces a similar challenge, where small, isolated investigations are about to reach their limits.

How physicists handle their crisis will set an example for other disciplines. So watch this space.

Thursday, August 22, 2019

You will probably not understand this

Hieroglyphs. [Image: Wikipedia Commons.]

Two years ago, I gave a talk at the University of Toronto, at the institute for the history and philosophy of science. At the time, I didn’t think much about it. But in hindsight, it changed my life, at least my work-life.

I spoke about the topic of my first book. It’s a talk I have given dozens of times, and though I adapted my slides for the Toronto audience, there was nothing remarkable about it. The oddity was the format of the talk. I would speak for half an hour. After this, someone else would summarize the topic for 15 minutes. Then there would be 15 minutes discussion.

Fine, I said, sounds like fun.

A few weeks before my visit, I was contacted by a postdoc who said he’d be doing the summary. He asked for my slides, and further reading material, and if there was anything else he should know. I sent him references.

But when his turn came to speak, he did not, as I expected, summarize the argument I had delivered. Instead he reported what he had dug up about my philosophy of science, my attitude towards metaphysics, realism, and what I might mean by “explanation” or “theory” and other philosophically loaded words.

He got it largely right, though I cannot today recall the details. I only recall I didn’t have much to say about what struck me as a peculiar exercise, dedicated not to understanding my research, but to understanding me.

It was awkward, too, because I have always disliked philosophers’ dissection of scientists’ lives. Their obsessive analyses of who Schrödinger, Einstein, or Bohr talked to when, about what, in which period of what marriage, never made a lot of sense to me. It reeked too much of hero-worship, looked too much like post-mortem psychoanalysis, about as helpful for understanding Einstein’s work as cutting his brain into slices.

In the months that followed the Toronto talk, though, I began reading my own blogposts with that postdoc’s interpretation in mind. And I realized that in many cases it was essential information to understand what I was trying to get across. In the past year, I have therefore made more effort to repeat background, or at least link to previous pieces, to provide that necessary context. Context which – of course! – I thought was obvious. Because certainly we all agree what a theory is. Right?

But having written a public weblog for more than 12 years makes me a comparatively simple subject of study. I have, over the years, provided explanations for just exactly what I mean when I say “scientific method” or “true” or “real”. So at least you could find out if only you wanted to. Not that I expect anyone who comes here for a 1,000 word essay to study an 800,000 word archive. Still, at least that archive exists. The same, however, isn’t the case for most scientists.

I was reminded of this at a recent workshop where I spoke with another woman about her attempts to make sense of one of her senior colleague’s papers.

I don’t want to name names, but it’s someone whose research you’ll be familiar with if you follow the popular science media. His papers are chronically hard to understand. And I know it isn’t just me who struggles, because I have heard a lot of people in the field make dismissive comments about his work. On the occasion the woman told me about, he apparently got frustrated with his own inability to explain himself, which resulted in rather aggressive responses to her questions.

He’s not the only one frustrated. I could tell you many stories of renowned physicists who told me, or wrote to me, about their struggles to get people to listen to them. Being white and male, it seems, doesn’t help. Neither do titles, honors, or award-winning popular science books.

And if you look at the ideas they are trying to get across, there’s a pattern.

These are people who have – in some cases over decades – built their own theoretical frameworks, developed personal philosophies of science, invented their own, idiosyncratic way of expressing themselves. Along the way, they have become incomprehensible for anyone else. But they didn’t notice.

Typically, they have written multiple papers circling around a key insight which they never quite manage to bring into focus. They’re constantly trying and constantly failing. And while they usually have done parts of their work with other people, the co-authors are clearly side-characters in a single-fighter story.

So they have their potentially brilliant insights out there, for anyone to see. And yet, no one has the patience to look at their life’s work. No one makes an effort to decipher their code. In brief, no one understands them.

Of course they’re frustrated. Just as frustrated as I am that no one understands me. Not even the people who agree with me. Especially not those, actually. It’s so frustrating.

The issue, I think, is symptomatic of our times, not only in science, but in society at large. Look at any social media site. You will see people going to great lengths explaining themselves just to end up frustrated and – not infrequently – aggressive. They are aggressive because no one listens to what they are trying so hard to say. Indeed, all too often, no one even tries. Why bother if misunderstanding is such an easy win? If you cannot explain yourself, that’s your fault. If you do not understand me, that’s also your fault.

And so, what I took away from my Toronto talk is that communication is much more difficult than we usually acknowledge. It takes a lot of patience, both from the sender and the receiver, to accurately decode a message. You need all that context to make sense of someone else’s ideas. I now see why philosophers spend so much time dissecting the lives of other people. And instead of talking so much, I have come to think, I should listen a little more. Who knows, I might finally understand something.

Sunday, July 07, 2019

Because Science Matters

[Foto: Michael Sentef]

Another day, another lecture. This time I am in Hamburg, at DESY, Germany’s major particle physics center.

My history with DESY is an odd one, which is to say there is none, despite the fact that fifteen years ago I was awarded Germany’s most prestigious young researcher grant, the Emmy-Noether fellowship, to work in Hamburg on particle physics phenomenology. The Emmy-Noether fellowship is a five-year grant that not only pays the principal investigator but also comes with salaries for a small group. It’s basically the jackpot of German postdoc funding.

I declined it.

I hadn’t thought of this for a long time, but here I am in Hamburg, finally getting to see what my life might have looked like, in that parallel world where I became a particle physicist. It looks like I’ll be late.

The taxi driver circles around a hotel and insists with a heavy Polish accent that this must be the right place because “there’s nothing after that”. To make his point he waves at trees and construction areas that stretch further up the road.

I finally manage to convince him that, really, I’m not looking for a hotel. A kilometer later he pulls into an anonymous driveway where a man in uniform asks him to stop. “See, this wrong!” the taxi-man squeaks and attempts to turn around when I spot a familiar sight: The cover of my book, on a poster, next to the entrance.

“I’m supposed to give that talk,” I tell the man in uniform, “at two pm.” He looks at his watch. It’s a quarter past two.

I arrive at the lecture hall 20 minutes late, mostly due to a delayed train, but also, I note with a somewhat guilty conscience, because I decided not to stay for the night. With too much traveling in my life already, I have become one of these terrible people who arrive just before their talk and vanish directly afterwards. I used to call it the “In and Out Lecture”, inspired by an American fast food chain with the laxative name “In and Out Burger”. A friend of mine more aptly dubbed it “Blitzkrieg Seminar.”

The room is well-filled. I am glad to see the audience was kept in a good mood with drinks and snacks. Within minutes, I am wired up and ready to speak about the troubles in the foundations of physics.

Shortly before my arrival, I learned that some particle physicists had complained I was invited at all. This isn’t the first time this has happened. On another occasion some tried to un-invite me, albeit eventually unsuccessfully. They tend to be disappointed when it turns out I’m not a fire-spewing dragon but a middle-aged mother of two who just happens to know a lot about theory development in high energy physics.

Most of them, especially the experimentalists, don’t even find my argument all that disagreeable – at least at first sight. Relying on beauty has not historically worked well in physics, and it isn’t presently working, no doubt about this. To make progress, then, we should take a clue from history and focus on resolving inconsistencies in our present description of nature, either inconsistencies between theory and experiment, or internal inconsistencies. So far, they’re usually with me.

Where my argument becomes disagreeable is when I draw consequences. There is no inconsistency to be resolved in the energy range that a next larger collider could reach. It would measure some constants to better precision, all right, but that’s not worth $20 billion.

Those 20 billion dollars, by the way, are merely the estimated construction cost for CERN’s planned Future Circular Collider (FCC). They do not include operation costs. The facility would run for about 25 years. Operation costs of the current machine, the Large Hadron Collider (LHC), are already about $1 billion per year, and with the FCC, expenses for electricity and staff are bound to increase. That means the total cost for the FCC easily exceeds $40 billion.
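(Roughly, using the figures above: $20 billion for construction plus 25 years times at least $1 billion per year of operation comes to $45 billion or more.)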

That’s a lot of money. And the measurements this next larger collider could make would deliver information that won’t be useful in the next 100 or maybe 5000 years. Now is not the right time for this.

At the risk of oversimplifying an 80,000-word message: we have better things to do. Figure out what’s up with dark matter, quantum gravity, or the measurement problem. There are breakthroughs waiting to be made. But we have to be careful with the next steps or risk delaying progress by further decades, if not centuries.

After my talk, in the question session, an elderly man goes on about his personal theory for something. He will later tell me about his website and complain that the scientific mainstream is ignoring his breakthrough insights.

Another elderly man insists that beauty is a good guide to the development of new natural laws. To support his point he quotes Steven Weinberg, because Weinberg, you see, likes string theory. In other words, it’s exactly the type of argument I just explained is both wrong and in the way of progress.

Another man, this one not quite as old, stands up to deliver a speech about how important particle colliders are. Several people applaud.

Next up, an agitated woman reprimands me for a typographical error on a slide. More applause. She goes on to explain the LHC has taught us a lot about inflation, a hypothetical phase of exponential expansion in the early universe. I refuse to comment. There is, I feel, no way to reason with someone who really believes this.

But hers is, I remind myself, the community I would have been part of had I accepted the fellowship 15 years ago. Now I wonder, had I taken this path, would I be that woman today, upset to learn the boat is sinking? Would I share her group’s narrative that made me their enemy? Would I, too, defend spending more and more money on larger and larger machines with less and less societal relevance?

I like to think I would not, but my reading about group psychology tells me otherwise. I would probably fight the outsider just like they do.

Another woman identifies herself as an experimentalist and asks me why I am against diversifying experimental efforts. I am not, of course. But the economic reality is that we cannot do everything we want to do. We have to make decisions. And costs are a relevant factor.

Finally, another man asks me what experiments physicists should do. As usual when I get this question, I refuse to answer it. This is not my call to make. I cannot replace tens of thousands of experts. I can only beg them to please remember that scientists are human, too, and human judgement is affected by group affiliation. Someone, somewhere, has to take the first step to prevent social bias from influencing scientific decisions. Let it be particle physicists.

A second round of polite applause and I am done here. A few people come to shake my hand. The room empties. Someone hands me a travel reimbursement form and calls me a taxi. Soon I am on the way back to the city center and on to six more hours on the train.

I check my email and see I will have to catch up work on the weekend, again. Not only doesn’t it help my own research to speak about problems with the current organization of science, it’s no fun either. It’s no fun to hurt people, destroy hopes, and advocate decisions that would make their lives harder. And it’s no fun to have mud slung at me in return.

And so, as always, these trips end with me asking myself: why? Why am I doing this?

And as always, the answer I give myself is the same. Because it matters we get this right. Because progress matters. Because science matters.

Thanks for asking, I am fine. Keep it coming.

Monday, June 10, 2019

Sometimes giving up is the smart thing to do.

[likely image source]
A few years ago I signed up for a 10k race. It had an entry fee, it was a scenic route, and I had qualified for the first group. I was in my best shape. The weather forecast was brilliant.

Two days before the race I got a bad cold. But that wouldn’t deter me. Oh, no, not me. I’m not a quitter. I downed a handful of pills and went nevertheless. I started with a fever, a bad cough, and a pounding headache.

It didn’t go well. After half a kilometer I developed a chest pain. After one kilometer it really hurt. After two kilometers I was sure I’d die. Next thing I recall is someone handing me a bottle of water after the finish line.

Needless to say, my time wasn’t the best.

But the real problem began afterward. My cold refused to clear up properly. Instead I developed a series of respiratory infections. That chest pain stayed with me for several months. When the winter came, each little virus the kids brought home knocked me down.

I eventually went to see a doctor. She sent me to have a chest X-ray taken on suspicion of tuberculosis. When the X-ray didn’t reveal anything, she put me on a two-week regimen of antibiotics.

The antibiotics finally cleared up whatever lingering infection I had been carrying around. It took another month until I felt like myself again.

But this isn’t a story about the misery of aging runners. It’s a story about endurance sport of a different type: academia.

In academia we write Perseverance with a capital P. From day one, we are taught that pain is normal, that everyone hurts, and that self-motivation is the highest of virtues. In academia, we are all over-achievers.

This summer, as every summer for the past two decades, I receive notes about who is leaving. Leaving because they didn’t get funding, because they didn’t get another position, or because they’re just no longer willing to sacrifice their life for so little in return.

And this summer, as every summer for the past two decades, I find myself among the ones who made it into the next round, find myself sitting here, wondering if I’m worthy and if I’m in the right place doing the right thing at the right time. Because, let us be honest. We all know that success in academia has one or two elements of luck. Or maybe three. We all know it’s not always fair.

I’m writing this for the ones who have left and the ones who are about to leave. Because I have come within an inch of leaving half a dozen times and I have heard the nasty, nagging voice in the back of my head. “Quitter,” it says and laughs, “Quitter.”

Don’t listen. Of the people I know who left academia, few have regrets. And the few with regrets found ways to continue some research along with their new profession. The loss isn’t yours. The loss is academia’s. I understand your decision and I think you chose wisely. Just because everyone you know is on a race to nowhere doesn’t mean going with them makes sense. Sometimes, giving up is the smart thing to do.

A year after my miserable 10k experience, I signed up for a half-marathon. A few kilometers into the race, I tore a muscle.

I don’t get a runner’s high, but running increases my pain tolerance to unhealthy levels. After a few kilometers, you could probably stab me in the back and I wouldn’t notice. I could well have finished that race. But I quit.

Thursday, May 02, 2019

How to live without free will

Lego sculpture.
By Nathan Sawaya.
[Image Source]
It’s not easy, getting a PhD in physics. Not only must you learn a lot, but some of what you learn will shake your sense of self.

Physics deals with the most fundamental laws of nature, those from which everything else derives. These laws are, to our best current knowledge, differential equations. Given those equations and the configuration of a system at one particular time, you can calculate what happens at all other times.
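To state the determinism a little more formally (a textbook fact, added here only as illustration): if the state x of a system obeys a differential equation with given initial data,

\[
\frac{dx}{dt} = f(x,t), \qquad x(t_0) = x_0 ,
\]

then, for sufficiently well-behaved f, the Picard–Lindelöf theorem guarantees that the solution x(t) is unique – the configuration at one time fixes both the past and the future.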

That is as far as the universe without quantum mechanics is concerned. Add quantum mechanics, and you introduce a random element into some events. Importantly, this randomness in quantum mechanics is irreducible. It is not due to lack of information. In quantum mechanics, some things that happen are just not determined, and nothing you or I or anyone can do will determine them.

Taken together, this means that the part of your future which is not already determined is due to random chance. It therefore makes no sense to say that humans have free will.

I think I here spell out only the obvious, and use a notion of free will that most people would agree on. You have free will if your decisions select one of several possible futures. But there is no place for such a selection in the laws of nature that we know, laws that we have confirmed to high accuracy. Instead, whatever is about to happen was already determined at the big bang – up to those random flukes that come from quantum mechanics.

Now, some people try to wiggle out of this conclusion by defining free will differently, for example by noting that no one can in practice predict your future behavior (at least not currently). One can do such redefinitions, of course, but this is merely verbal gymnastics. The future is still fixed up to occasional chance events.

Others try to interpret quantum randomness as a sign of free will, but this is in conflict with evidence. Quantum processes are not influenced by conscious thought. Chaos is deterministic, so it doesn’t help. Goedel’s incompleteness theorem, remarkable as it is, has no relevance for natural laws.

The most common form of denial that I encounter is to insist that reductionism must be wrong. But we have countless experiments that document humans are made of particles, and that these particles obey our equations. This means that also humans, as collections of those particles, obey these equations. If you try to make room for free will by claiming humans obey other equations (or maybe no equation at all), you are implicitly claiming that particle physics is wrong. And in this case, sorry, I cannot take you seriously.

These are the typical objections that I hear, and none of them makes much sense.

I have had this discussion many times. Many people find it hard to comprehend that I do not believe in free will. And any such debate will, inevitably, be accompanied by the joke that the outcome of the argument was determined already, haha, aren’t you so original.

I have come to the conclusion that a large fraction of people are cognitively unable to question the existence of free will, and there is no argument that can change their mind. Therefore, the purpose of this blogpost is not to convince those who are resistant to rational arguments. The purpose is to help those who understand the situation but have trouble making sense of it. Like I have had trouble. The following shifts in perspective may help you without the need to resort to denial:

1. You never had free will.

It’s not like your free will suddenly evaporated when you learned the Euler-Lagrange equations. Your brain still functions the same way as before. So keep on doing what you have been doing. To first approximation that will work fine: Free will is a stubbornly persistent illusion, just use it and don’t worry about it being an illusion.

2. Your story hasn’t yet been told.

Free will or not, you have a place in history. Whether yours will be a happy story or a sad story, whether your research will ignite technological progress or remain a side-note in obscure journals, whether you will be remembered or forgotten – we don’t yet know. Instead of thinking of yourself as selecting a possible future, try to understand your role, and remain curious about what’s to come.

3. Input matters.

You are here to gather information, process it, and come to decisions that may, or may not result in actions. Your actions, and the information you share, will then affect the decisions and actions of others. These decisions are determined by the structure of your brain and the information you obtain. Rather than despairing over the impossibility of changing either, decide to be more careful which information you seek out, analyze, and pass on. Instead of thinking about influencing the future, ask yourself what you have learned, eg, from reading this. You may not have free will, but you still make decisions. You cannot not make decisions. You may as well be smart about it.

4. Understand yourself.

No one presently knows exactly what consciousness is or what it is good for, but we know that parts of it are self-monitoring, attentional focus, and planning ahead. A lot of the processes in your brain are not conscious, presumably because that would be computationally inefficient. Unconscious processes, however, can affect your conscious decisions. If you want to make good decisions, you must understand not only the relevance of input, but also how your own brain works. Instead of thinking that your efforts are futile, identify your goals and the strategies you have for working towards them. You are monitoring the monitor, if you wish.

Monday, February 04, 2019

Maybe I’m crazy


How often can you hold up four fingers, hear a thousand people shout “five”, and not agree with them? How often can you repeat an argument, see it ignored, and still believe in reason? How often can you tell a thousand scientists the blatantly obvious, hear them laugh, and not think you are the one who is insane?

I wonder.

Every time a particle physicist dismisses my concerns, unthinkingly, I wonder some more. Maybe I am crazy? It would explain so much. Then I remind myself of the facts, once again.

Fact is, in the foundations of physics we have not seen progress for the past four decades. Ever since the development of the standard model in the 1970s, further predictions for new effects have been wrong. Physicists commissioned dozens of experiments to look for dark matter particles and grand unification. They turned data upside down in search of supersymmetric particles and dark energy and new dimensions of space. The result has been consistently: Nothing new.

Yes, null-results are also results. But they are not very useful results if you need to develop a new theory. A null-result says: “Let’s not go this way.” A result says: “Let’s go that way.” If there are many ways to go, discarding some of them does not help much. To move on in the foundations of physics, we need results, not null-results.

It’s not like we are done and can just stop here. We know we have not reached the end. The theories we currently have in the foundations are not complete. They have problems that require solutions. And if you look at the history of physics, theory-led breakthroughs came when predictions were based on solving problems that required a solution.

But the problems that theoretical particle physicists currently try to solve do not require solutions. The lack of unification, the absence of naturalness, the seeming arbitrariness of the constants of nature: these are aesthetic problems. Physicists can think of prettier theories, and they believe those have better chances to be true. Then they set out to test those beauty-based predictions. And get null-results.

It’s not only that there is no reason to think this method should work, it does – in fact! – not work, has not worked for decades. It is failing right now, once again, as more beauty-based predictions for the LHC are ruled out every day.

They keep on believing, nevertheless.

Those who, a decade ago, made confident predictions that the Large Hadron Collider should have seen new particles can now not be bothered to comment. They are busy making “predictions” for new particles that the next larger collider should see. We risk spending $20 billion on more null-results that will not move us forward. Am I crazy for saying that’s a dumb idea? Maybe.

Someone recently compared me to a dinghy that has the right of way over a tanker ship. I could have the best arguments in the world, that still would not stop them. Inertia. It’s physics, bitches.

Recently, I wrote an Op-Ed for the NYT in which I lay out why a larger particle collider is not currently a good investment. In her response, Prof Lisa Randall writes: “New dimensions or underlying structures might exist, but we won’t know unless we explore.” Correct, of course, but that doesn’t explain why a larger particle collider is a promising investment.

Randall is a professor of physics at Harvard. She is famous for having proposed a model, together with Raman Sundrum, according to which the universe should have additional dimensions of space. The key insight underlying the Randall-Sundrum model is that a small number in an exponential function can make a large number. She is one of the world’s best-cited particle physicists. There is no evidence these extra dimensions exist. More recently she has speculated that dark matter killed the dinosaurs.

Randall ends her response with: “Colliders are expensive, but so was the government shutdown,” an argument so flawed and so common I debunked it two weeks before she made it.

And that is how the top of tops of theoretical particle physicists react if someone points out they are unable to acknowledge failure: They demonstrate they are unable to acknowledge failure.

When I started writing my book, I thought the problem is they are missing information. But I no longer think so. Particle physicists have all the information they need. They just refuse to use it. They prefer to believe.

I now think it’s really a standoff between reason and intuition. Here I am, with all my arguments. With my stacks of papers about naturalness-based predictions that didn’t work. With my historical analysis and my reading of the philosophy of physics. With my extrapolation of the past to the future that says: Most likely, we will see more null-results at higher energies.

And on the other side there are some thousand particle physicists who think that this cannot possibly be the end of the story, that there must be more to see. Some thousand of the most intelligent people the human race has ever produced. Who believe they are right. Who trust their experience. Who think their collective hope is reason enough to spend $20 billion.

If this was a novel, hope would win. No one wants to live in a world where the little German lady with her oh-so rational arguments ends up being right. Not even the German lady wants that.

Wait, what did I say? I must be crazy.

Sunday, January 13, 2019

Good Problems in the Foundations of Physics

[Image: openclipart.org]
Look at the history of physics, and you will find that breakthroughs come in two different types. Either observations run into conflict with predictions and a new theory must be developed. Or physicists solve a theoretical problem, resulting in new predictions which are then confirmed by experiment. In both cases, problems that give rise to breakthroughs are inconsistencies: Either theory does not agree with data (experiment-led), or the theories have internal disagreements that require resolution (theory-led).

We can classify the most notable breakthroughs this way: Electric and magnetic fields (experiment-led), electromagnetic waves (theory-led), special relativity (theory-led), quantum mechanics (experiment-led), general relativity (theory-led), the Dirac equation (theory-led), the weak nuclear force (experiment-led), the quark-model (experiment-led), electro-weak unification (theory-led), the Higgs-boson (theory-led).

That’s an oversimplification, of course, and leaves aside the myriad twists and turns and personal tragedies that make scientific history interesting. But it captures the essence.

Unfortunately, in the past decades it has become fashionable among physicists to present the theory-led breakthroughs as a success of beautiful mathematics.

Now, it is certainly correct that in some cases the theorists making such breakthroughs were inspired by math they considered beautiful. This is well-documented, eg, for both Dirac and Einstein. However, as I lay out in my book, arguments from beauty have not always been successful. They worked in cases when the underlying problem was one of consistency. They failed in other cases. As the philosopher Radin Dardashti put it aptly, scientists sometimes work on the right problem for the wrong reason.

That breakthrough problems were those which harbored an inconsistency is true even for the often-told story of the prediction of the charm quark. The charm quark, so they will tell you, was a prediction based on naturalness, which is an argument from beauty. However, we also know that the theories which particle physicists used at the time were not renormalizable and therefore would break down at some energy. Once electro-weak unification removes this problem, the requirement of gauge-anomaly cancellation tells you that a fourth quark is necessary. But this isn’t a prediction based on beauty. It’s a prediction based on consistency.

This, I must emphasize, is not what historically happened. Weinberg’s theory of the electro-weak unification came after the prediction of the charm quark. But in hindsight we can see that the reason this prediction worked was that it was indeed a problem of consistency. Physicists worked on the right problem, if for the wrong reasons.
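As an illustration of the consistency requirement in question (a standard textbook fact, added here as a side remark): one consequence of gauge-anomaly cancellation is that the electric charges of all fermions in a generation must sum to zero, counting three colors per quark,

\[
\underbrace{3\left(+\tfrac{2}{3}\right) + 3\left(-\tfrac{1}{3}\right)}_{\text{up- and down-type quarks}}
\;+\;
\underbrace{0 + (-1)}_{\text{neutrino and charged lepton}}
\;=\; 0 .
\]

With the muon, its neutrino, and the strange quark already known, the second generation can only satisfy this condition if a fourth quark – the charm – completes it.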

What can we learn from this?

Well, one thing we learn is that if you rely on beauty you may get lucky. Sometimes it works. Feyerabend, I think, had it basically right when he argued “anything goes.” Or, as the late German chancellor Kohl put it, “What matters is what comes out in the end.”

But we also see that if you happen to insist on the wrong ideal of beauty, you will not make it into history books. Worse, since our conception of what counts as a beautiful theory is based on what worked in the past, it may actually get in the way if a breakthrough requires new notions of beauty.

The more useful lesson to learn, therefore, is that the big theory-led breakthroughs could have been based on sound mathematical arguments, even if in practice they came about by trial and error.

The “anything goes” approach is fine if you can test a large number of hypotheses and then continue with the ones that work. But in the foundations of physics we can no longer afford “anything goes”. Experiments are now so expensive and take such a long time to build that we have to be very careful when deciding which theories to test. And if we take a clue from history, then the most promising route to progress is to focus on problems that are either inconsistencies with data or internal inconsistencies of the theories.

At least that’s my conclusion.

It is far from my intention to tell anyone what to do. Indeed, if there is any message I tried to get across in my book it’s that I wish physicists would think more for themselves and listen less to their colleagues.

Having said this, I have gotten a lot of emails from students asking me for advice, and I recall how difficult it was for me as a student to make sense of the recent research trends. For this reason I append below my assessment of some of the currently most popular problems in the foundations of physics. Not because I want you to listen to me, but because I hope that the argument I offered will help you come to your own conclusion.

(You find more details and references on all of this in my book.)



Dark Matter
Is an inconsistency between theory and experiment and therefore a good problem. (The issue with dark matter isn’t whether it’s a good problem or not, but to decide when to consider the problem solved.)

Dark Energy
There are different aspects of this problem, some of which are good problems, others not. The question why the cosmological constant is small compared to (powers of) the Planck mass is not a good problem because there is nothing wrong with just choosing it to be a certain constant. The question why the cosmological constant is presently comparable to the density of dark matter is likewise a bad problem because it isn’t associated with any inconsistency. On the other hand, the absence of observable fluctuations around the vacuum energy (what Afshordi calls the “cosmological non-constant problem”) and the question why the zero-point energy gravitates in atoms but not in the vacuum (details here) are good problems.
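For orientation, the often-quoted mismatch behind the first of these questions, in standard order-of-magnitude numbers (added here only for illustration):

\[
\rho_\Lambda^{\rm obs} \sim \left(10^{-3}\,\mathrm{eV}\right)^4 , \qquad
M_{\rm Pl}^4 \sim \left(10^{18}\,\mathrm{GeV}\right)^4 , \qquad
\frac{\rho_\Lambda^{\rm obs}}{M_{\rm Pl}^4} \sim 10^{-120} .
\]

Large as this mismatch looks, the point above stands: nothing is inconsistent about simply choosing the constant accordingly.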

The Hierarchy Problem
The hierarchy problem is the big difference between the strength of gravity and the other forces in the standard model. There is nothing contradictory about this, hence not a good problem.
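For scale, with standard numbers (added here only for illustration):

\[
\frac{M_{\rm Pl}}{m_H} \approx \frac{1.2\times 10^{19}\,\mathrm{GeV}}{125\,\mathrm{GeV}} \approx 10^{17} ,
\]

or, equivalently, the gravitational attraction between two protons is weaker than their electric repulsion by a factor of roughly 10^36.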

Grand Unification
A lot of physicists would rather have one unified force in the standard model than three different ones. There is, however, nothing wrong with the three different forces. I am undecided as to whether the almost-prediction of the Weinberg-angle from breaking a large symmetry group does or does not require an explanation.

Quantum Gravity
Quantum gravity removes an inconsistency and is hence the solution to a good problem. However, I must add that there may be other ways to resolve the problem besides quantizing gravity.

Black Hole Information Loss
A good problem in principle. Unfortunately, there are many different ways to fix the problem and no way to experimentally distinguish between them. So while it’s a good problem, I don’t consider it a promising research direction.

Particle Masses
It would be nice to have a way to derive the masses of the particles in the standard model from a theory with fewer parameters, but there is nothing wrong with these masses just being what they are. Thus, not a good problem.

Quantum Field Theory
There are various problems with quantum field theories where we lack a good understanding of how the theory works and that require a solution. The UV Landau pole in the standard model is one of them. It must be resolved somehow, but just exactly how is not clear. We also do not have a good understanding of the non-perturbative formulation of the theory, and the infrared behavior turns out to be not as well understood as we thought just a few years ago (see eg here).
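To illustrate what a Landau pole is, here is the textbook one-loop example for QED with a single charged fermion (a standard formula, used here only as a sketch of the same kind of behavior):

\[
\alpha(\mu) = \frac{\alpha(\mu_0)}{1 - \dfrac{2\,\alpha(\mu_0)}{3\pi}\,\ln\!\left(\dfrac{\mu}{\mu_0}\right)} .
\]

The running coupling formally diverges at a finite, if absurdly high, energy scale, which signals that the perturbative description breaks down there and something else must happen.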

The Measurement Problem
The measurement problem in quantum mechanics is typically thought of as a problem of interpretation and then left to philosophers to discuss. I think that’s a mistake; it is an actual inconsistency. The inconsistency comes from the need to postulate the behavior of macroscopic objects when that behavior should instead follow from the theory of the constituents. The measurement postulate, hence, is inconsistent with reductionism.

The Flatness Problem
Is an argument from finetuning and not well-defined without a probability distribution. There is nothing wrong with the (initial value of the) curvature density just being what it is. Thus, not a good problem.

The Monopole Problem
That’s the question why we haven’t seen magnetic monopoles. It is quite plausibly solved by them not existing. Also not a good problem.

Baryon Asymmetry and The Horizon Problem
These are both finetuning problems that rely on the choice of an initial condition, which is considered to be unlikely. However, there is no way to quantify how likely the initial condition is, so the problem is not well-defined.

The Strong CP Problem
Is a naturalness problem, like the Hierarchy problem, and not a problem of inconsistency.

There are, furthermore, always a variety of anomalies where data disagrees with theory. Those can linger at low significance for a long time and it’s difficult to decide how seriously to take them. For those I can only give you the general advice that you listen to experimentalists (preferably some who are not working on the experiment in question) before you listen to theorists. Experimentalists often have an intuition for how seriously to take a result. That intuition, however, usually doesn’t make it into publications because it’s impossible to quantify. Measures of statistical significance don’t always tell the full story.

Wednesday, January 09, 2019

The Real Problems with Artificial Intelligence

R2D2 costume for toddlers.
[image: amazon.com]
In recent years many prominent people have expressed worries about artificial intelligence (AI). Elon Musk thinks it’s the “biggest existential threat.” Stephen Hawking said it could “be the worst event in the history of our civilization.” Steve Wozniak believes that AIs will “get rid of the slow humans to run companies more efficiently,” and Bill Gates, too, put himself in “the camp that is concerned about super intelligence.”

In 2015, the Future of Life Institute formulated an open letter calling for caution and formulating a list of research priorities. It was signed by more than 8,000 people.

Such worries are not unfounded. Artificial intelligence, like any new technology, brings risks. While we are far from creating machines even remotely as intelligent as humans, it’s only smart to think about how to handle them sooner rather than later.

However, these worries neglect the more immediate problems that AI will bring.

Artificially intelligent machines won’t get rid of humans any time soon because they’ll need us for quite a while. The human brain may not be the best thinking apparatus, but it has a distinct advantage over all the machines we have built so far: It functions for decades. It’s robust. It repairs itself.

A few million years of evolution optimized our bodies, and while the result could certainly be further improved (damn those knees), it’s still more durable than any silicon-based thinking apparatus we have created. Some AI researchers have even argued that a body of some kind is necessary to reach human-level intelligence, which – if correct – would vastly increase the problem of AI fragility.

Whenever I bring up this issue with AI enthusiasts, they tell me that AIs will learn to repair themselves, and even if not, they will just upload themselves to another platform. Indeed, much of the perceived AI-threat comes from them replicating quickly and easily, while at the same time being basically immortal. I think that’s not how it will go.

Artificial Intelligences at first will be few and one-of-a-kind, and that’s how it will remain for a long time. It will take large groups of people and many years to build and train an AI. Copying them will not be any easier than copying a human brain. They’ll be difficult to fix once broken, because, as with the human brain, we won’t be able to separate their hardware from the software. The early ones will die quickly for reasons we will not even comprehend.

We see the beginning of this trend already. Your computer isn’t like my computer. Even if you have the same model, even if you run the same software, they’re not the same. Hackers exploit these differences between computers to track your internet activity. Canvas fingerprinting, for example, is a method of asking your computer to render a font and output an image. The exact way your computer performs this task depends both on your hardware and your software, hence the output can be used to identify a device.
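A minimal sketch of how such a fingerprint might be computed in a browser (illustrative only; real trackers combine many more signals, and the exact drawing commands here are arbitrary):

```typescript
// Minimal sketch of canvas fingerprinting: draw some text, export the
// pixels, and hash them. Tiny rendering differences across hardware,
// drivers, fonts, and browser versions change the resulting hash.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 240;
  canvas.height = 60;
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas not available");

  // The exact anti-aliasing and sub-pixel layout of this text depends
  // on the GPU, the OS font rendering, and the browser.
  ctx.textBaseline = "top";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(0, 0, 120, 30);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint test 123", 2, 15);

  // Hash the rendered pixel data into a short identifier.
  const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  const digest = await crypto.subtle.digest("SHA-256", pixels);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

Machines with different hardware, drivers, or font stacks will typically produce different hashes from the same code, which is what makes the output usable as an identifier.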

Presently, you do not notice these subtle differences between computers all that much (except possibly when you spend hours browsing help forums thinking “someone must have had this problem before” and turn up nothing). But the more complex computers get, the more obvious the differences will become. One day, they will be individuals with irreproducible quirks and bugs – like you and I.

So we have AI fragility plus the trend of increasingly complex hard- and software to become unique. Now extrapolate this some decades into the future. We will have a few large companies, governments, and maybe some billionaires who will be able to afford their own AI. Those AIs will be delicate and need constant attention by a crew of dedicated humans.

This brings up various immediate problems:

1. Who gets to ask questions and what questions?

This may not be a matter of discussion for privately owned AI, but what about those produced by scientists or bought by governments? Does everyone get a right to a question per month? Do difficult questions have to be approved by the parliament? Who is in charge?

2. How do you know that you are dealing with an AI?

The moment you start relying on AIs, there’s a risk that humans will use them to push an agenda by passing off their own opinions as those of the AI. This problem will occur well before AIs are intelligent enough to develop their own goals.

3. How can you tell that an AI is any good at giving answers?

If you only have a few AIs and those are trained for entirely different purposes, it may not be possible to reproduce any of their results. So how do you know you can trust them? It could be a good idea to ask that all AIs have a common area of expertise that can be used to compare their performance.

4. How do you prevent that limited access to AI increases inequality, both within nations and between nations?

Having an AI to answer difficult questions can be a great advantage, but left to market forces alone it’s likely to make the rich richer and leave the poor behind even farther. If this is not something that we want – and I certainly don’t – we should think about how to deal with it.

Friday, December 21, 2018

Winter Solstice

[Photo: Herrmann Stamm]

The clock says 3:30 am. Is that early or late? Wrapped in a blanket I go into the living room. I open the door and step onto the patio. It’s too warm for December. An almost full moon blurs into the clouds. In the distance, the highway hums.

Somewhere, someone dies.

For everyone who dies, two people are born. 7.5 billion and counting.

We came to dominate planet Earth because, compared to other animals, we learned fast and collaborated well. We used resources efficiently. We developed tools to use more resources, and then employed those tools to use even more resources. But no longer. It’s 2018, and we are failing.

That’s what I think every day when I read the news. We are failing.

Throughout history, humans improved how to exchange and act on information held by only a few. Speech, writing, politics, economics, social and cultural norms, TV, telephones, the internet. These are all methods of communication. It’s what enabled us to collectively learn and make continuous progress. But now that we have networks connecting billions of people, we have reached our limits.

Fake news, Russian trolls, shame storms. Some dude’s dick in the wrong place. That’s what we talk about.

And buried below the viral videos and memes there’s the information that was not where it was supposed to be. Hurricane Katrina? The problem was known. The 2008 financial crisis? The problem was known. That Icelandic volcano whose ashes, in 2010, grounded flight traffic? Utterly unsurprising. Iceland has active volcanoes. Sometimes the wind blows South-East. Btw, it will happen again. And California is due for a tsunami. The problems are known.

But that’s not how it will end.

20 years ago I had a car accident. I was on a busy freeway. It was raining heavily and the driver in front of me suddenly braked. Only later did I learn someone had cut him off. I hit the brakes. And then I watched a pair of red lights coming closer.

They say time slows if you fear for your life. It does.

I came to a stop one inch before slamming into the other car. I breathed out. Then a heavy BMW slammed into my back.

Human civilization will go like that. If we don’t keep moving, problems now behind us will slam into our back. Climate change, environmental pollution, antibiotic resistance, the persistent risk of nuclear war, just to mention a few – you know the list. We will have to deal with those sooner or later. Not now. Oh, no. Not us, not now, not here. But our children. Or their children. If we stop learning, if we stop improving our technologies, it’ll catch up with them, sooner or later.

Having to deal with long-postponed problems will eat up resources. Those resources, then, will not be available for further technological development, which will create further problems, which will eat up more resources. Modern technologies will become increasingly expensive until most people can no longer afford them. Infrastructures will crumble. Education will decay. It’s a downward spiral. A long, unpreventable, and disease-ridden regress.

Those artificial intelligences you were banking on? Not going to happen. All the money in the world will not lead to scientific breakthroughs if we don’t have sufficiently many people with the sufficient education.

Who is to blame? No one, really. We are just too stupid to organize our living together on a global scale. We will not make it to the next level of evolutionary development. We don’t have the mental faculties. We do not comprehend. We do not act because we cannot. We don’t know how. We will fail and, maybe, in a million years or so, another species will try again.

Climate negotiations stalled over the choice of a word. A single word.

The clouds have drifted and the bushes now throw faint shadows in the moonlight. A cat screeches, or maybe it’s two. Something topples over. An engine starts. Then, silence again.

In the silence, I can hear them scream. All the people who don’t get heard, who pray and hope and wait for someone to please do something. But there is no one to listen. Even the scientists, even people in my own community, do not see, do not want to see, are not willing to look at their failure to make informed decisions in large groups. The problems are known.

Back there on that freeway, the BMW totaled my little Ford. I came away with neck and teeth damage, though I wouldn’t realize this until months later. I got out of my car and stood in the rain, thinking I’d be late for class. Again. The passenger’s door of the BMW opened and out came – an umbrella. Then, a tall man in a dark suit. He looked at me and the miserable state of my car and handed me a business card. “Don’t worry,” he said, “My insurance will cover that.” It did.

Of course I’m as stupid as everyone else, screaming screams that no one hears and, despite all odds, still hoping that someone has insurance, that someone knows what to do.

I go back into the house. It’s dark inside. I step on a LEGO, one of the pink ones. They have fewer sharp edges; maybe, I think, that’s why parents keep buying them.

The kids are sleeping. It will be some hours until the husband announces his impending awakening with a morning fart. By standby lights I navigate to my desk.

We are failing. I am failing. But what else can I do but try?

I open my laptop.

Friday, December 14, 2018

Don’t ask what science can do for you.

Among the more peculiar side-effects of publishing a book are the many people who suddenly recall we once met.

There are weird fellows who write to say they have mulled for ten years over a single sentence I once said to them. There are awkward close encounters from conferences I’d rather have forgotten about. There are people who I have either indeed forgotten about or didn’t actually meet. And then there are those who, at some time in my life, handed me a piece of the puzzle I’ve since tried to assemble; people I am sorry I forgot about.

For example my high-school physics teacher, who read about me in a newspaper and then came to a panel discussion I took part in. Or Eric Weinstein, who I met many years ago at Perimeter Institute, and who has since become the unofficial leader of the last American intellectuals. Or Robin Hanson, with whom I had a run-in 10 years ago and later met at SciFoo.

I spoke with Robin the other day.

Robin is an economist at George Mason University in Virginia, USA. I had an argument with him because Robin proposed – all the way back in 1990 – that “gambling” would save science. He wanted scientists to bet on the outcomes of their colleagues’ predictions and claimed this would fix the broken incentive structure of academia.
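The post doesn’t spell out a market mechanism, but Hanson’s own logarithmic market scoring rule (LMSR) is a standard choice for this kind of prediction market; here is a minimal sketch of how such a market would price a scientific claim (illustrative only, with made-up numbers):

```typescript
// Minimal sketch of an LMSR (logarithmic market scoring rule) market maker.
// q[i] is the net number of shares sold for outcome i; b sets the liquidity.
class LmsrMarket {
  constructor(private q: number[], private readonly b: number) {}

  // Cost function C(q) = b * ln( sum_i exp(q_i / b) ).
  private cost(q: number[]): number {
    return this.b * Math.log(q.reduce((s, qi) => s + Math.exp(qi / this.b), 0));
  }

  // Current price of outcome i; it can be read as the market's probability.
  price(i: number): number {
    const w = this.q.map((qi) => Math.exp(qi / this.b));
    return w[i] / w.reduce((s, x) => s + x, 0);
  }

  // What a trader pays the market maker to buy `shares` of outcome i.
  buy(i: number, shares: number): number {
    const before = this.cost(this.q);
    this.q[i] += shares;
    return this.cost(this.q) - before;
  }
}

// Example: a two-outcome market on a prediction such as
// "the LHC finds supersymmetric particles": outcomes [yes, no].
const market = new LmsrMarket([0, 0], 100);
console.log(market.price(0).toFixed(2)); // 0.50 before any trades
console.log(market.buy(1, 50).toFixed(2)); // cost of betting on "no" (~28.10)
console.log(market.price(1).toFixed(2)); // probability of "no" has risen (~0.62)
```

The market’s prices can be read as collective probability estimates, which is the sense in which bets would aggregate the information scientists hold.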

I wasn’t fond of Robin’s idea back then. The major reason was that I couldn’t see scientists spending much time on a betting market. Sure, some of them would give it a go, but nowhere near enough for such a market to have much impact.

Economists tend to find it hard to grasp, but most people who stay in academia are not in for the money. This isn’t to say that money is not relevant in academia – it certainly is: Money decides who stays and who goes and what research gets done. But if getting rich is your main goal, you don’t dedicate your life to counting how many strings fit into a proton.

The foundations of physics may be an extreme case, but by my personal assessment most people in this area primarily chase after recognition. They want to be important more than they want to be rich.

And even if my assessment of scientists’ motivations were wrong, such a betting market would need a lot of money going around, more money than scientists can make by upping their reputation through bets on their own predictions.

In my book, I name a few examples of physicists who bet to express confidence in their own theory, such as Garrett Lisi who bet Frank Wilczek $1000 that supersymmetry would not be found at the LHC by 2016. Lisi won and Wilczek paid his due. But really what Garrett did there was just to publicly promote his own theory, a competitor of supersymmetry.

A betting market with minor payoffs, one has to fear, would likewise simply be used by researchers to bet on themselves, because they have more to win by securing grants or jobs, which favorable market odds might facilitate.

But what if scientists could make larger gains by betting smartly than they could make by promoting their own research? “Who would bet against their career?” I asked Robin when we spoke last week.

“You did,” he pointed out.

He got me there.

My best shot at a permanent position in academia would have been LHC predictions for physics beyond the standard model. This is what I did for my PhD. In 2003, I was all set to continue in this direction. But by 2005, three years before the LHC began operation, I became convinced that those predictions were all nonsense. I stopped working on the topic, and instead began writing about the problems with particle physics. In 2015, my agent sold the proposal for “Lost in Math”.

When I wrote the book proposal, no one knew what the LHC would discover. Had the experiments found any of the predicted particles, I’d have made myself the laughing stock of particle physics.

So, Robin is right. It’s not how I thought about it, but I made a bet. The LHC predictions failed. I won. Hurray. Alas, the only thing I won is the right to go around and grumble “I told you so.” What little money I earn now from selling books will not make up for decades of employment I could have gotten playing academia-games by the rules.

In other words, yeah, maybe a betting market would be a good idea. Snort.

My thoughts have moved on since 2007, and so have Robin’s. During our conversation, it became clear our views about what’s wrong with academia and what to do about it have converged over the years. To begin with, Robin seems to have recognized that scientists themselves are indeed unlikely candidates to do the betting. Instead, he now envisions that higher education institutions and funding agencies employ dedicated personnel to gather information and place bets. Let me call those “prediction market investors” (PMIs). Think of them like hedge-fund managers on the stock market.
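To make this a little more concrete, here is a toy Python sketch of how such a market could keep score. I am assuming, purely for illustration, that the market maker uses Hanson’s logarithmic market scoring rule; the two-outcome setup, the liquidity parameter, and the trade are my own made-up numbers, not part of Robin’s proposal.

```python
import math

class LMSRMarket:
    """Toy two-outcome prediction market using Hanson's logarithmic
    market scoring rule. Outcome 0: 'the prediction is confirmed',
    outcome 1: 'the prediction is refuted'. All numbers are illustrative."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity          # larger b means prices move more slowly
        self.q = [0.0, 0.0]         # shares sold so far for each outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Current market probability of the given outcome."""
        total = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome, shares):
        """Buy shares of an outcome; returns what the trade costs."""
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = LMSRMarket()
print(f"prior probability of confirmation: {market.price(0):.2f}")  # 0.50
cost = market.buy(1, 60)   # a PMI bets against the prediction
print(f"the bet costs {cost:.2f}")
print(f"market probability of confirmation is now {market.price(0):.2f}")
```

The point of the sketch is only that every trade moves the quoted probability, so the market aggregates whatever information the PMIs bring to it.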

Importantly, those PMIs would not merely collect information from scientists in academia, but also from those who leave. That’s important because information leaves with people. I suspect that, had you asked those who left particle physics about the LHC predictions, you’d quickly have noticed I was far from the only one who saw a problem. Alas, journalists don’t interview drop-outs. And those who still work in the field have every reason to project excitement and optimism about their research area.

The PMIs would of course not be the only ones making investments. Anyone could do it, if they wanted to. But I am guessing they’d be the biggest players.

This arrangement makes a lot of sense to me.

First and foremost, it’s structurally consistent. The people who evaluate information about the system do not themselves publish research papers. This circumvents the problem that I have long been going on about, that scientists don’t take into account the biases that skew their information-assessment. In Robin’s new setting, it doesn’t really matter if scientists see their mistakes; it only matters that someone sees them.

Second, it makes financial sense. Higher education institutions and funding agencies have reason to pay attention to the prediction market, because it provides new means to bring in money and new information about how to best invest money. In contrast to scientists, they might therefore be willing to engage in it.

Third, it is minimally intrusive yet maximally effective. It keeps the current arrangement of academia intact, but at the same time it has a large potential for impact. Resistance to this idea would likely be small.

So, I quite like Robin’s proposal. Though, I wish to complain, it’s too vague to be practical and needs more work. It’s very, erm, academic.

But in 2007, I had another reason to disagree with Robin, which was that I thought his attempt to “save science” was unnecessary.

This was two years after Ioannidis’ paper “Why most published research findings are false” attracted a lot of attention. It was one year after Lee Smolin and Peter Woit published books that were both highly critical of string theory, which has long been one of the major research-bubbles in my discipline. At the time, I was optimistic – or maybe just naïve – and thought that change was on the way.

But years passed and nothing changed. If anything, problems got worse as scientists began to more aggressively market their research and lobby for themselves. The quest for truth, it seems, is now secondary. More important is that you can sell an idea, both to your colleagues and to the public. And if it doesn’t pan out? Deny, deflect, dissociate.

That’s why you constantly see bombastic headlines about breakthrough insights you never hear of again. That’s why, after years of talking about the wonderful things the LHC might see, no one wants to admit something went wrong. And that’s why, if you read the comments on this blog, you’ll see they wish I’d keep my mouth shut. Because it’s cozy in their research bubble and they don’t want it to burst.

That’s also why Robin’s proposal looks good to me. It looks better the more I think about it. Three days have passed, and now I think it’s brilliant. Funding agencies would make much better financial investments if they’d draw on information from such a prediction market. Unfortunately, without startup support it’s not going to happen. And who will pay for it?

This brings me back to my book. Seeing the utter lack of self-reflection in my community, I concluded scientists cannot solve the problem themselves. The only way to solve it is massive public pressure. The only way to solve the problem is that you speak up. Say it often and say it loudly, that you’re fed up watching research funds go to waste on citation games. Ask for proposals like Robin’s to be implemented.

Because if we don’t get our act together, ten years from now someone else will write another book. And you will have to listen to the same sorry story all over again.

Monday, November 19, 2018

The present phase of stagnation in the foundations of physics is not normal

Nothing is moving in the foundations of physics. One experiment after the other is returning null results: No new particles, no new dimensions, no new symmetries. Sure, there are some anomalies in the data here and there, and maybe one of them will turn out to be real news. But experimentalists are just poking in the dark. They have no clue where new physics might be found. And their colleagues in theory development are of no help.


Some have called it a crisis. But I don’t think “crisis” describes the current situation well: Crisis is so optimistic. It raises the impression that theorists realized the error of their ways, that change is on the way, that they are waking up now and will abandon their flawed methodology. But I see no awakening. The self-reflection in the community is zero, zilch, nada, nichts, null. They just keep doing what they’ve been doing for 40 years, blathering about naturalness and multiverses and shifting their “predictions,” once again, to the next larger particle collider.

I think stagnation describes it better. And let me be clear that the problem with this stagnation is not with the experiments. The problem is loads of wrong predictions from theoretical physicists.

The problem is also not that we lack data. We have data in abundance. But all the data are well explained by the existing theories – the standard model of particle physics and the cosmological concordance model. Still, we know that’s not it. The current theories are incomplete.

We know this both because dark matter is merely a placeholder for something we don’t understand, and because the mathematical formulation of particle physics is incompatible with the math we use for gravity. Physicists knew about these two problems already in the 1930s. And until the 1970s, they made great progress. But since then, theory development in the foundations of physics has stalled. If experiments find anything new now, that will be despite, not because of, some tens of thousands of wrong predictions.

Tens of thousands of wrong predictions sounds dramatic, but it’s actually an underestimate. I am merely summing up predictions that have been made for physics beyond the standard model which the Large Hadron Collider (LHC) was supposed to find: All the extra dimensions in their multiple shapes and configurations, all the pretty symmetry groups, all the new particles with the fancy names. You can estimate the total number of such predictions by counting the papers, or, alternatively, the people working in the fields and their average productivity.

They were all wrong. Even if the LHC finds something new in the data that is yet to come, we already know that the theorists’ guesses did not work out. Not. A. Single. One. How much more evidence do they need that their methods are not working?

This long phase of lacking progress is unprecedented. Yes, it has taken something like two thousand years from the first conjecture of atoms by Democritus to their actual detection. But that’s because for most of these two thousand years people had other things to do than contemplate the structure of elementary matter. Like, for example, how to build houses that don’t collapse on you. For this reason, quoting chronological time is meaningless. We had better look at the actual working time of physicists.

I have some numbers for you on that too. Oh, yes, I love numbers. They’re so factual.

According to membership data from the American Physical Society and the German Physical Society, the total number of physicists has increased by a factor of roughly 100 between the years 1900 and 2000.* Most of these physicists do not work in the foundations of physics. But as far as publication activity is concerned, the various subfields of physics grow at roughly comparable rates. And (leaving aside some bumps and dents around the second world war) the increase in the number of publications, as well as in the number of authors, is roughly exponential.

Now let us assume for the sake of simplicity that physicists today work as many hours per week as they did 100 years ago – the details don’t matter all that much given that the growth is exponential. Then we can ask: How much working time, starting today, corresponds to, say, 40 years of working time starting 100 years ago? Have a guess!

Answer: About 14 months. Going by working hours only, physicists today should be able to do in 14 months what a century earlier took 40 years.
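If you want to check the arithmetic, here is a minimal Python sketch of my estimate. It assumes nothing beyond what I said above: exponential growth, a factor of 100 per century, and constant working hours per head.

```python
import math

# Assumption from the text: the number of physicists grows exponentially,
# by a factor of roughly 100 over the century from 1900 to 2000.
growth_factor = 100.0
r = math.log(growth_factor) / 100.0        # growth rate per year

def person_years(t_start, t_end):
    """Integrated workforce between two times, with t = 0 being today
    and the workforce measured in units of today's headcount."""
    return (math.exp(r * t_end) - math.exp(r * t_start)) / r

# Workforce-time accumulated in the 40 years that began a century ago.
past = person_years(-100, -60)

# How many years T, starting today, accumulate the same workforce-time?
# Solve (exp(r*T) - 1)/r = past for T.
T = math.log(1.0 + r * past) / r
print(f"{T:.2f} years, i.e. about {12 * T:.1f} months")  # close to the 14 months quoted above
```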

Of course you can object that progress doesn’t scale that easily, for despite all the talk about collective intelligence, research is still done by individuals. This means processing time can’t be decreased arbitrarily by simply hiring more people. Individuals still need time to exchange and comprehend each other’s insights. On the other hand, we have also greatly increased the speed and ease of information transfer, and we now use computers to aid human thought. In any case, if you want to argue that hiring more people will not aid progress, then why hire them?

So, no, I am not serious about this estimate, but it explains why the argument that the current stagnation is not unprecedented is ill-informed. We are today making more investments into the foundations of physics than ever before. And yet nothing is coming out of it. That’s a problem, and it’s a problem we should talk about.

I’ve recently been told that the use of machine learning to analyze LHC data signals a rethinking in the community. But that isn’t so. To begin with, particle physicists have used machine learning tools to analyze data for at least three decades. They use it more now because it’s become easier, and because everyone does it, and because Nature News writes about it. And they would have done it either way, even if the LHC had found new particles. So, no, machine learning in particle physics is not a sign of rethinking.

Another comment-not-a-question I constantly have to endure is that I supposedly only complain but don’t have any better advice for what physicists should do.

First, it’s a stupid criticism that tells you more about the person criticizing than the person being criticized. Suppose I were criticizing not a group of physicists, but a group of architects. If I inform the public that those architects spent 40 years building houses that all fell to pieces, why is it my task to come up with a better way to build houses?

Second, it’s not true. I have spelled out many times very clearly what theoretical physicists should do differently. It’s just that they don’t like my answer. They should stop trying to solve problems that don’t exist. That a theory isn’t pretty is not a problem. Focus on mathematically well-defined problems, that’s what I am saying. And, for heaven’s sake, stop rewarding scientists for working on what is popular with their colleagues.

I don’t take this advice out of nowhere. If you look at the history of physics, it was working on the hard mathematical problems that led to breakthroughs. If you look at the sociology of science, bad incentives create substantial inefficiencies. If you look at the psychology of science, no one likes change.

Developing new methodologies is harder than inventing new particles by the dozen, which is why they don’t like to hear my conclusions. Any change will reduce the paper output, and they don’t want this. It’s not institutional pressure that creates this resistance, it’s that scientists themselves don’t want to move their butts.

How long can they go on with this, you ask? How long can they keep on spinning theory-tales?

I am afraid there is nothing that can stop them. They review each other’s papers. They review each other’s grant proposals. And they constantly tell each other that what they are doing is good science. Why should they stop? For them, all is going well. They hold conferences, they publish papers, they discuss their great new ideas. From the inside, it looks like business as usual, just that nothing comes out of it.

This is not a problem that will go away by itself.


If you want to know more about what is going wrong with the foundations of physics, read my book “Lost in Math: How Beauty Leads Physics Astray.”


* That’s faster than the overall population growth, meaning the fraction of physicists, indeed of scientists in general, has increased.

Saturday, November 10, 2018

Self-driving car rewarded for speed learns to spin in circles. Or, how science works like a neural net.

When I write about problems with the current organization of scientific research, I like to explain that science is a self-organizing, adaptive system. Unfortunately, that’s when most people stop reading because they have no idea what the heck I am talking about.

I have now realized there is a better way to explain it, one that has the added benefit of creating the impression that it’s both a new idea and easy to understand: Science works like a neural network. Or an artificial intelligence, just to make sure we have all the buzzwords in place. Of course that’s because neural networks really are adaptive systems, so neither framing is really new, but then even Coca Cola sometimes redesigns its bottles.

In science, we have a system with individual actors that we feed with data. This system tries to optimize a certain reward-function and gets feedback about how well it’s doing. Iterate, and the system will learn ways to achieve its goals by extrapolating patterns in the data.

Neural nets can be a powerful method to arrive at new solutions for data-intensive problems. However, whether the feedback loop gives the desired result strongly depends on how carefully you configure the reward function. To translate this back to my going on about the malaises of scientific research, if you give researchers the wrong incentives, they will learn unintended lessons.

Just the other day I came across a list of such unintended lessons learned by neural nets. Example: Reward a simulated car for continuously going at high speed, and it will learn to rapidly spin in a circle.
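For illustration, here is a toy version of that failure mode in Python. It is my own sketch, not the simulation from that list: a point “car” in a bounded arena is rewarded for nothing but speed, and even a brute-force search over constant controls ends up preferring to drive in tight circles rather than to go anywhere.

```python
import math

def rollout(throttle, steering, steps=300, dt=0.1, arena_radius=5.0):
    """Simulate a point car with constant controls; the reward is speed only."""
    x = y = heading = speed = 0.0
    reward = 0.0
    for _ in range(steps):
        speed += (throttle - 0.1 * speed) * dt       # accelerate, with drag
        heading += steering * speed * dt             # steering bends the path
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        if math.hypot(x, y) > arena_radius:          # hit the wall: the car stalls
            speed = 0.0
        reward += speed * dt                         # reward speed, not progress
    return reward, math.hypot(x, y)

# Brute-force "policy search" over a few constant control settings.
candidates = [(t, s) for t in (0.5, 1.0) for s in (0.0, 0.5, 2.0)]
best = max(candidates, key=lambda c: rollout(*c)[0])
reward, displacement = rollout(*best)
print(f"best policy: throttle={best[0]}, steering={best[1]}")          # steering is nonzero
print(f"accumulated reward {reward:.0f}, net displacement {displacement:.1f}")
print(f"driving straight instead earns only {rollout(1.0, 0.0)[0]:.0f}")
```

The reward was meant as a proxy for getting somewhere; the optimizer happily maximizes the proxy instead.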

Likewise, researchers rewarded to produce papers at a high frequency will learn to rapidly spin around their own axis by inventing and debating problems that don’t lead anywhere. Some recent examples from my own field are the black hole firewall, the non-naturalness of the Higgs-mass, or the string theory swampland.

Here is another gem: “Agent pauses the game indefinitely to avoid losing.” I see close parallels to the current proliferation of theories that are impossible to rule out, such as supersymmetries and multiverses.

But it could be worse: at least we are not moving backward. Yet. Because now that I think about it, rediscovering long-known explanations would also be a good way to feign productivity.

Of course I know of the persistent myth that scientific research is evaluated by its ability to describe observations, so I must add some words on this: I know that’s what you were told, but it’s not how it works in practice. In practice, scientists and funding agencies alike must evaluate hypotheses prior to testing them, to decide what is worth the time and money of testing to begin with. And the only ones able to evaluate the promise of research directions are researchers themselves.

It follows that there is no external reward function which you can impose on scientists that will optimize the return on investment. The best – indeed the only – method at your disposal is to let scientists make the evaluation internally, and then use their evaluation to distribute funding. In doing this, you may want to impose constraints on how the funding is used, e.g. by encouraging researchers to study specific topics. Such external constraints will reduce the overall efficiency, but this may be justifiable for societal reasons.

In case you missed it, this solution – which I have written and spoken about for more than a decade now – could come right out of the neo-libertarian’s handbook. The current system is over-regulated and therefore highly inefficient. More regulations will not fix it. This is why I am personally opposed to top-down solutions, like requirements coming from funding agencies.

However, the longer the current situation goes on, the more people we will have in the system who are convinced that what they are doing is the right thing, and the longer it will take for the problem to resolve even if you remove the flawed incentives. Indeed, in my impression the vast majority of scientists today already falls into this category: They sincerely believe that publications and citations are reliable indicators for good research.

Why do these problems persist even though they have been known for decades? I think the major reason is that most people – and that includes scientists themselves – do not understand the operation of the systems that they themselves are part of. It is not something that evolution allowed us to develop any intuitive grasp for.

Scientists in particular by and large think of themselves as islands. They do not take into account the manifold ways in which the information they obtain is affected by the networks they are part of, and neither do they consider that their assessment of this information is influenced by the opinions of others. This is a serious shortcoming in the present education of scientists.

Will drawing an analogy between scientific research and neural nets help them see the light? I don’t know. But maybe then in the not-so-far future we will all be replaced by AIs anyway. At least those sometimes get debugged.

Friday, October 26, 2018

Will it end? [In which I have breakfast with John Horgan]

Taking a selfie with a book on your face is more difficult than you may think.
I had breakfast with John The-End-Of-Science-Horgan two weeks ago, and I’m beginning to think it was a mistake.

I had backed out of an after-lecture dinner two days earlier, which I already felt guilty about, so I may have forgotten to mention that I actually don’t eat breakfast. To make matters worse, I arrived late that morning because, once I stepped into the shower, I noticed there were no towels in the hotel room. And when I had finally managed to dry my hair and find the place, I had to prevent an excited New Jersey taxi driver from having John pay my bill. Then we watched the taxi-man write down my credit card information in sloth-motion.

To celebrate this shitty start to the day, I ordered a coffee, just to learn that John doesn’t drink coffee. Which I should have known, because he wrote about his coffee-fast on his blog. Evidently, I didn’t read it. Or maybe I did but immediately forgot about it. Either way, I’m a bad person. Even more so because John promptly also ordered a coffee. Caffeine-free, but still, now I had become somebody’s bad influence. And caffeine-free coffee, I hope y’all know, isn’t actually caffeine-free.

Luckily, the morning improved thereafter. John turned out to be a really nice guy who will cheerfully explain why science is over, which reminds me of the time I accidentally sprinkled herbal salt on a strawberry-jam sandwich. Indeed, he turned out to be so nice that now I was feeling guilty for spending Saturday morning with a nice guy somewhere in New Jersey while my husband watched the kids 4000 miles to the East.

If that makes you think my brain is a pretty fucked-up place, it gets worse from here on. That’s because to work off all that guilt, I did what you do to make authors happy: you go buy their books. So, once back in Germany, I went and bought “The End of Science,” 2015 edition. It was not a good idea.

Horgan’s book “The End of Science” was originally published in 1996. I never read it because after attempting to read Stuart Kauffman’s 1995 book “At Home In the Universe” I didn’t touch a popular science book for a decade. This had very little to do with Kauffman (who I’d meet many years later) and very much to do with a basic malfunction of my central processing unit. Asked to cope with large amounts of complex, new information, part of my brain will wave bye-bye and go fishing. The result is a memory blackout.

I started having this in my early 20s, as I was working on my bachelor’s degree. At the time I was living in Frankfurt, where I shared an apartment with another student. Like most students, I spent my days reading. Then one day I found myself in a street somewhere in the city center without any clue how I had gotten there. This happened again a few weeks later. Interestingly enough, in both cases I was looking at my own reflection in a window when my memory came back.

It’s known as dissociative fugue, and not entirely uncommon. According to estimates, it affects about one or two in a thousand people at least once in their life. The actual number may be higher because it can be hard to tell if you even had a fugue. If you stay in one place, the only thing you may notice is that the day seems rather short.

These incidents piled up for a while. Aside from sudden wake-ups in places I had no recollection of visiting, I was generally confused about what I had or had not done. Sometimes I’d go to take a shower only to find my towel wet and conclude I probably had already taken one earlier. Sometimes I’d stand in the staircase with my running shoes, not knowing whether I was just about to go running or had just come back. I made sure to eat at fixed times to not entirely screw up my calorie intake.

Every once in a while I would meet someone I knew, or answer the phone, while my stupid brain wasn’t keeping records. From what I’ve been told, I’m not any weirder off the record than on the record. So it’s not like I have multiple personalities. I just sometimes don’t recall what I do.

The biggest problem with dissociative fugue isn’t the amnesia. The biggest problem is that you begin to doubt your own ability to reconstruct reality. I suspect the major reason I’m not a realist and have the occasional lapse into solipsism is that I know reality is fragile. A few wacky neurons are all it takes to screw it up.

What has any of this to do with Horgan? Nothing, really, but it’s why I didn’t read his book when it came out. And then, when I met John, he unwittingly reminded me of times I’d rather have entirely forgotten.

Back then I kept records of my episodes. It looked like it was primarily popular science books that would bring them on, so I stopped reading those. This indeed mostly solved the problem. That and some pills and a few years of psychotherapy. I can only guess why I never had issues with textbooks, maybe because those tend to be rather narrowly focused.

In any case, for ten years the only thing I read besides textbooks was cheap novels, notably Dean Koontz, whose writing is so repetitive and shallow that I have blissfully forgotten what those books were about. Then, in late 2005 Lee Smolin handed me a draft of his book “The Trouble With Physics” which would appear the following year. And what was I supposed to tell him? So I read Lee’s book. My memory lapses came back with a few months delay, but they were nowhere near as disruptive as earlier. And so, thanks to Lee, I slowly returned to reading popular science books.

With the self-insight that age brings, I’ve noticed my mental health issues are strongly stress-related. I’ve also learned to tell the first signs of trouble. The past months I’ve worked too much, traveled too much, and said “yes” too often. It took me two weeks to make my way through the first 50 pages of Horgan’s book. It’s not going well. And so I think for now I better go back to reading Chad Orzel’s new book “Breakfast With Einstein.” Because that’s an easy read about things I know already. I’m sorry, John. Don’t take it personally.

Having said this, I thought it would be good to write down some thoughts about the supposed end of science before reviewing Horgan’s book (should I ever manage to finish it). But first let me show you an advertisement:


I don’t particularly like American comedy (neither the intended nor the unintended kind) because I tend to find it unfunny. But this guy with his blender makes me laugh every time. Not sure why, maybe it’s his glassy stare.

In case you’ve never encountered these videos before, it seems to be an advert series featuring an old white guy shredding electronics with his awesome blender. “Will it blend?” he asks and infallibly ends up with a pile of grey dust.

I now picture Horgan stuffing science in his blender, pushing the button asking “Will it end?” This thought-experiment teaches us that science will end as infallibly as the Amazon Echo will blend. Because everything will end. You, and I, and John Horgan, and, yes, even Donald Trump’s complaints about the evil media. Entropy increase will get us all, eventually.

So, yeah, science will end.

But that’s not the interesting question. The interesting question is whether it’s ending right now. On the death bed, flatlining as we speak.

Like most scientists, I am willing to argue the opposite, though not because I see all that much progress. On the contrary, it’s because I see so little progress. Scientific research today works extremely inefficiently because scientists waste time and money chasing after well-cited publications in high-impact journals. This inefficiency is problematic, frustrating, infuriating even. But it implies that science has untapped potential.

Whether making science more efficient is possible and whether it would actually make a difference I don’t know. I’ll see what John has to say about that. Which I should have done before I wrote my book.

I’m a bad person. And I promise I’ll read his book, eventually.