Showing posts with label Rant.

Wednesday, September 23, 2020

Follow the Science? Nonsense, I say.

Today I want to tell you why I had to stop reading news about climate science. Because it pisses me off. Every. Single. Time.



There’s all these left-wing do-gooders who think their readers are too fucking dumb to draw their own conclusions so it’s not enough to tell me what’s the correlation between hurricane intensity and air moisture, no, they also have to tell me that, therefore, I should donate to save the polar bears. There’s this implied link: Science says this, therefore you should do that. Follow the science, stop flying. Follow the science, go vegan. Follow the science and glue yourself to a bus, because certainly that’s the logical conclusion to draw from the observed weakening of the Atlantic meridional overturning circulation.

When I was your age, we learned science does not say anything about what we should do. What we should do is a matter of opinion; science is a matter of fact.

Science tells us what situation we are in and what consequences our actions are likely to have, but it does not tell us what to do. Science does not say you shouldn’t pee on high voltage lines, it says urine is an excellent conductor. Science does not say you should stop smoking, science says nicotine narrows arteries, so if you smoke you’ll probably die young lacking a few toes. Science does not say we should cut carbon dioxide emissions. It says if we don’t, then by the end of the century estimated damages will exceed some trillion US dollars. Is that what we should go for? Well, that’s a matter of opinion.

“Follow the science” is a complete rubbish idea, because science does not know the direction. We have to decide what way to go.

You’d think it’s bad enough that politicians conflate scientific fact with opinion, but the media actually make it worse. They make it worse by giving their audience the impression that it matters what someone whose job it is to execute the will of the electorate believes about scientific facts. But I couldn’t care less if Donald Trump “believes” in climate change. Look, this is a man who can’t tell herd immunity from herd mentality, he probably thinks winter’s the same as an ice age. It’s not his job to offer opinions about science he clearly doesn’t understand, so why do you keep asking him? His job is to say if the situation is this, we will do that. At least in principle, that’s what he should be doing. Then you look up what science says which situation we are in and act accordingly.

The problem, the problem, you see, is that by conflating the two things – the facts with the opinions – the media give people an excuse to hide opinions behind scientific beliefs. If you don’t give a shit that today’s teenagers will struggle their whole life cleaning up the mess that your generation left behind, fine, that’s a totally valid opinion. But please just say it out loud, so we can all hear it. Don’t cover it up by telling us a story about how you weren’t able to reproduce a figure in the IPCC report even though you tried really hard for almost ten seconds, because no one gives a shit whether you have your own “theory.”

If you are more bothered by the prospect of rising gasoline prices than by rising sea levels because you don’t know anyone who lives by the sea anyway, then just say so. If you worry more about the pension for your friend the coal miner than about drought and famine in the developing world because after all there’s only poor people in the developing world, then just say so. If you don’t give a shit about a global recession caused by natural catastrophes that eat up billion after billion because you’re a rich white guy with a big house and think you’re immune to trouble, then just say so. Say it loud, so we can all hear it.

And all the rest of you stop chanting we need to “follow the science”. People who oppose action on climate change are not anti-science, they simply worry more that a wind farm might ruin the view from their summer vacation house than that wildfires will burn down the house. That’s not anti-scientific, that’s just dumb. But then that’s only my opinion.

Tuesday, April 28, 2020

No, physicists have not explained why there is more matter than anti-matter in the universe. It’s not possible.

Pretty? Get over it.
You would think that physicists have finally understood that insisting the laws of nature must be beautiful is unscientific. Or at least, if they do not understand it, you would think science writers by now understand it. But every couple of months I have to endure yet another media blast about physicists who may have solved a problem that does not exist in the first place.

The most recent installment of this phenomenon is the load of articles about the recent T2K results that hint at CP violation in the neutrino sector. Yes, this is an interesting result and deserves to be written about. The problem is not the result itself; the problem is scientists and science writers who try to make this result more important than it is.

Truth be told, few people care about CP violation in the neutrino sector. To sell the story, therefore, this turned into a tale about how the results supposedly explain why there is more matter than antimatter in the universe. But: The experiment does not say anything about why there is more matter than anti-matter in the universe. No, it does not. No, not a single bit. If you think it does, you need to urgently switch on your brain. I do not care what your professor said, please think for yourself. Start right now.

You can see for yourself what the problem is by reading the reports in the media. Not a single one of them explains why anyone should think there ever were equal amounts of matter and anti-matter to begin with. Leah Crane, for example, writes for New Scientist: “Our leading theories tell us that, in the moments after the big bang, there was an equal amount of matter and antimatter.”

But, no, they do not. They cannot. You don’t even need to know what these “leading theories” look like in detail, except that, like all current theories in physics, they work by applying differential equations to initial values. Theories of this type can never explain the initial values themselves. It’s not possible. The theories therefore do not tell us there was an equal amount of matter and antimatter. This amount is a postulate. The initial conditions are always assumptions that the theory does not justify.
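
In case you prefer seeing this rather than taking my word for it, here is a toy sketch in Python. The evolution law is entirely made up and not any actual cosmological model; the point is only that equations of this type evolve whatever initial value you hand them, and nothing in them singles one value out.

    import math

    def evolve(eta_initial, steps=1000, dt=0.01):
        # Made-up evolution law for a net matter-antimatter asymmetry eta;
        # the specific right-hand side is beside the point. Simple Euler
        # integration is good enough for a toy.
        eta, t = eta_initial, 0.0
        for _ in range(steps):
            eta += -0.1 * eta * math.sin(t) ** 2 * dt
            t += dt
        return eta

    # Two "theories" that differ only in their postulated initial value.
    # The differential equation justifies neither choice.
    for eta0 in (1e-10, 0.0):
        print(f"initial asymmetry {eta0:.0e} -> asymmetry today {evolve(eta0):.2e}")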

Instead, physicists think, for purely aesthetic reasons, that it would have been nicer if there was an equal amount of matter and antimatter in the early universe. Trouble is, this does not agree with observation. So then they cook up theories for how you can start with an equal amount of matter and anti-matter and still end up with a universe like the one we see. You find a good illustration for this in a paper by Steigman and Scherrer with the title “Is The Universal Matter-Antimatter Asymmetry Fine Tuned?” (arXiv:1801.10059). They write:
“One possibility is that the Universe actually began in an asymmetric state, with more baryons than antibaryons. This is, however, a very unsatisfying explanation. Furthermore, if the Universe underwent a period of inflation (i.e., very rapid expansion followed by reheating), then any preexisting net baryon number would have been erased. A more natural explanation is that the Universe began in an initally [sic] symmetric state, with equal numbers of baryons and antibaryons, and that it evolved later to produce a net baryon asymmetry.”
They call it an “unsatisfying explanation” to postulate a number, but the supposedly better explanation still postulates a number!

People always complain to me that I am supposedly forgetting that science is all about “explaining”. These complainers do not listen. Nothing is being explained here. The two hypotheses on the table are: “The universe started with a ratio X of matter to anti-matter and the outcome is what we observe” and “The universe started with a ratio Y of matter to anti-matter, then many complicated things happened, and the outcome is what we observe.” Neither of these theories explains the value of X or Y. If anything, you should prefer the former hypothesis because it’s clearly the simpler one. In any case, though, as I said, this type of theory cannot explain its own initial values.

But here is the mind-boggling thing: The vast majority of physicists think that the second explanation is somehow better because the number 1.0000000000 is prettier than the number 1.0000000001. That’s what it comes down to. They like some numbers better than others. But, look, a first grader can see the problem. Physicists are wondering why X=1.0000000001. But with the supposedly new explanation you then have to ask why Y=1.0000000000. How is that an improvement? Answer: It is not.

Let me emphasize once again that the problem here is not the experiment itself. The problem is that physicists mistakenly think something is being explained because they never bothered to think about what it even means to explain something.

You may disagree with me that scientists should not waste time on trying to prettify the laws of nature, alright. Maybe you think this is something scientists should do with tax money. But I expect that if a topic gets media coverage then the public hears the truth. So here is the truth: No problem has been solved. The problem is not solvable with the current theories of nature.

Friday, February 28, 2020

Quantum Gravity in the Lab? The Hype Is On.


Quanta Magazine has an article by Philip Ball titled “Wormholes Reveal a Way to Manipulate Black Hole Information in the Lab”. It’s about using quantum simulations to study the behavior of black holes in anti-de Sitter space, that is, a space with a negative cosmological constant. A quantum simulation is a collection of particles with specifically designed interactions that can mimic the behavior of another system. To briefly remind you, we do not live in anti-de Sitter space. For all we know, the cosmological constant in our universe is positive. And no, the two cases are not remotely similar.

It’s an interesting topic in principle, but unfortunately the article by Ball is full of statements that gloss over the not very subtle fact that we do not live in anti-de Sitter space. We can read there, for example:
“In principle, researchers could construct systems entirely equivalent to wormhole-connected black holes by entangling quantum circuits in the right way and teleporting qubits between them.”
The correct statement would be:
“Researchers could construct systems whose governing equations are in certain limits equivalent to those governing black holes in a universe we do not inhabit.”
Further, needless to say, a collection of ions in the laboratory is not “entirely equivalent” to a black hole. For starters, that is because the ions are made of other particles which are yet again made of other particles, none of which has any correspondence in the black hole analogy. Also, in case you’ve forgotten, we do not live in anti-de Sitter space.

Why do physicists even study black holes in anti-de Sitter space? To make a long story short: Because they can. They can, both because they have an idea of how the math works and because they can get paid for it.

Now, there is nothing wrong with using methods obtained by the AdS/CFT correspondence to calculate the behavior of many particle systems. Indeed, I think that’s a neat idea. However, it is patently false to give the impression that this tells us anything about quantum gravity, where by “quantum gravity” I mean the theory that resolves the inconsistency between the Standard Model of particle physics and General Relativity in our universe. I.e., a theory that actually describes nature. We have no reason whatsoever to think that the AdS/CFT correspondence tells us something about quantum gravity in our universe.

As I explained in this earlier post, it is highly implausible that the results from AdS carry over to flat space or to space with a positive cosmological constant because the limit is not continuous. You can of course simply take the limit ignoring its convergence properties, but then the theory you get has no obvious relation to General Relativity.

Let us have a look at the paper behind the article. We can read there in the introduction:
“In the quest to understand the quantum nature of spacetime and gravity, a key difficulty is the lack of contact with experiment. Since gravity is so weak, directly probing quantum gravity means going to experimentally infeasible energy scales.”
This is wrong and it demonstrates that the authors are not familiar with the phenomenology of quantum gravity. Large deviations from the semi-classical limit can occur at small energy scales. The reason is, rather trivially, that large masses in quantum superpositions should have gravitational fields in quantum superpositions. No large energies necessary for that.

If you could, for example, put a billiard ball into a superposition of locations, you should be able to measure what happens to its gravitational field. This is infeasible, but not because it involves high energies. It’s infeasible because decoherence kicks in too quickly to measure anything.
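
To put rough numbers on this – my own back-of-the-envelope estimate, not from any concrete proposal:

    # Gravitational acceleration near a billiard ball (assumed numbers).
    G = 6.674e-11   # Newton's constant in m^3 kg^-1 s^-2
    m = 0.17        # billiard ball mass in kg
    r = 0.05        # distance from the ball's center in m

    a = G * m / r**2
    print(f"acceleration toward the ball: {a:.1e} m/s^2")  # ~4.5e-9 m/s^2

Torsion balances have measured the fields of kilogram-scale masses since Cavendish; no high energies appear anywhere in this estimate. The hard part would be keeping the ball coherent.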

Here is the rest of the first paragraph of the paper. I have added corrections, set off here with asterisks, that any reviewer should have insisted on:
“However, a consequence of the holographic principle [3, 4] and its concrete realization in the AdS/CFT correspondence [5–7] (see also [8]) is that non-gravitational systems with sufficient entanglement may exhibit phenomena characteristic of quantum gravity *in a space with a negative cosmological constant*. This suggests that we may be able to use table-top physics experiments to indirectly probe quantum gravity *in universes that we do not inhabit*. Indeed, the technology for the control of complex quantum many-body systems is advancing rapidly, and we appear to be at the dawn of a new era in physics—the study of quantum gravity in the lab*, except that, by the methods described in this paper, we cannot actually test quantum gravity in our universe. For this, other experiments are needed, which we will however not even mention*.

The purpose of this paper is to discuss one way in which quantum gravity can make contact with experiment*, if you, like us, insist on studying quantum gravity in fictional universes that for all we know do not exist*.”

I pointed out that these black holes that string theorists deal with have nothing to do with real black holes in an article I wrote for Quanta Magazine last year. It was also the last article I wrote for them.

Saturday, July 06, 2019

No, we will not open a portal to a parallel universe

Colbert’s legendary quadruple facepalm.
The nutty physics story of the day comes to us thanks to Michael Brooks who reports for New Scientist that “We’ve seen signs of a mirror-image universe that is touching our own.” This headline has since spread to The Independent, according to which scientists are “attempting to open portal to a parallel universe” and the International Business Times, which wants you to believe that “Scientists Build A Portal To Find A Parallel Universe”.

Needless to say, we have not seen signs of a mirror universe, and we are not building portals to parallel universes. And if we did, trust me, you wouldn’t hear about it from New Scientist. To first approximation it is safe to assume that whatever you read in New Scientist is either not new or not science, or both.

This story is a case of both, neither new nor science. It is really – once again – about hypothetical particles that physicists have invented just because. In this case it’s particles which are exact copies of the ones that we already know, except for their handedness. These mirror-particles* do not interact with the normal particles, which is supposedly why we haven’t measured them so far. (You find instructions for how to invent particles yourself in my book, Chapter 9 in the section “Laws Like Sausages”.)

The idea of mirror-particles has been around since at least the 1960s. It’s not particularly popular among physicists, because what little we know about dark matter tells us exactly that it does not behave the same way as normal matter. So, to make mirror dark matter fit the data, you have to invent some reason for why, in the end, it is not a mirror copy of normal matter.

And then there is the problem that if the mirror matter really doesn’t interact with our normal matter you cannot measure it. So, if you want to get an experimental search funded, you have to postulate that it does interact. Why? Because otherwise you can’t measure it. Sounds like circular reasoning? That’s what it is.

Now once you have postulated that the hypothetical particles may interact in a way that makes them measurable, then you can make an experiment and try to actually measure them. It is such a measurement that this story is about.

Concretely, it seems to be about the experiment laid out in this paper:
    New Search for Mirror Neutrons at HFIR
    arXiv:1710.00767 [hep-ex]
The authors propose to search for evidence of neutrons oscillating into mirror neutrons.
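
The quantum mechanics behind such a search is generic two-state oscillation. Here is a minimal sketch with made-up parameter values; the actual proposal of course involves details like magnetic field scans, which I ignore:

    import math

    def p_conversion(t, tau):
        # If the neutron and its mirror twin are degenerate in energy and
        # mix with oscillation time tau, the probability to find a mirror
        # neutron after a free flight of duration t is sin^2(t/tau),
        # roughly (t/tau)^2 for t << tau.
        return math.sin(t / tau) ** 2

    t_flight = 0.1   # seconds of free flight, hypothetical
    tau = 100.0      # oscillation time in seconds, hypothetical
    print(f"P(n -> n') ~ {p_conversion(t_flight, tau):.0e}")  # ~1e-06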

Now, look, this is exactly the type of ill-motivated experiment that I complained about the other day. Can you do this experiment? Sure. Will it help you solve any of the open problems in the foundations of physics? Almost certainly not. Why not? Because we have no reason to think that these particular particles exist and interact with normal matter in just the way necessary to measure them.

It is not a coincidence that we see so many of these small-scale experiments now, because this is a strategic decision of the community. Indeed, you find this strategy quoted in the paper for justification: the 2014 Report of the Particle Physics Project Prioritization Panel (P5) stressed the importance of considering “every feasible avenue” to look for new types of dark matter particles.

It adds to this that, some months ago, the Department of Energy announced a plan to provide $24 million for the development of new projects to study dark matter which will undoubtedly fuel physicists’ enthusiasm for thinking up even more new particles.

This, folks, is only the beginning.

I cannot stress enough how idiotic this so-called “strategy” is. You will see million after million vanish into searches for particles invented simply because you can look for them.

If you do not understand why I say this is insanity and not proper science, please read my article in which I explain that falsifiability is necessary but not sufficient to make a hypothesis scientific. This strategy is based on a basic misunderstanding of the philosophy of science. It is an institutionalized form of motivated reasoning, a mistake that will cost taxpayers tens of millions.

The only good thing about this strategy is that hopefully the media will soon get tired of writing about each and every little lab’s search for nonexistent particles.


* Not to be confused with supersymmetric partner particles. Different story entirely.

Tuesday, March 05, 2019

Merchants of Hype

Once upon a time, the task of scientists was to understand nature. “Merchants of Light,” Francis Bacon called them. They were a community of knowledge-seekers who subjected hypotheses to experimental test, using what we now simply call “the scientific method.” Understanding nature, so the idea went, would both satisfy human curiosity and better our lives.

Today, the task of scientists is no longer to understand nature. Instead, their task is to uphold an illusion of progress by wrapping incremental advances in false promise. Merchants they still are, all right. But now their job is not to bring enlightenment; it is to bring excitement.

Nowhere is this more obvious than with big science initiatives. Quantum computing, personalized medicine, artificial intelligence, simulated brains, mega-scale particle colliders, and everything nano and neuro: While all those fields have a hard scientific core that justifies some investment, the bulk of it is empty headlines. Most of the money goes into producing papers whose only purpose is to create an appearance of relevance.

Sooner or later, those research-bubbles become unsustainable and burst. But with the current organization of research, more people brings more money brings more people. And so, the moment one bubble bursts, the next one is on the rise already.

The hype-cycle is self-sustaining: Scientists oversell the promise of their research and get funding. Higher education institutions take their share and deliver press releases to the media. The media, since there’s money to be made, produce headlines about breakthrough insights. Politicians are pleased about the impact, talk about international competitiveness, and keep the money flowing.

Trouble is, the supposed breakthroughs rarely lead to tangible progress. Where are our quantum computers? Where are our custom cancer cures? Where are the nano-bots? And why do we still not know what dark matter is made of?

Most scientists are well aware their research floats on empty promise, but keep their mouths shut. I know this not just from my personal experience. I know this because it has been vividly, yet painfully, documented in a series of anonymous interviews with British and Australian scientists about their experience writing grant proposals. These interviews, conducted by Jennifer Chubb and Richard Watermeyer (published in Studies in Higher Education), made me weep:
“I will write my proposals which will have in the middle of them all this work, yeah but on the fringes will tell some untruths about what it might do because that’s the only way it’s going to get funded and you know I’ve got a job to do, and that’s the way I’ve got to do it. It’s a shame isn’t it?”
(UK, Professor)

“If you can find me a single academic who hasn’t had to bullshit or bluff or lie or embellish in order to get grants, then I will find you an academic who is in trouble with his Head of Department. If you don’t play the game, you don’t do well by your university. So anyone that’s so ethical that they won’t bend the rules in order to play the game is going to be in trouble, which is deplorable.”
(Australia, Professor)

“We’ll just find some way of disguising it, no we’ll come out of it alright, we always bloody do, it’s not that, it’s the moral tension it places people under.”
(UK, Professor)

“They’re just playing games – I mean, I think it’s a whole load of nonsense, you’re looking for short term impact and reward so you’re playing a game... it’s over inflated stuff.”
(Australia, Professor)

“Then I’ve got this bit that’s tacked on... That might be sexy enough to get funded but I don’t believe in my heart that there’s any correlation whatsoever... There’s a risk that you end up tacking bits on for fear of the agenda and expectations when it’s not really where your heart is and so the project probably won’t be as strong.”
(Australia, Professor)
In other interviews, the researchers referred to their proposals as “virtually meaningless,” “made up stories” or “charades.” They felt sorry for their own situation. And then justified their behavior by the need to get funding.

Worse, the above quotes only document the tip of the iceberg. That’s because the people who survive in the current system are the ones most likely to be okay with the situation. This may be because they genuinely believe their field is as promising as they make it sound, or because they manage to excuse their behavior to themselves. Either way, the present selection criteria in science favor skilled salesmanship over objectivity. Need I say that this is not a good way to understand nature?

The tragedy is not that this situation sucks, though, of course, it does. The tragedy is that it’s an obvious problem and yet no one does anything about it. If scientists can increase their chances to get funding by exaggeration, they will exaggerate. If they can increase their chances to get funding by being nice to their peers, they will be nice to their peers. If they can increase their chances to get funding by publishing on popular topics, they will publish on popular topics. You don’t have to be a genius to figure that out.

Tenure was supposed to remedy scientists’ conflict of interest between truth-seeking and economic survival. But tenure is now a rarity. Even the lucky ones who have it must continue to play nice, both to please their institution and keep the funding flowing. And honesty has become self-destructive. If you draw attention to shortcomings, if you debunk hype, if you question the promise of your own research area, you will be expelled from the community. A recent commenter on this blog summed it up like this:
“at least when I was in [high energy physics], it was taken for granted that anyone in academic [high energy physics] who was not a booster for more spending, especially bigger colliders, was a traitor to the field.”
If you doubt this, think about the following. I have laid out clearly why I do not think a bigger particle collider is currently a good investment. No one who understands the scientific and technological situation seriously disagrees with my argument; they merely disagree with the conclusions. This is fine with me. This is not the problem. I don’t expect everyone to agree with me.

But I also don’t expect everyone to disagree with me, and neither should you. So here is the puzzle: Why can you not find any expert, besides me, willing to publicly voice criticism on particle physics? Hint: It’s not because there is nothing to criticize.

And if you figured this one out, maybe you will understand why I say I cannot trust scientists any more. It’s a problem. It’s a problem in dire need of a solution.

This rant was, for once, not brought on by a particle physicist, but by someone who works in quantum computing. Someone who complained to me that scientists are overselling the potential of their research, especially when it comes to large investments. Someone distraught, frustrated, disillusioned, and most of all, unsure what to do.

I understand that many of you cannot break the ranks without putting your jobs at risk. I do not – and will not – expect you to sacrifice a career you worked hard for; no one would be helped by this. But I want to remind you that you didn’t become a scientist just to shut up and advocate.

Wednesday, January 30, 2019

Just because it’s falsifiable doesn’t mean it’s good science.

Flying carrot. 
Title says it all, really, but it’s such a common misunderstanding I want to expand on this for a bit.

A major reason we see so many wrong predictions in the foundations of physics – and see those make headlines – is that both scientists and science writers take falsifiability to be a sufficient criterion for good science.

Now, a scientific prediction must be falsifiable, all right. But falsifiability alone is not sufficient to make a prediction scientific. (And, no, Popper never said so.) Example: Tomorrow it will rain carrots. Totally falsifiable. Totally not scientific.

Why is it not scientific? Well, because it doesn’t live up to the current quality standard in olericulture, that is, the study of vegetables. According to the standard model of root crops, carrots don’t grow on clouds.

What do we learn from this? (Besides that the study of vegetables is called “olericulture,” who knew.) We learn that to judge a prediction you must know why scientists think it’s a good prediction.

Why does it matter?

The other day I got an email from a science writer asking me to clarify a statement he had gotten from another physicist. That other physicist had explained that a next larger particle collider, if built, would be able to falsify the predictions of certain dark matter models.

That is correct of course. A next larger collider would be able to falsify a huge amount of predictions. Indeed, if you count precisely, it would falsify infinitely many predictions. That’s more than even particle physicists can write papers about.

You may think that’s a truly remarkable achievement. But the question you should ask is: What reason did the physicist have to think that any of those predictions are good predictions? And when it comes to the discovery of dark matter with particle colliders, the answer currently is: There is no reason.

I cannot stress this often enough. There is not currently any reason to think a larger particle collider would produce fundamentally new particles or see any other new effects. There are loads of predictions, but none of those have good motivations. They are little better than carrot rain.

People not familiar with particle physics tend to be baffled by this, and I do not blame them. You would expect if scientists make predictions they have reasons to think it’ll actually happen. But that’s not the case in theory-development for physics beyond the standard model. To illustrate this, let me tell you how these predictions for new particles come into being.

The standard model of particle physics is an extremely precisely tested theory. You cannot just add particles to it as you want, because doing so quickly gets you into conflict with experiment. Neither, for that matter, can you just change something about the existing particles by, e.g., postulating that they are made up of smaller particles. Yes, particle physics is complicated.

There are however a few common techniques you can use to amend the standard model so that the deviations from it are not in the regime that we have measured yet. The most common way to do this is to make the new particles heavy (so that it takes a lot of energy to create them) or very weakly interacting (so that you produce them very rarely). The former is more common in particle physics, the latter more common in astrophysics.
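
To see how the hiding works, consider a dimensional-analysis toy, not any specific model: for a 2 → 2 process mediated by a new particle of mass M with coupling g, the cross section at energies E far below M scales like g^4 E^2 / M^4. Push up M and the predicted signal quickly drops below any given experimental bound.

    def toy_cross_section(g, E, M):
        # Dimensional analysis in natural units, arbitrary overall units.
        return g**4 * E**2 / M**4

    E = 1.0
    for M in (10.0, 20.0, 40.0):
        print(f"M = {M:4.0f}: sigma ~ {toy_cross_section(1.0, E, M):.1e}")
    # Each doubling of M suppresses the rate by a factor of 16.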

There are of course a lot of other quality criteria that you need to fulfil. You need to formulate your theory in the currently used mathematical language, that is, that of quantum field theory. You must demonstrate that your new theory is not in conflict with experiment already. You must make sure that your theory has no internal contradictions. Most importantly though, you must have a motivation for why your extension of the standard model is interesting.

You need this motivation because any such theory-extension is strictly speaking unnecessary. You do not need it to explain existing data. No, you do not need it to explain the observations normally attributed to dark matter either. Because to explain those you only need to assume an unspecified “fluid” and it doesn’t matter what that fluid is made of. To explain the existing data, all you need is the standard model of particle physics and the concordance model of cosmology.

The major motivation for new particles at higher energies, therefore, has for the past 20 years been an idea called “naturalness”. The standard model of particle physics is not “natural”. If you add more particles to it, you can make it “natural” again. Problem is that now the data say that the standard model is just not natural, period. So that motivation just evaporated. With that motivation gone, particle physicists don’t know what to do. Hence all the talk about confusion and crisis and so on.

Of course physicists who come up with new models will always claim that they have a good motivation, and it can be hard to follow their explanations. But it never hurts to ask. So please do ask. And don’t take “it’s falsifiable” as an answer.

There is more to be said about what it means for a theory to be “falsifiable” and how necessary that criterion really is, but that’s a different story and shall be told another time. Thanks for listening.



[I explain all this business with naturalness and inventing new particles that never show up in my book. I know you are sick of me mentioning this, but the reason I keep pointing it out is that I spent a lot of time making the statements in my book as useful and accurate as possible. I cannot make this effort with all my blogposts. So really I think you are better off reading the book.]

Wednesday, April 04, 2018

Particle Physicists begin to invent reasons to build next larger Particle Collider

Collider quilt. By Kate Findlay.
[Image: Symmetry Magazine]
Nigel Lockyer, the director of Fermilab, recently spoke to the BBC about the benefits of building a next larger particle collider, one that reaches energies higher than the Large Hadron Collider (LHC).

Such a new collider could measure more precisely the properties of the Higgs-boson. But that’s not all, at least according to Lockyer. He claims he knows there is something new to discover too:
“Everybody believes there’s something there, but what we’re now starting to question is the scale of the new physics. At what energy does this new physics show up,” said Dr Lockyer. “From a simple calculation of the Higgs’ mass, there has to be new science. We just can’t give up on everything we know as an excuse for where we are now.”
First, let me note that “everybody believes” is an argument ad populum. It isn’t only non-scientific, it is also wrong because I don’t believe it, qed. But more importantly, the argument for why there has to be new science is wrong.

To begin with, we can’t calculate the Higgs mass; it’s a free parameter that is determined by measurement. The same goes for the masses of all other elementary particles. But that’s a matter of imprecise phrasing, and I only bring it up because I’m an ass.

The argument Lockyer is referring to rests on calculations of quantum corrections to the Higgs mass. I.e., he is making the good, old argument from naturalness.

If that argument were right, we should have seen supersymmetric particles already. We didn’t. That’s why Giudice, head of the CERN theory division, has recently rung in the post-naturalness era. Even New Scientist took note of that. But maybe the news hasn’t yet arrived in the USA.

Naturalness arguments never had a solid mathematical basis. But so far you could have gotten away with saying they are handy guides for theory development. Now, however, seeing that these guides were bad guides in that their predictions turned out incorrect, using arguments from naturalness is no longer scientifically justified. If it ever was. This means we have no reason to expect new science, not in the not-yet analyzed LHC data and not at a next larger collider.

Of course there could be something new. I am all in favor of building a larger collider and just see what happens. But please let’s stick to the facts: There is no reason to think a new discovery is around the corner.

I don’t think Lockyer deliberately lied to the BBC. He’s an experimentalist and probably actually believes what the theorists tell him. He has every reason to want to believe it. But really he should know better.

Much more worrisome than Lockyer’s false claim is that literally no one from the community tried to correct it. Heck, it’s like the head of NASA just told the BBC we know there’s life on Mars! If that happened, astrophysicists would collectively vomit on social media. But particle physicists? They all keep their mouths shut if one of theirs spreads falsehoods. And you wonder why I say you can’t trust them?

Meanwhile Gordon Kane, a US-Particle physicist known for his unswerving support of supersymmetry, has made an interesting move: he has discarded naturalness arguments altogether.

You find this in a paper which appeared on the arXiv today. It seems to be a promotional piece that Kane wrote together with Stephen Hawking some months ago to advocate the Chinese Super Proton Proton Collider (SPPC).

Kane has claimed for 15 years or so that the LHC would have to see supersymmetric particles because of naturalness. Now that this didn’t work out, he has come up with a new reason for why a next larger collider should see something:
“Some people have said that the absence of superpartners or other phenomena at LHC so far makes discovery of superpartners unlikely. But history suggests otherwise. Once the [bottom] quark was found, in 1979, people argued that “naturally” the top quark would only be a few times heavier. In fact the top quark did exist, but was forty-one times heavier than the [bottom] quark, and was only found nearly twenty years later. If superpartners were forty-one times heavier than Z-bosons they would be too heavy to detect at LHC and its upgrades, but could be detected at SPPC.”
Indeed, nothing forbids superpartners from being forty-one times heavier than Z-bosons. Neither is there anything that forbids them from being four thousand times heavier, or four billion times heavier. Indeed, they don’t even have to be there at all. Isn’t it beautiful?

Leaving aside that just because we can’t calculate the masses doesn’t mean they have to be near the discovery-threshold, the historical analogy doesn’t work for several reasons.

Most importantly, quarks come in pairs that are SU(2) doublets. This means once you have the bottom quark, you know it needs to have a partner. If there were none, you’d have to give up the symmetry of the standard model that was established with the lighter quarks. Supersymmetry, in contrast, has no evidence among the already known particles speaking in its favor.

Physicists had also known since the early 1970s that the weak nuclear force violates CP invariance, which requires (at least) three generations of quarks. Because of this, the existence of both the bottom and the top quark was already predicted in 1973.

Finally, for anomaly cancellation to work you need as many leptons as quarks, and the tau and tau-neutrino (third generation of leptons) had been measured already in 1975 and 1977, respectively. (We also know the top quark mass can’t be too far away from the bottom quark mass, and the Higgs mass has to be close to the top quark mass, but this calculation wasn’t available in the 1970s.)
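
If you want to check the anomaly argument yourself, here is a schematic version. I am only counting electric charges, which captures one of several anomaly conditions: the charges of all fermions in one generation must sum to zero, counting quark colors.

    from fractions import Fraction as F

    N_COLORS = 3
    third_generation = {
        "top":          N_COLORS * F(2, 3),
        "bottom":       N_COLORS * F(-1, 3),
        "tau":          F(-1),
        "tau neutrino": F(0),
    }
    charge_sum = sum(third_generation.values())
    print("charge sum with top:   ", charge_sum)                            # 0
    print("charge sum without top:", charge_sum - third_generation["top"])  # -2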

In brief, this means that if the top quark had not been found, the whole standard model wouldn’t have worked. The standard model, however, works just fine without supersymmetric particles.

Of course Gordon Kane knows all this. But desperate times call for desperate measures I guess.

In the Kane-Hawking pamphlet we also read:
“In addition, a supersymmetric theory has the remarkable property that it can relate physics at our scale, where colliders take data, with the Planck scale, the natural scale for a fundamental physics theory, which may help in the efforts to find a deeper underlying theory.”
I don’t disagree with this. But it’s a funny statement because for 30 years or so we have been told that supersymmetry has the virtue of removing the sensitivity to Planck scale effects. So, actually the absence of naturalness holds much more promise to make that connection to higher energy. In other words, I say, the way out is through.

I wish I could say I’m surprised to see such wrong claims boldly being made in public. But then I only just wrote two weeks ago that the lobbying campaign is likely to start soon. And, lo and behold, here we go.


In my book “Lost in Math” I analyze how particle physicists got into this mess and also offer some suggestions for how to move on.

Wednesday, January 03, 2018

Sometimes I believe in string theory. Then I wake up.

They talk about me.
Grumpy Rainbow Unicorn.
[Image Source.]

And I can’t blame them. Because nothing else is happening on this planet. There’s just me and my attempt to convince physicists that beauty isn’t truth.

Yes, I know it’s not much of an insight that pretty ideas aren’t always correct. That’s why I objected when my editor suggested I title my book “Why Beauty isn’t Truth.” Because, duh, it’s been said before and if I wanted to be stale I could have written about how we’re all made of stardust, aah-choir, chimes, fade and cut.

Nature has no obligation to be pretty, that much is sure. But the truth seems hard to swallow. “Certainly she doesn’t mean that,” they say. Or “She doesn’t know what she’s doing.” Then they explain things to me. Because surely I didn’t mean to say that much of what goes on in the foundations of physics these days is a waste of time, did I? And even if I did, could I please not say so publicly, because some people have to earn a living from it.

They are “good friends,” you see? Good friends who want me to believe what they believe. Because believing has bettered their lives.

And certainly I can be fixed! It’s just that I haven’t yet seen the elegance of string theory and supersymmetry. Don’t I know that elegance is a sign of all successful theories? It must be that I haven’t understood how beauty has been such a great guide for physicists in the past. Think of Einstein and Dirac and, erm, there must have been others, right? Or maybe it’s that I haven’t yet grasped that pretty, natural theories are so much better. Except possibly for the cosmological constant, which isn’t pretty. And the Higgs-mass. And, oh yeah, the axion. Almost forgot about that, sorry.

But it’s not that I don’t think unified symmetry is a beautiful idea. It’s a shame, really, that we have these three different symmetries in particle physics. It would be so much nicer if we could merge them into one large symmetry. Too bad that the first theories of unification led to the prediction of proton decay and were ruled out. But there are a lot of other beautiful unification ideas left to work on. Not all is lost!

And it’s not that I don’t think supersymmetry is elegant. It combines two different types of particles and how cool is that? It has candidates for dark matter. It alleviates the problem with the cosmological constant. And it aids gauge coupling unification. Or at least it did until LHC data interfered with our plans to prettify the laws of nature. Dang.

And it’s not that I don’t see why string theory is appealing. I once set out to become a string theorist. I do not kid you. I ate my way through textbooks and it was all totally amazing how much you get out of the rather simple idea that particles shouldn’t be points but strings. Look how much of the theory’s construction is dictated by consistency alone. And note how neatly it fits with all that we already know.

But then I got distracted by a disturbing question: Do we actually have evidence that elegance is a good guide to the laws of nature?

The brief answer is no, we have no evidence. The long answer is in my book and, yes, I will mention the-damned-book until everyone is sick of it. The summary is: Beautiful ideas sometimes work, sometimes they don’t. It’s just that many physicists prefer to recall the beautiful ideas which did work.

And not only is there no historical evidence that beauty and elegance are good guides to find correct theories, there isn’t even a theory for why that should be so. There’s no reason to think that our sense of beauty has any relevance for discovering new fundamental laws of nature.

Sure, if you ask those who believe in string theory and supersymmetry and in grand unification, they will say that of course they know there is no reason to believe a beautiful theory is more likely to be correct. They still work on them anyway. Because what better could they do with their lives? Or with their grants, respectively. And if you work on it, you better believe in it.

I concede that not all math is equally beautiful and not all math is equally elegant. I have yet to find anyone, for example, who thinks Loop Quantum Gravity is more beautiful than string theory. And isn’t it interesting that we share this sense of what is and isn’t beautiful? Shouldn’t it mean something that so many theoretical physicists agree beautiful math is better? Shouldn’t it mean something that so many people believe in the existence of an omniscient god?

But science isn’t about belief, it’s about facts, so here are the facts: This trust in beauty as a guide, it’s not working. There’s no evidence for grand unification. There’s no evidence for supersymmetry, no evidence for axions, no evidence for moduli, for WIMPs, or for dozens of other particles that were invented to prettify theories which work just fine without them. After decades of search, there’s no evidence for any of these.

It’s not working. I know it hurts. But now please wake up.

Let me assure you I usually mean what I say and know what I do. Could I be wrong? Of course. Maybe tomorrow we’ll discover supersymmetry. Not all is lost.

Wednesday, December 06, 2017

The cosmological constant is not the worst prediction ever. It’s not even a prediction.

Think fake news and echo chambers are a problem only in political discourse? Think again. You find many examples of myths and falsehoods on popular science pages. Most of them surround the hype of the day, but some of them have been repeated so often they now appear in papers, seminar slides, and textbooks. And many scientists, I have noticed with alarm, actually believe them.

I can’t say much about fields outside my specialty, but it’s obvious this happens in physics. The claim that the bullet cluster rules out modified gravity, for example, is a particularly pervasive myth. Another one is that inflation solves the flatness problem, or that there is a flatness problem to begin with.

I recently found another myth to add to my list: the assertion that the cosmological constant is “the worst prediction in the history of physics.” From RealClearScience I learned the other day that this catchy but wrong statement has even made it into textbooks.

Before I go and make my case, please ask yourself: If the cosmological constant was such a bad prediction, then what theory was ruled out by it? Nothing comes to mind? That’s because there never was such a prediction.

The myth has it that if you calculate the cosmological constant using the standard model of particle physics, the contributions from vacuum fluctuations make the result 120 orders of magnitude larger than what is observed. But this is wrong on at least 5 levels:

1. The standard model of particle physics doesn’t predict the cosmological constant, never did, and never will.

The cosmological constant is a free parameter in Einstein’s theory of general relativity. This means its value must be fixed by measurement. You can calculate a contribution to this constant from the standard model vacuum fluctuations. But you cannot measure this contribution by itself. So the result of the standard model calculation doesn’t matter because it doesn’t correspond to an observable. Regardless of what it is, there is always a value for the parameter in general relativity that will make the result fit with measurement.
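
Schematically, in my own notation, the bookkeeping is simply

    \Lambda_{\mathrm{obs}} = \Lambda_{\mathrm{bare}} + \Lambda_{\mathrm{vac}}

Only the left-hand side is an observable. Whatever the standard model calculation gives for the second term, the first term can absorb it.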

(And if you still believe in naturalness arguments, buy my book.)

2. The calculation in the standard model cannot be trusted.

Many theoretical physicists think the standard model is not a fundamental theory but must be amended at high energies. If that is so, then any calculation of the contribution to the cosmological constant using the standard model is wrong anyway. If there are further particles, so heavy that we haven’t yet seen them, these will play a role for the result. And we don’t know if there are such particles.

3. It’s idiotic to quote ratios of energy densities.

The 120 orders of magnitude refers to a ratio of energy densities. But not only is the cosmological constant usually not quoted as an energy density (but as a square thereof), in no other situation do particle physicists quote energy densities. We usually speak about energies, in which case the ratio goes down to 30 orders of magnitude.

4. The 120 orders of magnitude are wrong to begin with.

The actual result from the standard model scales with the fourth power of the masses of particles, times an energy-dependent logarithm. At least that’s the best calculation I know of. You find the result in equation (515) in this (awesomely thorough) paper. If you put in the numbers, out comes a value that scales with the masses of the heaviest known particles (not with the Planck mass, as you may have been told). That’s currently 13 orders of magnitude larger than the measured value, or 52 orders larger in energy density.
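
If you want to reproduce the counting in points 3 and 4, here is the arithmetic with rough numbers. The numbers are mine, and the exact counts depend on conventions and on the logarithmic factor, so do not expect a digit-perfect match with the values quoted above.

    import math

    observed = 2.3e-3    # (measured vacuum energy density)^(1/4) in eV
    planck   = 1.22e28   # Planck energy in eV
    top      = 1.73e11   # top quark mass in eV, heaviest known particle

    for name, scale in (("Planck scale", planck), ("top quark", top)):
        n = math.log10(scale / observed)
        print(f"{name:12s}: {n:5.1f} orders in energy, {4 * n:6.1f} in energy density")
    # Planck scale:  30.7 orders in energy,  122.9 in energy density
    # top quark   :  13.9 orders in energy,   55.5 in energy density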

5. No one in their right mind ever quantifies the goodness of a prediction by taking ratios.

There’s a reason physicists usually talk about uncertainty, statistical significance, and standard deviations. That’s because these are known to be useful to quantify the match of a theory with data. If you’d bother writing down the theoretical uncertainties of the calculation for the cosmological constant, the result would be compatible with the measured value even if you’d set the additional contribution from general relativity to zero.

In summary: No prediction, no problem.

Why does it matter? Because this wrong narrative has prompted physicists to aim at the wrong target.

The real problem with the cosmological constant is not the average value of the standard model contribution but – as Niayesh Afshordi elucidated better than I ever managed to – that the vacuum fluctuations, well, fluctuate. It’s these fluctuations that you should worry about. Because these you cannot get rid of by subtracting a constant.

But of course I know the actual reason you came here is that you want to know what is “the worst prediction in the history of physics” if not the cosmological constant...

I’m not much of a historian, so don’t take my word for it, but I’d guess it’s the prediction you get for the size of the universe if you assume the universe was born by a vacuum fluctuation out of equilibrium.

In this case, you can calculate the likelihood for observing a universe like our own. But the larger and the less noisy the observed universe, the less likely it is to originate from a fluctuation. Hence, the probability that you have a fairly ordered memory of the past and a sense of a reasonably functioning reality would be exceedingly tiny in such a case. So tiny, I’m not interested enough to even put in the numbers. (Maybe ask Sean Carroll.)

I certainly wish I’d never have to see the cosmological constant myth again. I’m not yet deluded enough to believe it will go away, but at least I now have this blogpost to refer to when I encounter it the next time.

Saturday, October 28, 2017

No, you still cannot probe quantum gravity with quantum optics

Srsly?
Several people asked me to comment on a paper that is hyped by phys.org as a test of quantum gravity. I’ll make this brief.

First things first, why are you still following phys.org?

Second, the paper in question is on the arXiv and is titled “Probing noncommutative theories with quantum optical experiments.” The paper is as wrong as a very similar paper was in 2012.

It is correct that noncommutative geometry plays a role in many approaches to quantum gravity and it’s not an entirely uninteresting idea. However, the variant that the authors want to test in the paper is not of the commonly discussed type. They want the effect to be relevant for the center-of-mass coordinates, so that it scales with the total mass. That assumption has no support from any approach to quantum gravity. It’s made-up. It is also mathematically highly problematic.

Third, I already spelled out in my review several years ago that this is bogus (see section 4.6) and doesn’t follow from anything. Though the academically correct phrase I used there is “should be regarded with caution.”

Fourth, note that the paper appeared on the arXiv two weeks after being accepted for publication. The authors clearly were not keen on any comment by any blogger before they had made it through peer review.

Fifth, let me mention that one of the authors of the paper, Mir Faizal, is not unknown to readers of this blog. We last heard of him when he claimed that Loop Quantum Gravity violates the Holographic Principle (it doesn’t). Before that, he claimed that the LHC will make contact with parallel universes (it won’t) and that black holes don’t exist (they do).

I rest my case.

And don’t forget to unfollow phys.org.

Thursday, July 13, 2017

Nature magazine publishes comment on quantum gravity phenomenology, demonstrates failure of editorial oversight

I have a headache and
blame Nature magazine for it.
For about 15 years, I have worked on quantum gravity phenomenology, which means I study ways to experimentally test the quantum properties of space and time. Since 2007, my research area has its own conference series, “Experimental Search for Quantum Gravity,” which took place most recently September 2016 in Frankfurt, Germany.

Extrapolating from the people I personally know, I estimate that about 150-200 people currently work in this field. But I have never seen nor heard anything of Chiara Marletto and Vlatko Vedral, who just wrote a comment for Nature magazine complaining that the research area doesn’t exist.

In their comment, titled “Witness gravity’s quantum side in the lab,” Marletto and Vedral call for “a focused meeting bringing together the quantum- and gravity-physics communities, as well as theorists and experimentalists.” Nice.

If they think such meetings are a good idea, I recommend they attend them. There’s no shortage. The above-mentioned conference series is only the most regular meeting on quantum gravity phenomenology. Also the Marcel Grossmann Meeting has sessions on the topic. Indeed, I am writing this from a conference here in Trieste, which is about “Probing the spacetime fabric: from concepts to phenomenology.”

Marletto and Vedral point out that it would be great if one could measure gravitational fields in quantum superpositions to demonstrate that gravity is quantized. They go on to lay out their own idea for such experiments, but their interest in the topic apparently didn’t go far enough to either look up the literature or actually put in the numbers.

Yes, it would be great if we could measure the gravitational field of an object in a superposition of, say, two different locations. Problem is, heavy objects – whose gravitational fields are easy to measure – decohere quickly and don’t have quantum properties. On the other hand, objects which are easy to bring into quantum superpositions are too light to measure their gravitational field.

To be clear, the challenge here is to measure the gravitational field created by the objects themselves. It is comparably easy to measure the behavior of quantum objects in the gravitational field of the Earth. That has something to do with quantum and something to do with gravity, but nothing to do with quantum gravity because the gravitational field isn’t quantized.

In their comment, Marletto and Vedral go on to propose an experiment:
“Likewise, one could envisage an experiment that uses two quantum masses. These would need to be massive enough to be detectable, perhaps nanomechanical oscillators or Bose–Einstein condensates (ultracold matter that behaves as a single super-atom with quantum properties). The first mass is set in a superposition of two locations and, through gravitational interaction, generates Schrödinger-cat states on the gravitational field. The second mass (the quantum probe) then witnesses the ‘gravitational cat states’ brought about by the first.”
This is truly remarkable, but not because it’s such a great idea. It’s because Marletto and Vedral believe they’re the first to think about this. Of course they are not.

The idea of using Schrödinger-cat states has most recently been discussed here. I didn’t write about the paper on this blog because the experimental realization faces giant challenges and I think it won’t work. There is also Anastopoulos and Hu’s CQG paper about “Probing a Gravitational Cat State” and a follow-up paper by Derakhshani, which likewise go unmentioned. I’d really like to know how Marletto and Vedral think they can improve on the previous proposals. Letting a graphic designer make a nice illustration to accompany their comment doesn’t really count for much in my book.

The currently most promising attempt to probe quantum gravity indeed uses nanomechanical oscillators and comes from the group of Markus Aspelmeyer in Vienna. I previously discussed their work here. This group is about six orders of magnitude away from being able to measure such superpositions. The Nature comment doesn’t mention it either.

The prospects of using Bose-Einstein condensates to probe quantum gravity have been discussed back and forth for two decades, but it is clear that this isn’t presently the best option. The reason is simple: Even if you take the largest condensate that has been created to date – something like 10 million atoms – and you calculate the total mass, you are still way below the mass of the nanomechanical oscillators. And that’s leaving aside the difficulty of creating and sustaining the condensate.
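
Putting in rough numbers, with my own assumptions of a rubidium-87 condensate and a picogram-scale oscillator (actual devices vary by orders of magnitude):

    m_atom  = 87 * 1.66e-27   # mass of one rubidium-87 atom in kg
    n_atoms = 1e7             # about the largest condensates to date
    m_bec   = n_atoms * m_atom
    m_osc   = 1e-15           # kg, a picogram-scale nanomechanical device

    print(f"condensate mass: {m_bec:.1e} kg")   # ~1.4e-18 kg
    print(f"oscillator mass: {m_osc:.1e} kg")   # ~3 orders of magnitude heavier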

There are some other possible gravitational effects for Bose-Einstein condensates that have been investigated, but these come from violations of the equivalence principle, or rather from the ambiguity of what the equivalence principle even means in quantum mechanics. That’s a different story though, because it’s not about measuring quantum superpositions of the gravitational field.

Besides this, there are other research directions. Paternostro and collaborators, for example, have suggested that a quantized gravitational field can exchange entanglement between objects in a way that a classical field can’t. That too, however, is a measurement which is not presently technologically feasible. A proposal closer to experimental test is that by Belenchia et al, laid out in their PRL about “Tests of Quantum Gravity induced non-locality via opto-mechanical quantum oscillators” (which I wrote about here).

Others look for evidence of quantum gravity in the CMB or in gravitational waves, or search for violations of the symmetries that underlie General Relativity. You can find a little summary in my blogpost “How Can we test Quantum Gravity” or in my Nautilus essay “What Quantum Gravity Needs Is More Experiments.”

Do Marletto and Vedral mention any of this research on quantum gravity phenomenology? No.

So, let’s take stock. Here, we have two scientists who don’t know anything about the topic they write about and who ignore the existing literature. They faintly reinvent an old idea without being aware of the well-known difficulties, without quantifying the prospects of ever measuring it, and without giving proper credits to those who previously wrote about it. And they get published in one of the most prominent scientific journals in existence.

Wow. This takes us to a whole new level of editorial incompetence.

The worst part isn’t even that Nature magazine claims my research area doesn’t exist. No, it’s that I’m a regular reader of the magazine – or at least have been so far – and rely on its editors to keep me informed about what happens in other disciplines. For example through the comment pieces. And let us be clear that these are, for all I know, invited comments, not selections from among unsolicited submissions. So some editor deliberately chose these authors.

Now, in this rare case where I can judge the content’s quality, I find that the Nature editors picked two people who have no idea what’s going on, who chew up 30-year-old ideas, and who omit relevant citations of timely contributions.

Thus, for me the worst part is that I will henceforth have to suspect that Nature’s coverage of other research areas is just as miserable as this.

Really, doing as much as Googling “Quantum Gravity Phenomenology” is more informative than this Nature comment.

Friday, June 30, 2017

To understand the foundations of physics, study numerology

Numbers speak. [Img Src]
Once upon a time, we had problems in the foundations of physics. Then we solved them. That was 40 years ago. Today we spend most of our time discussing non-problems.

Here is one of these non-problems. Did you know that the universe is spatially almost flat? There is a number in the cosmological concordance model called the “curvature parameter” that, according to current observation, has a value of 0.000 plus-minus 0.005.

Why is that a problem? I don’t know. But here is the story that cosmologists tell.

From the equations of General Relativity you can calculate the dynamics of the universe. This means you get relations between the values of observable quantities today and the values they must have had in the early universe.

The contribution of curvature to the dynamics, it turns out, increases relative to that of matter and radiation as the universe expands. This means that for the curvature parameter to be smaller than 0.005 today, it must have been smaller than 10^-60 or so briefly after the Big Bang.
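For the record, the scaling behind these numbers follows in one line from the Friedmann equation – a standard textbook step, spelled out here for completeness:

$$H^2 = \frac{8\pi G}{3}\,(\rho_m + \rho_r) - \frac{kc^2}{a^2}\,, \qquad \Omega_k \equiv -\frac{kc^2}{a^2 H^2}\,.$$

Matter dilutes as $\rho_m \propto a^{-3}$ and radiation as $\rho_r \propto a^{-4}$, while the curvature term falls only as $a^{-2}$, so

$$\frac{\Omega_k}{\Omega_m} \propto a\,, \qquad \frac{\Omega_k}{\Omega_r} \propto a^2\,.$$

Run the expansion backwards over the many orders of magnitude in $a$ between the Big Bang and today, and today’s bound of 0.005 gets compressed down to the quoted 10^-60 or so.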

That, so the story goes, is bad, because where would you get such a small number from?

Well, let me ask in return, where do we get any number from anyway? Why is 10^-60 any worse than, say, 1.778, or exp(67π)?

That the curvature must have had a small value in the early universe is called the “flatness problem,” and since it’s on Wikipedia it’s officially more real than me. And it’s an important problem. It’s important because it justifies the many attempts to solve it.

The presently most popular solution to the flatness problem is inflation – a rapid period of expansion briefly after the Big Bang. Because inflation decreases the relevance of curvature contributions dramatically – by something like 200 orders of magnitude or so – you no longer have to start with some tiny value. Instead, if you start with any curvature parameter smaller than 10^197, the value today will be compatible with observation.
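If you wonder where the 200 orders of magnitude come from, here is a minimal sketch, assuming a quasi-constant Hubble rate during inflation: the scale factor grows as $a \propto e^{N}$ with the number of e-folds $N$, so

$$\Omega_k \propto \frac{1}{(aH)^2} \propto e^{-2N}\,,$$

and since $e^{-2N} = 10^{-2N/\ln 10} \approx 10^{-0.87\,N}$, a suppression by 200 orders of magnitude corresponds to roughly $N \approx 230$ e-folds.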

Ah, you might say, but clearly there are more numbers smaller than 10^197 than there are numbers smaller than 10^-60, so isn’t that an improvement?

Unfortunately, no. There are infinitely many numbers in both cases. Besides that, it’s totally irrelevant. Whatever the curvature parameter, the probability of getting that specific number is zero for any continuous probability distribution – regardless of the value. So the argument is bunk. Logical mush. Plainly wrong. Why do I keep hearing it?

Worse, if you picked parameters for our theories according to a uniform probability distribution on the real axis, then all parameters would come out infinitely large with probability one. Sucks. Also, doesn’t describe observations*.
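In case it isn’t obvious why that happens, here is the one-line argument (my gloss; standard measure theory): a uniform density $c$ on the real axis cannot be normalized, because

$$\int_{-\infty}^{\infty} c\,\mathrm{d}x = \infty \quad \text{for every } c > 0\,,$$

and in the improper-prior limit every finite interval $[-M, M]$ carries zero probability relative to its complement – which is the precise sense in which “all parameters come out infinitely large.”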

And there is another problem with that argument, namely, what probability distribution are we even talking about? Where did it come from? Certainly not from General Relativity because a theory can’t predict a distribution on its own theory space. More logical mush.

If you have trouble seeing the trouble, let me ask the question differently. Suppose we managed to measure the curvature parameter today to a precision of 60 digits after the point. Yeah, it’s not going to happen, but bear with me. Now you’d have to explain all these 60 digits – but that is as fine-tuned as a zero followed by 60 zeroes would have been!

Here is a different example of this idiocy. High energy physicists think it’s a problem that the mass of the Higgs is about 17 orders of magnitude smaller than the Planck mass, because that means you’d need two constants to cancel each other for 17 digits. That’s supposedly unlikely, but please don’t ask anyone according to which probability distribution it’s unlikely. Because they can’t answer that question. Indeed, depending on character, they’ll either walk off or talk down to you. Guess how I know.

Now consider for a moment that the mass of the Higgs were actually about as large as the Planck mass. To be precise, let’s say it’s 1.1370982612166126 times the Planck mass. Now you’d again have to explain how you get exactly those 16 digits. But that is, according to current lore, not a finetuning problem. So, erm, what was the problem again?

The cosmological constant problem is another such confusion. If you don’t know how to calculate that constant – and we don’t, because we don’t have a theory for Planck scale physics – then it’s a free parameter. You go and measure it and that’s all there is to say about it.

And there are more numerological arguments in the foundations of physics, all of which are wrong, wrong, wrong for the same reasons. The unification of the gauge couplings. The so-called WIMP-miracle (RIP). The strong CP problem. All these are numerical coincidences that supposedly need an explanation. But you can’t speak about coincidence without quantifying a probability!

Do my colleagues deliberately lie when they claim these coincidences are problems, or do they actually believe what they say? I’m not sure what’s worse, but suspect most of them actually believe it.

Many of my readers like to jump to conclusions about my opinions. But you are not one of them. You and I, therefore, both know that I did not say inflation is bunk. Rather, I said the most common arguments for inflation are bunk. There are good arguments for inflation, but that’s a different story and shall be told another time.

And since you are among the few who actually read what I wrote, you also understand I didn’t say the cosmological constant is not a problem. I just said its value isn’t the problem. What actually needs an explanation is why it doesn’t fluctuate. Which is what vacuum fluctuations should do, and what gives rise to what Niayesh called the cosmological non-constant problem.

Enlightened as you are, you would also never think I said we shouldn’t try to explain the value of some parameter. It is always good to look for better explanations for the assumptions underlying current theories – where by “better” I mean either simpler or able to explain more.

No, what draws my ire is that most of the explanations my colleagues put forward aren’t any better than just fixing a parameter through measurement – they are worse. The reason is that the problem they are trying to solve – the smallness of some numbers – isn’t a problem. It’s merely a property they perceive as inelegant.

I therefore have a lot of sympathy for philosopher Tim Maudlin who recently complained that “attention to conceptual clarity (as opposed to calculational technique) is not part of the physics curriculum” which results in inevitable confusion – not to mention waste of time.

In response, a pseudonymous commenter remarked that a discussion between a physicist and a philosopher of physics is “like a debate between an experienced car mechanic and someone who has read (or perhaps skimmed) a book about cars.”

Trouble is, in the foundations of physics today most of the car mechanics are repairing cars that run just fine – and then bill you for it.

I am not opposed to using aesthetic arguments as research motivations. We all have to get our inspiration from somewhere. But I do think it’s bad science to pretend numerological arguments are anything more than appeals to beauty. That very small or very large numbers require an explanation is a belief – and it’s a belief that has been adopted by the vast majority of the community. That shouldn’t happen in any scientific discipline.

As a consequence, high energy physics and cosmology are now populated with people who don’t understand that finetuning arguments have no logical basis. The flatness “problem” is preached in textbooks. The naturalness “problem” is all over the literature. The cosmological constant “problem” is on every popular science page. And so the myths live on.

If you break down the numbers, it’s me against ten-thousand of the most intelligent people on the planet. Am I crazy? I surely am.


*Though that’s exactly what happens with bare values.

Wednesday, April 26, 2017

Not all publicity is good publicity, not even in science.

“Any publicity is good publicity” is a reaction I frequently get to my complaints about flaky science coverage. I find this attitude disturbing, especially when it comes from scientists.

[img src: gamedesigndojo.com]


To begin with, it’s an idiotic stance towards journalism in general – basically a permission for journalists to write nonsense. Just imagine having the same attitude towards articles on any other topic, say, immigration: Simply shrug off whether the news accurately reports survey results or even correctly uses the word “immigrant.” In that case I hope we agree that not all publicity is good publicity, neither in terms of information transfer nor in terms of public engagement.

Besides, as United Airlines and Pepsi recently served to illustrate, sometimes all you want is that they stop talking about you.

But, you may say, science is different. Scientists have little to lose and much to win from an increased interest in their research.

Well, if you think so, you either haven’t had much experience with science communication or you haven’t paid attention. Thanks to this blog, I have a lot of first-hand experience with public engagement due to science writers’ diarrhea. And most of what I witness isn’t beneficial for science at all.

The most serious problem is the awakening after overhype. It’s when people start asking “Whatever happened to this?” Why are we still paying string theorists? Weren’t we supposed to have a theory of quantum gravity by 2015? Why do physicists still not know what dark matter is made of? Why can I still not have a meaningful conversation with my phone, where is my quantum computer, and whatever happened to negative mass particles?

That’s a predictable and widespread backlash from disappointed hope. Once excitement fades, the consequence is a strong headwind of public ridicule and reduced trust. And that’s for good reasons, because people were, in fact, fooled. In IT development, this goes under the (branded but catchy) name Hype Cycle.

[Hype Cycle. Image: Wikipedia]

There isn’t much data on it, but academic research plausibly goes through the same “trough of disillusionment” when it falls short of expectations. The more hype, the more hangover when promises don’t pan out, which is why, eg, string theory today takes most of the fire while loop quantum gravity – though in many regards even more of a disappointment – flies mostly under the radar. In the valley of disappointment, researchers are then haunted both by dwindling financial support and by their colleagues’ snark. (If you think that’s not happening, wait for it.)

This overhype backlash, it’s important to emphasize, isn’t a problem journalists worry about. They’ll just drop the topic and move on to the next. We, in science, are the ones who pay for the myth that any publicity is good publicity.

In the long run the consequences are even worse. Too many never-heard-of-again breakthroughs leave even the interested layman with the impression that scientists can no longer be taken seriously. Add to this a lack of knowledge about where to find quality information, and inevitably some fraction of the public will conclude that scientific results can’t be trusted, period.

If you have a hard time believing what I say, all you have to do is read comments people leave on such misleading science articles. They almost all fall into two categories. It’s either “this is a crappy piece of science writing” or “mainstream scientists are incompetent impostors.” In both cases the commenters doubt the research in question is as valuable as it was presented.

If you can stomach it, check the I-Fucking-Love-Science facebook comment section every once in a while. It's eye-opening. On recent reports from the latest LHC anomaly, for example, you find gems like “I wish I had a job that dealt with invisible particles, and then make up funny names for them! And then actually get a paycheck for something no one can see! Wow!” and “But have we created a Black Hole yet? That's what I want to know.” Black Holes at the LHC were the worst hype I can recall in my field, and it still haunts us.

Another big concern with science coverage is its impact on the scientific community. I have spoken about this many times with my colleagues, but nobody listens even though it’s not all that complicated: Our attention is influenced by what ideas we are repeatedly exposed to, and all-over-the-news topics therefore bring a high risk of streamlining our interests.

Almost everyone I ever talked to about this simply denied such influence exists because they are experts and know better and they aren’t affected by what they read. Unfortunately, many scientific studies have demonstrated that humans pay more attention to what they hear about repeatedly, and we perceive something as more important the more other people talk about it. That’s human nature.

Other studies have shown that such cognitive biases are neither correlated nor anti-correlated with intelligence. In other words, just because you’re smart doesn’t mean you’re not biased. Some techniques are known to alleviate cognitive biases, but the scientific community does not presently use these techniques. (Ample references, eg, in “Blindspot” by Banaji and Greenwald.)

I have seen this happening over and over again. My favorite example is the “OPERA anomaly” that seemed to show neutrinos could travel faster than the speed of light. The data had a high statistical significance, and yet it was pretty clear from the start that the result had to be wrong – it was in conflict with other measurements.

But the OPERA anomaly was all over the news. And of course physicists talked about it. They talked about it on the corridor, and at lunch, and in the coffee break. And they did what scientists do: They thought about it.

The more they talked about it, the more interesting it became. And they began to wonder whether there might not be something to it after all. And whether maybe one could write a paper about it because, well, we’ve been thinking about it.

Everybody I spoke to about the OPERA anomaly began their elaboration with a variant of “It’s almost certainly wrong, but...” In the end, it didn’t matter that they thought it was wrong – what mattered was merely that it had become socially acceptable to work on it. And every time the media picked it up again, fuel was added to the fire. What was the result? A lot of wasted time.

For physicists, however, sociology isn’t science, and so they don’t want to believe social dynamics is something they should pay attention to. And as long as they don’t pay attention to how media coverage affects their objectivity, publicity skews judgement and promotes a rich-get-richer trend.

Ah, then, you might argue, at least exposure will help you get tenure, because your university likes it if its employees make it into the news. Indeed, I hear the “any publicity is good” line mainly as a justification from people whose research just got hyped.

But if your university measures academic success by popularity, you should be very worried about what this does to your and your colleagues’ scientific integrity. It’s a strong incentive for sexy-yet-shallow, headline-worthy research that won’t lead anywhere in the long run. If you hunt after that incentive, you’re putting your own benefit over the collective benefit society would get from a well-working academic system. In my view, that makes you a hurdle to progress.

What, then, is the result of hype? The public loses: Trust in research. Scientists lose: Objectivity. Who wins? The news sites that place an ad next to their big headlines.

But hey, you might finally admit, it’s just so awesome to see my name printed in the news. Fine by me, if that's your reasoning. Because the more bullshit appears in the press, the more traffic my cleaning service gets. Just don’t say I didn’t warn you.

Friday, March 31, 2017

Book rant: “Universal” by Brian Cox and Jeff Forshaw

Universal: A Guide to the Cosmos
Brian Cox and Jeff Forshaw
Da Capo Press (March 28, 2017)
(UK Edition, Allen Lane (22 Sept. 2016))

I was meant to love this book.

In “Universal” Cox and Forshaw take on astrophysics and cosmology, but rather than using the well-trodden historic path, they offer do-it-yourself instructions.

The first chapters of the book start with every-day observations and simple calculations, with the help of which the reader can estimate eg the radius of the Earth and its mass, or – if you let a backyard telescope with a 300mm lens and an equatorial mount count as every-day items – the distance to other planets in the solar system.
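To give a flavor of the genre – this is my example of such an estimate, not necessarily the calculation in the book – here is the classic Eratosthenes method in Python, with his (roughly) historical numbers:

import math

# Eratosthenes' estimate of the Earth's radius: compare the noon sun angle
# at two sites on (roughly) the same meridian, a known distance apart.
distance_km = 800.0        # assumed north-south distance between the two sites
angle_diff_deg = 7.2       # assumed difference in the noon shadow angles

# The angle difference is the fraction of the full circle between the sites.
circumference_km = distance_km * 360.0 / angle_diff_deg   # 40,000 km
radius_km = circumference_km / (2.0 * math.pi)            # ~6,366 km

print(f"circumference ≈ {circumference_km:,.0f} km, radius ≈ {radius_km:,.0f} km")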

Then, the authors move on to distances beyond the solar system. With that, self-made observations understandably fade out, but are replaced with publicly available data. Cox and Forshaw continue to explain the “cosmic distance ladder,” variable stars, supernovae, redshift, solar emission spectra, Hubble’s law, the Hertzsprung-Russell diagram.

Set apart from the main text, the book has “boxes” (actually pages printed white on black) with details of the example calculations and the science behind them. The first half of the book reads quickly and fluidly and reminds me in style of school textbooks: They make an effort to illuminate the logic of scientific reasoning, with some historical asides, and concrete numbers. Along the way, Cox and Forshaw emphasize that the great power of science lies in the consistency of its explanations, and they highlight the necessity of taking into account uncertainty both in the data and in the theories.

The only thing I found wanting in the first half of the book is that they use the speed of light without explaining why it’s constant or where to get it from, even though that too could have been done with every-day items. But then maybe that’s explained in their first book (which I haven’t read).

For me, the fascinating aspect of astrophysics and cosmology is that it connects the physics of the very small scales with that of the very large scales, and allows us to extrapolate both into the distant past and future of our universe. Even though I’m familiar with the research, it still amazes me just how much information about the universe we have been able to extract from the data in the last two decades.

So, yes, I was meant to love this book. I would have been an easy catch.

Then the book continues to explain the dark matter hypothesis as a settled fact, without so much as mentioning any shortcomings of LambdaCDM, and not a single word on modified gravity. The Bullet Cluster is, once again, used as a shut-up argument – a gross misrepresentation of the actual situation, which I previously complained about here.

Inflation gets the same treatment: It’s presented as if it’s a generally accepted model, with no discussion given to the problem of under-determination, or whether inflation actually solves problems that need a solution (or solves the problems period).

To round things off, the authors close the final chapter with some words on eternal inflation and bubble universes, making a vague reference to string theory (because that’s also got something to do with multiverses you see), and then they suggest this might mean we live in a computer simulation:

“Today, the cosmologists responsible for those simulations are hampered by insufficient computing power, which means that they can only produce a small number of simulations, each with different values for a few key parameters, like the amount of dark matter and the nature of the primordial perturbations delivered at the end of inflation. But imagine that there are super-cosmologists who know the String Theory that describes the inflationary Multiverse. Imagine that they run a simulation in their mighty computers – would the simulated creatures living within one of the simulated bubble universes be able to tell that they were in a simulation of cosmic proportions?”
Wow. After all the talk about how important it is to keep track of uncertainty in scientific reasoning, this idea is thrown at the reader with little more than a sentence which mentions that, btw, “evidence for inflation” is “not yet absolutely compelling” and there is “no firm evidence for the validity of String Theory or the Multiverse.” But, hey, maybe we live in a computer simulation, how cool is that?

Worse than demonstrating slippery logic, their careless portrayal of speculative hypotheses as almost settled is dumb. Most of the readers who buy the book will have heard of modified gravity as dark matter’s competitor, and will know the controversies around inflation, string theory, and the multiverse: It’s been all over the popular science news for several years. That Cox and Forshaw don’t give space to discussing the pros and cons in a manner that at least pretends to be objective will merely convince the scientifically-minded reader that the authors can’t be trusted.

The last time I thought of Brian Cox – before receiving the review copy of this book – it was because a colleague confided to me that his wife thinks Brian is sexy. I managed to maneuver around the obviously implied question, but I’ll answer this one straight: The book is distinctly unsexy. It’s not worthy of a scientist.

I might have been meant to love the book, but I ended up disappointed about what science communication has become.

[Disclaimer: Free review copy.]