Friday, October 21, 2011

Interna

After the previous posts were somewhat heavy in content, for relaxation let me just show you some photos from a recent weekend excursion.










[Click to badastronate]

My month back at work is almost over, and we'll be commuting back to Germany in the coming days so Superdaddy can reappear in his office chair.

Monday, October 17, 2011

Super Extra Loop Quantum Gravity

In the summer, I noted a recent paper that scored remarkably low on the bullshit index:
    Towards Loop Quantum Supergravity (LQSG)
    Norbert Bodendorfer, Thomas Thiemann, Andreas Thurn

    Bullshit Index : 0.08
    Your text shows no or marginal indications of 'bullshit'-English.

But what is the paper actually about? It is an attempt to make contact between Loop Quantum Gravity (LQG) and Superstring Theory (ST). Both are approaches to the quantization of gravity, one of the big open problems in theoretical physics. LQG attacks the problem directly through a careful choice of variables and quantization procedure. String theory aims not only at quantizing gravity but also at unifying the other three interactions of the standard model, by taking as fundamental the strings that give the theory its name. If quantizing gravity and unifying the standard model interactions are actually related problems, then string theorists are wise to attack them together. Yet we don't know if they are related. In any case, it has turned out that gravity is necessarily contained in ST.

Both theories still struggle to reproduce general relativity and/or the standard model, and to make contact with phenomenology, though for very different reasons. This raises the question of how the theories compare to each other, and whether they give the same results for selected problems. Unfortunately, so far this has not been possible to answer because LQG has been developed for a 3+1 dimensional space-time, while ST famously or infamously, depending on your perspective, necessitates 6 additional dimensions that then have to be compactified. ST is also, as the name says, supersymmetric. It should be noted that both of these features, supersymmetry and extra dimensions, are not optional but mandatory for ST to make sense.

I've always wondered why nobody had extended LQG to higher dimensions, since the idea of extra dimensions is appealing, and somebody in the field who should have known better once told me it would be straightforward to do. It is however not, because one of the variables (a certain SU(2) Yang-Mills connection) used in the quantization procedure relies on a property (the equivalence of the defining and adjoint representations of the rotation group) that is fulfilled only in 3 spatial dimensions. So it took many years and two brilliant young students, Norbert Bodendorfer and Andreas Thurn, to come up with a variable that could be used in an arbitrary number of dimensions and to work through the maths, which, as you can imagine, didn't get easier. It required working around the difficulty that SO(1,D) is not compact, and digging out a technique for gauge unfixing, a procedure that I had never heard of before.
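
To see why three spatial dimensions are special, here is a quick dimension count (my own back-of-the-envelope illustration, not taken from the paper): for the rotation group SO(D) of D spatial dimensions, the defining (vector) representation has dimension D while the adjoint has dimension D(D-1)/2, and the two coincide only for D=3, which is what the usual connection variables exploit.

    # Back-of-the-envelope illustration (mine, not from the paper): compare the
    # dimension of the vector representation of SO(D) with that of its adjoint.
    # They match only in D = 3, the case the standard LQG variables rely on.
    for D in range(2, 8):
        vector_dim = D
        adjoint_dim = D * (D - 1) // 2
        note = "  <-- the two representations match only here" if vector_dim == adjoint_dim else ""
        print(f"D = {D}: vector rep dim {vector_dim}, adjoint rep dim {adjoint_dim}{note}")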

Compared to the difficulty of adding dimensions, going supersymmetric can be done by simply generalizing the appropriate matter content which is contained in the supergravity actions, and constructing a supersymmetry constraint operator.

Taken together, this in principle allows one to compare the super extra loop quantized gravity to string theory, to which supergravity is a low energy approximation, though concrete calculations have yet to follow. One of the tasks on the to-do list is the entropy of extremal supersymmetric black holes, to see if LQG reproduces the ST results. (Or if not, which might be even more interesting.) Since LQG is a manifestly non-perturbative approach, this relation to string theory might also help fill in some blanks in the AdS/CFT correspondence in areas where neither side of the duality is weakly coupled.

Friday, October 14, 2011

AdS/CFT confronts data

One of the most persistent and contagious side-effects of string theory has been the conjectured AdS/CFT correspondence (that we previously discussed here, here and here). The briefest of all brief summaries is that it is a duality that allows one to swap the strong coupling limit of a conformal field theory (CFT) for (super)gravity in a higher dimensional Anti-de Sitter (AdS) space. Since computation at strong coupling is difficult, at the very least this is a useful computational tool. It has been applied to some condensed matter systems and also to heavy ion physics, where one wants to know the properties of the quark gluon plasma. Now, the theory that one has to deal with in heavy ion collisions is QCD, which is neither supersymmetric nor conformal, but there have been some arguments for why it should be approximately okay.

The great thing about the application of AdS/CFT to heavy ion physics is that it made predictions for the LHC's heavy ion runs that are now being tested. One piece of data that is presently coming in is the distribution of jets in heavy ion collisions, but first some terminology.

A heavy ion is an atom with a high atomic number stripped of all electrons; typically one uses lead or gold. Compared to a proton, a heavy ion is a large clump of bound nucleons (neutrons and protons) that are accelerated and brought to collision. They may collide head-on or only peripherally, quantified in a number called "centrality." When the ions collide, they temporarily form a hot, dense soup of quarks and gluons called the "quark gluon plasma." This plasma rapidly expands and cools, and the quarks and gluons form hadrons again (in a process called "hadronization" or "fragmentation"), which are then detected. The temperature of the plasma depends on the energy of the colliding ions that is provided by the accelerator. At RHIC the temperature is about 350 MeV, in the LHC's heavy ion program it is about 500 MeV. The task of heavy ion physicists is to extract information about matter at nuclear densities and such high temperatures from the detected collision products.

A (di)jet consists of two back-to-back correlated showers of particles, a typical signature of perturbative QCD. It is created when a pair of outgoing partons (quarks or gluons) hadronizes and produces a bunch of particles that then hit the detector. Since QCD is confining, the primary, colored, particles never reach the detector. In contrast to proton-proton collisions, in heavy ion collisions the partons first have to go through the quark gluon plasma before they can make a jet. Thus, the distribution of momenta of the observed jets depends on the properties of the plasma, in particular the energy loss that the partons undergo.

Different models predict a different energy loss and a different dependence of that energy loss on the temperature of the medium. Jets are a QCD feature at weak coupling, and strictly speaking, in the strong coupling limit that AdS/CFT describes, there are no jets at all. What one can however do is use a hybrid model in which one just extracts the energy loss in the plasma from the conformal theory. This energy loss scales as L³T⁴, where L is the length that the partons travel through the medium and T is the temperature. All other models for the energy loss scale with smaller powers of the temperature.
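
To get a feeling for how much the power of the temperature matters when going from RHIC to the LHC, here is a small numerical comparison (a toy illustration of my own, not Renk's code; the weak-coupling alternative is taken generically as one power of T less):

    # Toy comparison (my own illustration, not from any of the cited papers):
    # growth of the parton energy loss when the plasma temperature rises from
    # the RHIC value (~350 MeV) to the LHC value (~500 MeV), at fixed path
    # length L, for the AdS/CFT scaling ~ L^3 T^4 versus a generic
    # weak-coupling scaling with a smaller power of T (here T^3 as an example).
    T_RHIC, T_LHC = 0.350, 0.500  # temperatures in GeV

    for name, power in [("AdS/CFT (T^4)", 4), ("generic (T^3)", 3)]:
        ratio = (T_LHC / T_RHIC) ** power
        print(f"{name}: energy loss grows by a factor {ratio:.2f} from RHIC to LHC")

This kind of temperature sensitivity is what makes the RHIC and LHC fits hard to reconcile, as discussed below.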

Heavy ion physicists like to encode observables in terms of how much they differ from the corresponding observables for collisions of the ion's constituents. The "nuclear suppression factor," denoted RAA, plotted in Thorsten Renk's figure below (Slide 17 of this talk), is basically the ratio of the cross-section for jets in lead-lead over the same quantity for proton-proton (normalized to the number of nucleons), and it's depicted as a function of the average transverse momentum (pT) of the jets. The black dots are the ALICE data, the solid lines are fits from various models. The orange line at the bottom is AdS/CFT.


[Picture credit: Thorsten Renk, Slide 17 of this presentation]

As the saying goes, a picture speaks a thousand words, but since links and image sources have a tendency to deteriorate over time, let me spell it out for you: The AdS/CFT scaling does not agree with the data at all.

A readjustment of parameters might move the whole curve up or down, but the slope would still be off. Another problem with the AdS/CFT model is that the model parameters needed to fit the RHIC data are very different from the ones needed for the LHC. The model that does best is Yet another Jet Energy-loss Model (YaJEM), which works with in-medium showers (I know nothing about that code). It is described in some detail in this paper. It not only fits the observed scaling well, it also does not require a large readjustment of parameters from RHIC to LHC.

Of course there are always caveats to such a conclusion. One might criticize, for example, the way that AdS/CFT has been implemented into the code. But the scaling with temperature is such a general property that I don't think nagging at the details will be of much use here. Then one may want to point out that the duality is actually only good in the large-N limit, and N=3 isn't so large after all. And that is right, so maybe one would have to take correction terms more seriously. But that would then require calculating string contributions, and one loses the computational advantage that AdS/CFT seemed to bring.

Some more details on the above figure are in Thorsten Renk's proceedings from the Quark Matter 2011, on the arxiv under 1106.2392 [hep-ph].

Summary: I predict that the application of the AdS/CFT duality to heavy ion physics is a rapidly cooling area.

Wednesday, October 12, 2011

New constraints on energy-dependent speed of light from gamma ray bursts

Two weeks ago, an arXiv preprint came out with a new analysis of the most energetic gamma ray bursts (GRBs) observed with the Fermi telescope. This paper puts forward a bound on an energy-dependent speed of light that is an improvement of 3 orders of magnitude over existing bounds. This rules out a class of models for Planck-scale effects. If you know the background, just scroll down to "News" to read what's new. If you need a summary of why this is interesting and links to earlier discussions, you'll find that in the "Avant-propos".

Avant-propos

Deviations from Lorentz-invariance are the best studied case of physics at the Planck scale. Such deviations can have two different expressions: Either an explicit breaking of Lorentz-invariance that introduces a preferred restframe, or a so-called deformation that changes Lorentz-transformations at high energies without introducing a preferred restframe.

Such new effects are parameterized by a mass scale that, if it is a quantum gravitational effect, one would expect to be close to the Planck mass. Extensions of the standard model that explicitly break Lorentz-invariance are very strongly constrained already, to 9 orders of magnitude above the Planck mass. Such constraints are derived by looking for effects on particle physics that are a consequence of higher order operators in the standard model.

Deformations of special relativity (DSR) evade that type of constraint, basically because there is no agreed-upon effective limit from which one could actually read off higher order operators and calculate such effects. It is also difficult, if not impossible, to make sense of DSR in position space without ruining locality, and these models have so-far unresolved issues with multi-particle states. So, as you can guess, there's some controversy among the theorists about whether DSR is a viable model for quantum gravitational effects. (See also this earlier post.) But those are arguments from theory, so let's have a look at the data.

Some models of DSR feature an energy-dependent speed of light. That means that photons travel with different speeds depending on their energy. This effect is very small. In the best case, it scales with the photon's energy over the Planck mass, which, even for photons in the GeV range, is a factor of 10^-19. But the total time difference between photons of different energies can add up if the photons travel over a long distance. Thus the idea is to look at photons with high energies coming to us from far away, such as those emitted from GRBs. It turns out that in this case, with distances of some Gpc and energies at some GeV, an energy-dependent speed of light can become observable.
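
For orientation, here is the order-of-magnitude estimate behind that statement (my own numbers, assuming the simplest first-order dispersion Δt ≈ (ΔE/M_Planck) × D/c and neglecting corrections from cosmological expansion):

    # Order-of-magnitude estimate (mine, not from the papers discussed here):
    # first-order energy-dependent dispersion, delay ~ (Delta E / M_Planck) * D / c,
    # neglecting corrections from the expansion of the universe.
    M_planck = 1.22e19        # Planck mass in GeV
    Gpc_in_m = 3.086e25       # one Gigaparsec in meters
    c        = 2.998e8        # speed of light in m/s

    delta_E = 1.0             # energy difference between the photons, in GeV
    D       = 1.0 * Gpc_in_m  # distance to the burst, ~1 Gpc

    delta_t = (delta_E / M_planck) * D / c
    print(f"expected time delay: {delta_t * 1e3:.1f} ms")  # roughly 10 ms, i.e. measurable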

There are two things one should add here. First, not all cases of DSR actually have an energy-dependent speed of light. Second, not in all cases does it scale the same way. That is, the case discussed above is the most optimistic one when it comes to phenomenology, the one with the most striking effect. For that reason, it's also the case that has been talked about the most.

There had previously been claims from analysis of GRB data that the scale at which the effect becomes important had been constrained up to about 100 times the Planck mass. This would have been a strong indication that the effect, if it is a quantum gravitational effect, is not there at all, ruling out a large class of DSR models. However, we discussed here why that claim was on shaky ground, and indeed it didn't make it through peer review. The presently best limit from GRBs is just about at the Planck scale.

News

Now, three researchers from Michigan Technological University have put forward a new analysis that has appeared on the arxiv:
    Limiting properties of light and the universe with high energy photons from Fermi-detected Gamma Ray Bursts

    By Robert J. Nemiroff, Justin Holmes, Ryan Connolly
    arXiv:1109.5191 [astro-ph.CO]

Previous analyses had studied the difference in arrival times between the low and high energy photons. In the new study, the authors have looked exclusively at the high energy photons, noting that the average difference in energies between photons in the GeV range is about the same as that between photons in the GeV and the MeV range, and for the delay it's only the difference that matters. Looking at the GeV range has the added benefit that there is basically no background.

For their analysis, they have selected a subsample of the total of 600 or so GRBs that Fermi has detected so far. From all these events, they have looked only at those that have numerous photons in the GeV range to begin with. In the end they consider only 4 GRBs (080916C, 090510A, 090902B, and 090926A). From the paper, it does not really become clear how these were selected, as this paper reports at least 19 events with statistically significant contributions in the GeV range. One of the authors of the paper, Robert Nemiroff, explained upon my inquiry that they selected the 4 GRBs with the best high energy data, i.e. numerous particles that have been identified as photons with high confidence.

The authors then use a new kind of statistical analysis to extract information from the spectrum, even though we know little to nothing about the emission spectrum of the GRBs. For their analysis, they study exclusively the arrival times of the high energy photons. Just by looking at Figure 2 of their paper you can see that on occasion two or three photons of different energies arrive almost simultaneously (up to some measurement uncertainty). They study two methods of extracting a bunch from the data and then quantify its reliability by testing it against a Monte Carlo simulation. If one assumes a uniform distribution and just sprinkles photons in the time interval of the burst, a bunch is very unlikely to happen by coincidence. Thus, one concludes with some certainty that this 'bunching' of photons must have been present already at the source and was maintained during propagation. An energy-dependent dispersion would tend to wash out such correlations, as it would increase the time difference between photons of different energies. Then, from the total duration of the bunch of photons and its spread in energy, one can derive constraints on the dispersion that this bunch can have undergone.
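
To illustrate the general idea of such a Monte Carlo test (a minimal sketch of my own, with made-up numbers; it is not the authors' actual procedure), one can sprinkle photons uniformly over the burst duration and ask how often a bunch as tight as the observed one occurs by chance:

    import random

    # Minimal sketch (not the authors' analysis): if n photons are distributed
    # uniformly over a burst of duration T, how often do k of them fall by
    # chance within a window as short as the observed bunch?
    def chance_of_bunch(n_photons, burst_duration, k, window, trials=100_000):
        hits = 0
        for _ in range(trials):
            times = sorted(random.uniform(0.0, burst_duration) for _ in range(n_photons))
            # tightest interval containing any k consecutive photons
            tightest = min(times[i + k - 1] - times[i] for i in range(n_photons - k + 1))
            if tightest <= window:
                hits += 1
        return hits / trials

    # Hypothetical numbers for illustration only: 15 GeV-range photons spread
    # over 30 s, three of which arrive within 0.1 s of each other.
    p = chance_of_bunch(n_photons=15, burst_duration=30.0, k=3, window=0.1)
    print(f"probability of such a bunch by pure coincidence: {p:.4f}")

If that probability is small, the bunch was most likely already present at the source, and any dispersion accumulated during propagation cannot have stretched the bunch beyond its observed duration.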

Clearly, what one would actually want to do is a Monte Carlo analysis with and without the dispersion and see which one fits the data better. Yet, one cannot do that because one doesn't know the emission spectrum of the burst. Instead, the procedure the authors use just aims at extracting a likely time variability. In that way, they can then identify in particular one very short substructure in GRB 090510A that in addition also has a large spread in energy. From this (large energy difference but small time difference) they then extract a bound on the dispersion and, assuming a first order effect, a bound on the scale of possible quantum gravitational effects that is larger than 3060 times the Planck scale. If this result holds up, this is an improvement by 3 orders of magnitude over earlier bounds!

Comments

The central question is, however, what the confidence level for this statement is. The bunching they have looked at in each GRB is a 3σ effect, i.e. it would appear coincidentally only in one out of 370 equivalent Monte Carlo trials: "Statistically significant bunchiness was declared when the detected counts... occurred in less than one in 370 equivalent Monte Carlo trials." Yet they are extracting their strong bound from one dataset (GRB) of a (not randomly chosen) subsample of all recorded data. But the probability of finding such a short bunch just by pure coincidence in one out of 20 cases is higher than the probability of finding it coincidentally in just one. Don't misunderstand me, it might very well be that the short-timed bunch in GRB 090510A has a probability of less than one in 370 to appear just coincidentally in the data we have so far, I just don't see how that follows from the analysis that is in the paper.

To see my problem, consider the case (and I am not saying this has anything to do with reality) that the GRB has a completely uniform emission in some time window and then suddenly stops. The only two parameters are the time window and the total number of photons detected. In the low energy range, we detect a lot of photons, and the probability that the variation we see happened just by chance even though the emission was uniform is basically zero. In the high energy range we detect sometimes a handful, sometimes 20 or so photons. If you assume a uniform emission, the photons we measure will, simply by coincidence, sometimes come in a bunch if you measure enough GRBs, dispersion or not. That is, the significance of one bunch in one GRB depends on the total size of the sample, which is not the same significance that the authors have referred to. (You might want to correlate the spectrum at high energies with the better statistics at low energies, but that is not what has been done in this study.)
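
To put numbers to this point (my own illustration, using the one-in-370 figure quoted in the paper and a nominal sample of 20 GRBs with good GeV statistics):

    # My own illustration of the point above, using the 1-in-370 figure quoted
    # in the paper and a nominal sample of 20 GRBs with good GeV statistics.
    p_single = 1 / 370   # chance of a coincidental bunch in one given GRB (the quoted ~3 sigma)
    n_grbs   = 20        # number of bursts one effectively searched through

    p_anywhere = 1 - (1 - p_single) ** n_grbs
    print(f"chance in one preselected GRB:  {p_single:.4f}")    # ~0.003
    print(f"chance somewhere in {n_grbs} GRBs: {p_anywhere:.4f}")  # ~0.05, considerably less impressive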

The significance that is referred to in the paper is how well their method extracts a bunch from the high energy spectrum. The significance I am asking for is a different one, namely what is the confidence by which a detected bunch does actually tell us something about the spectrum of the burst.

Summary

The new paper suggests an interesting new method to extract information about the time variability of the GRB in the GeV range, by estimating the probability that the observed bunched arrivals of photons might have occurred just by chance. That allows one to bound a possible Planck scale effect very tightly. Since I have written some papers arguing on theoretical grounds that there should be no Planck scale effect in the GRB spectra, I would be pleased to see an observational confirmation of my argument. Unfortunately, the statistical significance of this new claim is not entirely clear to me; I am not sure how to translate the significance that is referred to in the paper into a confidence level for the bound. Robert Nemiroff has shown infinite patience in explaining the reasoning to me, but I still don't understand it. Let's see what the published version of the paper says.

Wednesday, October 05, 2011

Away note

I'll be away the rest of the week for a brief trip to Jyväskylä, Finland. I'm also quite busy next week, so don't expect to hear much from me.

For your distraction, here are some things that I've come across that you might enjoy:

Sunday, October 02, 2011

FAZ: Interview with German member of OPERA collaboration

The German newspaper Frankfurter Allgemeine Zeitung (FAZ) has an interesting interview with Caren Hagner from the University of Hamburg. Hagner is a member of the OPERA collaboration and talked to the journalist Manfred Lindinger. The interview is in German and I thought most of you would probably miss it, so here are some excerpts (translation mine):
Frau Hagner, you are leader of the German group of the OPERA experiment. But one searches in vain for your name on the preprint.

I and a dozen colleagues did not sign the preprint. I have no reservations about the experiment. I just think it was premature to go public with the results for such an unusual effect as faster-than-light travel. One should have done more tests. But then the publication would have taken at least 2 months longer. I and other colleagues from the OPERA collaboration wanted these tests to be done.

What tests?

First, a second independent analysis. In particle physics, if one believes to have discovered a new particle or effect, then in general there is not only one group analyzing the data but several. And if all get the same result then one can be convinced it is right. That has not been the case with OPERA.

Why?

Because there hasn't been time. For an effect like faster-than-light travel the analysis should certainly be cross-checked. Maybe there is a bug in the program [...] The majority of the collaboration preferred a quick publication.

Hagner also says that the statistical analysis (matching the proton spectrum with that of the neutrinos) should have been redone by different techniques and that this is currently under way. She further points out that the results are only from one of two detection methods that OPERA has, the scintillation-tracker. Another detector, the spectrometer, should yield an independent measurement that could be compared to the first, but that would take about 2 months.

The final question is also worth quoting:
[If true], might satellite navigation in the future be based on neutrino rays rather than light?

Yes, maybe. But then our GPS devices would weigh some thousand tons.

Friday, September 30, 2011

Interna

Lara and Gloria are now 9 months old, and it's time again for our monthly baby update. The girls are now both crawling well. Lara has learned to sit up on her own, and Gloria knows how to pull herself up and stand on her feet. She's been doing that for 2 weeks already, but only now has she learned how to get back down in any other way than just letting go and falling backwards on her head. There's no day the babies don't get new scratches or bruises, and they are relentlessly curious. The other day they escaped from the baby-safe part of the room and happily chewed on our passports.

When they are not sleeping or crying, they are babbling most of the time. For a few days in a row they pick a favorite syllable that they then repeat endlessly. Presently, Gloria is commenting on everything with na-na-na, and Lara is practicing dadn-dadn. I've speculated she's echoing Stefan's "Was mascht Du dadn?" (What are you doing there? Saarland-style). On Monday we took them to the institute, and they were duly impressed by the guy next door drawing Feynman diagrams on the whiteboard, though they found the cables under my desk, together with the occasional woodlouse that we evidently host down there, more interesting still.

I always thought babies typically swallow or choke on everything small enough to fit into their mouth. It turns out though that the very little ones put things in their mouth but don't swallow. In fact, at this point ours still refuse to eat anything that's not smoothly mashed. They'll just push it around in their mouth for a little and then spit it out. (It's called the "gag reflex" and should vanish by 7-9 months. You better not leave your baby alone with the combustion engine anyway.)

Neither Lara nor Gloria has teeth yet. That has not deterred the Swedish health authorities from assigning us dentist appointments. It's not like they ask you to come, no, they just send a letter with a time, date, and location at which you have to appear. We actually missed the first two appointments. I then called them and tried to convey the information that the girls don't even have teeth for the dentist to look at, but to no avail. I'm picturing a long corridor with offices where Swedish doctors sit and cross out names of patients that didn't show up for their appointments, or belatedly notice the body part they wanted to examine is missing. But at least we know where our taxes are going. (The same health authorities that require amputees to prove every other year that the missing part hasn't regrown. Still better than no health insurance...)

Stefan was sent a list of gadgets the modern father needs to have, for example the full color, high-def video monitoring system that allows you to check on your babies by Skype, or a cry analyzer. But the gadget that I would really like to have is a diaper with an integrated microchip that sends a note to my BlackBerry when the diaper is full, with a number attached to it. It's somewhat degrading to have to push my nose onto baby-butts in order to examine the matter, and Stefan's nose evidently isn't up to the task. The German comedian Michael Mittermeier aptly referred to the nose-on-butt procedure as "the shit-check." Which reminds me, I should really write the report on that paper now...

Wednesday, September 28, 2011

On the universal length appearing in the theory of elementary particles - in 1938

Special relativity and quantum mechanics are characterized by two universal constants, the speed of light, c, and Planck's constant, ℏ. Yet from these constants one cannot construct a constant of dimension length (or mass, respectively, as a length can be converted into a mass by use of ℏ and c). In 1899, Max Planck pointed out that adding Newton's constant G to the universal constants c and ℏ allows one to construct units of mass, length and time. Today these are known as the Planck-time, Planck-length and Planck-mass respectively. As we have seen in this earlier post, they mark the scale at which quantum gravitational effects are expected to become important. But back in Planck's days their relevance was in their universality, since they are constructed entirely from fundamental constants.
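
For reference, here is Planck's dimensional analysis evaluated numerically (standard textbook values; the little script is mine):

    from math import sqrt

    # Planck's 1899 observation, evaluated numerically: combining c, hbar and G
    # yields natural units of length, mass and time (standard textbook values).
    c    = 2.998e8      # speed of light, m/s
    hbar = 1.055e-34    # reduced Planck constant, J s
    G    = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2

    l_planck = sqrt(hbar * G / c**3)   # ~ 1.6e-35 m
    m_planck = sqrt(hbar * c / G)      # ~ 2.2e-8 kg, about 1.2e19 GeV
    t_planck = sqrt(hbar * G / c**5)   # ~ 5.4e-44 s

    print(f"Planck length: {l_planck:.3e} m")
    print(f"Planck mass  : {m_planck:.3e} kg")
    print(f"Planck time  : {t_planck:.3e} s")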

In the early 20th century, with the advent of quantum field theory, it was widely believed that a fundamental length was necessary to cure troublesome divergences. The most commonly used regularization was a cut-off or some other dimensionful quantity to render integrals finite. It seemed natural to think of this pragmatic cut-off as having fundamental significance, despite the problems it caused with Lorentz-invariance. In 1938, Heisenberg wrote "Über die in der Theorie der Elementarteilchen auftretende universelle Länge" (On the universal length appearing in the theory of elementary particles), in which he argued that this fundamental length, which he denoted r0, should appear somewhere not too far beyond the classical electron radius (of the order of some fm).

This idea seems curious today, and has to be put into perspective. Heisenberg was very worried about the non-renormalizability of Fermi's theory of β-decay. He had previously shown that applying Fermi's theory at high center of mass energies of some hundred GeV leads to an "explosion," by which he referred to events of very high multiplicity. Heisenberg argued this would explain the observed cosmic ray showers, whose large number of secondary particles we know today are created by cascades (a possibility that was already discussed at the time of Heisenberg's writing, but not agreed upon). We also know today that what Heisenberg actually discovered is that Fermi's theory breaks down at such high energies, and the four-fermion coupling has to be replaced by the exchange of a gauge boson in the electroweak interaction. But in the 1930s neither the strong nor the electroweak force was known. Heisenberg then connected the problem of regularization with the breakdown of the perturbation expansion of Fermi's theory, and argued that the presence of the alleged explosions would prohibit the resolution of finer structures:

"Wenn die Explosionen tatsächlich existieren und die für die Konstante r0 eigentlich charakeristischen Prozesse darstellen, so vermitteln sie vielleicht ein erstes, noch unklares Verständnis der unanschaulichen Züge, die mit der Konstanten r0 verbunden sind. Diese sollten sich ja wohl zunächst darin äußern, daß die Messung einer den Wert r0 unterschreitenden Genauigkeit zu Schwierigkeiten führt... [D]ie Explosionen [würden] dafür sorgen..., daß Ortsmessungen mit einer r0 unterschreitenden Genauigkeit unmöglich sind."

("If the explosions actually exist and represent the processes characteristic for the constant r0, then they maybe convey a first, still unclear, understanding of the obscure properties connected with the constant r0. These should, one may expect, express themselves in difficulties of measurements with a precision better than r0... The explosions would have the effect... that measurements of positions are not possible to a precision better than r0.")

In hindsight we know that Heisenberg was, correctly, arguing that the theory of elementary particles known in the 1930s was incomplete. The strong interaction was missing, and Fermi's theory was indeed non-renormalizable, but not fundamental. Today we also know that the standard model of particle physics is perturbatively renormalizable, and we know techniques to deal with divergent integrals that do not necessitate cut-offs, such as dimensional regularization. But lacking that knowledge, it is understandable that Heisenberg argued gravity had no role to play in the appearance of a fundamental length:

"Der Umstand, daß [die Plancklänge] wesentlich kleiner ist als r0, gibt uns das Recht, von den durch die Gravitation bedingen unanschaulichen Zügen der Naturbeschreibung zunächst abzusehen, da sie - wenigstens in der Atomphysik - völlig untergehen in den viel gröberen unanschaulichen Zügen, die von der universellen Konstanten r0 herrühren. Es dürfte aus diesen Gründen wohl kaum möglich sein, die elektrischen und die Gravitationserscheinungen in die übrige Physik einzuordnen, bevor die mit der Länge r0 zusammenhängenden Probleme gelöst sind."

("The fact that [the Planck length] is much smaller than r0 gives us the right to leave aside the obscure properties of the description of nature due to gravity, since they - at least in atomic physics - are totally negligible relative to the much coarser obscure properties that go back to the universal constant r0. For this reason, it seems hardly possible to integrate electric and gravitational phenomena into the rest of physics until the problems connected to the length r0 are solved.")

Today, one of the big outstanding questions in theoretical physics is how to resolve the apparent disagreements between the quantum field theories of the standard model and general relativity. It is not that we cannot quantize gravity, but that the attempt to do so leads to a non-renormalizable and thus fundamentally nonsensical theory. The reason is that the coupling constant of gravity, Newton's constant, is dimensionful. This leads to the necessity to introduce an infinite number of counter-terms, eventually rendering the theory incapable of prediction.

But the same is true for Fermi's theory, which Heisenberg was so worried about that he argued for a finite resolution where the theory breaks down - and mistakenly so, since he was merely pushing an effective theory beyond its limits. So we have to ask whether we are making the same mistake as Heisenberg, in that we falsely interpret the failure of general relativity to extend beyond the Planck scale as the occurrence of a fundamentally finite resolution of structures, rather than just the limit beyond which we have to look for a new theory that will allow us to resolve smaller distances still.

If it were only the extension of classical gravity, laid out in many thought experiments (see eg. Garay 1994), that made us believe the Planck length is of fundamental importance, then the above historical lesson should caution us that we might be on the wrong track. Yet the situation today is different from the one Heisenberg faced. Rather than pushing a quantum theory beyond its limits, we are pushing a classical theory and conclude that its short-distance behavior is troublesome, which we hope to resolve by quantizing the theory. And several attempts at a UV-completion of gravity (string theory, loop quantum gravity, asymptotically safe gravity) suggest that the role of the Planck length as a minimal length carries over into the quantum regime as a dimensionful regulator, though in very different ways. This feeds our hopes that we might be working on unraveling another layer of nature's secrets, and that this time it might actually be the fundamental one.


Aside: This text is part of the introduction to an article I am working on. Is the English translation of the German extracts from Heisenberg's paper understandable? It sounds funny to me, but then Heisenberg's German is also funny for 21st century ears. Feedback would be appreciated!

Saturday, September 24, 2011

Theory Carnival: Phenomenological Quantum Gravity

[Geek Mommyprof from Academic Jungle is hosting a carnival on real people's work in theoretical or computational sciences, and what that work entails. She asked me to contribute some lines about what I do for a living, so here we go.]

I am a theoretical physicist and I work on the phenomenology of quantum gravity. Phenomenology is the part of theory that makes contact with experiment. (For more read my earlier post On the Importance of Phenomenology). Quantum gravity is the attempt to resolve our problems in formulating a common treatment for the quantum field theories of the standard model and Einstein's general relativity. Quantum gravity has for a long time been dominated by theory, and it's only been during the last decade or so that more effort has been invested into phenomenology.

I like working in this area because it offers interesting and still unexplored topics, and if there will ever be an experimentally confirmed theory of quantum gravity there's no way around phenomenology. My work requires keeping track of what the theorists are doing and what the experimentalists are planning and trying to find a way to connect both. Since gravity is a very weak interaction, finding evidence for its quantum effects is difficult to do, and so far there has been no signature. In fact, it can be quite frustrating if one puts in the numbers and finds the effect one considered is 40 orders of magnitude too small to be measurable, which is the normal state of affairs. I've joked on occasion I should write a paper "50 ways you can't measure quantum gravitational effects," just so all my estimates will finally be good for something. But there are areas, early universe and high energy densities, high energies and large distances, where it doesn't look completely hopeless.

Lacking a fully established theory of quantum gravity, phenomenology in this area requires developing a model that tests for some specific feature, be that extra dimensions, violations of Lorentz invariance, antigravitation or faster-than-light travel. Model building is like having a baby. While you work on it, you have an idea of how it will be and what you can do with it. Yet, once it has come into life, it starts crying and kicking and doesn't care at all what you wanted it to do. Mathematical consistency is a very powerful constraint that is difficult to appreciate if one hasn't had the experience: You can't just go and, for example, introduce antigravitating masses into general relativity. It sounds easy enough to just put in stuff that falls up, but once you look into the details the easy ways are just not compatible with the theory, and it turns out not to be so easy after all. (I should know, since I spent several years on that question and out came a paper that I doubt anybody read.)

You might ask now, well, what has antigravitation got to do with phenomenological quantum gravity? Nothing actually. It's just that people always ask me what I work on and I used to say: A little bit of particle physics and a little bit of cosmology and my recent paper was about this-and-that and I'm also interested in the foundations of quantum mechanics and organizational design, and then I wrote this paper on the utility function in economics and so on. But I figured that what they actually wanted was a three word answer, so that's why I work on phenomenological quantum gravity. On the institute's website it says I work on "high energy and nuclear physics," which isn't too far off, still, 5 is larger than 3.

But no matter what the headline, my work looks like this: I start with an idea and try to build a model that incorporates it while maintaining mathematical consistency; after all, that's what I sat through all these classes for. In addition, the model should be compatible with available data and ideally predict something new. The failure rate is high. But there's the occasional idea that turns out not to be a failure. It gets written up and submitted to a journal and, if all goes well, gets published. I usually publish in Classical and Quantum Gravity, Physics Letters B or Physical Review D.

In the process of working on a paper, I almost always have an ongoing exchange with some people who work on related topics. If the finances allow it, I might visit them or invite them to come here. I might also attend a workshop or conference, or organize one myself. In addition, my work brings the usual overhead like writing or reviewing grant proposals, attending or giving seminars, coming up with a thesis topic, reading applications, reviewing papers, attending faculty meetings and so on. I presently work at a pure research institute, the Nordic Institute for Theoretical Physics in Stockholm, and have no teaching duties, which has advantages and disadvantages. And if you are following this blog you know that I'm only just back from parental leave.

For more on what my work is like, see also What I am is what I am and One day. You can also follow me on Twitter, or Google+.

Saturday, September 17, 2011

Interna

We'll be resettling in Stockholm in the coming week and fighting some bureaucratic fights, so you might not hear much from us for a while.

Since blogpoll apps have a tendency to vanish, here's a summary of the two recent polls.

Do you die when you go through a transmitter?
  • Yes, you die. 36.8% (70)
  • No, you don't die. 33.7% (64)
  • Some part of the process is physically impossible. 25.3% (48)
  • Something else (please explain in comments). 4.2% (8)

Do you believe in free will?
  • I believe human decisions are in principle predictable and there is no free will. 35.4% (45)
  • I believe human decisions are in principle predictable, but still there can be free will. 28.3% (36)
  • I believe human decisions are not predictable, neither in practice nor in principle, and we have free will. 27.6% (35)
  • I believe something else that I'll explain in the comments. 8.7% (11)


Wednesday, September 14, 2011

And yet it moves

This September, it's been 16 years since I started studying physics. That's 2^2^2 years which have gone by and bye. Stefan started in 1987. The first physics headline I can recall consciously taking note of was the 1995 discovery of the top quark, and Stefan cites inspiration from Supernova 1987A. This got us into a conversation about the most striking insights physics has delivered since we went to university. Here are our winners:

The biggest surprise for everybody except Rafael Sorkin was that the cosmological constant is not zero. Since 1998, evidence has been adding up and up that our universe undergoes accelerated expansion caused by a small, positive cosmological constant. For more, read my earlier post on the Cosmological Constant and its cousins.

When I was a graduate student, physicists were still debating whether black holes exist or if black holes are just a mathematically possible solution to Einstein's field equations that is however not realized in nature. The first evidence was available already back then, but it took a while for more observations to be made and gradually everybody came to accept that black holes exist for real. (Well, almost everybody.) For more on black holes, see here.

Though suspected by many, it still took several decades to unambiguously show that neutrinos have mass. Due to the neutrinos' weak interaction, many years of data had to be collected over different propagation distances at different energies. It wasn't until 2001 that the option of decay rather than oscillation could be ruled out by the SNO results. Yet the neutrino sector of the standard model still has some mysteries to offer.

In my quantum mechanics class, EPR-type tests of Bell's theorem were Gedankenexperimente. Now they are reality. So are other tests of the foundations of quantum mechanics, down to single photons and double-slit experiments with atoms, while our understanding of entanglement and decoherence has increased and superpositions of larger and larger molecules have been achieved.

And on the computational side, amazing simulations of large scale structure formation have become possible. If you haven't seen the Millennium Simulation, it's time well spent.

The recent issue of Physik Journal (the membership journal of the German Physical Society) has an article "Physik im Aufwind" that summarizes recent statistical trends in physics. The figure below shows the number of beginning physics students by year. I started in the middle dip. It is good to see that physics is drawing in more young people again.

Saturday, September 10, 2011

The Third Hand

Last week at the airport I read the July/August issue of Scientific American Mind, which has an interesting article "Reflections on the Mind" by two Ramachandrans from the Center for Brain and Cognition at UCSD. It is a brief walk through some recent experiments testing how our brain constructs and interprets our own body and how that interpretation can be twisted.

One experiment you have probably heard of is that letting amputees "see" a lost arm or leg with a mirror that doubles the remaining one allows them to scratch or move it. That is, scratching the reflection they see in place of the lost body part does register in the brain, even though there is no direct sensory input. Some months back, we also learned about the "body swap" illusion that makes use of somewhat more sophisticated technology to create the illusion that one is moving a different body, with the aim to test how readily the brain accepts it as one's own. The SciAm Mind article suggests some low tech experiments you can try at home. For example, using a mirror to produce an image of your hand in place of the actual hand and then stroking the image produces a conflict in the brain because the visual input doesn't match the expectation. As a result your hidden arm might feel numb, though there's nothing wrong with it.

This reminded me of a trick we used to play on the mind as children: Lock hands with a friend, with the index fingers straight (see image below). With the free hand, rub up and down your own and your friend's index fingers (2nd image). We used to call it "rubber finger." Everybody I know who has tried it found that it feels weird. I don't know why, but it seems that the brain expects some signal from the friend's finger. It doesn't make a lot of sense to me since you'd need three hands for that. If you have a good interpretation, let me know.

Wednesday, September 07, 2011

Predetermined Lunch and Moral Responsibility

The final session of the 2011 FQXi conference concluded with a brief survey. The question “Is a ‘perfect predictor’ of your choices possible?” was answered with “Yes” by 17 out of 40 respondents. The follow-up question “If there were, would it undermine human free will?” was answered with “Yes” by 18 out of 38 respondents.

I’m in the Yes-Yes camp, and I was surprised that doubting one’s own free will was so common among the conference participants. It is striking how unrepresentative this result is for the general population who likes to hold on to the belief that personal choices are undetermined and unpredictable. In a cross-cultural study with participants from the United States, Hong Kong, India and Colombia, Sarkassian et al found that more than two thirds of respondents (82% USA, 85% India, 65% Hong Cong, 77% Colombia) believe that our universe is indeterministic and human decisions are “not completely caused by the past”(exact wording used in the study).

One of the likely reasons many people believe in free will is the question: if fundamentally there is no such thing as free will, how come most of us* have the feeling that we do make decisions?

Lacking a good theory of consciousness, it may be that rather than making decisions, the role of our consciousness is simply to provide aggregated information about what our brain and body were doing and are currently doing, and to provide a crude extrapolation of this information into the future. As we grow up, we become better at predicting what will happen next – in our surroundings as well as with our own body and mind – and may mistake our prediction of what we will be doing for an intent to do it, while our imperfection in making precise predictions creates the illusion that we had a choice. (I doubt I'm the first to have this thought. If you know a reference with similar spirit, please let me know.)

This would mean that if you slap your forehead now, rather than consciously deciding to do so and making the choice to perform this action (which we may call the "standard interpretation"), your neuronal network has arrived at the state that immediately precedes this action, and your consciousness notes that the next thing that will likely happen is you slapping your forehead, which it interprets as your impulse to do so (we may call this the "self-extrapolation interpretation"). You are not entirely certain about this, since you have learned that your subconscious on occasion makes twists that your consciousness fails to properly predict, thus the possibility remains that you'll not be slapping your forehead after all.

It has in fact been argued that the reason why most people reject determinism is their inability to predict actions, first by Thomas Reid I am told, and later by Spinoza, not that I actually read either. So possibly theoretical physicists are more inclined to believe in determinism because making precise predictions is their day job ;-)

Sean Carroll recently argued that free will can have a peaceful coexistence with modern science on an emergent level, in an effective description of human beings. That only works though if in the process of arriving at that effective level you throw away information that was fundamentally there. I believe Sean is aware of that when he writes “But we don’t know [all the necessary information to predict human decisions], and we never will, and therefore who cares?”

Well, I'd say that if you make room for free will by neglecting information that is in principle available, then this notion of free will is an empty concept that, as I've learned from the comments to his blogpost, the philosopher Edward Fredkin more aptly named "pseudo free will."

I'm only picking on Sean's post because it's short enough for you to go and read it, unlike the hundreds of pages some philosophers have spent to say essentially the same thing. In any case, it is interesting how some scientists desperately try to hold on to some notion of free will in the face of an uncaring universe. I believe one of the reasons is that rejecting free will sheds a light of doubt on one's moral responsibility, and since I feel personally offended, some words on that.

Morals and Responsibility


Whether the universe evolves deterministically, or whether its time evolution has a random element, an individual fundamentally has no choice over his or her actions in either case. It is then difficult to hold somebody responsible for actions if they had no way to make a different choice. This and similar thoughts have spurred a number of studies that claim to have shown that priming people to believe in a deterministic universe reduces their moral behavior.

For example, a study by the psychologists Kathleen Vohs and Jonathan Schooler (summary here) had half of the participants read a text passage arguing against the existence of free will. All participants then filled out a survey on their belief in free will and completed an arithmetic test in which they had an option to cheat, but were asked not to. It turned out that disbelief in free will was correlated with the amount of cheating. Also, in the previously mentioned study by Sarkissian et al., most participants held the opinion that in a deterministic universe people are not responsible for their actions.

However, the issue of moral responsibility is a red herring, for morals are human constructs whether or not we have free will. From the viewpoint of natural selection, the reason why most of us don't go around cheating, stealing, or generally making others suffer is not that it's illegal or immoral or both, but that our self-extrapolation correctly predicts we will be suffering in return. Not primarily because we may be thrown into jail, but because our brains would keep returning to that moment of offense, imagining how other people suffered because of our wrongdoing, telling us in that way that we did act against the interests of our species, and more generally reducing our overall fitness.

In fact, that our species still exists and seems to be doing reasonably well means that most of us do not take pleasure in letting others suffer. The reason we don’t perform “immoral” acts is that we can’t: We’re the product of a billion years of natural selection that has done well to sort out those who pose a risk to our future, and we've called the result “moral.” (I am far from saying one can derive morals.)

The fewer consequences an act has for one's own future and that of others, the larger the variety in people's behavior. (There are more people jaywalking than strangling talk show hosts in front of running cameras.) That we have laws enforcing rules is because there remain people among us whose brains are some sigma away from the average, and our laws are one more channel of natural selection, keeping these people off the streets, trying to readjust their brains' functionality, or at least generally making their lives difficult. David Eagleman recently made a very enlightened argument for a rethinking of our justice system in light of neurobiological evidence for our reduced capability to change our brain's working.

In a world without free will, we should not ask if a person is worth blaming, but simply look for the dominant cause of the problem and take steps to solve it.

Similarly, instead of asking who is morally responsible, we should ask what incentives people have. The problem with the above mentioned test for moral responsibility in a deterministic universe is that the consequences of the alleged "immoral" act of cheating are entirely negligible. Putting forward the plausible thesis that the illusion of free will is beneficial to our brain's performance (or otherwise, why is it so universal?), the test subjects' cheating might simply have been a reassurance of their illusion. If one replaced the temptation to cheat on a test with a questionnaire about the participants' food preferences and then offered snacks, chances are those who had been primed with a deterministic universe would feel the urge to select a food they do not usually prefer. Better still, one might have told the test subjects that the better their brains in the deterministic universe are adapted to living a modern life in modern times, the less likely they will be to perform "immoral acts" that violate the (written or unwritten) rules and values of that society (whether that is true or not doesn't matter).

Predetermined Lunch: Not Free Either

That our decisions are determined does not mean that we do not have to make them, which is a common misunderstanding, nicely summarized by Sean Carroll’s anecdote
“John Searle has joked that people who deny free will, when ordering at a restaurant, should say ‘just bring me whatever the laws of nature have determined I will get.’”

The decision about what you will eat may be predetermined, but your brain still has to crunch the numbers and spit out a result. One could equally well joke that your computer, rather than running the code you've written, returns it back to you with the remark that the result is predetermined and follows from your input. Which is arguably true, but still somebody or something has to actually perform the calculation. Though in a deterministic universe it is in principle possible, it is highly questionable that the cook will be able to make the prediction about your order in your place, even after asking Laplace's demon for input.

In other words, even if you don't have free will, to make a decision you still have to collect all the information you deem necessary and scan your memory and experience to build an opinion, or perform whatever other process you have come to think is a good way to make decisions, be that rolling a die or calling your mom. Whether or not you believe you have freedom in making a decision doesn't save you the energy needed to do it.



The original version of this post had a poll included on the question "Do you believe in free will?" but the applet is no longer functional. The results were
  • I believe human decisions are in principle predictable and there is no free will. 35.4% (45)
  • I believe human decisions are in principle predictable, but still there can be free will. 28.3% (36)
  • I believe human decisions are not predictable, neither in practice nor in principle, and we have free will. 27.6% (35)
  • I believe something else that I'll explain in the comments. 8.7% (11)



* The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, describes Depersonalization Disorder as follows: “The essential features of Depersonalization Disorder are persistent or recurrent episodes of depersonalization characterized by a feeling of detachment or estrangement from one's self. The individual may feel like an automaton or as if he or she is living in a dream or a movie. There may be a sensation of being an outside observer of one's mental processes, one's body, or parts of one's body.”

Thus, interestingly enough, not all of us share the feeling of being in charge of one's actions. That the failure to relate to oneself is filed under "disorder" seems to me to show that believing in free will is beneficial to the individual's functionality and well-being.

Sunday, September 04, 2011

From my notepad

The 2011 FQXi conference was an interesting mix of people. The contributions from the life sciences admittedly caught my attention much more than those of the physicists. Thing is, I've heard Julian Barbour's and Fotini Markopoulou's talks before, I've seen Anthony Aguirre's piano reassemble from dust before, and while I hadn't heard Max Tegmark's and George Ellis' talks before, I've read the respective papers. The discussions at physics conferences also seem to have a fairly short recursion time, and it's always the same arguments bouncing back and forth. One thing I learned from David Eagleman's talk is that neuronal response decreases upon repetitive stimuli – so now I have a good excuse for my limited attention span in recursive discussions ;-)

All the talks at the conference were recorded and they should be on YouTube sooner or later. Stefan also just told me that the talks from the 2009 FQXi conference are on YouTube now. (My talk is here. Beware, despite the title, I didn't actually speak on Pheno QG. Also, I can't for the hell of it recall what that thing is I'm wearing.) Anyway, here is what I found on my notepad upon return, so you can decide which recording you might want to watch:

  • Mike Russell gave a very interesting talk on the origin of life, or at least of its molecular ancestors. He explained the conditions on our home planet 2 billion years ago and the chemical reactions he believes must have taken place back then. He claims that under these circumstances, it was almost certain that life would originate. With that he means that a molecule very similar to ADP, the most important cellular energy source, is very easy to form under certain conditions that he claims were present in the environment. From there on, he says, it's only a small step to protein synthesis, RNA and DNA, and they are trying to "re-create" life in the lab.

    Chemical reactions flew by a little too fast on Russell's slides, and it's totally not my field, so I have no clue whether what Russell says is plausible. In particular, I don't know how sure we can really be that the environment was as he envisions it. In any case, I took away the message that the molecular origins of life might not be difficult to create in the right environment. Somewhat disturbingly, in the question session he said he has trouble getting his work funded.

  • Kathleen McDermott, a psychologist from Washington University, reported the results of several studies in which they were trying to find out which brain regions are involved in recalling memories and imagining the future. Interestingly enough, in all the brain regions they looked at, they found no difference in activity between people recalling an event from the past and people envisioning one in the future.

  • David Eagleman gave a very engaging talk about how our brains slice time and process information without confusing causality. The difficulty is that the time different sensory inputs need to reach your brain differs with the type and location of the input, and the time needed for processing also differs from one part of the brain to the next. I learned for example that the processing of auditory information is faster than that of visual information. So what your brain does to sort out the mess is that it waits till all the information has arrived, then presents you with the result and calls it "right now," except that at this point it might actually be something like 100 ms in the past. (As a rough analogy, there's a toy sketch of this buffer-and-release idea after this list.)

    Even more interesting is that your brain, well trained by evolution, goes to great lengths to correct for mismatches. Eagleman told us for example that in the early days of TV broadcasting, producers were worried that they wouldn't be able to send audio and video sufficiently synchronized. Yet it turned out that, up to 20 ms or so, your brain erases a mismatch between audio and video. If it gets larger, all of a sudden you'll notice it.

    Eagleman told us about several experiments they've done, but this one I found the most interesting: They let people push a button that would turn on a light. Then they delayed the light signal by some small amount of time, 50 ms or so, after the button was pushed (I might recall the numbers wrong, but the order of magnitude should be okay). People don't notice any delay because, so the explanation goes, the brain levels it out. Now they insert one signal that comes without delay. What happens? People think the light went on before they even pushed the button and, since the causality doesn't make sense, claim it wasn't them! (Can you write an app for that?) Eagleman says that the brain's ability to maintain temporal order, or its failure to do so, might be a possible root of schizophrenia (roughly: you talk to yourself but get the time order wrong, so you believe somebody else is talking) and they're doing some studies on that.

  • From Simon Saunders' talk I took away the following quotation from a poem by Henry Austin Dobson, "The Paradox of Time:"

      “Time goes, you say? Ah no!
      Alas, Time stays, we go;
      Or else, were this not so,
      What need to chain the hours,
      For Youth were always ours?
      Time goes, you say? Ah no!"


  • Malcolm MacIver, who blogs at Discover, studies electric fish. If that makes you yawn, you should listen to his talk, because it is quite amazing how electric fish have optimized their energy needs. MacIver also puts forward the thesis that the development of consciousness is tied to life getting out of the water, simply because in air one can see farther and thus arises the need to plan ahead. In a courageous extrapolation of that, he claims that our problem as a species on this planet is that we can't "see" the problems in other parts of the world (e.g. starving children) and thus fail to react to them properly. I think that's an oversimplification and I'm not even sure that is the main part of the problem, but it's certainly an interesting thesis to think about. He has a 3-part series of posts about this here: Part I, Part II, Part III.

  • Henry Roediger from the Memory Lab at Washington University explained to us, disturbingly enough, that there is in general no correlation between the accuracy of a memory and the confidence in it. For example, shown a list of 16 words with a similar theme (bed, tired, alarm clock, etc.), 60% of people (or so, again: I might mess up the numbers) will "recall" the word "sleep" with high confidence even though it was not on the list. A true scientist, he is trying to figure out under which circumstances there is a good correlation and what this means for the legal process.

  • Alex Holcombe told us about his project evidencechart.com, a tool to collect and rate pro and con arguments on a hypothesis. I think this can be very useful, though more so in fields where there actually is some evidence to rate on.
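
Since Eagleman's "wait for the slowest channel, then call it now" picture is essentially a little synchronization algorithm, here is a toy sketch of the idea in Python. It is purely my own analogy with made-up latency numbers, not anything from his talk or his lab's work:

    # Toy model (my own analogy, illustrative numbers only): each sense is a
    # channel with its own latency, and the "conscious now" for an event is
    # released only once the slowest channel has reported.
    LATENCIES_MS = {"auditory": 30, "visual": 70, "touch_toe": 100}

    def perceived_now(event_time_ms):
        """Return when the event becomes 'now', and how far in the past it then lies."""
        arrival_times = [event_time_ms + lag for lag in LATENCIES_MS.values()]
        release_time = max(arrival_times)  # wait for the slowest channel
        return release_time, release_time - event_time_ms

    release, lag = perceived_now(0)
    print(f"Event at t=0 ms is experienced as 'now' at t={release} ms, "
          f"i.e. about {lag} ms in the past.")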

Scott Aaronson's talk on free will deserves a special mention, but I found it impossible to summarize. I recommend you just watch the video when it comes out.

Saturday, September 03, 2011

Interna

I am back in Germany and happily reunited with the family. Time might not exist and its passage may be an illusion, but the babies are growing regardless, and our arrow of time points towards baby gates. Lara and Gloria are now 8 months old. They spent the previous week, while I was away at the FQXi conference, with Stefan at their grandma's place. It is difficult to say whether they missed me during my absence or whether they recognized me upon my return. They do however clearly recognize our apartment and their own beds. Lara, for example, had found a way to lie in the corner of her bed at exactly the right angle so that she could just look out through the door and onto the corridor - a position she immediately resumed.

The girls are now both moving around by doing the army crawl and Gloria has made her first attempts to crawl on her knees. At present, she seems to be aiming at a career as a breakdancer, standing on her hands and the toes of one foot, turning around chasing the other foot, sometimes slipping and bumping her head. Interestingly enough, Gloria has completely skipped the phase of moving around by rolling sideways that Lara had. Gloria meanwhile has learned how to clap her hands, which she does with enthusiasm. They can now both grab a pacifier and put it into their own mouths, and if Lara is in a good mood, she'll try to put it into your mouth.

The babies are both fascinated by all things shiny and tiny and stringy and I've had the somewhat belated insight that the purpose of baby toys is not to entertain the baby but to distract the baby from mommy's toys till it's old enough to realize that pulling on a cable isn't always a good idea.

Our rapid throughput of clothes has been slowing down and we've childproofed the apartment as far as possible. However, in 2 weeks we're packing our bags and going back to Stockholm, where I will be working while Stefan is on parental leave. So we'll have to childproof a second apartment, this time with the added difficulty that we can't remove items or drill into walls, because the items aren't ours and the walls are solid concrete.

But, hey, we'll manage somehow. Baby reading this month is an article on "Baby Power" in SciAm Mind, according to which mommy brains sprout new neurons, and body chemistry changes towards higher risk taking and better memory performance, at least when it comes to tracking down food. If you are a rat, that is. The same article also informs us "that (human) mothers are more likely to rate their infant's odors as pleasant, compared with nonmothers" (Look, an English compound noun! And it's not of my making!). Maybe I'm an aberration but, prolactin or not, shit still smells like shit to me. Spiegel Online informs us that we're supposed to train baby's concentration skills by the age of one (at the latest), but then Parents don't matter that much, at least when it comes to the child's education and income, and the Globe and Mail reports that striving to be supermom is correlated with depression. So maybe we'll wait a while longer before teaching the babies differential geometry on complex spaces.

Wednesday, August 31, 2011

Will AI cause the extinction of humans?

Yesterday, at the 2011 FQXi conference in Copenhagen, Jaan Tallinn told us he is concerned. And he is not a man of petty worries. Some of us may be concerned they’ll be late for lunch or make a fool of themselves with that blogpost. Tallinn is concerned that once we have created an artificial intelligence (AI) superior to humans, the AIs will wipe us out. He said he has no doubt we will create an AI in the near future and he wishes that more people would think about the risk of dealing with a vastly more intelligent species.

Tallinn looks like a nice guy and he dresses very well and I wish I had something intelligent to tell him. But actually it’s not a topic I know very much about. Then I thought, what better place to talk about a topic I know nothing about than my blog!

Let me first say that I think the road to AI will be much longer than Tallinn believes. It's not the artificial creation of something brain-like with as many synapses and neurons that's the difficult part. The difficult part is creating something that runs as stably as the human body for long enough to learn how the world works. In the end I believe we'll go the way of enhancing human intelligence rather than creating a new one from scratch.

In any case, if you would indeed create an AI, you might think of making humans indispensable for their existence, maybe like bacteria are for humans. If they're intelligent enough, they'll sooner or later find a way to get rid of us, but at least it'll buy you time. You might achieve that, for example, by never building any AI with their own sensory and motor equipment, but making them dependent on the human body for that. You could do that by implanting your AI into the still functional body of braindead people. That would however get you into a situation where the AIs would regard humans, though indispensable, as something to grow and harvest for their own needs. I.e., once you're an adult and have reproduced, they'll take out your brain and move in. Well, it kind of does solve the problem in the sense that it avoids the extinction of the human species, but I'm not sure that's a rosy future for humanity either.

I don't think that an intelligent species will be inherently evil and just remove us from the planet. Look, even we try to avoid the extinction of species on the planet. Yes, we do grow and eat other animals, but that, I think, is a temporary phase. It is arguably not a very efficient use of resources and I think meat will sooner or later be replaced with something factory-made. You don't need to be very intelligent to understand that life is precious. You don't destroy it without a reason because it takes time and resources to create. The way you destroy it is through negligence, or call it stupidity. So if you want to survive your AIs, you'd better make them really intelligent.

Ok, I'm not taking this very seriously. Thing is, I don't really understand why I should be bothered about the extinction of humans if there's some more intelligent species taking over. Clearly, I don't want anybody to suffer in the transition and I do hope the AI will preserve elements of human culture. But that, I believe, is what an intelligent species would do anyway. If you don't like the abruptness of the transition and want the successors of humans to be more continuous with us, then you might want to go the way I've mentioned above, the way of enhancing the human body rather than starting from scratch. Sooner or later genetic modification of humans will take place anyway, legal or not.

In the end, it comes down to the question what you mean by “artificial.” You could argue that since humans are part of nature, nothing human made is more “artificial” than, say, a honeycomb. So I would suggest then instead of creating an artificial intelligence, let’s go for natural intelligence.

Monday, August 29, 2011

FQXi Conference 2011

We just arrived in Copenhagen after a 2-day trip on the National Geographic Explorer, a medium-sized cruise ship, along Norway's coast. On board were about 130 scientists and a couple of spouses in different sizes, plus an incredibly efficient, friendly, and competent crew that didn't mind having nosy physicists hanging around on the bridge.

The 2011 FQXi conference turns out to be very different from the previous one (2009 on the Azores), and not only thanks to the unique bonding experience of shared seasickness. As Sean Carroll mentioned the other day, during the organization of this conference on the nature of time, the FQXi folks were confronted with an application for a similar event on a similar topic, and so they decided to join forces. As a result, this conference is larger and much more interdisciplinary than the previous one. Besides the physicists and philosophers, there are neurobiologists, biologists and psychologists, a selection of guys interested in artificial intelligence from one or the other perspective, as well as a camera crew who, I am told, are here for PBS.

Among the physicists, the usual suspects are Max Tegmark and Anthony Aguirre, Paul Davies, George Ellis, David Albert, Garrett Lisi, Fotini Markopoulou, Julian Barbour, and Scott Aaronson. But there’s also Geoffrey West from the Santa Fe Institute, Jaan Tallinn, one of the developers of Skype, and David Eagleman the possibilian, just to mention a few. Also around are George Musser from Scientific American and Zeeya Merali who is blogging for FQXi here. There’s a list of alleged attendees here, though some of them I haven’t seen so far.

It is an interesting mix of people. I do enjoy interdisciplinary events a lot because there is always some cool research to learn about that I didn't know of before. I have however grown skeptical about the benefits of interdisciplinarity when it comes to pushing forward on a particular problem. Take a topic such as free will or the origin of our impression of "now" that might or might not be an illusion. Yes, neurobiologists and psychologists have something to say about that. But they don't in fact mean the same things as physicists, and I am not sure that, for example, the question of how we manage to remember the past and imagine the future, or fail to distinguish between true and false memories, has any relevance for physicists trying to figure out the status of the past hypothesis, the consistency of alternatives to the block universe, or the role of observers in the multiverse. In fact, you already have people talking past each other within one discipline: If you ask three physicists what they mean by "free will" you'll get four different answers. And after you've spent a significant amount of time figuring out what they mean to begin with, there isn't much left they have to say to each other.

That’s the downside of mixing academics – in my experience it does not add depth. Interdisciplinary exchange however adds breadth. Talking to somebody who has addressed a question for a completely different reason and with completely different methods helps one look at it from a different point of view, opening new ways forward. In my opinion though the largest benefit of events like this conference comes from just getting together a group of interesting and intelligent people who make an effort to listen to and complement each other. After some years at PI and NORDITA I’ve pretty much come to take for granted having plenty of folks at my disposal to talk to should I feel like it, but after the baby break I appreciate the opportunity for such exchange much more.

The idea with putting us on a ship was clearly to get us off the Internet for a while. I personally don't have the impression that people at the conferences I usually go to make obsessive use of the internet, but evidently some need an evil third party as an excuse for not being available, at least for a few days. I don't find it such a great idea to punish all of us because a few guys can't live without their newsfeed. I wasn't the only one with family at home who would have appreciated at least a phone. (For an appropriate price, that is. If you really, really had to, you could have paid for an internet connection at $10 per kB or something like that.)

These are some first impressions. Once I've had some time to process what I've heard and learned, I might summarize some of the main questions that were discussed. But now (whatever that might be) I have to locate my baggage, which I last saw this morning vanishing into a bus somewhere.

Thursday, August 25, 2011

Away note

I'll be away for a week at the FQXi conference "Setting Time Aright." A significant number of the participants are reportedly nuts, so I will be in good company. I'm supposed to moderate a session on "Choice" for reasons that are somewhat mysterious to me, but since I don't believe in free will I guess they had no choice, haha.

This is my first conference attendance since the babies and, believe me, it's required a significant amount of organization. It didn't help that they're doing half of it on a ship, and the idea of having to get around on a ship with a twin stroller didn't really appeal to Stefan and me. So I go, and Superdaddy stays with the babies while I'll cry over the no-signal sign on my BlackBerry. Side effects may include blogging congestion.

Sunday, August 21, 2011

Physics and Philosophy

I'm looking for topics where theoretical physics has a relevance for philosophy, for no particular reason other than my curiosity and maybe yours as well. Here are the usual suspects that came to my mind:

  1. Are there limits to what we can possibly know? The human brain has a finite capacity and computing power. What limits does this set? Is it possible to extend? What is consciousness?

  2. Why is the past different from the future? What is "The Now" and why do we have an "arrow of time"? (Or several?)

  3. Is there a fundamental theory that explains everything we observe and experience? Is this theory unique and does it explain everything only in principle or also in practice?

  4. Do we have free will? And what does that question mean?

  5. Are there cases where reductionism does not work? And what does that imply for 3?

  6. What is the role of chaos and uncertainty in the evolution of culture and civilization? Is it possible to reliably model and predict the dynamics of social systems? If so, what does that mean for 4?

  7. What is reality? What does it mean to "exist" and can an entirely mathematical theory explain this? Does everything mathematical exist in the same way? Why does anything exist at all?

  8. And Stefan submits: What is the ontological status of AdS/CFT?

Thursday, August 18, 2011

What makes you you?

Stefan's life is tough. When he comes home, instead of a cold beer (I support the local wineries) and dinner (ha-ha-ha) he gets one of the crying babies and a washcloth. And then there's his wife, who lacks googletime and greets him with bizarre questions. What frequency does a CD player operate on? Something in the near infrared. How many atoms do you need to encode one bit? Maybe somewhat below the million it was in 2008. And why does he actually know all this stuff? Male brains are funny. He does not, for example, know that the Aspirin is in the medicine cabinet, of all places. But yesterday he gave it a pass, so here's my question to you.

Suppose you have a transmitter, Starship Enterprise style. It reads all the information of all particles in your body (all necessary initial values), disintegrates your body, sends the information elsewhere, and reassembles it. Did you die in that process?



You could object that this process isn't physically possible, either theoretically or practically. Theoretically, there are for example the no-cloning and no-teleportation theorems in quantum information. But you might not actually need all the quantum details to reconstruct a human body. (I'm not sure, though, that the role of quantum physics for consciousness has yet been entirely clarified.) And, if I reassemble you elsewhere, you are arguably different in that the relative location of your body to all other objects in the universe has changed. But again, it doesn't seem like that's of any relevance. Or you could say that there won't be enough time to ever perform this process in the history of the universe, or something like that. But these answers seem unsatisfactory to me.
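
As general background (not specific to the teleporter scenario), the no-cloning theorem follows from a short linearity argument; here is a minimal sketch in LaTeX-style notation:

    % Suppose a single unitary U could copy two arbitrary states:
    %   U |psi>|0> = |psi>|psi>,   U |phi>|0> = |phi>|phi>.
    % Unitaries preserve inner products, so taking the inner product of
    % the two equations gives
    \langle\psi|\phi\rangle \;=\; \langle\psi|\phi\rangle^{2}
    \quad\Rightarrow\quad \langle\psi|\phi\rangle \in \{0,1\},
    % i.e. the two states must be identical or orthogonal -- so no single
    % device can copy arbitrary unknown quantum states.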

Then you might say, well, if it looks like me, walks like me, and quacks like me, it probably is me. That is, nobody, including the person you have assembled could tell any difference. So that would seem like you didn't die.

On the other hand, the operation of your brain has a discontinuity in its timeline in the sense that it didn't do anything during transmission. That is in contrast to, say, anesthesia, where your brain is actually quite active. (Interesting SciAm article on that here.) So it would seem that what constitutes 'you' did cease to operate, and 'you' did die.

But then again, who really cares if you stopped thinking for some seconds and then continued that process, while in between you changed the set of quarks and electrons you're operating with? Now consider instead that I don't send the information to one place, but to ten. And I assemble not one you, but ten. Which one are you?

Oh-uh, headache. I can understand Stefan does prefer to bath the baby. Now where is the Aspirin?