Wednesday, April 9, 2008
PAN
CH3CO-OO*
And nitrogen dioxide:
-NO2
The asterisk (*) on the peroxyacetyl is one of the conventions used for indicating that it is a radical; it has an unpaired electron that plays well with others, especially if they also have an unpaired electron.
Now a bit of history, in an attempt to lose anyone that I haven’t already lost with the chemical formulae.
Los Angeles was known to have a smog problem even before WWII, but during and after the war it got much worse, partly because of the massive expansion of oil refineries, and the attendant expansion of automobile travel. L.A. smog was known to be different from “London smog,” in that the L.A. sort was oxidizing, and London’s was reducing. Ozone was identified as a major component of L.A. smog, but the ozone alone couldn’t account for “plant bronzing,” damage with a characteristic yellow-brown splotches on the leaves of plants. A guy by the name of Haagen-Smit (mentioned in a magical incantation in SunSmoke), managed to replicate the plant damage by using the product of some smog chamber reactions, but could not identify the compound that was responsible.
Some researchers at the Franklin Institute in Philadelphia (Stevens, Hanst, Doerr, and Scott), used a technique called long-path infrared spectroscopy on smog chamber products and spotted a set of IR bands that were particularly strong in the results of a biacetyl-NOx run. They dubbed the responsible agent, “Compound X.” Compound X turned out to be PAN, and how cool is that?
In the mid-1970s, PAN was discovered to thermally decompose, i.e. at elevated temperatures, it rapidly changed back to a peroxyacetyl radical and nitrogen dioxide. That made everything much more interesting, because PAN gets formed early in the day, when it’s cooler, then, as the air warms, it can decompose and feed radicals and NOx back into the smog formation system, producing more ozone. The thermal behavior of PAN is one of the reasons why smog is worse on hot days. PAN can also assist in the long range transport of oxidizing smog, serving as sort of an ozone storage system.
The thing is that PAN and its constituents/products form a steady-state at constant temperature, with PAN existing in balance with peroxyacetyl and NO2. Change the temperature and the balance changes. At higher temperatures, PAN decays and if there is still sunlight around, ozone goes up. But this process is dominated by the behavior of peroxyacetyl radicals.
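That temperature sensitivity can be put in rough numbers with an Arrhenius expression. The parameters below are illustrative values of about the magnitude reported in the kinetics literature, not a recommendation; treat them as assumptions for the sketch:

```python
import math

# Illustrative Arrhenius parameters for PAN thermal decomposition,
# roughly the magnitudes found in kinetics evaluations (assumed values):
A = 2.52e16        # pre-exponential factor, 1/s
EA_OVER_R = 13573  # activation energy divided by the gas constant, K

def pan_lifetime(temp_k):
    """e-folding lifetime of PAN against thermal decomposition, seconds."""
    k = A * math.exp(-EA_OVER_R / temp_k)
    return 1.0 / k

for celsius in (0, 10, 20, 30, 40):
    t = pan_lifetime(celsius + 273.15)
    print(f"{celsius:3d} C: lifetime ~ {t / 3600:8.2f} hours")
```

The lifetime drops by roughly an order of magnitude for every 20 degrees of warming, which is why PAN formed in the cool morning can dump its radicals and NO2 back into the system by afternoon.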
If NO2 were the only thing that peroxyacetyl could react with, this wouldn’t happen. But peroxyacetyl also reacts with nitric oxide (NO), and that is one of the reactions whereby ozone is generated, by converting NO to NO2, which then photolyzes to ozone (note: the entire system is ‘way complicated, which is why I spent 20 years studying it). By the same token, if something reduces the amount of peroxyacetyl, relative to other peroxy radicals, then PAN concentrations decline, NO2 comes back into the system, and ozone can increase.
Peroxyacetyl radicals also react with other radicals, and that alters the balance. In the early 1980s, looking over the set of chemical reactions we had available, I decided that the cross-reactions between radicals were set too low. Fortunately, there was a paper by a fellow named Addison that had measured them higher than the generally accepted values, so I used Addison's numbers. I can still remember the combination of excitement and satisfaction that came when Addison's numbers led to a simulation that just nailed the PAN decay data. Since then, rate constants have been measured that are even higher than Addison's; when I used the new, higher still numbers, the results were almost exactly the same. There seems to be a point of diminishing returns, a gating function, call it what you will. Once you get above the critical numbers, there is little additional effect.
So even without my own insights into PAN decay, mostly the result of my paying attention to that particular problem, it would only have been a few years until the problem was solved by better measurements, and correct PAN decay would have been achieved in simulations anyway.
On the other hand, there were several features of the system, such as the specific products of some of the radical-radical reactions that have not been addressed to this very day, to the best of my knowledge, and, nearly as I can tell, no one is looking at those problems and no progress is being made. Sometimes the great grinding engines get it and sometimes they don’t. There’s room for a ton of lessons here, I’m just not sure what they all are.
Monday, March 17, 2008
Habitats
As nearly as I can tell, I may be the only person in my cohort who was never interested in blowing things up, and that included rocketry in general. Even in my interest in nuclear physics (which I later learned was actually nuclear chemistry and nuclear engineering), I was more interested in reactors than bombs. As for rocketry, I picked it up the way I learned country music. When it’s what everyone around you talks about, you learn some of it.
I was interested in astronomy, however, and I have always liked the deep space probe findings. I just was not that interested in how to get the probes to where they were going. Similarly, I did have an interest in some of the things that would go along with space colonization. To that end, one of the earliest things I ever tried to do with my trusty chemistry set was to grow some plants hydroponically. My effort met with dismal failure; I found out very quickly about root rot and the perils of constant immersion for several kinds of plants, including potatoes. To this day, the only plants I've ever grown have been in soil.
Nevertheless, I persisted in my interests; one of the attractions of the field of environmental modeling, in fact, was the notion that it would be possible to use such engineering tools to analyze and perhaps design, self-contained ecologies. It was in all the space novels, right?
But the whole thing seemed to be moving so slowly. A guy I lived with for a year after I first moved to Berkeley, Steve Ellner, is now a professor of biomathematics at Cornell, and one of his ongoing projects is a system of connected pools with water flowing through the system. His research team uses the setup to examine some basic ideas about ecosystem stability. And I mean really basic things like the onset of chaotic behavior and limit cycles, things that should have been studied thirty years ago.
There was a NASA program called CELSS, Controlled Ecological Life Support Systems. It was supposed to address the question of long term life support environments for manned deep space missions, like a Mars mission, or a Moon base. They gave up on doing something like it for the Space Station, because it turns out to be a lot easier and cheaper just to supply things from Earth, but the farther out you get, the more the economics change. But, as nearly as I can tell, the CELSS program was cancelled a few years ago. I say “as nearly as I can tell” because there doesn’t seem to be a lot of information about the program cancellation, just a cessation of work. It’s as if it just died a lingering death through disinterest.
Then there is the case of Biosphere II. On the inevitable convention panel, I once heard a supposedly knowledgeable person explain that it failed because “as any engineer can tell you” concrete oxidizes as it hardens, and that sucked oxygen out of the air. So the Biosphere II designers were just stupid, you see. Anyone with any sense (like the speaker, I daresay) could have gotten it to work.
In fact, concrete does not “oxidize.” It does absorb carbon dioxide, however, and that was actually beneficial to the folks in Biosphere II. Because they’d put in a lot of soils that were high in organic matter, and the soil bacteria oxidized the organic matter to CO2. If there had been no concrete, they’d have had to put in CO2 scrubbers, because there was no way the plants in BII could have absorbed all the CO2, and CO2 is a toxic gas, lethal at above 5% concentration.
The real problem with Biosphere II is that it had never been done before. Things that have never been done before don’t always turn out to be easy; sometimes they’re downright difficult, and occasionally they are outright impossible.
I don’t think that self-contained habitats are impossible. After all, we live in one such habitat; it just happens to be really, really big. What we don’t know is how small one can make a habitat, and how much control you have to put on it to make it small. And when I say we don’t know, I mean that no one has any idea. None. Because, as I just said, no one has ever done it.
In my experience, people who want to colonize space are of the belief that habitats are the easy part; they spend all their imagination on new and spiffy ways to get into space and none on how anyone is going to live there. But if we could make self-contained habitats, they would have enormous benefits for living here on Earth. We could put people into deserts, rain forests, glaciers, swamps, under the ocean, anywhere, without running the risk of destroying the local ecology. Such a technology could be of enormous benefit. And once we have it perfected, then moving people into space becomes a much easier task, if they really want to move into space, as opposed to just leaving the Earth because we’ve made such a mess of it.
Monday, March 10, 2008
The Big Mo
Cosmic rays are pretty interesting, highly energetic charged particles from space. Most are protons, but there's about a 9% component of helium nuclei, and there are even some heavy nuclei like iron in the mix. Prior to the creation of really big particle accelerators, cosmic rays were the only way to study particles having energies above a GeV. The energy spectrum decreases with increasing energy, but some cosmic rays are way above a mere billion electron volts.
On October 15, 1991 a cosmic ray event was observed with an energy of 3 x 10^20 electron volts, i.e. 300 billion, billion electron volts, or about 50 joules. There have been a number of similar observations since, confirming the existence of particles so energetic that they must be of recent origin (in the astronomical sense). Otherwise, they would lose energy by interacting with the cosmic microwave background left over from the big bang. The first such particle discovered was dubbed the "Oh-My-God" particle, a joking reference to the nickname of the Higgs particle as "The God Particle."
All very cool, but that's not precisely what this essay is about. No, this essay is about energy and momentum.
Every article that I've ever seen compares the energy of Ultra High Energy Cosmic Rays (as they are called) to some macroscopic object, traveling at fairly low velocities. Science magazine writers are particularly fond of comparing UHECRs to a strongly hit golf ball. The Wikipedia article on them compares the energy to a baseball thrown at 60 miles per hour.
The thing is, people do not interact with macroscopic objects via their energy content. An object's momentum is what produces force when it encounters another object. If you are hit by a golf ball, and it bounces off you elastically, there need not be a lot of energy transfer, but the momentum transfer (and damage) can be substantial.
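A rough calculation makes the point. An ultra-relativistic particle carries momentum of about E/c, so taking nominal, assumed numbers for a strongly hit golf ball:

```python
# Back-of-the-envelope: energy vs momentum for the Oh-My-God particle
# and a golf ball. Ball mass and speed are nominal assumptions.
C = 2.998e8          # speed of light, m/s

uhecr_energy = 50.0  # joules
uhecr_momentum = uhecr_energy / C  # ultra-relativistic particle: p ~ E/c

ball_mass = 0.046    # kg, regulation golf ball
ball_speed = 70.0    # m/s, a strongly hit drive
ball_energy = 0.5 * ball_mass * ball_speed**2
ball_momentum = ball_mass * ball_speed

print(f"energies: cosmic ray {uhecr_energy:.0f} J vs ball {ball_energy:.0f} J")
print(f"momenta:  cosmic ray {uhecr_momentum:.2e} kg m/s vs ball {ball_momentum:.1f} kg m/s")
```

The energies are comparable, but the momenta differ by some seven orders of magnitude, which is exactly why the golf ball comparison misleads people about what being hit by such a particle would feel like.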
Once, on the inevitable science fiction convention panel, during the Q&A, we were asked what we considered to be the greatest scientific error common among the general public. My answer was "the confusion of energy and momentum."
Think of a movie like the Schwarzenegger vehicle Eraser, where the MacGuffin is an ultra-high velocity rail-gun rifle. The gun is shown as knocking people backwards, using the old "stunt wire" trick that movies love so much. The problem is that such a weapon would transfer very little momentum to the target (and would have very little recoil). What would happen is that the projectile would basically explode on contact with an interacting mass. To a lesser degree, that is what happens with high velocity rifle shells. Similarly, Hollywood used the stunt wire for practically all gun shots, often giving the impression that a handgun is really a "momentum pistol," like the one seen in Fritz Leiber's The Wanderer.
Meteor Crater in Arizona was originally thought to be of volcanic origin, in part because it was circular, and it was believed that a meteor would almost certainly come down at an angle, producing an elliptical crater. Eventually, experiments with high velocity projectiles confirmed that they produce circular craters from almost any angle.
A fellow on the old Compuserve Science Forum explained it by analogy to throwing a hand grenade. The grenade carries so much explosive energy that it overwhelms the momentum, so if you throw one into a sand box, it will create a circular crater, no matter what the angle it hits. In fact, the ratio of energy to momentum for a meteor is considerably higher than for a thrown grenade.
If you really want to get a feel for the energy of an Oh-My-God Particle, you should compare it to things in ordinary experience where the energy is important. Thus, the energy from such a particle would light a 50 watt bulb for one second. Or it would power a single flash from a mid-size photoflash, perhaps ten from a small flash attachment.
Fifty joules will raise the temperature of one gram of water about 12 degrees C, about 22 degrees F.
Or, if you want to stay with the moving mass analogy, how about propelling a mid-size automobile at a speed of about half a mile per hour, assuming you have a friction free environment and some perfect method of converting cosmic ray energy into the motion of a motor vehicle, and, these days, who doesn't have those lying around?
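These comparisons are easy to check. A minimal sketch, where the bulb wattage, water specific heat, and car mass are nominal assumptions:

```python
import math

EV_TO_J = 1.602e-19
energy = 3e20 * EV_TO_J  # the Oh-My-God particle, in joules
print(f"particle energy: {energy:.0f} J")

# Lighting a bulb
bulb_watts = 50.0
print(f"lights a {bulb_watts:.0f} W bulb for {energy / bulb_watts:.1f} s")

# Heating water: specific heat of water ~4.184 J/(g K)
delta_c = energy / 4.184
print(f"heats 1 g of water by {delta_c:.0f} C ({delta_c * 9 / 5:.0f} F)")

# Moving a car (frictionless, perfect energy conversion)
car_mass = 1500.0  # kg, an assumed mid-size car
v = math.sqrt(2 * energy / car_mass)  # from E = (1/2) m v^2
print(f"propels a {car_mass:.0f} kg car at {v / 0.447:.2f} mph")
```

With a 1500 kg car the answer comes out a bit over half a mile per hour; a heavier assumed car brings the number down accordingly.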
Tuesday, February 12, 2008
Observing
When I was seven, we had a Siamese cat. Actually, it was a kitten; the little idiot never made it to cat-hood. First he nearly drowned in the toilet, then he took to hiding atop the tire in the wheel well of the family automobile, with predictable results.
[In the previous paragraph, I’m engaging in either “blaming the victim,” which is usually thought of as a product of “identification with the aggressor,” or “reaction formation,” the covering of one emotion—sadness at the loss of a pet—with its opposite, or near opposite, in this case disdain. If I were to say that the kitten would be long dead in any case, given the life span of cats, I’d be “rationalizing.” Spending this much time analyzing my own reactions is an example of “intellectualizing.”]
In any case, one of the bonding events with the kitten was mediated through annoyance: he would jump up on my bed very early in the morning, like 4 or 5 A.M. and knead my chest while mewing to wake me up. It couldn’t have been hunger, because I didn’t feed him. Maybe he was just lonely.
One morning when he’d awakened me this way, I was intrigued by a pretty spectacular spectrum display on my bedroom wall. I investigated and it turned out that a shaft of light from the morning sun had gone through my aquarium before it hit the wall. The aquarium had acted like a prism, one with an internal reflection, in fact. Later I got some “pop sci” books on light and optics and read up on the subject.
Many years later, while flying home from college, I noticed some color on the cover of the book I was reading, which caught my attention because the cover was black-and-white. I knew that surface reflection of light is usually polarized, so I got out my Polaroid sunglasses and looked at the window of the plane. Sure enough, it showed spectral splitting of light, and the pattern looked like it was a strain pattern. A bit of reading later further informed me that looking at plastic strain via polarized light is an industrial testing procedure to check for defects in the plastic. The plane’s window pattern had been nice and symmetric.
I took further advantage of the sunglasses trick once when I was down in Los Angeles with my then-housemate Steve and some of his friends. Driving on 101, I noticed that the San Fernando Convergence Zone was clearly visible that day. The SFCZ results when air blowing in from Los Angeles meets air coming from the other direction from Ventura County. Such convergence zones are common features of air flow near mountains, in this case the Santa Monica mountains.
Because the air from LA is more polluted than the air from Ventura, the SFCZ has a clear demarcation, and it pushes air up above the nominal inversion height. I pointed it out to my companions, but several of them had to look at it through the polarized filter in order to see it. Polarization helps identify polluted air masses, because the fine particles exhibit surface scattering (Mie scattering) that is polarized. One of the people in the car said, “You know, I’ve lived almost my entire life in LA and I’ve never noticed that before.”
A while back, in the dressing room of Eastshore Aikikai, I noticed a circular spot of light on the floor. What caught my eye was the precision of the circularity. I looked up to the roof and spotted a small hole in a fan covering, and I realized that we actually had a pinhole camera in operation, a camera obscura. The reason why the light was perfectly circular was that it was an image of the sun. I’ve studied it since then and on partly cloudy days you can see the clouds move across the face of the sun. I suspect that if we had a better surface—smoother, whiter—it might be possible to make out sunspots.
Judging from its rate of travel across the floor, the camera obscura only operates for at most an hour a day, and I suspect it only does so for a few weeks or months per year. We'd only recently begun classes in the middle of the day, and only one day a week, Sunday. So it’s not surprising that no one has noticed it in the year we’ve been there.
The other variable is having someone there who might pay attention to a spot of light on the floor, and wonder why it was so round. I have no idea of the odds on that, other than to suspect that they’re not very high.
Sunday, February 10, 2008
Faster
Edison, being partly deaf, was somewhat more interested in sound than Einstein, who was more of a light man, as it were. Still, the speed of sound, as a principle, is mighty important; it just varies with a lot of things that were, to be fair, of interest to Einstein as well.
Sound propagates when atoms bump into each other, so it's important how fast the atoms can go, and the nature of the bumping. In solids and liquids, where molecules are sitting right next to each other, as it were, the forces between them (characterized by the elastic modulus) are the critical factor, as is the nature of the wave that is being transmitted. Molecular movement in solids is also quantized, with the pseudo-particle being the phonon, which represents the quantum levels of forces transmitted from one molecule to another.
The speed of sound (SOS) in gases depends on how fast the individual molecules of the gas are moving, since any individual particle must actually traverse the distance between it and the next particle for momentum to be transferred. So there we get into all sorts of cool things like ideal gas laws, heat capacities, and statistical mechanics, some of which Einstein did have in his thoughts.
In the simplest approximations, the speed of sound for a gas is determined by two factors, the molecular weight of the gas and its temperature. The speed of sound is always limited by the RMS (root mean squared) molecular speed; the two are related via a fairly simple relationship:
RMS/SOS = Sqrt(3/γ)
where γ is the heat capacity ratio of the gas, the ratio of its heat capacity at constant pressure to its heat capacity at constant volume. For a diatomic gas, γ is 1.4, so the ratio of RMS to SOS is about 3/2.
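For an ideal gas the two speeds are SOS = sqrt(γRT/M) and RMS = sqrt(3RT/M), so both scale the same way with temperature and molecular weight. A minimal sketch at room temperature:

```python
import math

R = 8.314   # gas constant, J/(mol K)
T = 293.15  # room temperature, K

# (molar mass in kg/mol, heat capacity ratio gamma)
gases = {
    "H2":  (2.016e-3, 1.41),
    "He":  (4.003e-3, 1.67),
    "N2":  (28.01e-3, 1.40),
    "CO2": (44.01e-3, 1.29),
}

for name, (m, gamma) in gases.items():
    v_rms = math.sqrt(3 * R * T / m)      # RMS molecular speed
    sos = math.sqrt(gamma * R * T / m)    # ideal-gas speed of sound
    print(f"{name:4s} rms {v_rms:6.0f} m/s  sound {sos:6.0f} m/s  "
          f"ratio {v_rms / sos:.2f}")
```

The nitrogen number lands near the familiar 340 m/s for air, and the light gases come out roughly three to four times faster, which is the whole story of why exhaust molecular weight matters so much below.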
In rockets, the oomph that any given propellant will give is limited by the velocity of the exhaust gases. So basically you want your exhaust to be very hot, with the lightest molecular weight you can manage. In Rocket Ship Galileo, Heinlein had his protagonists use zinc as the propellant (heated via nuclear reactor), and has one of them muse that he'd have preferred to use mercury. This is, of course, almost exactly backwards, and Heinlein did a better job later, in, for example, Space Cadet, where "monoatomic hydrogen" is supposedly used.
Monoatomic hydrogen would indeed be a good rocket propellant, pretty much the best possible, if you could use it. However, the temperature at which diatomic hydrogen (which is to say, hydrogen gas) dissociates into atomic hydrogen is mighty high, in the thousands of Kelvin, and would probably destroy any rocket nozzle that could ever be built. As I recall, Heinlein had tanks of monoatomic hydrogen on his ships, no doubt made out of unobtainium metal, with a bolonium catalyst to keep the hydrogen atoms from recombining.
Rockets are, as I've said before, a horribly inefficient method of travel, since conservation of momentum means that you're hurling huge masses of material out the back end, and it's taking most of your energy supply with it. In fact, the more "efficient" your rocket in terms of payload to fuel ratio, the higher the percentage of your energy supply is going into your exhaust stream.
Also, with chemical reactions as your energy source, you can't really use hydrogen as your exhaust gas, because it isn't the product gas of the energetic reactions you'd like to use, always assuming that you don't actually have tanks of monoatomic hydrogen lying around. MH would produce some pretty hot molecular hydrogen when it recombined, so that would work. Too bad about the world wide unobtainium shortage.
All the speed of sound issues apply to explosively driven projectiles, aka "guns," as well, though such projectiles are much more efficient than rockets, energetically speaking. Mass drivers of all sorts have the advantage of using the Earth as a big momentum sink, and when you use something that large to absorb the recoil, it doesn't get much of the energy in the bargain.
You can't generally use hydrogen and liquid oxygen in a bullet (though there are some cannon designs that do), so typical muzzle velocities are limited by the average mass of the molecules in gases like nitrogen and carbon dioxide. Those have greater masses and hence lower particle velocities than does water vapor, to say nothing of hydrogen.
But then we come to gas guns, where the projectile is driven by compressed gas. Sure, you usually can't get the pressures in a compressed gas cylinder as high as you get from an explosive, but you can then use hydrogen, or helium as the gas. Helium, being honestly monoatomic, has only twice the mass of a hydrogen molecule, so its RMS and speed of sound is still pretty fast, which is why you get a high pitched voice if you inhale helium.
If you use a compressed gas cylinder, you have what is called a "single stage gas gun," which rather demands an answer to what a "two stage gas gun" is, right? Ah, there it gets interesting. In a two stage gas gun, you use an explosively driven piston to ram the gas into the compression chamber. Then, when it reaches a nice, high pressure (and remember, it's also been heated via compression), it ruptures a perforated valve and slams into the projectile, which is then propelled out of the barrel of the gun. Some designs preheat the original gas as well; you can exceed the melt temperatures for parts of the device for brief periods of time, and gun shots are nothing if not brief.
Lawrence Livermore Laboratory has a nice two stage gas gun that can propel a projectile weighing 5 kilograms to 3 kilometers per second. There were plans in the early 1990s, to upgrade the thing and to use lower weight projectiles, which would reach 8 kilometers per second, and LLL wanted to try putting things into orbit with it. Instead, absent the $1 billion upgrade, they had to content themselves with firing the thing into a liquid hydrogen target, experimentally demonstrating the existence of the previously only theoretical metallic phase of hydrogen. And even without quite so lavish funding, they do seem to have managed to get up to the 8 km/sec range, albeit with pretty light projectiles.
Theory doesn't quite run out of oomph at 8 km/sec, however. As you go to higher and higher temperatures in hydrogen, you begin to get molecular dissociation. Heat your original gas hot enough, and compress it enough, and you can get a gas containing significant amounts of--wait for it--monoatomic hydrogen. I've seen a design document from The Rand Corporation on how to build one of those, and its theoretical top projectile velocity exceeds 10 km/sec. That's flirting with escape velocity and it's well over orbital velocity. It may also be getting close to the velocity necessary to compress inertial fusion materials to the point where a tritium-deuterium burn can occur, but that's a different essay, for another time.
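The velocity benchmarks are worth putting side by side. A quick sketch, using the figures quoted above and the standard values for orbital and escape velocity:

```python
def kinetic_energy(mass_kg, velocity_ms):
    """Kinetic energy in joules: E = (1/2) m v^2."""
    return 0.5 * mass_kg * velocity_ms**2

# The two-stage gas gun figures quoted above: 5 kg at 3 km/s
print(f"5 kg at 3 km/s: {kinetic_energy(5, 3000) / 1e6:.1f} MJ")

# Benchmarks for getting off the planet
orbital = 7.9e3   # m/s, low Earth orbital velocity
escape = 11.2e3   # m/s, Earth escape velocity
for v in (3e3, 8e3, 10e3):
    print(f"{v / 1e3:4.0f} km/s -> {v / orbital:.0%} of orbital, "
          f"{v / escape:.0%} of escape")
```

Which shows the shape of the problem: the demonstrated 8 km/sec is right at orbital velocity, and the theoretical 10 km/sec design gets to about 90 percent of escape velocity.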
Wednesday, February 6, 2008
Hot Buttered
Diacetyl (emphasis on the first syllable) is also called biacetyl (emphasis on the last syllable), and the latter is what we called it when I was working on the photooxidation of aromatic hydrocarbons a couple or three decades ago. Biacetyl, in fact, occupies an important place in the history of smog chemistry, though I have to admit the notion of "important" is open to interpretation.
There are basically four kinds of "reactive organics" that are important in smog photochemistry: paraffins, olefins, aromatics, and carbonyl compounds (aldehydes and ketones), the latter being more commonly formed in the smog process than emitted outright. I'm taking a bit of a liberty here by omitting alcohols, ethers, and other oxygenated compounds, partly because, ethanol and MTBE notwithstanding, they still don't amount to a large fraction of the mix, and partly because their photochemistry is pretty close to that of paraffins, or ketones that don't photolyze, i.e. break up by the direct action of sunlight.
The early days of smog chemistry were dominated by research into the chemistry of paraffins and olefins, so much so, in fact, that it wasn't until the mid-1970s that researchers realized that the photolysis of aldehydes and ketones was the primary source of catalytic radicals in the smog formation process. In fact, that was the biggest single difference between the first photochemical kinetic mechanism that I worked with, the Hecht-Seinfeld mechanism, and the later, Hecht, Seinfeld, Dodge mechanism. The former used oxygen atoms (from the photolysis of NO2) as its primary radical source, whereas the latter used formaldehyde and higher aldehydes to that purpose.
Both of these mechanisms were based on smog chamber experiments involving butane and propylene (or propene, if you're a nomenclature purist). Aromatics chemistry was tacked on as an afterthought, not because it was believed to be unimportant, but more because nobody had any idea what to do with it.
Aromatic hydrocarbons, as they are called, all have a "benzene ring" somewhere in them, and that makes everything very complex. Perhaps you remember the story about Friedrich August Kekulé literally dreaming up benzene's structure. Its formula is C6H6, and its structure "bites its own tail," so each carbon atom, with four chemical bonds, has, after accounting for the hydrogen, three bonds to share with its two neighboring carbon atoms. That could work out to two and one or one and two, i.e. a paraffinic bond with one neighbor and an olefinic bond with the other, but the wonders of quantum mechanics allow it to actually be one and a half bonds with each neighbor. Such are the wonders of quantum electrons being able to be in several places at the same time.
Benzene itself is almost dead, photochemically speaking; put it into a smog chamber and it mostly just sits there, making a little tang of phenol after a while, but phenol is deader still, so…boring.
But if you replace one or more of benzene's hydrogens with a methyl group (-CH3), now you're talking. One added methyl group gives you toluene. Two, and you get xylene, which comes in three isomers, meta, para, and ortho, depending upon whether the methyl groups sit right next to each other (ortho), on opposite sides of the ring (para), or one over (meta). There are also, of course, trimethylated benzenes, and compounds where the substituted groups are more complex than methyl groups. But actually, toluene and the xylenes make up the bulk of aromatic compounds in air pollution. There is even a refinery stream referred to as "BTX," which stands for benzene, toluene, and xylene.
Okay, so I'm going to tell you how the photochemistry works, then how it got figured out. The tricky part had to do with how the aromatic rings would open up. Everyone knew it had to happen sometime, but how, and what the products were was a mystery for years.
What happens to something like toluene in smog is that, when it encounters an hydroxyl radical (-OH), the hydroxyl adds itself onto the ring somewhere, usually at the carbon that sits next to a methyl group, because of the way that methyl groups mess with the electron distribution of the aromatic ring. This is what hydroxyls do with olefins, incidentally, so you can look on it as the hydroxyl briefly looking at the ring and seeing, not that "one and a half bonds" thing I mentioned above, but a double carbon-carbon bond, which hydroxyls just love to glom onto.
This breaks one of the carbon-carbon bonds, and one end of it now has a romantic relationship with the hydroxyl radical. But the other end, like a jilted lover, is on the rebound, ready to pick up with just about any pretty face that comes by. That face, almost always, belongs to oxygen, a really promiscuous molecule. It's diatomic (i.e. O2), but not so committed to the relationship that it passes up some good carbon bond action.
So an O2 gloms onto the other, lonely, carbon and you now have a peroxy radical, an aromatic ring with an oxygen tail. The radical characteristic of the thing tends to be concentrated at the free swinging tip of the tail, and in most peroxy radicals, that tip winds up reacting with some other molecule.
Not so with the aromatic peroxy radicals, however, because it so happens that the radical tip is just right for swinging around and hooking up with another carbon, somewhere else on the aromatic ring. You may now consider all of the other sexual double entendres that I could use for this situation.
Anyway, another oxygen now gloms onto the group, but now the situation is stable enough (maybe) so that it waits around for some outside compound (usually a molecule of nitric oxide—NO) to take the last lonely oxygen atom away from the daisy chain.
All the oxygens then decide to settle down with their new carbon best buddies. The oxygen-oxygen bonds call it quits, and that leaves another oxygen bond for each oxygen connected carbon. If you're counting, and remember that carbon only has four bonds to its name, this means that it has a double bond with an oxygen, one for either a hydrogen or a methyl group, and, whoops, only one left for another carbon in the aromatic ring. In short, the ring opens, in multiple places, once for each oxygen. At some point, the poor hydroxyl group, which is now the radical of the bunch, meets yet another oxygen molecule and the hydrogen leaves the party to form hydroperoxyl (HO2).
The aromatic ring is pretty much finished at this point, and it cleaves into at least two pieces, one with two ring carbons, the other with four. The one with four has, in addition to two oxygen atoms, an olefinic bond (there was some belief for a while that the fragments might all have two ring carbons each, meaning that there would have been another oxygen molecule bridge on the ring, but later product yield measurements indicate otherwise).
Both ring fragments are called "dicarbonyls" because they each have two carbonyl (C=O) bonds. In one of the fragments, the two carbonyl bonds are right next to each other.
The simplest dicarbonyl is called "glyoxal." It's just H(C=O)(C=O)H. The next one is methyl glyoxal, with a single added methyl group: H(C=O)(C=O)CH3. Both of these are very hard to measure; they tend to stick to gas chromatographic columns nigh onto forever.
Ah, but the next in line is a dicarbonyl with two methyl substituents: CH3(C=O)(C=O)CH3. This is called biacetyl, or diacetyl. And it comes through a chromatographic column.
If you photooxidize ortho-xylene, with its two adjacent methyl groups, when the ring opens, a certain percentage of the time you get biacetyl. A group at the University of California at Riverside (Darnall, Atkinson, and Pitts, 1979) saw the biacetyl coming off of their chromatograph and realized that they had seen the first evidence of ring opening products.
It so happens that both biacetyl and methylglyoxal photolyze like crazy, so much so that they last only a few minutes in sunlight before splitting into radical fragments. I had been looking for something exactly like these dicarbonyls in my own studies of aromatics photochemistry, because I'd found good evidence of very powerful radical sources in toluene experiments. My calculations indicated that the radical formation rate from toluene was twice what it would be if toluene were going to pure formaldehyde, which of course it does not. It forms a significant amount of methyl glyoxal, and that was what I was looking for.
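To put a rough number on "last only a few minutes," the lifetime of a photolyzing compound is just the reciprocal of its first-order photolysis rate constant. The rate constant below is a made-up illustrative value chosen to give a roughly five-minute lifetime, not a measured one:

```python
import math

# First-order photolysis: d[C]/dt = -j*[C], so the e-folding lifetime is 1/j.
# This j is a hypothetical illustrative value, not a measured rate constant.
j = 3.3e-3  # photolysis rate constant, 1/s

tau_minutes = (1.0 / j) / 60.0
remaining_10min = math.exp(-j * 10 * 60)  # fraction left after 10 min of sun

print(f"lifetime ~ {tau_minutes:.1f} min; "
      f"fraction remaining after 10 min: {remaining_10min:.2f}")
```

A compound with a lifetime like that never builds up much in sunlight; it converts almost as fast as it forms into radical fragments.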
Later, I heard that biacetyl/diacetyl was used to flavor margarine; I also heard that microwave food products use excess flavoring agents because the microwave heating process drives the volatiles away faster than regular cooking.
I had some vague suspicions that it might not be a good idea to use a compound as photochemically unstable as biacetyl in food. Light causes biacetyl to break into two pieces, both acetyl radicals, and when there is any oxygen around, you get peroxyacetyl radicals. Add some nitrogen dioxide and you get peroxyacetyl nitrate (PAN), which is biologically active. Actually, it's a good bet that any given peroxy compound is biologically active. These are some pretty potent radicals.
So then we see a story about the guy who loved the buttery smell of microwaved popcorn and got a rare lung disease, bronchiolitis obliterans. More to the point, "popcorn lung" has been added to the list of industrial diseases affecting production workers.
All I had were a few suspicions, of course. Nothing to go on, really. But I can't say that I'm surprised in the slightest.
Sunday, February 3, 2008
Objective
After we came out of the church, we stood talking for some time together of Bishop Berkeley's ingenious sophistry to prove the nonexistence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it -- "I refute it thus." -- Boswell’s The Life of Samuel Johnson
“Reality is that which, when you stop believing in it, doesn't go away.” -- Philip K. Dick
In Stranger in a Strange Land Jubal Harshaw, as a demonstration, asks one of his secretaries the color of a neighbor’s house. She answers “It’s white on this side.” The idea was that she was a “Fair Witness,” a person with special training who didn’t make assumptions about her observations, so her testimony was given special credence in a court of law.
Sometime when I was in grade school, living on Ironwood Drive in Donelson, Tennessee, I was witness to an unusual atmospheric phenomenon. There was a very low cloud overhead; I think it may have been a contrail cloud from the relatively nearby airport, because the cloud was long and narrow. It was otherwise clear, and near sunset.
We all know how vivid the sunset can be in the last few minutes of light. This cloud picked up the neon pink of the last rays of sun, but unlike most sunset clouds, this one was close. The whole neighborhood lit up with that light. My hair became red; my skin looked dark and sunburned. Our house glowed electric pink.
Our house was actually encased in white asbestos shingles. But for a few moments it was pink—at least on the side that I could see. Truth to tell, though, for me to say that it would have also looked pink on the sides I couldn’t see would have involved fewer assumptions than Heinlein’s “Fair Witness” was making.
Is this a cheap shot at Heinlein’s expense? I hope not. I’ve seen climate researchers Spencer and Christy refer to their satellite microwave measurements as “direct observations” of atmospheric temperatures, when they most assuredly are not, given that there have been over half a dozen “corrections” to their estimates since they were first published. They are hardly alone in this sort of scientific conceit; I’ve heard such claims many times over the years, as well as researchers referring to various chemical rate parameters (often photolysis rates) as being derived from “first principles,” another nigh onto meaningless phrase used to cloak a welter of assumptions and models of reality.
“What is reality?” appears in a Firesign Theater record as part of a series of audience heckles, and that’s what it often feels like. What we have to work with is subjective experience, which is then denigrated to “mere” subjective experience. In Zen and the Art of Motorcycle Maintenance Pirsig has a nice long exposition on why words like “just,” “merely,” and “only” are out of place in any descriptions of objective reality, including science. They are indicators of a sneaky, subjective value judgment that someone is trying to slip into the mix. Chemistry isn’t merely very complicated physics. Chemistry is very complicated physics. The second sentence reads differently, doesn’t it?
We have a number of tried-and-true methods of “factualizing” subjective experience and most of them have to do with repeated observations, especially different kinds of observations. We believe in the “reality” of a rose because we can see it, touch it, smell it, taste it, and even hear it if it is moving through the air. Things that register on all the senses are commonly thought to be “more real” than something that can only be seen, such as a rainbow.
Objects also are given greater claim to objective reality if they persist, since persistence is one of the ways a single observer can make multiple observations. Objects made of matter have greater weight because they have weight, which persists, and can be felt.
Science takes everyday observations of reality and gathers them together into grand theoretical constructs, like Universal Gravitation, the Standard Model, and Evolution by Natural Selection. Scientific theories make sense of the world, allowing us to make predictions, or construct gizmos (in the largest sense) that give us power over the material and immaterial worlds. As Lester del Rey once said, “Mysticism has been around for millennia, science for only centuries. Science is ahead.”
The danger is in forgetting that our ideas about reality are themselves constructs. We believe that there is a reality, but no one has it on a leash, and no one speaks for it. The danger itself factualizes when someone projects their own subjective needs, fears, and desires upon that construct, making it yet another servant to the unconscious mind. We’re all guilty of that to some extent; paradoxically, it’s the ones who claim to most serve “reality” who are most likely to make their own ideas into yet another simulacrum of God. Then just crank up the dial to eleven, ‘cause it’s time for another episode of Monsters from the Id.
Wednesday, January 30, 2008
Knock Knock
Still, Suck, Squeeze, Pop, Fooey. In the Suck (intake) stroke, the piston moves out from the cylinder head, pulling in external air in the case of diesel engines, or an air/fuel mixture in the case of gasoline engines. Both diesels and modern gasoline engines use fuel injection, but the diesel engine doesn’t do the injection until the top of the compression stroke.
For the Squeeze (compression) stroke, the intake valve closes and the piston rams the column of air/fuel toward the cylinder head. That compresses the air and heats it up. Compression ratios for gasoline engines go from about 10:1 to as high as maybe 18:1; for diesels, it’s more like 25:1, and diesels have to be much more ruggedly constructed to avoid being damaged by the higher pressures and temperatures.
At about the top of the stroke a spark plug triggers the ignition of the air fuel mix in a gasoline engine; in a diesel, the fuel is injected at high pressure, and ignition occurs because the air is already hot enough to ignite the fuel. The increase in temperature and pressure in both engines then pushes the piston away from the head. That’s Pop, or the power stroke.
Once the piston has reached its limit, the exhaust valve opens, and the final stroke (Fooey or exhaust stroke), clears the combusted gases from the system, which is now ready to start all over again.
All well and good. But it turns out that things don’t always work so well on the compression/ignition side of things for the gasoline engine. Because gasoline is easier to ignite than diesel fuel, sometimes the heat of compression alone will ignite the air/fuel mixture on the compression stroke, before full compression is achieved. That’s bad, because then some of the engine power winds up fighting itself, which reduces efficiency. Moreover, it puts more strain on the engine parts, and can damage the engine.
You could just back off on the compression when this sort of thing occurs, but then you’re also reducing efficiency, because lower compression ratios mean lower peak temperatures for your heat engine, and thermodynamics always wins in the end. So typically, you tune an engine to as close as you can get to the pre-ignition point.
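The trade-off can be sketched with the ideal Otto-cycle relations from any thermodynamics text (an idealization, not a model of any real engine, with the charge treated as ideal air): adiabatic compression heats the charge as T2 = T1 * r^(gamma-1), and ideal efficiency rises with compression ratio as eta = 1 - r^(1-gamma).

```python
# Ideal Otto-cycle sketch (textbook idealization, not a real engine model):
# higher compression ratio -> hotter compressed charge and better efficiency.
GAMMA = 1.4  # heat capacity ratio of air

def compression_temp_K(t_intake_K, r):
    """Charge temperature after adiabatic compression by ratio r."""
    return t_intake_K * r ** (GAMMA - 1)

def otto_efficiency(r):
    """Ideal Otto-cycle thermal efficiency at compression ratio r."""
    return 1.0 - r ** (1 - GAMMA)

for r in (10, 18, 25):
    print(f"{r:2d}:1  T after compression ~ {compression_temp_K(300.0, r):4.0f} K, "
          f"ideal efficiency ~ {otto_efficiency(r):.0%}")
```

At a 25:1 ratio the compressed air alone comes out around 1100 K in this idealization, hot enough to ignite injected diesel fuel, which is why diesels need no spark plugs; the efficiency numbers show what backing off compression costs you.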
Pre-ignition is also called “knock,” and it’s why we have “octane ratings” for gasoline. The name derives from an isomer of octane, 2,2,4-trimethylpentane: a fuel’s octane number is the percentage of this octane isomer in a mixture with n-heptane that matches the fuel’s resistance to knocking, with n-heptane defined as zero and the isomer itself as 100. The octane isomer has a good ability to resist premature detonation of an air fuel mix.
Real fuel mixtures are much more complex, of course, and the octane rating isn’t just a summation of all the individual components of the fuel. Instead, each component of gasoline has a “blending number” that better describes how it changes the octane rating.
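A minimal sketch of how a blending-number calculation works, assuming the simplest possible linear volume-weighted rule; the component names and numbers here are invented for illustration, and real blending behavior is considerably more complicated:

```python
# Toy volume-weighted octane blending. Component fractions and blending
# numbers are hypothetical illustrations, not real refinery data.
def blend_octane(components):
    """components: list of (volume_fraction, blending_octane_number) pairs."""
    total = sum(frac for frac, _ in components)
    if abs(total - 1.0) > 1e-9:
        raise ValueError("volume fractions must sum to 1")
    return sum(frac * number for frac, number in components)

# Hypothetical pool: a base stock, a reformate, and a high-octane booster.
pool = [(0.70, 85.0), (0.20, 100.0), (0.10, 115.0)]
print(f"blended octane ~ {blend_octane(pool):.1f}")
```

The point of the blending number is that each component gets a single figure that works in this kind of weighted sum, even when the component's measured octane rating by itself would not.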
Then there are “octane boosters,” things that are added to gasoline specifically to bring up the octane rating, despite your having put a lot of other low-octane trash into the fuel.
As higher compression IC engines began to really move in the 1920s, the need for octane boosters became apparent. Previously, when high compression engines were primarily for motor racing and aviation, specially blended fuels were used, but mass markets meant mass solutions.
There were two hydrocarbon octane boosters that were first suggested for fuels: alcohol and benzene. Alcohol was the better of the two. Benzene required almost 40% in the fuel to really allow for high compression engines; ethyl alcohol, only 20%. For a while, it looked like the fuel of the future was “Ethyl,” meaning ethyl alcohol.
But then research showed that a number of inorganic elements could reduce engine knock. Iodine and selenium were too corrosive, but lead did the trick. Eventually, tetraethyl lead (TEL) was developed, and it had the additional advantage that it was patentable, and thereby under corporate control for corporate profit. At first, TEL was blended in with gasoline at garages, or by the motorists themselves, but that wound up with a few too many cases of lead poisoning. After that, it was done at refineries, where it also produced lead poisonings, but those could be hushed up better. It also helped that the public health services worked to suppress the idea that there was a danger.
In other countries, particularly European countries, TEL had something of an uphill battle, because ethanol production was tied to farm policy. But with the weight of the U.S. Government behind it (and then, as now, U.S. foreign policy was at the disposal of those making money), TEL became the octane booster of choice.
Time passed and a lot of airborne lead got emitted into the environment. Fact is, tailpipe lead was in the form of very fine particles that stayed suspended for very long periods, under the right circumstances. Those circumstances were common enough so that detectable amounts of lead wound up even in the Arctic.
Then, in the 1970s, California passed some very tough clean air laws, and suddenly, automobile manufacturers were having trouble meeting them. In fact, the only way to meet them seemed to be to install catalytic converters on automobiles. (Actually, there was a while when lean burn engines such as the Honda CVCC could still meet the California regs, but, I mean really, you couldn’t hold Detroit to standards that the Japanese could meet, could you?).
Lead is toxic to people, but that’s nothing to the way it poisons catalysts. A single tankful of leaded gasoline would reduce a catalyst’s efficiency by more than 50%. So unleaded fuel was born (fun fact: in Mexico, unleaded fuel is called Magna Sin).
The oil industry fought it, but maybe not as much as you’d think. I suspect that what they were doing was to manage the changeover, and to profit from it as much as possible. And they did profit, largely because the elimination of lead created a squeeze on refining capacity, and any time there is a capacity squeeze in the industry, profits increase, owing to the magic of inelastic demand. Sell less, make more money. Such a deal. They also got to squeeze out some independent refiners and distributors when expensive regulations took effect.
But the industry was also working on alternative octane boosters, again ones that weren’t ethanol, because, well, ethanol is evil, isn’t it? I mean, after all, demon rum.
Anyway, in the nick of time, they began producing MTBE, another oxygenated hydrocarbon, an ether instead of an alcohol, and it had all the good aspects of ethanol, with the added benefit (from an oil industry perspective) that it was made from natural gas.
Oxygenated fuels like ethanol and MTBE also have some interesting combustion characteristics in that they reduce the amount of carbon monoxide (CO) and nitrogen oxides that come from automobiles before the catalysts warm up (after they warm up you don’t even get enough CO to kill yourself in a closed garage). So some localities, like Denver, had been mandating oxygenated fuels in winter, in order to reduce their CO problem.
Then MTBE began to leak into the water supplies of some cities.
Refinery operations are a lot more sophisticated now than they were in the 1920s, and can generally turn almost anything into almost anything else – for a price. The oil industry has also become pretty good at using whatever comes their way, be it hurricanes, environmental regulations, or war to their advantage. I knew that the cheap oil prices in the late 1990s were transient and that there would be a big windfall coming, though I had no idea it would be built on so much blood. Even so, I didn’t put any money into oil stocks, because it just seemed like bad karma, and I can be such a prig sometimes.
Wednesday, January 16, 2008
Speech to the Creationists
I’ve heard it said that one of the ways of classifying people into two groups is between the “educational” and “adversarial” views. In the educational view, someone is less concerned with whose side everybody is on, and more concerned with whether or not everyone understands what the issues are about, and what the facts of the matter are. For the adversarial view, facts and understanding are not so important. If someone is on your side, they don’t need to know the facts, and if they are against you, you don’t want them to know the facts.
That may sound like a loaded distinction, and it may be, given that my own orientation is decidedly educational, but I do admit that there are times when the adversarial view is useful, like during a war, or in a court action. A lawyer doesn’t do well if everyone understands what the case is about but he loses, and a soldier cares even less.
Nevertheless, I’m going to go for the educational view here; I think it’s important for you to understand what the real issues are, what the real facts are, and what I think is important. How you then deal with that information is up to you.
So I want to clear up some misconceptions, and let me also say at the outset that many of these misconceptions are common to both sides of the evolution debate, so I’m not saying “Nyah, nyah, nyah, you’re ignorant and we’re educated.” I am going to be saying that the real issues are not what you think they are, but by the same token, I don’t think that the real issues are what most people think they are.
Take the word “evolution” or the words “theory of evolution.” Most people use those words as shorthand for “Darwin’s theory of evolution by natural selection,” but that’s both oversimplified, and it misses important historical facts, so let me review a few of those.
A couple of hundred years ago, as the science of geology was laying down its foundations, one of the things that kept hitting people who studied rocks and such was how old so many things seemed to be. To take a trivial example, stalactites that grow from the mineral calcite grow very slowly, yet there are caves that have very large such structures. If you look at the current rate for stalactite growth and compare it to those big growths, you wind up with an estimate that it took millions of years for them to grow. There are, incidentally, types of stalactites that grow more quickly; in fact, icicles are a kind of stalactite, and there are others that rapidly grow from gypsum, but no one has ever found a fast-growing calcite stalactite, nor has anyone ever demonstrated a way to grow one quickly, because the process seems to be intrinsically slow.
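The age estimate is simple division: length over growth rate. A back-of-the-envelope sketch, where both the stalactite length and the growth rates are assumed illustrative figures rather than measurements of any particular cave:

```python
# Stalactite age estimate: age = length / growth rate.
# Length and growth rates below are assumed for illustration only.
length_mm = 5_000.0  # a large, roughly 5-meter calcite stalactite

for rate_mm_per_yr in (0.1, 0.01, 0.005):
    age_years = length_mm / rate_mm_per_yr
    print(f"at {rate_mm_per_yr} mm/yr: ~{age_years:,.0f} years")
```

Even at the fastest of these assumed rates the answer is tens of thousands of years, and at the slow end it reaches a million; that is the flavor of argument that convinced the early geologists of deep time.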
There were a lot of these sorts of things that were found almost as soon as geology became a science, things like sedimentation rates, weathering by water and wind, and so forth. Later we found things like the rate at which the very ground beneath our feet slowly moves, which, over time, creates mountains, buries sediments, and moves the continents around. We also found things like radioisotope dating that corroborated some of the other geological age estimates, and often even extended them, taking our estimates of the ages of the oldest rocks into the billion years range.
Now it should be noted that most of the early geologists were religious, Christians even, but they weren’t what is called Biblical literalists. They didn’t believe each and every word in the Bible was true, and educated men hadn’t really done so since at least the days of St. Thomas Aquinas, who recognized both that the world is round, and that a literal interpretation of the Bible would pretty much require that it be flat. That had been argued by the Egyptian monk Cosmas Indicopleustes based on the Biblical references to the Earth’s four corners, and the fact that there were evenings and mornings on each reported day of creation, when a round world always has a morning and evening somewhere, and “day” depends on where you are on the globe. So the early geologists were willing to say that the “days” of creation couldn’t be literal, and couldn’t be just 24 hours long.
In any case, the Earth looked to be substantially old to the geologists, and they had no particular problem with this, at least not on the grounds of religious doctrine. Furthermore there seemed to be a lot of strange bones in among the rocks.
Now some of those bones were just that, bones. They were found in places like the La Brea tar pits, or ancient peat bogs, or even in the Siberian tundra, where we’ve found completely frozen dead animals tens of thousands of years old. Some of the bones, though, were even older, so old that they’d turned into stone, by what looked to be a process similar to the one that produces those stalactites, where dripping water slowly replaced the original bone with rock.
But what really got everyone’s interest was that the bones they found didn’t look like the bones of known animals. Some of the critters in the tar pits looked like big cats with huge, I mean, really big, teeth, that came to be called “saber toothed.” Some looked like really small horses. And some of the bones that had turned into stone were so big that the animals could never have fit into Noah’s Ark.
That was one of the theories of the time, as you might expect, that the bones belonged to creatures that had died in the Flood. They had to abandon Biblical literalism for that, though, since the Bible says that God told Noah to get males and females of “every living thing of all flesh,” not “every living thing except the dinosaurs and trilobites.”
Then too, a lot of the fossils that they found were fish, fish that probably wouldn’t have minded the Flood too much. They also kept finding one set of creatures in one rock formation, but if you went deeper, you’d find a much different set of creatures. No one really believed that sedimentation from a single event, be it the Flood or something like it, would also do a big sort on everything so that all the little horses and sabertooths floated to the top, while the trilobites went to the bottom and T. Rex wound up in the middle.
No, the fossils came in groups that were separated by geology, and the geologists figured that that was because they were separated in time. The animals that formed the fossils lived and died at different times, and those that lived at the same time wound up in the same geological formation and those that lived at other times wound up in different formations. And yes, every now and then two geological formations would get jumbled up, the same way that when you knock all the books off the shelves, they aren’t in alphabetical order any more. But for the most part, they were separated.
Now as I said, different kinds of animals seemed to be in different times, and somebody had to figure out what to make of that. Realize, also, that while all this was happening, Europeans were fanning out across the globe, and periodically they’d stop off on an island, replenish their supplies, accidentally lose a dog, pig, rat or two, and move on. Then, sometimes, they’d come back later to discover that the island had “gone to the dogs,” as it were, and oops, you didn’t have any Dodos anymore. In other words, they discovered that species of animals can go extinct. And it occurred to people that, if species were going extinct, eventually we’d run out of species, unless there was something that replenished them.
A while earlier, that wouldn’t have been a problem, but there were some biologists who’d overturned the idea of “spontaneous generation,” the idea that animals regularly appear spontaneously out of mud, or rotting meat, or whatever. Some biologists had looked carefully at the mud and saw the eggs that had been laid there, or they kept the rotting meat in a closed container, and saw that no fly larvae came out when you did that. So biology got this idea that “like creates like” or “like comes from like” and that put it in opposition to the facts that seemed to be coming out of geology, where different things kept appearing.
Thus came the “theory of evolution.” There was no mechanism, just the idea that, somehow, over time, like didn’t produce like, but rather, some organisms, some part of a species, could slowly evolve into something else.
Then came all sorts of “theories of evolution.” Darwin’s mechanism was only one of them. Some believed that evolution occurred by animals “striving” to become better, and that in the striving, some of the things that they acquired, like stretched necks in giraffes, would be passed on to their offspring. There is a theory called “Panspermia” that holds that all evolutionary changes are preprogrammed by a rain of genetic material from space. There were even what could be called “theories of devolution,” which hold that, for the case of human beings at least, the original species was much more advanced than we are, and we are a sort of degenerate version. This is more or less what Disraeli was saying when he said, “Is man an ape or an angel? I, my lord, am on the side of the angels.” Given that Disraeli couldn’t fly, or work other miracles, he seems to have been at the most something of a devolved angel, don’t you think?
What Darwin suggested was that a plant or an animal that was a bit better suited to its environment than its neighbor would probably have a few more offspring than its neighbor, and that, over time, whatever it was that made it better suited would become more and more prevalent. Then he further went on that, over time, entire species might change, or sub-populations of a species might drift off from the rest of the species and become a new species in its own right. Of course there has been about a hundred and fifty years of thinking, observation, research, and modifications to this, and it’s a big subject, so big that I’m not going to try to go into it here. I will say that the theory of natural selection, as it’s called, is the cornerstone not just of modern evolutionary biology, but also of microbiology, biochemistry, biogenetics, paleontology, and a host of other scientific disciplines. I said earlier that I don’t care that much what side you’re on, and I don’t, but I will say that if you want to have anything to do with any of the related scientific fields, if you don’t know how the theory of natural selection works, you’re out of luck. You might as well try to get a job in a library without knowing how to read.
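The arithmetic behind “a few more offspring” compounds surprisingly fast. Here is a deliberately simplified toy calculation (a haploid population with two variants and made-up fitness numbers, not a model of any real organism) showing a variant with a 2% reproductive edge going from rare to dominant:

```python
# Toy selection model: two variants, A with a slight reproductive edge.
# Fitness values and starting frequency are invented for illustration.
def next_freq(p, w_a, w_b):
    """Frequency of variant A after one generation of selection."""
    return p * w_a / (p * w_a + (1 - p) * w_b)

p = 0.01                # variant A starts at 1% of the population
w_a, w_b = 1.02, 1.00   # A leaves 2% more offspring per generation

for gen in range(1001):
    if gen % 200 == 0:
        print(f"generation {gen:4d}: frequency of A = {p:.3f}")
    p = next_freq(p, w_a, w_b)
```

Within a few hundred generations the once-rare variant is the overwhelming majority; nothing “strives,” it is just compounding arithmetic.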
But there is another thing that I want to say here, and that is about some things that Darwin never said. They nonetheless have Darwin’s name attached, and that, in my view, is a tragic misunderstanding. What I’m talking about is what is called “Social Darwinism.”
We’ve all heard the phrase “survival of the fittest” and it’s usually applied to Darwinian natural selection, but in fact, Darwin didn’t invent the phrase, and it was not originally applied to animals, it was applied to corporations in 19th Century Great Britain.
It’s not uncommon to try to apply lessons from one field of learning to another, but it’s often a mistake. When Isaac Newton formulated the laws of motion and universal gravitation, he created an elegant theory that seemed able to predict the motion of the Moon, Earth, and planets for all time. And some people took this to mean that everything could be predictable, even the affairs of men. So we got what has sometimes been called the “Clockwork Universe,” the idea that everything is predictable. More recently, science has pretty well demolished that idea, both with quantum mechanics and with what is called “chaos theory,” but I imagine that most of you will join me in a little chuckle at the expense of anyone who ever looked at human affairs and failed to see the inherent chaos there.
In any case, the 19th Century had a lot of misunderstandings in it. There had been a theory of economics put forward by a fellow named Adam Smith in the same year as the American Revolution, and he referred to the “invisible hand” of the market. Some people in the 19th Century, and, sadly, even today, mistake this invisible hand of the market for the invisible hand of God, to very bad results. Some of these people were in charge of the policy that had Ireland continue to export grain during the Irish potato famine. That was in the 1840s, and a couple of million people starved to death. No doubt had it been some years later, after the 1859 publication of Darwin’s book On the Origin of Species by Means of Natural Selection, they would have cited Darwin as well as Adam Smith. But we all know the truth, don’t we? They just hated Irishmen, and the theories were just an excuse to let them starve.
What is called Social Darwinism actually began with the work of a man named Herbert Spencer, who believed that society was a struggle among individuals and that there was a “social evolution” that was equivalent to Darwin’s biological evolution. Actually, the ideas were even older, dating from a fellow by the name of Malthus, who did have some influence on Darwin as well, but the social stuff was all from the 19th century Victorians, who were looking for any excuse to justify their colonial empire. Plenty of people came to believe, because it was so comforting to their view of the world, that social evolution was the same thing as biological evolution, and that a person’s ranking in society reflected their rank in the grand evolutionary scheme of things, or as it was called, “the great chain of being,” another phrase that greatly predates Darwin. In this view, successful people, that is, the rich, the well-educated, the aristocratic were “more fit”, while people who were poor and uneducated were somehow “unfit.”
Well, when you make a mistake this big at the beginning, it just gets worse and worse. Darwinian natural selection talks about offspring, and it’s a general fact that poor and uneducated people have more offspring than do the rich and successful. In Darwinian terms, that would seem to make them “fitter.” Alternately, and this is my view, it says that social standing and wealth are irrelevant to evolution and vice versa.
You might think that this contradiction of the “fitter poor” would bring the idea of Social Darwinism into doubt, but the Social Darwinists weren’t having any of it. The fact that the poor were outbreeding the rich was taken as an indication that we just weren’t being harsh enough to the poor, or that we’d allowed the creation of civilization to get in the way of some biological imperative. The result of that thinking produced what came to be called the Eugenics Movement. In its saner moments, the Eugenics Movement merely advocated policies designed to get the well-educated to have more children. Unfortunately, the moments that weren’t so sane were more numerous, so we had advocates of brutal policies like laws against the “mixing of races”, the sterilization of “genetic inferiors,” various forms of discrimination and strange racial theories, and even outright genocide. We mostly managed to avoid the last one in this country, but the other policies were a matter of law for many decades at the beginning of the 20th Century.
And even now, one of the pitches that is made for the genetic engineering of human beings is that we could somehow “improve” people genetically, without anyone really knowing what that means.
And I mean that. Nobody knows what “genetically inferior” really means, because a person’s genetic makeup interacts with the environment, and what is “fit” for one set of circumstances may well be “unfit” for others. So it’s not something that you can establish from the outset. If aliens came down in spaceships and began to “intelligently design” a human being, the result would depend entirely upon what purpose the aliens had for humans and the environment that the humans were meant for. Frankly, I doubt that aliens would do a very good job of it, at least not from our perspective.
But people who have enjoyed worldly success want that success to be total and intrinsic. It’s often not enough for them to be rich and successful; they want to believe that it’s because of their basic virtue, that they are just plain better than other people. There are, in fact, some religious doctrines that hold worldly success to be the outward manifestation of inner virtue and godly grace. And if some people can enlist their ideas about God to justify themselves, it’s not very hard to imagine that some people, sometimes the same people, think that science will do that as well.
So let me say, in conclusion, that if some magic leprechaun were to give me a single powerful wish that could be used to eradicate either Creationism or Social Darwinism, I’d get rid of Social Darwinism, because it has caused much greater harm than Creationism in this world. The idea that the day-to-day struggle for a decent life is part of some grand evolutionary struggle is pernicious at its core, and it does great harm. So if you see your fellow man in some distress, it’s okay to help them out. You don’t have to take every advantage at every step. Kindness is still a virtue; compassion does not harm the human race.
And Darwin is not your enemy, nor is evolution. We all know that we have an animal nature, but it need not define us. Disraeli may not have been an angel, nor are any of us, but it is not a bad thing to consider how you’d expect an angel to act, and maybe aspire to act like one every now and then.
Thursday, January 10, 2008
Surfaces
I didn’t think that should let us off the hook. What kind of surface effect was it? How did it behave? And were we absolutely sure that such effects didn’t occur elsewhere?
Eventually I wrote a paper, “Background Reactivity in Smog Chambers.” Google scholar tells me that it’s been cited at least 17 times, as recently as last year, so it did okay for a paper published 20 years ago.
In the 60s and 70s, there were a lot of smog chamber experiments done on all sorts of individual compounds; there was a belief that one could produce a “reactivity scale” that would let you target for reduction those things that had the most smog-forming potential. As the complex nature of smog chemistry began to dawn on people, such experiments became less common, because “reactivity” has multiple components (sometimes 2 + 2 = 6 in smog chemistry), making the development of a single scale problematic. There’s a fellow at SAPRC in Riverside, Bill Carter, who has developed a much more complicated way of estimating “incremental reactivity,” which has its own problems, but it’s better than “one size fits all.”
Anyway, one of the “pure compound” experiments involved methyl chloroform, and I found it fascinating.
Methyl chloroform is also called 1,1,1-trichloroethane. If you start with ethane (CH3CH3) and replace all the hydrogens on one methyl group with chlorine, you get methyl chloroform. It’s pretty unreactive stuff; the only reaction sites for hydroxyl radicals are the hydrogens on the remaining methyl group, and methyl hydrogens are bound pretty tightly. So for the first part of the chamber experiment, using very high concentrations of MCF with some added NOx, the thing just sat there.
Then, after a couple of hours of induction, something began to happen. The NO began to convert to NO2, some of the MCF began to decay, then suddenly, wham! The whole system kicked into high gear, NO went down like a shot, the MCF began to oxidize like crazy, and ozone began to shoot up. Then, just as suddenly, the ozone just disappeared, all of it, in just a couple of measurement cycles.
Everyone who looked at it said, “Ah, chlorine chemistry,” which was a safe guess. Chlorine will pull hydrogen off of even methyl groups with almost collisional efficiency (if a chlorine atom hits the molecule, it pulls off the hydrogen almost every time). Moreover, chlorine atoms destroy ozone; that’s the “stratospheric ozone depletion” thing.
But I was puzzled. Where did the chlorine atoms come from? Yes, there was plenty of chlorine in the MCF, but it was bound. To get one off, you need to create a free radical, and those ain’t cheap. If you create an HO radical, that can pull off one of the hydrogens, and that, after the usual reactions, gives you chloral, a trichlorinated version of acetaldehyde. Put a high enough rate of photolysis for chloral into your simulation and you can get the whole system to react.
The problem was, it didn’t look right. With a high rate of photolysis for chloral, the simulation kicked off too quickly. Lower the rate and you never got the sudden takeoff. I’m pretty good at fitting the curves, and I could never get it to work.
So I started looking at the other actors in the system. The end result of chloral oxidation is phosgene (see why I was looking up all those post-WWI gas papers?), but phosgene itself didn’t fill the bill. So maybe the phosgene was converting to CO and Cl2 on the chamber surfaces like it does in someone’s lungs. No, that didn’t work either.
I kept returning to the problem over the years, trying yet another idea, each time getting no further.
In 1985, the “ozone hole” over the Antarctic was reported, and everyone in the stratospheric ozone community, including Gary Whitten, my boss at SAI, immediately suspected that it had something to do with the ice clouds that only form in the stratosphere over the Antarctic. In 1987, Mario Molina published a series of papers describing the surface reactions of stratospheric chemical species on ice crystal surfaces. The really critical reaction was the reaction of chlorine nitrate with hydrochloric acid to form nitric acid and molecular chlorine (Cl2). Cl2 photolyzes so rapidly that it might as well be two chlorine atoms.
I’m not sure when I first tried the Molina reaction on the methyl chloroform system, but it worked much better than anything else I’d tried. It makes the whole thing a very strong positive feedback system. It worked well enough to convince me that it was probably the missing factor; if I wanted to get a better simulation, I’d have to get very specific about some details of the original chamber experiment, and that one’s 35 years old. It’s pretty well moot at this point anyway.
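A toy calculation makes the distinction visible. This is purely my own illustration with arbitrary rate constants, not the real MCF chemistry: a steady photolysis source ramps up smoothly from the start, while an autocatalytic (positive-feedback) source sits quietly through an induction period and then takes off all at once, which is the shape the chamber data had.

```python
# Toy sketch, NOT the actual MCF mechanism: arbitrary units and rates.

def integrate(rate_fn, x0=1e-4, dt=0.01, steps=2000):
    """Euler integration of dx/dt = rate_fn(x); returns the trace."""
    x = x0
    trace = []
    for _ in range(steps):
        x += dt * rate_fn(x)
        trace.append(x)
    return trace

# Constant source (steady chloral photolysis): smooth ramp, no induction.
linear = integrate(lambda x: 0.05)

# Autocatalytic source (product makes more of itself, as in the Molina
# Cl2 feedback): long quiet induction, then a sudden takeoff to completion.
feedback = integrate(lambda x: x * (1.0 - x))
```

Tune the constant source high enough to match the takeoff and the system kicks off too early; lower it and the takeoff never comes, which is exactly the fitting problem with chloral photolysis.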
Molina won the Nobel Prize for his work on stratospheric ozone depletion, and it was well-deserved. I was just looking at a single smog chamber experiment, one with a surface reaction that no one was interested in. The chance that I would have figured out the right answer to the peculiarities of that experiment is pretty small. The chance that I would have made the leap from the chamber walls to the stratospheric ice clouds is smaller still; I’d never heard of them before Whitten told me about them, and I certainly didn’t make the connection between them and the chamber experiment until Molina worked out the correct surface chemistry. So I’m certainly not trying to say that I coulda been a contenda.
But I will say that we all should have been paying more attention to the chamber wall effects. You don’t get to say beforehand what will turn out to be important.
Saturday, November 10, 2007
How Business Works: #236 in the Series
One interesting thing about this particular simulation was that it did not involve Los Angeles. It just so happened that a project involving Denver had coincided with several model upgrades, including the new chemistry, so Denver got the goodies before LA did.
The project team consisted of a goodly fraction of the employees of the (rather small) research consulting firm that had originally won the main EPA follow-on work for developing a photochemical grid model: Systems Applications Inc., not to be confused with Science Applications Inc., or several other firms that went by the initials SAI. Systems Applications no longer exists as such, having been part of the mergers-and-acquisitions whirligig in the 1980s, followed by a breakaway group leaving to start up a unit of Environ, though not many of those at SAI in 1976 are with either the ghost of SAI or Environ now. That's the biz, you know?
Anyway, part of the SAI business model was to do these government research and consulting gigs, which did not have much profit margin, followed by environmental impact work for other groups, usually corporate, which did have decent profit margins—sometimes. And thereby hangs this tale.
After the work for the Denver Regional Council of Governments (pronounced "Dr. Cog"), we got a request for an impact statement for a facility that had a natural gas turbine power source. Natural gas burns without much in the way of hydrocarbon emission, but the combustion temperature creates some nitrogen oxides, NOx in the lingo, and we were charged with determining the air quality impact. The thing only emitted a few kilograms of NOx per day or thereabouts, barely enough to register on the meter, as it were, but part of the song and dance of environmental impact statements is to do your "due diligence" and if you can get the cutting edge of science on your side, well, good on you and here's your permit.
I'd been the primary modeler on the DRCOG project, for a lot of reasons that I'll describe some other time, and there was a computer programmer/operator who worked with me, and a project manager above me. This was back in the days of punch cards and CDC 7600s, and pardon me while I get all misty-eyed... okay, that was plenty, because, really, feeding cards into card readers to run programs sucks.
I asked the programmer how much time he expected the job to take. The only thing that needed to be done was to add one single point source to the point-source input deck, then a bit of analysis, AKA subtracting several numbers from each other and maybe drawing a picture or two. He estimated the time at something like three days, but said, "Call it a week."
I knew how much a week costed out at, so I got the dollar figure, then doubled it, and reported that as my estimated cost of the project to the Denver Project lead. He doubled my estimate and gave it to the Comptroller.
The Comptroller doubled that number and gave it to the company President, who then doubled it and made that offer to the company that wanted to hire us. They signed without blinking.
Okay, so that's between 16 and 32 times what the programmer had expected the thing to cost, a nice profit margin, and good work if you can get it.
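The arithmetic of that chain is simple enough to write down, normalizing the programmer's padded "call it a week" to one unit of cost:

```python
# Each layer of management doubles the estimate on the way up the chain.
bid = 1.0  # the padded "call it a week" estimate
for layer in ("modeler", "project lead", "comptroller", "president"):
    bid *= 2.0
# bid is now 16 times the padded week; since the week was itself roughly
# double the three-day guess, the final price lands at 16 to 32 times
# what the programmer expected the job to cost.
```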
Then the programmer added the emissions to the program, ran it, and compared it to the original "base case" or "validation" simulation. They were the same.
Okay, really small emissions source. It's not surprising that the effect was minor, miniscule even. But he expected something. I think he was looking at like five or six digit accuracy in the printouts. There should have been some differences in the numerical noise at least. So he multiplied the source strength by ten, then by a hundred.
Still no difference.
Well, a programmer knows a bug when it bites him on the ass. He went into the code and found an array size limit that basically meant that any point source greater than #20 didn't get into the simulation. The impact source we were looking for had been added to the end of the list, so it didn't show up.
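I don't have the original code, but the shape of the bug is a classic one. Here's a hypothetical Python sketch (the names are illustrative, though the #20 cutoff matches the story):

```python
# Hypothetical sketch of a fixed-size table silently truncating its input.
MAX_SOURCES = 20  # hard-coded array limit

def load_point_sources(deck):
    table = []
    for source in deck:
        if len(table) >= MAX_SOURCES:
            break  # no error, no warning: extra sources just vanish
        table.append(source)
    return table

deck = ["source_%d" % i for i in range(1, 31)]  # 30 sources, new one last
loaded = load_point_sources(deck)
# The impact source added at the end of the deck never makes it into the
# simulation, and neither does anything else past entry #20.
```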
But.
The Denver region at that time had one major power plant that was responsible for something like 30%-40% of all the nitrogen oxides emitted into the Denver airshed. And, wouldn't you know it, that power plant was like, #45 on the list, or whatever. Higher than #20, that's for sure.
Oops.
So now we had to go back and redo our base case. We also had to redo every single simulation in our original study, and rewrite every report, and all the papers that were in progress, and notify the nice folks at DRCOG, who, it should be noted, had already paid us for all of the above when we did the original study, so they weren't about to pay us to do it again. We were lucky in one way: large, elevated point sources (like power plants) don't have nearly the impact of ground-based sources like automobiles, so the omission hadn't had that much effect on our original simulations, at least not near the air quality monitoring stations that we'd used to test the veracity of the model. There were some differences, of course, and tables changed, future impact projections were modified, etc. etc. Oh, and we got to use the original base case as a "what if" scenario, as in "What if Denver's largest point source of NOx emissions were switched off?"
Fortunately, we had some money to do all these things with: the environmental impact contract. I was told that we did actually wind up making a profit on it. I think it was in the low triple digits.
Friday, November 9, 2007
Lavoisier
Antoine-Laurent Lavoisier was born in 1743 to Jean-Antoine Lavoisier, a prominent lawyer, and Emilie Punctis, who belonged to a rich and influential family, and who died when Antoine-Laurent was five years old. He was basically raised by his maiden aunt Mlle Constance Punctis, who arranged for his education at the College Mazarin, which was noted for its faculty of science.
Although young Antoine completed a law degree in accordance with family wishes, his true calling was in science. On the basis of his early scientific work, primarily in geology, he was elected, at the age of 25, to the Academy of Sciences, France’s most elite scientific society.
In the same year as his election to the Academy, in order to finance his scientific research, he bought into the Ferme Générale, the private corporation that collected taxes for the Crown on a for-profit basis (as you can see, “privatization” is hardly a new idea). A few years later he married the daughter of another “tax farmer.” Her name was Marie-Anne Pierrette Paulze, and she was not quite 14 at the time. Madame Lavoisier learned English, in order to translate the work of British chemists like Joseph Priestley and Henry Cavendish for her husband. She also studied art and engraving and illustrated Lavoisier’s scientific experiments.
Lavoisier has been called the “father of modern chemistry” for good reason. He established the principle of conservation of mass in chemistry and physics, and performed a series of experiments which, combined with the work of Priestley and Cavendish, overthrew the theory of phlogiston as an explanation of combustion, and thereafter swept away the classical theory of the elements (earth, air, fire, and water). Lavoisier’s replacement table of the elements ran to some 33 “irreducible substances,” most of which are what we today recognize as elements, such as mercury, sulfur, and oxygen, which he renamed from “dephlogistonized air.” He also performed such flashy experiments as demonstrating that diamond is made of carbon by burning one in an atmosphere of pure oxygen.
During the Reign of Terror in 1794, Antoine Lavoisier was arrested, along with 27 others, by the French Revolutionary Tribunal for abusing the office of the Ferme Générale by adulterating tobacco with water. They were guillotined the same day. When asked for his defense, Lavoisier is famously said to have remarked, “I am a scientist,” to which the tribunal replied, “The Revolution has no need of scientists.” Then “snick” went the head of Lavoisier. That’s the famous part of the story, anyway, usually given as a cautionary tale about the anti-science nature of revolutions.
Popular accounts often omit the predatory nature of the Ferme Générale, which was, after all, basically a protection racket, there being no limit to the taxes collected except what the tax collectors could gouge from the populace. The Crown got its share, but everything above that was pure profit, and the agency was very profitable, profitable enough to finance the purchase of diamonds to burn, something which was probably well-known to the revolutionaries.
So Lavoisier, despite actually being a political liberal who had worked for many reforms, was vulnerable to the revolutionary fervor of the times. Still, he might have survived, were it not for the fact that he had a famous enemy, one Jean-Paul Marat. Yes, that Marat.
Why did Marat hate Lavoisier? Because, years before, Marat had applied for membership in the French Academy and had been rejected, with Lavoisier being a major factor in the rejection. It seems that Marat had taken to the idea of “animal magnetism” as propounded by Franz Mesmer, a process also called Mesmerism, and which is now called hypnosis. The French Academy had appointed a commission of scientists, which included Lavoisier, and also the American Ambassador, one Benjamin Franklin, to look into the matter. The commission concluded that animal magnetism was “the product of mere imagination,” thus dashing Marat’s hopes for acceptance.
Think of it perhaps as being denied tenure.
So fate set up Lavoisier for the perfect storm of vengeance: from Marat, over the professional slight; from the revolutionary tribunal, over the tax farming business; and perhaps even from those who had been outraged by the extravagance of burning a fabulous gemstone simply to prove that it was just another form of coal.
The story doesn’t end there, though. Lavoisier’s widow remarried, to an Englishman whose language she spoke (in more ways than one) because of her service to her brilliant husband. The Englishman’s name was Benjamin Thompson, also known as Count Rumford. Thompson had been born in America, and was a Tory who fled the colonies after the American Revolution, leaving his wife behind (forever, as it turned out). He conducted studies on the physics of gunpowder explosions and manufactured munitions. During the course of boring out cannons, he took careful measurements of the heat generated. On the basis of those experiments, using the same methods whereby Lavoisier had overthrown the theory of phlogiston, he established that one of Lavoisier’s proposed elements, caloric, could not be an element, and must be a form of energy, of motion (albeit, motion at the smallest scale). This was published in 1798.
What part did Marie-Anne play in all of this? She and Thompson were married in 1804 (Thompson’s wife having died some time before), and separated shortly thereafter. So Marie-Anne came too late as Thompson’s wife to be said to have played a role in his earlier researches. Still, there might be more to it all than that, but I’m not sure I can get through all the layers of irony in the stories of these interwoven lives, to say precisely what.
Monday, October 1, 2007
Electric Cars
There’s been a fair amount written about the entire affair, and I’ll only add the point that I suspect one major factor was never addressed by the filmmakers or anyone else. In every large organization, the major decisions at the top have more to do with personal infighting amongst the managers than anything external. There were people pushing the project, for a variety of reasons, including some who wanted it sabotaged from the beginning, and they got their way. To whatever extent it was a con job designed to demonstrate that there was no market for electric vehicles, that only underscores the point. The GM divisions that were making big bucks selling gas guzzlers were never going to let an alternative vision get a fair hearing; the eventual fate of the Saturn is a good demonstration of the phenomenon.
Nevertheless, we do have some electric cars on the road now, though they are called “hybrids,” and they burn gasoline in an engine to generate electricity that then runs electric motors to power the drive train. Some versions of the hybrid have the gasoline engine connected directly to the drive train and use the electric motors to add power when needed, and to recharge the batteries when the drive demands are less than the gasoline engine output.
A major part of the deal with hybrids is to greatly narrow the operating conditions of the gasoline engine. Under constant, optimized load conditions, you can finely tune the engine performance to maximize fuel economy and minimize emissions of pollutants. The other secret of any electric drive is that it can use “regenerative braking,” i.e., when the car slows (or if you’re going downhill), the electric motors become electric generators and you can recapture and store a substantial amount of the vehicle’s kinetic energy back into the batteries.
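A back-of-the-envelope number shows why regenerative braking matters. All the figures below (mass, speed, recovery efficiency) are illustrative assumptions of mine, not numbers from the post:

```python
# Kinetic energy available to regenerative braking, rough numbers.
mass_kg = 1500.0             # a mid-size car (assumed)
speed_mps = 60.0 * 0.44704   # 60 mph converted to meters per second

kinetic_j = 0.5 * mass_kg * speed_mps ** 2  # roughly 0.54 megajoules
kinetic_kwh = kinetic_j / 3.6e6             # about 0.15 kWh per full stop

recovered_j = 0.6 * kinetic_j  # assuming ~60% round-trip recovery
```

In stop-and-go driving, recapturing most of that energy on every stop is where hybrids earn their city-mileage advantage.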
Having the internal combustion (IC) motor available finesses the major drawback of electric vehicles, limited range. Typically, EVs only manage less than 100 miles on a full charge, and take substantial time to recharge (1-15 hrs). Hybrids have the full range of gasoline powered vehicles.
The next obvious step in the development pathway is the “pluggable” hybrid, which allows the vehicle to be brought up to full charge from an external electrical source (and there would be a larger battery pack, bringing the thing closer to the true electric vehicle in electric storage). For a substantial number of commuters, the IC engine would seldom need to be engaged; one fellow who has “hacked” his Japanese hybrid claims that his overall fuel efficiency is over 100 MPG, though that doesn’t take into account the fuel burned to produce the pluggable power. (Actually, I seem to recall that he has home solar power panels, so he’s just the greenest of the green, isn’t he?)
Some environmentalists (and some fake environmentalists who carry water for the energy industry status quo), argue that electric vehicles merely shift the power generation (and pollution) elsewhere. That ignores the fact that IC engines operate at a pretty low thermodynamic fuel efficiency (the efficiency of converting the heat of combustion of the fuel to motive force, ignoring losses in the drive train), generally around 25%, though highly optimized, high compression engines can exceed 30%. By contrast, ordinary steam turbines generally start at 40% efficiency, and combined cycle and other tricks make more modern large stationary plants as much as 55% efficient.
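Chaining those figures together, with an assumed factor of my own for grid and charging losses, shows the point:

```python
# Rough fuel-to-wheels comparison; engine and plant figures are from the
# text above, the grid/charging factor is my own assumption.
ic_engine = 0.25          # typical IC thermodynamic efficiency
power_plant = 0.50        # a modern plant, within the quoted 40-55% range
grid_and_battery = 0.80   # assumed transmission plus charging losses

ev_chain = power_plant * grid_and_battery  # 0.40, fuel to battery
# Even after grid losses, the stationary-plant chain beats the IC engine.
```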
It’s also considerably easier to manage pollution control from a single, large point source (whether the regulatory process manages to accomplish this is another story), and there is some hope that such things as geological sequestration of combustion CO2 might reduce the greenhouse gas emission from such facilities as well. Such tricks are pretty well out of the question for mobile vehicles.
Then there is the matter of the impending changeover from fossil fuel driven electric power generation to renewable sources. Wind power is already economically competitive with fossil fuels in many circumstances, and will become more so as the greenhouse gas “externalities” (economist-speak for “beggar thy neighbor”) are rationalized. As a scientist and engineer, I’m also a fan of nuclear power; I just don’t trust our current industrial oligarchy to do it in anything like a safe and sane manner.
There is also the photovoltaic option. The overall field and the statistics behind it are pretty slippery, but it looks like the cost per peak watt for photovoltaics has been halving at something like 5-10 year intervals, while the installed capacity has been going up more dramatically, as each price reduction opens up a larger market. Also, in contrast to the U.S., Europe and Japan have been using regulatory action, subsidies, and guaranteed-market tactics to encourage alternative energy, rather than discourage it. (In the U.S., a giant game of “crack the whip” has some governmental entities attempting to encourage alternative energy sources, while other entities penalize them, which is a good way to generate paralysis and wasted development efforts.)
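A halving at 5-10 year intervals is just an exponential decline; a one-line model (the starting cost and halving time below are illustrative, not data) makes the implication concrete:

```python
# Exponential cost decline implied by a fixed halving interval.
def pv_cost(years_elapsed, start_cost=100.0, halving_years=7.5):
    """Cost per peak watt after the given number of years."""
    return start_cost * 0.5 ** (years_elapsed / halving_years)

# After 30 years at a 7.5-year halving time, cost has fallen 16-fold,
# from 100 to 6.25 in whatever units you start with.
```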
If you put photovoltaic cells on an automobile, you don’t get a “solar car” in the sense of being able to run on sunlight alone, but (the last time I did the calcs anyway), you do get the equivalent of about 10-20 miles per day of solar driving in the sunbelt areas of the country. That’s not trivial in terms of oil consumption; on average, it could reduce the fuel consumption of a hybrid by around 15-25%. If you’re already driving to work on the charge you got the night before, you’re down to needing maybe a half-gallon of gasoline per day on your daily commute.
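The solar-driving figure is easy to sanity-check. Every number below is an assumption of mine for illustration, since the original calculation isn't shown:

```python
# Rough daily solar harvest from car-mounted photovoltaics (all assumed).
panel_area_m2 = 2.0       # usable roof plus hood area
insolation_kwh_m2 = 5.5   # daily solar energy per square meter, sunbelt
pv_efficiency = 0.15      # mid-range cells
miles_per_kwh = 4.0       # typical electric-drive efficiency

daily_kwh = panel_area_m2 * insolation_kwh_m2 * pv_efficiency  # 1.65 kWh
solar_miles = daily_kwh * miles_per_kwh  # between 6 and 7 miles per day
# Same order of magnitude as the 10-20 mile figure; sunnier assumptions
# (bigger panels, better cells) push it into that range.
```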
That looks like the natural channel for the technology to follow, but there are plenty of things that can screw up a lovely vision of the future, including those who just plain want to keep everything “under control.”
Thursday, September 27, 2007
Rocket Boys Meet the Radioactive Boy Scout
Until I began to build and launch rockets, I didn't know that my hometown was at war with itself over its children, and that my parents were locked in a kind of bloodless combat over how my brother and I would live our lives. I didn't know that if a girl broke your heart, another girl, virtuous at least in spirit, could mend it on the same night. And I didn't know that the enthalpy decrease in a converging passage could be transformed into jet kinetic energy if a divergent passage was added. The other boys discovered their own truths when we built our rockets, but those were mine. -- Rocket Boys by Homer Hickam
Rocket Boys was made into a movie, “October Sky,” the title being an anagram of “Rocket Boys,” and I’m still charmed by that. The film is much beloved in some quarters, but I found it to be a disappointment, as so many such films are, because the book had the texture of truth, while the film had the texture of Hollywood. Relationships were generified, characters were stereotyped, you know the drill.
There have been a number of historical paths whereby the bright kid gets out and up in the world. Rocket Boys is a description of a new path: Rocket Scientist, exemplified by Hickam himself, but also, to my reading, the more important character, Quentin, the hard scrabble kid who uses his brain and big words to protect himself from his circumstances, and who decides that Hickam, the son of the mine superintendent, has access to the resources they would need to start a rocketry club.
In 1957, the town of Coalwood, in West Virginia, is cut off from the world in ways that are simply unfathomable today. For example, a major point in the book is when their science teacher, through considerable effort, manages to procure for them a book on rocketry. One. Single. Book. Is it possible to picture such a time today, when Amazon.com and Abebooks.com are universally available? I’ve lived in towns nearly as removed as Coalwood, but I have to work very hard at imagining (or remembering) what it was like. It’s simply another world.
One running joke through Rocket Boys is the crazy ideas that Quentin gets, like when he and the rest of the club are talking about making out with girls, considering the wonders of the female undergarment, and Quentin begins to speculate that it might be an efficient thing to combine stockings with panties into a single garment. Or when he’s considering orange juice and instant coffee and wondering if it would be possible to produce some product like instant orange juice.
In the epilog that follows where the rocketry club boys wound up, a goodly number of them became engineers. One can only speculate how many of them became science fiction fans.
The dark side of the teenage geek can be seen in The Radioactive Boy Scout, the story of David Hahn, who, as a teenager in suburban Detroit, managed to accumulate a large collection of radioactive materials, plus build a homemade neutron source that he used to irradiate thorium and uranium, in hopes of building a breeder reactor. What he got was a decontamination team from the NRC, who hauled away the shed in which he’d kept his material, and a tour in the Navy, where he wasn’t allowed to work near nuclear reactors, because he’d already substantially surpassed the allowed lifetime exposure to radiation.
The book is based on an article from Harper’s magazine.
I’ll note this warning about the book, which often speaks of how “advanced” David’s knowledge of radiation chemistry was. In reality, Hahn’s knowledge was pretty spotty, which is what you’d expect from an autodidact. At one point he’s shown to be baffled as to why he doesn’t get a Geiger counter reading from polonium (it’s a pure alpha emitter, and alpha radiation cannot penetrate the counting tube). He seems to have only the vaguest understanding of neutron moderation and its implications for fissile fuel breeding, and, needless to say, the concept of radiation health safety is pretty much beyond him.
To be fair, I don’t know how much of this ignorance is Hahn’s and how much of it is the author’s, or, more specifically, how much of the author’s obvious ignorance is also the case for Hahn.
I’m given to muse a bit on both the upside and the downside of the geek effect. Is the difference in outcome between Rocket Boys, and The Radioactive Boy Scout merely one of luck? After all, the rockets were far from safe, and did nearly cause damage a time or two (though it must be observed that the fatalities in Coalwood were invariably from the coal mine, not the rockets). More importantly, I think, comes the observation that, if you’re going to go off into the wild blue yonder, figuratively or literally, it helps to have some friends in it with you, just to keep you grounded.