In reel-to-reel tape decks, the record head and the play head are separated by a small gap. The play head comes after the record head, and the record and playback circuitry are separate, so it's possible to monitor a recording more or less as it's being recorded, albeit with a small delay.
That small delay was often used to produce an "echo effect" on recordings and in the studio: the tape output was mixed with the line in and patched back into the tape input. The tape speed set the echo delay, and the gain between output and input set the echo strength. A gain greater than 1 produced the "infinite echo" that rapidly built into a pulsating tone centered at the frequency of the system's peak response.
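For the digitally inclined, here is a minimal sketch of that feedback loop in Python: the output, delayed by the head gap, is mixed back into the input with some gain. The delay length and gains are illustrative, not taken from any particular deck.

```python
# Feedback delay line: y[n] = x[n] + g * y[n - D]
# With g < 1 the echoes die away; with g > 1 each pass around the loop grows.

def tape_echo(signal, delay_samples, gain):
    """Mix the delayed output back into the input."""
    out = [0.0] * len(signal)
    for n, x in enumerate(signal):
        fed_back = out[n - delay_samples] if n >= delay_samples else 0.0
        out[n] = x + gain * fed_back
    return out

if __name__ == "__main__":
    impulse = [1.0] + [0.0] * 39            # a single click on the line in
    decaying = tape_echo(impulse, 8, 0.5)   # gain < 1: echoes fade
    runaway = tape_echo(impulse, 8, 1.1)    # gain > 1: "infinite echo"
    print([round(v, 3) for v in decaying[::8]])  # 1.0, 0.5, 0.25, ...
    print([round(v, 3) for v in runaway[::8]])   # 1.0, 1.1, 1.21, ...
```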
One practical joke that was often played at radio stations was to hook up a tape deck to generate a delay, then feed the announcer's voice back to him with a fraction of a second delay. I was once trying to get an echo effect on my voice and I found that I'd practical joked myself; I had to remove my headphones in order to continue. The delay makes it almost impossible to speak. It's hard to explain why, but the experience is compelling.
In a course, Voice and Image Processing, that I took at RPI there was a similar demonstration with video. A ball was placed behind a small barrier, and a video camera showed the ball on a TV screen. Normally, you could just watch the monitor and reach behind the wall to pick up the ball. But with a half-second time delay, such a seemingly ordinary task became almost impossible. You soon found yourself reaching for the ball, overshooting, then overcorrecting, then overshooting, etc.
Such a thing is called a 'limit cycle' in systems control theory, but it's pretty eerie to be a part of a limit cycle and unable to break out of it. Eventually, you just stop moving entirely, then veeeeerrrrrrrryyyyyy slowly move your hand to get the ball. It could literally take 30 seconds or more to do that simple task.
There's a bunch of mathematics in systems theory that deals with time delay and "controllability." The upshot is that if you add enough time delay into a control system, it becomes uncontrollable. Your ability to affect events is slower than those events. Imagine trying to pick up the ball behind the wall if it is moving erratically.
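To make that concrete, here is a minimal sketch of the ball-behind-the-wall problem: a proportional "hand" chases a target, but it only sees where it was several steps ago, and it can only move so far per step. The gain, delay, and speed limit are all illustrative.

```python
# Delayed feedback with a speed limit: without delay the hand settles on the
# target; with delay it overshoots, overcorrects, and falls into a bounded,
# sustained oscillation -- a limit cycle.

def reach(target, gain, delay, max_step, steps=200):
    pos = [0.0]
    for _ in range(steps):
        seen = pos[max(0, len(pos) - 1 - delay)]    # stale observation
        move = gain * (target - seen)               # proportional correction
        move = max(-max_step, min(max_step, move))  # limited hand speed
        pos.append(pos[-1] + move)
    return pos

if __name__ == "__main__":
    direct = reach(1.0, gain=0.5, delay=0, max_step=0.2)
    lagged = reach(1.0, gain=0.5, delay=6, max_step=0.2)
    print(round(direct[-1], 3))                 # settles on the target
    print([round(x, 2) for x in lagged[-10:]])  # still oscillating around it
```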
One of my favorite jokes is about the economics professor walking through the Quad with his students. One of his students says, 'Look, there's a ten dollar bill on the ground.' The professor replies, 'Can't be. If it were, someone would have picked it up already.'
For a long time, economics was dominated by what are called "equilibrium calculations": models of an economy under steady-state conditions, with no shortages, prices in equilibrium, all the usual assumptions. Those are the simplest conditions to model and the easiest to calculate, so they produced the first results. Evolutionary biology tended toward the same simplifications, for the same reasons. The advent of the computer, and growing access to massive amounts of computing power, changed the landscape, but it took a while for theoretical models to catch up to the improved tools. In fact, the catch-up is still going on.
I had lunch with a colleague a while ago, and he asked my opinion about global warming/climate change/greenhouse gases. I told him that it was pretty obvious that the signal was out of the noise and the whole process was clearly underway, and asked if he was surprised at my answer. He noted my well-known contrarian streak. I observed that James Hansen hadn't made a wrong prediction since 1988, and I wasn't going to challenge that sort of success.
In truth, I was a little late to the global warming party, partly because of that contrarian streak, but also because I was focusing on the science and not the policy. I was also perhaps yielding too much to my own libertarian leanings. So let's review why I should have been convinced sooner than I was, at least on the policy issues.
From the standpoint of political philosophy, one fact should be paramount: if we do not have a right to the air we breathe, then human rights, including property rights, are meaningless. And that should include the right to have that air remain unaltered. You shouldn't have to prove that harm is being done to you, any more than you should have to prove harm in order to not want a stream of trespassers walking across your lawn.
Now any given individual has no real impact on the contents of the entire atmosphere, although it's certainly possible for an individual to affect your current breathable air, and you generally have recourse. If someone smokes in your house and you don't like it, you can throw them out. If the neighbor's barbecue is noxious, you can usually complain to some agency, and I, for one, do not consider that to be an infringement on your neighbor's rights, though your neighbor may disagree.
But group behavior can, and does, affect urban, regional, and global resources. The industrial world's propensity for fossil fuels has had an undeniable effect on the concentration of some important trace gases in the atmosphere. Regulating group behavior is not the same as regulating individual behavior. Regulating corporations or national economies is not the same as regulating individuals, and giving free license to groups and organizations reduces individual freedom.
In the case of global climate change, regulating group behavior is essential. Actually, of course, group behavior is regulated. It just happens that it is regulated by those who rule, manage, control, and lead those organizations, the corporate boards, the CEOs, the congresses, presidents, agency heads, judges, and lawyers whose fingers are entwined with the strings of authority.
But authority and control are meaningless if the system is uncontrollable. The global climate system takes decades, if not centuries, to equilibrate to any given greenhouse gas level. Glaciers take even longer to melt or rebuild. And the human political process likewise has major delays built into it.
There is a thin straw to clutch at, called feedforward in control theory. Using feedforward, you attempt to compensate for feedback delays by anticipating the system response. But feedforward control is seriously limited by your understanding of the underlying system. Without that understanding, feedforward is useless.
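Here is a minimal sketch of the idea, building on the delayed loop above: add a term that cancels a predicted disturbance rather than waiting for the stale feedback. The disturbance, gains, and "model" values are illustrative; the point is only that feedforward helps exactly to the extent the prediction is right.

```python
# Feedback with optional feedforward on a delayed loop. A constant disturbance
# pushes the state every step; feedback sees the state only after a delay.

def run(steps, delay, gain, ff_model=None, true_disturbance=0.05):
    pos = [0.0]
    for _ in range(steps):
        seen = pos[max(0, len(pos) - 1 - delay)]    # delayed feedback
        u = gain * (0.0 - seen)                     # feedback: hold at zero
        if ff_model is not None:
            u -= ff_model                           # cancel the predicted disturbance
        pos.append(pos[-1] + u + true_disturbance)  # disturbance acts every step
    return pos

if __name__ == "__main__":
    fb_only  = run(200, delay=6, gain=0.1)                  # settles well off target
    good_ff  = run(200, delay=6, gain=0.1, ff_model=0.05)   # correct model: holds target
    wrong_ff = run(200, delay=6, gain=0.1, ff_model=-0.05)  # wrong model: even worse
    print(round(fb_only[-1], 3), round(good_ff[-1], 3), round(wrong_ff[-1], 3))
```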
In regulatory policy, science is the feedforward control signal. Science, however, is currently under political attack from numerous quarters. And big money is being spent to target climate research in one part of that attack.
We're going to lose south Florida, and, my colleague suggests, most of Louisiana and Mississippi. California will acquire a new inland sea. Much of Bangladesh will vanish, as will plenty of islands in the Pacific and Indian Oceans. The fact that these things are going to happen long after you and I are dead does not make the future more palatable. It makes it more inevitable.
Saturday, June 7, 2008
Friday, April 18, 2008
See How It All Fits Together
In my essay, "The Scientific Method," I described (and bragged a bit about) some work I once did on the photochemistry of toluene, which has the unusual property of, under some very special conditions limiting the amount of ozone that is generated in a smog system. It's a weird effect, and I was bragging because I'd predicted it, then designed an experiment to show that its weirdness was real.
In a more recent essay, "PAN", I noted that there were some features of the chemistry of that compound that I'd gotten right because of a detailed analysis, a I-knew-what-I-was-doing sort of thing, which is more bragging, of course, but I noted that, science being what it is, I was only a little bit ahead of the curve. The rate constants that I'd had to adjust to make my simulations work were routinely measured as being what I'd needed only a little while after I did my work, and the ordinary workings of science would have produced models that did the right thing, even if no one was paying attention.
In "The Linear Hypothesis," I remarked that sometimes (in fact, pretty often) scientific models are used for purposes of policy and decision making, and a model is often chosen to make that task easier, because, well, that's the purpose at hand. Sometimes this is done for good reasons, like selecting a conservative model in order to observe "The Precautionary Principle," where we are dealing with asymmetric error; if an error in one direction is vastly more costly than an error in the other direction, then simple caution suggests using the more conservative model, even if there is some weight of evidence on the other side.
Anyway, I've just been talking to an old colleague, who tells me that one major smog kinetics model has been "fixed" so that it no longer shows that weird toluene behavior that we actually proved to exist. The experiment that proves it is now considered "old" (as if chemistry somehow goes bad with age), or "sloppy," or the result of experimental error. Not that anyone is bothering to replicate it, you understand.
I expect that it has to do with it just being too confusing to have models tell you that sometimes adding one pollutant can produce less of another pollutant. Or something like that. The rationalizations sound pretty sad, however.
We've been hearing a lot lately about the ways and methods that various players in the Bush Administration have been tampering with scientific reports, muzzling scientists, and twisting the system to their own ends. This is, of course, despicable. What I am saying here is that I've seen a lot of this sort of thing throughout my entire scientific career, coming from every policy quarter. Yes, the Bush Administration does it, and has been totally shameless about it. But they had plenty of precedent from the Tobacco Industry, the Oil Industry, the Pharmaceutical Industry, and, I will add, Environmental organizations and regulators. When people have an ax to grind, they will first grind it on the facts of the matter, or at least the theories and models that are used to codify the facts.
The "Probability Engine" that the time meddlers found in Destiny Times Three by Fritz Leiber was originally a simulation engine, developed by advanced beings to calculate the probable results of various actions, and to avoid the worst actions and their consequences. The horror of the story is that the device came into the possession of humans, who, with the best intentions (but insufferable arrogance) used it to create those dystopian worlds, rather than simply model them.
I do so hope that this is not, ultimately, a metaphor for science in the hands of human beings.
In a more recent essay, "PAN", I noted that there were some features of the chemistry of that compound that I'd gotten right because of a detailed analysis, a I-knew-what-I-was-doing sort of thing, which is more bragging, of course, but I noted that, science being what it is, I was only a little bit ahead of the curve. The rate constants that I'd had to adjust to make my simulations work were routinely measured as being what I'd needed only a little while after I did my work, and the ordinary workings of science would have produced models that did the right thing, even if no one was paying attention.
In "The Linear Hypothesis," I remarked that sometimes (in fact, pretty often) scientific models are used for purposes of policy and decision making, and a model is often chosen to make that task easier, because, well, that's the purpose at hand. Sometimes this is done for good reasons, like selecting a conservative model in order to observe "The Precautionary Principle," where we are dealing with asymmetric error; if an error in one direction is vastly more costly than an error in the other direction, then simple caution suggests using the more conservative model, even if there is some weight of evidence on the other side.
Anyway, I've just been talking to an old colleague, who tells me that one major smog kinetics model has been "fixed" so that it no longer shows that weird toluene behavior that we actually proved to exist. The experiment that proves it is now considered "old" (as if chemistry somehow goes bad with age), or "sloppy," or the result of experimental error. Not that anyone is bothering to replicate it, you understand.
I expect that it has to do with it just being too confusing to have models tell you that sometimes adding one pollutant can produce less of another pollutant. Or something like that. The rationalizations sound pretty sad, however.
We've been hearing a lot lately about the ways and methods that various players in the Bush Administration have been tampering with scientific reports, muzzling scientists, and twisting the system to their own ends. This is, of course, despicable. What I am saying here is that I've seen a lot of this sort of thing throughout my entire scientific career, coming from every policy quarter. Yes, the Bush Adminstration does it, and has been totally shameless about it. But they had plenty of precedent from the Tobacco Industry, the Oil Industry, the Pharmaceutical Industry, and, I will add, Environmental organizations and regulators. When people have an ax to grind, they will first grind it on the facts of the matter, or at least the theories and models that are used to codify the facts.
The "Probability Engine" that the time meddlers found in Destiny Times Three by Fritz Leiber was originally a simulation engine, developed by advanced beings to calculate the probable results of various actions, and to avoid the worst actions and their consequences. The horror of the story is that the device came into the possession of humans, who, with the best intentions (but insufferable arrogance) used it to create those dystopian worlds, rather than simply model them.
I do so hope that this is not, ultimately, a metaphor for science in the hands of human beings.
Wednesday, April 9, 2008
PAN
PAN is fascinating stuff if you’re an air geek, and it’s maybe interesting to other sorts of people. PAN is the acronym for peroxyacetyl nitrate. It’s got two parts to it, peroxyacetyl:
CH3CO-OO*
And nitrogen dioxide:
-NO2
The asterisk (*) on the peroxyacetyl is one of the conventions used for indicating that it is a radical; it has an unpaired electron that plays well with others, especially if they also have an unpaired electron.
Now a bit of history, in an attempt to lose anyone that I haven’t already lost with the chemical formulae.
Los Angeles was known to have a smog problem even before WWII, but during and after the war it got much worse, partly because of the massive expansion of oil refineries and the attendant expansion of automobile travel. L.A. smog was known to be different from “London smog,” in that the L.A. sort was oxidizing and London’s was reducing. Ozone was identified as a major component of L.A. smog, but ozone alone couldn’t account for “plant bronzing,” damage with characteristic yellow-brown splotches on the leaves of plants. A guy by the name of Haagen-Smit (mentioned in a magical incantation in SunSmoke) managed to replicate the plant damage by using the product of some smog chamber reactions, but could not identify the compound that was responsible.
Some researchers at the Franklin Institute in Philadelphia (Stevens, Hanst, Doerr, and Scott) used a technique called long-path infrared spectroscopy on smog chamber products and spotted a set of IR bands that were particularly strong in the results of a biacetyl-NOx run. They dubbed the responsible agent “Compound X.” Compound X turned out to be PAN, and how cool is that?
In the mid-1970s, PAN was discovered to thermally decompose: at elevated temperatures, it rapidly changes back to a peroxyacetyl radical and nitrogen dioxide. That made everything much more interesting, because PAN gets formed early in the day, when it’s cooler; then, as the air warms, it can decompose and feed radicals and NOx back into the smog formation system, producing more ozone. The thermal behavior of PAN is one of the reasons why smog is worse on hot days. PAN can also assist in the long-range transport of oxidizing smog, serving as a sort of ozone storage system.
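Here's a minimal sketch of how steep that temperature dependence is. The Arrhenius numbers below are round values of roughly the magnitude reported for PAN decomposition, used for illustration only, not a recommended rate expression.

```python
# PAN thermal decomposition lifetime vs. temperature, using an illustrative
# Arrhenius expression k(T) = A * exp(-Ea/R / T) for a first-order decay.

import math

A = 2.5e16            # pre-exponential factor, 1/s (illustrative)
EA_OVER_R = 13500.0   # activation energy divided by the gas constant, K (illustrative)

def pan_lifetime_hours(temp_kelvin):
    """1/k for first-order thermal decomposition, in hours."""
    k = A * math.exp(-EA_OVER_R / temp_kelvin)  # Arrhenius rate constant, 1/s
    return 1.0 / k / 3600.0

if __name__ == "__main__":
    for t_celsius in (0, 15, 30):
        t = 273.15 + t_celsius
        print(f"{t_celsius:3d} C: PAN thermal lifetime ~ {pan_lifetime_hours(t):7.2f} h")
    # Tens of hours on a cold day, well under an hour on a hot one.
```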
The thing is that PAN and its constituents/products form a steady-state at constant temperature, with PAN existing in balance with peroxyacetyl and NO2. Change the temperature and the balance changes. At higher temperatures, PAN decays and if there is still sunlight around, ozone goes up. But this process is dominated by the behavior of peroxyacetyl radicals.
If NO2 were the only thing that peroxyacetyl could react with, this wouldn’t happen. But peroxyacetyl also reacts with nitric oxide (NO), and that is one of the reactions whereby ozone is generated, by converting NO to NO2, which then photolyzes to ozone (note: the entire system is ‘way complicated, which is why I spent 20 years studying it). By the same token, if something reduces the amount of peroxyacetyl, relative to other peroxy radicals, then PAN concentrations decline, NO2 comes back into the system, and ozone can increase.
Peroxyacetyl radicals also react with other radicals, and that alters the balance. In the early 1980s, looking over the set of chemical reactions we had available, I decided that the cross-reactions between radicals were set too low. Fortunately, there was a paper by a fellow named Addison that had measured them higher than the generally accepted values, so I used Addison’s numbers. I can still remember the combination of excitement and satisfaction that came when Addison’s numbers led to a simulation that just nailed the PAN decay data. Since then, rate constants have been measured that are even higher than Addison’s; when I used the new, still higher numbers, the results were almost exactly the same. There seems to be a point of diminishing returns, a gating function, call it what you will. Once you get above the critical numbers, there is little additional effect.
So even without my own insights into PAN decay, mostly the result of my paying attention to that particular problem, it would only have been a few years until the problem was solved by better measurements, and correct PAN decay would have been achieved in simulations anyway.
On the other hand, there were several features of the system, such as the specific products of some of the radical-radical reactions, that have not been addressed to this very day, to the best of my knowledge, and, as nearly as I can tell, no one is looking at those problems and no progress is being made. Sometimes the great grinding engines get it and sometimes they don’t. There’s room for a ton of lessons here; I’m just not sure what they all are.
Labels: atmospheric science, chemistry, philosophy of science, science, smog
Tuesday, April 8, 2008
Destiny Times Three
I recently reread Destiny Times Three, by Fritz Leiber. Given that Leiber is my favorite science fiction and fantasy writer, and DT3 is possibly my favorite of all his longer works, it may not require explanation as to my purpose in the endeavor. However, given that I don't even mention DT3 in my long essay on Leiber, "Sleeping in Fritz Leiber's Bed," I may have some 'splaining to do. Moreover, there was at least one ancillary purpose that bears exploring.
In his autobiographical writings, Leiber says that his original conception of Destiny Times Three was grandiose. He intended a work of around 100,000 words at a time when "complete novel in this issue" meant a novella of maybe 30-40,000 words, and 60,000 words was the standard length for a book.
But DT3 was a victim of the WWII paper shortages, and, by editorial demand, Leiber cut it down to the more standard "short novel" length, so that it could fit into two consecutive issues of Astounding, losing, by his own account, all of the female characters and a great deal of the richness of the worlds he'd created. I had something similar happen to me with the magazine version of "SunSmoke," but I got to make up for it somewhat when I expanded it to book length. Leiber's full version of Destiny Times Three is lost forever.
Dammit.
The general story of DT3 is that there are parallel worlds, but not due to the natural workings of physics, etc. Instead, sometime in the late 19th Century, an alien device was found by a fellow who fancied himself a scientist. He enlisted the assistance of seven other individuals, because it took eight minds to operate the thing, and they used it to slowly create a "utopia," by splitting the world at crucial decision points, observing which world was most to their liking, then "destroying" the "experimental control" worlds. Very scientific.
In fact, they had not destroyed each of these worlds, but merely placed them beyond their own ability to access them, "swept them under the rug" as it were.
The protagonists on Earth 1, the utopian world, are Thorn and Clawly, who rather closely resemble Fafhrd and the Gray Mouser, or, more accurately, Leiber and his friend Harry Fischer, at least in their imagined incarnations. It's also not a great leap to consider the duo as Thor and Loki (or Loke, as Leiber spells it), given the former's name and the latter's specific comparison to Loke as the tale unfolds. Also, Norse imagery is an ongoing motif throughout the story.
On Earth 1, the power of "subtronics" has been harnessed, subtronics being a Campbellian trope for a sort of "unified field theory" that can also be found in Heinlein's Sixth Column/The Day After Tomorrow, itself a reworking of material supplied by John W. Campbell. All have access to its power, and the unparalleled freedom that results, anti-gravity cloaks and almost total environmental control (the book begins with a description of a "symchromy," an optical symphony on a grand scale) being throwaway mentions in the first couple of pages.
On Earth 2, subtronics was kept as a secret by "The Party" and a totalitarian state was created. Later in DT3 an Earth 3 is discovered to exist, where an attempt was made to suppress the discovery, with a resulting war that destroyed most of humanity and ripped open the Earth's crust to such an extent that rapid geological weathering removed so much CO2 from the air as to produce an ice age. This may be the first mention of the "greenhouse effect" in science fiction, incidentally.
There are versions of Thorn in all three worlds, and versions of Clawly on at least Earth 1 and Earth 2. But on Earth 1 they are fast friends, and on Earth 2 they are bitter enemies, the difference lying primarily in Clawly's personality. On Earth 2, he is a Party member, while the Earth 2 Thorn is part of the Resistance, such as it is.
But despite the fact that the connections between the worlds have been severed by the "experimenters" who now live outside of normal time, the worlds are not totally separate. There remains a connection between individuals who have duplicates on other parallel worlds: they dream each other's dreams. The dream visions of utopia are a grinding torment to those who live in the totalitarian dystopia. And as a result of this desperate yearning of millions of minds, the barriers between the worlds are beginning to blur. Sometimes, someone goes to sleep in one world, and awakens in another.
So the plot thickens, events transpire, and eventually there is considerable resolution. You can find DT3 in various versions on either Amazon or ABE books. Wildside Press seems to be promising a release, but it doesn't appear on their website, so caveat emptor. I have the Binary Star reissue (which also contains Spinrad's "Riding the Torch"), its printing as Galaxy Novel #28, and the two original issues of Astounding. I told you I liked it.
Lately, I have been haunted by that initial vision from Destiny Times Three, the portrait of a world of people yearning so profoundly for something better than what they have that the walls of reality have begun to crumble. Or, if you will, think about people who are so enamored by a dream life that they cross over and take up living there.
A minor point in the book, it's true, but still…
We have news items that World of Warcraft gamers have died from devoting so much time and energy to the game that they neglected such matters as eating and sleeping. Second Life seems to sometimes create an almost religious fervor and perhaps a Ponzi scheme in those who choose to spend a lot of time there. Such things are hardly new, of course. Many of us recall the guy who got into Dungeons and Dragons just a little too enthusiastically, or the fellow who tried to use his SCA credentials for something out in the real world. There are "RenFaire" bums, just as there are those who have tried to spend their entire adult lives surfing. Sure, I get that.
I also get that we seem to have swapped "autobiographical fiction" for the "fictional autobiography." The former is pretty inevitable; the latter seems a lot more fraudulent, doesn't it?
Then there are the "reality shows," made so very omnipresent by the writers' strike. Most such fare is just new variations on old game shows, but some of it shows a new sort of creepy voyeurism for voyeurism's sake, where the old line about a celebrity being "famous for being famous" gets too close to the truth.
What is the result when millions of people yearn for fame as the only thing they can imagine that will fill their emptiness? Do the walls of reality begin to crumble when everything becomes a reality show?
Ah, sure, I'm just being dyspeptic here, or maybe even dystopian. It's still possible to live a normal life. But I do get a little peep of horror when I consider how extraordinary an effort that can take.
Labels: atmospheric science, Fritz Leiber, science fiction, writers, writing
Saturday, February 23, 2008
Thinking Outside the Box
I spent a number of years developing what is called a “three-dimensional Eulerian photochemical grid model,” aka the “Urban Airshed Model.” I was one among many, of course, but I did make some significant contributions to the effort.
The “three-dimensional” part of the name says that a volume was divided up into a lot of compartments, “grid cells” in the jargon, and the “Eulerian” part says that the grid didn’t move around, although there was a bit of cheating on that one in that the top of the modeling region rose with the “mixing layer” in some versions of the model. The alternative to “Eulerian” is “Lagrangian” where the model volume itself moves around, usually with the fluid flow, which is to say, the wind. That’s a “trajectory model” and it usually had only a single box, although we developed some multi-box trajectory models to handle plumes like those from power plants. A single line of boxes is “one-dimensional;” a “moving wall” of boxes is “two-dimensional.” A single box, therefore, is “zero-dimensional.”
So-called “box models” are common in air pollution and other areas of environmental modeling. They can be really simple, especially if you are dealing with pollutants that don’t react. Then all you have to do is have a source input for emissions, a “ventilation rate” for the combination of wind and diffusion that’s removing material from your box, and boundary conditions for what kind of air is replacing what’s in the box. This is the sort of model that you get in first-year chemistry or physics courses; it can be expressed in a single differential equation.
You can make the box pretty big, too, provided you’re willing to take these big honking averages of everything. For either non-reactive or “first-order” (those that just decay all by themselves, without reacting with other things) pollutants, your average result for the single box calculation is the same as if you’d done the multi-box calculation and then averaged all the boxes. That’s what’s called “linear” in the biz.
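Here is a minimal sketch of that zero-dimensional box, and of the linearity claim: run three boxes with different emissions and average the results, or average the emissions first and run one box, and you get the same number. All the values are illustrative.

```python
# Zero-dimensional box model: dC/dt = E + vent * (background - C),
# integrated with simple Euler steps. For a non-reactive pollutant this is
# linear, so averaging inputs commutes with running the model.

def run_box(emission, vent_rate, background, hours, dt=0.01, c0=0.0):
    """Integrate the single box equation and return the final concentration."""
    c = c0
    for _ in range(int(hours / dt)):
        c += dt * (emission + vent_rate * (background - c))
    return c

if __name__ == "__main__":
    emissions = [2.0, 5.0, 11.0]   # three sub-boxes with different sources
    vent, bg, hrs = 0.5, 1.0, 6.0

    per_box = [run_box(e, vent, bg, hrs) for e in emissions]
    avg_of_boxes = sum(per_box) / len(per_box)
    one_big_box = run_box(sum(emissions) / len(emissions), vent, bg, hrs)

    print(round(avg_of_boxes, 4), round(one_big_box, 4))  # same answer: linearity
```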
I did a lot of work with box models, partly because it was easy to test chemical mechanisms with them, and the results are easy to understand also. And I got to thinking about that “ventilation rate.” And wind power.
See, if you extract energy from the wind, it slows down, and that will have an impact on the ventilation rate of any area whose air is passing by the windmills. So I did some box model calculations on the amount of energy that was being extracted from the wind at Altamont Pass near San Francisco, plus the degree of pollution that was in the air that went through the Pass. That allowed an estimate of the increase in air pollutants that would occur in San Francisco due to the decrease in ventilation.
Okay, it was a weird calculation to make in the first place, but the results weren’t that deranged. There was an effect; at its largest, it was equivalent to the amount of nitrogen oxides that would have had to be emitted in order to generate the excess ozone seen at the Pass. On a per kilowatt basis, it turned out to be a little less than the amount of nitrogen oxides that would be emitted by a natural gas-fired plant, such plants being the cleanest of all fossil-fueled power plants. Of course the result depended on the amount of pollution already in San Francisco; a totally clean area would see no pollution equivalent at all, and since I made those calculations, SF has reduced its pollutant levels.
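For the flavor of that calculation, here is a minimal sketch of the underlying arithmetic: in a steady-state box, the excess above background is emissions divided by ventilation, so slowing the ventilation by some fraction is equivalent, for that box, to adding a proportional slug of extra emissions. None of the numbers below come from the actual Altamont/San Francisco estimate; they are purely illustrative.

```python
# Steady-state box bookkeeping: excess = E / vent. Reduce vent by a fraction f
# and the excess rises by the same amount as adding E * f / (1 - f) of emissions.

def steady_state_excess(emissions, vent):
    """Excess concentration above background for a well-mixed box."""
    return emissions / vent

if __name__ == "__main__":
    emissions = 100.0   # arbitrary units per hour
    vent = 2.0          # air changes per hour at the original wind speed
    slowdown = 0.03     # illustrative fractional reduction in ventilation

    before = steady_state_excess(emissions, vent)
    after = steady_state_excess(emissions, vent * (1.0 - slowdown))
    equivalent_extra = emissions * slowdown / (1.0 - slowdown)

    print(f"excess rises by {100.0 * (after / before - 1.0):.1f} %")
    print(f"equivalent extra emissions: {equivalent_extra:.2f} units per hour")
```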
I wrote up my results, sent the paper off to a journal, and then received some of the most flagrantly wrong referee comments I’ve ever received on a paper. One of them showed that I was “wrong” with a calculation that was itself off by five orders of magnitude, assuming, among other things, that wind speeds are constant all the way up to the stratosphere. I think he managed to calculate the wind kinetic energy over the entire Bay Area also, rather than just through the Pass.
Well, I know when I’m licked, and it was obvious that I wasn’t going to get anyone to pay attention to that wacky idea. Even in science, sometimes I’m too clever by half, and that’s a rueful comment, not a brag.
Labels: atmospheric science, memoir, modeling, philosophy of science
Thursday, February 14, 2008
Taking Your Lumps
Let’s suppose you want to look at how some chemical compounds react in the atmosphere. We’ll start with butane, a pretty simple hydrocarbon, C4H10, or to give more insight into its structure, CH3CH2CH2CH3. In chem speak, that’s a methyl group (CH3) attached to a two carbon alkyl chain (CH2CH2), terminated by another methyl group. The methyl group is called “primary carbon” because it’s connected to a single other carbon atom, while the CH2 groups are “secondary carbon.”
Now suppose you have a bunch of butane molecules flying around in the air, and the air also has some hydroxyl radicals (HO) in it. Every now and then, in accordance with the laws of statistical mechanics, one of the HOs will hit a butane molecule. Then what?
Well, most of the time, they just bounce right off each other. The hydroxyl is pretty reactive, radicals often are, but unless it hits the electron cloud of the butane in the right spot, with the right energy, etc., it’s just going to bounce. But every so often, it does hit right, and it grabs one of the hydrogens. Which one?
Well again, it will be the one it hit, but some of the hydrogens are more labile than others, so the HO is more likely to bounce if it hits one of the methyl groups, which have “primary” hydrogens because they are on primary carbons, and more likely to react if it hits the alkyl chain, on a “secondary” hydrogen.
Butane is nice and symmetrical, so there are only two possible outcomes. Due to symmetry, any primary hydrogen reaction looks like every other primary hydrogen reaction, and every secondary looks like every other secondary reaction. The hydroxyl always extracts a single hydrogen from the butane, which gives water and an alkyl radical that immediately reacts with oxygen and, under smog conditions, goes through a series of reactions that lead to either butyraldehyde, if a primary carbon was involved, or methyl ethyl ketone (MEK), if a secondary carbon was involved. (Actually, I’m ignoring some other pathways that get more important as molecular weight increases, like the formation of alkyl nitrates, and the times when the molecule fractures in the middle to produce acetaldehyde and an ethyl alkoxy radical. Having read that sentence, I’m sure you can appreciate my ignoring some details).
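Here's a minimal sketch of that branching, just to make the bookkeeping concrete: butane has six primary hydrogens (on the two CH3 groups) and four secondary ones (on the CH2CH2 chain), and the secondary ones are more readily abstracted. The per-hydrogen reactivity ratio below is an illustrative stand-in, not a measured or structure-activity value.

```python
# Branching of OH + butane between primary attack (toward butyraldehyde)
# and secondary attack (toward MEK), weighted by hydrogen count and an
# illustrative per-hydrogen reactivity ratio.

PRIMARY_H = 6
SECONDARY_H = 4
SECONDARY_TO_PRIMARY = 7.0   # illustrative: a secondary H abstracted ~7x more readily

def branching_fractions():
    """Fractions of reactions going through primary vs. secondary attack."""
    primary_weight = PRIMARY_H * 1.0
    secondary_weight = SECONDARY_H * SECONDARY_TO_PRIMARY
    total = primary_weight + secondary_weight
    return primary_weight / total, secondary_weight / total

if __name__ == "__main__":
    to_butyraldehyde, to_mek = branching_fractions()
    print(f"primary attack   (-> butyraldehyde): {to_butyraldehyde:.0%}")
    print(f"secondary attack (-> MEK):           {to_mek:.0%}")
```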
We can write a bunch of reactions for all this, assign rate constants to the reactions, put in temperature and pressure dependencies, etc. but the thing I want to point out is this: we’re simplifying a lot of events into a small set of descriptive equations. All the bounces are ignored, except insofar as they affect the reaction rate constant. All the different ways the molecules hit each other, along with the different energies of those collisions, all lumped into a few basic equations. We’re also taking advantage of the symmetries, by saying that reactions at either end carbon are equivalent, which they are, unless we had some way of telling the difference, like if one end or the other was isotope tagged.
Anyway, we’ve put all these things together and called them “reactions of the molecule.” That’s what chemistry does.
Now suppose we want to study the reactions of a number of different molecules, say add some pentane, hexane, heptane, and octane to the mix, and put in all the possible isomers of those compounds as well (there’s only one other isomer of butane, called isobutane, but toss in some of that as well). Now, how would you write your chemical equations?
You could try to write the equations for every single molecule—provided you wanted to go crazy, blow your computing budget, and not have the rate constants for even a tenth of what you wanted. You’re going to have to estimate that last one anyway, of course, though you might cheat and get some empirical data describing the reactivity of your mix.
You could look at what you have in the way of a mix and try to come up with some idea of an “average molecule.” That can get a little strange, because you’re going to have equations that account for some fraction of a carbon, for instance, and averaging rate constants is pretty iffy anyway. The fast-reacting compounds react away most quickly, so the “average” rate constant is going to keep changing. Nevertheless, you can do it, either as a constant average rate or as a continually changing average rate. It’s been done, though most often as a constant rate (a sketch a couple of paragraphs below shows how much that average can drift).
You could take your mix and wave your hands a little bit and say that it should look like some other, simpler mix, 45% butane and 55% octane, maybe. Or something like that.
The first one of these has come to be called the “explicit mechanism” approach. The second is the “lumped parameter” method. The third is a “surrogate mechanism” which is an explicit mechanism that is used on a reduced number of “surrogate compounds” to represent a more complex mixture.
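Here's a minimal sketch of why the "lumped parameter" average is a moving target: put a fast-reacting and a slow-reacting hydrocarbon in the same box and watch the concentration-weighted average rate constant of what's left keep dropping as the fast one reacts away. The rate constants and the mix are illustrative, not measured values.

```python
# Two compounds decaying under a fixed oxidant level; the "average" rate
# constant of the surviving mixture drifts downward over time.

import math

def remaining(c0, k, t):
    """First-order decay of one compound."""
    return c0 * math.exp(-k * t)

if __name__ == "__main__":
    mix = {"fast hydrocarbon": (1.0, 0.30),   # (initial conc, pseudo-first-order k per hour)
           "slow hydrocarbon": (1.0, 0.05)}

    for hour in (0, 2, 4, 8):
        concs = {name: remaining(c0, k, hour) for name, (c0, k) in mix.items()}
        total = sum(concs.values())
        # Concentration-weighted average rate constant of what is still in the air:
        k_avg = sum(concs[name] * mix[name][1] for name in mix) / total
        print(f"t = {hour} h: average k = {k_avg:.3f} /h")
```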
All have been used in smog chemistry models, and all have their limitations. The mechanism that I first encountered was a lumped parameter mechanism called the Hecht-Seinfeld-Dodge mechanism. At that time I was coding what is called a Lagrangian Trajectory model version of the more elaborate Eulerian Grid model that had been developed by the research/consulting firm that employed me, then named Systems Applications Inc. One of my tasks was to code up and test the HSD mechanism in the simpler model.
At the same time, Gary Whitten (later to be my boss, because he was the only one who was willing to have me in his group, me being the charmer that I am) was attempting to use the HSD mechanism in an atmospheric application. He quickly ran into the problem that he had no idea what the “average molecular weight” of an average atmospheric hydrocarbon was, and there were parameters in the mechanism that depended upon that average.
What he did have was what are called “flame ionization detector” measurements of total reactive hydrocarbon, “as carbon.” In other words, he knew about how many carbon atoms there were, just not how many molecules they comprised. There were also a few gas chromatograph measurements that could be used to estimate the molar fractions of olefins (there were no real mechanisms for aromatic hydrocarbons at that time), but the breakdown of the alkyl hydrocarbons just wasn’t there.
Then he had an idea. I still think it was brilliant.
It turns out that the reactivity of an alkyl hydrocarbon (like butane, pentane, hexane, et al.) goes up with increasing molecular weight, primarily because there are more carbon groups. In fact, the reactivity of any given primary, secondary, or tertiary carbon group is largely constant from one hydrocarbon to another, and if you normalize the reactivity by carbon atom, it’s reasonably close (within 20-40%) to constant. (This neglects the very lightest hydrocarbons, methane, ethane, and propane, because they are anomalously unreactive, but that also means that you can ignore them, mostly).
So Whitten devised a mechanism that ignored the idea of molecules for alkyl carbon. Instead it treated each carbon atom as a single “reactive structure” and did all the chemistry from there. He called it the “Carbon Bond Mechanism,” and its descendants are still the primary photochemical air quality chemical mechanisms used in air quality management in the U.S. (and elsewhere).
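Here's a minimal sketch of the bookkeeping: collapse a molecule-based inventory of alkanes into a single count of paraffinic carbon and hang one per-carbon rate constant on it. The mix and the rate constant are illustrative, and it's no accident that the lumped quantity is exactly what a flame ionization detector reports, hydrocarbon "as carbon."

```python
# Collapse alkanes counted as molecules into one lumped "paraffinic carbon"
# species with a single per-carbon rate constant, in the spirit of the
# Carbon Bond approach. All numbers here are illustrative.

MIX = {                    # molecule -> (carbon atoms, mixing ratio in ppb)
    "butane":  (4, 10.0),
    "pentane": (5, 6.0),
    "hexane":  (6, 4.0),
    "octane":  (8, 2.0),
}

K_PER_CARBON = 1.0         # relative OH reactivity per paraffinic carbon (illustrative)

def lump_to_paraffinic_carbon(mix):
    """Total carbon in the mix, i.e. the single lumped species, in ppbC."""
    return sum(carbons * ppb for carbons, ppb in mix.values())

if __name__ == "__main__":
    par = lump_to_paraffinic_carbon(MIX)
    print(f"lumped paraffinic carbon: {par:.0f} ppbC")
    print(f"relative OH reactivity of the mix: {par * K_PER_CARBON:.0f}")
```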
It wasn’t my idea, but I took to it like a duck to water. (So much so, in fact, that some people wound up thinking it had been my idea in the first place, something I later recognized as “ageist” since I was the young ‘un of the team. So I always tried to make sure everyone knew it was Gary’s eureka moment). The CBM had exactly the sort of “thinking around the corners” style that I love. And, it was practical. It made everything easier: emissions inventories, comparisons to air quality data, coding the mechanism. It’s actually a bit difficult to conduct “mechanism comparison studies” among other kinds of kinetic mechanisms in the U.S. because practically every emissions inventory is in the form used by CBM, and a fair amount of the difference between mechanisms lies in how they treat the emissions inventories.
Over the next few years, we devised a lot of twiddles to make the edges work, like an “operator species” that took intra-molecular reactions (like chain breaking) into account. We also extended the mechanism to include aromatic hydrocarbons, and biogenics such as isoprene and terpenes; those wound up being closer to explicit/surrogate mechanisms. I also came up with a cute trick that involved treating very reactive olefins as if they’d already reacted to their carbonyl containing products (aldehydes and ketones) because they reacted so quickly that their products were more important than the original compound. Not to get too egomaniacal, but it was all very cool.
Now let’s take this up a few levels of abstraction.
If you’ve managed to get through all this technical verbiage, one thing you might have noticed is that this sounds more than a little bit like engineering. We were designing a kinetic mechanism, for particular purposes, based on the resources (time, knowledge, computing power) that we had. Our goal was the construction of an atmospheric chemical kinetics simulation model, a tool that could be used for both scientific and air quality management purposes. If science is devoted to the acquisition of knowledge, what do you call something that assists in environmental management? Again, a lot like engineering.
Science operates on the model of “objective reality” and scientists like to think of themselves as dealing with that reality in an impersonal way. You can see that in the way that scientific papers are written, frequently in passive voice, rarely with individual actions described, and even more rarely as anything where the “arbitrary” is even acknowledged. The idea of choices is largely absent, because choices are the product of subjective individuals.
Art, on the other hand, glories in the subjective, the experiential. Choice is part of its very nature. Art is personal, and artists have no problem with the idea that their ego is involved. That’s part of the point of it. But it’s still often the case that some artistic element “has to be that way.” The artist feels like there is no choice in the matter, because making a different choice will lead to inferior, or even bad, art.
I’ve had careers in both science and art, and for a long while I thought that the art was for personal expression and the science was for the satisfaction of my curiosity about an objective world that was entirely independent of myself. I also had the notion that engineering was where the two met, where one applied the objective knowledge of science in service of the subjective needs of human beings, and those needs included the application of artistic principles to engineering, and engineering principles to art.
It’s a good line of patter, and there’s some truth to it, but as time goes on, I see more and more holes in it. For one thing, while art may be personal and expressive, it’s often pretty generic, and it starts looking a lot like other art. No one else would have written Book of Shadows, but if I hadn’t, there might very well have been another novel of “heroic fantasy” in that publishing slot, and many of the same people might have read it and taken the same enjoyment from it. SunSmoke is a lot less interchangeable, in my view, but that is not necessarily obvious to the reader. I myself tend toward the idiosyncratic both as writer and reader, but most fiction, most art, is average; that’s what average means. And some proportion of popular entertainment is largely interchangeable with its near equivalents.
On the other hand, a great deal of science is more idiosyncratic, less objective, more personal than most scientists would admit. What is studied, how it’s studied, what sorts of theories and models are created, what sort of notation is used, all of that betrays the human face staring at the instruments, drawing the conclusions, writing up the results. Someone has to want to know the answer to the question that is being asked. Science is a human construct, no less than any other human construct, and to deny it is to deny both one’s self and the truth.
Now suppose you have a bunch of butane molecules flying around in the air, and the air also has some hydroxyl radicals (HO) in it. Every now and then, in accordance with the laws of statistical mechanics, one of the HOs will hit a butane molecule. Then what?
Well, most of the time, they just bounce right off each other. The hydroxyl is pretty reactive, radicals often are, but unless it hits the electron cloud of the butane in the right spot, with the right energy, etc., it’s just going to bounce. But every so often, it does hit right, and it grabs one of the hydrogens. Which one?
Well again, it will be the one it hit, but some of the hydrogens are more labile than others, so the HO is more likely to bounce if it hits one of the methyl groups, which have “primary” hydrogens because they are on primary carbons, and more likely to react if it hits the alkyl chain, on a “secondary” hydrogen.
Butane is nice and symmetrical, so there are only two possible outcomes. Due to symmetry, any primary hydrogen reaction looks like every other primary hydrogen reaction, and every secondary looks like every other secondary reaction. The hydroxyl always extracts a single hydrogen from the butane, which gives water, and an alkyl radical that immediately reacts with oxygen, and under smog conditions goes through a series of reactions that lead to either buteraldehyde, if the primary carbon was involved, or methyl ethyl ketone (MEK) if the secondary carbon was involved. (Actually, I’m ignoring some other pathways that get more important as molecular weight increases, like the formation of alkyl nitrates, and the times when the molecule fractures in the middle to produce acetaldehyde and an ethyl alkoxy radical. Having read that sentence, I’m sure you can appreciate my ignoring some details).
We can write a bunch of reactions for all this, assign rate constants to the reactions, put in temperature and pressure dependencies, etc. but the thing I want to point out is this: we’re simplifying a lot of events into a small set of descriptive equations. All the bounces are ignored, except insofar as they affect the reaction rate constant. All the different ways the molecules hit each other, along with the different energies of those collisions, all lumped into a few basic equations. We’re also taking advantage of the symmetries, by saying that reactions at either end carbon are equivalent, which they are, unless we had some way of telling the difference, like if one end or the other was isotope tagged.
Anyway, we’ve put all these things together and called them “reactions of the molecule.” That’s what chemistry does.
Now suppose we want to study the reactions of a number of different molecules, say add some pentane, hexane, heptane, and octane to the mix, and put in all the possible isomers of those compounds as well (there’s only one other isomer of butane, called isobutane, but toss in some of that as well). Now, how would you write your chemical equations?
You could try to write the equations for every single molecule—provided you wanted to go crazy, blow your computing budget, and not have the rate constants for even a tenth of what you wanted. You’re going to have to estimate that last one anyway, of course, though you might cheat and get some empirical data describing the reactivity of your mix.
You could look at what you have in the way of a mix and try to come up with some idea of an “average molecule.” That can get a little strange, because you’re going to have equations that account for some fraction of a carbon, for instance, and averaging rate constants is pretty iffy anyway. The fast reacting compounds will react away most quickly, so the “average” rate constant is going to keep changing. Nevertheless, you can do it, either as a constant average rate or as a continually changing average rate. It’s been done, though most often as a constant rate.
You could take your mix and wave your hands a little bit and say that it should look like some other, simpler mix, 45% butane and 55% octane, maybe. Of something like that.
The first one of these has come to be called the “explicit mechanism” approach. The second is the “lumped parameter” method. The third is a “surrogate mechanism” which is an explicit mechanism that is used on a reduced number of “surrogate compounds” to represent a more complex mixture.
All have been used in smog chemistry models, and all have their limitations. The mechanism that I first encountered was a lumped parameter mechanism called the Hecht-Seinfeld-Dodge mechanism. At that time I was coding what is called a Lagrangian Trajectory model version of the more elaborate Eulerian Grid model that had been developed by the research/consulting firm that employed me, then named Systems Applications Inc. One of my tasks was to code up and test the HSD mechanism in the simpler model.
At the same time, Gary Whitten (later to be my boss, because he was the only one who was willing to have me in his group, me being the charmer that I am) was attempting to use the HSD mechanism in an atmospheric application. He quickly ran into the problem that he had no idea what the “average molecular weight” of an average atmospheric hydrocarbon was, and there were parameters in the mechanism that depended upon that average.
What he did have was what are called “flame ionization detector” measurements of total reactive hydrocarbon, “as carbon.” In other words, he knew about how many carbon atoms there were, just not how many molecules they comprised. There were also a few gas chromatograph measurements that could be used to estimate the molar fractions of olefins (there were no real mechanisms for aromatic hydrocarbons at that time), but the breakdown of the alkyl hydrocarbons just wasn’t there.
Then he had an idea. I still think it was brilliant.
It turns out that the reactivity of an alkyl hydrocarbon (like butane, pentane, hexane, et al.) goes up with increasing molecular weight, primarily because there are more carbon groups. In fact, the reactivity of any given primary, secondary, or tertiary carbon group is largely constant from one hydrocarbon to another, and if you normalize the reactivity by carbon atom, it’s reasonably close to constant (within 20-40%). (This neglects the very lightest hydrocarbons, methane, ethane, and propane, because they are anomalously unreactive, but that also means you can mostly ignore them.)
So Whitten devised a mechanism that ignored the idea of molecules for alkyl carbon. Instead it treated each carbon atom as a single “reactive structure” and did all the chemistry from there. He called it the “Carbon Bond Mechanism,” and its descendants are still the primary photochemical air quality chemical mechanisms used in air quality management in the U.S. (and elsewhere).
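Here’s a rough sketch of the bookkeeping idea, not the actual Carbon Bond Mechanism species list: treat every alkane carbon as one generic “PAR” surrogate and count carbons rather than molecules, which lines up nicely with flame ionization detector measurements reported “as carbon.” The mix below is hypothetical.

```python
# Hypothetical alkane mix in ppb (by molecule); the carbon counts are exact,
# but the single "PAR" surrogate is a simplification of the real mechanism.
mix_ppb = {"butane": 12.0, "isobutane": 3.0, "pentane": 6.0, "octane": 2.0}
carbons = {"butane": 4, "isobutane": 4, "pentane": 5, "octane": 8}

# Lump everything into per-carbon surrogate units ("ppbC").
par_ppbC = sum(mix_ppb[name] * carbons[name] for name in mix_ppb)
avg_carbons = par_ppbC / sum(mix_ppb.values())  # average carbons per molecule

print(f"total reactive carbon: {par_ppbC:.0f} ppbC as PAR")
print(f"(the 'average molecule' would have {avg_carbons:.2f} carbons,")
print(" a number the carbon-bond bookkeeping never needs to know)")
```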
It wasn’t my idea, but I took to it like a duck to water. (So much so, in fact, that some people wound up thinking it had been my idea in the first place, something I later recognized as “ageist” since I was the young ‘un of the team. So I always tried to make sure everyone knew it was Gary’s eureka moment). The CBM had exactly the sort of “thinking around the corners” style that I love. And, it was practical. It made everything easier, emissions inventories, comparisons to air quality data, coding the mechanism. It’s actually a bit difficult to conduct “mechanism comparison studies” among other kinds of kinetic mechanisms in the U.S. because practically every emissions inventory is in the form used by CBM, and a fair amount of the differences between mechanisms is how they treat the emissions inventories.
Over the next few years, we devised a lot of twiddles to make the edges work, like an “operator species” that took intra-molecular reactions (like chain breaking) into account. We also extended the mechanism to include aromatic hydrocarbons, and biogenics such as isoprene and terpenes; those wound up being closer to explicit/surrogate mechanisms. I also came up with a cute trick that involved treating very reactive olefins as if they’d already reacted to their carbonyl containing products (aldehydes and ketones) because they reacted so quickly that their products were more important than the original compound. Not to get too egomaniacal, but it was all very cool.
Now let’s take this up a few levels of abstraction.
If you’ve managed to get through all this technical verbiage, one thing you might have noticed is that this sounds more than a little bit like engineering. We were designing a kinetic mechanism, for particular purposes, based on the resources (time, knowledge, computing power) that we had. Our goal was the construction of an atmospheric chemical kinetics simulation model, a tool that could be used for both scientific and air quality management purposes. If science is devoted to the acquisition of knowledge, what do you call something that assists in environmental management? Again, a lot like engineering.
Science operates on the model of “objective reality” and scientists like to think of themselves as dealing with that reality in an impersonal way. You can see that in the way that scientific papers are written, frequently in passive voice, rarely with individual actions described, and even more rarely as anything where the “arbitrary” is even acknowledged. The idea of choices is largely absent, because choices are the product of subjective individuals.
Art, on the other hand, glories in the subjective, the experiential. Choice is part of its very nature. Art is personal, and artists have no problem with the idea that their ego is involved. That’s part of the point of it. But it’s still often the case that some artistic element “has to be that way.” The artist feels like there is no choice in the matter, because making a different choice will lead to inferior, or even bad, art.
I’ve had careers in both science and art, and for a long while I thought that the art was for personal expression and the science was for the satisfaction of my curiosity about an objective world that was entirely independent of myself. I also had the notion that engineering was where the two met, where one applied the objective knowledge of science in service of the subjective needs of human beings, and those needs included the application of artistic principles to engineering, and engineering principles to art.
It’s a good line of patter, and there’s some truth to it, but as time goes on, I see more and more holes in it. For one thing, while art may be personal and expressive, it’s often pretty generic, and it starts looking a lot like other art. No one else would have written Book of Shadows, but if I hadn’t, there might very well have been another novel of “heroic fantasy,” in that publishing slot, and many of the same people might have read it and taken the same enjoyment from it. SunSmoke is a lot less interchangeable, in my view, but that is not necessarily obvious to the reader. I myself tend toward the idiosyncratic both as writer and reader, but most fiction, most art, is average; that’s what average means. And some proportion of popular entertainment is largely interchangeable with its near equivalents.
On the other hand, a great deal of science is more idiosyncratic, less objective, more personal than most scientists would admit. What is studied, how it’s studied, what sorts of theories and models are created, what sort of notation is used, all of that betrays the human face staring at the instruments, drawing the conclusions, writing up the results. Someone has to want to know the answer to the question that is being asked. Science is a human construct, no less than any other human construct, and to deny it is to deny both one’s self, and the truth.
Wednesday, February 13, 2008
Terry
Hot August night
And the leaves hanging down
And the grass on the ground smelling sweet
Move up the road
To the outside of town
And the sound of that good gospel beat
Sits a ragged tent
Where there ain't no trees
And that gospel group
Telling you and me
It's Love
Brother Love's Traveling Salvation Show
Pack up the babies
Grab the old ladies
Everyone goes
Everyone knows
Brother Love's show
--Neil Diamond, "Brother Love's Traveling Salvation Show"
The Congressional Record contains many interesting items, especially from the days when a filibuster actually required Senators to continue speaking for the duration. Often a filibustering Senator would read from a book, insert cooking recipes, and the like, just in order to keep the words flowing. Nowadays this is no longer required, owing to a thing called Senate Rule 22, which allows some Senators simply to declare "we're filibustering"; a cloture vote then determines whether or not the bill is blocked.
It's also quite possible for things to show up in the Congressional Record that were never actually said on the floors of Congress, and things that are said may be taken back, the CR being amended to nullify the past, and isn't that the way it ought to be with everything?
I doubt that my name was ever said in the Hallowed Halls, but it does appear in the Congressional Record at least once, as a citation of an EPA report in the background documentation for some air quality legislation (you'd think I could be more specific, and I probably could, but there are limits to how much work I'm willing to put into these little memoirs). It was, as I recall, a monthly report that later went into a document sometimes cited as just "Killus et al." (heh, heh), primarily because I was co-author to a majority of the individual chapters. The final publication was titled "Continued research in mesoscale air pollution simulation modeling. Volume 5: Refinements in numerical analysis, transport, chemistry, and pollutant removal" [Final Report, Oct. 1979 - Jul. 1982] KILLUS, J P; MEYER, J P; DURRAN, G E; ANDERSON, G E; JERSKEY, T N.
The full report included new transport algorithms, chemistry, actinic flux calculations, aerosol formation mechanisms, and surface uptake models for a photochemical grid model. The subsection that went into the CR was on the surface uptake mechanisms, i.e. the way that pollutants are absorbed or otherwise destroyed or transformed by interactions with surfaces, and I co-wrote it with the last guy cited, Terry N. Jerskey.
We didn't really work that closely together, having broken up the problem into piece parts with Terry doing some chunks of it, and me the rest. But there was a fair amount of time sitting across the table from each other, talking about this or that aspect of things like surface resistance, diffusional transport in the planetary boundary layer and other nurdy things that we were being paid to talk about. It was a lot of fun, actually, for me at least. I hope Terry enjoyed it.
Terry's hands shook by that point, a tremor that was a side effect of the medication he was on; I think it was Haldol, but this is a 30-year-old memory, and he only told me once.
One day, late, after everyone else had left the office except Tom, who was a chronic workaholic, Terry went over to the shopping center across the street and bought several bottles of dry cleaning fluid, which he proceeded to swig down on the way back to the office, tossing the bottles into the trash cans on the way back. He made it back to the office and collapsed on the hall floor, where Tom found him a few minutes later.
In addition to being a workaholic, Tom was also a member of the Ski Patrol, and strong as an ox besides. Both turned out to be important, because, after he called for the paramedics, he had to use that strength to pry Terry's jaws apart, in order to give him mouth-to-mouth respiration. Terry's jaws had become locked with muscle spasms, you see.
Then, after the ambulance arrived, Tom raced across the street and located the bottles of cleaning fluid (which I suspect he'd tasted in Terry's vomit and breath during the time he was doing Terry's breathing for him) and reported what Terry had swallowed to the ER by the time Terry had arrived.
This wasn't Terry's first suicide attempt, it turned out. That was the reason for the anti-depressants. In fact, I heard that Terry's wife was pretty blasé about the matter when she was called.
The next day, Terry was sitting up in the ICU, alert, seemingly fine. He told everyone who visited that he'd be back at work pretty soon.
The next day he was dead. The cause of death was "aspiration pneumonia." Vomiting cleaning fluid and then breathing it into your lungs causes damage, and there was enough damage for entirely different fluids to build up in his lungs—enough to kill him, in fact.
The only thing that I ever learned about Terry other than our working together was that he loved Neil Diamond, even the later pretentious stuff like "Longfellow Serenade." When we spoke about Neil, I'd always talk about songs like "Brother Love's Traveling Salvation Show," because I could honestly say that I liked it.
I honestly liked Terry, too, but not nearly enough, really. For the most part, he was just a guy I worked with for a while.
Labels:
atmospheric science,
dangerous jobs,
death,
memoir
Wednesday, February 6, 2008
Hot Buttered
Orville Redenbacher is on the TV, telling us again how great his microwave popcorn is, and by the way, it doesn't contain diacetyl. Any more.
Diacetyl (emphasis on the first syllable) is also called biacetyl (emphasis on the last syllable), and the latter is what we called it when I was working on the photooxidation of aromatic hydrocarbons a couple or three decades ago. Biacetyl, in fact, occupies an important place in the history of smog chemistry, though I have to admit the notion of "important" is open to interpretation.
There are basically four kinds of "reactive organics" that are important in smog photochemistry: paraffins, olefins, aromatics, and carbonyl compounds (aldehydes and ketones), the latter being more commonly formed in the smog process than emitted outright. I'm taking a bit of a liberty here by omitting alcohols, ethers, and other oxygenated compounds, partly because, ethanol and MTBE notwithstanding, they still don't amount to a large fraction of the mix, and partly because their photochemistry is pretty close to that of paraffins, or ketones that don't photolyze, i.e. break up by the direct action of sunlight.
The early days of smog chemistry were dominated by research into the chemistry of paraffins and olefins, so much so, in fact, that it wasn't until the mid-1970s that researchers realized that the photolysis of aldehydes and ketones was the primary source of catalytic radicals in the smog formation process. In fact, that was the biggest single difference between the first photochemical kinetic mechanism that I worked with, the Hecht-Seinfeld mechanism, and the later Hecht-Seinfeld-Dodge mechanism. The former used oxygen atoms (from the photolysis of NO2) as its primary radical source, whereas the latter used formaldehyde and higher aldehydes for that purpose.
Both of these mechanisms were based on smog chamber experiments involving butane and propylene (or propene, if you're a nomenclature purist). Aromatics chemistry was tacked on as an afterthought, not because it was believed to be unimportant, but more because nobody had any idea what to do with it.
Aromatic hydrocarbons, as they are called, all have a "benzene ring" somewhere in them, and that makes everything very complex. Perhaps you remember the story about Friedrich Kekule literally dreaming up benzene's structure. Its formula is C6H6, and its structure "bites its own tail," so each carbon atom, with four chemical bonds, has, after accounting for the hydrogen, three bonds to share with its two neighboring carbon atoms. That could work out to two and one or one and two, i.e. a paraffinic bond with one neighbor and an olefinic bond with the other, but the wonders of quantum mechanics allow it to actually be one and a half bonds with each neighbor. Such are the wonders of quantum electrons being able to be in several places at the same time.
Benzene itself is almost dead, photochemically speaking; put it into a smog chamber and it mostly just sits there, making a little tang of phenol after a while, but phenol is deader still, so…boring.
But if you replace one or more of benzene's hydrogens with a methyl group (-CH3), now you're talking. One added methyl group gives you toluene. Two, and you get xylene, which comes in three isomers, meta, para, and ortho, depending upon whether the methyl groups sit right next to each other (ortho), on opposite sides of the ring (para), or one over (meta). There are also, of course, trimethylated benzenes, and compounds where the substituted groups are more complex than methyl groups. But actually, toluene and the xylenes make up the bulk of aromatic compounds in air pollution. There is even a refinery stream referred to as "BTX," which stands for benzene, toluene, and xylene.
Okay, so I'm going to tell you how the photochemistry works, then how it got figured out. The tricky part had to do with how the aromatic rings would open up. Everyone knew it had to happen sometime, but how, and what the products were was a mystery for years.
What happens to something like toluene in smog is that, when it encounters an hydroxyl radical (-OH), the hydroxyl adds itself onto the ring somewhere, usually at the carbon that sits next to a methyl group, because of the way that methyl groups mess with the electron distribution of the aromatic ring. This is what hydroxyls do with olefins, incidentally, so you can look on it as the hydroxyl briefly looking at the ring and seeing, not that "one and a half bonds" thing I mentioned above, but a double carbon-carbon bond, which hydroxyls just love to glom onto.
This breaks one of the carbon-carbon bonds, and one end of it now has a romantic relationship with the hydroxyl radical. But the other end, like a jilted lover, is on the rebound, ready to pick up with just about any pretty face that comes by. That face, almost always, belongs to oxygen, a really promiscuous molecule. It's diatomic (i.e. O2), but not so committed to the relationship that it passes up some good carbon bond action.
So an O2 gloms onto the other, lonely, carbon and you now have a peroxy radical, an aromatic ring with an oxygen tail. The radical characteristic of the thing tends to be concentrated at the free swinging tip of the tail, and in most peroxy radicals, that tip winds up reacting with some other molecule.
Not so with the aromatic peroxy radicals, however, because it so happens that the radical tip is just right for swinging around and hooking up with another carbon, somewhere else on the aromatic ring. You may now consider all of the other sexual double entendres that I could use for this situation.
Anyway, another oxygen now gloms onto the group, but now the situation is stable enough (maybe) so that it waits around for some outside compound (usually a molecule of nitric oxide—NO) to take the last lonely oxygen atom away from the daisy chain.
All the oxygens then decide to settle down with their new carbon best buddies. The oxygen-oxygen bonds call it quits, and that leaves another oxygen bond for each oxygen-connected carbon. If you're counting, and remember that carbon only has four bonds to its name, this means that it has a double bond with an oxygen, one for either a hydrogen or a methyl group, and, whoops, only one left for another carbon in the aromatic ring. In short, the ring opens, in multiple places, once for each oxygen. At some point, the poor hydroxyl group, which is now the radical of the bunch, meets yet another oxygen molecule and the hydrogen leaves the party to form hydroperoxyl (HO2).
The aromatic ring is pretty much finished at this point, and it cleaves into at least two pieces, one with two ring carbons, the other with four. The one with four has, in addition to two oxygen atoms, an olefinic bond (there was some belief for a while that the fragments might all have two ring carbons each, meaning that there would have been another oxygen molecule bridge on the ring, but later product yield measurements indicate otherwise).
Both ring fragments are called "dicarbonyls" because they each have two carbonyl (C=O) bonds. In one of the fragments, the two carbonyl bonds are right next to each other.
The simplest dicarbonyl is called "glyoxal." It's just H(C=O)(C=O)H. The next one is methyl glyoxal, with a single added methyl group: H(C=O)(C=O)CH3. Both of these are very hard to measure; they tend to stick to gas chromatographic columns nigh onto forever.
Ah, but the next in line is a dicarbonyl with two methyl substituents: CH3(C=O)(C=O)CH3. This is called biacetyl, or diacetyl. And it comes through a chromatographic column.
If you photooxidize ortho-xylene, with its two adjacent methyl groups, then when the ring opens, a certain percentage of the time you get biacetyl. A group at the University of California at Riverside (Darnall, Atkinson, and Pitts, 1979) saw the biacetyl coming off their chromatograph and realized that they had seen the first evidence of ring-opening products.
It so happens that both biacetyl and methylglyoxal photolyze like crazy, so much so that they last only a few minutes in sunlight before splitting into radical fragments. I had been looking for something exactly like these dicarbonyls in my own studies of aromatics photochemistry, because I'd found good evidence of very powerful radical sources in toluene experiments. My calculations indicated that the radical formation rate from toluene was twice what it would be if toluene were going to pure formaldehyde, which of course it does not. It forms a significant amount of methyl glyoxal, and that was what I was looking for.
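A quick back-of-the-envelope on "a few minutes in sunlight": the photolytic lifetime is just the reciprocal of the photolysis frequency. The frequency in this sketch is an illustrative value chosen to be consistent with that few-minute claim, not a measured one.

```python
# Photolytic lifetime tau = 1 / j, where j is the photolysis frequency (1/s).
# j_demo is illustrative only, picked to give a lifetime of a few minutes.
j_demo = 5.0e-3          # s^-1, assumed midday value for a fast photolyzer
tau_seconds = 1.0 / j_demo
print(f"lifetime ~ {tau_seconds / 60:.1f} minutes")

# For comparison, a sluggish photolyzer (j ~ 1e-5 s^-1) would last the
# better part of a day, which is why the dicarbonyls stand out.
```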
Later, I heard that biacetyl/diacetyl was used to flavor margarine; I also heard that microwave food products use excess flavoring agents because the microwave heating process drives the volatiles away faster than regular cooking.
I had some vague suspicions that it might not be a good idea to use a compound as photochemically unstable as biacetyl in food. Light causes biacetyl to break into two pieces, both acetyl radicals, and when there is any oxygen around, you get peroxyacetyl radicals. Add some nitrogen dioxide and you get peroxyacetyl nitrate (PAN), which is biologically active. Actually, it's a good bet that any given peroxy compound is biologically active. These are some pretty potent radicals.
So then we see a story about the guy who loved the buttery smell of microwaved popcorn and got a rare lung disease, bronchiolitis obliterans. More to the point, "popcorn lung" has been added to the list of industrial diseases affecting production workers.
All I had were a few suspicions, of course. Nothing to go on, really. But I can't say that I'm surprised in the slightest.
Labels:
atmospheric science,
chemistry,
dangerous jobs,
memoir,
photochemistry,
science
Sunday, January 20, 2008
Justifications II
In the early 1980s, the California Air Resources Board proposed some stringent rules on how much NOx (nitrogen oxides) could be emitted from power plants. The new regulations were meant to be “technology forcing,” which means that the control technology to meet the regs either had not yet been developed, or it had never been used on a large scale. Moreover, the required control factor was proportional to current emissions, rather than the usual method of allowing X amount of emissions per Y amount of power generated. So any utility that had controlled emissions beyond what had been previously mandated would actually be penalized by being required to clean up more than if they had only just barely met previous regs. Call it a penalty on being good.
CARB had been trying to set stringent NOx controls for years, believing NOx to be the real culprit behind smog. In fact, the relationship between NOx emissions and smog formation is _very_ complex, with fresh NOx emissions, which are mostly nitric oxide (NO) combining with ozone to form nitrogen dioxide (NO2), thereby reducing the level of the main smog constituent – temporarily. Also, NO2 is a radical scavenger, so it slows the smog oxidation process at elevated levels. On the other hand, without a minimal amount of NOx, the smog formation process basically stops, so if you eliminate all NOx emissions, you also stop smog formation. The question in NOx control is always whether or not you can reduce NOx to low enough levels to be effective.
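The “temporarily” part of that NO-plus-ozone story is fast enough to show with a toy box-model calculation. The rate constant below is the commonly quoted room-temperature magnitude for NO + O3, and the starting concentrations are just plausible plume numbers picked for illustration.

```python
# Toy titration of ozone by a fresh NO plume: NO + O3 -> NO2 + O2.
# k is roughly the room-temperature value (~2e-14 cm3 molecule-1 s-1);
# concentrations are illustrative, converted from ppb at surface conditions.
k = 1.9e-14                     # cm3 molecule-1 s-1 (approximate)
ppb = 2.46e10                   # molecules/cm3 per ppb at ~1 atm, 298 K
no, o3, no2 = 100 * ppb, 60 * ppb, 0.0

dt, t = 0.1, 0.0                # seconds
while t < 300.0:                # five minutes of plume chemistry, no sunlight
    rate = k * no * o3
    no  -= rate * dt
    o3  -= rate * dt
    no2 += rate * dt
    t   += dt

print(f"after {t:.0f} s: O3 = {o3/ppb:.1f} ppb, NO = {no/ppb:.1f} ppb, "
      f"NO2 = {no2/ppb:.1f} ppb")
# Most of the ozone near the stack is gone within minutes; it only comes
# back later, downwind, once sunlight and the rest of the chemistry catch up.
```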
In any case, the consulting firm I worked for was hired by Southern California Edison (SCE) to do an impact study on the proposed regulations, to see if they were properly “grounded in science.” I was selected to be the technical lead on the project.
The project manager wanted a quick result. We had plenty of simulations of various days in Los Angeles, where SCE had its power plant that would be affected, and he wanted a simple reduced-emissions scenario run for some of those days. I wanted to extend the simulations to multiple days.
Part of the reason I wanted this was because it had never been done before, and I wanted to extend the science. That was self-serving in the sense that it would certainly enhance my reputation (and the company’s), and also, I was curious about a number of things that simply couldn’t be examined with single day results, such as the importance of day-to-day carryover of pollutants. But it was also true that such a simulation would be in the best interest of the client, since providing an answer to those unanswered questions greatly reduces the amount of wiggle-room for policy makers.
Anyway, there were arguments, loud ones, but eventually my position carried the day. With hindsight, I now suspect that the project manager in question developed a grudge against me, a grudge that explains some of his later behavior, but that’s another story.
In any case, having won the argument, it was then up to me to deliver, which I did. I had to write a different chemical kinetics module to do night time chemistry, one that used a lot of heuristic reasoning and various other tricks of the trade, but it did work, and I had smog chamber data to validate it against, so we were in the clear on that point. I also did some fairly significant work on what “clean air” looks like, that has been used (and misused) by a lot of other people since.
Our baseline simulation ran for over three days. What we found, essentially, was that the near-field ozone suppression effect of NOx emitted by the power plant was greater than the amount of ozone that was eventually attributable to that NOx in smog formation reactions. Moreover, the highest concentration difference in ozone attributable to the power plant was 1 part per billion, less than 1% of the smog standard, and on the baseline day, less than ½ of 1% of total peak ozone at the impacted area.
We presented our results at a CARB hearing, and the result was that they sent the proposed regulations back for reanalysis, pretty much the best possible result for our client. Eventually, more stringent NOx regulations did come into effect, but they were not technology forcing, and no doubt had other aspects that were less unpleasant to SCE, because that’s the way things work. That particular plant, incidentally, was retired a couple of years ago.
There are a lot of “anti-environmentalists” in the conservative movement, and in the fellow-traveling wing of the libertarians, who decry all environmental regulations as being anti-business, or an infringement of their rights as individuals. There are also a lot of industry-funded think tanks tasked with muddying the scientific waters, denouncing things they don’t like as “junk science” and working against the proper use of science as a policy tool. I’ve lost count of the number of occasions where one or the other of these folks has sneered at me for being in favor of some “environmentalist” policy.
There are also some environmentalists who would condemn the preceding story as being another case of big business trampling the regulatory process, but I don’t buy it any more than I buy the anti-environmentalist narrative. I believe our results, and our results said that this particular issue wasn’t worth the price. The amount of smog reduction, if there was any at all, was immeasurable. The actual population exposure to ozone quite possibly would have gone up. And in any case, the primary health effects from air pollution turn out to be from fine particulates, with ozone, even now, after another couple of decades of study, still being problematic from the standpoint of assigning it a specific level of toxicity at urban smog levels. The effects of ozone on plants are better established than its effects on human health.
And what would have been the price of the proposed regulations? Well, the CARB staff said that it would amount to a small amount of money per rate payer per month. Calculated out to the total number of rate payers, it came to $50 million per year. I don’t think that CARB staff had any incentive to overestimate the cost, incidentally. Typically it’s the other way around.
That was over 20 years ago. A cost of $50 million a year, ignoring all present value calculations, etc., comes to over a billion dollars over that span. I always figured that we probably only bought SCE maybe 5 years, though the later regulations were probably better thought out. I always guesstimate the savings to Southern California rate payers at more like $250 million.
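For what it’s worth, the arithmetic behind those guesstimates is nothing fancier than this:

```python
annual_cost = 50e6          # CARB staff estimate, dollars per year
years_since = 20            # "over 20 years ago", no present-value discounting

full_cost = annual_cost * years_since          # ~$1 billion if never delayed
years_bought = 5                               # my guess at the actual delay
savings = annual_cost * years_bought           # ~$250 million to rate payers

print(f"hypothetical 20-year cost: ${full_cost/1e9:.1f} billion")
print(f"guesstimated savings:      ${savings/1e6:.0f} million")
```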
Too bad I couldn’t have held out for a percentage.
Labels:
atmospheric science,
computer modeling,
environment,
politics
Thursday, January 10, 2008
Surfaces
Something in the smog biz that used to drive me nuts was when someone would look at some smog chamber experiment that had some unusual feature to it and remark, “Well, that’s just a chamber effect.” The subtext was “We’re studying gas phase kinetics, and that’s something having to do with a surface phenomenon, so we shouldn’t pay any attention to it.”
I didn’t think that should let us off the hook. What kind of surface effect was it? How did it behave? And were we absolutely sure that such effects didn’t occur elsewhere?
Eventually I wrote a paper, “Background Reactivity in Smog Chambers.” Google Scholar tells me that it’s been cited at least 17 times, as recently as last year, so it did okay for a paper published 20 years ago.
In the 60s and 70s, there were a lot of smog chamber experiments done on all sorts of individual compounds; there was a belief that one could produce a “reactivity scale” that would let you target for control the compounds with the most smog-forming potential. As the complex nature of smog chemistry began to dawn on people, such experiments became less common, because “reactivity” has multiple components, sometimes 2 + 2 = 6 in smog chemistry, making the development of a single scale problematic. There’s a fellow at SAPRC in Riverside, Bill Carter, who has developed a much more complicated way of estimating “incremental reactivity,” which has its own problems, but it’s better than “one size fits all.”
Anyway, one of the “pure compound” experiments involved methyl chloroform, and I found it fascinating.
Methyl chloroform is also called 1,1,1-trichloroethane. If you start with ethane (CH3CH3) and replace all the hydrogens on one methyl group with chlorine, you get methyl chloroform. It’s pretty unreactive stuff; the only reaction sites for hydroxyl radicals are the ones on the methyl group, and methyl hydrogens are bound pretty tightly. So for the first part of the chamber experiment, using very high concentrations of MCF with some added NOx, the thing just sat there.
Then, after a couple of hours of induction, something began to happen. The NO began to convert to NO2, some of the MCF began to decay, then suddenly, wham! The whole system kicked into high gear, NO went down like a shot, the MCF began to oxidize like crazy, and ozone began to shoot up. Then, just as suddenly, the ozone just disappeared, all of it, in just a couple of measurement cycles.
Everyone who looked at it said, “Ah, chlorine chemistry,” which was a sure guess. Chlorine will pull hydrogen off of even methyl groups with almost collisional efficiency (if a chlorine atom hits the molecule, it pulls off the hydrogen almost every time). Moreover, chlorine atoms destroy ozone; that’s the “stratospheric ozone depletion” thing.
But I was puzzled. Where did the chlorine atoms come from? Yes, there was plenty of chlorine in the MCF, but it was bound. To get one off, you need to create a free radical, and those ain’t cheap. If you create an HO radical, that can pull off one of the hydrogens, and that, after the usual reactions, gives you chloral, a tri-chlorinated version of acetaldehyde. Put in a high enough rate of photolysis for chloral in your simulation and you can get the whole system to react.
The problem was, it didn’t look right. With a high rate of photolysis for chloral, the simulation kicked off too quickly. Lower the rate and you never got the sudden takeoff. I’m pretty good at fitting the curves, and I could never get it to work.
So I started looking at the other actors in the system. The end result of chloral oxidation is phosgene (see why I was looking up all those post-WWI gas papers?), but phosgene itself didn’t fill the bill. So maybe the phosgene was converting to CO and Cl2 on the chamber surfaces like it does in someone’s lungs. No, that didn’t work either.
I kept returning to the problem over the years, trying yet another idea, each time getting no further.
In 1985, the “ozone hole” over the Antarctic was reported, and everyone in the stratospheric ozone community, including Gary Whitten, my boss at SAI, immediately suspected that it had something to do with the ice clouds that only form in the stratosphere over the Antarctic. In 1987, Mario Molina published a series of papers describing the surface reactions of stratospheric chemical species on ice crystal surfaces. The really critical reaction was the reaction of chlorine nitrate with hydrochloric acid to form nitric acid and molecular chlorine (Cl2). Cl2 photolyzes so rapidly that it might as well be two chlorine atoms.
I’m not sure when I first tried the Molina reaction on the methyl chloroform system, but it worked much better than anything else I’d tried. It makes the whole thing a very strong positive feedback system. It worked well enough to convince me that it was probably the missing factor; if I wanted to get a better simulation, I’d have to get very specific about some details of the original chamber experiment, and that one’s 35 years old. It’s pretty well moot at this point anyway.
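I can’t reproduce the real simulation here, but the qualitative shape, a long induction period followed by a sudden takeoff, is the signature of autocatalysis, which is exactly what a wall reaction that regenerates Cl2 from its own products gives you. Here’s a deliberately cartoonish sketch: one pool of unreacted material, one pool of chain carriers that the products feed back into. Everything in it is invented for illustration.

```python
# Cartoon of an autocatalytic system: A is the unreacted pool (think MCF plus
# NOx), B stands in for the chain carriers regenerated by the surface step.
# Rate constants and the tiny radical seed are invented, chosen only to show
# the induction-then-takeoff shape, not to represent the actual chemistry.
k_auto, k_loss = 1.0, 0.05     # arbitrary units
A, B = 1.0, 1.0e-6             # almost no chain carriers at the start
dt = 0.01

for step in range(3001):
    t = step * dt
    if step % 500 == 0:
        print(f"t = {t:5.2f}   A = {A:.4f}   B = {B:.6f}")
    dA = -k_auto * A * B
    dB = +k_auto * A * B - k_loss * B
    A += dA * dt
    B += dB * dt
# A sits nearly untouched for a long while, then collapses over a short
# interval once B has quietly built up: the "wham" in the chamber data.
```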
Molina won the Nobel Prize for his work on stratospheric ozone depletion, and it was well-deserved. I was just looking at a single smog chamber experiment, one with a surface reaction that no one was interested in. The chance that I would have figured out the right answer to the peculiarities of that experiment is pretty small. The chance that I would have made the leap from the chamber walls to the stratospheric ice clouds is smaller still; I’d never heard of them before Whitten told me about them, and I certainly didn’t make the connection between them and the chamber experiment until Molina worked out the correct surface chemistry. So I’m certainly not trying to say that I coulda been a contenda.
But I will say that we all should have been paying more attention to the chamber wall effects. You don’t get to say beforehand what will turn out to be important.
I didn’t think that should let us off the hook. What kind of surface effect was it? How did it behave? And were we absolutely sure that such effects didn’t occur elsewhere?
Eventually I wrote a paper, “Background Reactivity in Smog Chambers.” Google scholar tells me that it’s been cited at least 17 times, as recently as last year, so it did okay for a paper published 20 years ago.
In the 60s and 70s, there were a lot of smog chamber experiments done on all sorts of individual compounds; there was a belief that one could produce a “reactivity scale” that would let you reduce those things that had the most smog forming potential. As the complex nature of smog chemistry began to dawn on people, such experiments became less common, because “reactivity” has multiple components, sometimes 2 + 2 = 6 in smog chemistry, making the development of a single scale problematic. There’s a fellow at SAPRC in Riverside, Bill Carter, who has developed a much more complicated way of estimating “incremental reactivity,” which has its own problems, but it’s better than “one size fits all.”
Anyway, one of the “pure compound” experiments involved methyl chloroform, and I found it fascinating.
Methyl chloroform is also called 1,1,1 tri-chloroethane. If you start with ethane (CH3CH3) and replace all the hydrogens on one methyl group with chlorine, you get methyl chloroform. It’s pretty unreactive stuff; the only reaction sites for hydroxyl radicals are the ones on the methyl group and methyl hydrogens are bound pretty tightly. So for the first part of the chamber experiment, using very high concentrations of MCF with some added NOx, the thing just sat there.
Then, after a couple of hours of induction, something began to happen. The NO began to convert to NO2, some of the MCF began to decay, then suddenly, wham! The whole system kicked into high gear, NO went down like a shot, the MCF began to oxidize like crazy, and ozone began to shoot up. Then, just as suddenly, the ozone just disappeared, all of it, in just a couple of measurement cycles.
Everyone who looked at it said, “Ah, chlorine chemistry,” which was a sure guess. Chlorine will pull hydrogen off of even methyl groups with almost collisional efficiency (if a chlorine atom hits the molecule, it pulls off the hydrogen almost every time). Moreover, chlorine atoms destroy ozone; that’s the “stratospheric ozone depletion” thing.
But I was puzzled. Where did the chlorine atoms come from? Yes, there was plenty of chlorine in the MC, but that was bound. To get one off, you need to create a free radical and those ain’t cheap. If you create an HO radical, that can pull off one of the hydrogens, and that, after the usual reactions, gives you chloral, a tri-chlorinated version of acetaldehyde. Put in a high enough rate of photolysis for chloral in your simulation and you can get the whole system to react.
The problem was, it didn’t look right. With a high rate of photolysis for chloral, the simulation kicked off too quickly. Lower the rate and you never got the sudden takeoff. I’m pretty good at fitting the curves, and I could never get it to work.
So I started looking at the other actors in the system. The end result of chloral oxidation is phosgene (see why I was looking up all those post-WWI gas papers?), but phosgene itself didn’t fill the bill. So maybe the phosgene was converting to CO and Cl2 on the chamber surfaces like it does in someone’s lungs. No, that didn’t work either.
I kept returning to the problem over the years, trying yet another idea, each time getting no further.
In 1985, the “ozone hole” over the Antarctic was reported, and everyone in the stratospheric ozone community, including Gary Whitten, my boss at SAI, immediately suspected that it had something to do with the ice clouds that only form in the stratosphere over the Antarctic. In 1987, Mario Molina published a series of papers describing the surface reactions of stratospheric chemical species on ice crystal surfaces. The really critical reaction was the reaction of chlorine nitrate with hydrochloric acid to form nitric acid and molecular chlorine (Cl2). Cl2 photolyzes so rapidly that it might as well be two chlorine atoms.
I’m not sure when I first tried the Molina reaction on the methyl chloroform system, but it worked much better than anything else I’d tried. It makes the whole thing a very strong positive feedback system. It worked well enough to convince me that it was probably the missing factor; if I wanted to get a better simulation, I’d have to get very specific about some details of the original chamber experiment, and that one’s 35 years old. It’s pretty well moot at this point anyway.
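Just to show the shape of that feedback, here’s a toy sketch in Python (invented rate constants and a three-species caricature, not the real mechanism and certainly not our chamber code): a slow OH channel seeds chlorinated products, a wall reaction turns the products into Cl2, and the Cl2 photolysis is treated as instantaneous, so every wall event releases two chlorine atoms that attack the MCF almost on contact. The point is the couple of hours of induction followed by the runaway.

import numpy as np
from scipy.integrate import solve_ivp

# Toy caricature of the MCF chamber feedback -- made-up rate constants,
# not the real mechanism.  Species: MCF, P (chlorinated products standing
# in for chloral/phosgene), Cl (atomic chlorine, with Cl2 photolysis
# treated as instantaneous, so every wall event releases two Cl atoms).
k_oh   = 1.0e-4   # slow OH-initiated attack on MCF (1/min), the seed
k_cl   = 5.0e-1   # fast Cl attack on MCF (1/(ppm*min))
k_wall = 2.0e-2   # surface reaction converting products back to Cl2 (1/min)

def rates(t, y):
    mcf, p, cl = y
    attack = k_oh * mcf + k_cl * cl * mcf      # MCF loss, both channels
    dmcf = -attack
    dp   = attack - k_wall * p                 # products build up, wall consumes them
    dcl  = 2.0 * k_wall * p - k_cl * cl * mcf  # each wall event yields Cl2 -> 2 Cl
    return [dmcf, dp, dcl]

sol = solve_ivp(rates, (0.0, 600.0), [10.0, 0.0, 0.0],
                method="LSODA", t_eval=np.linspace(0.0, 600.0, 61))
for t, mcf in zip(sol.t[::6], sol.y[0][::6]):
    print(f"t = {t:5.0f} min   MCF = {mcf:7.3f} ppm")   # flat for hours, then the crash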
Molina won the Nobel Prize for his work on stratospheric ozone depletion, and it was well-deserved. I was just looking at a single smog chamber experiment, one with a surface reaction that no one was interested in. The chance that I would have figured out the right answer to the peculiarities of that experiment is pretty small. The chance that I would have made the leap from the chamber walls to the stratospheric ice clouds is smaller still; I’d never heard of them before Whitten told me about them, and I certainly didn’t make the connection between them and the chamber experiment until Molina worked out the correct surface chemistry. So I’m certainly not trying to say that I coulda been a contenda.
But I will say that we all should have been paying more attention to the chamber wall effects. You don’t get to say beforehand what will turn out to be important.
Labels:
atmospheric science,
chlorine,
memoir,
science
Thursday, December 27, 2007
Nitrous
I don’t call them “pet peeves.” I call them “things that annoy the hell out of me.” One of them is comparing the energy of high energy cosmic rays to a hard hit golf ball, and I'll perhaps explain why sometime. But the one that gets me every time is when someone is talking or writing about nitrous oxide and then says “NO2.”
If you want to go all British, it’s true that the generic Brit term for nitrogen oxides is “nitrous oxides,” but that’s not what’s being referred to, for example, on the TV program Mythbusters, or in any reference to nitrous either as a combustion booster (the “poor man’s supercharger”) or as a dental anesthetic / recreational drug. That is N2O, a notably different compound that comes in big blue tanks, because it’s an oxidizer. Also, because nitrous is slightly sweet and has a relatively low vapor pressure (because its critical temperature is about 35 C, it can be liquefied at room temperature), it’s used as a propellant for whipping cream.
If you compress NO2, you wind up with the dimer N2O4, dinitrogen tetroxide, which dissociates back to NO2 on pressure release, producing toxic levels of NO2. It’s also a fine oxidizer, and will cause a lot of things to burn mighty fast. It is also somewhat self-oxidizing, which means that it can burn itself mighty fast, giving a good replica of an explosion.
Nitrous oxide, N2O, is also a good oxidizing agent, hence its use in auto racing (or in the movie Road Warrior). It just drops the oxygen atom off the nitrogen molecule and away we go. And since the bottle doesn’t need to be at high pressure, it’s safer than using compressed oxygen. Both compressed oxygen and N2O can be a little dangerous if there's anything like grease in your line, however.
Nitrous is a pretty good greenhouse gas, and it’s also a source of nitrogen oxides in the stratosphere. Usually the photolysis of ozone just gives what’s called the “triplet state” of the oxygen atom that cleaves off the O3, but if the ozone is hit with short wave UV (down below about 290 nm), it gives a more energetic form of oxygen radical called the “singlet state.” The spectroscopic labels for the two states are 3P and 1D, so the shorthand is O3p and O1d, pronounced “Oh triplet P” and “Oh singlet D” respectively.
Most of the time, O1d just bounces around until an inelastic collision drops it back to O3p, but not always. In the troposphere, the most common reactive fate of O1d is reaction with water vapor, to give two hydroxyl radicals:
O1d + H2O -> OH + OH
But water is scarce in the stratosphere, and O1d is more plentiful (because there’s more short wave UV). Sometimes the O1d runs into a molecule of N2O and you get nitric oxide:
O1d + N2O -> NO + NO
Whitten once had a very clever idea for measuring the amount of short wave UV in the UNC outdoor smog chamber that involved pumping some 20 pounds of N2O into the chamber along with a lot of acetaldehyde and ozone. The ozone absorbed the shortwave UV, and the resulting O1d reacted with the N2O. The resulting NO got sucked up by peroxyacetyl radicals that had been formed from the normal smog chemistry of the acetaldehyde, and the amount of PAN that was formed was a quantitative measure of O1d formation. In essence the whole shebang had become a giant actinometer. Very cool, and it happened to match the light models that UNC had been using to estimate UV in their chamber, so everyone went home happy, though a few of them were disappointed that they hadn’t been allowed to sample the nitrous.
The recreational use of nitrous is not exactly illegal, but it is discouraged. Most automotive nitrous is sold “sour,” with added sulfur dioxide to make huffing unpleasant. On the other hand, “whippets,” for use in whipped cream dispensers, can be bought in stores all across the land. Whippet nitrous is mighty expensive, but I’ve seen people walking down the streets of the French Quarter in New Orleans with little balloon dispensers that take a whippet charge. Those dispensers used to be sold in “head shops” before somebody figured out how to harass them out of existence.
There are, of course, dangers involved in using nitrous recreationally, not getting enough oxygen being one of them. That was one significant problem at the World Science Fiction Convention in Denver a couple of decades ago (when nitrous abuse was much more common than it is now). People used to sea level air found themselves passing out and falling off their chairs. There’s also a known, long-term danger to nitrous use/abuse, in that it causes a vitamin B12-related “die off” of nerves in the peripheral nervous system. Those do grow back, but over-exposure to nitrous can get ahead of the ability to regenerate, and various neurological problems result. This has mostly been reported in anesthesiologists, who sometimes get a high background dose of nitrous even without an abuse syndrome, though the case I read of the dentist who used to take 2-6 hour “naps” under an N2O/O2 mix probably counts as abuse.
The biggest bang for the buck is nitrous by the tankful, in either medical/dental or metallurgical grade. There was a time when the “typical Bay Area SF Convention party” featured one or more large nitrous tanks. A few times I brought along another tank filled with helium, and those of us who were dispensing gas made people ask for the nitrous by first inhaling a balloon full of helium.
That wasn’t how nitrous came to be called “laughing gas,” but it should have been. Those times are long gone now, but that doesn’t stop me from eyeing the empty can of whipped cream from time to time. It’s out of cream, but sometimes there’s just enough gas left in it for a little whiff of nostalgia.
Labels:
atmospheric science,
chemistry,
drugs,
science fiction
Thursday, December 20, 2007
MTBE
Methyl tertiary butyl ether is what is known as an “oxygenated hydrocarbon.” For reasons that still weren’t clear the last time I checked, oxygenated hydrocarbons alter the combustion characteristics of gasoline in an internal combustion engine such that combustion is more complete and, in particular, carbon monoxide is reduced. Carbon monoxide incidents are more numerous in winter (because atmospheric ventilation rates are lower), so oxygenated fuel additives have been mandated for winter gasoline blends.
Oxygenated hydrocarbons are also generally octane boosters. In fact, ethanol, the alternative oxygenate to MTBE, was originally considered promising as an octane additive in the early days of the automobile. One story as to why tetraethyl lead became the gasoline octane additive of choice is that Ethyl could be patented, while ethanol could not. Another suggestion is simply that ethanol is not generally made from oil, and oilmen didn’t want it in gasoline for that reason.
For whatever reason, I can say from personal experience that the attitude toward ethanol among managers in the petroleum industry is very close to foam-at-the-mouth psychotic. They really, really, hate ethanol. MTBE, on the other hand, is made by the oil and gas industry (from natural gas to methanol to MTBE), and oilmen loved it.
From an environmental standpoint, you could say that MTBE is good for the air and bad for the water. Specifically, MTBE, like ethers generally, is water soluble, and will move into groundwater pretty easily. It also has a characteristic flavor and odor, which is apparently detectable (and unpleasant) in very small concentrations in drinking water.
None of this information is new; it was certainly well known 25 years ago, when MTBE was first being touted as an “environmentally friendly” additive to gasoline. It was, from the start, a big part of the “reformulated gasoline” initiative, whose intent was to bring some environmental awareness kudos to the gasoline industry, and, incidentally, to create a bit of a refinery squeeze to drive some independent filling stations and chains out of the market, or to at least bring them into line.
But everyone was aware of the potential problem of groundwater contamination, especially from leaks in service stations’ underground storage tanks. In fact, the industry had an answer to this problem: double walled storage tanks. There was a certain amount of underground leak contamination anyway, and this was touted as the solution to it. And again, it had the additional benefit of being expensive enough that it would drive some marginal independent stations out of business, again squeezing the supply chain.
So the idea was to retrofit the storage tanks before MTBE was adopted. Unfortunately this didn’t happen. While one part of the petroleum industry was confidently lobbying to put MTBE in (it won’t be a problem, because we’ll have double-walled tanks to keep the stuff from leaking), another part of the industry was lobbying to delay, delay, delay the storage tank replacement program, because it would cost them money. Often you had lobbyists from the same company arguing both cases, though, it should be admitted, rarely on the same day, or at least not to the same people.
So that, more or less, is how Santa Monica came to lose 70% of its drinking water to MTBE contamination, why Santa Barbara had to spend millions on a new water treatment plant, and why a lot of people across the land came to be really pissed. It also played a role in the current fad for ethanol fuels (I give that one another two or three years, tops) but that's a story for another day.
Labels:
atmospheric science,
automobiles,
chemistry,
environment,
politics,
technology
Wednesday, November 14, 2007
More Wayback Machine
In early 1975, I went to work for a company called Systems Applications Inc. The job literally fell into my lap.
Henry and I were staying with Douglas in Berkeley, looking for jobs and sleeping on the floor. I'd had a part-time job in the fall of '74 writing practice test questions for a small firm offering a course in passing the FCC 1st Class Radiotelephone License exam, but in January of '75 I signed onto the quality assurance group at the Nuclear Submarine Fueling Station at Mare Island Shipyards in Vallejo. I wasn't fond of the job; it was winter and I had to be there before dawn. I'm not a morning person.
One evening, after I got back from work, Henry came in and dropped a 3 x 5 index card in my lap. "I talked to these guys on the phone," he told me. "It's not a good fit for me, but it looks like it would be right up your alley."
See, when I say "literally dropped into my lap," I mean literally.
SAI had begun its life as a consulting firm for telecommunication policy, primarily for the Office of Telecommunication Policy (which no longer exists, so enough of this "government bureaus never die" crap). Later, some disgruntled folks from Shell Oil, along with some Caltech brainpower, signed on to bid on a "seed money" contract to the USEPA, to develop a simulation model for urban smog, to ultimately be used in devising State Implementation Plans (SIPs) for urban smog abatement. There were three such contracts originally; it was essentially a competition, with the best initial design getting a much larger follow-on project to continue model development.
Okay, quick nurd stuff. There are two ways of modeling fluid mechanics, Lagrangian and Eulerian, named after 18th Century mathematicians. Eulerian modeling is probably the easiest to describe and understand: just divide the volume holding the fluid into a lot of small, connected boxes, and calculate the flows among the boxes. For incompressible fluids (and on the urban scale, air can be considered incompressible, though you sometimes have to make adjustments for altitude), you can take advantage of those nice conservation of mass laws.
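A cartoon of the Eulerian bookkeeping, in Python with made-up numbers: a single row of boxes, a uniform wind, and upwind differencing. The only point is that whatever leaves one box shows up in the next, so the total mass stays put.

import numpy as np

# Cartoon 1-D Eulerian box model: a row of equal boxes, a uniform wind
# blowing left to right, donor-cell (upwind) differencing, periodic ends
# so nothing escapes the domain.  All numbers are invented.
n_boxes = 25
dx = 5000.0            # box width, m (made up, roughly a grid cell)
u = 2.0                # wind speed, m/s
dt = 600.0             # time step, s (u*dt/dx = 0.24, comfortably stable)

c = np.zeros(n_boxes)
c[2] = 100.0           # a puff of pollutant in one box (arbitrary units)

for _ in range(36):                     # six hours of advection
    flux = (u * dt / dx) * c            # fraction of each box carried downwind
    c = c - flux + np.roll(flux, 1)     # what leaves box i arrives in box i+1
print("total mass after advection:", c.sum())   # still 100, up to rounding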
In Lagrangian mechanics, you select a bit of the fluid and you follow it around, like watching snowflakes in the wind. Over time, you can follow the trajectory of an individual snowflake and that tells you how the wind got from point A to point B. Lagrangian models are sometimes called trajectory models, for the obvious reason.
Two of the three companies developing smog models chose the Lagrangian approach, creating what are called "trajectory models," because you are following the trajectory of an air parcel over land. Such models are computationally much cheaper than Eulerian models (often called "grid models"), and are also cheaper to develop. You don't need to worry about the fluid flow equations, for one thing. The trajectory can simply follow the estimates of wind speed and direction, getting those estimates from the nearest wind stations. So if you wanted to model the observed ozone peak at an individual monitoring station, you just "back calculated" the air parcel trajectory to some starting point, like sunrise, then ran it forward over the emissions field and calculated how the chemistry behaved, until it ran into the monitoring station.
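And here is roughly what the trajectory bookkeeping looks like, again a sketch with a hypothetical station layout rather than anything from the SAI code: interpolate the station winds to the parcel position, then march the parcel backward in time from the monitor to sunrise.

import numpy as np

# Hypothetical wind stations: (x, y) in km and hourly-average (u, v) in km/h.
stations = np.array([[0.0, 0.0], [20.0, 5.0], [10.0, 25.0], [30.0, 30.0]])
winds    = np.array([[8.0, 2.0], [6.0, 4.0], [9.0, 1.0], [7.0, 3.0]])

def wind_at(p):
    """Inverse-distance-squared interpolation of the station winds at point p."""
    d2 = np.sum((stations - p) ** 2, axis=1)
    if np.any(d2 < 1e-9):                 # sitting on a station: just use it
        return winds[np.argmin(d2)]
    w = 1.0 / d2
    return (w[:, None] * winds).sum(axis=0) / w.sum()

# March an air parcel backward from the monitoring station to "sunrise",
# ten hours earlier, in 15-minute steps.
p = np.array([25.0, 20.0])                # monitor location, km (made up)
dt = 0.25                                 # hours
for step in range(int(10 / dt)):
    p = p - wind_at(p) * dt               # backward in time: subtract the wind
print("estimated sunrise position of the parcel (km):", p.round(1))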
Computer time was expensive in 1975. A trajectory model might have 3-5 stacked boxes in the air parcel, and you might have to run the thing 10-12 times to get a full prediction at a single monitoring station. But a grid model would typically have at least 25 cells in both horizontal directions, plus the same number of stacked boxes that a trajectory model would have. You can run a lot of trajectory simulations when the difference in computing cost per simulation is almost 3 orders of magnitude. There were some concerns as to whether or not a grid model could be made to work at all.
SAI, however, did manage to produce a photochemical grid model, in some measure thanks to the CDC 7600 and a lot of prior academic research on computing fluid flow etc. So SAI won the follow-on contract, and hired several new people. I was one of them. My job? To develop a trajectory model.
Okay, yeah, that's a bit funny. They won because they'd developed a grid model and one of the first things they did was develop a trajectory model. But it did make sense. As I say, trajectory models are much cheaper to run. They are also easier to diagnose and debug, because they are simpler. And there were a lot of bells and whistles that were slated for inclusion in the final product, things like surface deposition (assessing how rapidly smog ozone is destroyed by ground surfaces), changes in light through the air column (smog is hazy and haze redistributes light), microscale effects (does chemistry that takes place at small scales have a big impact on a 5 mile x 5 mile grid?), and so forth. There was also an ongoing development contract to research smog chemistry, so it was useful to have a cheap version of what would go into the grid model, in order to test that against other, more sophisticated chemical kinetics solvers.
So, after the usual water-up-your-nose that occurs when you jump feet first into a new pool, I got the trajectory model running. I also got to know the new guy running the atmospheric chemistry show, Gary Whitten (the previous guy left for a lucrative career at Chevron), and learned how Whitten's new ideas about smog chemistry worked. This was called the "Carbon Bond Mechanism" (which gets about 350 hits on Google Scholar). I learned it from the guts out, as I had to hard code every single reaction into the chemistry solver shared by the trajectory and grid models.
During the next couple of years, I also worked out the method for estimating pollutant depositions on surfaces, got estimates for how the photolysis of nitrogen dioxide and other important species varied with solar zenith angle, and provided a quick and dirty (so to speak) method for calculating aerosol haze formation in smog, along with how the haze affected the photolysis of the important chemical species. I revised the emissions inventories that we had, because those came in the form of simply "reactive hydrocarbon" (RHC) or "non-methane hydrocarbon" (NMHC), and we needed the hydrocarbons split into different reactive species.
I also coded a new vertical dispersion algorithm into the model. I had no design input into this particular piece of work; the guy in charge of it tended to treat anyone else, as one of the programmers put it, "as just another pair of hands." I'm pretty sure he actually got it wrong, because his implementation used the diffusivity calculated at a single point, while the diffusivity his algorithm prescribed varied considerably over the bottom grid cell. Diffusivity is rather like conductivity; you can't use averages for conductivity and get meaningful results. You have to use its inverse: resistance. It's called resistance in diffusion calculations as well, and getting that right was critical for the surface deposition calculations.
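The conductivity analogy is easy to see with numbers. Suppose (made-up values) the diffusivity is 10 m^2/s through most of a 50 m grid cell but only 0.1 m^2/s in the bottom 5 m. Averaging the diffusivities says the cell is well mixed; adding the resistances (thickness over diffusivity) says the bottom few meters are the bottleneck, which is what matters for deposition.

# Two sublayers of the bottom grid cell, with made-up diffusivities.
layers = [(5.0, 0.1),    # (thickness in m, diffusivity in m^2/s): shallow, poorly mixed
          (45.0, 10.0)]  # the rest of the cell, well mixed

total_depth = sum(dz for dz, _ in layers)

# Wrong: arithmetic (thickness-weighted) average of the diffusivities.
k_avg = sum(dz * k for dz, k in layers) / total_depth

# Right: add the resistances dz/K in series, like resistors,
# then invert to get the effective diffusivity of the whole cell.
resistance = sum(dz / k for dz, k in layers)
k_eff = total_depth / resistance

print(f"averaged diffusivity : {k_avg:6.2f} m^2/s")    # ~9.0, looks well mixed
print(f"effective diffusivity: {k_eff:6.3f} m^2/s")    # ~0.9, the bottom layer dominates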
Later, I worked out some new methods for calculating wind fields that reduced some modeling artifacts caused by a spurious convergence that is created when winds turn. I think I had a handle on how to do it really right when everyone switched over to what are called "prognostic models" for winds (basically using a full fluid flow model for your wind field), so I never got to see that idea in action.
And if all this sounds very productive, realize that I haven't even mentioned the work that I was doing with Whitten on the basic photochemistry of aromatic hydrocarbons, isoprene and other biogenics, and peroxyacetyl nitrate (PAN).
So there I was, fresh out of school with a newly minted Master's Degree, which meant that I was cheaper than any PhD. And within a short period of time I was doing major development and scientific work that is being cited to this day. Within a year of my hiring I was technical lead on the Denver modeling project that was the first application of the new, realistic chemistry that Whitten had developed. I did literature reviews on atmospheric sources of nitrogen oxides, including a pretty comprehensive review of nitrogen oxide production from internal combustion engines. At one point one of the senior scientists said that I was "essential" to any urban airshed modeling project that SAI wanted to undertake.
I was also, apparently, so fundamentally obnoxious that years later, Whitten told me that at a management meeting in late 1975, he was the only manager who was willing to supervise me. He followed that with, "I never saw what was so hard about it. A project would come up. I'd talk to you about it for a bit. You'd usually be pretty negative and pessimistic at first, then something would catch your interest. Then all I had to do was wait for you to come back and report on what you'd done, which you'd do every couple of days."
I'm reasonably sure I'm more easygoing and likeable these days. But that's still pretty much the way to manage me. Some managers are fine with it. Others, it drives up the wall. Sorry.
Labels:
atmospheric science,
computer modeling,
memoir
Saturday, November 10, 2007
How Business Works: #236 in the Series
In 1976, we had just completed what turned out to be the first urban airshed simulation using a grid model with realistic photochemistry. By "realistic" I mean that it had a close to accurate estimation of the primary source of photo-oxidizing radicals (from carbonyl compounds rather than what was previously thought to be the primary source: oxygen atoms reacting with hydrocarbons), thermal decomposition of PAN, and a strong nitrogen oxide sink related to aromatic hydrocarbons.
One interesting thing about this particular simulation was that it did not involve Los Angeles. It just so happened that a project involving Denver had coincided with several model upgrades, including the new chemistry, so Denver got the goodies before LA did.
The project team consisted of a goodly fraction of the employees of the (rather small) research consulting firm that had originally won the main EPA follow-on work for developing a photochemical grid model: Systems Applications Inc., not to be confused with Science Applications Inc., or several other firms that went by the initials SAI. Systems Applications no longer exists as such, having been part of the merger and acquisitions whirligig in the 1980s, followed by a breakaway group going to start up a unit of Environ, though not many of those at SAI in 1976 are with either the ghost of SAI or Environ now. That's the biz, you know?
Anyway, part of the SAI business model was to do these government research and consulting gigs, which did not have much profit margin, followed by environmental impact work for other groups, usually corporate, which did have decent profit margins—sometimes. And thereby hangs this tale.
After the work for the Denver Regional Council of Governments (pronounced "Dr. Cog"), we got a request for an impact statement for a facility that had a natural gas turbine power source. Natural gas burns without much in the way of hydrocarbon emission, but the combustion temperature creates some nitrogen oxides, NOx in the lingo, and we were charged with determining the air quality impact. The thing only emitted a few kilograms of NOx per day or thereabouts, barely enough to register on the meter, as it were, but part of the song and dance of environmental impact statements is to do your "due diligence" and if you can get the cutting edge of science on your side, well, good on you and here's your permit.
I'd been the primary modeler on the DRCOG project, for a lot of reasons that I'll describe some other time, and there was a computer programmer/operator who worked with me, and a project manager above me. This was back in the days of punch cards and CDC 7600s, and pardon while I get all misty eyed, okay, that was plenty, because, really, feeding cards into card readers to run programs sucks.
I asked the programmer how much time he expected the job to take. The only thing that needed to be done was to add one single point source to the point source input deck, then a bit of analysis, AKA subtracting several numbers from each other and maybe drawing a picture or two. He estimated the time at something like three days, but said, "Call it a week."
I knew how much a week costed out at, so I got the dollar figure, then doubled it, and reported that as my estimated cost of the project to the Denver Project lead. He doubled my estimate and gave it to the Comptroller.
The Comptroller doubled that number and gave it to the company President, who then doubled it and made that offer to the company that wanted to hire us. They signed without blinking.
Okay, so that's between 16 and 32 times what the programmer had expected the thing to cost, a nice profit margin, and good work if you can get it.
Then the programmer added the emissions to the program, ran it, and compared it to the original "base case" or "validation" simulation. They were the same.
Okay, really small emissions source. It's not surprising that the effect was minor, minuscule even. But he expected something. I think he was looking at like five or six digit accuracy in the printouts. There should have been some differences in the numerical noise at least. So he multiplied the source strength by ten, then by a hundred.
Still no difference.
Well, a programmer knows a bug when it bites him on the ass. He went into the code and found an array size limit that basically meant that any point source greater than #20 didn't get into the simulation. The impact source we were looking for had been added to the end of the list, so it didn't show up.
But.
The Denver region at that time had one major power plant that was responsible for something like 30%-40% of all the nitrogen oxides emitted into the Denver airshed. And, wouldn't you know it, that power plant was like, #45 on the list, or whatever. Higher than #20, that's for sure.
Oops.
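For the curious, the failure mode is depressingly easy to reproduce. Here is a toy version, in Python rather than the FORTRAN of the day, with invented numbers: a loader with a fixed-size point-source table that quietly ignores everything past slot 20.

# Toy version of the bug: a fixed-size point-source table that silently
# drops anything past slot 20.  No error, no warning.
MAX_POINT_SOURCES = 20

def load_point_sources(inventory):
    table = []
    for source in inventory:
        if len(table) < MAX_POINT_SOURCES:
            table.append(source)
        # else: the source is silently ignored
    return table

# Invented inventory: 44 small sources plus one big power plant at slot #45.
inventory = [("minor source %d" % i, 5.0) for i in range(1, 45)]   # tons/day, made up
inventory.append(("big power plant", 4000.0))

table = load_point_sources(inventory)
print("emissions in the inventory :", sum(e for _, e in inventory))
print("emissions in the simulation:", sum(e for _, e in table))    # the plant never shows up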
So now we had to go back and redo our base case. We also had to redo every single simulation in our original study, and rewrite every report, and all the papers that were in progress, and notify the nice folks at DRCOG, who, it should be noted, had already paid us for all of the above when we did the original study, so they weren't about to pay us to do it again. We were lucky in one way: large, elevated point sources (like power plants) don't have nearly the impact of ground-based sources like automobiles, so the omission hadn't had that much effect on our original simulations, at least not near the air quality monitoring stations that we'd used to test the veracity of the model. There were some differences, of course, and tables changed, future impact projections were modified, etc. etc. Oh, and we got to use the original base case as a "what if" scenario, as in "What if Denver's largest point source of NOx emissions were switched off?"
Fortunately, we had some money to do all these things with: the environmental impact contract. I was told that we did actually wind up making a profit on it. I think it was in the low triple digits.
Labels:
atmospheric science,
chemistry,
computer modeling,
memoir,
science
Thursday, November 8, 2007
The Right Formula
In the numerical solution of complex, dynamic systems, you often run into what is called the stiffness problem. If there was ever a sentence designed to lose readers, that may be it. Nevertheless, onward.
Imagine a spring connected to a mass. If you pull on the mass, the spring stretches and exerts a force on the mass. The stronger the force caused by a given extension, the “stiffer” the spring.
Now imagine a lot of different size masses, interconnected by a lot of springs of differing stiffness. Some of the mass-spring combinations will react quickly to any change in the system; other combinations, those with a severe mismatch between mass and spring, will react more slowly. Each combination has what is often called a time constant or a characteristic time for its reaction to change.
When you’re doing a numerical solution of the system (the only option if the system is complex and non-linear), you have to solve the state of the system for one point in time, then for another point somewhat later, a “time step” later, and so on. Some of the mass-spring combinations will require shorter time steps than others, because they have different time constants.
If there are connections between mass-spring combinations of very dissimilar time constants, you have a problem, the stiffness problem in fact. If one part of your system has a time constant of a microsecond, while another has a time constant of an hour, you are going to need billions of time steps to calculate your system for every time step you’d need if the fast piece weren’t there.
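If you want to see the problem in miniature, take a single decaying mode, dy/dt = -y/tau. An explicit (forward Euler) step is only stable if the time step is less than about twice the time constant, so the fastest mode in a coupled system sets the step for everything. A quick sketch, with two made-up time constants:

# Forward Euler on dy/dt = -y/tau is stable only if dt < 2*tau, so the
# fastest component of a coupled system dictates the step for all of it.
def euler_steps_needed(tau, t_total):
    dt = tau / 2.0                  # a comfortably stable step for this mode
    return int(t_total / dt)

hour = 3600.0
fast = 1.0e-6                       # a microsecond time constant (a fast radical, say)
slow = hour                         # an hour time constant (a slowly oxidizing hydrocarbon)

print("steps to cover one hour, slow mode alone:", euler_steps_needed(slow, hour))
print("steps to cover one hour, fast mode alone:", euler_steps_needed(fast, hour))
# Couple them in one explicit solver and the second number is the one you pay.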
There are some fancy solution algorithms that can deal with the stiffness problem. One of them is called “Gear-Hindmarsh,” and was developed at Lawrence Livermore Labs, originally to facilitate the calculations used in designing thermonuclear weapons. We used it for chemical kinetic calculations in simulating smog chamber experiments. Gear-Hindmarsh is the gold standard for that sort of thing, but it has some problems, especially when you give the system a kick, like turning the lights on or off, or otherwise messing with the boundary conditions in an unsmooth way. Then it becomes pretty inefficient. The first atmospheric smog model produced by Livermore was called LIRAQ, and it spent much of its computing time on the few seconds after sunrise and sunset.
The development group I worked with on the EPA Urban Airshed Model used a different approach to the stiffness problem, called the quasi-steady state approximation. The QSSA makes a few reasonable assumptions, such as the idea that a chemical species that you are treating as being at “steady-state” doesn’t have so much mass that it affects the rest of the system. Imagine an automobile with a bunch of bobble-heads inside. The bobbing of the heads doesn’t affect the behavior of the whole system because their mass is small, relative to the auto itself.
If you react a hydrocarbon with an HO radical, for example, the HO pulls an H off of it to make water plus what is called an alkyl radical, the hydrocarbon missing an H. The alkyl radical then absorbs an oxygen molecule to form a peroxy radical. This takes a very short time to occur, and we don’t worry about the behavior of the system during the time it takes for the radical to absorb the oxygen. We’ve treated the alkyl radical as if it were in steady state.
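In code, the steady-state trick just replaces a differential equation with a piece of algebra: set production equal to loss and solve for the radical. A sketch of the simple case above, with invented rate constants:

# QSSA for a fast intermediate R (the radical in the example above):
# instead of integrating dR/dt = production - k_o2 * O2 * R, assume the
# two sides balance and solve the algebra.  Rate constants are made up.
k_oh = 2.5e3       # HO + hydrocarbon, 1/(ppm*min)
k_o2 = 1.0e5       # R + O2, 1/(ppm*min)

HO = 1.0e-7        # ppm, a typical trace level
HC = 0.5           # ppm
O2 = 2.1e5         # ppm (about 21 percent of air)

production = k_oh * HO * HC
R_ss = production / (k_o2 * O2)         # steady-state radical concentration
RO2_formation = k_o2 * O2 * R_ss        # equals the production rate, by construction

print(f"steady-state R : {R_ss:.3e} ppm")
print(f"RO2 formation  : {RO2_formation:.3e} ppm/min")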
Of course when there are multiple sources and multiple reaction paths for a QSSA species, the algebra can get more complicated, but it’s not too bad. At least not until the QSSA species begin to react with themselves and each other. In the smog equations, the first place this happened was when we included the reaction of the hydroperoxyl radical, HOO, with itself. That yields oxygen plus hydrogen peroxide (and that’s where the peroxide came from to bleach the guy to death in SunSmoke). The QSSA equations for HOO are quadratic. Fortunately, we have an equation to solve quadratic equations, one we all learned in high school. Unfortunately, it’s the wrong equation.
As you’ll recall (said the character in the pulp novel), the solution to the quadratic equation
aX^2 + bX + c = 0
can be written:
X = (-b + [or minus] sqrt(b^2 - 4ac)) / (2a)
What they don’t usually tell you in high school is what happens when this equation is used on a quadratic that sometimes has the value of “a” as zero. If that happens, we get the dread “divide by zero” condition, and your computer tells you that you’ve just done a Very Bad Thing, and refuses to continue, you naughty person.
It so happens that there is another form of the quadratic equation that they don’t tell you about in high school, or in most colleges, either:
X = 2c / (-b + [or minus] sqrt(b^2 - 4ac))
A little checking tells me that the Wikipedia now gives the alternate formula, no doubt because so many programmers have run into the same problem I did, ‘way back when. I forget exactly where I got the alternate quadratic formula from; it’s penciled into the margins of a handbook I have. Anyway, I used it when I coded the chemistry module for the UAM.
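Here’s a sketch of why the textbook form bites and the alternate form doesn’t (a reconstruction in Python, not the original FORTRAN): when a goes to zero the equation degenerates to bX + c = 0, and the alternate form slides gracefully to -c/b while the textbook form divides by zero, and even for tiny a it loses half its digits to cancellation.

import math

# Solve a*x^2 + b*x + c = 0 for the root that stays finite as a -> 0.
def root_textbook(a, b, c):
    # the high-school form: blows up when a == 0, cancels badly when a is tiny
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

def root_alternate(a, b, c):
    # the other form of the same root: well-behaved as a -> 0,
    # where it goes smoothly to the linear solution -c / b
    return 2.0 * c / (-b - math.sqrt(b * b - 4.0 * a * c))

b, c = 3.0, -2.0
for a in (1.0, 1e-8, 0.0):
    alt = root_alternate(a, b, c)
    try:
        text = root_textbook(a, b, c)
    except ZeroDivisionError:
        text = "divide by zero"
    print(f"a = {a:<8g} textbook: {text}   alternate: {alt}")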
Later, we wound up with more radical-radical cross reactions, and the algebraic QSSA went from quadratic to fifth order. There is no general fifth order solution, so we used a numerical solution called “Newton-Raphson.” I didn’t code the first implementation we did of that, and the program kept blowing up in the QSSA solver. I looked at it and realized that the programmer had used a constant term as the initial value for the Newton-Raphson calculation, and N-R is notoriously sensitive to the initial value. For the best results, you need to start somewhere near the final value. The clever lad that was I realized that if I stripped out all but the HOO quadratic, it was going to be very close to the final value. Using that as the initial value, the N-R calculation usually converged in one or two iterations.
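A sketch of that trick, with an invented fifth-order steady-state polynomial whose quadratic part dominates: seed Newton-Raphson with the root of just the quadratic piece (using the robust formula above) and it converges in a couple of iterations; seed it with an arbitrary constant and it takes its sweet time.

import math

# Newton-Raphson on a made-up fifth-order steady-state polynomial
#   f(x) = e*x^5 + a*x^2 + b*x - p = 0
# where the quadratic part dominates (the higher-order cross reactions
# are a small correction), which is roughly the situation described above.
e, a, b, p = 4.0e-3, 2.0, 3.0, 1.0

def f(x):  return e * x**5 + a * x**2 + b * x - p
def df(x): return 5.0 * e * x**4 + 2.0 * a * x + b

def newton(x, tol=1e-9, max_iter=50):
    for i in range(1, max_iter + 1):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, i
    return x, max_iter

# Seed 1: the root of just the quadratic piece, via the robust formula.
x_quad = 2.0 * (-p) / (-b - math.sqrt(b * b + 4.0 * a * p))
# Seed 2: an arbitrary constant, nowhere near the root.
for label, seed in (("quadratic-piece seed", x_quad), ("constant seed 10.0", 10.0)):
    root, iters = newton(seed)
    print(f"{label:22s} -> root {root:.10f} in {iters} iterations")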
* * *
In 1984, I got very sick. The words “chronic fatigue syndrome” screw up your ability to get health insurance, so I never say that I had CFS on an insurance form, and besides, I was never diagnosed. Nevertheless, I had what was basically a bout of ‘flu that lasted for several years. I was unable to work full time; in 1985, working as a consultant, I averaged maybe 5-10 hours a week.
I was no longer the go-to guy for working on the kinetics solver, and the person who took our module for use in an acid deposition model was Mary, a PhD chemist recently graduated from Caltech. She took one look at my “quadratic formula,” saw that it did not conform to what she’d learned in school, and replaced it with the “right” version. Of course, it promptly blew up. So she spent the next several weeks putting in all sorts of tests for when the “a” in the formula got too small, switching it over to the linear solution, etc. I’m not implying that it took her a lot of effort; she just spent some number of hours over the next few weeks working the bugs out.
When I heard about it, I was, of course, annoyed. It’s one thing to have someone else catch your mistake; it’s quite another when it wasn’t a mistake in the first place.
More recently, though, I’ve been working as a technical writer, and I’ve come to understand that I did, in fact, make a mistake. I did not document the tricks I used in the chemical kinetics solver, not even at the most basic level, which is to put comments into the code explaining what was done and why.
I never confronted Mary about the thing in the first place, and she unexpectedly died of a cerebral aneurysm many years ago, so there’s no closure in the cards, unless this essay counts.