At Conjure a couple of weeks ago, Jenny and I found ourselves in the unusual situation (for us) of being on a panel together - in this case, we were two members of a suitably high-powered panel on what real scientists can learn from their science fictional counterparts.
Before the panel, we brainstormed a bit about the possibilities (as you do). We decided that, for example, real scientists could learn the following:
* Lock up your nubile daughter.
* Beware of monsters from the id.
* Never let a fly into your teleportation device.
* Don't pull that big, black lever!
* Practise saying, "MUAHAHAHAHAHA!"
In fact, the scientists in science fiction are more likely to be presented as the modern equivalent of the Impious Magician - someone who defies God, or the order of things, by seeking to uncover and control nature's secrets. Such characters don't have a lot to offer as role models for real scientists.
There are, of course, other kinds of scientists in science fiction: granite-jawed heroic scientists, for example, and the doddering, if useful, scientists who frequently act as helpers. But none of these are especially good role models for real scientists. Apart from the fact that it has inherited medieval notions of impiety, science fiction is a popular genre which presents stories of conflict and suspense, qualities that are difficult to generate from images of responsible working scientists sanely and competently going about their business. Some hard SF does try to present exactly that, occasionally with success (in particular, some of Gregory Benford's work comes to mind here), but this is a minority stream within the science fiction tradition.
On the other hand, science fiction seldom manages to be one-sidedly technophobic. Its scientists may not be role models for anyone, but even in the most cautionary tales the products of science and technology often end up being just plain cool. Their allure is surely part of what makes Hollywood action movies so attractive. Even Terminators can be good guys, after all, and the technophobic imagination always seems to work partly against itself.
Still, if you're a megalomaniacal scientist, remember to lock up your nubile daughter, especially when there are robots, aliens, prehistoric swamps, or lusty space captains around. Otherwise, you're surely asking for trouble.
Sunday, April 30, 2006
Dershowitz on the origins of rights
Alan Dershowitz's Rights from Wrongs: A Secular Theory of the Origins of Rights provides a sensible account of the nature and origin of rights. Dershowitz uses the word "rights" in a fairly narrow sense - as referring to claims that can be made by an individual against the power of government. This appears to be a useful sense of the word, and it has the virtue that it could refer to either negative rights (such as freedom of speech) or positive rights (such as a claim to the resources needed for basic subsistence, or for a decent life as understood in the society concerned).
What is most useful and refreshing about Dershowitz's account is that it does not attempt to ground rights in some implausible moral principle, such as the principle that we are obligated to treat everyone with equal concern and respect. On any non-trivial interpretation, such a moral claim is transparently absurd. I am under no obligation to show the same concern to strangers as I show to myself or my loved ones. Nor am I under any obligation to respect people who are clearly foolish or fanatical. It is plausible that I owe all human beings a duty not to treat them cruelly, and perhaps a positive duty to rescue them from imminent danger if it is easy for me to do so. But I have no obligation to respect them or their projects, much less to give their projects the same respect that I give to my own projects, those of people whom I especially care about, or those of people whom I particularly admire.
Instead, Dershowitz grounds rights in our collective historical experiences of wrongs - which he does not seem to define, but he evidently means those actions which contribute to great evils, such as massive suffering, and which no reflective person would want to defend. The idea is that the social recognition of rights that prevail over majoritarian wishes is necessary in order to prevent, or at least resist, the occurrence of certain kinds of evils in which governments are implicated, often with majority support. Dershowitz does not find any external grounding for rights, and nor does he try to give a more theoretical account of what wrongs and evils are. Though he does not explicitly put it this way, he seems content to rely on the fact that there are some things that it is rational for beings like us to fear, and this particularly includes some governmental acts.
In my terminology, Dershowitz is a moral sceptic - since I use the expression to refer to anyone who denies that there are objective sources of morality which transcend our actual interests. He prefers to talk about having a "nurtural" or "experiential" theory of rights, and this may in fact be better language for the purposes of advocacy: in particular, the word "experiential" sounds positive, and readily connects with the actual history of evils, rather than sounding vaguely nihilistic and defiant, as "moral scepticism" possibly does. Still, I count him as a moral sceptic, and a good example of how moral sceptics can say very useful things about how we should live our lives and contribute to public policy.
As I was reading, I sometimes felt that the argument could have been developed with more rigour, but in the end Rights from Wrongs is a book that I wholeheartedly recommend. I hope it will find a wide audience.
Monday, April 24, 2006
C.S. Lewis
Many of C.S. Lewis's views cut little ice with me, and Lewis would have hated moral scepticism, transhumanism, evolutionary psychology, and much else that I am friendly to.
But I've just read a collection of his essays, reviews, etc., and it reminded me how clearly Lewis could think about topics such as literature, science fiction, and fantasy - and the lucidity with which he expressed his thoughts. Some of his remarks about science fiction, which he defended in a sensible way against dismissive critics, are still refreshing, and he wrote admirably clean, pleasing sentences - very different from a lot of ugly, modern-day academic prose.
Finally, I do like this joke, told by Lewis in a transcribed discussion with Kingsley Amis and Brian Aldiss. It gives a slightly different view of his character:
The Bishop of Exeter was giving prizes at a girls' school. They did a performance of A Midsummer Night's Dream, and the Bishop stood up afterwards and made a speech and said [piping voice]: "I was very interested in your delightful performance, and among other things I was very interested in seeing for the first time in my life a female Bottom."
Sunday, April 23, 2006
Science fiction criticism
I find a lot of sf criticism annoying for reasons that are difficult to articulate without sounding crass. It's not that I'm anti-intellectual or opposed to academic critical writing as such. Rigorous critical writing is vital to art and literature. Perhaps it's the sense that a lot of individuals who take an interest in sf as academics don't seem, in their writings, to have any deep respect for the genre and its intellectual underpinnings. So often, I get the impression that they'd be just as happy applying their theories about popular culture to analysing postage stamp art, or the backs of cereal packets. They just happen to be writing about sf.
It's certainly not that they are critical of sf works, even harshly critical. Sometimes I am, too (let's face it: various kinds of sf have a lot to answer for). But there is often an appearance that these people are critical for reasons that show no real engagement with what the genre is actually about.
Wednesday, April 19, 2006
Lifeboat Foundation - scientific advisory board
I've accepted an invitation to join the Scientific Advisory Board for the Lifeboat Foundation. This puts me in good company with Gregory Benford, David Brin, Aubrey de Grey, Ray Kurzweil, James Hughes, Robert J. Sawyer, Natasha Vita-More, my distinguished colleague at Monash University, J.J.C. ("Jack") Smart, and a bunch of other people so astonishingly eminent as to make me feel humbled by the offer (the list includes at least a couple of Nobel Prize winners).
The Foundation's website describes it in this way:
The Lifeboat Foundation is a nonprofit, nongovernmental organization, dedicated to ensuring that humanity adopts the powerful technologies of genetics, nanotechnology, and robotics safely as we move towards the Singularity. This humanitarian organization is pursuing all possible options, including relinquishment when feasible (we are against the U.S. government posting the recipe for the 1918 flu virus on the internet), and helping accelerate the development of defensive technologies including anti-biological virus technology, active nanotechnological shields, and self-sustaining space colonies in case the other defensive strategies fail.
I'm hedging my bets about the likelihood and imminence of the so-called technological singularity - when technological progress is supposed to soar straight upwards like a wall across the future. Yes, that's the sceptic in me coming out again. But I can relate to most of that statement.
Perhaps it's a bit ironic that I'm making this announcement after I've just blogged saying that I'm not afraid of the future, but this is clearly not an anti-technology group, and seems to be going out of its way to include a range of positions. While we're welcoming the prospect of a strange future and massive technological advancement, it's also good if a high-powered think tank is at work considering what the genuine dangers might be. If I can help in any small way, that's more than cool.
Why I'm not afraid of the future
I disagree with so many claims that I hear from various of my transhumanist friends that I sometimes wonder why, at the end of the day, I stand with them, rather than the bioconservatives - but there's no doubt that I do.
Alas, being in the southern hemisphere prevents me from getting to many of the international conferences where the current debates are being played out, such as the forthcoming Stanford conference on human enhancement.
But at Conjure, I chaired a panel whose other members were Bruce Sterling, Andrew Macrae, and Keith Stevenson. We'd been given a slightly confusing topic that encouraged us to talk about the prospects of uploading human personalities onto advanced computer hardware. I confess that I am an uploading sceptic, not because I deny that some materialist, and possibly computationalist, account of consciousness is ultimately true, but because I see huge problems relating to the continuity of personal identity. All four panel members are sensible people, and we all revealed ourselves as uploading sceptics, so there was furious agreement about that.
That could have been the end of it.
But the topic raised wider issues, and I did get a little concerned at one point when the mood in the room - among the panel members and the audience - took a strong turn in the direction of general technofear, an emphasis on the scariness of future technologies that may directly change our physical and cognitive abilities, rather than changing our environment. I tried to remind everyone that nature is not our friend (which is a position totally compatible with emphasising the beauty and even sublimity of wilderness areas; nature and wilderness are not the same thing).
If I could go to the Stanford conference, here's some of what I'd like to say.
To adapt some terminology from David Gems, the horizons of human desire - rather than what is pre-technologically "natural" or "given" - should determine what uses we make of technology. This is why the therapy/enhancement distinction in bioethics, though not entirely bogus from a biological viewpoint, is of only limited use in formulating public policy. In many cases, it may be possible to draw a boundary between therapy and enhancement, but in many other cases it may not be. More fundamentally, even where we can draw the therapy/enhancement boundary on some defensible scientific basis, much that gets classified as therapy may still fall well within the horizon of those human desires that it makes sense to try to satisfy.
Technologies coming down the pipeline from the future will sometimes be dangerous, either because they don't perform as expected or, worse, because they actually do perform as expected.
Some scrutiny and scepticism is a good thing, and I always reserve the right to engage critically with the visions of my transhumanist friends. Finding the correct technologies to satisfy rational human desires, such as the desire to live longer, healthier lives, may not be easy, and using them in the best ways may be even harder. At the same time, we are technological animals. We invent technologies to achieve our desires, and there is no deep reason why we should ever stop doing so, even if we transform ourselves, and create new desires, in the process. It's in our nature (i.e., our evolved psychological characteristics as a species) to alter ourselves from what is, in another sense, natural (i.e., pre-technologically given).
Let's be alert to all the dangers along the way, and work out rational policies to handle them. But we have desires to fulfil - desires that it is rational for beings like us to have. If new technologies can fulfil some of them, I won't be deterred merely by sentimentality about the given, or by other people's shudders at the unknown. There's a whole future of infinite possibility to explore. You can stay home if you want - but you're welcome to come with me. I'm going to tread carefully, but I'm not going to be afraid. The future will be strange, but I damn sure want to go there.
Tuesday, April 18, 2006
All eyes on Sean Williams
More moments from Conjure
Preposterous and depressing
It looks like our panel on using science fiction to teach science finds the whole prospect pretty crazy, at least judging by this candid shot. Left to right are David Kok, Sonny Whitelaw (panel convenor), Russell Blackford, Leanne Frahm, and Ian Nichols. Fortunately, we did think that there was some scope to use science fiction to introduce students to ideas in science and the humanities.
Most of us seem to think that it's a tough job writing media tie-in novels, if you can go by this candid shot from another Conjure panel, featuring Sabine Bauer, Sonny Whitelaw, Sean Williams (panel convenor) and a rather deflated-looking Russell Blackford.
Are we all in the dark about Mars exploration?
Chris McMahon, Russell Blackford (panel convenor), and Cameron Boyd discuss the prospects for a manned expedition to Mars. Possibly we will find exotic lifeforms, such as the warrior girl in the poster behind us.
Hang on! We can shed some light on the issue with this colour-enhanced version. Suddenly, the future is clear. Red planet, here we come.
Loss of personal identity
Drowning in e-mail
It's astonishing how quickly e-mail accumulates. I've been away for a few days from the several accounts that I have for different purposes, and now I'm drowning in unread e-mails.
Never mind, it was great to have a break from my usual round at Conjure, and I'll report in more detail as soon as I can get a breath of air.
Wednesday, April 12, 2006
Conjure
Over Easter I'll be attending Conjure, a science fiction convention in Brisbane, where the guests of honour will include Bruce Sterling, the distinguished author of Schismatrix, Holy Fire, and many other important science fiction works. I'm appearing on several panels - with Sterling as a fellow panelist on about three of them.
I'll report back on goings on when it's all over.
Thursday, April 06, 2006
Reading Darwin
I'm currently immersed in reading From So Simple a Beginning: The Four Great Books of Charles Darwin, a huge volume (over 1600 large pages) edited by Edward O. Wilson. I've been assigned to review this by Cosmos magazine. It looks like taking up most of my reading time for the next couple of weeks, but it'll be worth the effort.
Tuesday, April 04, 2006
Wikipedia
It's addictive, working on Wikipedia. I seem to have made about 2500 edits there since I signed up, just over three months ago.
Anyway, my main aim has been to ensure that the "Transhumanism" article, and related articles, are in the best possible shape (factual, objective, comprehensive, well written, etc.). There are other people who deserve at least as much credit as I do, but we're getting there as a team. I'm personally pleased with the article in its current form, and it was granted Good Article status yesterday. That achievement feels rather good.
Now for the process of getting it up to Featured Article status, recognition that it is an example of Wikipedia's very best work.
Sunday, April 02, 2006
The scientific quest to "cure aging"
In a recent issue of The Journal of Medical Ethics, Aubrey de Grey puts the case for science to find a "cure" for aging, giving human beings a kind of immortality. As it happens, I am not entirely convinced, though I do think there is an irresistible argument for research that could enable us to live far longer and healthier lives (but that, of course, is a much less radical claim). On this occasion, I want to put de Grey's argument "out there" in something like the form that he has chosen. My own reconstructions, criticisms, or alternative approach can wait for another time.
De Grey's starting point is that there is now a real prospect that biomedical science can develop technologies capable of stopping, and even reversing, the process of aging, and that our hesitation to fund and carry out the required research is already delaying this. I think that these claims are plausible enough to be accepted, at least for the sake of argument. Depending on what timeframe we are talking about, the development of such a technology seems possible in principle, even if, as I believe, there are some very difficult practical and ethical barriers to overcome. It is also quite plausible, I think, that we could now take action to hasten the quest for a "cure" (though whether anyone currently alive would benefit is another matter). It seems to follow that at least some present or future people could have their lives extended - beyond what would otherwise be their duration - if action were taken now to fund and conduct radical anti-aging research.
De Grey refers to a number of social tendencies to support his understanding of our contemporary moral convictions. One such tendency is the abolition of capital punishment in Western European countries, which includes an unwillingness to extradite prisoners who might be executed in countries where it is still practised. Another is the contemporary reticence of Western European nations about going to war: they have not been at war with each other since 1945. Such tendencies suggest an increasingly greater solicitude towards human life, and an unwillingness to deny others a choice to live, even in the extreme circumstance that the other has committed an outrageous crime.
Thus, so the argument goes, we share a central moral conviction about the impermissibility of shortening a human life against the individual's will. With a few caveats that are probably not relevant - we are not discussing the lives of embryos, for example - this all seems plausible.
Furthermore, whatever values count against such a conviction are trumped by it. We are committed to the overriding idea that there is what de Grey calls "the right of a healthy human being to carry on living."
From this starting point, de Grey argues that we ought to do what is required to discover a cure (I'll henceforth drop the scare quotes) for aging.
It is not intellectually tenable, he argues, to make a moral distinction between killing and failing to save life, such that the latter is merely optional. Nor can we distinguish between saving and extending a life in a way that makes extending human lives merely optional. Imagine someone had rescued Jeanne Calment, who eventually lived to be 122, from drowning when she was only eighty-five. Even though she had already lived beyond any measure of a "normal" life expectancy, this action would have counted as saving her life. We have no concept of a limited tenure for a human life, something that it is impermissible to foreshorten but okay not to extend. A human life is not like a fixed-term employment contract (though I'm not sure whether de Grey would approve of this particular contrast to illustrate his point).
To sum up the argument, de Grey believes that it is already possible in principle to cure aging, and that to hold back from doing so is to fail to save some lives that could have been saved if we'd acted otherwise. Saving lives is as important, morally, as resisting impulses to kill. Thus, we are in a position where failing to fund and conduct research on a cure for aging is morally comparable to killing, or so it will inevitably seem to us once we understand the situation clearly, and provided that we hold to our central moral convictions.
It appears to follow that, if we are rational, we must accept that there is a moral imperative to quest for a cure for aging. Such an imperative coheres with central moral ideas so powerful as to override any imaginable countervailing considerations. To deny this imperative would involve a logical rupture within the structure of our morality.
Is all this persuasive? I've already stated that I'm not entirely convinced, but my misgivings are based on a moral theory which may be as controversial as de Grey's position, though for quite different reasons. At the least, it seems as if de Grey has put a powerful argument. I can't see any easy ways to avoid its conclusion.
Saturday, April 01, 2006
The spectre of infanticide? (Actually, no.)
One potential embarrassment to the pro-choice position is the spectre that any successful arguments to the effect that abortion is morally permissible are likely to be too powerful. Something seems to go wrong if an argument shows that abortion is morally permissible, but also shows that infanticide must be morally permissible - something that sounds shocking.
I've written about this problem in a couple of places, and in slightly different contexts, and am thinking about it again, having received an invitation from a new journal to submit an article on the right to life.
Imagine that the possession of a right to life is contingent on personhood. I take it that "personhood" means, roughly speaking, something like the following: the capacities for reason and self-consciousness, including a concept of the past and future (this bundle of capacities seems to go neatly together as one complex). In that case, no embryo or fetus possesses the right to life, since it lacks personhood. However, it appears undeniable that young babies also lack personhood, as defined. Yet most of us are shocked at the idea of killing a young baby (at least in any common circumstance - leave aside issues of severe disability, for example). The situation seems to be intellectually unsatisfactory.
In resolving the problem, we should ask ourselves why we are shocked by the idea of killing a young baby, and whether we have any reason to wish we were not shocked in this way. It seems to me that the answers to those questions might be surprisingly complex, but surely our response is something that we will consider justified at the end of the day. Note that I am not talking about epistemic justification of an abstract truth claim such as, "It is morally wrong to kill young babies." I am suspicious of abstract truth claims like this, unless they are actually intended as shorthand for something else. Rather, I am talking about instrumental justification of the emotions of shock, or horror, combined with pity and anger, when people so much as contemplate such an action - together with social attitudes of repudiation and willingness to punish. All of this seems well justified to me - justified against widespread and fundamental values that most of us actually do share. The things we value include the natural love of parents for their children, the hopes of communities for their future, and so on.
The same might apply, to a lesser extent, if we consider a very late abortion. However, it's difficult to see how it could apply to early abortions, much less to such things as the use of human embryos in stem cell research. In such cases, the instrumental justifications for horror, shock, pity, and anger are not present in the same way. I try to explain some of this in more detail in my Journal of Medical Ethics article, "Stem cell research on other worlds".
From the point of view of strict intellectual analysis, the situation is satisfactory after all. What is less satisfactory, perhaps, is that resolving the problem uses a method of philosophical analysis that requires people to step (temporarily) out of their ordinary moral thinking. For practical purposes, is this asking too much of them?