Friday, April 30, 2010

Over at his blog, in support of the existence of "free will", David Heddle says (in a comment):
If free will is an illusion, then deterrents are an illusion. How can a deterrent make me choose not to commit a crime, unless I have the facility of choice?
I think this is more muddled thinking about free will. I still don't know what free will is supposed to mean, exactly, and I don't think anyone else does, either.
But, ignoring this, let's address Heddle's claim. Could deterrents work on humans if they have no free will? I think the answer is clearly yes. Let's pretend that humans are soulless computational machines, shaped by evolution, that act based on a very, very complicated algorithm taking sensory impressions as inputs and producing actions as outputs. Let's say that this algorithm tends, generally speaking, to try to ensure the survival, pleasure, and reproduction of the individual. Now the human machine sees resources, free for the taking, that belong to another. The human machine does a cost-benefit analysis to "decide" whether or not to take the resources. In the absence of a known deterrent, such as a dangerous dog or future incarceration, the human machine may decide to take the resources. In the presence of a deterrent, it may decide otherwise.
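To make the point concrete, here is a minimal sketch in Python of the kind of mechanical cost-benefit "decision" I have in mind. The function and all its numbers are made up purely for illustration; the point is only that a deterministic procedure, fed different inputs, produces different actions:

```python
# A hypothetical sketch of a purely deterministic "decision": the same
# algorithm, fed different inputs, produces different actions.
# No free will required anywhere.

def decide_to_steal(resource_value, deterrent_cost, detection_prob):
    """Take the resources iff the expected benefit exceeds the expected cost."""
    expected_cost = detection_prob * deterrent_cost
    return resource_value > expected_cost

# No known deterrent: detection seems impossible, so expected cost is zero.
print(decide_to_steal(resource_value=100, deterrent_cost=0, detection_prob=0.0))      # True

# A visible deterrent (dog, incarceration) raises the expected cost.
print(decide_to_steal(resource_value=100, deterrent_cost=10000, detection_prob=0.5))  # False
```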
How does this involve "free will"? You can call this decision-making "free will" if you like. But then a thermostat has free will, too.
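The thermostat has exactly the same input-to-action shape, just with a much shorter algorithm (again, a made-up sketch):

```python
def thermostat(temperature, setpoint=20.0):
    """'Chooses' to turn the furnace on or off based solely on its input."""
    return "furnace on" if temperature < setpoint else "furnace off"

print(thermostat(15.0))  # furnace on
print(thermostat(25.0))  # furnace off
```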
It's not at all surprising to anyone who thinks about computation for a living that a complicated algorithm can result in different behavior based on different inputs. The mystery to me is why Heddle thinks this says anything about free will.
Friday, January 01, 2010
Free Will Being Challenged
I have thought for a long time that "free will" is an incoherent philosophical concept. I'm not sure one can define it in any reasonable way. It is not simply the capacity for choice, because a machine flipping a coin would achieve the same result. So what is it? For the present, I will assume it refers to our feeling of being "in control".
We all have the sensation of being "in control", but how do we decide whether a biological organism other than ourselves possesses free will? Does a bonobo have it? A dolphin? A cockroach? A bacterium? Can philosophy alone offer any guidance? I don't think so. Samuel Johnson once remarked, "All theory is against the freedom of the will; all experience for it." But we know that our common-sense experience doesn't always match the physical world, as with our strange system for perceiving color, which can easily be fooled. So simply feeling that we have free will doesn't mean we actually have it. Maybe we don't.
I think it quite possible that we lack free will in any reasonable sense - that, in fact, our actions are essentially deterministic. Despite this, I also think that our feeling of being "in control" has a plausible basis -- I guess this makes me a "compatibilist", like Daniel Dennett. But I have a slightly different take on why, one that is probably not original, though I've never seen it discussed in philosophy texts. Namely, I'd guess that our computational hardware and software are so complex that it is not easy to predict the outcome of any situation with high probability - in particular, we cannot even know how we ourselves will react in a given situation. We could probably run such a simulation in principle, but in practice it would take far too much time. So even if we don't have free will in actual fact, the unpredictability of our actions makes it appear that we do, at least to beings with limited computational resources, such as ourselves. I'm hopeful that the theory of computational complexity may eventually play a role in a generally-accepted resolution of this conundrum that has baffled philosophers for centuries.
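As a toy illustration of "deterministic but not practically predictable" (purely illustrative, not a claim about brains), consider an elementary cellular automaton such as Rule 110. Its update rule is completely deterministic, yet the system is Turing-complete, and as far as anyone knows there is no general shortcut to its long-run behavior: to learn the state after n steps, you essentially have to simulate all n steps, just as a resource-limited observer cannot predict it faster than running it.

```python
# A toy illustration (not a model of the brain) of "deterministic yet
# practically unpredictable": the Rule 110 elementary cellular automaton.

def rule110_step(cells):
    """One synchronous update of a row of 0/1 cells, with fixed 0 boundaries."""
    padded = [0] + cells + [0]
    table = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
             (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}
    return [table[(padded[i-1], padded[i], padded[i+1])]
            for i in range(1, len(padded) - 1)]

cells = [0] * 30 + [1]          # a fixed, deterministic initial condition
for step in range(15):
    print("".join(".#"[c] for c in cells))
    cells = rule110_step(cells)  # the only known way to get the next row
```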
The experiments of Benjamin Libet and his co-authors cast doubt on our perception of being "in control". Libet found activity in subjects' brains about 300 milliseconds before they were aware of their decision to press a button. A more recent study found such brain activity as much as 10 seconds before subjects were aware of their own conscious decisions. This popular article in Wired addresses it; for more technical details, see the article in Nature Neuroscience.
I was motivated to mention this by a recent solicitation for money from my alma mater, which mentions a freshman seminar devoted to these topics. I think it's great that cutting-edge research (the Nature Neuroscience article is from 2008) makes it into undergraduate classes so quickly. And as we understand the science of decision-making better, more philosophers will be able to base their age-old speculations on actual data instead of armchair intuition.