Tuesday, May 31, 2011

Substance Dualism (Part Three): The Problem of Other Minds



(Part One, Part Two)

Substance dualism has been preoccupying me over the past week. In the two previous posts, I've looked at the basic argument for it, as well as the criticisms of that argument. In this post, I'll consider the first of three arguments against substance dualism: the argument from knowledge of other minds. Once again, William Jaworski's excellent book is the source for this post.

Monday, May 30, 2011

Game Theory (Part 5) - Best Response and Expected Utility



This post is part of my course on game theory. For an index, see here. The course follows the one by Ben Polak on the Open Yale website.

Over the past few entries we have been considering domination and the iterated deletion of dominated strategies. In today’s entry we move away from those concepts and begin our inexorable journey to the Nash Equilibrium. We do so by studying two key ideas: (i) best response and (ii) expected utility.

Sunday, May 29, 2011

Substance Dualism (Part Two): Criticisms of The Basic Argument



In the previous post, I summarised William Jaworski’s presentation of the basic argument for substance dualism. In this post, I’ll be covering Jaworski’s discussion of the criticisms of that basic argument. Remember, substance dualism is a time-honoured position within the philosophy of mind and it’s well worth knowing why some find it persuasive and others do not.

As we saw last time, the basic argument for substance dualism relies on two premises. The first of those premises derives from the axioms of identity and is relatively uncontroversial. The second of those premises makes a modal claim. It says that we (i.e. minds/persons) can exist without bodies. This claim is controversial and is supported by two equally controversial arguments: (i) the conceivability-possibility argument; and (ii) the essential property argument. Both were discussed in part one. Now we turn to the criticisms of both those arguments.

Saturday, May 28, 2011

Substance Dualism (Part One): The Basic Argument



Substance dualism is an enduring theory within the philosophy of mind. According to substance dualists the mind is ontologically distinct from the body.

With the rise of neuroscience in the 19th and 20th centuries, the theory appeared to become less tenable: the evidence seemed to point, overwhelmingly, toward a deep connection between mind and brain. But despite the continued presence of this evidence, a satisfactory theory of mind-brain relations remains elusive. As a result, my current sense is that substance dualism may be due to undergo something of a recrudescence. Thus, those of us with an interest in the philosophy of mind would be well-advised to understand the arguments that can be mustered in its support.

Fortunately, William Jaworski, through the medium of his recent book Philosophy of Mind: a Comprehensive Introduction, is on hand to assist. I’ve mentioned and recommended this book before. In this post, I will run through Jaworski’s presentation of the basic argument in favour of substance dualism. In future posts, I’ll consider the various criticisms and responses to this argument.

Friday, May 27, 2011

Philosophy of Mind: Mapping the Theoretical Space



If you’ve any interest in the philosophy of mind, I highly recommend picking up a copy of William Jaworski’s recently-released Philosophy of Mind: a Comprehensive Introduction. It does exactly what it says on the tin. It provides detailed, but not overwhelming, coverage of all the major theories and arguments in the field. It might be heavy going if you’re a complete newcomer to the topic, but if you’re already familiar with it, it’s definitely one way to seriously bump up your knowledge.

One of the particularly nice things about the opening chapter of the book is Jaworski’s attempt to map the theoretical space within the philosophy of mind. In other words, his attempt to delineate and organise all the different theoretical positions on the nature of the mind. To give you a flavour of the book, I’m going to summarise this cartographic exercise.

At the most general level, theories about the nature of the mind can be separated into three categories: (i) monistic theories; (ii) dualistic theories; and (iii) non-standard theories. Let’s look at each category in more detail.


1. Monistic Theories
The philosophy of mind is motivated primarily by the desire to resolve a number of mind-body problems. I’ll look at these in a future post. For now, all that needs to be known is that these problems generally concern the appropriate relation between our understanding of the physical world of scientific description and our understanding of the mental world of first-person description.

Monistic theories propose that the distinction between the physical and mental worlds is ultimately illusory: they are both, fundamentally, made up of the same kind of stuff. But what kind of stuff? There are three theories to contend with:

Idealism: This theory maintains that everything is ultimately mental: our supposition of an external physical world is merely the result of an elaborate way of describing subjective mental experiences. 
Neutral Monism: This theory maintains that everything is ultimately made up of a neutral substance that is neither physical nor mental, but which can be described in either physical or mental terms. 
Physicalism: This theory maintains that everything is ultimately physical: our subjective mental experiences can ultimately be redescribed in physicalistic terms.

Of these three, it is physicalism that has been subjected to the most refinement in the past 50 years or so. Thus we are forced to further distinguish between a number of physicalist theories:

Eliminative Physicalism: This theory maintains that a complete physicalist theory of reality will ultimately eliminate the need to refer to the mental. Our mentalistic vocabulary is just a folk theory that needs to be replaced.
Reductive Physicalism: This theory maintains that mental facts are ultimately reducible to physical facts, but this does not mean all reference to the mental is somehow redundant or unimportant. There are two subdivisions within this theory (actually there are even more, but there’s no need to get too fine-grained when you’re starting out): 
  • (a) Behaviourism: mental facts are reducible to facts about behaviour; 
  • (b) Identity Theory: mental facts are reducible to facts about the brain.

Non-reductive Physicalism: This theory maintains that although everything could ultimately be described by physics, the special sciences (psychology, sociology etc.) have descriptive and explanatory interests that cannot be fulfilled by physics. These interests are satisfied by the use of mentalistic descriptions. There are three sub-divisions within this category of physicalism: 
  • (a) Realisation Physicalism: Mental phenomena are realised by physical phenomena. Indeed, they can be realised by multiple kinds of physical phenomena. 
  • (b) Supervenience Physicalism: Mental phenomena supervene upon physical phenomena. 
  • (c) Anomalous Monism: All events are describable in physical terms; but some events are also describable in mental terms. The psychological explanations that use these mental terms are, however, not law-like (a - nomos, without law).



The descriptions of these different positions are exceptionally brief. As a result, it might be difficult to fully appreciate the distinctions between some of them.


2. Dualistic Theories
Dualistic theories adopt the same basic tagline: the distinction between the mental and the physical is real. Where they disagree is over the precise nature of that distinction:

Substance Dualism: This theory maintains that there are ultimately two kinds of stuff: mental stuff and physical stuff. Our minds, obviously, are made up of the former, not the latter. 
Dual Attribute Theory: This theory maintains that there is ultimately only one kind of stuff but some of this stuff exemplifies irreducible mental properties that are not captured by physical explanations. This position is sometimes referred to as “property dualism” but Jaworski prefers the dual-attribute moniker for reasons presented in his chapter on this theory (mainly, because substance dualism is also committed to a kind of property dualism).

As was the case with physicalism, most of the philosophical action over the past 50 years or so has been associated with one of these theories. The theory in question is the dual attribute theory (DAT), which can be broken down in the following manner:

Organismic DAT: This version of the dual attribute theory maintains that the kinds of entities displaying mental attributes are physical organisms. This theory can, in turn, be split in two: 
  • (a) Emergentism: This theory maintains that mental properties emerge from or are caused by physical phenomena and that these mental properties can play a genuine causal role in physical reality. 
  • (b) Epiphenomenalism: This theory maintains that mental properties emerge from or are caused by physical phenomena, but that these mental properties play no causal role in physical reality.

Non-organismic DAT: This version of the dual attribute theory maintains that the kinds of entities displaying mental attributes might have some physical components, but are not organismic. This is a somewhat obscure position and is similar to substance dualism.




3. Non-Standard Theories
Finally, we come to non-standard theories. These theories reject one or more of the key assumptions upon which the standard theories are premised. Three such theories are mentioned by Jaworski.

Instrumentalism: This theory rejects the realist assumption of the standard theories. According to this assumption the mental predicates we use are intended to pick out objects, events and states of affairs in the external world. Instrumentalism rejects this by maintaining that these predicates are merely tools used to predict human behaviour. 
Hylomorphism: This theory rejects the mental-physical distinction-thesis that is assumed by the standard theories. According to this thesis, there really are two vocabularies used to describe and explain human behaviour. Monistic theories may think the two can be reduced to one, and dualist theories may think they cannot, but they both agree that the vocabularies exist. Hylomorphism does not. It maintains that there is a unique vocabulary for describing and explaining human behaviour. One interesting feature of Jaworski’s book is his defence of the claim that hylomorphism is a distinct theory. 
Mind-body Pessimism: This theory rejects the optimism underlying the standard theories. That optimism encourages proponents of the standard theories to believe that their theory can give a satisfactory account of mind-body relations. This theory maintains that we may forever be cognitively closed off from such a satisfactory account.

Thursday, May 26, 2011

Posts on Humanity's End by Nicholas Agar


Here's an index to all the posts I've written on Nicholas Agar's book Humanity's End. Agar's book is a contribution to the scholarly debate on the merits of human enhancement. It presents a species-relativist argument against radical enhancement and it assesses the views of Ray Kurzweil, Aubrey de Grey, Nick Bostrom and James Hughes.

(More are on the way. I'm determined to make it through the whole thing.)

Index
1. An Introduction to the Species-Relativist Argument

2. The Species-Boundary Line

3. Should we upload our minds? The Searlian Wager (Part 1)

4. Should we upload our minds? The Searlian Wager (Part 2)

5. Who Wants to Live Forever? Agar on Negligible Senescence (Part 1)

6. Who Wants to Live Forever? Agar on Negligible Senescence (Part 2)

7. Citizen Cyborg: Agar on Democratic Transhumanism (Part 1)

8. Citizen Cyborg: Agar on Democratic Transhumanism (Part 2)

Wednesday, May 25, 2011

Twitter


I've just started using Twitter so if anyone is interested in following me they can do so. I'll be twittering links to this blog, links to any interesting things I come across, and maybe some other random thoughts.

The link to my profile is here.

Who Wants to Live Forever? Agar on Negligible Senescence (Part Two)

He'd like to live forever, would you?

(Part One)

This post is the second in a brief series looking at Nicholas Agar’s critical assessment of the pursuit of strategies for engineered negligible senescence (SENS). This is an idea associated with the gerontologist Aubrey de Grey. The pursuit of SENS, if successful, could allow us to live indefinitely extended lives with all the vim and vigour of young adults. Accepting that such a programme is feasible, the key critical question is whether we should pursue it.

In part one, I sketched a basic pro-SENS argument. I also sketched Agar’s three main lines of attack on that argument. In this post, I will go through all three in more detail. One general note at the outset: most of Agar’s comments depend on speculative predictions about the likely outcomes of negligible senescence. As a fan of science fiction, I enjoy this, but it might prove to be frustratingly suppositional for some.


1. An (un)healthy dose of Fear
Agar argues, contra de Grey, that the success of SENS will not allow us to live lives that are roughly comparable to those lived by contemporary young adults. The reason being that a massively extended lifespan will dramatically alter one’s attitude towards certain kinds of risky activities. In short, it will turn you into a simpering, cowering wreck, paralysed by pathetic anxieties and reluctant to set foot outside your own door for fear of some fatal accident befalling you. The logic behind Agar’s sentiments is easily expressed.

Consider a simple decision that might face you today: the decision of whether or not to drive to the cinema to meet your friends and watch a movie. There are risks and rewards that need to be assessed when making this decision. There is the small, but nonzero, risk of a fatal road accident, and there is the moderate, but not negligible, reward of enjoying a congenial evening with your friends. For most of us today, unless we have other more pressing commitments, the rewards of going to the cinema will probably outweigh the risks.

But things will be oh so different for the negligibly senescent. They will have so much more to lose from the trip to the cinema. A forty-year-old senescing human being can expect to have maybe ten or twenty years of good health left (followed by more years of deteriorating health); a forty-year-old negligibly senescent being will expect to have maybe 1000 years of good health left. The increased life expectancy means the negligibly senescent have more to lose and so will have increased costs associated with a simple decision like driving to the cinema. [I’d like to see a calculation here, but Agar doesn’t offer one]
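Agar doesn’t offer the calculation, but a rough back-of-the-envelope sketch is easy enough to put together. In the Python snippet below, the one-in-ten-million chance of a fatal accident on a single short drive is purely my own illustrative assumption, as are the figures for remaining healthy years; nothing here comes from Agar or de Grey.

    # A rough sketch of the expected cost of one risky trip; all figures are
    # illustrative assumptions, not Agar's or de Grey's.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    def expected_life_lost_minutes(p_fatal, healthy_years_left):
        """Expected healthy life lost (in minutes) from a single risky trip."""
        return p_fatal * healthy_years_left * MINUTES_PER_YEAR

    P_FATAL_TRIP = 1e-7  # assumed chance of a fatal accident on one short drive

    for label, years_left in [("senescing forty-year-old", 20),
                              ("negligibly senescent forty-year-old", 1000)]:
        loss = expected_life_lost_minutes(P_FATAL_TRIP, years_left)
        print(f"{label}: expected loss of about {loss:.0f} minute(s) of healthy life")

On these made-up numbers the expected cost of the trip rises from roughly a minute of healthy life to nearly an hour of it, which at least gives a feel for the shape of Agar’s point.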

De Grey himself confirms the idea that the negligibly senescent will be more risk averse. He comments in one interview (p. 115 of Agar) that he reckons once we achieve negligible senescence driving will be outlawed because it is too dangerous. He resiles from this somewhat in a later interview when he suggests that cars will simply become more heavily protected. Whatever form it takes, increased risk aversion does seem to be a likely consequence of increased life expectancy, as is evinced by contemporary attitudes to health and safety compared to those of former generations.

Increased risk aversion will have significant impacts on the kinds of lives the negligibly senescent will live. An increased reluctance to drive is only the start. Driving, after all, is a relatively safe activity. What about foreign travel to the more distressed regions of the globe or high-impact sports? Both could be ruled out, along with many more enjoyable activities such as real (as opposed to virtual) sexual intercourse. And this is where we hit upon the crucial point: becoming more risk averse might result in us losing much of what we value. Here we get a whisper of Agar’s species-relativist argument -- a whisper that will wait until chapter 9 to be properly heard.

There is a flipside to negligible senescence and risk that must also be addressed. While it is probably true to say that the negligibly senescent will avoid exposing themselves to sources of sudden or accidental risk, the same does not hold true for their exposure to sources of gradual or accumulating risk.

Currently, activities such as smoking and the excessive consumption of fast food are risky, but only in the sense that their continuation gradually increases the risk of contracting disease or illness. The advances in medical treatment that accompany negligible senescence can be expected to reverse the risks associated with these activities. As a result, although we may become more isolated, timid, stay-at-home beings, we can at least smoke and eat to fill the emotional void in our lives.



[Cork band Fred sing about the need for a healthy dose of fear...]



2. Hidden Costs
Agar’s second line of attack on the pursuit of SENS is to point out that there may be hidden costs associated with extended life. These costs might be such as to reduce life expectancy below that envisioned by de Grey.

The main hidden cost is likely to come from an increased susceptibility to bacterial and viral infection. Bacteria and viruses reproduce and evolve at a frightening rate. As they do so, they tend to adapt to meet the challenges posed by their environments. This is readily apparent in the growth of antibiotic-resistant bacteria. At the moment, outside of antibiotics, sex and death are among our primary defences against such pathogens.

Bacteria and viruses attack the body by attaching themselves to protein receptors on our cell surfaces. Sometimes they are unable to do this because they have not adapted to latch onto the relevant protein receptors. But given enough time, they are likely to do so (indeed, de Grey makes use of the adaptability of bacteria in part of his scientific defence of SENS). This is where sex and death enter the picture. Death ensures that particular types of cell surfaces don’t hang around for too long; and sex allows for novel combinations of genetic material to create new receptors.

But since the negligibly senescent won’t be dying as early, and will probably not reproduce (to prevent overpopulation), they may become highly susceptible to infectious disease. Defenders of SENS can, of course, interject here that medical advances may be able to counter the effect of these highly adaptive bacteria and viruses. This is indeed possible, but that’s not the issue. The issue is whether it is likely. Bacteria evolve quickly; are we confident that medical advances will outpace them? This is something we need to consider when determining the desirability of pursuing SENS.


3. Social Costs
Agar’s final line of attack looks to the possible social costs of pursuing SENS. We’ve already commented on the effects of negligible senescence on the individual and their attitude to risk. How might those individual effects translate into social effects?

On the one hand, there might be some laudable social effects. Due to their increased risk-aversion, the negligibly senescent are likely to avoid the hollow pursuit of glory on the battlefield. Violent conflicts and wars may become a thing of the past as we all retreat to our hermitages to indulge our predilections for fast food, cigars and virtual sex.

We must, however, be careful when considering such positive effects. If different societies achieve negligible senescence at different times, then those who have achieved it may become more vulnerable to attack from those who have not (and are still hungry for glory on the battlefield). This concern is not strictly speculative. As far as I recall, some believe the increased risk aversion of Western societies is what renders them vulnerable to, for example, Islamic fundamentalist terrorism. The concern could be overcome by making negligible senescence available to all at the same time, but Agar thinks this is highly unlikely: the wealthy funders of SENS are unlikely to wish to hold off on receiving its benefits until those benefits become universally available. Robot armies might also be an option, but Agar has already expressed qualms about making artificial intelligence too intelligent.

There is a deeper problem here. Increased risk aversion may end up compromising the very foundations of the SENS programme. If we are to achieve de Grey’s goal of longevity escape velocity (LEV) then we need to launch an aggressive campaign of developing and testing medical interventions. But testing such interventions is a risky business. Who among the risk averse is likely to subject themselves to unproven techniques?

Agar marches off into highly speculative territory in responding to this. He reckons the War on Aging (de Grey’s term, not Agar’s) will lead to an increased disregard for the welfare and rights of those subjected to medical tests. He suggests that those who do not have access to longevity-increasing treatments could be used as guinea pigs, and that the introduction of a kind of medical conscription is not impossible.

Anyone interested in pursuing these kinds of scenarios further would be well-advised to read Kazuo Ishiguro’s novel Never Let Me Go (or check out the film of the same name). It depicts a society that breeds an underclass of donors whose organs are harvested in order to help others live for longer. It’s beautifully written and highly evocative.

Tuesday, May 24, 2011

Who Wants to Live Forever? Agar on Negligible Senescence (Part One)

He'd like to live forever, would you?

Welcome to yet another entry in my series on Nicholas Agar’s book Humanity’s End. Thus far, my relationship with the book has been somewhat dysfunctional. I was sceptical but intrigued by the species-relativist argument that Agar proposed in chapters 1 and 2; and I was disappointed by his failure to develop the argument further in chapters 3 and 4 (although I did, admittedly, enjoy the Searle’s wager argument).

I’m now going to turn my attention to chapters 5 and 6 of the book. In these chapters, Agar critically assesses the views of the Rasputin-esque computer-scientist-turned-gerontologist Aubrey de Grey. De Grey defends both the feasibility and desirability of pursuing strategies for engineered negligible senescence (or SENS for short). These are strategies that will slow down or completely halt the deterioration that is typically associated with aging. If successful, SENS may allow us to live lives of indefinite length (de Grey suggests 1,000 years at the very least) with all the vim and vigour of young adulthood.

The initial phase of Agar’s critical assessment of de Grey (chapter 5) focuses on the science of SENS. I’m not going to address this, fascinating and all as it is, because I’m ill-qualified to do so. The important conclusion that Agar draws from this assessment is that de Grey’s views have considerable plausibility and that the funding of SENS is a public policy goal that could yield the desired fruit. The key question, however, is whether it would be desirable to do so. That is the question addressed in chapter 6 and it is the one I will focus on here.


1. The Irrationality of Deathism
To many in the transhumanist movement, the idea that SENS is undesirable seems downright irrational. Life, at least during its prime, is full of enjoyable experiences that we would all like to continue. To oppose SENS is to embrace the desirability of death, an ideology that is sometimes termed “deathism”.

Consider the charming picture that de Grey paints of an indefinitely extended life:

“For the moment, when you retire, you retire forever. We’re sorry for old people because they’re going downhill. [When negligible senescence is available] there will be no real moral or sociological requirement to do that....retirement will be a periodic thing. You’ll be a journalist for 40 years or whatever and then you’ll be sick of it and you’ll retire on your savings or on a state pension, depending on what the system is. So after 20 years, golf will have lost its novelty value [clearly a comment by someone who doesn’t play golf-JD], and you’ll want to do something else with your life. You’ll get more training and education, and go and be a rock star for 40 years and then retire again and so on” (de Grey, quoted on p. 108 of Agar's book).

Sounds lovely, doesn't it? The sentiments expressed here suggest that the following kind of argument can be conscripted in service of SENS. (Call this the “pro-SENS argument”.)

  • (1) Our lives, as lived with the health, well-being and enthusiasm associated with early-adulthood, are desirable and so we should desire their continuance in that form.
  • (2) SENS holds the possibility that (a) life could continue indefinitely (b) with the health, well-being, and enthusiasm of early adulthood.
  • (3) Therefore, we should desire to pursue SENS.

I think this argument speaks for itself, but perhaps a word or two should be said about premise (1). Note that this premise does not propose that continued existence, in and of itself, is a good thing. This is important. Suppose scientists someday discover a technique for changing us gradually into tortoises (preserving our consciousness all the while). This would allow us to live the kinds of extended lives currently enjoyed by tortoises. Would this be desirable? Agar thinks not and I agree. It is only the kinds of experiences had during the optimal phases of human life that might be worth continuing.



2. The Boredom Objection and Agar’s Strategy
But is even this kind of existence worth pursuing? There is a classic boredom-based objection to premise (1). It is associated with the work of Bernard Williams but anyone who has read Robert Heinlein’s novel Time Enough for Love will be familiar with it. The story begins with Lazarus Long, the oldest human being in the universe, who becomes bored after two millennia of experiencing all the universe has to offer and decides he wants to die. This is a sentiment I think we can appreciate.

But as a source of criticism this sentiment is not particularly persuasive, and I think Agar justifiably gives it short shrift in his analysis. I won’t repeat all aspects of his analysis but I will suggest two main reasons that can be offered to justify the dismissal: (i) even if we would eventually get bored this does not mean significantly extending our life expectancy relative to what it is now would be a bad thing; and (ii) there is always the opt-out of suicide.

Despite the failure of the boredom-objection, Agar still thinks that the pro-SENS argument is unpersuasive. Why might he think this? He has three main lines of attack. First, he argues that there are reasons to think that the negligibly senescent would live lives that are quite different from those currently lived by typical humans in the prime of life. This would effectively be a rebuttal to (2b). Second, he argues that there might be unintended consequences associated with SENS that prevent it from extending life expectancy by as much as we might like. This would target (2a). Finally, he suggests that the pursuit of SENS could have undesirable social effects. This would give us reason to think twice about the desirability of pursuing continued existence at all costs, and might lead to a significantly more compromised version of premise (1).

I’ll consider all three lines of attack in more detail in part two.

Sunday, May 22, 2011

Should we Upload Our Minds? Agar on Searle's Wager (Part Two)



(Part One)

This post is the second in a brief series looking at Nicholas Agar’s Searlian Wager argument. The argument is a response to Ray Kurzweil’s claim that we should upload our minds to some electronic medium in order to experience the full benefits of the law of accelerating returns. If that means nothing to you, read part one.

The crucial premise in the Searlian Wager argument concerns the costs and benefits of uploading your mind versus the costs and benefits of not uploading your mind. To be precise, the crucial premise says that the expected payoff of uploading your mind is less than the expected payoff of not uploading your mind. Thus, it would not be rational to upload your mind.

In this post I want to outline Agar’s defence of the crucial premise.


1. Agar’s Strategy
The following is the game tree representing Searle’s Wager. It depicts the four outcomes that arise from our choice of whether to upload or not under the two possible conditions (Strong AI or Weak AI).




The initial force of the Searlian Wager derives from recognising the possibility that Weak AI is true. For if Weak AI is true, the act of uploading would effectively amount to an act of self-destruction. But recognising the possibility that Weak AI is true is not enough to support the argument. Expected utility calculations can often have strange and counterintuitive results. To know what we should really do, we have to know whether the following inequality really holds (numbering follows part one):


  • (6) Eu(~U) > Eu(U)


But there’s a problem: we have no figures to plug into the relevant equations, and even if we did come up with figures, people would probably dispute them (“You’re underestimating the benefits of uploading”, “You’re underestimating the costs of uploading” etc. etc.). So what can we do? Agar employs an interesting strategy. He reckons that if he can show that the following two propositions hold true, he can defend (6).


  • (8) Death (outcome c) is much worse for those considering whether to upload than living (outcome b or d).


  • (9) Uploading and surviving (a) is not much better, and possibly worse, than not uploading and living (b or d).


As I say, this strategy is interesting. While I know that it is effective for a certain range of values (I checked), it is beyond my own mathematical competence to prove that it is generally effective (i.e. true for all values of a, b, c, and d, and all probabilities p and 1-p, that satisfy the conditions set down in 8 and 9). If anyone is comfortable trying to prove this kind of thing, I’d be interested in hearing what they have to say.
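For what it’s worth, the check I did was nothing more sophisticated than the following sort of thing. The payoff values are purely illustrative assumptions of mine, chosen so that (8) and (9) hold in an informal sense; they are not figures drawn from Agar.

    # A quick numerical check of (6), Eu(~U) > Eu(U), using illustrative payoffs.
    # Outcomes follow the game tree: a = upload & Strong AI, b = don't upload & Strong AI,
    # c = upload & Weak AI (self-destruction), d = don't upload & Weak AI.

    def expected_utilities(p, a, b, c, d):
        """Return (Eu(U), Eu(~U)) given the probability p that Strong AI is true."""
        return p * a + (1 - p) * c, p * b + (1 - p) * d

    # Chosen so that (8) holds (c is far worse than b and d) and
    # (9) holds (a is only slightly better than b and d).
    a, b, c, d = 110, 100, -1000, 100

    for p in (0.1, 0.5, 0.9, 0.99, 0.999):
        eu_u, eu_not_u = expected_utilities(p, a, b, c, d)
        verdict = "don't upload" if eu_not_u > eu_u else "upload"
        print(f"p = {p:5.3f}: Eu(U) = {eu_u:8.1f}, Eu(~U) = {eu_not_u:6.1f} -> {verdict}")

On these numbers, (6) holds for every probability short of near-certainty that Strong AI is true (the p = 0.999 row tips the other way), which is why I can only say the strategy works for a certain range of values rather than for all of them.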

In the meantime, I’ll continue to spell out how Agar defends (8) and (9).


2. A Fate Worse than Death?
On the face of it, (8) seems to be obviously false. There would appear to be contexts in which the risk of self-destruction does not outweigh the potential benefit (however improbable) of continued existence. Such a context is often exploited by the purveyors of cryonics. It looks something like this:

You have recently been diagnosed with a terminal illness. The doctors say you’ve got six months to live, tops. They tell you to go home, get your house in order, and prepare to die. But you’re having none of it. You recently read some adverts for a cryonics company in California. For a fee, they will freeze your disease-ridden body (or just the brain!) to a cool -196 C and keep it in storage with instructions that it only be thawed out at such a time when a cure for your illness has been found. What a great idea, you think to yourself. Since you’re going to die anyway, why not take the chance (make the bet) that they’ll be able to resuscitate and cure you in the future? After all, you’ve got nothing to lose.

This is a persuasive argument. Agar concedes as much. But he thinks the wager facing our potential uploader is going to be crucially different from that facing the cryonics patient. The uploader will not face the choice between certain death, on the one hand, and possible death/possible survival, on the other. No; the uploader will face the choice between continued biological existence with biological enhancements, on the one hand, and possible death/possible survival (with electronic enhancements), on the other.

The reason has to do with the kinds of technological wonders we can expect to have developed by the time we figure out how to upload our minds. Agar reckons we can expect such wonders to allow for the indefinite continuance of biological existence. To support his point, he appeals to the ideas of Aubrey de Grey. De Grey thinks that -- given appropriate funding -- medical technologies could soon help us to achieve longevity escape velocity (LEV). This is when new anti-aging therapies consistently add years to our life expectancies faster than age consumes them.

If we do achieve LEV, and we do so before we achieve uploadability, then premise (8) would seem defensible. Note that this argument does not actually require LEV to be highly probable. It only requires it to be relatively more probable than the combination of uploadability and Strong AI.


3. Don’t you want Wikipedia on the Brain?
Premise (9) is a little trickier. It proposes that the benefits of continued biological existence are not much worse (and possibly better) than the benefits of Kurzweilian uploading. How can this be defended? Agar provides us with two reasons.

The first relates to the disconnect between our subjective perception of value and the objective reality. Agar points to findings in experimental economics that suggest we have a non-linear appreciation of value. I’ll just quote him directly since he explains the point pretty well:

For most of us, a prize of $100,000,000 is not 100 times better than one of $1,000,000. We would not trade a ticket in a lottery offering a one-in-ten chance of winning $1,000,000 for one that offers a one-in-a-thousand chance of winning $100,000,000, even when informed that both tickets yield an expected return of $100,000....We have no difficulty in recognizing the bigger prize as better than the smaller one. But we don’t prefer it to the extent that it’s objectively...The conversion of objective monetary values into subjective benefits reveals the one-in-ten chance at $1,000,000 to be significantly better than the one-in-a-thousand chance at $100,000,000 (pp. 68-69).
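A quick way to see the effect Agar is describing is to run the two lotteries through a concave utility function. The logarithmic utility function and the baseline wealth figure in the snippet below are standard but arbitrary modelling assumptions of mine, not anything Agar commits himself to.

    import math

    # Both lotteries have the same expected monetary value ($100,000), but a
    # concave (here logarithmic) utility function ranks the likelier, smaller prize higher.
    BASELINE_WEALTH = 10_000  # arbitrary assumed starting wealth

    def expected_log_utility(p_win, prize):
        """Expected utility of a lottery under log utility over total wealth."""
        return (p_win * math.log(BASELINE_WEALTH + prize)
                + (1 - p_win) * math.log(BASELINE_WEALTH))

    small_prize = expected_log_utility(0.1, 1_000_000)      # one-in-ten chance of $1,000,000
    large_prize = expected_log_utility(0.001, 100_000_000)  # one-in-a-thousand chance of $100,000,000

    print(f"One-in-ten chance of $1,000,000:          {small_prize:.4f}")
    print(f"One-in-a-thousand chance of $100,000,000: {large_prize:.4f}")

The likelier, smaller prize comes out ahead even though the two tickets are equivalent in expected monetary terms, which is exactly the gap between objective value and subjective benefit that Agar is trading on.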

How do these quirks of subjective value affect the wager argument? Well, the idea is that continued biological existence with LEV is akin to the one-in-ten chance of $1,000,000, while uploading is akin to the one-in-a-thousand chance of $100,000,000: people are going to prefer the former to the latter, even if the latter might yield the same (or even a higher) payoff.

I have two concerns about this. First, my original formulation of the wager argument relied on the straightforward expected-utility-maximisation principle of rational choice. But by appealing to the risks associated with the respective wagers, Agar would seem to be incorporating some element of risk aversion into his preferred rationality principle. This would force a revision of the original argument (premise 5 in particular), albeit one that works in Agar’s favour. Second, the use of subjective valuations might affect our interpretation of the argument. In particular it raises the question: Is Agar saying that this is how people will in fact react to the uploading decision, or is he saying that this is how they should react to the decision?

Agar’s second line of defence for premise (9) concerns species-relative values and claims that converting ourselves into electronic beings will result in the loss of experiences and motivations that are highly valuable. Here, at last, we get a whisper of Agar’s main argument, but alas it remains a whisper. He promises to elaborate further in chapter nine.


4. Conclusion
This concludes Agar’s main defence of the Searlian Wager argument. The implication of the argument is simple: the greater certainty attached to continued biological existence will make it the more attractive option. As a result, it will never be rational to upload our minds.

Following on from his main defence, Agar looks at the possibility of testing to see whether uploading preserves conscious experience before deciding to fully upload ourselves. This could reduce the uncertainty associated with the wager and thus make uploading the rational choice. Agar thinks any proposed experiments are unlikely to prove what we would like them to prove. The uncertainty would seem to be at the heart of the hard problem of consciousness.

Finally, Agar also discusses, at the end of chapter four, the problem of unfriendly AI and the dangers associated with creating electronic copies of yourself. I won’t discuss these issues here. Enough food for thought should have been provided by the wager argument itself.

Saturday, May 21, 2011

Should we Upload Our Minds? Agar on Searle's Wager (Part One)



I’m currently working my way through Nicholas Agar’s book Humanity’s End. The book is a contribution to the ongoing debate over the merits of human enhancement. Agar develops and defends something he calls the species-relativist argument against radical enhancement. I set out the basic structure of this argument, and commented on some of its key elements, in previous posts. My comments were based on my reading of chapters 1 and 2 of the book. I now wish to turn my attention to chapters 3 and 4.

My initial reaction to these chapters is one of disappointment. Things had been running along rather smoothly up until this point: Agar had set out his conclusion, told us how he was going to argue for it, and left some important threads of the argument dangling tantalisingly before us. That he didn’t continue with its development was rather surprising.

For you see, in chapters 3 and 4, Agar discusses the views of the arch-technological utopian Ray Kurzweil. This was not unexpected -- Agar told us he would discuss the views of four pro-enhancement writers in chapter 1 -- what was unexpected was the aspect of Kurzweil’s arguments he chose to discuss. Only the faintest whispers of the species-relativist argument can be heard in the two chapters.

Despite this shortcoming, there is still much of value in Agar’s discussion of Kurzweil. And over the next two posts I want to focus on what I take to be the most interesting aspect of that discussion: the Searle’s Wager argument.


1. Wager Arguments in General
We are all familiar with the concept of a wager. It is a concept that applies in a certain kind of decision-making context, one with uncertainty. So you put money on a horse because you think it might win a race; you bet with your insurance company that your house will burn down in the coming year; and so on.

Those contexts can be described in a cumbersome form, using the tools of informal argumentation; or they can be described in a more elegant form, using the tools of decision theory. I’ll run through both forms of description here.

This is the cumbersome form; it assumes that there are two possible states of the world and two possible courses of action:


  • (1) The world is either in state X (with probability p) or state Y (with probability 1-p); and you can choose to do either φ or ψ.
  • (2) If the world is in state X and you do φ, then outcome a will occur; alternatively, if you do ψ, then outcome b will occur.
  • (3) If the world is in state Y and you do φ, then outcome c will occur; alternatively, if you do ψ then outcome d will occur.
  • (4) Therefore, the expected payoff or utility of φ (Eu(φ)) = (p)(a) + (1-p)(c); and the expected payoff of ψ (Eu(ψ)) = (p)(b) + (1-p)(d). (from 1, 2, and 3)
  • (5) You ought to do whatever yields the highest expected payoff.
  • (6) So if Eu(φ) > Eu(ψ), you ought to do φ; and if Eu(ψ) > Eu(φ) you ought to do ψ.


That’s it. As I said, this is a cumbersome way of expressing the logic of the wager. The more elegant way uses the same set of equations and inequalities, but represents the decision-making context in a graphical or diagrammatic form. One form of representation uses a decision tree (or game tree) and the other form uses an outcome matrix. The latter should be familiar to anyone who has been reading my series on game theory.

I prefer the tree representation and I give it below. The first node in this tree represents Nature. Nature is like a player in a game, albeit an unusual one. She (I’ll use the convention of viewing nature as a female) selects the possible states of the world in accordance with certain probabilities, not in anticipation of what you might do (which is what a strategic player would do). To get the picture right you can imagine Nature rolling a die before making her move. The second set of nodes represents you. You have to make a decision about the most appropriate thing for you to do. You do so in accordance with the standard principles of practical rationality: pick the option with the highest expected payoff, given what you know about the likely probabilities guiding Nature’s move.


The Wager


This should all be relatively straightforward. Where wager arguments tend to get interesting is when they point to one overwhelmingly good (or bad) outcome that can make it rational to choose the action leading to (or avoiding) that outcome, even when the probability of that outcome arising is very low.

The most famous example of such a wager argument comes, of course, from Pascal. He argued that even if the probability of God’s existence was low (perhaps exceedingly low), the expected reward that comes from believing in his existence is so high that it would be practically rational to believe in his existence. This is because the costs of all the earthly encumbrances of belief pale in comparison to the potential rewards.
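To see how the arithmetic of premise (4) delivers this result, here is a minimal sketch with made-up figures: a tiny probability attached to an enormous (here merely finite) reward, and a small fixed cost of belief. All of the numbers are mine, purely for illustration.

    # Pascal-style illustration of the generic wager: Eu(phi) = (p)(a) + (1-p)(c).
    # The probability and payoffs are made-up illustrative values.

    def expected_utility(p, payoff_if_X, payoff_if_Y):
        """Expected utility of an action given its payoff in state X (prob. p) and state Y."""
        return p * payoff_if_X + (1 - p) * payoff_if_Y

    p_god = 1e-6  # assumed (exceedingly low) probability that God exists

    eu_believe = expected_utility(p_god, 10**9, -10)    # huge reward vs. earthly encumbrances
    eu_disbelieve = expected_utility(p_god, -10**9, 0)  # huge loss vs. nothing forgone

    print(f"Eu(believe)    = {eu_believe:.2f}")
    print(f"Eu(disbelieve) = {eu_disbelieve:.2f}")

Even with the probability set at one in a million, the enormous payoff swamps the modest cost of belief. Agar’s argument exploits the same structure in reverse: the overwhelmingly bad outcome is the self-destruction that Weak AI implies.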



2. Uploading our Minds
Agar employs a very similar argument in response to Kurzweil’s view that we should, in the future, upload our minds to an electronic, non-biological medium. He calls it the Searlian wager argument because it utilises some of the views of the cantankerous old Berkeley philosopher John Searle. As you can probably guess from my description, Searle is not the most agreeable of figures (at least, not to me). Still, that shouldn’t cloud our judgment of Agar’s argument which, despite its appeal to Searle, seems quite legitimate to me.

First things first, what would it mean to upload our minds? Agar envisions two possible scenarios. The first is a one-off scenario in which a fully biological being presses a button and completely copies his or her psychology into a computer. The second involves a step-by-step process in which a biological being gradually replaces parts of his or her mind with electronic analogues until eventually the electronic components dominate.

Second question: why might this seem like a good thing to do? Answer: it would allow us to take full advantage of the law of accelerating returns.* As Agar puts it (describing Kurzweil’s position):

“The message from AI is that anything done by the brain can be done better by electronic chips. According to Kurzweil, those who grasp this message will progressively trade neurons for neuroprostheses. When the transfer of mind into machine is complete, our minds will be free to follow the trajectory of accelerating improvement...We’ll soon become millions and billions of times more intelligent than we currently are.”

Sounds good, right?

Maybe not. Agar thinks there is a crucial uncertainty facing the person making the decision to upload, and that the uncertainty is such as to make the potential costs of uploading outweigh any of the benefits arising from being “millions and billions of times more intelligent than we currently are.”

The crucial uncertainty arises from the fact that there are two possible theories of artificial intelligence:

Strong AI: According to this theory it will be possible someday to create a computer that is capable of genuine, conscious thought. In other words, a computer that will have experiences that are similar to those had by ordinary human beings (the experiences may, of course, be more stupendous and awe-inspiring, but they will be there nonetheless).
Weak AI: According to this theory although it might be possible for computers to completely mimic and simulate aspects of human thought and behaviour, this does not mean that the computer will actually have conscious experiences and thoughts like those had by human beings. To believe that a computer simulating thought is actually consciously thinking is like believing that a computer simulating a volcano is actually erupting.

Kurzweil defends the first theory. Searle defends the second. Who is right does not matter. All that matters is that it is possible for Weak AI to be true. This possibility creates the conditions necessary for the wager argument to thrive.



3. The Searlian Wager Outlined
The Searlian Wager argument can now be stated. We start with the premise that our conscious experience is valuable to us. In fact, it might be supremely valuable to us: the ground from which all other personal values emanate. So it follows that it would be pretty bad for us to lose our consciousness. But according to Weak AI that’s exactly what might happen if we choose to upload ourselves to a computer. Now, admittedly, Weak AI is just a possibility, but the loss it entails is sufficient to outweigh any of the potential benefits from uploading. Thus, following the logic of the wager argument, it will never be rational to choose to upload.

Let’s restate that argument in the more cumbersome form:


  • (1) It is either the case that Strong AI is true (with probability p) or that Weak AI is true (with probability 1-p); and you can either choose to upload yourself to a computer (call this “U”) or not (call this “~U”).
  • (2) If Strong AI is true, then either: (a) performing U results in us experiencing the benefits of continued existence with super enhanced abilities; or (b) performing ~U results in us experiencing the benefits of continued biological existence with whatever enhancements are available to us that do not require uploading.
  • (3) If Weak AI is true, then either: (c) performing U results in us destroying ourselves; or (d) performing ~U results in us experiencing the benefits of continued biological existence with whatever enhancements are available to us that do not require uploading.
  • (4) Therefore, the expected payoff of uploading ourselves is Eu(U) = (p)(a) + (1-p)(c); and the expected payoff of not uploading ourselves is Eu(~U) = (p)(b) + (1-p)(d).
  • (5) We ought to do whatever yields the highest expected payoff.
  • (6) Eu(~U) > Eu(U)
  • (7) Therefore, we ought not to upload ourselves.


Here’s the relevant game tree.


Searle's Wager


What are we to make of this argument? Is it any good? Well, one obvious problem is that we have no figures to plug into the relevant equations and inequalities. And it is these equations and inequalities that carry all the weight. In particular, the inequality expressed in premise (6) seems to be the crux of the argument. Agar thinks that this premise can be defended. We'll see why in the next part.


* The law of accelerating returns posits that the returns we receive from technological advance grow at an exponential, as opposed to linear, rate.

Friday, May 20, 2011

The Species-Relativist Argument: The Boundary Line



I’m currently reading Nicholas Agar’s book Humanity’s End. It is an interesting contribution to the ongoing debate over the merits of human enhancement.

In an earlier post, I introduced Agar’s species-relativist argument against radical human enhancement. The argument proposes that because our use of radical enhancement technologies may lead to the creation of a new species, and because the experiences of that species may not be valuable to us, we are justified in preventing radical enhancement.

I tried to sketch this argument in more detail in the last post. The outline was based on chapter one of Agar’s book. I now want to turn my attention to chapter two of the book, which looks in more detail at two of the crucial premises in Agar’s argument.


1. The Questions that need Answering
The premises in question are the following:

  • (2) Certain experiences and ways of existing properly valued by members of one species may lack value for the members of another species.
  • (3) Activities and policies that promote radical enhancement may lead to the creation of beings that do not belong to the human species.

Both of these premises direct our attention to the boundary line between different species. This boundary line plays a crucial role in the normative and factual components of Agar’s argument.

On the normative side, the claim being advanced is that if a being crosses the boundary line (whether that being is a future version of ourselves or a friend or somesuch) it may then lie beyond the pale of a certain set of values that are significant to us. As Agar puts it, the species concept will “set the boundaries on an important collection of values, and that we should recognize radical enhancement as infringing on these values” (pp. 21-22).

On the factual side, Agar is claiming that the kinds of enhancement endorsed by the likes of Ray Kurzweil, Aubrey de Grey, Nick Bostrom and James Hughes -- these four are the main conversants in Agar’s book -- will create a species boundary between the enhanced and the non-enhanced.

Given the important role that the boundary line between species plays in Agar’s argument, there are four questions that need to be addressed:

(a) What are the available species-concepts and which one does Agar use? 
(b) How is the boundary line defined by Agar’s preferred species concept? 
(c) How might radical enhancement create beings that cross that boundary line? 
(d) Why is that boundary line morally significant?

To be fair to him, Agar is mainly concerned with the first three questions in the course of chapter two. He seems aware of the need to answer the fourth question, but he doesn’t get round to it. Perhaps he will address it in future chapters. Acknowledging this possibility, I still want to pass some comments on this issue at the end of this post.


2. Agar’s preferred Species Concept
There are several species concepts available in the philosophical and biological literatures. John Wilkins -- who has been known to read this blog -- could no doubt regale us for hours around the campfire with tales from the history of the species concept. Being somewhat more of a dilettante in this area, I’ll try to be brief and just mention three potential species concepts (from Griffiths and Sterelny, 1999).

First, there is the phenetic species concept. According to this concept, what makes two organisms members of the same species is some measure of their similarity across morphological, genetic or behavioural dimensions. There are problems with this concept. One concerns how we actually measure similarity; another concerns the fact that members of the same species can often be quite different (e.g. due to sexual dimorphism). As a result of these problems, this concept is generally out of favour in biology.

Second, there is the biological species concept (a process-based concept). This concept identifies a species by referring to the processes that create and sustain the species. In particular, to the fact that populations of organisms form reproductively isolated groups. So according to this definition, two organisms are members of the same species if they are part of a population that shares and transfers genes.

Finally, there is the phylogenetic species concept (a pattern-based concept). This concept appeals to the tree of life and the relationships between organisms arrayed along the branches of this tree. Accordingly, two organisms are deemed part of the same species if they are part of the same lineage of ancestral and descendant populations and if they share a distinctive trajectory along the tree of life.

One of the important points about the biological and phylogenetic concepts is that they are, broadly speaking, historical in nature: a species is defined by having a particular kind of history. This stands in contrast to the phenetic approach which is based on either clusters of shared properties or, even, essential properties.

Which of the three species concepts does Agar opt for? Unsurprisingly, he goes for the biological species concept. I say “unsurprisingly” since this is the dominant species concept in modern biology. Agar acknowledges that it is not entirely without its own problems, but he can shrug those off. It does not matter to him whether the concept is the biologically or scientifically correct one; it only matters that it performs the normative and factual roles required by his argument. Does it?



3. Reproductive Barriers
Following the biological species concept, reproductive barriers between two populations are the key boundary lines between species. These reproductive barriers can come in a variety of forms. They could for example be physiological -- one organism is physically incapable of mating with another organism -- or psychological -- one organism is repulsed by or lacks the motivation to mate with another organism. They could also be geographical or genetic. Whatever form they take, they must, in accordance with the terms of Agar’s argument, divide one collection of important values from another.

There are problems with these kinds of barriers, particularly when it comes to humans. I want to look at some of these and consider how Agar responds to them.

First, it is not clear that humans are constrained by such barriers. Allen Buchanan (whose book I am also reading) argues, for instance, that technology has effectively allowed human beings to erode almost all reproductive barriers. Even when there are physiological or psychological barriers to reproduction, our technology allows (or will allow) us to combine genetic material according to our whim. Agar is unmoved by such an argument. He writes that:

“Facts about what might or might not happen in a laboratory are actually not directly relevant to decisions about species boundaries. We assign individual organisms to species not on the basis of what happens in human or posthuman laboratories, but according to what occurs in nature.” (p. 22)

This is not an unexpected move, but I worry about it. Its willingness to disregard what may or may not happen in the lab might contradict something Agar says later on. Also, I think I’m with Buchanan in believing that distinctions between the natural and unnatural, especially when applied to humans, are virtually pointless. So much of our lives is bound up with our cultural and technological artifices that it’s difficult to know where nature begins and ends, if anywhere.

A second problem with reproductive barriers is that they may exclude a good number of beings we routinely consider to be members of our species, e.g. the post-menopausal woman, the castrated male, the homosexual, the celibate priest. Many more examples could be adduced. Agar responds to this worry by pointing out that in each of these cases there are past, potential or counterfactual reproductive connections with other human beings: "but for his castration, the male would be able to reproduce"; "the post-menopausal woman could have reproduced in the past"; and so on.

This kind of response is mildly persuasive, but then it raises the question: why can’t we take into account past, potential or counterfactual uses of technology when determining where the species boundary lies? Why not say that but for current technological limitations humans and chimpanzees could form a reproductively connected population? This point is significant. Agar’s belief that radically enhanced humans will in fact constitute a distinct species could arguably be vulnerable to this kind of counterfactual objection.

A third, and for now final, problem with reproductive barriers is that they are not, as Agar notes, all-or-none affairs. He cites the example of the New Zealand bird the black stilt, which has been breeding itself to death by mating with the Australian pied stilt. The fact that such interbreeding occurs may suggest that they are not truly distinct species, but Agar argues that the situation is more complex than that. There is still some degree of reproductive isolation between the populations: pied stilts prefer to mate with other pied stilts; and black stilts prefer to mate with other black stilts. It is only when their preferred mating partners are absent that they mate with one another. Given this situation, it might be possible for full reproductive isolation to be re-established or for the reproductive barriers to break down completely.

I have no doubt that this is all true, but it seems to create problems for Agar. He argues against radical enhancement by using the precautionary principle. This principle maintains that if there is a risk of particularly bad outcomes arising from a set of activities, we should restrict those activities. But if the creation of reproductive barriers is not an irreversible process, then maybe things are not bad enough to warrant precautionary measures. The force of this criticism depends on how Agar chooses to specify the kind of precautionary principle he is using.



4. Would Posthumans be Non-humans?
The next topic we need to address is Agar’s factual claim that radical enhancement might lead to the creation of a new species. In line with his preferred species concept, this would happen if reproductive barriers were put in place between the normal and the enhanced. He accepts that not all enhancements would create such barriers; he claims only that some might.

What form would those barriers take? Agar notes that physiological barriers might be created. For instance, genetic enhancements might require the creation of a new chromosome, which would make reproduction impossible between those who have the chromosome and those who lack it. Or, if we replace significant parts of our bodies with mechanical or electrical components, we might be prevented from mating with one another.

Agar is not too interested in those kinds of barriers. He is more concerned about the psychological barriers that might be created. He thinks that radically enhanced beings would have little reproductive interest in normal human beings: they would view them as something akin to pets. He makes this point through an amusing thought experiment involving a futuristic singles ad placed by a normal human (“merelyhuman”) that has to compete with that of an enhanced human (“singularityman”). He thinks the psychological boundaries between singularityman and merelyhuman would be more robust and permanent than those between different cultural groups. Indeed, it might be the case that enhanced human beings with indefinitely extended lifespans drop out of the reproductive game altogether.

I won’t challenge any of this here. I have some concerns, expressed above, about the application of the biological species concept to human beings, and I do wonder how much can be inferred from a thought experiment about potential future beings, but I’m more interested in the moral significance of such reproductive barriers.


5. Why do we care about reproductive barriers?
The species-relativist argument hinges on the claim that by crossing the boundary between species we might be forced to sacrifice or give up something of great value. But if the boundary primarily concerns how we identify and select potential mating partners, then I’m having trouble seeing what all the fuss is about.

Now, certainly, I would not deny that reproduction has some value to human beings. But is the value such as to outweigh the potential benefits of enhancement? More precisely, is the value such that a slight risk of losing it is sufficient to warrant taking precautionary measures? One obvious rejoinder to Agar is that even if singularityman is no longer interested in mating with merelyhumans, he might still be interested in mating with fellow singularitarians. And so whatever loss may initially be experienced might be pretty insignificant.

Agar doesn’t address these issues in chapter two, but he’ll need to do so sooner or later. If his argument is to succeed, he will need to convince me that the reproductive barrier is one we should not cross.

Thursday, May 19, 2011

The Species-Relativist Argument: An Introduction

Do different species have different values?


As mentioned in an earlier post, I’ve recently begun reading two books on the ethics of human enhancement. One of those books is called Humanity’s End and it’s by Nicholas Agar.

Agar seems like an interesting character. In an earlier book he defended a liberal position on positive eugenics. This suggested he had a willingness to embrace certain forms of enhancement. And yet in this book he offers an argument against radical human enhancement. There's not necessarily an incompatibility between the two positions, but it’s an interesting shift nonetheless.

I must say, I’ve been looking forward to reading Agar’s book since I first heard about it. Although I consider myself to be (roughly-speaking) favourably disposed towards enhancement, I’m interested in reading good arguments against it. I’ve been hoping Agar might provide such an argument. Alas, after reading the first two chapters, things are not looking good. Still, I want to give it a chance.

As part of my wish to give Agar a fair hearing, I want to try to clarify in my own mind the kind of argument he is offering against radical enhancement. He hasn’t fully developed that argument yet (i.e. by the end of chapter two), but he has offered considerable hints about the direction in which he is headed. The following is my attempt to elucidate that direction based on his initial discussion of the argument in chapter one. I’ll look at chapter two some other time.


1. The Species-Relativist Argument
Agar defends something he calls the species relativist argument against radical enhancement. By radical enhancement he means anything that causes human attributes and capacities to greatly exceed their current potentialities (see p.1 of the book for this). That definition seems a little woolly to me, but I’m willing to leave it to one side for now because I want to focus my attention on the actual argument. I do so by first quoting a passage from Agar and then constructing a formal argument based on that passage. Following this, I’ll refine the argument a little bit by highlighting Agar's use of the precautionary principle.

Here’s the passage from Agar:

“According to species-relativism, certain experiences and ways of existing properly valued by members of one species may lack value for the members of another species. In chapter 2, I make the case that radical enhancement is likely to create beings who do not belong to the human species....[Subsequently] I’ll be offering species-relativist arguments for finding purportedly enhanced posthuman existences inferior to human existences. Species-relativism, therefore, justifies rejecting radical enhancement.” (pp. 12-13)

As we can see from the use of “therefore” in this passage, Agar clearly thinks he is offering some kind of argument against radical enhancement. The problem, as is typical in summaries of this sort, is that the argument is incomplete: certain key premises are either implied or omitted. I’m going to try to fill in the blanks.

Here’s my version of the argument:

  • (1) Humans should pursue activities and policies that promote or honour their values; they should not pursue activities and policies that do not promote or honour their values.
  • (2) Certain experiences and ways of existing properly valued by members of one species may lack value for the members of another species.
  • (3) Activities and policies that promote radical enhancement may lead to the creation of beings that do not belong to the human species.
  • (4) Therefore, radical enhancement may create beings whose experiences and ways of existing lack value for human beings (from 2 and 3).
  • (5) Therefore, we should not pursue activities and policies that promote radical enhancement (from 1 and 4).




A few things need to be said about this argument. First, premise (1) is my own interpolation. I’m presuming that something akin to this principle is guiding Agar’s evaluations. I think the principle is relatively innocuous. All it says is that we should honour or promote that which is valuable. The use of the terms “honour” and “promote” is intended to make the principle neutral between deontological and consequentialist approaches to ethics. There’s another important point to be made about (1), but I will save that until the next section.

Premise (2) is almost a word-for-word repetition of what Agar says about species-relativism. As such, it should be uncontroversial. Premise (3) is simply what Agar tells us he will do in chapter two. Premise (4) follows from (2) and (3). And (5) follows, with certain caveats to be discussed in a moment, from (1) and (4).

One thing worth noting about (4) is that it mimics the structure of cultural relativist claims. In other words, it says that X (may) lack value relative to a particular perspective or set of characteristics; it does not say that X absolutely or necessarily lacks value from all perspectives. This might be thought a weakness, since even if it is true it only raises the question: why not switch to another perspective from which X is valuable?

Agar acknowledges this, but makes a couple of comments in response. First, he says that the boundaries between species are more likely to be morally significant than cultural boundaries. Second, the claim does not assume that there are no shared moral experiences across species (all species might experience pain, for example), just that certain important values may differ.

One thing that concerns me about this line of argument is the moral weight it places on the boundary between different species. For it to be successful, Agar really has to show that changing from being a human to being a posthuman involves crossing a significant (from the human perspective) moral threshold.


2. The Precautionary Principle
Now we need to return to premise (1). As set forth above, it states that we should not pursue policies and activities that dishonour or fail to promote our values. And yet the species-relativist argument as a whole reaches the conclusion that we should not pursue radical enhancement because it may lead to the creation of beings whose experiences lack value from our perspective.

Is this right? Can it really be the case that if a policy “may lead” to undesirable outcomes it should not be pursued? Surely, the good outcomes need to be weighed against the bad outcomes before deciding that? Well, maybe not.

Agar, you see, appeals to the precautionary principle. This is a key guiding principle in many areas of environmental policy. It maintains, roughly, that if there is a chance that a policy might lead to a particularly bad outcome, even if the relevant probability is low, then that policy should not be pursued. So, for example, in the climate change debate, a low probability of devastating outcomes can be thought to justify strict precautionary measures. It is up to opponents of those measures to show that those outcomes will not arise.

Agar thinks something similar can be adopted in the enhancement debate. He reckons that if he can show that radical enhancement has the chance of leading to particularly bad outcomes, then that alone justifies not pursuing radical enhancement. It will be left to the defenders of radical enhancement to show that those outcomes will not arise.
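To see how this differs from ordinary cost-benefit reasoning, consider a toy calculation of my own (the numbers are invented purely for illustration; they are not Agar’s or Manson’s). Suppose a policy has a 95% chance of producing a modest benefit (+100) and a 5% chance of producing a catastrophic loss (−1000). Its expected utility is:

$$EU(\text{policy}) = 0.95 \times 100 + 0.05 \times (-1000) = 95 - 50 = 45$$

On a straightforward expected-utility calculation the policy comes out positive and so looks worth pursuing. A precautionary principle of the kind just described treats the mere chance of the catastrophic outcome as decisive, and so would prohibit the policy despite its positive expectation, unless its defenders can show that the catastrophe will not in fact occur.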

So premise (1) needs to be reformulated slightly:

  • (1*) Humans should pursue activities and policies that promote or honour their values; they should not pursue activities and policies that might fail to promote or honour their values (in accordance with the terms of the precautionary principle).

While I have no serious objection to the use of the precautionary principle in this debate, I think we have to be careful about it. As Neil Manson points out in his article on this issue, the precautionary principle is really an abstract template with different variables. To see what I'm talking about, it’s worth looking at Manson’s abstract version of the principle (I’ve reworded this):

Precautionary Principle: If an effect meets a damage condition, and if the link between the effect and a certain activity or range of activities meets a knowledge condition, then decision makers ought to enact a specified remedy.

The “damage condition” refers to the class of effects or outcomes that we think are undesirable and which merit taking precautionary measures. The “knowledge condition” allows us to say that there is a link between certain activities and certain undesirable outcomes. Finally, the remedies are the actual policies that need to be followed in order to avoid or avert the undesirable outcomes. Each of these three variables (damage condition, knowledge condition, remedies) needs to be filled in.
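To make the template a little more explicit, here is one way of schematizing it (this is my own shorthand, not Manson’s notation):

$$\big(D(e) \wedge K(a, e)\big) \rightarrow R(a)$$

where $D(e)$ says that effect $e$ satisfies the damage condition, $K(a, e)$ says that our evidence linking activity $a$ to effect $e$ satisfies the knowledge condition, and $R(a)$ says that the specified remedy ought to be enacted with respect to activity $a$.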

The problem, as Manson points out, is that these variables can be specified in a variety of different ways. Later in his article, he offers the following table to illustrate this point:

[Table of possible specifications of the precautionary principle omitted (from Manson, 2002).]


My motivation in reproducing this table is just to point out that Agar needs to do some more work to make his use of the precautionary principle compelling. It will be interesting to see whether he does this or whether he leaves it under-specified.


3. Conclusion
Thus ends my introduction to Agar’s species-relativist argument. If my presentation is fair, there are lots of things Agar will need to do to make his argument persuasive. For starters, he’ll need to show that the dividing line between species is morally relevant and that we will lose something of significant value by crossing over the line into posthumanity; he will have to show that radical enhancement is likely to lead to the creation of a new species; and he will have to be more specific about the nature of the precautionary principle he is using to support his conclusions about the undesirability of pursuing radical enhancement.

Wednesday, May 18, 2011

Five Features of the Enhancement Debate



I’ve recently developed a bit of an interest in the enhancement debate. You know the one: should we allow human beings to enhance themselves? Is enhancement, perhaps, obligatory/impermissible? Would it lead to the creation of desirable/undesirable posthumans? Would it lead to increased social inequality and stratification? What are the appropriate legal regulations of enhancement technologies? And so on.

I’m specifically interested in cognitive enhancement. This kind of enhancement would try to improve how the human mind acquires, processes, stores and retrieves information. (More generally, we can speak of “neural enhancement” which would also target emotional and motivational capacities. Drawing a dividing line between cognitive capacities and these other capacities is difficult and, perhaps, pointless since the same basic philosophical issues arise in relation to enhancing both and, indeed, to enhancing any human capacity).

As a result of my interest, I’m currently reading two books on enhancement. One is, roughly speaking, a pro-enhancement book called Beyond Humanity? and it’s written by Allen Buchanan. The other is, roughly speaking, an anti-enhancement book called Humanity’s End and it’s written by Nicholas Agar. I say “roughly speaking” about each of these because, as we shall see, the pro- and anti- labels can be misleading.

Anyway, I decided I might do some occasional posts (I’m not promising anything too serious) on each of these books as I work my way through them. I literally only started this morning, but already I find myself wanting to share part of the first chapter of Buchanan’s book.

The part in question outlines five (negative) features of the contemporary debate on enhancement. I’ll summarise them as best I can below. Just note at the outset that Buchanan promises to provide more extensive argumentation on most of these issues in subsequent chapters. So if they feel insufficiently justified, it might be worth giving him the benefit of the doubt.


1. Murky Rhetoric
The first problem with the contemporary debate is the tendency for opponents of enhancement to rely on murky and overblown rhetoric to mask their (presumably feeble) arguments.

One of the examples Buchanan uses to illustrate this point is Michael Sandel’s talk of the need for “gratitude towards the given” (not a direct quote). Sandel argues that permitting enhancement would force us to take an attitude of excessive responsibility over the kinds of beings that we are. This would result in us ignoring the fact that much of what is good in our lives is not attributable to our own efforts but is instead a bequeathment of a complex cultural and biological history.

Buchanan notes several problems with this language. First, and most obviously, it involves a kind of category mistake. One cannot be grateful towards a mindless and undirected series of historical events (this assumes a non-theistic worldview, which is consistent with Sandel’s arguments).

Second, and more importantly, the rhetoric ignores the fact that our historical bequeathment is not all good. There are many bad aspects to the accidents of history. Should we display gratitude towards all of them? Many authors seem to think that the “natural” range of human functioning is somehow sacrosanct. They forget that evolutionary biology reveals that organisms never evolve toward an absolute optimal or ideal state; instead, they evolve toward an equilibrium state relative to a contingent set of constraints.

Buchanan goes on to provide other examples of the rhetoric problem, but this will suffice for present purposes.


2. Ignoring Evolutionary Biology
The second problem with the enhancement debate is that many authors within it display a profound ignorance of contemporary evolutionary biology. This problem was signposted above, but it goes much further than we might initially suspect.

For one thing, there is an excessive willingness to sacralise “human nature”. This is presumably taken to refer to our biological nature and to stand in contrast to cultural or environmental influences. But anyone who is familiar with contemporary behavioural biology will know that the simplistic nature/nurture distinction is discredited. We now know that our phenotype is the product of a set of causal feedback loops between genes and environment. Discussions of enhancement that ignore this new understanding are seriously deficient.

There is a further problem here. It is that opponents of enhancement move far too quickly from claims about what is natural for human beings to claims about what ought to be the case for human beings. Anyone who is familiar with ethics and metaethics will know that moves of this sort are dubious.


3. Empirical claims without Evidence
The third problem with the enhancement debate is the tendency for opponents of enhancement to advance sweeping empirical claims without evidence. This is a serious methodological error.

Sandel provides the scratching post upon which Buchanan sharpens his claws. Sandel is guilty of making two sweeping empirical claims. The first relates to the motivations behind people’s desire for enhancement. As Sandel puts it, people are driven by a lust for total self-mastery. The second relates to the social consequences of enhancement. Sandel maintains that enhancement will lead to extreme social stratification and will undermine the commitment to distributive justice.

As Buchanan argues, both claims are open to doubt. But what is truly remarkable is that Sandel advances them as if they were self-evidently true. He provides no further evidence for them. This makes the enhancement debate one of the last places where a priori psychology and sociology are taken seriously.


4. Unclarity about the Bottom Line
A fourth problem with the enhancement debate is that opponents of enhancement equivocate about their bottom line. Are they opposed to all forms of enhancement or just some?

At this point in his discussion, Buchanan has developed a certain pattern. So it will come as no surprise to learn that Sandel is once again the whipping boy. Buchanan points out that despite claiming to offer an argument against enhancement (in toto), towards the end of his book Sandel makes the following statement:

“Nor do I claim that people who bioengineer their children or themselves are necessarily motivated by a desire for mastery, and that this motive is a sin no good result could possibly outweigh. I am suggesting instead that the moral stakes in the enhancement debate are not fully captured by the familiar categories of autonomy and rights, on the one hand, and the calculation of costs and benefits, on the other.” (The Case Against Perfection, p. 96)

Buchanan says that this passage is anachronistic in the context of Sandel’s book. Sandel retreats to a modest position here: he is merely offering further considerations that must be taken into account in the enhancement debate; he is not, despite all appearances to the contrary, offering a complete dismissal of enhancement. Having not read Sandel’s book, I’ll have to take Buchanan’s word for it.


5. Stuck at the “Pros and Cons” Stage
The final problem with the enhancement debate is that, despite the endless streams of ink that have been spilled in its pursuit, it is still stuck at the pro-enhancement vs. anti-enhancement stage. This is unfortunate for a couple of reasons.

Foremost among them is the fact that the labels “pro” and “anti” are misleading. While there are people who are anti-enhancement, there are very few (if any) people who are unqualifiedly pro-enhancement. Instead, there are people who are anti-anti-enhancement, i.e. who believe enhancement is permissible, or sometimes okay. Continuing with the absolutist “pro” and “anti” labelling leads to a perpetual slaying of straw men.

Another deficiency associated with the pro- and anti- posturing is that it prevents people from putting forward serious proposals for the regulation of enhancement technologies. Most participants are content to defend their position and leave it at that. But since developments in enhancement technologies continue apace, it’s necessary to engage with the reality of enhancement, in all its complexity, and to put in place a regulatory framework for guiding the “enhancement enterprise” (as Buchanan describes it).

Okay, that’s it for now. I might look at other parts of Buchanan’s book on another occasion.