Tuesday, June 27, 2017

The Tell-Tale Brain: The Effect of Predictive Brain Implants on Autonomy




What if your brain could talk to you?

‘That’s a silly question’, I hear you say, ‘My brain already talks to me.’

To the best of our current knowledge, the mind is the brain, and the mind is always talking. Indeed, it’s where all the talking gets started. We have voices in our heads — a cacophony of different thoughts, interests, fears, and hopes — vying for attention. We live in a stream of self-talk. We build up detailed narratives about our lives. We are always spinning yarns, telling stories.

This is all probably true. But our brains don’t tell us everything. The stream of self-talk in which we are situated (or should that be ‘by which we are constituted’?) sits atop a vast, churning sea of sub-conscious neurological activity. We operate on a ‘need to know’ basis and we don’t need to know an awful lot. Many times we sail through this sea of activity unperturbed. But sometimes we don’t. Sometimes what is happening beneath the surface is deeply problematic, hurtful to ourselves and to others, and occasionally catastrophic. Sometimes our brains only send us warning signals when we are about to get washed up on the rocks.

Take epilepsy as an example. The brains of those who suffer from epilepsy occasionally enter into cycles of excessive synchronous neuronal activity. This results in seizures (sometimes referred to as ‘fits’), which can lead to blackouts and severe convulsions. Sometimes these seizures are preceded by warning signs (e.g. visual auras), but many times they are not, and even when they are, the signs often come too late for anything to be done to avert the seizure’s negative consequences. What if the brains of epileptics could tell them something in advance? What if certain patterns of neuronal activity were predictive of the likelihood of a seizure, and what if this information could be provided to epileptic patients in time for them to avert a seizure?

That’s the promise of a new breed of predictive brain implants. These are devices (sets of electrodes) that are implanted into the brains of epileptics and, through statistical learning algorithms, used to predict the likelihood of seizures from patterns of neuronal activity. These devices are already being trialled on epileptic patients and proving successful. Some people are enthusiastic about their potential to help those who suffer from the negative effects of this condition and, as you might expect, there is much speculation about other use cases for this technology. For example, could predictive brain implants tell whether someone is going to go into a violent rage? Could this knowledge prove useful in crime prevention and mitigation?

These are important questions, but before we get too carried away with the technical possibilities (or impossibilities) it’s worth asking some general conceptual and ethical questions. Using predictive brain implants to control and regulate behaviour might seem a little ‘Clockwork Orange’-y at first glance. Is this technology going to be a great boon to individual liberty, freeing us from the shackles of unwanted neural activity? Or is it going to be a technique of mind control, the ultimate infringement of human autonomy? These are some of the questions taken up in Frederic Gilbert’s paper ‘A Threat to Autonomy? The Intrusion of Predictive Brain Implants’. I want to offer some of my own thoughts on the issue in the remainder of this post.


1. The Three Types of Predictive Brain Implants

Let’s start by clarifying the technology of interest. Brain implants of one sort or another have been around for quite some time. So-called ‘deep brain stimulators’ have been used to treat patients with neurological and psychiatric conditions for a couple of decades. The most common use is for patients with Parkinson’s disease, who are often given brain implants that help to minimise or eliminate the tremors associated with their disease. It is thought that over 100,000 patients worldwide have been implanted with this technology.

Predictive brain implants (PBIs) are simply variations on this technology. Electrodes are implanted in the brains of patients. These electrodes record and analyse the electrical signals generated by the brain. They then use this data to learn and predict when a neuronal event (such as a seizure) is going to take place. At the moment, the technology is in its infancy, essentially just providing patients with warning signals, but we can easily imagine developments in the technology, perhaps achieved by combining it with other technologies. Gilbert suggests that there are three possible forms for predictive brain implants:

Purely Predictive: These are PBIs that simply provide patients with predictive information about future neuronal events. Given the kinds of events that are likely to be targets for PBIs, this information will probably always have a ‘warning signal’-like quality.

Advisory: These are PBIs that provide predictions about future neuronal events, as well as advice to patients about how to avert/manipulate those neuronal events. For example, in the case of epilepsy, a patient could be advised to take a particular medication or engage in some preventive behaviour. The type of advice that could be given could be quite elaborate, if the PBI is combined with other information processing technologies.

Automated: These are PBIs that predict neuronal events and then deliver some treatment/intervention that will avert or manipulate that event. They will do this without first warning or seeking the patient’s consent. This might sound strange, but it is not that strange. There are a number of automated-treatment devices in existence already, such as heart pacemakers or insulin pumps, and they regulate biochemical processes without any meaningful ongoing input from the patient.
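Purely as an illustration, the three forms can be pictured as variations on a single predict-then-respond loop. Everything in this sketch (the probability threshold, the warning text, the named intervention) is a hypothetical placeholder of my own, not a description of any real device:

```python
from enum import Enum

class Mode(Enum):
    PREDICTIVE = "purely predictive"   # warn only
    ADVISORY = "advisory"              # warn and suggest an intervention
    AUTOMATED = "automated"            # intervene without consulting the patient

def respond(seizure_probability: float, mode: Mode, threshold: float = 0.8):
    """Dispatch on the predicted probability of a neuronal event.

    Returns the list of actions the device would take. The threshold
    and action strings are illustrative, not clinical values.
    """
    if seizure_probability < threshold:
        return []  # below threshold: the device stays silent
    if mode is Mode.PREDICTIVE:
        return ["warn: seizure likely"]
    if mode is Mode.ADVISORY:
        return ["warn: seizure likely", "advise: take medication X"]
    # Automated: act first; the patient may only be informed afterwards
    return ["deliver: preventive intervention"]
```

The point the sketch makes explicit is that the three forms share the same prediction step and differ only in what happens downstream of it, which is exactly where the autonomy questions arise.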

The boundary between the first two categories is quite blurry. Given that PBIs necessarily select specific neuronal events from the whirlwind of ongoing neuronal events for prediction, and given that they will probably feed this selective information to patients in the form of warning signals, the predictions are likely to carry some implicit advice. Nevertheless, the type of advice provided by advisory PBIs could, as mentioned above, be more or less elaborate. It could range from the very general ‘Warning: you ought to do something to avert a seizure’ to the more specific ‘Warning: you ought to take medication X, which can be purchased at store Y, which is five minutes from your present location’.



The different types of PBI could have very different impacts on personal autonomy. At a first glance, it seems like an automated PBI would put more pressure on individual autonomy than a purely predictive PBI. Indeed, it seems like a purely predictive or advisory PBI could actually benefit autonomy, but that first glance might be misleading. We need a more precise characterisation of autonomy, and a more detailed analysis of the different ways in which a PBI could impact upon autonomy, before we can reach any firm conclusions.


2. The Nature of Autonomy
Many books and articles have been written on the concept of ‘autonomy’. Generations of philosophers have painstakingly identified necessary and sufficient conditions for its attainment, subjected those conditions to revision and critique, scrapped their original accounts, started again, given up and argued that the concept is devoid of meaning, and so on. I cannot hope to do justice to the richness of the literature on this topic here. Still, it’s important to have at least a rough and ready conception of what autonomy is and the most general (and hopefully least contentious) conditions needed for its attainment.

I have said this before, but I like Joseph Raz’s general account. Like most people, he thinks that an autonomous agent is one who is, in some meaningful sense, the author of their own lives. In order for this to happen, he says that three conditions must be met:

Rationality condition: The agent must have goals/ends and must be able to use their reason to plan the means to achieve those goals/ends.

Optionality condition: The agent must have an adequate range of options from which to choose their goals and their means.

Independence condition: The agent must be free from external coercion and manipulation when choosing and exercising their rationality.

I have mentioned before that you can view these as ‘threshold conditions’, i.e. conditions that simply have to be met in order for an agent to be autonomous, or you can have a slightly more complex view, taking them to define a three dimensional space in which autonomy resides. In other words, you can argue that an agent can have more or less rationality, more or less optionality, and more or less independence. The conditions are satisfied in degrees. This means that agents can be more or less autonomous, and the same overall level of autonomy can be achieved through different combinations of the relevant degrees of satisfaction of the conditions. That’s the view I tend to favour. I think there possibly is a minimum threshold for each condition that must be satisfied in order for an agent to count as autonomous, but I suspect that the cases in which this threshold is not met are pretty stark. The more complicated cases, and the ones that really keep us up at night, arise when someone scores high on one of the conditions but low on another. Are they autonomous or not? There may not be a simple ‘yes’ or ‘no’ answer to that question.
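To make the dimensional view concrete, here is a toy scoring model. This is my own illustration, not anything proposed by Raz or Gilbert: each condition gets a score between 0 and 1, a stark failure on any condition rules autonomy out altogether, and otherwise the degrees aggregate, so that different mixes of the three can yield the same overall level.

```python
def autonomy_score(rationality: float, optionality: float, independence: float,
                   floor: float = 0.2) -> float:
    """Toy model of the three-dimensional view of autonomy.

    Each condition is scored in [0, 1]. If any condition falls below a
    minimal threshold ('floor'), the agent does not count as autonomous
    at all; otherwise the degrees combine. All numbers are illustrative.
    """
    scores = (rationality, optionality, independence)
    if any(not 0.0 <= s <= 1.0 for s in scores):
        raise ValueError("each condition must be scored in [0, 1]")
    if any(s < floor for s in scores):
        return 0.0  # stark case: a threshold condition is unmet
    return sum(scores) / 3  # simple average; other aggregations possible
```

The hard cases mentioned above are exactly those where one input is high and another low: the model forces a single number out, but nothing in the philosophy guarantees that a simple average is the right aggregation rule.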

Anyway, using the three conditions we can formulate the following ‘autonomy principle’ or ‘autonomy test’:

Autonomy principle: An agent’s actions are more or less autonomous to the extent that they meet the (i) rationality condition; (ii) optionality condition and (iii) independence condition.

We can then use this principle to determine whether, and to what extent, PBIs interfere with or undermine an agent’s autonomy.

What would such an analysis reveal? Well, looking first to the rationality condition, it is difficult to see how a PBI could undermine this. Unless they malfunction or are misdirected, it is unlikely that a PBI would undermine our capacity for rational thought. Indeed, the contrary would seem to be the case. You could argue that a condition such as epilepsy is a disruption of rationality. Someone in the grip of a seizure is no longer capable of rational thought. Consequently, using the PBI to avert or prevent their seizure might actually increase, not decrease their rationality.

Turning to the other two conditions, things become a little more unclear. The extent to which autonomy is enhanced or undermined depends on the type of PBI being used.


3. Do advisory PBIs support or undermine autonomy?
Let’s start by looking at predictive/advisory PBIs. I’ll treat these as a pair since, as I stated earlier on, a purely predictive PBI probably does carry some implicit advice. That said, the advice would be different in character. The purely predictive PBI will provide a vague, implied piece of advice (“do something to stop x”). The advisory PBI could provide very detailed, precise advice, perhaps based on the latest medical evidence (“take medication x in ten minutes time and purchase it from vendor y”). Does this difference in detail and specification matter? Does it undermine or promote autonomy?

Consider this first in light of the optionality condition. On the one hand, you could argue that a vague and general bit of advice is better because it keeps more options open. It advises you to do something, but leaves it up to you exactly what that is. The more specific advice seems to narrow the range of choices, and this may seem to reduce the degree of optionality. That said, the effect here is probably quite slight. The more specific advice is not compelled or forced upon you (more on this in a moment), so you are arguably left in pretty much the same position as someone getting the more general advice, albeit with a little more knowledge. Furthermore, there is the widely-discussed ‘paradox of choice’ which suggests that having too many options can be a bad thing for autonomy because it leaves you paralysed in your decisions. Having your PBI specify an option might help you to break that paralysis. That said, this paradox of choice may not arise in the kinds of scenarios in which PBIs get deployed. The paradox of choice is best documented in relation to consumer behaviours and it’s not clear how similar this would be to decisions about which intervention to pick to avoid a neuronal event.

The independence condition is possibly more important. At a first glance, it seems pretty obvious that an advisory PBI does not undermine the independence condition. For one thing, the net effect of a PBI may be to increase your overall level of independence because it will make you less reliant on others to help you out and monitor your well-being. This is one thing Gilbert discusses in his paper on epileptic patients. He was actually involved with one of the first experimental trials of PBIs and interviewed some of the patients who received them. One of the patients on the trial reported feeling an increased level of independence after getting the implant:

…the patient reported: “My family and I felt more at ease when I was out in the community [by myself], […] I didn’t need to rely on my family so much.” These descriptions are rather clear: with sustained surveillance by the implanted device, the patient experienced novel levels of independence and autonomy. 
(Gilbert 2015, 7)

In addition to that, the advisory PBI is merely providing you with suggestions: it does not force them upon you. You are not compelled to take the medication or follow the prescribed steps. This doesn’t involve manipulation or coercion in the sense usually discussed by philosophers of autonomy.

So things look pretty good for advisory PBIs on the independence front, right? Well, not so fast. There are three issues to bear in mind.

First, although the advice provided by the PBI may not be coercive right now, it could end up having a coercive quality. For example, it could be that following the advice provided by the PBI is a condition of health insurance: if you don’t follow the advice, you won’t be covered by your health insurance policy. That might lend a coercive air to the phenomenon.

Second, people may end up being pretty dependent on the PBI. People might not be inclined to second guess or question the advice provided, and may always go along with what it says. This might make them less resilient and less able to fend for themselves, which would undermine independence. We already encounter this phenomenon, of course. Many of us are already dependent on the advice provided to us by services like Google Maps. I don’t know how you feel about that dependency. It doesn’t bother me most of the time, though there have been occasions on which I have lamented my overreliance on the technology. So if you think that dependency on Google Maps undermines autonomy, then you might think the same of an advisory PBI (and vice versa).

Third, and finally, the impact of an advisory PBI on independence, specifically, and autonomy, more generally, probably depends to a large extent on the type of neuronal event it is being used to predict and manipulate. An epileptic on the cusp of a seizure is already in a state of severely compromised autonomy. They have limited options and limited independence in any event. The advisory PBI might impact negatively on those variables in moments just prior to the predicted seizure, but the net effect of following the advice (i.e. possibly avoiding the seizure) probably compensates for those momentary negative impacts. Things might be very different if the PBI was being used to predict whether you were about to go into a violent rage or engage in some other immoral behaviour. We don’t usually think of violence or immorality as diseases of autonomy so there may be no equivalent compensating effect. In other words, the negative impact on autonomy might be greater in these use-cases.


4. Do automated PBIs support or undermine autonomy?
Let’s turn finally to the impact of automated PBIs on autonomy. Recall, these are PBIs that predict neuronal events and use this information to automatically deliver some intervention to the patient that averts or otherwise manipulates those neuronal events. This means that the decisions made on foot of the prediction are not mediated through the patient’s conscious reasoning faculties; they are dictated by the machine itself (by its code/software). The patient might be informed of the decisions at some point, but this has no immediate impact on how those decisions get made.

This use of PBIs seems to be much more compromising of individual autonomy. After all, the automated PBI does not treat the patient as someone whose input is relevant to ongoing decisions about medical treatment. The patient is definitely not given any options, nor are they respected as an independent autonomous agent. Consequently, the negative impact on autonomy seems clear.

But we have to be careful here. It is true that the patient with the automated PBI does not exercise any control over their treatment at the time that the treatment is delivered, but this is not to say they exercise no control at all. Presumably, the patient originally consented to having the PBI implanted in their brain. At that point in time, they were given options and were treated as an independent autonomous agent. Furthermore, they may retain control over how the device works in the future. The type of treatment automatically delivered by the PBI could be reviewed over time, by the patient, in consultation with their medical team. During those reviews, the patient could once again exercise their autonomy over the device. You could, thus, view the use of the automated PBI as akin to a commitment contract or Ulysses contract. The patient is autonomously consenting to the use of the device as a way of increasing their level of autonomous control over their life as a whole. This may mean losing autonomy over certain discrete decisions, but gaining it in the long run.

Again, the type of neuronal event that the PBI is used to avert or manipulate would also seem crucial here. If it is a neuronal event that otherwise tends to compromise or undermine autonomy, then it seems very plausible to argue that use of the automated PBI does not undermine or compromise autonomy. After all, we don’t think that the diabetic has compromised their autonomy by using an automated insulin pump. But if it is a neuronal event that is associated with immorality and vice, we might feel rather different.

I should add that all of this assumes that PBIs will be used on a consensual basis. If we start compelling certain people to use them, the analysis becomes more complex. The burgeoning literature on neurointerventions in the criminal law would be useful for those who wish to pursue those issues.


5. Conclusion
That brings us to the end. In keeping with my earlier comments about the complex nature of autonomy, you’ll notice that I haven’t reached any firm conclusions about whether PBIs undermine or support autonomy. What I have said is that ‘it depends’. But I think I have gone beyond a mere platitude and argued that it depends on at least three things: (i) the modality of the PBI (general advisory, specific advisory or automated); (ii) the impact on the different autonomy conditions (rationality, optionality, independence) and (iii) the neuronal events being predicted/manipulated.




Monday, June 26, 2017

Episode #25 - McNamara on Fairness, Utility and High Frequency Trading


In this episode I am joined by Steven McNamara. Steven is a Professor of Law at the American University of Beirut, and is currently a visiting professor at the University of Florida School of Law. Once upon a time, Steven was a corporate lawyer. He is now an academic lawyer with interests in moral theory, business ethics and technological change in financial markets. He also has a PhD in philosophy and wrote a dissertation on Kant’s use of Newtonian scientific method. We talk about the intersections between moral philosophy and high frequency trading, taking in the history of U.S. stock market in the process.

You can download the episode here. You can listen below. You can also subscribe on Stitcher and iTunes.



Show Notes

  • 0:00 - Introduction
  • 1:22 - The history of US stock markets
  • 7:45 - The (regulatory) creation of a national market
  • 13:10 - The origins of algorithmic trading
  • 18:15 - What is High Frequency Trading?
  • 21:30 - Does HFT 'rig' the market?
  • 33:47 - Does the technology pose any novel threats?
  • 40:30 - A utilitarian assessment of HFT: does it increase social welfare?
  • 48:00 - Rejecting the utilitarian approach
  • 50:30 - Fairness and reciprocity in HFT
 


Thursday, June 22, 2017

Understanding Ideologies: Liberalism, Socialism and Conservatism




Are you a liberal, socialist or conservative? Are you fiscally conservative but socially liberal? Or socially conservative and fiscally liberal? Are you a classical liberal or a neo-liberal? Are you a Marxist socialist or a neo-Marxist socialist?

We frequently use these terms to describe our political beliefs and ideological preferences. But what do they really mean? If I am a liberal what makes me different, if anything, from a socialist or conservative? These are important questions. These terms effectively define the major points on the landscape of political opinion. But answering them is not easy. There are many different definitions and understandings of liberalism, socialism and conservatism. To borrow a phrase, it often seems to be the case that ‘one man’s liberalism is another man’s conservatism’ and so on.

In this post, I want to share one recent attempt to capture the distinction between these different ideologies. That attempt comes from James Alexander’s article ‘The Major Ideologies of Liberalism, Socialism and Conservatism’. It is a unique and somewhat idiosyncratic take on the topic, suggesting that there is one core defining feature to each of the three ideologies and that they can be arranged in a logical/hierarchical order. I want to see why Alexander thinks this. I start with his general approach to understanding ideologies.


1. Understanding Ideologies
Alexander’s article — like most academic articles — starts from a point of departure (i.e. a disagreement with the existing literature). In his case, the point of departure is the ‘collectivising’ approach to ideologies that is characteristic of most previous attempts to taxonomise and define liberalism, socialism and conservatism. The collectivising approach tries to group different conceptualisations of the respective ideologies together. Authors who adopt this approach tend to think that there is nothing simple about any of the ideologies. There is no one canonical version of liberalism, for example. Instead, there are many different liberals, each emphasising different properties or features of the ideology. Authors often proceed to categorise these properties and features, suggesting that some are ‘core’ and others ‘peripheral’. Nevertheless, they think that ideologies are complex assemblages of these properties, and that the distinctions between the different ideologies are blurry and often a matter of emphasis rather than something more fundamental.

Alexander favours a different approach. Instead of collecting and grouping, he prefers to distinguish and differentiate. He wants to focus on what makes the ideologies different from one another. This is a reasonable approach to take. As he points out, there is a certain irony to the fact that all those authors who focus on the complex and plural nature of the three ideologies still tend to separate them out and, usually, assume some hierarchy between them. He discusses three books in particular that take this approach: Heywood’s Political Ideologies; Vincent’s Modern Political Ideologies; and Freeden’s Ideologies and Political Theory.

[I]n all of these books the ideologies are compartmentalised into prefabricated categories—called chapters…Liberalism, conservatism and socialism are the ‘major ideologies’, and liberalism is the most important or the original of the three. 
(Alexander 2015)

They never fully justify why they do this or why liberalism is taken to be the most important ideology. Alexander tries to supply the missing justification. He does so by first trying to define what an ideology is. As he sees it, ideologies are distinctively modern phenomena. They arose in the aftermath of the Enlightenment and its associated revolutions, when the traditional Christian and monarchical order was called into question. Appeals to God-given inequalities or rights to govern were no longer persuasive. A new way to justify political arrangements was required. That’s how ideologies came into being.

More specifically, Alexander suggests that ideologies use a (typically secular) criterion to evaluate political arrangements; that they do so in an environment in which that criterion is always being challenged and contested by other ideologies; and that the criterion used by ideologies is best understood in terms of debt (i.e. to whom does society owe its fundamental duties). This gives him the following definition:

Ideology: 'An ideology is a view about what ought to be thought, said and done about politics in terms of a sole criterion, where that sole criterion is a suggestion about to what or whom a fundamental debt is owed; and where this view is contested by views dependent on rival criteria within a situation which is constituted by the continual contestation of criteria.' (Alexander 2015)

The idea that debt is the basic concept underlying all major political ideologies might seem a little odd, and Alexander’s approach to the topic of ideology is, as I said earlier, idiosyncratic. Nevertheless, you have to admire his attempt to develop a coherent theory, and its more abstract elements make more sense when you look at the criteria used by the three main ideologies.


2. Understanding Liberalism, Socialism and Conservatism
There are many ideologies. Nationalism is an ideology. Feminism is an ideology. Fascism is an ideology. Each of these ideologies has been or continues to be important. Nevertheless, most theorists agree that liberalism, socialism and conservatism are the most important ideologies, and that understanding them gives you access to most of the current political landscape. Why is this? As noted earlier, the collectivising approach to ideology gives no clear answer. But Alexander thinks that his ‘differentiating’ approach does.

He argues that the reason why these three ideologies are grouped together is that they all agree that society owes its debts to the self. In other words, that when arranging the political order, the powers-that-be must explain themselves and justify themselves by reference to the self. This attempt to justify political orders by reference to the self is the defining feature of the modern, post-Enlightenment, era.

Now, you might argue that justifying political orders by reference to the self sounds like it describes liberalism pretty well, but not socialism and conservatism. But that’s where the most interesting and novel feature of Alexander’s theory kicks in. He argues that, contrary to what you might believe, socialism and conservatism also think that society owes its debts to the self. Where they differ from liberalism is in how they conceive of and understand the self. This, in turn, helps to explain why liberalism is usually taken to be the most important modern ideology. Liberalism, it turns out, has an exceptionally austere and simple (many would say ‘wrongheaded’) view of the self. Socialism and conservatism add layers of complexity to that view.

This is all a little abstract. Let’s make it more concrete by specifying exactly how the three different ideologies are to be understood.

Liberalism = The view that social arrangements have to be made acceptable to the self (i.e. that the fundamental debt in society is owed to the self). This is often taken to entail that social arrangements need to be understood and consented to by the self.

As Alexander puts it, somewhat poetically, the essence of the liberal view is that ‘the self has infinite credit, everything else…is in debt to that credit’. This should make sense to anyone with a passing familiarity with liberal political theory. Such theory is preoccupied with how the state justifies itself to its subjects, viewing them as having some fundamental normative (as opposed to practical) veto power over the rules by which they are governed.

The liberal conception of the self is simple. It views the self as an isolated, atomistic, individual. The self is capable of reason and understanding, and this is what marks it out as special and unique. Furthermore, the self is the only thing that is intrinsically necessary and valuable. The external world includes resources that might be instrumentally necessary to the survival of that self, but apart from those resources everything else is contingent. Many critique this view on the grounds that this self is illusory: no atomistic, isolated, rational self has ever, or will ever, exist.

Socialists and conservatives agree with this critique. Socialists agree because they think that the self cannot be understood in isolation from the community in which it lives. That community provides support for the self, and shapes how the self thinks and views the world. The liberal glosses over that dependency, noting some relationship between the self and the external world, but ignoring how important it is. This alternative conception of the self means that socialists have a different understanding of the fundamental debt:

Socialism = The view that the fundamental debt is owed to the self as constituted by society, i.e. that when justifying political orders you cannot assume a version of the self that is abstracted away from the society in which they are created.

This might seem like a subtle difference, but Alexander argues that it requires a radical shift of perspective. The essence of liberalism lies in its opposition of the self to society. The liberal self has to be protected from society. The socialist argues that the opposition presumed by the liberal is misleading. That said, Alexander suggests that socialists are often confused as to whether they should destroy liberalism or complete it. Marx, for example, favoured the goal of individual emancipation from certain aspects of the present social order, but then rejected liberalism in other ways.

Conservatives add a further layer of complexity to the understanding of the self. They agree with socialists (as odd as that may sound) that the self cannot be abstracted away from the society in which it is constituted. They add to this that it cannot be abstracted away from the historical forces that shaped that society either. In other words, we don’t just owe a debt to the self-as-constituted-by-society, we owe a debt to the self-as-constituted-by-society-and-tradition.

Conservatism = The view that the fundamental debt is owed to the self as constituted by society and by the set of traditions and cultures that shaped that society, i.e. that when justifying political arrangements you cannot assume a version of the self that is abstracted away from social and historical factors.

Now, you might argue that this doesn’t really differentiate conservatism from socialism. After all, Marxism is acutely aware of the historical forces that shape the societies we inhabit. That is true. But socialists like Marx do not think we have any obligations to history. Indeed, they often look to reform or deconstruct those historical forces. They focus on future possibilities and they long for the revolution in the social order. Conservatives are different. They think we ought to respect the historical forces. They want to hang on to what we have, rather than aspire to something hypothetical.
That said, conservatives are not necessarily opposed to change. They often resist change until it becomes inevitable, and even then insist on a high degree of continuity with the past. There are confusing statements from self-described conservatives on this score, which means they are not always consistent in their ideological commitments.



And that’s pretty much it. That’s how Alexander understands the differences between the three major political ideologies. As I said at the outset, it is an idiosyncratic view. I’m sure anyone associated with the three named ideologies will be incensed at how much has been left out. But that’s kind of the point. This is a stripped-down taxonomy. It focuses on differentiating and ordering the ideologies. It does not aim for inclusivity and plurality. If you want to learn more, I’d recommend reading the full paper, as well as Alexander’s other work.




Thursday, June 15, 2017

The Quantified Relationship




I have a new paper coming out entitled 'The Quantified Relationship'. I wrote it with my colleagues Sven Nyholm (Eindhoven) and Brian Earp (Oxford). It's going to be a 'target article' in the American Journal of Bioethics. For those who don't know, when something is accepted as a 'target article' it is open to others to publish short (1500 word) replies/critiques. If you are interested in doing this, let me know and I'll keep you posted about when this is possible.

In the meantime, here are the paper details along with links to a pre-publication draft.

Title: The Quantified Relationship

Authors: John Danaher, Sven Nyholm, Brian Earp

Journal: American Journal of Bioethics

Links: Philpapers; Researchgate; Academia

Abstract: The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity and self-actualization through the tracking and gamification of personal data. The technologies that make this possible can also track and gamify aspects of our interpersonal, romantic relationships. Several authors have begun to challenge the ethical and normative implications of this development. In the present article, we build upon this work to provide a detailed ethical analysis of the Quantified Relationship (QR). We identify eight core objections to QR and subject them to critical scrutiny. We argue that although critics raise legitimate concerns, there are ways in which tracking technologies can be used to support and facilitate good relationships. We thus adopt a stance of cautious openness towards this technology and advocate the development of a research agenda for the positive use of QR technologies. 
 
 
 




Sunday, June 11, 2017

Can we derive meaning and value from virtual reality? An Analysis of the Postwork Future

Image courtesy of BagoGames via Flickr


Yuval Noah Harari wrote an article in the Guardian a couple of months back entitled ‘The meaning of life in a world without work’. I was intrigued. Harari has gained a great deal of notoriety for his books Sapiens and Homo Deus. They are ambitious books, both in scope and intent. Harari’s subject is nothing less than the entire history and future of humanity. He wants to show us where we have come from, how we got here, and where we are going. He writes in a sweeping, breathless and occasionally grandiose style. As you read, you can’t help but get caught up in the epic sense of scale.

The Guardian article was somewhat different. It was a narrower, more provocative thinkpiece, dealing with a theme raised in his second book Homo Deus: What happens when machines take over all forms of work? What will the ‘useless class’ of humans have left to do? These are questions that interest me greatly too. I have published a couple of articles about the meaning of life in a world without work, and I am always interested to hear others opine on the same topic.

Unfortunately, I was less than whelmed by Harari’s article. It seemed a little flippant and shallow in its argumentation. To some extent, I figured this was unavoidable: you can’t cover all the nuance and detail in a short newspaper piece. But I tend to think a better job could, nevertheless, have been done, whatever the word limits on the Guardian might have been. I want to explain why in the remainder of this post. I’ll start by outlining what I take to be Harari’s main thesis. I’ll then analyse and evaluate what I see as the two main arguments in his piece, highlighting flaws in both. I’ll conclude by explaining what I think Harari gets right.


1. Harari’s Thesis: Video Games Might be Good Enough
Interestingly, Harari begins his discussion in much the same place that I began mine in my paper ‘Technological Unemployment and the Search for Meaning’: by pondering the role of immersive video games in the lives of those rendered obsolete by automation:

People must engage in purposeful activities, or they go crazy. So what will the useless class do all day? One answer might be computer games. Economically redundant people might spend increasing amounts of time within 3D virtual reality worlds, which would provide them with far more excitement and emotional engagement than the “real world” outside. 
(Harari 2017)


This isn’t purely idle, armchair speculation. Research by the economist Erik Hurst (and his colleagues) already suggests that young men in the US (specifically non-college educated men in their 20s) are opting for leisure activities, such as video games, over low paid and precarious forms of work. If the preference profiles of these young men carry over to others, then the automated future could be one in which the economically displaced live out their lives in virtual fantasies.

Is this a good or bad thing? Will it allow for human flourishing and meaning? Many will be inclined to say ‘no’. They will argue that spending your time in an immersive virtual reality world is deeply inhuman, perhaps even tragic. Harari’s central thesis is that it is neither. If we understand the lessons of human history, and if we pay close attention to our different cultural practices and beliefs, we see that playing virtual reality games has always been at the core of human flourishing and meaning.

Harari’s Thesis: A future where those rendered economically useless spend their time playing virtual reality games is neither bizarre nor tragic; virtual reality games have always been central to human flourishing and meaning.

This is provocative stuff. It seems so counterintuitive and yet he might be on to something. We’ve all had the sense that there is something slightly unreal and fantastical about the trivial tasks that make up our daily lives. But can we put this on a firmer, intellectual footing? Perhaps. The way I read him, Harari offers two main arguments in support of his thesis. Let’s look at them both now.


2. The Big Argument: It’s All Virtual
Isaiah Berlin famously divided the intellectual world into two camps: the foxes and the hedgehogs. The foxes knew many little things and used them all, in various ways, to chisel away at the world of ideas, not giving much thought to how it all fit together in the process. The hedgehogs knew one big thing — they had one big idea or theory — through which everything was filtered and regurgitated. They had a hammer and everything was a nail.

Harari is definitely a hedgehog. His scope may be vast, but he has one big idea that he uses to elucidate the tapestry of human history. The idea is almost Kantian in nature. It is that the reality in which we live (i.e. the one that we really experience and engage with) is largely virtual in nature. That is to say: we don’t experience the world as it is in itself (in the ‘noumenal’ sense, to use Kant’s words), but rather through a set of virtual/augmented reality lenses that are generated by our intellects. Harari explains the idea by reference to his own experiences of Pokemon Go and the similarity between it and the perceived religious conflicts in the city of Jerusalem:

It struck me how similar the situation [playing Pokemon Go with his nephew] was to the conflict between Jews and Muslims in the holy city of Jerusalem. When you look at the objective reality of Jerusalem, all you see are stones and buildings. There is no holiness anywhere. But when you look through the medium of smartbooks (such as the Bible and the Qu’ran) you see holy places and angels everywhere. 
(Harari 2017)

Later he supports this observation by appealing to his big idea:

In the end, the real action always takes place inside the human brain…In all cases, the meaning we ascribe to what we see is generated by our own minds. 
(Harari 2017)

Which leads me to formulate something I’m going to call ‘Harari’s General Principle’:

Harari’s General Principle: Much of the reality we experience (particularly the value and meaning we ascribe to it) is virtual in nature.

This general principle provides existential reassurance when it comes to contemplating a future spent living inside a virtual reality game. The idea is that there is nothing bizarre or tragic about this possibility because we already live inside a big virtual reality game and we seem to derive great meaning from it irrespective of its virtuality. That’s his main argument. It seems to work like this (the formulation is mine, not Harari’s):


  • (1) If it turns out that we already derive great meaning and value from virtual reality games, then a future in which we live out our lives in virtual reality games will also provide great meaning and value.

  • (2) It turns out that we already derive great meaning and value from virtual reality games.

  • (3) Therefore, a future in which we live out our lives in virtual reality games will provide great meaning and value.


Premise (1) is practically tautologous. It’s hard to see how one could object to it. There is, however, one important, perhaps pedantic, objection that could be raised: there may be differences in the quality of the experience provided by different virtual reality games. So the mere fact that we derive great meaning and value from the current crop of virtual reality games provides no guarantee that we will continue to derive meaning and value from a future crop. This is significant, but I won’t belabour the objection since I’m the one who formulated premise (1), and you could rectify the problem by arguing that the future crop of games will be broadly analogous to the current one, though that may turn out to be contentious.

Premise (2) is supported by Harari’s general principle, but he also uses some case studies to show how it works in practice. One is that religion is a big virtual reality game; the other is that consumerism is a virtual reality game.

Religion: “What is religion if not a big virtual reality game played by millions of people together. Religions such as Islam and Christianity invent imaginary laws, such as “don’t eat pork”, “repeat the same prayers a set number of times each day”, “don’t have sex with somebody from your own gender” and so forth…Muslims and Christians go through life trying to gain points in their favorite virtual reality game…If by the end of your life you gain enough points, then after you die you go to the next level of the game (aka heaven).” (Harari 2017)

Consumerism: “Consumerism too is a virtual reality game. You gain points by acquiring new cars, buying expensive brands and taking vacations abroad, and if you have more points than everybody else, you tell yourself you won the game.” (Harari 2017)

You can probably see why I used the word ‘flippant’ to describe Harari’s argumentation earlier on, but let me give him his due. To someone like me — a religious sceptic and an agnostic capitalist — there is something quite attractive in what he is saying. I think religion really is a bit of a virtual reality game: that all the rules and regulations are fake and illusory. But I am attracted to this line of reasoning only because it undermines the very point that Harari is trying to make. His view of religion and consumerism is deflationist in nature. To say that both practices are virtual reality games is to denude them of value; to rob them of their meaning and significance. It’s like ripping the mask off the ghost at the end of Scooby Doo.

And this is the critical point. Harari’s big argument doesn’t work because it isn’t true to the lived experiences of devoted religious believers and avid consumerists. They don’t think that the reality in which they live is virtual. They think the rules and regulations are real — handed down to them by God — and that the angels and demons they believe to exist are part of some deeper reality. They also probably don’t experience their daily practice in the gamified sense that Harari ascribes to them. It’s not about gaining points or levelling up; it’s about being true to the commitments and requirements of authentic religious practice. His perspective is that of the outsider — someone who has seen through the sham — not that of the insider.

This means that it is very difficult to draw any solace from Harari’s general principle, or from the two case studies he uses to support his argument. The cultural practices and beliefs from which we currently derive great meaning and value are not normally understood by us to be either virtual or gamelike in nature (perhaps a few people do understand them in that way), and we may not continue to derive meaning and value from them if we perceive them in this way. This matters. Presumably, in the virtual reality future, we will know that the reality we experience is virtual, and that the activities we engage in are part of one big game. To assume that we can still derive meaning and value from our activities when we have this knowledge requires a different, narrower argument.

Fortunately, Harari might have one.


3. The Narrower Argument: The Value of Deep Play
To this point, we have been trading on an ambiguity about the meaning of ‘virtual reality game’. Harari never defines it in his article, but we can get a sense of how he understands the term by reading between the lines. He seems to view religion and consumerism as ‘games’ because they involve goal-seeking and competitiveness (getting to heaven; acquiring more stuff than your peers), and as ‘virtual’ because the rules by which people play these games involve constructs (beliefs, laws, etc.) that are not ‘out there’ but are generated by the brain.

I do not think this is a good way to understand the concept of a virtual reality game. It doesn’t really track the ordinary usage of the relevant terms. As per the argument just outlined, religious believers don’t think of their belief as ‘virtual’ or their practice as a ‘game’. There also seem to me to be decent reasons to reject the notion that goal-seeking and competitiveness are necessary properties of games, since some of the goals that we pursue competitively (say, knowledge or truth) might be objectively valuable. And we should reject the notion that the reality we experience is virtual simply because it relies on internally-generated constructs, if for no other reason than that accepting it leads to the absurdity Harari seems to want to lead us to: that everything is virtual.

My preferred understanding of the concept ‘virtual reality game’ essentially collapses both ‘game’ and ‘virtual’ into the same thing. Following the work of the philosopher Bernard Suits, I would define a game as the ‘voluntary attempt to overcome unnecessary obstacles’. (Suits actually has a longer definition that I discuss here.) In other words, it involves picking arbitrary or value-neutral goals and imposing a set of constraints on the pursuit of those goals that are not required or dictated by reality (the ‘rules’). Thus the world constructed by the game is ‘virtual’ in nature. It floats free from objectively valuable ends and layers additional rules on top of those provided by objective reality. An example would be 100m freestyle swimming. There, the arbitrary goal is traversing 100m in water in the fastest time. The constraints are that you must do this using a particular stroke, wearing a particular costume, and without the aid of propellant technologies (such as flippers or underwater motors). These rules effectively construct a ‘virtual world’ within the swimming pool.

Admittedly this is still a pretty broad definition. If you are really cynical and nihilistic then it could well turn out that everything is a game. But if you retain any objectivist bent — i.e. still maintain that there is a reality beyond your head and that there are objective values — then it does narrow the concept of the game quite a bit. This is useful for the debate about the postwork future. As I see it, the future in which we all play virtual reality games would involve playing games in the Suitsian sense. The critical question, then, is this: if we know that we are playing Suitsian games, can we still live lives of meaning and value?

Although he doesn’t use any of this conceptual apparatus, Harari does offer an argument that answers that question in the affirmative. This is his narrower argument. The argument still follows the logic of the argument I laid out in the previous section (i.e. premises (1) - (3) are still the relevant ones), but uses a narrower understanding of what a virtual reality game is to motivate its central claims. Once again, Harari uses a case study to support his point: the Balinese Cockfight. The example comes from the work of Clifford Geertz:

Balinese Cockfight: “In his groundbreaking essay, Deep Play: Notes on the Balinese Cockfight (1973), the anthropologist Clifford Geertz describes how on the island of Bali, people spent much time and money betting on cockfights. The betting and fights involved elaborate rituals, and the outcomes had a substantial impact on the social, economic and political standing of both players and spectators. The cockfights were so important to the Balinese that when the Indonesian government declared the practice illegal, people ignored the law and risked arrest and hefty fines.” (Harari 2017).

The cockfight is clearly a game (a cruel and inhumane one, to be sure) and presumably is understood as such by the Balinese people (it’s unlike religious practice and belief in this sense). Furthermore, it is just one example of a far more general phenomenon. Soccer, American football, tennis, rugby, and golf are all games from which many people derive great meaning and value. Indeed, they become so important to people that the games — artificial and virtual though they may be — become a new and important part of people’s lives. When this happens, the distinction between what was once virtual and what is real starts to break down:

For the Balinese, cockfights were “deep play” - a made up game that is invested with so much meaning that it becomes reality. 
(Harari 2017)

There is certainly something to this. For many people, games (that are clearly understood to be games) are central to their existence. They live for their sports and hobbies and leisure pursuits. They talk about them constantly with their peers. They dedicate themselves to understanding the intricacies of these games. Playing and conversing about them are their major social activities. It is how they achieve a sense of community and belonging, perhaps even a degree of social status. Does this, then, provide a proof of principle for the future? If we can find so much meaning and value in these forms of ‘deep play’, can we expect to find much meaning and value in a future of virtual reality games?

Perhaps. I definitely think that focusing on these examples of deep play is more persuasive than trying to argue that pretty much everything we do is a virtual reality game. But I don’t know if these examples of deep play are going to be sufficient. I suspect that every historical instance of deep play takes place in a world in which the games in question are merely a part of life, not the totality of life. In other words, although people derive significant meaning and value from those games, the games are only part of what they do. They still have jobs and families and other projects that seem (to them) to have some connection to the objective world. What will happen when they shift from a world in which the games are merely part of life to a world in which games are the majority (perhaps even the totality) of life?

I think it is hard to say.


4. Conclusion
I have suggested that Harari presents two arguments for thinking that a future in which we play virtual reality games would provide us with great meaning and value. I have argued that his second argument is more persuasive than the first. To argue that pretty much everything we do is a virtual reality game does violence to the lived experiences of those who derive meaning and value from what we currently do. On the other hand, to argue that we currently derive great meaning and value from pursuits that are clearly game-like in nature is undoubtedly correct. The problem is that, at the moment, these games make up part of our reality, not its totality.

In conclusion, let me highlight something that I think Harari’s article gets right and that is worthy of serious reflection. Harari’s article reveals how troubled the distinction between the ‘virtual reality’ and ‘real reality’ really is. Some things that seem real to us may, already, be largely virtual; and some things that are clearly virtual have the tendency to become so important to us that they might as well be real. Even my attempt to clarify the distinction by appealing to Suits’s definition of game doesn’t eliminate all the problems. Within a Suitsian game, there are definitely things that happen that are ‘real’. The emotional responses one has to the game are real; the skills and knowledge that one develops are real; the social interactions and friendships are real; the virtues one acquires are real; and so on.

When it comes to discussions about meaning and value in a world without work, we need to consider whether it is worth continuing with the virtual/real distinction, or whether an alternative conceptual vocabulary is needed.




Wednesday, June 7, 2017

Episode #24 - Bryson on Why Robots Should Be Slaves


In this episode I interview Joanna Bryson. Joanna is Reader in Computer Science at the University of Bath. Her primary research interest lies in using AI to understand natural intelligence, but she is also interested in the ethics of AI and robotics, the social uses of robots, and the political and legal implications of advances in robotics. In the latter field, she is probably best known for her article ‘Robots Should be Slaves’, published in 2010. We talk about the ideas and arguments contained in that paper, as well as some related issues in roboethics.

You can download the episode here or listen below. You can also subscribe on Stitcher or iTunes (or RSS).



Show Notes

  • 0:00 - Introduction
  • 1:10 - Robots and Moral Subjects
  • 5:15 - The Possibility of Robot Moral Subjects
  • 10:30 - Is it bad to be emotionally attached to a robot?
  • 15:22 - Robots and legal/moral responsibility
  • 19:57 - The standards for human robot commanders
  • 22:22 - Are there some contexts in which we might want to create a person-like robot?
  • 26:10 - Can we stop people from creating person-like robots?
  • 28:00 - The principles that ought to guide robot design
 

Relevant Links




Tuesday, June 6, 2017

Why we should create artificial offspring: meaning and the collective afterlife


The iCub Robot - Image courtesy of Jiuguang Wang


That's the title of a new article I have coming out. It argues that the creation of artificial offspring could add meaning to our lives and that it might consequently be worth committing to the project of doing so. It's going to be published in the journal Science and Engineering Ethics. The official version will be out in a few weeks. In the meantime, you can read the abstract below and download a pre-publication version at the links provided.


Journal: Science and Engineering Ethics

Links: Philpapers; Academia.edu

Abstract: This article argues that the creation of artificial offspring could make our lives more meaningful (i.e. satisfy more meaning-relevant conditions of value). By ‘artificial offspring’ is meant beings that we construct, with a mix of human and non-human-like qualities. Robotic artificial intelligences are paradigmatic examples of the form. There are two reasons for thinking that the creation of such beings could make our lives more meaningful. The first is that the existence of a collective afterlife — i.e. a set of human-like lives that continue in this universe after we die — is likely to be an important source and sustainer of meaning in our present lives (Scheffler 2013). The second is that the creation of artificial offspring provides a plausible and potentially better pathway to a collective afterlife than the traditional biological pathway (i.e. there are reasons to favour this pathway and there are no good defeaters to trying it out). Both of these arguments are defended from a variety of objections and misunderstandings.



Wednesday, May 24, 2017

Advice on Publishing Peer Review Articles




I was recently asked to give a short, ten-minute presentation on writing and publishing peer review articles. The presentation was aimed at PhD students. In preparing for the talk, I realised how difficult it is to distill my thoughts on the process into just ten minutes. I have a love-hate relationship with publishing for peer review. It is essential to my life as an academic, but I sometimes feel trapped by the publication ‘game’, and I often feel that the benefits are minimal and ephemeral. I could probably talk for several hours about these feelings without getting to any practical advice.

Anyway, since I didn’t have several hours, I decided I would focus my talk on eight key ‘tips’, divided broadly into three main categories (perspective, process, and promotion). None of these tips deals with how to actually write an article (I have dealt with that topic on a previous occasion). Instead, they focus on the attitude to take toward the process and how to respond to reviewers’ comments. I thought it might be worth sharing them here.


A. Perspective
It is important to approach the peer review process with the right attitude. I have three tips for cultivating the right attitude:

(1) Don’t lose sight of ‘why’: This is the most important thing. As a budding academic, it is very easy to get trapped in the ‘game’ of publication. As you begin to succeed in publishing you become acutely aware of your total number of publications. Very few academics can keep track of the substance of what their colleagues write, but they can all keep track of the number of pieces they publish. And so your number becomes the currency of self-worth. Try to avoid thinking in this way. If you become obsessed with your number, you will never be happy. I speak from experience. I once set myself the target of publishing 20 peer reviewed articles, thinking that if I reached that target I would have ‘arrived’ as an academic. But once I reached the 20-article target, I realised that the 30-article target wasn’t too far away. I needed to knuckle down and reach that too. I soon realised how silly I was being. I had lost sight of why I was publishing in the first place. Publishing is not an end in itself. There are reasons for doing it. The most important of those reasons — and the ones that sustain you in the long run — are the intrinsic joys/pleasures you experience in researching, thinking and writing about a topic that interests you. The other reasons are more instrumental in nature. They are quite important too, but for more practical reasons. After all, publication is a gateway to achieving academic impact, social impact, public engagement and career advancement.

(2) Prepare for failure: The average article is rejected. You are unlikely to be above average. It’s possible that you are, but don’t bet on it. The important thing is that you learn to expect failure and frame it in a positive way. Following Paul Silvia, I would suggest that you have the goal of becoming ‘the most rejected author in your department/peer group’. If you are being rejected, at least you haven’t given up. Giving up is worse than being rejected. (I gave this advice previously. On that occasion I suggested that it was the most important thing to bear in mind when publishing. I no longer think that is true. I think remembering why you are publishing is the most important thing. This might reflect a degree of maturity on my part and an increasing sense of detachment from the need to publish.)

(3) Don’t fetishise failure: Don’t assume that you can learn too much from your failures. Sometimes you can, but most of the time you can’t. Academic failure is overdetermined. What I mean by this is that there are probably many factors that prevented your article from being accepted for publication, no one of which was necessarily fatal or would be fatal if you were to resubmit the article elsewhere. Editors and reviewers are looking for reasons to reject your paper. Their default is ‘reject’. They have to set this default to maintain the prestige of their journal [thanks to Ashley Piggins for emphasising this point to me]. The reasons for rejection provided by reviewers often do not overlap. If you addressed every objection they raised before sending your article on to another journal, you would probably end up with an incoherent article. If you are rejected by a journal, look over the reviewer reports (if any), see if there are any consistent criticisms or comments that strike you as being particularly astute, revise the article in light of those comments, and then send it off to another journal. If there are no such comments, just send it off to another journal without substantive revisions. Persistence is the name of the game. I am now willing to resubmit the same piece to several journals (sometimes as many as 4 or 5) before giving up on it.

B. Process
You must deal with the process of submitting to journals and responding to reviewers’ comments in the right way. The most important thing here, of course, is to submit a high-quality piece, i.e. something that is well-written, full of persuasive arguments, and makes an original contribution to the literature. I don’t think there is a perfect formula for doing that. But there are a few other things to keep in mind:

(4) Have at least 3-4 target journals: This really follows from my previous bit of advice (“Don’t fetishise failure”). I always start writing articles by having at least 3-4 target journals in mind. I don’t think you should be too wedded to one target journal. You should aim for something of reasonably high quality, but don’t predicate your well-being on having your article accepted by the top journal in your field. That’s something that will come with time and persistence. I also don’t think it is worth revising your article for your target journal’s ‘house style’. I have never had an article desk-rejected because I failed to format it in house style. As long as the article is a good fit for your target journal and you have written and referenced it well, it stands a chance. You can worry about house style after you have been accepted.

(5) Be meticulous in responding to reviewers’ comments: If you are lucky enough to be asked for revisions, be sure to take the process seriously. You should always prepare a separate ‘response to reviewers’ document as well as a revised manuscript. In this document, you should respond to everything the reviewer has highlighted and pinpoint exactly where in the revised draft you have addressed what they have said. Speaking as someone who has reviewed many manuscripts, I feel pretty confident in saying that reviewers are lazy. They don’t want to have to read your article again. They only want to read the parts that are relevant to the comments they made and check to see whether you have taken them seriously. This is all I ever do when I read a revised manuscript.

(6) Be courteous in responding to reviewers’ comments: Remember that reviewers have egos; they want to be flattered. They will have taken time out of their busy schedules to read your article. They will have raised what they take to be important criticisms or concerns about your article. You should always thank them for their ‘thoughtful’, ‘insightful’, and ‘penetrating’ comments. This is one area of life where you cannot be too obsequious.

(7) Pick your battles: Sometimes reviewers will say things with which you fundamentally disagree. You don’t have to bow down and accept everything they say. You should stand your ground when you think it is appropriate to do so. But when doing this, be sure to acknowledge that the reviewer is raising a reasonable point (and always consider the possibility that the fault lies in how you originally worded or phrased what you wrote) and be sure to make concessions to them in other ways. To give a somewhat trivial example, I feel pretty strongly that academic articles shouldn’t be dry and devoid of ‘colour’. One of the ways in which I try to provide colour is by using well-known cultural or fictional stories to illustrate the key points I am making. This is one of the principles on which I stand firm. I once had a reviewer who wanted me to take a cultural reference out of an article because it was unnecessary to the point I was making. I stood my ground in my response, explaining at some length why I felt the example was stylistically valuable, even if logically unnecessary, and further discussing the importance of lively academic style. At the same time, I accepted pretty much everything else the reviewer had to say. Fortunately, they were gracious in their response, saying that they enjoyed my ‘spirited’ defence of the example, and accepting the article for publication. (It was this article, in case you were wondering).


C. Promotion
If you get an article accepted for publication, you should celebrate the success (particularly if it is your first acceptance), but you should also:

(8) Remember that it doesn’t end with publication: If you care about your research and writing, you won’t want it to languish unread in a pay-walled academic journal. You will want to promote it and share it with others. There are a variety of ways to do this, and discussing them all would probably warrant an entire thesis in and of itself. I personally use a combination of strategies: sharing open access penultimate versions of the text on various academic repositories; blogging; social media; podcasting; and interviews with journalists. I have never issued a ‘press release’ for anything I have written. I find I get enough attention from journalists anyway, but I think there probably is value in doing so and I may experiment with this in the future.


Bonus: Can you fast-track publications?
It takes a long time to write and publish for peer review. It is easy to get disheartened if you experience a lot of rejection. I am not sure that there is any way to truly ‘fast-track’ the process, but if you are hungry for an acceptance, I would suggest two strategies:

Write a response piece: i.e. write an article for a particular journal that responds, in detail, to another article that recently appeared in the same journal. This was how I got my first couple of acceptances and I think it can be very effective. In reality, of course, every academic article is a ‘response’ piece (they all respond to some aspect of the literature); it’s just that most are not explicitly labeled as such. What I am calling a ‘response piece’ is an article that is noteworthy for its academic narrowness (it only responds to one particular article) and journal specificity (it is really only appropriate for one journal). Both of those features limit its overall value. It is likely to have a more limited audience and is unlikely to achieve long-term impact. But it can provide invaluable experience of the peer review process.

Collaborate: In some disciplines collaboration is common; in others it is rare. I come from one of the latter disciplines. Nearly everything I have published has been solo-authored, but I have recently started to collaborate with others and I am beginning to appreciate its virtues. I think collaboration can work to accelerate the writing and publishing process, provided you collaborate with the right people. Some people are really frustrating to collaborate with (I’m pretty sure I am one of those people); some people are a delight. Obviously, you should pick a collaborator who shares some relevant research interest with you. On top of that, I recommend finding someone who is more productive and more ambitious than you are: they are likely to write fast and will push you outside your comfort zone. Furthermore, collaborating with them is far more likely to elicit engagement than simply asking them to provide feedback on something you have written. That said, I don’t think you should aim too high with your potential collaborators, at least when you are starting out. Pick people you know and who are broadly within your peer group. Don’t aim for the most renowned professor in your field, unless they happen to be your supervisor or a close friend. Again, you can build up to that.

Okay, so those are all my tips. To reiterate what I said at the outset, these tips only address part of the process. They don’t engage with the substance of your article and that really is the most important thing. Still, I hope some of you find them useful. The handout below summarises everything discussed above.







Monday, May 22, 2017

Episode #23 - Liu on Responsibility and Discrimination in Autonomous Weapons and Self-Driving Cars
In this episode I talk to Hin-Yan Liu. Hin-Yan is an Associate Professor of Law at the University of Copenhagen. His research interests lie at the frontiers of emerging technology governance, and in the law and policy of existential risks. His core agenda focuses upon the myriad challenges posed by artificial intelligence (AI) and robotics regulation. We talk about responsibility gaps in the deployment of autonomous weapons and crash optimisation algorithms for self-driving cars.

You can download the episode here or listen below. You can also subscribe on Stitcher and iTunes (the RSS feed is here).

Show Notes

  • 0:00 - Introduction
  • 1:03 - What is an autonomous weapon?
  • 4:14 - The responsibility gap in the autonomous weapons debate
  • 7:20 - The circumstantial responsibility gap
  • 13:44 - The conceptual responsibility gap
  • 21:00 - A tracing solution to the conceptual problem?
  • 27:47 - Should we use strict liability standards to plug the gap(s)?
  • 29:48 - What can we learn from the child soldiers debate
  • 33:02 - Crash optimisation algorithms for self-driving cars
  • 36:15 - Could self-driving cars give rise to structural discrimination?
  • 46:10 - Why it may not be easy to solve the structural discrimination problem
  • 49:35 - The Immunity Device Thought Experiment
  • 54:12 - Distinctions between the immunity device and other forms of insurance
  • 59:30 - What's missing from the self-driving car debate?

Links