There is a wide debate about the role of politics in adjudication, and I am going to wade in and say that politics has no place there. No, really. This position brings a number of difficult problems. Firstly, what about those times where judgement seems necessarily to fall along political lines? Secondly, and more pointedly, how do I reconcile this position, which in the US is espoused mostly by conservatives, with being firmly a liberal myself?
Well, let's look at the American model. My favourite American legal / political blogger, Publius, has said that judicial appointments are and should be about the politics of the potential appointees:
"To be clear, I’m not denouncing this politicization at all. Just the opposite, in fact...Nominations should be about politics because judging is inherently and necessarily political. You simply can’t avoid it, so we should stop trying to."
With respect to the writer, I think that he is completely wrong here. However, I can see why he has come to this conclusion. In the US, Constitutional law has become highly politicised. When interpreting things like freedom of speech, freedom of religion, sexual issues and issues of race and affirmative action, the Supreme Court increasingly splits along political lines. It has got to the point where the Court is thought of as a political arena, something analogous to our (legislative) House of Lords: it has a crucial legislative role, but democracy only filters through to it very slowly, via appointments from the ruling legislature / executive which last long after the appointing administration has gone from office. The Bush administration has long used this to control its voters even when it has done nothing for many of them - "They'll appoint judges who will one day overturn Roe!" is practically a desperate conservative slogan (Roe v Wade being the landmark Supreme Court decision requiring that abortions be permitted up to a certain point in pregnancy). Control of the Supreme Court, and to a lesser extent of the lower courts, has become a deliberately partisan issue.
The British model is very, very different. Here, accusations of political bias among the judiciary are rare and taken seriously, and judges who attract them are looked down upon. The vast majority of adjudication is done on an intellectual level, leaving policy judgements to the legislature. To this end a number of interpretive doctrines have grown up which are generally applied well, with deviations harshly criticised. Legislative intent is crucial with statute, and for both statute and precedent, looking at the harm meant to be avoided is essential. Where there is no real answer and the court really does have to create new law, the general way forward is not to upset the understood balance, leaving it to Parliament to assess whether a change would be a good idea. Again, judges are harshly criticised when they do otherwise. The idea of politicising appointments would be completely abhorrent here.
I think the British way is just far, far superior. It will lead to judicial conservatism, but I cannot help feeling that that is healthy, given the judiciary's severe lack of democratic control and accountability, which, while necessary for its impartiality, leaves it in a bad position to determine policy. It should be left to the most democratic institutions as much as possible to develop policy. The judiciary should act within the reasonably determinable scope of what the legislature would have done or would do, and where this is controversial it should be left to the legislature to determine. I rather like the idea of creating a method whereby truly political questions can be submitted to the legislature to answer - not to determine the case, only the question of law - but that is an idea I will have to develop another time.
Now I can't help feeling that in an ideal world my ideological cousins in the US would agree with me. Liberals have traditionally been strong proponents of the separation of powers. So what has happened? Why is it that US conservatives are now the ones who talk about keeping politics out of judging?
I think that the primary differences which vastly change the situation over there are the Constitution, specifically its amendment procedure, and the federal nature of the US. The amendment procedure is incredibly onerous, requiring proposal by two-thirds of both Houses of Congress and ratification by three-quarters of the state legislatures. The result is stagnation, which suits conservatives: even where the public as a whole is ready for change in Constitutional provisions, it is almost impossible to get it done. The Supreme Court is able to remedy some of this stagnation by acting in the name of good policy and the public will.
The other intertwining feature, the federal system, is perhaps even more problematic. The Constitution gives limited powers to the federal government and reserves many powers for the states. Meanwhile, citizens identify strongly with the US as a whole, and so are immensely keen to mould other parts of it according to their own preferences in a way that they would not when it came to other countries. To give a British analogy, if Yorkshire decided to allow wife beating within its borders, the rest of the country would be outraged in a way it would not be about Middle Eastern countries. Here, we can do something about it through the Westminster Parliament. In the US, on some issues there is no clear legislative route, and given the difficulty of getting an Amendment through, the only practical way is to get the Supreme Court involved. So the Supreme Court followed the general mood of the country in developing a right to privacy to act against certain reactionary states which do things like ban homosexuality (Texas and others, until 2003) or sex toys (Alabama and others, still), or of course abortion (which led to Roe v Wade) - something which rankles conservatives no end. Where there was no democratic way for the country as a whole to determine these issues, the Supreme Court was made to do it instead.
Essentially, what these two elements mean is that in the US, the Supreme Court has been used to overcome deficiencies in democratic control and accountability. Still, this is a perversion of the function of the courts, both because the judiciary itself lacks those very qualities and because it makes objective application of the law impossible. Not only does politicisation mean that judicial interpretation will lag far behind the ideologies favoured by the electorate, but legal process becomes more and more a cynical game of legal realism, with judgements determined not according to what makes sense or follows precedent, but according to who is doing the judging. Legal certainty and the separation of powers take terrible hits.
In Britain we don't have anything like these problems. Admittedly, sometimes judges cannot resist applying their political beliefs to cases, but they are roundly criticised for it. And any question can, as a fallback, be left to Parliament without fear that it will be unable to deal with it. This suggests that any limits we place on Parliamentary action need to be subject to change at the behest of the electorate (a subject I will consider at another time). Furthermore, we have nothing like the level of federalisation of the US, and this analysis suggests that we should be careful how much we allow to be devolved - when something outrages the moral sentiment of the rest of the country, the people need to be able to rectify it democratically.
Publius suggests that people start baldly supporting or opposing judicial candidates according to their politics, just as in elections. I suggest instead that the US would benefit far more from centralising more power over 'moral' issues and making Amendments much easier. I recognise that neither is likely to happen any time soon. But only then will the Supreme Court be able to get back to its proper task of judging, and candidates can be judged harshly if they look like they are going to bring political bias - any political bias - to their adjudications.
Sunday, August 19, 2007
Friday, June 29, 2007
Tort's Wrongs, Conclusion: A New Frontier
In the preceding posts I have given three examples of the tension in tort law between its compensatory role and its penal role. The latter is habitually denied by judges but is impossible to ignore without the appearance of great injustice, given that liability is generally attached according to some form of moral culpability.
The result is that tort law as a whole survives without a clear rationale. If it were meant to compensate claimants, then why require strict standards of causation to link the loss to the action of another? Why grant damages based on what is lost, regardless of money's ability to actually help? Why only penalise employers where a tort is in the course of employment and the tortfeasor is an employee rather than a contractor? On the other hand, if it were meant to penalise wrongdoing, then why only extract damages where actual harm has come about and a causal link has been established? Why determine damages according to loss rather than degree of fault? Why punish employers who could not have prevented employee torts?
Small changes could be made in the interests of more intuitive justice. In the causation area, much progress could be made by assessing all damage in terms of the chance of causation and discounting it appropriately, rather than the current all-or-nothing approach. This would mean that if medical negligence is 30% likely to have been the cause of harm, the hospital should be liable for 30% of total damages. The same goes for those responsible for asbestos or coal dust poisoning: damages should be apportioned according to the contribution of each individual, and need not add up to 100%, given potential non-tortious factors. In the vicarious liability context, a rationale (and possibly a test) based on employer conduct in putting someone in the position to do wrong would be far preferable to the current mess.
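To make the arithmetic concrete, here is a minimal sketch in Python of how that chance-of-causation discounting might work. The names and figures are invented for illustration, and chances are given as whole-number percentages purely to keep the sums exact; nothing here reflects how damages are actually calculated in practice.

def proportional_damages(total_loss, causation_chances):
    """Apportion damages by each defendant's chance of having caused the harm.

    causation_chances maps each defendant to the estimated probability that
    their negligence caused the loss, given as a whole-number percentage.
    The shares need not sum to 100: any remainder is treated as non-tortious
    and goes uncompensated (or, on the proposal discussed later, falls to a
    state fund instead).
    """
    return {defendant: total_loss * percent // 100
            for defendant, percent in causation_chances.items()}

# Single defendant, Hotson-style: a 30% chance the negligence caused the harm.
print(proportional_damages(100_000, {"hospital": 30}))
# {'hospital': 30000}

# Several employers exposing a worker to asbestos; the shares sum to 90%,
# leaving 10% attributed to non-tortious background exposure.
print(proportional_damages(100_000, {"employer_a": 40,
                                     "employer_b": 30,
                                     "employer_c": 20}))
# {'employer_a': 40000, 'employer_b': 30000, 'employer_c': 20000}

The point of the sketch is only that once each share is expressed as a chance of causation, nothing forces the shares to sum to the whole loss.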
However, such reforms cannot address the problem at the heart of the matter. As I hope to have demonstrated, the only way to rationalise the mess of tort law is to separate out its punitive and compensatory functions, and I propose to do this along the lines Atiyah originally suggested (before he fell in love with private insurance). This would leave tort law as a purely punitive system alongside criminal law, with actions brought by the state. Its rationale would be to disincentivise wrongdoing, extracting money according to the degree of fault. Ideally, this would not rely on someone actually having been harmed - unjustifiably creating a risk of harm would be enough - although actual harm would usually be the best evidence that a tort had been committed. Meanwhile, compensation would be dealt with through the same mechanism as compensation for those hurt in pure accidents, because once punishment is dealt with separately, there appears to be no moral distinction explaining why the mechanisms should differ. Thus I say that social security payments should compensate all injuries, and the money available for this cause should be supplemented with all of that extracted from tortfeasors through the new system.
Once one realises that there is no real justification for the discrepancies between victims of tortfeasors who can afford to pay, victims of tortfeasors who cannot afford to pay (or who cannot themselves afford to go to court), and victims of non-tortious accidents, this appears to be only common sense. However, one big objection is that the social security system is generally inadequate for helping people who have received injuries; tort at least sometimes yields full compensation, and so should be maintained. The idea here is that even between state input and tort awards, there is not enough money for everyone.
The first response to this is to suggest that the state should be willing to put more money into social security to ensure adequate support for those in need. However, given the difficulties with this evidenced by systems across the world, more than that is necessary. The suggestion has been that it is better that some injuries are fully compensated and others barely, rather than all being somewhat compensated. There may be something to this. Clearly, when we consider catastrophic injuries, only full compensation may allow someone to get their life back on track, so under-compensating absolutely everyone would give the worst of all worlds.
However, the current tort / social security distinction apportions funds to the injured based on what amount to random and irrelevant circumstances. We may as well pick from a hat those lucky few who are to be fully compensated. Instead, under my proposed system, the state should prioritise the catastrophically injured over those with minor injuries, making sure that the worst off get full compensation, so that overall the minimum possible disruption to lives is caused by injuries. This may appear unfair to those with relatively minor injuries. But it should be kept in mind that our starting premise was that someone will have to lose out due to lack of funds. This prioritisation approach seems better than inadequately compensating absolutely everyone, and also better than randomly fully compensating some but not others.
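For concreteness, here is a minimal sketch in Python of the prioritisation rule just described, assuming (purely for illustration) a fixed fund and a single assessed figure for each claimant's full compensation.

def prioritise(fund, claims):
    """Allocate a limited compensation fund, most serious claims first.

    claims is a list of (claimant, assessed_need) pairs, where assessed_need
    is the sum judged necessary for full compensation and stands in here as
    a crude proxy for severity. Claims are paid in full in descending order
    of need until the fund runs out; the last claim reached may be paid only
    in part, and anything beyond it gets nothing.
    """
    awards = {}
    for claimant, need in sorted(claims, key=lambda c: c[1], reverse=True):
        paid = min(need, fund)
        awards[claimant] = paid
        fund -= paid
    return awards

# Invented figures: a fund of 1,000,000 against 1,300,000 of assessed need.
claims = [("catastrophic injury", 800_000),
          ("serious injury", 300_000),
          ("minor injury", 150_000),
          ("trivial injury", 50_000)]
print(prioritise(1_000_000, claims))
# {'catastrophic injury': 800000, 'serious injury': 200000,
#  'minor injury': 0, 'trivial injury': 0}

The contrast with the current system is that who gets paid turns on assessed need rather than on whether a solvent tortfeasor happens to be available.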
What are the benefits of my proposed system (which is, in truth, not my system at all, but is based on the system in places like New Zealand)? Firstly, it properly penalises wrongdoers according to their wrong and so accords with intuitive justice. Secondly, it avoids the incredible arbitrariness of the current system by which people receive money for injuries, replacing it with prioritisation of those most in need. It avoids the problems of a supposedly litigious culture, and also the difficulties for ordinary people attempting to go to court (the financial risks, loss of time, and personal effort), by giving that responsibility to the state. Indeed, the financial point should be emphasised. Poor victims have much more trouble getting compensation than the rich because they often cannot afford the time, bare costs and decent lawyers needed to go to court effectively, creating a class divide in compensation. My system gives compensation according to need, not resources, while creating a responsible society through adequate punishment of wrongdoing.
Finally, I should point out some of the shortcomings of my system, purely in terms of its scope. The whole way through this I have dealt with personal injury, often considered the central case of tort (and Burrows argues that tort should be restricted to this). I do not see a problem with extension to damage to or loss of property, where priority compensation should go to those who lose more essential property, like houses. There is more of a problem when it comes to defamation, but extension is still potentially possible, with irreparable damage to reputation prioritised. The difficulties of adequately weighing these three areas against each other suggest to me that they should be compensated from separate pots, but that is subject to much further consideration. Finally, Burrows raises the spectre of nationalised damages for breach of contract. I see both sides of the argument here: I see contracts as involving promises between parties creating a special relationship not present in the tort context, but I also know that the ability to recover damages depends on the party with which one has contracted staying solvent. Perhaps we can say that the act of contracting contains within it an assumption of the risk of the other's insolvency in a way which tortious acts (in which the victim is usually involved involuntarily) do not. I am unsure on this point and would welcome comment. Nevertheless, these difficulties aside, I am quite convinced that it is time for tort law to undergo some major changes.
Sunday, April 22, 2007
Tort's Wrongs, Part 3: The Impecunious Defendant
The final example I shall use in this series is theoretically the simplest. It deals with the ability of the defendant to pay the compensation ordered. Simply put, even if a case is watertight and a favourable ruling guaranteed, it is just not worth anyone's time and trouble to sue a tortfeasor with little money. Such a claimant would take as much as the defendant had and then be forced to rely on the state welfare system along with all those who suffer non-tortious injuries - receiving a far lower degree of compensation.
Without some ingenious legal methods, this would massively lower the effectiveness of tort law as a corrective system - the majority of people cannot personally afford to pay the costs ordered in compensation. However, methods have been used with the effect of greatly increasing the number of cases where claimants are able to get full compensation. Of these, by far the most significant is vicarious liability of employers.
Vicarious liability is liability for someone else's acts. Clearly where an employer can be found vicariously liable for an employee's tort, the claimant is in luck - companies are far more likely to have the resources (and often liability insurance) to compensate them fully. But why are the employers liable? This is a controversial issue involving many theories, but these can be roughly split into two camps:
On the one hand, there are theories that the employer has in fact done something morally culpable, making it just to make them compensate. Perhaps the employer is culpably responsible for giving the employee the opportunity to commit the tort, or should pay the costs of an activity from which it takes profits. Such theories are intuitively attractive as satisfying the moral hurdle usually required for liability. However, they do not really mesh with the reality of the situation. Firstly, the employer cannot discharge any such duty, no matter how comprehensive its precautions: even if it takes all possible steps to prevent employee torts, it will still be liable if such torts occur. Moreover, once a tort is established on the part of the employee, none of the normal tort defences can be invoked by the employer. Secondly, the liability only applies in the case of employees, not independent contractors. The case law on the distinction is complicated, but suffice to say that the kinds of criteria by which the distinction is drawn do not really track any clear line in the moral responsibility of the employer. The result of these observations is that it is difficult to view vicarious liability as simply a form of primary liability.
On the other hand we have theories that moral culpability on the part of the employer is entirely irrelevant, and that what is important is ensuring compensation. The fact that employers are more likely to be able to pay is used as the justification for making them pay, either simply because they have deep pockets or because this is the best way to spread the loss to society as a whole through their customers, who will bear slightly higher prices. Textbook writers have suggested that courts lean more and more in this direction, but of course from the point of view of morality or justice this looks completely unprincipled. Those who are able to invoke vicarious liability can rely on this 'loss spreading' rationale while those hurt by individuals just being individuals are often short-changed by the tort system. Meanwhile companies which may be no more morally culpable than any others have to bear the brunt of paying compensation. Even if they can spread the loss, it will usually still be to their detriment.
The inability of employers to exclude liability through reasonable precautions makes sense on this rationale. So does the distinction between employees and contractors, in a way - individuals often hire contractors (not employees) to perform jobs around their houses, and courts are eager to avoid forcing them to pay for such contractors' torts. More honest (and less arbitrary) would be to draw the line according to whether the employer was acting as a business or as an individual, but this would expose the rationale to more direct scrutiny: It might well be thought unfair that a business is liable but an individual is not for the same conduct.
Once a tort and a contract of employment are established, the last limiting factor for vicarious liability is that the tort be done in the course of employment. This may sound like common sense, but without some fault element on the part of employers, it is baffling. If the employer need not have any moral culpability for the torts, why should it matter whether or not they were done on company time? If loss spreading is the aim, why not make all employers vicariously liable for torts committed by current employees, no matter what the connection to their work?
The traditional test for a course of employment is the 'Salmond test', that the tort be authorised by the employer or be an unauthorised way of doing an authorised task. The first of these is unproblematic - this would make the employer morally culpable. The second appears to be neat, but as mentioned above there is no way to exclude liability. Rose v Plenty determined that even if the employer specifically forbids the employee from acting in the manner in question (here, using the help of young children to complete a milk round), they will still be vicariously liable for it.
The test for the course of employment changed in Lister v Hesley Hall (and for the better, all things considered), but it only got wider. This was a case about a school boarding house whose warden abused the children. The House of Lords found that this could not be seen as a way of doing his task. However, vicarious liability was nonetheless established under the new test of the tort being 'closely connected with the employment'. This standard is vaguer and more difficult to apply, but it does better reflect our sense of justice. If the warden had failed to stop someone else from abusing the children, that would have been negligent performance of an authorised task (looking after them), so denying vicarious liability merely because it was he who did the abusing would appear absurd. After all, the distinction has nothing whatsoever to do with the moral position of the employer - we may well consider an employer more at fault for hiring a paedophile than a mere idiot.
Our instincts drag us back towards a view resting vicarious liability on some form of moral culpability. I would support this - I am certainly not arguing against all vicarious liability. Employers create situations which reckless and malicious employees can exploit in ways otherwise impossible - Hesley Hall is an example. However a loss spreading rationale seems to preclude such considerations, insisting instead that the claimant be compensated one way or another. I certainly have sympathy with that impulse, but its effect on tort law has been to create even more arbitrariness and separate it further from the penal function which intuitive justice requires. Just as in the previous examples I have put forward, actual fault should determine the liability of defendants, not the claimant's need. The latter should be satisfied, but this is true whether vicarious liability can be found, whether it is just the tort of the random (and often broke) man in the street, or whether no-one is tortiously at fault (in a random accident). In my final post on this topic, I will attempt to show how such a split of the penal and compensatory functions of tort law could be achieved, and its implications.
Saturday, April 07, 2007
Tort's Wrongs, Part 2: Damages and Loss
In the previous post I suggested that instinctive justice required a penal element to tort law, in the area of causation. Today I will argue the same thing in an area which actually has, in some ways, been modified in line with that - assessment of damages.
Assessment of tort damages was always going to be a difficult enterprise. For starters, it is inevitably difficult and somewhat artificial to put monetary figures on certain kinds of loss, like pain, cuts and the loss of limbs. However, the problems I will be looking at are more about the tension between different theories of what compensation should do. The two crucial theories are that objective loss (loss in comparison to an alternate world where the tort never happened) should be compensated, and that compensation should be required only as far as it is in fact possible to cure someone (to restore them to the state they would be in in that alternate world). Some cases will show why I think the first of these vastly superior.
Firstly, we have the West v Shephard situation of a tort putting someone into a permanent vegetative state. The costs of care for the victim were vastly less than the money he would have received for loss of amenity, loss of earnings and so on had he been awake to appreciate these losses. However, compensation could not in fact do him any more good than covering the cost of care. The Law Lords were split on the measure of damages. Lord Reid felt that damages must actually be able to help the person in some way, improving their position, and so would have limited them to the cost of care. However, the majority went with Lord Devlin, who thought that the compensation should reflect the actual loss, even if not appreciated. The reasoning behind his judgement is not entirely clear, but the biggest concern was almost certainly that the tortfeasor should not have to pay less for causing more harm. This reflects the intuitive view that the damages should be proportionate to the harm done, and so to the wrongdoing. Given that the victim could not actually benefit from the increased damages, the rationale looks suspiciously penal, ensuring that the tortfeasor is punished in line with his wrong.
A similar dilemma should in theory present itself when it comes to torts causing death. After all, surely there is even less chance of a cure? However, the Fatal Accidents Act 1976 attempts to smooth over the preposterous conclusion that there should be no damages for death by providing actions which can be brought by the dependants left behind. They are allowed to recover what the victim would have been able to collect had he lived, as well as a fixed sum for bereavement and compensation for funeral expenses. Of course, this means that if someone dies without dependants, then the tortfeasor is not liable. Only the rarity of this situation prevents this unsightly fact from having more influence.
For dependants, this solution is extremely helpful and often very necessary. However, it is slightly odd that they get the entirety of the compensation the victim would have received, because on the cost of cure approach this must rest on the fiction that, had he lived, all of the damages would have been spent on them. A healthy discount for the living costs the victim himself would have incurred during his lifetime would seem to make sense, if we were trying to put the family members in the position they would have been in without the death. Again, the fact that this has not been suggested (and it is not a problem of quantifying the discount, as will be shown below) suggests that, really, the legislature is keen to ensure that tortfeasors don't have to pay any less for causing death - a penal concern.
A closely related area is the 'lost years' problem - how to compensate for loss of earnings where the number of years the person would have been able to work has been cut short because their death will now be premature. On objective loss principles, it would be thought that they should still be paid exactly as much as if they hadn't lost years off their life. On the cost of cure view, however, they only need enough to keep them in the same lifestyle for those years they will in fact survive; giving them more would be over-compensating. Happily, the objective loss model is once again generally applied - Pickett v British Rail. Here, however, the logic appears to be to compensate for the loss to dependants during the lost years. This is shown in a couple of ways. Firstly, here a deduction *is* made for the amount the victim would have spent on himself during the lost years. Secondly, Croke v Wiseman showed that if someone is harmed in such a way that they will never have dependants, then they cannot claim for the lost years. Given the objective approach in West and the FAA 1976, this is really surprising, and an example of more harm leading to fewer damages under the cost of cure rationale.
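To put rough numbers on the two measures, here is a minimal sketch in Python with invented figures and a flat deduction for the victim's own living expenses in the lost years. It illustrates the contrast between the approaches, not how courts actually quantify these awards.

def lost_earnings_objective(annual_earnings, total_working_years,
                            surviving_working_years, own_living_share):
    """Objective-loss measure, roughly the Pickett approach: compensate the
    whole working life the victim would have had, deducting only the share
    of earnings he would have spent purely on himself in the lost years."""
    lost_years = total_working_years - surviving_working_years
    return (annual_earnings * surviving_working_years
            + annual_earnings * lost_years * (1 - own_living_share))

def lost_earnings_cost_of_cure(annual_earnings, surviving_working_years):
    """Cost-of-cure measure: fund only the lifestyle the victim will in fact
    live to enjoy, i.e. earnings for the years actually survived."""
    return annual_earnings * surviving_working_years

# Invented figures: 20,000 a year, 30 working years expected, only 10 of
# which will now be lived, and a quarter of income spent on the victim alone.
print(lost_earnings_objective(20_000, 30, 10, 0.25))   # 500000.0
print(lost_earnings_cost_of_cure(20_000, 10))          # 200000

Note how, on the cost-of-cure measure, the earlier death makes the award smaller - the 'more harm, fewer damages' result criticised above.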
What is the heart of the difference between the objective loss and cost of cure approaches? It is the party on which the focus is laid. Objective loss looks at the actual harm done so as to determine the level of wrongdoing, and charges the tortfeasor accordingly. Cost of cure instead looks at what the victim should actually get, in terms of what will do the most justice for him - a cure without over-compensation (lovingly referred to as a 'windfall' among lawyers). This dichotomy between claimant-centred and defendant-centred compensation exposes the heart of the contradiction of tort law. For when we focus on the claimant, we ignore the very agent whose actions bring the scenario into the realm of wrongs. If we are using the tort as the gatekeeper to decide whether to allow compensation, why should its seriousness not determine the level of compensation? From the defendant's side, ignoring it looks unjust. Yet when we take the objective approach, the claimant can appear over-compensated, with nothing useful to do with the money received. The tension between the two sides can only be resolved by separating the amount taken from the one party from the amount received by the other. That is the theme I will take up when I come to draw my conclusions from this series.
Friday, April 06, 2007
Tort's Wrongs, Part 1: Loss of a Chance
Tort has the odd distinction of being one of the branches of law least recognisable by name, while actually being one of the top contributors to stereotypes of lawyers and the law. It is the law of civil wrongs, encompassing most cases of people suing each other for wrongdoing (as opposed to for breach of obligations under contract). The ideas of 'ambulance chasing' and 'no win no fee' arise from tort law.
The basic idea behind tort law is to restore people who have been wronged to their previous position (or as close to it as money will go) at the expense of the one who wronged them. Judges insist that, with very few minor exceptions, tort law does not have a penal function; it is compensatory, while punishment is left to the criminal law. This should immediately strike one as somewhat odd. The fact that the compensation comes from the tortfeasor (the one who commits the tort) rather than from anyone else must be punishment, since it is a negative consequence inflicted because of the wrongdoing. I will try to show that the attempt to avoid penalising tortfeasors inevitably leads to a tort law which both appears and is unjust and contradictory.
I will use three major examples of injustice before coming to my main point and drawing them together. My first will be liability for loss of a chance.
To establish liability in tort the facts upon which it rests must be shown to be true 'on the balance of probabilities,' which means more than 50% likely to be true. This can be contrasted with the standard of proof in the criminal law of 'beyond reasonable doubt,' which might mean something like 95% or 99% likely to be true. To receive compensation for my negligence, you will have to show on the balance of probabilities things like it being foreseeable to me that my actions might hurt someone like you and (at issue here) that my actions did, as a matter of fact, cause your harm.
This becomes very tricky in a number of cases. The first is where medical negligence damages someone's chance of avoiding a harm (loss of a chance), as in Hotson v East Berkshire, but reduces that chance by less than 50%. Because the chance that the negligence caused the harm will then be less than 50%, there is no liability. None at all. If the defendants are 45% likely to have been the cause they will have to pay nothing, while if they are 55% likely to have been the cause they will have to pay the whole amount.
Another situation is where multiple factors may have caused the harm, as in Fairchild v Glenhaven. Where each causes a bit of the harm this is no problem, but where only one of them did, the probabilities become tricky. A classic scenario involves multiple employers negligently exposing an employee to asbestos, leading to him contracting mesothelioma. Causally, only one will have been responsible for the harm, but each individual may only be 30% or 40% likely to be the one responsible. Following the orthodox approach, none should have to pay anything. However, in this case the Law Lords responded to the patent injustice by creating an exception to the rule: if you add up all the tortious causes and they come to over 50%, then the tortfeasors together are liable for the whole amount, in proportion to the likelihood of each being responsible. This may look fair, but in Wilsher v Essex there was a new twist to the tale. There liability was denied, and the best explanation appears to be that the potential causes did not all involve the same 'agent' (in Fairchild, asbestos fibres). This is almost universally recognised as patently absurd.
What becomes clear from these cases is a devotion to all or nothing liability. It may appear extremely odd that no-one has suggested apportioning liability in accordance with the percentage chance of having caused the harm. In the Hotson case, why not make the doctors liable in proportion to the chance of them having caused the harm? If someone is 45% likely to have caused harm, charge them 45% of the loss, and if 55% likely, charge them 55%. In the Fairchild and Wilsher cases, make each tortfeasor responsible for their share of the chance of harm they caused, rather than arbitrarily splitting between cases where full compensation will be awarded and where none will be. This instinctively appears the most just solution. So why is it not adopted?
The best answer appears to be that this would no longer be compensation based on actual causal responsibility for the harm. It would be penalising wrongdoing (contributing to a risk of harm) and then using the proceeds to compensate its victims. This is truly penal in that it is sensitive to the degree of responsibility and treats causing risk as a wrong, even if the risk did not in fact materialise. It recognises that people should not get away with such actions simply because the statistics happen to be in their favour.
A classic thought experiment explains the results of the current approach (minus the Fairchild exception). Imagine a nuclear facility which negligently increases the risk of leukaemia among nearby children, increasing the number of cases so that 40% of them are due to its negligence; for any particular ill child, there is then a 40% chance that their leukaemia is due to the negligence. On orthodox, Hotson principles, the facility is not liable for a thing. If, on the other hand, the percentage were 60%, it would be liable for the entire damages of every child with leukaemia in the area. To me, the just thing to do would be to penalise it for causing the risk, forcing it to pay each child with leukaemia that percentage of the child's losses - 40% or 60% - in proportion to the chance of its responsibility. But tort law as it stands just cannot cope with this.
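Here is a minimal sketch in Python of the cliff effect in that thought experiment, using an invented figure of 500,000 of loss per affected child and whole-number percentages; the all-or-nothing rule here ignores the Fairchild exception, as the text does.

def orthodox_award(loss_per_child, chance_percent):
    """All-or-nothing rule: full damages if causation is more likely than
    not (over 50%), nothing otherwise."""
    return loss_per_child if chance_percent > 50 else 0

def proportional_award(loss_per_child, chance_percent):
    """The approach argued for above: damages discounted by the chance that
    the facility's negligence caused this particular child's illness."""
    return loss_per_child * chance_percent // 100

for chance in (40, 60):
    print(chance,
          orthodox_award(500_000, chance),
          proportional_award(500_000, chance))
# 40 0 200000
# 60 500000 300000

The orthodox column jumps from nothing to everything as the probability crosses 50%, while the proportional column simply tracks the facility's contribution to the risk.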
If it were to be remedied as I suggest, the system would appear much more just and would, for me, actually be more just. To the non-legal observer, it may seem incredible that this is avoided for the sake of keeping tort law 'non-penal.' However once you accept that the causal link is not sacrosanct, you start to unravel the foundations of the discipline. I will continue to explain why this is in my next post.
The basic idea behind tort law is to restore people who have been wronged against to their previous position (or as far as money will go to that end) at the expense of the one who wronged them. Judges insist that, with very few minor exceptions, tort law does not have a penal function; it is compensatory, while penalty is left to the criminal law. This should immediately strike one as somewhat odd. The fact that the compensation comes from the tortfeasor (the one who commits the tort) rather than anyone else must be punishment as it is a negative consequence inflicted due to the wrongdoing. I will try to show that the attempt to avoid penalising tortfeasors inevitably leads to a tort law which appears and is unjust and contradictory.
I will use three major examples of injustice before coming to my main point and drawing them together. My first will be liability for loss of a chance.
To establish liability in tort, the facts upon which it rests must be shown to be true 'on the balance of probabilities', which means more than 50% likely to be true. This can be contrasted with the criminal standard of proof, 'beyond reasonable doubt', which might mean something like 95% or 99% likely to be true. To receive compensation for my negligence, you will have to show on the balance of probabilities, among other things, that it was foreseeable to me that my actions might hurt someone like you, and (the issue here) that my actions did, as a matter of fact, cause your harm.
This becomes very tricky in a number of cases. The first is where medical negligence damages someone's chance of avoiding a harm (loss of a chance), as in Hotson v East Berkshire, but reduces that chance by less than 50%. Because the chance that the negligence caused the harm is then less than 50%, there is no liability. None at all. If the doctors are 45% likely to have been the cause they pay nothing, while if they are 55% likely to have been the cause they pay the whole amount.
Another situation is where multiple factors may have caused the harm, as in Fairchild v Glenhaven. Where each causes part of the harm this is no problem, but where only one of them did, the probabilities become tricky. A classic scenario involves multiple employers negligently exposing an employee to asbestos, leading to him contracting mesothelioma. Causally, only one will have been responsible for the harm, but each individual employer may be only 30% or 40% likely to be the one responsible. Following the orthodox approach, none should have to pay anything. However, in this case the Law Lords responded to the patent injustice by creating an exception to the rule: if you add up all the tortious causes and they come to over 50%, then the tortfeasors together are liable for the whole amount, in proportion to the likelihood of each being responsible. This may look fair, but in Wilsher v Essex there was a new twist to the tale. Here multiple tortfeasors were not liable, and the best explanation appears to be that the possible causes were not all the same 'agent', as the asbestos fibres were in Fairchild! This is almost universally recognised as patently absurd.
What becomes clear from these cases is a devotion to all-or-nothing liability. It may appear extremely odd that no-one has suggested apportioning liability in accordance with the percentage chance of having caused the harm. In the Hotson case, why not make the doctors liable in proportion to the chance of their having caused the harm? If someone is 45% likely to have caused the harm, charge them 45% of the loss; if 55% likely, charge them 55%. In the Fairchild and Wilsher cases, make each tortfeasor responsible for the share of the risk they created, rather than arbitrarily splitting cases into those where full compensation is awarded and those where none is. This instinctively appears the most just solution. So why is it not adopted?
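To make the contrast concrete, here is a minimal sketch in Python (purely illustrative - the loss figure and the individual probabilities are invented, and nothing here reflects how damages are actually assessed) comparing the orthodox all-or-nothing rule with the proportional apportionment suggested above, for both a Hotson-style single defendant and a Fairchild-style group of employers.

```python
# Purely illustrative: how the orthodox all-or-nothing rule and a proportional
# rule would divide a hypothetical loss. All figures are invented.

def orthodox_award(probability_of_causation, loss):
    """Balance of probabilities: full recovery above 50%, nothing otherwise."""
    return loss if probability_of_causation > 0.5 else 0.0

def proportional_award(probability_of_causation, loss):
    """Apportionment: recovery in proportion to the chance of causation."""
    return probability_of_causation * loss

loss = 100_000  # hypothetical quantified loss

# Hotson-style single defendant: 45% vs 55% chance of having caused the harm
for p in (0.45, 0.55):
    print(f"p = {p:.0%}: orthodox £{orthodox_award(p, loss):,.0f}, "
          f"proportional £{proportional_award(p, loss):,.0f}")

# Fairchild-style scenario: three employers, each well under 50% likely to be the cause
employer_probabilities = {"Employer A": 0.40, "Employer B": 0.35, "Employer C": 0.25}
for name, p in employer_probabilities.items():
    print(f"{name}: orthodox £{orthodox_award(p, loss):,.0f}, "
          f"proportional £{proportional_award(p, loss):,.0f}")
```

Run it and the cliff at 50% is obvious: a ten-point swing in probability moves the award from nothing to everything under the orthodox rule, while the proportional rule simply tracks each party's share of responsibility.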
The best answer appears to be that this would no longer be compensation based on actual causal responsibility for the harm. It would be penalising wrongdoing (contributing to a risk of harm) and then using the proceeds to compensate the victims of that risk. This is truly penal in that it is sensitive to the degree of responsibility and treats the creation of risk as a wrong, even if the risk did not in actual fact come about. It recognises that people should not get away with such actions simply because the statistics happen to be in their favour.
A classic thought experiment is put forward to illustrate the results of the current approach (minus the Fairchild exception). It imagines a nuclear facility which negligently increases the risk of leukaemia among nearby children, increasing the number of cases so that 40% of them are due to its negligence, meaning that for any particular ill child there is a 40% chance that their leukaemia is due to the negligence. On orthodox, Hotson principles, the facility is not liable for a thing. If on the other hand the percentage were 60%, it would be liable for the entire damages of every child with leukaemia in the area. To me, the just thing to do would be to penalise it for creating the risk, requiring it to pay each child with leukaemia 40% or 60% of that child's losses, in proportion to the chance of its responsibility. But tort law as it stands simply cannot cope with this.
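The arithmetic of that thought experiment can be set out in a few lines (again a sketch only - the number of children and the loss per child are made-up figures):

```python
# Illustrative arithmetic for the nuclear facility thought experiment.
# The number of children and the loss per child are invented for the example.

children = 10
loss_per_child = 200_000  # hypothetical loss suffered by each child

for p in (0.40, 0.60):
    # Orthodox rule: everything if over 50%, nothing otherwise
    orthodox_total = children * (loss_per_child if p > 0.5 else 0)
    # Proportional rule: each child recovers p of their loss
    proportional_total = children * loss_per_child * p
    print(f"Chance the facility caused each case: {p:.0%}")
    print(f"  Orthodox rule:     facility pays £{orthodox_total:,.0f} in total")
    print(f"  Proportional rule: facility pays £{proportional_total:,.0f} in total")
```

At 40% the facility pays nothing at all under the orthodox rule; nudge the figure past 50% and it suddenly pays for every case in full, whereas the proportional rule rises smoothly with its share of responsibility.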
If it were remedied as I suggest, the system would appear much more just and would, for me, actually be more just. To the non-legal observer, it may seem incredible that this is avoided for the sake of keeping tort law 'non-penal'. However, once you accept that the causal link is not sacrosanct, you start to unravel the foundations of the discipline. I will continue to explain why this is in my next post.
Friday, February 16, 2007
Torture: Ethics v Law
In the previous post I explained my broad theory of the moral limits of the law. It can be summarised in this way: the moral limits of the law should not depend on the subject-matter in question, but on the peculiar nature of the law which is to enforce it. I put forward the principles of effectiveness (balancing the goals of the law to come up with an effective system) and certainty (coming up with a reasonably clear set of rules, so as to allow conduct to be guided and to prevent judges from having too much control over matters best left to the individual conscience) as determining how the law should enforce morality. I also explained that offence (including disgust and outrage), although a (mild) moral harm, should be discounted by the law, because policing it would infringe too deeply on freedom of speech.
I will now turn my analysis to a very different, and substantially more controversial, topic - torture. I will argue that in some rare cases torture can be morally justifiable from the individual point of view, but that the law must draw one of its clear lines and say that it never accepts it as justifiable. There is a parallel with the popular hypothetical case of stealing to feed one's family. In both cases the fact that the action was justifiable in the individual instance makes punishment seem harsh. Nevertheless the alternative is handing to courts a decision which they should not have the power to make.
The suggestion that torture can occasionally be justified comes from the much-vaunted ticking bomb scenario. In this situation a bomb will soon kill hundreds of innocents unless a terrorist is tortured to reveal its location. As a hypothetical it is simplistic, and it is often deployed as a wedge strategy, using such a rare instance to justify a whole edifice of torture. Nevertheless, the scenario might have something to it.
The problems with the scenario are large and obvious. How are we to know that the person we have is the real terrorist? Will torture actually get us the bomb's location? However, hidden at the centre of all this is the fact that from the interrogator's point of view, it might well seem both possible that torture will work and certain that the person in question is the perpetrator. Now, I am firmly against abusing the basic rights of innocents (i.e. torturing or killing them) for the 'greater good'. However, the situation is much less clear with the non-innocent. We are willing to allow killing in self-defence or in defence of others where necessary. Why not allow torture on the same footing? As long as it is constrained strictly to those who are actively trying to kill others (or similar), and to where it is necessary to prevent such evil, it seems morally difficult to allow one but not the other.
If the leap is difficult to stomach, imagine this. A man has a bomb strapped to him in a crowded place. The timer ticks down with each of his heartbeats. The only way to stop the detonation is to shoot him dead, stopping the countdown. On a simple preservation-of-others principle, this can be justified. Now imagine the same set-up, except that the man and the bomb are separate, remotely linked. The man is in custody when you discover that his heartbeat will still detonate the bomb. The only way to stop it is to kill him. Since the only real thing that has changed is proximity, it must still be acceptable to kill him. Which brings us to the ticking bomb scenario. Once again, the only way to defuse the bomb is to violate one of the man's basic rights. Unless we are willing to argue that torture is so absolutely awful that it cannot be allowed even where killing can (an argument I am loath to accept here), we must accept that torture could here be morally justified on the same principle as killing the prisoner in custody.
Hopefully it will now be clear why an interrogator might legitimately feel morally justified in torturing, at least in the rare cases where they are certain of the person's guilt. So what should the law's response be? Firstly, consider effectiveness - the requirement that the law balance its legitimate aims. Whatever the rare good that can be done by torture, many aims point against its use: upholding the reputability of the law and keeping society generally opposed to the concept of torture; ensuring that as little harm as possible is done to innocents (where torture is allowed, more innocents will end up tortured); and avoiding fruitless punishment (often torture will fail to yield anything, merely adding more harm to the equation). There is a very real danger that once torture is introduced for extreme cases it will become increasingly normalised, on a slippery slope towards routine use to make suspects confess rather than to save lives.
Secondly, remember certainty - the value of ending up with clear-cut rules which it is safe to give to judges to adjudicate. It is true that clear-cut rules could be set down, authorising torture only where people's lives or bodies could be saved by it and where there is overwhelming certainty of guilt. It could even be required to be no more than necessary and appropriate to the situation, although that would be difficult to define. The first big problem with such a scheme is that it allows interrogators or police to act as judges and juries. It would require them to decide those issues, and lend legitimacy to their decisions. Even if special investigators were appointed to make the decisions, pressure from the police would impinge on any standard of unbiased decision-making. An emergency judgement from a proper court might improve this, but the constrained time would still have corrosive effects. Judges would inevitably be tempted to find for the interrogators, for fear that, unlike torture, the deaths they might fail to prevent are irreversible. The evidence may not justify such a finding, but such findings become ever more likely.
But what about the easy cases - the case where the perpetrator is gloating about what is going to happen? Firstly, if such gloating were likely to lead to torture, such people would simply stop gloating and instead post anonymously or give tip-offs to the police without any documentation of the fact. It would become more difficult to find anyone guilty under such a standard. Secondly, the situation has become so amazingly hypothetical as to be almost irrelevant as a legal standard. Someone would have to willingly admit, to authorities they knew would be likely to torture them for it, that they were party to actions likely to cause death or serious injury. There would have to be time to take them before a judge, where they would have to admit to such knowledge again. Only then could torture take place. It seems unlikely that such an event would ever occur, and if it did, the culprit seems more likely to be a psychotic masochist keen to be tortured than someone with any actual information. For this case, should the law be willing to risk all the other aims above, especially the risk of the spread of torture to less worthy areas? It seems implausible.
What is generally advocated is more akin to torturing those reasonably suspected of being terrorists until they either confess or seem unlikely to know anything useful. The harm that would be caused by the law permitting anything near this level makes it unthinkable, and that is not affected by the fact that in certain mostly hypothetical cases, torture might be morally acceptable.
Wednesday, January 24, 2007
Law and the Harm Principle
One of the biggest issues straddling the areas of ethics, law and politics is how far ethics should be implemented as law. Of course, not all law attempts to implement pre-existing moral duties. Often the law itself helps inform people of considerations which will alter their moral duties (as with some health and safety laws), and at other times it has a crucial regulatory function, imposing a uniform standard which is no better than the alternatives simply because there needs to be some standard (the classic example is a law requiring everyone to drive on one side of the road). However, the bulk of law is meant to enforce pre-existing morality, so it is crucial to know how far this should go - in other words, what the moral limits of the law are.
For liberals, the traditional standard is the harm principle. The Wolfenden Report which eventually led to the legalisation of homosexuality in the UK suggested a private sphere of morality into which the state should not intrude. But modern liberals see this in a different light. Homosexuality should be legal not so much because it is private, but because it is not immoral. The harm principle decides what is in fact immoral, not which immoralities should be criminal. Nevertheless there clearly are some immoralities we believe should remain legal, like adultery. The principle behind this needs further exploration.
However, at the same time, the harm principle as applied to ethics does lead to difficult questions. The distinction between acts harming only oneself and acts harming others is tenuous. Although drug use, masturbation, contraception and homosexuality are examples of private acts with no direct impact on those who do not consent, all of them, if discovered, can cause offence and distress to others, particularly loved ones who disapprove. Even those with no connection to the individuals in question can feel anger and outrage at the presence of the phenomenon in their society. Liberals would have no problem saying that such moral outrage should not count as actionable harm - that such mere offence should be discounted from the harm principle. The problem is justifying this.
The best definition of harm appears to be detriment of any kind, and this would seem clearly to include offence, outrage and distress. An immediate reaction may be to draw a line between physical harms and mental harms. However, this is a worrying distinction. On the mental side of the line would also fall fear caused by intimidation, as well as any number of mental illnesses. It would be unjust to say that causing such ailments does not violate the harm principle. The distinction is also unjustified: mental harms can cause as much misery as physical harms, if not more, and are by no means necessarily easier to 'get over' - such an idea is discredited by modern understanding. So how do we explain why offence should not be treated as harm?
I think that the answers to this problem and to the problem of translating morality into legislation are intertwined, and must be understood together. The crucial answer to the offence problem is to realise that offence and related ailments are indeed harms to be factored into our moral considerations. For this reason, it can be morally wrong to swear in front of those whom it will offend, or to insult someone for no reason. Crucially, however, harms must always be balanced against the benefits of action. If we reasonably minimise the chances of people being offended by our private actions and the benefits justify what small risk is left, it can still be morally permissible to do those acts. While homosexuality may disgust some and offend others, engaging in homosexual activity can still be justified by the following factors, among others: the negative effects of repressing one's orientation in terms of emotional health and fulfilment, the happiness brought to oneself and one's partner(s) by engaging in it, and the benefit to society of encouraging more openness and acceptance. Offence in cases like this is taken into account but outweighed by the positives. It is further submitted that our right to self-determination is also a good which should be weighed against restrictions. As long as mere offence is treated as a low-level factor to be taken into account (as opposed to more weighty concerns like physical harms and more profound mental harms), there is not a problem.
In the case of criticising the beliefs of others, this calculation must include considerations of the public good of free debate. It may be wrong to simply mock and ridicule another's beliefs just to upset them. However reasonable criticism is vital to our society, and the good of allowing ideas to be questioned can easily outweigh temporary offence at the criticism. Whether criticism is morally justified must very much be considered on a case-by-case basis.
This sounds dangerous when it comes to the application of law, however. If we were to leave it to judges to determine what is and what is not reasonable criticism, free speech would be left to the personal opinions of individuals with their own agenda and their own criteria. It is a question much better left to the individual conscience to determine. This is because the nature of law requires that we filter morality in certain ways before imposing it on the public. This is the crucial insight which should make us read the Wolfenden Report in a different light. It is not that law should be different from morality due to some problem with coercing others to behave morally (the harm principle deals with this at the stage of determining what is moral), nor due to some distinction between private and public morality. Rather, the very nature of law requires that morality be filtered in certain ways before application. A couple of the key principles of that filtering will now be set out.
Effectiveness - Where law attempts to uphold morality, it is useless if it has no effect and counter-productive where it actually encourages the wrong or wrongs it seeks to prevent. Laws creating thought crimes fail not just because such laws may be wrongheaded in their subject matter, but also because they are impossible to police and so encourage disrespect for the law as a whole. Moreover, a society which criminalises drugs may find that this drives them underground, causing vast harm in other ways. Effectiveness should be considered in terms of the legal order as a whole: where a law is effective against one wrong but actually increases another, it must be asked whether the trade-off is worthwhile. Where it is not, the law should be removed, even though the act in question might still be wrong.
Certainty - Certainty is crucial to any system attempting to guide behaviour. People must be able to stay clear of prohibited actions, and at the same time liberty must not be restricted more than necessary. The result is a need for clear, somewhat simplistic guidelines. Unfortunately, morality does not provide us with such guidelines. Ethics tirelessly requires us to assess the individual situation and weigh up competing factors, which gives it an unavoidably personal element. Giving such decisions to the courts increases the uncertainty of those who are considering how to act: different judges may weigh factors differently and come to different moral conclusions on the same set of facts. Legal certainty therefore dictates that morality be simplified down to a reasonable number of bare rules. The question of whether killing is wrong in a certain situation can be difficult, but the law makes it simpler - killing a person is always wrong, with a few clear exceptions. The reason why we might all agree that stealing food to survive might be morally permissible but should remain illegal is that otherwise the court would have to examine the socio-economic circumstances leading to the theft, whether there was any other way to get food, whether the person from whom it was stolen had more need of it, and so on. Courts are simply not best placed to make such assessments - they must remain for the individual conscience. What the courts and the legislature do is balance fairness with simplicity to come up with rules to be applied despite the fact that they may lead to unfortunate consequences in individual cases.
So these principles of effectiveness and certainty must be used to determine in what way and to what extent the law should enforce morality. Now, as mentioned above, free speech and debate are crucial for a healthy democratic society to function. The principle of effectiveness therefore indicates that there should be extremely strong reasons wherever they are to be curtailed. Certainly when it comes to useful debate, offence and outrage are not sufficiently strong reasons, especially as they constitute an ordinary and expected part of reasonable discourse. But then the principle of certainty comes in. It cannot be for judges simply to decide for themselves whether a certain type of speech serves a sufficiently useful role to justify the offence it causes. Therefore, offence has to be removed from consideration as a harm from the law's point of view. Even where, from an individual point of view, it would be wrong to offend someone, a judge must not be allowed to determine this, because to do so would make free speech subject to the will of the court. Much better is the judgement of the people as a whole, who can accept or reject any ideas contained within the discourse.
People can have horrendous views. Sometimes, hearing them, it is difficult to imagine how they can morally justify to themselves promulgating such views. However what I submit is that we must never think that it is for the courts to put an end to such views. They must be judged by public opinion, a public open to hearing and determining any issue.
Sunday, December 24, 2006
Self-Corruption and Legal Obligation
In my previous post I discussed self-corruption and how it should influence our view of morality and the harm principle. Here, to begin my reflections on the relationship between law and morality, I shall explain why self-corruption grounds a prima facie moral obligation to obey the law. The following was submitted to Oxford's Law Society for an essay competition and it is therefore in a slightly different format from my normal posts, and repeats much of the groundwork for the concept of self-corruption as laid out in the previous post.
Self-Corruption and the Moral Obligation to Obey the Law
Introduction
The question of a moral obligation to obey the law, straddling as it does the fence between legal and moral philosophy, must appear one of the more immediately relevant aspects of jurisprudence to the legal outsider. Most of us will have encountered a situation where we could break a law of some kind without any apparent chance of punishment or harm arising. Is there a moral dimension here, encouraging us to act in accordance with the law even though it seems to serve no coherent moral aim? I will argue that there is. Specifically, that in a reasonably just society there will always be a prima facie obligation to obey the law.
Nobody would defend the position that there is an absolute duty to obey the law, at least not since the morally horrific yet legally binding norms of the Nazi German state. Instead, I would argue for a prima facie duty: A duty-creating reason which can be bolstered or displaced by other considerations. In this way there is a prima facie moral duty not to kill which may, in select situations like self-defence and possibly euthanasia, be displaced. The duty to obey the law will serve as scant defence to those who committed Nazi horrors, since there are clear and overwhelming reasons ensuring the overall moral balance is against obedience.
Raz’s argument against the prima facie obligation
Philosophers, epitomised by Joseph Raz, argue emphatically that even such a prima facie obligation cannot exist. Raz argues that we would label as immoral anyone who refrained from murder merely because it was illegal, rather than for other reasons; any 'prima facie obligation' would therefore be a dead letter there (1). I submit that this is a confused way to think about moral obligations. Raz implicitly assumes that moral obligations can be added up in specific situations to give an overall moral weighting, rather like adding up the costs and benefits of a business transaction in purely economic terms. On this view, the obligation to obey the law does indeed seem to add no weight to the strong moral reasons against murder. However, we do not look at moral obligations in the way that he suggests.
Let us consider the moral obligation to uphold a promise. Once again it seems clear that this cannot be an absolute obligation (consider promises to do wrong). Nevertheless there are good moral reasons for upholding a promise which would need to be displaced in specific circumstances: since society depends to an extent upon people being able to rely on the word of others, every broken promise damages the society in which it takes place. As long as that society is worth preserving (a reasonably just society), this will underpin a prima facie duty to keep promises. Now, imagine that after some terrible slight I am moved to kill an enemy, but promise a good friend of mine that I will not. Still, I go ahead and do the deed. Clearly, from an objective point of view, the promise can be excluded from any assessment of my moral wrong; it has been swallowed up in the heinous act of murder. Nevertheless, I would argue that the obligation to keep my promise still existed, running concurrently with the obligation not to murder and merely eclipsed by it. It would certainly seem absurd to argue that because in this case other considerations make the promise practically irrelevant, there is no prima facie obligation to keep promises. Exactly the same is true of the obligation to obey the law. It is eclipsed by more pressing moral matters, certainly, but it still exists, Raz's assertion notwithstanding.
The bad example argument
Despite the failure of this argument, it remains for me to make my case in favour of the duty. I must express gratitude to Raz and Smith here for aptly dismissing a number of unsatisfactory positions (2). I will focus on the sole ground for the duty which I do not believe they succeeded in demolishing: the 'bad example' argument, which is helpfully summarised by Raz (3).
It states that in a reasonably just society, there will be many laws with which it is better to comply simply because they are laws. Raz accepts this in the cases of the government having better expertise over regulations than (most) individuals and the government co-ordinating collective action which would fail without the government’s intervention. Therefore, the argument goes, there is a prima facie obligation to obey all laws, even those not falling under these categories, since to do otherwise would set an example of contempt for the law, discouraging others from obeying the law even in worthwhile cases. This will be damaging for a society which, as reasonably just, we wish to preserve.
Raz respects the argument but says that it is insufficient to ground a general duty as it requires the possibility and likelihood of setting a bad example in every single case. He gives the counter-examples of horrific murders, which will actually strengthen feeling in favour of obedience, and running red lights when there is no-one about, which provides no example at all. It is submitted that he has missed some crucial points. Regarding the murder situation, one cannot know in advance what effect a crime will have on other people’s opinions of crime. It may encourage or discourage them from it. Nevertheless, the balance is always in favour of encouraging, because instances cumulatively bring an act closer to general acceptance. Also, the shaking up of people’s perceptions of the act always carries a risk of desensitising them to it. Far more difficult to answer is the traffic lights situation. The central argument of this essay is an attempt to answer it.
Self-corruption
Imagine that I am a forgetful person who must remind myself to do things by a system of notes. One night I promise my friend to buy him something the next day and write a note to that effect. However just before bed the friend annoys me. In anger I throw away the note, aware that this will ensure that I forget my promise. Imagine further that I well know that I tend to forgive (or forget!) wrongs in my sleep so that had I seen the note the next day, I would almost certainly have obeyed its contents. My acts the next day in failing to uphold the promise are not wrong. Who can blame me for failing to remember something when it is outside of my control? Indeed, I would be blameless if some rogue, and not I, had dispensed with my note. The wrong was done last night. In effect I manipulated events so that I did harm (breaking the promise) the next day. Although I did not breach the harm principle in the immediate timeframe, I did so in an inchoate way: I pushed myself to cause harm in the future.
Why the slightly far-fetched example? I am trying to show that setting myself up to cause harm in the future is in itself a wrong. This seems relatively clear in this memory case. Now consider a new scenario. Imagine that I am fed up with people being mean to me. In order to gain some respect, I train myself to respond automatically to taunts and teasing with disproportionate physical violence. As a result I later cause terrible injuries to people who fall foul of my training. I would imagine it relatively uncontroversial to hold that, as in the memory example, I was doing something wrong in training myself thus. Even before I actually harmed anyone, I was influencing myself so as to cause harm at a later date, and this must be wrong.
In essence, I am arguing that whatever formulation of the harm principle people go by, this kind of inchoate harm must be included. Just as I do wrong in persuading my friend to harm another, I do wrong if I 'persuade' myself to do so. I will call this 'self-corruption'. Both of the examples so far involved deliberate choices to cause harm in the future, but the principle seems to extend logically to cases where causing harm in the future is a foreseeable consequence or risk of my actions now. Of course, such actions may still be justifiable on other grounds: a soldier at war may develop a violent character, but this might be justified by the circumstances of war. This makes avoiding self-corruption which makes harm more likely a prima facie duty, one which may be displaced in individual cases.
Self-corruption and the bad example argument
Now recall the traffic light example that I considered earlier. In running red lights when no other vehicles or pedestrians are around we might well not risk any real harm nor set any bad example to others. However surely we are setting a bad example to ourselves. Every time we break the law we lessen our respect for its normative force, and make it more likely that we will break the law again. There is a close parallel with promises here. It may well be possible to break a promise and get away with it without anyone ever knowing or getting harmed. However, in doing so we reduce our respect for promises as a whole, making it more likely that we will breach promises in the future. In both cases it seems quite reasonable to assume that our diminished respect for the concept (law or promise) makes it more likely for us to breach it again, even where it is uncontroversially wrong to do so.
There are two levels to this. The first is that once we break a law or promise without anyone else knowing or being affected, we are more likely to do so again in ways which do harm people because the promise or law existed (as well as Raz's situations in which the law itself creates moral duties, people also rely on both laws and promises in ways which can make it wrong to violate them). This is a direct step from harmless breach to harmful breach. On the indirect level, however, once we break a law or promise without anyone else knowing or being affected, we are more likely to do so again in ways which set a bad example to those around us. They are then more likely to break laws and promises in ways which do harm people because the promise or law existed. On both the direct and indirect levels, disobedience of even trivial laws risks leading to harm.
It is submitted that this must therefore be prima facie wrong, as it is an example of self-corruption. Remember that self-corruption is influencing oneself so as to cause, or risk causing, harm in the future. Breaching promises or trivial laws risks causing harm both directly, through our own future breaches, and indirectly, through the bad example we set to others. Therefore there is a prima facie obligation not to breach laws or promises. Of course, as conceded earlier, this can be displaced where there is a good reason for the breach. Nevertheless the obligation always exists.
Conclusion
My conclusion that there exists a prima facie obligation to obey the law in reasonably just societies rests on just two new foundations. One is empirical: breaching the law without causing harm or setting a bad example to others makes it more likely that we will breach the law where it has created a moral obligation and/or where doing so sets a bad example to others. The other is moral: it is prima facie wrong to influence ourselves so as to make us more likely to cause harm in the future. If these two stand, as I believe they do, then the obligation exists after all. In fact, given the degree of truth Raz has conceded to the classical formulation of the bad example argument, it would seem difficult for him to deny my extension of it through self-corruption.
Notes:
1. ‘The Obligation to Obey the Law’, Chapter 12 in Raz, J. (1983) The Authority of Law. Oxford University Press.
2. See especially Smith, M.B.E. (1973) “Is There a Prima Facie Obligation to Obey the Law?” Yale Law Journal 82:5. p. 950.
3. ‘The Obligation to Obey the Law’, Chapter 12 in Raz, J. (1983) The Authority of Law. Oxford University Press.
Self-Corruption and the Moral Obligation to Obey the Law
Introduction
The question of a moral obligation to obey the law, straddling as it does the fence between legal and moral philosophy, must appear one of the more imminent and relevant aspects of jurisprudence to the legal outsider. Most of us will have encountered a situation where we could break a law of some kind without any apparent chance of punishment or harm arising. Is there a moral dimension which arises here, encouraging us to act in accordance with the law despite it seeming to serve no coherent moral aim? I will argue that there is. Specifically, that in a reasonably just society, there will always be a prima facie obligation to obey the law.
Nobody would defend the position that there is an absolute duty to obey the law, at least not since the morally horrific yet legally binding norms of the Nazi German state. Instead, I would argue for a prima facie duty: A duty-creating reason which can be bolstered or displaced by other considerations. In this way there is a prima facie moral duty not to kill which may, in select situations like self-defence and possibly euthanasia, be displaced. The duty to obey the law will serve as scant defence to those who committed Nazi horrors, since there are clear and overwhelming reasons ensuring the overall moral balance is against obedience.
Raz’s argument against the prima facie obligation
Philosophers epitomised by Joseph Raz however argue emphatically that even such a prima facie obligation cannot exist. Raz argues that we would label as immoral anyone who refrained from murder because it was illegal, rather than for other reasons. Therefore any ‘prima facie obligation’ would there be dead (1). I submit that this is a confused way to consider moral obligations. Raz implicitly assumes that moral obligations can be added up in specific situations to give an overall moral weighting, rather like adding up the costs and benefits of a business transaction in purely economical terms. On this view, the obligation to obey the law does indeed not seem to add any weight to strong moral reasons against murder. However, we do not look at moral obligations in the way that he suggests.
Let us consider the moral obligation to uphold a promise made. Once again it would seem clear that this cannot be an absolute obligation (considering promises to do wrong). Nevertheless there are good moral reasons for upholding a promise which would need to be displaced in specific circumstances: Since society depends to an extent upon people being able to rely on the word of others, every broken promise damages the society in which it takes place. As long as that society is worth preserving (a reasonably just society) this will underpin a prima facie duty to uphold promises. Now, imagine that after some terrible slight I am moved to kill an enemy, but promise a good friend of mine that I will not. Still, I go ahead and do the deed. Clearly from an objective point of view the promise can be excluded from consideration of my moral wrong. It has been swallowed up in the heinous act of murder. Nevertheless, I would argue that the obligation to obey my promise still existed, running concurrently with the obligation not to murder and merely eclipsed by it. Certainly, it would seem absurd to argue that because in this case other considerations make the promise practically morally irrelevant, there is not a prima facie obligation to obey promises. Exactly the same is true with the obligation to obey the law. It is eclipsed by more pressing moral matters, certainly, but it still exists, Raz’s assertion notwithstanding.
The bad example argument
Despite the failure of this argument, it is still for me to make my case in favour of the duty. I must express gratitude to Raz and Smith here in aptly dismissing a number of unsatisfactory positions (2). I will focus on the sole ground for a duty which I do not believe they succeeded in demolishing: The ‘bad example’ argument which is helpfully summarised by Raz (3).
It states that in a reasonably just society, there will be many laws with which it is better to comply simply because they are laws. Raz accepts this in the cases of the government having better expertise over regulations than (most) individuals and the government co-ordinating collective action which would fail without the government’s intervention. Therefore, the argument goes, there is a prima facie obligation to obey all laws, even those not falling under these categories, since to do otherwise would set an example of contempt for the law, discouraging others from obeying the law even in worthwhile cases. This will be damaging for a society which, as reasonably just, we wish to preserve.
Raz respects the argument but says that it is insufficient to ground a general duty as it requires the possibility and likelihood of setting a bad example in every single case. He gives the counter-examples of horrific murders, which will actually strengthen feeling in favour of obedience, and running red lights when there is no-one about, which provides no example at all. It is submitted that he has missed some crucial points. Regarding the murder situation, one cannot know in advance what effect a crime will have on other people’s opinions of crime. It may encourage or discourage them from it. Nevertheless, the balance is always in favour of encouraging, because instances cumulatively bring an act closer to general acceptance. Also, the shaking up of people’s perceptions of the act always carries a risk of desensitising them to it. Far more difficult to answer is the traffic lights situation. The central argument of this essay is an attempt to answer it.
Self-corruption
Imagine that I am a forgetful person who must remind myself to do things by a system of notes. One night I promise my friend to buy him something the next day and write a note to that effect. However just before bed the friend annoys me. In anger I throw away the note, aware that this will ensure that I forget my promise. Imagine further that I well know that I tend to forgive (or forget!) wrongs in my sleep so that had I seen the note the next day, I would almost certainly have obeyed its contents. My acts the next day in failing to uphold the promise are not wrong. Who can blame me for failing to remember something when it is outside of my control? Indeed, I would be blameless if some rogue, and not I, had dispensed with my note. The wrong was done last night. In effect I manipulated events so that I did harm (breaking the promise) the next day. Although I did not breach the harm principle in the immediate timeframe, I did so in an inchoate way: I pushed myself to cause harm in the future.
Why the slightly far-fetched example? I am trying to show that setting myself up to cause harm in the future is in itself a wrong. This seems relatively clear in this memory case. Now consider a new scenario. Imagine that I am fed up with people being mean to me. In order to gain some respect, I train myself to respond automatically to taunts and teasing with disproportionate physical violence. As a result I later cause terrible injuries to people who fall foul of my training. I would imagine it relatively uncontroversial to hold that, as in the memory example, I was doing something wrong in training myself thus. Even before I actually harmed anyone, I was influencing myself so as to cause harm at a later date, and this must be wrong.
In essence, I am arguing that whatever formulation of the harm principle people go by, this kind of inchoate harm must be included. Just as I do wrong in persuading my friend to harm another, I do wrong if I ‘persuade’ myself to do so. I will call this ‘self-corruption’. Both of the examples so far involved deliberate choices to cause harm in the future, but the principle seems to extend logically to cases where causing harm in the future is a foreseeable consequence or risk of my actions now. Of course, such actions may still be justifiable on other grounds: A soldier at war may develop a violent character, but this might be justified by the circumstances of war. Avoiding self-corruption which makes harm more likely is therefore a prima facie duty, one which would need to be displaced in individual cases.
Self-corruption and the bad example argument
Now recall the traffic light example that I considered earlier. In running red lights when no other vehicles or pedestrians are around, we might well not risk any real harm nor set any bad example to others. However, surely we are setting a bad example to ourselves. Every time we break the law we lessen our respect for its normative force, and make it more likely that we will break the law again. There is a close parallel with promises here. It may well be possible to break a promise and get away with it, without anyone ever knowing or being harmed. However, in doing so we reduce our respect for promises as a whole, making it more likely that we will breach promises in the future. In both cases it seems quite reasonable to assume that our diminished respect for the concept (law or promise) makes it more likely that we will breach it again, even where it is uncontroversially wrong to do so.
There are two levels to this. The first, direct level is that once we break a law or promise without anyone else knowing or being affected, we are more likely to do so again in ways which do harm people because the promise or law existed (as well as Raz’s situations in which law itself creates moral duties, people also rely on both law and promises in ways which can make it wrong to violate them). This is a direct step from harmless breach to harmful breach. On the indirect level, once we break a law or promise without anyone else knowing or being affected, we are more likely to do so again in ways which set a bad example to those around us. They are then more likely to break laws and promises in ways which harm people because the promise or law existed. On both the direct and indirect levels, disobedience of even trivial laws risks leading to harm.
It is submitted that this must therefore be prima facie wrong, as it is an example of self-corruption. Remember that self-corruption is influencing oneself so as to cause, or risk causing, harm in the future. Breaching promises or trivial laws risks our causing harm both directly ourselves and, through bad example, through the medium of other people. Therefore there is a prima facie obligation not to breach laws and promises. Of course, as conceded earlier, this can be displaced where there is a good reason for the breach. Nevertheless the obligation always exists.
Conclusion
My conclusion that there exists a prima facie obligation to obey the law in reasonably just societies rests on just two new foundations. One is empirical: Breaching the law without causing harm or setting a bad example for others makes it more likely that we will breach the law where it has created a moral obligation and / or where doing so sets a bad example for others. The other is moral: It is prima facie wrong to influence ourselves so as to make us more likely to cause harm in the future. If these two stand, as I believe they do, then an obligation exists after all. In fact, given the level of truth Raz has conceded to the classical formulation of the bad example argument, it would seem difficult for him to deny my extension of the argument through self-corruption.
Notes:
1. ‘The Obligation to Obey the Law’, Chapter 12 in Raz, J. (1983) The Authority of Law. Oxford University Press.
2. See especially Smith, M.B.E. (1973) “Is There a Prima Facie Obligation to Obey the Law?” Yale Law Journal 82:5. p. 950.
3. ‘The Obligation to Obey the Law’, Chapter 12 in Raz, J. (1983) The Authority of Law. Oxford University Press.
Tuesday, December 19, 2006
Self-Corruption
In my opinion, the harm principle is often construed far too narrowly to encompass the whole range of moral wrongs adequately. I submit that the biggest common omission is self-corruption, and that recognising it should cause us to re-evaluate liberal moral theory somewhat. Self-corruption, put simply, is acting so as to make oneself more likely to do harm in the future. Accepting that this is wrong can lead to potentially quite radical conclusions.
It is almost unnecessary to point out that there is widespread consensus that encouraging another to do wrong is itself wrong. Various incitement laws express our deep-seated belief that encouraging a crime is, morally speaking, committing the act itself only through an agent. Indeed, even if the event never occurs I am doing wrong in increasing the probability of harm. This need not be constrained to clear encouragement. By lying about a person to another I may encourage the latter to get angry and hurt the former without ever so much as mentioning the idea. From an ethical point of view and as long as there is the necessary guilty mindset, clearly this action is also wrong.
What I want to suggest is that there is no reason to constrain this to interactions with others. Our choices today can foreseeably alter our future actions and cause us to do real harm at a later date. Although our initial actions do not directly cause harm they increase the risk of it and, unless this can be justified (by weighing it against other factors), this must also be wrong.
But what do I mean by choices altering our future actions? An easy example would be a forgetful person choosing to throw away a note written to himself so that he will not remember to fulfil a promise. Failing to remember something does not look like a moral wrong, but acting earlier so as to cause this does. We can alter our future actions in a way which is wrongful right now.
However the central case of self-corruption is acting so as to change our character in some way. If doing so makes us more likely to cause harm in the future, then these early actions are themselves violations of the harm principle (even if harm does not in the end arise) unless they can be justified - they are prima facie wrong. To see what this means, I will first consider the example of promises.
It is sometimes suggested that unless there is a special metaphysical property to promises (in a 'thou shalt not lie' kind of way), there can be nothing wrong with violating them unless doing so also causes harm. While one might say that any breach of trust damages the sanctity of promises as a whole and so potentially society at large, this would only appear to be true where others might find out about the breach. Therefore a promise to a dying relative may often later be broken without appearing to damage anyone's trust in promises.
However, self-corruption suggests a different conclusion. Every time we break a promise, we would appear to damage our own view of the inviolability of promises. Each time we do so, we make it more likely that we will break promises again in the future, even in cases where doing so would certainly cause harm and disappointment. To some extent we corrupt ourselves, altering our character in a negative way.
At this point I should point out that I am not arguing that upholding our promises is an absolute duty. Other considerations can well justify us not doing so, perhaps even making it immoral to do so. If I promise a dying relative to marry someone I do not wish to, it is probably most sensible to break this promise, as upholding it could cause unnecessary misery and harm. It may nevertheless still have been morally permissible to make the promise as a way of putting the dying relative's mind at rest. Moreover, the situation may change after a promise so as to make performance gravely immoral. All I argue is that in all cases, self-corruption must be figured into our considerations. Where there are no sufficiently weighty countervailing considerations, there is a duty not to self-corrupt. In fact, as long as self-corruption is constrained to cases where there are powerful reasons for it, the self-corruption will be less potent - less likely to cause us to act wrongfully when these reasons do not apply.
None of this, however, looks in the slightest bit radical. If it helps us see that there is always a prima facie obligation to uphold our promises, then this does not seem to upset liberal moral theory. However, what might do so are its implications for moral 'thought crimes'. Orthodox harm principle theory suggests that mere mental activity cannot generally be wrong. Only where it actually prepares for physical behaviour leading to harm does it violate the principle. I suggest this is misguided.
If thinking in a certain way or subjecting ourselves to certain stimuli changes our character so as to make us more likely to harm others then doing so is wrong. Imagine that I know that I become violent and am liable to hurt people after watching violent films. In this case it would seem that I am under a duty not to do so, at least not when I am likely to be around people afterwards. The situation is no different to drinking alcohol when I know that this makes me violent. In either case, it is wrong for me to risk other people's safety for no good reason.
What this means is that we should consider carefully the question of which media we should expose ourselves to. If violence really does make us more violent, or pornography really does make us more likely to commit sexual offences, then unless there are sufficient moral benefits to outweigh this, we should refrain from exposing ourselves to them. Now, I am of the opinion that in most cases the benefits will outweigh this risk: Exposure to violent media often allows us to vicariously release violent tendencies, and exposure to pornography often allows us to similarly release potentially aggressive sexuality. However, in certain cases individuals may find that they respond negatively, and they should then take steps to avoid exposure.
As for the legal consequences of this argument, this is not a call for censorship. The result of the highly personalised account of the morality of violent and pornographic media set out above is that it must be judged on the individual level. However, on the moral level, my intention is to point out that we must be aware of the possibility that behaviour which looks purely private may well in fact mask violations of the harm principle through self-corruption. The idea of private activity should be carefully thought through before it is used as a general shield from criticism.
Thursday, November 02, 2006
Racism and the Pro-Life Connection
I recently read a pro-lifer suggest that the pro-life position would one day come to be seen as the anti-racism position now is. It got me thinking about the connection between the two, and I came to a very different conclusion. Obviously I understand his point - both extend protection to more human beings than previously. Nevertheless, I think that if we look a bit deeper, we will find that a pro-choice position (certainly one which does not demand equal rights from conception) is the true heir of the anti-racism movement. To be clear, in no way do I suggest or believe that pro-lifers tend to be racist. However, I think that the very success of anti-racism suggests that the pro-choice position is to be preferred.
The reason for this is that the most widely-held intellectual justification of racism was that people of certain skin colours or ethnic origins are inherently inferior in some way to people of the favoured skin colour (usually white). Slavery was justified by the idea that black people were not worthy of protection as they were not like the slave owners. Now, as we know, skin colour is a genetic variant. The suggestion was that we can determine who is worthy of protection by genetic facts. The repudiation of the racist viewpoint is therefore a rejection of the idea that looking to genetics is enough. Genetic facts were found wanting as an adequate explanation of why people are worth protecting.
On a superficial level the shift was from protection for whites to protection for humans. However, humanity is equally a genetic fact, albeit more widespread. If the shift was merely from one genetic fact to another then there appears to be no real justification for it. Why should we prefer one genetic fact to another? Was there any principle to the shift? Of course there was. People recognised that protection was needed because of the ability to suffer and feel pain or to grow and flourish. This is common to all colours and unifies our conception of those worthy of moral consideration. In short, the success of anti-racism was the success of a consideration of the characteristics of beings as beings, rather than merely their genetic make-up.
The pro-life movement (narrowly defined as those who desire protection from conception) denies this shift. It argues that what is important is the genetic fact of humanity and nothing else. Thus all those genetically human must be protected whether or not they have any capacity for consciousness, pain or pleasure. They eschew any consideration of beings as beings. While they would use the wider genetic fact of humanity as their criterion, they fail to move past its arbitrary nature and merely insist that it is intuitively true, just as white supremacy was once intuitively true for so many people.
The shift from a genetic criterion to a beings as beings criterion was one from arbitrariness to principle. It expanded the scope of protection in some ways, to those of different colours. However it also excluded those who only fulfilled the biological condition of humanity without any of the characteristics (faculties and consciousness) of beings worthy of protection. Those desperate to protect such zygotes rely on a purely genetic argument in a way which, if accepted, would damage the coherence of the anti-racism movement. In the end, the pro-choice lobby is the heir of anti-racism.