Legal Theory Blog

All the theory that fits!


This is Lawrence Solum's legal theory weblog. Legal Theory Blog comments and reports on recent scholarship in jurisprudence, law and philosophy, law and economic theory, and theoretical work in substantive areas, such as constitutional law, cyberlaw, procedure, criminal law, intellectual property, torts, contracts, etc.

Friday, December 31, 2004
Hasen on Thomas for Chief Rick Hasen has posted Why I Don't Expect There to Be a Chief Justice Thomas on Election Law Blog. Hasen has two arguments:
  • Fear of Anita Hill redux. Here's how Hasen puts it:
      Any confirmation hearing for Justice Thomas would provide Democrats (and the country's cable media, which loves salacious stories) an opportunity to reinvigorate the investigation into the Anita Hill story. David Brock, who wrote The Real Anita Hill and whose evidence was relied upon by supporters of Justice Thomas, has now claimed that his book is made up of lies. Timothy Noah of Slate has noted, in reviewing Brock's newer book, Blinded by the Right, "the unique difficulty posed by any narrative that begins, 'I'm a liar, here's my tale.'" Whether Brock was lying then or is lying now, the point is that there would now be wall-to-wall media coverage of the issue again. Why would a rational Bush administration do this, when, if James Taranto is right, Republicans get a "free pass" on a Rehnquist replacement?
    Maybe. But I am not sure that this issue has political legs--given the extensive public airing and prior confirmation. The Republicans will control the procedure of the Senate hearings, and, unless there is a new "smoking gun," it is hard to imagine they would allow any significant hearing time on this issue. The story is old news, and it seems unlikely that mainstream media would do more than a day or two on the story--absent hearings to cover.
  • Democratic mobilization on account of Thomas's age. Here's how Hasen puts the argument:
      Justice Thomas is much younger than Justice Scalia. The risks to Democrats of a chief lasting that much longer are greater, and therefore it is worth fighting harder against Thomas.
    Perhaps. But Thomas's confirmation as Chief will not change any votes. He is already on the Court. The interest groups that would oppose Scalia, Thomas, or someone new for Chief are already mobilized and likely to fight as hard as they are able against a truly conservative nominee. The real questions are whether Democrats will filibuster, whether Republicans will go nuclear, and whether the Democrats will retaliate against the nuclear option with guerilla warfare tactics that would paralyze the Senate. (More on this here.) My sense is that the Democrats would need heavy political cover to shut down the Senate--and Thomas's age simply wouldn't do the trick. Moreover, the guerilla warfare option is very risky for Democrats, because it can trigger even more draconian transfers of power to the Senate majority leadership. Thomas's age just doesn't seem to be worth the risks involved, especially since we are not talking about a shift in votes.
Read Hasen's post!

Legal Theory Books of 2004 Among the new books that came to the attention of the Legal Theory Bookworm in 2004, the following were especially interesting:

Downloads of the Year Among the many articles and papers mentioned on Legal Theory Blog in 2004, here are a few that are especially recommended:

Thursday, December 30, 2004
LoPucki & Weyrauch on Legal Strategy Lynn M. LoPucki and Walter O. Weyrauch (University of California, Los Angeles - School of Law and University of Florida, Levin College of Law) have posted A Theory of Legal Strategy (Duke Law Journal, Vol. 49, No. 6, April 2000) on SSRN. Here is the abstract:
    By the conventional view, case outcomes are largely the product of courts' application of law to facts. Even when courts do not generate outcomes in this manner, prevailing legal theory casts them as the arbiters of those outcomes. In a competing strategic view, lawyers and parties construct legal outcomes in what amounts to a contest of skill. Though the latter view better explains the process, no theory has yet been propounded as to how lawyers can replace judges as arbiters. This article propounds such a theory. It classifies legal strategies into three types: those that require willing acceptance by judges, those that constrain the actions of judges, and those that entirely deprive judges of control. Strategies that depend upon the persuasion of judges are explained through a conception of law in which cases and statutes are almost wholly indeterminate and strategists infuse meaning into these empty rules in the process of argumentation. That meaning derives from social norms, patterns of outcomes, local practices and understandings, informal rules of factual inference, systems imperatives, community expectations, and so-called public policies. Constraint strategies operate through case selection, record making, legal planning, or media pressure. Strategists deprive judges of control by forum shopping, by preventing cases from reaching decision, or by causing them to be decided on issues other than the merits. The theory presented explains how superior lawyering can determine outcomes, why local legal cultures exist, how resources confer advantage in litigation, and one of the means by which law evolves.
Highly recommended!

Goldman on Internet Trademark Law Eric Goldman (Marquette University - Law School) has posted Deregulating Relevancy in Internet Trademark Law (Emory Law Journal, Vol. 54, 2005). Here is the abstract:
    This Article examines the complex world of Internet search. The Article seeks to ensure that trademark law does not interfere with the free flow of Internet content that consumers find relevant. The Article starts with three complementary looks at Internet search from the perspectives of searchers, publishers and search providers. From the searcher's perspective, the Article explains how searchers select keywords poorly and decontextualized keywords provide inadequate insight into the searcher's true objectives. From the publisher's perspective, the Article discusses how publishers try to anticipate search keywords and provide responsive content. From the search provider's perspective, the Article shows that search providers are not passive intermediaries manipulated by deceptive publishers. Instead, search providers actively mediate the relationship between searchers and publishers, often modifying searcher keywords and publisher content to facilitate a match. The Article also explains that all search providers use keywords to make those matches, and the emergence of keyword-driven searches has eliminated any meaningful distinctions between domain names, metatags and keyword-triggered ads. Based on this factual foundation, the Article looks at Internet trademark law. The Article particularly scrutinizes the initial interest confusion doctrine, showing its doctrinal deficiencies. The Article concludes with several proposals: 1) Trademark infringement analysis should be moved to later stages of a searcher's search process because harms at earlier stages are too speculative. 2) The traditional likelihood of consumer confusion test should be updated to include a factor that considers the relevancy of content presented to searchers. 3) Search providers should be given a safe harbor from liability to encourage them to do the best job possible at delivering relevant content to searchers.

Wednesday, December 29, 2004
New on Law & Politics Book Review
    LAWLESSNESS AND ECONOMICS: ALTERNATIVE MODES OF GOVERNANCE, by Avinash K. Dixit. Princeton: Princeton University Press, 2004. 176pp. Cloth $35.00 / £22.95. ISBN: 0-691-11486-2. Reviewed by Wesley T. Milner.
    COMPARATIVE CONSTITUTIONALISM AND GOOD GOVERNANCE IN THE COMMONWEALTH: AN EASTERN AND SOUTHERN AFRICAN PERSPECTIVE, by Johan Hatchard, Muna Ndulo, and Peter Slinn. Cambridge: Cambridge University Press, 2004. 388pp. Hardback. £65.00 / $120.00. ISBN: 0-521-58464-7. Reviewed by James B. Kelly.
    A CRITICAL INTRODUCTION TO LAW, THIRD EDITION, by Wade Mansell, Belinda Meteyard and Alan Thomson. London: Cavendish Publishing, 2004. 224pp. Paper £18.95 / $38.00. ISBN: 1-85941-892-9. Reviewed by Trish Oberweis.
    BUILDING THE UK'S NEW SUPREME COURT: NATIONAL AND COMPARATIVE PERSPECTIVES, by Andrew Le Sueur (ed). New York, N.Y.: Oxford University Press, 2004. 376pp. Hardback. £50.00 / $90.00. ISBN: 0-19-926462-7. Reviewed by Carla Thorson.
    LAW AND EMPLOYMENT: LESSONS FROM LATIN AMERICA AND THE CARIBBEAN, by James J. Heckman and Carmen Pagés (eds). Chicago: University of Chicago Press, 2004. 475pp. Cloth $95.00. ISBN: 0-226-32282-3. Reviewed by Matthew M. Taylor.

Book Announcement: Soames on Reference & Description
    Reference and Description: The Case against Two-Dimensionalism, by Scott Soames. In this book, Scott Soames defends the revolution in philosophy led by Saul Kripke, Hilary Putnam, and David Kaplan against attack from those wishing to revive descriptivism in the philosophy of language, internalism in the philosophy of mind, and conceptualism in the foundations of modality. Soames explains how, in the last twenty-five years, this attack on the anti-descriptivist revolution has coalesced around a technical development called two-dimensional modal logic that seeks to reinterpret the Kripkean categories of the necessary a posteriori and the contingent a priori in ways that drain them of their far-reaching philosophical significance. 0-691-12100-1. Cloth $39.50 US and £26.95. 384 pages. 6 x 9. To read the entire book description and a sample chapter, please visit:

Klick on Salvation as Solution to Free Rider Problems Jonathan Klick (Florida State University - College of Law) has posted Salvation as a Selective Incentive (International Review of Law and Economics, Forthcoming) on SSRN. Here is the abstract:
    As club goods, religions face the problem of free riding. Smaller religious clubs, such as cults or sects, can often surmount this problem through communal pressures or by requiring their members to provide easily monitored signals. Generally, however, such tactics will be unavailable or too costly for large denominations, and, as such, these denominations must look for other techniques to avoid free riding. This paper argues that the Roman Catholic doctrine of justification by faith and works serves as an Olsonian selective incentive, and presents empirical evidence in support of this claim. Specifically, I show that Catholics contribute significantly more to their churches as they approach death than do members of Protestant denominations. More generally, this paper suggests that church doctrines influence behavioral incentives and religious leaders may be able to capitalize on these behavioral effects for the benefit of their church.

Tuesday, December 28, 2004
Onwuachi-Willig on Justice Thomas & Racial Identity Angela Onwuachi-Willig (University of California, Davis - School of Law) has posted Just Another Brother on the SCT?: What Justice Clarence Thomas Teaches Us About the Influence of Racial Identity (Iowa Law Review, Vol. 90, 2005) on SSRN. Here is the abstract:
    Justice Clarence Thomas has generated the attention that most Justices receive only after they have retired. He has been boycotted by the National Bar Association, caricatured as a lawn jockey in Emerge Magazine, and protested by professors at an elite law school. As a general matter, Justice Thomas is viewed as a "non-race" man, a Justice with a jurisprudence that mirrors the Court's most conservative white member, Justice Antonin Scalia--in other words, Justice Scalia in "blackface." This Article argues that, although Justice Thomas's ideology differs from the liberalism that is more widely held by Blacks in the United States, such ideology is deeply grounded in black conservative thought, which has a "raced" history and foundation that are distinct from white conservatism. In so doing, this Article examines the development of black conservative thought in the United States; highlights pivotal experiences in Justice Thomas's life that have shaped his racial identity; and explicates the development of Justice Thomas's jurisprudence from a black, conservative perspective in cases concerning education and desegregation, affirmative action, and crime.

Bellia on Federal Common Law & State Courts Anthony J. Bellia Jr. (Notre Dame Law School) has posted State Courts and the Making of Federal Common Law (University of Pennsylvania Law Review, Vol. 153, 2005) on SSRN. Here is the abstract:
    The authority of federal courts to make federal common law has been a controversial question for courts and scholars. Several scholars have propounded theories addressing primarily whether and when federal courts are justified in making federal common law. It is a little-noticed phenomenon that state courts, too, make federal common law. This Article brings to light the fact that state courts routinely make federal common law in as real a sense as federal courts make it. It further explains that theories that focus on whether the making of federal common law by federal courts is justified are inadequate to explain whether the making of federal common law by state courts is justified. A common premise of such theories - that courts make federal common law for the kinds of forward-looking policy reasons that would move Congress to enact a statute - largely accounts for the inadequacy. The Article proceeds to provide an account of the making of federal common law by state courts that considers historical practice, the constitutional structure, and certain normative claims about the way in which courts can and ought to make law. It concludes that state courts, when it is necessary to make federal common law in order to enforce existing law, are justified in doing so not as a deputy legislature, but as the result of attempting to best discern and apply existing principles of federal law. Finally, the Article identifies implications of this analysis for the operation of federal common law in federal courts.

Monday, December 27, 2004
Posner on Morality and Public Policy Richard Posner is guest blogging over at the Leiter Reports. He has a post entitled Faith-Based Morality and Public Policy. Here's a taste:
    There are secular moralities, such as utilitarianism. But should the Constitution, or political philosophy, be understood to prescribe utilitarianism, whether in the Benthamite or J. S. Mill versions, or maybe "secular humanism," as our civic religion? That might depend on the character of morality, on what kind of normative order morality is, exactly. Specifically, on whether it must be reasoned, functional, practical, articulably derived from or related to some unexceptionable social goal. Well, much or even most morality seems based, rather, on instinct, emotion, custom, history, politics, or ideology, rather than on widely shared social goals. Think of the absolute prohibition of infanticide in contrast to the far more tolerant view of even late-term abortions. Think of the prohibition of bullfighting, cock fights, and cruelty to animals generally. Think of the rejection in our society of the Islamic punishment code, public nudity, polygamy, indentured servitude, chain gangs, voluntary gladiatorial combat, forced redistribution of wealth, preventive war, torture, the mutilation of corpses, sex with corpses, sex with nonobjecting animals, child labor, duelling, suicide, euthanasia, arranged marriages, race and sex discrimination. Are there really compelling reasons for these unarguable tenets of the current American moral code? One can give reasons for them, but would they be anything more than rationalizations? They have causes, that history, sociology, or psychology might elucidate, but causes are not reasons.
And then Posner makes the following, extraordinary, statement:
    If morality, or at least a large part of the moral domain, lives below reason as it were, isn't the practical consequence that morality is simply dominant public opinion?
This point is a very large one--implicating more than can be adequately discussed within the compass of a blog post. But perhaps I can make a few useful comments:
  • To begin, we need a distinction between two different senses of morality. On the one hand, there is "morality" as used by both ordinary folk and moral philosophers to refer to the realm of judgments about what is good and evil, right and wrong. On the other hand, there is "morality" as used to refer to the norms of a particular culture. The term "morality" is used in both senses, and sometimes is used carelessly in ways that slide from one sense to the other.
  • In the second sense, morality as the norms of a particular culture, Posner is right: "morality is simply dominant public opinion." And how could he be wrong? That simply is the meaning of the second sense of morality. This is not, as Posner puts it, a "practical consequence" of the facts that Posner enumerates. It is, rather, a more or less conceptual (or analytic) point.
  • In the first sense, morality as the realm of judgments about right and wrong, morality is most emphatically not "simply dominant public opinion." This is easy to see, because we can readily speak of the dominant public opinion about some question as being wrong or incorrect. "Nazis thought that their treatment of Jews and other groups was morally correct, but they were wrong"--this statement does not involve an error of conceptual grammar. "Nazis thought that their treatment of Jews and other groups was morally correct, but despite the fact that most people agreed with them at the time, it was nonetheless not the norm that characterized the culture of Nazi Germany"--that statement is self-contradictory.
Posner also makes the following point about John Rawls's idea of public reason:
    Rawls and others have thought that religious beliefs shouldn't be allowed to influence public policy, precisely because they are nondiscussable. But this view rests on a misunderstanding of democracy. Modern representative democracy isn't about making law the outcome of discussion. It is not about modeling politics on the academic seminar. It is about forcing officials to stand for election at short intervals, and about letting ordinary people express their political preferences without having to defend them in debate with their intellectual superiors.
This is a rather careless reading of Rawls--who said no such thing. Rawls does advance an ideal of public reason. A very early statement of that ideal was similar to Posner's characterization of Rawls's position:
    [G]reat values fall under the idea of free public reason, and are expressed in the guidelines for public inquiry and in the steps taken to secure that such inquiry is free and public, as well as informed and reasonable. These values include not only the appropriate use of the fundamental concepts of judgment, inference, and evidence, but also the virtues of reasonableness and fair-mindedness as shown in the adherence to the criteria and procedures of common sense knowledge, and to the methods and conclusion of science when not controversial, as well as respect for the precepts governing reasonable political discussion.
But as Rawls worked out his idea, it evolved and was clarified in ways that are inconsistent with Posner's assertion that Rawls thought "religious beliefs should not be allowed to influence public policy." Here are a few of the differences between Rawls's view and Posner's characterization:
    First and foremost, Rawls's idea of public reason was limited to what he called "the constitutional essentials" and hence it did not apply to ordinary legislation. Rawls most emphatically did not believe that ordinary democratic politics should exclude reliance on comprehensive philosophical and religious conceptions of the good.
    Second, as Rawls's thought evolved, he eventually came to what he called the "wide view" of public reason. Here is how he expressed the crucial feature of the wide view:
      reasonable comprehensive doctrines, religious or nonreligious, may be introduced in public political discussion at any time, provided that in due course proper political reasons--and not reasons given solely by comprehensive doctrines--are presented that are sufficient to support whatever the comprehensive doctrines introduced are said to support.
    That is, Rawls's view of public reason does not require the exclusion of religious reasons (or the reasons provided by other comprehensive, nonreligious doctrines, such as utilitarianism). Rather, it requires the inclusion of public reasons--in due course.
Posner's characterization, "religious beliefs shouldn't be allowed to influence public policy," is inaccurate for yet another reason:
    Third, religious reasons are allowed as supporting grounds, even for the constitutional essentials, if they are the foundations (or grounds) for public reasons. For example, in our political culture, the great value of the liberty of conscience is a clear example of a public reason. (Posner refers to President Bush's formulation of this principle in his post.) But one can support the liberty of conscience for religious reasons. An example is the role the doctrine of free faith played in gaining support for the liberty of conscience early in the history of liberalism. This religious reason for supporting liberty of conscience is, more or less, that belief can lead to salvation only if the belief is free, and therefore that coerced belief cannot lead to salvation. But the value of the liberty of conscience is a public reason--one that can be shared from a variety of perspectives.
That is enough for now. Read Posner's post!

Johnson on Race Kevin Johnson (UC Davis) has posted Roll Over Beethoven: 'A Critical Examination of Recent Writing about Race' (Texas Law Review, Vol. 82, No. 717, 2004) on SSRN. Here is the abstract:
    In Crossroads and Blind Alleys: A Critical Examination of Recent Writing About Race, 62 Tex. L. Rev. 121 (2003), Professor Richard Delgado criticized the scholarly direction of Critical Race Theory (CRT). As a starting point for his criticism, Delgado reviews Crossroads, Directions, and a New Critical Race Theory (2002) edited by Francisco Valdes, Jerome McCristal Culp, and Angela P. Harris. The volume consists primarily of papers and speeches presented at the Critical Race Theory conference at Yale Law School in 1997, an important event reflecting on ten years of CRT. Among the contributors to Crossroads are influential CRT scholars Derrick Bell, Kimberle Williams Crenshaw, Charles Lawrence III, Mari Matsuda, and others. Delgado laments CRT's current focus, which he characterizes as "idealist" (and too much talk of discourse about inequality) as opposed to the "materialist" (and power disparities contributing to racial injustice). Crossroads, to Delgado, devotes too much to the ideal and, put simply, is filled with discourse about discourse. Although Delgado makes important points about the state of CRT scholarship, this response contends that he overstates the distinction between the ideal and material forms of discourse and, by so doing, excessively criticizes CRT's direction, and fails to acknowledge the emerging critical scholarship that analyzes current racial justice issues. In sum, this response questions Professor Delgado's criticism of Critical Race Theory, as well as his challenges to Critical Latina/o (LatCrit) Theory.

De Soysa, Bailey, & Neumayer on Democracy, Institutional Design, and Economic Sustainability Indra De Soysa, Jennifer Bailey, and Eric Neumayer (Norwegian University of Science and Technology; Norwegian University of Science and Technology - General; and London School of Economics - Department of Geography and Environment) have posted Free to Squander? Democracy, Institutional Design, and Economic Sustainability, 1975-2000 on SSRN. Here is the abstract:
    While democracy's effect on economic growth has come under intense empirical scrutiny, its effect on economic sustainability has been noticeably neglected. We assess the effects of regime type and democratic institutional design on economic, or "weak" sustainability. Sustainability requires that stocks of capital do not depreciate in value over time. The World Bank gauges the rate of net investment in manufactured, human, and natural capital, a unified indicator of weak sustainability (the genuine savings rate). All four indicators of democracy we examine show that freer societies have higher genuine savings rates because they invest more in human capital, create less CO2 damage, and extract fewer natural resources per economic unit produced, even if they show lower net investment in manufactured capital. Democracies may trade off immediate material welfare gains for future pay-offs. This finding justifies why scholars should assess the effects of regime type on more than just immediate growth or the rate of change of manufactured capital. Among democracies, we find that pure parliamentary systems spend more on education than do presidential ones, but exhibit no statistically significant difference for the overall genuine savings rate. Proportional representation electoral systems fare worse than plurality when it comes to genuine and net national savings, even though they do better on education spending. The results taken together show that differences in regime type and democratic institutional design allow for different trade-offs. The results are robust to a range of specifications and a developing country only sub-sample.

Brown Reviews Feelings and Emotions On Metapsychology, Sam Brown reviews an anthology entitled Feelings and Emotions: The Amsterdam Symposium by Antony S.R. Manstead, Nico H. Frijda and Agneta Fischer, Cambridge University Press, 2004. Here is a taste:
    Feelings and Emotions offers a pleasing snapshot of current scientific thinking and research on emotions. It accurately depicts the contemporary status of feelings in psychological research by largely downplaying them. Feelings, to most theorists, are the "tip of the iceberg" (e.g. Scherer, p.139): a minor facet, passive component or even a distraction in emotion theory. There are indications, however, that the study of feelings may soon make a resurgence from an unlikely quarter: neuroscience. There are also hints that previously fundamentalist positions on the notorious cognition-emotion debate are converging at last. These are the subtle trends, more implied than declared, that help to distinguish Feelings and Emotions from similar anthologies.

Sunday, December 26, 2004
Legal Theory Lexicon: Speech Acts
    Introduction Speech act theory will forever be associated with the great J. L. Austin, the Oxford philosopher whose work in the 1950s had an enormous influence on analytic philosophy. One of Austin's core insights is reflected in the title of his William James lectures, delivered at Harvard in 1955, How to Do Things with Words. When we use language, we don't just communicate information or say things about how the world is; when we use language, we do things. We command, request, apologize, contract, convey, and admonish. Speech act theory focuses on the ways in which language (both oral and written) can be used to perform actions.
    Legal theorists are interested in speech act theory for a variety of reasons, but one of the most important is that speech act theory helps to explain the way that the law uses language. Statutes, holdings, and constitutional provisions aren't like "the cat is on the mat." That is, a statute does not tell us how the world is in the same way that a declaratory sentence does. Legal language is full of speech acts. This entry in the Legal Theory Lexicon provides a rough and ready introduction to speech act theory pitched at law students (especially first-year law students) with an interest in legal theory.
    Sentences, Propositions, Meaning, and Truth There are lots of ways we could start, but let's begin with a simple sentence. "The sidebar of Legal Theory Blog contains a link to Balkinization." What does this sentence mean? One answer to that question is pretty straightforward. There is an object in the world (the sidebar of Legal Theory Blog), and that object includes another object, "a link to Balkinization." Simple declarative sentences like this have truth values (or are "truth-apt"). In this case, the sentence is true, because the sidebar of Legal Theory Blog actually does have a link to Balkinization. There is a temptation to think that all sentences are like simple declarative sentences in that (1) the meaning of the sentence can be cashed out by the way it refers to the actual world, and (2) if the sentence is meaningful (i.e., it succeeds in referring), then the sentence has a truth value.
    O.K., that was a lot to swallow, but what does it have to do with "speech acts"? Now, take this expression in English: "Please add my blog to your blogroll." Does this sentence refer to anything? Well, it does include elements that refer, e.g. "my blog" and "your blogroll." But this sentence doesn't assert that my blog is on your blogroll. It may imply that my blog currently is not on your blogroll, but that implicit assertion doesn't exhaust its meaning. The sentence "Please add my blog to your blogroll" is a request. By uttering (or posting) these words, I am making a request. If you do add my blog to your blogroll, the request will succeed. If you don't, the request will have failed. Although the request can succeed or fail, it would be strange indeed to say that "Please add my blog to your blogroll" is either true or false. Requests are not truth-apt; they do not bear truth values.
    Are there any other types of expressions that are similar to requests? Once we start looking, we will discover lots and lots. Orders, questions, offers, acceptances, warnings, invitations, greetings, welcomes, thank yous--all of these are types of expressions that do not seem to refer or to have truth values. What do these expressions mean, then, if they don't refer? When I give an order, I perform an action--the act of ordering X to do Y. When I make an offer, I perform an action--the act of creating a legally effective option for the offeree to form a legally binding contract by accepting the offer. When I extend an invitation to a party, I perform an action--the act of inviting person P to event E. Speech act theory begins with the idea that language can be used to perform actions.
    Form and Function We might be tempted to think that we can tell the difference between sentences that describe the world and expressions that perform actions simply by their form. So we might be tempted to say, "Sentences of the form X is Y express propositions that refer," whereas sentences of the form, "I hereby do X" perform a speech act. But language is much messier than this. Take the sentence, "This room is a pig sty." In some contexts, this sentence might be referential. If one were taking a tour of an animal husbandry research facility, the sentence "This room is a pig sty" might express a true proposition about the function of a particular room. But if the same words were used by a parent, in an annoyed tone, and directed to a teenage child, the real meaning of the expression might be, "Clean up your room!" Certain forms are characteristically associated with propositions that refer and others with the performance of speech acts, but the question of meaning depends on the context of utterance.
    Utterance, Locution, Illocution, Perlocution With the basic idea of a speech act under our belts, we can now introduce a useful set of terminological distinctions:
    • Utterance--We can use the term "utterance" to refer to the words (e.g. the sounds or letters) that constitute a particular use of language.
    • Locution--We can use the term "locution" to refer to the semantic meaning of the utterance.
    • Illocution--We can use the term "illocution" to refer to the speech act that is performed by use of a particular utterance in a particular context.
    • Perlocution--We can use the term "perlocution" to refer to the effect that a given expression has when it is uttered in a particular context.
    Take the example of the sentence, "This room is a pig sty." The utterance is simply the words that are used: suppose this is an oral statement in English made by a parent to a child on a particular occasion. The same parent could utter similar words in English (or another language) that have the same semantic content. "The family room is a pig sty" would express the same propositional content as "This room is a pig sty" if "this room" referred to the family room. The illocutionary force of this statement is ambiguous. If the child spoken to was responsible for the mess, then both parent and child might understand that "This room is a pig sty" is the equivalent of "Clean up this room." The same illocutionary force can be obtained by a variety of expressions. Finally, the perlocutionary effect of "This room is a pig sty" will also depend on context. The effect might be to produce shame, but it might also produce anger. Thus, one utterance has locutionary content, illocutionary force, and perlocutionary effect.
    A Typology of Speech Acts One of the tasks of speech act theory has been to develop typologies of speech acts. Here is one typology developed by Bach and Harnish:
    • Constatives: affirming, alleging, announcing, answering, attributing, claiming, classifying, concurring, confirming, conjecturing, denying, disagreeing, disclosing, disputing, identifying, informing, insisting, predicting, ranking, reporting, stating, stipulating
    • Directives: advising, admonishing, asking, begging, dismissing, excusing, forbidding, instructing, ordering, permitting, requesting, requiring, suggesting, urging, warning
    • Commissives: agreeing, guaranteeing, inviting, offering, promising, swearing, volunteering
    • Acknowledgments: apologizing, condoling, congratulating, greeting, thanking, accepting (acknowledging an acknowledgment)
    There are other ways of slicing and dicing the types of speech acts, but Bach and Harnish's typology gives a good sense of how such a typology might work.
    Speech Act Theory and Legal Theory How can legal theorists use speech act theory? We could start by noting the important role that speech acts play in the law. Laws themselves might be seen as speech acts--as types of commands or authorizations. In contract law, issues of contract formation frequently turn on whether particular utterances were speech acts of particular types. Was this utterance an offer? Was that statement an acceptance? In a very general way, speech act theory is helpful simply because it allows us to understand legal phenomena from a new angle.
    Speech act theory may also be helpful in resolving particular sorts of doctrinal puzzles. For example, in the theory of the freedom of speech, one might be puzzled about the unprotected status of certain expressions. Oral contracts are speech. Threats are speech. An order from a Mafia boss to a hitman is speech. But no one thinks that these instances of speech raise serious questions under the First Amendment. Why not? One possible answer to this question could begin with the "marketplace of ideas" theory of free speech famously associated with Justice Holmes--a theory that emphasizes the role of freedom of speech in facilitating the emergence of truth from unrestricted public debate and discussion. Directive speech acts, such as orders, do not make truth claims, and hence might be entirely outside the freedom of speech. But constative speech acts, such as affirming, conjecturing, or disagreeing, do make truth claims and hence would raise free speech issues on the marketplace of ideas theory. Of course, one paragraph does not a theory of the freedom of speech make--for more on this, see my Freedom of Communicative Action.
    Here is another example. The hearsay rule is notoriously difficult to conceptualize precisely, because the canonical formulation, that hearsay is "an out-of-court declaration introduced for the truth of the matter asserted," is not transparent. Speech act theory may perform a clarifying function. The phrase "out-of-court declaration" may be clarified by reference to the categories of speech acts: out-of-court declarations are constative speech acts. Other categories of speech acts, e.g. directives, commissives, and acknowledgments, are not declarations. Moreover, the phrase "for the truth of the matter asserted" may be illuminated by distinguishing propositional contents, which may bear truth values, on the one hand, from illocutionary force and perlocutionary effects, on the other. The hearsay rule is usually not violated if an out-of-court declaration is introduced for the purpose of demonstrating its illocutionary force. For example, a third party can testify to the making of an oral contract for the purpose of showing that the action--making the contract--was performed.
    If you are interested in acquiring a very basic knowledge of speech act theory, I recommend that you start with Austin's How to Do Things with Words. Although many of Austin's particular points have been criticized or superseded by subsequent work, it remains a marvelous book--concise, illuminating, and a model of ordinary language philosophy at its best. More advanced readings are included in the bibliography below.
    Links Bibliography
    • Austin, J. L. (1962) How to Do Things with Words, Cambridge, Mass.: Harvard University Press.
    • Bach, K. and R. M. Harnish (1979), Linguistic Communication and Speech Acts, Cambridge, Mass.: MIT Press.
    • Grice, H. P. (1989) Studies in the Way of Words, Cambridge, Mass.: Harvard University Press.
    • Searle, J. (1969) Speech Acts: An Essay in the Philosophy of Language, Cambridge, Eng.: Cambridge University Press.
    • Strawson, P. F. (1964) 'Intention and convention in speech acts', Philosophical Review 73: 439-60.

Saturday, December 25, 2004
Legal Theory Bookworm The Legal Theory Bookworm recommends On The Rule of Law: History, Politics, Theory by Brian Z. Tamanaha. Here's a brief description:
    Although it is currently the most important political ideal, there is much confusion about what the 'rule of law' means and how it works. Brian Tamanaha outlines the concerns of Western conservatives about the decline of the rule of law and suggests reasons why the radical Left have promoted this decline. Two basic theoretical streams of the rule of law are then presented, with an examination of the strengths and weaknesses of each. The book's examination of the rule of law on a global level concludes by deciding whether the rule of law is a universal human good.

Download of the Week The Download of the Week is Minimalism at War by Cass R. Sunstein. Here is the abstract:
    When national security conflicts with individual liberty, reviewing courts might adopt one of three general orientations: National Security Maximalism, Liberty Maximalism, and minimalism. National Security Maximalism calls for a great deal of deference to the President, above all because of his authority as Commander-in-Chief of the Armed Forces. Liberty Maximalism asks courts to assume the same liberty-protecting posture in times of war as in times of peace. Minimalism asks courts to follow three precepts: the President needs clear congressional authorization for intruding on interests having a strong claim to constitutional protection; fair hearings should generally be provided to those who have been deprived of their freedom; and courts should discipline their own authority through narrow, incompletely theorized rulings. Of the three positions, Liberty Maximalism is the easiest to dismiss; courts will not and should not adopt it. National Security Maximalism is far more plausible, but it is in grave tension with the constitutional structure, and it is built on excessive optimism about the incentives of the President. The most appealing approach is minimalism, which does remarkably well in capturing prominent decisions of the Supreme Court in World War I, World War II, the Cold War, and the war on terrorism.
Download it while it's hot!

Friday, December 24, 2004
Two Papers by Rawls Online Two well-known papers by John Rawls, Two Concepts of Rules and Justice as Fairness are now available online at HIST-ANALYTIC. I suspect most readers of LTB are familiar with these papers, at least by reputation. If not, these two papers are among the most important in modern political and moral philosophy. Very highly recommended.

Confirmation Wars Department: Bush to Resubmit 20 Nominees The Los Angeles Times reports:
    President Bush intends to renominate to federal judgeships 20 candidates who failed to win Senate approval during his first term, the White House said Thursday.
Here is the list:
    Court of Appeals
      Terrence Boyle, 4th Circuit
      William J. Haynes II, 4th Circuit
      Priscilla Richman Owen, 5th Circuit
      David W. McKeague, 6th Circuit
      Susan Neilson, 6th Circuit
      Henry W. Saad, 6th Circuit
      Richard A. Griffin, 6th Circuit
      William H. Pryor Jr., 11th Circuit
      William G. Myers III, 9th Circuit
      Janice Rogers Brown, District of Columbia Circuit
      Brett M. Kavanaugh, District of Columbia Circuit
      Thomas B. Griffith, District of Columbia Circuit
    District Courts
      James C. Dever III, Eastern District, North Carolina
      Robert J. Conrad, Western District, North Carolina
      Thomas L. Ludington, Eastern District, Michigan
      Sean F. Cox, Eastern District, Michigan
      Daniel P. Ryan, Eastern District, Michigan
      Peter G. Sheridan, New Jersey
      Paul A. Crotty, Southern District, New York
      J. Michael Seabright, Hawaii

Lipshaw on Rational Choice Modelling of Judicial Decision Making In reply to Mialon, Rubin, & Schrag on Judicial Hierarchies & Judicial Preferences & A Comment on Rational Choice Modelling of Judicial Decision Making (posted yesterday), Jeff Lipshaw writes:
    As a relative newcomer to the world of legal theory, I am alternately amazed and aghast at the confusion around attempts to model legal or moral choice as though they were analogous to the pricing decisions facing the sellers and purchasers of summer wheat. (No doubt this is why Sen is your favorite economist.) To your point, microeconomic models, in their sphere, explain so much that we can accept their relatively minor failings. I may disagree whether it is appropriate to have so much in merger analysis hinge on the market definition issues that precede the Herfindahl calculations, but there can be little doubt that economic analysis brings insight into what is fundamentally a contingent economic issue: is consumer welfare likely diminished or enhanced by the merger? I think there is more out there than you credit in the attempt to have rational choice theory explain (and predict) individual action. In Economic Analysis of Law, Richard Posner observes: “The basic assumption, that human behavior is rational, seems contradicted by the experiences and observations of everyday life. The contradiction is less acute when one understands that the concept of rationality used by the economist is objective rather than subjective, so that it would not be a solecism to speak of a rational frog.” In his most recent writing, he proposes an algorithm for the relationship between ex ante and ex post costs of contract construction: “The equation thus identifies the essential tradeoffs in analyzing the interpretation problem: the more the parties invest at the first stage, the lower the expected costs at the second stage.” I suppose this still purports to be an objective assessment, though to what end I am not sure, because the model assumes that what we were spending the costs to clarify ex ante bears some relationship to the dispute ex post (see my Bewitchment paper, which you have previously been kind enough to post on your blog).
Or to put it another way, the model is only helpful, I think, if individuals, in the exercise of subjective judgment, actually make that calculation. Otherwise, what is the point? We should have a policy that we incur lots of buckshot ex ante costs regardless of their impact on the outcome? When the world is so complex that the exceptions to the model subsume the model's predictive capability, we have to step back and ask what we are doing. What I see is the desire, notwithstanding the disclaimer (also see the work of Eric Posner in this regard), to find the last link that would unravel, a la Freud, the mystery of subjective choice (hence, Judge Posner's view that it is an embarrassment that economics cannot model how a judge will decide). As we move from the objective to the subjective, from the collective to the individual, and from simple modelable decisions to those most normal humans would never permit to be resolved by a computer, the likelihood, as you point out, that we will ever find a single predictive maximand decreases. Yet this is the overwhelming thrust of current scholarship. I am skeptical - no, quite positive - that there is no unified field theory equivalent of the objective and subjective, because despite the yearning for attainment of an Unconditioned First Principle of human behavior in science or economics (see my paper Contingency and Contracts: A Philosophy of Complex Business Transactions), no algorithm will ever tell a judge how to temper justice with mercy. That is because in the moment of decision, when it is time to apply a rule to a circumstance, we are free. (In this statement, I reveal myself as a child of Kant and Wittgenstein, and not Freud or any other philosophical determinist. I also recommend Linda Ross Meyer's "Is Practical Reason Mindless?" 86 Geo. L.J. 647 (1998).) We can measure (in theory) how most of us react and decide in that moment, but the possibility that one of us dissents and says "aha, now I understand the principle to be something else" invokes something we will never be able to measure. (My Bewitchment paper discusses this in the context of contract interpretation.) (For an application of this to the mystery of judging, where the demand of justice radically exceeds the ability of law or jurisprudence to address it, see the postscript to Hannah Arendt's Eichmann in Jerusalem.) My friend Susan Neiman uses Kant's thought experiment to demonstrate the crossing from rational self-interested calculation to free moral choice. If the punishment for frequenting a brothel is that I will be hanged when I come out, it is of no great consequence to predict that I will find a way to avoid going in. But if now I am ordered to kill an innocent person on pain of my own death, the possibility that I will choose my own death proves that I am free. That moment of choice, which is the same as the moment of freedom when I apply a rule to the next circumstance or when I have a creative epiphany (also see Linda Ross Meyer, Beyond Reason and Power: Experiencing Legal Truth, 67 U. Cin. L. Rev. 727 (1999)), is simply inconsistent with an objective model that, in its heart, yearns to predict the subjective.

Thursday, December 23, 2004
Mialon, Rubin, & Schrag on Judicial Hierarchies & Judicial Preferences & A Comment on Rational Choice Modelling of Judicial Decision Making
    Introduction I recently read with interest a paper by Hugo M. Mialon, Paul H. Rubin, and Joel L. Schrag (all of the Department of Economics, Emory University) entitled Judicial Hierarchies and the Rule-Individual Tradeoff. (It's available on SSRN.) The paper deals with a cluster of topics that deeply interest me: rational choice modelling of judicial decision making, the relationship between trial and appellate courts, and theories of judicial motivation and character. This is an interesting paper, but as I read it I was acutely conscious of the vast gulf between economic models of judging and the phenomena being modelled. I have some comments on their paper, but first let me give you the abstract.
    The Abstract
      We analyze decision-making in a simple model of the judicial hierarchy. We assume that trial court judges are more concerned with ex post efficiency with respect to the individuals involved in the cases at hand, and less concerned with ex ante efficiency with respect to the precedents established for society, than are appeals court judges. This implies that the preferred decisions of appeals court judges differ systematically from those of trial court judges. Appeals court judges can enforce their preferred decisions by reversing those of the trial court judges. However, in the model, litigants do not always appeal decisions that would be reversed, both because appeals are costly and because the outcome is uncertain. Consequently, appeals court judges may prefer to enact higher level rules that reduce the discretion of all judges.
    Some Difficulties with Modelling Judicial Decision Making One of the nice things about this paper is its very succinct summary of the literature. The authors include the famous quote from Richard Posner, which is worth repeating:
      “At the heart of economic analysis of law is a mystery that is also an embarrassment: how to explain judicial behavior in economic terms...”
        --Richard Posner, "What Do Judges Maximize? (The Same Thing Everybody Else Does)," 3 Supreme Court Economic Review 1, 2 (1993).
    Read the paper for the full literature summary, but here is a taste:
      There have to date been several attempts to explain judges’ behavior. Posner (1992) has argued that judges seek efficiency, and, more recently (since becoming one), that they seek the pleasures of spectators at plays (Posner, 1993). Whitman (2000), Miceli and Cosgel (1994), Rasmusen (1994), and Kornhauser (1992a,b) have argued that judges' behavior is based on a tradeoff between writing decisions they prefer and the possibility of reversal, either by higher courts or by future judges. Kobayashi and Lott (1994) have argued that judges will want to maximize litigation, and therefore will seek inefficient rules.
    Lawyers are likely to be quite skeptical of all these models. Actual judicial motivations are complex and judicial behavior results from more than just motivations. Let me share just a few thoughts about these complexities by way of illustration:
    • Judges are motivated in part by the desire to do what the law requires. Extreme forms of legal realism may deny that this is the case, but it is extremely difficult to reconcile such global skepticism with the facts. There may be a few judges who don't care a whit for the law, but this kind of corruption is relatively rare. Some judges want to do what is legally correct above all else, and almost all judges want to do what is legally correct at least some of the time.
    • But realist judging is also a fact. Some judges want to impose their personal preferences about outcomes through judicial fiat. Other judges believe that the law frequently gives them a certain amount of "wiggle room" that allows them to do what they think is right (as a matter of political morality) at least in some cases.
    • We might call the desire to do what the law requires "formalism" and the desire to do what the judge prefers "realism." Particular judges are likely to possess a mix of realist and formalist motivations, and the mix may vary from context to context. A particular judge might be a formalist in cases involving commercial law, but more realist in cases involving civil or constitutional rights.
    • This picture is complicated by the fact that judges have other motivations that may influence their decisions. Some judges may hope for promotion to a higher court, leading them to shade their decisions to increase the chance of their being selected by a President or Governor with the power to promote them. Other judges may be partial to one (or more) of the parties to a dispute. Yet others may be prone to bias against a party or prone to anger that distorts their judgment. And of course, there is bad old-fashioned corruption. Some judges are disposed to resist these temptations--they have what I have called the "judicial virtues." Other judges are disposed to give in to temptation--they suffer from "judicial vice."
    • Yet more complexity is added by the other inputs to judicial decision making. For example, one party to a dispute may have a better lawyer who provides higher quality inputs in the form of evidence and legal argument. In other cases, the lawyers for both sides may be poor; as a consequence, the judge may decide based on a distorted picture of the facts and the law.
    • Moreover, in the United States we have a multi-tiered system of appellate review. In the federal system, this usually means a trial court (the United States District Courts), an appellate court (the United States Courts of Appeals), and a court of last resort (the United States Supreme Court). If the case originates in a state system, there may be a trial court, intermediate appellate court, state court of last resort, with an appeal to the United States Supreme Court on questions of federal law--for a total of four tiers.
    • Multi-tiered appellate review is further complicated by the complex system of rules governing deference vel non in appellate review of trial court decisions. Vastly simplifying, there are three basic standards of appellate review:
        --De Novo: Questions of law are reviewed de novo (as new), meaning that the appellate court owes no deference to the trial court's decision.
        --Clearly Erroneous: Judicial findings of fact are reviewed (in the federal system) under a clearly erroneous standard. This is a highly deferential standard. The appellate court should uphold findings of fact that the appellate judges believe are in error, unless the error is clear. If the disagreement is over which witness to believe (credibility determinations), which of several reasonable inferences to draw from the evidence, or how to balance conflicting evidence, then the appellate court should uphold the trial judge's finding of fact.
        --Abuse of Discretion: Managerial decisions by trial judges, as well as other decisions that characteristically require the exercise of practical judgment, are frequently reviewed for abuse of discretion. This standard requires the appellate court to defer to the discretion of the trial court judge--even if the appellate judges would have decided the issue in a different way.
      Juries add even more complexity, but let's set that aside for now.
    • Finally, judges also vary in their abilities. Some judges are learned; others are relatively ignorant of the content of the law. Some judges are brilliant; others are of average (or rarely, below average) intelligence. Some judges have good practical judgment or common sense; others are somewhat foolish.
    So it is not surprising that it is difficult to produce a robust model of judicial decision making. A model of economic behavior by firms that assumes profit maximization misses quite a lot, but it gets so much right that it yields useful and interesting predictions. A model that simply assumes judges try to maximize efficiency (or any other single maximand) will be just plain awful. And if there are several different competing goals that judges pursue, it will be quite difficult to model their internal deliberative processes for familiar reasons.
    This should come as no surprise. When judges decide cases they are engaged in a complex practical activity that responds to differential and imperfect information as well as individuated motivations and abilities. We don't expect rational choice models to predict individual behavior in detail in particular choice situations: try asking an economist to predict what you will do tomorrow!
    Assumptions & Reactions Back to the paper! Mialon, Rubin, & Schrag make a number of assumptions--as good modellers must. Assumptions must be simple in order to get robust models off the ground, but some of their assumptions weren't so much "simple" as "simply wrong." Here are two examples:
    • Mialon, Rubin, & Schrag state "The trial court judge JT decides the cases that arise in his jurisdiction, which corresponds to the geographic area where his rulings serve as precedents unless they are overturned on appeal." This assumption sounds quite reasonable, but it is inaccurate as a matter of fact in almost all jurisdictions. Trial court decisions on questions of law do not set precedents. Thus, a decision by a judge in the Southern District of New York (the federal trial court that encompasses Manhattan) does not set a precedent for other Southern District judges; in fact, it doesn't even set a precedent for the very judge who made the decision. In our system, mandatory stare decisis or binding precedents can only be set by appellate courts.
    • Another assumption is formulated as follows: "We assume that trial court judges are more concerned with ex post efficiency with respect to the individuals involved in the cases at hand, and less concerned with ex ante efficiency with respect to the precedents established for society, than are appeals court judges." Some judges actually are concerned with "efficiency," ex post or ex ante. But despite the influence of the law and economics movement, efficiency as a motive is quite rare. Many judges have only the vaguest idea what the economic conception of efficiency is; of those who understand it, only a few endorse it as a proper end of judicial decisionmaking. Of those who endorse it, fewer still consider it the only proper end. Very few trial judges are likely to endorse "ex post efficiency" as a permissible goal of decision making. (I assume that "efficiency" means Kaldor-Hicks efficiency or welfare maximization as defined by some Bergson-Samuelson social welfare function.) If a judge were to explicitly state that her decision was based on ex post efficiency, she would be reversed: such a decision would be so bad that it would be grounds for a writ of mandamus (allowing an instant appeal) even in the absence of a final judgment.
    Lochner One of the most extraordinary passages in Judicial Hierarchies and the Rule-Individual Tradeoff deals with the demise of Lochner and the rise of the New Deal Court. Here are the key passages:
      The political process, in attempting to alleviate the shock of the Great Depression, behaved in its normal manner and adopted various short term solutions as part of the New Deal. Since the problem was catastrophic, the solutions adopted were extraordinary. Among these were the Supreme Court rulings reducing contractual freedom. While these rulings were aimed at protecting statutes that would have previously been overturned, the effect was to institute rules that overturned explicit contracts.
      Of course, this outcome was not instantaneous. Lower courts could not immediately adopt principles favoring individuals at the expense of rules. For one thing, the subsidiary rules were not known or even knowable because under a regime of free contract, many events that would later be interpreted as violations were not so considered. If a contract protected a manufacturer against all claims arising from all product related injuries, for example, then there would be no need to determine if liability was based on negligence or was strict, and no need to distinguish between design and manufacturing defects. The law would not even recognize the existence of these differences; they are legally hidden distinctions. Moreover, to change the law requires bringing cases that establish new precedents. This is also a time consuming process. A case must first be heard at the level of the trial court. Before it becomes a precedent, however, it must be appealed through the higher level courts and affirmed. Each step takes time. Agents conducting the appeals process to change the law in the way discussed here include plaintiffs, and also attorneys (Rubin and Bailey, 1994).
      However, as argued in this paper, lower level courts provide less protection to rules, and more to individuals, than do higher level courts. At any given time, lower courts would prefer more individual-oriented rulings than they are allowed by the higher courts. Thus, if a higher court changes rulings to allow more attention to individuals (as did the Supreme Court after the Great Depression), then lower courts will gladly adopt these rulings. On the other hand, if the higher court has moved in the other direction, announcing more emphasis on rules (perhaps the situation that now obtains in the courts, with many Reagan-Bush judges in the Supreme Court), then we would expect the lower level courts to resist moving to this new level. Thus, for example, it should take longer to reverse the movement away from freedom of contract than it took to implement the movement in the first place.
    Wow! I don't really know how to take these assertions. Presumably, all of these assertions include implied modal constraints, e.g. "It could have been the case that . . ." The authors can't possibly believe that they know that their story about Lochner is true; at best it is a speculative possibility.
    Even as speculative possibility, however, this account is highly contestable. Consider the following points:
    • The Lochner era included a complex web of legal rules. The diminution of the federal right of freedom of contract doesn't by itself establish the content of the law. Why not? Because Lochner implicates federalism as well as the substance of liberty of contract--shifting power from federal courts to state institutions. At the same time, however, the New Deal Court was expanding the power of Congress--shifting power from state institutions to Congress.
    • It is very difficult to explain the results in particular Lochner era decisions on the basis of ex post efficiency. Take Lochner itself. The dispute was over the validity of regulations of the hours that might be worked by bakers. There is no evidence that the New York legislature was motivated by the plight of specific bakers from the ex post perspective, as opposed to the welfare of bakers as a general class. The ex ante/ex post distinction cuts no ice here.
    • More generally, even if we were to try to explain New Deal jurisprudence on the basis of efficiency, there are ex ante explanations. If Roosevelt's appointees were motivated by efficiency, then it seems most likely that they would have believed that New Deal policies would be welfare maximizing from the ex ante perspective. Do Mialon, Rubin, & Schrag really believe that the New Deal court thought it was making society worse off, ex ante?
    Conclusion I hope you will read Judicial Hierarchies and the Rule-Individual Tradeoff. It is fascinating, both on its own terms and as an illustration of the difficulties faced by economic models of the judicial process. Highly recommended!

Wednesday, December 22, 2004
Westen on Consent Peter K. Westen (University of Michigan Law School) has posted Some Common Confusions About Consent in Rape Cases (Ohio State Journal of Criminal Law, Vol. 2, No. 1, pp. 332-359, Fall 2004) on SSRN. Here is the abstract:
    Consent to sex matters, because it can transform coitus from being among the most heinous of criminal offenses into sex that is of no concern at all to the criminal law. Unfortunately, the normative task of making the law of rape more just is commonly impaired by conceptual confusion about what consent means. Consent is both a single concept in law and a multitude of opposing and cross-cutting conceptions of which courts and commentators tend to be only dimly aware. Thus, consent can be a mental state on a woman's part, an expression by her, or both; it can consist of facts about a woman's mental state or expressive conduct that do not necessarily constitute a defense to rape or consist only such facts as do constitute a defense to rape; and it can consist of facts about a woman's mental state or expressive conduct or a legal fiction of such facts. In so far as we are unaware of the ways in which this conceptual framework structures the way we think about consent, we risk confusing ourselves and others in undertaking to make the law of rape more just. Some examples are (1) confusion as to whether the defense of consent ought to be deemed to consist of a mental state on a woman's part or an expression; (2) confusion about the relationship between consent to sexual intercourse and resistance to it; and (3) confusion about the relationship between force and non-consent.

Lerner & Tirole on the Economics of Information Sharing Josh Lerner and Jean Tirole (Harvard University - Finance Unit and University of Toulouse I - GREMAQ) have posted The Economics of Technology Sharing: Open Source and Beyond on SSRN. Here is the abstract:
    This paper reviews our understanding of the growing open source movement. We highlight how many aspects of open source software appear initially puzzling to an economist. As we acknowledge, our ability to answer confidently many of the questions raised here is likely to increase as the open source movement itself grows and evolves. At the same time, it is heartening to us how much of open source activities can be understood within existing economic frameworks, despite the presence of claims to the contrary. The labor and industrial organization literatures provide lenses through which the structure of open source projects, the role of contributors, and the movement's ongoing evolution can be viewed.

Hay & Spier on Manufacturer Liability for Other-Caused Harms Bruce L. Hay and Kathryn E. Spier (Harvard Law School and Northwestern University - Kellogg School of Management) have posted Manufacturer Liability for Harms Caused by Consumers to Others on SSRN. Here is the abstract:
    Should the manufacturer of a product be held legally responsible when a consumer, while using the product, harms someone else? We show that if consumers have deep pockets then manufacturer liability is not economically efficient. It is more efficient for the consumers themselves to bear responsibility for the harms that they cause. If homogeneous consumers have limited assets, then the most efficient rule is "residual-manufacturer liability" where the manufacturer pays the shortfall in damages not paid by the consumer. Residual-manufacturer liability distorts the market quantity when consumers' willingness to pay is correlated with their propensity to cause harm. It distorts product safety when consumers differ in their wealth levels. In both cases, consumer-only liability may be more efficient.
Kathy Spier presented this paper at the USD/UCSD Law, Economics, & Politics series in the Spring. Recommended.

Tuesday, December 21, 2004
Downward Spirals Department Courtesy of Howard Bashman, I came across Supreme battle looms for Rehnquist successor by Andrew Miga in the Boston Herald. Here is a snippet:
    Liberal and conservative groups alike are mounting pre-emptive strikes, plotting strategy, raising money, planning public opinion polls, researching prospective nominees and organizing networks of supporters for what is expected to be the most explosive confirmation fight in decades.
. . . and . . .
    People for the American Way, a high-profile liberal group, has already put together a Supreme Court confirmation war room replete with nearly three dozen computers. Such conservative groups as the American Center for Law, meanwhile, have already amassed a war chest of more than $3 million for the anticipated fight.

Pardy on Ecosystem Management Bruce Pardy (Queen's University (Canada) - Faculty of Law) has posted Changing Nature: The Myth of the Inevitability of Ecosystem Management (Pace Environmental Law Review, Vol. 20, Summer 2003) on SSRN. Here is the abstract:
    Must ecosystems be managed? Ecosystem management is a process that measures, controls and changes ecosystems to produce the most desirable environment in human terms. It derives its legitimacy from two developments, the theory of nonequilibrium in ecosystems and the extinction of pristine systems: ecosystems exist in a fluid and dynamic state, and there are no ecosystems that are completely unaffected by human impact. Therefore, according to the prevailing view, it is not possible to preserve ecosystems in a natural state. The purpose of this article is to question the logic of the foregoing conclusion. Neither nonequilibrium nor the absence of pristine systems dictates that ecosystems must be controlled and deliberately changed. The argument presented in this article is not that natural is preferable, but that it is possible, and that the debate between ecological preservation and environmental utilitarianism can and should occur. If science and law dictate that there are no options but to deliberately change ecosystems, as the managers would have us believe, then the debate has no relevance. Thus, the case is not that ecological preservation is a better choice than ecosystem management, but that there is a choice to make. Ecosystem management is a policy choice masquerading as an inevitability.

Khanna on Corporate Crimes Legislation Vikramaditya S. Khanna (University of Michigan at Ann Arbor - Law School) has posted Politics and Corporate Crime Legislation (Regulation, Vol. 27, No. 1, pp.30-35, Spring 2004) on SSRN. Here is the abstract:
    The recent spate of alleged corporate fraud has led to calls for new corporate crime legislation. Interestingly, there are already many such laws; before the passage of the Sarbanes-Oxley Act in 2002, some 300,000 federal corporate criminal offenses were already on the books. How did so much corporate crime legislation get enacted, given the lobbying strength of corporate interests? We would expect that wealthy, organized corporations would largely be able to get their way in the legislative process, yet they appear to be losing the battle over corporate crime legislation. How can we explain that outcome?

Conference Announcement: Value Inquiry
    The 32nd Conference on Value Inquiry will be held at Louisiana State University, 8-10 April, 2005, on Reason and Evaluation. Broad participation is sought. Papers and proposals for papers that assess the nature and practice of reason, and the nature and practice of evaluation, are welcome. Early submission is strongly encouraged. Papers should be twenty minutes' reading time. Plenary speakers include Alan Koors, David Copp, Marina Oshana, and Thomas Magnell. For more information and submission of an abstract or paper contact: James Stacey Taylor, Conference Coordinator, 32nd Conference on Value Inquiry, Department of Philosophy and Religious Studies, Louisiana State University, Baton Rouge, LA 70803, USA. Email:

Monday, December 20, 2004
Conference Announcement: Honoring and Examining the Work of Susan Moller Okin
    CONFERENCE ANNOUNCEMENT Toward a Humanist Justice: A Conference Honoring and Examining the Work of Susan Moller Okin February 3-5, 2005 Stanford University Susan Moller Okin passed away unexpectedly in March 2004. Organized by Rob Reich (Political Science) and Debra Satz (Philosophy), Stanford University will sponsor a public conference honoring and examining Okin’s scholarship. The conference will cover the four major areas of Okin's contributions to contemporary debates: justice and the family, multiculturalism and liberalism, gender and international development, and women in the history of political thought. Paper givers and commentators include: Brooke Ackerly (Vanderbilt); Corey Brettschneider (Brown); Eamonn Callan (Stanford); Joshua Cohen (MIT); Judith Goldstein (Stanford); Russell Hardin (NYU); Sally Haslanger (MIT); Robert Keohane (Princeton); Chandran Kukathas (Utah); Catharine A. MacKinnon (Michigan); David Miller (Oxford); Carole Pateman (UCLA); Rob Reich (Stanford); Nancy Rosenblum (Harvard); Debra Satz (Stanford); Molly Shanley (Vassar); Ayelet Shachar (Toronto); John Tomasi (Brown); Elizabeth Wingrove (Michigan); Iris Marion Young (Chicago). The conference is open to the public, but registration is required. For more information on registration, email Joan Berry, For a full conference schedule, see the conference website at:

Sunstein on Minimalism at War Cass R. Sunstein (University of Chicago Law School) has posted Minimalism at War (Supreme Court Review, Forthcoming) on SSRN. Here is the abstract:
    When national security conflicts with individual liberty, reviewing courts might adopt one of three general orientations: National Security Maximalism, Liberty Maximalism, and minimalism. National Security Maximalism calls for a great deal of deference to the President, above all because of his authority as Commander-in-Chief of the Armed Forces. Liberty Maximalism asks courts to assume the same liberty-protecting posture in times of war as in times of peace. Minimalism asks courts to follow three precepts: the President needs clear congressional authorization for intruding on interests having a strong claim to constitutional protection; fair hearings should generally be provided to those who have been deprived of their freedom; and courts should discipline their own authority through narrow, incompletely theorized rulings. Of the three positions, Liberty Maximalism is the easiest to dismiss; courts will not and should not adopt it. National Security Maximalism is far more plausible, but it is in grave tension with the constitutional structure, and it is built on excessive optimism about the incentives of the President. The most appealing approach is minimalism, which does remarkably well in capturing prominent decisions of the Supreme Court in World War I, World War II, the Cold War, and the war on terrorism.

Dogan & Lemley on Merchandising Rights Stacey L. Dogan and Mark A. Lemley (Northeastern University School of Law and Stanford Law School) have posted The Merchandising Right: Fragile Theory or Fait Accompli? on SSRN. Here is the abstract:
    Trademark merchandising is big business. One marketing consultant estimated the global market for licensing and marketing sports-related merchandise at $17 billion in 2001. With this much money at stake, it's no surprise that trademark holders demand royalties for use of "their" marks on shirts, keychains, jewelry, and related consumer products. After all, the value of these products comes largely from the allure of the trademarks, and it seems only fair to reward the party that created that value…doesn’t it? It turns out that the answer is more complicated than this intuitive account would predict. Trademark law historically has existed primarily to protect against the consumer deception that occurs when one party attempts to pass off its products as those of another. From an economic and policy perspective, it is by no means obvious that trademark holders should have exclusive rights over the sale of products that use marks for their ornamental or "intrinsic" value, rather than as indicators of source or official sponsorship. Trademark law seeks to promote, rather than hinder, truthful competition in markets for products sought by consumers; if a trademark is the product, then giving one party exclusive rights over it runs in tension with the law's pro-competitive goals, frequently without any deception-related justification. On the other hand, there may be circumstances in which consumers expect that trademark holders sponsored or produced products bearing their mark, in which case use of the mark by others - even as a part of a product - might result in genuine confusion. Given these complexities, together with the economic interests at stake, one might expect that the law and practice of merchandising rights would be well-settled and reflect a considered balancing of the interests of trademark holders and their competitors. 
In reality, however, much of the multi-billion dollar industry of merchandise licensing has grown around a handful of cases from the 1970s and 1980s that established merchandising rights with little regard for the competing legal or policy concerns at stake. Those cases are far from settled law - indeed, at least as many decisions decline to give trademark owners the right to control sales of their trademarks as products. We think it is high time to revisit that case law and to reconsider the theoretical justifications for a merchandising right. That review provides little support for trademark owners' assumptions about merchandising. Doctrinally, the most broad-reaching merchandising cases - which presumed infringement based on the public recognition of the mark as a trademark - were simply wrong in their analysis of trademark infringement and have been specifically rejected by subsequent decisions. Philosophically, even a merchandising right that hinges on likelihood of confusion raises competition-related concerns that should affect courts' analysis of both the merits and appropriate remedies in merchandising cases. Perhaps most importantly, recent Supreme Court case law suggests that, if it had the opportunity to evaluate the merchandising theory (something it has never done), the Court would deny the existence of such a right. Further, the Court would be right to do so. When a trademark is sold, not as a source indicator, but as a desirable feature of a product, competition suffers - and consumers pay - if other sellers are shut out of the market for that feature.

Robinson & Cahill on the Model Penal Code Paul H. Robinson and Michael T. Cahill (University of Pennsylvania Law School and Brooklyn Law School) have posted Can a Model Penal Code Second Save the States from Themselves? (Ohio State Journal of Criminal Law, Vol. 1, No. 169, 2003) on SSRN. Here is the abstract:
    This commentary summarizes some of the institutional obstacles to serious reform the authors encountered in their work on two recent criminal-code redrafting efforts, in Illinois and Kentucky. The authors call for a project to create a Model Penal Code Second, in the hope that such a centralized, high-profile, and less directly politically charged or biased effort would be an effective spur to major reform at the state level.

Sunday, December 19, 2004
Legal Theory Lexicon: Causation
    Introduction Causation is one of the basic conceptual tools of legal analysis. And for most purposes, we can get along with a notion of causation that is both vague and ambiguous. In the world of medium-sized physical objects (automobiles, pedestrians, etc.), our judgments about causation rarely depend on conceptual niceties. The driver’s negligence caused the death of the pedestrian but did not cause John Kerry to win the Iowa caucuses in 2004. In these cases, various notions of causality converge. The person on the street, the scientist, and the lawyer can all agree in such cases that for all practical purposes X caused Y but not Z. But sometimes the various notions of cause come apart, exposing ambiguities and vagueness in both ordinary and legal talk about causes and effects. This post provides a very basic introduction to causation for law students (especially first-year law students) with an interest in legal theory.
    Cause-in-Fact & Legal Cause Let’s put the most important distinction on the table right away. Contemporary legal theory and judicial practice assume that there is a distinction between legal cause on the one hand and cause-in-fact on the other. What does that mean? That’s a huge question, of course, but we can state one conclusion straight away: that X is a cause-in-fact of Y does not entail that X is a legal cause of Y. Less obviously, that X is a legal cause of Y does not entail that X is a cause-in-fact of Y. The various ways that cause-in-fact and legal cause can come apart lead many to the conclusion that legal cause simply has nothing to do with causation, but this turns out to be an exaggeration. I know this all sounds very airy. So let’s get down to brass tacks!
    Cause-in-Fact What do we mean when we say that X is a cause-in-fact of Y? Many law students learn that the answer to this question is but-for causation. If it is the case that but for X, Y would not have occurred, then X is a but-for cause of Y and hence X is a cause-in-fact of Y. This simple story works most of the time, and as a rough and ready rule of thumb, it isn’t half bad. But it turns out that if you try to use but-for causation as a hard and fast rule for determining whether X is the cause of Y, you will run into trouble, sooner or later. In torts and criminal law, but-for causation runs into trouble somewhere in the midst of the first-year course. In a sense, the point of this Lexicon post is to provide a set of tools for understanding the troubles that overreliance on but-for causation can cause.
    Necessary and Sufficient Causes The first item in the causation toolkit is the distinction between necessary and sufficient cause. The basic ideas are simple and familiar. X is a necessary cause of Y, if Y would not have occurred without X. Ben’s running the red light is a necessary cause of the damage to Alice’s car, just in case the damage would not have occurred without Ben’s having run the light. The idea of "necessary cause" is the same idea expressed by the phrase "but-for cause."
    X is a sufficient cause of Y, if Y would have occurred so long as X occurred. Alice’s shooting Ben through the heart is a sufficient cause of Ben’s death, just in case the shot through the heart by itself would have caused Ben’s death. This is true, even though Ben would have died anyway, because Cynthia shot him through the head at the same time Alice shot him through the heart.
    The Role of Counterfactuals The notions of necessary and sufficient causation are familiar to almost everyone. We use these ideas all the time in everyday life. But the very familiarity of these concepts creates a temptation to take them for granted. There is an important feature of these ideas that our day-to-day use of them does not make explicit. Both necessary and sufficient causation are counterfactual concepts. What does that mean? “Counterfactual” is simply the fancy name for “what if” thinking. What if Ben had stopped at the red light? Would the damage to Alice’s car still have occurred? What if Ben had gotten immediate medical attention? Would the shot through the head still have killed him? Every statement regarding a necessary or sufficient cause can be interpreted as making a counterfactual (“what if”) claim.
    What-if reasoning is itself familiar and ordinary. When we say, Ben’s running the red light was a necessary cause of the damage to Alice’s car, we are claiming that if the world had been different and Ben had not run the red light, then Alice’s car would not have been damaged. We imagine what the world would have been like if Ben had stopped at the red light, and Alice had proceeded through the intersection without being struck by Ben’s car. Counterfactual reasoning can get more complicated than this, but for our purposes we can use everyday what-if reasoning as our model of the role of counterfactuals in necessary and sufficient causation.
    Overdetermination Once we’ve gotten the notions of necessary and sufficient causes, we can move on to the idea of overdetermination. An effect is overdetermined if it has more than one sufficient cause. Take the case of Alice shooting Ben through the heart. We have postulated that the bullet passing through the heart was a sufficient cause of Ben’s death, but it may not have been a necessary cause. Suppose that Alice was a member of a firing squad, and that at the exact same moment that Alice’s bullet passed through Ben’s heart, another bullet, fired by Cynthia, passed through Ben’s cerebral cortex, and that this would have resulted in Ben’s death even if Alice had not fired or her bullet had missed. Ben’s death now results from two sufficient causes, but neither Alice’s shot nor Cynthia’s shot was necessary. If Alice had not fired, Cynthia’s shot would have killed Ben. If Cynthia had not fired, Alice’s shot would have killed Ben.
    Overdetermination is important, because it undermines the idea that but-for causation tells us everything we need to know about cause-in-fact. We might say that both Alice and Cynthia’s shooting caused Ben’s death or we might say they were both partial causes of Ben’s death, but we would not be likely to say that neither Alice nor Cynthia’s shot was the cause.
    The firing squad example was described as a case of simultaneous overdetermination—both sufficient causes occurred at the same time. What if Cynthia shot a few seconds before Alice and Ben died before Alice’s shot pierced his heart? In that case, Cynthia’s shot would have preempted the causal role of Alice’s shot. If Cynthia had missed, then Alice’s shot would have killed Ben. This kind of case is sometimes called preemptive causation.
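    The necessary and sufficient cause tests are, at bottom, counterfactual queries, so they can be made concrete in a few lines of code. The following is a minimal illustrative sketch (not part of the original post); the toy boolean model of the firing-squad example and the function names are my own assumptions:

```python
# Toy counterfactual model of the firing-squad example.
# `shots` maps each shooter to True if he or she fires a fatal shot;
# each fatal shot is, by hypothesis, sufficient to kill Ben.

def outcome(shots):
    """Ben dies if at least one fatal shot occurs."""
    return any(shots.values())

def is_but_for_cause(shots, actor):
    """Necessary (but-for) cause: delete the actor's shot and re-run the world."""
    counterfactual = dict(shots, **{actor: False})
    return outcome(shots) and not outcome(counterfactual)

def is_sufficient_cause(shots, actor):
    """Sufficient cause: the actor's shot alone produces the outcome."""
    alone = {a: (a == actor) and shots[actor] for a in shots}
    return outcome(alone)

squad = {"Alice": True, "Cynthia": True}       # simultaneous overdetermination
print(is_but_for_cause(squad, "Alice"))        # False: Cynthia's shot suffices
print(is_sufficient_cause(squad, "Alice"))     # True: Alice's shot alone kills
```

    The overdetermination problem shows up directly: with two shooters, the but-for test returns False for each of them, even though each fired a sufficient cause of the death.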
    Coincidence Overdetermination poses one kind of problem for but-for causation; coincidence poses a different sort of difficulty. Suppose the driver of a trolley is speeding. As a result, the trolley is in just the wrong place at just the wrong time, and a tree falls, injuring a passenger. If the trolley had gone just a little faster or just a little slower, the tree would have missed the trolley and the injury would not have occurred. Given these circumstances, speeding was a but-for cause (a necessary cause) of the tree injuring the passenger. So what? Coincidence is no problem for cause-in-fact, but it does pose a problem for the legal system. Intuitions vary, but lots of folks are inclined to believe that one should not be legally responsible for harms that one causes as a result of coincidences.
    Coincidence is related to a variety of other problems with but-for causation. Take our example of Ben running the stoplight and hitting Alice’s car. Running the stoplight was one but-for cause of this accident, but there are many others. For example, Alice’s being in the intersection was also a but-for cause. And how did Alice come to be in the intersection at just the time when Ben was running the red light? If her alarm clock hadn’t gone off, she would have slept in and arrived in the intersection long after Ben, so her alarm clock’s ringing was another but-for cause. And you know how the story goes from here. As we trace the chain of but-for causes back and out, we discover that thousands and millions and billions of actions and events are but-for causes of the accident.
    Legal Cause What do we do about the problems created by but-for cause? One way that the law responds is with the idea of legal cause or proximate cause. In this post, we cannot hope to achieve a deep understanding of legal cause, but we can get a start. Here are some of the ideas that help me to understand legal cause.
    First, there is a terminological issue: causation may be confused with responsibility. “Legal cause” is only partially about cause. We start with the idea of cause-in-fact (understood in light of the distinction between necessary and sufficient cause). This idea of cause seems, on the surface, to fit into the structure of various legal doctrines. So we imagine that if a defendant breaches a duty of care and causes a harm, then the defendant is legally responsible for the harm. This works for lots of cases, but then we start thinking about other cases like overdetermination and coincidence. “Legal cause” is the way that we adjust our ideas about legal responsibility to overcome the counterintuitive results that would follow from a simple reliance on but-for causation. In other words, “legal cause” may be a misnomer. It might be clearer if we used the phrase “legal responsibility” (or some other phrase) to describe the ways in which we adjust the law.
    Second, legal cause is frequently associated with the idea of foreseeability. For example, in coincidence cases, the harm (the tree injuring the passenger) is not a foreseeable consequence of the wrongful act (driving the trolley at an excessive speed). If the purpose of the law is deterrence, then no good purpose may be served by assigning legal responsibility in cases where the effect is unforeseeable.
    Third, legal cause is sometimes associated with the idea of proximity in time and space. Of course, the phrase “proximate cause” emphasizes this connection. We usually don’t want to hold defendants responsible for the remote and attenuated effects of their actions. We frequently do want to hold defendants responsible for the immediate and direct effects of their actions. “Proximity” seems to capture this point, but an overemphasis on proximity in time and space leads to other problems. Some immediate consequences do not give rise to legal responsibility: the trolley driver may have started speeding just seconds before the tree fell. Some causal chains that extend for long distances over great durations do give rise to legal responsibility: Osama bin Laden’s responsibility for 9/11 would not be vitiated by the fact that he set events in motion years in advance and thousands of miles away.
    Probability Our investigation of causality so far has elided an important set of issues—the connections between causation and probability. These connections are far too large a topic for this post, but even a superficial analysis requires that we consider two perspectives--ex ante and ex post.
    Ex post questions about causation arise in a variety of contexts, but for the legal system, a crucial context is provided by litigation and especially trial. In many cases, there is no doubt about causation. When Ben’s car speeds through the red light and hits Alice’s car, we don’t have much doubt about what caused the damage. But in many types of cases, causation will be in doubt. Did the chemical cause cancer? Was the desk job the cause of the back injury? Sometimes the evidence will answer these questions with certainty (or perhaps, with something that is so close to certainty that we treat it as certainty for legal and practical purposes). But in other cases, the evidence will leave us with a sense that the defendant’s action is more or less likely to have caused the harm to the plaintiff. Such probabilities may be expressed either qualitatively or quantitatively. That is, we might say that it is “highly likely” that X caused Y or we might say that there is a 50% chance (p = .5) that X caused Y.
    Ex ante issues of causation also arise for the law. For example, the legal system may be required to assign a value to a risk of harm that has not yet been realized. David has been exposed to asbestos, but may or may not develop cancer. In this case, probabilities refer to the likelihood of future events.
    Decision theory and mathematics have elaborate formal machinery for representing and calculating probabilities. In this short post, we cannot even scratch the surface, but there are two or three bits of notation that every legal theorist should know:
      --The letter “p” is frequently used to represent probability. Most law students encounter this notation in Justice Hand’s famous opinion in the Carroll Towing case (B < PL, or “burden less than loss discounted by probability”). The notation p(x) = 0.1 can be read “the probability of x equals 1/10.” And the notation p = 0.5 can be read “the probability equals one in two.”
      --The symbol “|” is frequently used to represent conditional probabilities. Suppose we want to represent the probability that X will occur given that Y has occurred, we can use this notation: p(X|Y). So we could represent the sentence, “The probability of Cancer given Exposure to Asbestos is ten percent,” as p(C|EA)=0.1.
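    Both bits of notation reduce to simple arithmetic. The sketch below is purely illustrative and not from the original post; the function names and the asbestos-exposure numbers are invented for the example, though the Hand formula itself (B < PL) is from Carroll Towing, as noted above:

```python
# Hand formula: a precaution is cost-justified (and omitting it negligent)
# when the burden B is less than the expected loss P * L.
def hand_formula_negligent(burden, probability, loss):
    """Return True if B < P * L, the Carroll Towing condition."""
    return burden < probability * loss

# Hypothetical numbers: a $100 precaution, a 1% chance of a $50,000 loss.
print(hand_formula_negligent(100, 0.01, 50_000))  # True: 100 < 500

# Conditional probability: p(C|EA) = p(C and EA) / p(EA).
def conditional_probability(p_joint, p_condition):
    """p(X|Y) computed from the joint probability and p(Y)."""
    return p_joint / p_condition

# Hypothetical numbers: p(cancer and exposure) = 0.02, p(exposure) = 0.2.
print(conditional_probability(0.02, 0.2))  # ~0.1, i.e. p(C|EA) = 10%
```

    The expected-loss term P * L is what the abstract qualitative talk of “highly likely” gets cashed out as when courts and economists quantify risk.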
    Types and Tokens So far, we have been focusing mostly on cases where an individual instance of harm is caused by some particular wrongful action. But of course, we frequently think about causation as a more general relationship. For example, in science we might speak of “causal laws.” There is no standard terminology for this distinction: we might use the phrases “individual causation” and “systematic causation.” One helpful bit of terminology for getting at this idea is to differentiate “types” and “tokens.” Ben’s running the red light at a particular time and location is an event token, and it is a token of a type of events, i.e., the type “running a red light.”
    Once we have the distinction between types and tokens in place, we can define individual causation as a causal relationship between a token (e.g. a token event) and another token (e.g. a token action). And we can define systematic causation as a causal relationship between a type (e.g. a type of event) and another type (e.g. a type of action). Science studies causal relationships between types; trials frequently involve questions about the causation of one token by another. This leads to another important point: the question whether an individual harm was caused by an individual action will sometimes depend on the question whether a systematic causal relationship exists; for example, the question whether this factory’s release of a chemical caused an individual case of cancer may require a jury to resolve a “scientific” question about systematic causation.
    Conclusion Even though this is a long entry by the standards of the Legal Theory Lexicon, it is a very compressed and incomplete treatment of the concept of causation. Given the way legal education is organized (around doctrinal fields like torts, criminal law, and evidence), most law students never get a truly comprehensive introduction to causation. Torts may introduce the distinction between cause-in-fact and legal cause; criminal law, problems of overdetermination; and evidence, the relationship between probability and causation. If this post accomplishes anything of value, I hope that it serves as a warning—causation is a deep and broad topic about which there is much to learn.
      H.L.A. Hart & Tony Honore, Causation in the Law (2d ed. 1985). This is the book on causation and the law. Currently out of print, but used copies are available on
      Causation (Oxford Readings in Philosophy) (Ernest Sosa & Michael Tooley eds. 1993). A fine collection of essays, with contributions by J.L. Mackie, Michael Scriven, Jaegwon Kim, G.E.M. Anscombe, G.H. von Wright, C.J. Ducasse, Wesley C. Salmon, David Lewis, Paul Horwich, Jonathan Bennett, Ernest Sosa, and Michael Tooley.

Saturday, December 18, 2004
Legal Theory Bookworm The Legal Theory Bookworm recommends Law's Quandary by my colleague, Steven D. Smith. Here's the review from Michael Perry (Emory) on
    Smith's treatment of the issues he addresses is outstanding. His discussion is consistently probing, thoughtful, and imaginative. Smith's range of reference is impressively broad--yet I never had the sense that he was trying to impress. His clarity--aided by his wonderfully engaging, and occasionally humorous, conversational style--is exemplary. But the enviable clarity/accessibility of Smith's writing should not obscure just how penetrating--I am tempted to say, brilliant--his commentary is. It may sound faintly ridiculous to say this, but I thought that this book was a jurisprudential page turner.
And here's a brief description:
    This lively book reassesses a century of jurisprudential thought from a fresh perspective, and points to a malaise that currently afflicts not only legal theory but law in general. Steven Smith argues that our legal vocabulary and methods of reasoning presuppose classical ontological commitments that were explicitly articulated by thinkers from Aquinas to Coke to Blackstone, and even by Joseph Story. But these commitments are out of sync with the world view that prevails today in academic and professional thinking. So our law-talk thus degenerates into "just words"--or a kind of nonsense. The diagnosis is similar to that offered by Holmes, the Legal Realists, and other critics over the past century, except that these critics assumed that the older ontological commitments were dead, or at least on their way to extinction; so their aim was to purge legal discourse of what they saw as an archaic and fading metaphysics. Smith's argument starts with essentially the same metaphysical predicament but moves in the opposite direction. Instead of avoiding or marginalizing the "ultimate questions," he argues that we need to face up to them and consider their implications for law.
The way that I usually express high praise is "Highly Recommended," but that simply does not suffice in this case. Steve's book is truly a joy to read. His thesis is original, provocative, and expressed with clarity and elegance. Buy Law's Quandary and have a great read!

Download of the Week The Download of the Week is John Hart Ely and the Problem of Gerrymandering: The Lion in Winter by Pam Karlan. Here is the abstract:
    In Democracy and Distrust, John Hart Ely articulated a "participation-oriented, representation-reinforcing approach to judicial review" that advanced both an anti-entrenchment and an antidiscrimination rationale for judicial intervention. This essay, prepared for a symposium honoring Ely held at Yale Law School in November 2004, explores the implications of his work for a central issue of democratic governance: legislative apportionment. Part I shows that although Ely celebrated the Warren Court's Reapportionment Revolution as a paradigmatic example of the anti-entrenchment approach, he essentially ignored the ways in which the Burger Court's jurisprudence of racial vote dilution, with its focus on geographically discrete minority groups subjected to majority prejudice, exemplifies the antidiscrimination approach. Part II looks at the implications of Ely's theory for contemporary controversies over race-conscious redistricting. Ely's final work - a trilogy defending the Rehnquist Court's Shaw jurisprudence as a wedge for attacking political gerrymandering more broadly - reveals an implicit tension within his approach: while the anti-entrenchment and antidiscrimination rationales may have dovetailed during the years of Democracy and Distrust, today they can operate at cross purposes. The protection of minority interests is now often best served not by judicial skepticism of legislative outcomes, but by judicial deference to plans that allocate power to politicians elected from minority communities. In the end, Ely's trilogy may reflect his romance with the Warren Court, which saw discrete and insular racial minorities as essentially objects of judicial solicitude, rather than efficacious political actors in their own right.
And I hope the readers of LTB will forgive me for also recommending Procedural Justice--the final version of a paper on which I worked for 10 years. Let me know what you think!

Friday, December 17, 2004
Solum on Procedural Justice Lawrence Solum (University of San Diego, School of Law) has posted Procedural Justice on SSRN. Here is the abstract:
    Procedural Justice offers a theory of procedural fairness for civil dispute resolution. The core idea behind the theory is the procedural legitimacy thesis: participation rights are essential for the legitimacy of adjudicatory procedures. The theory yields two principles of procedural justice: the accuracy principle and the participation principle. The two principles require a system of procedure to aim at accuracy and to afford reasonable rights of participation qualified by a practicability constraint. The Article begins in Part I, Introduction, with two observations. First, the function of procedure is to particularize general substantive norms so that they can guide action. Second, the hard problem of procedural justice corresponds to the following question: How can we regard ourselves as obligated by legitimate authority to comply with a judgment that we believe (or even know) to be in error with respect to the substantive merits? The theory of procedural justice is developed in several stages, beginning with some preliminary questions and problems. The first question - what is procedure? - is the most difficult and requires an extensive answer: Part II, Substance and Procedure, defines the subject of the inquiry by offering a new theory of the distinction between substance and procedure that acknowledges the entanglement of the action-guiding roles of substantive and procedural rules while preserving the distinction between two ideal types of rules. The key to the development of this account of the nature of procedure is a thought experiment, in which we imagine a world with the maximum possible acoustic separation between substance and procedure. Part III, The Foundations of Procedural Justice, lays out the premises of general jurisprudence that ground the theory and answers a series of objections to the notion that the search for a theory of procedural justice is a worthwhile enterprise. 
Parts II and III set the stage for the more difficult work of constructing a theory of procedural legitimacy. Part IV, Views of Procedural Justice, investigates the theories of procedural fairness found explicitly or implicitly in case law and commentary. After a preliminary inquiry that distinguishes procedural justice from other forms of justice, Part IV focuses on three models or theories. The first, the accuracy model, assumes that the aim of civil dispute resolution is correct application of the law to the facts. The second, the balancing model, assumes that the aim of civil procedure is to strike a fair balance between the costs and benefits of adjudication. The third, the participation model, assumes that the very idea of a correct outcome must be understood as a function of process that guarantees fair and equal participation. Part IV demonstrates that none of these models provides the basis for a fully adequate theory of procedural justice. In Part V, The Value of Participation, the lessons learned from analysis and critique of the three models are then applied to the question whether a right of participation can be justified for reasons that are not reducible to either its effect on the accuracy or its effect on the cost of adjudication. The most important result of Part V is the Participatory Legitimacy Thesis: it is (usually) a condition for the fairness of a procedure that those who are to be finally bound shall have a reasonable opportunity to participate in the proceedings. The central normative thrust of Procedural Justice is developed in Part VI, Principles of Procedural Justice. The first principle, the Participation Principle, stipulates a minimum (and minimal) right of participation, in the form of notice and an opportunity to be heard, that must be satisfied (if feasible) in order for a procedure to be considered fair.
The second principle, the Accuracy Principle, specifies the achievement of legally correct outcomes as the criterion for measuring procedural fairness, subject to four provisos, each of which sets out circumstances under which a departure from the goal of accuracy is justified by procedural fairness itself. In Part VII, The Problem of Aggregation, the Participation Principle and the Accuracy Principle are applied to the central problem of contemporary civil procedure - the aggregation of claims in mass litigation. Part VIII offers some concluding observations about the point and significance of Procedural Justice.
This is the final version, with pagination & citation information. I would like to express my very great gratitude to the editors of the Southern California Law Review for all of their hard work!

Hasen on the San Diego Mayoral Election Election Law superblogger Rick Hasen has a post & op/ed on the San Diego Mayoral election. Here's a taste:
    In a number of ways, the coming legal contest is a lower-stakes version of Bush v. Gore, the Florida litigation that ended the 2000 presidential election. Again, Democrats are asking for the intent of the voters to prevail and for courts to require elections officials to 'count every vote.' And Republicans are stressing compliance with the technical rules. They argue that fairness in elections requires that counting be done following the rules as written.

More on Natural Law, Public Reason, & Justice Thomas Jon Rowe has a post entitled Just how “Religious” is the Declaration of Independence?. Here's a taste:
    I think it’s very important that secularists not reject the Declaration because it invokes a “Creator.” The Declaration’s invocation of a “Creator” ought not to be understood as transforming it into any kind of “religious” document, more appropriate for “priests or preachers” than for public officials.
Read Rowe's post, but I would just note that there is a distinction between "public reason" and "secular reason": the Declaration appealed to reasons that were public but not secular.
Over at Ciceronian Review, check out Natural Law (And No Thomas). Here is a snippet:
    Balkin asserts that one can be both a legal realist and natural law theorist. If the standard here is whether there is an obvious logical contradiction between the two, Balkin may be right. But it is hard to see how anyone could sincerely hold such views. A legal realist, in the end, thinks there is no normative theory for decision-making. Decision-making always in fact is governed by sociological factors, hence normative theories cannot have any effect and cannot matter. But if normative theory has no practical import, it cannot really be a normative theory as one is already committed to the view that such theories do not guide action. One might nevertheless believe both, but the concoction is quite implausible, if not ridiculous.
I'm inclined to think that this characterization of legal realism is simply too narrow: "A legal realist, in the end, thinks there is no normative theory for decision-making. Decision-making always in fact is governed by sociological factors, hence normative theories cannot have any effect and cannot matter." These characterizations may be true of some legal realists, but surely not all.
My original post was Natural Law, Public Reason, and the Constitution.

Interview with Sen Check out this interview with Amartya Sen (my favorite economist!).
And by the way, this link courtesy of the marvelous political theory daily review.

The Virtue of Courage Frequent surfers know that I have a keen and deep interest in the virtues. I highly recommend this very short piece by George Kateb entitled Courage as a virtue.

Hatch Replies to Gerhardt & Chemerinsky Orrin Hatch replies to an op/ed by Michael Gerhardt & Erwin Chemerinsky in a letter to the L.A. Times yesterday. Here is a snippet:
    Before the 108th Congress, no majority-supported judicial nomination had been defeated by a filibuster. President Johnson withdrew Abe Fortas' nomination in 1968 because the vote on invoking cloture, or ending debate, showed it lacked majority support. The votes against cloture were evenly bipartisan; in contrast, the 20 failed cloture votes in the 108th Congress were partisan.

Eliaz, Offerman, & Schotter on Right to Choose Auctions Kfir Eliaz, Theo Offerman, and Andrew Schotter (New York University - Department of Economics, University of Amsterdam - Faculty of Economics & Econometrics (FEE), and New York University - Department of Economics) have posted Creating Competition Out of Thin Air: Market Thickening and Right-to-Choose Auctions. Here is the abstract:
    We study a procedure for selling multiple heterogeneous goods, which is commonly used in practice but rarely studied in the literature. The novel feature of this procedure is that instead of selling the goods themselves, the seller offers buyers the right to choose among the available goods. Thus, buyers who are after completely different goods are forced to compete for the same good, the 'right to choose'. Competition can be further enhanced by restricting the number of rights that are sold. This is shown both theoretically and experimentally. Our main experimental finding is that by auctioning 'rights-to-choose' rather than the goods themselves, the seller induces an aggressive bidding behavior that generates more revenue than the theoretical optimal mechanism.

Graham Reviews Frankfurt George Graham reviews Harry Frankfurt's The Reasons of Love on Metapsychology. Here's a taste:
    As its title suggests, this is a book about love. It is based on a series of lectures that Frankfurt delivered in 2000 and 2001, at Princeton University and University College, London, respectively. It is short (slightly more than 100 pages) with three long chapters. It is a philosophical book, written by an excellent and influential philosopher and dealing primarily with problems associated with love as they appear not just in personal life but in writings about the philosophy and psychology of love.

Conference Announcement: Impact of Direct Democracy
    January 14 & 15, sponsored by the USC-Caltech Center for the Study of Law and Politics, the Initiative and Referendum Institute at USC, the Center for the Study of Democracy at UC-Irvine, and the Southern California Law Review. "The Impact of Direct Democracy" will bring together scholars in law and the social sciences to discuss various aspects of the initiative and referendum process through interdisciplinary analysis. The two-day conference will be held on the UC-Irvine campus, Social Science Plaza B, Room 5206. The papers and commentaries presented on the second day will be published in a spring issue of the Southern California Law Review. You can download the papers from our website. If you have any questions about the conference, feel free to contact Elizabeth Garrett at USC Law School or any of the other organizers: John Matsusaka of USC, Shaun Bowler of UC-Riverside, or Ami Glazer of UC-Irvine. The following is the schedule of events -- the conference will begin at 9:00 am on both days. Friday, January 14th
      Theme I: Initiatives and Political Actors
        Chair: Shaun Bowler, University of California, Riverside
        Caroline Tolbert, Kent State University, Political Science Department, "The Impact of Direct Democracy on the Citizenry"
        Fred Boehmke, University of Iowa, "The Impact of Direct Democracy on Interest Groups"
        Daniel Smith, University of Florida, Political Science Department, "The Impact of Direct Democracy on Political Parties"
      Theme II: Initiatives and Political Institutions
        Chair: University of California, Irvine
        Papers: Shaun Bowler, UC Riverside, and Todd Donovan, University of Western Washington, "The Impact of Direct Democracy on Legislative Elites"
        John Matsusaka, University of Southern California, "The Impact of Direct Democracy on the Executive"
      Theme III: Initiatives and State Governance
        Chair: Matthew Beckman, University of California, Irvine
        Russell Dalton, UC Irvine, "Direct Democracy as a Predictor of State Governance"
        Amihai Glazer, UC Irvine, and Anthony McGann, UC Irvine, "What Direct Democracy Teaches Us about Politics"
    Saturday, January 15
      9:00 – 11:00 Session I
        Richard Hasen, Loyola Law School, "Rethinking the Unconstitutionality of Contribution and Expenditure Limits in Ballot Measure Campaigns"
        Thomas Stratmann, George Mason University, Dept. of Economics, "The Effectiveness of Money in Politics: Ballot Measures, Candidate Elections, and Roll Call Votes"
        Discussants: Bruce Cain, UC Berkeley, Political Science Department; John De Figueiredo, Princeton University, Woodrow Wilson School; Bernard Grofman, UC Irvine, Political Science Department; Daniel Ortiz, University of Virginia Law School
        Moderator: Daniel Smith, University of Florida, Political Science Department
      11:15 – 12:15 Session II
        Clayton Gillette, New York University Law School, "Voting with Your Hands: Direct Democracy in Annexation"
        Discussants: Jan Brueckner, UC Irvine, Department of Economics; William Fischel, Dartmouth College, Department of Economics
        Moderator: Edward McCaffery, University of Southern California Law School
      12:15 – 1:45 Lunch
        Remarks: M. Dane Waters, Founder and Chairman, Initiative and Referendum Institute, "Legal Developments Affecting Initiatives and Referendums"
      1:45 – 2:45 Session III
        Melissa Cully Anderson, UC Berkeley, Political Science Department, and Nathaniel Persily, University of Pennsylvania Law School, "Regulating Democracy through Democracy: The Use of Direct Legislation in Election Law Reform"
        Discussants: Jonathan Katz, Caltech, Division of the Humanities and Social Science; Nolan McCarthy, Princeton University, Woodrow Wilson School
        Moderator: Caroline Tolbert, Kent State University, Political Science Department
      3:00 – 4:00 Session IV
        Mathew McCubbins, UC San Diego, Political Science Department, and Thad Kousser, UC San Diego, Political Science Department
        Discussants: Elizabeth Garrett, University of Southern California Law School; Daniel Rodriguez, University of San Diego Law School
        Moderator: Linda Cohen, UC Irvine, Department of Economics

Thursday, December 16, 2004
Sandefur on Public Reason Timothy Sandefur responds to my post entitled Natural Law, Public Reason, and the Constitution at Freespace. The gist of his argument is that a Rawlsian ideal of public reason would limit public officials to the status quo. Here's how Sandefur puts it:
    The problem with this is that this assumes that the role of a public official is to echo the populace—not to teach it. That is, our leaders ought to follow, and not to lead.
I've actually written a whole article that deals with a similar point made by Jeremy Waldron. An electronic version of Novel Public Reasons is available; just click on the link.

Baude on the Next Chief Will Baude has a very nice piece about the selection of the next Chief Justice of the United States Supreme Court on the New Republic Online. Here is a taste:
    Inserting a strong chief justice from outside the Court has often been an effective way to induce change. Externally nominated chief justices can provide the leadership necessary to help the Court move past old logjams and squabbles, creating stronger decisions. John Marshall galvanized the Court behind the then-novel doctrine of judicial review; he also eliminated the old practice of letting each justice write his own opinion for every case, creating a Court that spoke with a stronger institutional voice. Similarly, Earl Warren was able to lead the legal revolution of the mid-twentieth century partly because he came from outside the Court. It seems unlikely that a long-time associate justice could have given the Court the ideological push necessary to leap from its incremental racial decisions of the 1940s to a unanimous decision in Brown. Because they had no prior allegiances on the Court--and could therefore take a new look at old fights--Marshall and Warren were able to command unanimity for most of their important decisions, making the Court's pronouncements much harder for the other branches to ignore. Of course, no chief justice could bridge all of the current Court's ideological gaps--the gulf separating Scalia and Thomas from Stevens and Ginsburg is wide indeed. Still, in some areas where the Court's jurisprudence is particularly in flux, a new chief justice could help resolve competing doctrines through compromise. For example, a new chief might be able to build a stable consensus around the Court's fractured handling of religion cases. The shifting and closely divided majorities have made the long-term future of school vouchers unclear, inhibiting experimentation by local governments. The Court's rulings on public invocations of religion--in cases involving the Pledge of Allegiance, crèches, and the Ten Commandments--have been similarly confused. 
A strong chief would not be able to make all the justices agree, but he could push a large majority to compromise on a consistent theory of religious neutrality. Even though he has only one vote, the other justices traditionally look to the chief to build consensus, and his control of internal conferences and the assignment of majority opinions allow him to accomplish this more easily than an associate justice could.
Read Baude's fine piece.

Steve Smith Has Questions About Justice Thomas & Natural Law with an Updated Post Script Steve Smith (my colleague at USD & an eminent scholar of law and religion) wrote in response to my post entitled Natural Law, Public Reason, and the Constitution:
    I just read over your interesting discussion of the comments on Justice Thomas's statements. Your presentation of public reason was very able, of course (if it's permissible for a lackey to compliment the master). But these discussions always provoke a couple of questions in my mind.
      1. Were Thomas Jefferson and the other signatories of the Declaration of Independence behaving badly when, acting as founders and statesmen and not as preachers or even private citizens, they made the statements that provoke this discussion-- e.g., that our rights are attributable to a creator, etc.? Is this what the critics of Thomas believe? Is it what Rawlsians are forced to say, or perhaps eager to say?
      2. Is it the current consensus that when Justice Douglas, writing for the Court in Zorach v. Clauson, famously said that "we are a religious people whose institutions presuppose a Supreme Being," he was (a) just plain wrong, or (b) saying something he should not have said as a Justice and for the Court, or (c) both?
These are excellent questions. They deserve a short essay or a whole article by way of reply, and not just a short blog post. Let me try to say a few things about my own take on these issues:
  • Ideals of public reason are, of practical necessity, relative to the mix of beliefs (religious and secular, philosophical and theological) that characterize the citizenry at any particular point in history.
  • This "practical necessity" flows from several considerations, among them, the following:
      1. Knowledge of what is public--available to the reason of citizens at large--is itself contextual. You can't know about perspectives with which you are not familiar. Eighteenth-century thinkers simply could not take into account the form of pluralism that characterizes the early twenty-first century.
      2. The principle that underwrites Rawls's ideal of public reason is the liberal principle of legitimacy, which is the idea that one should be able to offer to one's fellow citizens justifications for the constitutional essentials that they could accept as reasonable. The principle only applies to those who actually are one's fellow citizens, and does not apply to others (hypothetical or actual, but future) who are not.
  • So the requirements of public reason appropriate to the United States in the eighteenth century will differ from those that apply to the twenty-first century.
  • With respect to Jefferson, then, I am inclined to think that the Declaration of Independence did comport with that particular conception of public reason that would have been appropriate to Jefferson's own time.
  • With respect to Douglas, the case is more difficult. Zorach v. Clauson was decided in 1952--half a century ago. The United States is a different place than it was then. This is a consequence of several factors: (1) the greater diversity of religious perspectives as a consequence of the immigration of significant numbers of adherents of Shinto, Hinduism, Buddhism, Jainism, and other religious traditions that substantially differ from Christianity, Islam, and Judaism, with respect to the notion of a "Supreme Being"; and (2) the increasing significance of secular, nontheological conceptions of the good, reflected in the complaint, often heard these days, that our culture is dominated by "secular humanism." Nonetheless, even at the time Douglas wrote in Zorach, these trends were noticeable. Douglas was aware of the fact that many Americans were not theists. Here is the full context of the statement that Smith quotes from the case:
      We are a religious people whose institutions presuppose a Supreme Being. We guarantee the freedom to worship as one chooses. We make room for as wide a variety of beliefs and creeds as the spiritual needs of man deem necessary. We sponsor an attitude on the part of government that shows no partiality to any one group and that lets each flourish according to the zeal of its adherents and the appeal of its dogma. When the state [343 U.S. 306, 314] encourages religious instruction or cooperates with religious authorities by adjusting the schedule of public events to sectarian needs, it follows the best of our traditions. For it then respects the religious nature of our people and accommodates the public service to their spiritual needs. To hold that it may not would be to find in the Constitution a requirement that the government show a callous indifference to religious groups. That would be preferring those who believe in no religion over those who do believe. Government may not finance religious groups nor undertake religious instruction nor blend secular and sectarian education nor use secular institutions to force one or some religion on any person. But we find no constitutional requirement which makes it necessary for government to be hostile to religion and to throw its weight against efforts to widen the effective scope of religious influence.
    As I read this passage, the main thrust is the argument that as a matter of fact, the form of moral and religious pluralism that characterized American society when Douglas wrote was predominantly a pluralism within religious systems of belief, and that Establishment Clause doctrine should take that fact into account. That much is not inconsistent with a requirement of public reason--even one more strict than that which I articulated in my prior post. Nonetheless, the particular sentence that Smith quoted is still problematic: "We are a religious people whose institutions presuppose a Supreme Being." This passage could be read in at least two ways. First, it might be read as an endorsement by Douglas of the proposition that there is a Supreme Being and that this really existing Supreme Being is presupposed by our institutions. Second, it might be read slightly differently, as the observation that our institutions presuppose belief in a Supreme Being. The first reading makes Douglas's statement inconsistent with the sort of ideal of public reason discussed in my prior post. The second reading is not inconsistent in this way. To my mind, the passage as a whole suggests that the second reading is more plausible, but I recognize that others might believe the first reading is better.
My thanks to Steve for his excellent questions. By the way, I urge you to take a look at Steve's recent book, Law's Quandary (Harvard University Press 2004), which will be featured this Saturday on the Legal Theory Bookworm.
Update: Post Script Steve Smith replies:
    It seems to me a very peculiar position that has the consequence that if government or public officials today, acting in their public roles, quote from landmark and formative pronouncements like the Declaration of Independence (and many other landmarks-- Jefferson's Virginia Statute for Religious Freedom, Lincoln's Second Inaugural), and if these public officials today actually believe and mean such statements in basically the same sense that the original authors did, they thereby violate the Constitution and/or political morality. Regarding Douglas's statement, I agree with you that the statement can most plausibly be understood in the second sense you identify: Douglas didn't actually say that there is a Supreme Being, but only that our institutions presuppose that there is one. But it seems to me that Rawlsians should still find this assertion troublesome.
Thank you again!
Further Update: Rick Garnett offers some additional thoughts over at Mirror of Justice. Here is a taste:
    For my own part, I'm inclined to think -- with respect to the first question -- that the Declaration signers were not "behaving badly" (i.e., not violating political morality), and not only because (as Larry observes) theistic claims and premises fit comfortably within the zone of "public reason" of the 18th Century. (I can imagine coming to a different conclusion, had the signers' theistic premises been more denominational and sectarian.) But, with respect to the specific kinds of public claims the Founders made, and the premises they appear to have endorsed -- that is, claims and premises having to do with, for example, the connections between basic human rights, the existence of God, the knowability of certain moral truths through reason, the dignity of the person as a creature of God -- it seems to me that they must always be acceptable in public political and judicial discourse. They are acceptable, and any attractive theory of political morality will permit and welcome them, because they are powerful on the merits, and not only because they are widely shared at a particular moment in time. No worthy political discourse, in my judgment, could exclude such claims and arguments, and so any theory of political morality that purported to require their exclusion should be, for that reason, a non-starter.

Conference Announcement: Biomedicine within the Limits of Human Existence
    ESF Research Conference on Biomedicine within the Limits of Human Existence: Biomedical Technology and Practice Reconsidered Doorn (near Utrecht), The Netherlands, 8-13 April 2005 With support from INTAS, University of Utrecht – Ethics Institute, Faculty of Philosophy and ZENO Research Institute for Philosophy, KNAW (Royal Netherlands Academy of Arts & Sciences), NWO (Netherlands Organisation for Scientific Research) and OZSE (Onderzoeksschool Ethiek - The Netherlands School for Research in Practical Philosophy). Chair: Marcus Düwell (Universiteit Utrecht, NL) Vice-Chairs: Dietmar Mieth (Eberhard-Karls-Universität Tübingen, DE) & Christoph Rehmann-Sutter (Universität Basel, CH) Deadline for application: 10 January 2005. Invited Speakers will include:
      B. Baertschi (Geneva U., CH) - G. Becker (Hong-Kong U., CN) - D. Beyleveld (SIBLE Sheffield, UK) - R. Braidotti (Utrecht U., NL) - D. Callahan (Hastings Centre NY, US) - M. Düwell (Utrecht U., NL) - O. Döring (Bochum U., DE) - S. Graumann (IMEW Berlin, DE) - H. Haker (Harvard U., US) - L. Honnefelder (Bonn U., DE) - A. Kahn (Institut Cochin, Paris, FR) - M. Korthals (Wageningen U., NL) - S. McLean (Glasgow U., UK) - D. Mieth (Tübingen U., DE) - M. Mori (Turin U., IT) - B. Musschenga (Amsterdam U., NL) - C. Rehmann-Sutter (Basel U., CH) - C. Romeo-Casabona (Bilbao U., ES) - J. Scully (Basel U., CH) - T. Shakespeare (Newcastle U., UK) - C. Shalev (Tel Aviv U., IL) - L. Siep (Münster U., DE) - P. Tichtchenko (RAS Moscow, RU) - T. van Willigenburg (Rotterdam U., NL) - J.-P. Wils (Nijmegen U., NL).
    Scope: Building on the results of the first conference, this second event in the series will take up some of the basic notions that have become important in various fields within the life sciences, including the key notions of ‘life’ and ‘nature’. The concept of ‘contingency’ introduces several intriguing topics. At first sight, the life sciences seem to be focussed on reducing contingency. However, with the more recent developments in the life sciences, the complexity of biological processes has generated deeper insights into the contingency of nature and of human beings as a part of nature. Life, nature and contingency will be discussed in the context of new developments in biomedicine, with attention to their social, philosophical and ethical importance. This conference welcomes ethicists and strongly encourages the participation of researchers from the life sciences and medicine, sociologists, anthropologists and researchers in philosophy of law. Apart from plenary sessions with prominent speakers, there will also be parallel free paper sessions in which (young) researchers are kindly invited to participate. No poster sessions are planned. Financial support: A certain number of grants will be available upon request. Additional support from INTAS will also be available for INTAS members and NIS scientists.

Conference Announcement: 15th International Conference of the Friedrich Nietzsche Society
    The 15th International Conference of the Friedrich Nietzsche Society NIETZSCHE ON TIME AND HISTORY Peterhouse, University of Cambridge, United Kingdom, 16th – 18th September 2005 Nietzsche is well known for his criticism of all modes of thinking that render temporal existence defective and illusory. According to many of his remarks, ‘the whole’ must no longer be conceived as static and a-temporal. Instead, he attempts to re-describe the relationship between past, present and future by contesting the idea of time as a linear succession of moments of presence. Time and space, being and becoming(s) enter into non-reductive and creative relationships. In the wake of Nietzsche’s attempt to rethink time, the task of recording history also undergoes a fundamental reformulation. History can no longer be a discipline that merely registers the successions and constellations of entities and objects that remain identical over time. Nevertheless, history remains an integral part of his thinking. ‘Only as the most general form of history’, Nietzsche remarks in 1885, ‘is philosophy still acceptable to me’. History has to fulfil a much wider and a much more dynamic task. While philosophy definitely requires the corrective of history, the latter might have to be improved through a new philosophy of time. Does Nietzsche, as some critics have argued, merely idealise time, transitoriness and difference in the same way that his predecessors idealised permanence, being and identity? What are the new conceptions of time that Nietzsche has to offer? What kind of historian was Nietzsche himself? What kinds of ‘temporal’ histories and ‘historical’ philosophies did Nietzsche write, or fail to write? The Friedrich Nietzsche Society welcomes proposals for 30-minute papers on the following themes:
      • Time and/or History
      • Becoming(s)
      • Memory and Time
      • Time and modern science
      • Genealogy and repetition
      • Human and trans-human time
      • Time and immanent transcendence
      • Eternal recurrence
      • Static versus dynamic history
      • History of the earth
      • Nietzsche’s philosophy of history
      • Nietzsche the historian
      • Historical/temporal consciousness
      • Reception of Nietzsche’s ideas of time and history in the 20th century
    Abstracts (no longer than 400 words) should be submitted by 01 April 2005 to Manuel Dries ( Early submissions are welcome. For further information, please go to our website: Supported by: The Tiarks Fund Department of German, University of Cambridge: The Friedrich Nietzsche Society:

A Classic by Kang Jerry Kang (University of California, Los Angeles - School of Law) has posted his Cyber-race (Harvard Law Review, Vol. 113, p. 1131, 2000) on SSRN. Here is the abstract:
    To date, most inquiries into race and cyberspace have focused on the "digital divide" - whether racial minorities have access to advanced computing-communication technologies. This paper asks a more fundamental question: Can cyberspace change the way that race functions in American society? Professor Jerry Kang starts his analysis with a social-cognitive account of American racial mechanics that centers the role of racial schemas. These schemas consist of racial categories, rules of racial mapping that place individuals into these categories, and racial meanings associated with each category. He argues that cyberspace can disrupt racial schemas because it alters the architecture of both identity presentation (enabling racial anonymity and pseudonymity) and social interaction (enabling increased interracial interactions). Thus, cyberspace presents society with three design options: abolition, which challenges racial mapping by promoting racial anonymity; integration, which reforms racial meanings by promoting interracial social interaction; and transmutation, which disrupts the very notion of fixed racial categories by promoting racial pseudonymity (or "cyber-passing"). After analyzing each option's merits, Professor Kang concludes that society need not adopt a single, uniform design strategy for all of cyberspace. Instead, society can embrace a policy of digital diversification, which explicitly zones different cyber spaces according to different racial environments. For example, most market places could be zoned abolition, whereas most social spaces could be zoned integration. By encouraging a diversified policy portfolio, society can exploit synergies created by flexible zoning while avoiding policy lock-in. Although cyberspace is no panacea for the racial conflicts and inequality that persist, it offers new possibilities for furthering racial justice that should not be wasted.

Lemley et al. on Divided Patent Infringement Claims Mark A. Lemley , David W. O'Brien , Ryan M. Kent , Ashok Ramani and Robert Van Nest ( Stanford Law School , Zagorin, O'Brien & Graham, LLP , Keker & Van Nest LLP , Keker & Van Nest LLP and Keker & Van Nest LLP) have posted Divided Infringement Claims on SSRN. Here is the abstract:
    Patent law is territorial. It is also designed to deal with the circumstance of unified infringement by a single actor. But modern commerce is not limited by national boundaries or by corporate forms. Patents written to cover modern technologies, particularly network computing technologies, are attempting to bring the distributed acts of different users around the globe into the ambit of a territorial legal system that looks for a single infringer. Not surprisingly, the effort to do so has created significant problems for patent cases. Two of those problems are the subject of our article. They involve what we call "divided" or "distributed" patent claims - claims that are infringed only by aggregating the conduct of more than one actor, or aggregating conduct that occurs in more than one country. Patent law doesn't deal well with either class of divided patent claim. Prosecutors and litigators need to be aware of these problems in order to most effectively represent their clients.

Burk & Lemley on Quantum Patent Mechanics Dan L. Burk and Mark A. Lemley (University of Minnesota Law School and Stanford Law School) have posted Quantum Patent Mechanics on SSRN. Here is the abstract:
    Determining the meaning of patent claims necessarily requires the judge to break the text of a claim into discrete elements or units of text corresponding to the elements or units that comprise the claimed invention - essentially, organizing the language of the claims into chunks or quanta of text. Define an element narrowly – limit it to a single word, say – and you will tend to narrow the resulting patent, because to prove infringement the patentee must show that each word has a corresponding structure in the accused device. By contrast, defining an element broadly tends to broaden the patent, because it permits the text to read on a greater range of accused devices. For each discrete packet identified, the courts must determine the meaning of the constituent words. They can assign those words definitions that range from narrow, specific meanings to broad, general meanings. In determining the meaning of terms within a particular element, judges practicing patent claim interpretation are engaged in an exercise that to some degree resembles the famous levels of abstraction test articulated by Judge Learned Hand for analysis of infringement under copyright law's idea/expression doctrine. There are no hard and fast standards in the law by which to make the right decision as to either the size of the textual element or the level of abstraction at which it will be evaluated. Indeed, the indeterminacy is so acute that courts generally don't acknowledge that they are even engaging in either inquiry. They define an element almost arbitrarily, and even when judges disagree as to the proper definition they can offer no principled basis for doing so. The problem may be worse than a simple failure to acknowledge subconscious decisions that affect the scope of a patent, however. This indeterminacy may well be inherent in the process of mapping words to things, as modern literary theorists suggest. 
While courts purport to rely on the ordinary or plain meaning of the words of a patent claim, there may simply be no such thing. If we can’t define the metes and bounds of the invention in any meaningful way, we might instead start with the patentee's invention itself, construing patent claims narrowly and in light of the actual invention when the claim terms are ambiguous. Courts could then supplement this narrower claim construction with a doctrine of equivalents analysis, which would permit them to decide how broadly to apply the principle of the invention.

Wednesday, December 15, 2004
Natural Law, Public Reason, and the Constitution Responding to an editorial by Thomas Krannawitter, Kevin Drum comments on this idea, which Krannawitter attributes to Thomas:
    [T]he principle that our rights come not from government but from a "creator" and "the laws of nature and of nature's God," as our Declaration of Independence says, and that the purpose and power of government should therefore be limited to protecting our natural, God-given rights.
Drum's response:
    Coming from a priest or a preacher, this would be fine. Coming from a Supreme Court justice who's supposed to interpret the constitution on secular grounds, it's an embarrassment.
Jack Balkin, in a characteristically thoughtful post, offers these observations:
    Kannawitter seems to assume that belief in natural law is the same thing as belief in original understanding which is belief in the conservative opinions that Justice Thomas espouses, and he further assumes that disbelief in natural law (which he equates, mistakenly with legal realism) is equivalent to belief in a living constitution, which is equivalent to belief in lefty positions on constitutional issues.
    He is wrong to assume this. It's quite possible to take the view that the best interpretation of the Constitution is one that comports with natural law, and that the Framers' understandings are defective, because the Framers supported a wide variety of practices that are inconsistent with natural justice. Similarly, it's entirely possible to believe in natural law and in the idea that the application of moral principles must change with changing circumstances, and hence, that the best interpretation of the Constitution must also change accordingly. Finally, it's possible to be a legal realist about the mechanics of judging-- that is, that judicial decision is inevitably influenced by surrounding historical, political, and social conditions, and that judges are sensitive to underlying facts rather than to abstract doctrinal formulas-- and still believe that the best interpretation of the Constitution is one which conforms to one's notion of what natural law requires.
I would like to respond to Drum's comments from a different angle. Drum wasn't arguing that Justice Thomas was an embarrassment because of a lack of "lawyerly skills"--as Balkin implies later in his post. Rather, Drum's point is that Thomas was violating an ideal of public reason, because Supreme Court Justices are "supposed to interpret the constitution on secular grounds." Of course, Justice Thomas does interpret the constitution on secular grounds in his opinions for the Court. The question, then, is whether it is proper for a Supreme Court justice to argue in extra-judicial discourse that the underlying moral foundation for the constitution is religious.
That is a deeply interesting question. I think the best theoretical framework for handling that question is provided by the theory of public reason offered by the late John Rawls. In an early formulation, Rawls explained what he has called the "idea of free public reason":
    [G]reat values fall under the idea of free public reason, and are expressed in the guidelines for public inquiry and in the steps taken to secure that such inquiry is free and public, as well as informed and reasonable. These values include not only the appropriate use of the fundamental concepts of judgment, inference, and evidence, but also the virtues of reasonableness and fair-mindedness as shown in the adherence to the criteria and procedures of common sense knowledge, and to the methods and conclusion of science when not controversial, as well as respect for the precepts governing reasonable political discussion.
Although this discussion contains the core of Rawls's position, a few additional points deserve separate discussion:
    First, Rawls understands public reason as the reason of a political society. A society's reason is its "way of formulating its plans, of putting its ends in an order of priority and of making its decisions accordingly." Public reason contrasts with the "nonpublic reasons of churches and of many other associations in civil society." Both public and nonpublic reason share features that are essential to reason itself, such as simple rules of inference and evidence. Public reasons, however, are limited to premises and modes of reasoning that can appeal to the public at large. Rawls argues that these include "presently accepted general beliefs and forms of reasoning found in common sense, and the methods of science when these are not controversial." By contrast, the nonpublic reason of a church might include premises about the authority of sacred texts and modes of reasoning that appeal to the interpretive authority of particular persons.
    Second, the limits imposed by Rawls' ideal of public reason do not apply to all actions by the state or even to all coercive uses of state power. Rather, his ideal is limited to what he calls "the constitutional essentials" and "questions of basic justice." Thus, the scope of the freedom of speech and qualifications for the franchise would be subject to the Rawlsian ideal, but the details of tax legislation and the regulation of pollution control would not.
    Third, Rawls' ideal of public reason applies to citizens and public officials when they engage in political advocacy in a public forum; it also governs the decisions that officials make and the votes that citizens cast in elections. The ideal does not apply to personal reflection and deliberation about political questions; by implication it could not apply to such reflection or deliberation about questions that are not political in nature.
With these features in mind, we can offer a summary of the Rawlsian ideal of public reason; this ideal has three main features: (1) The ideal of public reason limits the use of reason to (a) the general features of all reason, such as rules of inference and evidence, and (b) generally shared beliefs, common-sense reasoning, and the noncontroversial methods of science. (2) The ideal applies to deliberation and discussion concerning the basic structure and the constitutional essentials. (3) The ideal applies (a) to both citizens and public officials when they engage in public political debate, (b) to citizens when they vote, and (c) to public officials when they engage in official action - so long as the debate, vote or action concerns the subjects specified in (2).
How is the idea of public reason relevant to the question of whether a Supreme Court Justice ought to present religious reasons in the way that Krannawitter lauds and Drum condemns? One answer to this question might begin with Rawls's observation that judicial reasoning, for example the reasoning of the Supreme Court, exemplifies public reason. It would be unusual to see a Supreme Court justice rely on a particular religion or on a deep philosophical view about the meaning of life or the ultimate nature of the good. There are exceptions, however. One of the most infamous Supreme Court opinions in the contemporary period is Chief Justice Burger's concurring opinion in Bowers v. Hardwick, the case that was recently overruled in Lawrence v. Texas. Burger argued that criminalization of homosexual conduct was constitutionally permissible, because the prohibition on such conduct was rooted in Judeo-Christian morality. Arguably this argument exceeded the bounds of public reason, because the United States is a pluralist society in which there are many citizens outside of the Judeo-Christian tradition, including, for example, Buddhists, adherents of Native American religions, and nonbelievers.
But Justice Thomas's opinions for the Court are not like Burger's opinion in Hardwick. In this regard, it is important to remember that Justice Thomas's opinion in Lawrence v. Texas was not an echo of Chief Justice Burger's opinion in Hardwick. Here is what Justice Thomas actually wrote:
    Justice THOMAS, dissenting. I join Justice SCALIA's dissenting opinion. I write separately to note that the law before the Court today "is ... uncommonly silly." Griswold v. Connecticut, 381 U.S. 479, 527, 85 S.Ct. 1678, 14 L.Ed.2d 510 (1965) (Stewart, J., dissenting). If I were a member of the Texas Legislature, I would vote to repeal it. Punishing someone for expressing his sexual preference through noncommercial consensual conduct with another adult does not appear to be a worthy way to expend valuable law enforcement resources.
    Notwithstanding this, I recognize that as a member of this Court I am not empowered to help petitioners and others similarly situated. My duty, rather, is to "decide cases 'agreeably to the Constitution and laws of the United States.' " Id., at 530, 85 S.Ct. 1678. And, just like Justice Stewart, I "can find [neither in the Bill of Rights nor any other part of the *606 Constitution a] general right of privacy," ibid., or as the Court terms it today, the "liberty of the person both in its spatial and more transcendent dimensions," ante, at 2475. U.S.,2003.
But what about Thomas's other extrajudicial pronouncements? I am hesitant to say very much, because I haven't done the work necessary--actually reading everything Thomas has written. I would note that even in his extrajudicial speeches, Thomas is quite careful to state that his judicial decision making is based on public, legal reasons. For example, in a speech delivered to students at Washington and Lee University School of Law in Lexington, Virginia, on Tuesday, March 10, 1998, Associate Justice Thomas said:
    [S]hould my faith as a Catholic play a role in my decision making process? Should I telephone the Pope or the Vatican before I make a decision concerning the establishment clause or abortion? Well, obviously, you will say "No, to the latter." And I will say, well, if you say no to the latter, it must be no to the former. Both considerations are wrong. I am an Article III judge and I am required by oath to be impartial. I think each member of the Court is required to do the same.
Based on this passage, Thomas seems to be going out of his way to adhere to an ideal of public reason, but I haven't actually researched all of Thomas's extrajudicial writings--much less studied them closely. Nonetheless, I think I can make some general observations:
  • The best ideal of public reason does not equate the line between reasons that are public and reasons that are nonpublic with the line between reasons that are religious and reasons that are secular. Utilitarianism is a secular comprehensive philosophical theory of the good and the right. But some of the premises of utilitarianism do not qualify as public reasons. Among these are the premise that only consequences count and the premise that the only consequences that count are those specified by a particular conception of utility--utilitarians themselves disagree about this last point. Moreover, religious reasons may be public reasons. For example, the Declaration of Independence includes a reference to a creator, but because of the role of the Declaration in our history, it is properly cited as a source of shared political values.
  • The best ideal of public reason is inclusive rather than exclusive--except in very limited and formal contexts. Thus, with few exceptions, Supreme Court opinions should include only public reasons. But when the Justices speak in their private capacities, as participants in public intellectual life, they should be free to offer reasons that draw from comprehensive religious or philosophical conceptions of the good--especially when such reasons are foundational for (or supportive of) public reasons. So, for example, Judge Posner properly introduces his own pragmatist version of wealth maximization (a species of consequentialism, related to welfarism) in his extrajudicial writings. And Justice Thomas may properly discuss the relationship of his comprehensive religious doctrine--Catholicism--to the Constitution in his extrajudicial writings. (Posner does far more of this than Thomas, of course.) I've written extensively about the difference between inclusive and exclusive public reasons; if you are interested in a fuller and less simplified treatment of this topic, you could look at my Constructing an Ideal of Public Reason.
  • Thomas's approach to constitutional interpretation is highly formalist, emphasizing as it does text and original meaning. One of the virtues of legal formalism is that it avoids the problem of decision making on the basis of nonpublic reasons. Of course, one could be a legal realist and still adhere to public reason. Instrumentalist decision making could be limited to those aspects of the individual judge's beliefs about the right and the good that are rooted in public reason. But once judges begin to play the game of translating their own beliefs about the right and the good into law, it may be difficult to draw a line between those policy preferences that are rooted in public reasons and those that are rooted in one's deep religious and philosophical beliefs about ultimate goods. On this score, legal formalists have an advantage and legal instrumentalists are at risk of further corruption.
Read Balkin's post!

Karlan on Ely & Gerrymandering Pamela S. Karlan (Stanford Law School) has posted John Hart Ely and the Problem of Gerrymandering: The Lion in Winter on SSRN. Here is the abstract:
    In Democracy and Distrust, John Hart Ely articulated a "participation-oriented, representation-reinforcing approach to judicial review" that advanced both an anti-entrenchment and an antidiscrimination rationale for judicial intervention. This essay, prepared for a symposium honoring Ely held at Yale Law School in November 2004, explores the implications of his work for a central issue of democratic governance: legislative apportionment. Part I shows that although Ely celebrated the Warren Court's Reapportionment Revolution as a paradigmatic example of the anti-entrenchment approach, he essentially ignored the ways in which the Burger Court's jurisprudence of racial vote dilution, with its focus on geographically discrete minority groups subjected to majority prejudice, exemplifies the antidiscrimination approach. Part II looks at the implications of Ely's theory for contemporary controversies over race-conscious redistricting. Ely's final work - a trilogy defending the Rehnquist Court's Shaw jurisprudence as a wedge for attacking political gerrymandering more broadly - reveals an implicit tension within his approach: while the anti-entrenchment and antidiscrimination rationales may have dovetailed during the years of Democracy and Distrust, today they can operate at cross purposes. The protection of minority interests is now often best served not by judicial skepticism of legislative outcomes, but by judicial deference to plans that allocate power to politicians elected from minority communities. In the end, Ely's trilogy may reflect his romance with the Warren Court, which saw discrete and insular racial minorities as essentially objects of judicial solicitude, rather than efficacious political actors in their own right.

Sullivan & Karlan on Ely Kathleen M. Sullivan and Pamela S. Karlan (Stanford Law School and Stanford Law School) have posted The Elysian Fields of the Law on SSRN. Here is the abstract:
    In Democracy and Distrust and War and Responsibility, John Hart Ely advanced a participation-oriented, representation-reinforcing approach to judicial review that addressed problems of entrenchment, discrimination, and legislative delegation. This essay, which was written as the foreword to a symposium honoring Ely held at Stanford Law School in April 2004, discusses four recent Supreme Court decisions that map onto the central preoccupations in Ely's work: McConnell v. Federal Election Commission; Lawrence v. Texas; Vieth v. Jubelirer; and Hamdi v. Rumsfeld. McConnell raises important questions about how the Court ought to approach campaign finance legislation, given cross-cutting concerns with problems of entrenchment. While there are anti-entrenchment arguments on both sides of the debate over campaign finance reform, we suggest that, given the way in which Ely's anti-entrenchment theory focused on incumbent holders of government power, courts should be especially wary of restrictions that limit the speech of challengers. Lawrence offers an intriguing variation on judicial protection of discrete and insular minorities. Ely was a harsh critic of substantive due process. While the Court's opinion rests as a formal matter on substantive due process, rather than equal protection, a close reading suggests that Lawrence gives perhaps the first known Elysian reason for a substantive due process ruling: that it was necessary to invalidate a discriminatory law as if it applied to all persons in order to prevent the aftereffects of discrimination that would linger if it were not. Vieth shows how questions of political gerrymandering lie at the intersection of Ely's concerns with entrenchment and discrimination. While the Supreme Court has treated political gerrymandering as a species of discrimination, the larger problem is one of entrenchment, rather than the mistreatment of discrete and insular groups. 
The problem with the contemporary approach is not just that it is factually ill grounded: whatever else may be the case, it is hard to view the adherents of the two major political parties as discrete and insular minorities incapable of protecting themselves and victimized by prejudice. Rather, the problem is that the failure to recognize the issue as one of entrenchment can actually exacerbate political channel clogging and undercut effective and accountable representation. Finally, Hamdi confronts the question of how judicial review can reinforce congressional responsibility with respect to the use of military force and the protection of civil liberties given a world in which our most threatening enemies are no longer other nations. We show how War and Responsibility fleshes out one of the often-overlooked sections of Democracy and Distrust - its proposal to revive some version of the nondelegation doctrine - as a tool for ensuring accountability in decisions regarding the decision to go to war and identify echoes of Ely's theory in the three opinions in Hamdi that reject the government's sweeping assertion of executive power.

Kreitner on Fear of Contract Roy Kreitner (Tel Aviv University - Buchmann Faculty of Law) has posted Fear of Contract (Wisconsin Law Review, 2004) on SSRN. Here is the abstract:
    In recent years, a growing body of scholarly literature written from divergent perspectives has argued that courts should curtail the expansion of contractual liability. This article begins by distinguishing between two strains of formalism within this literature: Global conceptual formalism fears the expansion of contractual liability beyond what it considers the nature of contract and beyond its inherent justifications within corrective justice. Local instrumental formalism fears the expansion of contractual liability because such expansion undermines commercial parties’ interests in efficiency in order to advance societal ideals about fairness. After rehearsing a legal realist-inspired critique of both strains of formalism, this article goes on to offer a view of contractual ordering that does not suffer from a fear of contract. It argues that contract is best understood as a framework for cooperation, or an infrastructure that provides means to carry out collaborative projects. Such a view of contract highlights the necessity of accounting for three types of considerations in generating and applying rules of contract law: ex post governance considerations; welfare considerations; and institutional considerations. These considerations sometimes cohere, but at other times conflict, and the article argues that narrower categories of contracts based on transaction types would advance our understanding of the design of default rules. The last part of the article takes issue with the claim from local instrumental formalism that efficiency concerns should always dominate ex post governance considerations, and suggests reasons for refusing to ignore ex post governance considerations, particularly considerations of cooperation, in the adjudication of contract disputes.

Garvey on Commuting Death Sentences Stephen P. Garvey has posted Is it Wrong to Commute Death Row? Retribution, Atonement and Mercy (North Carolina Law Review, Vol. 82) on SSRN. Here is the abstract:
    Is it a morally permissible exercise of mercy for a governor to commute the death sentences of everyone on a state's death row, as Governor Ryan recently did in Illinois? I distinguish three different theories of mercy. The first two theories locate mercy within a theory of punishment as retribution. The first theory treats mercy as a means by which to achieve equity. As such, this theory is not really a theory of mercy; it is instead a theory of justice. The second theory treats mercy as a genuine virtue independent of justice. In particular, mercy is understood as an imperfect obligation. But such a theory cannot, I argue, justify mass commutations. Mercy so understood comes at the cost of doing justice. As such, at some point short of the last commutation the demands of mercy must yield to those of justice. The third theory, in contrast to the first two, locates mercy within a theory of punishment as atonement, not in relationship to a theory of punishment as retribution. This theory of mercy, which treats mercy as a means by which to preserve the possibility of eventual atonement between the offender and the family of the victim, can, I suggest, provide a plausible and morally attractive basis for permitting, though not requiring, the commutation in the name of mercy of the death sentences of every inmate on death row.

Two by Hirose Here are two new papers by Iwao Hirose:
    Against weighted lottery:
      John Taurek (1977) argues that we should not aggregate the loss of different people's lives in order to decide what to do in this case, and that we should show an equal and positive concern to each person by giving an equal chance of being saved: he supports flipping a fair coin. These two responses have been much discussed in the literature, but not the third response. To my knowledge, John Broome (1985) was the first to examine the weighted lottery, although he is sceptical about it. It is Kamm (1993) who endorses and puts forward the idea of the weighted lottery, or what she calls the procedure of proportional chances. But there has been no substantial criticism of the weighted lottery. In this paper, I concentrate on the weighted lottery, and present two criticisms.
    Contractualism and consistency:
      In this paper I discuss the inconsistent judgements in Thomas Scanlon's contractualism. Unlike utilitarianism and other forms of theories based on a consistent betterness ordering about states of affairs, contractualism derives the wrongness of a person's act from the fact that his reason stands in relation with other people's reasons. Its moral judgement would violate some basic consistency conditions. However, I claim that the inconsistent judgement about the wrongness of acts does not undermine contractualism, simply because it is not concerned with the consistent judgement across different situations.

Tuesday, December 14, 2004
An Appreciation Brian Leiter has reported the news, now a few days old, that Dan Rodriguez--the Dean of the University of San Diego School of Law--has resigned. Brian wrote:
    There can be little doubt that Rodriguez's tenure transformed San Diego, making it a genuine national contender in the market for faculty talent and establishing the school, along with George Mason on the East Coast, as one of the "up-and-coming American law schools" of the new century.
I would like to add my personal appreciation for Dan's superb leadership. The greatest consolation is that Dan will remain on our faculty. Thank you Dan!

Welcome to the Blogosphere . . . . . . to Ken Anderson (American University) whose blog is entitled Kenneth Anderson's Law of War and Just War Theory Blog! Ken is an old friend from the philosophy department at UCLA and Harvard Law School! Check it out!

Cornell on Originalism Saul Cornell (History, Ohio State University) sent a very thoughtful email that responds to the Legal Theory Lexicon entry on Originalism:
    Most historians, as I am sure you are aware, are deeply skeptical about originalism. The problem is not that it is impossible to reconstruct original meanings (plural--not singular!), but that most scholarship in the last fifty years points to a complex and deeply contested debate in 1787-1788. So the question becomes: when we speak of original meaning and seek what a rational or typical person might have thought, are we talking about an Anti-Federalist or a Federalist? If an Anti-Federalist, is he a back-country populist or an elite southern planter? If a supporter of the Constitution, is he a New York artisan or a rich Boston merchant? At what moment in the debate are we taking this snapshot? Do we freeze the meaning at the moment the constitutional text was made public, or at the moment James Wilson gave his state house speech, or perhaps at some point when Publius enters the debate? While there has been a revival of a new version of originalism, I would not call the Yale school led by Ackerman originalist. It is concerned with the past, but the idea of constitutional moments is more evolutionary than originalist. I would also contrast originalist and evolutionary theories with genuinely historical approaches such as those used by Larry Kramer and Martin Flaherty. I think originalism is actually anti-historical if one looks closely at the method. History is a contextualist enterprise, and originalists such as Barnett have attacked contextualism. The recent critiques--Kramer's “When Lawyers Do History” and Flaherty's exposé of “History Lite”--argue that originalist scholarship starts with a set of assumptions about the past that bears little resemblance to historical reality. Kramer likens the originalist world to a fun house mirror in which the past is distorted almost beyond recognition.
    Until originalism moves beyond “history lite” it is not likely to be taken seriously by historians as a genuine intellectual contribution, but derided as an ideological stance.
Just a few observations about Cornell's points:
  • Whether we call Ackerman "originalist" or "evolutionary", I think it is absolutely clear that a commitment to popular sovereignty requires that the actions of "the people themselves" be interpreted with fidelity to the people's own understanding of what they were doing. That the people can act in a variety of ways and have done so at many different times is not inconsistent with this point.
  • Surely Cornell is right that there were a plurality of opinions at the various points in our history when constitutional provisions have been drafted and ratified. No one disputes this.
  • There is an important distinction between disagreements about what the law should be, and disagreements about the meaning of the constitutional text. Thus, Anti-Federalists, whether back country populists or elite planters, may have disagreed with Federalists about many, many normative questions, including most obviously, the question whether the Constitution should be ratified. But that disagreement is not about the meaning of the constitution's text.
  • Of course, there were disagreements about the meaning of the Constitution. Antifederalists made dire predictions about abuses of power, and Federalists played down the amount of power granted to Congress and the President. A sophisticated originalism must offer an account of these disagreements, and, in my opinion, that account will discount many of the predictions made in the ratification debates--from all sides.
  • There are many forms of originalism and it is very important to distinguish between them. The kind of originalism that appeals to me is based on the idea that the interpretation of the constitutional text should take into account the meaning (and meanings) that the text would have had to its intended audience at the time it was ratified. (Different times, obviously, for different parts.) This kind of originalism would pay a great deal of attention to evidence of ordinary usage of the terms and phrases in the Constitution, for example, and would (most emphatically) not privilege the intentions, fears, hopes, and predictions made in secret at the Philadelphia convention.
  • The problem that Cornell identifies--a plurality of meanings associated with social pluralism--is not unique to the Constitution of 1789. It is just as real a problem today, whether we are interpreting a new provision in a state constitution, a statute passed by Congress, or a regulation promulgated by the NLRB. We know our own era as well as we can know any era, but we nonetheless recognize that some legal texts are subject to dispute. It would be quite odd if we expected the situation to be different for the Constitution of 1789 or the Reconstruction Amendments.
  • Nonetheless, evidence about original meaning will frequently illuminate and clarify the meaning of the constitutional text. Let me give a simple example. The so-called intellectual property clause is usually read as creating the copyright power by conferring on Congress the power "to promote the progress of science" "by securing for limited times to authors" "the exclusive right to their" "writings". What does "science" mean in the context of the copyright power? When I was researching that question, I tried to gather as many uses of the word science in the decades immediately before and after 1789 as I could. This led me to the conclusion that the term "science" was not limited to what we would call the natural sciences or the hard sciences. "Science" had a much broader meaning, encompassing a variety of forms of systematic knowledge--logic, rhetoric, and law would have been understood as science. The original meaning of the copyright power is inconsistent with an anachronistic, modern interpretation that would limit the term "science" to physics, chemistry, biology, and similar fields.
  • The point is that original meaning can constrain constitutional interpretation without determining a unique outcome in each and every constitutional case.
  • One final point: history and law involve different enterprises. The method of law is forensic--judges must select between competing interpretations of the constitutional text, because their institutional role requires that they make such choices. Historians are not required to act in this way. They can say things like, "I am not sure yet" or "the evidence is indecisive," or "more research needs to be done." And because historians are not lawyers, I find that many historians are quite unclear about legal methods and theories of constitutional interpretation. In particular, I rarely see a clear distinction made in historical writing between historical evidence about the legal meaning of particular legal texts and historical evidence about the attitudes of various groups about the desirability of those texts. That difference is crucial to the law, but largely irrelevant to history. Neither academic lawyers nor legal historians have a monopoly on intellectual virtue or intellectual vice.
My thanks to Saul Cornell for his very illuminating comments!

Conference Announcement: Aggregation and Numbers
    The Oxford Centre for Ethics and Philosophy of Law presents: One-day Workshop on Aggregation and Numbers Date: 5 February 2005 Time: 9:30-18:00 Venue: Goodhart Seminar Room, University College, Oxford Speakers:
      Joseph Raz (Oxford, Columbia) Michael Otsuka (UCL) David McCarthy (Edinburgh) Iwao Hirose (Oxford)
    Workshop website: Participation is free, but registration is required due to the limited space. To register, please e-mail Iwao Hirose ( -- Iwao Hirose University College, Oxford OX1 4BH URL:

Call for Papers: Gender, the Body, and Objectification
    CALL FOR PAPERS: GENDER, THE BODY AND OBJECTIFICATION SATURDAY 21ST/SUNDAY 22ND MAY 2005 The Department of Philosophy University of Sheffield (UK) Keynote speakers:
      Sally Haslanger (MIT) Rae Langton (MIT)
    The conference will focus on issues relating to feminist discussions of the body. We invite papers for blind review (suitable to be delivered in approximately 45 minutes) on topics including, but not limited to, the following: What is the relationship of the body to conceptions of gender? What role does the body play in the construction of gender, and what role should it play in our theorising of 'womanness'? How should we understand the concept woman? Discussion of masculinity is also welcome. How should we understand the role of the body in objectification? When does focus on women's bodies result in objectification? Is objectification always problematic? How should we understand the concept objectification itself? Finally, what is the role of gender in objectification, and the role of objectification in gender? How should we theorise the relationship (if any) between the two concepts? Contributions on other topics related to the conference title are also welcome. Although we do not plan to have a separate graduate session, we do encourage submissions from graduate students. Submissions via email to: Be sure to remove all identifying information from the paper itself, including it only in the email accompanying the submission. Enquiries via email to Jules Holroyd: Deadline for submissions: 28th FEBRUARY 2005 Prof. Jennifer M Saul Director of Graduate Studies Department of Philosophy University of Sheffield Sheffield, UK S10 2TN [44]-(0)114-222-0578

Monday, December 13, 2004
Visiting Fellowships at the Oxford Centre for Ethics & Philosophy of Law
    Oxford Centre for Ethics and Philosophy of Law visiting fellowships 12 December 2004 The Oxford Centre for Ethics and Philosophy of Law (CEPL) invites applications for up to four short-term Visiting Fellowships to be held at University College, Oxford, during the academic year 2005-6. CEPL was founded in 2002 as a collaboration between three neighbouring colleges of the University of Oxford (Corpus Christi College, Merton College, and University College). It exists to encourage and support advanced work in moral, political, and legal philosophy, not only in Oxford but also nationally and internationally. CEPL occupies attractive shared premises in Merton Street, in the heart of the University’s historic centre, next to the University’s Philosophy Faculty and Library. More detailed information about CEPL is available on its website at CEPL’s new Visiting Fellowship Scheme is aimed at philosophers on sabbatical leave from other institutions who wish to spend a period of time working in Oxford, normally for the duration of one extended university term (up to eleven weeks), although consideration will also be given to applications for Fellowships spanning two consecutive terms. The Oxford terms are known as Michaelmas (early October to mid-December), Hilary (early January to late March), and Trinity (mid-April to end June). The Long Vacation runs from early July until late September, and applicants should bear in mind that during this period there are few academic events. The Fellowships are unsalaried, but attract a housing allowance of up to £800 per calendar month, depending on circumstances. Successful applicants will be provided with an attractive office in the Centre. They will also enjoy full use of University College’s nearby academic and social facilities, on the same basis as other College Fellows, including meals at common table. 
The College and the Centre will normally have just one Visiting Fellow in residence at any one time, but special arrangements may be possible for those working on joint projects, or in other exceptional circumstances. Visiting Fellows are expected to engage in advanced philosophical work in Oxford during the period of their Fellowship, and to participate in the common academic life of the legal, political and moral philosophy communities, as appropriate. CEPL welcomes applications from philosophers at a relatively early stage of their careers as well as from senior and well-established philosophers. Applicants should send a brief CV, a statement of the research plan to be pursued during the period of the Fellowship, details of the term(s) which the applicants would like to spend in Oxford (Michaelmas, Hilary, Trinity, Long Vacation), and the names and contact details of two academic referees to the Senior Tutor, University College, Oxford to reach her by Friday 4 February 2005. Candidates should ask their referees to write to the Senior Tutor by the same date. Applicants and their referees are encouraged to submit all material electronically to the dedicated email address In case this is not possible, alternative contact details for the Senior Tutor are shown below. Informal enquiries from those interested in Visiting Fellowships may be directed to Professor John Gardner ( or Professor John Broome. Dr Anne Knowland Senior Tutor University College, Oxford OX1 4BH tel. +44 1865 276676 fax +44 1865 27690 email

Conference Announcement: Character & Imagination
    Conference at Sheffield: CHARACTER AND IMAGINATION 29 January 2005 Various influential ethical theories propose that we should strive to develop morally sound character traits. This one-day conference will investigate the nature of character and the role of imagination in our attribution of traits, in the context of the current debates. Speakers:
      Nafsika Athanassoulis (Leeds) Gregory Currie (Nottingham) Peter Goldie (London) Robert Hopkins (Sheffield)
    Generously supported by The British Society for Aesthetics Venue: The Staff Club, 197 Western Bank, Sheffield. Times: we will start at 10am, finish by 6pm. Some travel bursaries available for students. Full details:

Call for Papers: Joint Session
    CALL FOR PAPERS 2005 JOINT SESSION OF THE MIND ASSOCIATION AND THE ARISTOTELIAN SOCIETY UNIVERSITY OF MANCHESTER, 8-11 JULY OPEN SESSIONS A number of parallel sessions on Saturday and Sunday afternoons will be available for the presentation of papers not previously published. There will be a considerable number of these sessions available, allowing room for many submissions to be included. The intention is to accommodate as many papers as time and space in the programme will allow. Each presentation should last no more than 20 minutes, so that a further 10-15 minutes may be allowed for discussion. Presented papers should aim to introduce material involving recent research. There are no restrictions on the areas of philosophy which papers may address. Philosophers whose papers are included in this part of the programme must be or become subscribing members of one of the organising societies. Those wishing to make a presentation should submit by e-mail attachment a copy of their paper (no more than 2000 words), together with a 250-word abstract, to Dr. Anthony Hatzimoysis ( by 1st March 2005. Decisions on whether papers have been accepted will be made by the end of April 2005. Papers accepted for the Open Sessions will not be published in the Supplementary Volume of the Aristotelian Society (unlike papers invited for the plenary programme of the conference); and expenses will not be paid. POSTGRADUATE SESSIONS Two parallel sessions on the Saturday afternoon will be devoted to short presentations by graduate students (or those who have recently obtained a postgraduate degree). Each student should speak for 20 minutes, allowing 10 minutes for discussion. Students wishing to participate should send their paper, preferably by attachment in Word 95 or higher, otherwise in two hard copies, by 1st March 2005 to: Mr. A. W. Price, Department of Philosophy, Birkbeck College, Malet Street, London, WC1E 7HX. 
Email: The paper should be about 2000 words but no more than 2500 words, including notes and bibliography, and should begin with a brief abstract. It should be typewritten in 12-point text, single-spaced throughout (i.e. including references and quotations), on one side of white A4 paper. All pages should be numbered and have margins of 1 inch or more. Papers containing symbols liable to distortion in transmission should be submitted as hard copies; otherwise soft copy is welcome. Please ensure that there are no self-identifying references in the text. Submissions should be accompanied by a separate page containing the title of the paper, the name of the author, institution and status, and email and postal addresses. Authors are advised to consult supervisors about what may be suitable for presentation to a largely professional audience. Given the tight word-limit, they are advised to give as much space as they can to the statement of their own ideas. The papers will be sent to referees, and a maximum of eight will be selected by the Joint Committee for presentation at the Joint Session. The programme will be settled in May 2005. The selected authors will have their conference fee and accommodation expenses (but not their travel costs) paid by the Mind Association and the Aristotelian Society. Some papers may subsequently be considered for publication in the Proceedings of the Aristotelian Society. Nobody should submit a paper for both the Postgraduate and the Open Sessions, and only one paper may be submitted per individual. However, graduate students whose submission for the Postgraduate Sessions is unsuccessful may subsequently be advised that their paper has been accepted for the Open Sessions. Conference costs, however, will not be paid by the organisers. 
INVITED SPEAKERS Inaugural Address - Simon Blackburn Symposia: Alan Richardson and Thomas Uebel Derek Matravers and Jerrold Levinson Samuel Scheffler and Véronique Munoz-Dardé Stewart Shapiro and Patrick Greenough Jennifer Hornsby and Jason Stanley Marilyn Adams and Richard Cross Georgia Testa Executive Secretary The Aristotelian Society Room 260 Senate House Malet Street London WC1E 7HU

Sunday, December 12, 2004
Legal Theory Lexicon: Originalism
    Introduction There are many different theories of constitutional interpretation, but the most controversial and also perhaps the most influential is "originalism"--actually a loosely-knit family of constitutional theories. The idea that courts would look to evidence from the constitutional convention, the ratification debates, The Federalist Papers, and the historical practice shortly after ratification of the Constitution of 1789 (or to equivalent sources for amendments) is an old one. This post provides a very brief introduction to "originalism" that is aimed at law students (especially first-year law students) with an interest in legal theory.
    The Originalist Revival No single scholar or judge deserves credit for originalism as a movement in constitutional theory and practice, but in my opinion one of the crucial events in the originalist revival was the publication of Raoul Berger's book, Government by Judiciary, in 1977 by Harvard University Press. As you can guess from the title, Berger's book was very critical of the Warren Court (and its aftermath in the 70s). One of the key responses to Berger was the publication of The Misconceived Quest for the Original Understanding by Paul Brest in 1980. Brest's article initiated an intense theoretical debate over the merits of originalism that continues today. At various points in time, both sides have claimed the upper hand, but at the level of theory, the case for originalism has always been contested.
    Originalism is not an ivory-tower theory. It has had a profound influence on the practice of constitutional interpretation and on the political contest over the shape of the federal judiciary. President Reagan's nomination of Robert Bork (an avowed originalist) was one key moment--with Bork's defeat by the Democrats seen as a political rejection of originalism. The current Supreme Court has at least three members who seem strongly influenced by originalist constitutional theory--Chief Justice William Rehnquist and Associate Justices Antonin Scalia and Clarence Thomas.
    The final chapter of the originalism debate in legal theory has yet to be written--and perhaps it never will be. But one last set of developments is particularly important. In the 70s and early 80s, originalism was strongly associated with conservative judicial politics and conservative legal scholars. But in the late 1980s and in the 1990s, this began to change. Two developments were key. First, Bruce Ackerman's work on constitutional history suggested the availability of "left originalism" that maintained the commitment to the constitutional will of "We the People" but argued that the constitution included a New Deal constitutional moment that legitimated the legacy of the Warren Court--We the People: Foundations, published in 1991. Second, Randy Barnett (along with Richard Epstein, the leading figure in libertarian legal theory) embraced originalism in an influential article entitled An Originalism for Nonoriginalists. Ackerman and Barnett represent two trends in originalist thinking: (1) the political orientation of originalism has broadened from conservatives to liberals and libertarians, and (2) the theoretical structure of originalism has morphed and diversified from the early emphasis on "the original intentions of the framers." After the publication of Paul Brest's Misconceived Quest one heard talk that originalism was dead as a serious intellectual movement. These days one is more likely to hear pronouncements that "we are all originalists, now."
    Original Intentions Early originalists emphasized something called the original intentions of the framers. Even in the early days, there were disputes about what this phrase meant. Of course, there were debates about whether the framers (a collective body) had any intentions at all. And there were questions about what counted as "intentions," e.g. expectations, plans, hopes, fears, and so forth. But the most important early debate concerned levels of generality. The intentions of the framers of a given constitutional provision can be formulated as abstract and general principles or as particular expectations with respect to various anticipated applications of the provision. Most theorists will assent to this point, which flows naturally from the ordinary usage and conceptual grammar of the concept of intention. The difficulty comes because the different formulations of intention can lead to different results in any given particular case. For example, the intention behind the equal protection clause might be formulated at a relatively high level of generality--leading to the conclusion that segregation is unconstitutional--or at a very particular level--in which case the fact that the Reconstruction Congress segregated the District of Columbia schools might be thought to support the "separate but equal" principle of Plessy v. Ferguson. Perhaps the most rigorous defender of the original intentions version of originalism has been Richard Kay in a series of very careful articles.
    Yet another challenge to original-intent originalism was posed by Jefferson Powell's famous article, The Original Understanding of Original Intent, published in 1985. Powell argued that the framers themselves did not embrace an original intention theory of constitutional interpretation. Of course, this does not settle the theoretical question. The framers, after all, could have been wrong on this point. But Powell's critique was very powerful for those who insisted that constitutional interpretation must always return to origins. A certain kind of original-intent theory was self-defeating if Powell's historical analysis was correct. Moreover, some of the reasons that Powell identified for the framers' resistance to originalism were quite powerful. Especially important was the idea that "secret intentions" or "hidden agendas" had no legitimate role to play in constitutional meaning. In the end, however, Powell's article actually had the effect of turning originalism in a new direction--from original intention to original meaning.
    Original Meaning The original-meaning version of originalism emphasizes the meaning that the Constitution (or its amendments) would have had to the relevant audience at the time of its adoption. How would the Constitution of 1789 have been understood by an ordinary adult citizen at the time it was adopted? Of course, the same sources that are relevant to original intent are relevant to original meaning. So, for example, the debates at the Constitutional Convention in Philadelphia may shed light on the question of how the Constitution produced by the Convention would have been understood by those who did not participate in the secret deliberations of the drafters. But for original-meaning originalists, other sources become of paramount importance. The ratification debates and Federalist Papers can be supplemented by evidence of ordinary usage and by the constructions placed on the Constitution by the political branches and the states in the early years after its adoption. The turn to original meaning made originalism a stronger theory and vitiated many of the powerful objections that had been made against original-intentions originalism.
    The concept of original-meaning originalism in its modern incarnation has been attributed to Justice Scalia, who is reported to have introduced the idea in a series of lectures in the 1980s; his essay, Originalism: The Lesser Evil, published in 1989, focuses on "original understanding" rather than "original intent." The idea has also been traced to a brief mention in Robert Bork's The Tempting of America, but Bork did not develop the idea extensively. Original-meaning originalism was developed more extensively by Justice Scalia in his opening essay in A Matter of Interpretation. Although the distinction between original meaning and original intent can be found in a variety of early contemporary sources, including an article by Robert Clinton in 1987, the systematic development of original-meaning originalism is a relatively recent phenomenon. Original-meaning originalism receives its most comprehensive explication and defense in Randy E. Barnett's new book, Restoring the Lost Constitution: The Presumption of Liberty--a systematic development of the original-meaning approach and a critique of the original-intention theory.
    Regime Theory Yet another important twist in originalist theory is emphasized by the work of Bruce Ackerman: a twist that I shall call "regime theory." The foundation for regime theory is the simple observation that the Constitution of the United States was adopted in several pieces--the Constitution of 1789 was supplemented by a variety of amendments. And of these amendments, the three reconstruction amendments (the 13th, 14th, and 15th) are of especial importance--because of the significant structural transformation they work in the relationship between the powers of the national government and the powers of the states. Interpreting the whole Constitution requires an understanding of the relationship between the provisions of 1789 and those adopted during Reconstruction. Some regime theorists argue that the interaction between these two constitutional regimes has the implication that provisions adopted in 1789 take on a new meaning and significance after the Reconstruction Amendments were adopted.
    Ackerman's own version of regime theory includes a fascinating and important challenge for originalists of all stripes. Ackerman emphasized the fact that both the Constitution of 1789 and the Reconstruction Amendments were adopted through processes that were extralegal under the legal standards that prevailed at the time. The Articles of Confederation required unanimous consent of all the states for constitutional amendments, and for complicated reasons it seems likely that the Reconstruction Amendments were of dubious legality if strictly judged by the requirements set forth for amendments in Article V. Ackerman's conclusion was that the Constitution derives its legitimacy, not from the legal formalities, but from "We the People," when mobilized in extraordinary periods of constitutional politics. Perhaps the most controversial conclusion that Ackerman reaches is that the New Deal involved another such constitutional moment, in which "We the People" authorized President Roosevelt to act as an extraordinary Tribune, empowered to alter the constitutional framework through a series of transformative appointments. If one accepts this view, then one might begin to ask questions about the "original meaning" of the New Deal--a kind of originalism that would surely not be embraced by the conservative proponents of originalism in the 70s and early 80s.
    Originalism and Precedent Whither originalism? Given the ups and downs of originalism over the past three decades, making long-term predictions seems perilous indeed. But I will make one prediction about the future of originalism. We are already beginning to see originalists coming to grips with the relationship between original meaning and precedent--both in the narrow sense of Supreme Court decisions and the broader sense of the settled practices of the political branches of government and the states. Already, originalists of various stripes are beginning to debate the role of precedent in an originalist constitutional jurisprudence. Given the conferences and papers that are already in the works, I think that I can confidently predict that the debate over originalism and stare decisis will be the next big thing in the roller-coaster ride of originalist constitutional theory.
    Bibliography This very selective bibliography includes some of the articles that have been influential in the ongoing debates over originalism.
    • Bruce Ackerman, We the People: Foundations (Harvard University Press 1991) & We the People: Transformations (Harvard University Press 1998).
    • Randy Barnett, An Originalism for Nonoriginalists, 45 Loyola Law Review 611 (1999) & Restoring the Lost Constitution (Princeton University Press 2004).
    • Raoul Berger, Government by Judiciary (Harvard University Press 1977).
    • Robert Bork, The Tempting of America (Vintage 1991).
    • Paul Brest, The Misconceived Quest for the Original Understanding, 60 Boston University Law Review 204 (1980).
    • Robert N. Clinton, Original Understanding, Legal Realism, and the Interpretation of the Constitution, 72 Iowa L. Rev. 1177 (1987).
    • Richard Kay, Adherence to the Original Intentions in Constitutional Adjudication: Three Objections and Responses, 82 Northwestern University Law Review 226 (1988).
    • Jefferson Powell, The Original Understanding of Original Intent, 98 Harv. L. Rev. 885 (1985).
    • Antonin Scalia, Originalism: The Lesser Evil, 57 U. Cin. L. Rev. 849 (1989).
    • Antonin Scalia, A Matter of Interpretation (Princeton University Press 1997).
    • Lawrence Solum, Originalism as Transformative Politics, 63 Tulane Law Review 1599 (1989).
    • Keith E. Whittington, Constitutional Interpretation: Textual Meaning, Original Intent, and Judicial Review (Kansas 1999).

Saturday, December 11, 2004
Legal Theory Bookworm The Legal Theory Bookworm recommends The Blackwell Guide To The Philosophy Of Law And Legal Theory (Blackwell page here), edited by Martin P. Golding and William A. Edmundson. Here are three blurbs:
    “Golding and Edmundson have assembled many of the most luminous figures in legal theory to write deep and totally original essays on a variety of central jurisprudential topics. The authors are the right people writing on the right subjects, and this book is likely to become a standard source for many years to come.” --Frederick Schauer, Harvard University
    “In addition to offering excellent introductions to the central topics of legal philosophy, the articles in this volume are in their own right distinguished scholarly contributions to the field. Students and specialists alike will find the book to be of great interest.” --Stephen Perry, New York University School of Law
    “This is a Guide that actually guides. All the contributors provide excellent routemaps, sometimes across very tricky terrain. At the same time, many of the contributors open up new paths and new vistas. The result is a book that works at more than one level: accessible secondary literature for those just mastering the subject as well as challenging primary literature for those already steeped in it.” --John Gardner, University of Oxford

Download of the Week The Download of the Week is Shopping for Law in the Coasean Market by Marcus Cole. Here is the abstract:
    In the twentieth century, two Nobel-Prize winning economists wrote two seemingly unrelated characterizations of the processes constraining human behavior. One, Ronald Coase, wrote a short article entitled The Nature of the Firm, in which he reduced all managerial decision-making to a choice between making the factors of production, or buying them. This article, and the "make or buy" decision for which it has come to be known, has proven to be among the most seminal in the history of financial economics and organizational behavior. The second economist, Friedrich Hayek, wrote what he thought to be a comprehensive treatment of the approach that ought to be taken to the generation of rules constraining human interaction. This voluminous work, Law, Legislation and Liberty, characterized the creation of legal rules as the product of either spontaneous or planned orders. Hayek argued that spontaneous orders like the common law were more efficient mechanisms for governing human behavior than planned or made orders like legislation. This thesis received a lukewarm reception generally, and in legal circles, virtually no reception at all. This Article demonstrates that Coase and Hayek were actually making the same observation, albeit in different contexts. Hayek's conception of rule-making is, in fact, a specific application of Coase's "make or buy" decision. Common law adjudication, as a retrospective, recurrent, and cumulative spontaneous order, has the ability to make use of more and better information about human interaction than legislation, as a prospective made order, can ever hope to access. Although rules generated through adjudication have been met with mistrust, it is the process that distinguishes informed from under-informed rule-making. The efficient rule generation processes of adjudication may be employed by courts, agencies, or legislatures. 
Likewise, the less efficient mechanisms of legislation can characterize rule generation by any legal institution, including courts. Properly understood, Hayek and Coase together demonstrate that if courts can be restrained from engaging in legislation, the rules they generate are more likely to promote social welfare than can be accomplished through central planning.
Download it while it's hot!

Friday, December 10, 2004
Friday Calendar
    University of Texas, School of Law: J. Morgan Kousser, Division of Humanities & Social Sciences, California Institute of Technology, "Are Expert Witnesses Whores".

Camp & Vincent on Internet Governance Models L. Jean Camp and Charles Vincent (both of Harvard University - John F. Kennedy School of Government) have posted Setting Standards: Looking to the Internet for Models of Governance on SSRN. Here is the abstract:
    If code is law, then standards bodies are governments. This flawed but powerful metaphor suggests the need to examine more closely those standards bodies that are defining standards for the Internet. In this paper we examine the International Telecommunications Union, the Institute for Electrical and Electronics Engineers Standards Association, the Internet Engineering Task Force, and the World Wide Web Consortium. We compare the organizations on the basis of participation, transparency, authority, openness, security, and interoperability. We conclude that the IETF and the W3C are becoming increasingly similar. We also conclude that the classical distinction between standards and implementations is decreasingly useful, as standards are embodied in code - itself a form of speech or documentation. Recent Internet standards bodies have flourished in part by discarding or modifying the implementation/standards distinction. We illustrate that no single model is superior on all dimensions. The IETF is not scaling effectively, struggling with its explosive growth and the creation of thousands of working groups. The IETF's coordinating body, the Internet Society, addressed growth through a reorganization that removed democratic oversight. The W3C, initially the most closed, is becoming responsive to criticism and now includes open code participants. The IEEE SA and ITU have institutional controls appropriate for hardware but too constraining for code. Each organization has much to learn from the others.

Basu & Emerson on the Law & Economics of Rent Control Kaushik Basu and Patrick Munro Emerson (Cornell University - Department of Economics and University of Colorado at Denver - Department of Economics) have posted The Economics and Law of Rent Control on SSRN. Here is the abstract:
    What stirs most people against rent control laws in the United States and elsewhere are stories of people who have held apartments for many years and now pay absurdly low rents for them. There are important reasons for removing rent controls, but the shock value of a low rent is not one of them. Basu and Emerson construct a model of second-generation rent control, describing a regime that does not permit rent increases for sitting tenants - or their eviction. When an apartment becomes vacant, however, the landlord is free to negotiate a new contract with a higher rent. They argue that this stylized system is a good (though polar) approximation of rent control regimes that exist in many cities in India, the United States, and elsewhere. Under such a regime, if inflation exists, landlords prefer to rent to tenants who plan to stay only a short time. The authors assume that there are different types of tenants (where type refers to the amount of time tenants stay in an apartment) and that landlords are unable to determine types before they rent to a tenant. Contracts contingent on departure date are forbidden, so a problem of adverse selection arises. Short stayers are harmed by rent control while long-term tenants benefit. In addition, the equilibrium is Pareto inefficient. Basu and Emerson show that when tenant types are determined endogenously (when a tenant decides how long to stay in one place based on market signals) in the presence of rent control, there may be multiple equilibria, with one equilibrium Pareto-dominated by another. In other words, many lifestyle choices are made based on conditions in the rental housing market. One thing rent control may do is decrease the mobility of the labor force, because tenants may choose to remain in a city where they occupy rent-controlled apartments rather than accept a higher-paying job in another city. 
Basu and Emerson show that abolishing the rent control regime can do two things: Shift the equilibrium to a better outcome and result in lower rents, across the board.

Weida on Martial Law Jason Collins Weida (University of Connecticut - School of Law) has posted A Republic of Emergencies: Martial Law in American Jurisprudence (Connecticut Law Review, Vol. 36, 2004) on SSRN. Here is the abstract:
    A precise definition of martial law is near impossible. The United States Constitution makes no reference to martial law, nor does any state constitution provide a working definition or model to implement martial law. Current notions of martial law are a compilation of how different people have used martial law at different times. From an historical overview of martial law invocations, one can deduce that, first, martial law is a military power exercised during times of perceived crises. Second, martial law is an inward power, imposed on the citizens and in the territory of the government invoking martial law. Third, martial law seems to be a device, in intention at least, for the protection of the citizens it so drastically affects. Yet abstract definitions do not provide a coherent framework of martial law, nor instruct the courts on how to determine the legality of the exercise of that extraordinary power. This article begins by looking to Congress and state legislatures to provide a working definition of martial law. The law-making branch of the federal government, and its state counterparts, are best positioned to determine the boundaries of martial law in the absence of constitutional provisions. Accordingly, this article addresses acts of Congress and state statutes which bestow martial law authority on the President and state governors, respectively. Although martial law legislation provides the clearest boundaries of when martial law is permissible, it fails to provide a universal model for the implementation of martial law, if such a model exists at all. Legislation is usually limited to specific circumstances, triggered by certain events, or bound by the narrow reaches of a state or territory. The main thrust of this article evaluates the Supreme Court's review of martial law powers. The Court reviews instances of state and federal martial law in two distinct analyses. 
Each review reflects the different grants of power bestowed upon state and federal governments under the United States Constitution. One tests the constitutionality of state invoked martial law through the application of the Fourteenth Amendment. The other examines federal emergency measures purporting to be in accord with congressional authorization. Both attempt to harness the wayward exercise of extraordinary powers of American government; both have differing degrees of success. This article recommends a strategy to improve the Court's construction of emergency and martial law powers delegated by Congress to the Executive.

Bybee on Legal Realism & Hypocrisy Keith J. Bybee (The Maxwell School, Syracuse University) has posted Legal Realism, Common Courtesy, and Hypocrisy (Law, Culture, and the Humanities, Forthcoming) on SSRN. Here is the abstract:
    In the United States, courts are publicly defined by their distance from politics. Politics is said to be a matter of interest, competition, and compromise. Law, by contrast, is said to be a matter of principle and impartial reason. This distinction between courts and politics, though common, is also commonly doubted - and this raises difficult questions. How can the courts at once be in politics yet not be of politics? If the judiciary is mired in politics, how can one be sure that all the talk of law is not just mummery designed to disguise the pursuit of partisan interests? In one sense, an ambivalent public understanding of the courts and suspicions of judicial hypocrisy pose a threat to judicial and democratic legitimacy. Yet, in another sense, public ambivalence and suspected hypocrisy may actually open up space for the exercise of legal power. I illustrate and critique the enabling capacities of ambivalence and hypocrisy by drawing an analogy to common courtesy.

Thursday, December 09, 2004
Thursday Calendar

Law & Politics Book Review
    LEGALIZING GAY MARRIAGE, by Michael Mello. Philadelphia: Temple University Press, 2004. 352pp. Cloth $68.50. ISBN: 1-59213-078-X. Paper. $22.95. ISBN: 1-59213-079-8. Reviewed by Susan Burgess.
    THAT EMINENT TRIBUNAL: JUDICIAL SUPREMACY AND THE CONSTITUTION, by Christopher Wolfe (ed). Princeton, NJ: Princeton University Press, 2004. 256pp. Cloth $55.00 / £35.95. ISBN: 0-691-11667-9. Paper. $19.95 / £12.95. ISBN: 0-691-11667-9. Reviewed by Kenneth Ward.
    SELVES, PERSONS, INDIVIDUALS: PHILOSOPHICAL PERSPECTIVES ON WOMEN AND LEGAL OBLIGATIONS, by Janice Richardson. Aldershot: Ashgate, 2004. 169pp. Cloth. $79.95 / £45.00. ISBN: 075462398X. Reviewed by Catherine Lane West-Newman.

Smith on Mapping the Law Stephen A. Smith (McGill University - Faculty of Law) has posted A Map of the Common Law? (Canadian Business Law Journal, Vol. 40, pp. 364-383, 2004) on SSRN. Here is the abstract:
    This essay evaluates the argument, as developed in Stephen Waddams' book Dimensions of Private Law, that the common law, and in particular common law private law, is too complex and contradictory to be explained on the basis of any classificatory scheme or 'map'. The essay distinguishes different criticisms of map-making projects, and argues that historical evidence about law-maker's motivations and the past use of classification schemes have little force against contemporary projects. It is further argued that, while it is unrealistic to expect any scheme to account perfectly for the existing rules, this is not the standard against which classification schemes should be judged. Such schemes are successful if they help us to understand the law, and there is no a priori reason to suppose this is not possible in the case of law. Moreover, it is no objection that mapmakers often respond to claims that their maps do not fit this or that part of the law by suggesting the law in question is wrong, marginal, or misclassified. Unless we are to give up entirely the attempt to better understand the law, we must acknowledge the possibility that the current understanding is inadequate. Admittedly, existing maps contain significant gaps. But there are a number of ways such gaps might be filled. The success of common law mapmakers over the last two centuries suggests it would be highly premature to conclude their work has gone as far as it can. To the contrary, the project is still in its infancy.

Wednesday, December 08, 2004
Welcome to the Blogosphere . . . . . . to The Becker-Posner Blog. Wow!

Richards on Data Privacy Neil Richards (Washington University) has posted Reconciling Data Privacy and the First Amendment on SSRN. Here is the abstract:
    This article challenges the First Amendment critique of data privacy regulation - the claim that data privacy rules restrict the dissemination of truthful information and thus violate the First Amendment. The critique, which is ascendant in privacy discourse, warps legislative and judicial processes by constitutionalizing information policy. Rejection of the First Amendment critique is justified on three grounds. First, the critique mistakenly equates privacy regulation with speech regulation. Building on scholarship examining the boundaries of First Amendment protection, this article suggests that "speech restrictions" in a wide variety of commercial contexts have never been thought to trigger heightened First Amendment scrutiny, refuting the claim that all information flow regulations fall within the First Amendment. Second, this article divides regulations of information flows into four analytic categories, and demonstrates how, in each category, ordinary doctrinal tools can be used to uphold the constitutionality of consumer privacy rules. Third, relying on recent intellectual histories of American constitutional law, this article argues that fundamental jurisprudential reasons counsel against acceptance of the First Amendment critique. From the perspective of privacy law, there are striking parallels between the critique's "freedom of information" and the discredited "freedom of contract" regime of Lochner. More importantly, from the perspective of First Amendment law, the critique threatens the obliteration of the distinction between economic and political rights at the core of post-New Deal constitutionalism. Rejection of the First Amendment critique thus has real advantages. At the level of policy, it preserves the ability of legislatures to develop information policy in a nuanced way. And at the level of theory, it preserves the basic dualism upon which the modern edifice of rights jurisprudence is built.

Cole on the Coasean Market for Law G. Marcus Cole (Stanford Law School) has posted Shopping for Law in the Coasean Market (NYU Journal of Law and Liberty, Vol. 1, Forthcoming) on SSRN. Here is the abstract:
    In the twentieth century, two Nobel-Prize winning economists wrote two seemingly unrelated characterizations of the processes constraining human behavior. One, Ronald Coase, wrote a short article entitled The Nature of the Firm, in which he reduced all managerial decision-making to a choice between making the factors of production, or buying them. This article, and the "make or buy" decision for which it has come to be known, has proven to be among the most seminal in the history of financial economics and organizational behavior. The second economist, Friedrich Hayek, wrote what he thought to be a comprehensive treatment of the approach that ought to be taken to the generation of rules constraining human interaction. This voluminous work, Law, Legislation and Liberty, characterized the creation of legal rules as the product of either spontaneous or planned orders. Hayek argued that spontaneous orders like the common law were more efficient mechanisms for governing human behavior than planned or made orders like legislation. This thesis received a lukewarm reception generally, and in legal circles, virtually no reception at all. This Article demonstrates that Coase and Hayek were actually making the same observation, albeit in different contexts. Hayek's conception of rule-making is, in fact, a specific application of Coase's "make or buy" decision. Common law adjudication, as a retrospective, recurrent, and cumulative spontaneous order, has the ability to make use of more and better information about human interaction than legislation, as a prospective made order, can ever hope to access. Although rules generated through adjudication have been met with mistrust, it is the process that distinguishes informed from under-informed rule-making. The efficient rule generation processes of adjudication may be employed by courts, agencies, or legislatures. 
Likewise, the less efficient mechanisms of legislation can characterize rule generation by any legal institution, including courts. Properly understood, Hayek and Coase together demonstrate that if courts can be restrained from engaging in legislation, the rules they generate are more likely to promote social welfare than can be accomplished through central planning.
I heard this paper at a conference early in the summer. Highly recommended, very interesting!

Farber on Models of Legal Change Daniel A. Farber (University of California, Berkeley - School of Law (Boalt Hall)) has posted Earthquakes and Tremors in Statutory Interpretation: An Empirical Study of the Dynamics of Interpretation (Minnesota Law Review, 2004) on SSRN. Here is the abstract:
    Using citation data from the Supreme Court's 1984 and 1990 Terms, this study tests three models of judicial dynamics. The first model posits that the extent of an opinion's importance to the law, as measured by how frequently it is cited by courts and commentators, is determined by a host of relatively small factors. This model predicts a normal, bell-shaped curve of citation frequencies. The second model posits that judges have bounded rationality and strong attachments to existing rules, leading them to practice normal science most of the time with occasional paradigm shifts. In empirical studies by various social scientists, this kind of model has been found to produce frequency distributions that are roughly bell-shaped but have a characteristic known as leptokurtosis. The third model stems from complexity theory (also known as chaos theory or fractal geometry). This type of model predicts a power curve that is characteristic of many social and natural processes, such as earthquake severity. Because earthquakes provide such a vivid metaphor for legal change, this can be called the tectonic model of legal dynamics. As it turns out, the first model is clearly wrong, and the second model is also at odds with the data. On the other hand, the tectonic model provides a good statistical fit for the data. Thus, at least in terms of this preliminary empirical investigation, complexity theory may provide important insights into judicial dynamics.
Highly recommended.
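For readers curious about the statistics, the contrast among Farber's three models is easy to see in simulation. Here is a minimal sketch (the citation counts below are simulated, not Farber's data, and the distribution parameters are purely illustrative) of how excess kurtosis, the leptokurtosis statistic the abstract mentions, separates a bell-shaped distribution from a heavy-tailed, power-law "tectonic" one:

```python
import random
import statistics

random.seed(0)

# Simulated citation counts (NOT Farber's data). A power-law (Pareto-like)
# process produces a few "earthquake" opinions cited far more often than
# the rest; a normal process does not.
normal_counts = [max(0.0, random.gauss(mu=50, sigma=10)) for _ in range(10_000)]
pareto_counts = [10 * random.paretovariate(alpha=1.5) for _ in range(10_000)]

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (roughly 0 for normal data)."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean(((x - m) / s) ** 4 for x in xs) - 3

# A bell curve has excess kurtosis near 0; heavy-tailed "tectonic" data is
# strongly leptokurtic, so the statistic distinguishes the two models.
print(excess_kurtosis(normal_counts))   # near 0
print(excess_kurtosis(pareto_counts))   # large and positive
```

The same logic runs in reverse for empirical work like Farber's: if observed citation frequencies show the extreme kurtosis characteristic of a power curve, the bell-shaped models can be ruled out.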

Gross on Stability Oren Gross (University of Minnesota Law School) has posted Stability and Flexibility: A Dicey Business (GLOBAL ANTI-TERRORISM LAW AND POLICY (Victor Ramraj, Michael Hor, and Kent Roach eds. Cambridge University Press, 2005)) on SSRN. Here is the abstract:
    A significant part of the life of the law has been the attempt to balance the competing values of stability and flexibility. Emergencies present the challenge of enabling government to confront the crisis by, if necessary, using special emergency powers and greater flexibility of operation while, at the same time, ensuring that such powers and flexibility do not get out of control and enable government to impose long-term limitations on individual rights and liberties or modify the nature of the relevant constitutional regime. The paper focuses on A.V. Dicey's treatment of the challenge of balancing stability and flexibility in the context of grave crises. Prof. David Dyzenhaus of the University of Toronto Faculty of Law has recently suggested that Dicey distinguishes between two legal responses to an emergency situation. In the first, the response is the after-the-fact recognition that officials made an excusable decision to act outside of the law because it was necessary that they act and the law did not provide them with the resources they needed. In the second, Parliament in advance gives to officials resources to deal with emergencies in accordance with the rule of law. Dyzenhaus argues that Dicey prefers the second option and himself takes a similar position. The paper argues that while Dicey does present two ways of responding to emergency situations, he sees both as complementary, allowing the use of one when the other may be unavailable or undesirable. The paper goes on to tie Dicey's analysis to John Locke's theory of the prerogative, suggesting that Dicey answers a significant problem with Locke's theory. The article then turns to a closer examination of the ex post ratification component of what I called elsewhere the Extra-Legal Measures model for dealing with emergency powers. Again, the paper does so by using Dicey's discussion of the Act of Indemnity, which is a particular case of ex post ratification.
The paper seeks to demonstrate that the critique of the Extra-Legal Measures model as placing public officials in a legal black hole . . . a zone uncontrolled by law misses some of the essential components of the model. Particularly, it misses the fact that, as Dicey puts it, the relief to be obtained [from Acts of Indemnity] is prospective and uncertain. Until the extralegal action is ratified ex post, and potentially even after it is so ratified, the acting public official does not know what the personal consequences of violating the rule are going to be. The more uncertain it is that ratification will be forthcoming, the more uncertain its potential scope, and the greater the personal risk involved in wrongly interpreting either of those is, the greater the incentive for individual actors to conform their action to the existing legal rules and norms and not risk acting outside them.

Morrison on Barnett Trevor W. Morrison (Cornell University - School of Law) has posted Lamenting Lochner's Loss: Randy Barnett's Case for a Libertarian Constitution (Cornell Law Review, Vol. 90, March 2005) on SSRN. Here is the abstract:
    This review essay discusses Randy Barnett's new book, "Restoring the Lost Constitution: The Presumption of Liberty." In general terms, the book argues that current constitutional doctrine grants too much power to federal and state legislatures, and provides too little protection to individual liberty. To remedy the problem, Professor Barnett proposes abandoning the "presumption of constitutionality" that courts typically accord to state and federal laws, and adopting in its place a "presumption of liberty" that, he contends, is more consistent with the Constitution's original meaning. In this review, I discuss four points on which "Restoring the Lost Constitution" gives me pause. First, the book proceeds from an extra-textual political theory that is difficult to square with the actual framing of the Constitution, a dilemma that is particularly acute for Professor Barnett since he defends a form of originalism as the appropriate mode of constitutional interpretation. Second, Professor Barnett's defense of an "original meaning" approach to constitutional interpretation features a rather strained attempt to analogize constitutions to contracts, and, in the process, slights competing accounts of constitutional interpretation. Third, especially in his discussion of the state police power, Professor Barnett operationalizes his "presumption of liberty" by injecting into the Constitution a number of remarkably unstable conceptual distinctions. Fourth and finally, Professor Barnett's argument for a generalized jurisprudence of liberty neglects the extent to which particular articulations of liberty in our constitutional system may be linked to another core constitutional value: equality. Greater attention to a liberty-equality link could yield a significantly different, and better grounded, understanding of the appropriate constitutional balance between government power and individual freedom.

Tuesday, December 07, 2004
Nobis on Truth in Ethics and Epistemology Nathan Nobis has posted Truth in Ethics and Epistemology: A Defense of Normative Realism. Here is a taste:
    I argue that common reasons to think that no moral judgments are true suggest that epistemic judgments, e.g., that some belief is rational, justified or should be held, are not true either. I argue that these epistemic anti-realisms are rationally unacceptable and that the major premises that entail them are false. Thus, I undercut the case against moral realism, which rests on these premises.

Monday, December 06, 2004
Monday Calendar

Calsamiglia on Equality of Opportunity C. Calsamiglia has posted Decentralizing equality of opportunity. Here is a taste:
    Equality of opportunity is an important welfare criterion in political philosophy and policy analysis. Philosophers define it as the requirement that an individual’s well-being be independent of his or her irrelevant characteristics. The difference among philosophers is mainly about which characteristics should be considered irrelevant. Policymakers, however, are often called upon to address more specific questions: “how should admission policies be designed so as to provide equal opportunities for college?” or “how should tax schemes be designed so as to equalize opportunities for income?” These are called local distributive justice problems. In the literature, global and local distributive justice problems have been analyzed independently. This paper interprets local distributive justice problems as essentially the decentralized global distributive justice problem. Therefore, the question of interest is whether we can solve the global distributive justice problem by solving a series of local distributive justice problems. In particular, we characterize the set of local distributive justice rules that attain global equality of opportunity as defined by political philosophers. We show that the provision of equality of opportunity, which is not concerned about effort directly, is decentralized if the collection of local mechanisms provides local equality of rewards to effort, that is, equality of rewards to productive individual choices. Moreover, if the mechanisms and environments are rich, a collection of local mechanisms decentralizes equality of opportunity only if it provides local equality of rewards to effort.

Sunday, December 05, 2004
Legal Theory Calendar
    Monday, December 6
    Thursday, December 9
      Boston University, School of Law: David Walker, "The Manager's Share".
      University of Michigan, Law & Economics: Keith Hylton, Boston University, Church and State: An Economic Analysis
      Vanderbilt School of Law: Pauline Kim, Washington University, "Constructing Legal Disputes: A Look at Workplace Drug Testing"
    Friday, December 10
      University of Texas, School of Law: J. Morgan Kousser, Division of Humanities & Social Sciences, California Institute of Technology, "Are Expert Witnesses Whores".

Legal Theory Lexicon: Transparency
    Introduction Sooner or later, most law students encounter the idea that "transparency" (as opposed to "opaqueness") is a desirable characteristic in markets, procedures, and governance institutions (both private and public). But what is "transparency," and why is it a good thing? This entry in the Legal Theory Lexicon provides a very brief introduction to the concept of transparency for law students (especially first-year law students) with an interest in legal theory. The basic idea of transparency is simple: things go better when processes are open. Markets function best when transactions are public. Judicial processes work best when they are visible to the participants and the public. Governments work best when both the inputs to decisions and the meetings in which decisions are made are public. The remainder of this entry explores this idea in a few important contexts.
    Transparency and Democratic Process Why should the processes of democratic decisionmaking be transparent? There are so many different answers to this question that one hardly knows where to begin, but we might start by distinguishing between answers that rely on consequentialist reasoning and those that appeal to ideas about rights, fairness, or legitimacy. The consequentialist case for transparency in government usually rests on the idea that opaque processes are likely to facilitate corruption or capture. Corruption is more likely because secret decisionmaking facilitates rent-seeking (soliciting bribes) by public officials; transparent processes make bribery more difficult and increase the likelihood that it will be exposed. "Capture" is the term used to describe domination of a regulatory process by the very interests that are supposed to be regulated. When lawmaking (or administrative rulemaking) is done in secret, there is a greater likelihood that the information flow will be one-sided.
    The Bush Administration's energy policy provides a good example of debates over the pros and cons of transparency in government. The administration developed its energy policy through non-transparent procedures. Vice-President Cheney met in private with a variety of interest groups, and the records of the meetings were not made available to the public. Critics charged that this secrecy allowed oil and coal interests to dominate the decision-making process to the detriment of the public interest. The administration defended the process, arguing that public processes would have inhibited free and frank discussion of the issues by the various interest groups. Whether or not this argument was correct in this particular context, it illustrates an important point. Transparency in government comes at a price. Transparent processes may be inefficient--what can be done in private in minutes may take hours in public. Transparent processes may also distort decision-making, forcing political actors to pander to public opinion at the expense of good policy.
    The case for transparency in government need not rest on consequences. It might also be argued that transparent government is required by the rights of citizens to meaningfully participate in democratic self-government. If public officials conduct business in private, then it becomes more difficult for citizens to make meaningful decisions at the ballot box.
    Transparency in the Market and the Boardroom The case for transparent markets is simple. Efficiency requires information. Efficient pricing, for example, requires that buyers know what they are buying and sellers know what they are selling. "Buying a pig in a poke" is simply a colorful way of expressing the idea that a nontransparent transaction has occurred. Transparency is especially important in capital markets. Securities regulation in the United States rests on the assumption that mandatory disclosure of accurate financial information will lead to investor confidence and facilitate efficient financial markets. Without transparency, each investor would face either uncertainty or enormous information acquisition costs. Efficient capital markets produce enormous benefits, because they enable resources to be allocated to their highest and best use. Finally, transparency in corporate governance aims to prevent management from appropriating wealth owned by stockholders.
    There are, however, situations in which transparency is inconsistent with efficient markets. Trade secret law, for example, aims at the opposite of transparency. The theory is that the ability to keep secrets creates an incentive to develop new ideas, inventions, and processes; disclosure would allow competitors to appropriate the new idea without compensation, and hence would reduce the incentives for the creation of new knowledge. Similarly, corporations are not required to disclose business strategies and tactics.
    Transparent Judicial Procedures Civil litigation and criminal trials provide a final context in which transparency is an important value. When we think about the transparency of judicial procedures, there are two different groups for whom the process may be transparent or opaque. The first group is comprised of litigants (plaintiffs/defendants in civil litigation and defendants in criminal litigation). The second group consists of the public at large. Most legal systems place a higher value on transparency to participants than on transparency to the public. While it is not unusual for a hearing to be closed to the public, it is very unusual for a judicial proceeding to exclude the parties themselves. But there are important exceptions to this rule. Deliberations by both judges and juries are usually opaque. Thus, even the defendant in a criminal case is not allowed to observe the deliberations of the jury. A similar rule applies to judicial deliberations. For example, the conferences of an appellate court (e.g. the United States Supreme Court) are conducted in the strictest secrecy, as are the communications among judges and between judges and their clerks. In these contexts, the thought is that open deliberations would actually distort the decision-making process, leading to worse rather than better decisions.
    Conclusion Concern with process is ubiquitous in legal theory, and processes can be transparent or opaque. As a law student, you might begin to ask yourself about the effect of legal rules on transparency. Does this rule make the process more transparent or more opaque? When you encounter rules that render processes opaque, always ask why.

Saturday, December 04, 2004
Legal Theory Bookworm The Legal Theory Bookworm recommends Morals by Agreement by David Gauthier. Here's a description:
    Are moral principles actually principles of rational choice? Starting from the view that it is rational always to choose what will give one the greatest expectation of value or utility--and the common counter-claim that this procedure, applied in many situations, will actually leave people worse off than need be--Gauthier instead proposes a principle of cooperation whereby each must choose in accordance with a principle to which all can agree. He shows that not only does such a principle ensure mutual benefit and fairness, but also that each person may expect greater utility from actually adhering to a morality based on it, even though his or her choice did not have that specific end primarily in view. In resolving what may appear to be a paradox, he establishes morals on the foundation of reason.
Gauthier's book is a contemporary classic. Highly recommended!
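The counter-claim Gauthier starts from--that straightforward utility maximization can leave everyone worse off than need be--is the familiar logic of the prisoner's dilemma. A minimal sketch of that logic (the payoff numbers are illustrative, not drawn from the book):

```python
# Prisoner's dilemma payoffs: my payoff given (my_move, their_move).
# Defection is the individually utility-maximizing move whatever the other
# player does, yet mutual defection leaves both worse off than mutual
# cooperation -- the gap Gauthier's principle of cooperation addresses.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_reply(their_move):
    """The move that maximizes my payoff against a fixed move by the other player."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFF[(mine, their_move)])

# Defection dominates: it is the best reply to either move...
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"

# ...yet both players defecting (1 each) does worse than both cooperating (3 each).
assert PAYOFF[("defect", "defect")] < PAYOFF[("cooperate", "cooperate")]
```

Gauthier's move, on this stylized picture, is to argue that it is rational to dispose oneself to cooperate with other cooperators, capturing the mutual-benefit payoff that case-by-case maximizers forgo.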

Download of the Week The Download of the Week is Group Judgments: Deliberation, Statistical Means, and Information Markets by Cass R. Sunstein. Here is the abstract:
    How can groups elicit and aggregate the information held by their individual members? The most obvious answer involves deliberation. For two reasons, however, deliberating groups often fail to make good decisions. First, the statements and acts of some group members convey relevant information, and that information often leads other people not to disclose what they know. Second, social pressures, imposed by some group members, often lead other group members to silence themselves because of fear of disapproval and associated harms. The unfortunate results include the propagation of errors; hidden profiles; cascade effects; and group polarization. A variety of steps should be taken to ensure that deliberating groups obtain the information held by their members. Because of their ability to aggregate privately held information, information markets have substantial advantages over group deliberation. These points bear on discussion of normative issues, in which deliberation might also fail to improve group thinking.
Download it while it's hot!
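The "statistical means" alternative to deliberation that Sunstein invokes rests on a simple aggregation fact: when members' errors are independent, the average of their private estimates tends to be far more accurate than a typical individual--an advantage that deliberation can squander if early statements lead others to suppress what they privately know. A toy simulation of the aggregation point (all numbers illustrative, not from the paper):

```python
import random

random.seed(0)
TRUTH = 100.0       # the true quantity the group is estimating
N_MEMBERS = 50      # group size
N_TRIALS = 1000     # repeated trials to average out noise

mean_errors, individual_errors = [], []
for _ in range(N_TRIALS):
    # Each member holds an independent, noisy private estimate of the truth.
    estimates = [random.gauss(TRUTH, 20.0) for _ in range(N_MEMBERS)]
    group_mean = sum(estimates) / N_MEMBERS
    mean_errors.append(abs(group_mean - TRUTH))
    # For comparison: the error of a single randomly chosen member.
    individual_errors.append(abs(random.choice(estimates) - TRUTH))

avg_mean_error = sum(mean_errors) / N_TRIALS
avg_individual_error = sum(individual_errors) / N_TRIALS

# Averaging independent private estimates sharply reduces expected error.
assert avg_mean_error < avg_individual_error
```

The simulation assumes independence; Sunstein's point is precisely that cascades and social pressure destroy that independence, which is why statistical means and information markets can outperform talk.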

Friday, December 03, 2004
Kang on Race & Prejudice Jerry Kang (University of California, Los Angeles - School of Law) has posted Trojan Horses of Race (Harvard Law Review, March 2005) on SSRN. Here is the abstract:
    Recent social cognition research - a mixture of social psychology, cognitive psychology, and cognitive neuroscience - has provided stunning results that measure our implicit bias against various social categories. In particular, they reveal that most of us have implicit biases in the form of negative beliefs (stereotypes) and attitudes (prejudice) against racial minorities. This is notwithstanding sincere self-reports to the contrary. These implicit biases have been demonstrated to have real-world consequences - in how we interpret actions, perform on exams, interact with others, and even shoot a gun. The first half of this Article imports this remarkable science into the law reviews and sets out a broad intellectual agenda to explore its implications. The second half explores where implicit bias comes from, and focuses on "vicarious" experiences with racial others mediated through electronic communications. This, in turn, raises a timely question of communications policy concerning how the public interest standard was recently reshaped in the FCC's controversial June 2003 Media Ownership Order. There, the FCC repeatedly justified relaxing ownership rules by explaining how it would increase, of all things, local news. Since local news was viewed as advancing "diversity" and "localism," two of the three core elements of the "public interest," any structural deregulation that increased local news was lauded. Troubling is what's on the local news. Sensationalistic crime stories are disproportionately shown: "If it bleeds, it leads." Racial minorities are repeatedly featured as violent criminals. Consumption of these images, the social cognition research suggests, exacerbates our implicit biases against racial minorities. Since implicit bias is fueled in part by what we see, the FCC has recently redefined the public interest so as to encourage the production of programming that makes us more biased.
We seek local news for valuable information necessary to plan our lives, but embedded in that information transfer is a sort of Trojan Horse that increases our implicit bias. Unwittingly, the FCC linked the public interest to racism. Potential responses, such as recoding the "public interest," and examining potential firewalls and disinfectants for these viruses are discussed. These solutions are explored in light of both psychological and constitutional constraints.

Friday Calendar

Thursday, December 02, 2004
Cohen on Commodification I. Glenn Cohen (U.S. Department of Justice, Civil Section, Appellate Staff) has posted The Price of Everything, the Value of Nothing: Reframing the Commodification Debate on SSRN. Here is the abstract:
    Under current law, sperm, art, pollution rights, and life insurance can be sold; votes, organs, draft cards, and children cannot. Demarcating a line between what can and cannot be permissibly sold is the goal of the commodification debate (also sometimes called blocked exchanges). This paper attempts to add some precision to the dialogue by mapping out the conceptual space of the commodification debate and deriving (and tentatively evaluating) the entailments of different positions adopted by anticommodificationists - those opposing the sale or trade of certain goods. I begin by dividing coercion arguments (focused on the autonomy of the seller of the good and inequality in background conditions) from corruption arguments (focused on the idea that sale of the good corrupts it, or does violence to the way we think the good is best valued). I further subdivide coercion arguments between those that focus on the voluntariness of the seller (e.g., if organ sale were allowed, only very poor people would want to sell) and those focused on access (e.g., if organ sale were allowed, only the very rich would be able to pay the premium prices to access it). I demonstrate that both forms of the argument share the same philosophical basis, and both can be eliminated by using the same public policy solutions (e.g., placing a fairly low price ceiling on the good). The second set of arguments, the corruption arguments, prove more interesting because they do not depend on a particular state of the world and are less susceptible to public policy solutions. The crux of these arguments is that an exchange is corrupting when the relevant goods cannot be aligned along a single metric without doing violence to our considered judgments about how these goods are best characterized.
More specifically, one might suggest that there are various spheres (sometimes called modes) of valuation, and an exchange is corrupting when it ignores the differences between these spheres of valuation and forces us to value all goods in the same way. For example, exchanging children for money corrupts the value of children because children belong in a higher sphere of valuation than does money. I show that the corruption arguments can also be subdivided into two categories, Conventionalist arguments (which argue that determining how a good should be valued and which exchanges are improper is relative to a particular society at a particular time) and Essentialist arguments (which argue that goods have an essence or nature that determines how a good should be valued and which exchanges are improper). After examining the shortcomings of the Conventionalist account, I try to develop a nuanced approach to the Essentialist position. I catalogue various possible conceptions of how many spheres of valuation exist and how they are divided, and evaluate the merits of the various conceptions. I conclude, however, that any such account will end in failure if it cannot explain why the sale or barter of a particular good (e.g., an organ) should be blocked, while the gifting of the same good (organ donation) is permitted. My solution is to suggest that a plausible Essentialist position has to offer not only an account based on the nature of the goods exchanged, but also an account based on the nature of the transaction. I offer one such account I call the formula from the nature of the transaction that argues that part of what makes an exchange improper is the transaction's expression of value equilibrium - that the things being exchanged are of equivalent value. 
I suggest that this is why gift exchanges like organ donation are not improper on the Essentialist anticommodificationist account, and also explains why compensation for the medical costs and lost wages of a mother in a surrogacy contract is different from commercial surrogacy (a blocked exchange) - the former transaction recognizes that the money is not a substitute for the child, that some value remains uncompensated, whereas the latter does not.

Conference Announcement: Free Speech in Wartime On Sunday, January 16, & Monday, January 17, 2005, at Rutgers-Camden School of Law, there will be a conference entitled "Free Speech in Wartime" on Geoff Stone's book, Perilous Times: Free Speech in Wartime from the Sedition Act of 1798 to the War on Terrorism. Follow this link for details.

Thursday Calendar

Wednesday, December 01, 2004
Brown on Utilitarian Welfarism Check out The Arbitrariness of Utilitarian Welfarism by Chris Brown over at Desert Landscapes. Here's a taste:
    Bothered by the apparent popularity of the utilitarian maximization principle, I’ve been trying to come up with an alternative principle for consequentialists. I think I might’ve stumbled onto a good one, and want to get some feedback on it. What I’ll say here presupposes that the correct form of consequentialism is a form of welfarism. (For convenience, think of welfarism as the view that well-being is the only intrinsically valuable thing.) On this, I agree with contemporary utilitarians such as James Griffin and Wayne Sumner, though I disagree with their accounts of well-being. My hope is that what I’ll say here works no matter which of the various popular accounts of well-being we assume. Part of what I’m assuming, then, is that we can assign a cardinal value corresponding to the extent of any being’s well-being.
Highly recommended! Check it out.

Balkin's Hypo Jack Balkin has a very nice hypo re Ashcroft v. Raich:
    What then, of a statute that regulates the ability of doctors to prescribe Schedule I substances for their patients for medical treatment? Why wouldn't such a law, making it a crime for doctors to prescribe marijuana, be well within Congress's commerce powers? After all, the practice of medicine is a business, and therefore is economic activity. And there is no problem cumulating effects if the activity is economic in nature. If Congress may make it a crime for doctors to prescribe marijuana for medical use, then it would, in effect, preempt California's law allowing cultivation of marijuana for personal use when recommended by a licensed physician, because no doctor could legally recommend the use of marijuana for a patient.
This is quite a nice argument. Of course, there is a counter move that could and would be made by the medical cannabis community. Voluntary clinics that did not charge for the prescription would not be engaged in economic activity--as such activity is defined by the Ashcroft v. Raich petitioners. (Their definition, as articulated by Barnett in oral argument, is that for activity to be economic it must be part of a process that includes sale or barter of a good or service.) Hence, an as-applied challenge could be brought to the application of the federal statute to the noneconomic activity of the volunteer physicians.

Vladeck on Empirical Measurements in Judicial Selection David Vladeck (Georgetown University Law Center) has posted Keeping Score: The Utility of Empirical Measurements in Judicial Selection (Florida State University Law Review, Vol. 32, 2005) on SSRN. Here is the abstract:
    This paper is a response to the article authored by Professors Stephen Choi and Mitu Gulati entitled "A Tournament of Judges?", proposing that the judicial selection process for Supreme Court nominees focus on empirical measures of judicial quality. The paper first acknowledges that the judicial appointment process has become so politically polarized that vacancies go unfilled for extended periods of time, delaying the administration of justice and leading to the disintegration of collegiality on a number of courts of appeals. The paper then agrees that empirical measurements may shed light on judicial qualifications and provides a number of illustrations where empirical measures do tell us a great deal about a nominee's qualifications for high judicial office. The paper then turns to the measurements identified by Choi and Gulati and explains why they are flawed in many respects. The paper concludes by encouraging further discourse on empirical measurements because the alternative, perpetual partisan bickering, is simply unacceptable.

Lowenstein on a Perfect Storm in Equity Markets Louis Lowenstein (Columbia Law School) has posted A Perfect Storm: Changing a Culture on SSRN. Here is the abstract:
    In October, 1991, there occurred off the coast of Massachusetts a perfect storm, a tempest created by a rare coincidence of events. In the late 1990s, there was another perfect storm, a similarly rare coincidence of forces which caused huge waves in our financial markets, as the NASDAQ index, for one, soared from 1200 in 1997 to 5000 in 2000 and back to 1100 in 2002. These were the days of the New Economy - low inflation and unemployment, government surpluses, and the Internet that would change how we work and play. Suddenly, all of us were watching crawlers on CNBC to see how rich we were. But the bubble triggered a period of unmatched and pervasive corporate fraud. Using academic blessings of stock options as the link of pay for performance, executives took huge helpings and then manipulated reported earnings to achieve the stock gains that would bring their compensation to stunning heights. In the five years ended 2003, over 1300 companies, many of them major, restated their earnings, and those were only the more egregious cases. It was a broad-based, cultural failure, one that enlisted the active support, nay connivance, of the various gatekeepers, notably of course auditors, but also analysts, audit committees, bankers, and, yes, the lawyers who crafted the hollow transactions that would enable debt to disappear from balance sheets and create revenues from thin air. But then, we all wanted so much to believe. The author brought to the article his background as a member of the Panel on Audit Effectiveness, appointed in 1998 at the instance of Chairman Levitt of the SEC, and also his longstanding skepticism about efficient market theory.

Geis on Hadley v. Baxendale George S. Geis (University of Alabama - School of Law) has posted Empirically Assessing Hadley v. Baxendale (Florida State University Law Review, Forthcoming) on SSRN. Here is the abstract:
    The rule of Hadley v. Baxendale enjoys an important place in the economic analysis of contract law. Over time, Hadley has taken on great significance as an archetype for contract default rules that efficiently expose asymmetric information. But a hotly contested debate questions whether economic theories of Hadley - and economic approaches to contract law more generally - have failed. There are two concerns. First, it may be hard to empirically measure key variables in the economic models. Second, the models are complex, making it difficult to sum the effects of multiple variables. This Article takes up the challenge of empirically assessing the Hadley rule with a new approach that draws upon willingness-to-pay studies in the field of marketing. The first of its kind, this work presents evidence that the Hadley rule is a preferable legal default in three simple markets - subject to several important qualifications. This study implies that markets with similar conditions might also benefit from a Hadley default rule. More broadly, it suggests that marketing research may be a rich source of data for testing economic theories of contract law.
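The information-forcing logic that the abstract attributes to Hadley can be put in numbers. Under the Hadley default, the carrier is liable only for foreseeable (ordinary) consequential losses, so a shipper facing unusually high damages must disclose them--and pay a correspondingly higher price--rather than being silently subsidized by ordinary shippers. A stylized calculation (all figures hypothetical, not drawn from the article):

```python
# Stylized comparison of contract default rules after Hadley v. Baxendale.
# All figures are hypothetical, chosen only to illustrate the cross-subsidy.
ORDINARY_LOSS = 100.0   # consequential loss for a typical shipper if delayed
HIGH_LOSS = 1000.0      # loss for an unusual, high-value shipper
SHARE_HIGH = 0.1        # fraction of shippers with the high loss
P_BREACH = 0.05         # probability of breach (delay) by the carrier

# Default 1: unlimited liability. The carrier cannot tell shippers apart,
# so every shipper pays a premium covering the pooled expected liability.
pooled_premium = P_BREACH * ((1 - SHARE_HIGH) * ORDINARY_LOSS
                             + SHARE_HIGH * HIGH_LOSS)

# Default 2: the Hadley rule. Liability is capped at the foreseeable
# (ordinary) loss unless the shipper discloses; high-value shippers
# reveal their stakes and contract for full coverage at its true cost.
ordinary_premium = P_BREACH * ORDINARY_LOSS
high_premium = P_BREACH * HIGH_LOSS

# Under unlimited liability, ordinary shippers cross-subsidize high-value ones:
assert pooled_premium > ordinary_premium
# Under Hadley, each type's premium tracks its own expected loss:
assert ordinary_premium < pooled_premium < high_premium
```

On these numbers the pooled premium (9.5) overcharges ordinary shippers (whose expected loss is 5.0) and undercharges high-value ones (50.0); the Hadley default forces the asymmetric information out, which is the efficiency claim Geis sets out to test empirically.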

McEvoy & Conway on the Politics of the Past Kieron McEvoy and Heather Conway (both Queen's University Belfast - School of Law) have posted The Dead, the Law, and the Politics of the Past (Journal of Law and Society, Vol. 31, pp. 539-562, December 2004) on SSRN. Here is the abstract:
    This article explores the role of law in cultural and political disputes concerning dead bodies. It uses three interconnecting legal frameworks: cultural and moral ownership, commemoration, and closure. It begins with a critique of the limitations of the private law notion of 'ownership' in such contexts, setting out a broader notion of cultural and moral ownership as more appropriate for analysing legal disputes between states and indigenous tribes. It then examines how legal discourses concerning freedom of expression, religious and political traditions, and human rights and equality are utilized to regulate the public memory of the dead. Finally, it looks at the relationship between law and notions of closure in contexts where the dead have either died in battle or have been 'disappeared' during a conflict, arguing that law in such contexts goes beyond the traditional retributive focus of investigation and punishment of wrongdoers and instead centres on broader concerns of societal and personal healing.

Ferrari on Bobbio Vincenzo Ferrari (Università degli Studi di Milano - Facolta' di Giurisprudenza) has posted The Firm Subtleties of a Philosopher in 'Everlasting Doubt': Remembering Norberto Bobbio (Journal of Law and Society, Vol. 31, pp. 578-591, December 2004) on SSRN. Here is the abstract:
    This is the first paper in a new series which focuses on European scholars whose work, because of language issues, may not be widely known within the English-speaking world.

Hunt on Marx & Foucault in Bed Alan J. Hunt (Carleton University - Department of Law) has posted Getting Marx and Foucault into Bed Together (Journal of Law and Society, Vol. 31, pp. 592-609, December 2004) on SSRN. Here is the abstract:
    This article is a contribution to the occasional series dealing with a major book that influenced the author. Previous contributors include Stewart Macaulay, John Griffith, William Twining, Carol Harlow, Geoffrey Bindman, Harry Arthurs, and Andre-Jean Arnaud.

Abel on Collective Action in a Law Firm Richard L Abel (University of California, Los Angeles - School of Law) has posted Varieties of Social Discipline: Collective Action in a Law Firm (Journal of Law and Society, Vol. 31, pp. 610-624, December 2004) on SSRN. No abstract is available.

Gier on Hindu Virtue Ethics Nick Gier has posted Hindu Virtue Ethics. Here is a taste:
    Following an aesthetics of virtue, I will propose that the Hindu virtues are personal creations that are, as Aristotle maintains, “relative to us,” and that strictly deontological or utilitarian readings of the ethics of the Hindu epics are not supported. In the first section I will discuss the different roles that rules and virtues play in our moral lives, and I will demonstrate that the virtues have axiological priority. The second section will present the outlines of an aesthetics of virtue, which I will explicate in terms of Confucius and Gandhi. Drawing heavily on Matilal, I argue, in the third section, that there are good reasons to read the Hindu epics as a virtue ethics. Matilal offers some wonderful insights about the true nature of karma and I, in the fourth section, combine these with my own discoveries about Buddhism to offer a non-fatalistic interpretation of the motto “character is destiny.” Finally, Matilal’s acute observations about Krishna give me the link that I needed for a Hindu virtue aesthetics in the fifth and final section.

Sunstein on Group Judgments Cass R. Sunstein (University of Chicago Law School) has posted Group Judgments: Deliberation, Statistical Means, and Information Markets (New York University Law Review, Forthcoming) on SSRN. Here is the abstract:
    How can groups elicit and aggregate the information held by their individual members? The most obvious answer involves deliberation. For two reasons, however, deliberating groups often fail to make good decisions. First, the statements and acts of some group members convey relevant information, and that information often leads other people not to disclose what they know. Second, social pressures, imposed by some group members, often lead other group members to silence themselves because of fear of disapproval and associated harms. The unfortunate results include the propagation of errors; hidden profiles; cascade effects; and group polarization. A variety of steps should be taken to ensure that deliberating groups obtain the information held by their members. Because of their ability to aggregate privately held information, information markets have substantial advantages over group deliberation. These points bear on discussion of normative issues, in which deliberation might also fail to improve group thinking.

Wednesday Calendar
    Florida State University, School of Law: Sanja Kutnjak Ivkovich, Florida State University School of Criminology, "A Comparative Perspective on the Police Code of Silence."
    Oxford Centre for Criminology: James L. Nolan, Problem-Solving Courts: a Comparative Study of a Legal Transplant.
    Oxford Institute of European and Comparative Law in conjunction with Comparative Law Discussion Group: Professor Jacques du Plessis, Duress and Undue Influence in Mixed Legal Systems and the Principles of European Contract Law.
    Oxford Public International Law Discussion Group in conjunction with Financial Law Discussion Group: Lee Buchheit, The Role of the Official Sector in Sovereign Debt Workouts: the Case of Iraq.
    NYU Legal History: William Novak, Visiting Professor, NYU School of Law.
    Oxford Comparative Law Discussion Group in conjunction with Private and Commercial Law Discussion Group: Professor Jacques du Plessis, Duress and Undue Influence in Mixed Legal Systems and the Principles of European Contract Law.