Legal Theory Blog



All the theory that fits!


This is Lawrence Solum's legal theory weblog. Legal Theory Blog comments and reports on recent scholarship in jurisprudence, law and philosophy, law and economic theory, and theoretical work in substantive areas, such as constitutional law, cyberlaw, procedure, criminal law, intellectual property, torts, contracts, etc.

Saturday, January 31, 2004
 
Legal Theory Bookworm This week the Legal Theory Bookworm recommends Constitutional Interpretation: Textual Meaning, Original Intent, and Judicial Review by Keith E. Whittington (Princeton). Whittington's book is one of the very best books on originalism in constitutional theory. Here is a blurb:
    Constitutional scholarship has deteriorated into a set of armed camps, with defenders of different theories of judicial review too often talking to their own supporters but not engaging their opponents. This book breaks free of the stalemate and reinvigorates the debate over how the judiciary should interpret the Constitution. Keith Whittington reconsiders the implications of the fundamental legal commitment to faithfully interpret our written Constitution. Making use of arguments drawn from American history, political philosophy, and literary theory, he examines what it means to interpret a written constitution and how the courts should go about that task. He concludes that when interpreting the Constitution, the judiciary should adhere to the discoverable intentions of the Founders. Other originalists have also asserted that their approach is required by the Constitution but have neither defended that claim nor effectively responded to critics of their assumptions or their method. This book sympathetically examines the most sophisticated critiques of originalism based on postmodern, hermeneutic, and literary theory, as well as the most common legal arguments against originalists. Whittington explores these criticisms, their potential threat to originalism, and how originalist theory might be reconstructed to address their concerns. In a nondogmatic and readily understandable way, he explains how originalist methods can be reconciled with an appropriate understanding of legal interpretation and why originalism has much to teach all constitutional theorists. He also shows how originalism helps realize the democratic promise of the Constitution without relying on assumptions of judicial restraint. This book carefully examines both the possibilities and the limitations of constitutional interpretation and judicial review. It shows us not only what the judiciary ought to do, but what the limits of appropriate judicial review are and how judicial review fits into a larger system of constitutional government. With its detailed and wide-ranging explorations in history, philosophy, and law, this book is essential reading for anyone interested in how the Constitution ought to be interpreted and what it means to live under a constitutional government.


 
Download of the Week This week's Download of the Week is Rescuing Justice from Constructivism by G. A. Cohen (Oxford). This sophisticated paper poses a fundamental challenge for constructivists like Tim Scanlon & the late Jack Rawls. Here is a taste:
    On the constructivist view of justice, fundamental principles of justice are the outcome of an idealized legislative procedure, whose task is to elect principles that will regulate our common life. In Rawls’s version of constructivism, the legislators are citizens who are ignorant of how they in particular would fare under various candidate principles. In a Scanlonian version of constructivism about justice, the legislators are motivated to live by principles that no one could reasonably reject (I shall, for the most part, be interested, here, in the Rawlsian version of constructivism, although some of my objections to it also apply against Scanlonian and other versions of it.) But however the different versions of constructivist theories of social justice differ, whether in the nature of the selection procedure that they mandate, or in the principles that are the output of that procedure, they all assign to principles of justice the same role. That role is determined by the fact that constructivism's legislators are asked to elect principles that will regulate their common life: the principles they arrive at are said to qualify as principles of justice because of the special conditions of motivation and information under which principles that are to serve the role of regulating their common life are reached. But, and here I state my disagreement with the constructivist metatheory, in any enterprise whose purpose is to select principles of regulation, attention must be paid, either expressly or in effect, to considerations that do not reflect the content of justice itself: while justice (whatever it may be: the present point holds independently of who is right in disagreements about the content of justice) must of course influence the selection of regulating principles, factual contingencies that determine how justice is to be applied, or that make justice infeasible, and values and principles that call for a compromise with justice, also have a role to play in generating the principles that regulate social life, and legislators, whether flesh-and-blood or hypothetical, would be profoundly mistaken to ignore those further considerations. It follows that any procedure that generates the right set of principles to regulate society fails thereby to identify a set of fundamental principles of justice, by virtue of its very success in the former, distinct, exercise. But, while the relevant non-justice considerations indeed affect the outcome of the constructivist procedure, constructivists cannot acknowledge that their influence on the output of that procedure means that what it produces is not fundamental justice, and is sometimes, indeed, as we shall see in section 5, not justice at all. Given its aspiration to produce fundamental principles of justice, constructivism sets its legislators the wrong task, although the precise character, and the size, of the discrepancy between fundamental justice and the output of a constructivist procedure will, of course, vary across constructivism’s variants. That it sets its idealized legislators the wrong task is my principal - and generative - complaint against constructivism, as a meta-theory of fundamental justice.
Download it while it's hot!


Friday, January 30, 2004
 
Conference Announcement: Workshop on Vagueness
    Università degli Studi di Bologna, Dipartimento di Scienze della Comunicazione, Dipartimento di filosofia
    SECOND WORKSHOP ON VAGUENESS
    9-10 January, 2004, Bologna
    Dipartimento di Scienze della Comunicazione, Via Azzo Gardino 23, Bologna, Sala delle Riunioni, III piano
    Invited speakers: Roy Cook, Patrick Greenough, Sebastiano Moruzzi, Agustin Rayo, Sven Rosenkranz, Crispin Wright
    Program:
    FRIDAY, JANUARY 9th
    10.30 - 12.30 Crispin Wright (St. Andrews University, New York University), "Introduction: the State of Play"
    12.30 - 14.30 Lunch
    14.30 - 16.30 Sven Rosenkranz (Freie Universitaet Berlin), "Wright on Knowledge in Borderline Cases"; Discussant: Manuel Gatto (Università del Piemonte Orientale)
    16.30 - 17 Coffee Break
    17 - 19 Sebastiano Moruzzi (Università del Piemonte Orientale), "Vagueness and Agnosticism"; Discussant: Vittorio Morato (Università di Bologna)
    SATURDAY, JANUARY 10th
    10.30 - 12.30 Agustin Rayo (St. Andrews University), "The Unexplained Supervenience Objection"; Discussant: Luca Morena (Università di Bologna)
    12.30 - 14.30 Lunch
    14.30 - 16.30 Roy Cook (St. Andrews University), "The Symptoms of Vagueness"; Discussant: Andrea Sereni (Università di Bologna)
    16.30 - 17 Coffee Break
    17 - 19 Patrick Greenough (St. Andrews University), "Looks. On the Phenomenal Sorites"; Discussant: Antonio Capuano (Università di Bologna)
    INFO: http://www.dsc.unibo.it/~leonardi/altro/vaghezza.html
    Organizing Committee: Sebastiano Moruzzi (mer0090@iperbole.bologna.it) and Andrea Sereni (phdts@tin.it)


 
Call for Papers: Human Rights, Democracy, and Religion
    Call for Papers "Human Rights, Democracy, and Religion" 21st International Social Philosophy Conference Creighton University in Omaha, Nebraska (USA) July 29-31, 2004 Organized by the North American Society for Social Philosophy Submissions on the conference theme are encouraged, but proposals in all areas of social philosophy are welcome. Potential contributors should submit 300-500 word proposals for individual presentations, panels, workshops, or roundtable discussions by March 15, 2004 (North American contributors), or January 15, 2004 (international contributors) to: Lisa Schwartzman Dept. of Philosophy 517 S. Kedzie Hall Michigan State University East Lansing, MI 48824-1032 email: lhschwar@msu.edu Direct questions about local arrangements to: Kevin Graham Dept. of Philosophy Creighton University 2500 California Plz. Omaha, NE 68178-0301 email: kgraham@creighton.edu


 
Conference Announcement: Rocky Mountain Virtue Ethics Conference
    University of Colorado at Boulder April 3-5, 2004 The three-day Rocky Mountain Virtue Ethics Summit, organized and hosted by the Department of Philosophy of the University of Colorado at Boulder, is an opportunity for scholars to evaluate the possibilities of and problems raised by virtue ethics. The conference will feature three of the most influential contemporary virtue ethicists: Rosalind Hursthouse, Michael Slote, and Christine Swanton. The conference is designed to illuminate the most important topics in virtue ethics for the attendees; the topics discussed will range from the foundations of virtue theory to the epistemology of virtue. In addition to the formal sessions, there will also be opportunities for informal discussions. To foster an intimate, intensive atmosphere, the Summit will host under two dozen presenters and commentators.


 
Complex Egalitarianism Check out Complex Egalitarianism by Erik Olin Wright and Harry Brighouse. Here is a taste:
    [F]ormulating . . . reform strategies and pushing for them within capitalism is essential if the anticapitalist Left is ever to be a credible force within capitalism. For the anticapitalist left to be able to take advantage of even the most favorable conditions, it has to be able to offer well-designed reforms which resonate with the public, which accomplish real improvements in the present, and which show the way forward to the better social structure we ultimately advocate. Public disillusion with the left is deep in Western societies, nowhere more so than the United States. There is no guarantee that when (or if) conditions change in the future the left will be able to take advantage of them; whether we can do so depends on what we have to offer. The left cannot be content with offering revolution and some hand-waving comments about something that has never been tried: it has to be able to point to concrete successes within capitalism and to offer up for scrutiny detailed prescriptions of what it would do as an alternative to capitalism. There is nothing elitist or undemocratic about this – the point is to subject proposals to popular scrutiny so they can be rejected, refined, or embraced.


 
Gross on Indian Citizenship & Identity at Texas At the University of Texas, Ariela Gross, USC, presents Administering Citizenship, Identity and Land in Indian Territory, 1865-1907.


 
Ashiagbor on Economic and Social Rights in the EU Charter at Oxford At Oxford's Faculty of Law, Diamond Ashiagbor presents Economic and social rights in the EU charter (on human rights, social rights and social policy discourse).


 
Saul on Pornography & Speech Acts at Oxford At Oxford's Jowett Society, Jennifer Saul (Sheffield) presents Pornography, Speech Acts, and Context.


 
Stark on Contractarian Approaches to Disability at North Carolina At the University of North Carolina's philosophy department, Cynthia Stark (Utah) presents How To Include the Severely Disabled in a Contractarian Theory of Justice.


 
Anti-Theory in Literature Check out Theory in chaos by David Kirby over at CSMonitor.com:
    [F]or some academics, what the rejection of theory is really about is the joyous rediscovery of literature itself. There is today "a renewed appreciation of the irreducible particularity of an art work, an author, an historical moment, a particularity that theory may illuminate but never fully explain," according to Dennis Todd, professor of British literature at Georgetown University.
The joys of caselaw, anyone?


 
Goodman on Telecosm Spectrum Rights Ellen P. Goodman (Rutgers University - Law School) has posted Spectrum Rights in the Telecosm to Come (San Diego Law Review, Vol. 41, 2004) on SSRN. Here is the abstract:
    How access to radio frequencies should be controlled and what different control structures might mean for the development of wireless communications has been the subject of intense debate. Legal scholars and economists have proposed radical reformation of the current regime of spectrum regulation, and such reform is being considered at both the FCC and in Congress. The next few years will be critical in shaping the wireless world to come. Despite the importance and timeliness of the debate over spectrum rights, the theoretical literature has not advanced beyond first principles. Many have written, in the tradition of Coase, in favor of exclusive property rights in spectrum. More recently, several scholars have countered that spectrum should be managed as a commons in which transmission rights are broadly shared, subject only to compliance with certain technical protocols. What has received little attention is the question of how spectrum disputes should be resolved the day after the revolution in spectrum management, whatever its character. Little consideration has been given to what legal structures and rules will be necessary, and to what extent even radical change in spectrum management will relieve decisionmakers of the public interest balancing the FCC undertakes today in distributing spectrum entitlements. I consider these questions by first developing a framework for understanding different kinds of interference disputes among wireless operators. Then, focusing on the possibility of "fee simple" ownership in spectrum, I apply the insights of Calabresi's and Melamed's Cathedral and follow-on literature to the resolution of these interference disputes. I conclude that a nuisance-like common law, as applied to spectrum, will require its own public interest standard. Like the FCC, decisionmakers will have to balance efficiency and fairness goals in the pursuit of a particular kind of communications environment. I show, moreover, that the development of liability standards and nuisance remedies will be difficult and costly. The costs and indeterminacy of dispute resolution could be reduced, however, with the development of a hybrid approach that combines the strengths of regulation and the common law. Such an approach might involve defining categorical nuisances in spectrum and establishing presumptions as to the appropriate entitlements in different kinds of interference disputes. The commons alternative to property rights will not eliminate all this complexity and uncertainty. In the wireless commons, as in the wireless subdivision, the resolution of interference disputes will require choices among various efficiency and fairness goals. Here too, judicious use of the regulatory function will be necessary to implement a mature legal structure for the telecosm to come. Whether a revolution in spectrum management is at hand or still far off, the administration of spectrum rights is changing. These changes should be undertaken with an eye to the private and common property rights of the future, and the efficient and fair resolution of spectrum disputes.


 
Reidenberg on States and Internet Enforcement Joel Reidenberg (Fordham University School of Law) has posted States and Internet Enforcement (University of Ottawa Law & Technology Journal, Vol. 1, 2004) on SSRN. Here is the abstract:
    This essay addresses the enforcement of decisions through Internet instruments. Traditionally, a state's enforcement power was bounded by territorial limits. However, for the online environment, the lack of local assets and the assistance of foreign courts no longer constrain state enforcement powers. States can enforce their decisions and policies through Internet instruments. Online mechanisms are available and can be developed for such pursuits. The starting point is a brief justification of Internet enforcement as the obligation of democratic states. Next, the essay describes the movement to re-engineer the Internet infrastructure by public and private actions and argues that the re-engineering facilitates state enforcement of legal and policy decisions. The essay maintains that states will increasingly try to use network intermediaries such as payment systems and Internet service providers as enforcement instruments. Finally and most importantly, the essay focuses on ways that states may harness the power of technological instruments such as worms, filters and packet interceptors to enforce decisions and sanction malfeasance.


 
Mossoff on Epstein on "Is Copyright Property?" Adam Mossoff (Michigan State University-DCL College of Law) has posted Is Copyright Property? A Comment on Richard Epstein's Liberty vs. Property (from Adam Mossoff, PROMOTING MARKETS IN CREATIVITY: COPYRIGHT IN THE INTERNET AGE, James V. DeLong, ed., 2004) on SSRN. Here is the abstract:
    This short essay is derived from commentary on Richard Epstein's article, Liberty vs. Property, which were delivered at the 2003 conference on Promoting Markets in Creativity: Copyright in the Internet Age, co-sponsored by The Progress & Freedom Foundation and the George Mason University's Tech Center. The essay suggests that the opponents of Epstein's position that copyright entitlements are derived from similar policy concerns as tangible property rights would reject his thesis at the conceptual level, maintaining that copyright is not property, especially in the context of digital media. By assuming their rallying cry that "copyright is policy, not property," this essay reveals that opponents of digital copyright are caught in a dilemma of their own making. In one sense, their claim that "copyright is policy, not property," is an uninformative truism about all legal entitlements, and in another sense, represents a fundamental misconception of the history and concept of copyright. The concept and historical development of copyright are more substantial than its representation today as merely a monopoly privilege issued to authors according to the government's utility calculus. The essay concludes with the observation that those who wish to see copyright eliminated or largely restricted in digital media are in fact driven by an impoverished concept of property that has dominated twentieth-century discourse on property generally. As a doctrine in transition - we are still in the midst of the digital revolution-copyright may be criticized for various fits and starts in its application to new areas, but the transition itself does not change copyright's status as a property entitlement.


 
Speta on FCC Authority Over the Internet James B. Speta (Northwestern University - School of Law) has posted FCC Authority to Regulate the Internet: Creating It and Limiting It (Loyola University Chicago Law Journal, Vol. 35, No. 15, 2004) on SSRN. Here is the abstract:
    This short paper discusses the FCC's authority, under its so-called ancillary jurisdiction (under Title I of the Communications Act), to address competition problems that may arise in Internet markets. It is argued that the FCC likely does not have jurisdiction to address most Internet regulatory issues, because whatever expansive readings such ancillary jurisdiction has received in the past are no longer tenable. The paper proposes, instead, a new, limited statutory interconnection rule, which the FCC could enforce in limited ways in Internet markets. The paper also argues that, even if the FCC does have authority to develop its own common law of Internet regulation, a limited grant of statutory authority is a superior regulatory construct. The paper also argues that FCC administration of this proposed statute is superior to remitting all Internet interconnection problems to the common law processes of antitrust. Professor Philip Weiser's contribution to the same journal issue (also available on SSRN) takes a different, more expansive view of the FCC's ancillary jurisdiction.


Thursday, January 29, 2004
 
Fisher on Alternative Compensation for the Entertainment Industry The Thursday is Workshop Day post below already mentions that Terry Fisher (Harvard Law School) is delivering An Alternative Compensation System for the Entertainment Industry at Stanford's Olin series today. I've now had a chance to look at Fisher's paper on this very timely and important topic. Here is a taste:
    [T]his chapter proposes that we replace major portions of the copyright and encryption-reinforcement models . . . a governmentally administered reward system. In brief, here's how such a system would work. A creator who wished to collect revenue when her song or film were heard or watched would register it with the Copyright Office. With registration would come a unique file name, which would be used to track transmissions of digital copies of the work. The government would raise, through taxes, sufficient money to compensate registrants for making their works available to the public. Using techniques pioneered by American and European performing rights organizations and television rating services, a government agency would estimate the frequency with which each song and film was heard or watched by consumers. Each registrant would then periodically be paid by the agency a share of the tax revenues proportional to the relative popularity of his or her creation. Once this system were in place, we would modify copyright law so as to eliminate most of the current prohibitions on unauthorized reproduction, distribution, adaptation, and performance of audio and video recordings. Music and films would thus be readily available, legally, for free.
If you are interested in the future of copyright, you will want to read this Chapter from Fisher's forthcoming book.
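To make the arithmetic of Fisher's proposal concrete, here is a minimal sketch (in Python) of the pro-rata allocation he describes: a fixed, tax-funded pool divided among registered works in proportion to their estimated popularity. The function name, the pool size, and the play counts are illustrative assumptions of mine, not figures from Fisher's chapter.

    # Illustrative sketch only -- the pool size, play counts, and function name are
    # hypothetical, not taken from Fisher's chapter.
    def allocate_rewards(tax_pool: float, usage_counts: dict[str, int]) -> dict[str, float]:
        """Split a fixed tax-funded pool among registered works by relative popularity."""
        total_uses = sum(usage_counts.values())
        if total_uses == 0:
            return {work: 0.0 for work in usage_counts}
        return {work: tax_pool * count / total_uses for work, count in usage_counts.items()}

    # Example: a $100,000 pool and estimated consumption counts for three registered works.
    estimated_plays = {"song-A": 600_000, "film-B": 300_000, "song-C": 100_000}
    print(allocate_rewards(100_000.0, estimated_plays))
    # {'song-A': 60000.0, 'film-B': 30000.0, 'song-C': 10000.0}

The hard questions in Fisher's scheme are upstream of this division: how the agency measures consumption and how large the tax-funded pool should be.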


 
Rational Agency Without Noumenal Selves As I posted below, Geoffrey Sayre-McCord (Professor and Chair, Department of Philosophy at University of North Carolina at Chapel Hill) is presenting Rational Agency and Normative Concepts at Penn's law and philosophy series. I've had a chance to look at this marvelous paper. Here is a very brief snippet from the introduction:
    As Kant emphasized, famously, there’s a difference between merely acting in accord with duty and acting from duty, where the latter requires a distinctive capacity. More generally, there is a difference between conforming to norms (intentionally or not, from ulterior motives or not) and doing what one does because one judges it to be morally good or right. The difference is, I think, central to morality and my main interest here is to get a handle on what has to be true of people for them to do what they do because they think it right or good. The abilities required are, I think, a special case of the abilities that are required to be what Kant identified as a rational agent. I focus on this more general capacity (the having of which is a necessary condition of moral agency) in the rest of the paper. As will become clear, I think Kant was right that the rational agency is important. I hope, though, to spell out what rational agency requires in a way that steers clear of Kant’s own appeal to hypothetical and categorical imperatives as well as his eventual reliance on noumenal selves and kingdom of ends. What follows is an attempt to underwrite Kantian convictions (concerning rational agency) with more or less Humean resources.
Highly recommended.


 
Original Meaning and Martin Luther King, Jr. See this post by John Rosenberg on Discriminations.


 
Thursday is Workshop Day Here is the roundup of workshops from hither and yon:
    At Penn's law and philosophy series, Geoffrey Sayre-McCord (Professor and Chair, Department of Philosophy at University of North Carolina at Chapel Hill) is presenting Rational Agency and Normative Concepts with comments by Hans Oberdiek.
    Also at Penn, Martha Nussbaum (Ernst Freund Distinguished Service Professor of Law and Economics, University of Chicago) is giving the JUDITH R. BERKOWITZ ENDOWED LECTURESHIP IN WOMEN'S STUDIES. Her title is Gender Justice, Human Rights, and Human Capabilities.
    At Boston University law, Susan Koniak (BU) presents How Like a Winter? The Plight of Absent Class Members Denied Adequate Representation.
    At Florida State, Jennifer Mnookin, University of Virginia School of Law, presents Atomism, Holism and the Law of Evidence.
    At Georgetown's Colloquium on Intellectual Property & Technology Law, Rosemary J. Coombe, York University, presents The Globalization of Intellectual Property: Informational Capital and Its Cultures.
    At Stanford's Olin series, Terry Fisher (Harvard Law School) presents An Alternative Compensation System for the Entertainment Industry.
    At the University of Michigan's law and economics series, Omri Ben-Shahar, Michigan, presents "Agreeing to Disagree": Filling Gaps in Deliberately Incomplete. The title on the website is "incomplete," but not deliberately so.
    At George Mason, Craig Lerner, GMU School of Law, presents “Accommodations” for the Learning Disabled: A Level Playing Field or Affirmative Action for Elites?
    At Oxford's Public International Law Discussion Group, Robert Volterra presents The Commission on the Limits of the Continental Shelf: Technical Science, Star Chamber, or Quasi-Judicial Tribunal?
    At the Australian National University's RSSS, Norva Lo (La Trobe University) presents Humpty Dumpty Analysis of 'Valuing', Empty Analysis of 'Valuable'.
    At UCLA's legal history series, Sally Gordon, University of Pennsylvania, presents Parochial School Funding: Catholics, Protestants, and Legal Activism at Mid-Century.


 
Will the Tenure Devolution Hit the Legal Academy? While the legal academy sleeps, tenure is rapidly disappearing. Consider the following from The Morphing of the American Academic Profession by Martin Finkelstein:
    Quite beyond the surge in part-time faculty appointments over the past quarter century, the majority (i.e., over half) of all new full-time faculty hires in the past decade have been to non-tenure-eligible, or fixed-term contract positions (Finkelstein and Schuster 2001). Put another way, in the year 2001, only about one-quarter of new faculty appointments were to full-time tenure track positions (i.e., half were part-time, and more than half of the remaining full-time positions were “off” the tenure track). This is nothing short of what Jack Schuster and I have labeled elsewhere a new academic “revolution”—albeit a largely silent one.
Of course, the legal academy already has non-tenure track positions for legal writing instructors, clinicians, and adjuncts. More interesting is the recent development of the pre-tenure track (VAP, or Visiting Assistant Professorship) that is increasingly becoming an entry point for tenure-track jobs in the legal academy. One can imagine the evolution of a system in which entry-level candidates must essentially fulfill the old-fashioned requirements for tenure before getting onto the tenure track.


 
Nelkin on Moral Luck Dana Nelkin (UC San Diego & affiliated with USD's Institute on Law and Philosophy) has her Stanford Encyclopedia of Philosophy entry on Moral Luck posted. Here is a taste:
    Moral luck occurs when an agent can be correctly treated as an object of moral judgment despite the fact that a significant aspect of what she is assessed for depends on factors beyond her control. Bernard Williams writes, “when I first introduced the expression moral luck, I expected to suggest an oxymoron” (Williams 1993, 251). Indeed, immunity from luck has been thought by many to be part of the very essence of morality. And yet, as Williams (1981) and Thomas Nagel (1979) showed in their now classic pair of articles, it appears that our everyday judgments and practices commit us to the existence of moral luck. The problem of moral luck arises because we seem to be committed to the general principle that we are morally assessable only to the extent that what we are assessed for depends on factors under our control (call this the “Control Principle”). At the same time, when it comes to countless particular cases, we morally assess agents for things that depend on factors that are not in their control. And making the situation still more problematic is the fact that a very natural line of reasoning suggests that it is impossible to morally assess anyone for anything if we adhere to the Control Principle.
Also up at the superb Stanford site Moral Cognitivism vs. Non-Cognitivism by Mark van Roojen.


 
Lipton on Information Policy Jacqueline D. Lipton (Case Western Reserve University School of Law) has posted A Framework for Information Law and Policy (Oregon Law Review, Vol. 82, No. 3, 2004) on SSRN. Here is the abstract:
    The information age calls for new legal and policy approaches to the ways in which we deal with information. Previous moves in this area have tended to center around developing a 'cyberlaw' or 'Internet law'. This has involved largely piecemeal attempts to gather together miscellaneous legal issues that happen to relate to digital communications technologies. No clear discernible normative framework has yet emerged. Rather than focusing on these new technologies, any new legal and policy framework for the information age should be organized around the idea of 'information' per se, with its focus on society's interactions with various kinds of information. Such a development would require the identification and development of normative principles that will shape the development of relevant laws and policies. This article suggests that an appropriate set of normative principles might be derived from identifying a set of 'control' and 'access' rights in relation to information. These rights could be utilized as 'organizing tools' for the development of a legal and policy framework that would help inform the development of a harmonized and cohesive set of 'information law and policy principles' for the global information age. The following discussion demonstrates how this might be achieved in theory and practice, and presents case studies to illustrate how such a law and policy framework might prove useful in informing future debate in the 'information law' area.


 
Ginsburg & McAdams on International Dispute Resolution Tom Ginsburg and Richard H. McAdams (University of Illinois College of Law and Yale Law School (Visiting)) have posted Adjudicating in Anarchy: An Expressive Theory of International Dispute Resolution (William & Mary Law Review, Fothcoming) on SSRN. Here is the abstract:
    Frequent compliance with the adjudicative decisions of international institutions, such as the International Court of Justice, is puzzling because these institutions do not have the power domestic courts possess to impose sanctions. This paper uses game theory to explain the power of international adjudication via a set of expressive theories, showing how law can be effective without sanctions. When two parties disagree about conventions that arise in recurrent situations involving coordination (such as a convention of deferring to territorial claims of first possessors), the pronouncements of third-party legal decision-makers - adjudicators - can influence their behavior in two ways. First, adjudicative expression may construct focal points that clarify ambiguities in the convention. Second, adjudicative expression may provide signals that cause parties to update their beliefs about the facts that determine how the convention applies. Even without the power of sanctions or legitimacy, an adjudicator's focal points and signals influence the parties' behavior. After explaining the expressive power of adjudication, the paper applies the analysis to a range of third party efforts to resolve international disputes, including the first-ever review of the entire docket of the International Court of Justice. We find strong empirical support for the theory that adjudication works by clarifying ambiguous conventions or facts via cheap talk or signaling. We claim that the theory has broad implications for understanding the power of adjudication generally.


Wednesday, January 28, 2004
 
KaZaA Strikes Back Check out the CNET story here:
    A U.S. federal court has cleared the way for Kazaa file-sharing software owner Sharman Networks to sue the entertainment industry for copyright infringement, Sharman said on Friday. Sharman, targeted by studios and record companies because its software is used to trade music and video files, has sought to turn the tables on the industry, accusing it of misusing Kazaa software to invade users' privacy and send corrupt files and threatening messages.
This should be fun!


 
Hasen on Slate Election law superblogger Rick Hasen has a new piece on Slate. Here is a taste:
    Whoever ultimately emerges as the presumptive Democratic nominee from the front-loaded primary season can expect a pummeling from President Bush's re-election committee. That committee will have between $130 million and $200 million to spend on attack ads during Bush's own "primary season" (in which he is running unopposed by serious candidates) lasting up to the Republican convention—a convention slated later than usual to maximize the pummeling time. Bush's committee is borrowing from Bill Clinton's 1996 playbook when the Democrats used that period to run ads beating up on presumptive Republican nominee Bob Dole, though with only a fraction of the money raised by Bush's committee. Whether supporters of the Democratic nominee will have the resources to fight back this spring and summer may depend a great deal on arcane administrative decisions to be made by the Federal Election Commission. At issue is whether pro-Democratic non-party organizations can raise large "soft money" donations to spend supporting the Democrats and pummeling Bush back. The fight to limit donations to these groups has created an odd alliance between campaign-finance-reform organizations and the Republican Party, and the coalition just may win before the FEC.
I always learn from Hasen on election-law issues. Surf on over!


 
Confirmation Wars Department: More on the Memos The Hill has a detailed report on the inner workings of the Senate Judiciary Committee in relation to Republican staffers' access to Democratic memos on judicial selection. The story focuses on a shift in control of the committee from the leadership to Senator Hatch's personal staff. Here is a taste:
    Hatch’s acquiescence to the probe seems to have shifted control of the fight over judicial nominees from the leadership, whom conservatives had convinced to take an aggressive approach, to that of his personal office. The conservatives’ ire has focused on Patricia Knight, the chief of staff in Hatch’s personal office, who conservative staffers say is now calling the shots at the Judiciary Committee.


 
Hasen's Guide to Bush v. Gore Rick Hasen of Election Law Blog has posted A Critical Guide to Bush v. Gore Scholarship on SSRN. Here is the abstract:
    This article evaluates the emerging legal and political science scholarship created in the wake of the United States Supreme Court's decision in Bush v. Gore, the case that ended the 2000 Florida election controversy between supporters of George W. Bush and Al Gore. It surveys answers that scholars have given to four central questions: (1) Were the Supreme Court's majority or concurring opinions legally sound? (2) Was the Supreme Court's result justified, even if the legal reasoning contained in the opinions was unsound? (3) What effects, if any, will the case and the social science research it has spurred have on the development of voting rights law? (4) What does the Court's resolution of Bush v. Gore tell us about the Supreme Court as an institution?


 
Kamm on Just War Theory and Terrorism at UCL At University College's Colloquium in Legal and Social Philosophy, Frances Kamm presents Failures of Just War Theory and Terrorism. Here is a bit from the introduction:
    This article has three parts. In the first part, I shall try to provide an overview of issues related to both terror and nonterror-killing inside and outside of standard war. It provides a framework within which we can locate some issues that will be explored in more detail in subsequent parts. The second part deals with the Doctrine of Double Effect (DDE) in standard just war theory. I criticize its prohibition on intending harm and consider cases where it is permissible, for example, to terror bomb combatants and noncombatants. Through criticism of the DDE as a way of justifying unintended noncombatant deaths, I am led in the third part to focus on (A) the relative degrees of inviolability of various types of people in intergroup conflict and (B) a better justification for the permissibility of causing some types of foreseen noncombatant deaths.
This is the famous series, hosted by Ronald Dworkin and Stephen Guest, at University College. Kamm is the first speaker of 2004!


 
Parry on Torture at Villanova At Villanova law today, John Parry (University of Pittsburgh School of Law) presents Chavez v. Martinez and the Jurisprudence of Torture.


 
Levenson on Non-Evidence at Loyola Marymount At Loyola Marymount, Laurie Levenson (LMU) presents Why Looks Matter: The Impact of Non-Evidence on the Courtroom.


 
Sussman on Disgrace at Yale Today at Yale's philosophy series, David Sussman presents Kant and the Politics of Disgrace.


 
Tehranian on Natural Law and Fair Use John Tehranian (University of Utah) has posted Et Tu, Fair Use? The Triumph of Natural Law Copyright on SSRN. Here is the abstract:
    Since its advent in 1841, the fair use doctrine has been hailed as a powerful check on the limited monopoly granted by copyright. Fair use, we are told, protects public access to the building blocks of creation and advances research and criticism. This Article challenges the conventional wisdom about fair use. Far from protecting the public domain, the fair use doctrine has played a central role in the triumph of a natural law vision of copyright that privileges the inherent property interests of authors in the fruits of their labor over the utilitarian goal of progress in the arts. Thus, the fair use doctrine has actually enabled the expansion of the copyright monopoly well beyond its original bounds and has undermined the goals of the copyright system as envisioned by the Framers. Specifically, the Article first analyzes the anti-monopolistic impetus for federal copyright protection and reflects on the original understanding of copyright as epitomized by a series of early cases on the rights of translation and abridgement. The Article then examines the impact of the fair use doctrine on the copyright monopoly and progress in the arts. All told, the Article calls for a serious reassessment of the role of fair use in the infringement calculus, especially in an age where networked computers and malleable digital content have enabled new forms of artistic and post-modern experimentation.


 
Liebowitz and Margolis on the Economists' Brief in Eldred Stan J. Liebowitz and Stephen E. Margolis (University of Texas at Dallas - School of Management and North Carolina State University) have posted Seventeen Famous Economists Weigh in on Copyright: The Role of Theory, Empirics, and Network Effects on SSRN. Here is the abstract:
    The case of Eldred v. Ashcroft, which sought to have the Copyright Term Extension Act (CTEA, aka Sonny Bono Copyright Act) found unconstitutional, was recently argued before the Supreme Court. A remarkable group of seventeen economists including five Nobel laureates, representing a wide spectrum of opinion in economics, submitted an amicus curiae brief in support of Eldred. The economists condemned CTEA on the grounds that the revenues earned during the extension are so heavily discounted that they have almost no value, while the extended protection of aged works creates immediate monopoly deadweight losses and increases the costs of creating new derivative works. More important, we believe, than the particulars of this case, is the articulation of the economic issues involved in copyright extension. The articulation of those issues is not well framed in the brief. Nor is the case as one sided as the Eldred economists have claimed. First, private ownership of creative works may internalize potentially important externalities with respect to the use of existing works and the creation of derivative works. Second, the Eldred economists neglect the elasticity of the supply of creative works in their analysis, focusing instead solely on the benefits received by authors, leading to potential underestimation of additional creativity that confers benefits immediately. Third, the Eldred economists neglect certain features of copyright law, such as fair use, the distinction between idea and expression, and the parody exemption, which mitigate the costs of copyright. Finally, we present data that counters a common claim that copyright extension so far out in the future can have little effect on creativity. The small fraction of books that have the majority of commercial value when they are new appear to remain valuable for periods of time that are consistent with the expanded term of copyright under CTEA.
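The economists' discounting point is easy to see with a back-of-the-envelope calculation. The 7% discount rate and the 75-to-95-year extension window below are illustrative assumptions on my part, not figures taken from the amicus brief or from Liebowitz and Margolis.

    # Back-of-the-envelope illustration -- the discount rate and the extension window
    # are assumed for illustration, not taken from the amicus brief or the paper.
    def present_value(amount: float, years: int, rate: float) -> float:
        """Discount a payment received 'years' from now back to today."""
        return amount / (1 + rate) ** years

    rate = 0.07  # assumed annual discount rate
    # $1 of royalties in each year of a CTEA-style extension (roughly years 75-94 after creation):
    pv_of_extension = sum(present_value(1.0, year, rate) for year in range(75, 95))
    print(f"Present value of $1 per year over years 75-94: ${pv_of_extension:.2f}")
    # About $0.07 today for $20 of nominal future revenue -- the sense in which the
    # extension adds almost nothing to the ex ante incentive to create.

The paper's pushback, as the abstract indicates, is that this framing neglects supply elasticity and the externalities that private ownership of aged works may internalize.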


 
Two by Weisbach David A. Weisbach (University of Chicago Law School) has posted two papers on SSRN:
    The (Non) Taxation of Risk:
      A long line of literature argues that income taxes do not tax the return to risk bearing. The conclusion, if correct, has important implications for the choice between an income tax and a consumption tax and for the design of income taxes. The literature, however, on its face seems unrealistic because it models only very simplified tax systems, assumes perfect rationality by individuals, and requires the government to take complex positions in securities markets to hold in equilibrium. This paper examines the extent to which these problems affect the conclusions we draw from the literature. It argues that the criticisms are overstated. Moreover, the criticisms do not detract from the central value of the models, which is to understand ideal income taxes, which are the purported goal of most who support an income tax.
    Corporate Tax Avoidance:
      This essay analyzes the problem of corporate tax avoidance. It shows how the marginal efficiency cost of funds and optimal elasticity of taxable income measures can be used to analyze the problem and determine the proper scope of allowable tax planning. It then analyzes the optimal form of tax laws addressing shelters, such as whether the law should use more detailed rules or broad standards.


 
Iontcheva on the International Criminal Court Jenia Iontcheva (University of Chicago - Law School) has posted Nationalizing International Criminal Law: The International Criminal Court as a Roving Mixed Court on SSRN. Here is the abstract:
    International law scholars often assume that the best way to enforce human rights is by establishing strong international institutions that develop the law progressively and enforce it independently. Political realists counter that such institutions are only as useful as powerful states permit them to be, and discourage expansive visions of their mandate. Partisans of the recently created International Criminal Court (ICC) must come to terms with the realist challenge. They must work to adapt the institution accordingly, without abandoning hope for the project altogether. Although the ICC will be constrained by the state support it commands, it can make a difference in the enforcement of human rights law by encouraging and assisting national authorities in upholding and enforcing international law.


Tuesday, January 27, 2004
 
Bainbridge on Corporate Responsibility for Past Wrongs Stephen Bainbridge has a provocative & sensible post on this interesting topic. Here's a taste:
    So who do we punish when we force the corporation to pay reparations? Since the payment comes out of the corporation's treasury, it reduces the value of the residual claim on the corporation's assets and earnings. In other words, the shareholders pay. Not the directors and officers who actually committed the alleged wrongdoing (who in most of these cases are long dead anyway), but modern shareholders who did nothing wrong. Retributive justice is legitimate only where the actor to be punished has committed acts to which moral blameworthiness can be assigned. Even if you assume the corporation is still benefiting from alleged wrongdoing that happened decades or even centuries ago, which seems implausible, the modern shareholders are mere holders in due course. It is therefore difficult to see a moral basis for punishing them. They have done nothing for which they are blameworthy.


 
Cohen on Justice and Constructivism at Oxford At Oxford's Jurisprudence Discussion Group, G. A. Cohen (Oxford) presents Rescuing Justice from Constructivism. Here is a taste:
    On the constructivist view of justice, fundamental principles of justice are the outcome of an idealized legislative procedure, whose task is to elect principles that will regulate our common life. In Rawls’s version of constructivism, the legislators are citizens who are ignorant of how they in particular would fare under various candidate principles. In a Scanlonian version of constructivism about justice, the legislators are motivated to live by principles that no one could reasonably reject (I shall, for the most part, be interested, here, in the Rawlsian version of constructivism, although some of my objections to it also apply against Scanlonian and other versions of it.) But however the different versions of constructivist theories of social justice differ, whether in the nature of the selection procedure that they mandate, or in the principles that are the output of that procedure, they all assign to principles of justice the same role. That role is determined by the fact that constructivism's legislators are asked to elect principles that will regulate their common life: the principles they arrive at are said to qualify as principles of justice because of the special conditions of motivation and information under which principles that are to serve the role of regulating their common life are reached. But, and here I state my disagreement with the constructivist metatheory, in any enterprise whose purpose is to select principles of regulation, attention must be paid, either expressly or in effect, to considerations that do not reflect the content of justice itself: while justice (whatever it may be: the present point holds independently of who is right in disagreements about the content of justice) must of course influence the selection of regulating principles, factual contingencies that determine how justice is to be applied, or that make justice infeasible, and values and principles that call for a compromise with justice, also have a role to play in generating the principles that regulate social life, and legislators, whether flesh-and-blood or hypothetical, would be profoundly mistaken to ignore those further considerations. It follows that any procedure that generates the right set of principles to regulate society fails thereby to identify a set of fundamental principles of justice, by virtue of its very success in the former, distinct, exercise. But, while the relevant non-justice considerations indeed affect the outcome of the constructivist procedure, constructivists cannot acknowledge that their influence on the output of that procedure means that what it produces is not fundamental justice, and is sometimes, indeed, as we shall see in section 5, not justice at all. Given its aspiration to produce fundamental principles of justice, constructivism sets its legislators the wrong task, although the precise character, and the size, of the discrepancy between fundamental justice and the output of a constructivist procedure will, of course, vary across constructivism’s variants. That it sets its idealized legislators the wrong task is my principal - and generative - complaint against constructivism, as a meta-theory of fundamental justice.
Update: I've now had a chance to look at this paper (or book excerpt). Cohen has given other parts of this project in other fora over the last year or so. As always, Cohen's work is brilliant and interesting. The fundamental argument--that constructivism does not capture the internal requirements of justice, because constructivism takes into account external constraints on justice--certainly gets at something. But what? My first read of this section of the paper left me with the impression that this particular criticism may be deflected by moves that merely clarify the aim of the constructivist project without modifying the substantive conclusions that constructivists reach. Cohen makes the following comment in a footnote:
    The denizens of Rawls’s original position do not, of course, expressly distinguish between considerations of justice and other considerations. They simply choose whatever principle which, given their particular combination of knowledge and ignorance, they see (not as serving justice but) as serving their interests. But principles of regulation must reflect both sorts of considerations. Accordingly, if the denizens of the original position select the right principles of regulation, then the principles they select are not fundamental principles of justice, but, at best, applied ones.
    (Note that, for all that I am here purporting to show, the original position might be the right procedure for generating principles of regulation. But I do not, in fact, believe that, for uneccentric reasons that have nothing to do with the case being mounted here.)
Expressed in this way, the comment seems to be based on an odd construal of the role that interests play in constructivist reasoning. It is as if Cohen believes that the original position--to take Rawls's form of constructivism--is a representation of self-interest, because the representative parties in the original position are concerned with the shares of primary goods of those whom they represent. I'm sure I've got this wrong, but it almost seems as if Cohen is arguing that theories of justice that take interests into account are thereby partially theories of interest and not theories of justice. Perhaps, but if interests are of concern to justice, then this point seems toothless. The question is whether interests play the right role in the theory--not whether they play a role at all. This really is a must download for anyone interested in contemporary political philosophy. My very highest recommendation!


 
Lichtman on Irreparable Harms and Benefits at Chicago At the University of Chicago, The Coase Lecture is presented by Douglas Lichtman, Professor of Law, University of Chicago Law School, who will deliver Irreparable Harms and Irreparable Benefits.
Update: Amanda Butler has a report on the event here.


 
Szigeti on Moral Sentiments & Dilemmas at Oxford Today at Oxford's Ockham Society, Andreas Szigeti presents Moral Sentiments and Moral Dilemmas.


 
Fennell on Contracting Communities I was especially interested in this paper, which approaches its question from a fresh perspective. Lee Anne Fennell (University of Texas School of Law) has posted Contracting Communities (University of Illinois Law Review, 2004) on SSRN. Here is the abstract:
    Private residential developments governed by homeowners associations have rapidly proliferated in recent decades. The servitudes that form the backbone of these private developments are usually viewed as autonomy- and value-enhancing private contractual arrangements that are presumptively valid. The appealing contractual justification for private land use regimes seems to have shut down many of the usual paths of inquiry into the ability of the resulting arrangements to deliver on consumer preferences. In this article, I focus on several factors that can drive a wedge between homeowner preferences and the private land use regimes that the market provides. The analysis proceeds in six parts. I begin by sketching the conceptual underpinnings of private developments, after outlining some key features of the legal landscape in which they are presently flourishing. Part II examines the problems that arise from the fact that servitudes are typically uniform – and uniformly enforced – across an entire community. Part III examines two dynamics that might push servitude regimes towards a stricter convergence point than many individuals might desire: the potential for adverse selection into lenient regimes, and the path dependence of community formation. Part IV considers some additional obstacles to the realization of consumer preferences in servitude regimes, including gaps in consumer understanding and difficulties that consumers may have in effectively sending market signals to developers. Part V considers the implications of a contract-based and association-administered regime for the development and deployment of norms and social capital within a community. Part VI presents some concluding observations that suggest how we might begin to address these difficulties.
Highly recommended.


 
Two by Goldman Eric Goldman (Marquette University - Law School) has posted two papers on SSRN:
    Where's the Beef? Dissecting Spam's Purported Harms (John Marshall Journal of Computer & Information Law, Forthcoming):
      Virtually everyone seems to agree that spam causes tremendous harm, but there is surprisingly little consensus on exactly what those harms are. This Essay examines various harms that spam purportedly causes to assess if the harm is real and if spam is treated dichotomously compared to other media communications. Based on this analysis, the Essay concludes that many harms purportedly caused by spam are not appropriate policy justifications for regulation.
    Warez Trading and Criminal Copyright Infringement:
      Warez traders have been blamed as a significant cause of copyright piracy, which has led to several dozen convictions of warez traders in the past two years. The article analyzes how criminal copyright infringement and other laws apply to warez trading. The article also describes the prosecutions of warez trading, including a comprehensive chart of all warez trading convictions. The article concludes with a brief policy discussion about the problems created by Congress' effort to criminalize warez trading.


 
Conference Announcement: Religiously Affiliated Law Schools
    A Conference of religiously affiliated law schools March 25–27, 2004 University of Notre Dame Law School
    Friday, March 25, 2004 9:00?10:15 a.m. Session 1
      curriculum: religion in the public law courses Religion in the Teaching of Legal History--Howard Bromberg, Ave Maria Law School The Relevance of Religion in Teaching Criminal Law--Sam Levine, Pepperdine Law School Religion? in the Constitutional Law Course--Richard Myers, Ave Maria Law School Notes Toward a Catholic Critique of American Establishment Clause Jurisprudence--John Stinneford, University of Dayton School of Law
    10:30 a.m. Session 2
      curriculum: religion in the private law courses Religious Belief and Private Law Ordering--Matt Harrington, George Washington University School of Law The Intersection of Law and Theology in a Products Liability Course--Amy Uelmen, Fordham Law School Law and Community in the Law School Classroom: The Case of Torts--Robert Cochran, Pepperdine Law School Intersections of Law and Religion in a First-Year Property Course--David Thomas, BYU Law School
    1:30 p.m. Session 3
      religion across the curriculum Faith and Formation--Tom Mengler, St. Thomas School of Law The Jesuit Legal Tradition and the Curriculum--Dan Morrissey, Gonzaga Law School Teaching Ethics in a Religiously Affiliated Law School--Rev. John J. Coughlin, O.F.M., Notre Dame Law School
    3:00 p.m. Session 4
      assessment strategies
        Assessment of the Impact of Religious Faith on the Well-Being of Law Students--Jerry Organ, University of St. Thomas School of Law
        A Longitudinal and Holistic Design for Assessing the Law Student Experience--Mark Guntym, Institutional Research, University of Notre Dame
        Listening to Learners: The Law School Survey of Student Engagement--Patrick O'Day, Indiana University Center for Postsecondary Research
    Saturday, March 27, 10:15 a.m. Session 5
      integrating faculty who do not share the institution's religious tradition into advancing the tradition
        Pilgrimage or Exodus: Responding to Faculty Faith Diversity at Religious Law Schools--Marie A. Failinger, Hamline University School of Law
        Most Faculty Do Not Support the Mission: The Dilemma of (Most) Catholic Law Schools, and How to Deal With It--Mark Sargent, Villanova Law School
        Fostering the Conversation--Bill Treanor, Fordham Law School
    10:30 a.m. Session 6
      "hard scholarship" on relevant issues of interest to the legal academy, but from particular "faith-based" perspectives
        "[T]hrough a glass, darkly...": Christianity, Law and Capital Execution in Twenty-First-Century America--Anthony Baker, Campbell University School of Law
        Catholic Social Teaching on Taxation, Canon Law, and Beyond--Matthew J. Barrett, Notre Dame Law School
        Natural Law in the Development of International Human Rights Law--Kathryn Lee Boyd, Pepperdine Law School
        Render Unto Caesar: How the Catholic Church Should Deal with Civil Legal Authorities in the Clergy Abuse Cases--James V. Feinerman, Georgetown University Law Center


 
Guzman on the Design of International Agreements Andrew T. Guzman (University of California, Berkeley - School of Law (Boalt Hall)) has posted The Design of International Agreements on SSRN. Here is the abstract:
    States entering into international agreements have at their disposal several tools to enhance the credibility of their commitments, including the ability to make the agreement a formal treaty rather than soft law, provide for mandatory dispute resolution procedures, and establish monitoring mechanisms. Each of these strategies - referred to as "design elements" - increases the costs associated with the violation of an agreement and, therefore, the probability of compliance. Yet even a passing familiarity with international agreements makes it clear that states routinely fail to include these design elements in their agreements. This Article explains why rational states sometimes prefer to design their agreements in such a way as to make them less credible and, therefore, more easily violated. In contrast to domestic law, where contractual violations are sanctioned through zero-sum payments from the breaching party to the breached-against party, sanctions for violations of international agreements are not zero-sum. To the extent sanctions exist, they almost always represent a net loss to the parties. For example, a reputational loss felt by the violating party yields little or no offsetting benefit to its counter-party. When entering into an agreement, then, the parties take into account the possibility of a violation and recognize that if it takes place, the net loss to the parties will be larger if credibility enhancing design measures are in place. In other words, the design elements offer a benefit in the form of greater compliance, but do so by increasing the cost of a violation and the net cost to the parties. When deciding which design elements, if any, to include, the parties must balance the benefits of increased compliance against the costs triggered in the event of a violation.


Monday, January 26, 2004
 
Weekend Wrap Up On Saturday, the Download of the Week was Inheriting Responsibilities by David Miller. Also on Saturday, The Legal Theory Bookworm recommended Michael Moore's Placing Blame, a General Theory of the Criminal Law. Sunday's regular features were delayed, but you can now find the Legal Theory Lexicon entry on Causation and the Legal Theory Calendar.


 
Mirowski on the Philosophical Hammer at George Mason At George Mason's Philosophy, Politics and Economics series, Phil Mirowski (Department of Economics, University of Notre Dame) presents Philosophizing with a Hammer.


 
Silberman on International Jurisdiction and Judgments at NYU At NYU's law series, Linda Silberman discusses the ALI Project on International Jurisdiction and Judgments.


 
Holthoefer on International Law and Order at Chicago Today at the University of Chicago's political theory workshop series, Anne Holthoefer, University of Chicago, presents A Procrustean Bed? International Law and the Shaping of International Order.


 
Crisp on Hedonism at Oxford Today at Oxford's Moral Philosophy Seminar, Roger Crisp (Oxford) presents Hedonism Reconsidered.


 
Strahilevitz on the Right to Destroy Lior Strahilevitz (University of Chicago Law School) has posted The Right to Destroy on SSRN. Here is the abstract:
    Do you have the right to destroy that which is yours? This paper addresses that fundamental question. In contested cases, courts are becoming increasingly hostile to owners' efforts to destroy their own valuable property. This sentiment has been echoed in the legal academy, with recent scholarship calling for further restrictions on an owner's right to destroy cultural property. Yet this property right has received little systematic attention. The paper therefore examines owners' rights to destroy various forms of property, including buildings, jewelry, transplantable organs, frozen human embryos, patents, personal papers, and works of art. A systematic treatment of the subject helps support a qualified defense of the right to destroy one's own property. For example, an examination of American laws and customs regarding the disposition of cadaveric organs helps one understand and weigh the expressive interests that prompt people to try to destroy jewelry via will. Similarly, an examination of patent suppression case law points toward a form of ex ante analysis that has been de-emphasized in opinions involving the destruction of buildings and other structures. An analysis of cases involving the destruction of frozen human embryos may shed light on creators' rights to burn unpublished manuscripts or works of art. And collectivist theories of free speech may help explain why the Visual Artists Rights Act sensibly prohibits the destruction of paintings by living artists, but not Old Masters. In advocating a more unified treatment of destruction rights, the paper argues that greater deference to owners' destructive wishes often serves important welfare and expressive interests. The paper also critiques existing case law that calls for particular hostility toward will provisions that direct the destruction of a testator's valuable property. Courts and commentators have not given particularly persuasive justifications for restricting testamentary destruction, and the paper proposes a safe-harbor provision whereby sincere testators who jump through certain hoops during their lifetimes can have their destructive wishes enforced.


 
Gibbons on a Federal Common Law of Copyright Contract Llewellyn Joseph Gibbons (University of Toledo - College of Law) has posted Stop Mucking up Copyright Law: A Proposal for a Federal Common Law of Contract is a Common Sense Solution (Rutgers Law Journal, Forthcoming) on SSRN. Here is the abstract:
    This article proposes an alternative to the two current schools of academic thought regarding which body of contract law should govern copyrights. The traditionalists contend that existing state contract law, either common law contract or Article 2 of the Uniform Commercial Code, is adequate to meet the marketplace's need for stable and predictable law. The other school argues that neither form of state law is adequate. Rejecting existing contract law, this school proposes creating new sui generis bodies of state law, such as the proposed Uniform Computer Information Transactions Act (UCITA), to address problems applying state common law or the UCC to copyrights and copyrightable works. Both schools, to some degree, recognize the vexing problems encountered by scholars trying to reconcile state contract law and the federally created body of copyright law. This article rejects this artificial dilemma in proposing a third alternative, one that has not been suggested in the literature: federal courts or Congress should create a body of contract law for copyrights. This avoids the federal preemption and choice of law issues presented by applying state law principles to a federal body of property rights by creating a federal body of contract law. Further, in light of The National Conference of Commissioners on Uniform State Laws (NCCUSL) recent withdrawal of UCITA as a proposed uniform state law, this article represents the only counter proposal to the traditional school of thought and may serve as a possible starting point for developing a uniform body of federal contract law for copyrights.


 
Wasserman on Symbolic Counter-Speech Howard M. Wasserman (Florida International University College of Law) has posted Symbolic Counter-Speech (William & Mary Bill of Rights Journal, Vol. 12, February 2004) on SSRN. Here is the abstract:
    In this article, Professor Wasserman introduces, defines, and explores a new form of expression, labeled symbolic counter-speech. Symbolic counter-speech is an outgrowth of two extant free expression concepts - the right and opportunity to communicate through symbols and the Brandeis imperative of counter-speech as the acceptable answer to objectionable speech. Symbolic counter-speech responds to a symbol on its own terms, countering the message presented by a particular symbol while using that symbol as the vehicle or medium for the contrary message. Symbolic counter-speech includes a range of expressive actions, from silent non-participation with a symbol or symbolic ceremony to confrontation of the symbol with a different, contrary symbol to attacks on the original symbol by destroying it or altering it to create a new message. Professor Wasserman considers symbolic counter-speech in the post-September 11 environment, when the United States has returned to what Vincent Blasi called a "pathological period," a period in which commitment to free speech wanes and in which government is especially likely to engage in systemic suppression. Although there have not been widespread governmental restrictions on expression, the primary feature of previous pathologies, there has been a dramatic increase in government and private patriotic symbolism and expression and of intolerance for objections to that patriotism. This has been particularly true with regard to the American flag and its complementary symbols, such as the Pledge of Allegiance, the national anthem and God Bless America. The focus of this paper is the increase in patriotic symbolism, along with incidents of counter-speech to that symbolism, at professional and collegiate sporting events, the primary forum in American society in which crowds of adults regularly engage in patriotic expression. Finally, the concept of symbolic counter-speech and these examples of flag-related symbolic counter-speech show the inconsistency between principles and traditions of freedom of speech and the movement for "flag preservation," which logically would eliminate all symbolic counter-speech directed against the flag and its complements.


 
Gross on Constitutional Emergency Provisions Oren Gross (University of Minnesota Law School) has posted Providing for the Unexpected: Constitutional Emergency Provisions (Israel Yearbook on Human Rights, Vol. 32, 2004) on SSRN. Here is the abstract:
    The article seeks to examine some of the general patterns with respect to treating emergencies as they are reflected in domestic constitutional arrangements. The article explores existing constitutional emergency arrangements of over seventy countries around the world, attempting to classify some of the important attributes of such constitutional arrangements into meaningful categories. Specifically, the article examines the various constitutional options with respect to such questions as: (1) how (and whether) to define a state of emergency in the constitutional document; (2) who has the power and authority to declare a state of emergency (and to terminate such a declaration); (3) what political and judicial control (if any) exists under the constitutional framework over the use of emergency powers; and (4) what are the legal ramifications of declaring a state of emergency with respect, for example, to the protection of individual rights and civil liberties and the possibility of suspending the constitution, in whole or in part.


 
Legal Theory Calendar
    Monday, January 26
    Tuesday, January 27
      At the University of Chicago, The Coase Lecture is presented by Douglas Lichtman, Professor of Law, University of Chicago Law School, who will deliver Irreparable Harms and Irreparable Benefits.
      At Oxford's Jurisprudence Discussion Group, G. A. Cohen (Oxford) presents Rescuing Justice from Constructivism.
      At Oxford's Ockham Society, Andreas Szigeti presents Moral Sentiments and Moral Dilemmas.
    Wednesday, January 28
      At University College's Colloquium in Legal and Social Philosophy, Frances Kamm presents Failures of Just War Theory and Terrorism.
      At Villanova law, John Parry (University of Pittsburgh School of Law) presents Chavez v. Martinez and the Jurisprudence of Torture.
      At Loyola Marymount, Laurie Levenson (LMU) presents Why Looks Matter: The Impact of Non-Evidence on the Courtroom.
      At Yale's philosophy series, David Sussman presents Kant and the Politics of Disgrace.
    Thursday, January 29
      At Penn's law and philosophy series, Geoffrey Sayre-McCord (Professor and Chair, Department of Philosophy at University of North Carolina at Chapel Hill) is presenting Rational Agency and Normative Concepts with comments by Hans Oberdiek.
      Also at Penn, Martha Nussbaum (Ernst Freund Distinguished Service Professor of Law and Economics, University of Chicago) is giving the Judith R. Berkowitz Endowed Lectureship in Women's Studies. Her title is Gender Justice, Human Rights, and Human Capabilities.
      At Boston University law, Susan Koniak (BU) presents How Like a Winter? The Plight of Absent Class Members Denied Adequate Representation.
      At Florida State, Jennifer Mnookin, University of Virginia School of Law, presents Atomism, Holism and the Law of Evidence.
      At Georgetown's Colloquium on Intellectual Property & Technology Law, Rosemary J. Coombe, York University, presents The Globalization of Intellectual Property: Informational Capital and Its Cultures.
      At Stanford's Olin series, Terry Fisher (Harvard Law School) presents An Alternative Compensation System for the Entertainment Industry.
      At the University of Michigan's law and economics series, Omri Ben-Shahar, Michigan, presents "Agreeing to Disagree": Filling Gaps in Deliberately Incomplete. The title on the website is "incomplete," but not deliberately so.
      At George Mason, Craig Lerner, GMU School of Law, presents “Accommodations” for the Learning Disabled: A Level Playing Field or Affirmative Action for Elites?
      At Oxford's Public International Law Discussion Group, Robert Volterra presents The Commission on the Limits of the Continental Shelf: Technical Science, Star Chamber, or Quasi-Judicial Tribunal?
      At the Australian National University's RSSS, Norva Lo (La Trobe University) presents Humpty Dumpty Analysis of 'Valuing', Empty Analysis of 'Valuable'.
      At UCLA's legal history series, Sally Gordon, University of Pennsylvania, presents Parochial School Funding: Catholics, Protestants, and Legal Activism at Mid-Century.
    Friday, January 30
      At the University of Texas, Ariela Gross, USC, presents Administering Citizenship, Identity and Land in Indian Territory, 1865-1907.
      At Oxford's faculty of laws, Diamond Ashiagbor presents Economic and social rights in the EU charter (on human rights, social rights and social policy discourse).
      At Oxford's Jowett Society, Jennifer Saul (Sheffield) presents Pornography, Speech Acts, and Context.
      At the University of North Carolina's philosophy department, Cynthia Stark (Utah) presents How To Include the Severely Disabled in a Contractarian Theory of Justice.


 
Legal Theory Lexicon: Causation
    Introduction Causation is one of the basic conceptual tools of legal analysis. And for most purposes, we can get along with a notion of causation that is both vague and ambiguous. In the world of medium-sized physical objects (automobiles, pedestrians, etc.), there are many clear-cut cases. The driver’s negligence caused the death of the pedestrian but did not cause John Kerry to win the Iowa caucuses in 2004. In these cases, various notions of causality converge. The person on the street, the scientist, and the lawyer can all agree in such cases that for all practical purposes X caused Y but not Z. But sometimes the various notions of cause come apart, exposing ambiguities and vagueness in both ordinary and legal talk about causes and effects. This post provides a very basic introduction to causation for law students (especially first-year law students) with an interest in legal theory.
    Cause-in-Fact & Legal Cause Let’s put the most important distinction on the table right away. Contemporary legal theory and judicial practice assume that there is a distinction between legal cause on the one hand and cause-in-fact on the other. What does that mean? That’s a huge question, of course, but we can state one conclusion straight away: that X is a cause-in-fact of Y does not entail that X is a legal cause of Y. Less obviously, that X is a legal cause of Y does not entail that X is a cause-in-fact of Y. The various ways that cause-in-fact and legal cause can come apart lead many to the conclusion that legal cause simply has nothing to do with causation, but this turns out to be an exaggeration. I know this all sounds very airy. So let’s get down to brass tacks!
    Cause-in-Fact What do we mean when we say that X is a cause-in-fact of Y? Many law students learn that the answer to this question is but-for causation. If it is the case that but for X, Y would not have occurred, then X is a but-for cause of Y and hence X is a cause-in-fact of Y. This simple story works most of the time, and as a rough and ready rule of thumb, it isn’t half bad. But it turns out that if you try to use but-for causation as a hard and fast rule for determining whether X is the cause of Y, you will run into trouble, sooner or later. In torts and criminal law, but-for causation runs into trouble somewhere in the midst of the first-year course. In a sense, the point of this Lexicon post is to provide a set of tools for understanding the troubles that overreliance on but-for causation can cause.
    Necessary and Sufficient Causes The first item in the causation toolkit is the distinction between necessary and sufficient cause. The basic ideas are simple and familiar. X is a necessary cause of Y, if Y would not have occurred without X. Ben’s running the red light is a necessary cause of the damage to Alice’s car, just in case the damage would not have occurred without Ben’s having run the light. X is a sufficient cause of Y, if Y would have occurred so long as X occurred. Alice’s shooting Ben through the heart is a sufficient cause of Ben’s death, just in case the shot through the heart by itself would have caused Ben’s death.
    The Role of Counterfactuals The notions of necessary and sufficient causation are familiar to almost everyone. We use these ideas all the time in everyday life. But the very familiarity of these concepts creates a temptation to take them for granted. There is an important feature of these ideas that our day-to-day use of them does not make explicit. Both necessary and sufficient causation are counterfactual concepts. What does that mean? “Counterfactual” is simply the fancy name for “what if” thinking. What if Ben had stopped at the red light? Would the damage to Alice’s car still have occurred? What if Ben had gotten immediate medical attention? Would the shot through the heart still have killed him? Every statement regarding a necessary or sufficient cause can be interpreted as making a counterfactual (“what if”) claim.
    What-if reasoning is itself familiar and ordinary. When we say, Ben’s running the red light was a necessary cause of the damage to Alice’s car, we are claiming that if the world had been different and Ben had not run the red light, then Alice’s car would not have been damaged. We imagine what the world would have been like if Ben had stopped at the red light, and Alice had proceeded through the intersection without being struck by Ben’s car. Counterfactual reasoning can get more complicated than this, but for our purposes we can use everyday what-if reasoning as our model of the role of counterfactuals in necessary and sufficient causation.
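    For readers who like to see the counterfactual test made mechanical, here is a rough sketch in Python. It is purely illustrative: the event names and the little accident_occurs function are stipulated for this example, not drawn from any real case, and the model is far cruder than the philosophical literature on counterfactuals.
      # A toy counterfactual model of the Ben/Alice collision example.
      # A "world" is just a set of events; accident_occurs encodes the (stipulated)
      # causal structure: the collision happens only if Ben runs the red light
      # while Alice is in the intersection.

      def accident_occurs(world):
          return "ben_runs_red_light" in world and "alice_in_intersection" in world

      def but_for_cause(event, world, outcome):
          # But-for (necessary cause) test: the outcome occurs in the actual world,
          # but would not have occurred in the counterfactual world minus the event.
          return outcome(world) and not outcome(world - {event})

      actual_world = {"ben_runs_red_light", "alice_in_intersection"}

      print(but_for_cause("ben_runs_red_light", actual_world, accident_occurs))    # True
      print(but_for_cause("alice_in_intersection", actual_world, accident_occurs))  # True
    On this toy model, Ben’s running the light and Alice’s presence in the intersection are both but-for causes of the accident, which already hints at the point made below about how many but-for causes any given event has.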
    Overdetermination Once we’ve gotten the notions of necessary and sufficient causes, we can move on to the idea of overdetermination. An effect is overdetermined if it has more than one sufficient cause. Take the case of Alice shooting Ben through the heart. We have postulated that the bullet passing through the heart was a sufficient cause of Ben’s death, but it may not have been a necessary cause. Suppose that Alice was a member of a firing squad, and that at the exact same moment that Alice’s bullet passed through Ben’s heart, another bullet, fired by Cynthia, passed through Ben’s cerebral cortex and that this would have resulted in Ben’s death, even if Alice had not fired or her bullet had missed. Ben’s death now results from two sufficient causes, but neither Alice’s shot nor Cynthia’s shot was necessary. If Alice had not fired, Cynthia’s shot would have killed Ben. If Cynthia had not fired, Alice’s shot would have killed Ben.
    Overdetermination is important, because it undermines the idea that but-for causation tells us everything we need to know about cause-in-fact. We might say that both Alice’s and Cynthia’s shots caused Ben’s death, or we might say they were both partial causes of Ben’s death, but we would not be likely to say that neither Alice’s nor Cynthia’s shot was the cause.
    The firing squad example was described as a case of simultaneous overdetermination—both sufficient causes occurred at the same time. What if Cynthia shot a few seconds before Alice and Ben died before Alice’s shot pierced his heart? In that case, Cynthia’s shot would have preempted the causal role of Alice’s shot. If Cynthia had missed, then Alice’s shot would have killed Ben. This kind of case is sometimes called preemptive causation.
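    The trouble that overdetermination makes for the but-for test can be seen by running the firing squad example through the same kind of toy counterfactual check (again, the model is stipulated purely for illustration):
      # Toy model of simultaneous overdetermination: two independently sufficient shots.

      def ben_dies(world):
          # Either a shot through the heart or a shot through the cerebral cortex kills Ben.
          return "alice_shoots_heart" in world or "cynthia_shoots_cortex" in world

      def but_for_cause(event, world, outcome):
          # Same but-for (necessary cause) test as in the earlier sketch.
          return outcome(world) and not outcome(world - {event})

      firing_squad = {"alice_shoots_heart", "cynthia_shoots_cortex"}

      print(but_for_cause("alice_shoots_heart", firing_squad, ben_dies))     # False
      print(but_for_cause("cynthia_shoots_cortex", firing_squad, ben_dies))  # False
    Each shot is sufficient for Ben’s death, yet neither passes the but-for test; taken literally, but-for causation would say that neither shot caused the death, which is exactly the counterintuitive result described above.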
    Coincidence Overdetermination poses one kind of problem for but-for causation; coincidence poses a different sort of difficulty. Suppose the driver of a trolley is speeding. As a result, the trolley is in just the wrong place at just the wrong time, and a falling tree injures a passenger. If the trolley had gone just a little faster or just a little slower, the tree would have missed the trolley and the injury would not have occurred. Given these circumstances, speeding was a but-for cause (a necessary cause) of the tree injuring the passenger. So what? Coincidence is no problem for cause-in-fact, but it does pose a problem for the legal system. Intuitions vary, but lots of folks are inclined to believe that one should not be legally responsible for harms that one causes as a result of coincidences.
    Coincidence is related to a variety of other problems with but-for causation. Take our example of Ben running the stoplight and hitting Alice’s car. Running the stoplight was one but-for cause of this accident, but there are many others. For example, Alice’s being in the intersection was also a but-for cause. And how did Alice come to be in the intersection at just the time when Ben was running the red light? If her alarm clock hadn’t gone off, she would have slept in and arrived in the intersection long after Ben, so her alarm clock’s ringing was another but-for cause. And you know how the story goes from here. As we trace the chain of but-for causes back and out, we discover that thousands and millions and billions of actions and events are but-for causes of the accident.
    Legal Cause What do we do about the problems created by but-for cause? One way that the law responds is with the idea of legal cause or proximate cause. In this post, we cannot hope to achieve a deep understanding of legal cause, but we can get a start. Here are some of the ideas that help me to understand legal cause.
    First, there is a terminological issue: causation may be confused with responsibility. “Legal cause” is only partially about cause. We start with the idea of cause-in-fact (understood in light of the distinction between necessary and sufficient cause). This idea of cause seems, on the surface, to fit into the structure of various legal doctrines. So we imagine that if a defendant breaches a duty of care and causes a harm, then the defendant is legally responsible for the harm. This works for lots of cases, but then we start thinking about other cases like overdetermination and coincidence. “Legal cause” is the way that we adjust our ideas about legal responsibility to overcome the counterintuitive results that would follow from a simple reliance on but-for causation. In other words, “legal cause” may be a misnomer. It might be clearer if we used the phrase “legal responsibility” (or some other phrase) to describe the ways in which we adjust the law.
    Second, legal cause is frequently associated with the idea of foreseeability. For example, in coincidence cases, the harm (the tree injuring the passenger) is not a foreseeable consequence of the wrongful act (driving the trolley at an excessive speed). If the purpose of the law is deterrence, then no good purpose may be served by assigning legal responsibility in cases where the effect is unforeseeable.
    Third, legal cause is sometimes associated with the idea of proximity in time and space. Of course, the phrase “proximate cause” emphasizes this connection. We usually don’t want to hold defendants responsible for the remote and attenuated effects of their actions. We frequently do want to hold defendants responsible for the immediate and direct effects of their actions. “Proximity” seems to capture this point, but an overemphasis on proximity in time and space leads to other problems. Some immediate consequences do not give rise to legal responsibility: the trolley driver may have started speeding just seconds before the tree fell. Some causal chains that extend for long distances over great durations do give rise to legal responsibility: Osama bin Laden’s responsibility for 9/11 would not be vitiated by the fact that he set events in motion years in advance and thousands of miles away.
    Probability Our investigation of causality so far has elided an important set of issues—the connections between causation and probability. These connections are far too large a topic for this post, but even a superficial analysis requires that we consider two perspectives--ex ante and ex post.
    Ex post questions about causation arise in a variety of contexts, but for the legal system, a crucial context is provided by litigation and especially trial. In many cases, there is no doubt about causation. When Ben’s car speeds through the red light and hits Alice’s car, we don’t have much doubt about what caused the damage. But in many types of cases, causation will be in doubt. Did the chemical cause cancer? Was the desk job the cause of the back injury? Sometimes the evidence will answer these questions with certainty (or perhaps, with something that is so close to certainty that we treat it as certainty for legal and practical purposes). But in other cases, the evidence will leave us with a sense that the defendant’s action is more or less likely to have caused the harm to the plaintiff. Such probabilities may be expressed either qualitatively or quantitatively. That is, we might say that it is “highly likely” that X caused Y or we might say that there is a 50% chance (p = .5) that X caused Y.
    Ex ante issues of causation also arise for the law. For example, the legal system may be required to assign a value to a risk of harm that has not yet been realized. David has been exposed to asbestos, but may or may not develop cancer. In this case, probabilities refer to the likelihood of future events.
    Decision theory and mathematics have elaborate formal machinery for representing and calculating probabilities. In this short post, we cannot even scratch this surface, but there are two or three bits of notation that every legal theorist should know:
      --The letter “p” is frequently used to represent probability. Most law students encounter this notation in Judge Learned Hand’s famous opinion in the Carroll Towing case (B < PL, or “burden less than loss discounted by probability”). The notation p(x) = 0.1 can be read “the probability of x equals 1/10.” And the notation p = 0.5 can be read “the probability equals one in two.”
      --The symbol “|” is frequently used to represent conditional probabilities. Suppose we want to represent the probability that X will occur given that Y has occurred; we can use this notation: p(X|Y). So we could represent the sentence, “The probability of Cancer given Exposure to Asbestos is ten percent,” as p(C|EA) = 0.1. A small numerical sketch of both notations follows this list.
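    Here is a tiny numerical sketch of these two bits of notation (the numbers are made up solely to illustrate the arithmetic, not drawn from any real case):
      # Hand formula from Carroll Towing: liability when the burden of precautions
      # is less than the probability of the loss times its magnitude (B < P * L).
      B = 100.0      # hypothetical burden of the untaken precaution (in dollars)
      P = 0.01       # hypothetical probability of the accident
      L = 50000.0    # hypothetical magnitude of the loss (in dollars)
      print(B < P * L)  # True: 100 < 500, so the precaution should have been taken

      # Conditional probability p(C|EA): probability of Cancer given Exposure to Asbestos,
      # estimated from hypothetical counts.
      exposed = 1000             # people exposed to asbestos
      exposed_with_cancer = 100  # of those, the number who develop cancer
      print(exposed_with_cancer / exposed)  # 0.1, i.e. p(C|EA) = 0.1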
    Types and Tokens So far, we have been focusing mostly on cases where an individual instance of harm is caused by some particular wrongful action. But of course, we frequently think about causation as a more general relationship. For example, in science we might speak of “causal laws.” There is no standard terminology for this distinction: we might use the phrases “individual causation” and “systematic causation.” One helpful bit of terminology for getting at this idea is to differentiate “types” and “tokens.” Ben’s running the red light at a particular time and location is an event token, and it is a token of a type of event, i.e. the type “running a red light.”
    Once we have the distinction between types and tokens in place, we can define individual causation as a causal relationship between a token (e.g. a token event) and another token (e.g. a token action). And we can define systematic causation as a causal relationship between a type (e.g. a type of event) and another type (e.g. a type of action). Science studies causal relationships between types; trials frequently involve questions about the causation of one token by another. This leads to another important point: the question whether an individual harm was caused by an individual action will sometimes depend on the question whether a systematic causal relationship exists; for example, the question whether this factory’s release of a chemical caused an individual case of cancer may require a jury to resolve a “scientific” question about systematic causation.
    Conclusion Even though this is a long entry by the standards of the Legal Theory Lexicon, it is a very compressed and incomplete treatment of the concept of causation. Given the way legal education is organized (around doctrinal fields like torts, criminal law, and evidence), most law students never get a truly comprehensive introduction to causation. Torts may introduce the distinction between cause-in-fact and legal cause; criminal law, problems of overdetermination; and evidence, the relationship between probability and causation. If this post accomplishes anything of value, I hope that it serves as a warning—causation is a deep and broad topic about which there is much to learn.


Sunday, January 25, 2004
 
Calendar & Lexicon Late Today The Legal Theory Calendar and the Legal Theory Lexicon will be posted late today. I'm returning home from the Roundtable on Causation and Probability in Death Valley.
Update: When I returned home, my internet connection was out. The Lexicon entry is now up, but the Legal Theory Calendar won't go up until later on Monday.


Saturday, January 24, 2004
 
Legal Theory Bookworm This week the Legal Theory Bookworm recommends Placing Blame, a General Theory of the Criminal Law by Michael Moore (Oxford University Press). Moore is one of the most interesting and deep thinkers in contemporary legal theory. Here is the blurb:
    This is a collection of essays which form a thorough examination of the theory of criminal responsibility. The author covers a wide range of topics, but perhaps the most significant feature of this book is Moore's espousal of a retributivist theory of punishment. This anti-utilitarian standpoint is a common thread throughout the book. It is also a trend which is currently manifesting itself in all areas of moral, political and legal philosophy.
Moore is retributivist to the bone, embracing and defending a rigorous version of retributivist theory that must be taken into account if you are interested in criminal law theory. Highly recommended.


 
Download of the Week This week the Download of the Week is Inheriting Responsibilities by David Miller. Here is a taste of this nifty paper:
    One quite striking feature of the politics of the last half-century has been the escalation of demands for redress, issued by groups who see themselves as the victims of historic acts of injustice. Present-day governments and their citizens are being asked to bear responsibility for the actions and policies of earlier generations, and to take a variety of steps to correct the harm and injustice that they perpetrated. Not all such demands have been successful, but many have been, and the costs incurred have in some cases been considerable. The claims in question have been very diverse, both in terms of who is making them and in terms of the acts singled out as standing in need of redress. Let me remind you of some well-known examples:
      1) The payments that have been made by the German government to Jews as reparation for the Nazi holocaust, mainly in the form of transfers to Israel, and estimated to be in the order of 80 billion Deutschmarks.
      2) The demands made by members of the Australian Aboriginal community for compensation and for a national Day of Apology for the so-called ‘stolen generation’ of Aboriginal children taken from their families and brought up in white homes or orphanages.
      3) The compensation of $122 million awarded by the US Supreme Court to the Sioux Indians for the occupation by whites in the late 19th century of the gold-rich Black Hills area that had previously been reserved to the Sioux by treaty.
      4) Demands that Japan should pay compensation to ‘comfort women’ taken from other East Asian countries (especially Korea) and forced into prostitution by the Japanese military, giving rise to official apologies and the creation of an Asian Women’s Fund to offer compensation to the women involved.
      5) Demands that items of symbolic significance seized from their original owners should be returned to those owners or their descendants, for instance the demand that the Parthenon Marbles should be returned to Greece, or the demand by some aboriginal peoples that the bones of their ancestors now held in museums across the world should be sent back to them for reburial.
      6) The many and varied demands that have been made in the US as forms of redress for black slavery, from land settlements for blacks, to financial compensation to the descendants of slaves, to affirmative action policies, to formal apologies for slavery on the part of Congress or the President.
    My interest in this issue stems from a broader interest in the idea of national responsibility. When does it make sense to hold those collectivities we call nations responsible both for the benefits and harms they bring to themselves and for the benefits and harms that they inflict on others? I have tried in another paper to answer this question as it applies to nations considered as groups of contemporaries, and for the purpose of the present paper I am going to assume that the core idea of national responsibility is defensible. But clearly it is one thing to argue that we share in responsibility for the actions of the present generation – for political decisions taken in our name, for instance – and another to argue that we can inherit responsibilities from the past. How can we be liable for what our forefathers did when we had no opportunity either to contribute to or to prevent the actions and policies that created the injustice?
Download it while it's hot!


Friday, January 23, 2004
 
Probability and Causation Roundtable at Death Valley My blogging will be light and erratic over the next three days. I am attending the Roundtable on Probability and Causation, organized by the law and philosophy institutes at the University of Illinois, with support from the law and philosophy institutes of the University of Pennsylvania and the University of San Diego. A terrific set of readings has been prepared by Michael Moore and Tom Ulen. Other participants include Matt Adler (Penn), Larry Alexander (San Diego), Ron Allen (Northwestern), Lee Fennell (Texas), Claire Finkelstein (Penn), Richard Fumerton (Iowa-Philosophy), Thomas Ginsburg (Illinois), Susan Haack (Miami-Philosophy & Law), Christopher Hitchcock (Cal Tech), Heidi Hurd (Illinois), David Hyman (Maryland), Leo Katz (Penn), Richard Lempert (National Science Foundation & Michigan), Alan Schwartz (Yale), and Steve Smith (San Diego). This should be terrific!
Update: I'm writing this postscript on Sunday morning. This was a marvelous roundtable--with lots of fascinating discussion. Although I won't be blogging any of the conversations, I will put up two or three posts on ideas and issues that arose during the roundtable. Thank you Tom Ulen and Michael Moore!


 
Rickey on the Senate Judiciary Memos Anthony Rickey has a thorough discussion of the simmering controversy over Republican access to Democratic memos on judicial confirmations: here, here, and here.


 
Bracha on Copyright History at Texas At the University of Texas, Oren Bracha (UT) presents The Transformation of American Copyright Law 1789-1909. Here is an excerpt from the introduction:
    When the federal copyright regime was initiated in 1790 its founders relied on a long and thick institutional and conceptual history. The English copyright tradition went all the way back to the late sixteenth century in which a printer’s trade privilege to exclusively print a “copy” emerged within the institutional apparatus of the printers and booksellers guild- the Stationers’ Company. A more recent landmark was the 1710 Statute of Anne and the new copyright framework it created. At the heart of this framework was a fourteen years exclusive right to print a book, bestowed on the author and his assignees. The English statute was accompanied by eighty years of elaboration, glossary and re-interpretation from the common law courts. This English background circa 1790 is best understood not so much as a coherent stable whole, but more as different layers of ideas and practices, the newer ones of which gradually modified and transformed the older, but also incorporated and retained much of their form and substance. Moreover, by the end of the eighteenth century the field of copyright in England was in a state of flux. New conceptions were beginning to appear in embryonic forms and older ones were being subtly mutated. This paradigmatic shift occurred not overnight but as a gradual process of transformation. In 1790 this process was still in its early stages, leaving in its path exposed conceptual textures, open debates and many ambiguities. To add to the complexity the American conceptual universe of copyright at the time included two additional layers, which were local mutations of traditional English institutions. The first was the American colonial and state practice of legislative grants of limited time exclusive privileges for the printing and sale of certain texts. More recent and significant than those rather sporadic grants were the general copyright laws legislated in the 1780s by twelve out of the thirteen states, all modeled in varying degrees after the Statute of Anne. Thus, the American founders of the new federal copyright regime were, to use Levi-Strauss’ term, “bricoleurs.” They were creating the American copyright system with a set of instruments and materials that were at hand- “the contingent result of all the occasions there have been to renew or enrich the stock or to maintain it with the remains of previous constructions or destructions.” The result of their first attempt at bricolage- the 1790 Act for the encouragement of learning by securing the copies of maps charts and books, to the authors and proprietors of such copies, during the times therein mentioned - was deeply rooted in the previous layers of meaning and in familiar institutional mechanisms. With some minor variations the 1790 act created the traditional fourteen years trade privilege to print a copy, now formally bestowed on the author rather than the printer (as in post-1710 England). Incorporating all of this institutional and conceptual baggage also meant incorporating much of the ambiguities, instabilities and controversies of the English context. The system was soon met with conflicting demands, pressures and forces (all exercised through the agency of interested groups and individuals), that would bring about a century long process of transformation. To some extent this process occurred through legislative reforms, but its main agents were the courts that were left to deal with many of the unanswered questions and to regulate the various pressures.


 
Cohen at Oxford At Oxford's Jowett Society, G.A. Cohen (Oxford) is speaking. Could someone provide the title of Cohen's paper?


 
Little on Intimate Duties at Tulane At Tulane's Center for Ethics and Public Affairs, Margaret Little (Georgetown University) presents Intimate Duties.


 
Constitution Making in Israel and Palestine Today, at the University of Chicago, there is a conference on Constitution-Making in Israel and Palestine, sponsored by the Center for Comparative Constitutionalism.


 
Hersch on Jury Demands and Trials Joni Hersch (Harvard University - Harvard Law School) has posted Jury Demands and Trials on SSRN. Here is the abstract:
    The behavior of juries in civil cases has been a focal concern in the legal reform debate. Whether a case would have a jury trial rather than a bench trial depends on decisions made by the parties to the legal dispute. In most civil litigation, either party may demand a jury trial, and this demand cannot be vetoed by the other party. This paper provides the first economic analysis of demand for jury trial and the implications of this choice on parties' settlement behavior. The empirical exploration of these issues uses a unique data set of almost 4,000 federal cases. The results are consistent with an economic model of the litigation process. Plaintiffs are more likely to demand trial by jury when juries are relatively more favorable to plaintiffs in similar cases, jury awards are more variable relative to bench awards, and the disparity in trial costs is smaller. Cases demanding jury trial are 5.5 percentage points more likely to settle without trial.


 
Dauber on the Sympathetic State Michele Landis Dauber (Stanford University - School of Law) has posted The Sympathetic State on SSRN. Here is the abstract:
    Despite nearly universal scholarly agreement on the absence of federal redistribution during the late nineteenth and early twentieth centuries (except for Civil War pensions), the frequency and generosity of federal disaster relief appropriations actually escalated during this period. These appropriations, which included such measures as the Freedmen's Bureau and other Southern war relief, and relief of floods, fires, and earthquakes, were seen as constitutionally unproblematic and indeed mandated by prior precedent. Not surprisingly, members of Congress and other advocates for the poor pointed to disaster appropriations, albeit unsuccessfully, as a precedent for spending policy innovations. For example, Congressional Populists argued during the Depression of 1893 that unemployment relief was analogous to disaster relief. Proponents of Henry Blair's bill for federal aid to common schools in the 1880s made a similar case, also fruitlessly. Similarly, disaster relief precedents figured prominently in Supreme Court litigation, including the Sugar Bounty cases in the 1890s. The efforts by claimants in all of these instances to expand the definition of what could legitimately count as a "disaster" that could be relieved with federal funds foreshadowed the similar, though more successful, efforts by New Dealers during the 1930s on behalf of the unemployed, tenant farmers, and the elderly.


 
Bradford on a Natural Law Justification for Preventive War William C. Bradford (Indiana University Purdue University Indianapolis (IUPUI) - School of Law) has posted The Duty to Defend Them: A Natural Legal Justification for the Bush Doctrine of Preventive War (Notre Dame Law Review, Vol. 79, 2004) on SSRN. Here is the abstract:
    Part I analyzes the primary customary and treaty-based sources constituting the international legal regime governing self-defense, as well as relevant state practice in the post-Charter era, to evaluate the arguments as to the legality of measures undertaken in anticipation of an armed attack, as well as the continued functionality of the UN Charter framework in the Age of Terror. Part II examines the Bush Doctrine as an expression of a doctrine of preventive war that transcends the debate over the use of armed force in anticipation of an imminent attack. Part III claims that an examination of historical sources of international legal obligation and a less restrictivist, less positivist read of the Charter reveals a natural legal basis for the right, and, even more pointedly, the duty of states to engage in preventive war in order to defend against existential threats. Part IV examines the U.S. Constitution as a domestic expression of a Presidential legal duty, arising under natural law, to defend the U.S. against external threats. Part IV asserts the claim that the Bush Doctrine is an expression of the intent to faithfully discharge this natural legal duty and that the doctrine of preventive war it elaborates is not only theoretically consistent with obligations under international law but even promotive of the ends law is intended to secure, even if the exercise of the right to preventive war is subject to some important qualifications. Part V offers proposals to harmonize the Bush Doctrine with the UN Charter and guide formal international legal institutions, including the Security Council and the International Criminal Court, toward enhanced functionality in the simultaneous defense of the natural right of states and peoples to life on the one hand and the promotion of law-governed order and justice in the international system on the other.


Thursday, January 22, 2004
 
Gutmann Selected as President of the University of Pennsylvania Amy Gutmann, the political theorist and currently Provost at Princeton, has been named President of the University of Pennsylvania. Gutmann was at the ASPLP meeting earlier this month and she was in fine form. (Thanks to Jacob Levy for the pointer.)


 
Confirmation Wars Department: Boston Globe on Spying The Boston Globe has an article on the simmering controversy re internal Democratic judiciary committee memos on the judicial selection process. Here is an excerpt:
    From the spring of 2002 until at least April 2003, members of the GOP committee staff exploited a computer glitch that allowed them to access restricted Democratic communications without a password. Trolling through hundreds of memos, they were able to read talking points and accounts of private meetings discussing which judicial nominees Democrats would fight -- and with what tactics.
Both the GOP action & the content of the memos seem to confirm the intensely partisan atmosphere that has characterized the confirmation process. Thanks to Ken Simons (Boston University) of Punishment Theory for the link.
And for more on the confirmation wars, see this story re the likely Democratic reaction to the recess appointment of Pickering.
And I just spent a few minutes listening in to the Senate Judiciary Committee confirmation hearings today. The discussion concerned the sentencing guidelines and the Feeney Amendment--with very little of substance said.


 
Barbara Fried on Nozick at UCLA At UCLA's Legal Theory Workshop, Barbara Fried (Stanford) presents Begging the Question with Style: Anarchy, State and Utopia at Thirty Years. Here is an early passage:
    I start with the conviction-- reinforced by a recent close rereading of the book-- that the answer cannot be found in the cogency of its central argument. Many of the critical observations in the book-- chiefly of Rawls’s Theory of Justice, but also (in passing) of Williams, Hart, Marxian economics, egalitarian theory in general-- remain important, fresh, illuminating, thirty years later. By contrast, the affirmative argument for the minimal state that makes up the bulk of the book is so thin and undefended as to read, often, as nothing more than a placeholder for an argument yet to be supplied. Its central intuition (“Individuals have rights, and there are things no person or group may do to them”) continues to resonate thirty years later, precisely because, articulated at that level of generality, it will provoke dissent only among hardcore utilitarians. (Indeed, even utilitarians will blanch more at the rhetoric of rights than anything that follows from it.) The problem is defending the particular version of rights that make up libertarianism. Where Nozick hasn’t simply begged that question, the answers he provides are often internally contradictory, or seemingly random with respect to any coherent moral vision.
And from somewhat later:
    The brash, insouciant tone of much of the book is only one of the many rhetorical devices that Nozick deploys to charm and disarm his audience, simultaneously establishing his own credibility with readers, turning them on his ideological opponents, and deflecting attention from some of the more serious gaps in his affirmative argument. For the balance of the paper, I want to take a look at these various devices at work. Some are clearly more successful than others; but together they seem to me to explain in part the immensely respectful reception the book has gotten over the years, from converts and critics alike.


 
Priest on the Market for Judicial Clerks at Michigan At the University of Michigan's law and economics series, George Priest, Yale, presents Reexamining the Market for Judicial Clerks and other Assortative Matching Markets. Here is an excerpt from early in the paper:
    The study--the most extensive empirical analysis extant of the market for judicial clerks--was conducted by a distinguished group of social scientists from Harvard and the University of Chicago, one of whom is an equally distinguished federal judge, Richard A. Posner, whose endorsement and participation surely enhanced the extent and frankness of judicial response to the survey. The Harvard-Chicago Study concluded that the unregulated clerk market was plagued by inefficiencies: decisions were made at inefficiently early times; both judges and students expended inefficient search costs; and decisions were made based on inadequate information. More damning, the process was unfair in many respects and had degenerated into an ethics and practice creating a "frenzy of hiring [that] has cast the judiciary into disrepute . . ." The authors proposed the adoption of a mandatory matching program similar to the mandatory program that, by means of a centralized computer algorithm, matches medical school graduates with hospital residencies. The authors recognized that many federal judges--independent and protected under the Constitution by lifetime tenure--would resist being bound by a computer matching program, and so proposed as an amendment that the match be mandatory only for those judges who wanted their clerks considered for Supreme Court clerkships.
And from a bit later:
    The market for judicial clerks is an unusual market, but it is not an incomprehensible one. Like many other assortative matching markets, both judges and clerks face uncertainty over the success of a match. Given restrictions on the availability of alternative terms of trade to reflect intensity of preference--here, inevitable restrictions on the availability of price, set by Congress--time-of-offer emerges as a currency. The time currency rewards judges who develop skills or techniques of recognizing talent based upon limited information. Distributional consequences result from the exercise of those skills, similar to the distributional consequences of the exercise of predictive skills in other markets--like real estate or securities--in which predictive ability allows early buyers to gain advantage over later ones. The introduction of restrictions on the time-of-offer currency, like any other restriction on terms of trade, will change the allocative outcomes of the market, but can generally be predicted to reduce aggregate welfare.


 
Krawiec at Florida State on the Penalty Default Canon At Florida State, Kimberly Krawiec, University of North Carolina School of Law (short-course visiting professor at FSU), presents The Penalty Default Canon--link courtesy of Gary O'Connor of the fabulous Statutory Construction Blog.


 
Klick on the Effects of Police on Crime at George Mason At George Mason's Levy series, Jonathan Klick of the American Enterprise Institute will present his paper Using Terror Alert Levels to Estimate the Effect of Police on Crime.


 
Hylton at Boston University At Boston University, Keith Hylton is presenting. Can someone supply the title?


 
Rosen on the Enron Bankruptcy Robert Rosen (University of Miami - School of Law) has posted Risk Management and Corporate Governance: The Case of Enron (Connecticut Law Review, Vol. 35, No. 1157, 2003) on SSRN. Here is the abstract:
    Enron Board's Finance Sub-Committee's approval of the first bankrupting Raptor transaction, Talon, is examined in as much detail as published documents allow. In so doing, this article examines a failure of corporate social responsibility. As not only members of the public were harmed, but also Enron's residual owners, the shareholders, this article examines a failure of corporate governance. The examination reveals that the decision was governed by analyses of the transaction's risks. The examination also reveals that the sub-committee was presented with false risk management information. The article highlights the importance of the risk management function, especially in corporations redesigned, or re-engineered, by strategies of outsourcing and project team management.


 
Garrett on the California Recall Elizabeth Garrett (University of Southern California - Law School) has posted Democracy in the Wake of the California Recall (University of Pennsylvania Law Review, Vol. 152, 2004) on SSRN. Here is the abstract:
    The recall of Governor Gray Davis and simultaneous election of Arnold Schwarzenegger provide a unique window on aspects of elections and democratic institutions that are not limited to statewide recall elections. Although one must be wary of drawing general conclusions about the political process from an unusual event such as the statewide recall, this election can serve as a way to think about broader issues relevant not only to future recalls but also to all candidate and issue elections in California and throughout the nation. In this article, I discuss insights that the recent recall provides with respect to four familiar areas of law and politics. First, the recall demonstrated the significant and sometimes troubling role that money plays in modern campaigns, as well as the difficulty of constructing effective and comprehensive campaign finance laws. Second, the unusual structure of the recall election, where an election for Davis's successor was on the same ballot as the recall question, helps to assess the role of political parties in elections. It suggests that independent and minor party candidates can be part of an election without causing widespread voter confusion. Third, the over twenty lawsuits filed before the election was held - with one threatening to delay the election for months until an en banc panel of the Ninth Circuit stepped in - suggest that litigation is being used more aggressively as political strategy in the wake of the Supreme Court's intervention into the 2000 presidential election. Unless courts take a less activist role in cases affecting elections, this disturbing trend is likely to continue. Finally, I conclude with a discussion of the interaction between direct democracy and representative democracy. In states with a hybrid system like California, these two forms of democracy influence each other - a reality that we witnessed in the days before the recall election and that we are likely to continue to see as Governor Schwarzenegger threatens to use initiatives to pressure a recalcitrant legislature to do his bidding.


 
Call for Papers: State Blaine Amendments
    Call for Symposium Contributions Tulsa Law Review seeks contributions for its Winter 2004 issue dedicated to a paper symposium discussing state Blaine Amendments and Blaine-like provisions. This discussion will be facilitated by the United States Supreme Court's decision in Locke v. Davey. TLR seeks contributions of all types--including both essays and articles--for inclusion in the symposium issue. Editing is expected to begin approximately September 13, 2004. TLR has provided a modest stipend for participation in past symposia and expects to be able to continue this practice. Space in the symposium is very limited. Interested authors are encouraged to contact TLR soon to reserve a place in the issue. Once the anticipated capacity has been reached, the issue must be closed. For more information about the symposium issue or TLR's editing process, or to reserve a place in the symposium issue, please contact: Brian McKay Editor-in-Chief Tulsa Law Review 3120 E. Fourth Place Tulsa, Oklahoma 74104 918-631-3532 brian-mckay@utulsa.edu


 
Roe on Corporate Governance Mark J. Roe (Harvard Law School) has posted Political Determinants of Corporate Governance on SSRN. Here is the abstract:
    The claim I advance is that the large firm's ownership structure is too often analyzed as one arising solely from organizational imperatives and technical foundations. The political and social predicates that make the large firm possible and that shape its form can deeply affect which firms, which ownership structures, and which governance arrangements survive and prosper, and which do not. To be concrete, much political analysis can be made to fit a principal-agent model. For ownership to separate from control, managers must be sufficiently aligned with shareholders. But the ways in which some polities settle conflict - or the ways in which the corporate players team up to work together - can affect the degree to which managers ally with shareholders and, concomitantly, how easy it is for ownership and control to separate. Managers' agendas can differ from shareholders'; tying managers tightly to shareholders has been central to American corporate governance. But in other economically advanced nations, ownership is not diffuse but concentrated. It is concentrated in no small measure because the delicate threads that tie managers to shareholders in the public firm fray easily in common political environments, such as those common in continental Europe in the late 20th century. Politics can press managers to stabilize employment, to forego some profit-maximizing risks with the firm, and to use up capital in place rather than to downsize when markets no longer are aligned with the firm's production capabilities; these political tendencies correspond closely to managers' historical tendencies, even in the United States. Since managers must have discretion in the public firm, how they use that discretion is crucial to stockholders, and common political pressures induce managers to stray farther than otherwise from their shareholders' profit-maximizing goals. Owners may be reluctant to turn the firm over to independent managers if managers would more willingly expand and make hard-to-reverse investments. The polity may refuse to give distant shareholders the tools that roughly align managers with shareholders, and it may denigrate the private means that align the two. And some of these political results are easier to implement in weakly competitive product markets, the kind of markets that give managers yet more discretion. Hence, public firms in such polities, all else equal, have higher managerial agency costs, and large-block shareholding has persisted as shareholders' best remaining way to control those costs. Indeed, when we line up the world's richest nations on a left-right political continuum and then line them up on a close-to-diffuse ownership continuum, the two correlate powerfully. True, the effects on total social welfare are ambiguous; such polities may enhance total social welfare, but if they do, they do so with fewer public firms than less socially responsive nations. These results strongly suggest that the corporate governance and ownership characteristics are linked, directly or indirectly, to basic political configurations in the wealthy West. European structures, for example, may link more tightly to Europe's late 20th-century politics than to technical institutions, and the technical institutions may derive from late 20th-century politics as much as anything else.
    We thus uncover not only a political explanation for ownership concentration in Europe, but also a crucial political prerequisite to the rise of the public firm in the United States, namely the weakness of social democratic pressures on the American business firm.


Wednesday, January 21, 2004
 
Froomkin on Anonymity & Free Speech Check out Michael Froomkin's post on the Second Circuit's blow against anonymous expression. Here is a tiny taste:
    [In Church of the American Knights of the Ku Klux Klan v. Kerik,] the Second Circuit upheld New York’s anti-mask law against a group of constitutional challenges—although it dodged one of the key issues, the extent to which a right to speak anonymously was implicated. The court was able to do this by making the scarcely credible assertion that the right to protect one’s associations (NAACP v. Alabama) was not implicated when demonstrators were forced to expose their faces as a condition of appearing in public—in this case at a public demonstration. The court also rejected as irrelevant the claim that this would discourage attendance at KKK rallies, but the argument it uses seems too broad.
As Froomkin observes:
    It’s not just that the court seems to have outlawed Halloween. No, it’s that the precedent is just waiting to be used to block the operation of anonymous remailers and the use of strong cryptography.
Here's the PDF of the opinion.


 
Stearns Overviews Public Choice at Villanova At Villanova Law, Max Stearns (George Mason University School of Law) presents An Overview of Public Choice


 
Case at Northwestern At Northwestern's Constitutional Theory Colloquium, Mary Anne Case, University of Chicago Law School, is presenting "Of 'This' and 'That' in Lawrence v. Texas." Thanks to Rick Garnett for the title.


 
Manson on Freud and Folk Psychology at Hertfordshire At the University of Hertfordshire Centre for Normativity and Narrative, Neil Manson (Cambridge) presents Freud, Folk Psychology and Mental Order.


 
Lawsuits and Copynorms Has the RIAA litigation offensive changed copynorms? Here are some excerpts from a CNET story:
    The NPD Group, an independent market research firm, reported on Friday that peer-to-peer usage was up 14 percent in November 2003 from September. This upturn comes after six straight months of declines in digital file sharing.
    * * *
    "It's important to keep in mind that file sharing is occurring less frequently than before the RIAA began its legal efforts to stem the tide of P2P (peer-to-peer) file sharing," Russ Crupnick, vice president of NPD, said in a statement. "We're just seeing the first increase in these numbers. NPD will continue to monitor whether it's a temporary seasonal blip or a trend that suggests that the industry should be more aggressive in capping the use of illegal methods to acquire digital music."
These results are not necessarily inconsistent with those reported by the Pew Internet and American Life Project--time periods & survey methods were different. For more on this, see this post by Edward Lee. Of course, these studies measure copybehavior not copynorms. On the norms front, I belatedly came across this anecdote from Steven Yu on Law Meme:
    This is a somewhat more personal post than usually appears here on Lawmeme, but I thought it was amusing enough to note. Today I saw The Matrix: Revolutions (which is not so great). Prior to the movie, before any trailers were played, there appeared a short advertisement about a stuntman, who emphasized how much work he put into his stunts, and how much danger he faced every day--only to be ripped off by movie pirates. The moral of this little advertisement: piracy hurts ordinary people too, so stop doing it. The one problem: the audience hated it. There were boos, there were hisses, there were even a few shouted comments directed at the screen. Now maybe the audible reactions were a self-selecting sample: those audience members who strongly opposed piracy might have been the ones who tended to be silent. But, in a room filled with people around my age, I couldn't help feeling that I was hearing and seeing my generation's copynorms given voice.
I have some anecdotal evidence of my own to share. I've been discussing these issues in my intellectual property class over the course of the last week or two. Of course, law students are hardly a representative sample, but if I had to characterize the class sentiment, I would put it like this: It is socially unacceptable to take the position that unlawful P2P filesharing is morally wrong.
Update: The always-intelligent Ernest Miller comments here.


 
Risse Asks A Big Question Mathias Risse (Harvard University - John F. Kennedy School of Government) has posted Do We Live in an Unjust World? on SSRN. Here is the abstract:
    Many unjust relationships continue to exist among peoples as well as among individuals. Perhaps there are so many of them that their sum total supports the verdict that we live in an unjust world. Yet this study asks whether the "global order" as such is unjust, and seeks to give a partial answer to that question by showing that at least some prominent ways of arguing that it is fail. That question must seem hopelessly amorphous. Yet not only do we have to ask it, we will also mean something reasonably precise by it. We must ask it since the unit of political discourse becomes ever more the world as such. "Globalization" is a household word. So we must ask a question in the global context that we have asked all along about relationships among individuals and societies, namely, is it just? This question has great practical import: if the global order is unjust, the culprits will have duties in justice to rectify the situation, duties that could not simply be subordinated to domestic concerns. To make our question more precise, I adopt a minimal conception of justice and ask whether the global order as such is unjust in that sense. I ask whether there is a straightforward sense in which the global order harms the poor. Much intricacy is tied to the idea of "harming", as we know from the debate about Mill's Harm Principle ("just what counts as 'harming'?"). The sort of "harm" that is beyond this inquiry is vulnerability, the fact (rather than the harmful exercise) of domination, harm done because some people's needs remain unsatisfied, or because some deserve more than they own and harm done by the mere omission of bringing about a better state of affairs, and so is any injustice characterized along such lines. Adopting this minimal conception entails that we will be able to provide but a partial answer to the title question if a broader conception of justice at the global level can be made plausible.


 
Chicago Judges Project Check out the website of the Chicago Judges Project. Here is an excerpt from the front page:
    Are Judges Political? This is a much-disputed question, usually explored abstractly and in theoretical terms. Our goal is to produce a comprehensive study of judicial behavior on federal courts. We want to know how judges vote, in different cases, and whether their votes can be predicted by features of their appointment and their background. How, for example, do Republican and Democratic appointees differ in their votes in cases involving sex discrimination, affirmative action, environmental regulation, and campaign finance? Ideological voting. Ideological dampening. Ideological amplification. In short, are judicial votes predictable from their ideology? Can we predict votes based on the political party of the appointing president? Are judges affected by their colleagues? Do conservative judges vote more conservatively when sitting with other conservatives? Do liberal judges become less liberal when sitting with conservative judges? We are compiling a massive database to answer these questions. In a preliminary investigation, we found that ideology affects judicial voting in many cases. The Chicago Judges Project will substantially expand the empirical examination. In addition, the extended study will apply the findings to enduring questions in both jurisprudence and politics. It will explore how judicial behavior relates to the question of judicial neutrality, the nature of the rule of law, and the appropriate behavior of the Senate and the President in the confirmation process.
The lead investigators are Cass Sunstein and David Schkade. Kudos to Sunstein and Schkade!


 
Conference Announcement: The Rehnquist Court
    The Rehnquist Court Northwestern University School of Law April 23-24, 2004
      The Rehnquist Court is sixteen years old. In areas such as federalism, the free speech and religion clauses of the First Amendment, the Fourteenth Amendment, and substantive due process, it has staked out new ground. Many of its justices have embraced distinctive theories of administrative law and statutory interpretation as well. The first day of the conference will take stock of these developments with papers on salient legal doctrines of the Rehnquist Court. The second day of the conference will address cross-cutting themes in a roundtable of all participants. The Rehnquist Court has coincided with the rise of positive political theory in the legal academy. How does the new political science address the work of this Court? Other scholars have argued that the decisions of the Court can be understood only with reference to other branches of government. We will assess how the Court’s jurisprudence has reacted to the work of other branches. Finally, others have argued that the Rehnquist Court, like the Warren Court of the sixties, is moving toward a distinctive jurisprudence, such as reviving constitutional provisions that sustain the decentralized generation of social norms or restricting the scope of antidiscrimination principles. We will end by considering the degree to which the decisions of the Court reflect elements of a coherent jurisprudential approach.
    Friday, April 23
      10 am - 12 noon Administrative Law, Statutory Interpretation, and Civil Rights
        Presenters: Michael Herz (Cardozo), Nelson Lund (George Mason), John Manning (Columbia) Commentators: Robert Bennett (Northwestern), Michael Rappaport (San Diego), Mark Tushnet (Georgetown)
      12:00 - 1:30 pm Lunch 1:30 - 3:30 pm The Structural Constitution
        Presenters: Pamela Karlan (Stanford), Elizabeth Magill (Virginia), John McGinnis (Northwestern) and Ilya Somin (George Mason) Commentators: Steven Calabresi (Northwestern), Neal Devins (William and Mary), John Harrison (Virginia)
      4-6 pm The Bill of Rights
        Presenters: Eric Claeys (St. Louis), Kent Greenawalt (Columbia), Suzanna Sherry (Vanderbilt) Commentators: Stephen Presser (Northwestern), Stewart Sterk (Cardozo), Eugene Volokh (UCLA)
      6-7 pm Reception 7-9 pm Dinner for Participants
    Saturday, April 24
      9-12: A Roundtable on the Rehnquist Court 9-10 The Rehnquist Court and Political Science, Discussion Leader: Thomas Merrill (Columbia) 10-11 The Rehnquist Court and Other Branches, Discussion Leader: Neal Devins (William and Mary) 11-12 Jurisprudential Theories of the Rehnquist Court, Discussion Leader: John McGinnis (Northwestern)


Tuesday, January 20, 2004
 
Extreme Mental or Emotional Disturbance You will want to check out the discussion over at Punishment Theory, including posts by John Gardner and Antony Duff.


 
Miller on Inherited Responsibility at Oxford At Oxford's Jurisprudence discussion group, David Miller presents Inheriting Responsibilities. Here is a taste:
    One quite striking feature of the politics of the last half-century has been the escalation of demands for redress, issued by groups who see themselves as the victims of historic acts of injustice. Present-day governments and their citizens are being asked to bear responsibility for the actions and policies of earlier generations, and to take a variety of steps to correct the harm and injustice that they perpetrated. Not all such demands have been successful, but many have been, and the costs incurred have in some cases been considerable. The claims in question have been very diverse, both in terms of who is making them and in terms of the acts singled out as standing in need of redress. Let me remind you of some well-known examples:
      1) The payments that have been made by the German government to Jews as reparation for the Nazi holocaust, mainly in the form of transfers to Israel, and estimated to be in the order of 80 billion Deutschmarks.
      2) The demands made by members of the Australian Aboriginal community for compensation and for a national Day of Apology for the so-called ‘stolen generation’ of Aboriginal children taken from their families and brought up in white homes or orphanages.
      3) The compensation of $122 million awarded by the US Supreme Court to the Sioux Indians for the occupation by whites in the late 19th century of the gold-rich Black Hills area that had previously been reserved to the Sioux by treaty.
      4) Demands that Japan should pay compensation to ‘comfort women’ taken from other East Asian countries (especially Korea) and forced into prostitution by the Japanese military, giving rise to official apologies and the creation of an Asian Women’s Fund to offer compensation to the women involved.
      5) Demands that items of symbolic significance seized from their original owners should be returned to those owners or their descendants, for instance the demand that the Parthenon Marbles should be returned to Greece, or the demand by some aboriginal peoples that the bones of their ancestors now held in museums across the world should be sent back to them for reburial.
      6) The many and varied demands that have been made in the US as forms of redress for black slavery, from land settlements for blacks, to financial compensation to the descendants of slaves, to affirmative action policies, to formal apologies for slavery on the part of Congress or the President.
    My interest in this issue stems from a broader interest in the idea of national responsibility. When does it make sense to hold those collectivities we call nations responsible both for the benefits and harms they bring to themselves and for the benefits and harms that they inflict on others? I have tried in another paper to answer this question as it applies to nations considered as groups of contemporaries, and for the purpose of the present paper I am going to assume that the core idea of national responsibility is defensible. But clearly it is one thing to argue that we share in responsibility for the actions of the present generation – for political decisions taken in our name, for instance – and another to argue that we can inherit responsibilities from the past. How can we be liable for what our forefathers did when we had no opportunity either to contribute to or to prevent the actions and policies that created the injustice?
This should be good!


 
Melamed on Spinoza at Yale At Yale philosophy, Yitzhak Melamed presents Spinoza's Anti-Humanism.


 
Lookofsky on Contracts for the International Sale of Goods Joseph Lookofsky (University of Copenhagen - Faculty of Law) has posted In Dubio Pro Conventione? Some Thoughts About Opt-Outs, Computer Programs and Preemption Under the 1980 Vienna Sales Convention (CISG) (Duke Journal of Comparative & International Law, Vol. 13, No. 3, Summer 2003) on SSRN. Here is the abstract:
    The CISG is a shorthand expression for the 1980 United Nations Convention on Contracts for the International Sale of Goods. Also sometimes referred to as the Vienna Convention, the CISG is the first uniform sales law to win acceptance on a worldwide scale: more than sixty States have ratified the Convention, representing more than two-thirds of all world trade. Simply because the parties to this sale of goods (dresses) have their places of business in different CISG Contracting States, an American or French court or an arbitral tribunal asked to resolve the dispute in question will do so - not on the basis of UCC Article 2 or French domestic sales law, but - on the basis of the CISG. In a case like Illustration 1 the application of the Convention (and, conversely, the non-application of domestic sales law) is a very straightforward affair. In other cases, we may need to ask and answer questions. According to some Convention commentators, doubts regarding CISG application are often best resolved favor conventionis, i.e., in favor of the Convention and its application (and at the expense of domestic law). The main rationale for such a pro-CISG bias is that the Convention - now accepted by the world community as a suitable default regime for international sales - should be applied wherever sufficient reasons for its application exist, and where its language does not preclude such application. Of course, not all issues which relate to the Convention's Sphere of Application (CISG Part I) are amenable to resolution by the use of legal maxims, mechanical allocation of proof-burdens or other simple means. Indeed, according to the Understanding I shared and developed with Herbert Bernstein, at least some doubts regarding Convention application are best resolved the other way, i.e., by - or in conjunction with - the application of domestic rules of law. So, while it might sound catchy and convenient, the phrase in dubio pro conventione (which Herbert himself coined) does not represent a principle to be applied blindly, to answer all controversial questions arising under CISG Part I.


 
Danner on Jackson's Lament Richard A. Danner (Duke University School of Law) has posted Justice Jackson's Lament: Historical and Comparative Perspectives on the Availability of Legislative History (Duke Journal of Comparative & International Law, Vol. 13, No. 3, Summer 2003) on SSRN. Here is the abstract:
    Bob Berring has suggested that the forms in which legal information is published and distributed can be influential in the development of legal knowledge. This article tests the possibilities of that idea by examining the role of greater availability of legislative history information on the increased use of legislative history in the early twentieth century. The article explores the availability question in light of developments in the history of the printing and distribution of Congressional documents, while looking specifically at the impacts of late nineteenth century changes in the systems for publication and distribution of federal documents. Part II of the article introduces the primary approaches to statutory interpretation in United States courts, provides comparisons with other common law jurisdictions, and describes the publication history of Congressional committee reports and records of debates on the floor of Congress. Part III discusses uses of Congressional materials in nineteenth century courts, and how legislative history was viewed in contemporary treatises. Part IV explores possible explanations for the increased uses of legislative history by federal courts in the late nineteenth and early twentieth centuries. Part V examines the impacts of the Printing Act of 1895 and other changes in the distribution system for government publications on the greater availability of legislative history in the early twentieth century. Part VI discusses the continued applicability of concerns about availability in the twenty-first century information environment, twenty years after Justice Jackson's lament was deemed anachronistic in light of technological advances.


 
Allen and Lively on the Burden of Persuasion in Civil Cases Ronald J. Allen and Sarah Lively (Northwestern University Law School and Northwestern University - School of Law) have posted Burdens of Persuasion in Civil Cases: Algorithms v. Explanations (Law Review of Michigan State University-Detroit College of Law, Forthcoming) on SSRN. Here is the abstract:
    The conjunction paradox has fascinated generations of scholars, primarily because it brings into focus the apparent incompatibility of equally well accepted conventions. On the one hand, trials should be structured to reduce the total number, or optimize the allocation, of errors. On the other hand, burdens of persuasion are allocated to elements by the standard jury instruction rather than to a case as a whole. Because an error in finding to be true any element of the plaintiff's cause of action will result in an error if liability is found, errors on the overall case accumulate with errors on discrete issues. This, in turn, means that errors will neither be minimized nor optimized (except possibly randomly). Thus, the conventional view concerning the purpose of trial is inconsistent with the conventional view concerning the allocation of burdens of persuasion. Two recent efforts to resolve this conflict are examined in this article. Dean Saul Levmore has argued that the paradox is eliminated or reduced considerably because of either the implications of the Condorcet Jury Theorem or the implications of super majority voting rules. Professor Alex Stein has constructed a micro-economic explanation of negligence that is also offered as resolving the paradox. Neither succeeds, and both fail for analogous reasons. First, each makes a series of ad hoc adjustments to the supposedly formal arguments that are out of place in formal reasoning. The result is that neither argument is, in fact, formal; both arguments thus implicitly reject the very formalisms they are supposedly employing in their explanations. Second, both articles mismodel the system of litigation they are trying to explain in an effort to close the gap between their supposedly formal models and the reality of the legal system; and when necessary corrections are made to their respective models of litigation, neither formal argument maps onto the reality of trials, leaving the original problem untouched and unexplained. These two efforts are thus very much similar to the failed effort to give a Bayesian explanation to trials and juridical proof, which similarly failed due to the inability to align the formal requirements of subjective Bayesianism with the reality of modern trials. We also explore the reasons for this consistent misuse of formal arguments in the evidentiary context. Rationality requires, at a minimum, sensitivity to the intellectual tools brought to a task, of which algorithmic theoretical accounts are only one of many. Another, somewhat neglected in legal scholarship, is substantive explanations of legal questions that take into account the surrounding legal landscape. As we show, although the theoretical efforts to domesticate the conjunction paradox fail, a substantive explanation of it can be given that demonstrates the small likelihood of perverse consequences flowing from it. The article thus adds to the growing literature concerning the nature of legal theorizing by demonstrating yet another area where legal theorizing in one of its modern conventional manifestations (involving the search for the algorithmic argument that purportedly explains or justifies an area of law) has been ineffectual, whereas explanations that are informed by the substantive contours of the relevant legal field have considerable promise.
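For readers new to this literature, a quick numerical illustration of the paradox may help (the numbers and the independence assumption are mine, not the authors'): suppose a plaintiff must prove two independent elements, each by a preponderance of the evidence, and the factfinder assesses each at a probability of 0.6. Element by element, the plaintiff clears the bar; yet the probability that both elements are true is only 0.6 × 0.6 = 0.36, well below 0.5. The standard jury instruction thus permits liability even though the plaintiff's case, taken as a whole, is more likely false than true. That is the tension between error minimization and element-by-element burdens that Allen and Lively address.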


 
Conference Announcement: The Law of Democracy
    University of Pennsylvania Law Review 2003-04 Symposium: The Law of Democracy
      Friday, February 6, 2004
        12:00 - 1:15: Lunch with Keynote Address by FEC Chairman Bradley Smith
        1:30 - 3:15: Panel I - Campaign Finance
          Chair/Discussant: Heather Gerken (Harvard)
          Robert Bauer (Perkins Coie), Rick Hasen (Loyola), Spencer Overton (George Washington), Nathaniel Persily (University of Pennsylvania) & Kelli Lammie (Annenberg School)
        3:30 - 4:45: Panel II - New Issues in the Law of Democracy
          Chair/Discussant: Pamela Karlan (Stanford)
          Richard Briffault (Columbia), Elizabeth Garrett (USC), William Marshall (UNC)
        6:30 - 9:30: Dinner
      Saturday, February 7, 2004
        8:30 - 9:00: Breakfast
        9:00 - 10:30: Panel III - New Issues in Minority Legislative Representation
          Chair/Discussant: Samuel Issacharoff (Columbia)
          Pamela Karlan (Stanford), Ellen Katz (Michigan), Jonathan Nagler (NYU) and Michael Alvarez (Cal. Tech.)
        10:45 - 12:15: Panel IV - Redistricting: Case Law & Consequences
          Chair/Discussant: Richard Pildes (NYU)
          Steven Ansolabehere (MIT) and Jim Snyder (MIT), Guy-Uriel Charles (Minnesota), Daniel Ortiz (UVA)
        12:15 - 1:30: Lunch - Roundtable on the Texas and Pennsylvania Partisan Gerrymandering Cases
          Heather Gerken, Samuel Issacharoff, Pamela Karlan, Nathaniel Persily, Richard Pildes


Monday, January 19, 2004
 
Original Meaning and the Vicious Circle Today seems to be Originalism Day, with Matthew Yglesias entering the fray. Challenging original-meaning originalism, Yglesias has the following argument:
    Say you are Median Citizen, 1789 (MC1789), how should you understand the meaning of the constitution? By reference to MC1789's understanding of the terms? Well, that's not going to work -- you are MC1789. You're going to have to use some independent standard to assess the meaning. And so you figure one out, Standard X.
Yglesias's post goes on, and you will want to read the whole thing. The conjuring trick here is the assumption that "You're going to have to use some independent standard to assess the meaning," because there is no need for an "independent" standard. One can simply ask, "What is the ordinary meaning of this provision of the Constitution?" If one had been the "median citizen" (or more aptly, a competent speaker of the language, conversant with contemporary usage), then ordinarily one wouldn't have needed to go further than to ask, "What does this mean?" But in some cases, one might want to go further. For example, one might sensibly reason, "This seems ambiguous (or vague), but given the context (e.g. it is a constitutional provision), how would someone using this language have thought it would be understood?" This process simply doesn't require an independent standard X. A dependent, nonstandard one is sufficient for comprehension--as we all know from our own ability to understand public utterances--including, for example, Yglesias's post. This doesn't mean that such comprehension doesn't depend on lots of stuff--acquiring linguistic competence, vocabulary, and an appreciation for contemporary idiom is, after all, a complex and lengthy process. The point of original-meaning originalism is that meanings (and hence understandings) change over time, and historical evidence sometimes (but surely not always) allows us to put ourselves in the shoes of the audience for a given constitutional provision at the time it was enacted. In fact, if we can forget about the politics of originalism, the notion that we can sometimes appreciate a change in meaning is pretty mundane. Any fancy philosophical argument that attempts to show that we can't appreciate changes in meaning and therefore appreciate earlier meanings has got to have an assumption that is wrong or a move that is invalid.
For more, see the Legal Theory Lexicon post on originalism, Randy Barnett's reply to Dominic Murphy's question, and my comment on Murphy.


 
Weekend Wrap Up The Download of the Week was an important article by Rick Hasen on McConnell--thank goodness someone else is reading the whole opinion! Also on Saturday, the Legal Theory Bookworm recommended Restoring the Lost Constitution--originalism seems to be in the air! Sunday, the Legal Theory Calendar previewed this week's talks, workshops, and conferences. Also on Sunday, the Legal Theory Lexicon provided an introduction to--you guessed it--Originalism in constitutional theory.


 
Price on the Logic (So-Called) of Practical Inference at Oxford At Oxford's Moral Philosophy Seminar, Anthony Price (Birkbeck) presents On the so-called logic of practical inference. Here is a taste of this interesting paper:
    Anscombe gives a notorious example of doing a lot in order to achieve less: ‘The British … wanted to destroy some German soldiers on a Dutch island in the second world war, and chose to accomplish this by bombing the dykes and drowning everybody. (The Dutch were their allies.)’ In this case, a single action, bombing the dykes, was a way of drowning everybody on the island, and hence of drowning everybody on the island who was German. As she later adds, ‘What is in question here is something outside the logic that we are considering, namely whether there is “one action” which is a way of effecting (p & q) and therefore a way of effecting p.’ Some partly similar cases might well be described otherwise: an action may have the intended result that p, and the predicted side-effect that q. On some moral views, this difference is more than notional: according to the doctrine of double effect, one is not to do good through intentionally doing evil, but may cause evil incidentally (so long as it is not disproportionate) through doing good. If so, this had better not be a distinction that is manipulable by choosing how to think about an action; yet its application is complicated by the fact that an end that is genuinely embraced may yet be optional and occasional.


 
Grant on Ethics and Incentives at Chicago At the University of Chicago's Political Theory Workshop, Ruth Grant, Duke University presents Ethics and Incentives with discussant Jay Cost. Here is a taste:
    In the United States, governments at all levels, as well as private institutions, are increasingly turning to the use of incentives and disincentives to bring about desired policy outcomes. Generally speaking, quite apart from considerations of effectiveness, such an approach is considered ethically superior to coercive rules and regulations as a means of governance because it allows for voluntary action. Nonetheless, many particular instances of the use of incentives in public policy leave people with vaguely defined ethical qualms. There are many examples from a wide variety of policy areas: plea-bargaining, offering welfare benefits only to women who agree to take birth control pills, company payments to public schools willing to install soda machines or televisions, using grants of federal dollars to the states to influence state policies in areas constitutionally outside of the jurisdiction of the federal government. Should incentives such as these elicit ethical concerns? What causes the discomfort?


 
Murphy & Barnett on Originalism Yesterday's Legal Theory Lexicon entry was on Originalism. So I was especially interested this morning by a post on the Volokh Conspiracy. Cal Tech philosopher Dominic Murphy poses the following question for Randy Barnett:
    What I want to know is what you mean by "the public meaning of the text at the time of its adoption": because (1) if the public meaning of the text is only what the words REFERRED TO at the time, then when the constitution uses the term " the states", it is referring only to the original 13 states, and all states admitted to the union since then are presumably not bound by it. So you can't mean that. So do you mean that (2) many terms in the constitution are defined functionally, as "whatever fulfills the criteria for being a thing of this kind" - in that case all the states come within the meaning of "states." But now look: in this second version, we admit that the class of entities a constitutional term refers to can expand over time, even if the term keeps its original meaning. In that case, why can't all the terms work like that?
You will want to read Barnett's full response (same link), but I can't resist adding a few words. The first alternative offered by Murphy is a red herring. No one thinks that general laws are limited in meaning to their extensions at the time of adoption. Most laws (setting aside ex post facto laws and Bills of Attainder) use general language that obviously has prospective meaning. So what about Murphy's second alternative, "whatever fulfills the criteria for being a thing of this kind"? Originalists need not object to this formulation. An original-meaning originalist simply maintains that the criteria that define the type (the class of entities [e.g. actions, events, transactions]) covered by the constitutional provision are the criteria as they would have been understood by the public at the time the relevant constitutional provision was adopted. Murphy's next move ignores this possibility:
    [W]hen the constitution says "rights" it means "whatever fulfills the criteria for being a right". In that case, we can decide that something new (like privacy, or even universal health care) in fact meets the criteria for being a right, and expand the set of rights even while keeping the original meaning of the term (philosophers call this the difference between the sense and reference of a term).
The criteria for what counts as a "right" (or better, a "right retained by the people" for the purposes of the Ninth Amendment) may change over time. An original-meaning originalist maintains that when such changes in meaning occur, the relevant criteria are those that would have been understood by the public--the audience addressed by the Constitution. Murphy tries to avoid this possibility with the following argument:
    some terms in the constitution are functionally defined in such a way as to permit more objects of that type to be added to the set of things that the terms refer to. But other terms aren't: they just refer to what they originally referred to. I don't know how you defend that without either (a) appealing to the intentions of the founders, as the men in control of the definitions, or (b) providing some hideously complicated semantic theory.
But this argument has no traction against original meaning originalism. It commits the fallacy of the excluded middle--assuming that the only two alternatives are: (1) the original extension of the constitutional language or (2) the contemporary criteria for defining the type identified by the constitutional language. Read this very interesting exchange.


 
Welcome to the Blogosphere . . . to Legal Fiction, another anonymous law clerk blog. And to That's News to Me, a University of Chicago law student blog.
Update: "And by coincidence, Legal Fiction has a post on . . . You guessed it again! . . . originalism.


 
Walsh on the Foundations of Corporate Law Joseph T. Walsh (Supreme Court of Delaware) has posted The Fiduciary Foundation of Corporate Law (Journal of Corporation Law, Vol. 27, No. 3) on SSRN. Here is the abstract:
    The fiduciary relationship between directors and shareholders is based on the law of trusts. In early corporate law, and in common law, the fiduciary duty was owed only to the stockholders of a corporation. However, as the corporation evolved from being viewed solely as property to being viewed as a social entity, the fiduciary duty of directors has also evolved, and today it is often unclear to whom directors owe their fiduciary duty. This Article explores the history of directors' fiduciary duties, to whom directors owe this duty, and directors' duty of disclosure. The author argues that the recent Enron collapse may cause us to reexamine traditional notions of private corporations' public responsibility and to whom the duty of disclosure is owed.


 
Gana on Law, Literature, and Hermeneutics Nouri Gana's article Beyond the Pale: Toward an Exemplary Relationship Between the Judge and the Literary Critic is now available on Westlaw. Here is the abstract:
    Gadamer's pursuit in Truth and Method of an applicative literary hermeneutics modeled on legal hermeneutics earns him the status of a precursor to the emergence of what is known in North America as the "literature and law movement." Attentive to the debates and controversies surrounding this movement, this article seeks to explore an interpretive interzone in which the judge and the literary critic, if they apply themselves to a poetics of elasticity, might be of exemplary significance to each other. The notion of "exemplarity" does not, however, imply a mechanical appropriation of the practices of the one by the other, but a mutually nuanced and complicated approximation of the strengths of each by the other. In the light of this normative poetics of proximity and distance, Dworkin's model of the "chain novel" is assessed and supplemented by (an alternative) model grounded in Foucault's genealogy of authorship as expounded in his article "What Is an Author?"
And here is another excerpt:
    [I]ndeterminacy cannot be by any means différanced sine die; it is in the final analysis as contingent as the call for determinacy itself. Determinacy and indeterminacy codetermine each other; hardly does the one cease to inhabit the other. Furthermore, a contingent decision should not be mistaken for a paralytic indecision, and a nomadic search for a determinable interpretation should not be reduced to a hopeless quest under the shadow of indeterminacy. These long and nuanced searches, the attributes of a literary critic, must in the final analysis forearm the judge against a premature leap into decision making.
Hmm.


 
Conference Announcement: Mutual Fund Litigation and Regulation
    American Enterprise Institute
    Mutual Fund Litigation and Regulation: Is the Cure Worse than the Disease? Wednesday, January 28, 2004, 9:00 a.m.–noon Wohlstetter Conference Center, Twelfth Floor, AEI
      For most individual investors, mutual funds represent the best vehicle for wealth creation. However, recent revelations suggest that many investors have been exploited for the benefit of others. Special arrangements between some fund managers and large investors—allowing certain customers to engage in market timing and late trading—have prompted large-scale litigation and demands for increased regulation of the industry.
      This event will explore the theoretical foundations of this scandal, the empirical evidence of harm to investors, and the wisdom of regulatory intervention. In the first panel, finance scholars will present innovative research, which attempts to quantify the effects of late trading, market timing, and other practices such as soft-dollar brokerage arrangements and mutual fund fee dispersion. To provide a context for this academic research, the second panel will feature a discussion among individuals involved in mutual fund litigation.
    Schedule
      8:45 a.m.--Registration 9:00 a.m.--Panel I
        Moderator: KEVIN HASSETT, AEI Panelists:
          ERIC ZITZEWITZ, Stanford Business School; CHAD SYVERSON, University of Chicago; D. BRUCE JOHNSEN, George Mason University School of Law
      10:30 a.m.--Panel II
        Panelists:
          ROBERT NELSON, Lieff Cabraser Heimann & Bernstein LLP; HON. JED RAKOFF, Southern District of New York; PAUL STEVENS, Dechert LLP
    Online registration is available at http://www.aei.org/events.
    For additional information, please contact Kate Rick at 202.862.5848. For media inquiries, please contact Veronique Rodman at 202.862.4871 or vrodman@aei.org.


Sunday, January 18, 2004
 
Legal Theory Lexicon: Originalism
    Introduction There are many different theories of constitutional interpretation, but the most controversial and also perhaps the most influential is "originalism"--actually a family of constitutional theories. The idea that courts would look to evidence from the constitutional convention, the ratification debates, The Federalist Papers, and the historical practice shortly after ratification of the Constitution of 1789 (or to equivalent sources for amendments) is an old one. This post provides a very brief introduction to "originalism" that is aimed at law students (especially first-year law students) with an interest in legal theory.
    The Originalist Revival No one scholar or judge can claim sole credit for originalism as a movement in constitutional theory and practice, but in my opinion one of the crucial events in the originalist revival was the publication of Raoul Berger's book, Government by Judiciary, in 1977 by Harvard University Press. As you can guess from the title, Berger's book was very critical of the Warren Court (and its aftermath in the 70s). One of the key responses to Berger was the publication of The Misconceived Quest for the Original Understanding by Paul Brest in 1980. Brest's article initiated an intense theoretical debate over the merits of originalism that continues today. At various points in time, both sides have claimed the upper hand, but at the level of theory, the case for originalism has always been contested.
    Originalism is not an ivory tower theory. It has had a profound influence on the practice of constitutional interpretation and the political contest over the shape of the federal judiciary. President Reagan's nomination of Robert Bork (an avowed originalist) was one key moment--with his defeat by the Democrats seen as a political rejection of originalism. The current Supreme Court has at least three members who seem strongly influenced by originalist constitutional theory--Chief Justice William Rehnquist and Associate Justices Antonin Scalia and Clarence Thomas.
    The final chapter of the originalism debate in legal theory has yet to be written--and perhaps it never will be. But one last set of developments is particularly important. In the 70s and early 80s, originalism was strongly associated with conservative judicial politics and conservative legal scholars. But in the late 1980s and in the 1990s, this began to change. Two developments were key. First, Bruce Ackerman's work on constitutional history suggested the availability of "left originalism" that maintained the commitment to the constitutional will of "We the People" but argued that the constitution included a New Deal constitutional moment that legitimated the legacy of the Warren Court--We the People: Foundations, published in 1991. Second, Randy Barnett (along with Richard Epstein, the leading figure in libertarian legal theory) embraced originalism in an influential article entitled An Originalism for Nonoriginalists. Ackerman and Barnett represent two trends in originalist thinking: (1) the political orientation of originalism has broadened from conservatives to liberals and libertarians, and (2) the theoretical structure of originalism has morphed and diversified from the early emphasis on "the original intentions of the framers." After the publication of Paul Brest's Misconceived Quest one heard talk that originalism was dead as a serious intellectual movement. These days one is more likely to hear pronouncements that "we are all originalists, now."
    Original Intentions Early originalists emphasized something called the original intentions of the framers. Even in the early days, there were disputes about what this phrase meant. Of course, there were debates about whether the framers (a collective body) had any intentions at all. And there were questions about what counted as "intentions," e.g. expectations, plans, hopes, fears, and so forth. But the most important early debate concerned levels of generality. The intentions of the framers of a given constitutional provision can be formulated as abstract and general principles or as particular expectations with respect to various anticipated applications of the provision. Most theorists will assent to this point, which flows naturally from the ordinary usage and conceptual grammar of the concept of intention. The difficulty comes because the different formulations of intention can lead to different results in any given particular case. For example, the intention behind the equal protection clause might be formulated at a relatively high level of generality--leading to the conclusion that segregation is unconstitutional--or at a very particular level--in which case the fact that the Reconstruction Congress segregated the District of Columbia schools might be thought to support the "separate but equal" principle of Plessy v. Ferguson. Perhaps the most rigorous defender of the original intentions version of originalism has been Richard Kay in a series of very careful articles.
    Yet another challenge to original-intent originalism was posed by Jefferson Powell's famous article, The Original Understanding of Original Intent, published in 1985. Powell argued that the framers themselves did not embrace an original intention theory of constitutional interpretation. Of course, this does not settle the theoretical question. The framers, after all, could have been wrong on this point. But Powell's critique was very powerful for those who insisted that constitutional interpretation must always return to origins. A certain kind of original-intent theory was self-defeating if Powell's historical analysis was correct. Moreover, some of the reasons that Powell identified for the framers' resistance to originalism were quite powerful. Especially important was the idea that "secret intentions" or "hidden agendas" had no legitimate role to play in constitutional meaning. In the end, however, Powell's article actually had the effect of turning originalism in a new direction--from original intention to original meaning.
    Original Meaning The original-meaning version of originalism emphasizes the meaning that the Constitution (or its amendments) would have had to the relevant audience at the time of its adoption. How would the Constitution of 1789 have been understood by an ordinary adult citizen at the time it was adopted? Of course, the same sources that are relevant to original intent are relevant to original meaning. So, for example, the debates at the Constitutional Convention in Philadelphia may shed light on the question of how the Constitution produced by the Convention would have been understood by those who did not participate in the secret deliberations of the drafters. But for original-meaning originalists, other sources become of paramount importance. The ratification debates and Federalist Papers can be supplemented by evidence of ordinary usage and by the constructions placed on the Constitution by the political branches and the states in the early years after its adoption. The turn to original meaning made originalism a stronger theory and vitiated many of the powerful objections that had been made against original-intentions originalism.
    The concept of original-meaning originalism in its modern incarnation can be traced to a brief mention in Robert Bork's The Tempting of America, but Bork did not develop the idea extensively. Later, original-meaning originalism was picked up by Justice Scalia in his opening essay in A Matter of Interpretation. Although the distinction between original meaning and original intent can be found in a variety of early contemporary sources, including an article by Robert Clinton in 1987, the systematic development of original-meaning originalism is a relatively recent phenomenon. Original-meaning originalism receives its most comprehensive explication and defense in Randy E. Barnett's new book, Restoring the Lost Constitution: The Presumption of Liberty--a systematic development of the original meaning approach and critique of the original intention theory.
    Regime Theory Yet another important twist in originalist theory is emphasized by the work of Bruce Ackerman: a twist that I shall call "regime theory." The foundation for regime theory is the simple observation that the Constitution of the United States was adopted in several pieces--the Constitution of 1789 was supplemented by a variety of amendments. And of these amendments, the three reconstruction amendments (the 13th, 14th, and 15th) are of especial importance--because of the significant structural transformation they work in the relationship between the powers of the national government and the powers of the states. Interpreting the whole Constitution requires an understanding of the relationship between the provisions of 1789 and those adopted during Reconstruction. Some regime theorists argue that the interaction between these two constitutional regimes has the implication that provisions adopted in 1789 take on a new meaning and significance after the Reconstruction Amendments were adopted.
    Ackerman's own version of regime theory includes a fascinating and important challenge for originalists of all stripes. Ackerman emphasized the fact that both the Constitution of 1789 and the Reconstruction Amendments were adopted through processes that were extralegal under the legal standards that prevailed at the time. The Articles of Confederation required unanimous consent of all the states for constitutional amendments, and, for complicated reasons, it seems likely that the Reconstruction Amendments were of dubious legality if strictly judged by the requirements set forth for amendments in Article V. Ackerman's conclusion was that the Constitution derives its legitimacy, not from the legal formalities, but from "We the People," when mobilized in extraordinary periods of constitutional politics. Perhaps the most controversial conclusion that Ackerman reaches is that the New Deal involved another such constitutional moment, in which "We the People" authorized President Roosevelt to act as an extraordinary Tribune, empowered to alter the constitutional framework through a series of transformative appointments. If one accepts this view, then one might begin to ask questions about the "original meaning" of the New Deal--a kind of originalism that would surely not be embraced by the conservative proponents of originalism in the 70s and early 80s.
    Originalism and Precedent Whither originalism? Given the ups and downs of originalism over the past three decades, making long-term predictions seems perilous indeed. But I will make one prediction about the future of originalism. We are already beginning to see originalists coming to grips with the relationship between original meaning and precedent--both in the narrow sense of Supreme Court decisions and the broader sense of the settled practices of the political branches of government and the states. Already, originalists of various stripes are beginning to debate the role of precedent in an originalist constitutional jurisprudence. Given the conferences and papers that are already in the works, I think that I can confidently predict that the debate over originalism and stare decisis will be the next big thing in the roller-coaster ride of originalist constitutional theory.
    Bibliography This very selective bibliography includes some of the articles that have been influential in the ongoing debates over originalism.
    • Bruce Ackerman, We the People: Foundations (Harvard University Press 1991) & We the People: Transformations (Harvard University Press 1998).
    • Randy Barnett, An Originalism for Nonoriginalists, 45 Loyola Law Review 611 (1999) & Restoring the Lost Constitution (Princeton University Press 2004).
    • Raoul Berger, Government by Judiciary (Harvard University Press 1977).
    • Robert Bork, The Tempting of America (Vintage 1991).
    • Paul Brest, The Misconceived Quest for the Original Understanding, 60 Boston University Law Review 204 (1980).
    • Robert N. Clinton, Original Understanding, Legal Realism, and the Interpretation of the Constitution, 72 Iowa L. Rev. 1177 (1987).
    • Richard Kay, Adherence to the Original Intentions in Constitutional Adjudication: Three Objections and Responses, 82 Northwestern University Law Review 226 (1988).
    • Jefferson Powell, The Original Understanding of Original Intent, 98 Harv. L. Rev. 885 (1985).
    • Antonin Scalia, A Matter of Interpretation (Princeton University Press 1997).
    • Lawrence Solum, Originalism as Transformative Politics, 63 Tulane Law Review 1599 (1989).
    • Keith E. Whittington, Constitutional Interpretation: Textual Meaning, Original Intent, and Judicial Review (Kansas 1999).


 
Legal Theory Calendar
    Monday, January 19
    Tuesday, January 20
      At Oxford's Jurisprudence discussion group, David Miller presents Inheriting Responsibilities.
      At Yale philosophy, Yitzhak Melamed presents Spinoza's Anti-Humanism.
    Wednesday, January 21
      At Northwestern's Constitutional Theory Colloquium, Mary Anne Case, University of Chicago Law School, is presenting. Can anyone provide a title?
      At Villanova Law, Max Stearns (George Mason University School of Law) presents An Overview of Public Choice.
      At the University of Hertfordshire Centre for Normativity and Narrative, Neil Manson (Cambridge) presents Freud, Folk Psychology and Mental Order.
    Thursday, January 22
    Friday, January 23
      At the University of Texas, Oren Bracha (UT) presents The Transformation of American Copyright Law 1789-1909.
      At Oxford's Jowett Society, G.A. Cohen (Oxford) is speaking. Could someone provide the title of Cohen's paper?
      At Tulane's Center for Ethics and Public Affairs, Margaret Little (Georgetown University) presents Intimate Duties.
      At the University of Chicago, there is a conference on Constitution-Making in Israel and Palestine, sponsored by the Center for Comparative Constitutionalism.


Saturday, January 17, 2004
 
Legal Theory Bookworm This week, the Legal Theory Bookworm recommends Restoring the Lost Constitution: The Presumption of Liberty by Randy Barnett (Boston University). Here's what some leading constitutional theorists are saying about this book:
    "Step by step, Randy Barnett constructs an intriguing case for a moderately libertarian natural-rights Constitution that allows government action only when, and because, doing so protects the generously defined liberties of each person. Along the way he sheds new light on old controversies. This book should provoke the kind of controversy that advances our understanding of the Constitution." (Mark Tushnet, author of The New Constitutional Order)
    "Randy Barnett makes two important arguments, one involving the interpretation of the Constitution by reference to original understandings, the other endorsing a libertarian tilt in resolving disputes about governmental powers. Constitutional scholars and students will find much to admire in Barnett's carefully nuanced arguments, whether or not they ultimately agree with his conclusions. But the book should attract general readers as well. It is remarkably well written, totally devoid of jargon, and presented in a conversational and courteous tone. A truly excellent book!" (Sanford Levinson, author of Constitutional Faith)
    "Randy Barnett's Restoring the Lost Constitution is a surprising book. It is surprising that a scholar as learned and competent as Barnett should undertake the defense of libertarianism--a perspective on the state and on law unfashionable among the intelligentsia for a century. It is surprising that he should defend it so well and reasonably. It is even more surprising that such a strong and comprehensive case can be made that this libertarian perspective not only was that of those who wrote and (more importantly) ratified the Constitution, but also that it is the lawful and proper way to interpret the Constitution today. This is an important and challenging book for anyone interested in American law and government." (Charles Fried, Beneficial Professor of Law, Harvard Law School, author of Saying What the Law Is: The Constitution in the Supreme Court)
    "Provocative in the best sense, this is a very readable book whose argument is clear and accessible even to those unversed in the details of constitutional law or theory. It is particularly suggestive and effective in connecting two disparate strands of conservative political and constitutional theory: traditional conservative respect for original constitutional meaning and libertarian commitment to individual rights." (Keith Whittington, author of Constitutional Interpretation: Textual Meaning, Original Intent, and Judicial Review)
    "This is an important book, one that everybody in the field will (or should) take account of. Randy Barnett puts forward a comprehensive, thoughtful, clear, concise, challenging, and historically plausible version of American Constitutionalism. He pulls together a tremendous amount of material, including some of the best recent revisionist scholarship on constitutional history, and sets this in a framework of great integrity and unity of vision." (Michael Zuckert, University of Notre Dame, author of Launching Liberalism: On Lockean Political Philosophy)
And here is the blurb:
    The U.S. Constitution found in school textbooks and under glass in Washington is not the one enforced today by the Supreme Court. In Restoring the Lost Constitution, Randy Barnett argues that since the nation's founding, but especially since the 1930s, the courts have been cutting holes in the original Constitution and its amendments to eliminate the parts that protect liberty from the power of government. From the Commerce Clause, to the Necessary and Proper Clause, to the Ninth and Tenth Amendments, to the Privilege or Immunities Clause of the Fourteenth Amendment, the Supreme Court has rendered each of these provisions toothless. In the process, the written Constitution has been lost. Barnett establishes the original meaning of these lost clauses and offers a practical way to restore them to their central role in constraining government: adopting a "presumption of liberty" to give the benefit of the doubt to citizens when laws restrict their rightful exercises of liberty. He also provides a new, realistic and philosophically rigorous theory of constitutional legitimacy that justifies both interpreting the Constitution according to its original meaning and, where that meaning is vague or open-ended, construing it so as to better protect the rights retained by the people. As clearly argued as it is insightful and provocative, Restoring the Lost Constitution forcefully disputes the conventional wisdom, posing a powerful challenge to which others must now respond.
This is a truly superb book--the most forceful case ever made for an originalist libertarian reading of the United States Constitution.


 
Download of the Week This week the Download of the Week is Buckley is Dead, Long Live Buckley: The New Campaign Finance Incoherence of McConnell v. Federal Election Commission by Rick Hasen (Loyola Marymount University). Hasen's blog, Election Law Blog, is a superb source of incisive legal analysis of election law issues. His analysis of McConnell is must reading for anyone with a deep interest in the interplay between the freedom of speech and election law. Here is the abstract:
    The Supreme Court's recent decision in McConnell v. Federal Election Commission marks the culmination of an effort begun in 2000 to shift the Court's campaign finance jurisprudence in an important, though potentially dangerous, direction. Under pre-2000 jurisprudence, the Court (with one notable exception) upheld campaign finance laws only when the government demonstrated with a reasonable amount of evidence that the laws were at least closely drawn to prevent corruption or the appearance of corruption. The new jurisprudence, while purporting to apply the same anticorruption standard, does so with a new and extensive deference to legislative judgments on both the need for campaign finance regulation and the proper means to achieve it. There are signs that this shift is not merely the slipping of existing standards, however. Rather, it appears that the Court's jurisprudence is moving in the direction proposed by Justice Breyer, toward upholding campaign finance laws that promote a kind of political equality, what Justice Breyer termed a general participatory self-government objective. This apparent shift might be welcome news for those who believe that the Court had been too restrictive of efforts to limit the role of money in politics in order to promote greater political equality. But the means by which the Court has undertaken the shift have proven problematic. The Court has continued to entertain the fiction that it is adhering to the anticorruption rationale of Buckley v. Valeo. The result is jurisprudential incoherence and a lead opinion in the most important campaign finance case in a generation that appears to pay only cursory attention to the First Amendment interests that must be balanced in evaluating any campaign finance regime. Part I briefly surveys the pre-McConnell campaign finance jurisprudence, contrasting Buckley and the pre-2000 cases on the one hand, with the Court's three post-2000, pre-McConnell cases on the other. The recent trend, even before McConnell, is inconsistent with the Buckley rationale, at least as Buckley has been understood traditionally. The Court has replaced a general skepticism of campaign finance regulation with unprecedented deference to legislative determinations on both the need for regulation and the means to best achieve regulatory goals. Part II uses three examples from the McConnell joint majority opinion to demonstrate how the case fits into the new deferential post-2000 campaign finance jurisprudence. Part III points to signs apparent in the post-2000 jurisprudence and intensified in McConnell that the Court is moving toward endorsing the participatory self-government rationale for campaign finance regulation. Part IV argues that that if indeed the Court is moving toward endorsement of the participatory self-government rationale, it should do so more carefully. Thus far, the Court has given only lip service to the requirement that it balance competing interests and police campaign finance measures for legislative self-dealing. The part concludes by examining the danger that the Court eventually will eviscerate the distinction between contributions and expenditures without taking into account a key requirement of the participatory self-government rationale: the need for vibrant election-related participation by a wide group of non-governmental actors.
Download it while it's hot!


Friday, January 16, 2004
 
Recess Appointment for Pickering with Updates Charles Pickering has received a recess appointment from President Bush. Here is an excerpt from the Washington Post story:
    President Bush bypassed Congress and installed Charles Pickering on the federal appeals court Friday in an election-year slap at Democrats who had blocked the nomination for more than two years.
    Bush installed Pickering by a recess appointment, which avoids the confirmation process. Such appointments are valid until the next Congress takes office, in this case in January 2005.
This is, of course, a very significant development in the confirmation wars--a natural retaliatory move by the President for the Senate Democrats' use of the filibuster against several of his nominees and yet another move in the downward spiral of politicization that has characterized the process. I am in the Santa Fe depot in San Diego, about to board a train for Los Angeles, so this will be brief. Here are some of the issues that the Pickering recess appointment raises for the ongoing controversies about the judicial selection process:
    --The Democrats may object that the use of recess appointments for judicial nominees is unconstitutional. There are a variety of grounds upon which this argument can be made. Here are some of them:
    • It might be argued that all recess judicial appointments are unconstitutional. Judge Norris of the Ninth Circuit took this position in United States v. Woodley, but an en banc panel of the Ninth Circuit disagreed. For more on this, see this post. As a practical matter, I believe this constitutional objection has very little chance of gaining significant traction.
    • It might be argued that the recess appointments power applies only to vacancies that first arise during a recess. The contrary position is that the power is triggered whenever a vacancy exists during a recess. I am not sure, but I believe that the Pickering recess appointment would be valid only under the latter theory. Again, this theory has little chance of practical success, given the long historical practice, but from an originalist and textualist perspective, this argument may well be a winner.
    • It might be argued that the current adjournment of the Senate is not a recess--because it is not between two different sessions of the Congress. One might define recesses as the periods between Senate sessions. Interestingly, however, the Senate itself distinguishes between recesses and adjournments in a way that is inconsistent with this argument: "recess - A temporary interruption of the Senate's (or a committee's) business. Generally, the Senate recesses (rather than adjourns) at the end of each calendar day." However, as I now understand the facts, the Congress is now in recess between two sessions of the same Congress, and not in an intrasession adjournment. Hence, even under the more restrictive interpretation, the Pickering appointment has occurred during a recess.
    --Undoubtedly, the President's use of the recess appointments power will provoke a political reaction from the Democrats. What weapons do the Democrats have left in their arsenal? The most obvious response would be expanded use of the filibuster. The Democrats might say, "So far, we have only filibustered the most extreme nominees, but if the President is going to use the recess appointments power, then we will filibuster all of the President's nominees." If this is the Democratic response, it would be another turn of the downward spiral of politicization that has characterized the appointments process for many years--extending back into the Clinton, Bush I, Reagan, and Carter Presidencies.
    --A further question concerns the future intentions of the Bush administration. Was this a one-time use of the power? As I understand it, Miguel Estrada was also offered a recess appointment, but turned it down--presumably for career-related reasons. It would seem logical, then, to infer that Pickering chose to accept an offer that may well have been extended to the other nominees who are being filibustered. If so, then this use of the recess appointments power is quite limited.
    --Nonetheless, the Pickering recess appointment raises the specter of mass use of the recess appointments power--along the lines suggested by Randy Barnett in his piece in NRO, Benching Bork. This would be a much more significant development.
I must run to catch my train. More later!
Update: The NPR story is here. The Alliance for Justice opposes the move, but the Committee for Justice approves. President Bush's statement is here. The Federalist Society has a paper that addresses various issues here.
In the blogosphere, Roger Payne comments here. Brad DeLong argues that the Pickering move was calculated to appeal to racist elements in the Republican base here. Howard Bashman's column on the constitutionality of recess appointments is here. And yet more from Scrivener's Error. Also, two posts (here and here) from Legal Fiction.
The New York Times endorses Senator Schumer's statement that the recess appointment is a "finger in the eye for all those seeking fairness in the nomination process." People for the American Way calls the move "arrogant disregard for the constitutional checks and balances that ensure independent and fair courts."
And here is a Congressional Research Service report on recess appointments (PDF).


 
Norberto Bobbio Norberto Bobbio, perhaps the most famous Italian legal philosopher of the 20th century, has died at the age of 94. Here is an excerpt from the obituary in The Guardian:
    Norberto Bobbio, who has died aged 94, was Italy's leading legal and political philosopher, and one of the most authoritative figures in his country's politics. His status was marked by the Italian president's immediate departure for Turin to be among the first mourners, and an extensive discussion of his writing in the media.
    Bobbio's life and work were conditioned by the vicissitudes of his country's democracy in the 20th century. The experience of fascism, the ideological divisions of the cold war, and the transformation of Italian society during the 1960s and 1970s - which he described so evocatively in his Ideological Profile Of Italy In The Twentieth Century (1969) - prompted and enriched his passionate defence of the constitutional "rules of the game" against those who denied their relevance or would overturn them for reasons of pragmatic convenience.
And here is a link to an interview with Bobbio (pdf).


 
Hasen on McConnell Superstar election-law blogger Rick Hasen has posted Buckley is Dead, Long Live Buckley: The New Campaign Finance Incoherence of McConnell v. Federal Election Commission on SSRN. Here is the abstract:
    The Supreme Court's recent decision in McConnell v. Federal Election Commission marks the culmination of an effort begun in 2000 to shift the Court's campaign finance jurisprudence in an important, though potentially dangerous, direction. Under pre-2000 jurisprudence, the Court (with one notable exception) upheld campaign finance laws only when the government demonstrated with a reasonable amount of evidence that the laws were at least closely drawn to prevent corruption or the appearance of corruption. The new jurisprudence, while purporting to apply the same anticorruption standard, does so with a new and extensive deference to legislative judgments on both the need for campaign finance regulation and the proper means to achieve it. There are signs that this shift is not merely the slipping of existing standards, however. Rather, it appears that the Court's jurisprudence is moving in the direction proposed by Justice Breyer, toward upholding campaign finance laws that promote a kind of political equality, what Justice Breyer termed a general participatory self-government objective. This apparent shift might be welcome news for those who believe that the Court had been too restrictive of efforts to limit the role of money in politics in order to promote greater political equality. But the means by which the Court has undertaken the shift have proven problematic. The Court has continued to entertain the fiction that it is adhering to the anticorruption rationale of Buckley v. Valeo. The result is jurisprudential incoherence and a lead opinion in the most important campaign finance case in a generation that appears to pay only cursory attention to the First Amendment interests that must be balanced in evaluating any campaign finance regime. Part I briefly surveys the pre-McConnell campaign finance jurisprudence, contrasting Buckley and the pre-2000 cases on the one hand, with the Court's three post-2000, pre-McConnell cases on the other. The recent trend, even before McConnell, is inconsistent with the Buckley rationale, at least as Buckley has been understood traditionally. The Court has replaced a general skepticism of campaign finance regulation with unprecedented deference to legislative determinations on both the need for regulation and the means to best achieve regulatory goals. Part II uses three examples from the McConnell joint majority opinion to demonstrate how the case fits into the new deferential post-2000 campaign finance jurisprudence. Part III points to signs apparent in the post-2000 jurisprudence and intensified in McConnell that the Court is moving toward endorsing the participatory self-government rationale for campaign finance regulation. Part IV argues that that if indeed the Court is moving toward endorsement of the participatory self-government rationale, it should do so more carefully. Thus far, the Court has given only lip service to the requirement that it balance competing interests and police campaign finance measures for legislative self-dealing. The part concludes by examining the danger that the Court eventually will eviscerate the distinction between contributions and expenditures without taking into account a key requirement of the participatory self-government rationale: the need for vibrant election-related participation by a wide group of non-governmental actors.
Highly recommended! This is a must download for constitutional theorists & anyone interested in election law.


 
More on Originalism and Precedent Randy Barnett comments on the relationship between originalism and precedent over on the Conspiracy. Here is a taste:
    Originalism (of any version) now confronts a new intellectual challenge: How to handle precedent. If the meaning of the Constitution should remain the same until it is properly changed, as originalists contend, suppose that the Supreme Court gets this meaning wrong, as they have so many times in the past (in part because they largely ignore original meaning)? Is a future Court free to disregard precedent whenever it concludes that the prior case got the text wrong? Like my BU colleague Gary Lawson, I am inclined to say "yes" but Larry Solum has recently made some powerful arguments in favor of the role of precedent for formalism that cause me to reserve judgment until I have given the matter more serious thought.
For my take on this issue, see Getting to Formalism, posted last week.


 
Karlan on Felon Disenfranchisement Pamela S. Karlan (Stanford Law School) has uploaded Convictions and Doubts: Retribution, Representation, and the Debate Over Felon Disenfranchisement to SSRN. Here is the abstract:
    The tenor of the debate over felon disenfranchisement has taken a remarkable turn. After a generation of essentially unsuccessful litigation, two federal courts of appeals have recently reinstated challenges to such laws. A number of states have recently made it easier for ex-offenders to regain their voting rights. Recent public opinion surveys find overwhelming support for restoring the franchise to offenders who have otherwise completed their sentences. On the international front, the supreme courts of Canada and South Africa issued decisions requiring their governments to permit even incarcerated citizens to vote. This essay discusses some of the causes and consequences for the way in which we now approach the question of criminal disenfranchisement. Parts I and II suggest that the terms of the contemporary debate reflect an underlying change both in how we conceive the right to vote and in how we understand the fundamental nature of criminal disenfranchisement. Once voting is understood as a fundamental right, rather than as a state-created privilege, the essentially punitive nature of criminal disenfranchisement statutes becomes undeniable. And once the right to vote is cast in group terms, rather than in purely individual ones, criminal disenfranchisement statutes are seen not only to deny the vote to particular individuals but also to dilute the voting strength of identifiable communities and to affect election outcomes and legislative policy choices. The 2000 presidential election and the popular and scholarly discussion that followed the debacle in Florida powerfully demonstrated the outcome-determinative effects of criminal disenfranchisement laws even as the 2000 census drove home other representational consequences of the mass incarceration that triggers much of the disenfranchisement. Felon disenfranchisement cases offer an attractive vehicle for courts concerned with the staggering burdens the war on drugs and significantly disparate incarceration rates have imposed on the minority community. The legitimacy of criminal punishment depends on the legitimacy of the process that produces and enforces the criminal law. The legitimacy of that process in turn depends on the ability of citizens to participate equally in choosing the officials who enact and administer criminal punishment. Lifetime disenfranchisement of ex-offenders short circuits this process in a pernicious and self-reinforcing way. Part III suggests that if we conclude that criminal disenfranchisement statutes are essentially punitive, rather than regulatory - as I think we must - this opens an additional legal avenue for attacking such laws beyond the equal protection- and Voting Rights Act-based challenges that courts are now entertaining. Blanket disenfranchisement statutes also raise serious questions under the Eighth Amendment, given the Supreme Court's recent decisions in Atkins v. Virginia and Ewing v. California.
This is an important topic!


 
Pendo on the Difference Principle and Disability Elizabeth A. Pendo (St. Thomas University, Miami, FL - School of Law) has posted Substantially Limited Justice?: The Possibilities and Limits of a New Rawlsian Analysis of Disability-Based Discrimination (St. Johns Law Review, Vol. 77, p. 225, 2003). Here is the abstract:
    In its recent terms, the Supreme Court has increasingly turned its attention toward the Americans with Disabilities Act, and specifically the questions of who should be protected under the ADA, and what such protection requires. In the wake of the Court's decisions, workers have found it increasingly difficult to assert and protect their right to be free of disability-based discrimination in the workplace. Given the widespread influence of John Rawls in contemporary discussions of social, political and economic justice, his recent and final formulation of his theory of distributive justice presents a significant and promising philosophical foundation for evaluation of Title I of the ADA and the cases interpreting it. In particular, the two parts of Rawls's second principle of justice - the Principle of Fair Opportunity and the Difference Principle - reflect and reinforce the ADA's prohibition of both "irrational" and "rational" disability-based discrimination. The Principle of Fair Opportunity shares strong links to the ADA's protection of people with disabilities where the disability is not relevant to job performance, including protection of workers regarded as disabled or workers with a history of disability. The Difference Principle raises the issue of people with disability as "least advantaged" in the Rawlsian sense, and shares strong links to the ADA's protections where the disability does effect job performance, including actually disabled workers and the reasonable accommodation requirement. Using Rawls's methodology to evaluate recent ADA jurisprudence in light of the structure and content of the ADA indicates that many of these cases are not correctly decided, and suggests a better approach informed by values of distributive justice as well as the language and stated purposes of the ADA. Although Rawls does not take us as far as we want to go, his theory has significant value for understanding and advancing the interests of workers with disabilities, and perhaps those without disabilities as well.
The effort to apply the difference principle directly to disability issues is, I think, doomed to failure. First, Rawls explicitly excludes this problem from the scope of his theory. Second, the interpretation of the difference principle as applied to the disabled poses terribly difficult problems: (A) the challenges faced by the disabled are not located in the basic structure--the locus of the difference principle; (B) this leads naturally to the observation that disability policy is properly the subject of legislation and not the kind of issue that can be settled in advance by the design of the most fundamental institutions of society; (C) the difference principle focuses on wealth and income, but the needs of the disabled are best conceptualized in terms of capacities for valuable functioning; (D) frequently the needs of the disabled would not be satisfied by a society which is in compliance with the difference principle--because the disabled may need more resources than the worst-off group (in terms of wealth and income); (E) if the difference principle were applied directly to capacities for functioning, the result would be to give the most disabled far too much--because of diminishing returns, some disabled persons could consume enormous resources and still be in the worst-off group with respect to their actual capacities for valuable functioning. Despite my objections, this article takes on a very important and interesting topic!


 
Srinivas on Patents and Access to Drugs under the Doha Declaration K. Ravi Srinivas (Indiana University Bloomington - School of Law) has posted Interpreting Para 6 Deal on Patents and Access to Drugs (Economic and Political Weekly, Vol. 38, No. 38, September 2003) on SSRN. Here is the abstract:
    Paragraph 6 of the Doha Declaration recognising the need to ensure adequate and affordable supplies of needed drugs in those countries which did not have manufacturing capacities while protecting the rights of patent holders had directed the TRIPS Council to find an expeditious solution. Such a solution came as a desperate agreement forged on the eve of the Cancun meeting. What is the content of the agreement and how does it impact on developing countries?


 
Sitkoff on Politics & Corporations Robert H. Sitkoff (Northwestern University School of Law) has posted Politics and the Business Corporation (Regulation, Vol. 26, pp. 30-36, Winter 2003-04) on SSRN. Here is the abstract:
    This essay explores the policy bases for, and the political economy of, the law's long-standing regulation of corporate political speech. The essay has three parts. First, it contends that the conventional justifications for regulating corporate interventions in politics - that corporate donations unnaturally skew the political discourse (bad politics) and that corporate political donations harm shareholders (agency costs) - assume irrational investors and substantial capital market inefficiency. Drawing on public choice theory, the essay also explores the aim of retarding rent-seeking as an alternative justification for regulating corporate interventions in politics. Second, the essay reexamines the history of the regulation of corporate political speech and suggests a political economy analysis whereby corporations favored limitations on corporate donations in order to obtain protection from rent extraction by politicians. Finally, the essay explores the implications of this analysis for the modern regulation of corporate political donations.


 
Conference Announcement: What U.S. Lawyers Can Learn from International Law: Concepts of Gender Equality Across Legal Cultures
    WHAT U.S. LAWYERS CAN LEARN FROM INTERNATIONAL LAW: CONCEPTS OF GENDER EQUALITY ACROSS LEGAL CULTURES
    An Interdisciplinary Conference on the Role of International Gender Norms in Shaping Equality Jurisprudence in the United States
    Friday, February 20, 2004, Thomas Jefferson School of Law, San Diego, California
    SPONSOR: RUTH BADER GINSBURG LECTURE SERIES AND THE WOMEN AND THE LAW PROJECT AT THOMAS JEFFERSON SCHOOL OF LAW
    Professor Martha Albertson Fineman, Robert W. Woodruff Professor of Law at Emory University, will deliver the Second Annual Ruth Bader Ginsburg Lecture, a lecture series created by Justice Ginsburg after her visit to Thomas Jefferson last year. Professor Fineman held the first endowed chair in the nation in feminist jurisprudence from 1999-2003 at Cornell University, was the Maurice T. Moore Professor of Law at Columbia University (1991-1999), and the Pritzker Distinguished Visiting Professor of Law at Northwestern University. She is one of the nation's most distinguished and influential scholars. Professor Fineman will speak about Comparative Concepts of Equality: The Use of International and Human Rights, Ideas, Norms and Concepts in U.S. Jurisprudence. Professor Fineman's lecture will be followed by commentary from an interdisciplinary panel of scholars.
      - Gerald Doppelt, Professor of Philosophy at UCSD, will speak about Liberalism and Multiculturalism - An Uneasy Alliance.
      - Abigail Saguy, Assistant Professor of Sociology at UCLA, will compare sexual harassment law in the United States and France. Her talk will be based on her recently published book, "What is Sexual Harassment? From Capitol Hill to the Sorbonne."
      - Huma Ahmed Ghosh, Assistant Professor of Women's Studies at SDSU, teaches Anthropology, Women Studies, and Asian Studies courses pertaining to gender relations in Asia and international development. Professor Ghosh will speak about Afghan Women's Rights: Trials and Tribulations.
      - Marjorie Cohn, Professor of Law at TJSL, will discuss Resisting Equality: Why the U.S. Refuses to Ratify the Women's Convention. Professor Cohn has published numerous articles in the academic and popular press about human rights, U.S. foreign policy, criminal justice, and international law.
      - Linda Keller, Assistant Professor of Law at TJSL, also taught at the University of Miami School of Law, where she served as Fellow of the Center for the Study of Human Rights. Professor Keller will speak about The Convention on the Elimination of Discrimination Against Women: Evolution and (Non)Implementation Worldwide.
    REGISTRATION/FURTHER INFORMATION: For registration materials, contact Cindy Marciel, cmarciel@tjsl.edu, tel: (619) 297-9700 ext. 1410. For further information, contact Associate Dean Julie Greenberg, julieg@tjsl.edu, or Professor Marybeth Herald, marybeth@tjsl.edu.


Thursday, January 15, 2004
 
More on Unpublished Opinions I commented recently on Stephen Barnett's (U.C. Berkeley) piece No-Citation Rules Under Siege: A Battlefield Report and Analysis. For more on this issue, you should check out Nonpublication.com! And here is the text of proposed FRAP 32.1:
    Rule 32.1. Citation of Judicial Dispositions
      (a) Citation Permitted. No prohibition or restriction may be imposed upon the citation of judicial opinions, orders, judgments, or other written dispositions that have been designated as “unpublished,” “not for publication,” “non-precedential,” “not precedent,” or the like, unless that prohibition or restriction is generally imposed upon the citation of all judicial opinions, orders, judgments, or other written dispositions.
      (b) Copies Required. A party who cites a judicial opinion, order, judgment, or other written disposition that is not available in a publicly accessible electronic database must file and serve a copy of that opinion, order, judgment, or other written disposition with the brief or other paper in which it is cited.
And you can read the report of the advisory committee here. This sounds quite sensible to me!


 
Welcome to the Blogosphere . . . to The Fladen Experience, a group blog from Stanford Law School students, Elliot Fladen, Phoebe Kozinski, Ty Clevenger, Tyler Doyle, Nathan Cemenska, and Ying Ma.


 
Initial Allocations of Property Rights Michigan law student Heidi Bond blogs, "our property professor told us to come up with some alternatives to a First-In-Time scheme for distribution of new resources," and one of the alternatives a classmate came up with was "Bribery--the person willing to pay the largest bribe to the government official who registers the resource gets it." Setting the wealth-transfer issues aside, this is the winner! In essence, this is a suggestion for an auction--the scheme that gets the resource to its highest and best use with the lowest transaction costs. Bravo for the smart students at Michigan!
Update: Heidi responds! It turns out that I misread Heidi's post & it was she, not a classmate, who came up with "bribery" as the allocation method. And of course, she doesn't support bribery--neither do I, because it does have wealth-transfer effects. As Heidi points out, there are any number of reasons why an auction (or bribery) might not actually produce the highest and best use, including situations where collective uses do not result in bids because of transaction costs. Even more kudos to Ms. Bond. And my apologies if my post was misleading.
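For readers who want the auction intuition made concrete, here is a tiny Python sketch. The post does not specify an auction format, so this is just one illustration of my own: a sealed-bid second-price (Vickrey) auction, the textbook mechanism in which truthful bidding is a dominant strategy, so the resource ends up with the bidder who values it most. All bidder names and valuations below are hypothetical.
    from typing import Dict, Tuple

    def vickrey_auction(bids: Dict[str, float]) -> Tuple[str, float]:
        """Award the resource to the highest bidder at the second-highest bid."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else 0.0
        return winner, price

    # Hypothetical valuations for a newly discovered resource.
    bids = {"rancher": 120.0, "developer": 200.0, "conservancy": 150.0}
    print(vickrey_auction(bids))  # -> ('developer', 150.0)
As Heidi's caveat suggests, the sketch assumes bidders can actually show up and bid; where transaction costs keep collective users out of the auction, the highest bid need not track the highest-valued use.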


 
History of Legal Thought Courtesy of Will Baude, this exchange between a student and Richard Posner in the History of Legal Thought class at Chicago:
    Student: There's also a sense in which Socrates is a real . . . (searching) . . .
    Posner: Pain in the ass, right?


 
Rappaport on Precedent I strongly recommend Michael Rappaport's post on precedent and originalism over at The Right Coast. Here is a taste:
    The most basic question involving precedent is “what type of law are the precedent rules?” In my view, precedent rules are mainly common law rules that are discovered by the courts but can be overridden by Congress through its power to pass necessary and proper laws for carrying into execution the judicial power. There is, however, a minimal degree of precedent that derives from the vesting of judicial power in the federal courts. Given the widespread acceptance of precedent in the Anglo-American world of the late 18th and early 19th century, including by such notables as Blackstone, Madison, Jefferson, Marshall, Hamilton, Adams, Jefferson, Kent, Story, and Patterson, it is fair to conclude that the Framers’ generation would have deemed a court that completely ignored precedent not to be exercising the “judicial power.” It is important to stress, however, that the Constitution only requires an extremely weak form of precedent. While some people at the time of the Constitution embraced a strong view of precedent, many supported a weak view that conferred significant weight not on a single decision, but only on a series of decisions. Thus, there was widespread acceptance only of a weak form precedent and only that can be legitimately understood to be part of the judicial power. Although the claim that the Constitution incorporates a weak precedent rule is significant as a matter of theory, its practical effect is limited. The main practical import of the constitutional rule is to forbid the no-precedent approach and to prohibit some of the circuit court rules that forbid courts from treating unpublished opinions as precedents.
Rappaport's post grows out of a panel titled Transitions to Originalism at the Faculty Division of the Federalist Society. I posted my remarks under the title Getting to Formalism last week. For more commentary, see Stuart Buck here, with a reply by C.E. Petit here. More from Yglesias, here, Pejmanesque, here, Strange Doctrines, here, Randy Barnett here, Glenn Reynolds, here, and cka3n, here.


 
Klerman on Judicial Independence at UCLA At UCLA's legal history series, Dan Klerman (USC) presents The Value of Judicial Independence: Evidence from 18th-Century England. Here is the abstract:
    This paper assesses the impact of judicial independence on equity markets. North and Weingast (1989) argue that judicial independence and other institutional changes inaugurated by the Glorious Revolution of 1688-89 allowed the English government credibly to commit to repay sovereign debt and more generally to protect contractual and property rights. Although they provide some supporting empirical evidence, they do not investigate the effect of judicial independence separately from that of other institutional innovations. This paper is the first to attempt to do so. We look at share price movements at critical points in the passage of the 1701 Act of Settlement and other events which gave judges greater security of tenure and higher salaries. Our results suggest that giving judges tenure during good behavior had a large and statistically significant positive impact on share prices, while salary increases and other improvements to judicial independence had impacts which were consistently positive, but not individually statistically significant.
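For readers unfamiliar with the methodology, the basic event-study move is simple: compare returns on the relevant securities around the event dates with a benchmark and test whether the difference (the "abnormal return") is distinguishable from zero. The Python sketch below is a generic, toy illustration of that idea--simulated data and market-adjusted returns only, not Klerman's actual sample, benchmark, or specification.
    import numpy as np
    from scipy import stats

    def abnormal_returns(asset_returns, market_returns, event_indices):
        """Market-adjusted abnormal returns on the specified event dates."""
        ar = np.asarray(asset_returns) - np.asarray(market_returns)
        return ar[event_indices]

    rng = np.random.default_rng(0)
    market = rng.normal(0.0, 0.01, 500)             # hypothetical daily market returns
    shares = market + rng.normal(0.001, 0.01, 500)  # hypothetical share returns
    events = [100, 250, 400]                        # hypothetical event dates
    ar = abnormal_returns(shares, market, events)
    t_stat, p_value = stats.ttest_1samp(ar, 0.0)    # is the mean abnormal return zero?
    print(ar, t_stat, p_value)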


 
Boardman on Risk at George Mason At George Mason, Michelle Boardman, GMU School of Law, presents The Life and Death of Risk: From Ignorance to Certainty.


 
Cuellar on Public Involvement in the Administrative State Mariano-Florentino Cuellar (Stanford Law School) has posted Rethinking Public Engagement in the Administrative State on SSRN. Here is the abstract:
    This Article presents an empirical, doctrinal, and theoretical critique of public engagement in the modern administrative state. The legitimacy of the administrative state depends on the claim that it provides opportunities for public engagement as well as a mechanism for expert scientific decisionmaking. A typical rulemaking proceeding lets experts make technical judgments about terrorism, transportation, or telecommunications subject to court review guarding against arbitrariness. The whole process is then enmeshed in a system that is supposed to provide engagement and therefore democratic accountability -- through presidential appointments and control, congressional oversight, and the public notice-and-comment process. This existing approach is legitimated by "administrative pluralism," a way of thinking that emphasizes the value of interest-group competition in shaping regulatory policy. While administrative pluralism helps legitimate regulatory policy in the eyes of jurists, scholars, and the public, it also suppresses implicit questions about how much expert judgment is required in regulatory decisions, and whether the extent of participatory democracy and responsiveness is sufficient. The problems are not abstract. They are easily demonstrated in the course of a specific regulatory rulemaking proceeding, involving Section 314 of the USA Patriot Act (governing law enforcement's access to financial information). The task of balancing privacy concerns and law enforcement objectives hardly seems like the exclusive province of experts. Individuals and interest groups did have a chance to submit comments in the rulemaking proceeding, but virtually all the comments taken seriously by the regulatory agency were sophisticated statements made by financial institutions and their lawyers. While over 70% of comments came from individuals concerned about privacy, the agency did not even address these in its final rule. Despite the administrative pluralism model's tenacious hold, at least two alternatives exist to involve the public in rulemaking proceedings such as those governing Section 314, both of which involve constituting a small group of people whose discussions can inform the regulatory process. Participants can be either selected by lot from the entire population (a "majoritarian deliberation" approach), or chosen by the agency from among constituencies (such as outside experts) who may be especially impacted by the regulation but are essentially unrepresented (a "corrective" approach). These approaches can generate valuable information about what informed citizens think of regulatory proposals. The technical challenge of implementing the alternatives is far from insurmountable, though difficult questions arise about selecting deliberation groups, framing the issue, and giving legal effect to the public's participation. Instead, two larger challenges remain. First is the challenge of choosing among different concepts of "administrative democracy" to combine expertise and participation. Second is the challenge of overcoming a political economy that strongly favors the status quo.


 
Hanewicz on the Business Judgment Rule Wayne Hanewicz (University of Florida - Fredric G. Levin College of Law) has uploaded When Silence is Golden: Why the Business Judgment Rule Should Apply to No-Shops in Stock-for-Stock Merger Agreements (Journal of Corporation Law, Vol. 28, No. 2) to SSRN. Here is the abstract:
    This Article argues that the business judgment rule, and not Unocal enhanced scrutiny, ought to apply to most no-shop provisions in stock-for-stock merger agreements. As a doctrinal matter, the Article argues that such no-shops are functionally different from defensive measures in several key aspects. Unlike defensive measures, no-shops are enacted primarily for the buyer's benefit and not to protect the target; no-shops are not enacted unilaterally by the target but instead are bargained for with the buyer; and no-shops do not prevent a target from being acquired. This Article also examines the costs and benefits of judicial intervention into this area. It argues that the need for judicial intervention is reduced because the potential for board conflicts is low relative to the potential for such conflicts when classic defensive measures are adopted; because no-shops, properly considered, do not cause boards to make uninformed decisions; and because shareholders are able to protect themselves from the harm that might be caused by agreeing to an overly restrictive no-shop. In addition, this Article argues that the costs of judicial intervention would be relatively high given the nature of the inquiry a reviewing court would have to undertake, the comparative risk of error of engaging in such a review, and the scarcity of judicial resources.


 
Baker and Krawiec on Incomplete Statutes & the Nondelegation Doctrine Scott Baker and Kimberly D. Krawiec (University of North Carolina at Chapel Hill - School of Law and University of North Carolina School of Law) have uploaded The Penalty Default Canon (George Washington Law Review, Forthcoming) to SSRN. Here is the abstract:
    Lawmakers often have an incentive to avoid making important policy choices, shifting responsibility for the outcomes of those choices onto other governmental branches. Statutory incompleteness (that is, a statute containing a gap or ambiguity) provides a mechanism for accomplishing this transfer of responsibility. Drawing on the incomplete contracts literature, we argue that the reasons for statutory incompleteness should form an important consideration for courts faced with interpretive disputes regarding an incomplete statutory provision. Specifically, if lawmakers attempt to employ statutory incompleteness as a means to shift responsibility for difficult policy choices onto courts or agencies, courts should penalize lawmakers by holding that the provision in question is an unconstitutional delegation of legislative authority. In contrast, when statutory incompleteness is inadvertent or attributable to a legislative desire to enhance public welfare - such as, for example, an attempt to reduce the transaction costs of lawmaking or harness the special expertise of courts or agencies - a penalty would be futile or overly costly and should not apply. This "Penalty Default Canon" sheds new light on the Chevron and non-delegation doctrines, as well as many theories of statutory interpretation. Indeed, we demonstrate that these theories and doctrines are flawed because they assume a single underlying cause of statutory incompleteness. The Penalty Default Canon, in contrast, is more nuanced, mimicking the approach taken by contract scholars and courts in the setting of contractual default rules. The article develops a three-part test for discerning the underlying source of statutory incompleteness through a careful examination of legislative history and interest group dynamics. We then apply this test to two statutory provisions that we argue Congress left intentionally incomplete: the "strong inference" provision of the Private Securities Litigation Reform Act of 1995 and Section 6 of the Clayton Act.
This sounds very interesting!


 
Ljungqvist on Conflicts of Interest & IPOs Alexander Ljungqvist (New York University) has posted Conflicts of Interest and Efficient Contracting in IPOs on SSRN. Here is the abstract:
    We study the role of underwriter compensation in mitigating conflicts of interest between companies going public and their investment bankers. Making the bank's compensation more sensitive to the issuer's valuation should reduce agency conflicts and thus underpricing (Baron (1982); Biais, Bossaerts, and Rochet (2002)). Consistent with this prediction, we show that contracting on higher commissions in a large sample of UK IPOs completed between 1991-2002 leads to significantly lower initial returns, after controlling for other influences on underpricing and a variety of endogeneity concerns. These results indicate that issuing firms' contractual choices affect the pricing behaviour of their IPO underwriters. Moreover, we cannot reliably reject the hypothesis that the intensity of incentives is optimal, and so that contracts are efficient.
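The core empirical claim--that higher commissions are associated with lower initial returns once other influences are controlled for--is, at bottom, a regression result. The following is a toy Python sketch of that kind of regression on simulated data; it is my own illustration, and the variable names, numbers, and single control are hypothetical, not a reproduction of the paper's UK sample, controls, or endogeneity corrections.
    import numpy as np

    def ols(y, X):
        """Ordinary least squares estimates for y = X @ beta + error."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    rng = np.random.default_rng(1)
    n = 1_000
    commission = rng.uniform(0.02, 0.07, n)   # hypothetical gross spreads
    log_size = rng.normal(0.0, 1.0, n)        # hypothetical control variable
    # Simulated "true" relation: higher commissions -> lower underpricing.
    initial_return = 0.30 - 3.0 * commission + 0.02 * log_size + rng.normal(0.0, 0.05, n)
    X = np.column_stack([np.ones(n), commission, log_size])
    print(ols(initial_return, X))  # estimates near [0.30, -3.0, 0.02]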


 
Conference Announcement: Securing Privacy in the Digital Age
    A Stanford Law School Symposium: Securing Privacy in the Internet Age
      What legal regimes or market initiatives would best prevent the unauthorized disclosure of private information while also promoting business innovation?
    March 13-14 2004 Stanford Law School http://cyberlaw.stanford.edu/privacysymposium/
      The Law, Science and Technology Program (LST) and the Center for Internet and Society (CIS) at Stanford Law School will host a symposium next spring where more than twenty-five authors from across the globe will present papers addressing the ways in which application of various legal doctrines could induce software vendors, hardware companies and system administrators to adopt security-enhancing practices, report unauthorized disclosures of private information, and properly value and remedy harm flowing from privacy breaches, while promoting vigorous competition and innovation.
      This Symposium is appropriate for anyone interested in developing a legal regime that better promotes computer security than our current one. The authors represent a wide variety of viewpoints: academics, policy makers, economists, advocates, and legal and corporate professionals, and we anticipate the audience will reflect this diversity as well. The authors are listed on the symposium website. Papers will be published in a scholarly volume that will be available in late 2004.
      The Symposium Editors are: Margaret Jane Radin, Wm. Benjamin Scott and Luna M. Scott Professor of Law, Director, Stanford Program in Law, Science and Technology, Anupam Chander, Professor, UC Davis School of Law, Visiting Professor Stanford Law School, Spring 2004, and Lauren Gelman, Assistant Director, Center for Internet and Society, Stanford Law School.
    Registration is FREE if you register before March 1, 2004. All attendees MUST register on the website. No registrations will be accepted after March 11, 2004. After March 1, 2004, the registration fee is $250, which must be paid by check or by cash at the registration desk on the day of the Symposium. Checks should be written to "Stanford Law School."


Wednesday, January 14, 2004
 
Eternal Life and Risk, with an Update on Risk Aversion Among Ancient Vampires Tyler Cowen poses the following problem on the Conspiracy:
    There is an arbitrariness in defining the relevant class of risky events. In my lifetime as a driver, I stand some (fairly low) chance of killing an innocent pedestrian. Few people would argue that I should be prohibited from driving. Assume, however, that science prolongs (fit) human life forever, at least unless you are struck down by a car. My chance of killing an innocent pedestrian then would approach certainty, given that I plan to continue driving throughout an eternal life. In fact I could be expected to kill very many pedestrians. Should I then be prohibited from driving? When we make a prohibition decision, should we measure the risk of a single act of driving, or the risk of driving throughout a lifetime? Measuring the bundled risk appears to imply absurd consequences, such as banning driving for people with sufficiently long lives.
Just off the top of my head, this problem begs for a perspective shift. Given eternally long lives, the real issue concerns taking risks (e.g. walking as a pedestrian, assuming Cowen's hypothetical facts). So the question becomes, "Would it be rational to engage in regular low-risk behaviors (like daily walking), given that over the course of an eternal life, death would be the almost certain consequence?" One might imagine that in the beginning, such behaviors would continue, but that over time one would begin to realize that the odds were catching up with friends, family, and co-workers. Would the loss of eternal life really be a greater cost than the loss of the current human span of several decades? If the answer to this question were yes, then perhaps most humans would begin to avoid risk. A very cautious approach to life might add tens of thousands of years to one's anticipated life span. I have an image of fine restaurants serving only minced food to avoid the small (but statistically significant) risk of choking. No roller-skating, no skiing, no contact sports, no flying on airplanes, no boating, no swimming. Would the eternal life lived to minimize risk be a recognizably human life?
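For those who want to see the arithmetic behind Cowen's "bundled risk" point, here is a minimal Python sketch with a purely hypothetical per-year risk figure (neither Cowen's post nor mine supplies a real number): if each year carries an independent probability p of a fatal accident, the chance of at least one such accident over n years is 1 - (1 - p)^n, which climbs toward certainty as n grows, while the expected wait until the first accident is roughly 1/p years--which is the sense in which halving one's everyday risks could add tens of thousands of years to an immortal's expected span.
    def cumulative_risk(p_per_year: float, years: int) -> float:
        """Probability of at least one fatal accident over `years` independent years."""
        return 1.0 - (1.0 - p_per_year) ** years

    def expected_years_until_accident(p_per_year: float) -> float:
        """Mean waiting time (in years) when the per-year hazard is p."""
        return 1.0 / p_per_year

    p = 1e-5  # hypothetical per-year risk; purely illustrative
    for n in (50, 10_000, 100_000, 1_000_000):
        print(f"{n:>9} years: cumulative risk = {cumulative_risk(p, n):.4f}")
    # Halving the hazard doubles the expected wait until the first accident.
    print(expected_years_until_accident(p), expected_years_until_accident(p / 2))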
Update: An astute reader notes that in contemporary vampire fiction, ancient vampires are generally portrayed as extremely risk averse--employing proxies when personal action in accord with their conception of the good (the bad?) would involve a significant risk to their immortality.
Further Update: Check out En Banc's Nick Morgan's comments here. And for more on Vampires, see Will Baude on Crescat Sententia.


 
Bashman Reports on Today's Supreme Court Decisions How Appealing has the scoop here.


 
Blogging and Academia There has been quite a bit of discussion recently about blogging and the academic world and especially about the impact of blogging on academic hiring. Here are links to some of the many posts on blogging and getting an academic job. A few observations:
    --Most blogging is likely to be viewed as irrelevant to academic hiring, because most blogs do not take on academic issues in a serious way. Most law student blogs, for example, fit the classical on-line diary model of blogging. Appointments committees are very busy & they just won't spend the time to read this kind of material.
    --Political blogging provides substantial information about a candidate's politics. Can such information affect the hiring process? It depends on the discipline. Law school hiring, for example, has a discernable political dimension, reflecting the liberal to left composition of most law school faculties. Some conservative and libertarian entry level candidates use a stealth strategy to get their first academic job--obviously political blogging could render that strategy unsuccessful.
    --Blogging creates relationships. As a result of my blogging over the past year or so, I've gotten to know a bit about several law students, judicial clerks, and lawyers. In several cases, other bloggers (and non-bloggers) have written me for advice about their academic job search. These personal contacts can be key to getting past the "first screen," the initial process by which law schools winnow out 95% of the potential applicants for academic jobs. A very thoughtful law student blog with high quality posts that include some real scholarly content can create a favorable impression on the current crop of academic law bloggers. The blogosphere lends itself to the creation of informal relationships because of the back and forth, commentary, and cross-linking that connect blogs with related content.
    --I've already heard from one entry-level candidate whose blog played a substantial positive role in the hiring process. The hiring committee at one school (not my own) was very familiar with the candidate's blog and the blog's content played a significant role in the interview process. Of course, for a blog to play this kind of role, the content will have to be quite good.
Perhaps the most significant evidence I have about the effect of blogging on academic reputation is personal and anecdotal. Almost every time I go to an academic conference of any sort, several strangers introduce themselves to me and mention that they have been reading my blog. These encounters have occurred at meetings directed at law professors, political scientists, and philosophers--suggesting that blogging has an interdisciplinary reach. Similarly, I have formed strong positive impressions of several academics from other disciplines as a result of their blogging. I knew Chris Bertram very slightly before I began reading his now-defunct and much missed Junius and his posts on the very much alive and well-respected Crooked Timber. Now I pay close attention to his posts, I purchased and read his Rousseau and the Social Contract, and even have begun to think a bit differently about some of the issues in political philosophy on which Bertram has posted. Brian Weatherson works on topics that are outside my usual areas of professional interest, but I nonetheless take an interest in his work because of my respect and admiration for his blogging. Likewise, because I know several of the participants in Punishment Theory (Kyron Huigens, Rick Garnett, Ken Simons, Antony Duff), I've developed an interest in the work of some of the other participants--even though criminal theory is at best a tertiary interest for me. It strikes me that the real importance of blogging for the academic world stems from phenomena like these. The academic blogosphere is like a sort of perpetual interdisciplinary academic conference. Like any good academic conference, the point is not so much going to the panels and papers--that's a sort of side benefit. The real point is the conversations in the halls--hearing about a new idea, about someone who is doing work that is interesting or "hot," meeting someone whose work you have admired, and so forth. When these conversations take place in the blogosphere, they become public, reaching dozens, hundreds, or even thousands of readers.
Despite the very large number of readers that some academic blogs attract, the current academic blogosphere is also in some ways like a very small conference--the number of academic bloggers is really quite tiny. But that may be starting to change. I recently spoke with someone who was seriously thinking about setting up a subject-matter specific blog along the lines of Punishment Theory but in a different area. Several such blogs would bring new audiences to the blogosphere and make the kind of academic discussions that currently take place on closed email discussion lists (conlawprof, cyberprof, etc.) more transparent. In my opinion, it is too early to tell where this is all heading. I would certainly not be surprised if academic blogs gradually faded from the scene or if the blog were replaced by something new and better. All I can say for sure is that my blogging over the course of the past year has affected my view of the academic world in significant and surprising ways. I suspect that I am not the only one who can say this.


 
Duff on Extreme Mental or Emotional Disturbance Surf on over to Punishment Theory for this post by Antony Duff, with a reply by Victor Tadros.


 
Heverly on the Information Semicommons We all know about the tragedy of the commons, and Frank Michelman and Michael Heller introduced us to the tragedy of the anticommons. Now comes Robert A. Heverly (University of East Anglia - Norwich Law School) with The Information Semicommons:
    We perceive Information as property; law and economic structures, we argue, make it so. But this perception does not end the questioning. If we believe information is property, the question we must ask is: what kind of property is information? While at times common uses of information, even privately owned information, are accepted, private ownership of information on the whole is still often taken for granted. Common uses, when allowed, are viewed as infringements on the private owner's rights. But this perception is mistaken; information is perceived as owned, but ownership need not be based in a purely private ownership scheme. Information ownership is instead a semicommons, a property model that explicitly recognizes the dynamic relationship and interdependence of private and common property uses. A semicommons exists, according to the theory developed by Henry Smith, where there is a dynamic relationship between the private and common uses of property such that their co-existence achieves greater benefits than would be achieved under a scheme of either primarily private or primarily common ownership. With the peer-to-peer file sharing dispute as its starting point, this article applies semicommons theory to information, shifting decision-making away from primarily seeking to maximize private incentives for information creation and toward the benefits that flow from the interaction of private and common uses. The article distinguishes between content-level and distribution-level effects of information use, and argues that distribution-level effects have traditionally - and wrongly - been ignored in policy and judicial decision-making. In a private information ownership model, peer-to-peer file sharing of copyrighted materials is mere infringement. Distribution-level effects, however, are the necessary and proper results of common uses of information, and should be considered when making decisions on use to maximize the benefits that flow from the information semicommons. In terms of peer-to-peer file sharing, the Ninth Circuit Court of Appeals rejected Napster's arguments that file sharers actually purchased more copyrighted works than non-users as irrelevant in light of the private owners' rights in their owned information. In recognizing the existence of the information semicommons, this distribution-level effect of such common uses would be a critical issue subject to proof in such circumstances, not a simple aside that it was unproven and unimportant. Semicommons theory has broad implications for information-related decision-making in the digital age beyond peer-to-peer, with the potential to shift discussion away from the view that common uses of information are merely infringing private owners' property rights, or are necessary primarily to prevent market failure or allow for robust freedom of speech. The semicommons view is one that recognizes that common uses of privately created information are necessary and appropriate, are part of the very structure of the properly described information ownership regime, and as such increase the overall societal benefits that flow from information creation.


 
Stephen Barnett on No-Citation Rules Stephen R. Barnett (University of California, Berkeley - School of Law (Boalt Hall)) has posted No-Citation Rules Under Siege: A Battlefield Report and Analysis (Journal of Appellate Practice and Process, Vol. 5, No. 2, Fall 2003) on SSRN. Here is the abstract:
    Debate over unpublished judicial opinions and no-citation rules frequently proceeds without full and updated information about citation practices and developments. This is particularly true of the state courts, whose citation practices are diverse, elusive, and fast-changing. This article offers a comprehensive report on current practices in the federal circuit courts of appeals - where nine of the thirteen circuits now allow citation of unpublished opinions (apart from related cases) - and in the state courts. The situation in the states as of late 2003 is found to be notably different from what was reported in congressional testimony in mid-2002. In place of a vast majority of states banning citation of unpublished opinions, a distinct trend toward allowing citation has produced near-equality between citation and no-citation states. (See the table in the Appendix.) The article then focuses on a proposed new rule for the federal courts, making unpublished opinions citable in all circuits, which has been put forward by the Advisory Committee on Appellate Rules of the United States Judicial Conference and on which the committee is receiving public comments (due by February 16, 2004). Problems of drafting the proposed rule are discussed, along with questions of how to deal with provisions in local circuit rules that allow, but disfavor and discourage, the citation of unpublished opinions.
If I might editorialize: I find no-citation rules inexplicable. I know a few areas of law in great depth (e.g., I have read several thousand opinions). In those areas, it is my experience that very frequently the unpublished opinions are the ones that address the important unanswered questions of law & the published opinions simply repeat the conventional wisdom. This pattern would appear to turn the purpose of designating opinions as unpublished on its head!


 
Dauber on Victim Compensation and the War of 1812 Michele Landis Dauber (Stanford University - School of Law) has uploaded The War of 1812, September 11th, and the Politics of Compensation to SSRN. Here is the abstract:
    The September 11 Victim Compensation Fund (VCF) is often described as "unprecedented," though it is but the latest in a long line of Federal disaster relief statutes. One such measure established a commission to compensate those who lost property to British attacks in the War of 1812. The history of this relief effort, which is remarkably similar to that of the VCF, reveals a characteristic moral trajectory traced by victims as they seek compensation. The closer victims come to receiving payment for their losses, the harder it is for them to maintain the appearance of blamelessness that is the source of their claim. This paper explores the political and moral issues that arise for both claimants and relief officials from this process.


 
Mann on State Bankruptcy Laws Ronald J. Mann (University of Texas at Austin - School of Law) has posted The Rise of State Bankruptcy-Directed Legislation on SSRN. Here is the abstract:
    This paper considers the rise of state legislation directed at affecting bankruptcy outcomes. It analyzes the question as a federalism question - is this the business of the states? - rather than as a commercial law question - does the legislation foster value-increasing business transactions? The analysis proceeds in three steps. First, I describe the basic system that successfully delineated responsibility between Congress and the state legislatures until recent years (perhaps about 1990), and a number of systemic factors that have caused the old system to break down. Second, I discuss examples of potentially problematic legislation, not only legislation related to securitization, but other pieces of state legislation that have their primary effects in the bankruptcy of the affected parties. Finally, I use those examples to illustrate when those statutes should and should not be held preempted by Congress's authority under the Bankruptcy Code.


Tuesday, January 13, 2004
 
Tillman Sends a Rejection Letter to Hamilton, Jay, and Madison Seth Barrett Tillman (New Jersey District Court) has uploaded The Federalist Papers as Reliable Historical Source Material for Constitutional Interpretation to SSRN. Here is the abstract:
    The Federalist Papers ill serves judicial opinion writing when cited for anything but analyzing the largest constitutional structures and their purposes – as opposed to the Constitution's details, which, although discussed, were not the main subject matter of contention between those supporting and those opposing ratification. Moreover, modern judicial craftsman cannot assume that each and every paper is free of error. They are not. Across the Papers are both minor and major errors of various sorts; thus, the Papers must be read and analyzed (as any other document must be) rather than casually cited (as if each and every Paper is free of defect) for the point under discussion. And, lastly, blithely relying on the rationales put forward by the Papers should not preclude our realizing that not only are some passages of the Constitution in deep tension, but rather, some passages are logically incoherent. Under these circumstances, all rationales put forward – those in the Papers included – are equally problematic. This paper is largely written in a comic mode so as to be more accessible to the generalist lawyer and lay reader – the same audience to whom the Papers were originally addressed. Additionally, this paper also explores some undiscussed aspects of House and Senate contingency elections in the event of the failure of the electoral college to select a President and/or a Vice President.
This really is pretty funny--written as a turn-down letter to Hamilton, Madison, and Jay from law review editors. The claim of "logical incoherence" struck me as quite odd. Here is the example of alleged incoherence that is produced in the article:
    Consider: the Constitution makes it easier to remove a President through the process of impeachment than to override a President’s veto with regard to a single measure. Both impeachment and veto override require action by two-thirds of the Senate. And though overriding a veto also requires two-thirds of the House, impeachment by the House merely requires a simple majority of a quorum. How are these differing supermajority requirements coherent? Should not the process for removing a President be at least as difficult as overriding her veto with regard to a single bill?
I hate to be a stickler, but this doesn't count as "logical incoherence." There is no logical contradiction here. In fact, I'm not sure this even counts as "deep inconsistency." It is, after all, possible that the Senate and House would employ a much higher standard for impeachment than for veto override. For impeachment, the Constitution provides a standard ("high crimes and misdemeanors"), whereas veto override is basically a discretionary decision. (Doesn't the historical evidence suggest that impeachment is, in fact, much more difficult to accomplish than override?) Nonetheless, this was amusing!


 
Gross on Torture Oren Gross (University of Minnesota Law School) has posted The Prohibition on Torture and the Limits of the Law (TORTURE, Sanford Levinson, ed., Oxford University Press, 2004) on SSRN. Here is the abstract:
    The debate about the moral and legal nature of the prohibition on torture and about the permissibility of carving out exceptions to that ban is generally conceptualized as a clash between two opposing poles with no middle ground between them. One may support an absolute ban on torture. Alternatively, one may believe that the duty not to torture, even if generally desirable and laudable, does not apply in certain exceptional circumstances, or, even if it does apply, is overridden, canceled or trumped by competing values. This paper defends an absolute prohibition on torture while, at the same time, arguing that truly catastrophic cases, such as the paradigmatic ticking-bomb scenario, should not be brushed aside as merely hypothetical or as either morally or legally irrelevant. The paper suggests that the way to deal with the "extreme" or "catastrophic" case is neither by reading it out of the equation nor by using it as the center-piece for establishing general policies. Rather, the focus is turned to the possibility that truly exceptional cases may give rise to official disobedience, i.e., public officials may step outside the legal framework and be ready to accept the legal ramifications of their actions. I argue that the prospect of extralegal action supports and strengthens the possibility of formulating and maintaining an absolute prohibition on torture.


 
It's Back! The infamous tournament of judges is making another appearance! This time, at Florida State, Mitu Gulati, Georgetown University Law Center (short-course visiting professor at FSU) presents Who Would Win a Tournament of Judges? (with co-author Stephen Choi). (FSU's version is password protected, but I found the paper at Georgetown. I don't know if the versions are identical.) This is a must-download! The blawgosphere wants to know: who is the winner? I know, but if you want to know, you will have to download Gulati's paper! (For Choi & Gulati's prior paper, A Tournament of Judges, follow the link.)


 
Ramseyer on Relationship Banking at Chicago At Chicago's Olin series, J. Mark Ramseyer, Mitsubishi Professor of Japanese Legal Studies, Harvard University, presents Does Relationship Banking Matter? Japanese Bank-Borrower Ties in Good Times and Bad, coauthored with Yoshiro Miwa.


Monday, January 12, 2004
 
Weekend Roundup On Saturday, the Download of the Week was a paper by Steve Shiffrin and the Legal Theory Bookworm recommended A Companion to Philosophy of Law and Legal Theory by Dennis Patterson (Editor) and The Oxford Handbook of Jurisprudence and the Philosophy of Law by Jules Coleman and Scott Shapiro. On Sunday, the Legal Theory Calendar previewed this week's workshops and conferences. The Legal Theory Lexicon entry was on "justice."


 
MacLeod on Standard Form Contracts at Columbia At Columbia's law and economics series, Professor William Bentley MacLeod (Visiting Professor of Economics, Princeton University and Professor of Economics and Law, The University of Southern California) presents On the Efficiency and Enforcement of Standard Form Contracts - The Case of Construction, coauthored with Surajeet Chakravorty.


 
Stephen Perry on Harm & Counterfactuals I've just caught up with this paper, posted by Stephen Perry, Harm, History, and Counterfactuals. Here is a taste:
    In this paper I undertake a very preliminary inquiry into some aspects of the concept of harm. My excuse for doing so in a symposium on compensation is that, in private law and particularly in tort law, an award of damages is often intended to compensate for harm; if we do not know something about the nature of harm, we cannot fully understand the nature of at least this type of compensation. To avoid one possible source of confusion, I should add immediately that harm is not the only thing that can be compensated by an award of compensatory damages. Compensation in law is generally meant to rectify a setback to an interest, but, I shall argue, while all instances of harm are setbacks to interests, not all setbacks to interests are instances of harm. Further, in cases where the law imposes liability for omissions – i.e., for breaching an affirmative duty to put someone in a certain position – it may be that the term “compensation” is appropriate even if the person being compensated has not suffered any setback, in the sense of an historical worsening, at all. However, nothing that I have to say about harm will turn on accepting one account rather than another of the concept of compensation.
Anything by Perry is worth reading!


 
Balkin on Digital Speech Jack M. Balkin (Yale University - Law School) has posted Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society (New York University Law Review, Forthcoming) on SSRN. Here is the abstract:
    This essay argues that digital technologies alter the social conditions of speech and therefore should change the focus of free speech theory from a Meiklejohnian or republican concern with protecting democratic process and democratic deliberation to a larger concern with protecting and promoting a democratic culture. A democratic culture is a culture in which individuals have a fair opportunity to participate in the forms of meaning making that constitute them as individuals. Democratic culture is about individual liberty as well as collective self-governance; it concerns each individual's ability to participate in the production and distribution of culture. The essay argues that Meiklejohn and his followers were influenced by the social conditions of speech produced by the rise of mass media in the twentieth century, in which only a relative few could broadcast to large numbers of people. Republican or progressivist theories of free speech also tend to downplay the importance of nonpolitical expression, popular culture, and individual liberty. The limitations of this approach have become increasingly apparent in the age of the Internet. By changing the social conditions of speech, digital technologies lead to new social conflicts over the ownership and control of informational capital. The free speech principle is the battleground over many of these conflicts. For example, media companies have interpreted the free speech principle broadly to combat regulation of digital networks and narrowly in order to protect and expand their intellectual property rights. The digital age greatly expands the possibilities for individual participation in the growth and spread of culture, and thus greatly expands the possibilities for the realization of a truly democratic culture. But the same technologies also produce new methods of control that can limit democratic cultural participation. Therefore, free speech values - interactivity, mass participation, and the ability to modify and transform culture - must be protected through technological design and through administrative and legislative regulation of technology, as well as through the more traditional method of judicial creation and recognition of constitutional rights. Increasingly, freedom of speech will depend on the design of the technological infrastructure that supports the system of free expression and secures widespread democratic participation. Institutional limitations of courts will prevent them from reaching the most important questions about how that infrastructure is designed and implemented. Safeguarding freedom of speech will increasingly fall to legislatures, administrative agencies, and technologists.


 
Schneider on Statutory Construction in Tax Cases Daniel M. Schneider (Northern Illinois University - College of Law) has posted Statutory Construction in Federal Appellate Tax Cases: The Effect of Judges' Social Backgrounds and of Other Aspects of Litigation (Journal of Law & Policy, Vol. 13, 2003). Here is the abstract:
    "Statutory Construction" examines the effect of judges' social backgrounds on the method of statutory construction used to justify decisions in a database of federal appellate tax cases. It concludes that social backgrounds have a modest effect. Relatively few descriptive statistics were statistically significant, as was also true of predictive statistics. Results regarding aspects of the case, such as the type of taxpayer (e.g., individual, business) or representation by a lawyer, were more robust. These results are consistent with earlier research by the author on the same topic, in a database of federal trial tax cases.


 
Walker on Class Action Trials Laurens Walker (University of Virginia School of Law) has posted A Model Plan to Resolve Federal Class Action Cases by Jury Trial (Virginia Law Review, Vol. 88, No. 2, pp. 405-445, April 2002) on SSRN. Here is the abstract:
    These two decisions, Amchem and Ortiz, and the powerful critique pointing out the risk of collusion have clouded the prospects for future settlements in major class actions and cut short plans to amend the federal class action rule to encourage settlements. The procedural bar will likely diminish the number of class certifications for settlement by imposing "heightened attention" to certification requests. The substantive criticisms will likely block any future efforts to modify the Rule to facilitate certifications for settlement. Before these developments, settlement had been virtually the only process employed to resolve complex civil cases. Now the inevitable question is: How will the federal courts respond to the continuing tide of complex civil litigation commenced in district courts? One possibility is to turn away most of these claims at the threshold by refusing to certify class actions leaving plaintiffs to wait, likely in vain, for individual federal adjudication. This option would follow naturally from "heightened attention" to certification requirements and has been supported candidly by some critics. But the option is not attractive. A failure to certify can mean that thousands and sometimes millions of persons claiming a right under established substantive law will be left with no real judicial remedy. Chief Justice John Marshall famously labeled the failure to furnish a "remedy for the violation of a vested legal right" a condemnation of our jurisprudence. The other possibility for avoiding the new procedural and substantive objections to settlement is to bring the claims to an adjudicated solution by trial. Traditionally, this option has received little consideration because these cases, with thousands or millions of parties, simply defied trial. Indeed, the first casebook on complex civil litigation, published in 1985, had no chapter on trial, and recent editions of that book and other casebooks on the subject have only brief treatments of the topic. The first casebook on class actions, published in 2000, begins a brief section on trial by stating that few class actions actually go to trial; most settle, either after the certification decision or as trial approaches. Indeed, these editors go on to suggest that the total number of class-action jury trials may be only a handful. In fact, one scholar has suggested that judges themselves have devised remarkably effective ways to keep mass tort cases away from juries. Yet trial may be the best antidote to the procedural and substantive concerns about class action settlements.


 
Germain on French Statutory Construction Claire M. Germain (Cornell University - School of Law) has posted Approaches to Statutory Interpretation and Legislative History in France (Duke Journal of Comparative & International Law, Vol. 13, No. 3, Summer 2003) on SSRN. Here is the abstract:
    In France, Justice Jackson's question about where to look for the meaning of a statute would be phrased in broader terms and would not be limited to the question of whether to look only at the words of a statute or also at the legislative intent. French law starts from the premise that statutes and codes are the foundations of the legal system in the same way that cases are the foundation of the common-law system. Because of the primacy of written law in France, statutory interpretation lies at the heart of French law. Statutory interpretation is very flexible, and there are no strict canons of interpretation. The drafters of the Civil Code, Napoleon in particular, intended only to set general principles, leaving it up to judges to apply the principles to the circumstances of cases. This is why the Civil Code of 1804 can still resolve many of today's issues, such as those generated by automobile traffic accidents, that could not be anticipated at the time of writing. It is commonly understood that legislators cannot anticipate all situations and all difficulties that might arise from the application of legal texts. The meaning of statutes is not always clear. Moreover, the adaptation of texts to concrete situations may cause difficulties. Interpretation is needed on the meaning and scope of the text.


Sunday, January 11, 2004
 
Legal Theory and Voting Patterns of Federal Judges Check out this post on Pejmanesque.


 
Legal Theory Lexicon: Justice
    Introduction The connection between law and justice is a deep one. We have "Halls of Justice," "Justices of the Supreme Court," and "the administration of justice." We know that "justice" is one of the central concepts of legal theory, but it is also vague and ambiguous. This post provides an introductory roadmap to the concept of justice. Subsequent entries in the Legal Theory Lexicon will cover more particular aspects of this topic such as "distributive justice." As always, this post is aimed at law students (especially first-year law students) with an interest in legal theory.
    A Typology of Justice What is justice? One way to approach this question is via a typology--a scheme that divides the general and abstract concept of justice into component parts: (1) distributive justice, (2) corrective justice, (3) political justice, and (4) procedural justice. There may be deep and fundamental differences between these types of justice, or the categories may simply be heuristic devices. For now, let's lay that question to the side and focus instead on a brief exposition of each of the four types of justice:
      Distributive Justice In 1971, John Rawls's book A Theory of Justice put distributive justice at the center of philosophical discussion of justice. What is the subject of distributive justice? Even this question is controversial, but one formulation is: The subject matter of distributive justice is the distribution of the benefits and burdens of social cooperation. The burdens of social cooperation include things like taxes and obligations to provide civic service (e.g. military service, jury service, and so forth). The benefits of social cooperation might be seen as including the resources that are produced by social cooperation, which might be represented by wealth and income. Thus, questions that a theory of distributive justice might answer include:
        --Should the system of taxation be progressive (with a heavier burden on the wealthy than the poor)?
        --Should the government adopt an incomes policy (such as a guaranteed annual income) that will provide a minimum level of resources to those who are least well off?
        --Should the burden of military service be distributed equally (in the form of mandatory service for all citizens) or should this burden be allocated by a volunteer army and market incentives?
      This list just begins to scratch the surface. In the context of the law school curriculum, questions of distributive justice arise in a variety of courses. In tort law, distributive justice may be the basis for the theory that one of the purposes of tort law is "risk spreading" or the just distribution of the costs of accidents. In contract law, questions of distributive justice may arise in cases involving contracts of adhesion or contracts with terms that may exploit the unsophisticated and economically disadvantaged.
      In a future post, I will say more about particular theories of distributive justice. For now, let me just mention three approaches. The first approach is found in Rawls's theory, justice as fairness, which includes two principles of distributive justice. The first principle guarantees to every citizen a fully adequate scheme of equal basic liberties, such as freedom of conscience, the right to vote, and so forth. The second principle (the "difference principle") requires that inequalities of income and wealth work to the advantage of the least well-off group in society. The second approach is strict egalitarianism--which would not permit differences with respect to whatever good is the subject of justice. Why not equality of wealth and income? That's one option for egalitarians, but there are others, such as equality of welfare, equality of resources, or equality of opportunity for these things. The third approach is libertarianism--which holds that the distribution of wealth and resources is not itself a proper subject matter for justice. Rather, libertarians begin with the premise that each individual should have certain liberty rights (e.g. self-ownership, property rights, and contract rights) and that whatever distribution results from the exercise of these rights is a just distribution.
      Corrective Justice Aristotle defined "corrective" or "rectificatory" justice as "Justice in transactions." That's a good place to start. With Aristotle we might divide transactions into two categories, the voluntary and the involuntary. Justice in voluntary transactions would include the topics encompassed by contract law. Justice in involuntary transactions would include both transactions that are involuntary due to force (e.g. battery) and transactions that are involuntary due to fraud (e.g. fraud, misrepresentation, etc.).
      One of the great debates in contemporary legal theory concerns the status of corrective justice. This topic is especially hot in tort theory and criminal law theory. For example, some tort theorists believe that the purpose of tort law is captured by the idea of corrective justice. Such theorists tend to believe that liability standards should be fault based (e.g. intentional tort or negligence as opposed to strict liability) and that the purpose of tort damages is to make the plaintiff whole (and to force the defendant to disgorge wrongful gains), not deterrence. Other tort theorists, e.g. welfarists or utilitarians, believe that corrective justice institutions should be judged solely by the consequences they produce. So a utilitarian might believe that the purpose of tort law is to produce optimum deterrence. Finally, some tort theorists believe that tort law serves the ends of distributive justice.
      Political Justice Yet another topic of justice is political justice. In a sense, this might be seen as a subtopic of distributive justice--since political rights and responsibilities can be seen as encompassed within the general category of the benefits and burdens of social cooperation. In relationship to the law school curriculum, we might say that political justice is concerned with the foundational issues of constitutional theory. Who shall have the right to vote? What power shall be allocated to local communities as opposed to nation-states? What limitations shall there be on the power of democratic majorities (e.g. individual rights & judicial review)?
      The topic of political justice shades into another important idea--"political legitimacy." Are these two ideas essentially the same or are they different? One view is that it is possible to have a legitimate political order that is nonetheless unjust (or vice versa). For example, some might say that the test of political legitimacy has to do with the origination of the political system. If a system has been accepted and endorsed by the people, this view contends, it is legitimate--even if the substance of the system (e.g. the allocation of political rights) is unjust. On this view, a religious state might be legitimate but unjust. A quite different view is that political legitimacy depends on political justice. For example, Randy Barnett has argued that the test for constitutional legitimacy is whether the constitution provides adequate guarantees of just outcomes (for Barnett, the protection of individual liberty). On this view, popular endorsement of an unjust political system does not make that system legitimate.
      Procedural Justice A final form of justice is "procedural justice." The very existence of this category is controversial. Some theorists argue that only the outcomes of procedures count. But this is not the universal view. Some theorists believe that procedures are important for reasons that are not reducible to a concern with outcomes. One helpful typology was provided by Rawls, who distinguished between perfect, imperfect, and pure procedural justice.
        --Perfect procedural justice assumes that we have an independent criterion for the correctness of outcomes. For example, a correct outcome in a criminal case would be "freeing the innocent and convicting the guilty." We have perfect procedural justice if the procedure guarantees the correct outcome. In other words, perfect procedural justice requires 100% accuracy.
        --Imperfect procedural justice. Of course, in the actual world, most procedures fall short of 100% accuracy. Moreover, the more accurate a procedure is, the more expensive it is likely to be. Imperfect procedural justice acknowledges these facts and therefore conceives of procedural justice as a fair balance between the benefits of accuracy and the costs of procedure.
        --Pure procedural justice is based on the denial of the premise that we have an independent criterion for a correct outcome. We have a case of pure procedural justice if the procedure itself provides the criterion for judging the justice of the outcome. Rawls himself doubted there were many cases of pure procedural justice. He did see one case--a fair bet. With a fair gamble (e.g. a roll of unloaded dice), whatever outcome the procedure produces counts as just.
      In the context of the law school curriculum, questions of procedural justice arise in connection with procedural due process (in constitutional law, administrative law, and procedure) and especially in the courses in civil and criminal procedure.
    Justice and Moral Theory Thinking about each of these four types of justice is connected with more general views about moral and political theory. Each of the three important families of normative moral theory (consequentialist, deontological, and aretaic) connects in interesting ways with thinking about justice:
      Consequentialist Ideas About Justice There are many different forms of consequentialism. In moral theory, the most familiar form is utilitarianism. In legal theory, the emphasis lately has been on welfarism. Most consequentialist theories do not see justice in any of its forms as truly distinctive. For example, for act utilitarians the rightness or wrongness of an action depends on whether that action (as opposed to the alternatives) produces the most utility. Thus, the best distribution of resources is the one that maximizes utility, and the best system of tort law is the one that maximizes utility. There are different ways of expressing this idea. One expression maintains that consequentialists do not place any independent value on justice; another way of putting it is to say that for consequentialists, justice is the production of good consequences.
      Deontological Ideas About Justice By way of contrast, deontological theories have a natural affinity for the idea that justice serves as an independent criterion for the rightness and wrongness of actions. Thus, it is a characteristically deontological position to maintain that unjust actions or institutions cannot be justified on the ground that they would produce good consequences. Deontologists might say, for example, that it would be unjust and hence impermissible to punish an innocent person--even if the net long-term effect of that action were to produce good consequences.
      Aretaic Ideas About Justice From the viewpoint of aretaic theory, justice is primarily a virtue, an excellence of human character. One of the most difficult problems for virtue ethics has been the development of an adequate theory of the virtue of justice. One view is that justice is the disposition to take neither too much nor too little for oneself. Another view is that justice is the disposition to act in conformity with social and legal norms, tempered by equity. Yet a third view is that the virtue of justice is simply the disposition to act in accord with the right theory of what a just action is.
    The Relationship Between Law and Justice What is the relationship between law and justice? That question can be tackled from many different directions. One angle of approach would be to ask whether there is some essential or necessary connection between legal validity and justice. The view that only just laws are legally valid is usually associated with natural law theory, whereas the view that there is no essential or necessary connection between law and justice is characteristically associated with legal positivism. But whether one is a natural lawyer or a legal positivist, one could say that the laws should be just. Thus, theories of justice can be seen as guiding the science of legislation.
    Conclusion Contemporary legal education is, in a sense, all about justice. Natural law, legal positivism, and legal realism all go beyond the black letter law and ask the question, "What should the law be?" Law students quickly discover that their instructors are frequently more interested in questions like, "Is that a just rule?" than in questions like, "What is the rule?" As you continue your study of legal rules, you can begin to ask questions like: "Does this rule address a question of distributive, corrective, political, or procedural justice?" "Is the rule in the case (or statute or constitutional provision) just or unjust?" "What theory of justice underlies the reasoning of the court?"


 
Legal Theory Calendar


Saturday, January 10, 2004
 
Certiorari in Hamdi Case Howard Bashman rounds up the press coverage on the Hamdi grant. Here is a short excerpt from David Savage's L.A. Times story:
    Unlike prisoners of war, these "unlawful combatants" had no rights under the Geneva Convention to a military hearing to argue they were not, in fact, enemy soldiers. And they were outside the protections of American law, and therefore, judges had no authority to second-guess the president's decision. While asserting this broad authority, the administration has used it sparingly. Only three men, all Muslims, have been publicly identified as "enemy combatants," and only two have had their legal claims heard in federal court. The first, Yaser Esam Hamdi, is a Saudi national who was fighting for the Taliban regime in Afghanistan. He surrendered to U.S. troops in fall 2001 and was taken to the U.S. Naval Base at Guantanamo Bay, Cuba. Military authorities learned he was born in Louisiana, and was therefore a U.S. citizen. In April 2002, he was sent to a Navy brig in Norfolk, Va., where he was held without charges and without being permitted to speak to a lawyer or his family.
And a bit more:
    Pepperdine University law professor Douglas W. Kmiec called the decision to hear Hamdi's case "a positive development." He predicted the court will not "interfere with necessary military decision-making," but instead will "write a narrowly drawn opinion that affirms it in the new and perplexing circumstances on the war on terror." On the other side, critics said Bush has followed a lawless course by ignoring both the Geneva Convention and the U.S. Constitution. They said Hamdi and Padilla should either be treated as prisoners of war or accused criminals. "The United States' treatment of [Hamdi] radically departs from settled law and history" on prisoners of war, Yale University law professor Harold Hongju Koh said in a friend-of-the-court brief on behalf of experts in international law.


 
Legal Theory Bookworm This week the Legal Theory Bookworm has two recommendations, both standard reference works for legal theorists:
    A Companion to Philosophy of Law and Legal Theory by Dennis Patterson (Editor). This very handy reference work has essays on property law by Jeremy Waldron, tort law by Stephen Perry, constitutional law and equality by Maimon Schwarzschild, deconstruction by Jack Balkin, precedent by Larry Alexander, indeterminacy by your humble blogger, and many, many others.
    The Oxford Handbook of Jurisprudence and the Philosophy of Law by Jules Coleman and Scott Shapiro (Editors). This volume has essays on natural law by John Finnis and Brian Bix, on positivism by Andrei Marmor and Kenneth Einar Himma, on reasons by John Gardner and Timothy Macklem, and on law and objectivity by Brian Leiter--and much more.
These are wonderful reference tools! Highly recommended!


 
Download of the Week The Download of the Week is The First Amendment and the Socialization of Children: Compulsory Public Education and Vouchers by Steve Shiffrin of Cornell. Shiffrin has long been one of the finest American constitutional scholars. Here is the abstract of this very interesting article:
    The debate about public and private education raises important questions about the role of the state in promoting a certain kind of person and citizen, which has implications for liberal and democratic theory, the respective rights of children and parents, and the nature of religious freedom in a democratic society. In addressing these issues, Professor Shiffrin argues that the debate about compulsory public education has been oversimplified. Too often the argument has been that compulsory public education is always unconstitutional or, less frequently, that it is always constitutional. Similarly, much of the debate about vouchers contends that they are always good or always bad or that vouchers to religious schools either always do or always do not violate the Establishment Clause. Shiffrin maintains that the interests of children and the state in public education have been underestimated and that government should in many circumstances be able to compel adolescents of high school age, but not pre-adolescents, to attend public schools. No U.S. government is likely to engage in such compulsion, and there are good political reasons not to do so, but analysis of the case for compulsory public education leads to support of a strong presumption against vouchers, at least at the high school level. This presumption, however, is more difficult to defend when public schools are relatively homogeneous or are providing inadequate education to poor children. Even if vouchers could generally be supported, vouchers to religious schools raise serious concerns about the appropriate principles of church-state relations in the American constitutional order. But these concerns might be overcome in certain circumstances. In short, compulsory public education should sometimes be regarded as constitutional and sometimes not; vouchers are generally to be resisted, but sometimes not; and vouchers to religious schools should ordinarily be considered unconstitutional, but sometimes not.


Friday, January 09, 2004
 
Welcome to the Blogosphere . . . to Displacement of Concepts, "Thoughts on technology, innovation, law, legal education, economics, cyberspace, intellectual property, and other things of interest to the humans inhabiting the information society, brought to you by a few folks at the University of East Anglia and the Norwich Law School." The bloggers include: Robert A. Heverly, Andrew Scott, Lindsay Stirton, and Arvind T. Thattai.


 
Call for Papers: Evil, Law and the State
    Law, Evil, and the State Wednesday 14th to Saturday 17th July 2004 Mansfield College, Oxford, United Kingdom
    CALL FOR PAPERS
      This inter-disciplinary and multi-disciplinary conference seeks to explore issues surrounding evil and law, with a focus on state power and violence. Perspectives are sought from those engaged in any field that touches on the study of law and legal culture: anthropology, criminology, cultural studies, government/politics, history, legal studies, literature, philosophy, psychology, religion/theology, and sociology, as well as those working in civil rights, human rights, prison services, politics and government (including NGOs), psychiatry, health care, and other areas.
      Papers, reports, work-in-progress and workshops are invited on issues related to any of the following themes:
        * when and why is law evil or a source of evil?
        * state violence and coercion
        * justifications for punishment, including capital punishment
        * whether and under what circumstances the adversary or inquisitorial models of legal process generate, tolerate, or allow evil outcomes
        * issues of distributive justice in law, including distributing the costs of legal error
        * the intersection of law with issues of choice, responsibility, and diminished responsibility
        * state responsibility for terrorism, war, intervention, ethnic cleansing, and other problems of international law and international relations
      Papers will be considered on any related theme. 300 word abstracts should be submitted to both the Organizing Chairs; abstracts may be in Word, WordPerfect, PDF or RTF formats, and must arrive no later than Friday 19th March 2004. If an abstract is accepted for the conference, a full draft paper should be submitted by Friday 28th May 2004.
    Papers should be submitted to
      Professor John Parry Associate Professor of Law, University of Pittsburgh School of Law Pittsburgh, USA Email: Parry@law.pitt.edu
      Dr Rob Fisher Inter-Disciplinary.Net Oxfordshire, United Kingdom Email: rf@inter-disciplinary.net
    The conference will likely consist of roughly 30-35 people grouped primarily in consecutive panels so that each person may hear and respond to each paper. Non-presenters are welcome to attend and participate as well.
    Selected papers accepted for and presented at this conference will be published, as revised, in a themed volume. In addition, all papers accepted for and presented at the conference will appear in an ISBN eBook.
    Evil, Law, and the State is part of a larger series of ongoing conferences, run under the general banner 'At the Interface.' This series aims to bring together people from different areas and interests to share ideas and explore various discussions which are innovative and exciting.
    For further information about the project please go to: http://www.wickedness.net/els/els.htm
    For further information about the conference please go to: http://www.wickedness.net/els/els1/els04cfp.htm


 
Claeys on Separation of Powers Eric Claeys (Saint Louis University - School of Law) has posted Separation of Powers and the Living Constitution: How the Supreme Court Has Used Progressive Political Theory to Reconcile Formalism and Functionalism on SSRN. Here is the abstract:
    This article proposes a positive explanation why the Supreme Court has erratically veered from formalism to functionalism and back in its separation of powers law for more than 25 years. The explanation suggests that most members of the Court are drawing on normative ideas about separation of powers heavily influenced by Progressive political theory. The Progressives developed living Constitution political theory to justify creating a federal bureaucracy independent of the three traditional departments of the federal government. Watered down slightly, the Progressive idea that politics should be kept separate from administration became an important staple of legal education during and shortly after the New Deal - when most of the Justices on the Burger and Rehnquist Courts went to law school. Most of the Justices on these Courts have relied on Progressive norms about independent administration in deciding separation of powers cases since the 1970s. These Justices used formalism and functionalism not as self-contained interpretive theories, but as standards of review, like strict and rational-basis scrutiny. The Progressive distinction between politics and administration has served as the trigger between the two levels of scrutiny. If a law transfers power to independent agency administrators, most Justices apply functionalism to defer to Congress and uphold the law. If, however, a law enables politicians, and especially members of Congress, to supervise agency administration closely, the same Justices apply formalism to justify striking it down. The article explains a phenomenon that has puzzled commentators for years, but it also offers some important lessons about the normative debate between formalism and functionalism. Most important, contemporary functionalism seems problematic. Whether viewed from the perspective of history or of normative theory, contemporary functionalism seems to have it both ways. It tries to justify Progressive political results without using Progressive constitutional theory, to constitutionalize Progressive norms about independent administration without all the baggage that comes with using Progressive living-Constitution theory to do it. The article closes by suggesting that the main normative debate in separation of powers law may not lie between formalism and functionalism, but rather between formalism and living-Constitution theory.


 
Berman on the Sentencing Guidelines Douglas A. Berman (Ohio State University - Michael E. Moritz College of Law) has posted From Lawlessness to Too Much Law? Exploring the Risk of Disparity from Differences in Defense Counsel Under Guidelines Sentencing (Iowa Law Review, Vol. 87, No. 2, March 2002) on SSRN. Here is the abstract:
    The Sixth Amendment guarantees the right of a criminal defendant to have the assistance of counsel, and much has been said and written about the fundamental need, and the frequent failure, to ensure adequate counsel for all persons accused of crimes. However, far too little attention has been paid to the role and impact of defense counsel in the ultimate conclusion of the criminal process - sentencing. Especially since the enactment of the Federal Sentencing Guidelines, considerable scholarly attention has been given to the roles at sentencing of judges, prosecutors, and even probation officers. But still lacking in the legal literature has been assessments of defense counsel's effect on sentencing outcomes or explorations of whether differences in the quality of defense representation may be thwarting the goals of sentencing reform. This Article starts the project of closely examining the impact of defense counsel on sentencing outcomes. Part I notes the lack of attention given to defense counsel's role and impact in sentencing, and explains the importance of exploring the effect differences in defense counsel may have on sentencing outcomes. Part II utilizes the Federal Sentencing Guidelines as a focal point for discussing the array of challenges that modern sentencing schemes create for defense counsel. This Part highlights how the sheer bulk of sentencing law created by the Guidelines heightens the importance of sophisticated legal knowledge and skilled advocacy, while also increasing the risk of mistakes throughout the processes of plea bargaining, sentence calculations, and final sentence determinations. To best represent clients, defense lawyers have to tailor effectively their negotiations, mitigation arguments, and even repackage work relationships into the Guidelines context. As revealed throughout Part II, prosecutors, probation officers, and judges have many official and unofficial opportunities to make discretionary decisions that directly impact federal sentencing outcomes, and these decisions can be greatly influenced by the efforts of defense counsel at every stage of the federal sentencing process. As a result, from the very beginning of representation in a Guidelines world, the caliber and performance of defense counsel can have a dramatic impact on both the broad outline and detailed particulars of the ultimate sentencing outcome. These important insights in turn suggests that poor defense representation or differences in the quality of defense counsel may create considerable risks of disparities and other unfair sentencing outcomes under the Guidelines. Part III maps out the challenges for sentencing reformers in trying to fully assess and remedy these concerns by noting the complex realities that will necessarily attend efforts to measure and minimize the potential for differences in defense counsel to impact federal sentencing outcomes. The article urges researchers, and in particular the U.S. Sentencing Commission, to undertake empirical studies in order to try to gauge the impact of defense counsel on the sentencing process. In sketching out an agenda for future research and potential reforms, this Article closes with a call to action for all policy-makers and academics concerned with sentencing systems to focus needed attention on defense counsel's role and impact on sentencing outcomes.


 
Conference Announcement: Behavioral Analysis of Legal Institutions at Florida State
    Behavioral Analysis of Legal Institutions
    Spring 2004 Symposium March 26-27, 2004 Florida State University College of Law Program Overview
      Legal scholars increasingly analyze legal phenomena from a behavioral perspective, relying on findings from psychology, behavioral economics, and political science to predict how the law will affect behavior and how legal actors will make judgments and decisions. Much of the first wave of law and behavioral science scholarship focused on individual behavior without taking into account the larger institutional setting in which the behavior occurs, while other early scholarship within this field exalted the decision-making competence of one legal institution over another without engaging in a comparative analysis of the respective institutions. This symposium will examine the unique influences of different legal institutions on behavior and address methodological limitations and political pressures that complicate the behavioral analysis of legal institutions.
      Leading scholars in the fields of law and the behavioral sciences will participate in discussions organized around the judgments and decisions that take place within particular institutional settings of legal importance. Planned papers include presentations of original empirical research that extends our understanding of how legal institutions affect behavior and critical analyses of existing empirical research on the behavior of legal institutions.
      The conference will provide an opportunity to take stock of where the behavioral study of legal institutions stands, to discuss why the legal system should pay greater, or perhaps less, attention to this research, and to consider future directions for this field of study. Papers and comments will be published in Volume 32 of the Florida State University Law Review.
    Keynote Speaker: Philip E. Tetlock, Lorraine Tyson Mitchell Chair II in Leadership and Communications; Professor of Organizational Behavior; Chair of the Organizational Behavior and Industrial Relations Group, Haas School of Business, University of California, Berkeley. Dr. Tetlock is a leading behavioral decision theorist whose studies on judgment and decision-making behavior within organizations include seminal research on accountability systems and the rationality of judgment and choice.
    This symposium has been approved for continuing legal education credit by The Florida Bar.


 
Kordana and O'Reilly on Daubert Kevin A. Kordana and Terrance O'Reilly (University of Virginia School of Law and Independent) have posted Daubert and Litigation-Driven Econometrics (as published in Virginia Law Review, Vol. 87, pp. 2019-2027, November 2001) on SSRN. Here is the abstract:
    Professor D.H. Kaye's paper raises interesting and important questions. The central issue that his paper tackles is when expert testimony employing statistics, particularly econometrics, should get to the jury. Based on the case study he discusses, his answer appears to be "not if it's not up to snuff." Kaye nonetheless emphasizes that the answer also must not exclude too much. In other words, we should admit studies that are less than ideal but still "within the range of reasonable debate by experts." Kaye argues that the admissibility of evidence based on a statistical theory or technique should turn on whether the statistical method "has been subjected to sufficient study to establish its validity as applied to a class of problems that includes the one being investigated in the litigation" (i.e., the "major premise," or over-arching scientific theory). The question "[w]hether such a method is being applied properly to the problem at hand" (the "minor premise," or case-specific claims) would generally be left to the jury.


Thursday, January 08, 2004
 
Gulati at FSU Today At Florida State, Mitu Gulati, Georgetown University Law Center (short-course visiting professor at FSU) presents "Who Would Win a Tournament of Judges?"


 
Akhil Amar on the West Wing Gary O'Connor writes
    Akhil Amar was mentioned during last night's "West Wing" episode. Josh Lyman said "One of my law school classmates published an article on the constitutionality of Lincoln's general order" and another character (a lawyer from North Carolina complaining about the fact that North Carolina's copy of the Bill of Rights was stolen by a Union soldier in the Civil War) said "Akhil Amar."
What's next? Cass Sunstein on Sex and the City? Kathleen Sullivan on Six Feet Under? Randy Barnett on The Sopranos? Bruce Ackerman on The Simpsons? Steve Calabresi on Friends? Eugene Volokh on MI5?


 
Getting to Formalism
    Introduction I was recently invited to give a talk on "Transitions to Originalism" at the meeting of the Faculty Division of the Federalist Society in Atlanta on January 4, 2004. This post is a modified version of the remarks that I made on that occasion.
    Frequent readers know that I am a self-avowed neoformalist. What does that mean? Putting it in the negative, I reject the idea that law should be used instrumentally by judges to achieve the judge's idea of what constitutes good policy. On the positive side, I have argued that judges should adhere to "the rules laid down," roughly the text of statutes and constitutions in light of evidence of their original meaning. For a very brief summary of my views, surf to A Neoformalist Manifesto. In this post, I will simply assume that a formalist legal regime is the goal, and ask the next question: "How can we get to formalism?" This is too big a question for a single blog post, so I will limit my discussion to an important subset of this question: "How can we get to a formalist constitutional regime?"
    I am not going to argue for the virtues of formalism. Instead, for the sake of argument we can begin with the assumption that the goal is a formalist constitution--more or less a constitutional regime where courts look to the text, structure, and original meaning of the Constitution as sources of interpretive authority and not a regime where judges rely on their own beliefs about what is just or what will produce the best consequences as a source of constitutional law.
    We do not have a formalist constitution today. What if we had the political will to achieve that goal? What is the best way to get to constitutional formalism?
    An Attractive Proposition: Constitutional Exclusivism If you are a constitutional formalist, the following proposition seems attractive as a starting point:
      It is the constitution (and not judicial decisions) that is a genuine source of normative authority. We are obligated by the Constitution and not by precedent.
    We might call this view constitutional exclusivism. The thesis of this post is that legal formalists should reject constitutional exclusivism. In a sense then, this post continues the argument that I began in The Case for Strong Stare Decisis: Part I, Part II, and Part III. I am going to argue that the authority of the constitutional text ought to be supplemented by the authority of settled constitutional practice. Sometimes constitutional practice can be found in the actions of the President or of Congress, but for the sake of simplicity and brevity I shall focus on judicial precedents.
    A Transition from What to What? When we are thinking about the transition to constitutional formalism, it is important to begin with some understanding of the status quo. Of course, that itself is a huge topic. To what extent is the existing constitutional order in the United States consistent with a formalist understanding of the Constitution? Rather than offer evidence, I would like to offer a working hypothesis. Let's assume for the sake of argument that the current set of constitutional doctrines--about federal power, separation of powers, and individual rights--is a substantial departure from a formalist constitution. That is, I will assume that a textualist and originalist approach to constitutional interpretation would require the following significant constitutional changes:
    • A devolution of power from the national government to the states.
    • A reorganization of the federal government, including a substantial reduction in Congress's power to delegate legislative authority to the executive branch.
    • A conceptual realignment of individual rights jurisprudence. (But it is extremely controversial what form this realignment would take.)
    In other words, the transition to constitutional formalism would (or at least could) involve substantial and systematic changes in constitutional law.
    Finally, I want to reemphasize that these are assumptions--not arguments and evidence. The assumptions are necessary so that we can focus on the question: "Do we know the best way to constitutional formalism?"
    The Big Bang We have tentatively accepted constitutional exclusivism as a normative premise: It is the constitution (and not judicial decisions) that is a genuine source of normative authority. What are its implications? On the surface, at least, our normative premise would seem to lead to the conclusion that everyone (citizens and officials alike) should begin to act on the basis of the formalist constitution. That is, we should all give our constitutional allegiance to the real constitution--the one that sits under glass in Washington, D.C. And we should withhold our allegiance from the judicially mangled constitution--the one found between the covers of the United States Reports. Suppose that we did. The result would be a constitutional big bang, a rapid transition from the realist constitution created by judicial fiat to the constitutional order embodied in the text, structure, and history of the Constitution of the United States of America.
    But a moment's reflection suggests that this constitutional big bang would be problematic, to say the very least. For now, let's put aside the practical problems. Let's ignore the fact that it might be difficult to instantaneously transition to a world where the FCC, the CPSC, the EPA, the SEC, and the NLRB were all unconstitutional. Let's set aside the problem of transferring authority from Washington to Albany, Sacramento, and Austin. Even without those problems, we would face another problem of monumental proportions.
    What problem is that? A big bang that involved everyone (judges, other officials, and citizens) would lead to an intractable problem of authority and social coordination. Even if every single judge, every single official, and every single citizen were to become a committed originalist overnight, this would not lead to agreement about the meaning of the constitution. Why not? There are three problems that constitutional formalism must face:
      1. The Problem of Textual Ambiguity--Some provisions of the constitution are highly determinate in meaning. For example, there is little disagreement about what the requirement that the President be 35 years of age means. Other provisions are general and abstract, creating vagueness and ambiguity. Examples include the due process, privileges and immunities, and equal protection clauses of the 14th Amendment to the Constitution. But it is not just the text of the constitution itself that is vague and ambiguous; the historical record (the guide to original meaning) is ambiguous as well. Documents such as The Federalist Papers, the record of the constitutional convention, and the ratification debates themselves contain abstract and general language that creates further vagueness and ambiguity. My point is not that the relevant texts provide no guidance; clearly many interpretations are wholly inconsistent with the text and original meaning. Nor am I a relativist about original meaning. I believe that some interpretations are better than others, and that usually there is a best interpretation that can be supported by good and sufficient evidence. Rather, it is that these texts will frequently admit of more than one interpretation. Given the variability of human beings, it is inevitable that there will be disagreements even among formalists about what the Constitution means.
      2. The Problem of Incomplete Historical Knowledge--Different interpreters of the constitution know different things about constitutional history, and even when all that knowledge is added together, it is partial and incomplete. It is not uncommon for constitutional scholars to discover new historical evidence, or to discover a significant but neglected passage in a well-known source. Because of incomplete historical knowledge, different interpreters will understand the original meaning of the constitution in different ways. And as new historical knowledge comes to light or becomes more widely disseminated, our understanding of original meaning will vary over time.
      3. The Problem of Partiality--The first two problems are compounded by a third. Different interpreters are partial to different interests. We are members of different affinity groups and we are committed to a variety of causes and ideologies. Given human nature, it is hardly surprising that our attachments influence our interpretations. Given textual vagueness and ambiguity and the problem of incomplete historical knowledge, it is inevitable that we will tend to favor those interpretations of the Constitution that serve the interests to which we are partial. Such partiality is not necessarily a result of bad faith. It is simply a fact of human nature that we are attracted to the conclusions that we wish were true.
    Given these three problems, it is difficult to resist the conclusion that a formalist big bang, with each and every citizen, official, and judge acting on the basis of their own interpretation of the text and original meaning, would be completely unworkable. Law could not perform its coordinating function if every individual and official saw themselves as their own Supreme Court with independent and ultimate interpretive authority. Constitutional exclusivism plus radical interpretive pluralism would be a recipe for social chaos.
    A Modified Big Bang So we could modify the big bang. As a first step, we might say that citizens should not interpret the constitution on their own, but should instead defer to the judgments of officials (judges, legislators, and executive officers). In order to make this workable, we would need a mechanism for coordination among the branches of government. On some questions, we might conclude that Congress, the President, or the States have final interpretive authority. But on most questions, we would be likely to conclude that the judiciary should have the final word. That is, on most questions of constitutional law, courts interpret the constitution and the other branches take judicial interpretations as authoritative. I realize that this claim is controversial. But once again, for the sake of argument, let's simplify and focus on the judiciary as the locus of interpretive authority.
    A Judicial Big Bang So let's suppose that we modified our normative commitment to constitutional exclusivism as follows:
      It is the constitution (and not judicial decisions) that is a genuine source of normative authority for judges. Judges are obligated by the Constitution--not by precedent--but individuals and other officials are bound by judicial interpretations of the constitution.
    That is, let's imagine a formalist big bang in which the courts look exclusively to the constitutional text and original meaning, while the other branches of government and ordinary citizens defer to the judicial interpretation of the text's original meaning. But even this modified, judicial version of the big bang would not be workable. What happens if the Supreme Court decides a constitutional issue and then remands the case? Is the trial judge bound by the Constitution itself or is she bound by the Supreme Court's interpretation of the Constitution? A hierarchical system of appellate courts simply could not function if lower courts were obligated only by the original meaning of the constitutional text.
    Law of the Case Because we have a system with appellate courts, we need to modify our principle once again. We need to take into account the doctrine that lawyers call the law of the case. This doctrine obligates a lower court to respect a constitutional interpretation made by a higher court in the same case on appeal. If we add this doctrine to our formulation of constitutional exclusivism, we get something like the following:
      It is the constitution (and not judicial decisions) that is a genuine source of normative authority for judges. Judges are obligated by the Constitution--not by precedent--but individuals and other officials are bound by judicial interpretations of the constitution. Lower court judges are also bound by the interpretations of higher courts as specified by the doctrine of the law of the case.
    Vertical Stare Decisis But even this amendment to our principle is not sufficient. The doctrine of law of the case only binds the lower courts in particular cases. That is, a rule is "law of the case" only for one particular lawsuit (or "civil action"). The law-of-the-case doctrine has no effect on other lawsuits, even if they are before the same judge or the same court. If our principle incorporates only this doctrine, it would mean that even after the United States Supreme Court had settled the original meaning of a particular provision, lower court judges would be free to disagree and to adopt their own interpretations. Every trial judge would be free to revisit every constitutional question every time it arose--except in cases where the issue had already been decided by a higher court in the same case on appeal. This would undermine the rule of law, because the constitution would have no settled meaning beyond the four corners of a particular lawsuit. Hence, there is a need to supplement the doctrine of law of the case with a doctrine of vertical stare decisis. That is, decisions by higher courts should be binding on lower courts, even if they do not get the text, structure, and history right in the eyes of the lower court judge. Let's modify our principle of constitutional exclusivism to add this additional feature:
      It is the constitution (and not judicial decisions) that is a genuine source of normative authority for judges. Judges are obligated by the Constitution--not by precedent--but individuals and other officials are bound by judicial interpretations of the constitution. Lower court judges are also bound by the interpretations of higher courts as specified by the doctrines of the law of the case and vertical stare decisis.
    Horizontal Stare Decisis in the Intermediate Appellate Courts But there is yet another problem to resolve. The United States has a large and complicated system of appellate courts. For example, in the federal system, we have an intermediate appellate court called the United States Court of Appeals. Litigants can appeal to this court "as of right." (The Supreme Court's jurisdiction, by way of contrast, is mostly discretionary--via what is called the "writ of certiorari.") In order for the Courts of Appeals to function, they are divided into Circuits (numbered from 1st to 11th, plus two special circuits, one for the District of Columbia and another called the Federal Circuit). These Circuit Courts have many judges, but in order to handle the large volume of cases, the Courts of Appeals hear cases in panels of three judges.
    If each panel of three judges accepted the principle of constitutional exclusivism, the result would be that the meaning of the Constitution with respect to a particular issue could never be settled within a Circuit until that issue was decided by the United States Supreme Court. The Constitution might mean one thing this week and another thing next week. Trial courts within the circuits would be faced with a real practical problem. Given conflicting Circuit precedent, what is the law? To make vertical stare decisis work, it must be supplemented by a rule of horizontal stare decisis for intermediate appellate courts that sit as panels.
    Once again, we must modify the principle of constitutional exclusivism. We now have the following formulation:
      It is the constitution (and not judicial decisions) that is a genuine source of normative authority for the Justices of the Supreme Court. Supreme Court Justices are obligated by the Constitution--not by their own precedents--but individuals and other officials are bound by judicial interpretations of the constitution. Lower court judges are also bound by the interpretations of higher courts as specified by the doctrines of the law of the case and vertical stare decisis. Panels of intermediate appellate courts are bound by the doctrine of horizontal stare decisis to follow the prior decisions of the court (Circuit).
    Although there is room for disagreement, I believe that many originalists would accept this modified version of the principle of constitutional exclusivism. It reflects the widely shared belief that three rules--(1) the doctrine of law of the case, (2) vertical stare decisis, and (3) horizontal stare decisis for intermediate courts--are essential features of the rule of law in a common-law system.
    Horizontal Stare Decisis in the Supreme Court My guess is that many formalists and originalists who are attracted to the idea of constitutional exclusivism would be willing to go along with the modified version of that principle that I have proposed. What matters, they might say, is the Supreme Court. When we said that it is the constitution that is the ultimate source of authority, we meant that the Supreme Court is bound by the original meaning of the Constitution and not by its own precedents. So this is where my argument starts to get truly controversial. I am going to try to convince you that the Supreme Court should consider itself bound by its own prior decisions, and that this is the best expression of constitutional formalism. Here we go!
    As a first step, let's just list the options we have with respect to the role of stare decisis in the Supreme Court:
      Option one: no horizontal stare decisis in the Supreme Court. That is, we could implement the big bang in the Supreme Court. The Court would set aside all of its precedents and decide each and every constitutional question de novo as if it had never arisen before.
      Option two: an instrumentalist conception of precedent. Of course, even if the big bang were limited to the Supreme Court, there would be serious problems. First, there is the obvious problem of a potentially rapid and disruptive change in the basic structure of our federal system. Second, there is the continuing problem that without a doctrine of precedent, constitutional meaning can remain unsettled. Given the problems of textual ambiguity, incomplete historical knowledge, and partiality, it is quite likely that a system without precedent would produce flips and flops during those historical periods when the Court was closely balanced between adherents of different political ideologies and the control of the Presidency passed back and forth between the parties. One solution to these problems is an instrumentalist conception of precedent. The Court could balance the interests in constitutional stability and the protection of social expectations against the need to correct constitutional error, relying on precedent when the former interests were weightier than the latter.
      Option three: a formalist conception of precedent. There is a third option. The Supreme Court might consider itself bound by its own prior decisions. Of course, even a strong doctrine of horizontal stare decisis can allow for the overruling of precedent for reasons of precedent. When one case is egregiously out of line with surrounding doctrine, there will come a time when the weight of precedent requires that the outlying case be declared a mistake and overruled or limited to its own facts. A formalist conception of precedent allows for constitutional evolution, but only at a glacial pace.
    Which of these three options should a constitutional formalist choose? Let me begin by arguing that a commitment to formalism is inconsistent with option two (instrumentalist precedent). Why? Option two would require the Supreme Court to engage in realist judging. If option two were put into practice by the Supreme Court, it would inevitably engage the Justices' subjective policy preferences whenever the costs and benefits of overruling a given precedent were the subject for discussion. Not only is this inconsistent with the spirit and central purpose of constitutional formalism, but as a practical matter, this methodology seems likely to undermine formalist modes of reasoning and push the Court in the direction of constitutional instrumentalism. Let me confess that I recognize that these arguments are sketchy--this is a blog, after all! But I hope that you see the general shape and direction of the argument, and are willing to move to the next step--at least for the sake of argument.
    If we rule out option two, we are left with option one--no precedent--and option three--formalist precedent. That is, we are faced with the choice between a limited big bang, on the one hand, and formalist evolution, on the other. Which of these two choices should a formalist prefer?
    Constitutional Exclusivism Revisited As we begin to consider the choice between options one and three, it is important to consider our current stance with respect to constitutional exclusivism. I would like to suggest that we have already gone a long way towards rejecting the logic of constitutional exclusivism. First, we have already concluded that the original meaning of the Constitution is not the exclusive source of binding authority on constitutional questions. We have agreed that the Constitution should be supplemented by the doctrine of law of the case, the doctrine of vertical stare decisis, and by the doctrine of horizontal stare decisis in intermediate courts of appeal. Once we have accepted those doctrines, we have accepted the principle that stare decisis should (sometimes) override the authority of the constitution's text and original meaning as understood by particular citizens, officials, or judges.
    Hold on there! Aren't you trying to con us? We have only conceded that lower courts should be bound by precedent. But the Supreme Court is special, isn't it? Because the Supreme Court has ultimate authority, isn't the Supreme Court the only court that really counts? But as a matter of fact, the Supreme Court isn't the only court that really counts. Indeed, as a practical matter, the Courts of Appeals are the courts of last resort for the overwhelming majority of litigants and cases. The Supreme Court hears fewer than 200 cases per year; the intermediate courts of appeals in the federal system hear tens of thousands of cases every year. The Supreme Court revisits several major constitutional issues each year, but the courts of appeals revisit virtually every important constitutional question in a variety of factual contexts on a regular basis. This point is crucially important: when you concede the normative authority of precedent in the lower courts, you have made a concession of tremendous practical importance.
    The Relationship Between Vertical and Horizontal Stare Decisis There is another way in which a concession with respect to vertical stare decisis has important implications for horizontal stare decisis in the Supreme Court. Enforcing a rule of vertical stare decisis requires the Supreme Court to itself adhere to a rule of horizontal stare decisis. Huh? Let me repeat that: if the Supreme Court actually enforces a doctrine of vertical precedent, then it must (at least partially) bind itself by a doctrine of horizontal precedent. Why? Imagine that a Circuit Court disregards a Supreme Court precedent and instead makes a decision based on its prediction of how the Supreme Court would likely decide the case if it were to go up on certiorari. And suppose the appellate judges are good predictors of how the new formalist Supreme Court would in fact decide the case. What happens next? If the Supreme Court does not follow its own precedents and instead affirms the lower court's decision, then the Supreme Court will have followed the principle of constitutional exclusivism but it will have failed to enforce the rule of vertical stare decisis. If the Court does enforce vertical stare decisis and reverses the lower court that decided on the basis of the original meaning of the text, then the Supreme Court will in effect be following its own prior (and pre-formalist) precedents. The lesson is simple: consistent enforcement of vertical stare decisis requires a substantial degree of adherence to horizontal stare decisis.
    Back to the Practical: Transition Costs So far, I have not relied on the practical problems with a big bang, but let's turn to those problems now. If the Supreme Court were to take a big bang approach to the return to original meaning, the nation would face substantial practical problems. Within just a few years, the whole structure of federalism and the separation of powers would be radically transformed. I don't know how to estimate the costs of a big-bang transition to constitutional formalism, but I suspect these costs would be quite high. Of course, we could manage these costs by adopting Option Two, the instrumentalist conception of precedent, but that option is inconsistent with our goal, the transition to a formalist constitution. That leaves only one option, a formalist doctrine of precedent. In the end, the formalist doctrine of precedent is the only approach that is simultaneously feasible, normatively attractive, and true to the principles of formalism itself!
    The Downside of Formalist Precedent Of course, this picture is not all rosy. If the Supreme Court adheres to precedent, then the transition to constitutional formalism will be a slow one. Is this cost too high? I would like to suggest several tentative thoughts for your consideration:
    • Stopping instrumentalist constitutionalism in its tracks is by itself a very great good. Even if adherence to precedent absolutely precluded any movement towards the original meaning of the constitutional text, it would stop the current movement in the other direction. Even this modest achievement would not be inconsiderable.
    • A formalist understanding of precedent is narrower than the current, realist understanding. Realist Supreme Courts have adopted a realist conception of precedent. Essentially, the legal realists saw precedent as a prediction of what the Court would do in the future. This understanding of precedent licensed what might be called "legislative holdings." The Court begins a sentence with "We hold that . . ." and then fills in the blank with whatever rule it wishes to establish. (Miranda warnings are a good example.) A formalist theory of precedent would limit the holding of a case to the ratio decidendi, the principle actually required to resolve the case. If we moved to the formalist doctrine of precedent, the constraining force of the instrumentalist legacy of the Warren and New Deal Courts, although substantial, would not be insurmountable. New fact patterns and untested legislation would permit the Supreme Court to mount a gradual retreat from the high-water marks of instrumentalist constitutionalism. This would take decades, but it would not take centuries.
    • And the formalist doctrine of precedent would facilitate the transition to the original meaning of the constitutional text through the way it assigns gravitational force to precedents. The respect a precedent deserves would depend in part on the reasoning employed. Decisions based on a good faith attempt to read the text in light of its original meaning would be entitled to greater respect than decisions which pulled constitutional doctrines out of thin air. This rule would create a ratchet effect. Formalist precedents would have generative force; ultrarealist precedents would have little influence outside the specific constitutional issue and factual context. As more formalist precedents accumulated, the movement to constitutional formalism would begin to accelerate. As a practical consequence, this means that the transition to original meaning would proceed very slowly at first and then it would gradually begin to pick up speed. This process would allow social expectations to adjust and institutional arrangements to adapt. It would even allow time for constitutional amendments, if the pattern of change and reaction revealed that such amendments were necessary.
    In other words, if the Supreme Court were to adhere to a formalist doctrine of precedent, the result would be a slow but sure transition to constitutional formalism.
    A Final Formulation So let's give one final formulation of the principle of constitutional exclusivism, adding in our understanding of the proper role of horizontal stare decisis in the Supreme Court:
      It is the constitution that is the highest and most authoritative source of normative authority for judges. Supreme Court Justices are obligated primarily by the Constitution--but they are also obligated to adhere to precedent. Individuals and other officials are bound by judicial interpretations of the constitution. Lower court judges are also bound by the interpretations of higher courts as specified by the doctrines of the law of the case and vertical stare decisis. Panels of intermediate appellate courts are bound by the doctrine of horizontal stare decisis to follow the prior decisions of the court (Circuit). The Supreme Court should follow the ratio decidendi of its own prior decisions, even when these decisions are inconsistent with the current understanding of the original meaning of the constitution, but the force of such precedents shall vary with the extent to which their reasoning is based on the precedent, text, structure, and original meaning of the Constitution.
    Can we call this final formulation constitutional exclusivism or does it require another name? In one sense, this principle is still one of constitutional exclusivism. The original meaning of the constitution is still the ultimate source of authority. Yes, precedent plays a role, but that role is limited and constrained. On the theory that I have offered in this post, precedent is the servant of original meaning. Given the current practice of the Supreme Court, that relationship is reversed--original meaning today is only the handmaiden of the power of the Supreme Court to pronounce legislative holdings--and it is those holdings that provide the Constitution that governs.
    Conclusion Of course, there will be many originalists who find my arguments unpersuasive. They will say that we cannot and should not wait for decades for the restoration of the original meaning of the Constitution. There are those who will argue that the transition to formalism must proceed with all deliberate speed. The partisans of original meaning may concede that a big bang is neither feasible nor desirable, but nonetheless contend that originalist judges should use the levers of judicial power to move as rapidly as feasible and desirable towards that goal. That is, there are originalists who will argue that we should employ instrumentalist means to achieve originalist ends. But I would like to suggest that this approach to judging is, in the end, inconsistent with the greatest strength of constitutional formalism. In an age of politicization, where judges are increasingly selected or defeated on the basis of political ideology and the judiciary has come to be seen as the third political branch, the greatest strength of constitutional formalism is that it offers a real alternative to a politicized judiciary. The central idea of constitutional formalism is that judges should follow the rules laid down--deciding the cases before them on the basis of law and not on the basis of an agenda of preferred ends that are ultimately selected on the basis of political ideologies. If constitutional formalism is perceived as a cover for other political agendas, it will fail. For constitutional formalism to succeed, its methods must be formal, through and through. When it comes to precedent in a common law system, the formal path is clearly marked. Following the rules laid down in constitutional cases means following precedent. But following precedent does not mean following the legislative pronouncements of realist judges. A formalist doctrine of precedent inevitably (but gradually) leads to following decisions that are based on a good faith reading of the original meaning of the constitutional text. A return to the formalist conception of precedent is, in the final analysis, the best way of getting to constitutional formalism.
    Update: Stuart Buck has comments here, with a reply by C.E. Petit here. Will Baude is Off to Class. Yglesias suggests that I have gotten to formalism through legal realism. Pejmanesque comments here. More comments from Strange Doctrines here. Randy Barnett comments on the post and the event from which it derives. And Glenn Reynolds suggests that neoformalist legal theory might be called "legal legalism."


 
Bebchuk on Takeovers Lucian Arye Bebchuk (Harvard Law School) has posted The Pressure to Tender: An Analysis and a Proposed Remedy (Delaware Journal of Corporate Law (DJCL), Vol. 12, pp. 911-949, 1987) on SSRN. Here is the abstract:
    This paper provides a compact account of the problem of distorted choice in corporate takeovers. (A more detailed account is provided in, "Towards Undistorted Choice and Equal Treatment in Corporate Takeovers"). I analyze how the tender decisions of shareholders facing a takeover bid might be distorted. I also put forward an approach for addressing this problem, as well as analyze several alternative remedies.


 
Anderson on Military Lawyers and the Laws of War Kenneth Anderson (Washington College of Law, American University) has posted The Role of the United States Military Lawyer in Projecting a Vision of the Laws of War (Chicago Journal of International Law, Vol. 4, No. 2, Fall 2003) on SSRN. Here is the abstract:
    This article discusses the role of the United States military lawyer in projecting a moral vision of the laws of war, rather than simply acting in a technical and purely lawyerly fashion. The article argues that it is essential that US military lawyers acting in laws of war matters, and especially acting in diplomacy related to laws of war treaties, be willing to project the moral vision which underlies US military interpretations of the laws of war, beyond pure legalism, in order to compete with the moral visions of the laws of war expressed by human rights and other nongovernmental organizations (NGOs) who, believing that they represent humanity in the abstract, rather than some set of parochially national interests, therefore believe that the interpretation, enunciation, evolution, and ownership of the laws of war properly belongs to NGOs and not to military establishments, and not to the US military establishment in particular. The article considers the cases of the Ottawa convention banning landmines, the International Criminal Court debates, and the treatment of detainees at Guantanamo to critique the failure of US military lawyers to assert a moral vision of the laws of war that goes beyond mere national interest or the interest of the United States as a client. It concludes with a call for US military lawyers and their institutional government client to find a role for military lawyers to express a moral vision based around the core concept of the protection of noncombatants, and move beyond mere legalism.


 
Langevoort on Technological Change as the Cause of Financial Scandals Donald C. Langevoort (Georgetown University Law Center) has posted Technological Evolution and the Devolution of Corporate Financial Reporting on SSRN. Here is the abstract:
    The role of technological evolution as a potential causal factor in the recent financial scandals has not yet been fully explored. This paper looks at technology-induced changes in the issuers’ marketplace environment, in the trading behavior of investors and in the tools employed by technology-oriented firms to make the case that motive, opportunity and the potential for rationalization of less-than-candid financial reporting were intensified by these trends. In particular, these forces suggest that some sizable portion of financial misreporting was not selfish on the part of managers but a predictable feedback loop generated by competitive forces. If so, there are important lessons to be learned with respect to the appropriate forms of (and forums for) deterrence, as well as with respect to on-going debates about the philosophy of financial reporting.


 
Kamin on Harmless Error Sam Kamin (University of Denver College of Law) has posted Harmless Error and the Rights/Remedies Split (Virginia Law Review, Volume 88, No. 1, pp. 1-86, March 2002) on SSRN. Here is the abstract:
    This Article will proceed in three parts. Part I will recount the history of the harmless error doctrine in the United States, comparing and contrasting it to the other constitutional doctrines that separate rights and remedies principally, qualified immunity and non-retroactivity in criminal appeals. This analysis leads to two conclusions. First, none of these doctrines can exert a positive influence on the substance of the law if treated as a threshold question. That is, unless courts look to the merits of constitutional claims first, and only after resolving those claims look to whether the prevailing party would be entitled to a remedy, these doctrines will serve to stagnate constitutional law rather than allow it to grow and develop. Second, although each of these doctrines, if properly applied, has the capacity to influence positively the development of constitutional law, only harmless error has the capacity to permanently sever rights from remedies. Because non-retroactivity and qualified immunity place later claimants in a better position than earlier ones, the likelihood of a remedy being provided to harmed parties increases over time. By contrast, the harmless error inquiry treats each case in a vacuum; later claimants are no better off than are earlier ones, and there is less impetus for government agents to change their behaviors to conform with the law. In Part II, I will point to a concrete example of harmless error's capacity to create a firewall between constitutional rights and remedies. Drawing on a database of nearly 300 California Supreme Court decisions in death penalty cases, I will show that during a ten year period, over ninety percent of death sentences imposed by trial courts were upheld on appeal even though nearly every case was found to have been tainted by constitutional error. This analysis illustrates both how malleable harmless error is in practice and how powerful a tool it can be for a court that wishes to affirm (or reverse) a decision below. Part III will present a modest proposal for reform of harmless error doctrine in the United States. In that Part, I will draw on the conclusions reached earlier to propose two changes in the way the harmless error doctrine is applied. First, harmless error analysis should not be made a threshold question. That is, a court should never defer the merits of a defendant's claim by finding that any error that might have occurred at his trial was harmless. Rather, courts should begin their analysis by considering the merits of the constitutional claims brought by criminal defendants and should rule on the harmlessness of trial errors only once they have found that those errors in fact occurred. Second, and more fundamentally, I will argue that in order to make harmless error function more like qualified immunity and non-retroactivity, its structure must be changed in order to make it more closely resemble those doctrines. To wit, I argue that the doctrine must contain a temporal component if it is to change not only the substance of constitutional law but also the behavior of government agents; the doctrine must put later litigants in a better position vis-a-vis recovery than earlier litigants. The most effective way to do this, I will argue, is to borrow the reasonableness standard from qualified immunity. I will propose that if a prosecutor should have known that her conduct was constitutional error, the government may not seek to benefit from the harmless error rule with regard to that error. 
It is only if both suggestions are adopted that the desired effect can be achieved. Without the first change, important questions of constitutional law will not be reached; without the second change, there will be little pressure on prosecutors to comply with the law.


Wednesday, January 07, 2004
 
Read this Post! Here. Actually, I would especially like you to read Stephen Bainbridge's report of remarks by Villanova law school Dean Mark Sargent, if you happen to be University of San Diego Dean, Dan Rodriguez!


 
More on the Original Meaning of the Copyright Clause C.E. Petit over at Scrivener's Error has a very thoughtful post that responds to my ruminations on the original meaning of the copyright clause, found at the end of Blogging from Atlanta 05, Association of American Law Schools, Section on Constitutional Law, Copyright and the First Amendment.


 
Rawls in the Blogosphere Check out Micah Schwartzman's post over at Crooked Timber.


 
Back from Atlanta I'm back from the AALS meeting in Atlanta. After a nightmarish 6 minutes at O'Hare, dashing from the end of terminal C to the end of terminal B, I arrived to a three-hour wait for my luggage at LAX. I managed to blog five of the sessions that I attended; here is the roundup of the posts. I also wanted to say how much I enjoyed the sessions in which I was a participant. The Federalist Society was kind enough to invite me to speak at a panel on "Transitions to Originalism." There was a terrific discussion, focused mostly on the role of precedent for originalists, with short talks by myself, Steve Calabresi, Richard Kay, Michael Rappaport, and Keith Whittington. (Kyron Huigens blogged some comments here.) I had organized another session for the AALS on Randy Barnett's book, Restoring the Lost Constitution. The panel included Steve Griffin, Sandy Levinson, Keith Whittington, and Mark Tushnet, with Barnett responding. I'm biased, of course, but I thought that it was absolutely terrific!


 
Frischmann's Institutional Theory of International Law Brett M. Frischmann (Loyola University of Chicago, Law School) has posted A Dynamic Institutional Theory of International Law (Buffalo Law Review, Vol. 51, 2003) on SSRN. Here is the abstract:
    This article develops a dynamic institutional theory of international law that integrates and builds from insights in the legal, economics (game theory), and international relations disciplines. While a number of scholars have applied game theory and international relations theories to international law, this theory is both novel and useful because it provides a theoretic framework for (1) analyzing international commitments, compliance institutions, and the dynamic process by which international legal regimes evolve; and for (2) examining and comparing the strategic institutional approaches taken to address compliance issues in different regimes. Each of these contributions is significant. With respect to the first contribution, international scholars have not developed a rational choice theory that integrates consideration of commitments, institutions and dynamicism. The theory that comes closest is iterated game theory, but, as noted below, the iterated game theory fails to account for the dynamic nature of international cooperation and the institutions that States create to maintain regime stability in the face of dynamic change. With respect to the second contribution, international scholars have not developed a theory that supports comparative analysis of the strategic institutional approaches taken to address compliance issues. The dynamic institutional theory highlights compliance strategies that have received very little attention by international scholars despite the prominence of such strategies in practice. This theory extends the iterated game theory model (for example, the iterated prisoners' dilemma), which is often used by rational choice theorists in the international relations and international law disciplines to study international cooperation, by recognizing first that iterated games actually evolve and second that States create institutions to cope with this evolution and sustain cooperation in the face of dynamic change. States understand when entering into an international agreement not only that they face noncompliance risks as traditionally conceived (defection based on incentives presented in iterated game context, for example), but also that dynamic change may threaten the stability of the game (unforeseen events may cause payoffs to change in magnitude or become more or less certain, for example). Accordingly, ex ante, States design institutions to monitor State behavior and adjust payoffs either by rewarding cooperators or punishing defectors - as predicted by traditional game theory - but also to maintain cooperation in the face of dynamic change - as predicted by a theory of evolving games. States create institutions to reduce uncertainty and transaction costs associated with dynamic change and to adjust commitments in future iterations. Such institutions facilitate internal change and maintain cooperation by relieving parties of the need to return to the bargaining table every time the game structure changes. This new theory provides a powerful framework for analyzing international legal commitments, institutional mechanisms created by parties to an international agreement to encourage and facilitate cooperation over time ("compliance institutions"), and the dynamic process by which international legal regimes evolve. Moreover, the theory facilitates analysis of compliance institutions and strategies, revealing important differences in the manner in which States address perceived risks of strategic defection and dynamic change. 
The article specifically contends that States pursue three types of compliance strategies: Type I strategies focused on adjusting States' incentives to comply by altering payoff structures (the expected costs and benefits of (non)compliance); Type II strategies focused on facilitating cooperation by reducing transaction costs and uncertainty as the legal regime evolves; and Type III strategies focused on maintaining cooperation and improving regime effectiveness by dynamically adjusting commitments over time. Comparative analysis of compliance institutions illustrates that these strategies may be implemented through different types of institutions and that the optimal choice of strategy and institutions may vary considerably across issue-areas. The final part of the article applies the theoretical framework to the GATT/WTO regime as well as the international regime that regulates ozone depleting substances (the "Ozone regime"). Attention is given to these regimes because they have been effective in achieving treaty objectives, are often considered as models for the development of compliance institutions in related areas of international law, and are increasingly the focal point of interdisciplinary legal issues. Applying the dynamic institutional theory to the GATT/WTO regime reveals that, while international trade law has evolved into a relatively strong version of public international law, the strength of the current WTO regime does not derive from strict enforcement-oriented institutions aimed at deterring intentional noncompliance through the threat of sanctions, a Type I strategy. Despite its adjudicative, rule-based orientation, the WTO dispute settlement institution, which is the cornerstone of the WTO regime, actually appears to be management-oriented and facilitative in the sense that it primarily implements Type II and Type III compliance strategies and implements Type I strategies only on a limited prospective basis. This important finding is contrary to conventional wisdom and should inform debates regarding reform of the WTO as well as the design of future compliance systems. Overall, the WTO compliance system is designed to maintain regime stability by internalizing (within the structure of formal, legalistic institutions) issues that otherwise might prompt parties to work outside the system (in the realm of pure politics). Applying the dynamic institutional theory to the Ozone regime reveals the complex, multifaceted nature of the Ozone compliance system, which implements all three strategies through a host of innovative institutions. As a result of this system, the Ozone regime has experienced very high rates of participation and compliance while dynamically adjusting commitment levels and adding newly identified ozone depleting substances to the list of regulated substances. Notably, although the system includes institutions empowered to implement Type I strategies through both positive and negative means (side-payments and penalties), no significant penalties have been given. To date, the compliance system has operated primarily in "managerial mode" with the threat of enforcement lurking in the background.


 
Perino on Up-the-Ladder Reporting Rules Michael A. Perino (St. John's University - School of Law) has posted How Vigorously Will the SEC Enforce Attorney Up-the-Ladder Reporting Rules? An Analysis of Institutional Constraints, Norms, and Biases on SSRN. Here is the abstract:
    Section 307 of the Sarbanes-Oxley Act directs the SEC to adopt rules that require attorneys to report evidence of material violations of securities laws or breaches of fiduciary duty to the general counsel or CEO of the company. If those individuals do not respond appropriately to the evidence, the attorney must report the evidence to the audit committee or to another committee comprised solely of outside directors. The SEC adopted the required rules in February 2003. This paper, in commenting on an analysis of these new rules by Professors Susan P. Koniak, Roger C. Cramton and George M. Cohen, examines certain institutional features that may impact the SEC's willingness or ability to enforce the lawyer conduct rules vigorously in the future. This kind of institutional analysis is important because one of the primary justifications for requiring the SEC to promulgate and enforce these rules is the perceived failure of state bar authorities to discipline transactional lawyers. This comment suggests that with respect to enforcing professional responsibility rules, the SEC shares many characteristics with state bar authorities and that it is therefore reasonable to expect a very similar pattern of enforcement. In particular, the paper analyzes the SEC's budget and personnel constraints and the already enormous demands on the SEC's scarce resources. The paper then sketches the potential influence of cultural norms, constraints, and staff incentives at the SEC on future enforcement efforts in this area.


 
Shiffrin on Compulsory Public Education and the First Amendment Steven Shiffrin (Cornell University - School of Law) has posted The First Amendment and the Socialization of Children: Compulsory Public Education and Vouchers (Cornell Journal of Law and Public Policy, Vol. 11, pp. 503-51, 2002) on SSRN. Here is the abstract:
    The debate about public and private education raises important questions about the role of the state in promoting a certain kind of person and citizen, which has implications for liberal and democratic theory, the respective rights of children and parents, and the nature of religious freedom in a democratic society. In addressing these issues, Professor Shiffrin argues that the debate about compulsory public education has been oversimplified. Too often the argument has been that compulsory public education is always unconstitutional or, less frequently, that it is always constitutional. Similarly, much of the debate about vouchers contends that they are always good or always bad or that vouchers to religious schools either always do or always do not violate the Establishment Clause. Shiffrin maintains that the interests of children and the state in public education have been underestimated and that government should in many circumstances be able to compel adolescents of high school age, but not pre-adolescents, to attend public schools. No U.S. government is likely to engage in such compulsion, and there are good political reasons not to do so, but analysis of the case for compulsory public education leads to support of a strong presumption against vouchers, at least at the high school level. This presumption, however, is more difficult to defend when public schools are relatively homogeneous or are providing inadequate education to poor children. Even if vouchers could generally be supported, vouchers to religious schools raise serious concerns about the appropriate principles of church-state relations in the American constitutional order. But these concerns might be overcome in certain circumstances. In short, compulsory public education should sometimes be regarded as constitutional and sometimes not; vouchers are generally to be resisted, but sometimes not; and vouchers to religious schools should ordinarily be considered unconstitutional, but sometimes not.


 
Schruers on ISP Liability Matthew Schruers (Texas Wesleyan Law School) has posted The History and Economics of ISP Liability for Third Party Content (Virginia Law Review, Volume 88, No. 1, pp 205-64, March 2002) on SSRN. Here is the abstract:
    The question of whether Internet Service Providers ("ISPs") should be liable for providing access to information that proves injurious to others has received enough attention in the preexisting literature that the normative arguments for various possible liability regimes have been substantially addressed. Indeed, courts can hardly pronounce upon the matter one way or another without being criticized as having taken yet another errant swing at Nietzsche's already severely abused nail. Revisiting these arguments is unlikely to provide new insights. Economic analysis, however, offers a unique perspective. Economic analysis has contributed to advances in several fields of legal thought, not the least of which is tort law. Nevertheless, the economic implications of ISP liability for third party content remain undeveloped despite being largely driven by tort considerations. This analysis will consider the economic implications of various liability regimes, and will conclude that relative to the available alternatives, the current regime, in which ISPs are almost completely immune from suit, is the most efficient. Therefore, the ultimate question is not whether an alternative regime would be more efficient, but how modifications to the present regime could maximize its efficiency. Part I of the discussion will present a survey of federal, state, and international case law on ISP liability for third party content, excluding cases addressing vicarious or contributory liability for third party infringement of intellectual property rights. This survey will explore how over the past decade, the U.S. standard for ISP liability began as a negligence rule and flirted briefly with a strict liability rule before Congress granted ISPs near-immunity in the Communications Decency Act. Later cases reinforced the no-liability rule by expanding the scope of the Act. In contrast, European courts that initially employed a strict liability rule for ISP liability ultimately converged on a negligence rule, giving European ISPs little chance to receive the deference accorded their American counterparts. Part II will approach the issue from a law and economics perspective and consider the economics of various liability regimes, referring to the cases in Part I for illustrative purposes. These regimes are considered in the order in which they appeared in U.S. jurisprudence: negligence, strict liability, and no liability, or conditional immunity. The negligence rule first adopted in the United States and presently employed by European courts holds ISPs liable when they fail to exercise due care in the monitoring of third party content. The strict liability rule, adopted briefly in the United States and for some time in Continental courts, holds ISPs liable for all injuries resulting from third party content. The no liability, or conditional immunity, rule presently employed in the United States holds ISPs almost completely immune from liability for third party content, conditioned only on the minimal responsibilities implicit in Section 230 of the Communications Decency Act. Part II will conclude that as a result of the unique nature of the ISPs' costs in monitoring content and the externalities involved, the present conditional immunity regime is the most efficient of the three. Part III will then consider the present regime on the assumption that, as some have argued, it produces suboptimal levels of monitoring.
On this assumption, Part III will analyze how subsidies could remedy the problem of shirking of monitoring duties, concluding that subsidization would be less costly than returning to a liability regime.


Tuesday, January 06, 2004
 
Blogging from Atlanta 05: Association of American Law Schools, Section on Constitutional Law, Copyright and the First Amendment
    Introduction It is the very last day of triple witching time in the legal academy. The American Society for Political and Legal Philosophy ended on Saturday, the Faculty Division of the Federalist Society ended on Sunday, but the AALS has its final sessions today. The Section on Constitutional Law has a terrific program organized on the relationship between copyright and the freedom of speech. The program has its genesis in last year’s program and a comment made by Margaret Jane Radin about the importance of copyright to the theory of the freedom of speech.
    The program is being moderated by Randy Barnett (Boston University). Randy opens the program by introducing the speakers, Ed Baker (Penn), Tom Bell (Chapman), Neil Netanel (Texas), and Jessica Litman (Wayne State). Quite a lineup!
    Netanel Netanel begins by observing that the conflict between freedom of speech and copyright is not obvious to everyone. So, he begins with examples. First, former Senator Alan Cranston was a journalist in the 30s. He was horrified to learn that an English translation of Mein Kampf had been edited to make it more palatable. He published his own translation with commentary. His version was enjoined as violating copyright. Second, the founder of the Worldwide Church of God wrote a tract, which was suppressed by the Church because it included racist material. A dissident church wanted to publish it, but they were enjoined. Third, a documentary filmmaker made a documentary on the San Francisco Opera—in the documentary there is a shot of the stagehands playing checkers. In the background, on the television, the Simpsons was playing--4 ½ seconds of background. The documentary was picked up by PBS, and clearances were required. 20th Century Fox wanted $10,000 for the clip. So Homer Simpson was digitally removed from the documentary. This is a frequent problem for documentary filmmakers.
    The examples show that there is a conflict between copyright and the freedom of speech, but some courts have gone so far as to state that copyright is categorically immune from first amendment challenge.
    Netanel now turns to Eldred v. Ashcroft. What did the Court have to say about the DC Circuit’s statement that copyright is categorically immune from first amendment challenge? Surprisingly, the Court did not accept the plaintiff’s argument that intermediate scrutiny applied to the retroactive extension of copyright terms.
    The Court says that freedom of speech bears less heavily when you are not making your own speech. First, says Netanel, that was flat-out wrong. Handing out the bible or the Communist Manifesto is obviously within the core of the First Amendment. Second, copyrighted material can be incorporated in original speech, e.g. the Simpsons clip in the documentary. How did the Court justify this? The Court had three arguments: (1) the first amendment was adopted close in time to the copyright clause. Netanel ridicules this argument, arguing that it is irrelevant, because the first amendment limits many other Article I powers and it was adopted close in time to them as well; (2) copyright has free speech benefits, e.g. it promotes the production of speech. Netanel argues that this does not entail that there should be no first amendment scrutiny; (3) copyright law has built-in first amendment protections, including (a) idea/expression and (b) fair use. But these doctrines are notoriously inconsistent and frequently do not do the job of protecting free speech; however, footnote 24 suggests that the first amendment may guide the interpretation of these two accommodating doctrines.
    Baker Ed Baker is one of the most distinguished first amendment scholars working today. He begins with the observation that he finds Eldred to be unproblematic. Logically, says Baker, it seems absurd to suggest that freedom of speech would not limit copyright. It is true that the Constitution gives Congress power to regulate Commerce, but no one would think that Commerce Clause enactments are exempt from free speech scrutiny.
    Baker’s first suggestion is that we throw out the idea of levels of scrutiny as the basis for applying free speech doctrine to copyright; the levels of scrutiny are not well designed to achieve the purposes of the first amendment here. There are competing theories of the freedom of speech. If the freedom of speech is a liberty to say whatever you want, then a law that says you can’t say something because someone else said it first restricts that liberty. As to individual speech, any copyright limitation at all is unconstitutional. The liberty is not a liberty to sell speech, but to say what you choose to say.
    If one goes back to the history of copyright law, commercial copying was the primary concern. Early copyright law did not constrain the individual. On the other hand, the press clause protects a particular institution—the press. Does copyright interfere with freedom of press? Generally, structural regulation of the press has been upheld. This has been true even if the regulation was concerned with content. The Newspaper Preservation Act was concerned with diversifying content. Mail subsidies for Newspapers were concerned with promoting news content. Public access channels also promote specific content, as does the requirement for having programming aimed at children. Access regulation also is concerned with promoting specific content, e.g. political speech.
    So, copyright, which promotes the press, can be seen as promoting a more robust communications system. But does it really do that? Whenever content is suppressed, there is a first amendment problem. The answer to the question whether copyright promotes speech depends on the content of copyright law. So the built-in protections (idea/expression, fair use) are important. One example: if the copyright statute were amended to allow the copyrighting of facts and ideas, that would be problematic. Unpublished material should be especially protected by the freedom of speech—hence, there should be no copyright power to withhold material from publication.
    Baker ends by suggesting that the doctrinal issues on content discrimination are particularly confused. First, content discrimination should not be as important as the court suggests. Second, copyright is content-based in the sense that content is being regulated, but particular viewpoints are not targeted.
    Bell Tom Bell is next. His thesis is that private alternatives can make copyright unconstitutional. Here’s the argument in three steps. (1) Courts have generally interpreted free speech to allow only state restrictions on speech that prove more efficacious than private remedies. (2) Copyright restricts speech. (3) Courts, therefore, should ask whether non-statutory protections can do just as good a job as copyright.
    By private alternatives, Bell means “self-help remedies”: a private party’s act, neither prohibited nor compelled by law. Another private remedy is contract, with the civil process in the background. For example, the problem of door-to-door solicitation can be controlled privately, by fences and trespass law. Moreover, the strict scrutiny test itself requires that we look at private remedies. In Cohen, the “f*ck the draft” case, the Court said that the private remedy (turn away and avert your eyes) is a sufficient remedy. In ACLU v. Reno, the Court suggested that the availability of filtering software—a private remedy—is relevant to the free speech restriction on Internet pornography. All of these examples were intended to establish step one in Bell’s three-step argument.
    Bell can skip step two, since the other members of the panel have established that copyright restricts free speech. What about step three? What about self-help mechanisms such as digital rights management (encryption)? What about contractual protection of copyright? What about the “first use” doctrine?
    Some new rhetoric, says Bell, would help. One idea that might help is to look at copyright as a form of welfare. These are special benefits, but we can ask whether authors can “get off the dole.”
    Litman Jessica Litman is the final member of the panel. She begins with the observation that the internal limits are largely incoherent and a mess. Moreover, she argues, the Digital Millennium Copyright Act does not provide even these limits. There is no fair use limitation built into the Anti-Circumvention provisions in the DMCA. There are exceptions; for example, you can hack into content-filtering software to get a list of blocked sites. But you cannot create and distribute software that allows you to do this.
    Litman now turns to the use of Section 1201 to prevent disclosure of information that is true and otherwise not unlawful. Section 1201 does not distinguish between ideas and expression. Under Section 1201, you cannot circumvent even for the purpose of getting access to information and ideas. DVD players include CSS—an encryption system that prevents DVDs from being played on unlicensed machines. Litman now discusses the DeCSS case—a Norwegian teenager posted DeCSS on the Internet. The trial court held that posting DeCSS or linking to a site with DeCSS was covered by the anti-circumvention provisions and that posting and linking were not protected by the First Amendment, because of the nonspeech (e.g. functional) aspects of DeCSS.
    This is most scary in the case of Ed Felten’s research on decryption. When he was going to present his research at a conference, the industry sent a threatening letter. Felten then withdrew his paper, and this is evidence of the chilling effect of the DMCA on what is clearly a free speech interest.
    Discussion I asked the first question, and my point is described below under the next bold heading.
    The next question is asked by David McGowan. He makes several detailed criticisms of Litman’s examples. His points are quite sharp, but too detailed for me to report here. He then asks, “What is the theory of free speech that will take care of all of these examples?”
    Charles Marvin raises the Berne Convention (which is largely modeled on French law). The Berne Convention is based on authors’ rights, not on a utilitarian theory.
    Larry Alexander is next. How is the protection given by copyright to Mein Kampf different from protection by contract or property law? The rationales for lawyer confidentiality have a similar rule-utilitarian structure.
    Margaret Radin suggests that property rhetoric came into intellectual property law in the nineteenth century. We should ask for arguments rather than rhetoric. One way of doing that: an economic cost-benefit analysis could be read into the necessary and proper clause. There is such a thing as overpropertization as well as underpropertization. We should bring in the social choice critique of the process by which copyright law is enacted.
    A Comment on Originalism and the Relationship Between the First Amendment and the Copyright Clause As Neil Netanel was discussing the relationship between the first amendment and the copyright clause, I was thinking about originalist approaches to Congress's intellectual property powers. Netanel thought that it was completely obvious that the first amendment must create a “liberty right” that limits what would otherwise be a plenary copyright power. (The word “plenary” is mine, not Netanel’s.) Netanel’s view accords with the contemporary (post-New Deal) understanding of the relationship between Congress’s Article I powers and the Bill of Rights. The modern view is that Article I creates an ocean of power in which the Bill of Rights provides islands of liberty. The original understanding, at least in its Madisonian version, may have been quite different.
    Suppose that the Framers saw the relationship between Article I power and the Bill of Rights differently. Suppose they saw islands of power in a sea of liberty. Suppose they believed that the first amendment’s primary function was to reinforce the message of Article I: that the powers therein granted were limited and properly construed simply would not give Congress power to restrict the freedom of speech or of the press. That is, suppose that the original understanding was that the Article I powers contained internal limits that, if respected, made the collision between freedom of speech and copyright impossible.
    Given this understanding of the first amendment, the first question we ought to ask is how the copyright power could be construed so as to provide internal limits on that power that avoid any collision with freedom of speech and press. With this point in mind, it is interesting to consider the first copyright act, which was dramatically narrower than current law. The term was much shorter—a maximum of 28 years. More importantly, the scope of copyright itself was narrower—for example, no derivative works were covered. Given this narrower understanding of the copyright power, the kinds of examples that Netanel discusses simply would not arise.
    On the original understanding of the copyright power, it is not clear that there was any possibility of collision between copyright and freedom of speech. Perhaps the key to understanding the copyright power is to construe that power so as to avoid collision with free speech values.


 
AALS Today The AALS Annual Meeting continues today. Highlights include the Section on Constitutional Law: Does Intellectual Property Threaten Freedom of Speech?


 
Gross on Handwriting Analysis Samuel R. Gross (University of Michigan Law School) has posted Detection of Deception: The Case of Handwriting Expertise on SSRN. Here is the abstract:
    In Shakespeare's Twelfth Night, Lady Olivia's renegade uncle, Sir Toby Belch, conspires with her maid, Maria, to set a trap for Malvolio, Olivia's officious, ambitious, and humorless steward. They plant an anonymous love letter that appears to be directed to Malvolio, in what seems to be Lady Olivia's handwriting. Malvolio finds it and falls for it.
Terse abstract!


 
Olken on White Samuel R. Olken (John Marshall Law School) has posted Historical Revisionism and Constitutional Change: Understanding the New Deal Court (Virginia Law Review, Volume 88, No. 1, pp 265-326, March 2002). Here is the abstract:
    Now into the fray comes Professor G. Edward White, one of the nation's preeminent legal historians and the author of several important books about the intersection of law and history. Perhaps none of his books is more important, however, than his most recent work, The Constitution and the New Deal, an elegant and masterful study of the transformation of the constitutional jurisprudence of the United States Supreme Court during the first half of the twentieth century. Primarily adapted from several law review articles the author published in leading law reviews throughout the past decade, this book re-examines the strands of early twentieth-century constitutional jurisprudence. Not only does it reinforce Cushman's conclusions about the pace of jurisprudential change, it also approaches the issue of reconciling the New Deal and the Supreme Court as a problem of historiography. White offers a revised historical account of early twentieth-century constitutional thought that analyzes the broad contours of change in historical context. Rather than focus on doctrinal intricacies, the book makes selective use of academic commentary from the subject period and representative Supreme Court decisions to illustrate the arc of constitutional development in several areas, including a few often neglected by scholars of this era. In essence a study of intellectual constitutional history, it also provides extensive criticism of traditional historiography and posits that much of the contemporary misunderstanding about the role of the Supreme Court during the New Deal emanates from flawed historical methods and modernist assumptions about the judicial behavior of early twentieth-century Supreme Court Justices. To this end, White seeks to recapture the constitutional jurisprudential debates of this era and to advance a more complicated and richly nuanced account of transformative constitutional events. From this perspective, the New Deal and the Court-packing plan recede in importance as catalysts of constitutional change and instead become historical episodes stripped of their mythical importance, which White attributes to the indiscriminate use of political labels and behavioralist presuppositions of generations of scholars. In many respects, White succeeds in attaining his ambitious objective and has written a compelling revisionist history of one of the more controversial and misunderstood periods of American constitutional history. This Book Review corresponds to White's method of complicating and revising the conventional perspective. After an introductory discussion of the concept of revolution, Part I will address the conventional account of the constitutional revolution of 1937 and the factors White attributes to its enduring position of distorted significance. Part II will examine and respond to White's treatment of three areas of constitutional jurisprudence complicating the conventional account: foreign relations, administrative law, and free speech. With much precision and careful analysis, White illuminates the developments of these areas of law and, for the most part, effectively supports his revised narrative of early twentieth-century constitutional change.
Finally, in Part III, this Book Review will examine the heart of White's effort, namely his alternative explanation for the transformation in early twentieth-century constitutional jurisprudence, particularly his emphasis on the ascendancy of modernism and the connection between the Supreme Court's internal intellectual climate and developments in both private and public law jurisprudence. To this end, White offers a detailed and shrewd account of the relationship between the formalism/realism debate in common law and the notion of constitutional adaptivity in political economy constitutional law. As I will discuss below, White's analysis overlooks, at certain points, factors that would even more fully develop his already in-depth treatment of this period of constitutional change. Nevertheless, he generally succeeds in providing a reasoned, subtle, and persuasive revision of the change in constitutional jurisprudence of the early twentieth century.


 
Georgakopoulos on the LaSalle Decision Nicholas L. Georgakopoulos (Indiana University School of Law - Indianapolis) has posted New Value, After LaSalle (Bankruptcy Developments, Forthcoming) on SSRN. Here is the abstract:
    The LaSalle opinion ended doubts about the continued existence of the new value exception to the absolute priority rule. Reorganization plans that propose to issue securities in exchange for new contributions can be crammed down but under stricter criteria. After LaSalle new value plans must meet a market test. Thus, LaSalle appeared to revolutionize the cram-down process, forcing auctions in every new value plan. This Article surveys the experience since LaSalle. The few cases that applied it never ordered an auction or a true market test. Every plan proposed by debtors was rejected. In cases where competing plans were allowed, the choice among them was made by the court rather than any market. The experience with competing plans indicates that new contributions of unique assets that will serve the debtor's strategy may overcome objections. Pursuing the justifications of the fresh start policy, that bankruptcy law will prevent the incapacitation of individuals' productivity, reveals the possibility of a narrow exception to LaSalle's requirement of a market test for every new value plan.


Monday, January 05, 2004
 
AALS Today The AALS Annual Meeting continues today. Highlights include:
  • Co-Sponsored Program of Sections on Criminal Justice and Law and the Social Sciences Errors in the Jury Box: A Challenge to the Fairness of the Criminal Trial.
  • Co-Sponsored Program of Sections on Constitutional Law and Jurisprudence Restoring the Lost Constitution: The Presumption of Liberty. This program, moderated by your humble blogger, features Randy Barnett (Boston University), Stephen Griffin (Tulane), Sandy Levinson (Texas), Mark Tushnet (Georgetown), and Keith Whittington (Princeton).
  • Joint Program of Sections on Intellectual Property and Law and Computers Copyright, Contract, and Technological Protections for Digital Content.
  • Open Program on Law and Communitarian Studies, Norms, Mores and Law in a Communitarian Perspective.


 
Weekend Update On Saturday the Download of the Week was a new paper by Ronald J. Allen and M. Kristin Mace. Also Saturday, the Legal Theory Bookworm recommended books by Amy Gutmann, David Heyd, and William Galston. From Sunday, the Legal Theory Calendar and a discussion of the Rule of Law on the Legal Theory Lexicon.


 
Bebchuk & Kahan on Fairness Opinions Lucian Arye Bebchuk and Marcel Kahan (Harvard Law School and New York University School of Law) have posted Fairness Opinions: How Fair Are They and What Can Be Done About It? (Duke Law Journal, Vol. 27, pp. 27-53, 1989) on SSRN. Here is the abstract:
    Fairness opinions are a regular feature of every major corporate transaction. We analyze the conflict of interest problems that afflict fairness opinions and the extent to which courts should give weight to such opinions.


 
Korobkin on Williams v. Walker-Thomas Furniture Company Russell B. Korobkin (University of California, Los Angeles - School of Law) has posted A 'Traditional' and 'Behavioral' Law-and-Economics Analysis of Williams v. Walker-Thomas Furniture Company. Here is the abstract:
    Williams v. Walker-Thomas Furniture Company is a casebook favorite, taught in virtually every first-year Contract Law class. In the case, the D.C. Circuit holds that courts have the power to deny enforcement of contract terms if the terms are "unconscionable," and it remands the case to the lower court to consider whether the facts of the case meet this standard. This article, written for a session of the 2004 AALS Annual Meeting sponsored by the Contracts Section, analyzes the question that the D.C. Circuit posed to the lower court in Williams - and that Contracts teachers routinely pose to their students - from a "traditional" law-and-economics perspective, and from a "behavioral" law-and-economics perspective.


 
Snyder on Immigration Proceedings Kelley Brooke Snyder (US Court of Appeals for the Fourth Circuit) has posted A Clash of Values: Classified Information in Immigration Proceedings (Virginia Law Review, Vol. 88, No. 2, pp. 447-484, April 2002). Here is the abstract:
    The U.S. principle of separation of powers provides an established institutional mechanism for protecting both national security and due process rights. While courts are expert at deciding what procedures are required to protect an alien's interests, the executive is best positioned to protect national security. This Note will argue that allowing the judicial branch to balance national security interests against individual rights will lead to results skewed by current events: In times of crisis, national security concerns may supersede respect for aliens' rights, but in times of peace, national security may be treated too lightly. The better alternative is to return to the executive the difficult choice of how best to further national security, given the need to take action in an individual case as well as to protect ongoing investigations and critical sources of information. The executive branch should be constrained by a judicial review of due process rights that ensures that aliens subject to final immigration determinations are able to respond to allegations against them. Complete deference to the executive would allow this branch to ignore individual rights concerns, especially when the group targeted is unpopular or suspect. With the additional protection of judicial review, however, the powers of each may be properly allocated. This Note will first briefly review the due process rights of aliens and pivotal early Supreme Court cases that laid the groundwork for the continuing use of classified evidence in immigration proceedings, as well as the terrorism exception recognized in dicta by the Court in Zadvydas v. Davis. The Note will then consider two lower court rulings that are emblematic of pre-September 11 decisions, both holding that the use of classified information violates the due process rights of aliens: In these cases, the courts reached their holdings by balancing the government's national security interests against the aliens' due process rights. Next, the Note will discuss legislation passed in the mid-1990s that created new procedures for using classified evidence in deportation proceedings. Finally, the Note will compare the Classified Information Procedures Act ("CIPA") with the Alien Terrorist Removal Court ("ATRC"), two different mechanisms governing the use of classified information in immigration proceedings. The Note will conclude that CIPA is preferable because it eliminates the need for the judiciary to balance national security concerns against individual rights.


 
Cohn on Treaty Obligations for Affirmative Action Marjorie Cohn (Thomas Jefferson School of Law) has posted Affirmative Action and the Equality Principle in Human Rights Treaties: United States' Violation of Its International Obligations (Virginia Journal of International Law, Vol. 43, p. 249, 2002) on SSRN. Here is the abstract:
    This essay demonstrates that the United States violates provisions of international treaties and thereby United States law in failing to take affirmative action to remedy racial and gender inequality. It begins by analyzing the affirmative action provisions in two treaties ratified by the United States - the Convention on the Elimination of All Forms of Racial Discrimination, and the International Covenant on Civil and Political Rights. The affirmative action requirements in the Convention on the Elimination of All Forms of Discrimination Against Women, and the International Covenant on Economic, Social and Cultural Rights, which the United States has signed, but not ratified, are also detailed. This essay then explains that well-established principles of international law require the United States to abide by treaties that it ratifies, and to refrain from taking any action inconsistent with treaties it has signed. These principles of international law are effectively incorporated into the Federal Constitution's Supremacy Clause, which describes all treaties as United States law, without reference to whether a particular treaty is self-executing. This essay also discusses the current constitutional standards with respect to affirmative action and shows that United States treaty obligations constitute a compelling governmental interest supporting affirmative action programs. It does a comparative analysis of affirmative action in other countries. Finally, this essay provides examples of racial and gender inequality in the United States and contends that the persistence of inequality violates the United States treaty obligations to take affirmative action to achieve racial and gender equality.


Sunday, January 04, 2004
 
Legal Theory Lexicon: The Rule of Law
    Introduction This installment of the Legal Theory Lexicon provides a very short introduction to the idea of "the rule of law," aimed as usual at law students (especially first year law students) with an interest in legal theory.
    What is the Rule of Law? The ideal of the rule of law, which can be traced back at least as far as Aristotle, is deeply embedded in the public political cultures of most modern democratic societies. For example, the Universal Declaration of Human Rights of 1948 declared that "it is essential, if man is not to be compelled to have recourse, as a last resort, to rebellion against tyranny and oppression, that human rights should be protected by the rule of law." Although the ideal of the rule of law has been criticized on the ground that it is an ideological construct that masks power relationships, even Marxist critics may acknowledge that observance of the ideal may curb abuses by the ruling class.
    What is the ideal of the rule of law? An initial observation is that there are several different conceptions of the meaning of the rule of law. Indeed, the rule of law may not be a single concept at all; rather, it may be more accurate to understand the ideal of the rule of law as a set of ideals connected more by family resemblance than a unifying conceptual structure.
    Dicey's Influential Formulation Historically, the most influential account of the rule of law was offered by A.V. Dicey. His formulation incorporated three ideas:
      (1) the supremacy of regular law as opposed to arbitrary power;
      (2) equality before the law of all persons and classes, including government officials; and,
      (3) the incorporation of constitutional law as a binding part of the ordinary law of the land.
    Rawls on the Rule of Law A contemporary elaboration of the ideal of the rule of law is provided by John Rawls. He defines the rule of law as "the regular, impartial, and in this sense fair" administration of "public rules." In schematic form and with some alterations, Rawls offered the following conception of the rule of law:
      1. The Requirement that Compliance Be Possible. The legal system should reflect the precept that ought implies can.
        a. The actions which the rules of law require and forbid should be of a kind which men can reasonably be expected to do and to avoid.
        b. Those who enact the laws and issue legal orders should do so in good faith, in the sense that they believe "a" with respect to the laws and orders they promulgate.
        c. A legal system should recognize impossibility of performance as a defense, or at least a mitigating circumstance.
      2. The Requirement of Regularity. The legal system should reflect the precept that similar cases should be treated similarly.
        a. Judges must justify the distinctions they make between persons by reference to the relevant legal rules and principles.
        b. The requirement of consistency should hold for the interpretation of all rules.
      3. The Requirement of Publicity. The legal system should reflect the precept that the laws should be public.
        a. The laws should be known and expressly promulgated.
        b. The meaning of the laws should be clearly defined.
      4. The Requirement of Generality. Statutes and other legal rules should be general in statement and should not be aimed at particular individuals.
      5. The Requirement of Due Process. The legal system should provide fair and orderly procedures for the determination of cases.
        a. A legal system ought to make provision for orderly and public trials and hearings.
        b. A legal system ought to contain rules of evidence that guarantee rational procedures of inquiry.
        c. A legal system ought to provide a process reasonably designed to ascertain the truth.
        d. Judges should be independent and impartial, and no person should judge her own case.
      Absent from Rawls's formulation is the notion that the rule of law requires that the government and government officials be subject to the law. Thus, a sixth aspect of the rule of law might be added to Rawls's formulation as follows:
      6. The Requirement of Government under Law. Actions by government and government officials should be subject to general and public rules.
        a. Government officials should not be above the law.
        b. The legality of government action should be subject to test by independent courts of law.
    More can be said about the content of the ideal of the rule of law, but this exposition provides sufficient clarity for this brief introduction.
    The Values Served by the Rule of Law What values are served by the rule of law? Why is the rule of law important? Those are big questions, but we can at least give some quick and dirty answers. One reason that the rule of law is important has to do with predictability and certainty. When the rule of law is respected, citizens and firms will be able to plan their conduct in conformity with the law. Of course, one can dig deeper and ask why that predictability and certainty are important. Lots of answers can be given to that question as well. One set of answers is purely instrumental. When the law is predictable and certain it can do a better job of guiding conduct. Another set of answers would look to the function of law in protecting rights or enhancing individual autonomy. The predictability and certainty of the law creates a sphere of autonomy within which individuals can act without fear of government interference.
    Another way to look at the value of the rule of law is to focus on what the world would be like if there were systematic and serious departures from the requirements of the rule of law. What if the laws were secret? What if officials were immune from the law and could act as they pleased? What if the system of procedure were almost completely arbitrary, so that the results of legal proceedings were random or reflected the whims and prejudices of judges? What if some classes of people were above the law? Or if other classes were "below the law" and denied the law's protections? These rhetorical questions are intended to draw out a "parade of horribles" in your imagination. In other words, the rule of law serves as a bulwark against tyranny, chaos, and injustice.
    The Rule of Law and Bad Law One final question: "Is the rule of law a good thing, even if the laws are bad, unjust, or in the extreme case evil?" This question is too tough to take on in a systematic way, but here is one helpful thought. In a reasonably just society, one might believe that the rule of law is a good thing, even if some of the laws are bad. Certainty and predictability provide very great goods, which would be undermined if each judge or official picked and chose among the laws, enforcing the ones that the judge thought were good and nullifying the ones the judge thought were bad. But in a thoroughly evil society, the rule of law will be extremely problematic. Even an evil society may benefit from regularity in the enforcement of ordinary laws, but when it comes to horrendously evil laws, anarchy or revolution is likely to be preferable to the rule of law.
    Conclusion Sooner or later most law students run into a reference to "the rule of law," but in my experience, this idea is rarely explained when it's introduced. This entry in the legal theory lexicon is designed to give you a fairly solid foundation with respect to the content of the rule of law and to get you thinking about what functions the rule of law serves.


 
Association of American Law Schools Annual Meeting Continues Today The AALS Annual Meeting continues today in Atlanta. Today's highlights include the Jurisprudence Section program entitled "The Rationality of Rule-Following."


 
Federalist Society: Faculty Division Conference Continues
    Schedule of Events for the 2004 Federalist Society Faculty Division Conference
      Hyatt Regency Atlanta 265 Peachtree Street, NE Atlanta, GA 30303 404-577-1234 January 3-4, 2004
    Sunday, January 4, 2004
      9:00-10:30 a.m. Panel on "Direct Democracy"
        Prof. Gail Heriot, San Diego Law School (Moderator) Prof. Marci Hamilton, Cardozo Law School Prof. Daniel Lowenstein, UCLA Law School Prof. Maimon Schwarzschild, San Diego Law School
      10:45-11:45 a.m. Paper Presentations
        Panel A:
          Prof. Michael Lewyn, visiting professor, Rutgers-Camden Law School "Zoning without Zoning: How a City without Zoning Still Overregulates Land Use" Prof. David Callies, Hawaii Law School "The Endangered Species Act and Property Rights" Prof. Eric Claeys, St. Louis Law School "Euclid Lives? The Uneasy Legacy of Progressivism in Zoning"
        Panel B:
          Prof. David Bernstein, George Mason Law School "You Can't Say That!: The Growing Threat to Civil Liberties from Antidiscrimination Laws" Prof. Stephen Ware, Kansas Law School "'Consumer' Bankruptcy is a Misnomer"
      11:45-2:00 p.m. Lunch and Panel on "The Proper Role of Marriage Law"
        Prof. Margaret Brinig, Iowa Law School Prof. William Eskridge, Yale Law School Prof. Martha Fineman, Cornell Law School Ms. Maggie Gallagher, President, The Institute for Marriage & Public Policy Prof. Robin Wilson, South Carolina Law School
      2:15-3:15 p.m. Paper Presentations
        Panel C:
          Prof. Eugene Kontorovich, George Mason Law School "Liability Rules for Constitutional Rights: The Case of Mass Detentions" Prof. Rick Peltz, Arkansas-Little Rock Law School "Limited Powers in the Looking Glass: Originalism, Not Textualism, Triumphs when Activists in Malls Claim State Constitutional Rights" Prof. Ilya Somin, George Mason Law School "Federalism vs. State Autonomy"
        Panel D:
          Prof. Michelle Boardman, George Mason Law School "The Illusion of Terrorism Insurance" Prof. Tom Lee, BYU Law School "Trademark Dilution after Moseley v. V. Secret Catalogue" Prof. Dale Nance, Case Western Reserve Law School "The Case against Public Subsidies of Law Schools"
      3:30-5:15 p.m. Panel on "Transitions to Originalism"
        Prof. Randy Barnett, Boston University Law School (Moderator) Prof. Steven Calabresi, Northwestern Law School Prof. Richard Kay, Connecticut Law School Prof. Michael Rappaport, San Diego Law School Prof. Larry Solum, San Diego Law School Prof. Keith Whittington, Princeton
      5:30-7:00 p.m. Paper Presentations, co-sponsored with the National Association of Scholars
        Prof. James Lindgren, Northwestern Law School "Chasing Cherished Superstitions about Conservatives" Prof. John McGinnis, Northwestern Law School "Ideology and the Legal Academy"


 
Legal Theory Calendar
    Sunday, January 4
      Association of American Law Schools Annual meeting continues.
      Faculty Division of the Federalist Society Continues.
    Monday, January 5
      Association of American Law Schools Annual meeting continues.
    Tuesday, January 6
      Association of American Law Schools Annual meeting continues.


Saturday, January 03, 2004
 
Blogging from Atlanta 04: Federalist Society: Faculty Division--International Law and Constitutional Interpretation
    Introduction It is Saturday evening in Atlanta and I am blogging from the meeting of the faculty division of the Federalist Society. As I was walking over from the Hilton to the Hyatt, I walked part of the way with a former colleague, who asked where I was heading. When I answered, he said, “The Federalist Society? Larry!” Not an uncommon reaction, but in my experience completely unjustified. The Federalist Society meetings always provide some of the very best content at the AALS.
    McGinnis John McGinnis does a marvelous and witty job of introducing the topic—focusing on the use of foreign precedents, e.g. the use of decisions from other national courts in Lawrence v. Texas. He poses a series of questions for the panel, and then introduces the first speaker.
    Ramsey My colleague Mike Ramsey begins. Ramsey is an extraordinarily clear and intelligent speaker; it’s always a pleasure to listen to him. He begins with an obvious point—that McGinnis had not discussed “International Law,” but rather comparative law. Ramsey then turns to the question as to what the source of authority for international or comparative materials might be. Ramsey notes two ways in which comparative materials might be used. In Lawrence, Ramsey notes that both the State of Texas and Justice Scalia made claims about the universality of laws like the sodomy statute in Texas. In that context, the use of comparative materials seems entirely appropriate. If international materials are to be used, Ramsey says, they should not be used selectively. For example, the United States recognizes rights that many other societies do not, giving New York Times v. Sullivan as an example. He suggests also that our constitutional criminal procedure jurisprudence may also be out of line with international practice. As currently conceived, Ramsey argues, the use of comparative materials is cover for results that are in fact reached on other grounds. Finally, Ramsey says that there is a real empirical problem—the Supreme Court does not seem to do original and comprehensive research when it discusses comparative practices. He argues, for example, that secondary sources, such as UN reports, may (or may not) turn out to be empirically accurate. If we were to undertake a real examination, Ramsey suggests that we are unlikely to find world consensus. When there is consensus, Ramsey suggests, it is more likely that adhering to the consensus would result in the restriction, not the expansion, of rights.
    Spiro Peter Spiro (Hofstra Law School) begins by posing three questions: (1) “What is going on here?” Why are people worried? (2) What can be done about it? (3) Is it a good thing?
    What is going on? It may seem like trivial decoration, e.g. Atkins and Lawrence involve only modest use of comparative sources. Breyer in Knight was careful to say the comparative sources are not binding. But there is something quite serious going on. The use of international law to define American constitutional norms could erode a fragile national identity. The constitution has been central to American identity. Progressives are liberal nationalists, says Spiro. Liberal, yes, but also “nationalist.” Huh? The possibility that citations to comparative materials might undermine national identity seems rather far-fetched to me!
    Can the use of international norms be resisted? No, says Spiro. International actors have levers with which to discipline domestic actors. Detainees are being released from Guantanamo in response to international pressures.
      Comment: This portion of Spiro’s presentation seemed wholly unpersuasive to me. Yes, there are international pressures and because of globalization, these pressures may be growing, but these pressures are brought to bear on institutions other than the courts. The Kyoto accord may be an example of international pressure, but I had a difficult time seeing the link with Lawrence v. Texas. Spiro needs to offer a mechanism by which these pressures will be brought to bear on the process of constitutional interpretation. Perhaps I missed it, but I didn’t hear him offer any such account.
    Is the use of international norms a good thing? Spiro suggests that alien law may be being imposed on the United States. But on the other hand, it could be evidence of a redefinition of community. Territorial jurisdictions aren’t dictated by nature. One can describe the power of the constitution. There may be a perception that we are in the same boat with certain other communities.
    Young Ernest Young (Texas Law School) was next. He begins with the question, “Could Texas be occupied by UN troops?” He suggests that the concern with international norms dominating U.S. law is silly. He proposes three questions: (1) what domestic law question are we addressing? (2) what question are the comparative sources supposed to answer? (3) how well do we understand the comparative sources?
    On the first question, Young suggests that structural and federalism issues are particularly insusceptible to comparative analysis. Why not? Because of structural holism. Structures come in big packages. Young nicely analyzes Printz, where there was some discussion of commandeering in the European context. Pulling out one aspect from another system, Young argues, is problematic. On individual rights issues, Young says, it may be somewhat easier to use the international materials. Indeed, the Seventh Amendment as traditionally interpreted requires that American courts look at historical English practice.
    What are the international materials supposed to tell us? One use is to learn about consequences. We can look to other legal systems to see how various legal rules have actually worked elsewhere. For example, what would happen if we recognized a right to die? It would be crazy to shut our eyes to how such a right has worked in Denmark.
    What about understanding? Young argues that we don’t have a lot of expertise. Judges and law clerks lack training. So we have reason to doubt that our courts and judges will use comparative sources in an accurate way.
    Caveats: Some of the arguments made against originalism are the same as arguments made against comparative sources. There will, Young claims, be some areas where the comparative materials will be usable. Moreover, the institutional limitations of courts do not counsel against scholars looking at comparative sources.
    Finally, Young addresses the inevitability of international law. There are two groups of people who take international law seriously. One group focuses on trade and wants to bring back Lochner. The other group focuses on human rights and wants to bring back the Warren Court. Constitutional scholars, says Young, need to take international sources more seriously. Now that people are paying attention to comparative sources, lawyers will inevitably use these sources.
    Discussion Eugene Kontovorich asks the first question: “Who is the world?” We can count nations or we could look at what people in other nations think: public opinion in most nations favors the death penalty, even thought many nations have abolished the death penalty. Why just look at world elites?
    Michael Rappaport asks what would be the best way of trying to stop the use of international norms in constitutional interpretation. One possibility would be to start using comparative sources for conservative causes, triggering a liberal backlash?
    I asked the next question. There seems to be an assumption on the panel that although comparative materials are not binding authority, they are persuasive. But why are they persuasive? Whether or not comparative materials are persuasive depends on what theory of constitutional interpretation you have. The most natural fit for the relevance of comparative materials might be a theory like Dworkin’s theory, which makes questions like “What is the best conception of equality?” relevant to the interpretation of the due process clause. But on a theory like Dworkin’s, comparative materials are really only a source of arguments; they are relevant only for the reasons they provide. What then motivates the use of comparative materials? I suggested that the answer might be that comparative materials appear to be more legal—they may disguise the fact that the courts are using nonlegal norms as the basis for their decisions.
    Young replied to my question, arguing that comparative materials are relevant to the effects that laws have. They are admissible evidence because they “wiggle the mind.” Young claimed that every theory of constitutional interpretation would make such evidence relevant.
      Comment: I must admit that I found Young’s reply to my question to be surprising. First, he really did not answer the question. Young is arguing for the relevance of comparative data on the effects of legal policies. This is quite different from the question at hand, which is why comparative legal sources are persuasive. In other words, Young committed a category mistake. Second, I found Young’s assertion that every theory of constitutional interpretation would naturally focus on the consequences of recognizing a right (e.g. the right to assisted suicide) to be implausible. Perhaps Young finds instrumentalist theories of constitutional interpretation to be attractive, but he has just got it wrong when he asserts that every theory of constitutional interpretation is instrumentalist. Indeed, I would think that the notion that the Supreme Court should view its role as engaging in extensive factfinding about the effects of recognizing various rights is, at the very least, controversial. Most pre-realist theories of constitutional interpretation would reject the relevance of such evidence as an aid to constitutional interpretation. Put a little differently, Young assumes that constitutional interpretation is policymaking, but legal formalists reject that assumption.
    Jacob Levy asks the next question. We can ask the comparative question comparatively. For example, many other nations cite American constitutional law. For example, Canadian courts frequently use American law as a point of contrast, to explain how Canadian law is different. In the Australian and Canadian cases, there aren’t obvious howlers—they seem to get American law basically right.
    Young replies that there are howlers; other nations do get U.S. law wrong. He also suggests that there may be a loss of identity in Europe because of the reliance on EU institutions to protect human rights.
    Steve Calabresi suggested that the use of comparative arguments is quite old. The Federalist Papers used comparative arguments. John Marshall used Roman and civil law analogies. Frankfurter in the incorporation debates discussed the notion of “shocking the conscience of civilized peoples.” Calabresi also suggested that much comparative law may, in fact, be favorable to conservative causes—citing the example of German constitutional law.


 
Terry Eagleton in the New York Times Here. And a small taste:
    [T]he postmodernist giants — like Jacques Derrida and Roland Barthes — are over, [Eagleton] says. "The golden age of cultural theory is long past," Mr. Eagleton writes in his new book, "After Theory" (Basic Books), to be published in the United States in January. In this age of terrorism, he says, cultural theory has become increasingly irrelevant, because theorists have failed to address the big questions of morality, metaphysics, love, religion, revolution, death and suffering. Today graduate students and professors are bogged down in relativism, writing about sex and the body instead of the big issues. "On the wilder shores of academia," he writes, "an interest in French philosophy has given way to a fascination with French kissing."
Indeed.


 
Bertram on Otsuka Chris Bertram continues his commentary on Mike Otsuka's Libertarianism Without Inequality. Here is a taste:
    Similarly, I have to say that I’m not convinced by Otsuka’s suggestion that just because it might be rational for a person to gamble their freedom away (perhaps in the hope of gaining more freedom), then we should take the person who has so gambled to have signed away their freedom for life. Suppose I do voluntarily associate with others to form a theocracy but later repent of my decision and come to see it as the rash choice of an inexperienced youth — on Otsuka’s view that is just too bad. I am now bound by the laws of the new state and am subject to legitimate punishment for breaking them. There being no reason why I could not consent to a regime that included apostasy among its laws, when I renounce my religion I am legitimately punished.


 
Legal Theory Bookworm I thought that I would recommend three books by authors who were at the ASPLP sessions in Atlanta yesterday and today. Here are my recommendations:
  • William A. Galston, Liberal Purposes : Goods, Virtues, and Diversity in the Liberal State (1991):
      This book is a major contribution to the current theory of liberalism by an eminent political theorist. It challenges the views of such theorists as Rawls, Dworkin, and Ackerman who believe that the essence of liberalism is that it should remain neutral concerning different ways of life and individual conceptions of what is good or valuable. Professor Galston argues that the modern liberal state is committed to a distinctive conception of the human good, and to that end has developed characteristic institutions and practices--representative governments, diverse societies, market economies, and zones of private action--in the pursuit of specific public purposes that give unity to the liberal state. These purposes guide liberal public policy, shape liberal justice, require the practice of liberal virtues, and rest on a liberal public culture. Consequently the diversity characteristic of liberal societies is limited by their institutional, personal, and cultural preconditions.
  • David Heyd (editor), Toleration: An Illusive Virtue (1998):
      If we are to understand the concept of toleration in terms of everyday life, we must address a key philosophical and political tension: the call for restraint when encountering apparently wrong beliefs and actions versus the good reasons for interfering with the lives of the subjects of these beliefs and actions. This collection contains original contributions to the ongoing debate on the nature of toleration, including its definition, historical development, justification, and limits. In exploring the issues surrounding toleration, the essays address a variety of provocative questions. Throughout, the contributors point to the inherent indeterminacy of the concept and to the difficulty in locating it between intolerant absolutism and skeptical pluralism. Religion, sex, speech, and education are major areas requiring toleration in liberal societies. By applying theoretical analysis, these essays show the differences in the argument for toleration and its scope in each of these realms. The contributors include Joshua Cohen, George Fletcher, Gordon Graham, Alon Harel, Moshe Halbertal, Barbara Herman, John Horton, Will Kymlicka, Avishai Margalit, David Richards, Thomas Scanlon, and Bernard Williams. "When subtle thinkers probe tolerance with the acuity of this volume's contributors, we see both how far the notion stretches, and the profound challenges it poses to our habits of thinking."
  • Amy Gutmann, Identity in Democracy (2003):
      Are identity politics a needed defense against the tyranny of the majority, or a divisive impediment to the realization of individual rights and the common good? A little of both, and much more, according to this probing volume of political theory. Gutmann, a political philosopher, examines a wide variety of "identity groups" including religions, embattled cultural groups like French-Canadians, socially formative voluntary groups like the Boy Scouts; and "ascriptive groups" who bear an involuntary marker of difference, like racial minorities, homosexuals and the disabled. She argues that overlapping group identities are an inescapable part of every individual's political makeup, for good and ill. Identity groups have been in the forefront of efforts to expand individual rights and opportunity, she notes, and America's excessive economic inequality is in part due to the absence of a working-class identity politics that might bolster unions and demand more redistribution of wealth. On the other hand, identity groups like the Ku Klux Klan and orthodox religious groups that seek to curtail the rights of women pose a serious problem for democratic polities. Rather than being scapegoated or lumped in with other interest groups, identity groups must be carefully assessed to discern their alignment with fundamental democratic values of freedom, equality and opportunity. Gutmann's is a serious attempt to reconcile classical liberalism with contemporary multiculturalism. While it will not please ideologues on any side, her clear, nuanced and humane approach brings many valuable insights to this contentious debate.
    It was a great pleasure to hear all three authors speak. Heyd gave a marvelous talk and Gutmann and Galston were both extremely eloquent from the floor!


 
Download of the Week This week the Download of the Week is The Self-Incrimination Clause Explained and Its Future Predicted, posted by Ronald J. Allen and M. Kristin Mace (Northwestern University Law School and Independent). Here is the abstract:
    Like many areas of the law, the Fifth Amendment has defied theoretical explanation by scholars. We examine whether the fifth amendment cases can be explained with a relatively simple theory, and find that they can. The key to that theory is the recognition that, although never acknowledged by the Court, its cases make plain that "testimony" is the substantive content of cognition - the propositions with truth-value that people hold or generate (as distinct from the ability to hold or generate propositions with truth-value). This observation leads to a comprehensive positive theory of the Fifth Amendment right: the government may not compel disclosure of the incriminating substantive results of cognition that themselves (the substantive results) are the product of state action. As we demonstrate in this article, this theory explains all of the cases, a feat not accomplished under any other scholarly or judicial theory; it even explains the most obvious datum that might be advanced against it - the sixth birthday question in Muniz. There remain two sources of ambiguity in Fifth Amendment adjudications. First, compulsion and incrimination are both continuous variables - questions of degree. The Court has recognized this and set about defining the amount of compulsion and incrimination necessary to a Fifth Amendment violation. The result is a common law of both topics rather than a precise metric of either. These two variables are independent and do not interact, which reduces the complexity of decision making. Compulsion, in other words, is in no way determined by the extent to which the results are incriminating. Compulsion is determined on its own, as is the sufficiency of incrimination. The second source of ambiguity arises from the Court not explicitly equating "testimony" with cognition, though that is precisely what has controlled its decisions. Given that the Court's opinions have not focused on substantive cognition as the third element of a Fifth Amendment violation, it is not surprising that the Court has not clarified whether cognition, too, is a continuous or discontinuous variable. This is where the future lies. The Court will have to clarify two matters: first, whether the extent of cognition matters, and second, the derivative consequences of cognition. In addition, the Court will have to determine whether these two issues are, like compulsion and incrimination, independent. Does the extensiveness of the compelled cognition determine how far its causal effect will be traced? We then note that this "theory" does not look like a standard academic theory with its attendant emphasis on normative analysis. We examine whether the normal meaning of "normative justification" is a very useful one in any field of law with the range of the fifth amendment, point out that it is quite similar to the fourth amendment in this regard, and that scholarly efforts to discover its "true" justification may be doomed to failure. This does not mean that fields of law are unjustified, but perhaps that the justification must come in other terms. The terms plainly applicable to these two areas are the traditional ones of the rule of law. The Court has strived to make sense of ambiguous directives through creating and sustaining relatively clear legal categories and by responding to new situations through analogies to prior cases. We think it plausible that, however dull this may appear to the legal theorist, the legal system may be better off as a result.
The article thus adds to the growing literature concerning the nature of legal theorizing by demonstrating yet another area where legal theorizing in its modern conventional sense (involving the search for the moral or philosophical theory that justifies an area of law) has been completely ineffectual, whereas explanations that are informed by the presently neglected values of legality (clarity, precision, consistency, fidelity to authority) have considerable promise.
Highly recommended.


 
Blogging from Atlanta 03: The American Society for Political and Legal Philosophy, Session Three—Toleration and Recognition
    Introduction I’m blogging again from the meeting of the ASPLP in Atlanta. Jacob Levy gets the session started.
    Creppell Ingrid Creppell (George Washington University) begins. Her paper is titled “Toleration, Politics and the Common World.” She begins by discussing the point of toleration; toleration is premised on conflict and directed against cruelty and violence. Toleration is sometimes seen as idealistic, and sometimes as instrumental. She argues that toleration should be an end in itself. Understanding toleration requires understanding how people see themselves as members of a common world. Restraint followed by continued common life together defines toleration. Disagreement followed by disengagement is not toleration. So what kind of relationship is “toleration”? Walzer argues that “peaceful coexistence” is toleration.
    What did toleration actually entail? Identity change is necessary for toleration. Parties cannot remain who they are if toleration is to develop. Historically, there was an unmixable amalgam of traditional ideas about faith, new ideas about faith, and old ideas about the magistrate. New ideas were required for toleration to emerge. A modus vivendi was not enough; an internal change was necessary. This was an educative process.
    Toleration as interaction inherently includes “recognition.” If toleration is a relationship, what is the basis for mutual recognition? Key ideas were the rule of law and a realm that is free from interference. One approach is based on rights; reciprocity requires that one tolerate what the other does within their rights without harming others. Contemporary identity politics reacts to this. Strict political equality is not sufficient. A more direct recognition of diverse value is required. The rights interpretation and the identity politics view form a spectrum. Toleration as recognition seeks to respond to the politicization of individual identities.
    Some argue that it is part of the state’s job to protect group identity. We can distinguish between activist identity claims and preservationist claims. The vitality of the group is therefore an important consideration in determining the state’s obligation.
    Toleration as obligation depends on facilitating an active and interactive public sphere. Toleration is not just isolated enclaves or the pursuit of private interest without public engagement.
    But does this view make toleration an end in itself? The ever-presence of plurality entails that uniformity imposes pain on others. Toleration is an ideal, because it allows persons to flourish without forcing the choice of either the individualist rights approach or submersion into group identity. Toleration allows individuals to see themselves as part of a multiform species. Toleration therefore allows a broad perspective on the self. Citizens should be educated and habituated to view others with awareness and respect. The key is the will to mutual engagement for the sake of creating as much flourishing as possible.
    A theory of toleration must begin from the reality of intolerance. What sort of political realm makes toleration possible? First, normal struggles over power and resources will continue. Second, the principle of impartiality is not that all individuals will suddenly become impartial, but rather that the way in which institutions are set up and structured will orient the public realm towards impartiality. No group can mandate a particularistic political norm, but this does not leave the public square naked.
    She rejects seeing politics only as the other face of war. We can acknowledge intolerance, but see the political realm as the place where justice is attempted if not always achieved.
    Newey Glen Newey (University of Strathclyde, Glasgow) is next. He agrees that disputes over toleration must be dealt with politically. As to disagreement, he begins with the idea that disputes are not epistemic but are disputes over identity. Newey argues against this idea. Identity groups are interest groups. Some identities are better than others and should not be tolerated. The merits of various interest group claims must be evaluated.
    Do those who tolerate and those tolerated live in a common world? Newey mentions Rawls at this point, arguing that Rawls’s views can be characterized as thinness in, thickness out. Any idea of flourishing, says Newey, threatens to smuggle in what the Rawlsian overlapping consensus leaves out. If values are thin enough to win assent, they will not be thick enough to resolve the conflict.
    Does Creppell give a reason for tolerance to the intolerant? Newey argues no. The liberal answer to the intolerant is “Tough!”
    Those who are in relationships know that they are the scene of conflict. In what sense do we share a common world with terrorist groups? One attitude towards inhabitants of a common world is hatred.
    Ex ante, toleration is empty. It has no trajectory in history. Some groups gain toleration and move on to equality. Other groups move the other way. Smokers have equality and are then cast out and not even tolerated.
    Moreover, power is a zero sum game. And the idea of impartiality cannot offer a justification to those against whom power is used. Intolerance may be so potent that only coexistence aided by barriers is possible. The idea that the barriers can be thrown down is a fantasy.
    A brighter future for toleration is to allow toleration to become what it is. We should abandon hope of universal accommodation of difference. The merely tolerant state does not risk the fate of Troy or Jericho.
    Feldman Noah Feldman (New York University) is next. There is a greater interest in the history of toleration than in other ideas in liberal theory. Why? Maybe because there is some sense that ideas matter. There is a tendency to look for voices in the wilderness, voices who called for toleration. There are two broad rubrics for theories of toleration. A pragmatic justification is one grounded in self-interest. I tolerate you because I may need toleration in the future. A principled view is one that goes beyond self-interest.
    The toleration act’s preamble, for example, reflects a pragmatic justification. The toleration act was very limited. For example, it is limited to freedom of religion and does not extend to a general freedom of conscience. Moreover, it is limited to Protestants. And the reason for toleration is to strengthen mutual interest in the service of the sovereign’s interest. It is not as if the rhetoric of liberty of conscience were not available at this time, and Locke availed himself of it. The toleration act did not use the principled argument.
    If the reason we have a principle of impartiality is to facilitate autonomous self-determination, and what they want to do is to use the state to inculcate particular beliefs, then that is a cost of impartiality. Limiting impartiality to the state is done for pragmatic reasons—having to do with the power of the state.
    In Baghdad, the Kurd’s want full independence. They don’t trust the state to be impartial. The Kurds want a Kurdish reason to be zone within which impartiality works. At a moral level, there is no reason why all of Iraq is the correct zone or level.
    There are elements that will take advantage of the liberal state. There must be some way to stand up against them. When you do, you will be engaged in political tactics. And this requires in the end that you be ready to put force behind your words. Once you are involved in the political realm, you will deploy moral arguments and believe in them. But the way you effectuate your beliefs, whether you are Martin Luther King, Jr. or Lyndon Baines Johnson, is not abstract discussion. You exercise power. Ideas play a role.
    Why is it that we want a moral theory of toleration? Is it that it would be more durable than a self-interest theory? There are reasons to doubt that is true. Self-interest might be more durable. Another possibility is that we would like a coherent and comprehensive theory: a self-interest theory will result in defection when defection is possible. Another possibility is that we have the instinct that toleration is not moral. Perhaps we are worried that the self-interested reason is immoral. You buy toleration for yourself at the cost of tolerating bad and harmful views.
    Feldman closes with a question. What if our theory of toleration ran like this: we are behind the veil of ignorance, so we opt for toleration. Would that be a principled or self-interested justification? It sounds odd to call it self-interested. But it suggests that a self-interest justification might collapse into the principled justification.
    Discussion Creppell begins. She agrees with Newey that identity does not collapse into belief. On the other hand, there are beliefs that are constitutive. No amount of argument will dislodge those beliefs. The reason that Creppell emphasizes identity is that the early modern period was characterized by confessional wars. Beliefs were not agreed upon. They agreed to disagree. So what changed? Humans live in complex worlds. The context in which humans are called upon to be tolerant is one in which you need to give an “action orientation.”
    Creppell does not believe she smuggles back in substantive elements, because she would put the substance on the table. If you can develop an idea of toleration that appeals more broadly, that is worthwhile.
    Creppell also discusses an example in Newey’s paper. Do we tolerate the African Anglican bishops who are deeply opposed to the ordination of gays within the church?
    Why do we want a theory of toleration? Creppell, responding to Feldman, says, “It is a reality about our world and a way we conceptualize the virtues of living in a world of pluralism.”
    Rainer Forst asks a question: is toleration a Foucaultian way of containing the powers of minorities? Some strategies of toleration are really strategies of domination. What used to be called “a critical theory of toleration” needs resources to decide which uses are emancipatory and which are repressive.
    Margaret Jane Radin objects to the idea that pragmatism is opposed to morality and even that self-interest is opposed to morality.
    Amy Gutmann says that “mere toleration” is an impoverished basis for relationships, but to Gutmann, there is no coherent amoral defense of toleration. If you rely on self-interest, you won’t get agreement. There is a thin moral impulse to think that in order to continue a human life, you must be tolerant of disagreement. Without a moral strand, you can’t get groups off the ground.
    Newey says he was not suggesting that morality plays no role. People do trade moral arguments. He thinks there is an overly schematic division between considerations of morality and self-interest.
    Creppell suggests that all human beings live in a norm-bound system; because human beings cannot exactly replicate a norm, there always has to be flexibility that is guided by norms. But then there is a need to discuss and articulate theories about this universal human capacity.
    Melissa Williams asked a very nice question, which suggested we recharacterize the “pragmatic” justifications as peace-based justifications. She asks Creppell, “Where do you say tough?”
    Bill Galston asks whether there is a principled basis for preferring engagement over disengagement. He suggests that the flourishing of particular communities may require disengagement. It is a mistake to think that engagement is preferable.
    Jeremy Waldron suggests that the preamble of the toleration act was about removing a distraction. Worry about hegemony is a distraction from getting on with moral interaction with one another. This argument, says Waldron, has an end-means relationship, but it is a moral argument.
    Noah Feldman suggests that the narrowness of toleration (e.g. Catholics not tolerated) does not comport with Waldron’s moral interpretation. The broad goal is legitimation of the state.
    Newey responds to a question from Morgan. “Tough” has to be available, but that does not mean we cannot make moral arguments up to the point where we have to get tough.
    Creppell says she gets tough at places like bodily integrity, life, oppression that causes pain and deprives persons of autonomous thought.


 
Blogging from Atlanta 02: The American Society for Political and Legal Philosophy, Session Two—Toleration as a Virtue
    Introduction Jeremy Waldron (Columbia) is one of the greats of contemporary legal theory. He gets right to business and offers a terse introduction. We are off!
    Heyd David Heyd (Hebrew University of Jerusalem) is the first speaker. His paper is titled, "Is Toleration a Political Virtue?" He begins by saying that he will answer the question in the negative. There are, he says, two ideas of toleration—one broader and historical, the other narrower and philosophical. Both ideas are necessary, he argues. The historical evolution of the idea of toleration, he promises, will support his normative analysis. He also previews three main claims: 1. Toleration is moral, not political. 2. Toleration is not a virtue but an attitude. 3. Toleration is supererogatory.
    Historically, toleration is rooted in ideas of charity and grace. Later it became a political obligation, e.g. in the thought of Mill. And then, toleration once again became supererogatory. This is a dialectic evolution—which exposes tensions in the idea of toleration.
    Normatively, the first point is that toleration is moral and not political. The analytical literature distinguishes toleration from other ideas, e.g. respect, pluralism, charity, etc. It is not clear what the distinctive features of toleration are. The main business of the liberal state is to respect rights, establish justice and equality, and ensure the rule of law. The state is not a person, and hence cannot tolerate. States only enforce the law. According to Raz, the state should not respect practices that undermine autonomy. The state should be neutral among those beliefs that do not undermine autonomy. Courts operate on the basis of the law and have no values of their own. The same applies to other political actors—they do not have their own moral beliefs and hence cannot tolerate.
    Now Heyd turns to Rawls, arguing that given Rawls’s view of public reason, toleration is “a bridge between the moral and the political.” [At this point Heyd makes a point with which I disagree. He argues that Rawls sees the requirement of public reason as serving the practical role of insuring stability—but as I understand Rawls, he explicitly denies this.]
    Heyd’s view is that toleration is a supererogatory attitude. Is toleration a virtue? We may say that “toleration is the virtue of liberal society.” In this sense, we are just saying that toleration is an excellence. But toleration is not a virtue in the Aristotelian sense. Why not? Because it lacks the characteristics of an Aristotelian virtue—it is not rooted in moral psychology. Unlike courage, for example, toleration is not a mean between two opposing vices. Toleration does not require a characteristic motive, and a tolerant act is not less “tolerant” if it is not performed with ease.
    Toleration requires a shift from the impersonal judgment of actions to a personal judgment of the agent. Both kinds of judgment are valid. An action may look wrong from the impersonal perspective, but from the personal perspective, the act may become tolerable because of the motive or circumstances of the actor. Heyd argues that one can shift between these two perspectives—you can assume one stance or the other, but not both simultaneously. The shift of perspective is an intentional choice, and hence not a disposition, and hence not a virtue in the strict sense.
    Toleration has a price. It takes an effort. It is an active attitude. So there is no natural and easy toleration. Therefore, toleration is not an Aristotelian virtue.
    Heyd now skips a portion of his paper for reasons of time. Toleration is a great political value.
    Sabl Andy Sabl (UCLA, public policy) is next. Heyd’s conclusions follow from his definition. So the question is, are these the right definitions? Are they the most useful? No, says Sabl. Heyd’s conception of politics is untrue to the actual practice of politics. And his definition of virtue is not the most useful definition of virtue.
    Heyd defines politics in relationship to the neutral state. Sabl suggests that politics could be defined as exactly the non-neutral and contestable. Sabl then, by way of an aside, says that the state is fictional and therefore has no qualities. But assuming there is a state, Sabl argues, it must act through agents. Such agents, even judges, are not strictly neutral. And toleration is relevant to these agents.
    Even neutral rules have unequal burdens. For example, the military rule against personal headgear falls more heavily on orthodox Jews who have a religious duty to wear a yarmulke.
    Toleration can’t be an Aristotelian virtue—says Sabl. But that isn’t an interesting finding. Those who write about political virtue mean something more encompassing than strict Aristotelian virtue. Galston, for example, does not use political virtue in the Aristotelian sense. Political virtues can be viewed more capaciously.
    Being judgmental is, in the abstract, desirable. What creates toleration is the switch to the personal perspective. Among those who are not philosophers, this is not a common view. Rather, in the polity more common attitudes are libertarianism, pluralism, rationalist solidarity, freethinking, religious free conscience, skepticism, anticlericalism, individualism in a Millian form. These are all roads to toleration. From each of these perspectives, it will be tempting to undermine some of the other tolerant perspectives.
    Those who affirm religious toleration, e.g., will be tempted to undermine anticlericalism. But Madisonian pluralism comes in here. No one group can count on permanent power. There is a cost to Madisonian pluralism. We are led to tolerate harms. Nonetheless toleration is worth the cost.
    Sabl did a really terrific job!
    Abrams Kathryn Abrams (Boalt Hall School of Law, University of California at Berkeley) begins by noting that she has a very different perspective than Heyd. She will address three questions: 1. Is toleration political? 2. What is the understanding of toleration that we should endorse in an egalitarian democracy? 3. What are the differences and similarities between Abrams’s and Heyd’s premises?
    On the question whether toleration is political, Abrams agrees with Sabl in rejecting Heyd’s statist conception of the state. State actors aren’t the only actors who elaborate political meaning. Politics is shaped by private individuals. So attitudes of private individuals will shape the political realm. For example, civil rights legislation was shaped by the civil rights activists. Also, the meaning of governmental actions depends on how they are received by private citizens. Moreover, government does not play the limited role assumed by Heyd’s analysis. The New Deal/administrative state gives rise to quasipublic officials who have various occasions for tolerance.
    On the nature of the toleration that should be practiced, Abrams argues against the forbearance conception of toleration. Identity politics makes certain practices part of personal identity. These accounts have attenuated the notion of autonomy. [I am not sure I understand Abrams here.] Citizens in an egalitarian democracy may find forbearance insufficient. Being free is one thing, but visibility is another. Kwanzaa is tolerated in the sense that there is noninterference, but it is not recognized. There is a need for recognition of difference.
    So what is appropriate is “engaged toleration.” This is broader in scope than Heyd’s idea of toleration. Engaged toleration begins with a cognitive shift. The tolerator suspends her own moral framework. The goal is to understand the practice or belief on its own terms. Two virtues facilitate this: curiosity and humility.
    Engaged toleration has indeterminate outcomes. We don’t know in advance what will happen.
    Why does Abrams use “toleration” rather than other terms to describe her view? She has two reasons. First, conceptually, toleration is defined by its second order character, the conceptual shift and second order value of equality. Second, rhetorically, toleration is powerful. Toleration exerts obligatory force. If we can embrace a more engaged conception of toleration, this can be an important step in negotiating difference.
    Abrams was thoughtful and eloquent, but I had a strong feeling that she was trying to bend the concept of toleration beyond recognition. In a way, she was arguing that toleration should serve the end of equality, but I suspect that the concept of toleration that she developed is really something quite different--perhaps, equal respect and mutual acknowledgement, but not toleration!
    Discussion Heyd begins, thanking his commentators. Starting with Sabl, Heyd agrees that the state is not necessarily neutral, but the state is impersonal, has no feelings, and is expected to be fair. States work through agents, but this does not mean that states can be tolerant. Officials, in their official role, should not be tolerant or forgiving. They should stick to the law. When they exercise discretion, this discretion “is constitutive of their being public officials.” On virtue, Heyd says that it is not a disposition or character trait. It is an adopted attitude. The main thesis in Sabl’s paper is that there are many different tolerations. But once there are many tolerations, you tolerate tolerations that tolerate the intolerable. This means tolerating intolerable toleration.
    Heyd then turns to Abrams. He concedes that individuals act politically, but that doesn’t clash with his thesis, which is about the state. He then disagrees with a comment made by Abrams, arguing that social workers should be tolerant. Toleration has become less urgent in modern society. Lastly, Heyd says that he accepts much of what Abrams says about “engaged toleration.”
    Bill Galston suggests that there is an issue whether toleration is to be understood in purely instrumental terms or whether it is of intrinsic value. If it is instrumental, what is it instrumental toward? Galston says that toleration is instrumental toward reducing conflict, coercion, and cruelty. It is a mistake to see toleration as instrumental towards equality.
    Don Horowitz says that the state cannot be neutral with respect to values. Because the values of the state will differ from those of some individuals, toleration is a political virtue. More than fairness is required of the state. Toleration is always in doubt.
    The panelists are now offered an opportunity to respond to the questions from the floor. Heyd begins, focusing mostly on areas of agreement with several of the questions/comments. He emphasizes his claim that toleration, like forgiveness, is not a matter of duty. We do not owe forgiveness; likewise we do not owe toleration. Sabl responds to Galston. On the one hand, from some perspectives, e.g. Christianity, toleration may be of intrinsic value. On the other hand, from the political perspective, toleration is instrumental. On reciprocity, Sabl says that it is a mistake to assume that you can stay in power over time. Abrams responds to Galston re equality as a ground for toleration. Even if negative judgments are part of the understanding of toleration, toleration can consist in both restraint and a scrutiny of the reasons for our negative judgments. So moving away from the negative judgment is not necessarily inconsistent with toleration.
    Comments on the Question Whether Toleration is a Virtue The issue from this session that was most compelling was the central one—is toleration a virtue? On this, I found the positions taken by the panelists to be unsatisfactory. Of course, most everyone agrees that toleration is a virtue in the very broad sense—it is a good thing. But is toleration a virtue in the narrower sense that we associate with Aristotle’s theory of the virtues? Is toleration like courage or justice (moral virtues)? Here are some thoughts on that question:
    • Toleration can be seen as a mean between opposing defects of character. On the one hand there is the vice of intolerance or judgmentalism. On the other hand there is an opposing vice, as in, “You are way too tolerant.” Heyd denied that toleration has this structure, but in his oral presentation he merely asserted this position without supporting arguments.
    • Toleration is a dispositional trait. We say that some people are characteristically tolerant: “He’s a tolerant guy,” and others are characteristically intolerant: “She is so intolerant.” Again Heyd seems to deny this, but arguing against the dispositional nature of toleration is like paddling up a stream with a swift current. The evidence that toleration is dispositional from ordinary life is very strong.
    • Toleration is connected to character by qualities of emotion and intellect. Heyd is right that toleration is not connected in a simple way to a single morally neutral emotion, as courage is connected to fear. But that kind of connection is not required for Aristotelian virtues: magnanimity and liberality are virtues but they do not correspond to a single emotion. Justice is a difficult case, but once again there is no single emotion with respect to which justice is a dispositional mean. In the case of toleration, it seems quite likely that there are connections to a variety of emotional capacities, including the disposition to anger and the capacity for empathy. Heyd’s own theory of tolerance is that it involves an act of perspective shift—from viewing the act to viewing the actor. But the capacity to easily and readily make such a shift would seem to be the sort of intellectual ability that is dispositional in nature. Some of us can do it easily; others find it hard.
    • Heyd also argued that toleration was unlike the moral virtues in that toleration never comes easily or smoothly. This simply seems wrong as a matter of the phenomenology of toleration. Tolerant people find toleration easy and natural. Intolerant people find toleration difficult and can only be tolerant through an act of self mastery.
    • This brings me to a larger point about Heyd’s arguments against toleration as a virtue. Heyd comes from a neo-Kantian perspective (or at least seems to, based on his presentation) that is skeptical of the virtues in general. So it hardly seems surprising that he is skeptical of the idea that toleration is a virtue.
    • Finally, even if toleration is a virtue, that does not mean that one cannot also speak of tolerant actions or a tolerant artificial person (e.g. the state). The same is true of the other moral virtues.
    I truly enjoyed this session. All three speakers were quite good!



 
Federalist Society: Faculty Division Conference Starts Today
    Schedule of Events for the 2004 Federalist Society Faculty Division Conference
      Hyatt Regency Atlanta 265 Peachtree Street, NE Atlanta, GA 30303 404-577-1234 January 3-4, 2004
    Saturday, January 3, 2004
      6:30-7:00 p.m. Reception 7:00-8:30 p.m. Panel on "International Law and Constitutional Interpretation"
        Prof. John McGinnis, Northwestern Law School (Moderator) Prof. Mattias Kumm, New York University Law School Prof. Michael Ramsey, San Diego Law School Prof. Peter Spiro, Hofstra Law School Prof. Ernest Young, Texas Law School


 
American Society for Political and Legal Philosophy Continues Today
    http://www.political-theory.org/asplp American Society for Political and Legal Philosophy 50th Annual Meeting Toleration and Its Limits Hilton Atlanta/ Atlanta Marriot Marquis Saturday, January 3
      Breakfast Reception 8:00-8:30 am Panel II. Toleration as a Virtue 8:30-10:15 am
        Paper:
          David Heyd, Hebrew University of Jerusalem "Is Toleration a Political Virtue?"
        Commentaries:
          Kathryn Abrams, Boalt Hall School of Law, University of California at Berkeley Andy Sabl, University of California at Los Angeles Chair: Jeremy Waldron, Columbia University
      Panel III. Toleration and Recognition
        10:30 am-12:00 noon Paper:
          Ingrid Creppell, George Washington University "Toleration, Politics and the Common World"
        Commentaries:
          Glen Newey, University of Strathclyde, Glasgow Noah Feldman, New York University
        Chair:
          Jacob Levy, University of Chicago
    The ASPLP meeting is usually one of the most important events of the year in legal theory. See you there!


Friday, January 02, 2004
 
Blogging from Atlanta 01: The American Society for Political and Legal Philosophy, Session One—Toleration and Liberalism
    Introduction I’m blogging from the Hilton in Atlanta. The weather is glorious. This is triple witching time for legal theory. The Eastern Division of the American Philosophical Association’s annual meeting just concluded a few days ago. Today, the American Society for Political and Legal Philosophy begins with a late afternoon session. Tomorrow, both the Association of American Law Schools and the Faculty Division of the Federalist Society get underway. The ASPLP is a distinguished group drawn from the legal academy, philosophy, and political science.
    I’m blogging from this session in real time, and, as always, this report is inevitably both incomplete and partial. This is my take on the papers, and I’m sure others would have different opinions.
    Amy Gutmann (the President of ASPLP & distinguished Princeton political theorist) begins the session by noting that this year is the 50th anniversary of Nomos--the annual volume published by the ASPLP. She turns the session over to Melissa Williams (who along with Jeremy Waldron organized the event). Melissa introduces the first speaker, Steven Smith (University of San Diego). Smith is a distinguished law and religion scholar. A tall and soft-spoken man, Smith has a presence that is simultaneously unassuming and commanding. Smith takes the podium.
    Smith Smith’s paper is entitled Toleration and Liberal Commitments. The main argument, Smith says, is that if we want to maintain liberal commitments, we will necessarily do so on the basis of toleration. What is toleration? Smith says he defines it in terms of four components. The first component is the assumption that there is a condition of pluralism. The second component is to specify that there will be an agent (for Smith’s purpose, the government). The third component is that the agent operates on the basis of a system of right belief or “orthodoxy.” The orthodoxy, says Smith, is merely the beliefs on the basis of which the agent acts. The fourth component of Smith’s definition of toleration is the assertion that the agent will then categorize others into three categories: (1) orthodox—beliefs that are consistent with the orthodoxy, (2) tolerable—beliefs that are tolerable, and (3) intolerable—beliefs that are unacceptable. I’m not quite sure, but there seems to be a potential circularity problem if the four components are intended as a “definition” of toleration. Smith then says that the illiberal position lacks the category of the tolerable. Liberalism includes such a category.
    Smith briefly discusses his claim that there is no “universal” justification for toleration. He suggests that the best such argument is based on reciprocity—citing Jürgen Habermas as an example. Smith argues that the reciprocity argument will seem obtuse to those to whom it is addressed. For example, a Christian may expect toleration from other religions but not be willing to herself tolerate those religions. Why not? Because the Christian may believe that her religion is true and the other religions are false. Reciprocity assumes that the person asked to reciprocate sees the cases as equivalent, but persons who are intolerant reject this premise.
    Smith then goes on to what he calls “ultraliberal” criticisms. Mere toleration is not enough—mutual respect is required. Smith notes that ultraliberalism is itself a position. Now Smith moves to what he calls the illiberal critique of toleration. Smith argues that in the actual world governments must endorse some beliefs and reject others—in public education, as the basis for legislation, and so forth. If ultraliberalism requires that the state treat all beliefs as equal, then ultraliberalism simply isn’t a possible position. Indeed, Smith argues, ultraliberalism may be a disguise for toleration—affirming that all beliefs are created equal but acting as if some beliefs were more equal than others.
    In a liberal society where citizens are in some sense the government, ultraliberalism would seem to require that individuals as citizens not affirm any belief over others. But individuals as persons are supposed by ultraliberalism to be allowed to have their own beliefs. But how can the same individual both affirm and bracket her own beliefs?
    Smith then turns to descriptive sociology, noting two different takes on the current scene. On the one hand, there are those who argue that current society is characterized by a lack of deep belief—a relativism about belief. On the other hand, current society is characterized by “culture wars,” a deep commitment to belief that is inconsistent with liberal toleration.
    Smith then turns to the idea that there is serious culture conflict on the international level. There is no reason to believe that world history is a process of convergence on liberal values, including toleration. In conclusion, Smith argues that Christian toleration is based on the belief within Christianity that “killing one another on the basis of religion is contrary to the will of God.”
    Smith was very impressive, a marvelous speaker and careful thinker.
    Morgan Glyn Morgan (Harvard University) now takes the podium, making the conventional move that he will accentuate the negative and emphasize disagreement. Morgan emphasizes Smith’s arguments about impoverishment of soul and disagreement. Now Morgan moves to the question as to what counts as a tolerant regime and argues that Smith’s notion of toleration would count regimes such as the Northern Ireland or Quebec of the 1960s as tolerant—even though these regimes gave substantial advantages to those who adhered to the orthodoxy. (I think Morgan means advantages to Protestants & English speakers and disadvantages to Catholics and French speakers, but I am not sure.)
    Now Morgan turns to Mill and the question whether liberals can tolerate religious majorities. He begins his exposition by briefly rehearsing Mill’s harm principle & referring to Chapter Four of On Liberty. Mill distinguishes four kinds of acts: (1) acts that harm the fundamental interests of others; (2) acts that harm nonfundamental interests of others; (3) acts that do not harm but which should be criticized, and (4) acts that are purely self concerning. So Mill would allow citizens to play an active role in forming one another’s characters—criticizing fellow citizens for vices that do not involve harm to others.
    Now Morgan turns to the part of Smith’s paper where Smith urges liberals to eschew ultraliberalism and embrace toleration—to affirm their own beliefs and to reject beliefs that are inconsistent with them. There are two ways of thinking about this advice, says Morgan—as a tactic and as a matter of principle. As a tactical matter, Morgan says, liberals must be cognizant that they may well be in the minority as the political winds shift. Mill doubted that toleration was sufficient to protect liberal values, and hence argued for institutional mechanisms such as plural voting. Modern liberals do not need to rely on Mill’s antimajoritarian contrivances. Why? The noncynical answer is that modern citizens turn out to be more reasonable than Mill expected. The more cynical answer is that modern liberal democracy is, in fact, a judicial aristocracy. This is quite a dramatic argument & the audience perks up! Since the current status quo protects liberal values, liberals would get no advantage from the move from ultraliberalism to mere toleration. Tactically, liberals should support an ultraliberal polity. Moreover, Morgan argues, the tactical perspective is not the right one. The better question is one of principle. So what argument of principle does Smith have? Morgan argues that Smith must be relying on the notion that ultraliberalism causes an impoverishment of discourse and soul. But, Morgan counters, there is no reason to believe that ultraliberalism has caused such impoverishment. More likely, says Morgan, any impoverishment is caused by modernity—by which I believe that Morgan means to refer to economic modernization, roughly what Habermas calls “the colonization of the lifeworld by the system.”
    Forst The next speaker is Rainer Forst (University of Frankfurt). Forst says that he wants to argue for an ultraliberalism that does not rely on skepticism. In taking this position, he says, he is conceding that skepticism is not a sufficient or sound basis for ultraliberalism. Indeed, skepticism can easily be used to justify intolerance, Forst argues. Forst does not deny that toleration can be justified on religious grounds, but, he argues, this is not an appropriate basis for toleration in a liberal society.
    Not all toleration, says Forst, is “liberal toleration.” One can have toleration on the basis of illiberal values. Forst then turns to the religious basis for toleration: Forst argues that in fact force can produce true belief—for example, force can loosen the grip of false belief, clearing the way for noncoercive induction of true belief.
    Forst then argues that the reciprocity argument does not require skepticism. Rather, it is based on the distinction between faith and knowledge. There can be reasonable disagreement about religious belief, but this is not equivalent to skepticism. A lack of proof is not skepticism. The best case for toleration is a combination of the epistemological argument with a premise that is like Rawls’s liberal principle of justification—the notion that the use of force should be justifiable to reasonable citizens—but Forst believes this principle is moral and not merely political. “Everyone has a right to justification”—a right to be given reasons for legally binding norms. Smith’s argument against this is that there is a deep pluralism, not just of religions and moralities, but also of justice. But, Forst argues, mere pluralism is not an argument against the principle of justifiability—it is a principle that is designed to enter into and resolve disagreement.
    Forst now argues that his position is more consistent with the idea of democratic self government than is Smith’s position. Smith’s position allows majorities to dominate minorities, e.g. to require mandatory inculcation of their own religious or secular doctrine. Forst, on the other hand, argues that on his view, contested views (based on faith) are not an appropriate reason for coercion. Next, Forst argues, affirming this principle does not lead to impoverishment of soul and discourse, but leads instead to an enrichment of both.
    Forst then argues that it is dangerous to ground toleration on religious values—citing the historical example of Protestant toleration which excluded toleration of atheists and Catholics.
    This was a rich and (to me) persuasive talk!
    Discussion Williams opens the discussion period. Glen Newey asks the first question, focusing on the epistemological premise of Forst’s argument, essentially questioning Forst’s premise that the epistemology of religious beliefs is not essentially different from the epistemology of other beliefs. (I am sitting next to Larry Alexander, who disagrees.) Next, David Heyd asks a question of Smith, re whether government (as opposed to citizens) is the agent of toleration. If government is the agent of toleration, how does toleration impoverish the soul of the individual? (I must be missing something, because it seems to me that Smith’s position preempted this point by arguing that the individual in a liberal polity is, qua citizen, part of the government.)
    Amy Gutmann asks an important question neglected by the panel: what are the limits of toleration? And another: what is the aim of toleration? Gutmann suggests that the answer to the latter question is “respect for the individual.” Gutmann wants to know whether “respect for the individual” is the aim of toleration.
    Smith then responds to some of the comments, especially noting his disagreement with the premise that religion is based on faith rather than reason. Gutmann presses Smith on the limits of toleration: is the limit simply set by the orthodoxy? Yes, says Smith, but not without limits. What limits? asks Gutmann.
    William Galston asks the next question, directed to Smith. The boldest section of Smith’s paper, says Galston, was the section on equality. Smith argued that there were compelling religious justifications of equality, but not secular justifications. Galston asks what Smith’s own position on equality is and what connection it has to toleration. Jeremy Waldron asks Smith a series of questions, one about the religious foundations of liberal beliefs and the other about mutual comprehension between religious and nonreligious persons. “Do we run into a dead end?”—where reasons run out. Waldron argues that we do not run out of reasons.
    Noah Feldman now asks a question: “Let’s not focus so much on the state as the agent of toleration. Let’s think of the state as being acted upon by various interests. If you see it that way, then the costs of neutrality to some interests become clearer. Certain groups are unable to get the state to act as they wish.”
    Dennis Thompson asks a final question. He suggests that he has heard two concepts of toleration. Forst is right with respect to the state, says Thompson. It isn’t the distinction between the secular and the religious; some secular doctrines are unintelligible as well. But you wouldn’t want to have the same standards for democratic citizens as for the state. The state promulgates coercive laws; individual citizens do not.
    Forst takes up Thompson’s point re the distinction between state and citizen—essentially agreeing with Thompson. But, Forst argues, there are situations in which informal civic intolerance can have just as much force as the law. In fact, informal social pressures can be stronger than legal pressures. So, Forst concludes, it is not clear that individuals should have a greater latitude of intolerance. Then Forst turns to Waldron: if we had no freestanding principle of justification, then we would have no superior principle. But the principle of justification has a different and superior justification than does a particular religious belief.
    Morgan responds to a question by Andy Sabl (which I wasn’t quick enough to blog) re Mill. Sabl had observed that Millian liberals are, after all, a tiny minority. Why should we care about what effect our idea of toleration would have on them? Morgan responds that political theory can be addressed to a partisan group, e.g. just to Millian liberals. Is that all that can be done? Morgan says no, more can be done—although he can’t do it on this occasion.
    Smith says that religious justifications are the best ones he has encountered. He finds nonreligious justifications to be less forthcoming. What about toleration? Is the best justification for toleration a religious justification? Maybe, says Smith.
    Oh, and at the very end, Jacob Levy (of Volokh Conspiracy fame) conducted the business meeting!
    Conclusion This was a terrific session. I look forward to more tomorrow!


 
AALS Annual Meeting Starts Today The Annual Meeting of the Association of American Law Schools begins today, with registration only.


 
Conference Today: American Society for Political and Legal Philosophy
    http://www.political-theory.org/asplp American Society for Political and Legal Philosophy 50th Annual Meeting Toleration and Its Limits Atlanta, January 2-3, 2004 Hilton Atlanta/ Atlanta Marriot Marquis Friday, January 2
      Panel I. Toleration and Liberalism 4:30-6:30 pm
        Paper:
          Steven Smith, University of San Diego Law School "Toleration and Liberal Commitments"
        Commentaries:
          Glyn Morgan, Harvard University Rainer Forst, University of Frankfurt
        Chair:
          Melissa Williams, University of Toronto
      Reception 6:30-8:30 pm


 
Korobkin on ERISA Preemption Russell B. Korobkin (University of California, Los Angeles - School of Law) has posted The Failed Jurisprudence of Managed Care, and How to Fix It: Reinterpreting ERISA Preemption (UCLA Law Review, Vol. 51, p. 457, 2003) on SSRN. Here is the abstract:
    Most Americans receive their health care from a managed care organization (MCO), which makes state regulation of MCOs a significant policy issue. Most Americans also obtain their MCO membership through an employer-sponsored benefits plan subject to federal regulation. Consequently, courts must determine whether and to what extent federal law preempts state MCO regulation. Over the last quarter-century, two questions have been particularly troublesome for the courts: (1) may patients sue their MCOs for negligence and related state law claims?; and (2) may states regulate the benefits provided by MCOs to employment groups? Judicial attempts to address these issues have resulted in a confusing and doctrinally inconsistent jurisprudence of managed health care, in which like cases are treated differently and Congressional intent is all but forgotten. This state of affairs has led to substantial scholarly criticism and calls for federal legislative reform. In two recent decisions concerning managed care, the Supreme Court missed opportunities to rationalize this body of law, reinforcing the failures of its jurisprudence. This Article contends that the flaws in the Court's managed care jurisprudence stem from a single mistake of statutory construction; specifically, the failure to recognize that medical benefits promised to patients by MCOs are not employment plan benefits, even when paid for by an employer. Were the Supreme Court to recognize this simple mistake, a new jurisprudence of managed care would emerge that eliminates confusion, avoids doctrinal conflict and inconsistency, and effectuates Congressional intent. The new jurisprudence would also obviate much of the need for federal "Patients' Bill of Rights" legislation.


 
Baker on the Ngeze Case C. Edwin Baker (University of Pennsylvania - School of Law) has posted Genocide, Press Freedom, and the Case of Hassan Ngeze on SSRN. Here is the abstract:
    This essay was written under contract with the United Nations to serve as background for my testimony as an expert witness in behalf of Hassan Ngeze in his prosecution before the International Criminal Tribunal for Rwanda. (On motion of the prosecutor, the Court excluded this essay - or report - and the offer of my testimony.) In the Prosecutor v. Ngeze, the prosecution charged Ngeze with direct and public incitement to genocide and conspiracy to commit genocide almost entirely on the basis of his publication of a newspaper, Kangura, one of many newspapers being published in Rwanda during the period before the genocide, although not one of the largest papers by circulation and not one still being published at the time the genocide began. The Essay begins by describing the essential role of a free press for democracy and for broader economic and social development and justice, with the essay emphasizing its particular importance for less developed countries. It then argues that in interpreting international criminal law standards, special efforts should be made to avoid interpretations that would make illegal any activities whose protection at least some clearly democratic countries consider fundamental to democracy. That is, international criminal law should try to avoid making democracy, as understood by significant democratic countries, illegal. With this background, different sections look at the precedent of the Nuremberg Trials, an earlier decision on direct and public incitement to genocide by the ICTR, and at the protection given the press by the European Court of Human Rights and the United States Constitution, with some emphasis on explaining the logic of the latter. The Essay provides a more exhaustive examination of the hundreds of pages of the reports of the prosecution's experts. These reports provide massive numbers of excerpts from Kangura that embody the heart of the prosecution's case. The Essay concludes that under all the precedents examined, ranging from Nuremberg to the recent ICTR case and under the constitutional approach of the United States or the human rights approach of the ECHR, there is absolutely no basis for a conviction in this case. Moreover, the Essay suggests that a conviction would create a tragic precedent given the democratic and developmental needs of countries such as Rwanda. NOTE: This was written 10 months before the ICTR reached a decision, issuing an opinion, finding Ngeze guilty of most charges, and sentencing him to life imprisonment. This paper will be revised in the future in light of this decision.


 
Wildenthal on Tribal Sovereignty Bryan H. Wildenthal (Thomas Jefferson School of Law - General) has posted Fighting the Lone Wolf Mentality: Twenty-first Century Reflections on the Paradoxical State of American Indian Law (Tulsa Law Review, Vol. 38, p. 113, 2002) on SSRN. Here is the abstract:
    What survives of American Indian tribal sovereignty rests largely on decisions of the U.S. Supreme Court, which rely in turn on constitutional principles, various Indian treaties, federal statutes, and what amounts to judge-made federal common law. But the Court has historically been an enemy as much as an ally of Indian sovereignty, and today it seems intent on undermining what remains of it. In a remarkable reversal, the political branches of the federal and even some state governments are now (sometimes) more friendly than federal and state courts to Indian tribal interests. This article is part of a symposium marking the centennial of the Court's decision in Lone Wolf v. Hitchcock (1903), often called the Dred Scott of Indian law. Lone Wolf upheld Congress's "plenary power" to seize Native American lands and abrogate Indian treaties. Later decisions qualified Lone Wolf's extreme abdication of judicial scrutiny and signalled a partial and tentative judicial defense of tribal rights. Yet the "Lone Wolf mentality" survives and has even undergone a revival on the modern Court, largely at the instigation of Chief Justice Rehnquist. The article begins by holding up as examples three cases decided in 1999, by the U.S. Supreme Court, the Navajo Nation Supreme Court, and the California Supreme Court. The first two reaffirmed Indian sovereignty and treaty rights. The California court, dealing with an Indian casino issue, went against tribal interests over a strong dissent, but the decision quickly boomeranged as the people of California overruled their judges to allow vastly expanded gaming on Indian lands. The article then goes back in time to review Lone Wolf and its progeny, pointing out how even the Warren Court, as late as 1955, outdid Lone Wolf in showing disregard for Indian property rights under the Constitution. The article surveys several key cases after 1955. Some of these countered the Lone Wolf mentality, but they also reveal Rehnquist's growing influence. The Court in 1999, for example, reaffirmed seemingly arcane Indian treaty rights in a 5-4 decision barely noticed except by Indian law specialists. But Rehnquist's dissent, among other startling moves, sought to resurrect an anti-Indian rule of treaty interpretation so dated and extreme it was rejected in 1905 by the same Court that decided Lone Wolf. Instead of construing relevant law in favor of Indian treaty rights, as the Court has at least purported to do since long before Lone Wolf, Rehnquist strained to uphold the legality of an Indian removal order dating from 1850. The article closes by discussing two cases decided in 2001 (one unanimous and one over a notably weak dissent) in which Rehnquist wrote or joined the Court's opinion. Both cut back Indian sovereignty in terms suggesting a triumphal revival of the Lone Wolf mentality in the new millennium, and both suggest that this revival faces little effective opposition on the Court.


Thursday, January 01, 2004
 
Happy New Year from Legal Theory Blog!


 
Most Hit LTB Posts of 2003


 
LTB's "Favorite Blogs of 2003" In alphabetical order:


 
LTB's "Favorite Post of 2003" My favorite post--from another blog--for 2003 was Political Theory and Political Philosophy, posted by Volokh Conspirator Jacob Levy on April 15.



 
Top Ten Referrers to LTB for 2003