Legal Theory Blog



All the theory that fits!

This is Lawrence Solum's legal theory weblog. Legal Theory Blog comments and reports on recent scholarship in jurisprudence, law and philosophy, law and economic theory, and theoretical work in substantive areas, such as constitutional law, cyberlaw, procedure, criminal law, intellectual property, torts, contracts, etc.

Monday, June 30, 2003
 
An Egalitarian Theory of Judicial Review Ronald C. Den Otter's article DEMOCRACY, NOT DEFERENCE: AN EGALITARIAN THEORY OF JUDICIAL REVIEW just became available on Westlaw and at 91 Ky. L.J. 615. Here is a taste:
    This Article contends that we should not rely so heavily on the judiciary to settle our moral conflicts, although not for the typical reasons mustered by democratic critics of judicial review. We have not openly confronted this problem of taking the Constitution away from the people because the debate over the proper scope of judicial review has been framed too narrowly, leaving us with two undesirable extremes. On the one hand, those who defend the exercise of judicial review call attention to the need for those with special competence to assess legislation that is constitutionally suspect. Undemocratic means, they insist, may be entirely appropriate to protect the substantive values that make the operation of constitutional democracy possible. After all, ordinary citizens may not be sufficiently aware of the constitutional and moral implications of their collective decisions. In such instances, judges must have veto power over popular choices to control the vagaries of democratic politics, thereby saving the people from themselves. On the other hand, those who seek to restrict the reach of judicial review in the name of democracy, such as Robert Bork, Antonin Scalia, and William Rehnquist, contend that the people or their elected representatives should decide the vast majority of questions concerning public morality. They point out that an activist approach to constitutional interpretation ultimately rests on an elitist rationale: that ordinary citizens cannot be expected to exercise political power responsibly.
Get it while it's hot.


 
New Papers on the Net Here is today's roundup:
    Miranda McGowan (Minnesota) and James Lindgren (Northwestern) upload Untangling the Myth of the Model Minority. From the abstract:
      The model minority stereotype depicts Asian Americans as a group that has succeeded in America and overcome discrimination through its hard work, intelligence, and emphasis on education and achievement - a modern-day confirmation of the American Dream. A large body of work by Asian critical scholars condemns this image and charges that it conceals more sinister beliefs about Asian Americans and other racial minorities in America. Is this critique correct? Does the model minority stereotype really mask hostility toward Asian Americans or breed contempt for other minorities? This article presents the results of an empirical study into the model minority stereotype. Using 1990, 1994, and 2000 General Social Survey data (including some of the very data used by critical scholars to establish the existence of this stereotype), we confirm claims that some non-Hispanic white Americans think that Asian Americans as a group are more intelligent, harder working, and richer than other minorities and that some think Asian Americans are more intelligent and harder working than whites. But we also discovered that these ideas are not usually linked with negative views of Asian Americans (or of other minorities, for that matter). Indeed, we found weak support for the contrary position - that those who rate Asian Americans higher than other minorities, or particularly higher than whites, are more likely to hold other positive views about Asian Americans, immigration, African Americans, and government programs supporting these groups. Our study nonetheless confirms the scholarly suspicions in one crucial respect: non-Hispanic whites who have positive views of Asian Americans are less likely to think that Asian Americans are discriminated against in both jobs and housing, thus tending to support the claims of some Asian critical scholars that positive stereotypes about Asian Americans tend to be associated with a failure to recognize continuing discrimination. In these data, however, this complacency by whites about prejudice against Asians does not translate into hostility toward government programs to alleviate the problems of Asian or African Americans.
    David McGowan (Minnesota) posts Website Access: The Case for Consent, forthcoming in the Loyola-Chicago Law Journal. Here is the abstract:
      This paper presents a Coasean defense of the use of the trespass to chattels tort to regulate access to websites and private networks connected to the Internet. Consent to use should be presumed from the owner's choice to connect a site or network to the Internet. In most cases, however, owners should be able to stop unwanted uses by notifying a user that the owner objects to particular uses. The trespass to chattels tort provides courts a doctrinal basis to enjoin uses to which an owner does not consent. Injunctions facilitate bargaining. Because transaction costs are low in such cases, bargaining will better approximate the optimal social equilibrium of uses than would alternative regimes, such as judicial management of access through the doctrine of nuisance. There may be cases where utilitarian analysis suggests deviating from this approach, but they would be the exception, not the default. This paper also takes issue with the prevailing critique of the trespass tort. The basic premise of the prevailing critique is that chattel are different from real property because the law recognizes an interest in holding real property free from harmless intermeddling but does not recognize such an interest for chattel. The Restatement of torts says just the opposite, however, a point the prevailing critique does not acknowledge. The difference between real property and chattel is that the law provides a cause of action for harmless intermeddling with the former, and provides only a privilege to use self-help to protect the latter. When the interest in being free from even harmless intermeddling is taken into account, the prevailing critique reduces to the proposition that courts should not recognize new torts even when legally recognized interests are violated and the means the law expects to protect those interests fail to do so. That proposition is excessively formal and provides no normative basis for criticizing the doctrine.
    Avi Ben-Bassat (Hebrew University of Jerusalem, Economics) and Momi Dahan (Hebrew University of Jerusalem, Public Policy) post Social Rights in the Constitution and in Practice. Here is the abstract:
      This paper presents a new data set on constitutional commitments to social rights for 68 countries. Quantitative indices are constructed for five social rights: the right to social security, education, health, housing and workers rights. The right to minimal income (social security) appears in the constitution of 47 countries with relatively moderate constitutional commitment, while only 21 countries make a commitment to housing. We use these measures to characterize a typical constitution with respect to social rights. We find two clear groups: Countries which share the tradition of French civil law generally have a higher commitment to social rights than those that share the tradition of English common law. The constitutional commitment to social rights in socialist countries is closer to French civil law, whereas countries with a German or Scandinavian tradition resemble the English common law countries more closely. We then explore whether the constitutional commitment to social rights, in addition to other key control variables such as democracy and GDP per capita, has any effect on government policy. We find that the constitutional right to social security has a positive and significant effect on transfer payments. The constitutional right to health has a positive and significant effect on health outcome only when it is measured by infant mortality and life expectancy at birth. The right to education seems to have no (or negative) effect, however.
    Scott Meinke (Bucknell, Political Science) and Edward Hasecke (Cleveland State University, Political Science) post Term Limits, Professionalization, and Partisan Control in U.S. State Legislatures, forthcoming in the Journal of Politics. From the abstract:
      As states across the country have adopted term limits provisions for their state legislatures, political scientists have analyzed how mass unseatings of incumbents are affecting legislative composition, capacity, and activity. Yet this reform may impact legislatures not only directly through forced retirements, but also indirectly by changing the incentives to prospective candidates. Following hypotheses suggested by Fiorina (1994, 1996), we argue that term limits have changed the incentive structure for typical Democratic candidates in some legislatures. This change in incentives has, in turn, affected the partisan composition of statehouses just as the professionalization movement affected incentives and partisan composition a generation ago. We provide quantitative evidence that supports Fiorina's conjectures about term limits, suggesting that the presence of term limits provisions creates an environment that is less attractive to Democratic candidates.
    Daveed E. Gartenstein-Ross uploads A Critique of the Terrorism Exception to the Foreign Sovereign Immunities Act. From the abstract:
      While the Foreign Sovereign Immunities Act generally prevents foreign states from being the subject of lawsuits in U.S. courts, countries that have been designated as state sponsors of terrorism by the Secretary of State are exempted from this protection. Judgments entered under this "terrorism exception" already total more than $4 billion, with a number of suits still pending. These judgments may pose difficulties for the United States by shifting foreign policymaking power from the executive to the courts, encouraging retaliatory legislation, provoking hostility internationally, and posing barriers to normalization of relations with defendant states. In this Note, Daveed Gartenstein-Ross argues that, because the costs of the terrorism exception are substantial and the benefits minimal, the terrorism exception is a harmful piece of legislation. He explores alternative policies that the United States can pursue.


 
Buck on Lawrence Surf here for Stuart Buck's commentary at The Buck Stops Here.


 
Seltzer on New Top Level Domains I seem to be in cyberlaw mode today! Surf over to wendy.seltzer.org for more on the decisions made in Montreal by ICANN on new top level domains.


 
Trespass to Chattels & the Internet: RIP? The use of the tresspass to chattels tort as a device to enforce rights on the Internet has been hugely controversial. The California Supreme Court has reversed the California Court of Appeals decision in Intel Corp. v. Hamedi (see also the trial court decision). Here is a link to the California Supreme Court decision. And here is a taste:
    After reviewing the decisions analyzing unauthorized electronic contact with computer systems as potential trespasses to chattels, we conclude that under California law the tort does not encompass, and should not be extended to encompass, an electronic communication that neither damages the recipient computer system nor impairs its functioning.
Potential SCOTUS nominee Janice Rogers Brown wrote a dissent. Here is a taste:
    Candidate A finds the vehicles that candidate B has provided for his campaign workers, and A spray paints the water soluble message, “Fight corruption, vote for A” on the bumpers. The majority’s reasoning would find that notwithstanding the time it takes the workers to remove the paint and the expense they incur in altering the bumpers to prevent further unwanted messages, candidate B does not deserve an injunction unless the paint is so heavy that it reduces the cars’ gas mileage or otherwise depreciates the cars’ market value. Furthermore, candidate B has an obligation to permit the paint’s display, because the cars are driven by workers and not B personally, because B allows his workers to use the cars to pick up their lunch or retrieve their children from school, or because the bumpers display B’s own slogans. I disagree.
Here are some additional resources:



 
Barnett on Lawrence and the Thomas Dissent Surf over to the Conspiracy here and go to NPR for Real Audio here for Randy Barnett's analysis of Justice Thomas's dissent in Lawrence.


Saturday, June 28, 2003
 
Copyright and "the Progress of Science" Stripped of the references to patent, the copyright clause of the constitution would read: "Congress shall have Power To promote the Progress of . . Science . . . by securing for limited Times to . . . Authors. . .the exclusive Right to their respective . . . Writings." Via David Post of the Conspiracy, I learned that there is a campaign to strip science out of the protection of the copyright laws. Here is an exerpt from the description of the Public Library of Science Campagin:
    PLoS is promoting a new model for scientific publishing – where all scientific and medical publications would be freely available to read and use through online public libraries of science.
And as we all know, the entertainment industry has promoted the idea that copyright does include all artistic expression--whether related to the progress of science or not. Of course, the 18th-century term "science" was broad, encompassing all forms of systematic knowledge, including logic, philosophy, and what we call the social sciences and humanities, but it certainly did not encompass pure entertainment. Most of what was copyrighted in the late 18th century was scientific in the broad sense: charts, maps, and learned treatises. What does this mean? I'm not sure, but I do know that the copyright power today is only a distant cousin of the power as originally conceived.


 
Harry Potter and the International Order of Copyright Tim Wu has a nice piece on Slate on character appropriation. Here is a taste:
    You might think it a good thing that Rowling can stop the Potter cloning industry, whether it is in Brighton, Bangalore, or Bratislava. Who wants to see Harry turned into a hairy troll or forced to gallivant with foreign literary figures? But on closer examination the argument for letting Potter crush his international competition is quite weak. The case for preventing literal copying—in which a foreign publisher simply reprints a work without permission—is strong. But Potter follow-ons are different from the American Dickens piracy of the 19th century and DVD piracy of today. Literal copies are what come out when you use a photocopier. Potter's takeoffs are different: They either borrow characters and put them in a new, foreign context (Potter in Calcutta) or just use the themes and ideas of Potter (as in Tanya Grotter's case) as inspiration for a different kind of story. They aren't a direct replacement for a Potter book, the way a literal copy is, but rather a supplement or an adaptation.
Courtesy of Eugene Volokh.


 
Bloggers as Citizens Check out this post on Philosophy.Com.


Friday, June 27, 2003
 
New Papers on the Net Here is today's roundup:
    E. Sullivan (University of Minnesota, Twin Cities) posts Judicial Sovereignty: The Legacy of the Rehnquist Court, forthcoming in Constitutional Commentary. From the abstract:
      This review of John Noonan's book applauds the author for his careful, penetrating analysis of the Rehnquist Court's federalism theme. According to Narrowing the Nation's Power, the Rehnquist Court has rewritten the Constitution to advance federalism beyond anything recognizable in history. While Judge Noonan did not intend his book to be a close doctrinal analysis of all the Supreme Court's federalism cases in the past ten years, this review of the book brands the Court's work as "revisionism." This review asserts that the Supreme Court is engaged in judicial activism as it rewrites history in order to shift power back to the states at the expense of democratic principles and congressional prerogatives. The review goes beyond Judge Noonan's book by also analyzing Tenth Amendment cases as part of the larger rearticulation of federalism as an overarching constitutional and political doctrine. The review concludes that "while judicial review may have been the means to achieve the Court's federalism goals of strengthening the rights of others at the expense of Congress, ultimately the larger judicial and political shift, as ably demonstrated by Judge Noonan, has been structured - judicial supremacy over Congress' sovereignty and democratic values."
    Victor Fleischer (Columbia) and Geoffrey Smith (Ascent Group, LLC) post Columbia Venture Partners - MedTech Inc. From the abstract:
      This case study is a teaching exercise that was first used in the "Deals Workshop" seminar at Columbia Law School in April 2003. The case involves a potential investment by Columbia Venture Partners, a venture capital fund, in MedTech Inc., an emerging developer of medical devices for the cardiovascular market. The case offers an opportunity to examine the relative importance of various terms in typical venture capital contracts, such as valuation, liquidation preference, conversion rights, board representation, tag-along and drag-along rights, and vesting. The case also illustrates the use of negotiation tactics, including the use of "market" or industry standards, efficient risk allocation, and how to bring other objective criteria into the discussion.
      The case includes a teacher's manual followed by a background memo and term sheet intended for distribution to the students.
    Thomas Ulen (Illinois) posts Money and Politics: A Review of Ackerman & Ayres, Voting with Dollars, forthcoming in the University of Illinois Law Review. From the abstract:
      In Voting with Dollars Bruce Ackerman and Ian Ayres of the Yale Law School propose a new method of financing federal election campaigns. First, Ackerman and Ayres criticize what they call the "old paradigm" of campaign finance reform - one that relies on limiting the amount of money that individuals and organizations can donate and directs a modest amount of public money toward candidates for federal office. Their view is that these methods of command-and-control regulation are bound to fail in their goal of limiting the baneful influence of private money on federal campaigns and, thereby, on public policy. Then, Ackerman and Ayres argue in favor of two related reforms: a Patriot-dollar account that every registered voter may allocate to candidates and a secret donation booth for private contributions to candidates for public office.
      This review finds much to admire in the Ackerman-Ayres reform proposal. But it criticizes some minor administrative details of the reforms and raises two broader concerns: that the injection of up to $5 billion in public money into each campaign cycle might lead not to more deliberative democracy but to even more mind-numbing, trivial campaigns and that the amount of private money in federal campaigns may not be, after all, so large as to excite concern.
    Richard Frase (Minnesota) posts Limiting Retributivism: The Consensus Model of Criminal Punishment, forthcoming in THE FUTURE OF IMPRISONMENT IN THE 21ST CENTURY (Michael Tonry, ed., Oxford Univ. Press 2003). From the abstract:
      This paper argues that Norval Morris' theory of limiting retributivism should be recognized as the consensus model of criminal punishment. Some version of Morris' approach is embodied in the current sentencing regimes of almost all American states, even sentencing guidelines regimes expressly founded on a Just Deserts model, and in many nations, both in common law and civil law legal systems. Limiting retributivism is popular with practitioners, and makes good sense as a matter of policy, because it strikes an appropriate balance between the conflicting punishment goals and values which are recognized in almost all western countries. The theory accommodates retributive values (especially the importance of limiting maximum sanction severity) along with crime-control goals such as deterrence, incapacitation, rehabilitation, and denunciation. It also promotes efficiency, and provides sufficient flexibility to incorporate victim and community participation, local values and resource limitations, and restorative justice programs. Recognizing and promoting a consensus model based on Morris' theory would have considerable value; the theory enjoys widespread support, provides a principled basis to resist persistent political and media pressures to escalate sanction severity, and gives researchers and sentencing policy makers in diverse systems a common framework within which to compare, evaluate, and reform sentencing practices.
    J. Maurits Barendrecht (Tilburg University) posts Cooperation in Transactions and Disputes: A Problem-Solving Legal System? From the abstract:
      Prevention of harm, distribution (compensation, risk allocation, or redistribution of income) and controlling administrative costs are the generally accepted goals of the civil justice system. Is optimal cooperation, defined in this paper as using the problem-solving method of negotiation, a valuable fourth goal? If the legal system can successfully support problem-solving negotiations, without endangering other objectives, this is likely to lead to creation of value in terms of the preferences of the parties, to reductions in the costs of dispute resolution, and probably also to lower costs of transacting. Thus, optimal cooperation in the problem-solving manner seems to be a goal that is consistent with the perspective of welfare economics, in which the well-being of individuals is the criterion for normative evaluation.
      The net benefits of accepting this objective will depend on how the legal system can actually support problem-solving. This article discusses seven possible areas of implementation. A legal system attuned to problem-solving will be more open towards different types of interests and will stimulate the parties to find creative value-maximizing solutions. The perspective of problem-solving underlines the need to improve access to court, and more in general to reduce bargaining ranges by enhancing the way the legal system provides 'batnas'. If this is done, distribution of value will become easier and the effects of bargaining power can be diminished. Stressing the use of objective criteria, the perspective contains an invitation to redesign the rules of substantive private law so that they give better help to the negotiating parties when they deal with distributive issues. Useful objective criteria for distributive issues may be continuous instead of binary. Multiple objective criteria can exist next to each other. They do not have to be binding, but can be adjustable to individual differences in valuation of interests, different ways of creating value, and dissimilar external circumstances. The perspective of problem-solving also invites us to rethink the processes of contracting and dispute resolution, the role of blaming, and the principle of autonomy. Although many of the proposals suggested by this perspective are not new, it may help to develop a more coherent vision on reform of the civil justice system.
    Timo Goeschl (Cambridge, Land Economy) and Timothy Swanson (University of London, University College, Economics) post On Biology and Technology: The Economics of Managing Biotechnologies. From the abstract:
      This paper considers those sectors of the economy that operate under the same regimes of rewarding private innovators as others, but differ in that they face recurring problems of resistance, as occur in the pharmaceutical and agricultural industries. This recurrence originates in the natural processes of selection and evolution among humanity’s biological competitors. The paper examines the capacity for decentralised patent-based incentive mechanisms to result in socially optimal outcomes in these sectors under scale- and speed-dependent evolution of pathogens. It demonstrates that there is a fundamental incompatibility between the dynamics of the patent system and the dynamics of the resistance problem under both types of evolution. Under scale-dependent evolution, the externalities within a patent-based system indicate that decentralised mechanisms will result in systematic underinvestment in R&D that decreases further with an increasing severity of the resistance problem. Under speed-dependent evolution, a patent-based system will fail to target socially optimal innovation size. The overall conclusion is that patent-based incentive mechanisms are incapable of sustaining society against a background of increasing resistance problems. The paper concludes with appropriate policy implications of these results.
    Jonathan Kahn (University of Minnesota, Center for Bioethics) posts What's the Use? Law and Authority in Patenting Human Genetic Material, forthcoming in the Stanford Law & Policy Review. From the abstract:
      Most analyses of the relationship between intellectual property and genetics have focused on important but relatively discrete policy debates about when or whether genetic information should be patented. This article aims to delve beneath the surface of such debates to unearth and interrogate unarticulated themes and assumptions that implicitly reconstruct existing understandings of personhood, citizenship, and authority in terms of genetic discourses. Where the domains of science and the market intersect in patent law, genetic identity and property intertwine, each informing and to a degree becoming a function of the other. As experts in the natural and social sciences construct human identity at the molecular level, venture capital is making deals with these same professionals to manage and transform that identity into marketable products subject to patent rights. Genes are thus becoming sources both of identity and of property, concepts basic to historical constructions of American citizenship. Contemporary discourses of genetics and rights may be currently reshaping understandings of citizenship to the extent that the legal identity of the individual is implicated in and constructed through a relationship to her genetic material. The first step toward understanding and analyzing the nature of this relationship is to explore how genetic material itself is identified and defined within the domain of legal discourse. Intellectual property law provides a primary site for this exploration because, more than most other areas of the law, it deals explicitly with defining the nature and legal status of human genetic material. This article explores the patenting of human genetic material as a site where science, the market, and law "situate the self" in the genome in a manner that simultaneously renders it a subject of commerce. As an entry point to this still large area of study, I choose the relatively circumscribed arena presented by the rather heated debates that emerged in 1999 and 2000 around the proposed revisions to the "Utility Examination Guidelines" used by the U.S. Patent and Trademark Office (PTO) in evaluating the validity of patent applications. In examining the debates before the PTO, I aim to show how certain claims, supported by particular models of authoritative knowledge, gain recognition from and access to the power of the American legal and regulatory system while others are marginalized and denied. I argue that the PTO, functioning in a quasi-judicial manner, constructs distinctions between issues of policy and administration as a means to circumscribe the debates over patentability of human genetic material. The boundaries it draws enable the PTO to bracket and dismiss concerns couched in dignitary and religious discourses while recognizing and crediting the more technical arguments of scientific and economic experts.
Other papers of note:


 
Frist Memo on the Process for Confirming Supreme Court Nominees Courtesy of Howard Bashman:
    To All 100 Senators
    June 26, 2003
    Dear Colleague,
    In light of numerous letters of colleagues addressing the possibility of a Supreme Court nomination this summer, I wanted to write to outline my expectations of a fair and orderly process for Senate consideration of a Supreme Court nomination if any Justice retires at the end of this Supreme Court Term.
    First, to perform our obligations to the Supreme Court and the American people, the Senate should act on any nominee within a reasonable time to ensure, if possible, that a new Justice can assume office before the Supreme Court resumes hearing cases this fall. In most instances in the past, the Senate has acted promptly to consider a President's nominee to the Supreme Court. Most recently, for example, both of President Clinton's nominees to the Supreme Court received Senate votes before the Senate's August recess, after receiving Judiciary Committee hearings in July. Consistent with that schedule, if there is a retirement at the end of the Supreme Court's Term and a nomination is submitted shortly thereafter, I anticipate the Judiciary Committee would hold hearings in July and the full Senate would vote on the nomination before the Senate recess in August.
    Second, the Senate must vote on the President's nominee to either confirm or reject the nominee. The Constitution provides that the Senate shall advise and consent on a President's nominees to the Supreme Court. Since 1789, in accord with the Constitution and to fulfill its Constitutional responsibility, the Senate has consistently afforded Presidential nominees to the Supreme Court a vote of the Senate (except, of course, when the nominee withdrew before a vote). Any tactics to endlessly delay the process and prevent the Senate from performing its Constitutional responsibility to vote on a Supreme Court nomination would be inconsistent with the Constitution and contrary to the Senate's traditional practice for more than 200 years. As Majority Leader, I will work to ensure that a Supreme Court nominee by a President of either party receives a fair up or down vote in the Senate.
    The Senate has few Constitutional responsibilities as important as exercising its advice and consent on a President's nominee to the Supreme Court. I look forward to working with each of you to ensure a fair and orderly Senate process in the event of a Supreme Court nomination this summer or in the future.
    Sincerely yours,
    Bill Frist, M.D.


 
Volokh on Thomas Surf to Eugene Volokh's column on MSNBC (GlennReynolds.com) for a terrific column on Justice Thomas & the affirmative action cases. Here is a taste:
    Lots of people have criticized Justice Clarence Thomas’ anti-race-preferences opinion (from Monday’s Grutter v. Bollinger decision concerning the University of Michigan Law School’s admissions policy), on the grounds that there’s reason to think that he has benefited from some such preferences. Maureen Dowd in The New York Times has a particularly intemperate expression of this view: “It’s impossible not to be disgusted at someone who could benefit so much from affirmative action and then pull up the ladder after himself. So maybe he is disgusted with his own great historic ingratitude.”
    The most basic objection to this view, I think, is that if a judge thinks that a policy is unconstitutional, he has an obligation to so vote, whatever his personal history might be. “Gratitude” isn’t a proper basis for constitutional decisionmaking.


Thursday, June 26, 2003
 
Lawrence v. Texas Decided--Updated at 3:01 PM EDT
    Update Alert Scroll down to the Reporting and Blogospheric Reactions section of the post for a bunch of new links.
    Introduction The Supreme Court has decided Lawrence v. Texas, voting 6-3 to strike down the Texas statute. Justice Kennedy wrote the majority opinion, stating the law "demeans the lives of homosexual persons." This post, which will be updated periodically, provides basic information on the opinion, reactions, and most especially relevant legal theory resources. From the AP report in the New York Times:
      The men "are entitled to respect for their private lives," Kennedy wrote. "The state cannot demean their existence or control their destiny by making their private sexual conduct a crime," he said. Justices John Paul Stevens, David Souter, Ruth Bader Ginsburg and Stephen Breyer agreed with Kennedy in full. Justice Sandra Day O'Connor agreed with the outcome of the case but not all of Kennedy's rationale. Chief Justice William H. Rehnquist and Justices Antonin Scalia and Clarence Thomas dissented.
    A Preliminary Comment I have now read the majority opinion by Justice Kennedy and the principal dissent by Justice Scalia. Here are one or two simple points:
      --The holding is broad and not narrow. As I read it, the Court has established a clear right for gays to engage in sexual acts in private, and by implication, has reaffirmed (or established) a similar right for heterosexuals.
      --The major disagreement between Kennedy and Scalia was about the doctrine of stare decisis. Kennedy needed to argue that reversal of Bowers was consistent with the discussion of the role of precedent in Casey. Scalia charges the majority with inconsistency, and devotes a substantial portion of his dissent to Roe v. Wade, clearly weakening the dissent as an intellectual matter.
      --Scalia argues that the majority employed "rational basis scrutiny," but having read and reread Kennedy's opinion, I think this is just plain wrong. Although there is ambiguity, it looks like a fundamental rights decision to me. (Update: Unlearned Hand reacts to this here.)
      --The majority relied extensively on historical evidence that homosexuals were not singled out for special treatment by early anti-sodomy laws and on evidence that such laws were rarely enforced (or enforceable under the then-prevailing rules of evidence and criminal procedure).
    Opinions Here are links to the opinions: Excerpt from Justice Kennedy's Majority Opinion
      We conclude the case should be resolved by determining whether the petitioners were free as adults to engage in the private conduct in the exercise of their liberty under the Due Process Clause of the Fourteenth Amendment to the Constitution. For this inquiry we deem it necessary to reconsider the Court’s holding in Bowers. There are broad statements of the substantive reach of liberty under the Due Process Clause in earlier cases, including Pierce v. Society of Sisters, 268 U. S. 510 (1925), and Meyer v. Nebraska, 262 U. S. 390 (1923); but the most pertinent beginning point is our decision in Griswold v. Connecticut, 381 U. S. 479 (1965).
      In Griswold the Court invalidated a state law prohibiting the use of drugs or devices of contraception and counseling or aiding and abetting the use of contraceptives. The Court described the protected interest as a right to privacy and placed emphasis on the marriage relation and the protected space of the marital bedroom. Id., at 485. After Griswold it was established that the right to make certain decisions regarding sexual conduct extends beyond the marital relationship. * * * The Court began its substantive discussion in Bowers as follows: "The issue presented is whether the Federal Constitution confers a fundamental right upon homosexuals to engage in sodomy and hence invalidates the laws of the many States that still make such conduct illegal and have done so for a very long time." Id., at 190. That statement, we now conclude, discloses the Court's own failure to appreciate the extent of the liberty at stake. To say that the issue in Bowers was simply the right to engage in certain sexual conduct demeans the claim the individual put forward, just as it would demean a married couple were it to be said marriage is simply about the right to have sexual intercourse. The laws involved in Bowers and here are, to be sure, statutes that purport to do no more than prohibit a particular sexual act. Their penalties and purposes, though, have more far-reaching consequences, touching upon the most private human conduct, sexual behavior, and in the most private of places, the home. The statutes do seek to control a personal relationship that, whether or not entitled to formal recognition in the law, is within the liberty of persons to choose without being punished as criminals.
      This, as a general rule, should counsel against attempts by the State, or a court, to define the meaning of the relationship or to set its boundaries absent injury to a person or abuse of an institution the law protects. It suffices for us to acknowledge that adults may choose to enter upon this relationship in the confines of their homes and their own private lives and still retain their dignity as free persons. When sexuality finds overt expression in intimate conduct with another person, the conduct can be but one element in a personal bond that is more enduring. The liberty protected by the Constitution allows homosexual persons the right to make this choice.
      Having misapprehended the claim of liberty there presented to it, and thus stating the claim to be whether there is a fundamental right to engage in consensual sodomy, the Bowers Court said: "Proscriptions against that conduct have ancient roots." In academic writings, and in many of the scholarly amicus briefs filed to assist the Court in this case, there are fundamental criticisms of the historical premises relied upon by the majority and concurring opinions in Bowers. Brief for Cato Institute as Amicus Curiae; Brief for American Civil Liberties Union et al. as Amici Curiae; Brief for Professors of History et al. as Amici Curiae. We need not enter this debate in the attempt to reach a definitive historical judgment, but the following considerations counsel against adopting the definitive conclusions upon which Bowers placed such reliance.
      At the outset it should be noted that there is no longstanding history in this country of laws directed at homosexual conduct as a distinct matter. Beginning in colonial times there were prohibitions of sodomy derived from the English criminal laws passed in the first instance by the Reformation Parliament of 1533. The English prohibition was understood to include relations between men and women as well as relations between men and men. See, e.g., King v. Wiseman, 92 Eng. Rep. 774, 775 (K. B. 1718) (interpreting "mankind" in Act of 1533 as including women and girls). Nineteenth-century commentators similarly read American sodomy, buggery, and crime-against-nature statutes as criminalizing certain relations between men and women and between men and men. * * * Laws prohibiting sodomy do not seem to have been enforced against consenting adults acting in private. A substantial number of sodomy prosecutions and convictions for which there are surviving records were for predatory acts against those who could not or did not consent, as in the case of a minor or the victim of an assault. * * * It was not until the 1970’s that any State singled out same-sex relations for criminal prosecution, and only nine States have done so. * * * Bowers was not correct when it was decided, and it is not correct today. It ought not to remain binding precedent. Bowers v. Hardwick should be and now is overruled.
    Excerpt from Justice Scalia's Dissent
      I begin with the Court's surprising readiness to reconsider a decision rendered a mere 17 years ago in Bowers v. Hardwick. I do not myself believe in rigid adherence to stare decisis in constitutional cases; but I do believe that we should be consistent rather than manipulative in invoking the doctrine. Today's opinions in support of reversal do not bother to distinguish—or indeed, even bother to mention the paean to stare decisis coauthored by three Members of today's majority in Planned Parenthood v. Casey. There, when stare decisis meant preservation of judicially invented abortion rights, the widespread criticism of Roe was strong reason to reaffirm it. * * * Today's approach to stare decisis invites us to overrule an erroneously decided precedent (including an "intensely divisive" decision) if: (1) its foundations have been "eroded" by subsequent decisions, ante, at 15; (2) it has been subject to "substantial and continuing" criticism, ibid.; and (3) it has not induced "individual or societal reliance" that counsels against overturning, ante, at 16. The problem is that Roe itself, which today's majority surely has no disposition to overrule, satisfies these conditions to at least the same degree as Bowers. * * * The Texas statute undeniably seeks to further the belief of its citizens that certain forms of sexual behavior are "immoral and unacceptable," Bowers, supra, at 196—the same interest furthered by criminal laws against fornication, bigamy, adultery, adult incest, bestiality, and obscenity. Bowers held that this was a legitimate state interest. The Court today reaches the opposite conclusion. The Texas statute, it says, "furthers no legitimate state interest which can justify its intrusion into the personal and private life of the individual," ante, at 18 (emphasis added). The Court embraces instead JUSTICE STEVENS' declaration in his Bowers dissent, that the fact that the governing majority in a State has traditionally viewed a particular practice as immoral is not a sufficient reason for upholding a law prohibiting the practice, ante, at 17. This effectively decrees the end of all morals legislation. If, as the Court asserts, the promotion of majoritarian sexual morality is not even a legitimate state interest, none of the above-mentioned laws can survive rational-basis review.
    Reporting and Blogospheric Reactions Here are some posts of note: Legal Theory Resources Here are some resources on the issues of moral, political, and legal philosophy: Bibliography
      John M. Finnis, Law, Morality, and `Sexual Orientation,' 69 NOTRE DAME LJ 1049, 1066-67 (1994).
      Stephen Macedo, "Homosexuality and the Conservative Mind," and "Reply to Critics" (Robert George and Gerard Bradley, and Hadley Arkes), Georgetown Law Journal, v. 84 (December 1995).
    Links to the Record


 
Lazarus on Equal Protection Edward Lazarus has a Findlaw column entitled The Supreme Court And Equal Protection: Why This Term's Momentous Affirmative Action and Same-Sex Sodomy Cases Have Put the Doctrine To the Test. Read it!


 
Dorf & Adler Michael Dorf (Columbia) and Matthew Adler (Pennsylvania) have posted Constitutional Existence Conditions and Judicial Review. Here is the abstract:
    Although critics of judicial review sometimes call for making the entire Constitution nonjusticiable, many familiar norms of constitutional law state what we call "existence conditions" that are necessarily enforced by judicial actors charged with the responsibility of applying, and thus as a preliminary step, identifying, propositions of sub-constitutional law such as statutes. Article I, Section 7, which sets forth the procedures by which a bill becomes a law, is an example: a putative law that did not go through the Article I, Section 7 process and does not satisfy an alternative test for legal validity (such as the treaty-making provision of Article II, Section 2), has no legal existence. A judge who disclaims the power of judicial review nevertheless "enforces" Article I, Section 7 when he finds that a putative statute is (or is not) an enactment of Congress that he must take account of. We contrast existence conditions with "application conditions" that limit the legal force of a proposition of nonconstitutional law by some means other than vitiating the status of that proposition as law. For example, absent payment of just compensation, the Takings Clause would block the application of an otherwise valid statute such as the Endangered Species Act to a privately owned parcel of land if the impact of that application were to destroy all economically viable use of the parcel. Judicial enforcement of application conditions is not entailed by the enforcement of ordinary sub-constitutional law, even though judicial non-enforcement of application conditions might be unwise. After setting forth the conceptual distinction between existence and application conditions, we argue that many familiar constitutional provisions and doctrines - including the scope of enumerated powers and some individual rights - are best read as existence conditions and are thus necessarily judicially enforced. We then reconcile that observation with a variety of doctrines - including the political question doctrine, the enrolled bill doctrine, and the rational basis test - that seem to authorize the courts not to enforce or to "under-enforce" existence conditions. We argue that these doctrines should be understood in some instances as granting epistemic deference to non-judicial interpreters of the Constitution and in other instances as reflecting the fact that some constitutional provisions and doctrines are "perspectival" - that is, they have different content for different addressees.


 
Korsgaard and Parfit Christine Korsgaard (Harvard, Philosophy) has posted a paper entitled Normativity, Necessity, and the Synthetic a priori: A Response to Derek Parfit. From the abstract:
    If I understand him correctly, Derek Parfit's views place us, philosophically speaking, in a very small box. According to Parfit, normativity is an irreducible non-natural property that is independent of the human mind. That is to say, there are normative truths - truths about what we ought to do and to want, or about reasons for doing and wanting things. The truths in question are synthetic a priori truths, and accessible to us only by some sort of rational intuition. Parfit supposes that if we are to preserve the irreducibility of the normative, this is just about all we can say, at least until we bring in some actual intuitions to supply the story with some content.


 
New Papers on the Net Here is the roundup:
    Christopher Fairman (Ohio State) posts The Myth of Notice Pleading, forthcoming in the Arizona Law Review. Here is the abstract:
      This Article challenges the prevailing rhetoric of notice pleading in the federal courts. By examining the reality of pleading practice in eight diverse substantive areas (ranging from antitrust to defamation, negligence to RICO), a rich continuum of fact-based pleading requirements emerges. The scholarly literature, however, largely ignores what federal courts require under this vast umbrella of "heightened pleading." This Article uncovers narrowly-targeted forms of fact-pleading, more broad-based particularity mirroring the standard used in fraud claims, and even "hyperpleading"—mandating virtually every element of a claim be pleaded with particularity. From this micro-examination of pleading, the Article develops the first contemporary model of pleading based on actual federal practice: the pleading circle. Contrary to the notice pleading myth, current practice is not a simple binary choice: fact-based pleading for fraud; notice pleading for everything else. Rather, there is a spectrum beginning with the factless and universally rejected "conclusory allegation." Simplified notice pleading follows. The varieties of heightened pleading are next with their increasing particularity requirements. Ultimately, pleadings reach the point of prolixity and the same fate as its conclusory cousin. The Article also explores potential explanations for the disconnect between notice pleading rhetoric and reality. One overriding conclusion emerges—notice pleading as a universal standard is a myth.
    Lionel Smith (McGill) posts The Motive, Not the Deed, forthcoming in MODERN LAW OF REAL PROPERTY AND TRUSTS - ESSAYS FOR EDWARD BURN. From the abstract:
      The fiduciary's duty of loyalty has been subjected to a great deal of analysis. That analysis usually focuses on the distinctive proscriptive rules, which forbid the fiduciary from being in a conflict of interest and related situations. This paper argues that in order to understand what is truly distinctive about fiduciary obligations, it is necessary to take account of another body of fiduciary law: that which controls the exercise by fiduciaries (such as trustees or corporate directors) of their powers. When the two are considered together, the unique feature of fiduciary obligations becomes clearer. In the vast majority of obligations, in both the common law and the civil law traditions, observance or breach of the duty is judged by whether or not a particular result was brought about, an inquiry which may be associated with a 'standard of care' or an 'intensity' of the duty. What is unique about the fiduciary obligation of loyalty is that its observance or breach depends on the motive with which the fiduciary acted. The control of fiduciary powers follows a model of analysis which is much closer to the judicial review of administrative action than to the law of negligence. Once this is understood, the strict proscriptive rules which forbid conflicts of interest can be better analysed as protecting the beneficiary of a fiduciary obligation from the burden of proving an improper motive. The fiduciary must not only act with the proper motive; he must be seen so to act, and so he is forbidden to be in situations of conflicting motivational pressure.
Other papers of interest:


Wednesday, June 25, 2003
 
Eve Tushnet on Stare Decisis Check out her post here, a really excellent reply to my three-part series on stare decisis (Part I, Part II, Part III). I will post a reply in a day or two.


 
Blogging from Montreal: Part 7
    Introduction It is Wednesday afternoon at the ICANN meeting in Montreal. Louis Touton has the podium, and is presenting the RFP Draft for a very limited number of new "sponsored Top Level Domains." How limited? Here is the language from the draft:
      This RFP is only open to those entities listed in Appendix B, or affiliates or successors of those entities as defined below, who applied in Fall 2000 to ICANN as sponsors for a new sTLD.
    The Winners and the Losers And just who might that be? You can surf here and figure it out for yourself, but it looks like this is the list:
      --The World Health Organization (.health).
      --International Air Transport Association (.travel).
      --International Confederation of Free Trade Unions (.union).
      --Universal Postal Union (.post).
      --Nokia (.mobi).
    Although I may have missed some qualifiers, the number is clearly very small. And who are the losers? Everyone who was not proposing a "sponsored" TLD in the 2000 round. And who is that? Among others it is ICM, which seeks to create a sponsored TLD for adult content. And, of course, everyone who proposed a competitor to .com.
    Just What Is A Sponsored TLD Anyway? Heck if I know, but here is what the draft RFP says:
      The following characteristics, among others, should be present in an sTLD:
        (a) registrations must be limited to registrants from a well-defined and limited community, including members of a Sponsoring Organization (if indeed the Sponsoring Organization is a membership organization);
        (b) the scope of activity and the limits of registrations must be circumscribed by a clear charter;
        (c) in a hierarchical policy environment, the charter must clearly define which policy responsibilities are delegated from ICANN to the Sponsor;
        (d) open and transparent structures must be in place that allow for orderly policy development and the ability of members and registrants to influence the policy development and implementation process and for the Sponsoring Organization to be receptive to such influence; and
        (e) the Sponsor must commit to adhere to ICANN policies as they may change from time to time through consensus processes.
    Wow! This is quite weird. Take, for example, the requirement for "open and transparent structures." Why should ICANN be in the business of dictating the internal policymaking structure of entities like the World Health Organization? If WHO has opaque decision-making structures, that is the business of the United Nations, not ICANN! This whole enterprise is fundamentally flawed.
    Evaluating the Proposed "Montreal" Round What should we make of this very limited plan for expansion of the root? Certainly an argument can be made that this is a part of an absurdly slow process of root expansion. But how could expansion move more quickly at this stage of the game? Here are some possibilities:
      --A big bang market-driven expansion, e.g. an immediate auction of several hundred or several thousand new TLDs.
      --A steady-state market-driven expansion, e.g. a commitment to the auctioning of a few dozen new TLDs per year.
      --An open-ended beauty contest, e.g. a repetition of the process followed in the year 2000.
    But are any of these options realistic? I cannot imagine that any of these options for rapid expansion is either realistic or desirable. The ICANN board and community are simply not yet ready for a market-driven approach to root expansion. No one wants a repetition of the year 2000 process. Beauty contests are simply the worst possible mechanism for expansion of the root.
    Next Steps So what is ICANN to do? Given Stuart Lynn's legacy (the commitment to creating a small number of new sponsored Top Level Domains through a beauty-contest mechanism), it is not clear that ICANN has many feasible options. If the Lynn proposal were expanded, there would be a real danger of lock-in to a beauty contest mechanism as a template for future root expansion. Perhaps ICANN could simply abandon the Lynn proposal, but that would mean no expansion of the root. What is really needed is a fundamental rethinking of root allocation policy. And ICANN needs help in that enterprise. At a minimum, ICANN needs input from economists and policy scientists familiar with similar resource allocation problems--such as those faced by the Federal Communications Commission.
    The Long Run And in the long run, the root resource should be put to its highest and best use. The best way to accomplish that goal is by conducting regular auctions of a significant number of slots, as Karl Manheim and I have proposed in An Economic Analysis of Domain Name Policy.
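    For a rough sense of how a slot auction might work, here is a toy Python sketch. To be clear, the TLD strings and bid figures are invented, and this is not the specific mechanism from the paper; it simply awards k slots to the k highest sealed bids at a uniform clearing price set by the highest losing bid, a common design for multi-unit auctions.
      def auction_slots(bids, k):
          # bids: applicant -> sealed bid amount; k: number of slots for sale.
          ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
          winners = [name for name, _ in ranked[:k]]
          # Uniform clearing price: the highest losing bid
          # (zero if demand does not exceed supply).
          price = ranked[k][1] if len(ranked) > k else 0
          return winners, price

      # Invented applicants and bids, purely for illustration.
      bids = {".web": 900, ".shop": 750, ".kids": 600, ".xxx": 500, ".blog": 300}
      print(auction_slots(bids, k=3))  # (['.web', '.shop', '.kids'], 500)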
    Guide to Blogging from Montreal Posts.


 
Yin on the Affirmative Action Cases Surf here for a good post by lawprof Tung Yin.


 
Blogging from Montreal: Part 6 There is a very nice post from nhklein on ICANN Watch, touching both on whois and on the difficulties with establishing the ccNSO (the supporting organization for ICANN that consists of the various entities that operate ccTLDs). This is surely one of ICANN's most difficult problems, and in my opinion it stems from a fundamental ambiguity in the nature of ICANN. Here is a quote from the post:
    One European ccTLD manager put it bluntly: “We are not under the law of California!”
And from the point of view of the ccTLDs, what could be more obvious? ccTLDs, by nature, are primarily responsible to constituencies and laws within their national (or other designated regional) territories. But ccTLDs are also Top Level Domains. As such, they receive root service from ICANN. Without a listing in the root, a ccTLD might continue to operate on a local basis, but it could hardly be part of the global Internet. The fundamental legal status of ICANN may be ambiguous, but like it or not, it would appear that ICANN now has consolidated legal authority over the root. Hence, ccTLDs participate in the root because ICANN (in its role as the IANA) lists them in the root. ICANN is a California nonprofit corporation, and hence ICANN is governed by California law. As a result, it is inescapable that ccTLDs are (to the extent they deal with ICANN) subject, at least potentially, to California law. This flows directly from the rules of personal jurisdiction (or territorial jurisdiction, as it is sometimes called) and the rules of choice of law. ccTLD managers are not lawyers, and this may seem surprising to them, but it is simply a fact. Of course, ICANN could contract around personal jurisdiction and choice of law. It could contract with the managers of .uk for the application of British law and a London forum, with the manager of .fr for French law and a Paris forum, and so forth. But this would create intolerable cost and uncertainty for ICANN. No intelligent lawyer would suggest that each relationship between ICANN and a ccTLD manager should be governed by a different national law.
But there is another side of the story. Legally, ICANN may have dominion over the global root, but given the history of the Internet, this same resource can be viewed from another angle. The root can also be viewed as the product of a global system of voluntary cooperation. From this angle, it might be argued that ICANN can no more have legal "ownership" of the root than the French government can own the French language. This is the picture of the Internet as a global system of voluntary cooperation or a commons in the technical legal and economic sense. But here is the rub. If the root is a commons, then action with respect to the root requires consensus. You may say: Right on! ICANN should only act based on the consensus of the global Internet community. But if you say that, you do not understand the fundamental nature of the root. The root is a scarce economic resource. Decisions regarding the root produce winners and losers. If the management of the root requires consensus of the global Internet community, then the root will not be managed effectively. This may be sad, but it is true. Hence the need for some entity with the power to act in the public interest when consensus is impossible because of a conflict between private or governmental interests.
But the story is more complicated still! ICANN does not have any intellectual property rights in the root. The root is not copyrighted or patented. The root works because of what economists call network effects. Everyone has an incentive to use the authoritative global root, because it is de facto a standard. Thus the phrase "authoritative global root" is a bit of a misnomer, because the root is not backed by "authority"; it is the product of mutually reinforcing behaviors and expectations. And this means that there is a balance of power.
If ICANN's decisions with respect to the ccTLDs were to deviate too far from the interests of the ccTLDs, then the ccTLDs have a threat available. The ccTLD managers can threaten to form their own root. And their root would simply be a superset of the ICANN root. The ccTLD root would borrow the gTLD nameserver addresses from the ICANN root and then add the ccTLD file that the new ccTLD root managers agreed upon. But who would win? Because there would be a winner: powerful network effects would eventually result in either ICANN or the ccTLDs winning the contest for control of the global root. I don't know who would win. But that's the point. The winner is uncertain. Hence, ICANN has powerful incentives to bring the ccTLD managers into the ICANN process, and whether they know it or not, the ccTLD managers have powerful incentives to reach an understanding with ICANN.
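A toy illustration may help make the "superset root" point concrete. The sketch below (Python, with made-up zone data and a deliberately crude adoption model; nothing here comes from ICANN documents or real zone files) shows how a breakaway ccTLD root could be assembled by borrowing the ICANN root's gTLD delegations, and why a network-effects contest between two roots has a predictable form (one root wins) but an unpredictable winner:

    import random

    # Toy "root zones": TLD -> nameserver address (all values hypothetical).
    icann_root = {
        "com": "192.0.2.1", "org": "192.0.2.2",    # gTLD delegations
        "uk": "192.0.2.10", "fr": "192.0.2.11",    # ccTLD delegations
    }
    cctld_file = {"uk": "198.51.100.10", "eu": "198.51.100.20"}

    # A breakaway root borrows the gTLD entries and overlays its own
    # ccTLD file -- a superset (with substitutions) of the ICANN root.
    alternate_root = {**icann_root, **cctld_file}
    assert set(alternate_root) >= set(icann_root)

    # Crude network-effects contest: each new resolver tends to adopt
    # whichever root already has the larger share of adopters.
    # Lock-in on one root is virtually certain; its identity is not.
    def contest(share=0.5, steps=10_000, noise=0.05):
        for _ in range(steps):
            p = min(max(share + random.uniform(-noise, noise), 0.0), 1.0)
            adopt_icann = random.random() < p
            share += (adopt_icann - share) * 0.01
        return "ICANN root" if share > 0.5 else "ccTLD root"

    print(sorted(alternate_root))          # a superset of the ICANN TLDs
    print({contest() for _ in range(5)})   # the winner varies run to run

Run the last line a few times: sometimes the incumbent locks in, sometimes the challenger. That run-to-run uncertainty is the "balance of power" described above.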
Guide to Blogging from Montreal Posts.


 
Game Theory and the Dormant Commerce Clause Maxwell L. Stearns (George Mason) has posted A Beautiful Mend: A Game Theoretical Analysis of the Dormant Commerce Clause on SSRN. Here is the abstract:
    While the commerce clause neither mentions federal courts nor expressly prohibits the exercise of state regulatory powers that might operate concurrently with Congressional commerce powers, the Supreme Court has long used the dormant commerce clause doctrine to limit the power of states to regulate across a diverse array of subject areas in the absence of federal legislation. Commentators have criticized the Court less for creating the doctrine than for applying it in a seemingly inconsistent, or even haphazard, way. Past commentators have recognized that a game theoretical model, the prisoners' dilemma, can be used to explain the role of the dormant commerce clause doctrine in promoting cooperation among states by inhibiting a regime of mutual defection. This model, however, provides at best a partial account of existing dormant commerce clause doctrine, and sometimes seems to run directly counter to actual case results. The difficulty is not the power of game theory to provide a positive account of the cases or to provide the dormant commerce clause doctrine with a meaningful normative foundation. Rather, the problem has been the limited choice of models drawn from game theory to explain the conditions in which states rationally elect to avoid mutually beneficial cooperative strategies with other states. Professor Stearns shows how a state might avoid cooperation in a situation not captured in the prisoners' dilemma account to disrupt a multiple Nash equilibrium game, thus producing an undesirable mixed strategy equilibrium in place of two or more available pro-commerce, Nash equilibrium outcomes. At the same time, the defecting state secures a rent that only became available as a consequence of the Nash equilibrium, pro-commerce strategies of surrounding states and that is closely analogous to quasi rents described in the literature on relational contracting. The combined game theoretical analysis, drawing upon the prisoners' dilemma and multiple Nash equilibrium games, not only explains several of the most criticized features of the dormant commerce clause and several related doctrines, but also underscores the proper normative relationship between the dormant commerce clause doctrine and various forms of state law rent seeking.
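For readers who want to see the two game-theoretic models side by side, here is a minimal sketch. The payoffs are illustrative numbers of my own, not anything from Stearns's paper; the point is only the structure. In the prisoners' dilemma, mutual defection is the unique pure-strategy Nash equilibrium (the standard account of why the dormant commerce clause is needed), while the coordination game has multiple pro-commerce equilibria:

    from itertools import product

    def pure_nash(payoffs):
        """Pure-strategy Nash equilibria of a two-player game, where
        payoffs[(r, c)] = (row player's payoff, column player's payoff)."""
        rows = {r for r, _ in payoffs}
        cols = {c for _, c in payoffs}
        equilibria = []
        for r, c in product(rows, cols):
            u_row, u_col = payoffs[(r, c)]
            no_row_deviation = all(payoffs[(r2, c)][0] <= u_row for r2 in rows)
            no_col_deviation = all(payoffs[(r, c2)][1] <= u_col for c2 in cols)
            if no_row_deviation and no_col_deviation:
                equilibria.append((r, c))
        return equilibria

    # Prisoners' dilemma between two states: "open" trade vs. "barrier".
    pd = {("open", "open"): (3, 3), ("open", "barrier"): (0, 4),
          ("barrier", "open"): (4, 0), ("barrier", "barrier"): (1, 1)}
    print(pure_nash(pd))             # [('barrier', 'barrier')]: mutual defection

    # Coordination game: two pro-commerce equilibria (common standard A or
    # common standard B); miscoordination wipes out the gains from trade.
    coord = {("A", "A"): (4, 4), ("B", "B"): (4, 4),
             ("A", "B"): (0, 0), ("B", "A"): (0, 0)}
    print(sorted(pure_nash(coord)))  # [('A', 'A'), ('B', 'B')]

In the second game, a state that unilaterally switches standards--say, to capture a rent for in-state interests--does not merely forgo its own gains; it destroys the other states' coordination payoff, which is, as I read the abstract, the kind of defection Stearns's model addresses.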


 
New Papers on the Net Here is today's roundup. Abstracts were not accessible when I prepared this post; my apologies.


Tuesday, June 24, 2003
 
Equitas non pro fastidiosi est. (Roughly: equity is not for the fastidious.) You really truly must surf here.


 
Garamendi Sasha Volokh has a great post on American Insurance Association v. Garamendi--the foreign policy preemption case from yesterday.


 
Blogging from Montreal: Part 5
    Introduction Today, ICANN staff posted a draft document entitled Establishment of new sTLDs: Request for Proposals on the ICANN website. This document is the result of former ICANN President Stuart Lynn's proposal for the creation of a very small number of so-called sponsored Top Level Domains (sTLDs). Lynn's idea was to allow a few, well-heeled organizations to create new TLDs similar to .aero or .museum. After all, what harm can this do? Karl Manheim and I have posted a paper analyzing Lynn's proposal. In that paper, we reached the following conclusions:
      • The proposal is best understood as a short-run solution to problems that emerged after the November 2000 round of TLD expansion. This short-run and short-sighted approach cannot succeed, because the long run problem that ICANN must solve is a resource allocation problem. Short-run solutions will have unintended consequences because they will commit the root resource and create precedents for future resource allocation decisions.
      • The proposal will move ICANN in the direction of the worst of all resource allocation models, the beauty-contest approach. ICANN should learn from its own painful experience in November 2000 and from the 75 years of failure at the FCC under the beauty-contest model. Of all the decisions that ICANN could make now, moving towards the beauty-contest model is the worst possible decision.
      • The criteria in the proposal will have the unintended consequence of favoring well-financed globalized non-profit membership organizations at the expense of regional, relatively poorer institutions that serve the needs of communities in the third world.
      • The criteria in the proposal will focus the decision-making process on the characteristics of the applicant (responsiveness to community, etc.) rather than the usefulness of the new sTLD. The experience of the FCC teaches that this is an inherent problem in the beauty-contest model.
      • The best options for solving ICANN’s short-run problem are entirely mechanical or objective allocation systems. One such proposal is to grandfather in all the qualified applications from the November 2000 round. Another possibility is to grandfather all qualified applications from non-profit institutions.
      • ICANN should establish a task force to design a rational policy that will put the root to its highest and best use and avoid the substantial institutional problems produced by the beauty-contest model.
    Twomey's Fine Finesse The so-called RFP is actually many things, including the communication of the following news:
      The Board, in consultation with ICANN President Paul Twomey, is also considering initiating a comprehensive study by ICANN of whether and how to proceed with additional TLDs at the same time as the creation of new sTLDs is being considered through this RFP process, and as the evaluation of the original Proof of Concept is being completed. The outcome of that study may or may not support the continued growth of additional TLDs and may or may not continue the concepts of sTLDs and uTLDs. Until this more substantive review is completed, the Board does not feel it is appropriate to commit to a substantial expansion of sTLDs. The Board does, however, feel that there should be an opportunity to allow those who submitted applications for sTLDs in Fall 2000, but whose applications were not successful, to have an opportunity to submit updated and revised applications at this time, as an extension of the original Proof of Concept. For the reasons presented in the Plan of Action, this opportunity is not being extended to uTLD applicants.
    In my opinion, this is damage control at its best. The very worst thing that could have happened to ICANN would have been the adoption of Stuart Lynn's plan as a blueprint for the expansion of the root! This would have guaranteed that for the foreseeable future, ICANN would have been recapitulating the history of the FCC--adopting the "beauty contest" model for the allocation of the root resource. Instead, Twomey used the RFP draft as the vehicle to avoid "beauty contest" lock-in. Thus, the RFP suggests "a comprehensive study by ICANN of whether and how to proceed with additional TLDs at the same time as the creation of new sTLDs is being considered through this RFP process, and as the evaluation of the original Proof of Concept is being completed." This was a crucial step toward a more rational policy for the allocation of the root resource. Also important is the decision to exclude unsponsored Top Level Domains (uTLDs) from the interim RFP process. Had the RFP process been extended to uTLDs, yet another step towards lock-in of the beauty-contest approach would have been taken. And finally, the proposal is limited to unsuccessful applicants from the 2000 round of gTLD expansions. This limitation makes it clear that this limited sTLD expansion round is basically a process for cleaning up the mistakes made in 2000. Bravo!
    The Bad News But there is bad news as well. The RFP is part of an elaborate "beauty contest" approach. An elaborate application must be submitted and evaluated by independent evaluators. The criteria include the following:
      • the proposed sTLD;
      • the proposed Sponsoring Organization, including the proposed extent of its policy-making authority, its proposed policy-making process, and an indication of the level of support from the proposed Sponsored TLD Community;
      • how the proposed new sTLD adds value to the DNS;
      • how the proposed sTLD would reach and enrich broad global communities;
      • how the Sponsoring Organization would implement policies and processes to protect the rights of others; and
      • how the Sponsoring Organization and its selected Registry Operator would assure stable registry operation, including provisions for assuring continuity of service in the event of business failure.
    But perhaps this is the best that could be done, given the institutional momentum behind the Lynn proposal.
    A Comprehensive Study of Root Resource Policy In the short run, the RFP is not terribly important. A few sTLDs will be created. Although there is a lot of window dressing, this is really a grandfathering process. In the long run, some very important issues need to be addressed, and a comprehensive study of root resource policy is exactly what is needed. For some thoughts by Karl Manheim and me on these issues, see An Economic Analysis of Domain Name Policy.
    More tomorrow.
    Guide to Blogging from Montreal Posts.


 
Blogging from Montreal: Part 4 It is Tuesday afternoon in Montreal, and the GNSO Council is discussing the report on expansion of the name space. On the one hand, this report represents a step forward. So far, ICANN's basic policy toward the root has been to waste the resource. Although the root could comfortably support a thousand to ten thousand additional top level domains (TLDs), expansion of the root has been proceeding at a snail's pace. Here is the recommendation in the GNSO report:
    Expansion of the gTLD namespace should be a bottom-up approach with names proposed by the interested parties to ICANN. Expansion should be demand-driven. There is no support for a pre-determined list of new names that putative registries would bid for.
But this is all very abstract. Consider the following list of "possible objective criteria:"
    1. Future expansion should increase the level of competition.
    2. Future expansion should avoid names that are confusingly similar so as to avoid confusing net users.
    3. Future expansion should avoid names that might deceive or defraud net users.
    4. An easily understood relationship must exist between a new gTLD and its stated purpose.
    5. Future names should be both for commercial and non-commercial purposes.
    6. Future names should add value to the domain name system. The purpose of introducing new names is to make the domain name system more useful and more accessible to broader communities of interest and to more end users (Lynn report 3-2003).
Notice that Number 1 and Number 6 are in tension, if not outright contradiction. Competition means competition--that is, proprietors of gTLDs aiming to attract the same customers on the basis of price and service. The idea of "value-added TLDs" has been a coded way of saying that new TLDs should be different from old TLDs, and this notion is inherently anticompetitive. As the GNSO Council was concluding its discussion, the comment was made that this report is "wishy-washy," and that is just about the best thing one can say about it.
Nonetheless, the GNSO Council approved the report unanimously.
Guide to Blogging from Montreal Posts.


 
Marston on the affirmative action cases Check out his post entitled "Curmudgeonly Thoughts on Grutter v. Bollinger." Here is an excerpt:
    [I] am troubled by a few things in the opinion. First, Justice O'Connor seems all too willing to defer to the judgment of what she calls "major American businesses" and "the military" and their estimations of good public policy. Those of us who are interested in resisting the corporatization of the university should be skeptical of a Court that takes "major American businesses" at their word when they prefer any policy, especially if the Court seems to be deferring to their judgment about educational policy. And there is already too much deference to the military going on right now in American society. They should be under civilian control, not the other way around. In other words, even if you like the way such arguments cut right now, you might not like the result the next time around. In addition, these sentences from O'Connor's opinion make me really uncomfortable:
      In order to cultivate a set of leaders with legitimacy in the eyes of the citizenry, it is necessary that the path to leadership be visibly open to talented and qualified individuals of every race and ethnicity. All members of our heterogeneous society must have confidence in the openness and integrity of the educational institutions that provide this training.
    Here the Court is trying to foster legitimate rule by appealing to a principle of merit in the selection of access to higher education. The visual metaphors here are quite stunning when you think about them: what is necessary is the appearance of openness and integrity.
And I agree, it really is stunning! O'Connor surely wrote carelessly (or much too candidly) when she said "legitimacy in the eyes of the citizenry."
Update: Also, check out this post by John Eden on the Legal Theory Annex.


 
The Layers Principle and NAT In a post entitled Oh no, Not Nat!, Eric Rescorla (Educated Guesswork) discusses an important question about the relationship between NAT (Network Address Translation) and the layered nature of Internet architecture. Rescorla is reacting to a post by Ed Felten, commenting on The Layers Principle: Internet Architecture and the Law by Minn Chung and me.


 
The battle for the Constitution? Cal Thomas has an op/ed with the above title. Here is an excerpt:
    The . . . currently prevailing . . . view of the Constitution is the judicial philosophy of Justice Felix Frankfurter. Speaking of Supreme Court justices, Frankfurter said, "It is they who speak and not the Constitution." That view was echoed in a 1958 Supreme Court decision (Cooper v. Aaron): "Article VI of the Constitution makes the Constitution the 'supreme law of the land' . . . It is emphatically the province and duty of the judicial department to say what the law is . . . It follows that the interpretation of the (Constitution) enunciated by this Court . . . is the supreme law of the land . . . ." When the Constitution is not the supreme law, the Supreme Court will inevitably come to see itself as the supreme law. Charles Evans Hughes, who became chief justice in 1930, remarked earlier: "The Constitution is what the judges say it is."
Thanks to Howard Bashman.


 
Senate Rules Committee Votes to Limit Filibusters of Judicial Nominees See this AP Report. Here is a snippet:
    A Senate committee with all its Democratic members absent voted to limit filibusters of President Bush's judicial nominees Tuesday, a move Republicans hope will usher future federal judges through the Senate faster, even if Democrats want to stop them. Democrats oppose changing Senate filibuster rules for judicial nominees, but Republicans have a one-vote majority on the Senate Rules Committee and expected to win Tuesday's committee vote in any case. Democrats are expected to fight the measure on the Senate floor. The Rules Committee officially voted 10-0 for the measure, which would reduce the number of senators needed to force a vote on a judicial nominee with each successive vote until only a 51-member majority is needed.
For my analysis of this issue, see Breaking the Deadlock: Reflections on the Confirmation Wars. And see this post on How Appealing.


 
Balkin on Affirmative Action and Judicial Selection Reacting to a story in the New York Times, Jack Balkin has a thoughtful post on the possible effects of the affirmative action decisions on appointments to the Supreme Court. Here is a taste:
    President Bush, who is a shrewd politician, well understands that even as he attempts to pack the Court with judges whose beliefs he admires, he must keep public opinion in mind in making judicial appointments at the Supreme Court level (by contrast, very few members of the public pay much attention to lower court nominations). His father understood this point too, which, I think, explains both Souter's appointment and Thomas'. (Souter was more acceptable because unknown; Thomas was expected to be more acceptable because although he was very conservative he was also African-American.) I have long believed that it is not in the interest of the Republican Party for Republican-appointed judges to overrule Roe v. Wade. (See my discussion of the Supreme Court and party coalitions.) Nor, for that matter, is it in the interest of the Republican Party for those judges completely to outlaw affirmative action in college admissions (government contracting is another matter). Getting rid of Roe and affirmative action through judicial fiat simply bolsters the Democratic coalition. I'm sure that Bush and Karl Rove understand this perfectly.
Surf on over to Balkanization.


 
Updated Post on the Affirmative Action Cases Yesterday, I put up a long post on the affirmative action cases, which has been updated several times. It includes a variety of resources on the cases, emphasizing the theoretical and normative questions. Scroll down or click here.


 
25 Years Howard Bashman has an excellent post that starts with an email from a student at Harvard:
    For us, the most surprising part of today's rulings is the following flourish in the penultimate paragraph of Justice O'Connor's majority opinion in the law school cases: "We expect that 25 years from now, the use of racial preferences will no longer be necessary to further the interest approved today." Slip op. at 31. A group of HLS students is bitterly divided: is this conclusion a holding, binding upon the Courts of Appeals?
Bashman replies:
    Obviously, that sentence from Justice O'Connor's opinion will mean whatever five or more Justices serving on the Court some twenty-five years from now decide that it means.
And Rick Hasen points out in a post on his excellent Election Law Blog:
    The unwritten assumption here is that affirmative action programs are safe for the next 25 years. That is wishful (or perhaps, depending on one's politics, not wishful) thinking. Eventually, a president will get to replace Justice O'Connor.
Bashman and Hasen are both right when they say that a majority of the Supreme Court can do whatever it pleases without fear of reversal. The Supreme Court's decisions are final. But neither Howard nor Rick has answered the question at hand: Is the 25 year period binding on the lower federal courts as a matter of stare decisis? Here is a bit more from the email to Bashman:
    Those who think not note that the Court has used the term "expect," indicating that the Court is merely speculating as to a future state of affairs. Such conjecture, they argue, cannot be the basis of a constitutional holding. Instead, the Courts of Appeals will have to weigh those expectations against the facts found in future cases, and will be free to hold as they see fit on the basis of the other principles in the opinion. Others argue that it is not for the Courts of Appeals to decide when a proposition receiving five votes is harmless dicta and when it is binding. Although law students routinely consider "dicta" that part of an opinion not necessary to render the judgment, of course many important constitutional principles were rendered thusly. These students argue that this rule is a holding, binding on the Courts of Appeals. They point out that Justice Thomas has, in a bit of gamesmanship, characterized the sentence as a "holding." Slip op. at 2 (Opinion of Thomas, J., concurring in part and dissenting in part). They also note that the five Justices in the majority have not responded to that characterization, as they easily could have. And none of the Justices casting majority-granting votes concurred in the opinion but only in the judgment with respect to the sentence in question, leaving it without the force of law. This, combined with the fact that the sunset provisions of the Equal Protection Clause are inarguably constitutional principles, makes the 25-year statement a rule of law which Courts of Appeals must apply.
There is a common assumption here--made both by the students who think that it is a "holding" and by those who think it is not. The common assumption is the legislative theory of holdings, that is, the view that a holding is an authoritative statement of a rule binding the lower courts via the rule of vertical stare decisis. It is easy to see how contemporary American law students would come to adopt the legislative theory. The United States Supreme Court has used such legislative holdings for more than a generation. The most famous example is, of course, Miranda, in which the Court stated a holding that went far, far beyond the facts of Miranda's case. Where does the legislative theory of holdings come from? The answer is found in legal realism. The realists viewed holdings as predictions of how future courts are likely to act. A legislative pronouncement is thus a "holding," but only insofar as it provides a reliable guide to future decisions by the court that issued it. And that brings us round to Bashman and Hasen, both of whom observe that O'Connor's 25-year pronouncement is not a very good predictor of the Court's future behavior--once the composition of the Court changes. But all of this is most unfortunate. Why? Because there is a much better understanding of "holding" available. The holding of a case is the ratio decidendi--the reasoning necessary to the result. Even the Supreme Court can only decide individual cases on their facts. For the Court to do otherwise is for the Court to transgress the limits on judicial power that it jealously enforces via the justiciability doctrines. Moreover, adopting the realist view of precedent undermines the rule of law. Even a recent legislative pronouncement of a "holding" is an uncertain guide if the decision was 5-4 and either the composition of the Court has changed or some members of the Court are wavering. And this brings us back to the 25 years. This statement is obviously an obiter dictum, and even if the Court had explicitly called it a holding, it would not be part of the holding. What happens 25 years from now, or 5 years from now, was not before the Court.
For my views on the role of stare decisis, see The Case for Strong Stare Decisis, or Why Should Neoformalists Care About Precedent?, in three parts. And be sure to read the posts by Bashman and Hasen.


 
Klarman: Is the Supreme Court Irrelevant? Michael Klarman (Virginia) posts Is the Supreme Court Sometimes Irrelevant?: Race and the Southern Criminal Justice System in the World War II Era. Here is the abstract:
    This article considers the impact of Supreme Court criminal procedure decisions on the treatment of blacks by the southern criminal justice system. It considers decisions in the areas of coerced confessions, race discrimination in jury selection, the right to counsel, and the right against mob-dominated trials. The article finds that these Supreme Court rulings had almost no impact. Blacks continued to be almost entirely excluded from juries in criminal cases; law enforcement officers continued to beat black defendants into confessing; and court-appointed white lawyers turned in sham performances. The article also considers the indirect effects of these decisions and the litigation that produced them. Here, the rulings may have been more consequential, in terms of educating blacks about their rights, mobilizing social protest, facilitating NAACP branch-building and fund-raising, and instructing oblivious whites about the egregiousness of Jim Crow conditions. Finally, the article considers why Supreme Court criminal procedure rulings were so much less efficacious (for southern blacks) than contemporaneous Court decisions invalidating the white primary and mandating the admission of blacks to southern public universities.


 
New Papers on the Net Here is the roundup:
    Guido Ferrarini (Università degli Studi di Genova), Niamh Moloney (Queen's University Belfast) and Cristina Vespro (Universite Libre de Bruxelles) post Executive Remuneration in the EU: Comparative Law and Practice. From the abstract:
      Executive pay practices are currently a "cause celebre" of corporate governance in the media, among regulators, in the marketplace, and in academia, in the US, the UK, and Europe. The purpose of this paper is to examine the approaches taken across Europe to the regulation of executive pay practices in listed companies. The outstanding feature of the regulation of executive pay across Europe is the extent to which it reflects the interconnection between pay and corporate governance. This link is expanded on in Part B with respect to the different rules found across European legal systems and how they address/prioritize the concerns which executive pay potentially raises. The role of public regulation is relatively important for disclosure of executive pay, while best practices and private codes generally have some impact on the way in which executive compensation is set for listed companies. On the whole, there is some convergence in continental Europe towards the Anglo-American model. The merits of full disclosure of executive remuneration are increasingly acknowledged in corporate governance codes and reports, while the use of remuneration committees is on the rise in the Continent. The research data on reported pay practices for 2001 among FTSE Eurotop300 companies reveal a reliance on performance-based pay generally and a somewhat variable adoption of share options programmes and other equity-based incentive contracts, which are generating difficulties in dispersed ownership systems. The executive pay problem may therefore be a particular cost of dispersed ownership, and the particular legal and policy responses, which are widely debated, a specific feature of Anglo-American corporate governance. Nonetheless, the faultline between both systems, which is evident from the different approaches European states have taken, calls for particular care in the adoption of pan-European reforms but also in the transplanting of reforms based on the Anglo-American experience.
    Roberto Galbiati (Università degli Studi di Siena) and Alberto Zanardi (Bocconi University) post The Redistributive Effects of Tax Evasion: A Comparison between Conventional and Multi-Criteria Perspectives. From the abstract (a numerical illustration follows below):
      The first part of the paper extends the approach developed by Lambert and Ramos (1997) for the measurement of the redistributive effects of a personal income tax to include tax evasion. Particular attention is paid to the concept and measurement of horizontal equity. We show that the criteria adopted to identify the equals are critical in order to evaluate the fairness of income taxation. In particular, we compare the traditional criterion, when income alone is considered (what we call conventional perspective), with a criterion based on a composite set of socio-economic features (multi-criteria perspective). Secondly, this framework is applied to the measurement of the redistributive effects of tax evasion in Italian income tax. The empirical analysis shows that in the multi-criteria perspective, taxation improves the horizontal equity of pre-tax income distribution whereas, if the sole income is assumed as criterion of equity, income tax determines horizontal inequity.
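    For the curious, here is a back-of-the-envelope illustration of what measuring a "redistributive effect" involves. This is a minimal sketch using the familiar Gini-based Reynolds-Smolensky index and a crude reranking check; it is not the Lambert and Ramos decomposition that the paper extends, and the incomes and taxes are entirely hypothetical:

        def gini(incomes):
            """Gini coefficient of a list of nonnegative incomes."""
            xs = sorted(incomes)
            n, total = len(xs), sum(xs)
            weighted = sum((i + 1) * x for i, x in enumerate(xs))
            return 2 * weighted / (n * total) - (n + 1) / n

        # Hypothetical incomes: the 60-earner evades, the similar 62-earner
        # reports honestly, so two near-equals pay very different taxes.
        pre = [20, 40, 60, 62, 100]
        tax = [2, 6, 4, 20, 25]
        post = [p - t for p, t in zip(pre, tax)]

        # Reynolds-Smolensky index: the fall in the Gini coefficient.
        print(round(gini(pre) - gini(post), 4))  # positive: some redistribution

        # Crude horizontal-equity check: does the tax rerank anyone?
        def rank(xs):
            return sorted(range(len(xs)), key=lambda i: xs[i])
        print(rank(pre) == rank(post))  # False here: evasion reranks near-equals

    The tax reduces measured inequality overall, yet the evasion-driven reranking of the two near-equals is the kind of horizontal inequity that the paper's multi-criteria framework is concerned with.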
Other papers of interest:


Monday, June 23, 2003
 
New Papers on the Net Here is the roundup:
    Douglas Cumming (Alberta, Finance & Management) and Jeffrey Macintosh (Toronto, Law) post Selection Effects, Corporate Law and Firm Value. From the abstract (for what a Heckman correction does, see the sketch after this roundup):
      A significant amount of work has been done on corporate law choice and firm value (in terms of share prices, Tobin's Q, or variants), particularly in recent years. These empirical studies of the effect of corporate law on firm value have invariably used econometric methods that treat the decision to incorporate as a random event. Recent research from the U.S. and abroad has recently shown that this decision is far from random. It is quite possible that the magnitude, sign, and statistical significance of the effect of reincorporation on firm value are quite different when the selection effects are considered. Using a prior dataset that enables selection effects to be considered, I show that the accounting for selection effects using Heckman (1976, 1979) corrections is both economically and statistically significant in ascertaining the impact of corporate law on firm value.
    Brian Cheffins (Cambridge) posts Are Good Managers Required for a Separation of Ownership and Control? From the abstract:
      Logically, in a corporate governance system where big companies are widely held and control over corporate policymaking is delegated to a cohort of full-time executives, there needs to be "good" managers. In Britain, however, ownership separated from control in large business enterprises at a time when the country's corporate executives were allegedly amateurish and complacent. The paper examines this British paradox and concludes that dynamics affecting institutional investors explain how ownership structures were reconfigured when doubts existed about managerial quality.
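    A concrete aside on the Cumming and Macintosh paper above: the Heckman (1976, 1979) correction it applies can be sketched in a few lines. The simulation below uses my own toy data (nothing from the paper's dataset); it mimics a world where the decision to reincorporate is non-random and correlated with unobserved firm quality, and shows the standard two-step fix:

        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        n = 5_000
        z = rng.normal(size=n)                 # instrument driving selection only
        x = rng.normal(size=n)                 # observed firm characteristic
        u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n).T
        selected = z + u > 0                   # firms that choose to reincorporate
        y = 1.0 + 2.0 * x + e                  # firm value, observed if selected

        # Step 1: probit for the selection decision, then inverse Mills ratio.
        Z = sm.add_constant(np.column_stack([z, x]))
        probit = sm.Probit(selected.astype(float), Z).fit(disp=0)
        index = Z @ probit.params
        mills = norm.pdf(index) / norm.cdf(index)

        # Step 2: OLS on the selected firms, adding the Mills ratio term.
        X = sm.add_constant(np.column_stack([x[selected], mills[selected]]))
        print(sm.OLS(y[selected], X).fit().params)
        # Roughly [1.0, 2.0, 0.5]: the third term absorbs the selection bias
        # that a naive OLS on the selected sample would otherwise soak up.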


 
Blogging from Montreal: Part 3 It is Monday afternoon, and I am blogging from the NCUC (Non Commercial Users Constituency) meeting. Milton Mueller started the meeting off with an introduction. One of Mueller's points concerned his perception that the GAC (Government Advisory Committee) is asserting a greater role within ICANN. In particular, Mueller asserted that the GAC was attempting to get the ICANN Board to act on WIPO II without going through the constituencies process. Mueller suggested that this was a harbinger of the possible evolution of ICANN in the direction of an intergovernmental or quasi-international organization. Wendy Seltzer then made a short presentation on the At Large Advisory Committee. The ALAC is supposed to establish a structure by which individuals can be represented in the ICANN process. Individual participation in ICANN is, of course, hopeless. The costs of participation are huge. The benefits of participation for an individual are minuscule. So the At Large process has come to focus on intermediary organizations. But as Mueller points out, this creates a certain tension between ALAC and NCUC--they are both interested in attracting an overlapping set of organizations to participate in the ICANN process.
Update: Check out Mueller's ICANN Watch post here.
Guide to Blogging from Montreal Posts.


 
Library Internet Filtering Decision: Updated 3:48 PM EDST Here is a link to today's opinion in United States v. American Library Assn., Inc. Rehnquist authored an opinion in which O'Connor, Scalia, and Thomas joined. Kennedy and Breyer each wrote separate concurring opinions. Stevens and Souter (joined by Ginsburg) each wrote separate dissenting opinions. Contrary to several media reports, there is no majority opinion. Reuters reports that the Supreme Court has upheld the statute mandating Internet filtering at public libraries. Here is a short excerpt:
    The U.S. Supreme Court upheld on Monday a law requiring the nation's public libraries to filter out Internet pornography, ruling it does not violate free-speech rights. By a 6-3 vote, the justices reversed a ruling by a special three-judge federal court panel in Philadelphia that the filtering requirement caused libraries to violate the First Amendment constitutional rights of their patrons.
Here is a snippet from Justice Kennedy's concurring opinion:
    If, on the request of an adult user, a librarian will un-block filtered material or disable the Internet software filter without significant delay, there is little to this case.
And here is a similar passage from Justice Breyer's concurrence:
    At the same time, the Act contains an important exception that limits the speech-related harm that “overblocking” might cause. As the plurality points out, the Act allows libraries to permit any adult patron access to an “overblocked” Web site; the adult patron need only ask a librarian to unblock the specific Web site or, alternatively, ask the librarian, “Please disable the entire filter.”
Update: Here is the LA Times story. And here is the New York Times story.
Update: Must read post by Eugene Volokh here.


 
Affirmative Action Decisions: Updated on 06/23/2003 at 6:14 AM EDST This post includes links to the opinions, an excerpt from O'Connor's opinion in the law school case, links to media reports, links to posts in the blogosphere, and legal theory resources.
    The Opinions
    From Justice O'Connor's Opinion in the Law School Case
      [T]oday we endorse Justice Powell’s view that student body diversity is a compelling state interest that can justify the use of race in university admissions. * * * we have never held that the only governmental use of race that can survive strict scrutiny is remedying past discrimination. Nor, since Bakke, have we directly addressed the use of race in the context of public higher education. Today, we hold that the Law School has a compelling interest in attaining a diverse student body.
      The Law School’s educational judgment that such diversity is essential to its educational mission is one to which we defer. The Law School’s assessment that diversity will, in fact, yield educational benefits is substantiated by respondents and their amici. Our scrutiny of the interest asserted by the Law School is no less strict for taking into account complex educational judgments in an area that lies primarily within the expertise of the university. Our holding today is in keeping with our tradition of giving a degree of deference to a university’s academic decisions, within constitutionally prescribed limits. * * * In order to cultivate a set of leaders with legitimacy in the eyes of the citizenry, it is necessary that the path to leadership be visibly open to talented and qualified individuals of every race and ethnicity. All members of our heterogeneous society must have confidence in the openness and integrity of the educational institutions that provide this training. As we have recognized, law schools “cannot be effective in isolation from the individuals and institutions with which the law interacts.” See Sweatt v. Painter, supra, at 634. Access to legal education (and thus the legal profession) must be inclusive of talented and qualified individuals of every race and ethnicity, so that all members of our heterogeneous society may participate in the educational institutions that provide the training and education necessary to succeed in America. * * * That a race-conscious admissions program does not operate as a quota does not, by itself, satisfy the requirement of individualized consideration. When using race as a “plus” factor in university admissions, a university’s admissions program must remain flexible enough to ensure that each applicant is evaluated as an individual and not in a way that makes an applicant’s race or ethnicity the defining feature of his or her application. The importance of this individualized consideration in the context of a race-conscious admissions program is paramount. See Bakke, supra, at 318, n. 52 (opinion of Powell, J.) (identifying the “denial . . . of th[e] right to individualized consideration” as the “principal evil” of the medical school’s admissions program).
      Here, the Law School engages in a highly individualized, holistic review of each applicant’s file, giving serious consideration to all the ways an applicant might contribute to a diverse educational environment. The Law School affords this individualized consideration to applicants of all races. There is no policy, either de jure or de facto, of automatic acceptance or rejection based on any single “soft” variable. Unlike the program at issue in Gratz v. Bollinger, ante, the Law School awards no mechanical, predetermined diversity “bonuses” based on race or ethnicity. See ante, at 23 (distinguishing a race-conscious admissions program that automatically awards 20 points based on race from the Harvard plan, which considered race but “did not contemplate that any single characteristic automatically ensured a specific and identifiable contribution to a university’s diversity”). Like the Harvard plan, the Law School’s admissions policy “is flexible enough to consider all pertinent elements of diversity in light of the particular qualifications of each applicant, and to place them on the same footing for consideration, although not necessarily according them the same weight.” Bakke, supra, at 317 (opinion of Powell, J.). * * * It has been 25 years since Justice Powell first approved the use of race to further an interest in student body diversity in the context of public higher education. Since that time, the number of minority applicants with high grades and test scores has indeed increased. See Tr. of Oral Arg. 43. We expect that 25 years from now, the use of racial preferences will no longer be necessary to further the interest approved today.
    Reports The Supreme Court has decided Gratz and Grutter. Here is the Associated Press story. And here is an excerpt:
      In two split decisions, the Supreme Court on Monday ruled that minority applicants may be given an edge when applying for admissions to universities, but limited how much a factor race can play in the selection of students. The high court struck down a point system used by the University of Michigan, but did not go as far as opponents of affirmative action had wanted. The court approved a separate program used at the University of Michigan law school that gives race less prominence in the admissions decision-making process. The court divided in both cases. It upheld the law school program that sought a "critical mass" of minorities by a 5-4 vote, with Justice Sandra Day O'Connor siding with the court's more liberal justices to decide the case. The court split 6-3 in finding the undergraduate program unconstitutional. Chief Justice William H. Rehnquist wrote the majority opinion in the undergraduate case, joined by O'Connor and Justices Antonin Scalia, Anthony M. Kennedy, Clarence Thomas and Stephen Breyer.
    And here are more stories:
    From the Blogosphere Here are selected reactions from the blogosphere:
      --Jack Balkin's analysis here. A snippet:
        Most institutions of higher education should be breathing a sigh of relief at these two opinions. They allow most elite institutions to go about their business as before. They impose higher costs on big state universities, but these universities are already so firmly committed to affirmative action that they will probably gladly take on the additional costs. Essentially the Court has said that affirmative action in higher education is constitutional, as long as individualized determinations are made and specific markers or point systems benefitting minorities are not used.
      --Eugene Volokh. Here is a short passage:
        I do think that the Gratz/Grutter combo will mean both more cheating and less transparency in the design of race preferences -- which may lead to less political accountability, since voters will find it harder to identify the true magnitude of race preferences, and more of the political acrimony caused by allegations of cheating and disingenuousness.
      --Matt Evans on the Buck Stops Here.
    Resources Here are some resources and links from the Legal Theory perspective and others:
      Stanford Encyclopedia of Philosophy entry on Affirmative Action. Highly recommended as a starting point for theoretical analysis.
      Ethics Updates has a nice resource page on Race, Ethnicity and Multiculturalism--a good starting point for philosophical perspectives on the general issues.
      Dahlia Lithwick comments on Slate.
      Linda Greenhouse of the NY Times on the Oral Argument.
      Transcript in Gratz. Transcript in Grutter. Audio is available here.
      BBC, Should Universities Ban Affirmative Action.
      Washington Post, Affirmative Action Under Attack.
      Robert Allen, Rawlsian Affirmative Action: Compensatory Justice as Seen from the Original Position
      SCOTUS Blog has a report.
      Howard Bashman reports from How Appealing: here and here and here.
      University of Michigan's resource page.
      President Bush's remarks on the cases.
      A post from the Volokh Conspiracy on viewpoint diversity & the cases.
      Bibliography from the Stanford Encyclopedia entry:
        Bolick, Clint. The Affirmative Action Fraud: Can We Restore the American Civil Rights Vision? Washington, D.C.: Cato Institute, 1996.
        Cahn, Steven M. (ed). Affirmative Action and the University: A Philosophical Inquiry. Philadelphia: Temple University Press, 1993.
        Carter, Stephen L. Reflections of an Affirmative Action Baby. New York: Basic Books, 1991.
        Cohen, Marshall et al. (eds). Equality and Preferential Treatment. Princeton, New Jersey: Princeton University Press, 1977. [Contains the early articles in Philosophy & Public Affairs.]
        Dworkin, Ronald. A Matter of Principle. Cambridge, Massachusetts: Harvard University Press, 1985.
        Eastland, Terry. Ending Affirmative Action: The Case for Colorblind Justice. New York: Basic Books, 1996.
        Edley, Christopher, Jr. Not All Black and White: Affirmative Action and American Values. New York: Hill and Wang, 1996.
        Edwards, John. When Race Counts: The Morality of Racial Preference in Britain and America. London: Routledge, 1995.
        Fullinwider, Robert K. The Reverse Discrimination Controversy: A Moral and Legal Analysis. Totowa, New Jersey: Rowman and Littlefield, 1980.
        Glazer, Nathan. Affirmative Discrimination: Ethnic Inequality and Public Policy. New York: Basic Books, 1975.
        Goldman, Alan H. Justice and Reverse Discrimination. Princeton, New Jersey: Princeton University Press, 1979.
        Greenawalt, Kent. Discrimination and Reverse Discrimination. New York: Alfred A. Knopf, 1983.
        Gutmann, Amy and Thompson, Dennis. Democracy and Disagreement. Cambridge, Massachusetts: Harvard University Press, 1996.
        Kahlenberg, Richard D. The Remedy: Class, Race, and Affirmative Action. New York: Basic Books, 1996.
        Lawrence, Charles R. III and Matsuda, Mari J. We Won't Go Back: Making the Case for Affirmative Action. Boston: Houghton Mifflin Company, 1997.
        Paul, Ellen Frankel et al. (eds). Reassessing Civil Rights. Oxford: Blackwell Publishers, 1991.
        Rosenfeld, Michel. Affirmative Action and Justice: A Philosophical and Constitutional Inquiry. New Haven, Connecticut: Yale University Press, 1991.
        Symposium: Bakke -- Civil Rights Perspectives. Harvard Civil Rights-Civil Liberties Law Review, 14 (Spring 1979).
        Symposium: Race-Based Remedies. California Law Review, 84 (July 1996).
        Symposium: The Meanings of Merit -- Affirmative Action and the California Civil Rights Initiative. Hastings Constitutional Law Quarterly, 23 (Summer 1996).
        Valls, Andrew. "The Libertarian Case for Affirmative Action," Social Theory and Practice, 25 (Summer 1999), 299-323.
        Young, Iris Marion. Justice and the Politics of Difference. Princeton, New Jersey: Princeton University Press, 1990.
        Waldron, Jeremy. "Humility and the Curse of Injustice," in Robert Post and Michael Rogin, eds., Race and Representation: Affirmative Action (New York: Zone Books, 1998), 385-389.


Sunday, June 22, 2003
 
Blogging from Montreal: Part 2 It is a warm day in a beautiful city, but I am two floors below ground in the Sheraton at the open meeting of the GAC (Government Advisory Committee) with the Board of Directors of ICANN. Vint Cerf has introduced Paul Twomey. This is Twomey's first meeting as ICANN's President, although he has been involved since the very beginning of ICANN's formation. I am immediately struck by the difference in tone and style between Twomey and his predecessor Stuart Lynn. Twomey is much more focused on the management of ICANN--on what staff, resources, communications, and participation are required to make ICANN actually work. Twomey then turned his attention to substance, including, for example, the "WIPO II" recommendations and IDN ("Internationalized Domain Name") implementation. Twomey then turned to more controversial topics--the creation of new generic Top Level Domains (gTLDs) and the so-called "At Large" process. The most "controversial" issue was WIPO II. I'll have more to say about this tomorrow. The final item on the agenda was Evolution and Reform Committee Chair Alejandro Pisanty's report about the country code Name Support Organization (ccNSO). It has been difficult for ICANN to get the ccTLDs to participate in the ccNSO process. From an outsider's point of view, this is both completely understandable and somewhat comic. At one level, the managers of the ccTLDs (like .uk (United Kingdom) or .fr (France)) seem to believe that they ought to be able to dictate the terms upon which they will receive root service. If ICANN had been organized as a for-profit enterprise, the economic absurdity of this position would be crystal clear. On another level, it is completely understandable that the managers of ccTLDs would be outraged when something that was given away for free (root service) suddenly has a price tag attached to it.
Update: Check out Michael Froomkin, who adds the proper dose of cynical acid to my softball post.
Guide to Blogging from Montreal Posts.


 
Downward Spirals Department: Politicization and Collegiality on the Sixth Circuit The Cincinnati Enquirer has a story entitled Political divide sparks disorder in the courts. Here is a snippet:
    "To be treated this way by colleagues is beyond the pale," said Boyce Martin Jr., the 6th Circuit's chief judge and the target of the recent misconduct accusations. "(The court) is totally more politicized." Although judges always have been political creatures, politics has never played a more visible and, some say, more destructive role in the country's judicial system than it does today.


 
Smith on Barnette You really should download Steve Smith's Barnette's Big Blunder. Here is the big blunder:
    [N]o official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion.
Smith asserts this passage is both wrong and self-contradictory. My Friday post, Smith on Barnette, has drawn a characteristically thoughtful and intelligent reply from Steve Smith:
    It seems to me that maybe our main difference is over the meaning of "prescribe." (This was a source of disagreement at the original conference too.) In the paper, I think I argue that if I assert, "X is true and you should believe it," this is a form of prescription. This doesn't seem to me to be an especially unusual use of the term, but of course you can understand it differently, and maybe there's some good reason to do so in this context. (Such as to limit the scope of Barnette. But then you'd have to look at the various invocations of it and see whether the government has actually "prescribed" in the forbidden sense. Maybe it would have. It wouldn't have, I think, in say the moment of silence case, but maybe that's why with religion government is forbidden not only to "prescribe" but also to "endorse"?) Anyway, what seemed to me more controversial in this respect, though I also argue for it, is the claim that even to say "I believe X" is implicitly to say "X is true, and so you should believe it too," so that "I believe X" is itself a prescription. There was quite a lot of resistance to this claim, but it seems right to me.
But I am not sure I agree with Steve that we disagree about the meaning of the verb "to prescribe." I think the key is "to prescribe as an orthodoxy," with its associated sense of "establishment." But this is really just a sideshow; you want to get to the main event by downloading Steve's fine paper.


 
Blogging from Montreal: Part 1 This week I will be blogging from the ICANN meeting in Montreal. Among the issues that I will be discussing are:
    --Internationalized Domain Names (IDN) and the Global Digital Divide.
    --The relationship between ICANN and the managers of the country code Top Level Domains (ccTLDs).
    --The role of democratic mechanisms in Internet governance.
    --Root expansion and the sponsored Top Level Domain plan.
Today is the first day of the Montreal meeting. So far, no surprises or fireworks.
Guide to Blogging from Montreal Posts.


Saturday, June 21, 2003
 
Confirmation Wars: Bits and Pieces All of this courtesy of How Appealing and Election Law Blog:
    --The Los Angeles Times reports that Orrin Hatch will bring Carolyn Kuhl to the floor despite blue-slip objections by Feinstein and Boxer.
    --The New York Times reports that "John Kerry said yesterday that he would filibuster any Supreme Court nominee who opposes the legality of abortion or would 'turn back the clock' on civil liberties, the environment and worker protection."
    --Also in the Times, Linda Greenhouse has a piece that reports the following:
      "The real point is not that the court is conservative, but that the spectrum of views on the court today represents a particular range, from ardent conservative to central or moderate liberal," said Paul Gewirtz, a professor at Yale Law School. "There's something to be said for a court of centrists, but that's not what we have. One end of the spectrum is represented, and not the other." Among legal academics, conservatives and liberals have their mirror-imaged constitutional wish lists. Conservatives call theirs the "Constitution in exile," a vision that includes state sovereignty, limited national power and strong protection for private property. Liberals refer to the "shadow Constitution," under which the government has affirmative obligations to alleviate inequality, protect people from harm that results only indirectly from official action, and surround criminal defendants and prisoners with a range of safeguards.


 
The Downward Spiral of Politicization I've been arguing that the confirmation wars are evidence that the selection process for federal judges is in a downward spiral of politicization. Where is the bottom? I've tried to argue that after a death spiral, the bench would be in a very sorry state. With that in mind, take a look at this post by Rick Hasen.


 
Rappaport and McGinnis on Legislative Entrenchment Michael Rappaport (San Diego) and John McGinnis (Northwestern University) have posted Symmetric Entrenchment: A Constitutional and Normative Theory (forthcoming in the Virginia Law Review) on SSRN. Rappaport and McGinnis have authored a series of important papers on the role of supermajoritarian mechanisms in constitutional law and theory. In this paper, they investigate legislative entrenchment--the attempt by one session of a legislature to insulate its products from repeal or revision by its temporal successors. If you are interested in constitutional law, you will want to download this important paper:
    In this article, we defend the traditional rule that legislative entrenchment, the practice by which a legislature insulates ordinary statutes from repeal by a subsequent legislature, is both unconstitutional and normatively undesirable. A recent essay by Professors Eric Posner and Adrian Vermeule disputes this rule against legislative entrenchment and provides the occasion for our review of the issue. First, we argue that legislative entrenchment is unconstitutional, offering the first comprehensive defense of the proposition that the original meaning of the Constitution prohibits legislative entrenchments. We show that a combination of textual, historical, and structural arguments make a very compelling case against the constitutionality of legislative entrenchment. In particular, the Framers incorporated into the Constitution the traditional Anglo-American practice against legislative entrenchment, as evidenced by early comments by James Madison--comments that have not been previously discussed in this context. Moreover, legislative entrenchment essentially would allow Congress to use majority rule to pass constitutional amendments. On the normative issue, we offer a new theory of the appropriate scope of entrenchment: the theory of symmetric entrenchment. Under our theory, there is a strong presumption that only symmetric entrenchments--entrenchments that are enacted under the same supermajority rule that is needed to repeal them--are desirable. The presumption helps to distinguish desirable entrenchments that would improve upon government decisions from undesirable ones that simply involve legislatures protecting their existing preferences against future repeal. To be desirable, entrenchments must generally be symmetric, because the supermajority rule that is applied to the enactment of entrenched measures would improve the quality of these measures and therefore compensate for the additional dangers that entrenchments pose. This theory steers a middle path between a strict majoritarian position, which would prohibit all legislative entrenchments, and a position that would allow legislative majorities to entrench measures.


 
Rodriguez and Weingast: PPT & Legislative History Daniel Rodriguez (San Diego) and Barry Weingast (Stanford, Hoover Institution) have posted their paper, The Positive Political Theory of Legislative History: New Perspectives on the 1964 Civil Rights Act and its Interpretation (forthcoming in the University of Pennsylvania Law Review), on SSRN. From the abstract:
    In this paper, we develop an argument for the use of legislative history in statutory interpretation. This argument is based upon positive political theory (PPT), and we draw upon PPT models to explain how courts can rely on probative evidence in a statute's legislative history to construe ambiguous legislation. The heart of the paper is an extended analysis of the Civil Rights Act of 1964 and, in particular, Title VII of that statute. Relying on the PPT model, we argue that a key error made by scholars and courts involved in the interpretation of the Act is the reliance on statements of "ardent supporters" of the legislation, supporters who strategically argue for a very broad construction of the Act. By neglecting statements of legislators whose support for the Act was pivotal, scholars and courts fall into the mistake identified memorably by Judge Harold Leventhal, who suggested that legislative history-based interpretation is akin to looking out over a crowd and picking out your friends. Our analysis provides a framework for the improved use of legislative history in interpreting statutes; many questions remain, however, about the desirability of intentionalist methods of interpretation.
Download it while it's hot!


 
New Papers on the Net Here is the roundup:
    Lucian Bebchuk (Harvard) and Allen Ferrell (Harvard) upload Federal Intervention to Enhance Shareholder Choice, already published in the Virginia Law Review. From the abstract:
      The thesis put forward in this Essay is straightforward. When the managers and shareholders cannot be easily separated, control rights should lie in the hands of someone whose loyalties are aligned with the creditors, but the reorganization itself should not affect the value of the managers' equity interest. These principles are not new, but rather forgotten. Although they barely made a toehold in the academic literature of the time, 10 the early law of corporate reorganizations in this country adopted these principles in an environment in which it seems likely that they vindicated the creditors' bargain. Hence, it is this body of law to which we turn first.
    Paul Lombardo (University of Virginia, Center for Biomedical Ethics) posts Taking Eugenics Seriously: Three Generations of ??? are Enough?, forthcoming in the Florida State University Law Review. From the abstract:
      Recent media attention to the history of the eugenics movement in America has resulted in apologies from the governors of Virginia, Oregon, North Carolina, South Carolina and California for state mandated surgical sterilizations under eugenics laws. This article tracks the genesis of the eugenics apology movement, which began with a monument to the infamous case of Buck v. Bell that was erected just as heightened media coverage of milestones in human genome research filled the headlines. The article also explores the involvement of most early geneticists in the eugenics movement, attempting to put into historical context both the hopeful side of eugenics that made it so popular early in the 20th Century, as well as the dark memories we normally associate with eugenics in that era. The article draws parallels between the urge to eradicate disease embraced within the eugenics movement, and the similar urge often used to argue for new genetic technologies, such as prenatal genetic diagnosis. It concludes with an echo of the Buck case, exhorting readers to avoid simplistic moralisms in reflecting on historic cases like Buck, in favor of a more searching analysis that would require us to understand our own eugenic impulses.


 
Popular Sovereignty Work by Bruce Ackerman, Akhil Amar, and others has made the notion of popular sovereignty central to contemporary constitutional theory. But does the notion of "We the People" make sense as a normative idea? Glen Staszewski (Michigan State University - Detroit College of Law) has posted Rejecting the Myth of Popular Sovereignty and Applying an Agency Model to Direct Democracy (forthcoming in the Vanderbilt Law Review) on SSRN. Here is the abstract:
    Successful ballot measures are commonly perceived as a pure reflection of "the will of the people." Yet initiatives do not appear magically on election ballots or in statute books as a result of the electorate's wishes. Rather, such measures are conceived, drafted, and vigorously promoted by identifiable initiative proponents, who often represent particular special interests and may not even live in the communities in which their measures are proposed. The myth of popular sovereignty in direct democracy should be rejected. Instead, initiative measures should be characterized as lawmaking by initiative proponents, whose general objective is either ratified or rejected by the voters. Rejecting the myth of popular sovereignty in direct democracy would alleviate many of the problems of judicial review that commentators have identified. By treating the initiative proponents as the relevant lawmakers, courts would be able to identify impermissible motives underlying a measure's enactment and continue using an intentionalist methodology of statutory interpretation without resorting to a counterproductive fiction of "voter intent." On the other hand, express recognition that direct democracy involves lawmaking by initiative proponents intensifies the tension between direct democracy and representative government, the problems associated with the delegation of lawmaking authority to unelected actors, and the absence of safeguards to encourage careful deliberation and reasoned decisionmaking in the initiative process. Initiative proponents are not the only unelected lawmakers in our democracy. Administrative agencies have freely enacted binding rules based on broad delegations of authority since the New Deal. This development has always been considered constitutionally suspect, but courts have allowed it to continue unabated largely because administrative law has developed alternative safeguards to replace those provided in the legislative process by representation and the requirements of Article I, Section 7. Specifically, administrative agencies must comply with the notice-and-comment procedures of the Administrative Procedure Act, and their final rules must withstand hard-look judicial review. Those safeguards ensure that agency officials engage in careful deliberation and reasoned decision-making and have thereby legitimized agency lawmaking. A similar model is needed to constrain the proponents of ballot measures and thereby legitimize the use of direct democracy. This Article therefore draws on the agency model to propose amending state laws that regulate direct democracy to subject the proponents of initiatives to the requirements of public deliberation and reasoned decisionmaking that presently constrain administrative agencies. The Article argues that unlike previous proposals, such reforms would promote careful deliberation, improve the legislative product, and provide a heightened standard of judicial review that is well established and directly responsive to the serious structural shortcomings of the current method of lawmaking by the people.


 
Gardner reviews Nagel John Gardner (Oxford) reviews Thomas Nagel's Concealment and Exposure in the Notre Dame Philosophical Reviews. Here is a taste:
    In Concealment and Exposure, Thomas Nagel collects eighteen previously published essays of varying length and importance. Most are works of moral and political philosophy, although the final five (which I will not discuss) relate to his other main area of philosophical interest, the relationship between mind and reality. Among the papers in moral and political philosophy, a few might equally be classified as works of cultural commentary, and a couple perhaps even as works of social psychology. Five were published in scholarly books and journals, but the rest appeared in newsstand periodicals such as The New Republic and The London Review of Books (which gives us some reason to be more optimistic about public culture than Nagel is himself). More than half are review articles, mostly, but not only, discussing works by other philosophers.


Friday, June 20, 2003
 
Lessig on Layers Larry Lessig posts re The Layers Principle: Internet Architecture and the Law, "Lawrence Solum and Minn Chung have a comprehensive and powerful view of layers in network architecture, nicely linking that architecture to policy implications, in particular, how governments regulate." And thank you to B2FXXX and Stephen's Web for the links to our download page on SSRN. More thank yous are due to GrepLaw and ICANN Watch. If you missed it, check out Ed Felten's (Princeton, Computer Science) post entitled Layers on his blog Freedom to Tinker. Also: It's the Architecture, Stupid and It's the Architecture, Stupid--Part II on Copyfight (Donna Wentworth) and this post on A Copyfighter's Musings and this post on Pressepapiers.net.


 
Ideas, Intellectuals and the Public Today, tomorrow, and the next day, at Goodenough College, London. The Institute of Ideas is putting on Ideas, Intellectuals and the Public. Here is a description:
    Ideas can define and transform society, but how healthy is intellectual life today? Human beings have a unique capacity to use reason to act on the world, but we don't always make the most of that capacity. It is crucial for those of us who value ideas to assess rigorously and honestly the state of contemporary ideas, the role of intellectuals and the audience for their ideas. In a period when Big Brother refers not to George Orwell but to a reality TV show, and when bright young things are developing gameshow formats rather than scribbling essays; when thinkers join thinktanks to design short-term government policy rather than reflecting on and challenging the status quo, and when the ever growing number of graduates seem more interested in job prospects than academic endeavour, is intellectual life in terminal decline? This three-day conference will look at the debates about knowledge, the university system and new arenas for ideas, bringing together academics, journalists, novelists, scientists, artists and activists in a public arena to look at where new ideas are coming from, who is their audience and whether they match up to the tasks facing humanity.


Thursday, June 19, 2003
 
Brett Kavanaugh CBS reports:
    In a move sure to inflame the already bitter battle over White House judicial nominees, the Washington Post reports that President Bush plans to name a key aide to independent counsel Kenneth W. Starr to an influential appellate court judgeship. The Post says White House lawyer Brett M. Kavanaugh will be nominated for an opening on the U.S. Court of Appeals for the District of Columbia, which is considered the second most powerful court in the nation, after the U.S. Supreme Court. The report comes a day after the White House rejected an offer from Senate Democrats to meet to discuss ways to avoid a nasty partisan fight over potential Supreme Court nominees.
Eugene Volokh's testimonial can be found here.


 
No Longer Bloggered It appears that the severe problems with Blogger--the back end of Blogspot--have been resolved. Because of the Blogger problems, I had been displaying only the last 30 posts (4-6 days worth) on the main page. As of today, I have reset the display option to 8 days. I hope that this change is more convenient for those who surf here once a week or so.


 
Unpublished Dispositions and Stare Decisis For a nice discussion of the stare decisis effect of unpublished decisions, see Crescat Sententia.


 
Smith on Barnette
    Smith's Paper Steven Smith, my University of San Diego colleague, has posted Barnette's Big Blunder, forthcoming in the Chicago-Kent Law Review, on SSRN. Here is the abstract:
      Among the most celebrated statements ever issued in a Supreme Court opinion is Justice Robert Jackson's resounding declaration in Barnette v. West Virginia Board that "[i]f there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein." By using the preposition "or" rather than "and," Jackson asserted two constitutional prohibitions: government may not force citizens to confess an orthodoxy, but government may also not prescribe any orthodoxy. Upon reflection, however, the "no prescription" prohibition is manifestly untenable, and neither Justices nor scholars have ever tried to apply it in any consistent way. Nonetheless, this impossible prohibition exerts a powerful and unfortunate rhetorical influence over constitutional discourse: recent examples discussed in the article include work by respected legal scholars including Kent Greenawalt and Michael McConnell and judicial decisions including the recent Newdow decision on the Pledge of Allegiance.
      This article first explains why the "no prescription" prohibition could not possibly be taken at face value. The article then considers the various ways in which courts and scholars have tried to qualify or reinterpret that prohibition (such as by limiting the prohibition to religion), and it argues that these efforts do not succeed in avoiding the decisive objections to a "no prescribed orthodoxy" principle. Our constitutional discourse would be more honest and cogent, the article concludes, if Barnette's "no prescription" principle were excised "root and branch."
    Smith's Argument Why does Smith reach these conclusions? His first argument is that the "no prescription" prohibition is contradictory:
      Barnette’s “no orthodoxy” passage is self-contradictory because the passage itself comprises a sort of mini-orthodoxy, or a prescription of what shall be orthodox in an important “matter of opinion” centrally affecting “politics” and law (and “religion,” and probably “nationalism” as well). It is not foreordained, after all, that government officials must avoid prescribing what beliefs are favored within the subject matter categories listed in Barnette. Governments have often issued such prescriptions--indeed, as we will notice shortly, governments in this country have issued and continue to issue such prescriptions routinely-- and many people have believed that government should so prescribe. Barnette flatly declares these “pro-prescription” beliefs to be not orthodox-- not “right opinion”-- and it declares the contrary view to be the constitutional orthodoxy: indeed, Barnette’s phrase “fixed star” is little more than a metaphorical equivalent for “orthodoxy.” In this respect, Barnette’s “‘no orthodoxy’ orthodoxy” contradicts itself; it is like the pastor who repeatedly declares during worship that there must be absolutely no talking, or like the sign on the school wall that says no signs are permitted on the wall.
    But I find this argument curious. Here is the passage with which we are concerned:
      [N]o official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion.
    What does "prescribing an orthodoxy in matters of opinion" mean? Limning this passage requires us to understand what it means to "prescribe an orthodoxy in politics, nationalism, religion, or other matters of opinion." There are three elements here. Let's look at each of them:
      --"prescribe." The verb "to prescribe" is from Middle English and thence from the Latin praescribere to write at the beginning, dictate, order, from prae- + scribere to write. The relevant definition is: "1 a : to lay down as a guide, direction, or rule of action : ORDAIN b : to specify with authority."
      --"orthodox." The adjective "orthodox" is from the Greek orth- + doxa opinion. The relevant meaning is "conforming to established doctrine especially in religion."
      --"politics, nationalism, religion, or other matters of opinion." The key here is the phrase "other matters of opinion," indicating that it is "opinion" about politics, nationalism, religion, and similar matters" that is at stake.
    So what does it mean to "prescribe an orthodoxy on matters of opinion, such as religion, nationalism, or politics?"
    Distinguishing Belief, Action on Belief, and Prescription of Orthodoxy Most reasonably construed, the idea is that a public official would set forth a belief about a matter of opinion (such as religion, nationalism, or politics) as an established doctrine laid down as a rule or guide for the belief of citizens. Importantly, we must distinguish the act of establishing an orthodoxy as a guide for the belief of others from other belief-involving acts by public officials. Thus:
      1. Having a belief is not establishing that belief as an orthodoxy. Public officials have beliefs of their own, but having a belief is not the same as establishing that belief as an orthodoxy. From the fact that George W. Bush has Christian beliefs, we cannot validly infer that George W. Bush has established Christian belief as an orthodoxy.
      2. Acting on a belief is not the same as establishing the belief as an orthodoxy. Public officials act on the basis of their beliefs, but this is not the same as establishing the belief as an orthodoxy. For example, a legislator may believe that abortion is morally wrong and vote on that basis for a bill that bans so-called "partial birth abortion." But this act does not establish the belief that abortion is morally wrong as an orthodoxy. There is a loose way of talking that equates legislating on the basis of a belief with establishing the belief as an orthodoxy, but this is a rhetorical exaggeration. You can easily see this by asking if the person who makes such an assertion would count legislation that conforms with her own belief as establishing an orthodoxy. The answer is invariably "no."
      3. Prohibition of an action is not the same as establishing the belief that the action is wrong as an orthodoxy. At this point, the reason for point 3 should be clear: it is just a special case of point 2. Prohibition of an action may well be based on a belief that the action is wrong. But as we established in point 1, having a belief is not the same as establishing a belief as an orthodoxy. Nor does acting on a belief transform the having of the belief into the establishment of an orthodoxy: this is the point we established in point 2.
      4. The prohibition of official establishment of orthodoxy in matters of opinion is not the same as establishing the belief that orthodoxy is morally wrong. It is clear that point 4 is just a special case of point 3. When the Supreme Court says that no official may establish an orthodoxy in matters of belief, this is not the same as establishing the belief that establishing an orthodoxy of belief in matters of opinion is wrong as an orthodox belief. Citizens and public officials are free to disagree with the Supreme Court about this matter. They are not legally entitled to act so as to establish an orthodoxy of belief, but they are free to believe that it is morally permissible or obligatory to do so.
    An Analogy Consider a mundane analogy. Suppose the antitrust laws prohibit certain forms of vertical integration. I believe that vertical integration is efficient and hence is morally permissible and should be legally permissible. The Supreme Court rules otherwise. By so doing, the Supreme Court does not establish an orthodoxy of belief on this matter. Nor would this conclusion be changed if the Supreme Court were to say that the prohibition on vertical integration were "a fixed star in the firmament of our antitrust laws."
    Smith's Slide from "Constitutional Orthodoxy" to "Orthodoxy of Belief" So when Smith argues "Barnette flatly declares these “pro-prescription” beliefs to be not orthodox-- not “right opinion”-- and it declares the contrary view to be the constitutional orthodoxy," his argument rests on a false equation between "constitutional orthodoxy"--the stare decisis effect of a Supreme Court opinion as a matter of law--and "prescribing an orthodoxy on matters of opinion"--the establishment of an official belief as a standard which citizens should adopt to guide their own beliefs.
    Explicit and Implicit Prescription Smith makes much of a distinction between explicit and implicit prescriptions--arguing that public officials explicitly endorse beliefs (e.g., in the Declaration of Independence and the Gettysburg Address). But this argument is still wide of the mark. What Smith needs is an argument that when public officials endorse beliefs in this way, they thereby prescribe an orthodoxy of belief in matters of opinion. This is very important. Saying "I believe that P and you should believe that P" is not the same as prescribing an orthodoxy that P. Let me repeat that. If I say, I believe that abortion is wrong and you should believe that abortion is wrong, I have not prescribed an orthodoxy of belief that abortion is wrong. Whenever we engage in assertoric speech acts, whenever we assert that P, we ask others to accept our assertions. But asserting is not the same as prescribing an orthodoxy. Prescribing an orthodoxy requires more. Prescription requires authority, and hence only those with authority can engage in the speech act of prescribing an orthodoxy. And the speech act of prescribing an orthodoxy necessarily involves more than assertion or persuasion; it involves the establishment of a doctrine. Here is an example that illustrates my point:
      If I were the President, and I were to say, "My fellow Americans, abortion is wrong, and as Americans, you have an obligation to believe this. A belief in the wrongness of abortion is an obligation of citizenship," I would have prescribed an orthodoxy.
    And that case can be distinguished from this one:
      If I were the President, and I were to say, "My fellow Americans, abortion is wrong, and that is why I support the right to life amendment," I would not have prescribed an orthodoxy. Why not? Because it would be completely consistent for me to continue, "This is my belief, and I hope you share it. But I recognize that you have the right to believe otherwise."
    The Two Distinct Prohibitions One final point. Recall that the Barnette passage reads: "[i]f there is any fixed star in our constitutional constellation, it is that no official, high or petty, can prescribe what shall be orthodox in politics, nationalism, religion, or other matters of opinion or force citizens to confess by word or act their faith therein." What about the second "or"? After that "or," we find the assertion that public officials may not "force citizens to confess by word or act their faith therein." Is this prohibition really different from the first prohibition? Well, yes. The first prohibition says that government may not prescribe orthodox belief. The second prohibition says that government may not force citizens to confess faith in the orthodox belief by word or act. And one more important point. Forcing citizens to act in accord with a law founded on a belief is not the same as forcing citizens to confess faith in the belief. I can believe that vertical integration is a good thing and comply with a law that prohibits it. In fact, I can say, "I am obeying this law, but I disagree with it."
    Conclusion I am far from convinced by Smith's critique of Barnette, but I haven't come near to doing justice to Smith's paper, which offers a rich and original palette of arguments. Moreover, I absolutely agree with what I take to be Smith's central normative thesis: government and public officials may explicitly or implicitly affirm or reject beliefs on matters of opinion. My point is very modest: Barnette says nothing to the contrary. Download it while it's hot.


 
Rational Commitment?: A Paper by Chapman and a bit on John Broome Bruce Chapman (Toronto) has posted a working paper entitled Rational Commitment and Legal Reason on SSRN. Here is the abstract:
    Economic theory has a problem with the idea of rational commitment. It might be rational for an agent to make such a commitment (e.g., a threat, a promise) because the commitment makes alternatives available that are preferred by the agent to those alternatives that are available if no such commitment is made. Unfortunately, however, the same preference maximizing rationality that is sufficient to motivate the idea of making the commitment initially is often also sufficient to undermine the commitment when it comes time to carry it out. For example, it might be wasteful or contrary to one's preferences, and nothing more, to carry out a threat (promise) if the threat (promise) has been unsuccessful (successful) in deterring (inducing) another's behaviour in the way that was planned. To the extent that this ex post quandary is predictable ex ante, the threat (promise) is not a credible one to make, either for the party threatened (promised) or for the threatening (promising) party. The result is that the benefits of being able to make such threats (promises) are lost.
    In this paper I argue that what economic theory needs to resolve the problem of rational commitment is an account of rationality that is so structured that it can simultaneously comprehend both the preference maximizing rationality of adopting a commitment and the more formal (less substantive, less preference-based) rationality of carrying out the commitment once it has been made. The difficulty, of course, is that a rationality that is too formal, or rigid, in its adherence to the planned commitment ceases to look rational at all. Indeed, it appears to look more like "blind commitment" or mere "mechanical habit," the sort of thing that takes the agent beyond the state of reflection or deliberation that is characteristic of rational behavior.
    However, I argue in my paper that the required rational structure is to be found in some recent work by John Broome and, more particularly, in the sharp conceptual difference that Broome makes between action in accordance with reasons and action according to the normative requirements of practical rationality. Broome shows that it is a common mistake to think that all of rational behavior is action according to (undefeated) reasons and that this ignores the more formal constraints that fall under the normative requirements of practical rationality. I argue that the economic theory of rational choice falls prey to this same confusion (where reasons, whatever their basis, are ultimately thought to give rise to a preference for doing x rather than y, and rational choice consists in following that preference), something that serves to undermine the possibility of keeping to commitments rationally made. However, if there is more to rationality than acting for reasons, as Broome suggests with his account of normative requirements, then it is possible to be rational even as one acts contrary to reason in some particular case. In my paper I show the importance of this argument for the economic problem of rational commitment in general, and for the problem of credible threats and promises more particularly.
    Lest this argument be thought of theoretical interest only, I also show that the more robust model of rational commitment that is made possible by the idea of normative requirements of practical rationality should be familiar to legal theorists. For it is an idea manifested constantly in common law decision-making, where defeasible legal rules, apparently simultaneously, both determine cases (as a matter of normative requirement) and are determined by them (as a matter of reason). Thus, the logical distinction between reasons and the normative requirements of practical rationality can be used both to prescribe a solution for a problem in rational choice, namely, the problem of rational commitment, and to provide understanding for what is rational in legal reason and the method of common law adjudication.
If you are interested in this kind of issue, you might check out John Broome's work. A good place to begin is his collection of essays, Ethics Out of Economics. Another recent Broome essay of interest is: Can there be a preference-based utilitarianism?
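For those who want to see the structure of the ex ante/ex post quandary laid bare, here is a minimal sketch--my own illustration, with made-up payoffs, not anything drawn from Chapman's paper--of a two-stage threat game solved by backward induction:

    # A minimal sketch of the commitment problem: a threatener announces
    # a punishment for defiance, and the target moves first. Payoffs are
    # illustrative assumptions, not taken from Chapman's article.

    # (threatener_payoff, target_payoff) for each outcome
    PAYOFFS = {
        ("comply", None):   (3, 1),  # target deterred; threat never tested
        ("defy", "punish"): (0, 0),  # threat carried out: costly for both
        ("defy", "relent"): (1, 3),  # threat abandoned: target profits
    }

    def backward_induction(committed):
        """Solve the game; if committed, the threatener must punish defiance."""
        # Stage 2: the threatener's best response to defiance
        if committed:
            response = "punish"
        else:
            response = max(["punish", "relent"],
                           key=lambda r: PAYOFFS[("defy", r)][0])
        # Stage 1: the target anticipates that response
        defy = PAYOFFS[("defy", response)][1]
        comply = PAYOFFS[("comply", None)][1]
        return ("defy" if defy > comply else "comply"), response

    print(backward_induction(committed=False))  # ('defy', 'relent'): threat not credible
    print(backward_induction(committed=True))   # ('comply', 'punish'): commitment deters

Without commitment, the preference-maximizing threatener relents ex post, the target foresees this, and the benefits of the threat are lost; a binding commitment changes the target's calculation. Whether Broome's normative requirements can do the work of that commitment without collapsing into "blind commitment" is, of course, exactly the question Chapman takes up.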


 
New Papers on the Net Here is the roundup:
    John McGinnis (Northwestern) posts The Political Economy of International Antitrust Harmonization. From the abstract:
      This essay argues against substantive international antitrust harmonization, by which I mean a single international regime binding on all nation states in at least some areas of antitrust. While multiple domestic antitrust regimes impose some costs, substantive harmonization likely imposes more substantial costs. An international lawmaking regime creates high agency costs because it is less subject to democratic control. It also imposes costs by discouraging beneficial change, as the regime once in place will be difficult to transform. Its long-run costs are particularly problematic in a world that is not static. As information costs, transportation costs, and trade restrictions decline, it may well be that the appropriate scope of the optimal antitrust regime will narrow as market processes become better correctives to market imperfections than government intervention. The lock-in costs of an international regime thus are particularly high in a world in which the pace of change is ever faster. In contrast, the essay suggests that an antidiscrimination regime for competition law located within the World Trade Organization may be appropriate. The WTO has an interest in precluding nations from discriminating in antitrust rules that affect market access in order to prevent the substitution of nontariff barriers for tariff barriers. An antidiscrimination regime has advantages over substantive harmonization, because formulating and applying antidiscrimination rules have fewer agency costs than formulating and applying substantive rules. Moreover, the antidiscrimination model permits continued innovation and change in substantive rules, thus facilitating continued debate about the optimal content of regulation. Finally, this more modest and practical objective - the elimination of foreign bias - would make an international competition agenda more amenable to being adopted in its most practical forum - the WTO.
    Kieran Setiya posts Against Internalism, forthcoming in Noûs. Here is an excerpt:
      It might be a good idea, at this point, to expunge the word "internalism" from the philosophy of practical reason. In general, the internalist proclaims a necessary connection between states of one kind (normative facts, propositions, beliefs, judgements) and those of another (actual motivation, possible motivation, rational motivation, desire, reasons). The externalist denies that this connection is necessary or "internal" and claims that it is merely contingent. But – as this general description suggests – the forms of internalism are bewilderingly various; and it is easy to confuse one with another. At the risk of such confusion, I want to frame the following discussion in terms of a kind of internalism about reasons to act.
    Martin Lipton (Wachtell, Lipton, Rosen & Katz) uploads The Millenium Bubble and Its Aftermath: Reforming Corporate America and Getting Back to Business. Here is the abstract:
      Bubbles are recurring and inexplicable phenomena. Periodically, the seemingly irrational conduct of investors results in speculative "mania" evidenced by skyrocketing stock prices and exaggerated enthusiasm. Virtually all market participants become absorbed in this mania, encouraging investor bravado and thereby contributing to the growth of the bubble until it bursts. The burst of the bubble causes widespread losses and often reveals mismanagement, misfeasance and malfeasance that contributed to the bubble. The losses and scandals erode public confidence and contribute to a serious downturn in economic activity. The burst of the "Millenium Bubble" of the late 1990s and early 2000s and the ensuing collapse of corporate giants such as Enron have, as in past collapses, resulted in a crisis in investor confidence and an economic downturn of such magnitude as to threaten deflation. The response has been congressional hearings, investigations, prosecutions and sweeping new regulation - to express our condemnation of the conduct that created the crisis, to punish corporate wrongdoers and to impose structural, procedural and behavioral requirements to reduce the likelihood of the crisis's recurrence. This paper, which is an update of a speech I gave to the Commercial Club of Chicago in November 2002, discusses the evolution and collapse of the Millenium Bubble and the regulatory regime that has emerged as a result of the post-Enron crisis. It then highlights two essential themes for long-term reform of corporate America: First, post-Enron regulations will be effective only if accompanied by fundamental changes in corporate culture. To bring about true reform, those who are regulated must share the goals embodied in the rules that they are obligated to follow. Second, in our efforts to restore confidence in our markets, we must guard against overregulation and overzealous prosecution, as these may stifle the recovery of our economy.
    Alfred Brophy (Alabama) posts Some Conceptual and Legal Problems in Reparations for Slavery, forthcoming in the NYU Annual Survey of American Law. From the abstract:
      Now that "reparations talk" is being taken seriously, it is time to address reparations plans more fully. After discussing why reparations talk has become popular, the paper turns to conceptual problems associated with claims for reparations for slavery: whether courts are the appropriate place to look and whether American law is even equipped to deal with such claims. It addresses three problems in particular: the use of unjust enrichment analogies in reparations talk; the constitutionality of race-based remedies, such as reparations; and the types of remedies for harms where the most directly affected people are no longer alive. A final section places reparations talk into the context of the cultural war over redistribution of property on the basis of race.
    Diane Ring (Harvard) posts One Nation Among Many: Policy Implications of Cross-Border Tax Arbitrage, forthcoming in the Boston College Law Review. From the abstract:
      Cross-border tax arbitrage arises where a transaction is subject to two or more countries' differing tax regimes. Conflicts between the tax rules create unique opportunities for the parties to engage in profitable tax planning - opportunities that would not be available if the transaction occurred entirely domestically in one of the countries. These opportunities have been a growing feature of the multi-jurisdictional business world and have raised issues concerning whether and how countries, such as the United States, should respond. This Article examines cross-border tax arbitrage in the context of both domestic tax policy and of other international tax issues, and considers potential responses. It proposes an analytic framework for cross-border tax arbitrage based on specific case studies. The Article concludes by proposing a balancing test for determining the appropriate treatment of specific instances of cross-border tax arbitrage.
    Jonathan Nash (Tulane) uploads Examining the Power of Federal Courts to Certify Questions of State Law, forthcoming in the Cornell Law Review. Here is the abstract:
      Attracted by the perception that certification accords with norms of federalism and comity, federal courts have applied certification without serious examination of its jurisdictional validity. Close examination of certification's jurisdictional underpinnings reveals that they are contradictory and flawed. When a federal court certifies questions of law to a state court, the procedural posture is either that the federal court temporarily relinquishes jurisdiction over the case to the state high court - the "unitary conception" of certification; or that the federal court abstains pending resolution of an independent, streamlined case by the state high court - the "binary conception" of certification. The unitary conception is problematic because it may require state courts to exercise improperly the federal judicial power. The binary conception is inconsistent with current Supreme Court precedent. Moreover, although this precedential inconsistency can be mitigated, the binary conception of certification remains inconsistent with the fundamental purpose of the federal diversity jurisdiction.


 
Hasen on "The Recall" The Governor of California, Gray Davis, is the subject of a recall campaign. My Loyola colleague Rick Hasen sorts out the complexities in this op/ed.


 
Workshop Today At Florida State's excellent summer series, Paul Caron, University of Cincinnati (visiting at FSU College of Law) does Cultivating an Active Learning Environment in the Classroom.


 
Watt Reviews Kekes On Notre Dame Philosophical Reviews, John Watt has a review of John Kekes, The Art of Life, published last year by Cornell University Press. Here is a taste:
    John Kekes attempts in this book to discuss one way in which life may be lived well. He does this by analyzing a specific type of good life, that which consists in practising the art of life to achieve personal excellence. The book falls into three sections. Part one consists of a discussion of various types of concrete good lives of personal excellence. Kekes gives five types of such a life: those of self-direction, decency, moral authority, depth and honour. Each life forms the basis of a chapter, with the main focus of each chapter being on a particular life, actual or fictional, which embodies the value in question. Thus, for the life of moral authority, Kekes examines the life of the sophron (wise man) in the Cypriot village of Alona and for the life of depth, he examines Oedipus as portrayed by Sophocles. Part two examines in four chapters the general conditions for practising the art of life and develops some of the ideas which emerged from the examination in part one. Part three, the final chapter, draws together the threads of the various arguments to provide 'one possible and reasonable approach to living a good life' (p10):
      The resulting view is that one way of making lives good is the successful practice of the art of life. This requires living and acting in conformity to a reasonable ideal of personal excellence and developing a well-integrated dominant attitude (p239).
    Given that Kekes is arguing only that the sort of life he presents is one possible good life, not that it is the only good life, it is on the goodness of that life that I am going to concentrate. In particular, I am going to suggest that the absence of an adequate conception of practical wisdom or phronesis and a consequent lack of engagement with the rationality of ends pursued undermines the goodness of this life.
Download it while it's hot.


Wednesday, June 18, 2003
 
Felten on the Layers Principle Ed Felten (Princeton, Computer Science) comments on The Layers Principle: Internet Architecture and the Law, a working paper authored by Minn Chung and me, in a post entitled Layers on his blog Freedom to Tinker. And see It's the Architecture, Stupid and It's the Architecture, Stupid--Part II on Copyfight (Donna Wentworth). And also this post on A Copyfighter's Musings. And also this post on Pressepapiers.net.


 
Helfer in the New York Times Check out my Loyola colleague Larry Helfer's editorial in the New York Times: Not Leading the World but Following It.


 
Hasen on the Confirmation Wars Rick Hasen and I have a running debate about the confirmation wars. I maintain we are in a downward spiral of politicization, while Rick argues that it is business as usual (albeit with roller-coaster-like fluctuations). Rick posts today with some evidence that supports the downward spiral hypothesis. (Thanks!)


 
Political Science Book Reviews Check out short reviews in the Washington Post by Kimberly Phillips-Fein. On the table:
    Theda Skocpol, Diminished Democracy: From Membership to Management in American Civic Life.
    Ian Shapiro, The Moral Foundations of Politics.
    Ann Florini, The Coming Democracy: New Rules for Running a New World.
    James MacGregor Burns, Transforming Leadership.


 
Friedman on Rawls and Nozick Over at the dissident, Jeffrey Friedman has an interesting piece titled Theory Gets a Reality Check: Philosophy, Economics, and Politics as if Verisimilitude Mattered. Take this with a grain of salt. Friedman makes a number of dubious assumptions, but it is still fun reading. Friedman makes a point that is often missed on the right--Rawls's basic theoretical framework can easily be given a heavily libertarian/pro-market interpretation.


 
Marston on the Role of Parties in the Confirmation Wars Brett Marston has an excellent post. Here is a taste:
    Regardless of what the judiciary does, many of the difficulties that are currently part of the nominations process have to do with an invention that was deemed destructive and undesirable by the framers -- and no, I'm not speaking about "judicial activism," but political parties. Sure, according to the Federalist, it's good to have the nominations process in one person so we know whom to punish if the judges are not virtuous (with or without impeachment). Aside from the fact that the Federalist may or may not be authoritative on every point (whatever "authoritative" means), I think that our political system presents problems that are not late eighteenth century problems by any stretch of the imagination. In the current context, you have competing "factions" in the Senate and the House, one of which is allied to the President, who proudly runs as the leader of the faction and who raises bucketloads of cash for them. You have a very narrow factional divide in both the country and the legislature. And you have judicial candidates who either ally themselves with positions that are attractive to one faction or another, or who are claimed by one faction or the other. The federal judiciary is a wonderful prize because it promises partisan entrenchment by one of the factions if it can only gain enough votes to prevail in the Senate and win the Presidency as well. And you have popularly elected Senators who probably do not "refine and enlarge the public views" as Federalist #10 thought they would; instead, they reflect the views of their constituency, partly because of changing norms about democratic responsiveness, partly because of the structure of the modern political campaign.


 
Claus on the Ninth Laurence Claus (San Diego) has posted Protecting Rights from Rights: Enumeration, Disparagement, and the Ninth Amendment on SSRN. Here is the abstract:
    The ninth amendment speaks to the problem of tension between federal constitutional rights and other legal rights. It provides that enumerating certain rights in the Constitution shall not be construed to have negative effects on other rights "retained by the people." The ninth amendment reaches beyond the founders' particular concern that the Constitution's enumeration of powers be taken seriously. By providing more generally that enumerating rights should not negatively affect other rights, the founders guarded against both the possible negative effect that they could foresee, and any others that they could not. When the fourteenth amendment introduced the Constitution's broad guarantees of individual right as limitations upon the states, another possible negative effect appeared. Enumerating rights in the Constitution might negatively affect other rights retained by the people if enumeration prompted courts to expand enumerated rights at the expense of other rights retained by the people. This article examines ways in which the ninth amendment's tension-resolving role may be conceived, traversing possible meanings of rights, of denial or disparagement, and of retention by the people.
Download it while it's hot.


 
New Papers on the Net Here is today's roundup:
    Frank Partnoy (San Diego) posts two papers:
      Strict Liability for Gatekeepers: A Reply to Professor Coffee. From the abstract:
        This article responds to a proposal by Professor John C. Coffee, Jr. for a modified form of strict liability for gatekeepers. Professor Coffee's proposal would convert gatekeepers into insurers, but cap their insurance obligations based on a multiple of the highest annual revenues the gatekeepers recently had received from their wrongdoing clients. My proposal, advanced in 2001, would allow gatekeepers to contract for a percentage of issuer damages, after settlement or judgment, subject to a legislatively-imposed floor. This article compares the proposals and concludes that a contractual system based on a percentage of the issuer's liability would be preferable to a regulatory system with caps based on a multiple of gatekeeper revenues.
        Both proposals mark a shift in the scholarship addressing the problem of gatekeeper liability. Until recently, scholarship on gatekeepers had focused on reputation - not regulation or civil liability - as the key limitation on gatekeeper behavior. Indeed, many scholars have argued that liability should not be imposed on gatekeepers in various contexts, and that reputation-related incentives alone would lead gatekeepers to screen against fraudulent transactions and improper disclosure in an optimal way, even in the absence of liability. From a theoretical perspective, this article is an attempt to move the literature away from a focus on reputation to an assessment of a potential reinsurance market for securities risks, where gatekeepers would behave more like insurers than reputational intermediaries.
      A Revisionist View of Enron and the Sudden Death of 'May'. From the abstract:
        This article makes two points about the academic and regulatory reaction to Enron's collapse. First, it argues that what seems to be emerging as the "conventional story" of Enron, involving alleged fraud related to Special Purpose Entities, is incorrect. Instead, this article claims that Enron is largely a story about derivatives - financial instruments such as options, futures, and other contracts whose value is linked to some underlying financial instrument or index. A close analysis of the facts shows that the most prominent SPE transactions were largely irrelevant to Enron's collapse, and that most of Enron's deals with SPEs were arguably legal, even though disclosure of those deals did not comport with economic reality. This first point about derivatives is important to the literature studying the relationship between finance and law: legal rules create incentives for parties to engage in economically equivalent unregulated transactions and financial innovation creates incentives for parties to increase risks (to increase expected return) outside the scope of legal rules requiring disclosure. Second, this article argues that the regulatory response to Enron was misguided, in part because it focused too much on the conventional story. Congress - in a little noticed provision of the Sarbanes-Oxley Act of 2002, Section 401(a) - directed the Securities and Exchange Commission to adopt new regulations requiring that periodic filings disclose off-balance sheet transactions that "may" have a material effect on a company's financial condition. The SEC originally proposed disclosure regulations based on this heightened "may" standard, but in its final release reverted to a lower "reasonably likely" standard. Surprisingly, the SEC promulgated these "reasonably likely" regulations even though Congress, in debating Sarbanes-Oxley, already had considered - and rejected - this approach. This second point about regulatory response is important to the literatures studying both mandatory disclosure and the relationship between Congress and administrative agencies: not only did interested private actors quickly capture the agency rule-making process, but they were able to persuade the agency to revive an interpretation the legislature already had considered and rejected.
    Ronald Gilson (Stanford) and Jeffrey Gordon (Columbia) post Controlling Controlling Shareholders. Here is the abstract:
      The rules governing controlling shareholders sit at the intersection of the two facets of the agency problem at the core of public corporations law. The first is the familiar principal-agent problem that arises from the separation of ownership and control. With only this facet in mind, a large shareholder may better police management than the standard panoply of market-oriented techniques. The second is the agency problem that arises between controlling and non-controlling shareholders, which produces the potential for private benefits of control. There is, however, a point of tangency between these facets. Because there are costs associated with holding a concentrated position and with exercising the monitoring function, some private benefits of control may be necessary to induce a party to play that role. Thus, from the point of view of public shareholders, the two facets of the agency problem present a tradeoff. The presence of a controlling shareholder reduces the managerial agency problem, but at the cost of the private benefits agency problem. Non-controlling shareholders will prefer the presence of a controlling shareholder so long as the benefits from reduction in managerial agency costs are greater than the costs of private benefits of control.
      The terms of this tradeoff are determined by the origami of judicial doctrines that describe the fiduciary obligations of a controlling shareholder. In this article, we examine the doctrinal limits on the private benefits of control from a particular orientation. A controlling shareholder may extract private benefits of control in one of three ways: by taking a disproportionate amount of the corporation's ongoing earnings; by freezing out the minority; or by selling control. Our thesis is that the limits on these three methods of extraction must be symmetrical because they are in substantial respects substitutes. We then consider a series of recent Delaware Chancery Court decisions that we argue point in inconsistent directions: on the one hand reducing the extent to which a controlling shareholder can extract private benefits through selling control, and on the other increasing the extent to which private benefits can be extracted through freezing out non-controlling shareholders. While judicial doctrine is too coarse a tool to specify the perfect level of private benefits, we believe these cases get it backwards - the potential for efficiency gains is greater from sale of control than from freeze outs, so that a shift that favors freeze outs as opposed to sales of control is a move in the wrong direction. In particular we argue that the Delaware law of freeze outs can be best reunified by giving "business judgment rule" protection to a transaction that is approved by a genuinely independent special committee that has the power to "say no" to a freeze out merger, while also preserving what amounts to a class-based appraisal remedy for transactions that proceed by freeze out tender offer without a special committee approval.
    Lynne Dallas (San Diego) posts two papers:
      Law and Socio-Economics in Legal Education, forthcoming in the Rutgers Law Review. From the abstract:
        The objective of this Article is to provide an introduction to law and socio-economics ("LSOC") in legal education. LSOC studies the interrelationship between law and economic/social processes. It is interdisciplinary, drawing on a number of disciplines, such as psychology, sociology, anthropology, political science, and economics. It encompasses diverse perspectives, which include a number of schools of economic thought, such as behavioral, neo-institutional, feminist, binary, traditional institutional, and post-Keynesian economics.
        This Article discusses aspects of neoclassical economics with which some heterodox approaches (within the LSOC umbrella) disagree. Particular attention is given to traditional institutional economics (hereinafter "institutional economics") which differs on many dimensions from neoclassical economics. Part I of this Article gives an overview of the LSOC approach mainly from an institutional economic perspective and compares this LSOC approach with neoclassical economics in terms of their views of markets and economics as a discipline. Part II is devoted to LSOC and human behavior and provides an overview of the different methodologies and perceptions of human behavior utilized in neoclassical economics and LSOC. Part III gives examples of two LSOC approaches drawing on institutional and feminist economics.
      Developments in U.S. Boards of Directors and the Multiple Roles of Corporate Boards, forthcoming in the San Diego Law Review. Here is the abstract:
        As corporations have increased in size and complexity, so have the demands on their operations, requiring more complex organizational structures. The demands on corporate boards of directors have also changed. These demands require boards to perform a multitude of functions that call for attention to the structure of boards and to their composition and practices. Insufficient changes have been made, however, to accommodate these multiple roles of corporate boards. This paper is about the multiple roles of corporate boards, which are defined as the manager-monitoring, relational and strategic management roles of boards. This paper argues that these roles more accurately describe board functions than the description of board functions as control, service and strategy. In addition, this paper argues that the multiple roles of boards often come into conflict with each other.
        A recent meta-analysis has found that both insider-dominated boards and outsider-dominated boards are associated with more successful corporations in terms of return on assets. In this paper, I explore a number of alternative explanations for these findings. I offer an interpretation that is based on the multiple conflicting roles of corporate boards. I argue that insider-dominated boards perform some roles more effectively than outsider-dominated boards, particularly strategic management. I argue that the advantages of the insider-dominated board in strategic management come from the advantages of group decision making by peers (fellow executives), which decreases corporate politics and the chance of a dominant CEO becoming convinced of his invincibility. In addition, the quality of decision making is enhanced in ambiguous and uncertain situations when diverse perspectives are shared, and this sharing is encouraged when persons are in similar social positions. Outsider-dominated boards perform other functions better than insider-dominated boards, such as manager-monitoring functions, to the extent that the outside directors are truly independent of management. This analysis suggests a number of reforms for corporate boards that will enable them to perform their multiple functions more effectively.
    Grant Morris (San Diego) posts Escaping the Asylum: When Freedom is a Crime, forthcoming in the San Diego Law Review. From the abstract:
      This article discusses the constitutionality and desirability of laws that criminalize escape by civilly committed mental patients from the hospitals or other treatment facilities in which they are confined. Although escape by sentence-serving convicts is a crime in many states, regardless of whether they escape from prison or from another place of confinement or custody, escape by "regular" civilly committed mental patients is not. Nevertheless, criminalization of escape is becoming increasingly popular for "special" civilly committed patients, such as individuals who have been acquitted of crime by reason of insanity and sexually violent predators. Pivotal Supreme Court decisions involving specially categorized mental patients are analyzed to assess whether criminalization of escape by these patients is constitutional. Of particular interest is the equal protection argument: If regular civilly committed mental patients are not prosecuted and punished for escape, can special civilly committed patients be prosecuted and punished?
      The article also discusses alternatives to the criminalization of escape that would assure the public's safety while avoiding constitutional challenges. Criminalization of escape by mental patients may be an unnecessary, and unwise, policy judgment if the risk of escape can be minimized through enhanced security measures to prevent escape, treatment opportunities that offer patients the prospect of release, and clarification of authority to apprehend and return patients if escape does occur. Nevertheless, the article concludes by questioning whether public pressure to confine, and if possible, punish specially civilly committed patients will preclude use of these rational alternatives to criminalization of patient escape.
    Maxwell Stearns (George Mason) posts Appellate Courts Inside and Out, forthcoming in the Michigan Law Review. From the abstract:
      In Inside Appellate Courts: The Impact of Court Organization on Judicial Decision Making in The United States Courts of Appeals (Michigan 2002), Jonathan Matthew Cohen, a sociologist and practicing attorney, asks a question that has received scant attention in the academic commentary on appellate judging: If we accept the dominant conception of appellate court judging as a process of atomistic contemplation, how do federal circuit court judges continue to maintain high quality opinions in the face of pervasively growing judicial dockets? Cohen advances the provocative thesis that increasing workloads have not prevented appellate judges from producing high quality outputs, but rather, that the dominant image of appellate judging as an isolated contemplative task is conceptually flawed. A better approach, Cohen argues, is to compare the task of appellate court judging to production within a multi-divisional private firm. While Cohen recognizes the inherent limits of his analogy, and in particular, that unlike private firms, circuit courts lack a central coordinating authority, he nonetheless contends that it is more fruitful to consider the judges in the manner of workers in a complex organization than as autonomous actors reflecting in isolation on the legal issues presented on appeal.
      In this review essay, Stearns considers three complementary methodologies for analyzing appellate courts that yield insights of particular interest to lawyers and legal scholars. Such questions include how appellate courts transform preferences into doctrine; the nature of cases that are likely susceptible to further appellate process through en banc, mini-en banc, or Supreme Court review; and how best to evaluate appellate court opinions. While organizational theory provides a useful starting point, Stearns contends that insights drawn from other methodologies, including economics (demonstrating how decentralized informational processes can provide more meaningful data), probability analysis (demonstrating the quality of data drawn from subsets of a larger group), and social choice (demonstrating the nature and limits of group decision making), might prove more fruitful in evaluating at least some of these questions. Stearns concludes that a comprehensive understanding of federal appellate judging requires not only an understanding of the circuit courts' internal organizational structure, but also an analysis of the edifice of circuit court decision making from inside and out.


 
White at Oxford on Self-Development Today at Oxford's Research Seminar in Political Theory, Stuart White (Oxford) presents Self-Development as a Political Ethic.


Tuesday, June 17, 2003
 
Self Help for Copyright Owners & the Global State of Nature Courtesy of the ever-interesting Chris Bertram of Junius, I was intrigued by Michael LaBossiere's discussion of the Berman bill, authorizing self-help by copyright owners against those who distribute pirated electronic copies of copyrighted works. Here is an excerpt:
    Locke notes that "the lack of a judge with authority puts all men in the state of nature." In this state of nature people are permitted to judge their own cases and seek retribution against those who have done them wrong. This is, of course, because they have no higher authority to which they can appeal. Locke does not, of course, endorse uncontrolled vengeance: he holds that retribution must be proportional to damage suffered and within the limits of reason and conscience. Given that computer networks span the globe and the obvious lack of a world government (or even a truly effective international legal system), it seems evident that copyright holders and those who violate those copyrights will often be in the state of nature. As an example, US copyright holders might have their copyrights violated by people living in countries that do not recognise American legal authority or even by people who live in areas of the world that lack a centralised authority. In such cases, it would be all but impossible to bring about effective legal action against the offenders. However, being connected to the internet, the offenders are accessible to hacking. Such attacks would be practical and, more important from the philosopher’s standpoint, ethical as well, provided that the attack was limited to rendering the stolen property useless. After all, the damage would be proportional to the harm and it is a well established moral principle that a thief is not wronged when the rightful owner reclaims her property.
My my my. One hardly knows where to begin. First, but perhaps dullest, the lack of a world government hardly puts us in a state of nature. There is a highly effective international treaty regime for the protection of intellectual property, which, combined with other remedies, such as the doctrine of recognition of judgments (or comity), creates a variety of legal remedies against electronic pirates. Second, and more interesting, is the question whether it really is possible to make out a Lockean case for intellectual property. Third, and even more interesting, is the question whether digital piracy in cyberspace is analogous to piracy on the high seas--creating a case for universal jurisdiction--that is, for the legal authority of any nation to authorize action against cyberpirates. I hope this topic gets some traction in the blogosphere. I'd love to see what others think! Perhaps I'll have more to say on this soon.


 
Bork on the Alien Torts Claims Act Courtesy of How Appealing, Robert Bork has an op/ed on the Alien Torts Claims Act in the Wall Street Journal. Here is a taste:
    U.S. courts are deciding cases by citizens of Paraguay against another citizen of Paraguay for acts in Paraguay; by citizens of Bosnia-Herzegovina against the leader of the Bosnian Serbs; and claims against the estate of a former Philippine president, although all plaintiffs and defendants were Philippine nationals and the alleged violations occurred entirely in the Philippines. Major American corporations, such as Texaco and Unocal, are being sued when they do business abroad, for the human-rights violations of host governments. The Ninth Circuit, sitting en banc, has just upheld suits filed here against foreign nationals who assisted our government in the seizure of criminals abroad. We may expect soon suits against our allies for capturing and extraditing alleged terrorists.
    How did we get to this state of affairs? Many American courts claim authority from the little-known Alien Tort Act (ATA): "The district courts shall have original jurisdiction of any civil action by an alien for tort only, committed in violation of the law of nations or a treaty of the United States." Early in my time on the federal appellate bench, I sat on a three-judge panel that heard Tel-Oren v. Libyan Arab Republic (1984), involving Israelis' claims against the P.L.O., Libya and others for a murderous attack launched in Israel. It seemed preposterous that we should decide the legality of an assault by foreigners against foreigners on foreign soil. My first thought was that the statute must be a modern excrescence. To my chagrin, it turned out to have been part of the first Judiciary Act of 1789.


 
New Papers on the Net Here is today's roundup:
    Margaret Blair (Georgetown) posts Directors' Duties in a Post-Enron World: Why Language Matters, forthcoming in the Wake Forest Law Review. Here is the abstract:
      This essay observes that, in the face of corporate scandals of the last few years, a number of prominent advocates for shareholder primacy have retreated to the position that directors and officers should attempt to maximize long run share value performance, rather than short term value. But the mantra of share value maximization has no distinctive meaning and policy implications if it is not interpreted to mean maximization of short term value. This is because the actions required to maximize share value in the long run are indistinguishable in practice from actions taken in pursuit of other more broadly-stated goals such as the maximization of wealth for all corporate stakeholders. Moreover, once its advocates accept the goal of long run share value maximization, then they should consider discarding the language of shareholder primacy, and the associated emphasis on high-powered, equity-based incentive systems. Such language is unnecessarily divisive and provocative. It draws attention to conflicting interests in corporate enterprises and announces that, when faced with conflicts, directors should choose actions that benefit shareholders even if those actions harm other stakeholders. In so doing, it tends to reduce cooperation, send signals that other participants and other values are of secondary importance, and undermine the ethical climate inside corporations. This essay proposes that, by contrast, the language of "team production" supports cooperative behavior, sharing of burdens and rewards, and win-win solutions.
    Jonathan Lipson (Baltimore) uploads Remote Control: Revised Article 9 and the Negotiability of Information, forthcoming in the Ohio State Law Journal. Here is the abstract:
      This article considers the effect that rules on the continuity of security interests and proceeds under Article 9 of the Uniform Commercial Code will have on the negotiability (i.e., free alienability) of information assets, such as data and biotechnologies. The continuity of interest rules provide that a security interest will presumptively continue in collateral, even after disposition by the debtor. The proceeds rules provide that a security interest will automatically attach to, among other things, "rights arising out of" collateral, and to whatever is received upon the disposition of the collateral. Information assets, such as data and biotechnology assets, are often highly mobile, mutable and replicable. Thus, security interests in these assets will arise readily and will endure, even as these assets may travel through the chain of commerce, into the hands of good faith purchasers remote from the debtor and secured party that created the interest in the first place. This article calls the power to assert a security interest in assets at such a remove “remote control.” The article then considers arguments against remote secured party control under these circumstances. Among other things, remote secured party control presents challenges to historic understandings of the treatment of bona fide purchasers, and to doctrinal and theoretical approaches to property. This article concludes by suggesting that courts can mitigate the problem of remote control by relaxing the definition of property in this context. If data and biotechnology assets are property at all—a contested claim—it is not clear that they should be treated as such for the benefit of remote, prior secured parties in disputes with later bona fide purchasers.
    W. Viscusi (Harvard) posts The Value of Life: Estimates with Risks by Occupation and Industry. From the abstract:
      The worker fatality risk variable constructed for this paper uses BLS data on total worker deaths by both occupation and industry over the 1992-1997 period rather than death risks by occupation or industry alone, as in past studies. The subsequent estimates using 1997 CPS data indicate a value of life of $4.7 million for the full sample, $7.0 million for blue-collar males, and $8.5 million for blue-collar females. Unlike previous estimates, these values account for the influence of clustering of the job risk variable and compensating differentials for both workers' compensation and nonfatal job risks.
    Samuel Thompson (UCLA) and Robert Clary post Coming in from the 'Cold': The Case for ESD Codification, forthcoming in Tax Notes. From the abstract:
      This article builds on and elaborates on portions of an article the authors are currently publishing in the University of Southern California's annual publication, Major Tax Planning, entitled "Staying the Course: A Look at the Response to Corporate Inversions and Tax Shelters and the Need for More Direct Action."
Other papers of interest:


 
Holmes on Antidiscrimination and Equality at Oxford At Oxford's Jurisprudence Discussion Group, Elisa Holmes presents Anti-discrimination Rights without Equality.


Monday, June 16, 2003
 
Solum and Chung on Internet Architecture and the Law Minn Chung and I have posted The Layers Principle: Internet Architecture and the Law on SSRN. Here is the abstract:
    This essay addresses the fundamental questions of Internet governance: whether and how the architecture of the Internet should affect the shape and content of legal regulation of the global network of networks. Our answer to these questions is based on the concept of layers, the fundamental architectural feature of the Internet. Our thesis is that legal regulation of the Internet should be governed by the layers principle: the law should respect the integrity of layered Internet architecture. This principle has two corollaries. The first corollary is the principle of layer separation: Internet regulation should not violate or compromise the separation between layers designed into the basic architecture of the Internet. The second corollary is the principle of minimizing layer crossing, i.e., minimize the distance between the layer at which the law aims to produce an effect and the layer directly affected by legal regulation. The essay argues that layers analysis provides a more robust conceptual framework for evaluating Internet regulations than does the end-to-end principle.
    The layers principle is supported by two fundamental ideas. The first idea is transparency: the fact that layer-violating regulations damage transparency, combined with the fact that Internet transparency lowers the cost of innovation, provides compelling support for the principle of layer separation: public Internet regulators should not violate or compromise the separation between layers designed into the basic architecture of the Internet. The second idea is fit: the fact that layer-crossing regulations result in an inherent mismatch between the ends such regulations seek to promote and the means employed implies that layer-crossing regulations suffer from problems of overbreadth and underinclusion; avoidance of these problems requires Internet regulators to minimize the distance between the layer at which the law aims to produce an effect and the layer directly targeted by legal regulation.
    Finally, the essay provides a detailed discussion of several real or hypothetical layer-violating or layer-crossing regulations, including: (1) The Serbian internet interdiction myth, (2) Myanmar's cut-the-wire policy, (3) China's great firewall, (4) the French Yahoo case, (5) cyber-terrorism, (6) Pennsylvania's IP address-blocking child-pornography statute, (7) port blocking and peer-to-peer file sharing, and (8) the regulation of streaming video at the IP layer.
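To make the fit problem concrete, here is a toy sketch of my own (it is not an excerpt from the paper, and the hostnames and addresses are fabricated documentation examples). A regulation like Pennsylvania's targets content at the application layer but operates at the IP layer, where many innocent sites may share a single address:

    # Toy model of a layer-crossing regulation (fabricated names and
    # addresses). The target is one offending site; the remedy operates
    # at the IP layer, where virtual hosting puts many sites on one address.
    from dataclasses import dataclass

    @dataclass
    class Site:
        hostname: str
        ip: str           # many sites can share one IP address
        offending: bool

    sites = [
        Site("offender.example", "203.0.113.7", True),
        Site("library.example", "203.0.113.7", False),
        Site("school.example", "203.0.113.7", False),
        Site("news.example", "198.51.100.2", False),
    ]

    def block_at_content_layer(sites):
        """Targeted remedy: suppress only the offending hostname."""
        return [s.hostname for s in sites if s.offending]

    def block_at_ip_layer(sites):
        """Layer-crossing remedy: blackhole every address hosting an offender."""
        bad_ips = {s.ip for s in sites if s.offending}
        return [s.hostname for s in sites if s.ip in bad_ips]

    print(block_at_content_layer(sites))  # ['offender.example']
    print(block_at_ip_layer(sites))       # three hostnames blocked: overbroad

And the same mismatch produces underinclusion: an offender who moves to a fresh IP address slips past the block entirely, while the innocent sites at the old address stay dark.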
Comments are welcome.


 
Filibuster Debate Check out the latest posts by the Curmudgeonly Clerk and Will Baude on the filibuster of judicial nominees.


 
What's Left? On the agenda of the Supreme Court, see Howard Bashman for the lowdown.


 
Hasen on Beaumont Rick Hasen explains why the Supreme Court's opinion today in Federal Election Commission v. Beaumont bodes well for the soft money provisions of McCain-Feingold on his Election Law Blog.


 
New Papers on the Net Here is the roundup:
    Pekka Väyrynen (UC Davis, Philosophy) posts A Theory of Hedged Moral Principles. Here is an excerpt:
      Classical moral principles are no less in trouble than classical laws of nature. So how does the general knowledge we have in science and ethics hook up with the notion of explanatory power and other notions that have traditionally been associated with law-likeness? This paper develops a theory of exception-tolerating and yet robustly explanatory moral principles. I first develop a semantics for hedged moral principles that shows how exception-laden generalizations can have determinate and informative truth conditions without being backed by exceptionless generalizations. I then turn to discuss what features hedged moral principles must possess if they are to play an explanatory role regarding the moral status of actions, and how exactly they figure in such explanations. The upshot of this paper is that we should find nothing paradoxical about the idea of defeasible and yet explanatory moral generalizations, and that such generalizations give us a recognizably principled morality, even if not in the classical sense.
    Ethan de Mesquita (Washington University, St. Louis, Political Science) and Matthew Stephenson (Harvard, Government) upload Legal Institutions and the Structure of Informal Networks. From the abstract:
      The relationship between government-provided contract enforcement and informal trade networks raises important sociological, political, and economic questions. When economic activity is embedded in complex social structures, what are the implications of governmental contract enforcement for the scope and nature of economic relations? What determines whether individuals rely on formal legal institutions or informal networks to sustain trade relationships? Do effective legal institutions erode informal networks? To address these questions, we develop a model in which a trade-off exists between size and sustainability of networks. By adding the possibility of enforceable contracts, we provide a theoretical explanation for the coexistence of legal contract enforcement and an informal economy. We find that legal enforcement has little effect on networks unless the cost of law drops below a certain threshold, at which point small decreases in the cost of law have dramatic effects on network size and the frequency of use of the legal system.
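The threshold result invites a back-of-the-envelope illustration. The following sketch is my own gloss with invented functional forms, not the authors' model: informal networks deter cheating by the threat of exclusion, so courts begin to erode a network only once the law is cheap enough to be a tolerable fallback for an excluded cheater.

    # Toy comparative static (invented functional forms, not the authors'
    # model). A cheater gains B0 per trading partner, loses the network,
    # and falls back on legal enforcement at per-trade cost c (autarky if
    # the law is dearer than the gain from trade).
    DELTA = 0.9    # discount factor
    G = 1.0        # per-period gain from trade
    B0 = 0.05      # one-shot cheating gain per trading partner
    N_CAP = 100    # the most partners informal monitoring can cover

    def max_network_size(c: float) -> int:
        """Honesty forever pays G/(1-DELTA); cheating once pays
        G + B0*n + DELTA*max(0, G - c)/(1 - DELTA). Honesty wins
        iff B0*n <= DELTA*min(G, c)/(1 - DELTA)."""
        n = int(DELTA * min(G, c) / ((1 - DELTA) * B0))
        return min(n, N_CAP)

    for c in [2.0, 1.0, 0.7, 0.56, 0.5, 0.3, 0.1]:
        print(f"cost of law {c:4.2f} -> max network size {max_network_size(c):3d}")
    # The monitoring cap binds until c falls to roughly 0.56; below that
    # threshold, each further drop in the cost of law shrinks the network.

Even this crude setup reproduces the paper's qualitative claim: expensive courts leave the network untouched, while cheap courts undercut the exclusion threat on which informal enforcement depends.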


 
Balkin on Ideological Judicial Appointments Responding to Matthew Yglesias, Jack Balkin defends ideological judicial appointments in a post entitled Judicial Appointments and Good Faith: Some Notes About Constitutional Change. Here is a snippet:
    [I]s ideological diversity on the federal bench a good thing? Well, often it is, especially if you are in the minority. But I'm not at all sure that Lyndon Johnson should have appointed a racial conservative to fill Tom Clark's seat in 1967 instead of Thurgood Marshall because the Warren Court was getting too liberal, and Marshall's appointment would push it even further to the left. Nor am I sure that Franklin Roosevelt should have started to appoint some Lochner era conservatives in 1940 because there were just too many New Dealers on the Supreme Court. Rather, ideological diversity on the federal bench is produced through the give and take of regular elections, in which the parties take turns in the White House, and through political pressure by opposition politicians on the President. Ideological diversity on the federal bench, in short, is a product of democratic elections and the separation of powers. If the country wants to keep returning conservative Republicans to office, we are going to get increasingly conservative judges and Justices over time, and the content of American constitutional law will change accordingly.
Balkin's argument is symptomatic of the increasing politicization of the law, of which Balkin is an eloquent defender. My case for a different view is made in posts entitled A Neoformalist Manifesto and Fear and Loathing in New Haven (which have links to more by Balkin).


 
Public Reason
    Introduction The idea of public reason, prominent in Rawls's late work, has interesting connections with a variety of topics of concern to legal theorists. Rawls's idea began with what he called the fact of pluralism, the observation that, since the Wars of Religion, western societies have been characterized by deep and persistent disagreements about matters moral and religious. Rawls argued that in societies that respected the basic liberties, such disagreements were likely to persist. Nonetheless, Rawls maintained, even in pluralist societies, there is a shared public reason, a set of political values (such as the fundamental political equality of citizens) and facts (such as the uncontroversial results of science) that might be used as the basis for a common justification for a society's fundamental political commitments (the essentials of the constitution and the principles of justice). Two recent papers shed light on the debates over public reason.
    Lipkin on the Public Square Robert Justin Lipkin has posted a paper entitled Reconstructing the Public Square to SSRN. Here is the abstract:
      This Article offers a rapprochement between two warring factions over the role of religion in publicly justifying laws. One faction, embracing "inclusionism," is skeptical of any constraints on democratic debate and adopts an eclectic attitude welcoming religious arguments in the public square along with secular arguments. Yet, understood in this fashion, inclusionism is committed to the propriety of those religious arguments, which may in fact create a "Tower of Babel" environment in which American citizens are unable to genuinely communicate with one another. For this reason, and because religious arguments are sometimes divisive, the opposing faction embraces "exclusionism" which seeks to limit the public square to secular arguments only. Yet, restricting the freedom of religion and speech rights of religious citizens is unfair, and more dangerously, appears incompatible with both the purposes of the First Amendment and democracy. How then can the dangers associated with both inclusionism and exclusionism be avoided?
      This Article attempts to formulate a principled compromise between these two factions by replacing the distinction between the religious and the secular with the novel distinction between the dedicated and the deliberative. Dedicated arguments and reasons - including both religious and secular ones - insist on a canonical and fixed language and form of reasoning for discussing public policy. By contrast deliberative arguments and reasons - including both religious and secular ones - insist on a tentative, pragmatic, fallibilistic language and form of reasoning with which to conduct democratic debate. This Article then suggests that democracy in the public square is best understood as committed to the thesis that dedicated arguments be "reconstructed" into deliberative ones, and that all deliberative arguments - whether religious or secular - play an equal role in justifying coercive laws in democratic societies. The Article then evaluates an important objection to the Reconstruction Thesis, which helps to better illuminate the Thesis's rationale. The Article concludes with a conception of complex democracy, which is presupposed by the Reconstruction Thesis and enables us to see its plausibility and attractiveness.
    Review of Weithman On the Notre Dame Philosophical Reviews, Lucas Swaine has a review of Paul Weithman's Religion and the Obligations of Citizenship, recently published by Cambridge University Press. Here is an excerpt:
      Weithman emphasizes the role that religious institutions play in on-going political discussions. He proposes that religious institutions in America make “valuable contributions to democracy” (36, 91). Empirical evidence shows how churches provide opportunities to participate and engage in civic argument, and means through which people can “[achieve] the realization of citizenship” (48, 69, 85, 91). Not only did churches help to rid America of slavery: they continue to encourage participation among the poor and underprivileged, they contribute to civic argument and debate over important political questions, and churches prompt community involvement with opportunities for people to volunteer in various worthy capacities (4, 40-49, 90, 91). What is more, Weithman argues, the Biblical language employed by Catholic bishops or Martin Luther King, Jr. have had real “moral pay-off[s]”; such talk of sin and seemingly offensive admonishments shake people from complacency (53-54, 81). Catholic Church officials break the conservative stereotype: they may lobby against “partial-birth abortions,” embryo research, and physician-assisted suicide, but they also speak up for refugees, immigrants, and the poor (58, 60, 64). Nonetheless, debates on full participation and citizenship “should be settled by informed political debate”; and it is “important” that churches do not contest democratic institutions themselves (54, 62). The good news, Weithman suggests, is that American churches acknowledge the legitimacy of U.S. institutions. They teach reverence for American history, an appreciation of religious liberty among other democratic values, and that the country is worth dying for (63-64, 91).
    Happy downloading!


 
Maclean on Procedural Virtue Today at Oxford's Moral Philosophy Seminar, Douglas Maclean (North Carolina at Chapel Hill, Philosophy) presents Procedural Virtue.


Sunday, June 15, 2003
 
Review of Tushnet Mark Kessler (Bates) reviews Mark Tushnet's The New Constitutional Order. Here is a taste:
    The previous constitutional order, beginning with FDR and moving through LBJ’s Great Society was characterized by interest group bargaining and premised on the principle that individuals could petition the national government and federal courts to solve large-scale social problems. This order supplanted a regime founded on the notion that government played only the most minimal role in solving social and economic problems. The new order, according to Tushnet, began taking shape with Ronald Reagan’s presidency, received greater definition in the wake of the 1994 congressional elections, and “consolidated” during the Clinton years. Indeed, the major characteristics of this new order are encapsulated by Bill Clinton’s assessment during his 1996 State of the Union address that “the era of big government is over.” Overall, the new order is characterized by divided government, ideologically organized political parties, and a severely chastened constitutional ambition. Arguing against those who interpret current trends in constitutional interpretation as indicative of a return to the pre-New Deal constitutional order, Tushnet suggests that the Supreme Court has maintained and will likely continue to support, given current institutional arrangements across American national government, major doctrinal developments from the New Deal/Great Society constitutional order. But, Tushnet cautions, do not expect the current Court to go further or extend any of these previous doctrines. “This far and no further” seems to be the underlying principle. In general, the new order’s central guiding principle is not, as current critics suggest, that government can not solve social problems, but rather that it can not solve any additional problems.


 
Review of Carl Schmitt David Gordon has a review of Carl Schmitt's The Nomos of the Earth in the International Law of the Jus Publicum Europaeum. Courtesy of Political Theory Daily Review. Here is a taste:
    Schmitt devotes most of his attention in the book to the order of European nations that prevailed from the sixteenth to the twentieth centuries. Here once more the key to all mysteries is to avoid abstract rules that mandate perpetual war to enforce them. "The formal reference point for determining just war no longer was the church's authority in international law, but rather the equal sovereignty of states. . . . Any war between states, between equal sovereigns, was legitimate. Given this juridical formalization, a rationalization and humanization—a bracketing—of war was achieved for 200 years" (p. 121). Even a reader sympathetic to Schmitt will likely consider his claim extreme. Is it not paradoxical to think that war can be limited by throwing out altogether the notion of just war? Schmitt is not deterred by the paradox and relentlessly presses his cases against "theological" attempts to establish rules for just war. "A true jurist of this transitional period [to the order of modern Europe], Gentili, formulated the battle cry. . . . Silete theologi in munere alieno! . . . [figuratively: Theologians should mind their own business!]" (p. 121).


 
More on Mediocrity My post on Friday (The Aretaic Turn) contributes to a minor blogospheric eruption prompted by Matthew Yglesias's suggestion that we might prefer mediocre judges to brilliant ones and Jack Balkin's reply to Yglesias. This is fun stuff and there is more from Juan Nonvolokh and Yglesias. Update: And more from Law Muse and the Epistemopolitan and Green Gourd and Unlearned Hand.


Saturday, June 14, 2003
 
Howard Chang on Immigration and Distributive Justice Howard Chang (Pennsylvania) posts The Immigration Paradox: Poverty, Distributive Justice, and Liberal Egalitarianism, forthcoming in the DePaul Law Review. From the abstract:
    The immigration of unskilled workers poses a fundamental problem for liberals. While from the perspective of the economic welfare of natives, the optimal policy would be to admit these aliens as guest workers, this policy would violate liberal egalitarian ideals. These ideals would treat these resident workers as equals, entitled to access to citizenship and to the full set of public benefits provided to citizens. If the welfare of all incumbent residents determines admissions policies, however, and we anticipate the fiscal burden that the immigration of the poor would impose, then our welfare criterion would preclude the admission of unskilled workers in the first place. Thus, our commitment to treat these workers as equals once admitted would cut against their admission and make them worse off than they would be if we agreed never to treat them as equals. A liberal can avoid this anomaly by adopting a cosmopolitan perspective that extends equal concern to all individuals, including aliens, which suggests liberal immigration policies for unskilled workers. The problem with this escape from the "immigration paradox" is the failure of most citizens to adopt such a cosmopolitan perspective. As long as citizens are reluctant to bear the fiscal burdens that cosmopolitan liberalism would impose, constraints of political feasibility may imply that guest-worker programs are the best policies that cosmopolitan liberals can obtain with respect to many unskilled alien workers.


 
New Papers on the Net Here is the roundup:
    Theodore Eisenberg (Cornell) and Margo Schlanger (Harvard) upload The Reliability of the Administrative Office of the U.S. Courts Database: An Empirical Analysis, forthcoming in the Notre Dame Law Review. From the abstract (a toy illustration of its mean-versus-median point appears after this roundup):
      Researchers have long used federal court data assembled by the Administrative Office of the U.S. Courts (AO) and the Federal Judicial Center (FJC). The data include information about every case filed in federal district court and every appeal filed in the twelve non-specialized federal appellate courts. The varied uses of the AO database have led to its being called "by far the most prominent" database used by legal researchers for statistical analysis of case outcomes. Like many large data sets, the AO data are not completely accurate. Some reports exist relating to the AO data's reliability, but no systematic study of the AO's non-bankruptcy data has been published. In the course of a substantive study of federal litigation brought by inmates, one of us began to investigate the nature and rate of errors, exploiting a technological innovation in federal court records: the availability of docket sheets over the Internet via the federal judiciary's Public Access to Court Electronic Records project (PACER). This Article follows a similar method to begin more comprehensively the process of assessing the AO data's reliability. Our study looks at two large categories of cases, torts and inmate civil rights, and separates two aspects of case outcomes: which party obtained judgment and the amount of the judgment when plaintiffs prevail. With respect to the coding for the party obtaining judgment, we find that the AO data are very accurate when they report a judgment for plaintiff or defendant, except in cases in which judgment is reported for plaintiff but damages are reported as zero. As to this anomalous category (which is far more significant in the inmate sample than in the torts sample), defendants are frequently the actual victors in the inmate cases. In addition, when the data report a judgment for "both" parties (a characterization that is ambiguous even as a matter of theory), the actual victor is nearly always the plaintiff. Because such cases are quite infrequent, this conclusion is premised on relatively few observations and merits further testing. With respect to award amounts, we find that the unmodified AO data are more error prone, but that the data remain usable for many research purposes. While they systematically overestimate the mean award, the data apparently yield a more accurate estimate as to median awards. Researchers and policymakers interested in more precise estimates of mean and median awards have two reasonably efficient options available. First, as described below, they can exclude two easily-identified classes of awards with evidently suspect values entered in the AO data. Second, using PACER or courthouse records, they can ascertain the true award only in the suspect cases without having to research the mass of cases. Either technique provides reasonable estimates of the median award. The second technique may provide a reasonable estimate of the mean award, at least for some case categories.
    David Skeel (Pennsylvania) posts Creditors' Ball: The 'New' New Corporate Governance in Chapter 11, forthcoming in the University of Pennsylvania Law Review. From the abstract:
      In the 1980s and early 1990s, many observers believed that the American corporate bankruptcy laws were desperately inefficient. The managers of the debtor stayed in control as "debtor in possession" after filing for bankruptcy, and they had the exclusive right to propose a reorganization plan for at least the first four months of the case, and often far longer. The result was lengthy cases, deteriorating value and numerous academic proposals to replace Chapter 11 with an alternative regime. In the early years of the new millennium, bankruptcy could not look more different. Cases proceed much more quickly, and they are much more likely to result in auctions or other sales of assets than in previous decades. This transformation is due in part to a change in the major corporations that file for bankruptcy. Rather than industrial, bricks-and-mortar firms, many of the new debtors are knowledge-based firms with transient assets. Much more important, however, has been the adjustments creditors have made in an effort to reassert control in bankruptcy. In this Article, I focus on the two most important contractual developments: lenders' use of debtor-in-possession financing agreements as a governance lever; and the so-called pay-to-stay arrangements which give key managers bonuses for meeting specified performance goals (such as quick emergence from bankruptcy or the sale of important assets). Both of these developments can be seen as adjustments by creditors to counteract bankruptcy's interference with the shift in control rights that would ordinarily occur at the time of financial distress. As I have discussed elsewhere, chapter 11 functioned somewhat like an antitakeover device in the 1980s. Creditors have now neutralized its effects. Of the two new contractual approaches, pay-to-stay agreements have proven much more controversial, prompting heated complaints about excessive managerial pay in cases like Enron, Polaroid and Kmart. The controversy is similar in obvious respects to the recent complaints about performance-based pay outside of bankruptcy. I argue that pay-to-stay agreements are more defensible, but also argue that bankruptcy compensation should be constrained in several ways. Although the use of DIP financing agreements to shape bankruptcy cases has not received nearly so much attention, the effect is even more profound. I argue that the use of these agreements to control Chapter 11 cases is, on the whole, a beneficial development. But I also argue that some of their terms - such as provisions protecting pre-petition loans by the DIP lenders and the use of DIP agreements to lock up control - should be subject to careful judicial scrutiny.
    Alma Cohen (National Bureau of Economic Research) offers Profits and Market Power in Repeat Contracting: Evidence from the Insurance Market. From the abstract:
      This paper tests models in which repeat contracting enables sellers to obtain market power and higher profits with respect to repeat customers. I use for this purpose a unique database from the insurance market that enables direct measurement of sellers' profits. The evidence, I find, is consistent with sellers making higher profits on repeat customers. The evidence is also consistent with sellers gaining market power over repeat customers by learning information about these customers that competing sellers do not have.
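Finally, a note on the Eisenberg and Schlanger piece above: their mean-versus-median point is easy to see with fabricated numbers. The sketch below is illustrative only; none of these figures come from the AO data:

    # Fabricated award data (illustrative only, not from the AO database).
    # A zero-damages "judgment for plaintiff" and one miscoded huge value
    # wreck the mean while barely touching the median.
    import statistics

    true_awards = [5_000, 12_000, 20_000, 35_000, 50_000, 75_000, 120_000, 250_000]
    recorded = true_awards + [0, 25_000_000]   # two suspect entries

    print(statistics.mean(true_awards), statistics.mean(recorded))
    # roughly 70875 vs 2556700: the recorded mean is off by a factor of ~36
    print(statistics.median(true_awards), statistics.median(recorded))
    # 42500 vs 42500: the median is unmoved here

    # The authors' first suggested fix: drop the easily identified suspect
    # classes, which (in this toy example) restores the true mean exactly.
    cleaned = [a for a in recorded if a != 0 and a < 10_000_000]
    print(statistics.mean(cleaned))            # back to ~70875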


Friday, June 13, 2003
 
The Aretaic Turn The ever acute Jack Balkin answers the always intelligent Matthew Yglesias's argument for mediocrity on the federal bench. Here is a bit of Yglesias's post:
    Law professors seem to take it as just obvious that intellectual brilliance is an extremely important qualification for a judge and then debate as to whether or not said brilliance could be outweighed by really horrendous policy view or something. This whole approach seems wrong to me. Why should we want brilliant judges? Why not bland mediocrities? It seems to me that the lower federal courts, in particular, positively call for bland mediocrities who will adjudicate cases according to statute and precedent without doing much of anything that's remotely brilliant. Even at the Supreme Court level why should I want a judge who, like Posner and other brilliant legal theorists, has put forward revolutionary new understandings of the law?
    Certainly I wouldn't want stupid judges, but you can be a lot less brilliant than Judge Posner before you become stupid. I think a nice, ordinarily smart guy who got good grades in college and law school but who hasn't demonstrated much intellectual creativity or daring or cutting brilliance is exactly what we're looking for. Leave brilliant reconceptualizations to politicians and writers and professors and let the judges just judge away boringly.
Balkin responds:
    Good judging also involves more than the ability to figure out who should win. It also requires the ability to state principled legal conclusions in written form that can be useful to other judges and other actors in the legal system. A judge may recognize that the plaintiff should win in a particular case, but may not be able to articulate the reasons for that conclusion in a way that captures the best legal principle for deciding the case, and that distinguishes other fact situations that seem identical but in fact are not. Judges who cannot perform these tasks well may move the law in the wrong direction. People who have mediocre legal minds are usually not as good at articulating the proper grounds for resolution. They often tend to conflate issues or gloss over important distinctions. Law professors tend to think that brilliant legal minds are better at these features of judging than less brilliant legal minds. They are right about that, I think, but there are other important considerations that go into the art of good judging, and as Matthew suggests, we should not assume that law professors are the best at all of the aspects that go into this art.
Well, I think that both Balkin and Yglesias have captured part of the truth:
    Balkin is right. Good judging requires the intellectual virtues. Aristotle identified two intellectual virtues, sophia or theoretical wisdom and phronesis or practical wisdom. Good judging requires both of these virtues. Sometimes the law is intellectually complex--think of the rule against perpetuities or the Eleventh Amendment. Moreover, in our system, even trial judges face novel questions of law that require the exercise of the same intellectual abilities as are needed by appellate court judges. But theoretical wisdom is not sufficient. The law must be applied to particular fact situations. Judges need the ability to size up a case and discern its legally salient features. Moreover, judges must manage cases and devise remedies. These tasks require practical wisdom.
    Yglesias is right. Read charitably, the real point of Yglesias's post is not that judges don't need to be smart, but that they don't need to be brilliant. And here Yglesias's post calls to mind the connection between our understanding of "brilliance" or "genius" and creativity. And there is a sense in which we do not want judges to be creative. Why not? Because we want them to display the virtue of justice, which Aristotle understood as fidelity to law. (Let me put his view of equity to the side for the moment.) The virtue of justice requires decision according to the law as it is and not according to the judge's conception of the law as it should be--no matter how brilliant that conception might be.
This raises an interesting question: are the virtues of sophia (understood as encompassing brilliance) and justice in tension with one another? Are brilliant judges less dedicated to the rule of law? We know that Aristotle would deny that this is the case, at least for a judge who was fully in possession of the virtues. Such a judge would have the virtue of phronesis and would, therefore, be able to discriminate between cases that required judicial creativity and those that required fidelity to law. But is Aristotle right? We live after romanticism, and one of the fundamental assumptions of romantic thought is that genius is inconsistent with conformity. Hence, we have the image of the brilliant judge who casts aside precedent and forges a new legal order using only the brute force of intellect. It is interesting that Justice Holmes, who, like Posner, approached genius, maintained that the law was not the calling for brilliance. In this post, I have been making what we might call the aretaic turn in legal theory. That is, like Balkin and Yglesias, I am focusing on judicial character--rather than on a decision procedure for judging. For a fuller statement of my views on these matters, see Virtue Jurisprudence: A Virtue-Centered Theory of Judging.


 
Fair Use and the Constitution Orin Kerr (author of the must download Cybercrime's Scope: Interpreting 'Access' and 'Authorization' in Computer Misuse Statutes) has a typically thoughtful post on the Conspiracy on the question whether there is a constitutional defect in the DMCA because it lacks a fair use exception. Kerr says no. Here is his reason:
    As I see it, the problem is that an affirmative defense only works in conjunction with its corresponding cause of action. While the Constitution may require an affirmative defense to liability under a specific cause of action, we don't normally speak of someone having a "right" to do the act just because a particular law would not (even could not) punish it. Consider the insanity defense in criminal law, which, like fair use, is generally treated as an affirmative defense that excuses liability. Some courts have held that the insanity defense is required by the Due Process clause, and that legislative efforts to abolish the insanity defense are unconstitutional. See, e.g., Finger v. State, 27 P.3d 66 (Nev. 2001); State v. Strasburg, 110 P. 1020 (Wash. 1910). However, we don't talk about having Due Process rights to commit crime while insane. That would be pretty odd, in fact; imagine a defense attorney claiming that a prison sentence violated his client's constitutional rights by incapacitating his client and therefore making it impossible for him to commit crimes that would then be excused by the insanity defense. The trick is that although the Due Process clause may require a state to have an insanity defense, that does not mean that the law has to otherwise allow acts that if committed would fall under the insanity defense. At a conceptual level, I think the right to fair use is similar. It may be that the First Amendment requires a fair use defense to copyright infringement. As I see it, this does not necessarily mean that the First Amendment invalidates any other law (such as the DMCA) that prohibits acts that would constitute (or at least lead to) protected fair use.
There are many different things going on in this paragraph. Let me try to sort them out, one by one.
    --“[T]he problem is that an affirmative defense only works in conjunction with its corresponding cause of action.”
      First, it is hard to tell precisely what Kerr means when he says that “an affirmative defense only works in conjunction with its corresponding cause of action.” If he simply means that “affirmative defenses” are defenses to particular claims (or, to get fancy, “claim tokens”) by particular parties, then his claim is correct but trivial. If he means that any legal argument falling under the type “affirmative defense” is limited to a corresponding legal theory (claim or cause of action), then Kerr’s claim is simply false. “Fraud” is an affirmative defense, but it is not bound to a particular claim type. “Accord and satisfaction” is an affirmative defense, but can be raised against any claim type.
      Second, it is true that the “fair use” defense is bound up with copyright law. (It is, after all, in the statute.) There are similar doctrines that are all called “fair use,” i.e., trademark fair use, but these are not the same legal doctrine. So Kerr is right—the fair use affirmative defense is bound up with the copyright legal theory (claim or cause of action).
    --The analogy with the insanity defense.
      This wasn’t really necessary for Kerr’s argument, so I won’t have much to say. I would just like to point out that it is a poor analogy. There is no right to be insane, but there is a right to self-defense. The fact that some affirmative defenses are not rights-protecting defenses does not imply that no affirmative defenses are rights-protecting defenses. The use of the analogy would only advance Kerr’s claim if it were used in support of this latter claim, but that use of the analogy would constitute a logical fallacy and hence make Kerr’s argument invalid.
    --“ It may be that the First Amendment requires a fair use defense to copyright infringement. As I see it, this does not necessarily mean that the First Amendment invalidates any other law (such as the DMCA) that prohibits acts that would constitute (or at least lead to) protected fair use.”
      This is the real core of Kerr’s argument. Let’s take it apart, point by point:
      First, although Kerr says that it “may be” that the freedom of speech requires a fair use doctrine, there is substantial support for the proposition that the freedom of speech does require a fair use doctrine. See, e.g., Eldred.
      Second, Kerr says that if fair use is constitutionally required, “this does not necessarily mean that [the freedom of speech] invalidates any other law . . . that would constitute (or at least lead to) protected fair use.” Kerr has used the idea of necessity, introducing an important ambiguity in his claim. We need to think this through step by step to understand Kerr’s argument.
        + By saying that this is “not necessary,” Kerr might mean that from the fact that the freedom of speech requires fair use as a defense to copyright, there is no logical implication that freedom of speech requires fair use as a defense to other legal theories (claims or causes of action) such as the DMCA. If this is what Kerr means, he is right, but his point is trivial. Logical implication is a very stringent standard; few legal conclusions about the content of the law follow as logical implications from the rules in analogous areas of law. We want to know if freedom of speech does require a fair use defense to the DMCA. Logical necessity is simply beside the point.
        + What we really need to know are the answers to questions like the following:
          # What is the constitutional basis for the fair use defense in copyright law?
          # Does that rationale extend to the DMCA?
        + Once we frame the issue in this way, we can see that Kerr does have a point. Here is an example. Suppose that a terrorist organization has an assassin awaiting a signal to attack a prominent political figure. Suppose further that the signal is a shortwave broadcast in which a passage from a copyrighted work (e.g., a paragraph from a novel) is read. Suppose further that the passage is short, and would constitute fair use. But of course, the terrorist who reads the passage will not have a fair use defense to a conspiracy charge, even though the freedom of speech might require a fair use defense to a copyright violation claim.
        + But our terrorist hypothetical has very little to tell us about the question whether the freedom of speech requires a fair use exception to the DMCA. Why not? Because the purpose and function of the DMCA is closely aligned to the purpose and function of the Copyright Act. Indeed, the anti-circumvention provisions of the DMCA are aimed at protecting copyrights. The argument that the freedom of speech requires some accommodation of fair use under the DMCA is based on the relationship between the function and purpose of the DMCA and the function and purpose of the copyright acts.
    So where does this leave us? I am not going to produce an affirmative argument that the freedom of speech requires a fair use exception to the DMCA. Many others have done that. The very modest point of this post is that Kerr has not produced an argument to the contrary.


 
New Papers on the Net Here is the roundup:
    Geoffrey Sayre-McCord uploads Coherentist Epistemology and Moral Theory. Here is the abstract:
      Moral knowledge, to the extent anyone has it, is as much a matter of knowing how -- how to act, react, feel and reflect appropriately -- as it is a matter of knowing that -- that injustice is wrong, courage is valuable, and care is due. Such knowledge is embodied in a range of capacities, abilities, and skills that are not acquired simply by learning that certain things are morally required or forbidden or that certain abilities and skills are important. To lose sight of this fact, to focus exclusively on questions concerning what is commonly called propositional knowledge, is to lose one's grip on (at least one crucial aspect of) the intimate connection between morality and action. At the same time, insofar as it suggests that moral capacities can be exhaustively accounted for by appeal to people's cognitive states, to focus on propositional knowledge is to invite an overintellectualized picture of those capacities. No account of moral knowledge will be adequate unless it does justice to the ways in which knowing right from wrong, and good from bad, is not simply a matter of forming the correct beliefs but is a matter of acquiring certain abilities to act, react, feel, and reflect appropriately in the situations in which one finds oneself. And this means a satisfying treatment of moral epistemology must give due attention to what's involved in knowing how to be moral.
    Oliver Hart (Harvard) posts Incomplete Contracts and Public Ownership: Remarks, and an Application to Public-Private Partnerships. From the abstract:
      The question of what should determine the boundaries between public and private firms in an advanced capitalist economy is a highly topical one. In this paper I will try to summarize some recent theoretical thinking on this issue. I will divide the paper into two parts. First, I will make some general remarks about the relationship between the theoretical literature on privatization and incomplete contracting theories of the firm. Second, I will use some of the ideas from this literature to develop a very preliminary model of public-private partnerships.
    Mariano-Florentino Cuellar (Stanford) posts Choosing Anti-Terror Targets by National Origin and Race. Here is the abstract:
      Here is an increasingly accepted post-September 11 view about racial and national origin profiling (i.e., "profiling") in law enforcement: it may be troubling but its use should depend on the context (Part I). In other words, legislatures, courts, and executive officials should weigh the costs and benefits of using a particular law enforcement policy (i.e., profiling) in a given context (i.e., the war on terrorism). The problem is that this idea suffers from massive conceptual and practical difficulties, underscored by the fact that terms like cost, benefit, and terrorism are not self-explanatory. Yet the intuitive appeal of some anti-terror policies - including profiling - can result in a sort of "plausibility principle," where legislatures, courts, and others often consider the merits of law enforcement strategies merely on the basis of whether such policies have a plausible justification (Part II). The problems that arise with general discussions of profiling are not likely to be solved by judicial review of individual profiling policies because the relevant constitutional doctrines - under equal protection, due process, and the Fourth and First Amendments - do little to regulate law enforcement's decisions to investigate and prosecute (Part III). Which means that to decide whether profiling itself makes sense, we must consider the dynamics driving law enforcement's use of discretion. At least one aspect of this should raise concern - that statutory and other legal changes increase law enforcement's flexibility not only to choose whom to investigate, but what sorts of methods to deploy, including troubling tactics such as extended detention of immigrants, secret searches, or wiretaps (Part IV). Changes in the judicial regulation of discretion are possible but unlikely, and data will continue to be scarce. All of which makes it hard to accept even a principled "context-dependent" case for legitimizing profiling in the war on terror, and highlights the disconnection between asserted justifications for particular enforcement strategies and the likely performance of enforcement systems.
    Leo Strine, Jr. (Delaware Court of Chancery) uploads 'Mediation Only' Filings in the Delaware Court of Chancery: Can New Value Be Added by One of America's Business Courts?, forthcoming in the Duke Law Journal. Here is the abstract:
      This essay, which is forthcoming in the Duke Law Journal, advocates a new role for the Delaware Court of Chancery - the handling of "mediation only" business cases. Mediation only cases are matters submitted to the court solely for the purpose of invoking the services of a member of the Court of Chancery as a mediator to help the parties resolve a business dispute through a mediated settlement. The judicial-mediator would have no adjudicative role in the traditional sense, but would solely act to facilitate a mutually acceptable resolution. In the essay, the author identifies the possible utility of this concept - which was recently enacted into law - and its consistency, in broad terms, with the historic role of the Delaware Court of Chancery in filling "gaps" in corporate instruments and commercial contracts.
    Steven Hinckley (University of South Carolina) uploads Your Money or Your Speech: The Children's Internet Protection Act and the Congressional Assault on the First Amendment in Public Libraries, forthcoming in the Washington University Law Quarterly. From the abstract:
      The article contends that Congress has not met its burden under strict scrutiny of showing that Internet filtering in public libraries is constitutionally necessary to achieve the government's interests in protecting children, and argues that software filters cannot possibly be applied without compromising a great deal of library patrons' constitutionally protected online speech. The article concludes that, stripped of the facade of an innocent use of Congress's spending power, the Children's Internet Protection Act is merely the Communications Decency Act and the Child Online Protection Act in disguise - yet another clumsy congressional attempt to censor Internet content without sufficient concern for the damage it will do to the First Amendment rights of library patrons, many of whom use libraries as the only place where they can gain access to the incredible wealth of diverse information available on the Internet.


Thursday, June 12, 2003
 
Posner for Chief Jack Balkin endorses Richard Posner for Chief Justice in a post on Balkinization. No one is more deserving.


 
Marston on Kennedy and Pryor I just caught up with this post by Brett Marston. Great!


 
Baude on Judicial Self-Selection Will Baude (Crescat Sententia) has a very nice post on the relationship between a prospective judge's jurisprudence and motivation to pursue a judicial career.


 
Must Read . . . is the only way to describe Howard Bashman's column U.S. Supreme Court Vacancies On The Horizon: What To Expect This Summer If One Or Two Vacancies Arise On The Court.


 
Bernard Williams I am sad to report the death of the eminent philosopher Bernard Williams, long associated with Cambridge, Berkeley, and Oxford. I regret that I saw Williams speak only once, at a meeting of the American Philosophical Association, where he did an author meets critics session with the late Warren Quinn. The topic was Williams's Ethics and the Limits of Philosophy. Here is a brief biography from his home page:
    Professor Williams received the M.A. degree from Oxford University. After serving in the Royal Air Force, he held a series of academic positions in England. In 1967 he was appointed Knightbridge Professor of Philosophy at Cambridge University, and in 1979, Provost of King's College. He came to Berkeley in 1988; from 1990 to 1996 he also held the position of White's Professor of Moral Philosophy at Oxford University. Professor Williams divides his time between Berkeley and England. He has been Fellow of the British Academy since 1971 and Foreign Honorary Member of the American Academy of Arts and Sciences since 1983. He has been awarded honorary degrees by the University of Dublin, the University of Aberdeen, Cambridge University, Harvard University, Yale University, and the University of Chicago; he was knighted in 1999. He has served on several government committees in England, including the Royal Commission on Gambling (1976-78), and he was chairman of the Committee on Obscenity and Film Censorship (1977-79). He was a member of the Labour Party's Commission on Social Justice (1992-94) and participated in the Independent Inquiry into the Misuse of Drugs Act (1997-2000). From 1967 through 1986 he was a member of the Board of Sadler's Wells Opera (later the English National Opera).
Williams was a giant of contemporary moral philosophy.
Update: For more, see Chris Bertram's Junius. Also, Jacob Levy on the Conspiracy.


 
The Filibuster and Alberto Gonzales Rick Hasen has a must-read post entitled Rationality of party filibuster strategies, continued, and the Democrats' Apparent Success in Blocking Estrada. Rick's thesis is that the Democratic filibusters of Owen and Estrada have succeeded strategically by moving the Bush administration away from either Owen or Estrada as a Supreme Court nominee and improving the chances of Alberto Gonzales. Rick quotes from a Washington Times story that includes the following:
    At the top of most lists [of potential Supreme Court nominees] is Alberto Gonzales, White House counsel and a friend of Mr. Bush's. Ironically, the former Texas Supreme Court justice is opposed by some conservatives because he has said the landmark Roe v. Wade case legalizing abortion is settled law.
    Also on the list are Washington lawyer Miguel Estrada and Texas Supreme Court Justice Priscilla Owen. But in recent weeks, Republican insiders have said Mr. Estrada and Justice Owen are unlikely as contenders because both of their nominations to lower federal courts are being filibustered by Democrats.
Rick comments:
    The article also says that if Democrats filibuster nominees, Republicans are threatening the "nuclear" option. But if the article is correct that Gonzales gets the nod, I doubt there would be a Democratic filibuster. Democrats would be foolish to block a relatively centrist nominee.
Hmm. My immediate and quite off-the-cuff reaction is that if Rick's analysis is correct, then there will be strong pressure on President Bush not to nominate Gonzales. Given the downward spiral of politicization that has characterized the judicial selection process in recent years, it seems almost inconceivable that the Republican right could accept political moderation as the response to the Democratic filibusters of Owen and Estrada. But there is another way of looking at this issue. Why should we assume that Gonzales's belief in the doctrine of stare decisis makes him a political moderate? There is another, more natural explanation for deference to precedent. Perhaps Gonzales is committed instead to the rule of law. And commitment to the rule of law is exactly what we should be looking for in prospective Supreme Court Justices. Of course, no one has resigned--yet!


 
Oman on Historical Explanations of Law Nate Oman responds to Brian Leiter and to me. The issue: are historians guilty of intellectual sloppiness when they are dismissive of the causal role of legal doctrines and ideas in the production of legal events?


 
Harm Facilitating Speech Does the freedom of speech encompass a manual for bomb making? Check out Eugene Volokh's post on the Conspiracy.


 
Rubin on Retribution I have been thinking quite a lot about retribution recently, so I was very pleased this morning when I discovered that Edward Rubin (Pennsylvania) has just posted Just Say No to Retribution on SSRN. Rubin is smart and interesting, and he comes from an intellectual space that is quite different from my own. So I read the paper with great interest. You should read the full paper, but here is the abstract, followed by some comments:
    Retribution has become increasingly popular, among both legislators and scholars, as a rationale for punishment. The proposed revision of the Model Criminal Code adopts this newly fashionable standard and abandons its previous commitment to rehabilitation. The concept of retribution, however, is too vague to serve as an effective principle of punishment. It is sometimes defined as a requirement that the criminal be "paid back" for the harm he inflicted, but this is a virtually empty metaphor, since prison time has very little to do with repayment. A second definition of retribution involves desert, but the term is both over- and under-inclusive with respect to criminal punishment. Retribution does have a core meaning, however; it inevitably involves the idea of morally condemning the offender. The difficulty is that moral condemnation is entirely inconsistent with the premises of the modern administrative state. Modern governments are supposed to be instrumental - we want them to meet our needs, not to generate their own moral systems. It might be argued that a retributive standard responds to the people's morality, and more specifically to their anger at the criminal. But modern government is supposed to serve people's needs, not their passions, and our own Constitution is based on this exact ethos. In addition, retributive discourse is likely to exacerbate one of the most serious problems in American criminal justice, which is the over-use of imprisonment, particularly for non-violent offenders. The principles of punishment that should be adopted in place of retribution are rehabilitation and proportionality. Proportionality involves a relative ranking of crimes and punishments, so that the most severe punishments are imposed for the most serious crimes, and milder ones are used for less serious crimes. It would forbid the two California sentences that the Supreme Court just upheld against an Eighth Amendment challenge, where a person who stole $399 worth of golf clubs, and another who stole $150 worth of videotapes, received sentences of 25 years to life. Retributivists often adopt proportionality as their own means for establishing a punishment scale, but this only illustrates the emptiness of retribution as a concept. If retribution means anything, it is that we have some fixed idea about the amount of punishment a particular criminal deserves or should be paid back with, not that punishments should be determined by their relationship to other punishments. In fact, proportionality is an independent principle. While it is inconsistent with the concept of retribution, it serves as a complementary principle to rehabilitation.
It strikes me that something quite odd is going on in Rubin's paper. Retributivism, the view that Rubin critiques, is a theory of punishment. It seeks to explain why we punish. Proportionality, which is part of Rubin's alternative to retributivism, is not a theory of punishment, but a tool deployed in service of some theory (including retributivism), as Rubin points out. So if Rubin limited himself to comparing retributivism with proportionality, he would be committing a category mistake. Of course, Rubin does not make this error. His real comparison is between retributivism and rehabilitation, and the idea that the proper purpose of punishment is rehabilitation is a rival of the idea that the purpose of punishment is retribution. But then Rubin's position becomes quite odd, because retributivists have a natural way to incorporate proportionality--punishment should be proportional to desert (blameworthiness)--whereas rehabilitation theories are usually thought to eliminate concern for proportionality. (It might be that crime-of-passion murderers can be rehabilitated with an anger-management class, but petty thieves require years of intensive treatment--it all depends on contingent facts.) Rubin argues, "If retribution means anything, it is that we have some fixed idea about the amount of punishment a particular criminal deserves or should be paid back with, not that punishments should be determined by their relationship to other punishments." But this is simply silly. Retributivists are entitled to use the distinction between natural and conventional justice. Punishments are obviously largely conventional (prison versus fine versus community service). Although retributivists might argue that some punishments are clearly too severe or not severe enough, any sensible retributivist will recognize that natural desert does not determine the precise punishment. So given a conventional system of punishment, proportionality plays a role that coheres with retributivism's deep assumptions. But there is lots more in Rubin's paper. In particular, you will want to read his defense of rehabilitation. Download it while it's hot.


 
Katz on Compensation Leo Katz (Pennsylvania) has uploaded What to Compensate? Some Surprisingly Unappreciated Reasons Why the Problem is So Hard to SSRN.
    Finding the rightful measure of compensation involves first finding the right baseline. But baseline problems, though common throughout law, are remarkably ill-understood. Rather than solve these problems outright, this essay seeks to get to the bottom of their multiple roots. The four kinds of cases being considered are typified by (1) the plaintiff whose leg the defendant tortiously broke - thus preventing him from getting on the plane that crashed (i.e., "failure to worsen" cases); (2) the plaintiff whose loss of legs due to defendant's tortious conduct caused her to give up her career as a professional athlete - with the result that she is now much happier and has no regrets about losing her former career (i.e., "subjective improvement" cases); (3) the promisee of an enforceable contractual promise asking to be put in the position he would have been in had the promise been kept rather than had the promise never been made (i.e., the contract damage question); (4) the plaintiff who but for defendant's tortious conduct would not exist, with particular emphasis on the descendants of slaves who but for slavery would not have existed, and surely not in the United States.
Katz is always worth reading, and this article tackles a deep and interesting problem. Download it while it's hot.


 
McCaffery and Baron on Framing and Taxation Edward McCaffery (Southern Cal, Law) and Jonathan Baron (Southern Cal, Psychology) have posted Framing and Taxation: Evaluation of Tax Policies Involving Household Composition. McCaffery is one of the most interesting legal theorists working in tax. Here is the abstract of their paper:
    Three studies of attitudes toward tax policies were conducted on the World Wide Web. The results show several effects. In penalty aversion, subjects preferred bonuses over penalties when policies differed only in how they were formally described. In the Schelling effect, subjects preferred both higher bonuses (for children) for the poor than for the rich and higher penalties (for being childless) for the rich than for the poor. In the neutrality bias, subjects preferred separate filing for married couples more when it was presented in a format that emphasized the effect of marriage (where it is neutral) than in one that emphasized the effect of the number of earners in a couple (where one-earner couples pay more). In the status-quo effect, subjects preferred the specified starting point to any change. Finally, in the metric effect, subjects favored more progressiveness in tax burdens when taxes were expressed in percent than when they were expressed in dollars.
    The research suggests a general framework. Subjects approach a given decision problem with strong independent norms or ideals, such as, here, "do no harm," "avoid penalties," "treat likes alike," "help children," and "expect the rich to pay more." They then evaluate the problem on the basis of the most salient norms. In a complex area such as tax, independently attractive ideals are often in conflict, and the result is shifting, inconsistent preferences.
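The metric effect is easier to see with concrete numbers. Here is a minimal sketch in Python; the incomes and tax amounts are hypothetical, chosen only for illustration and not taken from the paper. The point is that a schedule can look steeply graduated in dollars while being flat or even regressive in percent.

    # The same hypothetical tax schedule, described in dollars and in percent.
    incomes = {"lower-income household": 25_000, "higher-income household": 100_000}
    taxes = {"lower-income household": 2_500, "higher-income household": 7_500}

    for household, income in incomes.items():
        tax = taxes[household]
        print(f"{household}: ${tax:,} on ${income:,} ({tax / income:.1%} of income)")

    # lower-income household: $2,500 on $25,000 (10.0% of income)
    # higher-income household: $7,500 on $100,000 (7.5% of income)

In dollar terms the higher earner pays three times as much, which reads as progressive; in percentage terms the same schedule is regressive. If subjects evaluate each frame against the same norm that the rich should pay more, the percent frame will elicit demands for more progressivity, which is just what McCaffery and Baron report.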


 
New Papers on the Net Here is the roundup:
    Ehud Kamar (Southern Cal) posts Shareholder Litigation Under Indeterminate Corporate Law, forthcoming in the University of Chicago Law Review. From the abstract:
      One of the least explained phenomena in American corporate law is the puzzling circularity of director and officer liability insurance and indemnification. Under the auspices of state corporate law, virtually all public corporations use internal and external insurance to protect their boards and management from liability for breach of fiduciary duties. The concept of liability insurance and indemnification in relation to shareholder fiduciary claims seems on its face futile. Arguably, there is no utility for shareholders in suing corporate fiduciaries for damages when fiduciaries pay most of these damages using funds provided by shareholders. The resulting transaction and litigation costs simply seem superfluous. This Article argues that insurance and indemnification can be a socially desirable mechanism that induces plaintiffs to sue yet keeps sanctions low. Although litigation is costly, and should ordinarily be kept at a minimum, shareholder litigation can be cost-effective in view of the indeterminacy that characterizes corporate law.
    Jerry Kang (UCLA) posts Denying Prejudice: Internment, Redress, and Denial. From the abstract:
      Students of the Japanese American internment know about the remarkable coram nobis cases that took place in the early 1980s. In these cases, Korematsu, Yasui, and Hirabayashi returned to the federal district courts that convicted them during World War II and petitioned for their convictions to be overturned on the basis of "smoking gun" evidence discovered in the national archives. That evidence showed that the Executive Branch had suppressed critical exculpatory evidence during the 1940s litigation in which Fred Korematsu and three other litigants challenged the internment's constitutionality. Quite remarkably, these petitions were largely granted, and fueled the extraordinary redress movement, which culminated in federal reparations. Yet, there was a dark side to this victory. In granting victory to the petitioners, the Judiciary absolved the one branch of government that has never been held accountable for the internment: itself. In overturning these convictions, the lower federal courts adopted an official legal history that insulated the wartime Supreme Court from any fault. According to that account, the Supreme Court was simply duped by bad apples in the Departments of War and Justice, who suppressed exculpatory evidence. But this tidy story is nonsense. The wartime Court was no innocent, tricked by conniving lawyers. It was a full participant in the internment machinery, and it deployed its enormous intellectual resources to make sure that it did not interfere with the internment but, at the same time, never granted the internment official approval. Respectful of its sister branches of government, the Court also made sure that blame would not fall at the feet of President Franklin Delano Roosevelt or the Congress. Instead, it thrust responsibility upon the little-known War Relocation Authority, ridiculously characterized as a rogue agency. This is what the Court did in the 1940s. And as I show in Part I, it did so with tremendous acumen, exploiting what are typically praised as the passive virtues. For its machinations, the Judiciary has never apologized or accepted responsibility. After the coram nobis cases of the 1980s, official history has been rewritten to make any apology simply unwarranted. In this way, the personal victories of Fred Korematsu, Gordon Hirabayashi, and Min Yasui were ironically exploited to complete the circle of absolution started by the Supreme Court itself back in the '40s. The apparent acceptance of responsibility manifested in the 1980s coram nobis cases was and is a mirage. As regards the Judiciary, we do not have the taking of responsibility; we have a supreme denial. That is the counter-story told in Part II. What is the payoff of this more nuanced and disturbing interpretation of the internment, the Judiciary, and the coram nobis cases? In Part III, I sketch out some preliminary answers in light of reparations theory and the revival of what I call the Korematsu mindset, post-9/11.
    Anthony Infanti (Pittsburgh) offers Cross-border Outsourcing: U.S. International Tax Pitfalls, Pratfalls, and Opportunities. From the abstract:
      During the past decade, there has been a surge in outsourcing by businesses both in the United States and abroad. In the face of this surge in outsourcing as well as the trend toward outsourcing activities that come closer and closer to a business' "core," some commentators have underscored the need for businesses to make an educated decision about whether and what to outsource. This article, which, as its title indicates, is particularly concerned with cross-border outsourcing, is written in the same vein. It provides a non-exhaustive examination of the myriad circumstances under which a decision to outsource the provision of goods or the performance of services to a foreign provider can affect the application of the U.S. international tax regime to the outsourcing business. The purpose of this article is to foster greater awareness of the sometimes dissonant tax aspects of cross-border outsourcing and thereby impel businesses and their legal advisors to take a more holistic view of the decision to outsource - a view that encompasses not only the potential business benefits and detriments of a decision to outsource, but also the potential tax benefits and detriments of such a decision.
And here are other titles of interest:


 
Crossley at FSU on the ADA At Florida State today, Mary Crossley does an internal workshop entitled Evenhanded Inequality: Reclaiming the Civil Rights Foundations of the ADA.


Wednesday, June 11, 2003
 
Leiter on Microfoundations and Functionalist Causation in Law Brian Leiter writes in response to my offhand remarks on the role of microfoundations in historical explanations of legal change:
    I was surprised for several reasons to read the following on your site: "When thinking about this issue, I always come back to the famous debate between Jon Elster and Gerald Cohen over Marxist theories of history. From where I sit, Elster won this debate decisively. Without microfoundations, Marxist theories of history are close to mere dogma. But historians believe they can explain legal events, like Supreme Court decisions, with absolutely no account of causal mechanism at all. This sort of sloppy thinking is really quite astounding." A few comments:
      (1) It cannot possibly be a constraint on the validity of a functional explanation that we be able to identify the causal mechanism. The obvious counter-example: Darwin's theory of natural selection, which predates Mendelian genetics (which supplied the causal mechanism). Darwin's theory was not "mere dogma" prior to Mendel. Why not? Because it gave the best account of the phenomena in question. The discovery of a causal mechanism provides additional support for a functional explanation, but its absence is not decisive. Functional explanations that are genuine explanations should have "in principle" causal mechanisms underlying them, to be sure, but Marx clearly has those. (2) You make it sound as though Cohen denied the existence of microfoundations, but I don't think this is right. First, I read 258 ff. of Cohen's 1978 Karl Marx's Theory of History as acknowledging the central point about functional explanations: that genuine ones are, in principle, translatable into the terms of causal explanations. Second, in the case of functional explanations of historical change, the relevant causal mechanisms (the microfoundations) are to be cashed out in terms of the struggle between classes for control of productive forces (cf. Railton, "Explanatory Asymmetry in Historical Materialism," Ethics [1986]). Here again Cohen (292-93 of the 1978 book):
        "Classes are permanently poised against one another, and that class tends to prevail whose rule would best meet the demands of production. But how does the fact that production would prosper under a certain class ensure its dominion? Part of the answer is that there is a general stake in stable and thriving production, so that the class best placed to deliver it attracts allies for other strata in society. Prospective ruling classes are often able to raise support among the classes subjected to the ruling class they would displace. Contrariwise, classes unsuited to the task of governing society tend to lack the confidence political hegemony requires, and if they do seize power, they tend not to hold it for long."
      So here Cohen clearly meets the demand for microfoundations, in ways that are familiar from Marx as well. Put more simply: if the nascent bourgeoisie can out-produce the complacent feudal lords, it should hardly be surprising that (a) they try to, and (b) they displace, in the end, the unproductive feudal lords. This is more than sufficient to defeat Elster's challenge.
      To sum up points 1 and 2, here's what I wrote in the Stanford Law Review in the May 2002 issue:
        All functional explanations, it turns out, have the suspicious feature that the explanandum (the thing to be explained) is temporally prior to the explanans (that which does the explaining). The sucking reflex (the explanandum) is clearly prior to the fact of survival (the explanans). But how can anything explain the existence and character of something that comes before it? Genuine explanations involve the temporal priority of explanans over explanandum: you explain the occurrence of X by something that came before X in time, not after it! So if functional explanations are genuine, they must satisfy this temporal demand. How can they do so?
        The answer, in a nutshell, is that functional explanations, if they are real explanations, have to be reducible to or shorthands for ordinary causal explanations (X was caused by Y, and Y preceded X in time). When we say the sucking reflex in infants is explained by the contribution it makes to the survival of newborns, what we really mean is that the reason the sucking reflex came to predominate in the population of infants is that, in the past, those infants with the genetic predisposition for the sucking reflex survived and went on to reproduce at much higher rates than those lacking that genetic predisposition. So a genetic predisposition towards sucking causes survival, which over time and populations, causes most infants to end up having that genetic predisposition.
        Class struggle must play the same role with respect to Cohen’s functionalist version of historical materialism: The reason relations of production favorable to the maximal development of the forces of production come into being is because classes that can effectively exploit the forces of production try to bring such relations about. Here is how Peter Railton put the point many years ago:
          "Historically man has enlarged what are in effect his natural (“material”) possibilities through the development of new productive forces, and, with this, new ranges of adaptations or social forms ... became possible. When the terms of competition thus shift, individuals or groups who happen to be so situated or so to act as to take differential advantage of these changes in adaptive possibilities will acquire increased resources, power, and so on. The result may be the emergence into prominence of new groups at the expense of those groups who previously commanded resources, power, and so on. If the terms of competition shift markedly, and if new groups emerge who take advantage of these changes, the resulting conflict may lead to an overthrow of existing social relations.... Marx analyzes such intergroup competition as class struggle, since the groupings that emerge in such conflicts are, he believes, determined by the relation of individuals to the productive forces....
          As in the biological case, one can give a “fitness”-invoking [or functionalist] gloss on this process: a dominant class that cannot achieve efficient exploitation of the possibilities inherent in the existing state of productive forces will tend to be replaced by a class that can, and, in the process, social relations as a whole will be reshaped to reflect the mode of existence of this more efficient class."
        So functionalist explanations are simply a gloss on ordinary causal explanations in terms of class struggle. And class struggle provides a fruitful explanatory rubric through which to view a wide range of historical events, from slavery to urban history.
      (3) As to the historians, the only account (among those that explain decisions without recourse to doctrine) that I have some familiarity with is Lucas A. Powe, Jr., The Warren Court and American Politics (Harvard UP, 2000), and that one is rich in explicit and implicit claims about the microfoundations.
    What could be nicer than having Brian Leiter to keep one intellectually honest? Now, I need to dig out my copy of Elster's Making Sense of Marx and take a look at Powe's book. Surely Brian is right that one can have a covering law (in Cohen's sense) without a complete account of the causal mechanism. It seems, nonetheless, quite fair to ask historians (Marxist or otherwise) who discount doctrinal or intellectual accounts of the production of legal doctrine for their account of the causal mechanism. It is pretty hard to tell a coherent story about the causes of Supreme Court opinions that doesn't--in an important and causally potent way--go through the legal ideas and doctrines. This is the crucial point in the context of Nate Oman's experience at his conference. Brian and I would need to get down to cases before we can decide how far apart we are.


 
Report from Mongolia Andrew McLaughlin (of the Berkman Center at Harvard) is doing a Slate diary from Mongolia. Here is his description of the why of his visit:
    I am spending two weeks here as a volunteer at the invitation of Geekcorps, a nonprofit that sends computer and network techies to developing countries, where they work on projects designed to bolster the ability of local entrepreneurs to serve their markets. My job is to assist the local technology sector and the relevant portions of the Mongolian government (the Ministry of Infrastructure, the telecommunications regulatory board, and some interested members of Mongolia's parliament, the Great Hural) to work through pressing Internet-related policy issues: the regulation of Internet service providers, the allocation of radio spectrum for wireless devices, the taxation of technology goods and services, the establishment of an advocacy association for information technology firms, and the creation of clear, predictable, objective, and independent regulatory procedures.
Check it out.


 
Confirmation Wars: Financing Check out this post by Rick Hasen on the $5.5 million war chest banked for opposition to Bush Supreme Court nominees. (Rick, doesn't this provide some evidence of a downward spiral?)


 
Benjamin's Critique of a Spectrum Commons Stuart Benjamin has posted Spectrum Abundance and the Choice Between Private and Public Control on SSRN. This paper attacks the spectrum-commons position (associated with Larry Lessig and Yochai Benkler). Here is the abstract:
    Prominent commentators have recently proposed that the government allocate significant portions of the radio spectrum as a wireless commons. The problem for commons proposals is that truly open access leads to interference, which renders a commons unattractive. Those advocating a commons assert, however, that a network comprising devices that operate at low power and repeat each other's messages can eliminate the interference problem. They contend that this possibility renders spectrum commons more efficient than privately owned spectrum, and in fact that private owners would not create these abundant networks (as I call them) in the first place. In this Article I argue that these assertions are not well-founded, and that efficiency considerations favor private ownership of the spectrum.
    Those advocating a commons do not propose a network in which anyone can transmit as she pleases. The abundant networks they envision involve significant control over the devices that will be allowed to transmit. On the question whether private entities will create these abundant networks, commons advocates emphasize the transaction costs of aggregating spectrum, but those costs can be avoided via allotment of spectrum in large swaths. The comparative question of the efficiency of private versus public control, meanwhile, entails an evaluation of the implications of the profit motive (enhanced ability and desire to devise the best networks, but also the desire to attain monopoly power) versus properties of government action (the avoidance of private monopoly, but also a cumbersome process that can be subject to rent-seeking). The deciding factor, in my view, is that these networks might not develop as planned, and so the flexibility entailed by private ownership - as well as the shifting of the risk of failure from taxpayers to shareholders - makes private ownership the better option.
    The unattractiveness of a commons in this context casts serious doubt on the desirability of commons more generally. Commons proponents have championed abundant networks because those networks avoid interference problems. If private ownership is a more efficient means of creating abundant networks, then the same would almost certainly be true for networks that run the risk of interference. Most uses of spectrum are subject to interference, so the failure of the commons advocates' arguments undermines the appeal of a commons for most potential uses of spectrum.
If you are interested in communications regulation, this is a must read.


 
Legal Theory? Kenney Hegland has posted If Stephen King Discovers Cujo, Can Judges Discover Law? (forthcoming in the Legal Studies Forum) on SSRN. Here is a taste:
    If the law is discovered, relativism vanishes: law, and our commitment to process (the Constitution, majority rule, rationality, and articulation), rest on bedrock, not whim. However, the discovery view tends toward intolerance (“I’m sure I’m right”), tends toward abstraction (“Who cares what is happening on the street when we know what should be happening?”), and tends toward rigidity (“We got this right the first time”). The view that judges invent or create law solves these problems: it promotes tolerance and practical, non-theoretical problem-solving, and, as a result, is quite flexible in dealing with new problems.
And here is a bit more:
    If we believed that things exist out there, our world would look different. Obviously it did for those judges who thought they were discovering the law: these folks may have been wrong, but they weren’t idiots. What we would see would be a world that supports our view. Consensus would jump out and judicial disagreements would no longer be seen as proving “law doesn’t exist, out there,” but rather would be viewed as data in need of an explanation: “Given that judges discover the law, how can they discover different things?” Did someone do a clumsy job? A half-hearted job? Are we on the right track, not there yet, but getting there? When two scientists come to different conclusions, “Dinosaurs are related to lizards,” “No, to birds,” we don’t throw up our hands and give up on science and say that there is no truth about dinosaurs, only points of view. No, we get excited and want to do more science. Who is right? What explains the error?
No comment.


 
New Papers on the Net Here is today's roundup:
    Jens Großer posts Should I organize the conference?: Cut-point belief reciprocity in an experimental public goods game with alternating, single decision makers.
    Marco Battaglini uploads Long-Term Contracting with Markovian Consumers.
    Joan Esteban and Laurence Kranich offer Redistributive Taxation with Endogenous Sentiments.
    Robert McCarthy posts a Review of James R. Otteson's Adam Smith’s Marketplace of Life on the Notre Dame Philosophical Reviews, courtesy of Online Papers in Philosophy. Here is a taste:
      Otteson’s attention to the mechanistic character of Smith’s psychology is a strength of the book. His language (calling the impartial spectator a ’procedure,’ for example) shows that he sees clearly that Smith’s method is to give a causal account of human behavior in terms of the interaction of human passions. The passions are simple, but interact in complex ways. It is from this basis that what Otteson calls “unintended order” arises. People acting on “basic, natural drives” cause “an order that they did not consciously intend to create but that nevertheless unfolds on its own and serves both to strengthen the interpersonal bonds and increase the wealth of the community” (p. 6). It is Smith’s interest in this phenomenon, according to Otteson, that unites his two major works. In particular, Otteson argues, the impartial spectator is the tool that in both works produces this order, through the mechanisms of the marketplace.
    Damian Chalmers (London School of Economics & Political Science, Law) posts The Reconstitution of European Public Spheres, forthcoming in the European Law Journal. From the abstract:
      The strength of participation in its political processes has increasingly become the yardstick against which the legitimacy of the European Union is measured. Yet experiments in deliberative and participatory democracy suggest that their practice invariably falls short of their lofty ideals. A reason is their failure to consider the process of communication itself. As understanding of communication is constituted through a number of surrounding communicative contexts, communication can never be said to be good or bad. More important is a constitutional framework for communication which provides the contexts - performative, institutional, and epistemic - that enable communication to contribute to particular, desirable ideals. This piece will argue that a deliberative approach to European governance involves a process of justification in which the three practical tasks of the European Union - polity-building, problem-solving, and the negotiation of political community - are debated and resolved around the four values that have underpinned the development of politics as a productive process: those of transformation, validity, relationality, and self-government. The organisational reform required for this involves a wide-ranging revisiting of the structures of the European polity.
    Russell Denton and Paul Heald (Georgia) unveil Random Walks, Non-Cooperative Games, and the Complex Mathematics of Patent Pricing. From the abstract:
      Current patent valuation methods have been described charitably as "inappropriate," "short-sighted," "inherently unreliable," and a "guestimate." This article provides a more rational and systematic tool than any found in the existing literature. We explain how patents are like stock options and demonstrate how the Black-Scholes equation for pricing real options can be applied to price patents. First, we explain the major difficulties inherent in applying the standard equation to patents and then proceed to demonstrate how it can be adapted to overcome those problems. In particular, the Denton Variation of Black-Scholes begins with fine distinctions in identifying the source of value, followed by a systematic analysis of factors, especially market forces, that influence variance and its sources over time. We show that the point-price paradigm relied upon by patent valuations to date has been flawed. Here, we leave the world of contemporary patent valuation behind. We claim that solving a single Black-Scholes equation is grossly inadequate for a risky, long-lived, infrequently traded item such as a patent. For a patent, the present value exists as a distribution curve with variously weighted probabilities, thus the apparent precision in picking a starting value by traditional patent valuation methods is illusory. The Denton Variation eliminates two historic shortcomings of the parent equation by providing a precise way to factor in transactions costs, and by quantifying the impact of the option cost on the profitability of the transaction. Moreover, the expression can accommodate a variety of patent profitability situations. For the purposes of illustration, we run through the equation to value a hypothetical patent. We also take the time to explain how insights provided by game theory help justify the choices that underlie the Denton adaptation of the Black-Scholes equation. In the patent licensing context, considerations of the effect of bargaining position are unavoidable, and we show how our assumptions are consistent with game-theoretical paradigms. In conclusion, we explore the implications of our refined approach to patent valuation. We examine, therefore, how patent valuation problems currently hinder efficient transfer of technology and how our enhanced version of the new Black-Scholes variant equation fits comfortably into the calculation of the reasonable royalty remedy applicable in cases of patent infringement when the patent owner cannot prove lost profits.
    (For a toy illustration of the basic Black-Scholes call formula that this kind of work builds on, see the sketch at the end of this roundup.)
    Oren Perez (Bar-Ilan University, Law) posts Electronic Democracy as a Multi-dimensional Praxis, forthcoming in the North Carolina Journal of Law & Technology. From the abstract:
      E-politics is proclaimed the "next" thing. This article explores the capacity of the Internet to contribute to the development of more inclusive decision-making structures, focusing on one specific feature of the Internet, its multi-dimensionality. This feature, it is argued, opens new possibilities for structuring political praxis. To appreciate these possibilities, this article reviews and criticizes current democratic practices, focusing in particular on their procedural uniformity. This uniformity, which permeates both the legal and philosophical discourse of democracy, is not compatible with the reality of social and individual pluralism that characterizes contemporary society. In a pluralistic society, this procedural uniformity could lead to the exclusion of certain world-views and personality types. To the extent that democracy is understood as an attempt to forge a legitimate system of governance for a pluralistic society, this result seems unacceptable. The main argument of this article is that the Internet, as a new communicative arena and technological frontier, can extend the universe of our democratic practices by enabling the development of multiple forms of deliberation and decision-making. This argument seeks to go beyond current uses of the Internet, which merely copy off-line democratic practices.
    Gregory Alexander (Cornell) posts The Limits of Property Reparations. From the abstract:
      Human history is replete with examples of unjustified expropriations of property by conquering states and other transitory regimes. Only in modern times, however, have nations attempted systematically to remedy historical injustices by providing reparations to the dispossessed owners or their successors. From the aboriginal peoples of the Antipodes to the Native Americans of Canada and the U.S. to the European victims of German and Soviet communism, groups of people who were stripped of their land and possessions by fraud or force are demanding, and in many cases getting, reparations for these injustices. The thesis of this paper is that the case for reparations for such expropriations of property is highly tenuous, both morally and in practical terms. Reparations claims in general face two serious challenges: human irrationality and the effects of time. While these challenges are not necessarily insuperable, they are formidable.
    Christopher Slobogin (University of Florida) posts What Atkins Could Mean for People with Mental Illness, forthcoming in the New Mexico Law Review. From the abstract:
      This article, written for a symposium on Atkins v. Virginia - the Supreme Court decision that prohibited execution of people with mental retardation - argues that people with severe mental illness must now also be protected from imposition of the death penalty. In labeling execution of people with mental retardation cruel and unusual, the Atkins majority stressed that mentally retarded people who kill are less blameworthy and less deterrable than the average murderer, an assertion that can also be made about people with severe mental illness. As it had in previous eighth amendment cases, however, the Court also relied heavily on an emerging legislative consensus against the execution of the former group. Such a consensus does not exist with respect to people with mental illness (in fact, only one state, Connecticut, bars their execution). But the same legislative inaction that undermines the eighth amendment argument bolsters an equal protection argument, because it shows an irrational prejudice against the latter group. A careful reading of the Court's cases suggests that "rational basis with bite" is the right standard for assessing the validity of laws that discriminate on the basis of disability, at least when those laws deprive people of life or liberty. In any event, there may not be any rational basis for distinguishing between people with retardation and people with mental illness in the death penalty context. A review of the psychiatric literature shows that severe mental illness, in the form of psychosis, reduces blameworthiness and deterrability at least as much as mental retardation. People with mental illness at the time of the offense, while often found sane and sentenced to death (possibly because mental illness is irrationally viewed as an aggravating circumstance by sentencing bodies), are no more responsible for their condition or able to appreciate society's mores than are people with retardation (or children under 17, another group the Court is likely to exempt from the death penalty). Nor are people with mental illness as likely to recidivate as these two groups. Concerns about malingering and misdiagnosis of mental illness, which are exaggerated where severe disorder is involved, should be dealt with by imposing more stringent standards of proof. Otherwise, we are allowing execution of people who do not deserve the death penalty simply because it is too "hard" to identify them.
    Ehud Kamar (Southern Cal) posts Regulatory Competition Theory of Indeterminacy in Corporate Law, forthcoming in the Columbia Law Review. From the abstract:
      This Article revisits the debate on the desirability of interstate competition in providing corporate law. It argues that the market for corporate law is imperfectly competitive, and therefore may not yield the optimal product to either shareholders or managers. Delaware dominates the market as a result of several competitive advantages that are difficult for other states to replicate. These advantages include network benefits emanating from Delaware's status as the leading incorporation jurisdiction, Delaware's proficient judiciary, and Delaware's unique commitment to corporate needs. Delaware can enhance these advantages by developing indeterminate and judge-oriented law, even if such law is otherwise undesirable. Indeterminacy makes Delaware law inseparable from its application by Delaware's courts and thus excludes non-Delaware corporations from network benefits, accentuates Delaware's judicial advantage, and makes Delaware's commitment to firms more credible. Whether state competition constitutes a race to the top, to the bottom, or somewhere in between, excessive indeterminacy may add an additional degree of inefficiency to the law.
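Since the Denton and Heald paper turns on adapting Black-Scholes to patents, here is a minimal sketch of the standard Black-Scholes call formula dressed in patent-flavored labels. To be clear, this is the plain-vanilla formula, not the authors' Denton Variation; the mapping of inputs (value of the protected cash flows as the underlying, commercialization cost as the strike, remaining patent term as the expiry) is a common real-options analogy, and the numbers are hypothetical.

    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        # Standard normal cumulative distribution, via the error function.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes_call(s, k, r, sigma, t):
        # European call: underlying value s, strike k, risk-free rate r,
        # volatility sigma, time to expiry t (in years).
        d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
        d2 = d1 - sigma * sqrt(t)
        return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

    # Hypothetical patent-as-option inputs:
    #   s: present value of the cash flows the patent could protect
    #   k: cost of developing and commercializing the invention
    #   t: remaining patent term in years
    value = black_scholes_call(s=10e6, k=12e6, r=0.04, sigma=0.5, t=15)
    print(f"Illustrative option value: ${value:,.0f}")

Even this toy version makes the abstract's central complaint legible: the output is a single point estimate, while the authors argue that a patent's present value is really a distribution, which is the gap the Denton Variation is said to address.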


 
Language and Evolution New Scientist has a nice series on the human mind. I particularly liked Steven Mithen's Thoroughly Mobile Minds. Here is a taste:
    Neanderthal technology remained largely unchanged for a quarter of a million years. Yet in roughly half that time the technology of H. sapiens has evolved from stone tools to the laptop on which I write this, not to mention the Internet to which it is connected. In fact, our technological evolution did not really get started until after the last ice age had reached its peak, a mere 20,000 years ago. This phenomenal rate of culture change is the clue that there is something fundamentally different about H. sapiens from all the other members of the human genus. This something must lie within the mind and many would characterise it as symbolic thought - the capacity to attribute an arbitrary meaning to a sound, movement or an object. The word "dog" neither looks, sounds nor smells like a dog, for example. Only H. sapiens has left us unambiguous evidence for those practices such as art, ritualised burial and body decoration that imply the presence of symbolic thought. This is Culture with a capital C, entirely different from the toolmaking traditions of chimpanzees. The evidence for symbolism becomes pervasive only about 50,000 years ago, but there are traces of engraved stone and uses of pigments that indicate symbolic thought reaches back to the origins of H. sapiens. It is a distinguishing feature of our species.
Courtesy of Brian Weatherson.


Tuesday, June 10, 2003
 
Law versus History Nate Oman describes his experience presenting a paper on natural law and legal positivism in the nineteenth century at a history conference. The historians, of course, thought that legal ideas were causally impotent. Here is the thing I find odd. Historians toss off sweeping generalizations about the nature of historical causation--even though their methods have little to teach us about the causes of legal events. When thinking about this issue, I always come back to the famous debate between Jon Elster and Gerald Cohen over Marxist theories of history. From where I sit, Elster won this debate decisively. Without microfoundations, Marxist theories of history are close to mere dogma. But historians believe they can explain legal events, like Supreme Court decisions, with absolutely no account of causal mechanism at all. This sort of sloppy thinking is really quite astounding. Read Nate's post, which is eloquent and thoughtful.
Update: For some thoughtful comments by Bruce Boyden, check out this post in the Legal Theory Annex.


 
Pryor Filibuster Check out this post at Southern Appeal that reports on the state of play with respect to a possible filibuster of Bill Pryor.


 
New Papers on the Net Here is the roundup:
    Troy Paredes (Washington University, St. Louis) posts Blinded by the Light: Information Overload and its Consequences for Securities Regulation, forthcoming in the Washington University Law Quarterly. From the abstract:
      A demanding system of mandatory disclosure, which has become more demanding in the wake of the Sarbanes-Oxley Act of 2002, makes up the core of the federal securities laws. Securities regulation is motivated, in large part, by the assumption that more information is better than less. After all, "sunlight is said to be the best of disinfectants; electric light the most efficient policeman." But sunlight can also be blinding. Two things are needed for a regulatory regime based on disclosure, such as the federal securities laws, to be effective. First, information has to be disclosed. Second, and often overlooked, is that the users of the information - for example, investors, securities analysts, brokers, and portfolio managers - need to use the disclosed information effectively. Securities regulation focuses primarily on disclosing information, and pays relatively little attention to how the information is used - namely, how do investors and securities market professionals search and process information and make decisions based on the information the securities laws make available? Studies making up the field of behavioral finance show that investing decisions can be influenced by various cognitive biases on the part of investors, analysts, and others. This Article focuses on a related concern: information overload. An extensive psychology literature shows that people can become overloaded with information and make worse decisions with more information. In particular, studies show that when faced with complicated tasks, such as those involving lots of information, people tend to adopt simplifying decision strategies that require less cognitive effort but that are less accurate than more complex decision strategies. The basic intuition of information overload is that people might make better decisions by bringing a more complex decision strategy to bear on less information than by bringing a simpler decision strategy to bear on more information. To the extent that investors, analysts, and other capital market participants are subject to information overload, the model of mandatory disclosure that says more is better than less may be counterproductive. This Article considers the phenomenon of information overload and its implications for securities regulation, including the possibility of scaling back the mandatory disclosure system.
    Stephen Lubben (Seton Hall) uploads Learning the Wrong Lessons: Baird and Rasmussen's Third Lesson of Enron and the Inherent Ambiguity of Control. From the abstract:
      In this paper, I respond to Baird & Rasmussen, Four (or Five) Easy Lessons From Enron, 55 Vand. L. Rev. 1787 (2002). The paper is specifically addressed to Baird & Rasmussen's contention that one of the key lessons of Enron is that chapter 11 is superfluous when a firm's control rights "are coherently allocated." Part I of the article reviews Baird & Rasmussen's view of control rights and the lessons about control rights they draw from Enron. Part II explains how this conception of control rights suffers from several identifiable shortcomings and develops the argument that control in modern firms is inherently (and perhaps intentionally) constructed in a way that leaves ultimate control unclear, even when the firm faces a financial crisis. Part III concludes with some brief observations about the implications of the ambiguous nature of control and the future of chapter 11.


 
Rebellious Judges Joanne Mariner has a very nice findlaw column on vertical stare decisis. Here is a taste:
    In Hutto v. Davis, a 1982 case, the Supreme Court endorsed a robust view of vertical stare decisis. Warning of the perils of lower court disobedience, the Court conveyed an almost apocalyptic vision of confusion and disarray. As it explained: "unless we wish anarchy to prevail within the federal judicial system, a precedent of this Court must be followed by the lower federal courts no matter how misguided the judges of those courts may think it to be." Most foreign judges would be amused at these remarks. In civil law jurisdictions - that is, in most European and Latin American countries - there is a very different conception of the judicial function. In theory, the comprehensive legal codes used in such jurisdictions avoid the need for judicial interpretation - or the exercise of judicial discretion - and thus it is unnecessary for one judge's interpretation to bind future judges. In many countries, therefore, judges routinely flout the contrary precedents of higher courts. Indeed, the absence of a vertical principle of stare decisis is vividly illustrated by the response of one Italian trial judge who attended a lecture on courts in the United States. Hearing a defense of stare decisis, he exclaimed, with considerable outrage: "My independence as a judge would be completely undermined if I had to follow the decisions of the court of appeals." As the Italian judge implies, our system of law would not break down without strict judicial obedience to the rulings of higher courts. Its nature would, however, be changed if the principle of vertical stare decisis were to be accorded less deference. The civil law understanding of judicial independence elevates the autonomy of individual judges at the expense of the judiciary's strength as an institution. The principle of stare decisis constrains individual lower court judges but in doing so, it shifts power to the Supreme Court and the judiciary as a body. It turns a mass of uncoordinated decisionmakers into a coherent whole: a branch of government capable of speaking with one voice. Certainly school desegregation would never have been accomplished, or probably even attempted, without a strong vertical stare decisis principle to keep recalcitrant lower court judges in line.
For more on vertical stare decisis, see this editorial by the amazing Howard Bashman. And for my take on this important topic, please check out my three-part series.


 
Confirmation Wars: The Fortas Filibuster C. Boyden Gray has a piece in the Wall Street Journal entitled A Filibuster Without Precedent. Gray has a very good analysis of the Fortas filibuster and what it means for the question whether the traditions of the Senate encompass filibusters of judicial nominees. Here is a taste:
    On June 26, 1968, President Johnson nominated Justice Fortas to Chief Justice Warren's seat. Senators from both parties opposed the Fortas nomination for a variety of reasons, some plausible (e.g., that Justice Fortas, who had been a trusted adviser to President Johnson before his nomination, had continued to participate in White House decision making during his tenure on the Court), some not so plausible (e.g., that President Johnson should not be allowed to choose Chief Justice Warren's successor because the president was a lame duck). Whatever the merits, the criticisms did not prevent a relatively rapid decision on the nomination, which was reported out of the Judiciary Committee (by divided vote) in the middle of September and was opened to floor debate on Sept. 24. A filibuster followed, but not for long. On Oct. 1, the Senate voted on a motion for cloture that would have ended debate on the nomination and allowed an immediate vote on whether to confirm Justice Fortas as chief justice. The Congressional Record for Oct. 1, 1968, shows that 45 senators voted for cloture, 43 voted against. However, if the senators who did not vote are taken into account, we find that 48 were on record as opposing cloture, 47 as favoring it. Indeed, at least one of the senators who voted for cloture, Republican John Sherman Cooper of Kentucky, said that he would vote against the Fortas nomination if it came to a vote. Another who voted for cloture proposed immediately after the vote that the president withdraw the nomination and submit a name that could be quickly confirmed. This evidence alone shows that of the 47 on record for cloture, at least one, if not more, was actually opposed to the Fortas nomination.


 
Bernstein on Lochner Check out this post by David Bernstein, challenging the assumption that Lochner era jurisprudence was partisan.


 
Viens at Oxford Today At Oxford's Jurisprudence Discussion Group, Adrian Viens presents Juridical Bivalence.


 
Law and Philosophy at the University of Texas Brian Leiter has announced next year's lineup at the spectacular Texas series. Here is the link. Next year's speakers include:
    John Gardner, the Professor of Jurisprudence at Oxford University.
    Liam Murphy, Professor of Law and Philosophy at New York University.
    Nicos Stavropoulos, University Lecturer in Legal Theory at Oxford University.
    Benjamin Zipursky, Professor of Law at Fordham University in New York.
    Mark Murphy, Associate Professor of Philosophy at Georgetown University.
    Jonathan Wolff, Professor of Philosophy and Head of Department at University College London.


Monday, June 09, 2003
 
Hiring This article is about the Economics Department at NYU, but it includes a description of NYU President (and former Law School Dean) John Sexton's famous and infamous hiring techniques. Worth a read.


 
Prospect Theory and Behavioral Economics Courtesy of the marvelous PoliticalTheory.info, Dirk Olin has a very nice (and short) piece in the NYT titled Prospect Theory. Here is a morsel:
    Daniel Kahneman, a professor at Princeton who was the first psychologist to win the Nobel in economics (which he was awarded last year for studies he conducted with Amos Tversky), has attributed market manias partly to investors' ''illusion of control.'' Kahneman recently explained the basic weirdness of the dot-com bomb to The Financial Times: ''A high percentage of investors knew it was a bubble and still invested because they thought they could get out in time.'' Why did so few heed the alarms? According to Kahneman's ''prospect theory,'' most of us find losses roughly twice as painful as we find gains pleasurable. This radical precept subverts much of ''utility theory,'' the longstanding economic doctrine that says we weigh gain and loss rationally. When combined with the reality that some market winners display the same recklessness as some victorious gamblers -- a phenomenon that Richard Thaler, an economist at the University of Chicago, calls ''the house-money effect'' -- the market is often revealed to be downright loony.
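The loss-gain asymmetry Kahneman describes is usually captured with a kinked value function. Here is a minimal sketch using the median parameter estimates from Tversky and Kahneman's 1992 paper (curvature of about 0.88 and a loss-aversion coefficient of about 2.25); the function is standard prospect theory, but the dollar amounts below are only illustrative.

    # Prospect-theory value function with Tversky and Kahneman's (1992)
    # median parameter estimates.
    ALPHA = 0.88    # diminishing sensitivity to both gains and losses
    LAMBDA = 2.25   # loss aversion: losses weighted about 2.25x as heavily

    def value(x):
        # Subjective value of a gain (x >= 0) or loss (x < 0),
        # measured relative to a reference point.
        if x >= 0:
            return x ** ALPHA
        return -LAMBDA * (-x) ** ALPHA

    print(value(100))   # roughly 57.5: the pleasure of gaining $100
    print(value(-100))  # roughly -129.4: the pain of losing $100

Because the curvature is the same on both sides, the ratio of the two magnitudes is exactly the loss-aversion coefficient, which is the formal content of the claim that losses are roughly twice as painful as gains are pleasurable.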


 
Welcome to the Blogosphere Please welcome Three Years of Hell to Become the Devil: A law student at Columbia University to the Blogosphere. This is (you guessed it) a law student blog--and it is very nicely executed with Movable Type.


 
Behavioral Economics and Tax Surf to A Taxing Blog and check out the plug for Terry Chorvat's paper Perception and Income: The Behavioral Economics of the Realization Doctrine, which recently went up on SSRN.


 
New Papers on the Net Here is the roundup:
    Francesco Pulitini (Siena) posts Notes on The 'Economics Analysis of Law'. From the abstract:
      During the last decades Economic analysis of Law has acquired enormous attention from both economists and lawyers on both sides of the Atlantic. This essay briefly recalls the origins of E.A.L., in Chicago and at Yale. The main difference between them is the role played by the concept of economic efficiency, which is a very important one. In the essay I wonder if that difference makes it hard or incorrect to consider E.A.L. as a homogeneous field.
    David Bernstein (George Mason) posts Lochner's Feminist Legacy, forthcoming in the Michigan Law Review. From the abstract:
      This essay is a review of Julie Novkov's Constituting Workers, Protecting Women: Gender, Law, and Labor in the Progressive and New Deal Years. The book, which discusses the controversy over "protective" laws for women, has some important strengths. Novkov deserves praise for considering a wide range of Lochner-era cases and for reading many of the related legal briefs, an often overlooked but extremely important source for constitutional history. Novkov also provides some compelling analysis. For example, she is one of the few scholars to recognize that the liberal Holden v. Hardy and not the strict Lochner v. New York was the leading case on the constitutionality of protective labor legislation for much of the so-called Lochner era. The book is also very good at its primary task - explaining how considerations of sex affected legal arguments regarding protective laws for workers during the period studied. On the other hand, several flaws make Constituting Workers, Protecting Women less valuable than it might have been. First, Novkov pays almost no attention to any form of economic analysis. For example, Novkov never seriously considers whether economic logic suggests that maximum hours laws or minimum wage laws that applied only to female workers actually aided them. Novkov also fails to discuss the empirical evidence regarding the effect of sex-specific protective labor laws. Moreover, Novkov shows no interest in the public choice aspects of protective labor legislation for women, noting only in passing that protective legislation was often promoted by labor unions that excluded women to prevent women from competing for jobs held or sought by union members. A second problem with Constituting Workers, Protecting Women is that its perspective on constitutional change overemphasizes the importance of legal argument at the expense of both important personalities and crucial political developments. For example, remarkably for a book by a political scientist about constitutional law that culminates in the New Deal era, Franklin Roosevelt's name does not appear in the index. A third problem with Constituting Workers, Protecting Women is that Novkov overstates the importance of the debate over protective laws for women in the general debate over the constitutionality of police power legislation.
      Despite the reservations noted above, Constituting Workers, Protecting Women is recommended for readers interested in constitutional, labor, and women's history. While it does not deliver everything the author promises, or that this reviewer would have liked to have seen, it is a cogent account of an important legal and historical controversy. The definitive book on protective labor legislation and women during the Lochner era, however, remains to be written.
    Elizabeth Chorvat (George Mason) posts You Can't Take It With You: Behavioral Finance and Corporate Expatriations, forthcoming in the UC Davis Law Review. From the abstract:
      In 2002, reports of corporate expatriations filled the headlines. These reports come as something of a surprise because the tax rules enacted in the early 1990's should have prevented almost all of these transactions. Various commentators have tried to explain this phenomenon. However, these explanations are not consistent with the empirical evidence. This article proposes a solution to this problem by arguing that corporate managers are exploiting fluctuations in stock prices to expatriate at reduced cost. The article proposes legislation to reduce expatriations consistent with this model.
    Robert Sitkoff (Northwestern University) offers Corporate Political Speech, Political Extortion, and the Competition for Corporate Charters, forthcoming in the University of Chicago Law Review. From the abstract:
      This article explores the policy bases for, and the political economy of, the law's long-standing discrimination against corporate political speech. This Article also explores the relevance of state law regulation of corporate political speech to the competition between the states for corporate charters. In the process, implications for the current political debate over soft money and the current academic debates over enacting an optional federal corporate takeover law regime and creating a securities law regulatory competition are noted. The underlying aim of this Article is to bring to bear on the relevant policy debates a shift in focus from the shareholder/manager agency relationship to the agency relationship between lawmakers and society. The Article draws on the contractarian view of the firm, the economic theory of regulation, and the study of public choice.


 
Marston on the Constitutionality of the Filibuster Brett Marston has a nice post on the constitutionality of filibustering judicial nominees.


 
Wolf on the Meanings of Lives Today at Oxford's Moral Philosophy Seminar, Astor Visiting Lecturer, Susan Wolf (North Carolina at Chapel Hill, Philosophy) presents The Meanings of Lives. Here is a taste of her very nice paper:
    What is it to live a meaningful life, then? What does meaningfulness in life amount to? It may be easier to make progress by focusing on what we want to avoid. In that spirit, let me offer some paradigms, not of meaningful, but of meaningless lives.
    For me, the idea of a meaningless life is most clearly and effectively embodied in the image of a person who spends day after day, or night after night, in front of a television set, drinking beer and watching situation comedies. Not that I have anything against television or beer. Still the image, understood as an image of a person whose life is lived in hazy passivity, a life lived at a not unpleasant level of consciousness, but unconnected to anyone or anything, going nowhere, achieving nothing - is, I submit, as strong an image of a meaningless life as there can be. Call this case The Blob.
    If any life, any human life, is meaningless, the Blob's life is. But this doesn't mean that any meaningless life must be, in all important respects, like the Blob's. There are other paradigms that highlight by their absences other elements of meaningfulness.
And here is another bit from near the end:
    the difference between a meaningful and a meaningless life is not a difference between a life that does a lot of good, and a life that does a little. (Nor is it a difference between a life that makes a big splash and one that, so to speak, sprays only a few drops.) It is rather a difference between a life that does good or is good or realizes value and a life that is essentially a waste. According to these intuitions, there is as sharp a contrast between the Blob and a life devoted to the care of a single needy individual as there is between the Blob and someone who manages to change the world for the better on a grand scale.
Download while it's hot.


Sunday, June 08, 2003
 
The Case for Strong Stare Decisis, or Why Should Neoformalists Care About Precedent? Part Three: Precedent and Principle
    Introduction I have piled up the promissory notes, and now the time has come to pay the piper. In this post, I complete my argument for strong stare decisis--the view that even the Supreme Court should consider itself bound by its prior decisions. In particular, I've promised to offer an argument that strong stare decisis can be justified as a matter of principle. And that task looks daunting, because it is absolutely clear that stare decisis can function to prevent the rapid reversal of error. But a promise is a promise. So here goes.
    Guide This is Part Three of a series of posts on stare decisis. The prior posts are:
      Part One: The Three Step Argument made the basic case for stare decisis in three contexts:
        --vertical stare decisis, i.e., the binding of lower courts by the decisions of higher courts.
        --horizontal stare decisis in intermediate courts of appeal, e.g. the rule that three-judge panels of the Courts of Appeals are bound by their predecessors and bind their successors.
        --horizontal stare decisis for courts of last resort, i.e. the idea that the Supreme Court should consider itself bound by its own prior decisions.
      Part Two: Stare Decisis and the Ratchet responded to "the ratchet"--the argument that if (1) the judiciary alternates between periods of realism and formalism, (2) formalist judges respect precedents from realist periods, and (3) realist judges do not respect decisions from formalist periods, then (4) the law will grow progressively more realist over time.
    The argument so far has had two themes. The first theme is that strong stare decisis best serves the rule of law values of predictability and certainty. The second theme is that strong stare decisis provides the best strategy for avoiding a downward spiral of politicization that threatens to eviscerate the rule of law. In this third post in the series, I address the question: "Can strong stare decisis be justified as a matter of principle?"
    Framing the Issue So I have a problem. This is a blog, and a really good answer to the question could easily take a long law-review article or even a book. So I need a framing device--an expository technique that will enable us to get at the heart of the question. We are investigating legal formalism--but the general concept of law as a formal system is too general and abstract. We need to compare particular conceptions of formalism. (Notice that I am appealing to the concept/conception distinction.) My strategy will be to compare two simple theories (that is, two conceptions of legal formalism) that are very similar. The first conception will incorporate strong stare decisis, both horizontal and vertical, for trial courts, intermediate appellate courts, and courts of last resort. Let's name this conception "strong stare decisis." The second conception will be like the first, but it will substitute a much weaker version of stare decisis for courts of last resort. Let's name the second conception "weak stare decisis," remembering that the weakening of the force of precedent is limited to courts, like the United States Supreme Court, that stand at the top of the hierarchy in a particular jurisdiction. Let's assume that both of these conceptions share the other features of a neoformalist theory of adjudication, e.g., they both share a commitment to deciding constitutional cases in accord with the plain meaning of the text and, when that is ambiguous, to the original meaning insofar as that can be ascertained from history. For the purposes of exposition, I will use "textualism" as a shorthand for the complex structure of neoformalism. That means that I will be ignoring originalism for most of the remainder of this post.
    Deontological Textualism Why would a formalist favor weak stare decisis over strong stare decisis? In the first two parts of this series, I have argued for the unremarkable conclusion that strong stare decisis better serves the rule of law values of predictability and certainty. I've also shown that strong stare decisis provides a better solution to the problem of politicization. From the neoformalist perspective, it would seem like the presumption should be in favor of strong, and against weak, stare decisis. There is, however, a very powerful argument against strong stare decisis that may overcome this presumption. The argument is simple. Judges have an obligation to decide cases correctly, ruling in favor of the party that is entitled to win on the basis of the law. Thus, in constitutional cases, judges have an obligation to rule in favor of the interpretation of the constitution that fits the text. Correspondingly, in a case in which one party advocates a result that conforms to the text and the other side argues for a ruling that is inconsistent with the text, the former side is entitled to prevail. Without worrying about the deep foundations of this principle in moral philosophy or political theory, let us assume, arguendo, that this is correct. Let's call this view--that judges have a duty to decide in accord with the text--"deontological textualism."
    The Argument Against Stare Decisis from Deontological Textualism If deontological textualism is correct, then there is a really big fat problem with stare decisis. Here is one way of putting the problem:
      Suppose you are a judge in a constitutional case. You have the text and a precedent in front of you. There are only two possibilities. Either the precedent is consistent with the best reading of the text or it isn't. If the precedent did adhere to the text, then you don't need the precedent to get the right result. If the precedent did not follow the text, then decision according to the precedent is wrong.
    Voila! Either the precedent is irrelevant (except as a source of arguments to be evaluated on their merits) or it is wrong. In the only cases in which stare decisis could make a difference, it would lead judges to violate their duty to decide according to law. Uh oh! It looks like stare decisis is in big trouble. Why would anyone embrace a doctrine that results in error in the only cases in which it makes a difference? What a knucklehead I've been! Now that I've formulated the objection to strong stare decisis from deontological textualism, my whole view of formalism is shifting. Not only is horizontal stare decisis a bad idea for courts of last resort, stare decisis is always a bad idea! This argument actually has equal force against vertical stare decisis and horizontal stare decisis in the appellate courts. Wow! I think I'm onto something. It will be embarrassing to switch my position so radically, but this new conception of formalism is even more radical and interesting.
    Point of View But then it hits me. The argument against strong stare decisis from deontological textualism only works if we evaluate stare decisis from the first-person judicial perspective--from the point of view of the judge deciding whether to follow precedent. If we ask the question, "Should I consider myself bound by precedent, assuming that I will otherwise make the legally correct decision?" and we assume (on the basis of deontological textualism) that I am obligated to make the correct decision, the assumptions dictate the answer. But now shift from the first-person judicial perspective to the third-person systemic perspective. Now we are looking at the doctrine of stare decisis from outside the courtroom and over the long run of cases. Which system is more likely to produce fidelity to text in the long run? A system of precedent, or a system in which judges are free to decide for themselves in each and every case what the text means? Once we switch perspectives, the epistemological bias of the first-person judicial perspective is unveiled. From the first-person judicial perspective, the answer to the question, "Will I decide correctly if I am not bound by precedent?" is always "yes." But from the third-person perspective, it is quite clear that individual judges frequently err. The argument of the immediately preceding paragraph was, quite simply, bogus. Totally bogus, man!
    The Internal Inconsistency of the Case for Weak Stare Decisis from Deontological Textualism Remember that we are comparing strong and weak versions of stare decisis for courts of last resort. But I have been assuming that my opponents are not against vertical stare decisis or horizontal stare decisis as a rule for intermediate appellate courts. (The explanation is in Part I of this series.) But the argument from deontological textualism applies with equal force to these applications of the doctrine of stare decisis. And this is very important. Because very few opponents of stare decisis at the level of the Supreme Court are willing to bite the bullet and say that we should do away with the rule that requires trial courts to adhere to the precedents set by appellate courts. That seems like a recipe for chaos. And it would be in the context of a common-law system. But there is an alternative available.
    Comparing Civil Law and Common Law from a Neoformalist Perspective Doing away with precedent in a common law system is a recipe for disaster. But there is an alternative. We could replace our common-law system with a civil-law system. In common-law systems, the doctrine of stare decisis is essential to the rule of law. Constitutions and statutes are drafted with the common law system in mind. Some bodies of law (contracts outside the UCC, torts, much of property, agency, etc.) rely almost entirely on precedent--even in states which adopted a "codification" in the late 19th century. But civil law jurisdictions do not have the doctrine of stare decisis, and nonetheless they preserve the rule of law. How? Too big a question for an adequate answer here! But here are some basic points:
      --Civil law systems draft their constitutions and codes with the lack of stare decisis as a basic assumption.
      --Civil law judges are inculcated with the culture of civil-law judging; they learn civil-law techniques of reasoning and adopt civil-law norms.
      --Civil law codes are supplemented by quasi-authoritative extrajudicial interpretive materials, the functional equivalent of our multi-volume treatises (Wigmore, Williston, Moore's).
    Obviously, I can't try to summarize the debate as to whether civil law or common law is better as a matter of institutional design! But for the purposes of this post, I can make three points: (1) transition to a civil law system is outside the feasible choice set as anything but a very long-run option for the United States; (2) it seems unlikely that a priori arguments can settle the question whether common law or civil law better serves the values of the rule of law; (3) even in a civil law system, many of the issues that arise in the debate over strong stare decisis will be recreated in debates over the role of quasi-authoritative extrajudicial glossing. It seems fair to set aside the civil law option. Our debate is over the proper conception of formalism for a common-law system. The debate between civil law and common law is related but conceptually distinct.
    Strong or Weak But the critic of strong stare decisis has yet another line of argument. OK. I concede that lower courts should follow precedent. I am even willing to concede that the Supreme Court should consider precedent. What I am against is the idea that precedent should be considered first and should be considered binding. Fair enough. The time has come to take a hard look at weak stare decisis. Let's rock and roll:
      A Copernican Shift But before we go further, it is crucially important that we take stock of the dialectical movement of the argument so far. Once we agree that the argument from deontological textualism fails, that lower courts should be bound by precedent, and that even the Supreme Court should give precedent some substantial weight, the nature of the debate has been fundamentally changed. Watch me now. A big important move is coming. The nature of the debate has been transformed. The advocate of weak stare decisis is no longer arguing that judges must decide each and every case in accord with the best interpretation of the text. We have moved from an ex post look at the correctness of particular prior decisions to an ex ante look at questions of systematic institutional design. This is a Copernican shift in perspective. We have awoken from our dogmatic slumbers.
      Versions of Weak Stare Decisis And now we need to get specific about weak stare decisis. How would nonbinding stare decisis work? Let's run through some possibilities:
        --Precedent Last, Not First. Let's get this one out of the way. Sometimes it is suggested that the Supreme Court should first look at the text and original meaning. If the case can be resolved on the basis of these factors, then the Court should go no further. But if there is a tie or if the text and history are indeterminate, then (but only then) should the Court consider precedent. This same idea can be formulated in a variety of superficially different but functionally equivalent ways. For example, precedent could create a bursting-bubble presumption, with text or original meaning as bubble bursters. In the context of the current argument, these formulations are nonstarters, because in a wide range of cases (perhaps almost all cases), this view of precedent is so weak that it is functionally equivalent to the position that the Supreme Court should ignore precedent altogether. Let's take this notion off the table.
        --Precedent as a Factor to Be Weighed. So here is another idea. We could take precedent as simply a factor to be weighed with other factors when a court makes a decision. I am not certain that it is really open to neoformalists to adopt a balancing test model for integrating the role of precedent, text, and history. How is this supposed to work? Balancing assumes a single scale. But how do you weigh precedent against text? Doesn't this involve a category mistake? Balancing tests work well if you have a neorealist, instrumentalist, interest-accommodation theory of law, but that can't be the way that balancing works for neoformalists.
        --Precedent as Binding in the Absence of Clear Error. This is the most promising possibility. The Supreme Court might adopt the view that it will follow its own prior precedents in the absence of clear error. Here is one way the story could be told. Even judges who take text and history seriously can disagree about what the Constitution means. Of the alternative interpretations that could be said to fit the text, some do a better job of making sense of meaning and some do a worse job. The Supreme Court might follow those precedents that can be said to be reasonable interpretations while ignoring precedents that fail to make any attempt to fit the text or that do try but miss the mark so badly that we can say that they are clear mistakes. This proposal has much to recommend it, but it has one very troublesome feature. Whether a given interpretation is "reasonable" or "clearly erroneous" cannot be determined by objective criteria. These are judgment calls, and it is inevitable that there will be disagreement. Moreover, such judgment calls will inevitably be influenced by the political ideology of the judge. A clear error rule can reduce the target zone for politicization of the judiciary, but given the nature of our Constitution's broad and ambiguous provisions, the clear error rule will inevitably leave much open. And it goes without saying that this zone will not be reduced by the accumulation of precedent--because the point of a clear error rule is to prevent precedent from settling this kind of question.
      The Law Works Itself Pure Enough critique! What is your alternative? In your last post you promised to explain how stare decisis could actually do a better job of ensuring fidelity to the text and original meaning of the Constitution. How exactly does that work? And no more promissory notes. I want the answer now! Fair enough. It is time to ante up. The arguments that follow aim to show that over the long run and viewed from the appropriate perspective, strong stare decisis offers the most reliable path to the rule of law in general and fidelity to the constitutional text in particular. The argument proceeds through a number of steps:
        First Order Neutrality Let's not neglect an obvious starting point. Over the long haul, strong stare decisis seems to be neutral with respect to fidelity to text. Strong stare decisis can enshrine good decisions and bad decisions. It can lock in a decision that departs from the text. And it can lock in a decision that gets the text exactly right. Weak stare decisis grants freedom to correct error and freedom to make new errors.
        Ultrarealist Precedent And if all precedents were created equal, that would be the end of the story. But not all precedents are created equal. A fully developed theory of stare decisis needs a richly detailed account of dicta and ratio decidendi. But the neoformalist theory of precedent should not be confused with the ultrarealist theory of precedent that holds sway in contemporary judicial practice. Let me use Miranda as an example. Miranda is an extreme example of the ultrarealist theory of precedent. Realists viewed stare decisis as an otiose and misleading way of describing the predictive theory of precedent. Precedents are important because they enable us to predict what a particular set of judges will do on future occasions. Given that judges have the freedom to either sign on to an opinion, concur separately, or dissent, we can cautiously assume that if an opinion labels a particular statement as a holding, then that statement provides a reliable guide to the likely future decisions of the court. Notice that on the ultrarealist picture, decisions of the Supreme Court (which always sits en banc) and decisions of the Courts of Appeals (which sit in three-judge panels) have radically different precedential effect. A Supreme Court "holding" may be a pretty good predictor of the likely voting pattern next year--assuming that at least five judges from the majority remain on the Court. But a Court of Appeals "holding" is a lousy guide to how another three-judge panel (which in all likelihood would have zero or one judge in common with the precedent case) would vote. Thus, Supreme Court "holdings" are read by realists as if they were statutes, whereas Court of Appeals "holdings" are not so read. This whole picture of precedent is rejected by neoformalism. Thus, when I argue for strong stare decisis, I am not arguing that the Miranda dicta became law just because the Supreme Court pronounced it to be so. Of course, in the intervening years Miranda has been applied in a variety of contexts and also been subjected to a number of carve outs and exceptions. The whole corpus of Miranda decisions has come to embody something like the original Miranda rule. Strong stare decisis requires adherence to that--the corpus of decisional law--but nothing in neoformalism requires adherence to the legislative pronouncement made by the original Miranda Court.
        Neoformalist Precedent In contrast to the ultrarealist theory of precedent, the neoformalist view actually affords the pronouncements in individual cases ("legislative holdings") less rather than more authority. The binding authority of a single case is always quite limited in scope--only the ratio decidendi and not the obiter dicta is authoritative. Common law rules are established by the accumulation of precedent, and the same goes for constitutional interpretations in a common law system. And this brings us back around to the ratchet (the notion that stare decisis can lock in realist precedents). We are now in a position to appreciate a significant qualification on this claim. Neoformalist stare decisis would not "lock in" the broad legislative pronouncements characteristic of the modern Supreme Court.
        Gravitational Force Even if we reject the ultrarealist theory of precedent, it is nonetheless the case that some precedents are more weighty than others. How does this come to be? Ronald Dworkin suggested the metaphor of gravitational force to capture this aspect of the doctrine of stare decisis. Let's grab the metaphor. Precedents acquire gravitational force in diverse manners. One common pattern involves a case that articulates a principle that stands the test of time and becomes incorporated in a body of interlocking decisions--the first case to articulate the principle comes to stand for the entire body of interrelated decisions. This point leads to another. The force of a precedent depends on the soundness of its reasoning. What does this mean? On the one hand, precedents that respect the prior cases, text, and original meaning have greater gravitational force. On the other hand, precedents that do not attempt consistency with precedent, text or history are less weighty.
        The Law Works Itself Pure And this brings us to the aphorism, “the law works itself pure.” Over time, neoformalist judging does not lock in realist precedents. As neoformalist precedents accumulate, the force of realist decisions is gradually eroded—their gravitational force growing ever less powerful with time. Strong stare decisis does not require the view that errors can never be corrected. Quite the contrary. As time passes, realist decisions control a shrinking domain, then are confined to their facts, and finally are overruled. How can that be? If precedents are binding, how can they ever be overruled? You already know the answer. Formalist judges overrule precedents when, but only when, they have become so inconsistent with the surrounding legal landscape that respect for precedent requires that they be overruled. This move is so familiar to common lawyers that we don’t think twice when we see it happen.
        Life in the Fast Lane Recall that our project is to develop the best conception of neoformalism. Using constitutional interpretation for illustrative purposes, we are comparing two theories. Both incorporate textualism and originalism. Both incorporate strong respect for vertical precedent. Both incorporate strong respect for precedent by intermediate courts of appeal. Strong stare decisis extends the principle that precedent is binding to courts of last resort. Weak stare decisis eschews this move, incorporating instead the view that prior Supreme Court decisions can be ignored when they are inconsistent with text or history. And if the historical circumstances are right, weak stare decisis has the advantage of speed. If neoformalists control the bench, they can move more quickly toward the constitutional interpretations that would be correct if the courts were writing on a blank slate. But the advantage of speed comes at a heavy price. Neoformalism infused with strong stare decisis moves at a more deliberate pace, but no step is taken until the path is sure. This tradeoff is inevitable. The law cannot be both flexible and stable at the same time. The fast lane is not for those who are devoted to the rule of law.
      Summation In the long run, strong stare decisis offers the best hope for restoring the rule of law. This is true for three fundamental reasons. First, strong stare decisis directly advances the rule of law values of stability and certainty. If courts of last resort are free to overrule precedent willy-nilly, the rule of law must suffer. Second, strong stare decisis offers the best hope for ending the downward spiral of politicization that threatens to do lasting damage to the rule of law by creating a basis for mutual trust. Third, strong stare decisis incorporates a view of the force of precedent that both weakens the force of realist precedent in the short run and ensures its demise in the long run.
    Stepping Back And now we need to step back. And I want to be very candid. At this point in the argument, I have a very strong suspicion about how you, gentle reader, are responding. And before we close, we need to take a hard look at those reactions.
      Reactions If you are on the right, I think you may be thinking along the following lines. Interesting, but ultimately unpersuasive. Because I know that the worst decisions of the Warren and Burger Courts should be overruled as soon as right-thinking Justices gain a majority on the Supreme Court. Nothing that Solum has said has given me reason to doubt that. And if you are on the left, you are likely to be thinking something else entirely. Interesting, but entirely wrongheaded. Because I know that the best decisions of the Warren and Burger Courts were correct. Any theory that says that these decisions were wrong when they were rendered just can't be right.
      Temporal Asymmetry I am not making this up. I've gotten versions of these reactions from several correspondents, and similar points have been made in the blogosphere. But when we juxtapose the reactions, the temporal asymmetry is striking. The right is looking forward. The left is looking backward. And both sides are looking at particular decisions which form bedrock for them. I've avoided naming names, because the list will vary from reader to reader. The usual suspects include Miranda and Roe v. Wade and a host of others. But this is weird. Why doesn't the right observe that strong stare decisis would have prevented many of the realist abuses of the Warren and Burger Courts? Why doesn't the left remark on the fact that strong stare decisis would prevent a rapid dismantling of the Warren Court legacy? And why are both left and right focused on this particular era in our history, ignoring the long run?
      Heuristics and Legal Theory All of this sounds odd, but we should not find it surprising. Because both the left and right are reasoning as we should expect, because these modes of thought are quite natural for humans. Evaluating theories of judging is a very difficult task. Much of the work is quite abstract. Examples are relevant, but their assessment is extremely complex, because practices of judging interact systemically with underlying political forces and hence with legislative and executive action. As a result, the implications of a theory of judging for particular issues are almost impossible to assess if one looks at the issues from a long-run perspective. Here it comes. And so, it is quite natural for us humans to use simplifying heuristics to avoid these dauntingly complicated tasks. How do we simplify? First, we focus on a few concrete examples upon which we have firm opinions. Second, we adopt a very particular temporal perspective. What would theory X have meant for issue Y if it had been put into place at time Z? What would strong stare decisis have meant for the right to choose if it had been adopted by the Supreme Court right before Roe v. Wade was decided? Aha. Now I have some evidence I can work with. And then we simplify our task in yet another way. We use what cognitive science calls the "Take The Best" (TTB) heuristic for reasoning. When an issue is complicated and reasoning is tough, one way to simplify the problem is to look at the available evidence, rank its quality, and then take the best evidence and make your decision on the basis of that evidence alone. When we reason about theories of judging, the combination of these two heuristics is deadly. We evaluate theories of judging by the implications they would have for a tiny number of decisions if they had been implemented right before or right after those decisions were made. But once you step back, it is obvious that this mode of reasoning is entirely inappropriate for the task at hand. But because an adequate mode of reasoning is time consuming and just plain hard, we can't help ourselves. The temptation to react quickly on the basis of wholly inadequate evidence is almost irresistible.
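      For readers who want the mechanics, the TTB heuristic has a precise formulation in the cognitive science literature (Gigerenzer and Goldstein). Here is a minimal sketch in Python; the cue names, the validities, and the theory-of-judging example are my invention, purely for illustration.

```python
# A minimal sketch of the "Take The Best" (TTB) heuristic: rank the
# available cues by validity, walk down the ranking, and decide on the
# first cue that discriminates between the two options, ignoring all
# weaker evidence. Cue names and validities below are hypothetical.

def take_the_best(option_a, option_b, cues):
    """Return 'a', 'b', or 'tie'; each option maps cue name -> 0 or 1."""
    for name, _validity in sorted(cues, key=lambda c: -c[1]):
        a, b = option_a[name], option_b[name]
        if a != b:                # the first discriminating cue decides
            return 'a' if a > b else 'b'
    return 'tie'                  # no cue discriminates

# Hypothetical comparison of two theories of judging: the single
# "bedrock case" cue settles the matter; weaker long-run evidence is
# never consulted, which is exactly the pattern described above.
cues = [("bedrock_case", 0.9), ("long_run_stability", 0.7)]
theory_x = {"bedrock_case": 1, "long_run_stability": 0}
theory_y = {"bedrock_case": 0, "long_run_stability": 1}
print(take_the_best(theory_x, theory_y, cues))  # -> 'a'
```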
    Conclusion So my conclusion is going to sound quite odd. In the course of this three part series, I think that I have provided a sound argument for strong stare decisis, but I don't think that my argument is convincing. Conviction requires more than sound argument. Indeed, at this point, I rather suspect that you are quite cross with me. Because my arguments are unsettling. On the one hand, they produce doubts about your most firmly held convictions about judging. And on the other hand, my arguments suggest that you cannot quell these doubts by the methods that have always worked before. If I am right, then you have a lot of work to do before you can know what you really think about strong stare decisis. And for that, I'm truly sorry.


Saturday, June 07, 2003
 
Barnett on Judicial Activism Randy Barnett has a very thoughtful post entitled WHY THE REPUBLICANS ARE LOSING THE WAR OVER JUDGES over at the Conspiracy. Here is a taste:
    Where I most strongly disagree with judicial conservatives is over their stance on unenumerated rights. If it is improper “judicial activism” to ignore the text, structure, and original meaning of the Constitution, then when assessing the proper scope of federal power it is improper to ignore the Ninth Amendment, which says
      The enumeration in the Constitution of certain rights shall not be construed to deny or disparage others retained by the people.
    And when addressing the proper scope of state power, it is improper to ignore the Privileges or Immunities Clause of the Fourteenth Amendment, which reads:
      No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States. . . .
    Judicial conservatives deny and disparage both clauses because, in their view, this language provides insufficient guidance to judges to count as "law."
Read it while it's hot.


 
New Papers on the Net Here is the roundup:
    Saul Levmore (Chicago) and Kyle Logue (Michigan) post Insuring Against Terrorism - and Crime. From the abstract:
      The attacks of September 11th produced staggering losses of life and property. They also brought forth substantial private insurance payouts, as well as federal relief for the City of New York and for the families of individuals who perished on that day. The losses suffered in and after the attacks, and the structure of the relief effort, have raised questions about the availability of insurance against terrorism, the role of government in providing for, subsidizing, or ensuring the presence of such insurance, and the interaction between relief and the incentives for future precaution taking. In response to such losses, and in anticipation of others, one might imagine a range of government responses from nonintervention, to subsidized private insurance, to after-the-fact government payments of a fixed or uncertain kind, and so forth. This Article argues that the particular mix of responses the government has chosen with respect to 9/11, including the September 11th Victims' Compensation Fund and the Terrorism Risk Insurance Act of 2002, will significantly affect private expectations about the government's response to future terrorist attacks. One aim of this Article is to explore the relationships between promised or expected government actions (or inactions) and private decisions regarding terrorism risk. These issues lead to some novel ideas about the role of government in insuring against terrorism - and then against crime more generally. Part II provides some background on the response of the private insurance market and the federal government to the losses resulting from September 11th. Part III looks at the positive question of how government and private actors should be expected to respond to the losses of 9/11 and to the prospect of future such losses. It explores the interactions among government relief and charitable responses to 9/11 as well as the existence or absence of private insurance, and draws contrasts between terrorism disasters and natural disasters, as well as between 9/11 and prior terror attacks. Part III also analyzes the circumstances in which episodic relief of the 9/11 variety will lead to (or be replaced by) more permanent, routinized relief, as is available in some other countries. Part IV takes up the normative question of the optimal mix of government and private relief (including insurance) for terrorism-related losses. It provides a skeptical view of government intervention in property insurance markets, quite generally, and of the particular federal terrorism reinsurance regime that Congress recently adopted. Part V then broadens the inquiry by asking whether the case for government-sponsored insurance against crime, which is to say a much broader set of crimes than terrorism alone, is at least as sound as that for terrorism-related risks. Part VI concludes.
    Ernst Fehr (Zurich, Empirical Economic Research) and Joseph Henrich (Emory, Anthropology) post Is Strong Reciprocity a Maladaptation? On the Evolutionary Foundations of Human Altruism. From the abstract:
      In recent years a large number of experimental studies have documented the existence of strong reciprocity among humans. Strong reciprocity means that people willingly repay gifts and punish the violation of cooperation and fairness norms even in anonymous one-shot encounters with genetically unrelated strangers. We provide ethnographic and experimental evidence suggesting that ultimate theories of kin selection, reciprocal altruism, costly signaling and indirect reciprocity do not provide satisfactory evolutionary explanations of strong reciprocity. The problem of these theories is that they can rationalize strong reciprocity only if it is viewed as maladaptive behaviour whereas the evidence suggests that it is an adaptive trait. Thus, we conclude that alternative evolutionary approaches are needed to provide ultimate accounts of strong reciprocity.
    Hope Babcock (Georgetown) offers Should Lucas v. South Carolina Coastal Council Protect Where the Wild Things Are? Of Beavers, Bob-o-Links, and other Things that Go Bump in the Night, forthcoming in the Iowa Law Review. From the abstract:
      This Article suggests that Lucas v. South Carolina Coastal Council was primarily an attempt by the Court to simplify the judicial task of resolving what Robert Gordon refers to as the "hard cases" involving land use disputes, and that, with respect to laws protecting wildlife, the Court did not succeed in its quest. The article briefly discusses the effect of wildlife protection regulations on private land use, and the equity issues raised by their idiosyncratic application. The article shows how English understandings about the rights and duties of landowners influenced colonial expectations about similar matters at the time of the founding of this country. It also shows how early colonial law harbored a deep-seated hostility toward wilderness and explains how some English property doctrines were changed in this country to facilitate cultivation of wild (or "waste") lands - the very lands that are prized today as wildlife habitat. The Article traces the common law roots of modern wildlife laws and demonstrates how the common law doctrines of state wildlife trust and public trust have protected wildlife in this country because of their importance as communal resources, and discusses the theoretical aspects of these doctrines.
    Thomas Nachbar (Virginia) uploads Judicial Review and the Quest to Keep Copyright Pure, forthcoming in the Journal of Telecommunications and High Technology Law. From the abstract:
      This paper is a discussion of the Supreme Court's decision in Eldred v. Ashcroft. In the paper, I argue that the ambiguity at issue in Eldred has little to do with the meaning of the words of the Copyright Clause but rather raises questions of how aggressive the judiciary should be in policing Congress's exercise of the copyright power. After exploring the justifications offered for heightened judicial scrutiny in a variety of constitutional contexts, I conclude that exercise of the copyright power implicates none of them. Courts should correspondingly review intellectual property laws for consistency with the Copyright Clause using the most deferential standard of review conceivable, a standard that I distinguish in both form and context from the sort of "heightened" rational basis review we've grown accustomed to seeing in cases like Lopez and Morrison. Among the most serious flaws in arguments for heightened judicial scrutiny of copyright laws is that they rest on the assertion that courts should impose upon Congress a particular definition of "progress" as that term is used in the Copyright Clause. Such attempts are misguided for two reasons: First, they seek to replace political copyright policymaking with judicial policymaking. But such attempts are inconsistent with the Constitution's preference for representative policymaking, and the Supreme Court's decision to constitutionalize the exclusion of fact from copyrightable subject matter in Feist Publications, Inc. v. Rural Telephone Co. provides a more than adequate reminder that (in addition to its lack of democratic pedigree) the court's copyright policymaking is also substantively lacking. Second, they call upon eighteenth-century copyright policy as understood by the Framers to provide this frozen definition of "progress". But the Framers never considered most of the copyright-related questions we face today, and even if they had, it's not clear that we would want to live with the policy choices they would have made. Political, not judicial, action is the solution for our copyright woes.
    David Partlett (Washington and Lee) uploads Misuse of Genetic Information: The Common Law and Professionals' Liability, forthcoming from the Washburn Law Journal. From the abstract:
      As scientific advances take place in mapping the human genome, probing minds have voiced concern about discrimination on the basis of genetic makeup in employment and insurance. The issue has been addressed in numbers of western countries, usually by strong proscriptions against use of genetic information based upon principles of privacy. The article argues that an evolutionary approach in prescribing legal norms to actual abuses is more satisfactory. Placing responsibility on professionals through their duty of confidence may be particularly efficacious in preventing abuses while allowing the optimal disclosure of socially useful information. It is recognized that social norms will also govern the use of genetic information. More generally, the law of torts will implicate the disclosure and use of genetic information.
    Shannon Gilreath (Wake Forest) uploads Cruel and Unusual Punishment and the Eighth Amendment as a Mandate for Human Dignity: Another Look at Original Intent. From the abstract:
      The idea behind this article is not proof that the death penalty is or is not "cruel and unusual punishment" in twenty-first century America. Instead, this piece is an attempt to demonstrate that one can seriously consider the Fifth and Eighth Amendments to the United States Constitution and conclude that the death penalty is "cruel and unusual" and, therefore, unconstitutional, without doing violence to the "original intent" of the Framers.
    Michael Klarman (Virginia) posts Why Massive Resistance?. From the abstract:
      This paper seeks to explain the phenomenon of southern massive resistance to Brown v. Board of Education. Given that there were white racial moderates in the South - people who favored compliance with court orders, opposed school closures, and would have tolerated gradual desegregation - why did Brown so radicalize southern politics, leading temporarily to a fairly unified effort by southern states to defy the Court? One explanation focuses on southern politicians. Either because they miscalculated their constituents' preferences or because they demagogically capitalized on their constituents' fears, politicians became extremists and created an environment that chilled the expression of moderate sentiment. On this view, massive resistance was not inevitable, at least outside of the Deep South. This paper takes a different tack, arguing that the political dynamics of the segregation issue, combined with certain features of southern politics, ineluctably propelled public debate toward extremism, independently of the machinations of politicians.
    Tim Wu (Virginia) uploads When Code Isn't Law, forthcoming in the Virginia Law Review. From the abstract:
      This Article proposes a new and concrete way to understand the relationship between code and compliance with law. I propose to study the design of code as an aspect of interest group behavior as simply one of several mechanisms that groups use to minimize legal costs. Code design, in other words, can be usefully studied as an alternative to lobbying campaigns, tax avoidance, or any other approach that a group might use to seek legal advantage. The important case of peer-to-peer ("P2P") filesharing, explored in depth in this Article, illustrates the possibility of using code design as an alternative mechanism of interest group behavior.
    Rafael La Porta (Harvard, Economics), Florencio Lopez de Silanes (Yale, Management), Cristian Pop-Eleches (Harvard, Economics) and Andrei Shleifer (Harvard, Economics) post Judicial Checks and Balances. From the abstract:
      In the Anglo-American constitutional tradition, judicial checks and balances are often seen as crucial guarantees of freedom. Hayek (1960) distinguishes two ways in which the judiciary provides such checks and balances: judicial independence and constitutional review. We create a new data base of constitutional rules in 71 countries that reflect these provisions. We find strong support for the proposition that both judicial independence and constitutional review are associated with greater freedom. Consistent with theory, judicial independence accounts for some of the positive effect of common law legal origin on measures of economic freedom. The results point to significant benefits of the Anglo-American system of government for freedom.
    Simeon Djankov (World Bank), Rafael La Porta (Harvard, Economics), Florencio Lopez de Silanes (Yale, Management) and Andrei Shleifer (Harvard, Economics) post The Regulation of Labor. From the abstract:
      We investigate the regulation of labor markets through employment laws, collective bargaining laws, and social security laws in 85 countries. We find that richer countries regulate labor less than poorer countries do, although they have more generous social security systems. The political power of the left is associated with more stringent labor regulations and more generous social security systems. Socialist and French legal origin countries have sharply higher levels of labor regulation than do common law countries, and the inclusion of legal origin wipes out the effect of the political power of the left. Heavier regulation of labor is associated with a larger unofficial economy, lower labor force participation, and higher unemployment, especially of the young. These results are difficult to reconcile with efficiency and political power theories of institutional choice, but are broadly consistent with legal theories, according to which countries have pervasive regulatory styles inherited from the transplantation of legal systems.


Friday, June 06, 2003
 
Symposium on Fiss The latest electronic edition of Issues in Legal Scholarship has a symposium entitled The Origins and Fate of Antisubordination Theory. Here is a roundup of the articles:


 
Donnelly at Oxford Today At Oxford's Jurisprudence Discussion Group, Bebhinn Donnelly presents Teleology and moral duty.


Thursday, June 05, 2003
 
The Case for Strong Stare Decisis, or Why Should Neoformalists Care About Precedent? Part Two: Stare Decisis and the Ratchet.
    Guide This is Part Two of a series of posts on stare decisis. For the prior post, go to: Part One: The Three Step Argument. Part I made the basic case for stare decisis, focusing on the familiar argument that in a common-law system, stare decisis is essential for predictability and certainty. The first step of the three step argument made the case for strong vertical stare decisis, arguing that the precedents set by higher courts should bind lower courts. The second step made the case for strong horizontal stare decisis in intermediate appellate courts (e.g. the United States Courts of Appeals). The third and most important step consisted of arguments for the proposition that courts of last resort (e.g. the United States Supreme Court) should consider themselves bound by their prior decisions. In this post, I consider an important objection to the third step, the argument known as "the ratchet."
    The Ratchet The argument called "the ratchet" is actually a cluster of related arguments. All of the arguments share a common structure. Let me begin with a fairly standard statement of the argument:
      Suppose that the conservative critiques of the Warren Court are correct--that the decisions of the Warren Court (or at least many of them) cannot be defended on formalist grounds. What then would be the effect of a return to formalism? Why, it would lock in the realist decisions of the Warren Court era. But it would do more than that. Even if formalist judging were to prevail for years or decades, the pendulum might swing back to realism at some point in the future. But the realists of the future will not be constrained by the formalist decisions of their predecessors. And hence during future periods of realism, the law would be distorted by yet another increment. You can see where the argument goes. If formalists respect precedent and there are alternating periods of realism and formalism, then we have a ratchet--for emphasis, we might use the redundant phrase "one-way ratchet."
    That's the basic idea of the ratchet, but this description is vague. We need a more rigorous understanding of the argument.
    A Game Theoretic Model of the Ratchet
      The Simple Model Let's try to get to the underlying structure of this argument. The ratchet is a thought experiment based on a simple model of the judiciary. The model assumes that the judiciary game is a turnwise, two-player game. Call the players "left" and "right." Each player can make one of two moves. Abstractly (without thinking about precedent), let's call these moves: "formalist" and "realist." Each turn consists of a period during which the player controls the judicial branch of government. Let's assume that turns are of equal duration (8 years) and that they alternate. If the left plays realist, then the judiciary will make realist decisions based on a left political ideology--substitute "right" for "left" and you get the equivalent description for a right realist play. If either the left or the right plays formalist, then judges make formalist decisions. Let's assume that the "state of the law" can be mapped onto a simple left-right real line. Let's assume that the starting point of the line is 0, for political neutrality. Further assume that a play of formalist has no impact on the state of the law, that a play of right-realist moves the state of the law +1 (one step to the right), and that a play of left-realist moves the state of the law -1 (one step to the left). Further suppose that each side's utility function for evaluating payoffs is as follows:
        Left: Ul(S) = -S.
        Right: Ur(S) = S.
      In other words, if the state of the law is one (S=1), the right derives a utility of one (Ur = 1) and the left derives a utility of negative one (Ul = -1). How might the game be played? Suppose the right consistently plays formalist and the left consistently plays realist. We then get the following sequence for 5 rounds of play:
        Round 1, Left-realist = -1, total = -1.
        Round 2, Right-formalist = 0, total = -1.
        Round 3, Left-realist = -1, total = -2.
        Round 4, Right-formalist = 0, total = -2.
        Round 5, Left-realist = -1, total = -3.
      As described, this is a zero-sum game. All gains for the left are losses for the right and vice versa. And playing formalist is clearly a bad strategy. If the right plays formalist and the left plays realist, the law moves to the left. The dominant strategy in this game (absent some mechanism for precommitment) is for both sides to play realist. Again, for five rounds:
        Round 1, Left-realist = -1, total = -1.
        Round 2, Right-realist = +1, total = 0.
        Round 3, Left-realist = -1, total = -1.
        Round 4, Right-realist = +1, total = 0.
        Round 5, Left-realist = -1, total = -1.
      If we assume an initial state of the law at zero, there is a first-mover advantage. Over a number of round-pairs (left move, right move), the average state of the law is -0.5. If the right moved first, the average would be +0.5.
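      (For readers who like to watch the arithmetic run, here is a minimal sketch of the simple model in Python. The function and move names are mine, purely illustrative; the payoff rules are exactly those stated above: formalist plays leave the state of the law S unchanged, realist plays move S one step toward the player's side, and Ur(S) = S while Ul(S) = 0-S.)

        # A minimal sketch of the simple model. Formalist plays leave the
        # state of the law S unchanged; realist plays move S one step
        # toward the player's side. Ur(S) = S, Ul(S) = -S.

        def play(moves, start=0):
            """Apply a sequence of moves and report the state after each round."""
            effect = {"left-realist": -1, "right-realist": +1,
                      "left-formalist": 0, "right-formalist": 0}
            state, history = start, []
            for move in moves:
                state += effect[move]
                history.append((move, state))
            return history

        # Left plays realist, right plays formalist: the one-way ratchet.
        for move, s in play(["left-realist", "right-formalist"] * 3):
            print(f"{move:16s} S = {s:+d}   Ur = {s:+d}   Ul = {-s:+d}")

      Each left-realist turn drags S one step to the left, and no formalist turn ever drags it back: the printout shows the total marching monotonically downward, which is the ratchet in miniature.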
      An Extension of the Simple Model Let's extend the simple model. Let's assume that both players derive utility from the rule of law (stability and certainty). So let's assume that the right player derives a utility equal to the utility derived from the state of the law minus half the absolute value of the change in the state of the law from the previous round. The left player has the same utility function, modified to account for the fact that the left derives higher values from positions on the line that are to the left:
        Ur(S{x}) = S{x} - 0.5 * |S{x} - S{x-1}|
        Ul(S{x}) = (0-S{x}) - 0.5 * |S{x} - S{x-1}|
      Where Ur is the notation for the right's utility function, Ul is the notation for the left's utility function, {x} is the notation for Round X, {x-1} is the notation for the round before Round X, and || is the absolute value function. With this change in the simple model, our game becomes a turnwise prisoner's dilemma. If there are only two turns and no mechanism for cooperation, then the dominant strategy is to play realist. In the iterative version of the game (an indefinite number of move pairs), the equilibrium strategy is tit for tat. If you play formalist, I play formalist. If you play realist, I play realist. This should result in both players playing the cooperative formalist strategy. Each player does at least as well from formalist/formalist play as from realist/realist play, with the gain coming from the rule-of-law term.
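      (Here is a comparable Python sketch of the extended model, again with illustrative names of my own. The payoff rules follow the formulas as reconstructed above; the exact size of the cooperative surplus depends on who moves first and how many rounds are played, so treat the printed numbers as an illustration, not a result.)

        # A sketch of the extended model: each round's utility adds a
        # rule-of-law term, -0.5 * |S - previous S|, to the ideological
        # term (S for the right, -S for the left).

        def payoffs(moves, start=0):
            effect = {"left-realist": -1, "right-realist": +1, "formalist": 0}
            state, ur, ul = start, 0.0, 0.0
            for move in moves:
                prev, state = state, state + effect[move]
                stability = -0.5 * abs(state - prev)   # rule-of-law term
                ur += state + stability                # right's round utility
                ul += -state + stability               # left's round utility
            return ur, ul

        def tit_for_tat(opponent_history):
            # Play formalist on the first turn; thereafter play realist
            # only if the opponent played realist on her last turn.
            if not opponent_history or opponent_history[-1] == "formalist":
                return "formalist"
            return "realist"

        print("realist/realist:     Ur, Ul =", payoffs(["left-realist", "right-realist"] * 5))
        print("formalist/formalist: Ur, Ul =", payoffs(["formalist"] * 10))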
    The Intuitive Point of the Models Let's put the models aside and try to capture their intuitive sense. If the only thing that counts about the law is political ideology, then the left and right have every incentive to select political judges who will make ideological decisions. If the rule of law benefits both the left and the right, then both parties will want to appoint formalist judges, provided that the other side can be trusted to follow suit. How does the ratchet fit in? The ratchet is simply a shorthand description of the zero-sum version of the game. The core idea of the ratchet is that the competition between left and right over the judiciary is captured by the simple model. But if the extended model better captures reality, then the ratchet is wrong. So which is it?
    Back to Stare Decisis You, gentle reader, are probably getting quite impatient. My abstract model is aimed at formalism and realism, but the topic at hand is stare decisis. You have already observed, no doubt, that both the simple model and the extended model can be applied to stare decisis. But here is the crucial point. Even if you are a formalist, you may reject the idea that following stare decisis is the formalist move. Or more precisely, you may believe that there are different conceptions of formalism and that the best conception does not incorporate a principle of strong stare decisis for courts of last resort. Let's simplify and assume that one conception of formalism is textualism, the view that judges on courts of last resort should adhere to the text, even if it is contrary to precedent.
    Textualism versus Realism So let's think about the implications of our model for the textualist. Let's assume that we have a two-player game. One player is "textualist," and the other player is "left realist." This game is much more complicated than our prior game, because we now have a two-dimensional space for the state of the law. Textualists evaluate the state of the law on a real line that runs from Fidelity to Text to Disconformity to Text. Left realists evaluate the state of the law on a real line that runs from Left to Right. Some left outcomes rank high on fidelity to text; others rank low. It is a matter of great controversy whether textualism as a theory tilts to the left or the right, but (simplifying greatly) it is commonly assumed that textualism probably tilts right. Arguendo, let's go with this simplifying assumption. Notice that even after this simplification, a formal model of the game would be extremely complex. Nonetheless, we can intuitively grasp what a complex model would reveal. The game between textualists and left-realists will have the general structure of the simple model above. On average, gains for left-realists are losses for textualists and vice versa. Now, consider the decision whether to follow precedent. If the textualist follows precedent and the left-realist does not, we have the ratchet. After each round of play, the law will have moved further away from Fidelity to Text and closer to Disconformity to Text. This is "the ratchet" as applied to precedent.
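    (This dynamic is easy to make vivid with a toy simulation--an illustrative Python sketch with assumed dynamics, not a formal model of the two-dimensional game. It tracks both the left/right position of the law and its distance from fidelity to text, and compares a textualist who respects realist precedents with one who overrules them.)

      # An illustrative sketch (assumed dynamics). A left-realist move
      # shifts the law left and one unit away from the text; a textualist
      # who follows precedent leaves realist gains in place, so
      # disconformity to text never decreases. That monotone climb is
      # the ratchet as applied to precedent.

      def ratchet(rounds, textualist_respects_precedent=True):
          left_right, disconformity = 0, 0
          for _ in range(rounds):
              # Left-realist turn: move left, away from the text.
              left_right -= 1
              disconformity += 1
              # Textualist turn: either leave precedent intact, or
              # overrule realist precedents and pull back toward the text.
              if not textualist_respects_precedent:
                  left_right += 1
                  disconformity = max(0, disconformity - 1)
          return left_right, disconformity

      print(ratchet(5))                                      # (-5, 5): the ratchet
      print(ratchet(5, textualist_respects_precedent=False)) # (0, 0): no ratchet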
    Ideal and Nonideal Theory We need another distinction to allow a meaningful evaluation of the ratchet. Following Rawls, let's distinguish between ideal and nonideal theory. In our context, ideal theory involves making the assumption that judges perfectly comply with our theory of judging. Nonideal theory relaxes the perfect compliance assumption, and it is very important to specify with precision exactly how the assumption is being relaxed. As a matter of ideal theory, the choice between a formalism that incorporates stare decisis and one that does not is relatively easy: the ratchet simply does not count against stare decisis, because in the realm of ideal theory there are no realists to create a ratchet effect for realist precedents. The ratchet gets going when we move to the case of nonideal theory, imagining that the world is divided into two camps, only one of which will adopt some version of formalism. Here is the important point:
      Clear argument about nonideal theory requires very careful attention to the assumptions about noncompliance. It is very easy to jury-rig the assumptions in a way that assures that one view wins out over another.
    OK, but how does that apply to the ratchet? The ratchet makes a very peculiar set of assumptions about compliance with a theory of judging. On the one hand, it assumes that the "bad guys," i.e. the left realists, pay no attention at all to the normative theory of judging at issue. On the other hand, it assumes that the "good guys," the textualists, comply perfectly with the theory. But this set of assumptions jury-rigs the outcome. Here is another way of putting my point:
      In political debate, it is very natural to reason as follows. We are the good guys, so we will act in conformity with the right normative theory. But they are the bad guys, and they will always act selfishly and violate the norms if it is in their self interest.
    And that is quite a natural way to reason. When the stakes are high and disagreements are fundamental, it is all too easy to paint yourself white and your opponent black. But that is no way to reason about normative theories--in this case, about theories of judging. OK, but what happens if we make more realistic and consistent assumptions? And here is where it gets really interesting. Watch carefully, because the next block of argument is where the most important moves will be made.
    Politicization and the Ratchet So let's not assume that the good guys comply perfectly with our best normative theory of judging and the bad guys just do what they please. But we don't want to resort to ideal theory--that doesn't get at the interesting questions. So let's assume that both the left and the right are capable of acting so as to advance their political ideology at the expense of the rule of law. And let's assume that both the left and right are capable of cooperating so as to advance the rule of law, if, but only if, they believe such cooperation is in their long-term self interest and also believe that they have good reason to trust that the other side will not defect from the cooperative scheme. And then what? And then we have reason to believe that the current situation can go one of two ways. On the one hand, if both sides treat the situation as a zero-sum game, we can continue the downward spiral of politicization. On the other hand, if both sides can come to see that the rule of law is in their long-run self interest and come to have reason to trust the other side, it is possible to pull out of the downward spiral and begin the process of rebuilding the rule of law. That is all very abstract. How does it apply to stare decisis?
    The Role of Stare Decisis in Restoring the Rule of Law
      The World as We Find It Let's begin with the world as we find it. And we find a mixed bag. On the one hand, the judiciary seems thoroughly politicized. The decisions of the Warren and Burger courts resist rationalization on conventional, formalist grounds. Although legal theorists labor mightily to justify the attractive outcomes, appealing to a New Deal constitutional moment, the forum of principle, or the distinction between high and low politics, the perception of Senators, Judges, and Justices is that the hot button issues are decided by political ideology and not neutral principles. On the other hand, the ordinary and boring cases that constitute the great bulk of judicial work continue to be decided, for the most part, on conventional, formalist grounds. Precedents are followed, and statutes interpreted in accord with their plain meaning and common sense. There is still a substantial (but diminishing) reservoir of formalist legal practice. The world as we find it is a strange brew--a cuppa formalism topped by realist foam.
      A Fair Description of the Players Here is where I'm gonna lose you. There are few devils and fewer angels among the players of the judicial selection game. The left does not consist of unprincipled realists, willing and able to sacrifice the rule of law on the altar of politically correct results. The right is not made up of hypocritical formalists, devoted to text and original meaning only when and because it advances their agenda, willing to don realist garb as soon as the 11th Amendment or a Presidential election is at stake. But . . . And this is a big but. But both sides are all too ready to see their rivals in the worst possible light. Both the left and the right see the value of the rule of law. Both the left and the right are afraid that if they decide cases on the basis of the rules laid down, the other side will take advantage. Both the left and the right see their rivals as fundamentally untrustworthy. A fair description of the players depicts few devils and fewer angels and many, many well-intentioned but fallible humans.
      The Options So given the lay of the land and a realistic assessment of the players, what are the options? How can we prevent a downward spiral of politicization? How can we restore the rule of law? Here are some options.
        Total Victory Partisans on both sides are fond of the total victory scenario. We need to be realists for now, but once we have achieved stable, long-term control of the Presidency, the Senate, and the Supreme Court, then we can worry about neutral principles, precedents, and the rest. Maybe. But the lesson of American history is that absent a crisis (a great depression, a civil war, a global conflagration), the winner-take-all structure of the American political system produces parties that alternate in power--for longer or shorter periods. And if one side pursues total victory when it senses that victory is within reach, the other side will be all the more anxious to use the levers of judicial power to undo the damage when it regains the upper hand.
        Wait for a Crisis Pessimists on both sides despair of any solution short of a crisis. The downward spiral must run its course. When things get bad enough, then, and only then, will there be sufficient political pressure to break out of the prisoner's dilemma. But the pessimists are not pessimistic enough. Because it isn't clear that it is so easy to pull out of a downward spiral of politicization once you are at the bottom. The bottom is inhabited by thoroughly corrupt judges who see every case as a patronage opportunity and lawyers who see briefs and arguments as less than mere window dressing. One of the dirty secrets of American law is that we have already hit bottom--in counties in Southern Illinois, in Texas, in Louisiana, and elsewhere. Wait for a Crisis is surely the option of last resort.
        Formalism With Weak Stare Decisis And this brings us to one of the current favorites. Many formalists (on the right) are tempted by the idea that we can have formalism without stare decisis. Judges should adhere to the plain meaning of the constitutional text in light of the historical evidence of original meaning. This is sufficient to restore the rule of law, and it has the great tactical advantage of allowing the Rehnquist (soon to be Thomas?) Court to roll back the realist decisions of the Warren and Burger Courts. But this is not a stable solution, once we think about the reaction of the left. On the hot button issues, the text and history allow too much room for maneuver. Even if the left were to embrace formalism without stare decisis, we would expect the struggle to politicize the court to continue--the terms of debate would be different but the underlying realpolitik would be the same. And there is an even more fundamental problem: formalism without stare decisis looks like it has been jury-rigged in favor of those outcomes the right prefers--especially given the current political situation and composition of the Court. And because formalism without stare decisis will be perceived as unprincipled, as a program for the restoration of the rule of law it is doomed to failure--unless supplemented by total victory.
        Formalism With Strong Stare Decisis And that brings me to the final option of my list--formalism with strong stare decisis. Would this option create the possibility of restoring trust? Here is the interesting point. The very argument used against strong stare decisis--the infamous ratchet--explains why stare decisis is likely to be effective as a confidence-building measure. If formalist judges of the right are willing to respect realist precedents of the left, this is a clear and convincing demonstration that the right is serious about the rule of law. And there is more good news. The judicial selection/decision game is not a zero-sum game. Both sides lose from a downward spiral of politicization. Both sides gain from the rule of law. Trust is the key to the emergence of a stable, cooperative equilibrium with both sides committed to appointing formalist judges and each side willing to allow the other the privilege of appointing judges from its own party.
      But Haven't I Missed the Point But the opponent of strong stare decisis is not likely to be satisfied yet! Here is how the objection might go: OK, maybe you are right about this confidence-building point. But our objection to stare decisis is that it is wrong. The ratchet isn't a political argument; it is an argument of principle. Precedents that run contrary to the text and plain meaning of the constitution are illegitimate and wrong. Applying stare decisis to those decisions is simply institutionalizing error. You haven't answered our objection. And this is a completely fair reaction to the argument developed in this post. My argument is not complete. I need to do more than show that strong stare decisis is essential to the restoration of the rule of law. I need to show how strong stare decisis can be defended as a matter of principle, and . . .
    To Be Continued And that is what I will do. In the next Part, I will offer an argument for the proposition that strong stare decisis offers the best (but not the fastest) path to the restoration of the rule of law. The best path to a practice of constitutional interpretation that adheres to the text and original meaning does not require that we disregard precedent. How does that work? . . . to be continued.
    Part III: Precedent and Principle


 
Hasen on the Nuclear Option Check out Rick's post here, commenting on this article from Roll Call and arguing that there are insufficient Republican votes for a unilateral move to change Rule 22. Update: Rick posts a letter from a Senate attorney here. And here is a link to the Frist/Miller proposal.


 
Senate Rules Committee Filibuster Hearings Courtesy of Marcia Oddi of the Indiana Law Blog, Senator Lott's Committee holds hearings today. Here is the witness list. And here is a link to the audio (2:00 p.m. E.D.S.T. today). For my thoughts on the so-called "nuclear option," check out this post. And for more, go here.


 
Lemley and Burk on Biotech Uncertainty Mark Lemley (UC Berkeley) and Dan Burk (Minnesota) have posted Biotechnology's Uncertainty Principle on SSRN. Their paper offers a timely and important critique of the Federal Circuit's biotech patent jurisprudence--a must read for the IP crowd. Here is the abstract:
    In theory, we have a unified patent system that provides technology-neutral protection to all kinds of technologies. However, we have recently noticed an increasing divergence between the rules actually applied to different industries. Biotechnology provides one of the best examples. In biotechnology cases, the Federal Circuit has repeatedly held that uncertainty in predicting the structural features of biotechnological inventions renders them nonobvious, even if the prior art demonstrates a clear plan for producing the invention. At the same time, the court claims that the uncertain nature of the technology requires imposition of stringent patent enablement and written description requirements that are not applied to patents in other disciplines. Thus, as a practical matter it appears that although patent law is technology-neutral in theory, it is technology-specific in application. Much of the variance in patent standards is attributable to the use of a legal construct, the "person having ordinary skill in the art" (PHOSITA), to determine obviousness and enablement. We do not challenge the idea that the standards in each industry should vary with the level of skill in that industry. We think the use of the PHOSITA provides needed flexibility for patent law, permitting it to adapt to new technologies without losing its essential character. We fear, however, that the Federal Circuit has not applied that standard properly in biotechnology. The court has a static perception of the field that was set in its initial analyses of biotechnology inventions, but which does not reflect the realities of the industry. In the final part of the paper, we offer a very preliminary policy assessment of these industry-specific patent cases. We suggest that the special rules the Federal Circuit has constructed for biotech cases are rather poorly matched to the specific needs of the industry. Indeed, in some ways the Federal Circuit cases have it exactly backwards. We offer a few suggestions as to what a consciously designed biotechnology patent policy may look like.


 
New Papers on the Net Here is the roundup:
    Arthur Jacobson (Cardozo) offers Law Without Authority: Sources of the Welfare State in Spinoza's Tractatus Theologico-Politicus, forthcoming in the Cardozo Law Review. From the abstract:
      In his Tractatus Theologico-Politicus (1670), Spinoza mounts an attack on authority in all its forms, including the authority of law and the state. Because authority in all its forms is a product of the imagination, obligation can never be justified. The subjects of Spinoza's commonwealth have no duties, only rights. Spinoza replaces the authority of the commonwealth with the welfare of subjects as the sign and the source of the commonwealth's flourishing. Spinoza was thus the first to propose that the only way for commonwealths to maintain the illusion of authority is by attending to the welfare of their citizens.
    David Carlson (Cardozo) posts two papers:
      Hegel's Theory of Measure. From the abstract:
        The final segment in Hegel's analysis of "being" is measure - the unity of quality and quantity. At stake in these chapters is the difference between quantitative and qualitative change. A being or thing is indifferent to quantitative change, which comes from the outside. For instance, a legislature can increase the stringency of zoning regulations, and yet the legislation is still constitutional "zoning." But there comes a point at which quantitative change effects a qualitative change - zoning becomes an uncompensated "taking" of property. This paper analyzes how Hegel, in the "Science of Logic," derives measure from the categories of quality and quantity, and how essence - the "beyond" of being/appearance is in turn derived. The paper is the third installment on a complete analysis of Hegel's most important (and least read) work - the Science of Logic (1831).
      Indemnity, Liability, Insolvency. From the abstract:
        Suppose A has a claim against B. B has a claim over against C. B, however, is insolvent and has not actually paid A. B's only asset is, in fact, B v C . To what extent can C claim that B v C is valueless - that B was not damaged because B was too broke to pay A? This paper argues that the fundamental legal distinction between indemnity and liability is beginning to dissolve, because B can always pay A (and thereby give value to B v C ) by borrowing the amount B owes and using B v C as collateral for the loan. This very possibility tends to render the distinction between indemnity and liability obsolete.
    Pascoe Pleasence (Legal Services Research Centre), Hazel Genn (University College London), Nigel Balmer (Legal Services Research Centre), Alexy Buck (Legal Services Research Centre) and Aoife O'Grady (Legal Services Research Centre) post Causes of Action: First Findings of the LSRC Periodic Survey, forthcoming in the Journal of Law and Society:
      In this paper we report some of the first findings of the LSRC periodic survey of justiciable problems. We confirm the prevalence of justiciable problems amongst the general population. We identify important differences in the experiences of discrete socio–demographic populations, not only in terms of the number of problems faced, but also in terms of the perception of problems and reactions to them. We show that cost is not the principal barrier to taking action or obtaining advice across most problem categories. Other concerns, such as fear or uncertainty as to what can be done are generally more prevalent. We illustrate the range of strategies employed by those who take action, and confirm the rarity of court action. Finally we show that the basic form of Felstiner, Abel, and Sarat’s aetiology of lawsuits is recognizable within our findings, although we explain that the manner and form of progression through the various stages is complex and irregular.
    David Hoffman (Cravath, Swaine & Moore) posts How Relevant is Jury Rationality?, forthcoming in the University of Illinois Law Review. From the abstract:
      This essay reviews "Punitive Damages: How Juries Decide" by Cass Sunstein, et al. The book provides a good example of a recent trend: the use of behavioralist research to justify surprisingly paternalistic legal reforms. While critics of behavioralism often contend that its theoretical foundations are weak, this approach is unlikely to prove an effective rejoinder in the new debate about what kinds of paternalism are made permissible by human "irrationality". A better approach: (1) notes the lack of a nexus between behavioralism and the supposed emergent necessity of paternalist reforms; and (2) suggests that juror unwillingness to apply cost-benefit formula provides the true motivating force for the new paternalism in law and economics. Rather than asking if jurors act rationally (and punishing them if they will not), we should instead question what law and economics mean when they use the word "rational" as an initial matter.
And a few more interesting titles:


Wednesday, June 04, 2003
 
The Case for Strong Stare Decisis, or Why Should Neoformalists Care About Precedent? Part One: The Three Step Argument. Why care about precedent? If you are a realist, precedent just gets in the way of the real purpose of law--to achieve social policy goals or accommodate the balance of social interests. If you are a formalist, precedent can get in the way of making decisions that respect the plain meaning of the text. Either way, why care about precedent?
    Introduction One of the great divides in contemporary legal theory is that between neorealists (who believe that judges should treat the law as an instrument to achieve "good" or "just" results) and neoformalists (who believe that judges should do what the authoritative legal materials--constitutions, statutes, precedents--require). In A Neoformalist Manifesto, I sketched a neoformalist theory of judging for constitutional cases. Roughly, judges should decide cases by looking to precedent, constitutional text and structure, and historical evidence of original meaning, in that order, when they interpret the constitution. One feature of my view that is highly controversial is that it incorporates strong stare decisis, the view that even courts of last resort should regard their own prior decisions as binding. This view is usually rejected by legal theorists of all stripes. It is easy to see why neorealists from both the left and the right would reject strong stare decisis. A strong doctrine of precedent precludes both left and right from arguing that the constitution should be interpreted to advance their political agenda. But both textualists and originalists also have reasons to oppose stare decisis. Why should we lock in precedents that are inconsistent with the text, the original meaning, or the will of "We the People"? These are very important questions, and I will take a stab at answering them here.
    The Plan So here is the plan. My discussion will be organized around four questions. First, what is stare decisis? Second, what is the case for a strong doctrine of stare decisis? Third, what are the arguments against stare decisis? Fourth, how can we resolve the debate?
    What is Stare Decisis First things first. What is stare decisis?
      Stare Decisis Defined What is stare decisis? That Latin phrase can be roughly translated as "to stand by that which is decided." The core of the doctrine of stare decisis is the idea that prior decisions, precedents, should in some way constrain current decisions. The California Supreme Court put it this way:
        It is . . . a fundamental jurisprudential policy that prior applicable precedent usually must be followed even though the case, if considered anew, might be decided differently by the current justices. This policy . . . 'is based on the assumption that certainty, predictability and stability in the law are the major objectives of the legal system; i.e., that parties should be able to regulate their conduct and enter into relationships with reasonable assurance of the governing rules of law.'
      Moradi-Shalal v. Fireman's Fund Ins. Companies 46 Cal.3d 287, 296 (1988).
      A fully developed theory of stare decisis enables lawyers to distinguish between the holding of a case, which is legally binding, and mere dicta, which are not part of the reasoning essential to the result reached.
      Stare Decisis versus Law of the Case When the Supreme Court decides a particular case and remands, its decision is binding on the lower court in that case. Technically, this is not precedent or stare decisis. This binding effect is called law of the case, and almost everyone agrees that the functioning of a judicial system with vertical hierarchy (higher and lower courts) requires that the decisions of the higher courts bind the lower courts--even if the lower court judge thinks that the higher court made a bad decision.
      Horizontal versus Vertical Stare Decisis Within the doctrine of stare decisis, it is important to distinguish what we might call vertical and horizontal contexts. Vertical stare decisis applies when a Supreme Court decision in one case binds the lower courts in other cases. Horizontal stare decisis applies when a Court is bound by its own prior decisions. In the United States, vertical stare decisis is a part of the law of every jurisdiction. In the federal system, the United States Supreme Court does not consider itself strongly bound by horizontal stare decisis, but the Courts of Appeals do consider themselves bound.
      Strong and Weak Stare Decisis What is the force of precedent? Some courts afford precedent great force--treating the doctrine of stare decisis as a rule with binding force. Other courts give precedent only the barest nod of respect--treating the doctrine of stare decisis as a mere presumption--a bubble that can be burst by any countervailing force. In between, we can imagine courts giving the precedents substantial deference but setting aside caselaw when there are substantial or compelling reasons.
    The Case for Strong Stare Decisis Let me lay my cards on the table. The argument I am about to present tries to sneak up on you. I am going to try to get you to agree with the proposition that a very strong doctrine of stare decisis is justified in one context (vertical stare decisis), and then move step-by-step to a radical conclusion--that the United States Supreme Court should consider itself bound by its own prior decisions. Watch out! Be very careful that I don't try to put one over on you as the argument moves from context to context.
      Step One: Vertical Stare Decisis Here is the easy part of the argument. Decisions of the Supreme Court should bind the Courts of Appeal and the District Courts, and decisions of the Courts of Appeal should bind the District Courts. In other words, higher courts bind lower courts. Of course, but why? Think about the alternative. Without vertical stare decisis the law would be up for grabs in every case. This is most obviously true with respect to the common law--where the law is defined by precedent. But it is also true--in the United States--in constitutional and statutory cases. Our constitution and statutes are filled with provisions that are breathtaking in their generality--think "equal protection" and "unreasonable restraint of trade." Without stare decisis the meaning of these provisions would be up for grabs in every case involving them. And when the law is up for grabs, it cannot realize the values we summarize by the phrase the rule of law. Without vertical stare decisis, the law would be unpredictable and uncertain. Unless you believe in the strong indeterminacy thesis, you are likely to agree that lower courts must be bound by the decisions of superior courts. For more on vertical stare decisis, check out this op/ed by Howard Bashman.
      Step Two: Horizontal Stare Decisis in Intermediate Appellate Courts And these same considerations apply with almost equal force when the context changes to the question whether the intermediate appellate courts--the United States Courts of Appeals in the federal system--should follow precedent. A bit of institutional description is necessary. In the federal system, there are thirteen different courts of appeals (all but one of which hear cases originating from geographic territories). Each court of appeals has several judges (and one, the Ninth Circuit, has more than two dozen judges). When these courts hear cases, three judges form a panel--and this feature is required for these courts to process the tens of thousands of cases they hear each year. If the Courts of Appeals did not follow stare decisis, this would mean that in every single case involving legal issues on which there was no controlling Supreme Court precedent, each panel would be entitled to make a de novo decision on the uncontrolled issue. It would not be unusual for the same legal issue to be decided by a different panel of the relevant Court of Appeals each time the issue was presented. This system would certainly reduce the certainty and predictability of the law--more in some areas of the law than others, of course. Once again, rule of law values support a very strong doctrine of stare decisis.
      Caveat: I have left the system of en banc review out of my simplified (blog) version of the argument. The current practice is that a Circuit can go en banc, sitting as a whole rather than in three-judge panels. When a Court of Appeals goes en banc, it has the power to overrule precedent, and, importantly, this feature is partially inconsistent with strong stare decisis. Only partially, because the most important use of the en banc power to overrule is in the case of inconsistent decisions by individual three judge panels. Correction of these inconsistencies is actually required by strong respect for precedent. The other use of the en banc power is to overrule prior circuit precedent on the ground that it is wrong. When a circuit does this en banc, which is rare, the circuit is acting like the Supreme Court--raising the same issues as I discuss in Step Three. So here we go.
      Step Three: Horizontal Stare Decisis in Courts of Last Resort
        Introduction to Step Three Some readers of this blog are very familiar with the doctrine of stare decisis and others have only a passing familiarity with it. So, I need to begin this section by putting it in context. I am about to make a fairly radical argument. A radically conservative argument--that is. In the United States, stare decisis is usually only weakly respected by courts of last resort (e.g. the Supreme Court of the United States or the highest courts of the various states). Even in the United Kingdom, the House of Lords has abandoned the principle (from London Tramways v London County Council [1898]) that it is bound by its own prior decisions. In 1966, the Lords made the following statement of practice: "Their Lordships…recognise that the rigid adherence to precedent may lead to injustice in a particular case and also unduly restrict the proper development of the law." Now, even I won't argue for rigid adherence to precedent. But I will be arguing for much stronger respect for precedent than is currently the practice. Why?
        The Implications of Steps One and Two Let's begin with a point that might seem obvious, but is frequently overlooked. If you accept Step One (vertical precedent) and Step Two (horizontal precedent for intermediate courts of appeal), you have prima facie reason to believe that courts of last resort should follow their own prior decisions. If courts of last resort simply ignore precedent, then in each and every case and on each and every issue, in theory, the law is uncertain. Why? Because without any doctrine of precedent, the court should change its mind on an issue if the court views the balance of reasons differently than it did on a prior occasion. This is always a theoretical possibility, although in practice it may be unlikely. But the problem is more than theoretical. On many issues, a switch in positions will be somewhat likely or even very likely--if the court gives no deference to its own prior decisions. Again, why? For many reasons, including the following:
          --Substantial time has passed since the last occasion upon which the court considered the issue, and both the composition of the court and the legal landscape have changed.
          --The issue was badly argued and briefed in the prior case, and hence the balance of reasons is likely to differ upon reconsideration.
          --The issue is a close one, and even a slight difference in either the court's composition or the way the issue is argued could produce a different result.
        Even this brief catalog is sufficient for our purposes at this stage of the argument. Some respect for precedent by courts of last resort is necessary for the rule of law. Without minimal stare decisis, the law becomes unpredictable and uncertain--with the uncertainty growing as time passes from the last occasion upon which the court of last resort passed upon a particular issue. In other words, if you bought the arguments at Step One and Step Two, you should be convinced that complete disregard for precedent, even by the Supreme Court, is, ceteris paribus, a bad thing.
        Beyond Minimal Stare Decisis: The Case for a Strong Doctrine This first move is an important one, but it is not sufficient for my purposes. I need to argue for strong stare decisis. Furthermore, ceteris is not paribus, because the Supreme Court differs from lower courts in important ways. So what is the case for a strong doctrine of precedent at the level of courts of last resort? Why should the Supreme Court pay more than lip service to its own prior decisions? Several points need to be made:
          First, the stronger the doctrine of stare decisis the greater the predictability and certainty of the law.
          Second, the stronger the doctrine of stare decisis, the more determinate the meaning of the general and abstract clauses of the constitution. This is a crucially important point, especially given the nature of the United States Constitution. Remember we are asking the question: why should a formalist care about stare decisis? And a formalist might say, I don't need the doctrine of precedent, because I will interpret the constitution in a manner that respects the text and its original meaning. And from where I sit, this is a powerful and important argument. And if our constitution did not include provisions like the equal protection clause, the due process clause, the privileges and immunities clause, the freedom of speech, etc., the formalist case against stare decisis might be quite strong. But sophisticated formalists do not and cannot claim that text and history provide fully determinate meanings for the grand (or perhaps badly drafted) clauses of the Constitution. These clauses are inherently contestable. And this fact leads to another . . .
          Third, the stronger the doctrine of stare decisis, the lower the risk of politicization. Precisely because the Constitution has abstract and ambiguous clauses, there will be a great temptation for the political branches of government to affect constitutional interpretation. A strong doctrine of stare decisis limits this opportunity to those issues which are left open by prior decisions. A weak doctrine of stare decisis inherently increases the incentives for and hence the likelihood of politicization. To complete this argument, I need to argue that politicization of the judiciary is a very bad thing--but since I have done that on a number of prior occasions, I will not repeat that argument here.
        But What About Institutional Design? And now, I need to consider a counterargument. Even in the federal system there are literally hundreds of lower courts. Intermediate courts of appeal often sit in panels. These features of institutional design make the doctrine of stare decisis critical. But the Supreme Court has only nine judges and it does not sit in panels (as some state Supreme Courts do). Because of life tenure, the composition of the Supreme Court changes relatively slowly. (Although there have been periods where several seats have turned over in just a few years.) Even without stare decisis, the Supreme Court is likely to adhere to its prior decisions simply because of the stability in the composition of the Court. So long as the Justices themselves have stable preferences, Supreme Court doctrine is likely to remain relatively stable. This is a very important argument. But as I will show, it is fallacious. Why?
          First, in the long run, there will be periods of rapid change in the composition of the court--just as there are periods of stability. Without a strong doctrine of stare decisis, the rule of law will be severely compromised during those periods.
          Second, even during periods of stability in membership, the Supreme Court may become highly unstable. Anyone who is familiar with constitutional doctrine knows that the contemporary Supreme Court has had an extraordinary series of flips and flops on crucially important issues. A stark example is the Supreme Court's 10th Amendment jurisprudence, from National League of Cities v. Usery forward. In these cases, the Court was closely divided and justices in the middle did not vote consistently. As a result, the law became radically unstable and uncertain. Also, because some cases involve multiple decisive issues, there is no guarantee that the court will be able to render a coherent decision that can guide the lower courts. Strong stare decisis does not eliminate this problem, but it radically reduces the number of occasions in which it will arise.
          Third, at any given time, most issues upon which the Court might pronounce are only addressed by precedents from Courts with radically different compositions. The Supreme Court does not revisit the entire federal corpus juris every year or even every several years. At any given point in time, the vast majority of Supreme Court precedents were decided by courts with little or no overlap in membership with the current court. As a result, without a doctrine of stare decisis the majority of the federal corpus juris is not only up for grabs in theory, it is up for grabs in practice as well.
        The argument against strong stare decisis from the institutional design of the Supreme Court is not without force, but as the above three arguments demonstrate, the institutional design of the court is not a full and adequate substitute for a practice of respecting prior decisions.
      We have established that there are compelling reasons for the doctrine of stare decisis. In fact, almost any formalist will accept a strong doctrine of vertical stare decisis and the idea that intermediate appellate courts should be bound by their own prior decisions. And the same arguments that justify these conclusions apply to courts of last resort. But formalists have an especially weighty reason to favor strong stare decisis for the Supreme Court. If the Supreme Court does not follow the rules laid down, the result will be strong pressures to politicize the court and those very pressures would undermine the rule of law. Stare decisis and formalism are like love and marriage. They go together like a horse and carriage.
    Preview of Coming Attractions I am almost at the end of Part I. Tomorrow, I will post Part II, and my last task is a preview of coming attractions. But I have yet to touch upon the most powerful objection to strong stare decisis from within the formalist camp--this is the argument known as the ratchet. Suppose that the conservative critiques of the Warren Court are correct--that the decisions of the Warren Court (or at least many of them) cannot be defended on formalist grounds. What then would be the effect of a return to formalism? Why, it would lock in the realist decisions of the Warren Court era. But it would do more than that. Even if formalist judging were to prevail for years or decades, the pendulum might swing back to realism at some point in the future. But the realists of the future will not be constrained by the formalist decisions of their predecessors. And hence during future periods of realism, the law would be distorted by yet another increment. You can see where the argument goes. If formalists respect precedent and there are alternating periods of realism and formalism, then we have a ratchet--for emphasis, we might use the redundant phrase, one-way ratchet. Is this right? Should formalist judges disregard precedent in order to serve a higher formalism? These questions and more will be addressed tomorrow.
    Part II: Stare Decisis and the Ratchet
    Part III: Precedent and Principle


 
Phillips on the Use and Abuse of Culture At Oxford's Research Seminar in Political Theory, Anne Phillips (London School of Economics) presents The Uses and Abuses of Culture: Thinking Through the Feminism/Multiculturalism Debate.


Tuesday, June 03, 2003
 
Internet Governance Symposium The Loyola of Los Angeles Law Review has an important symposium on internet governance, edited by Michael Froomkin. Michael has put together a very impressive lineup. Here is the roundup:
    ICANN 2.0: MEET THE NEW BOSS by A. Michael Froomkin:
      In this Introduction, Professor A. Michael Froomkin reviews each of the contributions made to the ICANN Symposium in light of his own wealth of knowledge and personal experience in the development of ICANN. Professor Froomkin discusses the history of ICANN and efforts at reform, as well as international considerations in ICANN’s development, and the theoretical relationship between the legal and technological components.
    FROM SELF-GOVERNANCE TO PUBLIC-PRIVATE PARTNERSHIP: THE CHANGING ROLE OF GOVERNMENTS IN THE MANAGEMENT OF THE INTERNET’S CORE RESOURCES by Wolfgang Kleinwächter
      When ICANN was launched in 1998, it was celebrated as a global test for self-governance of the Internet. Instead of control by governments or the United Nations, the developers, providers, and users of Internet services would manage the Internet's core resources. Five years later, spurred by concerns such as terrorism and cybercrime, the concept of Internet self-governance has been supplanted by an increased role for governments in form of a public-private partnership. ICANN's limited role for governments has been abandoned as governments now claim national sovereignty over various aspects of the management of the Internet. While ICANN remains a private corporation, governments can now send a non-voting liaison to ICANN's Board of Directors. Governments can also request consultations and explanations if the Board rejects a recommendation from ICANN's Governmental Advisory Committee.
    A COMMENTARY ON THE ICANN “BLUEPRINT” FOR EVOLUTION AND REFORM by David R. Johnson, David Post, and Susan P. Crawford:
      In this Article, David R. Johnson, David Post, and Susan P. Crawford argue that consensus policy-making is central to ICANN’s legitimacy and criticize the ICANN Board’s recent departure from a consensus requirement in its policy-making process. In October 2002, the Board adopted new bylaws that allow the mandatory imposition of global policy rules on registries and registrars under contract with ICANN. Although the existing contracts require documented consensus as a condition for imposing mandatory policy, ICANN has not yet announced how it intends to deal with this problem. The authors stress that because ICANN was not established by the United States or any other government, its authority to enforce its rules derives solely from these contracts, and more accurately, from the consensus decision-making model they embrace. Without such consensus decision-making, the authors fear for ICANN’s future in an increasingly litigious world.
    ICANN AND THE CONCEPT OF DEMOCRATIC DEFICIT by Dan Hunter
      In this Article, Professor Dan Hunter examines why ICANN’s attempts to be democratic have failed. The typical explanation is that ICANN fails to meet its democratic obligations. Hunter argues instead that the problem is with our understanding of “democracy.” Democracy is an empty concept that fails to describe few of our political commitments. This Article explores features of democracy and ICANN, explaining why the online world exposes limitations in implications of democracy such as the nature of the demos, the idea of constituencies, direct democracy, and the like. If the concept of “democratic deficit” is so ill-suited to the online world, then we need to consider whether it is appropriate to berate ICANN for its allegedly undemocratic actions.
    ABOUT A DIFFERENT KIND OF WATER: AN ATTEMPT AT DESCRIBING AND UNDERSTANDING SOME ELEMENTS OF THE EUROPEAN UNION APPROACH TO ICANN by Herbert Burkert
      This Article outlines the coming of age of a European Union Internet governance policy and its activities in setting up a ".EU" registry. A recurring leitmotif in these policies is the search for an adequate regime for a fundamental resource of global communications, which is still under the influence -- if not direct control -- of a single country. It is suggested that an analogy which has been developed in Public International Law with regard to shared resources (for example, water) might be helpful, not only in understanding past European Union policies, but also in guiding future policies to transform ICANN into a more traditional, or at least a more familiar structure. However, the ICANN context contains some elements which might make the outcome of such a Public International Law-oriented approach less predictable.
    GOVERNANCE IN NAMESPACES by Stefan Bechtold
      The creation of the ICANN made the regulation of the Domain Name System (DNS) a central topic in Internet law and policy discussions. Critics argue that ICANN uses its technical control over the DNS as undue leverage for policy and legal control over the DNS itself and DNS-dependent activities. Such problems are not unique to the DNS. Rather, the DNS discussions are an example of the more abstract governance problems that occur in a set of technologies known as “namespaces.” Namespaces are an overlooked facet of governance in real space and cyberspace. In this Article, Stefan Bechtold develops a general theory of the governance of namespaces. Designing namespaces and exercising control over them is not merely a technical matter. Rather, the technical control over a namespace creates levers for the intrusion of politics, policy, and regulation. The Article provides several dimensions along which namespaces can be analyzed and explains how namespaces protect social values, and how they allocate knowledge, control, and responsibility. The taxonomic structure developed in this Article can be useful to legal and policy debates about the implications of namespaces. It can also be helpful to designers of namespaces who consider the legal and policy consequences of their actions.
    Bravo!


 
Confirmation Wars Department: Curry on the Reasons for a Downward Spiral Tom Curry has a piece entitled Court vacancy would trigger political warfare on MSNBC. Here is a taste:
    Confirming a Supreme Court nominee was always a political matter — but in the past two decades it has become a form of electoral politics. Nominees must run for the court job, with the electors being the 100 members of the Senate. Advocacy groups run campaigns for and against the nominee, complete with TV ads and street protests. Why has confirmation become so politicized?
      --The court has involved itself in decisions once made by state legislatures, city councils, or local custom, such as whether a rabbi should be able to say a prayer at a public high school graduation ceremony.
      --Confirmation battles are motivators and fund-raising beacons for advocacy groups from People for the American Way on the left to Concerned Women for America on the right.
      --Over the past 40 years, Democrats have proven to be stronger in congressional elections than in presidential elections. Republican presidents from Richard Nixon to George Bush crushed their Democratic opponents — but then usually were faced with a Senate that was still under Democratic control. Democrats have had enough votes to defeat three Republican nominees since 1969, and to nearly scuttle a fourth, Clarence Thomas.
      --Confirmation warfare is fueled by a fundamental dispute over how to interpret the Constitution. Most liberals want the Constitution to be read expansively to guarantee broad privacy rights and to permit government action to redress economic inequality. Many conservatives want a reading of the Constitution in which individual rights are limited to those specified in the text and interpretations are governed by the intentions of the Framers.


 
Still Bloggered If you have been having trouble reaching this site recently, the difficulty is with blogger/blogspot, the hosting service for many blogs and blawgs. The site has been loading slowly, or your browser may tell you that the site does not exist at all. Recently, I have noted that Internet Explorer is treating the blog as an FTP site, and asking whether I wish to open or download the file. In addition, the loading time for most blogspot blogs has now exceeded Google's tolerances, and if you normally visit via a google search such as "legal theory" or "legal theory blog," you may be referred to the February 2003 archives rather than the main page. You can always return to the main page by clicking on "Home" on the sidebar at the very top. I am losing hope that this problem will be cleared up in the immediate future, but I am keeping my fingers crossed.


 
Internet Governance and Democracy Deficits One of the most interesting questions in the theory of Internet governance concerns the role of democracy. That is why I was particularly interested when I saw that Dan Hunter (Pennsylvania, Wharton) has uploaded his paper, ICANN and the Concept of Democratic Deficit, forthcoming in the Loyola of Los Angeles Law Review, on SSRN. Here is the abstract:
    The Internet Corporation for Assigned Names and Numbers (ICANN) is an institution besieged. It has endeavored to be democratic but its attempts to do so have been disastrous. The typical explanation for this is that the problem is with ICANN: it fails to meet its democratic obligations. My view is that the problem is with our understanding of "democracy." Democracy is an empty concept that fails to describe few, if any, of our genuine political commitments. In the real world, the failings inherent in "democracy" have been papered over by some unusual characteristics of the physical political process. However, in online trans-national institutions like ICANN, democracy is exposed as a poor substitute for a number of other conceptions of our political commitments.
    This Article seeks to articulate these political commitments and to explain why democracy and ICANN are such a poor mix. It begins by charting the rise of ICANN and its attempts to be democratic. It then explains why democracy is an empty shell of a concept. It then explores some features of democracy and ICANN, explaining why the online world exposes limitations in implications of democracy such as the nature of the demos, the idea of constituencies, direct democracy, voting, and the like. It concludes that ICANN's example demonstrates that democracy is in fact anything but a coherent general theory of political action. We need to consider, then, whether we should continue to berate ICANN for its undemocratic actions.
Personally, I am in sympathy with Hunter's bottom line, although I find myself balking at the reasons he provides. The problem isn't with democracy. Not surprisingly, Hunter's arguments fall far short of a compelling case against democracy in general. Rather, the problem is that ICANN doesn't provide the kind of service that is an appropriate candidate for democratic administration. That is, ICANN does not provide a public good. Karl Manheim and I make this point at length in An Economic Analysis of Domain Name Policy. And for a fully developed account of the case for democracy in internet governance, you must see Michael Froomkin's Habermas@discourse.net: Toward a Critical Theory of Cyberspace, also available in the Harvard Law Review. For a comment on Froomkin, see Rational Discourse and Internet Governance.


 
New Papers on the Net Here is today's roundup:
    Jean Lanjouw (Yale) and Joshua Lerner (Harvard, Finance Unit) post Preliminary Injunctive Relief: Theory and Evidence from Patent Litigation. Here is the abstract:
      This paper examines the suggestion that established plaintiffs request preliminary injunctions to engage in predation against less financially healthy firms. We first present a model in which differences in litigation costs drive the use of preliminary injunctions in civil litigation. The hypothesis is tested using a sample of 252 patent suits, which allows us to characterize the litigating parties while controlling for the nature of the dispute. The evidence is consistent with the predation hypothesis. We then explore various implications of the model and the impact of policy reforms.
    David Evans (NERA Economic Consulting - Cambridge Office), Atilano Padilla Blanco (National Economic Research Associates Inc. (NERA) - Cambridge Office) and Christian Ahlborn (Linklaters & Alliance) upload The Antitrust Economics of Tying: A Farewell to Per Se Illegality, forthcoming in the Antitrust Bulletin. From the abstract:
      We describe the main features of U.S. and E.C. tying law and consider their recent evolution. We then review the economic literature on tying and summarize its main implications for the analysis of tying cases: First, recent advances in economic theory unambiguously endorse a rule-of-reason approach to tying such as that adopted by the D.C. Circuit Court of Appeals in Microsoft III. Second, there is no economic basis for a per se prohibition of tying. And third, the modified per se rule adopted by the U.S. Supreme Court in Jefferson Parish does not accurately screen pro-competitive from anticompetitive tying. Drawing on the findings of the economic models developed by the Chicago and post-Chicago Schools, we conclude by proposing a three-step test to implement rigorously a rule-of-reason analysis to tying cases.
    William Carney (Emory) and Mark Heimendinger (Milbank, Tweed, Hadley & McCloy) offer Appraising the Non-Existent: The Delaware Courts' Struggle with Control Premiums, forthcoming in the University of Pennsylvania Law Review. From the abstract:
      This paper examines the holdings of the Delaware courts that a control premium must be added to the market value of shares in freeze-out transactions. It finds this result is not required by prior Delaware law. We argue that there is no control premium absent a current transaction in control, and that assumptions of control premia in freeze-outs are simply speculation. Awarding control premia provides a windfall gain for public shareholders, and is contrary to the treatment of public shareholders who receive publicly traded shares in other mergers.
    Peter Henning (Wayne State) presents Misguided Federalism, forthcoming in the Missouri Law Review. From the abstract:
      The article considers the effect of the Supreme Court's recent federalism decisions - specifically Lopez and Morrison - on the scope of federal criminal law. The Court in Morrison expressed concern that the extension of federal authority through the Violence Against Women Act to rape, a common law felony prosecuted in every state, went beyond Congress's legislative power because "we can think of no better example of the police power, which the Founders denied the National Government and reposed in the States, than the suppression of violent crime and vindication of its victims." Invoking federalism as an independent principle to limit the federal government's authority to prosecute crimes that state and local authorities ordinarily handle certainly has a superficial appeal. Lopez and Morrison both refer to a seemingly inviolable realm of state authority that appears to include state and local control - perhaps to the exclusion of the federal government - over the prosecution of "local" crimes. The Court's federalism analysis gives the impression of separate spheres of authority over the criminal law that relegates Congress to legislating only in those areas that are obviously "national" in scope. The notion of mutually exclusive spheres hinted at in Lopez and Morrison - at least with respect to criminal statutes - overstates the role of federalism in demarcating the authority of the national and state governments.
      The article argues that it is a misguided view of federalism that the federal government somehow invades the sovereignty of the states by pursuing criminal prosecutions for certain types of conduct already subject to prosecution by state and local authorities. The source of that misunderstanding is the Supreme Court's broad language in Lopez and Morrison asserting that matters traditionally viewed as "local" - including the prosecution of violent crimes normally brought in state and local courts - are reserved in some way from regulation by the national government. Under this approach, federalism becomes not just an aspect of constitutional analysis, but also a new type of defense in federal prosecutions. The article analyzes decisions of the lower courts imposing an independent federalism limit on prosecutions that are not, according to the judges, of sufficient national interest. This misuse of federalism is, in reality, a new form of supervisory power to control prosecutors through a flawed application of federalism.
    Andrew Guzman (UC Berkeley) posts The Case for International Antitrust. From the abstract:
      Competition policy is made at the national level. A great deal of the business activity that it seeks to regulate takes place at the international level. It is universally accepted that some level of international cooperation is necessary to make regulation effective under these conditions. There is, however, a considerable diversity of views on the question of how much cooperation is appropriate. The presence of international activity distorts competition policy in at least two ways. First, it causes the preferred domestic policies of states to diverge from what they would be in the absence of such activity. States that are net exporters of goods sold in imperfectly competitive markets have an incentive to weaken their antitrust rules and states that are net importers of such goods have reason to tighten theirs. Second, the choice of law rules adopted to establish the jurisdictional reach of domestic law create an additional divergence between the substantive laws actually chosen and those that would be chosen by a closed economy. States that choose to limit their laws to activities that take place within their territory are better off if they also weaken their substantive laws. States that extend the reach of their laws generate overlapping jurisdiction and force firms to run a gauntlet of legal rules that includes the strictest elements of each state's laws, leading to a de facto regulatory standard that is stricter than that of any single state. This chapter explains why these problems cannot be resolved through the sort of low levels of cooperation that dominate current international antitrust efforts. Information sharing in particular cannot address the distortions to competition policy generated by cross-border business. Choice of law strategies can improve the regulatory framework, but can only partially address the problem and even this would require a dramatic change to existing policies. What is required, then, is a deeper form of cooperation on the subject of substantive laws or international standards. Though cooperation of this sort is difficult to achieve, there is no other way to address the policy distortions created when national authorities try to regulate international competition.
    Robert Schapiro and William Buzbee (Emory) upload Unidimensional Federalism: Power and Perspective in Commerce Clause Adjudication, forthcoming in the Cornell Law Review. From the abstract:
      Since 1995, the United States Supreme Court has applied a new form of rigorous judicial scrutiny in assessing the constitutional limits of the Commerce Clause, a provision that long has functioned as the central authorization of congressional power. As critics on and off the bench have noted, the Court has advanced its conception of federalism by requiring that the regulated activity itself be economic or commercial in nature. A crucial aspect of the Court's approach that has received less attention is the prior step of selecting the relevant activity for constitutional analysis. Legislation can be viewed from a variety of different perspectives, and the choice of vantage points can be critical in determining the requisite commercial nexus. In the wake of the New Deal, the Court upheld legislation if it had a commercial connection when viewed from any perspective. This Article argues that in a break from a half century of settled jurisprudence, the Court recently has insisted on selecting a single perspective as determinative. This approach, which we term "unidimensional," relocates substantial discretion from Congress to the judiciary. Drawing on the insights of recent scholarship on statutory interpretation, we illuminate the flaws in the Court's unidimensional approach. Legislation implicates multiple motives, targets, beneficiaries, and effects. For the Court to pick out a single element as dispositive constitutes a groundless form of reductionism. Here, as in other aspects of its recent jurisprudence, the Court focuses on the common-law rights holder as the fulcrum of analysis. This framework tilts the doctrine against regulation, as it inevitably casts the state as a suspect interloper. Lower court cases evidence the confusion that the Court's narrow commercial activity analysis has generated. In place of this flawed, unidimensional approach, we offer a "legislativist" framework for Commerce Clause cases. Under the legislativist method, the text of the legislation guides the judicial identification of the relevant activities for purposes of Commerce Clause scrutiny. This approach retains meaningful judicial oversight, while avoiding the arbitrary usurpation of congressional authority inherent in the Court's current jurisprudence.
    A. Mitchell Polinsky (Stanford) posts Principal-Agent Liability, forthcoming as a chapter in AN INTRODUCTION TO LAW AND ECONOMICS (3d ed. 2003). From the abstract:
      This essay is a new chapter in An Introduction to Law and Economics (Third Edition, forthcoming 2003). It reexamines some of the principles of liability from earlier chapters when harm is caused by an agent who is under the supervision of a principal. The primary questions addressed are: Is the optimal level of liability different when harm is caused by an agent of a principal rather than by a single actor? Should liability be imposed on the principal, the agent, or both? If on both, what is the optimal mix of liability between the principal and the agent?


 
Barnett on the Necessary and Proper Clause Randy Barnett (Boston University) has uploaded a new paper to SSRN: The Original Meaning of the Necessary and Proper Clause, forthcoming in the University of Pennsylvania Journal of Constitutional Law. Here is the abstract:
    This article presents evidence of the original public meaning of the Necessary and Proper Clause. I show that the meanings of "necessary" we have inherited from John Marshall's discussion in McCulloch v. Maryland - a choice between "indispensably requisite" on the one hand and mere "convenience" on the other - are undercut by the available evidence. The truth lies somewhere in between. While these findings will, of course, be of interest to originalists, they should also interest the many constitutional scholars who consider original meaning to be one among several legitimate modes of constitutional analysis, as well as those scholars for whom original meaning is the starting point of a process in which it is "translated" into modern terms. By either account, it is important to get the original meaning right, even if it is not alone dispositive of today's cases and controversies.
    This is the companion to two previous articles - "The Original Meaning of the Commerce Clause" 68 U. Chi. L. Rev. 101 (2001) and "New Evidence on the Original Meaning of the Commerce Clause" 55 U. Ark. L. Rev. 847 (2003) - in which I presented evidence of the public meaning of Congress's power "To regulate Commerce with foreign Nations, and among the several States, and with the Indian Tribes." To determine the constitutionality of any particular legislation and evaluate judicial applications of the Commerce Clause, however, we must also consider the meaning of the Necessary and Proper Clause. For the expansive post-New Deal reading of congressional power owes as much to the Supreme Court's interpretation of the Necessary and Proper Clause as it does to its expansive reading of the Commerce Clause.
Barnett is one of our most thoughtful and original constitutional theorists, and this essay is another building block in his important project.


Monday, June 02, 2003
 
Bashman on Stare Decisis Howard Bashman (How Appealing) has a terrific op-ed in the Los Angeles Times (registration required). Here is a taste:
    The U.S. Supreme Court sits atop this nation's hierarchical system of justice. Once the Supreme Court decides a question of constitutional law, judges serving on lower courts must apply the Supreme Court's ruling, whether they agree with it or not. A federal judge whose conscience prevents him or her from applying the law faithfully should, at a minimum, refuse to participate in deciding those cases in which the impediment arises. For if one judge can elevate his conscience above the law, so can others, and soon we will have a system where judges at every level are free to decide cases based on personal predilection rather than binding judicial precedent and the texts of constitutions and statutes.


 
Adler on Risk, Death, and Harm Matthew Adler (Penn) has posted Risk, Death and Harm: The Normative Foundations of Risk Regulation on SSRN. Here is the abstract:
    Is death a harm? Is the risk of death a harm? These questions lie at the foundations of risk regulation. Agencies that regulate threats to human life, such as the EPA, OSHA, the FDA, the CPSC, or NHTSA, invariably assume that premature death is a first-party harm - a welfare setback to the person who dies - and often assume that being at risk of death is a distinct and additional first-party harm. If these assumptions are untrue, the myriad statutes and regulations that govern risky activities should be radically overhauled, since the third-party benefits of preventing premature death and the risk of premature death are often too small to justify the large compliance costs that these laws create.
    In this Article, I consider the harmfulness of death, and of the risk of death, in a philosophically rigorous way. The analysis is complicated, since a variety of plausible theories of welfare have been proposed, and since risk too is a multifaceted concept. A given person P's "risk" of death might be risk in a Bayesian sense (some person's subjective probability that P will die), or risk in the frequentist sense (the objective frequency with which persons like P die prematurely as a result of the kind of threat to which P is exposed). These two conceptions of risk are very different, yet too often are not distinguished in legal or policy-analytic writing about risk. As for the harmfulness of death: this raises knotty philosophical problems, problems that have prompted some contemporary philosophers to deny that the dying person is worse off than she would have been had she continued to live.
    I ultimately conclude that death is a first-person welfare setback - common sense is vindicated here, I argue - as is risk in the Bayesian sense, but that risk in the frequentist sense is not. This conclusion has implications for a range of regulatory practices - specifically, for cost-benefit analysis, risk-risk analysis, the interpretation of statutes that create health or safety thresholds, environmental justice policy, and comparative risk analysis - and also for tort and criminal law. These implications are explored, at length, in the final section of the Article. In particular: the widespread use of frequentist risk measures as a determinant of regulatory choice is misguided. EPA, OSHA, FDA and other federal and state agencies typically determine how stringently to regulate some toxin by looking (at least in part) to the frequentist risk imposed by the toxin on the maximally exposed, highly exposed, or representative individual. Similarly, environmental justice analysis is often keyed to the distribution of frequentist risks. And some propose that regulatory priority-setting (so-called comparative risk assessment) also take into consideration frequentist risk. This regulatory focus on frequentist risk was encouraged by the Supreme Court's seminal decision in the "Benzene" case (Industrial Union Dept v. American Petroleum Institute, 1980), and is endorsed by the risk assessment community. But the practice has no normative basis, and should be abandoned. Similarly, risk-imposition in the frequentist sense should be neither tortious nor criminal - at least if harmfulness is a precondition for liability in these domains, as it may well be.
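To make the abstract's two senses of risk concrete, here is a minimal illustration (the numbers are hypothetical and mine, not Adler's). Suppose that 5 of the 50,000 persons exposed to a given toxin at P's level die from it each year. Then:

    \( r_{\mathrm{freq}} = \frac{\text{deaths among persons exposed like } P}{\text{number so exposed}} = \frac{5}{50{,}000} = 10^{-4} \qquad r_{\mathrm{Bayes}} = \Pr(P \text{ dies} \mid E) \)

where E is a particular observer's evidence. An observer who knows only the frequency data might set the Bayesian risk at 10^-4 as well, but an observer who also knows P's dose or medical history may rationally set it higher or lower. The two measures can thus coincide numerically while remaining conceptually distinct, which is why the Article can vindicate the Bayesian sense while rejecting the frequentist one.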


 
Workshops Today Here is the roundup:


Sunday, June 01, 2003
 
Oman on Audi Nate Oman has a very good post on Robert Audi's book titled Religious Commitment and Secular Reason. Audi is the most articulate advocate of the view that public policy should be based on secular reasons. Here is a taste of Oman's post:
    Now it may be that Audi has independently adequate secular reasons justifying religious freedom. However, to the extent that by religious freedom we mean some special consideration given to liberty for religious conscience and conduct that we don't give for strongly held secular beliefs, I am doubtful that Audi will be able to come up with an argument that does not depend for its persuasiveness at least in part on the lingering force of religious freedom's theological roots. It may be that for this reason Audi and others do not actually believe in religious freedom per se, but simply in some kind of personal autonomy of which religion is a subset. However, if this is the case then it seems that our fear of breaching the wall of separation should be less rather than greater. Surely it is the case that any deeply held ideological belief runs the risk of justifying coercion of others. To the extent that religion is simply another ideological belief, then it doesn't seem to present any special dangers. To the extent that it presents special dangers only to religious freedom, then it seems directing heightened concern towards religious arguments for this reason is unjustified. Why should we be especially solicitous of religion? There are arguments that seek to answer this question, but once Audi opens up the Pandora's box of historically religious arguments, I don't see how we can be certain that our assent to these arguments doesn't result from some invidious influence of the arguments' religious past.
I have had the very great pleasure of participating with Audi in a variety of conferences on these general issues, and although I find myself in disagreement with him on the question whether religious reasons should be included in public political debate, he is one of the most interesting and articulate thinkers on these difficult questions. For my take on some of these issues, see my Constructing an Ideal of Public Reason.


 
Volokh on Free Expression & IP Eugene Volokh (UCLA) has posted Freedom of Speech and Intellectual Property: Some Thoughts After Eldred, 44 Liquormart, Saderup, and Bartnicki to SSRN, also forthcoming in the Houston Law Review. Here is the abstract:
    This article makes several different observations about the Free Speech Clause and intellectual property law, in light of some recent doctrinal developments: (1) the Court's decision about copyright in Eldred v. Ashcroft; (2) the Court's evolving commercial speech jurisprudence in cases such as 44 Liquormart v. Rhode Island, which is relevant to trademark dilution law; (3) the California Supreme Court's right of publicity decision in Comedy III Productions v. Saderup; and (4) the Court's decision in Bartnicki v. Vopper, which indirectly bears on trade secret law.
If you are interested in the relationship between free expression and IP, you will not want to miss this.


 
New Papers on the Net Here is the roundup:
    Kathryn Abrams (UC Berkeley) uploads 'Groups' and the Advent of Critical Race Scholarship, forthcoming in Issues in Legal Scholarship: The Origins and Fate of Antisubordination Theory. From the abstract:
      This essay reconsiders Owen Fiss's "Groups and the Equal Protection Clause," sometimes described as having instigated critical scholarship in the area of race. It argues that, from a contemporary vantage point, the article appears ambivalent in its critical thrust. Fiss's approach forsakes the jurisprudential comforts of neutrality, individualism and means/ends analysis for an explicit focus on the material and dignitary circumstances of African-Americans. Yet its account of racial disadvantage is surprisingly de-contextualized: it reflects neither the contemporaneous perspectives of its African-American subjects, nor more than a fleeting sense of the agonistic, political dynamics that produced it. This reified rendering yields an account of Black disadvantage that is decoupled from a corresponding account of white supremacy. The essay reflects on the sources of Fiss's critical ambivalence, and considers its implications for the Court's increasingly firm embrace of a single mediating principle.
    Timothy Zick (St. John's) posts Marbury Ascendant: The Rehnquist Court and the Power to 'Say What the Law Is', forthcoming in the Washington and Lee Law Review. Abstract:
      Marbury posited that it is "emphatically" the power of the judiciary to "say what the law is." This Article focuses on two areas in which the Rehnquist Court has dramatically advanced a judicial supremacy model for interpreting legal meaning. The first is the Court's newly restrictive interpretation of Congress' power under Section 5 of the Fourteenth Amendment to "enforce" the guarantees set forth in Section 1 of the amendment. The second development is the Court's diminished deference to agency interpretations of law under the Mead-Christensen doctrine. Because commentators have generally tended to address the recent spate of Section 5 precedents as record-centric intrusions on legislative fact-finding authority, rather than bold assertions of interpretive supremacy, the connection between these parallel developments has been missed. This Article links the Section 5 cases and the Mead-Christensen doctrine as parallel manifestations of the ascendancy of judicial power under the Rehnquist Court. The Mead-Christensen model diminishes, yet does not wholly foreclose, judicial deference to legal interpretations by other branches. Conceptualizing the Section 5 precedents as a struggle over interpretive supremacy rather than institutional fact-finding, the Article proposes that the Court apply the Mead-Christensen model for reviewing agency interpretations of law to Congress' exercise of the Section 5 enforcement power. Application of the agency model to exercises of the Section 5 power would preserve the Court's ability to render definitive constitutional interpretations, while at the same time preserving, in instances where the Court has not plainly foreclosed alternative interpretations, a sphere of legislative enforcement and interpretation that is entitled to judicial "respect" insofar as any particular enactment has the "power to persuade." In short, the model would allow for a true sharing of the Section 5 power in some circumstances, as is contemplated by the constitutional text, while rejecting either judicial supremacy or wholesale deference to legislative enforcement decisions.
    Larry Karp (UC Berkeley) posts Property Rights, Mobile Capital, and Comparative Advantage. Abstract:
      Recent papers use sector-specific factor models with mobile labor to show that imperfect property rights can be a source of comparative advantage. In these models, weaker property rights to the specific factor in a sector attract the mobile factor and increase the country's comparative advantage for that sector. If capital in addition to labor is mobile, and if the benefits of capital are non-excludable or if the degree of property rights is endogenous, a deterioration of property rights has ambiguous effects on comparative advantage. The presence of a second mobile factor also makes the relation between the equilibrium wage-rental ratio and the degree of property rights ambiguous.
    Jeffrey Rachlinski (Cornell) posts Misunderstanding Ability, Misallocating Liability, forthcoming in the Brooklyn Law Review. From the abstract:
      In the Anglo-American legal tradition, people are responsible for damage caused by their failure to conform their conduct with that of the "reasonable person." With few exceptions, so long as one's conduct conforms to that of the reasonable person, then even if the conduct harms others, it does not create liability. Courts understand that the "reasonable person" is an idealized legal fiction but believe the construct to be a useful way to identify culpable conduct. For the reasonable-person test to be useful, courts must identify the characteristics of this reasonable person. As to cognitive and perceptual abilities, courts endow this hypothetical reasonable person with what they believe are "ordinary" skills and abilities. Recent cognitive psychological research, however, indicates that intuitions about ordinary skills and abilities vastly overstate the cognitive skills people actually possess. Consequently, reliance on intuition and folk wisdom about ordinary abilities leads courts to overattribute accidents to negligent carelessness, rather than unavoidable misfortune.


 
Blogging From Rutgers: Legal "Realism" Yesterday was the final day of the seminar on Mind, Language, and Law at Rutgers Law School in Camden--organized by Dennis Patterson and Kim Ferzan. Colin McGinn was the speaker and his topic was realism. McGinn's position is that, pace Michael Dummett, there is no single sense in which realism is used in the various realist/antirealist philosophical debates. Rather, McGinn suggests that there are at least three senses of realism:
    --Realism as Reference. In the first sense, we say that realism is connected with reference. Hence, a realist about mathematics is someone who says that mathematical objects or propositions refer to something. A moral realist is someone who believes that moral terms or propositions refer. And reference is understood here in a philosophical sense, as specified, for example, by the way that Frege or Russell used the term "reference."
    --Realism as Objectivity. In a second sense, one is a realist about something if one believes that thing is "objective" as opposed to "subjective," where objective is understood as meaning independent of the mind. In this sense, one is a realist about the external world if one believes that mountains and stars exist independently of human minds.
    --Realism as Determinacy. In a third sense, one is a realist about a domain if one believes that propositions within the domain are determinate in the sense that they are either true or not true. (To simplify, true or false.) Thus "Hamlet has a mole on his left shoulder" is neither true nor false, because Shakespeare never tells us whether Hamlet does or does not have such a mole--therefore, on the determinacy conception of realism, one would be an antirealist about Hamlet's mole. (This sense is stated schematically just below.)
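Put a bit more formally (this is my gloss, not McGinn's own formulation), the determinacy sense of realism is simply the thesis of bivalence restricted to a domain D:

    \( \text{Realism about } D \;\equiv\; \forall p \in D \; \big( T(p) \lor T(\neg p) \big) \)

The antirealist about D holds instead that D contains truth-value gaps: for some p in D, neither T(p) nor T(\neg p) holds. On this rendering, "Hamlet has a mole on his left shoulder" is such a gap, and one is an antirealist about the fictional domain.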
We might then ask: what about law? Is law real? Or are legal propositions such as "affirmative action violates the equal protection clause" real? Consider each of the three senses of realism:
    --Reference. Does the proposition Affirmative action violates the equal protection clause refer to anything? Or to simplify, does "The University of Michigan Law School's affirmative action program violates the equal protection clause" refer? There is a complex event to which the phrase "the University of Michigan Law School's affirmative action program" refers. On Frege's theory, the proposition will refer if it has a truth value. If Dworkin's right answer thesis is correct, then we might say that propositions like X violates the equal protection clause do have truth values. If, on the other hand, we deny that legal questions have right answers, then we might conclude that such sentences do not refer, and hence that law is not real.
    --Objectivity. Is the question whether affirmative action violates equal protection mind independent? In one sense, obviously not. The equal protection clause is a product of the human mind. On the other hand, we might say that given that humans have created the equal protection clause, its meaning is independent of what we think about its meaning.
    --Determinacy. And of course, there is (or was) a raging jurisprudential debate over the determinacy of law. So those who hold that the law is indeterminate are not realists in this sense.
And then we get the question: were the American legal realists "realists" in any sense that relates to other forms of philosophical realism? Once the question is framed this way, it seems that the "realists" were anti-realist in more than one sense. The American legal realists thought that legal concepts lacked referents--hence the famous claims of "transcendental nonsense." The legal realists sometimes claimed that the law was mind dependent--or in the case of Frank's famous aphorism about judges and breakfast, stomach dependent. And the legal realists were famous for their claims about the indeterminacy of law. McGinn was marvelous.