Legal Theory Blog



All the theory that fits!


This is Lawrence Solum's legal theory weblog. Legal Theory Blog comments and reports on recent scholarship in jurisprudence, law and philosophy, law and economic theory, and theoretical work in substantive areas, such as constitutional law, cyberlaw, procedure, criminal law, intellectual property, torts, contracts, etc.

Sunday, July 23, 2006
 
New Location for Legal Theory Blog The new location for Legal Theory Blog is:


Saturday, July 22, 2006
 
Legal Theory Bookworm The Legal Theory Bookworm recommends Dred Scott and the Problem of Constitutional Evil by Mark A. Graber. Here's a blurb:
    An examination of what is entailed by pledging allegiance to a constitutional text and tradition saturated with concessions to evil. The Constitution of the United States was originally understood as an effort to mediate controversies between persons who disputed fundamental values, and did not offer a vision of good society. In order to form a 'more perfect union' with slaveholders, late eighteenth century citizens fashioned a constitution that plainly compelled some injustices and was silent or ambiguous on other questions of fundamental right. This constitutional relationship could survive only as long as a bisectional consensus was required to resolve all constitutional questions not settled in 1787. Dred Scott challenges persons committed to human freedom to determine whether antislavery northerners should have provided more accommodations for slavery than were constitutionally strictly necessary or risked the enormous destruction of life and property that preceded Lincoln's new birth of freedom.


 
Download of the Week The Download of the Week is Terms of Use by Mark Lemley. Here is the abstract:
    Electronic contracting has experienced a sea change in the last decade. Ten years ago, courts required affirmative evidence of agreement to form a contract. No court had enforced a “shrinkwrap” license, much less treated a unilateral statement of preferences as a binding agreement. Today, by contrast, it seems widely (though not universally) accepted that if you write a document and call it a contract, courts will enforce it as a contract even if no one agrees to it. Every court to consider the issue has found “clickwrap” licenses, in which a user clicks “I agree” to standard form terms, enforceable. A majority of courts in the last ten years have enforced shrinkwrap licenses, on the theory that people agree to the terms by using the software they have already purchased. Finally, and more recently, an increasing number of courts have enforced “browsewrap” licenses, in which the user does not see the contract at all but in which the license terms provide that using a Web site constitutes agreement to a contract whether the user knows it or not. Collectively, we can call these agreements “terms of use” because they control (or purport to control) the circumstances under which buyers of software or visitors to a public Web site can make use of that software or site. The rise of terms of use has drawn a great deal of attention because of the mass-market nature of the resulting agreements. Terms of use are drafted with consumers or other small end users in mind. Commentators - myself among them - have focused on the impact of this new form of contract on consumers. But in the long run they may have their most significant impact not on consumers but on businesses. The law has paid some attention to the impact of terms of use on consumers. 
Virtually all of the cases that have refused to enforce a browsewrap license have done so in order to protect consumers; conversely, virtually all the cases that have enforced browsewrap licenses have done so against a commercial entity. And shrinkwrap and clickwrap cases, while enforcing some contracts against consumers, have protected those consumers against certain clauses considered unreasonable. Businesses, however, are presumed to know what they are doing when they access another company's Web site, so courts are more likely to bind them to that site's terms of use. Sophisticated economic entities are unlikely to persuade a court that a term is unconscionable. And because employees are agents whose acts bind the corporation, the proliferation of terms of use means that a large company is likely “agreeing” to dozens or even hundreds of different contracts every day, merely by using the Internet. And because no one ever reads those terms of use, those multiple contracts are likely to have a variety of different terms that create obligations inconsistent with each other and with the company's own terms of use. We have faced a situation like this before, decades ago. As business-to-business commerce became more common in the middle of the 20th Century, companies began putting standard contract terms on the back of their purchase orders and shipment invoices. When each side to a contract used such a form, courts had to confront the question of whose form controlled. After unsuccessful judicial experimentation with a variety of rules, the Uniform Commercial Code resolved this “battle of the forms” by adopting a compromise under which if the terms conflicted, neither party's terms became part of the contract unless the party demonstrated its willingness to forego the deal over it. Rather, the default rules of contract law applied where the parties' standard forms disagreed, but where neither party in fact insisted on those terms. I have three goals in this paper. 
First, I explain how courts came to enforce browsewrap licenses, at least in some cases. Second, I suggest that if browsewraps are to be enforceable at all, enforcement should be limited to the context in which it has so far occurred - against sophisticated commercial entities who are repeat players. Finally, I argue that even in that context the enforcement of browsewraps creates problems for common practice that need to be solved. Business-to-business (b2b) terms of use are the modern equivalent of the battle of the forms. We need a parallel solution to this “battle of the terms.” In Part I, I describe the development of the law to the point where assent is no longer even a nominal element of a contract. In Part II, I explain how the recent decisions concerning browsewrap licenses likely bind businesses but not consumers, and the problems that will create for commercial litigation. Finally, in Part III, I discuss possible ways to solve this coming problem and some broader implications the problem may have for browsewrap licenses generally.
Download it while it's hot!


Friday, July 21, 2006
 
Welcome to the Blogosphere . . . . . . to Jurisdynamics, hosted by Jim Chen with contributions from Daniel A. Farber and J.B. Ruhl.


 
Bernstein on Lochner David Bernstein (George Mason University - School of Law) has posted Lochner v. New York: A Centennial Retrospective on SSRN. Here is the abstract:
    This Article discusses two aspects of Lochner's history that have not yet been adequately addressed by the scholarly literature on the case. Part I of the Article discusses the historical background of the Lochner case. The Article pays particular attention to the competing interest group pressures that led to the passage of the sixty-hour law at issue; the jurisprudential traditions that the parties appealed to in their arguments to the Court; the somewhat anomalous nature of the Court's invalidation of the law; and how to understand the Court's opinion on its own terms, shorn of the baggage of decades of careless and questionable historiography. In short, Part I places the Lochner opinion firmly in its historical context. Part II of this Article explains how Lochner, which existed in relative obscurity for decades, became a leading anti-canonical case. As discussed in Part II, Lochner was initially famous only because of Oliver Wendell Holmes's much-cited dissent. The Lochner case's modern notoriety, however, arose largely because the post-New Deal Supreme Court continued to treat the Lochnerian cases of Meyer v. Nebraska and Pierce v. Society of Sisters as sound precedent. Meyer, in particular, eventually became an important basis for the Warren and Burger Courts' substantive due process jurisprudence in the landmark cases of Griswold and Roe v. Wade. Critics of those opinions attacked the Court for following in Lochner's footsteps, and, with some significant help from Laurence Tribe's 1978 constitutional law treatise, Lochner came to represent an entire era and style of jurisprudence. Recently, the ghost of Lochner has been kept very much alive by Justices Kennedy, O'Connor, and Souter, each of whom has praised Meyer and Pierce as engaging in appropriately aggressive due process review of police power regulations, while straining to distinguish those opinions from Lochner. 
Meanwhile, a revival of limited government ideology on the legal right, most notably in the Rehnquist Court's federalism opinions has raised (perhaps exaggerated) fears on the legal left that the conservatives seek to return, in spirit if not in letter, to the discredited jurisprudence of the Lochner era. Yet virtually no one, on either the right or the left, challenges what may be the strongest evidence of Lochner's influence on modern jurisprudence: the Supreme Court's use of the Fourteenth Amendment's Due Process Clause to protect both enumerated and unenumerated individual rights against the states.


 
Appointments Chairs Over at Prawfsblawg, the comments to the post entitled Faculty Appointments Chairs provide a list of the appointments chairs at various American law schools.


 
Barton on Teaching & Scholarship--and some comments! If you are a legal academic, you should probably read this.
Benjamin Barton (University of Tennessee, Knoxville - College of Law) has posted Is There a Correlation Between Scholarly Productivity, Scholarly Influence and Teaching Effectiveness in American Law Schools? An Empirical Study on SSRN. Here is the abstract:
    This empirical study attempts to answer an age-old debate in legal academia: whether scholarly productivity helps or hurts teaching. The study is of an unprecedented size and scope. It covers every tenured or tenure-track faculty member at 19 American law schools, a total of 623 professors. The study gathers four years of teaching evaluation data (calendar years 2000-03) and creates an index for teaching effectiveness. This index was then correlated against five different measures of research productivity. The first three measure each professor's productivity for the years 2000-03. These productivity measures include a raw count of publications and two weighted counts. The scholarly productivity measure weights scholarly books and top-20 or peer-reviewed law review articles above casebooks, treatises or other publications. By comparison, the practice-oriented productivity measure weights casebooks, treatises and practitioner articles at the top of the scale. There are also two measures of scholarly influence. One is a lifetime citation count, and the other is a count of citations per year.
    These five measures of research productivity cover virtually any definition of research productivity. Combined with four years of teaching evaluation data the study provides a powerful measure of both sides of the teaching versus scholarship debate.
    The study correlates each of these five different research measures against the teaching evaluation index for all 623 professors, and each individual law school. The results are counter-intuitive: there is no correlation between teaching effectiveness and any of the five measures of research productivity. Given the breadth of the study, this finding is quite robust. The study should prove invaluable to anyone interested in the priorities of American law schools, and anyone interested in the interaction between scholarship and teaching in higher education.
And here is a bit more from the paper:
    The teaching evaluation data came in different forms for different institutions, from access to a university website that gathered the data, to a single page amalgamation, to physical copies of every student evaluation during the period. From these data I chose the question on the evaluation sheet that most closely measured teaching effectiveness. For example, the University of Tennessee form actually asks the students to rank the professor from 1-5 (with 5 being the highest ranking) on the “Instructor's effectiveness in teaching material.” The results can be found on a publicly accessible website (University of Tennessee 2006). Of the 19 schools, 13 schools asked a somewhat similar question and ranked the professor from 1-5. Two of the other schools ranked from 5-1 (with 1 being the best ranking), one ranked from 4-1 (again with 1 as the best), and one each ranked from 1-4, 1-7, and 1-9, with 1 being the lowest.
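The scale harmonization the paper describes--mapping 1-5, 5-1, 4-1, 1-7, and 1-9 scales onto a single teaching-effectiveness index before correlating against productivity--can be sketched in a few lines of Python. This is a hypothetical illustration of the general technique, not code from the study; the function names, scale parameters, and sample data are all my own assumptions.

```python
def normalize(score, low, high, reverse=False):
    """Map a raw evaluation score onto [0, 1].

    Set reverse=True for scales where the LOWEST number is the best
    ranking (e.g. the schools that ranked 5-1 or 4-1 with 1 as best).
    """
    x = (score - low) / (high - low)
    return 1.0 - x if reverse else x

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative (invented) data: three professors' mean evaluations on a
# 1-5 scale, normalized, then correlated with a publication count.
teaching_index = [normalize(s, 1, 5) for s in (4.2, 3.8, 4.9)]
publications = [3, 7, 2]
r = pearson(teaching_index, publications)
```

A value of r near zero on the full 623-professor sample is what the paper reports; the toy numbers above merely show the mechanics.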
One more point--the study examines the correlation between global teaching effectiveness (across courses) and global scholarly productivity (across fields); it did not attempt to study correlations between teaching effectiveness and writing that is salient to the course for which effectiveness is being measured.
And one more issue--what about peer versus student evaluations? Again, a bit more from the paper:
    I also am aware that the use of teaching evaluations as a proxy for teaching effectiveness is somewhat controversial. There are studies, both within law schools and higher education in general, that show that teaching evaluations have biases, including biases based on race (Smith 1999), gender (Farley 1996), and even physical attractiveness (O’Reilly 1987). Other studies have shown that student teaching evaluations are positively correlated with other measures of teaching effectiveness, including peer reviews and output studies, suggesting at least that student measures track other alternative measures (Bok 2004). Many law faculty members have nevertheless argued to me that teaching evaluations are little more than a popularity contest. Some have even argued that teaching effectiveness is inversely correlated with teaching evaluations, since students tend to highly rank easy professors of little substance, while ranking professors who challenge them comparatively lower. For better or worse, I believe teacher evaluations are the only viable way to measure teaching effectiveness for a study of this breadth. My other choices were exceedingly unpalatable: 1) attempting to gather peer evaluation data, which is rarely if ever expressed numerically, and would also almost certainly not be provided by the host institutions; 2) some type of personal subjective measure of teaching effectiveness, potentially requiring me to personally visit classes and make my own call on teaching effectiveness.
At one level these results are completely unsurprising. What mechanism would result in a correlation between research productivity and teaching effectiveness? Here are some possibilities:
    --More research and more effective teaching might both be products of some underlying trait--such as diligence.
    --More research might result in more knowledge, which might result in more effective teaching.
    --More research might result in more knowledge, which might result in less effective teaching.
    --More research might divert effort from teaching, which might result in less effective teaching.
And so forth.
It is possible, however, that some of these effects might be observed with a different research design. If it were possible to do reliable assessments of the objective accuracy of the information conveyed and to compare that to research productivity in the particular field, for example, there might be some correlation between productivity and teaching effectiveness (in the objective sense). But that would not necessarily correlate with student ratings of teaching effectiveness. Why not? Because law students are generally incapable of evaluating "knowledge of the subject matter." For one thing, they lack a good baseline for comparison, because the truth is that the general level of subject-matter knowledge among legal academics is fairly shallow. And a student rarely learns enough about a subject to actually get ahead of the professor. Of course, we all know that newbie professors occasionally get caught in gaffes--but most experienced teachers learn how to avoid this, which is mostly a matter of not saying things you don't know, rather than mastering the subject so deeply that you can answer any question about any point accurately.
But with that caveat aside, this is clearly valuable research! Highly recommended for all legal academics!

Thanks to Lisa Fairfax via Dan Markel.


Thursday, July 20, 2006
 
Thursday Calendar
    University of Arizona Law: Mona Hymel, Globalization, Environmental Justice, and Sustainable Development: The Case of Oil


 
Beta Version of the New Legal Theory Blog If you would like to see the new look of Legal Theory Blog, here is the URL: In addition, there is a new companion blog that will collect the Legal Theory Lexicon posts: During the "beta test," I will be requesting feedback on various design elements of the new version of the blog. I would greatly appreciate your assistance! Check out the new blog for the current set of issues!

This post will move to the top of the blog until the transition is complete.


Wednesday, July 19, 2006
 
Lemley on Terms of Use Mark A. Lemley (Stanford Law School) has posted Terms of Use on SSRN. Here is the abstract:
    Electronic contracting has experienced a sea change in the last decade. Ten years ago, courts required affirmative evidence of agreement to form a contract. No court had enforced a “shrinkwrap” license, much less treated a unilateral statement of preferences as a binding agreement. Today, by contrast, it seems widely (though not universally) accepted that if you write a document and call it a contract, courts will enforce it as a contract even if no one agrees to it. Every court to consider the issue has found “clickwrap” licenses, in which a user clicks “I agree” to standard form terms, enforceable. A majority of courts in the last ten years have enforced shrinkwrap licenses, on the theory that people agree to the terms by using the software they have already purchased. Finally, and more recently, an increasing number of courts have enforced “browsewrap” licenses, in which the user does not see the contract at all but in which the license terms provide that using a Web site constitutes agreement to a contract whether the user knows it or not. Collectively, we can call these agreements “terms of use” because they control (or purport to control) the circumstances under which buyers of software or visitors to a public Web site can make use of that software or site. The rise of terms of use has drawn a great deal of attention because of the mass-market nature of the resulting agreements. Terms of use are drafted with consumers or other small end users in mind. Commentators - myself among them - have focused on the impact of this new form of contract on consumers. But in the long run they may have their most significant impact not on consumers but on businesses. The law has paid some attention to the impact of terms of use on consumers. 
Virtually all of the cases that have refused to enforce a browsewrap license have done so in order to protect consumers; conversely, virtually all the cases that have enforced browsewrap licenses have done so against a commercial entity. And shrinkwrap and clickwrap cases, while enforcing some contracts against consumers, have protected those consumers against certain clauses considered unreasonable. Businesses, however, are presumed to know what they are doing when they access another company's Web site, so courts are more likely to bind them to that site's terms of use. Sophisticated economic entities are unlikely to persuade a court that a term is unconscionable. And because employees are agents whose acts bind the corporation, the proliferation of terms of use means that a large company is likely “agreeing” to dozens or even hundreds of different contracts every day, merely by using the Internet. And because no one ever reads those terms of use, those multiple contracts are likely to have a variety of different terms that create obligations inconsistent with each other and with the company's own terms of use. We have faced a situation like this before, decades ago. As business-to-business commerce became more common in the middle of the 20th Century, companies began putting standard contract terms on the back of their purchase orders and shipment invoices. When each side to a contract used such a form, courts had to confront the question of whose form controlled. After unsuccessful judicial experimentation with a variety of rules, the Uniform Commercial Code resolved this “battle of the forms” by adopting a compromise under which if the terms conflicted, neither party's terms became part of the contract unless the party demonstrated its willingness to forego the deal over it. Rather, the default rules of contract law applied where the parties' standard forms disagreed, but where neither party in fact insisted on those terms. I have three goals in this paper. 
First, I explain how courts came to enforce browsewrap licenses, at least in some cases. Second, I suggest that if browsewraps are to be enforceable at all, enforcement should be limited to the context in which it has so far occurred - against sophisticated commercial entities who are repeat players. Finally, I argue that even in that context the enforcement of browsewraps creates problems for common practice that need to be solved. Business-to-business (b2b) terms of use are the modern equivalent of the battle of the forms. We need a parallel solution to this “battle of the terms.” In Part I, I describe the development of the law to the point where assent is no longer even a nominal element of a contract. In Part II, I explain how the recent decisions concerning browsewrap licenses likely bind businesses but not consumers, and the problems that will create for commercial litigation. Finally, in Part III, I discuss possible ways to solve this coming problem and some broader implications the problem may have for browsewrap licenses generally.
Highly recommended.


 
Ibrahim on Director Liability and the Nature of the Board Darian Ibrahim (University of Arizona) has posted The Board as a Collective Body or a Collection of Individuals: Implications for Director Liability on SSRN. Here is the abstract:
    How should we conceive of a corporate board: as a collective body, or as a collection of individuals? And what practical consequences flow from our conception? The high-profile 'Disney' case, just decided by the Delaware Supreme Court, flagged a critical yet overlooked issue in corporate law that goes to the very heart of the board's nature. That is, when directors are sued for breaching their fiduciary duties, should courts determine their liability collectively, where all directors stand or fall together, or individually, where some directors may be liable but others not liable? This choice of assessment procedure can mean the difference between a director's liability and her exoneration, and thus carries significant financial implications for directors, shareholders, insurers, and attorneys. It may also deepen our understanding of the nature of boards on a theoretical level. Although 'Disney' flagged the choice of assessment approach as an issue, it did not adequately address it. Surprisingly, given the issue's interesting theoretical underpinnings and practical importance, neither has any other court or academic. Working from what is essentially a blank slate, this article proposes the criteria by which the whole board and individual director assessment approaches should be compared, and then applies those criteria to the three different types of fiduciary duties claims that shareholders can bring (assuming that after 'Disney', a "duty of good faith" exists in at least some form). It concludes with a split decision: by recommending that courts select the individual director approach in duty of loyalty and good faith cases, and the whole board approach in duty of care cases. For the most part, these recommendations track the limited case law on the issue. A collective/individual focus that shifts depending on context also reveals that, on a theoretical level, boards are properly conceived of as both collective bodies and a collection of individuals.


 
Wednesday Calendar
    University of Cincinnati Law: Ronna Schneider, Religion in the Public Schools


 
Nichols on Chinese Regulation of Religion Joel A. Nichols (Pepperdine University - School of Law) has posted Dual Lenses: Using Theology and Human Rights to Evaluate China's 2005 Regulations on Religion (Pepperdine Law Review, Vol. 34, 2006) on SSRN. Here is the abstract:
    In order for China to move forward in the international community, it needs to continue to improve its standing on human rights issues. Of particular concern to many observers is the relationship between the government and religion. While foreign religious organizations and missionaries are still heavily regulated by a 1994 law, a new law respecting religious citizens and organizations within China went into effect in 2005. This new law is salutary in some respects in that it provides a much fuller delineation of the relationship between government and religion within China, and it appears more solicitous toward religious rights than previous regulations. But the new law is very vague in places and contains several provisions that could be troublesome and problematic depending on how and whether they are implemented. This paper is primarily built on a lecture given at Fuller Theological Seminary in 2005. Its premise is that international human rights laws are a useful but not sufficient benchmark by which to assess China's law. It is also important to understand the theological premises of some of the religious communities and believers for a broader measure of the efficacy and fairness of China's law. By focusing upon and using these dual lenses of law and religion, the paper offers both preliminary assessments of the 2005 law and also some possible ways forward that will further China's efforts to respect its heritage while simultaneously allowing it to better align itself with prevailing international norms regarding religious rights and obligations.


Tuesday, July 18, 2006
 
Young Scholars and Empirical Research There is a thoughtful post entitled Should Young Scholars Engage in Empirical Legal Research? by Lisa Fairfax at Conglomerate. Here is a taste:
    I am at the SEALS annual conference and getting a chance to see some interesting panels as well as workshops for new law professors. One panel I attended focused on new developments in empirical legal research. Although the work in which people were engaged sounded interesting, each panelist asked the question, should young scholars engage in such research? The answer appeared to be no, with some qualifications. There were essentially four reasons why people responded no to the query.
Read the post. To add a bit: very junior scholars without a solid foundation in empirical research methods should be wary--to say the least. Junior scholars ought to be exploiting their strengths, not starting over. Another important consideration is the prevalence of empirical scholarship on the senior faculty: as a rule of thumb, it's better to do scholarship that is likely to be understood and valued by those who will be voting on one's tenure. But if you are trained in empirical methods and are on a faculty with tenured faculty who do empirical work, then I see no reason for juniors to shy away.


 
Hricik on Law Blogging David C. Hricik (Mercer University - Walter F. George School of Law) has posted Ethics of Blawging on SSRN. Here is the abstract:
    Addresses the legal ethical issues that face lawyers who blog (or blawg), including the potential for disclosure of client confidences, inadvertent formation of attorney-client relationships, and the unauthorized practice of law.


 
May on Chevron Randolph J. May (The Free State Foundation) has posted Defining Deference Down: Independent Agencies and Chevron Deference (Administrative Law Review, Vol. 58, p. 429, 2006) on SSRN. Here is the abstract:
    Surprisingly, although the rationale articulated in Chevron in support of the deference doctrine might suggest that independent agencies should receive less deference than executive branch agencies, the question has not been examined in the courts and it has received very little attention in the academic literature. This article begins that examination in the hope of spurring further commentary. In Part II below, the Article recounts Chevron and its rationale grounding the deference doctrine primarily (but not exclusively) in notions of political accountability inherent in constitutional separation of powers principles. Part III briefly examines the Supreme Court's recent Brand X decision to show how in that particular fact situation, involving a ruling of the FCC, a so-called independent agency, Chevron deference trumped stare decisis. In effect, this allowed the agency to alter the interpretation of a statutory provision that previously had been construed differently by an appellate court. Part IV sketches the skimpy scholarly literature that hints, in light of Chevron's political accountability rationale, that the decisions of independent regulatory agencies should receive less deference than those of executive branch agencies. Part V argues that there is considerable law and logic to support these heretofore underexplored, sparse suggestive comments. Since independent agencies such as the FCC are, as a matter of our current understanding of the law and of historical practice, mostly free from executive branch political control, Chevron's political accountability rationale should imply that statutory interpretations of independent agencies receive less judicial deference. 
In light of the peculiar constitutional status of the independent agencies, which often are referred to as the headless fourth branch, Part VI concludes with an explanation as to why a reconception of the Chevron doctrine, which would accord less judicial deference to the decisions of these agencies, is more consistent with our constitutional tradition than is the current conception.


 
Taslitz on the Subconscious and Rape Andrew E. Taslitz (Howard University - School of Law) has posted Forgetting Freud: The Courts' Fear of the Subconscious in Date Rape (and Other) Cases on SSRN. Here is the abstract:
    Trial and appellate courts are often very resistant to using the lessons of social science in crafting substantive criminal law and evidentiary doctrines. Using the courts' refusal to accept the teachings of cognitive scientists and forensic linguists in date rape cases as an example, this article argues that the courts misunderstand and fear the subconscious. They rely on a folk concept of the subconscious mind as sharply distinct from consciousness, inscrutable, frightening, and generally irrelevant to criminal responsibility because the healthy mind is one in which consciousness runs the show. Only in extreme cases of mental disease is the subconscious usually relevant, for it then manages to wrest control from the conscious mind. Subconscious processes can in these few extreme cases, such as with the insanity defense, exculpate but never inculpate. Relatedly, evidentiary doctrines display distrust of experts on the subconscious mind, resistance to the value of social scientific generalizations, confusion about the value of jury instructions, and a hesitancy to part with tradition. This article contrasts the folk subconscious with the scientifically-informed one. The scientific subconscious is a spectrum rather than half of a dichotomy (with consciousness being the other half). This subconscious interacts with consciousness in a reciprocal way in even the healthiest of minds. Some sorts of subconscious thoughts are accessible to consciousness in individual cases. Furthermore, even a person who does not know his subconscious mind can influence it by consciously gathering relevant information and altering his behavior, processes that over the long run change the subconscious to be closer to our conscious ideals. 
These insights have implications for substantive criminal law culpability doctrines and evidentiary ones that turn modern approaches on their head, holding persons responsible for all of who they are, at the subconscious and conscious levels alike, as the article details.
I must admit that I find this a bit puzzling. I should have thought that the lesson of contemporary cognitive science is that consciousness is both (1) poorly understood and therefore not well theorized as a causal influence and (2) only one of a very large complex of "modules" that interact to determine human behavior. Of course, this is not the classic Freudian picture--not by miles. A fascinating topic.


 
Sachs on Nuclear Waste Storage and the Mescalero Apaches Noah Sachs (University of Richmond School of Law) has posted The Mescalero Apache and Monitored Retrievable Storage of Spent Nuclear Fuel: A Study in Environmental Ethics (Natural Resources Journal, Vol. 36, p. 641, 1996) on SSRN. Here is the abstract:
    The proposal of the Mescalero Apache Indians to host a nuclear waste storage facility raised difficult questions about political sovereignty, environmental justice, and democratic consent. While the proposal had numerous drawbacks and deserved to be opposed, many of the arguments used against it were conceptually flawed and paternalistic. Arguments decrying bribery of a poor community were particularly weak, while those criticizing targeting of Indian tribes by the United States government and coercion of tribal members by the Mescalero leadership had more merit. The core ethical arguments should be separated from the rhetoric so that policy makers, Native Americans, environmentalists, and industry leaders can better evaluate similar projects in the future.


 
Berners-Lee on Net Neutrality Tim Berners-Lee has an excellent post on Net Neutrality. Here's a taste:
    Net neutrality is this:
      If I pay to connect to the Net with a certain quality of service, and you pay to connect with that or greater quality of service, then we can communicate at that level.
    That's all. It's up to the ISPs to make sure they interoperate so that that happens. Net Neutrality is NOT asking for the internet for free. Net Neutrality is NOT saying that one shouldn't pay more money for high quality of service. We always have, and we always will.
My take on Net Neutrality can be found in The Layers Principle: Internet Architecture and the Law, with Minn Chung.


Monday, July 17, 2006
 
Meese on Monopolization and the Theory of the Firm Alan J. Meese (College of William and Mary) has posted Monopolization, Exclusion and the Theory of the Firm (Minnesota Law Review, Vol. 89, 2005) on SSRN. Here is the abstract:
    This article examines and critiques the distinction that courts currently draw under Section 2 of the Sherman Act between “competition on the merits,” on the one hand, and contractual exclusion, on the other. The article finds the source of this distinction in neoclassical price theory, its theory of the firm, and the derivative model of “workable competition” that most economists embraced from about 1940 onward. Workable competition, it is shown, privileged property-based, “unilateral” technological competition by a fully-integrated firm over “concerted” non-standard agreements, i.e., partial integration. Various antitrust scholars embraced workable competition as a proper guide to antitrust policy, endorsing the distinction between “competition on the merits,” on the one hand, and contractual exclusion, on the other, and the distinction found its way into modern law in United States v. United Shoe Machinery Co., 110 F. Supp. 295 (D. Mass. 1953), aff'd United Shoe Machinery v. United States, 347 U.S. 521 (1954) (per curiam). Moreover, the distinction survives to this day. “Competition on the merits” by a monopolist is lawful per se, even if such conduct completely excludes rivals from the market and regardless of the conduct's ultimate impact on consumers. By contrast, exclusionary agreements that hamper rivals in a non-trivial way are presumptively unlawful and only survive if a court believes they are the least restrictive means of producing significant benefits. Transaction cost economics (TCE) offers a competing theory of the firm as well as a new and benign interpretation of partial integration in the form of various non-standard contracts. In particular, TCE undermines price theory's conclusion that single-firm conduct produces special technological benefits that partial integration cannot produce.
Instead, TCE shows that technological considerations cannot explain complete integration, and that both complete and partial integration can be methods of reducing the transaction costs that reliance upon the market would otherwise entail. Because both complete and partial integration can produce significant benefits, there is no reason to privilege one form of integration over the other. As a result, courts should relax the intrusive scrutiny they currently apply to exclusionary agreements entered by monopolists.


 
Brooks on Hegel on Monarchy Thom Brooks (University of Newcastle upon Tyne (UK)) has posted No Rubber Stamp: Hegel's Constitutional Monarch. Here is the abstract:
    Perhaps one of the most controversial aspects of Hegel's Philosophy of Right for contemporary interpreters is its discussion of the constitutional monarch. This is true despite the general agreement amongst virtually all interpreters that Hegel's monarch is no more powerful than modern constitutional monarchies and is an institution worthy of little attention or concern. In this article, I will examine whether or not it matters who is the monarch and what domestic and foreign powers he has. I argue against the virtual consensus of recent interpreters that Hegel's monarch is far more powerful than has been understood previously. In part, Hegel's monarch is perhaps even more powerful than Hegel himself may have realised and I will demonstrate certain inconsistencies with some of his claims. My reading represents a distinctive break from the virtual consensus, without endorsing the view that Hegel was a totalitarian.


 
Matwyshyn on Spam Andrea M. Matwyshyn (University of Florida) has posted Penetrating the Zombie Collective: Spam as an International Security Issue (SCRIPT-ed, Vol. 4, 2006) on SSRN. Here is the abstract:
    Since the mid 1990's, spam has been legally analyzed primarily as an issue of balancing commercial speech with consumers' privacy. This calculus must now be revised. The possible deleterious consequences of a piece of spam go beyond inconvenient speech and privacy invasion; spam variants such as phishing and “malspam” (spam that exploits security vulnerabilities) now result in large-scale identity theft and remote compromise of user machines. The severity of the spam problem requires analyzing spam foremost as an international security issue, expanding the debate to include the dynamic impact of spam on individual countries' economies and the international system as a whole. Spam creation is becoming a flourishing competitive international industry, generating a new race to the bottom that will continue to escalate. Although the majority of spammers reside in the United States and a majority of spam appears to originate in the U.S., spam production is being increasingly outsourced to other countries by U.S. spammers. Similarly, as U.S. authorities begin to prosecute, spammers are moving offshore to less regulated countries. Therefore, spam presents an international security collective action problem requiring legislative action throughout the international system. A paradigm shift on the national and international level is required to forge an effective international spam regulatory regime. Spam regulation should be contemplated in tandem with the development of computer intrusion legislation and privacy legislation, harmonizing all three simultaneously across the international system to form a coherent international data control regime.


 
Tehranian on Middle-Eastern Legal Scholarship John Tehranian (University of Utah - S.J. Quinney College of Law) has posted Whitewashed: Towards a Middle-Eastern Legal Scholarship (Indiana Law Journal, Vol. 82) on SSRN. Here is the abstract:
    This Article examines the antinomy of middle-eastern legal and racial classification. Individuals of middle-eastern descent are caught in a catch-22. Through a bizarre fiction, the state has adopted the uniform classification of all individuals of middle-eastern descent as white. On paper, therefore, they appear no different than the blue-eyed, blonde-haired individual of Scandinavian descent. Yet reality does not mesh with this bureaucratic position. On the street, individuals of middle-eastern descent suffer from the types of discrimination and racial animus endured by recognized minority groups. The dualistic and contested ontology of the middle-eastern racial condition therefore creates an unusual paradox. Reified as the other, individuals of middle-eastern descent do not enjoy the benefits of white privilege. Yet, as white under the law, they are denied the fruits of remedial action. Moreover, the state's racial fiction fosters an invisibility that perniciously enables the perpetuation and even expansion of discriminatory conduct, both privately and by the state, against individuals of middle-eastern descent. Indeed, unlike virtually every other racial minority in our country, middle easterners have faced rising, rather than diminishing, degrees of discrimination over time—a trend epitomized by recent targeted immigration policies, racial profiling, a war on terrorism with a decided racialist bent, and growing rates of job discrimination and hate crime. By tracing the chilling reproblematization of the middle easterner from friendly foreigner to enemy alien, enemy alien to enemy race, this Article argues that the modern civil rights movement has not done enough to advance the freedoms of those of middle-eastern descent. Finally, the Article critiques the extant literature in critical race theory for ignoring issues of concern to individuals of middle-eastern descent. 
Specifically, the legal academy must launch a dialogue, in both its law review literature and the classroom, on the particular problems facing the middle-eastern population, especially in the post-9/11 environment. A central tenet of this plea is a re-examination of what we—as a society and as scholars—count as diversity. The Article therefore takes a simple, though radical, step: calling for the development of a middle-eastern critical legal scholarship.


Sunday, July 16, 2006
 
Legal Theory Calendar
    Wednesday, July 19
      University of Cincinnati Law: Ronna Schneider, Religion in the Public Schools
    Thursday, July 20
      University of Arizona Law: Mona Hymel, Globalization, Environmental Justice, and Sustainable Development: The Case of Oil


 
Call for Papers: Canadian Legal Education Annual Review
    CALL FOR PAPERS 1ST (2007) ISSUE OF THE CANADIAN LEGAL EDUCATION ANNUAL REVIEW (CLEAR) The Canadian Legal Education Annual Review is a peer-reviewed annual publication of the Canadian Association of Law Teachers (CALT). The aim of the journal is to foster scholarly exchanges on issues related to legal education and relevant to all Canadian law professors, graduate students and those who teach law. In particular, the journal aims to encourage critical and scholarly reflections on the aspirations, goals, objectives, values and cultures of legal education and on the processes involved in law teaching. CLEAR welcomes submissions dealing with current issues and problems in legal education, presenting empirical studies on legal education and action research projects carried out by professors, examining new trends in adult education or new methods of instruction (including but not limited to learning technologies), and discussing faculty or university reports dealing with issues such as curricular reforms and access to legal education. In addition, CLEAR welcomes other pieces such as personal stories and reviews of books and other media. CLEAR welcomes and encourages submissions in both English and French. Please send your manuscripts as e-mail attachments in Microsoft Word format to clear.raedc@mac.com. All manuscripts must follow the Canadian Guide to Legal Citation. Manuscripts must be sent by October 31st, 2006 for publication in the 2007 issue, which will be published in time for the annual conference of the Canadian Association of Law Teachers in June 2007.


 
Legal Theory Lexicon: Libertarian Theories of Law
    Introduction The dominant approaches to normative legal theory in the American legal academy converge on a fairly robust role for the state and government, subject to the constraints imposed by an equally robust set of individual rights. Normative legal theorists of all stripes—conservatives and liberals, welfarists and deontologists—tend to agree that the institution of law is fundamentally legitimate and that legal regulation has a large role to play. There is, however, a counter-tradition in legal theory that challenges the legitimacy of law and contends that the role of law should be narrowly confined. This entry in the Legal Theory Lexicon will examine libertarian theories of law. As always, the Lexicon is aimed at law students—especially first year law students—with an interest in legal theory.
    The libertarian tradition of social, political, and legal thought is rich and varied; no brief summary can do it justice. So the usual caveats apply. This is a brief introduction to libertarian thought with an emphasis on its role in normative legal theory. Debates about the true meaning of the term “libertarian” will largely be ignored, as will disputes over the advantages of “liberalism,” “classical liberalism,” and “libertarianism” as the best label for libertarian ideas. Enough with the caveats, here we go!
    Historical Roots of Contemporary Libertarianism One good way to approach contemporary libertarian legal theory is via its historical roots. A good place to begin is with John Locke’s conception of the social contract.
      John Locke and the Social Contract The idea of a “social contract,” by which individuals in a state of nature contract with each other (or with a sovereign) to enter a “civil society” is one of the most important in all of political philosophy. Hobbes, Rousseau, and Locke all have distinctive theories of the social contract, but Locke’s version is important—both to libertarian theory and American constitutionalism. For the purposes of this discussion, the important idea is that a legitimate (or perhaps just) civil society has authority that is limited to those powers that the citizens-to-be would agree to delegate to the government in a social contract. Locke himself argued that the inconveniences of the state of nature would motivate a social contract that delegated to the government the power to protect property—understood in a broad sense that encompasses personal security and liberty—and the power to resolve disputes. But the Lockean social contract would not authorize government to restrict fundamental liberties or to take property from one citizen and transfer it to another. Of course, there is much more to say about Locke, but we are concerned here only with getting the gist of those Lockean ideas that are historically important to libertarian theory.
      Kant and Spheres of Autonomy Kant also made an important contribution to libertarian theory via his ideas of autonomy. There is no good way to summarize Kant’s theory of autonomy in a sentence or two, but the gist of his notion is that humans, as rational beings, have an interest in being autonomous in the sense of “self governing.” The role of law is to protect individual “spheres of autonomy” or “zones of liberty” in which individuals can act without interference from others. Suppose, then, that our theory of proper legislation was that the laws should create maximum equal liberties for each, consistent with the same liberty for all.
These two Kantian ideas—autonomy and maximum equal liberty—have played an important role in libertarian thinking about law.
      John Stuart Mill and the Harm Principle John Stuart Mill was a liberal utilitarian, and so, in a sense, it is odd that he is also the author of one of the most important works in the libertarian tradition, On Liberty, a rich, complex, and easily misunderstood work. I am afraid I may be contributing to the misunderstanding by emphasizing just one idea from On Liberty--the so-called “harm principle.” Here is how Mill states the principle:
        . . . the sole end for which mankind are warranted, individually or collectively, in interfering with the liberty of action of any of their number, is self-protection. That the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinion of others, to do so would be wise, or even right...The only part of the conduct of anyone, for which he is amenable to society, is that which concerns others. In the part which merely concerns himself, his independence is, of right, absolute. Over himself, over his own body and mind, the individual is sovereign.
      The harm principle is almost as controversial as it is famous. In particular, there is a persistent worry about the problem of the baseline against which “harm” as opposed to “lack of advantage” might be measured.
    Theoretical Foundations of Libertarianism This very brief introduction to the historical roots of libertarianism in Locke, Kant, and Mill prepares the way for a discussion of the theoretical roots of libertarian legal theory. Libertarianism operates at the level of political theory: it is a view about questions like “What is the proper role of government?” and “When is coercive legislation legitimate?” Theories at this level of abstraction need foundations of some sort, either deep foundations in comprehensive moral theories like utilitarianism or shallow foundations that explain why deeper foundations are unnecessary. Let’s take a look at both sorts of foundations for libertarian legal theories.
      Consequentialist Foundations The consequentialist case for libertarianism is contingent—it depends on empirical and theoretical questions about the effects that various legal regimes have. Consequentialist libertarians believe that minimum government interference with individual liberty and free markets produces better consequences than extensive government regulation or redistribution of income. Historically, John Stuart Mill and Adam Smith are associated with both libertarianism and consequentialism.
      There are many different flavors of consequentialism, but in the legal academy, the most prominent strands of consequentialist thinking are associated with law and economics and assume a preference-satisfaction (or “welfarist”) notion of utility. Even among theorists who accept welfarism, there are major disagreements about how much and when government should regulate. But the general idea behind the consequentialist case for libertarianism is that markets are more efficient than regulation. This conclusion follows from fairly straightforward ideas in neoclassical microeconomics. Markets facilitate Pareto-efficient (welfare enhancing) transactions; regulations thwart such transactions.
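      The idea that voluntary exchange is Pareto-efficient can be made concrete with a stylized illustration (the dollar figures below are hypothetical, chosen only to make the arithmetic transparent):

```latex
% Stylized illustration (hypothetical numbers): a voluntary trade
% that is Pareto-improving.
% A seller values a good at $5; a buyer values it at $20.
% At any price p with 5 < p < 20, both parties gain from the trade:
\[
\underbrace{p - 5}_{\text{seller's surplus}} > 0,
\qquad
\underbrace{20 - p}_{\text{buyer's surplus}} > 0 .
\]
% On this simple account, a regulation that blocks the trade forgoes
% a total surplus of $20 - $5 = $15, with no offsetting gain to anyone.
```

This is, of course, only the textbook case; the consequentialist debate turns on how often real-world transactions fit it.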
      Markets may lead to substantial disparities in wealth and income, but from the consequentialist perspective, such inequalities may not justify legislation that redistributes wealth and income. First, for a strict utilitarian, the distribution of utility itself is of no moral significance: classical utilitarians believe that the sum of utilities should be maximized, even if that means that some will be very well off and others very poor. Of course, there is a well-known utilitarian argument for the redistribution of wealth and income based on the idea of diminishing marginal utility, but this argument might be outweighed by the massive utility losses caused by redistributive programs—providing a utilitarian argument against government-mandated redistribution of wealth and income. Second, even consequentialists who believe in some form of egalitarianism might believe that the worst off members of society will be better served by a libertarian regime than by a social-welfare state. We are already on a tangent, so I'm going to leave the topic of redistribution—noting that this is an issue upon which consequentialists themselves may differ in a variety of ways.
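      The diminishing-marginal-utility argument, and the counter-argument from the costs of redistribution, can both be seen in a toy calculation (the logarithmic utility function and all the numbers here are illustrative assumptions, not anyone's considered view):

```latex
% Stylized illustration (hypothetical numbers). Take u(w) = ln(w),
% so marginal utility diminishes as wealth w grows.
% A costless transfer of 10 from a rich person (w = 100) to a poor
% person (w = 10) raises the sum of utilities:
\[
\ln 90 + \ln 20 \;\approx\; 7.50 \;>\; \ln 100 + \ln 10 \;\approx\; 6.91 .
\]
% But if administrative and incentive losses are severe enough that
% the poor person receives only 1 of the 10 taken, the sum falls:
\[
\ln 90 + \ln 11 \;\approx\; 6.90 \;<\; 6.91 .
\]
```

The utilitarian verdict on redistribution thus depends on an empirical question—how large the losses from redistributive programs actually are—which is exactly why consequentialists can end up on either side.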
      In contemporary legal theory, Richard Epstein is the “libertarian” thinker who is most strongly associated with consequentialist foundations. Because he is a consequentialist, Epstein may not be a pure libertarian, but on a variety of issues (e.g. antidiscrimination laws), Epstein takes strongly libertarian positions.
      Deontological Foundations Although some libertarians are consequentialists, many others look to deontological moral theory for the foundations of their libertarianism. There are many different strategies for arguing for libertarianism based on deontological premises. One method starts with the idea of self-ownership or autonomy. Each of us has a moral right to control our own bodies, free of wrongful interference by others. This might imply that each individual has a right against theft, battery, false-imprisonment, enslavement, and so forth. Of course, these rights might justify a certain kind of government—one that protects us against invasions of our rights. But when government goes beyond the protection of these rights, then government itself operates through force or threats of force. For example, the redistribution of income might be accomplished by taxing income to finance a welfare system. Taxes are not voluntary; tax payments are “coerced” via threats of violence and imprisonment. Without consent, it might be argued, these threats are wrongful actions.
      In my mind, the deontological approach to the foundations of libertarian political theory is most strongly associated with the late Robert Nozick and his magnificent book, Anarchy, State, and Utopia (see reference below).
      Pluralist Foundations There is an obvious problem with locating the foundations of a political theory, like libertarianism, in a deeper moral theory, such as some form of deontology or consequentialism. In a pluralist society, it seems very unlikely that any one view about morality will ever become the dominant view. Instead, modern pluralist societies are usually characterized by persistent disagreements about deep moral questions. If a particular form of libertarianism rests on deep moral foundations, then most of us will reject that form of libertarianism, because we reject the foundations. One alternative would be to try to argue for libertarianism on the basis of all of the different moral theories, but that is obviously a very time-consuming and difficult task. Another approach would be to articulate shallow foundations for libertarianism—foundations that are “modular” in the sense that they could be incorporated into many different comprehensive theories of morality. This general strategy was pioneered by the liberal political philosopher, John Rawls—himself, of course, no libertarian.
      One contemporary libertarian legal theorist who has pursued the pluralist strategy is Randy Barnett. In his book, The Structure of Liberty, Barnett argues that anyone who wishes to pursue their own interests—whatever those might be—has good reason to affirm a generally libertarian framework for government. Barnett’s case for libertarianism is complex, but his basic idea is that human nature and circumstances are such that the law must establish and protect property rights and liberty of contract. The key to Barnett’s argument is his identification of what he calls the problems of knowledge, interest, and power. For example, the problems of knowledge include the fact that each individual has knowledge of his or her circumstances that are relevant to how resources can best be utilized. This fact, combined with others, makes decentralized control of resources through a private property regime superior to a centralized command and control system. For our purposes, it is not the details of Barnett’s argument, but his general strategy that is important: Barnett attempts to create a case for libertarianism that does not depend on either consequentialist or deontological moral theory.
    Libertarian Agendas for Legal Reform (or Revolution!) Even though this is “Legal Theory Blog,” we should say something about the practical agendas of various libertarian legal theories. Let’s begin with modest libertarianism and proceed to its most radical (anarchist) forms.
      Modest Libertarian Reforms: Deregulation, Privatization, and Legalization At the very least, libertarians favor less government—as measured against the baseline of the current legal order in the United States. So, libertarians are likely to be in favor of more reliance on markets and less reliance on government. Hence, libertarians are likely to support programs of deregulation and privatization. Deregulation might include measures like abolition of consumer product safety regulations and the elimination of rent control laws. Privatization might include the federal government selling off the national park system or the Tennessee Valley Authority.
      A libertarian reform agenda might also include the legalization of various forms of conduct that are currently prohibited. Examples of this kind of reform might include the legalization of recreational drugs, the end of prohibitions on various consensual sexual activities, and the elimination of restrictions on gambling and prostitution.
      Comprehensive Libertarian Reform: The Night-Watchman State A more ambitious libertarian agenda might be the establishment of what has been called the night-watchman state. The idea is that government would limit its role to the protection of individual liberty. Government would continue to provide police protection, national defense, and a court system for the vindication of private rights (property, tort, and contract rights, for example), but nothing else. In other words, the function of law would be limited to those activities that are necessary for the protection of private property and liberty.
      The difference between the advocacy of modest and comprehensive libertarian reform may be more a matter of tactics than of principle. One might believe that there is no realistic chance of a transition to a night-watchman state. Those who advocate such comprehensive reform may undermine their own political effectiveness by sounding “radical.” So as a matter of practical politics, it may be that libertarians are most effective when they advocate marginal reforms that move the system incrementally in libertarian directions.
      Libertarian Revolutions: Anarchy and Polycentric Constitutional Orders Some libertarians advocate an agenda that is even more radical than the night-watchman state. One might question whether there is a need for the nation state at all. One version of this more radical approach is pure anarchism—the view that no government is necessary because individuals can coexist and cooperate without any need for state action. Another variation of this idea is sometimes called a “polycentric constitutional order.” The idea is that individuals could subscribe to private firms that would provide the police and adjudication functions of the night watchman state. Such a society would have entities that functioned like governments in some ways—with the important exception that individuals would enter into voluntary agreements for their services.
    The Rivals of Libertarian Legal Theory Libertarian theory can be criticized in a variety of ways. Sometimes the disagreement is mostly empirical: libertarians believe that life without the state would be better, and anti-libertarians believe it would be worse. But sometimes the critics of libertarianism have a radically different vision of the fundamental purposes of government. One such rival is egalitarianism—the view that distributive justice requires that goods (let’s leave the definition of good at the abstract level) should be divided equally, and that the creation of social equality is the primary aim of government. Some libertarians might accept this goal, but argue that maximum liberty is the best way to achieve it. Other libertarians might argue that liberty is the good that should be equally divided. But many libertarians see equality as the wrong goal for government. That is, sometimes libertarians and egalitarians differ fundamentally over the purpose of government.
    Another rival to libertarianism is the view that legislation should aim at the promotion of virtue in the citizenry. If one believes that the aim of government is to make humans into better people, then one is likely to see a variety of restrictions on liberty as justified. (Let’s call views that see virtue as the end of government “aretaic political theories.”)
    Aretaic political theorists are likely to disagree with libertarians over what might be called “moral legislation.” For instance, one might believe that legal prohibitions on gambling, drugs, and prostitution are justified because they help promote a moral climate where most citizens don’t want to engage in these activities. Many libertarians would say it is simply not the business of government to decide that a taste for gambling is a bad thing; whereas many virtue theorists are likely to say that this is precisely the sort of work that governments should be doing.
    Conclusion Libertarian legal theory is interesting on the merits—as one of the most significant normative theories of law. But there is another important reason for legal theorists to be interested in libertarianism even if they ultimately reject it. Libertarian legal theories call into question the very purpose of law and government. A really careful evaluation of libertarianism requires that one form views about the function of law and the purposes of government, and confront a variety of criticisms of conventional views about those topics. For that reason, thinking about libertarian legal theory is an excellent way of thinking about the most fundamental questions in normative legal theory.
    Once again, this entry is a bit too long, but I hope that I’ve provided a good starting point for your investigations of libertarianism. I’ve provided a very brief set of references for further exploration.
    References


Saturday, July 15, 2006
 
Download of the Week The Download of the Week is Habermas's Call for Cosmopolitan Constitutional Patriotism in an Age of Global Terror: A Pluralist Appraisal by Michel Rosenfeld. Here is the abstract:
    In recent work, Habermas has provided a critical account of the trend towards transnational government and global governance in terms of his conception of communicative action and his discourse theory of law and ethics expanding on his contribution in BETWEEN FACTS AND NORMS (Eng. Trans. 1996). Moreover, Habermas has also recently tackled global terrorism and evaluated it in terms of modernism and the continued viability of the Enlightenment project. Habermas does tie economic globalization and global terrorism, but does not believe that the latter is ultimately a manifestation of a clash of cultures. Instead Habermas regards global terrorism as an economically based reaction to the gross inequities perpetrated by globalization. Accordingly, Habermas regards global terrorism as arising from a breakdown of communication and as only amounting to an external threat to modernism. The article takes a critical look at Habermas's analysis and at his suggestion that the solution to the problem lies with expansion of the constitutional order beyond the nation-state through promotion of a cosmopolitan constitutional patriotism. Taking a pluralistic perspective, the article questions several of Habermas's assumptions and conclusions.


 
Legal Theory Bookworm The Legal Theory Bookworm recommends The Judge in a Democracy by Aharon Barak. Here's a blurb:
    Whether examining election outcomes, the legal status of terrorism suspects, or if (or how) people can be sentenced to death, a judge in a modern democracy assumes a role that raises some of the most contentious political issues of our day. But do judges even have a role beyond deciding the disputes before them under law? What are the criteria for judging the justices who write opinions for the United States Supreme Court or constitutional courts in other democracies? These are the questions that one of the world's foremost judges and legal theorists, Aharon Barak, poses in this book. In fluent prose, Barak sets forth a powerful vision of the role of the judge. He argues that this role comprises two central elements beyond dispute resolution: bridging the gap between the law and society, and protecting the constitution and democracy. The former involves balancing the need to adapt the law to social change against the need for stability; the latter, judges' ultimate accountability, not to public opinion or to politicians, but to the "internal morality" of democracy. Barak's vigorous support of "purposive interpretation" (interpreting legal texts--for example, statutes and constitutions--in light of their purpose) contrasts sharply with the influential "originalism" advocated by U.S. Supreme Court Justice Antonin Scalia. As he explores these questions, Barak also traces how supreme courts in major democracies have evolved since World War II, and he guides us through many of his own decisions to show how he has tried to put these principles into action, even under the burden of judging on terrorism.


Friday, July 14, 2006
 
Brown on Plea Bargaining and Regulation of Defense Counsel Darryl K. Brown (Washington and Lee University - School of Law) has posted Executive-Branch Regulation of Criminal Defense Counsel and the Private Contract Limit on Prosecutor Bargaining on SSRN. Here is the abstract:
    Criminal defendants' right to counsel is regulated by courts, legislatures and, more recently and controversially, by the executive branch. Prosecutors recently have taken a more active role in affecting the power and effectiveness of defense counsel, especially privately retained counsel in white-collar crime cases. Under the Thompson Memo, prosecutors bargain to win waivers of attorney-client privilege and to convince corporate defendants not to pay the legal fees of corporate officers who face separate indictments. These tactics join longer-standing tools to weaken defense representation through forfeiture, Justice Department eavesdropping on attorney-client conversations of defendants in federal custody, and prosecutors' power to veto defendants' choices to share attorneys with other suspects. The organizing concern for regulation of counsel is not simply fairness, but also accuracy and a less noted goal - effectiveness of criminal law enforcement. Defense counsel is best understood not solely in light of defendant's interests but also of systemic ones. That gives the executive branch a stronger claim to competence in regulating counsel. But regulation works best when the regulator is institutionally well suited to the task, and one feature that makes an actor well suited is supervision or some other check by another actor. By those criteria, much executive-branch regulation of defense counsel is acceptable, because prosecutors either need the consent of Congress or the judiciary, or - in the case of privilege waivers - must face well-funded counsel in negotiation. But bargaining to end attorneys' fee payments to some defendants is different. That policy gives prosecutors power unchecked by legislatures and courts or even the capable opposition of a well-funded opponent. The Supreme Court has left little doctrinal basis for restricting prosecutors' bargaining incentives for defendant cooperation. 
Yet this essay explains how firms themselves, through private contract, can take much of the sting out of prosecutors' abilities to demand nonpayment of attorneys' fees. Further, as they do so, courts are likely to be receptive to a narrow constitutional doctrine that leaves current plea bargaining law in place but still bars prosecutorial incentives for firms to breach duties to pay fees. Courts and defendants can work within Supreme Court doctrine to limit prosecutors by grounding those limits in the protection of contract obligations as much as the right to counsel.


 
Gotanda on Transnational Contract Damages John Y. Gotanda (Villanova University School of Law) has posted Damages in Lieu of Performance Because of Breach of Contract (Villanova Law/Public Policy Research Paper No. 2006-8, DAMAGES IN PRIVATE INTERNATIONAL LAW, Hague Academy of International Law, 2007) on SSRN. Here is the abstract:
    In contract disputes between transnational contracting parties, damages are often awarded to compensate a claimant for loss, injury or detriment resulting from a respondent's failure to perform the agreement. In fact, damages may be the principal means of substituting for performance or they may complement other remedies, such as rescission or specific performance. Damages for breach of contract typically serve to protect one of three interests of a claimant: (1) performance interest (also known as expectation interest); (2) reliance interest; or (3) restitution interest. The primary goal of damages in most jurisdictions is to fulfil a claimant's performance interest by giving the claimant the substitute remedy of the "benefit of the bargain" monetarily. This typically includes compensation for actual loss incurred as a result of the breach and for net gains, including lost profits, that the claimant was precluded from because of the respondent's actions. All legal systems place limitations on damage awards. The most common limitations are causation, foreseeability, certainty, fault, and avoidability. In order to obtain damages, there must be a causal connection between the respondent's breach and the claimant's loss. In addition, the claimant must show that the loss was foreseeable or not too remote. Further, the claimant is required to show with reasonable certainty the amount of the damage. Many civil law countries also require, as a prerequisite to an award of damages for breach of contract, that the respondent be at fault in breaching the agreement. Damages may also be limited by the doctrine of avoidability, which provides that damages which could have been avoided without undue risk, burden, or humiliation are not recoverable. The rules concerning damages for breach of contract are complex and vary greatly from country to country. 
Furthermore, in some federal countries, such as the United States and Canada, the applicable rules differ among states and provinces. This chapter, which is part of a comprehensive study of the awarding of damages in private international law, focuses on the general rules concerning damages awarded in lieu of performance because of a breach of contract ("performance damages"). It begins with an overview of the purposes served by awarding damages. It then examines performance damages for breach of contract in common law and civil law countries. The study subsequently analyzes the awarding of damages under the Convention on the International Sale of Goods (CISG), general principles of law, and principles of equity and fairness.


 
Rosenfeld on Habermas on Patriotism Michel Rosenfeld (Cardozo Law School) has posted Habermas's Call for Cosmopolitan Constitutional Patriotism in an Age of Global Terror: A Pluralist Appraisal on SSRN. Here is the abstract:
    In recent work, Habermas has provided a critical account of the trend towards transnational government and global governance in terms of his conception of communicative action and his discourse theory of law and ethics expanding on his contribution in BETWEEN FACTS AND NORMS (Eng. Trans. 1996). Moreover, Habermas has also recently tackled global terrorism and evaluated it in terms of modernism and the continued viability of the Enlightenment project. Habermas does tie economic globalization and global terrorism, but does not believe that the latter is ultimately a manifestation of a clash of cultures. Instead Habermas regards global terrorism as an economically based reaction to the gross inequities perpetrated by globalization. Accordingly, Habermas regards global terrorism as arising from a breakdown of communication and as only amounting to an external threat to modernism. The article takes a critical look at Habermas's analysis and at his suggestion that the solution to the problem lies with expansion of the constitutional order beyond the nation-state through promotion of a cosmopolitan constitutional patriotism. Taking a pluralistic perspective, the article questions several of Habermas's assumptions and conclusions.


 
Somin on Raich Ilya Somin (George Mason University - School of Law) has posted Gonzales v. Raich: Federalism as a Casualty of the War on Drugs (Cornell Journal of Law and Public Policy, Symposium on the War on Drugs, June 2006) on SSRN. Here is the abstract:
    The Supreme Court's recent decision in Gonzales v. Raich marks a watershed moment in the development of judicial federalism. If it has not quite put an end to the Rehnquist Court's "federalism revolution," it certainly represents a major step in that direction. In this Article, I contend that Raich represents a major - possibly even terminal - setback for efforts to impose meaningful judicial constraints on Congress' Commerce Clause powers. Raich undermines judicial enforcement of federalism in three interlocking ways: by adopting an essentially limitless definition of "economic activity" thereby ensuring that virtually any activity can be "aggregated" to produce the "substantial affect [on] interstate commerce" required to legitimate congressional regulation under United States v. Lopez and United States v. Morrison; by making it easier for Congress to impose controls on even "noneconomic" activity by claiming that it is part of a broader "regulatory scheme"; and finally, by restoring the so-called "rational basis" test, holding that "[w]e need not determine whether [defendants'] activities, taken in the aggregate, substantially affect interstate commerce in fact, but only whether a 'rational basis' exists for so concluding." The Supreme Court's recent seemingly pro-federalism decisions in Gonzales v. Oregon and Rapanos v. Army Corps of Engineers actually do little or nothing to mitigate the impact of Raich. I also contend that the Raich decision is misguided on both textual and structural grounds. The text of the Constitution does not support the nearly unlimited congressional power endorsed in Raich. Such unlimited power also undercuts some of the major structural advantages of federalism, including diversity, the ability to "vote with your feet," and interstate competition for residents. 
Raich's undercutting of federalism by upholding the power of Congress to ban the possession of homegrown medical marijuana closely parallels legal developments during the Prohibition era of the 1920s. In both periods, the establishment of a nationwide prohibition regime greatly eroded decentralized federalism, in part because the Supreme Court accepted the government's claims that the power to regulate a market in prohibited substances necessarily required comprehensive regulation of virtually all sale or possession of the commodities in question. The future of judicial federalism may depend not just on the precise doctrinal impact of Raich, but on the possibility that liberal jurists and political activists may come to recognize that they have an interest in limiting congressional power. A cross-ideological coalition for judicial enforcement of federalism would be far more formidable than today's narrow alliance between some conservatives and libertarians. Ironically, the Raich decision, in combination with other recent developments, may help bring about such a result.


Thursday, July 13, 2006
 
Yet More on Teaching and Scholarship Orin Kerr has a very good post entitled Legal Scholarship and "the Canon" at OrinKerr.com, which quite properly questions the empirical foundation for my claim that immersion in the canon--which is required of younger teachers--leads to a parallel immersion in canon-focused scholarship. Also, Peter Spiro has a post entitled International Legal Scholarship and the Lack of a Canon, in which he observes that not all fields of law have a canon, and makes the further claim that there is no canon in international law:
    There isn't a canon in international law, or at least it's a very thin one (or alternatively a thick one under impossible and obvious stress), and it means that anything goes. Teaching can't help but be a plus in that case, because you get the raw materials without the strictures of received wisdom. Teaching IL almost begs the teacher to come up with her own organizing principle; it forces you to think from scratch. And where there's less of a canon, it's less likely that there are powerful individuals who have a vested interest in it and who are looking to enforce orthodoxies through appointments and tenure decisions. This also allows for more imaginative and foundational scholarship (in a way that does have generational implications).
What a marvelous and hopeful thought! It actually makes me want to work in IL! And I must admit that I was thinking of the core of the law school curriculum--subjects like contracts, property, procedure, constitutional law, administrative law, tax, etc.--when I asserted that intensive preparation for teaching focuses one on the "canon."
Read Kerr and Spiro!


Wednesday, July 12, 2006
 
More from Buck on Teaching and Scholarship Stuart Buck replies to my post prompted by his Teaching vs. Scholarship. I agree with almost everything in Buck's most recent post, but I wanted to say a few more words about the larger topic: Is teaching in competition with scholarship? Or are they complementary?
This issue is usually framed somewhat simplistically. Back in the day, the lay of the land might have been approximately as follows:
    At elite research-oriented law schools, the question would rarely arise. It was assumed that research was important, whether or not it competed with teaching. At most other law schools, where the emphasis was more on professional education, many faculty members believed that time spent on research detracted from teaching, and hence, that a lack of scholarly productivity was a sign of virtue and not of vice.
More recently, the landscape has changed--especially as the imperative for scholarly activity has spread throughout the legal academy. Not too long ago, there was a debate at many law schools that went like this:
    Older faculty member: too much emphasis on scholarship will hurt our students. Younger faculty member: the best scholars are also the best teachers; so, a lack of scholarly productivity will hurt our students.
But recently the nature of the debate seems to be shifting. One reason for the shift is the change in the nature of legal scholarship brought about by the growth of interdisciplinarity in the legal academy. When legal scholars were focused on doctrine, the case for complementarity between teaching and scholarship was easily made. Who could doubt that the author of a leading treatise was immersed in the law in a way that could be useful in the classroom? (Setting aside the "over their heads" problem, for a moment.) But in the era of interdisciplinary research, the connection between scholarship and teaching is more complicated. The most sophisticated work in law and economics, empirical legal studies, law and philosophy, and legal history is not necessarily focused on doctrine. One symptom of this transformation is the decline of the treatise as a form of legal scholarship. Fifty years ago, the great treatise writers were at the pinnacle of the profession. Today, it is not clear that writing a treatise is even a mild positive for an ambitious legal scholar; many would go further and say that treatise writing is a sign of serious scholarly dysfunction and constitutes a significant negative factor. (Of course, many would disagree.) But this shift has changed the terms of the debate. The typical doctrinal course can be supplemented by interdisciplinary perspectives of various sorts, but most law school courses still have legal doctrine (understanding of the system of rules and principles) as their primary focus. Sophisticated interdisciplinary work in many cases takes the scholar away from the internal point of view--the perspective of practising lawyers and judges. This creates a tension between scholarship and teaching that is quite different from the simple competition for preparation time.
I could go on and on. I really am not arguing for any conclusion here. My point is really just that the terms of the debate have changed.


Tuesday, July 11, 2006
 
Buck on Teaching versus Scholarship Stuart Buck has a post on the old chestnut, the question whether scholarship interferes with or enhances teaching--in the context of legal education. One of Buck's points is that teaching may actually enhance scholarship:
    Why is this? For one simple reason: No matter how intensely you study a particular subject, if time goes by without regular review, it's easy for the details to slip from your memory. But teaching a course inherently requires regular review -- not just of your own scholarship on a given subject, but of everything else that is relevant to that subject. If you're going to stand in front of a group of people and explain a particular legal subject, you have to know the ins and outs of all the important cases/statutes/commentary. It's not enough to know this stuff "on paper" -- you have to know it stone cold, so that you can answer practically any question that students might throw your way. What's more, you have to know the subject well enough to explain it to beginners. I think that this requires more in-depth knowledge than merely being able to converse with other "experts." When you're talking to beginners, you have to understand the topic well enough to boil it down to the basics. You can't get away with casually referring to some abstraction on the assumption that everyone else will know what you're talking about.
There is something to Buck's theory, but I want to suggest some qualifications:
  • Buck writes, "[T]eaching a course inherently requires regular review -- not just of your own scholarship on a given subject, but of everything else that is relevant to that subject." "Requires" is a bit strong. Lots of popular and fairly effective teachers barely review the material in the casebook--much less "everything else that is relevant." The truth is that the gap between students and an experienced teacher is so enormous that there is a real danger of "knowing too much." That's why neophyte teachers who are just learning a new subject can be very effective--they are less prone to teach "over the heads" of their students.
  • Buck writes, "It's not enough to know this stuff "on paper" -- you have to know it stone cold, so that you can answer practically any question that students might throw your way." Hardly anyone knows their subject "stone cold," especially these days when very few law professors are treatise writers. Buck's view is wildly romantic.
  • Buck writes, "you have to know the subject well enough to explain it to beginners. I think that this requires more in-depth knowledge than merely being able to converse with other 'experts.'" This is simply false. It is at least as accurate to say that in-depth knowledge hurts teaching by introducing more complexity than students can handle. And talking with an expert requires way, way, way more in-depth knowledge than teaching. There are many teachers who do a fantastic job in what for them is a tertiary field. They know their way around their casebook and the core concepts of the subject, but would never try to engage an expert in an in-depth discussion of the concepts or rules that lie outside the teaching core.
In addition, there is one important way in which teaching undermines good scholarship. Young scholars spend enormous amounts of time on the "canon," the cases and rules that are in their casebooks. And so it is hardly surprising that many of them end up writing about the core canon--frequently with the result that their scholarship is derivative and repetitive. In many law school subjects, the core has been examined from every possible angle on multiple occasions over a period of decades; that makes it very difficult to say something new about the material that is in the casebook!
I don't want to exaggerate. I actually agree with Buck's core claim that teaching does enhance scholarship, but not for the reasons that Buck articulates. Teaching contracts, property, or constitutional law does not make you an expert--it does give you a broad overview of the core concepts and their interrelationships.
Read Buck's post!


Monday, July 10, 2006
 
Stare Decisis in a Court of Last Resort One of my favorite topics has come up over at PrawfsBlawg, where guest blogger Russell Covey has posted Are Supreme Court Justices Bound By Supreme Court Precedent? Here's a taste:
    While reading Justice Stevens' dissent in Kansas v. Marsh, which is largely focused on explaining why his joining Justice Blackmun's dissent in Walton v. Arizona does not commit him to agreement with the majority in Marsh, I was again reminded of a question that has bothered me ever since I read Justice Scalia's dissent in Dickerson v. U.S.: Are supreme court justices bound by supreme court precedent? The apparent answer is no. They can rule any way they like. In Dickerson, the majority ruled that 18 U.S.C. 3501, which purported to overrule Miranda, was unconstitutional. In dissent, Scalia explained why he disagreed and concluded his dissent by announcing that until §3501 is repealed, "[I] will continue to apply it in all cases where there has been a sustainable finding that the defendant's confession was voluntary." Does this strike anyone as a problem? It does me.
An interesting question. The conventional view is that the Supreme Court should afford its own prior decisions a presumption of validity. What does that mean? One possibility would be that this “presumption” is a mere “bursting bubble.” Precedents will be followed until and unless there are good reasons to depart from them. If this were the only role for precedent, then it would be virtually no role at all—it takes only a slender needle of flimsy argument to burst a bubble. Likewise, the presumption view is virtually meaningless if it only decides cases in which the arguments for and against sticking with the precedent are in equipoise. Of course, there will be some cases in which the arguments for and against a change in the law are perfectly balanced, but such cases are likely to be rare. The presumption view of the force of precedent is implausible. A more reasonable view is that precedents are entitled to weight because of the costs of legal change. One such cost is associated with reliance and expectations. Individuals and institutions may fail to receive expected benefits or incur avoidable costs. Another set of costs may be related to the implementation of new legal rules—at a minimum, the treatises will need to be rewritten. The instrumentalist view of precedent conceives of the decision whether to overrule existing precedent as simply adding another factor to the balance of factors that are relevant to the selection of an optimal rule. From the realist perspective, precedents should be overruled when the benefits of overruling exceed the costs and precedents should be followed when they already provide the optimal rule or when the costs of changing the law are greater than the marginal benefits the better rule would provide. The instrumentalist view of precedent is peculiar, because it denies that Supreme Court precedents should be treated as legally authoritative by the Supreme Court itself. 
One way of drawing out this peculiarity is by comparing the situation in which there is a prior Supreme Court precedent on a particular point of law to the situation in which there is no prior decision and a new case presents a novel issue of law. Of course, it is possible that the former case involves greater reliance interests than the latter case, but this is not necessarily so. It might well be that the relevant individuals and institutions have made plans based on guesses about the Supreme Court’s likely decision or that they have made plans for no good reason at all. From the instrumentalist perspective, reliance interests are valued in terms of consequences of disappointed expectations. Stare decisis is simply one mechanism by which reliance interests could be generated. The point is that the instrumentalist conception reduces the force of precedent to a contingent policy concern—one that may drop out entirely in some cases. What is the alternative to treating precedent as presumptively valid or giving precedents instrumental force? The formalist conception of stare decisis is based on the idea that precedents are legally binding or authoritative. That is, a formalist believes that precedents provide what are sometimes called “content independent” or “peremptory” reasons for action. Of course, the formalist conception of precedents as legally binding is quite familiar, even in our realist legal culture. When it comes to vertical stare decisis, the conventional notion is that the decisions of higher courts are binding on lower courts. A Court of Appeals may not decide to overrule a Supreme Court decision because the advantages of the better rule outweigh the costs of changing legal rules. The idea of binding precedent also operates at the level of intermediate appellate courts. 
Three-judge panels of the United States Courts of Appeals are bound by the prior decisions of the Court; they are not free to decide that the benefits of a better rule outweigh the costs of adhering to the law of the circuit.
And also at Prawfsblawg, Will Baude writes:
    Russell Covey suggests that it is problematic that the Supreme Court Justices are free to disregard Supreme Court precedents while everybody else is not. I agree with him that such an asymmetry would be troubling, but I think he is attempting to solve it in the wrong way. Given a conflict between what the Constitution says and what the Supreme Court says that the Constitution says, most people are not bound to follow the latter.
Baude is on to one of the most important issues in debates about constitutional stare decisis. Some originalists may object to constitutional stare decisis on the basis of the notion that only the Constitution itself should be considered legally authoritative. If the precedents are consistent with the meaning of the Constitution, then the doctrine of stare decisis doesn’t make a difference. If the precedents are inconsistent with the Constitution, then judges are obligated to follow the Constitution itself and not the precedents—so the argument would go.
There is something to this argument. Affording strong stare decisis effect to precedents that disregarded the Constitution would, in fact, be to elevate the status of judicial decisions above the Constitution itself. And such elevation would be inconsistent with the formal rule that the Constitution is the Supreme law of the land. In addition, giving precedents the power to overrule the Constitution would create questions of legitimacy. It is unclear whether there is any theory that would legitimate the assignment of a power to overrule the Constitution to the Supreme Court.
But these same problems do not exist if we are dealing with precedents that are based on formalist legal reasoning that aims at the interpretation and application of the original meaning of the Constitution. Such decisions do not involve an implicit claim that the Supreme Court may overrule or modify the Constitution—quite the contrary, they assume the opposite.
Of course, it is possible to disagree about the meaning of the Constitution. We may come to believe that a prior decision—although formalist in method—involved a mistake. The question then becomes, can we legitimately give stare decisis effect to a formalist decision if we believe the decision is mistaken? The answer to this question is “yes, we can.” Once we are operating within the realm of formalist precedents, the question is not “Are we respecting the authority of the Constitution?” but is instead, “What is the institutional mechanism by which disputes about the meaning of the Constitution are to be settled?” At one extreme, we can imagine that we would give each and every government official the authority to decide for herself what the Constitution means. The problems with that system are obvious—it would create uncertainty, unpredictability, and instability that would undermine the rule of law. Various other possibilities exist. We could give every judge the power to interpret the Constitution de novo, with no horizontal or vertical stare decisis. That system would not be as chaotic as one which gave the authority to every official—high and low—but it would, nonetheless, be a real mess. We could imagine a system in which every Supreme Court justice has interpretive authority, but a doctrine of vertical stare decisis binds the lower courts. That system would be more stable, but would still involve shifts in constitutional meaning—as the composition of the Court changes and as individual Justices change their minds. And at the other extreme from total hermeneutic polycentricism, would be a system in which the decisions of the Supreme Court which respect the text and original meaning are given binding effect—granting earlier Supreme Courts the power to constrain the interpretations made by later Supreme Courts. This final option maximizes the rule-of-law values of stability, predictability, and certainty.

For more on this topic, see The Case for Strong Stare Decisis, or Why Should Neoformalists Care About Precedent? Part I: The Three Step Argument Part II: Stare Decisis and the Ratchet Part III: Precedent and Principle
Some of my remarks above are adapted from a forthcoming piece, THE SUPREME COURT IN BONDAGE: CONSTITUTIONAL STARE DECISIS, LEGAL FORMALISM, AND THE FUTURE OF UNENUMERATED RIGHTS, which will appear in the University of Pennsylvania Journal of Constitutional Law.


 
Fairness, Legitimacy, and Compliance Over at Changing the Court, Aubrey Fox has a post entitled Why Fairness Matters. The post investigates compliance rates with court orders in light of Tom Tyler's work on procedural fairness and legitimacy. Here is a taste:
    So why are compliance rates with court orders typically so low in large urban jurisdictions? Social psychologists, such as Tom Tyler of New York University, argue that this is the wrong question to ask. Instead, they flip the question around, examining what motivates people to obey in the first place. “Compliance with the law cannot be taken for granted,” writes Tyler. To Tyler, the key element to compliance is legitimacy – whether or not individuals believe that the courts (or the police) are “entitled to be obeyed.” In Tyler’s research, legitimacy is much more important than “fear of punishment,” or the threat that by not complying, an individual will face negative consequences such as a jail sentence.
And this gives me another opportunity to plug Tyler's book, Why People Obey the Law.


Sunday, July 09, 2006
 
Legal Theory Lexicon: The Counter-Majoritarian Difficulty
    Introduction The counter-majoritarian difficulty may be the best known problem in constitutional theory. The phrase is attributed to Alexander Bickel—a Yale Law School Professor—who is said to have introduced it in his famous book The Least Dangerous Branch. Whatever Bickel actually meant by the phrase, it has now taken on a life of its own. The counter-majoritarian difficulty states a problem with the legitimacy of the institution of judicial review: when unelected judges use the power of judicial review to nullify the actions of elected executives or legislators, they act contrary to “majority will” as expressed by representative institutions. If one believes that democratic majoritarianism is a very great political value, then this feature of judicial review is problematic. For at least two or three decades after Bickel’s naming of this problem, it dominated constitutional theory.
    This entry in the Legal Theory Lexicon explores the counter-majoritarian difficulty and efforts both to solve and to dissolve it. As always, the Lexicon is aimed at law students, especially first-year law students, with an interest in legal theory. As is frequently the case with the Lexicon, we will explore a very big topic in just a few paragraphs. Many articles and books have been written about the counter-majoritarian difficulty; we will only scratch its surface. Moreover, any really deep discussion of the counter-majoritarian difficulty would lead (sooner or later) to almost every other topic in constitutional theory. The Lexicon is “quick and dirty,” and definitely not deep, comprehensive, or authoritative.
    Democracy and Majoritarianism The counter-majoritarian difficulty is rooted in ideas about the relationship between democracy and legitimacy (see the Legal Theory Lexicon entry on Legitimacy ). We all know the basic story: the actions of government are legitimate because of their democratic pedigree, and democratic legitimacy requires “majority rule.” Of course, it isn’t that simple. Among the complexities are the following:
    • There are many different theories of democratic legitimacy, and only some of them emphasize “majoritarianism” as the key factor.
    • Some theories of democratic legitimacy rely on the idea of “consent of the governed,” but it is very difficult to mount an argument for actual consent to existing majoritarian institutions or their actions.
    • The idea of “legitimacy” is itself deeply controversial and might even be called obscure. What legitimacy is and why it is important are themselves deep and controversial questions.
    Despite these complexities, most of us have a rough and ready appreciation for the idea that actions by democratic majorities have some kind of legitimacy that is lacking in the actions of unelected judges. At any rate, that idea is the normative foundation of the counter-majoritarian difficulty.
    Constitutional Limits on Majoritarianism The counter-majoritarian difficulty is sometimes characterized as a problem with the institution of judicial review, but it could also be understood as a difficulty for any constitution that constrains majority will. Of course, there could be constitutions that impose no limits at all on the will of democratically elected legislatures. For example, a regime of unicameral parliamentary supremacy might be said to have a constitution that allows a parliamentary majority to pass any legislation that it pleases and to override the courts or executive whenever the legislature is in disagreement with their actions. Of course, even this simple constitution might constrain the legislature in a certain sense. For example, legislation that attempts to constrain the action of a future legislature might be “unconstitutional.” Another example might be legislation that abolishes elections and substitutes a system of self-perpetuating appointments. Similarly, a legislature might pass a “bill of rights” that purports to bind future legislatures, even in the absence of an institution of judicial review.
    The Institution of Judicial Review Even though the counter-majoritarian difficulty might be a feature of any system with a binding constitution, the difficulty is especially acute for a regime that combines the institution of judicial review with judicial supremacy. In the United States, for example, the courts have the power to declare that acts of Congress are unconstitutional, and if the Supreme Court so declares, the Congress does not have the power to override its decision.
    The institution of judicial review is counter-majoritarian in part because federal judges are not elected and they serve life terms. Presidents are elected every four years; members of the House of Representatives, every two years; and Senators serve staggered six-year terms. Of course, judges and justices are nominated by the President and confirmed by the Senate, and these features create some degree of democratic control of the judiciary. Nonetheless, on the surface, it certainly looks like judicial review is an antidemocratic institution. Unelected judges strike down legislation enacted by elected legislators: that is certainly antidemocratic and antimajoritarian in some sense.
    The counter-majoritarian difficulty is compounded by the nature of judicial review as it has been practiced by the modern Supreme Court. If the Supreme Court limited itself to enforcing the separation of powers between the President and Congress or to the enforcement of the relatively determinate provisions of the constitution that establish the “rules of the game” for the political branches, then the counter-majoritarian difficulty might not amount to much. But the modern Supreme Court has been involved in the enforcement of constitutional provisions that are general, abstract, and seemingly value-laden—examples include the freedom of speech, the equal protection clause, and the due process clause of the constitution. The counter-majoritarian difficulty seems particularly acute when it comes to so-called “implied fundamental rights,” like the right to privacy at issue in cases like Griswold v. Connecticut and Roe v. Wade.
    Answering the Countermajoritarian Difficulty How have constitutional theorists attempted to answer the counter-majoritarian difficulty? The problem with answering that question is that there are so many answers that it is difficult to single out three or four for illustrative purposes. So remember, the “answers” that are discussed here are arbitrary selections from a much longer list.
      Discrete and Insular Minorities One famous answer to the counter-majoritarian difficulty focuses on the idea of “discrete and insular minorities.” The background to this answer is the premise that in the long run, most individuals win some and lose some in the process of democratic decision making. Shifting coalitions among various interest groups “spread the wealth” and the pain—no one wins all the time or loses all the time. Or rather, wins and losses are normally spread across the many different groups that constitute a given political society. However, there may be some groups that are excluded from the give and take of democratic politics. Some groups may be so unpopular (or the victims of such extreme prejudice) that they almost always are the losers in the democratic process. The famous “Footnote Four” of the United States Supreme Court’s decision in the Carolene Products case can serve as the germ of an answer to the counter-majoritarian difficulty. Judicial review is arguably legitimate when it serves to protect the interests of “discrete and insular minorities” against oppressive actions by democratic majorities.
      Anti-Democratic Political Theory Another answer to the counter-majoritarian difficulty admits that judicial review is antidemocratic but seeks to justify this feature by appeal to some value that trumps democratic legitimacy. This isn’t really just one answer to the difficulty—it is a whole lot of answers that share a common feature—the appeal to anti-democratic political values. For example, it might be argued that “liberty” is a higher value than “democracy” and hence that judicial review to protect liberty is justified. Or it might be argued that “equality” is a higher value, or “privacy,” or something else. Obviously, there is a lot more to be said about this kind of answer to the counter-majoritarian difficulty, but for the purposes of this Lexicon entry, this incredibly terse explanation will have to suffice.
      Dualism and High Politics Yet a third approach to the counter-majoritarian difficulty attempts to turn the problem upside down—arguing that judicial review is actually a democratic institution that checks the antidemocratic actions of elected officials. Whoa Nelly! How does that work? This third approach is strongly associated with the work of Bruce Ackerman—perhaps the most influential constitutional theorist since Alexander Bickel. Ackerman’s views deserve at least a whole Lexicon entry, but the gist of his theory can be stated briefly. Ackerman argues for a view that can be called “dualism,” because it distinguishes between two kinds of politics—“ordinary politics” (the kind practiced every day by legislators and bureaucrats) and “constitutional politics.” What is “constitutional politics”? And how is it different from “ordinary politics”? Ackerman’s answers to these questions begin with the idea that ordinary politics isn’t very democratic. Why not? We all know the answer to that question. Ordinary politics are dominated by self-interested politicians and manipulative special interest groups. The people (or “We the People” as Ackerman likes to say) don’t really get involved in ordinary politics, and therefore, ordinary politics are not really very democratic. Constitutional politics, by way of contrast, involve extraordinary issues that actually “get the attention” of the people. For example, the ratification of the Constitution of 1789 caught the attention of ordinary citizens, as did the Reconstruction Amendments (the 13th, 14th, and 15th) following the Civil War. When “We the People” become engaged in constitutional politics, we are giving commands to our agents—Congress and the President—and the Courts are merely enforcing our will when they engage in judicial review—so long as they are faithful to our commands.
      Whew! That was a lot of “We the People” talk. I need a break from channeling Ackerman, before I can finish this entry! OK. I’m back!
      Ackerman’s theory emphasized the idea of distinct regimes that resulted from “constitutional moments”—periods of intense popular involvement in constitutional politics. Recently, Jack Balkin and Sandy Levinson have advanced a similar theory—which emphasizes the idea of “high politics”—the great popular movements that seek to influence the decisions of the Supreme Court on issues like abortion or affirmative action. I can’t do justice to their theory here, but the idea is that the Supreme Court may be responding to democratic pressures when it makes the really big constitutional decisions.
    Dissolving the Counter-Majoritarian Difficulty So far, I’ve been discussing responses to the counter-majoritarian difficulty that operate within normative constitutional theory. There is another important line of attack, however. The counter-majoritarian difficulty rests on a positive (factual) assumption—that the Supreme Court does, in fact, act contrary to political majorities. Some political scientists have argued that this positive assumption is incorrect—that the Supreme Court rarely, if ever, acts contrary to the wishes of the dominant political faction. There could be many reasons for that—one of them being the Supreme Court’s awareness that if it were to buck Congress and the President, it would be vulnerable to a variety of political reprisals. Congress might strip the Court of jurisdiction. Ultimately, the President might simply refuse to cooperate with the Court’s decisions.
    There is another side to this story. There may be reasons why elected politicians prefer for the Supreme Court to “take the heat” for some decisions that are controversial. When the Supreme Court acts, politicians may be able to say, “It wasn’t me. It was that darn Supreme Court.” And in fact, the Supreme Court’s involvement in some hot button issues may actually help political parties to mobilize their base: “Give us money, so that we can [confirm/defeat] the President’s nominee to the Supreme Court, who may cast the crucial vote on [abortion, affirmative action, school prayer, etc.].” In other words, what appears to be counter-majoritarian may actually have been welcomed by the political branches that, on the surface, appear to have been thwarted.
    Conclusion Once again, I’ve gone on for too long. I hope you will forgive me, and I hope that this Lexicon entry has given you food for thought about the counter-majoritarian difficulty. Below, I’ve included a list of references to articles that focus on the difficulty itself and also to some of the authors who have attempted to give answers to Bickel’s famous problem.
    References This is a very incomplete list, emphasizing the works that are focused on “the counter-majoritarian difficulty” in particular and omitting many important works of constitutional theory that deal with the counter-majoritarian difficulty as part of a larger enterprise.
      Bruce Ackerman, We the People: Foundations (1993) & We the People: Transformations (1998).
      Jack M. Balkin & Sanford Levinson, Understanding the Constitutional Revolution, 87 Va. L. Rev. 1045 (2001).
      Alexander Bickel, The Least Dangerous Branch: The Supreme Court at the Bar of Politics 16-18 (2d ed. 1986).
      Steven G. Calabresi, Textualism and the Countermajoritarian Difficulty, 66 Geo. Wash. L. Rev. 1373 (1998).
      Barry Friedman, The Counter-Majoritarian Problem and the Pathology of Constitutional Scholarship, 95 Nw. U. L. Rev. 933 (2001).
      Barry Friedman, The History of the Countermajoritarian Difficulty, Part One: The Road to Judicial Supremacy, 73 N.Y.U. L. Rev. 333, 334 (1998).
      Barry Friedman, The History of the Countermajoritarian Difficulty, Part II: Reconstruction's Political Court, 91 Geo. L.J. 1 (2002).
      Barry Friedman, The History of the Countermajoritarian Difficulty, Part Three: The Lesson of Lochner, 76 N.Y.U. L. Rev. 1383 (2001).
      Barry Friedman, The History of the Countermajoritarian Difficulty, Part Four: Law's Politics, 148 U. Pa. L. Rev. 971 (2000).
      Barry Friedman, The Birth of an Academic Obsession: The History of the Countermajoritarian Difficulty, Part Five, 112 Yale L.J. 153 (2002).
      Ilya Somin, Political Ignorance and the Countermajoritarian Difficulty: A New Perspective on the Central Obsession of Constitutional Theory, 89 Iowa L. Rev. 1287 (2004).
      Mark Tushnet, Policy Distortion and Democratic Debilitation: Comparative Illumination of the Countermajoritarian Difficulty, 94 Mich. L. Rev. 245 (1995).


Saturday, July 08, 2006
 
Legal Theory Bookworm The Legal Theory Bookworm recommends Judging Under Uncertainty: An Institutional Theory of Legal Interpretation by Adrian Vermeule. Here's a blurb:
    How should judges, in America and elsewhere, interpret statutes and the Constitution? Previous work on these fundamental questions has typically started from abstract views about the nature of democracy or constitutionalism, or the nature of legal language, or the essence of the rule of law. From these conceptual premises, theorists typically deduce an ambitious role for judges, particularly in striking down statutes on constitutional grounds. In this book, Adrian Vermeule breaks new ground by rejecting both the conceptual approach and the judge-centered conclusions of older theorists. Vermeule shows that any approach to legal interpretation rests on institutional and empirical premises about the capacities of judges and the systemic effects of their rulings. Drawing upon a range of social science tools from political science, economics, decision theory, and other disciplines, he argues that legal interpretation is above all an exercise in decisionmaking under severe empirical uncertainty. In view of their limited information and competence, judges should adopt a restrictive, unambitious set of tools for interpreting statutory and constitutional provisions, deferring to administrative agencies where statutes are unclear and deferring to legislatures where constitutional language is unclear or states general aspirations.
And "back of the book" blurbs:
    The topic of legal interpretation is a large and enduring one, and Vermeule has made a distinct contribution. Part of that contribution comes simply from the way in which, more than any other scholar of interpretation, Vermeule combines the insights of legal philosophy, public choice theory, history, economics, social psychology, and political science, among others, with a prodigious knowledge of numerous areas of law to produce a genuine comprehensive work on legal interpretation. This is a serious, thoroughly academic, and wonderfully multi-disciplinary addition to the literature on legal interpretation, and in its focus on institutions and on less-than-perfect interpreters making decisions under conditions of uncertainty has a distinct argument and a distinct voice. --Frederick Schauer, John F. Kennedy School of Government, Harvard University
and
    Judging Under Uncertainty uses three basic models of statutory interpretation, all three of which are justified by a cluster of competing normative and empirical assertions that come easily to the armchair quarterbacks known as legal scholars. What is most interesting about Vermeule's work is that he attempts to strip away the normative angle and present a case for interpretive method based on the empirical side. Vermeule has contributed distinctive and imaginative scholarship on the subject of legal interpretation, clearly advancing the field substantially. --Philip P. Frickey, Boalt Hall School of Law, University of California, Berkeley


 
Download of the Week The Download of the Week is The Economics of Open-Access Law Publishing by Jessica Litman. Here is the abstract:
    The conventional model of scholarly publishing uses the copyright system as a lever to induce commercial publishers and printers to disseminate the results of scholarly research. The role of copyright in the dissemination of scholarly research is in many ways curious, since neither authors nor the entities who compensate them for their authorship are motivated by the incentives supplied by the copyright system. Rather, copyright is a bribe to entice professional publishers and printers to reproduce and distribute scholarly works. As technology has spawned new methods of restricting access to works, and copyright law has enhanced copyright owners' rights to do so, the publishers of scholarly journals have begun to experiment with subscription models that charge for access by the article, the viewer, or the year. Copyright may have been a cheap bribe when paper was expensive, but it has arguably distorted the scholarly publishing system in ways that undermine the enterprise of scholarship. Recently, we've seen a number of high-profile experiments seeking to use one of a variety of forms of open access scholarly publishing to develop an alternative model. Critics have not quarreled with the goals of open access publishing; instead, they've attacked the viability of the open-access business model. If we are examining the economics of open access publishing, we shouldn't limit ourselves to the question whether open access journals have fielded a business model that would allow them to ape conventional journals in the information marketplace. We should be taking a broader look at who is paying what money (and comparable incentives) to whom, for what activity, and to what end. Are either conventional or open-access journals likely to deliver what they're being paid for? Law journal publishing is one of the easiest cases for open access publishing. Law scholarship relies on few commercial publishers. 
The majority of law journals depend on unpaid students to undertake the selection and copy editing of articles. Nobody who participates in any way in the law journal article research, writing, selecting, editing and publication process does so because of copyright incentives. Indeed, copyright is sufficiently irrelevant that legal scholars, the institutions that employ them and the journals that publish their research tolerate considerable uncertainty about who owns the copyright to the works in question, without engaging in serious efforts to resolve it. At the same time, the first copy cost of law reviews is heavily subsidized by the academy to an extent that dwarfs both the mailing and printing costs that make up law journals' chief budgeted expenditures and the subscription and royalty payments that account for their chief budgeted revenues. That subsidy, I argue, is an investment in the production and dissemination of legal scholarship, whose value is unambiguously enhanced by open access publishing. In part I of the paper I give a brief sketch of the slow growth of open access publishing in legal research. In part II, I look at the conventional budget of a student-edited law journal, which excludes all of the costs involved in generating the first copy of any issue, and suggest that we cannot make an intelligent assessment of the economics of open access law publishing unless we account for input costs, like the first copy cost, that conventional analysis ignores. In part III, I develop a constructive first copy cost based on assumptions about the material included in a typical issue of the law journal, and draw inferences based on comparing the expenses involved in the first copy, and the entities who pay them, with the official law journal budget. In part IV, I examine the implications of my argument for open access law publishing. In part V, I argue that the conclusions that flow from my analysis apply to non-legal publishing as well.
Highly recommended!


Friday, July 07, 2006
 
More Reverse Engineering of U.S. News's Rankings by Tom Bell Tom Bell has a post entitled Scores of All Law Schools in USN&WR Rankings at Agoraphilia. Here's a snippet:
    U.S. News & World Report does not disclose the scores of all the law schools it ranks. It does so only for schools ranked in tiers one or two. USN&WR lists schools in tiers three and four by name, alphabetically. It of course generates scores for all the schools it ranks. How else would it know what tier to place them in? For reasons good, bad, or indifferent, however, USN&WR declines to report the scores of the law schools it ranks in tiers three or four. I here partially remedy that lacuna.


 
Hall & Wright on Content Analysis of Judicial Opinions Mark A. Hall and Ronald F. Wright (Wake Forest University - School of Law and Wake Forest University - School of Law) have posted Systematic Content Analysis of Judicial Opinions on SSRN. Here is the abstract:
    Despite the interdisciplinary bent of legal scholars, the academy has yet to identify an empirical methodology that is uniquely its own. We propose that one standard social science technique - content analysis - could form the basis for an empirical methodology that is uniquely legal. It holds the potential for bringing social science rigor to our empirical understanding of caselaw, and therefore for creating what is distinctively a legal form of empiricism. To explore this potential, we collected all 122 examples we could find that use content analysis to study judicial opinions, and coded them for pertinent features. Legal scholars began to code and count cases decades ago, but use of this method did not accelerate until about 15 years ago. Most applications are home-grown, with no effort to draw on established social science techniques. To provide methodological guidance, we survey the questions that legal scholars have tried to answer through content analysis, and use that experience to generalize about the strengths and weaknesses of the technique compared with conventional interpretive legal methods. The epistemological roots of content analysis lie in legal realism. Any question that a lawyer might ask about what courts say or do can be studied more objectively using one of the four distinct components of content analysis: 1) replicable selection of cases; 2) objective coding of cases; 3) counting case contents for descriptive purposes; or 4) statistical analysis of case coding. Each of these components contributes something of unique epistemological value to legal research, yet at each of these four stages, some legal scholars have objected to the technique. The most effective response is to recognize that content analysis does not occupy the same epistemological ground as conventional legal scholarship. 
Instead, each method renders different kinds of insights that complement each other, so that, together, the two approaches to understanding caselaw are more powerful than either alone. Content analysis is best used when each decision should receive equal weight, that is, when it is appropriate to regard the content of opinions as generic data. Scholars have found that it is especially useful in studies that question or debunk conventional legal wisdom. Content analysis also holds promise in the study of the connections between judicial opinions and other parts of the social, political, or economic landscape. The strongest application is when the subject of study is simply the behavior of judges in writing opinions or deciding cases. Then, content analysis combines the analytical skills of the lawyer with the power of science that comes from articulated and replicable methods. However, analyzing the cause-and-effect relationship between the outcome of cases and the legally relevant factors presented by judges to justify their decisions raises a serious circularity problem. Therefore, content analysis is not an especially good tool for helping lawyers to predict the outcome of cases based on real-world facts. This article also provides guidance on the best practices for using this research method. We identify techniques that meet standards of social science rigor and account for the practical needs of legal researchers. These techniques include methods for case sampling, coder training, reliability testing, and statistical analysis. It is not necessary to practice this method only at its highest level in order to use it profitably. Instead, we show that valuable uses can be made even by those who are largely innumerate.
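One of the reliability-testing techniques the authors mention can be illustrated concretely. A standard check in content analysis is Cohen's kappa, which measures agreement between two coders after correcting for chance. The sketch below is purely illustrative (the coders, labels, and data are invented, and the authors' own procedures may differ):

```python
# Illustrative sketch of Cohen's kappa for inter-coder reliability.
# The two coders and their "pro"/"con" labels for ten hypothetical
# opinions are invented for this example.

def cohens_kappa(coder1, coder2):
    """Agreement between two coders, corrected for chance agreement."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    # Observed agreement: fraction of items both coders labeled the same.
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    labels = set(coder1) | set(coder2)
    # Chance agreement: product of each coder's marginal label rates.
    expected = sum(
        (coder1.count(lab) / n) * (coder2.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

coder1 = ["pro", "pro", "con", "con", "pro", "con", "pro", "pro", "con", "pro"]
coder2 = ["pro", "pro", "con", "pro", "pro", "con", "pro", "con", "con", "pro"]
kappa = cohens_kappa(coder1, coder2)
# For this invented data: 8/10 observed agreement, 0.52 expected by
# chance, so kappa = (0.8 - 0.52) / 0.48 ≈ 0.583.
```

A kappa near 1 indicates coders are applying the coding scheme consistently; values much lower suggest the coding categories need refinement or the coders need more training.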


 
Ricks on Non-Precedential Opinions Sarah E. Ricks (Rutgers, The State University of New Jersey - School of Law-Camden) has posted The Perils of Unpublished Non-precedential Federal Appellate Opinions: A Case Study of the Substantive Due Process State-Created Danger Doctrine in One Circuit (Washington Law Review, Vol. 81, p. 217, 2006) on SSRN. Here is the abstract:
    About 80% of federal appellate decisions are non-precedential. This Article examines the practical consequences for district courts and litigants confronting inconsistent appellate opinions issued by the same federal circuit. Specifically, this is a case study comparing the divergent binding and non-precedential opinions applying one frequently invoked constitutional theory within the U.S. Court of Appeals for the Third Circuit, the “state-created danger” theory of substantive due process. The comparison demonstrates that the risks of non-precedential opinions are real. During the six-year interval between binding state-created danger decisions, the Third Circuit created inconsistent non-precedential opinions on the identical legal theory. Doctrinal divergence between the Third Circuit's binding and non-precedential opinions has undermined the predictive value of precedential state-created danger decisions, creating an obstacle to settlement at both the trial and appellate levels. In turn, district courts' unpredictable application of the non-precedential opinions has undermined the critical appellate functions of ensuring that like cases are treated alike, that judicial decisions are not arbitrary, and that legal issues resolved at the appellate level need not be relitigated before the district courts. The practice of issuing non-precedential opinions is justified on efficiency grounds, as a mechanism for overburdened appellate courts to manage their dockets. But doctrinal inconsistency between the Third Circuit's precedential and non-precedential opinions undercuts the efficiency rationale because doctrinal divergences may have led plaintiffs and defendants to value cases differently—potentially leading to more litigation, fewer settlements, and additional need for judicial decision-making. This Article proposes several reforms to reduce doctrinal inconsistency between precedential and non-precedential opinions. 
Because an appellate court should weigh the same considerations in making each of its publication decisions, the Third Circuit should replace its amorphous publication guideline with specific criteria. The Article concludes by suggesting that, consistent with the common law tradition of empowering the applying court to assess the persuasive value of a judicial decision, the Third Circuit should no longer refuse to cite its own non-precedential opinions, and should follow several circuits in expressly according persuasive value to its non-precedential opinions.


 
Sorenson on Resentencing Quin M. Sorenson (United States Court of Appeals for the Third Circuit) has posted The Illegality of Resentencing (Duquesne University Law Review, Vol. 44, p. 211, 2006) on SSRN. Here is the abstract:
    The Supreme Court in United States v. Booker held that mandatory application of the United States Sentencing Guidelines is inherently unconstitutional and, to preserve the federal sentencing structure, it excised several provisions of the United States Code that required district courts to adhere to the Guidelines in sentencing criminal defendants. Yet, the Court did not address one provision of the Code, 18 U.S.C. § 3742(g)(2), that still requires district courts to adhere to the Guidelines in resentencing criminal defendants. This article explores this provision and concludes that it renders all resentencing in the federal system illegal, in violation of either the statute or the Constitution. District courts are called upon to recognize the unconstitutionality of 18 U.S.C. § 3742(g)(2) sua sponte and to excise it from the United States Code.


 
Chen on Amicus Influence on Gonzales v. Raich Paul H.S. Chen (Western Washington University - Department of Political Science) has posted Amici Curiae Influence on Supreme Court Decision-making in Gonzales v. Raich on SSRN. Here is the abstract:
    By attempting to discern the influence of information and arguments provided in amici curiae briefs on the recent Supreme Court case of Gonzales v. Raich, this paper seeks to shed light on Supreme Court decision-making and opinion-writing on two fronts: first, by showing how legal arguments influence the substantive decision-making and opinion-writing of the justices; and second, by showing that the submission of amici curiae briefs in Supreme Court cases does have an impact on the outcome of the case in terms of the issues the justices must consider in deciding the case, the concerns they wish to address in their opinions, and the evidence and arguments they wish to marshal to support their positions.


Thursday, July 06, 2006
 
Thursday Calendar
    University of Arizona Law: Kirsten Smolensky, Parental Liability for Genetic Enhancement


 
Fowler, Johnson, Spriggs, Jeon, and Wahlbeck on Network Analysis of Supreme Court Precedents James H. Fowler, Timothy R. Johnson, James F. Spriggs, Sangick Jeon, and Paul J. Wahlbeck (University of California, Davis; University of Minnesota; Washington University, St. Louis - College of Arts & Sciences; University of California, Davis; and George Washington University) have posted Network Analysis and the Law: Measuring the Legal Importance of Supreme Court Precedents on SSRN. Here is the abstract:
    We construct the complete network of 28,951 majority opinions written by the U.S. Supreme Court and the cases they cite from 1792 to 2005. We illustrate some basic properties of this network and then describe a method for creating importance scores using the data to identify the most important Court precedents at any point in time. This method yields dynamic rankings that can be used to predict the future citation behavior of state courts, the U.S. Courts of Appeals, and the U.S. Supreme Court, and these rankings outperform several commonly used alternative measures of case importance.
And from the body of the article:
    A recent advance in internet search theory (Kleinberg 1998) allows us to draw on both inward and outward citations to assess case importance. In particular, this procedure relies conceptually on two different kinds of significant cases simultaneously – outwardly important cases and inwardly important cases. An outwardly important case is one that cites many other important decisions, thereby helping to define which decisions are pertinent to a given legal question. Such cases can also be seen as resolving a larger number of legal questions (Post and Eisen 2000, 570) or at least engaging in a greater effort to ground a policy choice in prior rulings. An inwardly important case is one that is widely cited by other prestigious decisions, meaning that judges see it as an integral part of the law. Cases can act as both inwardly and outwardly important opinions, and the degree to which cases fulfill these roles is mutually reinforcing within the precedent network. That is, a case that is outwardly important cites many inwardly important opinions, and a case that is inwardly important is cited by many outwardly important opinions.
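The mutually reinforcing "inward" and "outward" importance scores described in the passage above correspond to the hubs-and-authorities (HITS) scheme from Kleinberg's work on internet search. As a rough illustration only (the case names and citation graph below are invented, and the authors' actual implementation surely differs), here is a minimal sketch of the iteration on a toy citation network:

```python
# Toy sketch of Kleinberg-style hub/authority scoring on a citation network.
# "auth" tracks inward importance (cited by important cases); "hub" tracks
# outward importance (cites important cases). All case names are hypothetical.

def hits(citations, iterations=50):
    """citations: dict mapping each case to the list of cases it cites."""
    cases = set(citations) | {c for cited in citations.values() for c in cited}
    hub = {c: 1.0 for c in cases}
    auth = {c: 1.0 for c in cases}
    for _ in range(iterations):
        # A case's authority score sums the hub scores of the cases citing it.
        auth = {c: sum(hub[u] for u in cases if c in citations.get(u, []))
                for c in cases}
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {c: v / norm for c, v in auth.items()}
        # A case's hub score sums the authority scores of the cases it cites.
        hub = {c: sum(auth[v] for v in citations.get(c, [])) for c in cases}
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {c: v / norm for c, v in hub.items()}
    return hub, auth

# Hypothetical network: three later cases all cite "Marbury".
network = {
    "Marbury": [],
    "CaseA": ["Marbury"],
    "CaseB": ["Marbury", "CaseA"],
    "CaseC": ["Marbury", "CaseB"],
}
hub, auth = hits(network)
# "Marbury" is cited by every later case, so it tops the authority ranking.
print(max(auth, key=auth.get))
```

The two scores feed each other exactly as the quoted passage says: each round, authorities are recomputed from hubs and hubs from authorities, so a case cited by strong hubs becomes a stronger authority, and vice versa.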
As many of you know, I'm a big fan of network analysis as an empirical tool, and I highly recommend this article. At the same time, I am a bit dismayed by the conceptual softness at the heart of this project. The authors purport to have a measure for "case importance," but they simply have not done the conceptual work that would be required to elucidate what underlying property of cases they are attempting to measure. Of course, if case importance is simply defined in terms of citations--how many cases cite a case or are cited by it--then they have an excellent measure. Duh! But in that case, their preliminary discussion, which seems to assert that case importance is causally important to the law--to understanding how the law works, how legal change occurs, etc.--is simply a bald assertion, without any supporting argument or analysis. If they have a more robust understanding of what "case importance" actually is, then they need to explain it.
There are some very good reasons to question the relationship between citations and importance. Take "outward importance"--the number of cases cited by a case: there are many factors that can account for outward importance. Was the opinion drafted by the Justice or was it drafted by clerks? Does it deal with an impacted field of law or does it write on a clean slate? The authors seem to believe that cases that cite many other cases are thereby likely to resolve many legal questions, but that assertion seems highly dubious. It certainly would require analysis and evidence to convince me of this claim. "Inward importance"--the number of cases that cite a case--seems like a more plausible measure, of something, but again there are problems. For example, the Supreme Court's summary judgment decisions are among its most cited opinions, but that doesn't mean they actually are doing legal work--that depends on the controversial assumption that standards of summary judgment actually determine the outcome of summary judgment decisions, a highly contestable proposition. Citing a case is not equivalent to being causally influenced by a case.
There is another problem with using citations as a proxy for importance. Some very important cases may "settle" legal questions in a way that ends the need for further litigation. Given that case X has settled legal question Q, it may be that citizens and officials stop litigating Q. In that case, X might be a very important case--so far as the causal role of law is concerned, but not a frequently cited case!
It is also somewhat surprising that the authors do not themselves cite any of the prior work by legal academics on the application of network analysis to citation networks.
But with all of that said, this is an important article.
Download it while it's hot!


 
Litman on the Economics of Open-Access Jessica Litman (University of Michigan) has posted The Economics of Open-Access Law Publishing (Lewis & Clark Law Review, Forthcoming) on SSRN. Here is the abstract:
    The conventional model of scholarly publishing uses the copyright system as a lever to induce commercial publishers and printers to disseminate the results of scholarly research. The role of copyright in the dissemination of scholarly research is in many ways curious, since neither authors nor the entities who compensate them for their authorship are motivated by the incentives supplied by the copyright system. Rather, copyright is a bribe to entice professional publishers and printers to reproduce and distribute scholarly works. As technology has spawned new methods of restricting access to works, and copyright law has enhanced copyright owners' rights to do so, the publishers of scholarly journals have begun to experiment with subscription models that charge for access by the article, the viewer, or the year. Copyright may have been a cheap bribe when paper was expensive, but it has arguably distorted the scholarly publishing system in ways that undermine the enterprise of scholarship. Recently, we've seen a number of high-profile experiments seeking to use one of a variety of forms of open access scholarly publishing to develop an alternative model. Critics have not quarreled with the goals of open access publishing; instead, they've attacked the viability of the open-access business model. If we are examining the economics of open access publishing, we shouldn't limit ourselves to the question whether open access journals have fielded a business model that would allow them to ape conventional journals in the information marketplace. We should be taking a broader look at who is paying what money (and comparable incentives) to whom, for what activity, and to what end. Are either conventional or open-access journals likely to deliver what they're being paid for? Law journal publishing is one of the easiest cases for open access publishing. Law scholarship relies on few commercial publishers. 
The majority of law journals depend on unpaid students to undertake the selection and copy editing of articles. Nobody who participates in any way in the law journal article research, writing, selecting, editing and publication process does so because of copyright incentives. Indeed, copyright is sufficiently irrelevant that legal scholars, the institutions that employ them and the journals that publish their research tolerate considerable uncertainty about who owns the copyright to the works in question, without engaging in serious efforts to resolve it. At the same time, the first copy cost of law reviews is heavily subsidized by the academy to an extent that dwarfs both the mailing and printing costs that make up law journals' chief budgeted expenditures and the subscription and royalty payments that account for their chief budgeted revenues. That subsidy, I argue, is an investment in the production and dissemination of legal scholarship, whose value is unambiguously enhanced by open access publishing. In part I of the paper I give a brief sketch of the slow growth of open access publishing in legal research. In part II, I look at the conventional budget of a student-edited law journal, which excludes all of the costs involved in generating the first copy of any issue, and suggest that we cannot make an intelligent assessment of the economics of open access law publishing unless we account for input costs, like the first copy cost, that conventional analysis ignores. In part III, I develop a constructive first copy cost based on assumptions about the material included in a typical issue of the law journal, and draw inferences based on comparing the expenses involved in the first copy, and the entities who pay them, with the official law journal budget. In part IV, I examine the implications of my argument for open access law publishing. In part V, I argue that the conclusions that flow from my analysis apply to non-legal publishing as well.
Highly recommended! This paper was delivered (by a proxy, actually) at the terrific open-access conference organized by Lydia Loren at Lewis and Clark!


 
Beny on Nielsen & Albiston on Public Interest Practice Laura N. Beny (University of Michigan at Ann Arbor Law School) has posted Sample Selection, Methodology and Implications for the Have Nots: A Commentary on Professors Nielsen's and Albiston's 'The Organizational Environment of Public Interest Practice 1975-2000' (North Carolina Law Review, Vol. 84, No. 5, 2006) on SSRN. Here is the abstract:
    This is a commentary by Professor Beny on Professors Nielsen's and Albiston's empirical study, "The Organizational Environment of Public Interest Practice: 1975-2000," 84 N.C. L. REV. 102, 116 (2006). Both Nielsen's and Albiston's empirical study and Professor Beny's commentary were presented at the University of North Carolina Law Review Symposium, Empirical Studies of the Legal Profession: What Do We Know About Lawyers' Lives? at the University of North Carolina School of Law, October 2005.


 
Stake & Alexeev on Response to Rankings Jeffrey Evans Stake and Michael Alexeev (Indiana University School of Law-Bloomington and Indiana University Bloomington - Department of Economics) have posted Who Responds to U.S. News & World Report's Law School Rankings? on SSRN. Here is the abstract:
    U.S. News & World Report (USN&WR) publishes rankings of American Law Schools. The popularity of the rankings raises the question of whether the rankings influence the behavior of law school applicants, law schools, law teachers, lawyers, or employers. This study explores some indicia of USN&WR influence. In particular, we attempt to determine whether USN&WR rankings have influenced 1) law faculty members who respond to the USN&WR survey of law school quality, 2) lawyers who respond to USN&WR surveys, 3) law school applicants, 4) employers who hire law school graduates, and 5) administrators who set tuition. We find significant effects on the first three groups, “echo effects” of USN&WR rankings that are folded back into subsequent rankings. We do not find important effects on salaries or tuitions.


 
Mitchell & Tetlock on Empirical Investigation of Corrective and Distributive Justice Gregory Mitchell and Philip E. Tetlock (University of Virginia School of Law and University of California, Berkeley - Organizational Behavior & Industrial Relations Group) have posted An Empirical Inquiry into the Relation of Corrective Justice to Distributive Justice (Journal of Empirical Legal Studies, Vol. 3, 2006 Forthcoming) on SSRN. Here is the abstract:
    We report the results of three experiments examining the long-standing debate within tort theory over whether corrective justice is independent of, or parasitic on, distributive justice. Using a “hypothetical societies” paradigm that serves as an impartial reasoning device and permits experimental manipulation of societal conditions, we first tested support for corrective justice in a society where individual merit played no role in determining economic standing. Participants expressed strong support for a norm of corrective justice in response to intentional and unintentional torts in both just and unjust societies. The second experiment tested support for corrective justice in a society where race, rather than individual merit, determined economic standing. The distributive justice manipulation exerted greater effect here, particularly on liberal participants, but support for corrective justice remained strong among non-liberal participants, even against a background of racially unjust distributive conditions. The third experiment partially replicated the first experiment and found that the availability of government-funded insurance had little effect on demands for corrective justice. Overall, the results suggest that, while extreme distributive injustice can moderate support for corrective justice, the norm of corrective justice often dominates judgments about compensatory duties associated with tortious harms.


Wednesday, July 05, 2006
 
Mitchell & Tetlock on Experimental Political Philosophy Gregory Mitchell and Philip E. Tetlock (University of Virginia School of Law and University of California, Berkeley - Organizational Behavior & Industrial Relations Group) have posted Experimental Political Philosophy: Justice Judgments in the Hypothetical Society Paradigm on SSRN. Here is the abstract:
    In this draft of a chapter forthcoming in a book on political psychology, we advocate blending thought experiments with laboratory experiments via a technique we call “the hypothetical society paradigm,” which is designed to bring out the inferential advantages of both approaches while minimizing their disadvantages. We discuss the primary benefits of this technique and survey the principal empirical findings thus far obtained using this technique. We also discuss two categories of fruitful future applications of this and related techniques: (a) isolating sources of support and resistance to particular policy proposals with potentially profound societal implications; (b) helping to clarify boundary conditions for the applicability of competing and complementary psychological theories of justice.


 
Saver on Intangible Harm Richard S. Saver (University of Houston - Health Law & Policy Institute) has posted Medical Research and Intangible Harm (University of Cincinnati Law Review, Vol. 74, p. 941, 2006) on SSRN. Here is the abstract:
    Although conventional wisdom assumes that human subjects participating in medical research face significant risk of pain, disability, and death, evidence suggests that, in the aggregate, research subjects fare as well therapeutically as patients with similar conditions not participating in clinical trials. Research subjects do, however, face unappreciated risk of intangible harm, even if not physically injured, such as affronts to dignitary interests. This Article considers whether the intangible hazards faced by subjects should be cognizable under the law to a greater degree. A more flexible approach has considerable advantages, including helping to police opportunistic conduct in the investigator-subject relationship, but also raises difficult drawbacks, such as pragmatic problems in defining boundaries for intangible harm claims and possible over-deterrence. This Article balances the competing considerations and suggests a limited, incremental approach to recognizing intangible harm claims in medical research. In particular, this Article contends that intangible harm remedies are best used to address abandonment hazards, such as when investigators and sponsors readily frustrate and potentially exploit subjects' assumptions regarding study terminations and continued access to experimental technology. Similarly, subjects may be wrongfully abandoned, even if not physically injured, when the study fails to contribute to general medical knowledge through the public dissemination of research results. Because such abandonment conduct disregards subjects' reasonable expectations and considerable personal investments in clinical trials, and raises serious concerns for the research enterprise generally, it warrants sanction as legally cognizable harm.


 
Tiller & Yoon on PPT & Private Securities Litigation Emerson H. Tiller and Albert Yoon (Northwestern University - School of Law and Northwestern University - School of Law) have posted Private Securities Litigation and the Courts: Positive Political Theory and Evidence on SSRN. Here is the abstract:
    This paper incorporates insights from Positive Political Theory to examine the role of legislative reform on judicially created legal doctrines relating to private securities litigation. The study bears on the general question of how Congress can influence the use of legal doctrines within a judicial hierarchy. Specifically, we examine the effect of the Private Securities Litigation Reform Act of 1995 on the use of the “Group Pleading Doctrine” and “Fraud on the Market” theory – two doctrines that assisted plaintiff-investors in overcoming the strict federal pleading requirements for bringing their claims in federal court.


 
Sander & Rozdeiczer on Matching Disputes and Procedures Frank E.A. Sander and Lukasz Rozdeiczer (Harvard Law School and Harvard Law School) have posted Matching Cases and Dispute Resolution Procedures: Detailed Analysis Leading to a Mediation-Centered Approach (Harvard Negotiation Law Review, Vol. 11, p. 1, 2006) on SSRN. Here is the abstract:
    This article builds on the January 1994 Negotiation Journal article by Sander and Goldberg on the same taxonomy topic. But it takes a much broader view of the problem, first by also looking at others' analysis of the same problem - specifically the work of the CPR International Institute for Conflict Prevention and Resolution, the Federal Judicial Center, and Professor Edward Dauer. Although these sources do not always use the same terminology or focus on the same types of cases, Sander and Rozdeiczer identify the three main areas of inquiry as 1) The goals of the parties, 2) The features of the process, the case or the parties that point to a particular dispute resolution process, and 3) the capacity of a procedure to overcome impediments to effective resolution. The upshot of their analysis is that mediation is by far the most hospitable and effective process for most situations. They conclude their analysis by recommending a mediation-centered approach, consisting of the following three steps: 1) Assume mediation, unless the case falls into one of the rare situations where mediation is not appropriate; 2) If mediation is appropriate, then what kind of mediation should be used (evaluative, facilitative, transformative etc.); 3) If mediation is not appropriate, what method(s) should be used? This involves a loopback to Part II of the article setting forth the three main areas of inquiry listed above.


Tuesday, July 04, 2006
 
Chafetz on Yoder Josh Chafetz (Yale Law School) has posted Social Reproduction and Religious Reproduction: A Democratic-Communitarian Analysis of the Yoder Problem on SSRN. Here is the abstract:
    In 1972, Wisconsin v. Yoder presented the Supreme Court with a sharp clash between the state's interest in social reproduction through education--that is, society's interest in using the educational system to perpetuate its collective way of life among the next generation--and the parents' interest in religious reproduction--that is, their interest in passing their religious beliefs on to their children. This Article will take up the challenge of that clash, a clash which continues to be central to current debates over issues like intelligent design in the classroom. This Article engages with the competing theories put forward by scholars and judges who believe in a broad right of religious reproduction, trumping the state's interest in social reproduction, as well as those who believe that the interest in social reproduction should trump contrary claims by insular religious groups. The Article suggests that each of the major competing theories is fundamentally flawed and offers an alternative analysis based on communitarian and democratic values. This democratic-communitarian view begins with the communitarian intuition that social subjects are constituted by multiple sources of value and that a rich diversity of value sources is important and worth fostering. Communitarian theory both recognizes the danger in allowing high-level value sources (that is, those value sources further from the individual) to become too thick and seeks to match social institutions to the values they are best able to promote. The role of education in our society suggests that it is uniquely well-situated to inculcate society-wide values. This conclusion combines with the democratic intuition that, in a democracy, decisions about the inculcation of social values can only legitimately be made by democratic means. The conclusion is that parents and courts are unjustified in interfering with social reproduction through schooling. 
However, communitarian theory also suggests that conscientious citizens and legislators should impose the minimum of constraints necessary to ensure the transmission of important communal values. That is, they should strongly consider democratically enacting the sorts of exemptions at issue in Yoder.


Monday, July 03, 2006
 
Call for Papers
    HUMAN AFFAIRS: A Postdisciplinary Journal for Humanities & Social Sciences Call for Papers FOR A SPECIAL ISSUE ON action & practice theory Guest Editor: Theodore R. Schatzki, University of Kentucky, USA Human affairs are composed of human activities to a substantial degree. Much of the social world also results from human activities, whether it is intended or outside people's control, whether it is cause for pride or for shame. The venerable task of the human sciences has been to comprehend the activity-sociality nexus and whatever bears on it. Traditional approaches to this task, such as those which emphasize the individual and the psychological or those that highlight society or the structural, have proved inadequate. New conceptions and accounts of action, performance, practice and the human world are needed, as the recent practice turn, among other developments, demonstrates. HUMAN AFFAIRS invites submissions of papers for its next issue, VOLUME 17, NUMBER 2, December 2007, devoted to exploration of the above topic. Contributions drawing on all fields of the humanities and the social sciences, but also transcending them, are welcome, focusing primarily (though not exclusively) on topics such as
      · Conceptions of action and practice in philosophy and social science · Teleological, instrumental, expressive, ceremonial, ritualistic action & practice · Action, practice and communication · The constitution of individual and collective identities in practices · The foundation of memory in actions, practices and related sociality · Practice, the performance of action and the body · The times and spaces of activity and of practice · Practices and learning · The production and use of knowledge, artifacts and works in practices · Practices, digital culture and virtual objects · Practices and individual responsibility · Understanding & explaining actions and practices · Social practices and nature · Practices and social order · Producing, maintaining & transforming social structure and organization in practices
    Submission Guidelines Please follow the submission guidelines on the cover or the website of HUMAN AFFAIRS. Abstracts Due: December 15, 2006 (in English, Slovak, Czech) Manuscripts due: June 15, 2007 (in English, Slovak, Czech) All information and communications concerning submissions should be addressed to the Editorial Office: Department of Social & Biological Communication Slovak Academy of Sciences, Klemensova 19, 813 64 Bratislava, SLOVAKIA tel: 00421-2-54 77 56 83, fax: 00421-2-54 77 34 42, E-mail: humanaffairs@humanaffairs.sk; Website: www.humanaffairs.sk


 
Call for Papers: Lesbian, Gay, Bisexual, and Transgender Legal Issues
    Law & Sexuality: A Review of Lesbian, Gay, Bisexual, and Transgender Legal Issues Law & Sexuality is currently seeking timely theoretical or practical articles to be published in Spring 2007. Law & Sexuality, at Tulane Law School, is the first and only student-edited law review in the country devoted solely to covering legal issues of interest to the lesbian, gay, bisexual, and transgender community on a wide variety of subjects, including constitutional, employment, family, health, insurance, immigration and military law. Submissions should be e-mailed to lbecnel@law.tulane.edu and will be accepted through August 31, 2006.


 
Conference Announcement: Moral Contextualism
    The Department of Philosophy at the University of Aberdeen is hosting an international conference on 'Moral Contextualism'. The conference will take place on July 4-5 2006. Programme:
      John Hawthorne (Rutgers University) - Predicates of Character Response: TBA Alan Thomas (University of Kent) - Inferential Contextualism and Moral Cognitivism Response by Timothy Chappell (The Open University) Ralph Wedgwood (University of Oxford) - Moral Contextualism Response by Kent Hurtig (University of Stirling) Walter Sinnott-Armstrong (Dartmouth College) - Book symposium on 'Moral Scepticisms' (OUP 2006). Responses by Peter Baumann (University of Aberdeen) Gerry Hough (University of Aberdeen) Martijn Blaauw (University of Aberdeen) Berit Brogaard (University of Missouri, St. Louis) - Moral Contextualism and Moral Relativism Response by Lars Binderup (University of Southern Denmark) John Greco (Fordham University) - What's Wrong with Contextualism? Comments by Duncan Pritchard (University of Stirling)
    For more information on this event, also on how to register, please visit the conference homepage at: If you have any questions, please contact the conference organisers:
      *Peter Baumann: p.baumann@abdn.ac.uk *Martijn Blaauw: m.blaauw@abdn.ac.uk


 
Call for Papers: BSET 2007
    CALL FOR PAPERS: The BRITISH SOCIETY for ETHICAL THEORY 2007 CONFERENCE University of Bristol, UK 9-11 July 2007 Invited Speakers: Roger Crisp (Oxford University) David Velleman (New York University) Papers are invited for the annual conference of the British Society for Ethical Theory, to be held at the University of Bristol. The subject area is open within metaethics and normative ethics. Papers on topics in applied ethics or the history of ethics may also be considered provided they are also of wider theoretical interest. Papers, which should be unpublished at the time of submission, should be in English, no longer than 6500 words, readable in at most 45 minutes and in a form suitable for blind review. Please send your submission electronically, and include an abstract, as well as your full name, address and academic affiliation. Those who submitted papers for our previous conferences - successfully or otherwise - are welcome to submit again (though not of course the same papers!). Please tell us if you are a postgraduate student: submissions from postgraduates are encouraged as our aim is that some such should be represented at the conference. Selected conference papers will be published in the journal "Ethical Theory and Moral Practice". Please make clear in any covering letter whether you want your paper considered for publication here as well as for the conference programme. The deadline for submissions is 8th December, 2006. Papers and accompanying particulars should be emailed to Dr. Alison Hills at the following email address:
      bset.submissions@gmail.com
    Note that ONLY electronic submissions will be accepted. Further particulars regarding registration will be available in due course from: BSET homepage - http://www.bset.org.uk/ - where further information about the Society is also available.


 
Conference Announcement: Antecedents of Action
    Philosophy of Action - Conference Announcement ANTECEDENTS OF ACTIONS: Reasons, Decisions, Intentions and Will www.uni-potsdam.de/action 14.-17. September 2006 University of Potsdam Speakers:
      Maria Alvarez (Southampton, GB) Michael Bratman (Stanford, USA) Jennifer Hornsby (London, GB) Marco Iorio (Bielefeld, D) Geert Keil (Aachen, D) Christoph Lumer (Siena, I) Alfred Mele (Tallahassee, USA) Neil Roughley (Konstanz, D) Tim Schroeder (Manitoba, CDN) Thomas Spitzley (Duisburg-Essen, D) Ralf Stoecker (Potsdam, D) Gary Watson (Riverside, USA)
    Organisation:
      Thomas Spitzley (University of Duisburg-Essen) Ralf Stoecker (University of Potsdam)
    More than fifty years have passed since Wittgenstein wrote his cryptic reminder. During this time numerous scholars in the philosophy of action have tried to elucidate the relation between our actions and other things we do or which happen to us. It used to be widely agreed that actions are distinguished by a certain kind of explanation that refers to reasons, intentions or motives. But how exactly is this more general criterion of action to be understood? In particular, what is the relation between intentions and reasons for actions on the one hand and action tokens on the other? These are questions that continue to generate vast disagreement. This conference brings together representatives of the most important rival action theories in order to further discussion of the differing approaches to these issues. The conference also aims to lay a foundation for more detailed discussions of theories of action among German philosophers. Location:
      University of Potsdam Am Neuen Palais 10 14469 Potsdam GERMANY
    Costs:
      Conference (incl. lunch and coffee breaks): 25 €. Additional dinner and guided tour through Park Sanssouci: 25 €.
    Contact:
      action@uni-potsdam.de


Sunday, July 02, 2006
 
Legal Theory Calendar
    Thursday, July 6
      University of Arizona Law: Kirsten Smolensky, Parental Liability for Genetic Enhancement


 
Legal Theory Lexicon: Legitimacy
    Introduction Legitimacy. It’s a word much bandied about by students of the law. “Bush v. Gore was an illegitimate decision.” “The Supreme Court’s implied fundamental rights jurisprudence lacks legitimacy.” “The invasion of Iraq does not have a legitimate basis in international law.” We’ve all heard words like these uttered countless times, but what do they mean? Can we give an account of “legitimacy” that makes that concept meaningful and distinctive? Is “legitimacy” one idea or is it several different notions, united by family resemblance rather than an underlying conceptual structure?
    This entry in the Legal Theory Lexicon will examine the concept of legitimacy from various angles. As always, the Lexicon is aimed at law students, especially first-year law students, with an interest in legal theory.
    Normative and Sociological Legitimacy Let’s begin with the distinction between normative legitimacy and sociological legitimacy. On the one hand, we talk about legitimacy as a normative concept. When we use “legitimacy” in the normative sense, we are making assertions about some aspect of the rightness or wrongness of some action or institution. On the other hand, legitimacy is also a sociological concept. When we use legitimacy in the sociological sense, we are making assertions about legitimacy beliefs--about what attitudes people have. Although these two senses of legitimacy are related to one another, they are not the same. That’s because an institution could be perceived as legitimate on the basis of false empirical beliefs or incorrect value premises. The opposite can be true as well: a controversial court decision (Roe, Bush v. Gore, etc.) could have been perceived as illegitimate, even if it had been a legitimate decision.
    Conceptions of Legitimacy
      Concepts and Conceptions The distinction between normative and sociological legitimacy is important, but, by itself, it doesn’t get us very far. What does “legitimacy” mean? How is “legitimacy” different from “justice” or “correctness”? Those are deep questions—deserving of a book-length answer. My general policy in the Lexicon series is to steer a neutral course—avoiding controversial assertions about debatable matters of legal theory. But when it comes to legitimacy, it is difficult to stick to this plan. The difficulty is not so much that legitimacy is the subject of a well-defined debate; rather, the problem is that the concept of legitimacy is usually ill-defined and undertheorized.
      So here is the strategy we will use. Let’s borrow the concept/conception distinction for a starting point. Let’s hypothesize that there is a general concept of legitimacy but that this concept is contested—different theorists have different views about what legitimacy consists in. Some theorists think that legitimacy is conferred by democratic procedures; others may think that legitimacy is a function of legal authorization. Let’s take a look at four different notions of legitimacy.
      Four Conceptions of Legitimacy
        Legitimacy as Democratic Process One very important and influential idea of legitimacy is connected with democratic procedures. Let’s begin with a simple example. Suppose you belong to a small-scale organization of some kind—maybe a law-school faculty. The executive of the organization can take various actions on her own authority, but there are some matters that must be decided by democratic procedures. For example, suppose the Dean of a law school decided that all first-year classes should be taught in small groups with cooperative-learning techniques and without the traditional case method and Socratic questioning. This might be a marvelous innovation. (I’m not saying it would be.) But if the Dean made the decision without the input of the faculty (or a vote of the faculty), then it is quite likely that there would be vociferous opposition to the new organization of the curriculum on the grounds that the Dean’s decision lacked democratic legitimacy.
        Let’s take a more familiar example. Federal judges are not directly elected. They are appointed for life terms. Although the President (who nominates federal judges) and the Senate (which confirms them) are both elected bodies, the judges who sit at any given time have an indirect and diffuse democratic pedigree. Moreover, their life terms make them relatively insulated from democratic control. So there is a question of legitimacy about the institution of judicial review. Does the fact that Supreme Court Justices are not elected make it illegitimate for them to invalidate actions taken by elected officials? Of course, that’s a big question. For our purposes, the important point is that the question itself is one of democratic legitimacy.
        Legitimacy as Legal Authority Another conception of legitimacy seems to focus on legal authority. For example, when President Truman ordered the seizure of the steel mills during the Korean War, there was no question but that he had been elected in 1948. But despite the fact that Truman was elected democratically, there was still a question about the legitimacy of his action. Even if his action was democratic, it may not have been legal. When an official acts outside her sphere of legal authority, we sometimes say that her decision was “illegitimate.” When we use “legitimacy” in this way, we seem to be relying on the idea that legitimacy is connected to legal authority. Actions that are not legally authorized are frequently called “illegitimate,” whereas actions that are lawful are sometimes seen as legitimate for that reason.
        Legitimacy as Reliability Yet another theory ties legitimacy to the reliability of the process that produces the decision. To see the point of the “reliability conception” of legitimacy, we need to step back for a moment. There is a difference between the “correctness” or “justice” of a decision, on the one hand, and its “legitimacy” on the other. Indeed, this seems to be a crucial feature of “legitimacy.” We think that an incorrect decision can nonetheless be legitimate, whereas a correct decision can lack legitimacy.
        Reliability theories acknowledge this “gap” between legitimacy and justice, but insist that there is nonetheless a strong connection between the two. The idea is that legitimacy requires a decision-making process that meets some threshold requirement of reliability. So tossing a coin would not be a legitimate method for deciding legal disputes. Even if the coin toss came out the right way and the party that would have won in a fair trial did win the coin toss, the decision that resulted from the flip of a coin would be criticized as illegitimate.
        One important example of a reliability theory of legitimacy is found in Randy Barnett’s book, Restoring the Lost Constitution. Barnett argues that the legitimacy of a constitution depends on its reliability in producing just outcomes. A legitimate constitution guarantees a tolerable level of justice. A constitution that does not provide such a guarantee is illegitimate—or so Barnett argues.
        The Liberal Principle of Legitimacy Let’s do one more theory of legitimacy. John Rawls has advanced what he called “the liberal principle of legitimacy.” Here is how Rawls states the principle:
          [O]ur exercise of political power is fully proper only when it is exercised in accordance with a constitution the essentials of which all citizens as free and equal may reasonably be expected to endorse in the light of principles and ideals acceptable to their common human reason.
        Unpacking Rawls’s principle could take a whole article, but let me make three observations:
        • The distinctive feature of the principle is that it makes reasons count. That is, the principle bases legitimacy on reasonable endorsement “in the light of principles and ideals acceptable to . . . common human reason.” Readers of past Lexicon entries will note that Rawls is referring here to his idea of public reason.
        • The principle does not require that citizens actually endorse the constitutional essentials. Rather, the requirement is that citizens “may reasonably be expected to endorse” the constitutional essentials. In other words, the constitutional essentials must be justified by public reasons in such a way that the justification is one that reasonable citizens could be expected to accept.
        • Citizens are asked to endorse the constitutional essentials “as free and equal”. That is, the principle assumes a certain political conception of citizens as free and equal members of society. The reasons are addressed to citizens conceived in this way, and not to citizens as they are, if that includes their rejection of the notion that each and every citizen should be regarded as a free and equal member of society.
        Rawls’s liberal principle of legitimacy points us in the direction of a whole family of ideas about legitimacy. Rawls’s principle is tied to his idea of public reason, but we can imagine other theories of legitimacy that include particular kinds of reasons as legitimating or exclude categories of reasons as illegitimate.
      Competing versus Complementary Conceptions We began our investigation of various conceptions of legitimacy with the working hypothesis that these would be “competing conceptions,” i.e., that only one of these theories of legitimacy could be correct for a given domain of application. Now, let’s take a second look at that assumption.
      Is it really the case that the various conceptions of legitimacy compete with one another? There is another possibility—that some (or all) of these conceptions are complementary. For example, we might say that a given judicial decision has legitimacy in the sense that it was made by legally authorized officials, but that the same decision lacks democratic legitimacy, because it was made by unelected judges contrary to the will of democratically elected legislators. If this way of talking is sensible, then it may be the case that the various conceptions of legitimacy do not compete with one another, but rather exist in some sort of complementary relationship.
    Conclusion We’ve barely scratched the surface, but I hope this entry has given you food for thought about the idea of “legitimacy.” My own sense is that one should be very wary about deploying the idea of legitimacy. Because legitimacy has different senses and is undertheorized, it is very easy to make claims about legitimacy that are ambiguous or theoretically unsound.


Saturday, July 01, 2006
 
Legal Theory Bookworm The Legal Theory Bookworm recommends The Rehnquist Legacy edited by Craig Bradley. Here's a blurb:
    During the thirty-three years William Rehnquist has been on the Supreme Court, nineteen as Chief Justice, significant developments have defined the American legal landscape. This book is a legal biography of Chief Justice William Rehnquist of the United States Supreme Court and the legacy he created. It is an intensive examination of his thirty-three year legacy as a Supreme Court Justice based on his Court opinions, primarily in the area of constitutional law, and written by a group of legal scholars each of whom is a specialist in the area covered by his/her chapter.
Many of the papers from this volume were posted on SSRN; the contents include fine essays by Rick Garnett, Yale Kamisar, Mark Tushnet, Dan Farber, Phil Frickey, and many others.


 
Download of the Week The Download of the Week is Temporary Legislation by Jacob E. Gersen. Here is the abstract:
    This paper provides a descriptive, positive, and normative analysis of temporary legislation, statutes containing a clause terminating legal authority on a specified future date. Notwithstanding the fact that a significant portion of the legislative docket consists of statutes that terminate automatically absent affirmative Congressional reauthorization in the future, the political dynamics of such statutes remain significantly under-theorized. Yet, temporary statutes have a long and storied pedigree both in the United States and elsewhere. After a historical overview, the paper outlines the major conceptual features of temporary statutes and demonstrates the implications for allocations of power and responsibility within and among the three branches of government, with a particular emphasis on the political economy of temporary legislation. Lastly, using a mixture of theoretical analysis and a case study, the paper argues for greater reliance on temporary statutes as a mechanism for responding to newly recognized risks.
Download it while it's hot!