Search results for the tag, "College Essays"


May 1st, 2005

The Prewar Evidence (or Lack Thereof): Saddam Hussein’s Collaboration with Terrorists and His Deterrability

Saddam Hussein

This isn’t a 404 error; the page you’re looking for isn’t missing. I just moved it—in fact, I created a microsite for it.


April 4th, 2005

State-Sponsored Disinformation

Saddam Hussein

In the 16 months between Sept. 11, 2001, and the Iraq war—despite considerable efforts to entangle Saddam Hussein in the former[1]—hawks came up seriously short. Consequently, neither of the Bush administration’s two most publicized arguments for the war—the President’s State of the Union address and Secretary of State Colin Powell’s presentation to the U.N. Security Council—even mentioned the evidence allegedly implicating Baghdad in our day of infamy. Lest we misconstrue the subtext, on Jan. 31, 2003, Newsweek asked the President specifically about a 9/11 connection to Iraq, to which Bush replied, “I cannot make that claim.”

And yet, seven weeks later, a few days before the war began, the Gallup Organization queried 1,007 American adults on behalf of CNN and USA Today. The pollsters asked, “Do you think Saddam Hussein was personally involved in the Sept. 11th (2001) terrorist attacks (on the World Trade Center and the Pentagon), or not?” Fifty-one percent of respondents said yes, 41% said no, and 8% were unsure. What accounts for this discrepancy between the American people and their government?

Many blame the media; indeed, it has become a cliché, in the title of a recent book by Michael Massing, to say of antebellum reporting, Now They Tell Us (New York Review of Books, 2004). Such facileness, however, confuses coverage of Iraq’s purported “weapons of mass destruction”—which, as some leading newspapers and magazines have since acknowledged, was inadequately skeptical[2]—with coverage of Iraqi-al-Qaeda collaboration, which was admirably exhaustive.

Instead, two answers arise. First, the current White House is perhaps the most disciplined in modern history in staying on message. Although the President denied the sole evidence tying Saddam to 9/11—an alleged meeting in Prague between an Iraqi spy and the ringleader of the airline hijackers in April 2001—the principals of his administration consistently beclouded and garbled the issue. As Secretary of Defense Donald Rumsfeld told Robert Novak in May 2002, “I just don’t know” whether there was a meeting or not. Or as George Tenet told the congressional Joint Inquiry on 9/11 a month later (though not declassified until Oct. 17, 2002), the C.I.A. is “still working to confirm or deny this allegation.” Or as National Security Adviser Condoleezza Rice told Wolf Blitzer in September 2002, a month before Congress would vote to authorize the war, “We continue to look at [the] evidence.” Or as Vice President Richard Cheney told Tim Russert the same day, “I want to be very careful about how I say this. . . . I think a way to put it would be it’s unconfirmed at this point.” Indeed, a year later—even after U.S. forces in Iraq had arrested the Iraqi spy, who denied having met Mohammed Atta—Cheney continued to sow confusion: “[W]e’ve never been able to . . . confirm[] it or discredit[] it,” he asserted. “We just don’t know.”

A second hypothesis is that while Iraq had nothing to do with 9/11, it did have a relationship with al Qaeda. Never mind that at best the relationship was tenuous, that there was nothing beyond some scattered, inevitable feelers. That Saddam Hussein and Osama bin Laden had been in some sort of contact since the early 1990s allowed the Bush administration to conflate, shamelessly, the two men’s dealings related to 9/11 with those that had nothing to do with it.

In this way, as late as October 2004, in his debate with John Edwards during the presidential campaign, Dick Cheney continued to insist that Saddam had an “established relationship” with al Qaeda. Senator Edwards’s reply was dead-on: “Mr. Vice President, you are still not being straight with the American people. There is no connection between the attacks of September 11 and Saddam Hussein. The 9/11 Commission said it. Your own Secretary of State said it. And you’ve gone around the country suggesting that there is some connection. There’s not.”

Two months ago, CBS News and the New York Times found that 30 percent of Americans still believe that Saddam Hussein was “personally involved in the September 11, 2001, terrorist attacks.” Sixty-one percent disagreed. This is certainly an improvement; yet the public is not entirely to blame. Nor is the Fourth Estate.

Rather, the problem lies primarily with the Bush administration. Andrew Card, George W. Bush’s Chief of Staff, explained it best. “I don’t believe you,” he told Ken Auletta of the New Yorker, “have a check-and-balance function.” In an interview, Auletta elaborated: “[T]hey see the press as just another special interest.” This is the real story of the run-up to the Iraq war: not a press that is cowed or bootlicking, but a government that treats the press with special scorn and sometimes simply circumvents it. As columnist Steve Chapman put it, the administration’s policy was “never to say anything bogus outright when you can effectively communicate it through innuendo, implication and the careful sowing of confusion.”

Indeed, now that we are learning more stories of propaganda from this administration—$100 million to a P.R. firm to produce faux video news releases; White House press credentials to a right-wing male prostitute posing as a reporter; payola for two columnists and a radio commentator to promote its policies—the big question isn’t about the supposed failings of the press. The question is about the ominously expanding influence of state-sponsored disinformation.

Footnotes

[1] For instance, on 10 separate occasions Donald Rumsfeld asked the C.I.A. to investigate Iraqi links to 9/11. Daniel Eisenberg, “‘We’re Taking Him Out,’” Time, May 13, 2002, p. 38.

Similarly, Dick Cheney’s chief of staff, I. Lewis “Scooter” Libby, urged Powell’s speechwriters to include the Prague connection in his U.N. address. Dana Priest and Glenn Kessler, “Iraq, 9/11 Still Linked by Cheney,” Washington Post, September 29, 2003.

[2] See, for instance, The Editors, “Iraq: Were We Wrong,” New Republic, June 28, 2004; [Author unspecified], “The Times and Iraq,” New York Times, May 26, 2004; and Howard Kurtz, “The Post on W.M.D.: An Inside Story,” Washington Post, August 12, 2004.


October 7th, 2004

Tolerating Intolerance: Why Hate Speech Is Free Speech

God Hates Fags

A version of this blog post appeared in the Hamilton College Spectator in two parts, on October 7, 2004, and October 14.

Fyodor Dostoyevsky once said that we can judge a society’s virtue by its treatment of prisoners.

Likewise, we can judge a society’s freedom by its treatment of minorities. For freedom makes it safe to be unpopular; this is why the First Amendment fundamentally protects dissent. Playing the title character in the movie The American President (1995), Michael Douglas crystallizes the point: “You want free speech? Let’s see you acknowledge a man whose words make your blood boil, who’s standing center stage and advocating at the top of his lungs that which you would spend a lifetime opposing at the top of yours.”

This is of course a Tinseltown vision, more familiar from the mind of Voltaire than from daily life. What if the speaker were calling interracial marriage “a form of bestiality,” à la Matt Hale of the Creativity Movement (formerly the World Church of the Creator)?[1] What if the speaker were waving a placard that says, “God Hates Fags,” à la supporters of Jael Phelps, a candidate for city council in Topeka, Kansas?[2] What if the speaker were suggesting that “more 9/11s are necessary,” à la professor Ward Churchill?[3]

Such notions represent so-called hate speech, which critics seek to criminalize. They argue that speech is a form of social power, by which the historically dominant group, namely, male WASPs, institutionally stigmatizes and harasses the Other. In this way, mere epithets can inflict acute anguish, so that certain words become inherently abusive, intimidating and persecutory. Explains Daniel Jonah Goldhagen, a historian of the Holocaust: We should view such “verbal violence . . . as an assault in its own right, having been intended to produce profound damage—emotional, psychological, and social—to [one’s] dignity and honor.”[4] Adds law professor Charles Lawrence, “The experience of being called ‘nigger,’ ‘spic,’ ‘Jap,’ or ‘kike’ is like receiving a slap in the face.”[5]

Now, critics are right that words are never just words. With words, a speaker can reach into your very soul, imprinting searing, permanent scars. With words, a speaker can incite individuals to insurrection or vigilantism. Words are weapons. Yet words are always just words, since the breaking of sound waves across one’s ears is qualitatively different from the breaking of a baseball bat across one’s back.[6] Put simply, sticks and stones may break my bones, but words can never truly hurt me.

Specifically, as physical acts, deeds entail consequences over which one has no volition; a fist that lands hurts, whether one wants it to or not. By contrast, one can control one’s reaction to language; to what extent a locution harms one depends ultimately on how one evaluates it.[7] After all, taking responsibility for one’s feelings distinguishes adults from adolescents. Thus, as law professor Zechariah Chafee puts it, banning hate speech “makes a man a criminal . . . because his neighbors have no self-control.”[8] Indeed, with torture chambers in Egypt, genocide in the Sudan and suicide bombing in Israel, equating words with violence is odious. As writer Jonathan Rauch notes, “Every cop or prosecutor chasing words is one fewer chasing criminals.”[9] Plus, if we want to ban speech because it inspires violence, doesn’t history demand that we start with our most beloved book—the Bible—in whose name men have conducted everything from war to inquisition to witch burnings to child abuse?[10]

Still, critics assert that hurling forth scurrilous epithets silences people. The wound is so instantaneous and intense that it disables the recipient. But the law should be neither a psychiatrist nor a babysitter; it should not promote the message, “Peter cast aspersions on Paul. Ergo, Paul is a victim.” That lesson only entails a race to the bottom of victimhood, and implies that one should lend considerable credence to the opinions of bigots. To the contrary, one should recognize that the opinions of bigots are the opinions of bigots.

Consider an incident from the spring of 2004 at Hamilton College, wherein one student, face to face with another, called him a “fucking nigger.” Far from cowering, the black students on campus, with the full-throated support of their white peers and faculty, reacted with zeal. Just as the American Civil Liberties Union (A.C.L.U.) predicted 10 years earlier: “[W]hen hate is out in the open, people can see the problem. Then they can organize effectively to counter bad attitudes, possibly change them, and forge solidarity against the forces of intolerance.”[11] Sure enough, with a newly formed committee, a protest, a petition, constant discussion, letters to the editor and articles in the school newspaper, this is exactly what ensued. As if stung, the community sprang into action, and bottom-up self-censorship obviated top-down administrative censorship.

This is likewise the case outside the ivory tower, since as a practical matter, the more outrageous something is, the more publicity it attracts. Perhaps the most famous example comes from the late 1970s, when neo-Nazis attempted to march through Skokie, Illinois, home to much of Chicago’s Jewish population, many of whom had survived Hitler’s Germany. Although the village board tried to prevent the demonstration, various courts ordered that it be allowed to proceed. Of course, by this time, notoriety and counterprotests caused the Nazis to change venues. Similarly, on September 13, 2001, the Christian fundamentalists Jerry Falwell and Pat Robertson accused those who disagreed with their ideology of begetting the terrorist attacks two days earlier. Both have since lost their once-significant political clout.

Better yet, the claim of Holocaust deniers that the Auschwitz gas chambers could not have worked led to closer study, and, in 1993, research detailed their operations. Even Harvard president Larry Summers’s recent, repeatedly qualified musings about gender differences ignited a national conversation about the latest science on the subject. The lesson here is that just as democracy counterbalances factions against factions, so speech rebuts speech. And rather than try to end prejudice and dogma, we can make them socially productive.

For this reason, we should practice extreme tolerance in the face of extreme intolerance. We need not give bigots microphones, but we need to give ourselves a society where, as a 1975 Yale University report describes it, people enjoy the unfettered right to think the unthinkable, mention the unmentionable, and challenge the unchallengeable.[12] Thomas Jefferson got it exactly right upon the founding of the University of Virginia: “This institution will be based on the illimitable freedom of the human mind. For here, we are not afraid to follow truth where it may lead, nor to tolerate error so long as reason is free to combat it.”[13]

Furthermore, with laws built on analogy and precedent, even narrowly tailored restrictions lead to wider ones.[14] Indeed, the transition to tyranny invariably begins with the infringement of a given right’s least attractive practitioners—“our cultural rejects and misfits . . . our communist-agitators, our civil rights activists, our Ku Klux Klanners, our Jehovah’s Witnesses, our Larry Flynts,” as Rodney Smolla writes in Jerry Falwell v. Larry Flynt (1988).[15] And since free speech rights are indivisible, the same ban Paul uses to muzzle Peter, Peter can later use to muzzle Paul. Conversely, if we tolerate hate, we can employ the First Amendment for a nobler good, to defend the speech of anti-war protesters, gay-rights activists and others fighting injustice that is graver than being called names. For example, in the 1949 case Terminiello v. Chicago, the A.C.L.U. successfully defended an ex-Catholic priest who had delivered a public address blasting “Communistic Zionistic Jew[s],” among others.[16] That precedent then formed the basis for the organization’s successful defense of civil rights demonstrators in the 1960s and 70s.[17]

And yet critics contend that since hate speech exceeds the pale of reasonable discourse, banning it fails to deprive society of anything important. As much of the Western world has recognized, people can communicate con brio sans calumny. Human history is full enough of hate; shouldn’t we try to make our day and age as hate-free as possible?

Yes, but not as a primary. As writer Andrew Sullivan explains, “In some ways, some expression of prejudice serves a useful social purpose. It lets off steam; it allows natural tensions to express themselves incrementally; it can siphon off conflict through words, rather than actions.”[18] The absence of nonviolent channels to express oneself only intensifies the natural emotion of anger, and when repression inevitably comes undone, it erupts with furious wrath. Moreover, “Verbal purity is not social change,” as one commentator puts it.[19] Speech is a consequence, not a cause, of bigotry, and so it can never really change hearts and minds. (In fact, a hate speech law doesn’t even attempt the latter, since it treats words, rather than people, as the bigots.) Rather, a government gun sends the problem underground, and makes bigots change the forms of their discrimination, not their practice of it.

Finally, consider two crimes under a hate speech law. In each, I am beaten brutally, my jaw is smashed and my skull is split in the same way. In the former my assailant calls me a “jerk”; in the latter he calls me a “dirty Jew.” Whereas assailant one receives perhaps five years’ incarceration, assailant two gets 10. This is unjust for three reasons. First, we usually consider conduct spurred by emotion less abhorrent than that spurred by reason. This is why courts show leniency for crimes of passion, and reserve their greatest condemnation for calculated evil; hence the distinction between first- and second-degree murder. A hate speech ban reverses this axiom. Second, such a law makes two crimes out of one, levying an additional penalty for conduct that is already criminal.

Third, the sole reason assailant two does harder time is not because hate motivated him, but because his is hate directed at special groups, like Jews, blacks or gays. Hate crime, then, turns out not to address hate, but politics. For to focus on one’s ideology—regardless of how despicable that ideology is—rather than on the objective violation of a victim’s rights, politicizes the law. Observes writer Robert Tracinski, such legislation “is an attempt to import into America’s legal system a class of crimes formerly reserved only to dictatorships: political crimes.”[20]

In the end, we must make a fundamental decision: Do we want to live in a free society or not?[21] If we do, then we must recognize that the attempt to criminalize hate is not only immoral, it is also impractical. For freedom will always include hate; progress thrives in a crucible of intellectual pluralism; and democracy is not for shrinking violets. As Thomas Paine remarked, “Those who expect to reap the blessings of freedom, must, like men, undergo the fatigues of supporting it.”[22] This, too, is the view of the United States Supreme Court, which in cases like Erznoznik v. Jacksonville (1975) and Cohen v. California (1971) has ruled that however much speech offends one, one bears the burden to avert one’s eyes.

What then should we do? If the difference between tolerance and toleration is eradication vs. coexistence, then, as Andrew Sullivan concludes, we would “do better as a culture and as a polity if we concentrated more on achieving the latter rather than the former.”[23]

Footnotes

[1] As quoted in Nicholas D. Kristof, “Hate, American Style,” New York Times, August 30, 2002.

[2] Eric Roston, “In Topeka, Hate Mongering Is a Family Affair,” Time, February 28, 2005, p. 16.

[3] Ward Churchill, Interview with Catherine Clyne, “Dismantling the Politics of Comfort,” Satya, April 2004.

[4] Daniel Jonah Goldhagen, Hitler’s Willing Executioners: Ordinary Germans and the Holocaust (New York: Knopf, 1996), p. 124.

[5] Charles R. Lawrence III, “If He Hollers Let Him Go: Regulating Racist Speech on Campus,” Duke Law Journal, June 1990.

[6] Stephen Hicks, “Free Speech and Postmodernism,” Navigator (Objectivist Center), October 2002.

[7] Stephen Hicks, “Free Speech and Postmodernism,” Navigator (Objectivist Center), October 2002.

[8] Zechariah Chafee Jr., Free Speech in the United States (Cambridge: Harvard University, 1941), p. 151.

[9] Jonathan Rauch, “In Defense of Prejudice,” Harper’s, May 1995.

[10] Nadine Strossen, Defending Pornography: Free Speech, Sex, and the Fight for Women’s Rights (New York: Scribner, 1995), p. 258.

[11] [Unsigned], “Hate Speech on Campus,” American Civil Liberties Union, December 31, 1994.

[12] “Report of the Committee on Freedom of Expression at Yale,” Yale University, January 1975.

[13] “Quotations on the University of Virginia,” Thomas Jefferson Foundation.

[14] Eugene Volokh, “Underfire,” Rocky Mountain News (Denver), February 5, 2005.

[15] Rodney A. Smolla, Jerry Falwell v. Larry Flynt: The First Amendment on Trial (Urbana: University of Illinois, 1988), p. 302.

[16] As quoted in Terminiello v. Chicago, 337 U.S. 1 (1949).

[17] [Unsigned], “Hate Speech on Campus,” American Civil Liberties Union, December 31, 1994.

[18] Andrew Sullivan, “What’s So Bad About Hate?,” New York Times Magazine, September 26, 1999.

[19] As quoted in [Unsigned], “Hate Speech on Campus,” American Civil Liberties Union, December 31, 1994.

[20] Robert W. Tracinski, “‘Hate Crimes’ Law Undermines Protection of Individual Rights,” Capitalism Magazine, November 16, 2003.

[21] Salman Rushdie, “Democracy Is No Polite Tea Party,” Los Angeles Times, February 7, 2003.

[22] Thomas Paine, “The Crisis,” No. 4, September 11, 1777, in Moncure D. Conway (ed.), The Writings of Thomas Paine, Vol. 1 (1894), p. 229.

[23] Andrew Sullivan, “What’s So Bad About Hate?,” New York Times Magazine, September 26, 1999.

Unpublished Notes

Clichés
The vileness of the offense makes it a perfect test of one’s loyalty to the principle of freedom.[1]

Sunlight is the best disinfectant.[2]

Working toward what the free speech scholar Lee Bollinger terms “the tolerant society” by banning intolerance institutes a do-as-I-say-not-as-I-do, free-speech-for-me-but-not-for-thee standard; our means corrupt our end.

“We are not afraid to entrust the American people with unpleasant facts, foreign ideas, alien philosophies, and competitive values. For a nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”

“I disapprove of what you say, but I will defend to the death your right to say it.”[3]

Speech As Action

“The wounds that people suffer by . . . listen[ing] . . . to such vituperation . . . can be as bad as . . . a . . . beating.”[4]

Granted, such fortitude is idealistic; but is it naïve? Does it trivialize the sincerity or seriousness of one’s pain? Does it intellectualize or universalize something profoundly personal? While the range of responses to hate is vast, the common denominator derives from what Alan Keyes, a former assistant secretary of state, terms “patronizing and paternalistic assumptions. Telling blacks,” for instance, “that whites have the . . . character to shrug off epithets, and they do not. . . . makes perhaps the most insulting, most invidious, most racist statement of all.”[5]

Speech victimizes only if one grants the hater that dispensation.

In the end, one retains the capacity to check, and to exaggerate, the force of input.

As Ayn Rand showed in The Fountainhead, although protagonist Howard Roark endures adversity that would shrivel most men, “It goes only down to a certain point and then it stops. As long as there is that untouched point, it’s not really pain.”

“The only final answer” to hate speech, “is not majority persecution . . . but minority indifference . . . The only permanent rebuke to homophobia is not the enforcement of tolerance, but gay equanimity in the face of prejudice. The only effective answer to sexism is not a morass of legal proscriptions, but the simple fact of female success. In this, as in so many other things, there is no solution to the problem. There is only a transcendence of it. For all our rhetoric, hate will never be destroyed.”[6]

We should not prohibit speech because it leads to violence. The moment one graduates to action—when harassment turns from verbal taunts to physical acts—that should be illegal. Then it is no longer a question of taunts, but a question of threats.

Minorities

But don’t minorities deserve special protection? After all, they are invariably the canary in the coalmine of civilization.

At the end of our century, we have once again been faced with an outburst of hatred and destruction based on racial, political and religious differences, which has all but destroyed a country—former Yugoslavia—at least temporarily. Rwanda, Sudan…

But why should an anti-Semite be prosecuted for targeting Jews, while the Unabomber is not subject to special prosecution for his hatred of scientists and business executives? The only answer is that the Unabomber’s ideas are more “politically correct” than the anti-Semite’s.[7]

is, of course, to try minds and punish beliefs.[8]

Hate crime expands the law’s concern from action to thought

But a free-market solution to hate effectively perpetuates the status quo. And though laudable in principle, such a solution lacks force in the face of much of human history, especially the 20th century.

It requires great faith in the power of human reason to believe that people will ultimately become tolerant if left to their own devices. Surely, in the absence of the 1964 Civil Rights Act, which among other things banned private discrimination, things would not have changed as much.

“require us to believe too simply in the power of democracy and decency and above all rationality; in the ability of a long, slow onslaught” on bigotry.[9]

[1] Ayn Rand, “Censorship: Local and Express,” Ayn Rand Letter, August 13, 1973.

[2] “Sunlight is said to be the best of disinfectants.” Louis D. Brandeis, Other People’s Money: And How the Bankers Use It (New York: Frederick A. Stokes, 1914 [1932]), p. 92.

[3] Evelyn Beatrice Hall, under the pseudonym Stephen G. Tallentyre, The Friends of Voltaire (1906).

[4] Daniel Jonah Goldhagen, Hitler’s Willing Executioners: Ordinary Germans and the Holocaust (New York: Knopf, 1996), p. 124.

[5] Alan L. Keyes, “Freedom through Moral Education,” Harvard Journal of Law and Public Policy, Winter 1991.

[6] Andrew Sullivan, “What’s So Bad About Hate?,” New York Times Magazine, September 26, 1999.

[7] Robert W. Tracinski, “‘Hate Crimes’ Law Undermines Protection of Individual Rights,” Capitalism Magazine, November 16, 2003.

[8] Jonathan Rauch, “In Defense of Prejudice,” Harper’s, May 1995.

[9] Ursula Owen, “The Speech That Kills,” Index on Censorship, 1998.


September 10th, 2004

Why Conscription Is Immoral and Impractical

The Vietnam Veterans Memorial Wall

No matter how one rationalizes it—duty, the Constitution, necessity, practicality, shared sacrifice—conscription abrogates a man’s right to his life and indentures him to the state. As President Reagan recognized (at least rhetorically), “[T]he most fundamental objection is moral”; conscription “destroys the very values that our society is committed to defending.”[1]

The libertarian argument says that freedom means the absence of the initiation of coercion. Since conscription necessitates coercion, it is incompatible with freedom. Most political scientists, however, believe that freedom imposes certain positive obligations; and so, like taxes, conscription amounts to paying rent for living in a free society.

Which view is right goes to the heart of political philosophy—but the answer is straightforward. If government’s purpose is to protect your individual rights, it cannot then claim title to your most basic right—your very life—in exchange. Such an idea establishes the cardinal axiom of tyranny: that every citizen’s existence stands at the state’s disposal. Nazi Germany, Soviet Russia and Communist China understood this principle well. And they demonstrated that if the state has the power to conscript you into the armed forces, then the state has the power to conscript you into whatever folly or wickedness it wants. (This logic is not lost on the Bush administration, which given the dearth of C.I.A. personnel who speak Arabic, has floated plans to draft such specialists.) Moreover, as philosopher Ayn Rand argued, if the state can force you to shoot or kill another human being and “to risk [your] death or hideous maiming and crippling”—“if your own consent is not required to send [you] into unspeakable martyrdom—then, in principle,” you cease to have any rights, and the government ceases to be your protector.[2]

It matters little that you may neither approve of nor even understand the casus belli, for conscription is the hallmark of a regime that cannot be bothered with persuasion. This is of course the point, since by inculcating a philosophy of mechanical, unquestioning obedience, conscription turns men from autonomous individuals into sacrificial cogs. What could better unfit people for democratic citizenship?

By contrast, with voluntary armed services, no one enters harm’s way who does not choose that course; the state must convince every potential soldier of the justice and necessity of the cause. To a free society—one rooted in the moral principle that man is an end in himself, that he exists for his own sake—conscription robs men, as the social activist A.J. Muste wrote, “of the freedom to react intelligently . . . of their volition to the concrete situations that arise in a dynamic universe . . . of that which makes them men—their autonomy.”[3]

In this way, conscription exemplifies the “involuntary servitude” the American Constitution forbids. And yet the same Constitution that forbids the state from enforcing “involuntary servitude” (13th Amendment) instructs it to “provide for the common defense” (Preamble) and to “raise and support armies” (Article 1, Section 8, Clause 12). Do these powers not amount to conscription? Not necessarily. David Mayer, a professor of law and history at Capital University, explains: Where the Constitution is ambiguous, we should refer to its animating fundamentals; we should read each provision in the framework “of the document as a whole, and, especially, in light of the purpose of the whole document. . . . [T]hat purpose is to limit the power of government and to safeguard the rights of the individual.”[4] Conscription explicitly contradicts these American axioms.

Even so, some argue that conscription is necessary to ensure America’s survival in the face of, say, a two-front war. A government that acts unconstitutionally in emergencies is better than a government that makes the Constitution into a suicide pact.[5] “Injustice is preferable to total ruin,” the social scientist Garrett Hardin once opined.[6] But stability is neither government’s purpose nor its barometer. True, stability provides the security necessary to exercise one’s freedom; but a government that sacrifices its citizens’ autonomy to prop itself up is no longer a guardian of freedom. To put it another way, the survival of the nation is an imperative, but since the Constitution defines the nation, the nation’s survival is meaningless apart from that relationship. As philosophy professor Irfan Khawaja puts it, “A constitution is to a nation what a brain is to a person: take the brain out, and you kill the person; take large enough chunks of the brain out, and it may as well not be there.”[7]

Yet what if, out of ignorance or indifference, people fail to appreciate a threat before it is too late? Would the 16 million men and women whom the U.S. government conscripted for World War Two—over 12 percent of our population at that time—have arisen, voluntarily, in such numbers, at such a rate, and committed to such specialties as we needed to win the war?[8] Isn’t conscription, as President Clinton termed it, a “hedge against unforeseen threats and a[n] . . . ‘insurance policy’”?[9] Haven’t our commanders in chief—from Lincoln suspending habeas corpus during the Civil War, to FDR interning Japanese-Americans during the Second World War, to Bush signing the Patriot Act today—always infringed certain liberties in wartime? In 1919, the Supreme Court declared that merely circulating an inflammatory anti-draft flier, in wartime, constitutes a “clear and present danger.”

Of course, since the price of freedom is eternal vigilance, if one wants to continue to live in freedom, then one should volunteer to defend it when it is threatened. As a practical matter, a dearth of volunteers is often the result of a corrupt war. For instance, without conscription, the U.S. government would have lacked enough soldiers to invade Vietnam; an all-volunteer force (A.V.F.) would have surely triggered a ceasefire years earlier, since people would have simply stopped volunteering. Indeed, rather than deter presidents from prosecuting that increasingly unpopular, drawn-out and bloody tragedy—from sending 60,000 Americans to their senseless deaths—conscription enabled them to escalate it.

Still, even in a just war, enlistments might not meet manpower needs. Sometimes quantity overcomes quality. Napoleon, no neophyte in such matters, noted that “Providence is always on the side of the last reserve.”[10]

But God sides not with the big battalions but with those who are most steadfast. As President Reagan put it, “No arsenal or no weapon in the arsenals of the world is so formidable as the will and moral courage”[11] of a man who fights of his own accord, for that which he believes is truly just. This is why American farmers defeated British conscripts in 1783, and why Vietnamese guerrillas defeated American conscripts in 1975. Would you prefer to patrol Baghdad today guarded by a career officer, acting on his dream to see live action as a sniper, or guarded by a haberdasher whom the Selective Service Act has coerced into duty and who can think of nothing else save where he’d rather be?

Furthermore, when private firms, in any field, need more workers, they do not resort to hiring at gunpoint. Rather, they appeal to economics by increasing employees’ compensation. If anyone deserves top government dollar, it is those who, as George Orwell reportedly said, allow us to sleep safely in our beds, those rough men and women who stand ready in the night to visit violence on those who would do us harm.[12]

Nonetheless, isn’t an A.V.F. a poor man’s army, driving a wedge between the upper classes, who usually secure exemptions through loopholes or bribery, and the middle and lower classes, on whose backs wars are traditionally fought? Similarly, doesn’t an A.V.F. devolve disproportionately on minorities, who, as one former Marine captain writes, “enlist[] in the economic equivalent of a Hail Mary pass”?[13] In fact, today’s A.V.F. is the most egalitarian ever. While blacks, for instance, remain overrepresented by six percent, Hispanics, though they make up about 13 percent of America, comprise only 11 percent of those in uniform.[14] Moreover, overrepresentation of a class or race stems not from the upward mobility the armed forces offer—training soldiers in such marketable skills as how to drive a truck, fix a jet or operate sophisticated software—but from the inferior opportunities in society.

Still, critics insist the A.V.F. excludes the children of power and privilege, of our opinion- and policy-makers. Isolated literally and socially from volunteers, these “chicken hawks” can thus advocate “regime change,” “police action,” protecting our “national interests,” or “humanitarian intervention.” After all, as Matt Damon’s character remarks in Good Will Hunting (1997): “It won’t be their kid over there, getting shot. Just like it wasn’t them when their number got called, ‘cuz they were pulling a tour in the National Guard. It’ll be some kid from Southie [a blue-collar district of Boston] over there taking shrapnel in the ass.” “The war,” therefore, as former Marine William Broyles Jr. recently noted, “is impersonal for the very people to whom it should be most personal.”[15] By contrast, serving in combat gives one an essential understanding of its horrors, and the more people who serve, the more soberly and honestly will people weigh the real-life consequences of their opinions. It is far more trying to beat the drums of war if your spouse, friends, children or grandchildren might come home in a body bag (and even more vexing if the government does not censor such coverage).

In theory, this argument has much merit. As a moral issue, however, no matter how egalitarian conscription may be, there is no getting around that it still violates individual rights. Additionally, the notion that veterans, ipso facto, possess better judgment than their civilian counterparts elides the fact that Abraham Lincoln and Franklin Roosevelt, neither of whom saw combat, were America’s greatest wartime strategists. Moreover, as journalist Lawrence Kaplan observes, Vietnam left Senators Chuck Hagel (R-NE), John McCain (R-AZ) and John Kerry (D-MA) on three divergent paths, with Hagel a traditional realist, McCain a virtual neoconservative and Kerry a leftist.[16] Experience, while laudable and preparatory, is neither mandatory nor monolithic.

Yet the military integrates blacks and whites, Jews and gentiles, immigrants and nativists, communists and capitalists, atheists and religionists. Esprit de corps breeds national unity. Not for nothing did “bro” enter the American vernacular in the Vietnam era—“Who sheds his blood with me shall be my brother”[17]—nor was it coincidental that the army was the first governmental agency to be desegregated. A speechwriter for President Nixon, who wrote a legislative message proposing the draft’s end, now argues that the “military did more to advance the cause of equality in the United States than any other law, institution or movement.”[18]

Of course, forcing people to wear nametags in public areas would make society friendlier, but no one (except some characters in Seinfeld) entertains this silly violation of autonomy—so why should we entertain it for the most serious violation? Noble and imperative as the ends may be, a civilian-controlled military is not a tool to implement social change, but a deadly machine for self-defense. Further, to advance equality at home one may well need to watch one’s bros die abroad.

But conscription will restore the ruggedness today’s young Americans sorely lack, critics contend. Complacency cocoons my generation; we depend on anything but ourselves. Maybe they even quote Rousseau: “As the conveniences of life increase . . . true courage flags, [and] military virtues disappear.”[19]

Yet soft as we may appear vegging out before M.T.V., history shows that when attacked, Americans are invincible. As President Bush said of 9/11: “Terrorist attacks can shake the foundations of our biggest buildings, but they cannot touch the foundation of America. These acts shattered steel, but they cannot dent the steel of American resolve.”[20] Moreover, the problem is not a dearth of regimentation, but a dearth of persuasion; the administration has failed to convince potential soldiers to enlist. Rather than a sign of pusillanimity, this suggests that those with the most to lose think Washington is acting for less than honorable reasons—which should prompt the government not to reinstate conscription but to rethink its policies.

In his inaugural address, JFK acclaimed the morality behind conscription. “Ask not what your country can do for you,” he declared. “Ask what you can do for your country.” But our founders offered us an alternative to the choice between parasitism and cannon fodder, between betraying one’s beliefs by serving and becoming a criminal or expatriate by dodging: autonomous individuals pursuing their own happiness, sacrificing neither others to themselves nor themselves to others. The catch-22 goes further, since the prime draftee age, from 18 to 25, in Ayn Rand’s words, constitutes “the crucial formative years of a man’s life. This is . . . when he confirms his impressions of the world . . . when he acquires conscious convictions, defines his moral values, chooses his goals, and plans his future.” In other words, when man is most vulnerable, draft advocates want to force him into terror—“the terror of knowing that he can plan nothing and count on nothing, that any road he takes can be blocked at any moment by an unpredictable power, that, barring his vision of the future, there stands the gray shape of the barracks, and, perhaps, beyond it, death for some unknown reason in some alien jungle.”[21] Death in some alien jungle yesterday—death in some alien desert today.

Footnotes

[1] Ronald Reagan, Letter to Mark O. Hatfield, May 5, 1980. As quoted in Doug Bandow, “Draft Registration: It’s Time to Repeal Carter’s Final Legacy,” Cato Institute, May 7, 1987.

[2] Ayn Rand, “The Wreckage of the Consensus,” in Ayn Rand, Capitalism: The Unknown Ideal. Italics added.

[3] A.J. Muste, “Conscription and Conscience,” in Martin Anderson (ed.), with Barbara Honegger, The Military Draft: Selected Readings on Conscription (Stanford: Hoover, 1982), p. 570.

[4] David Mayer, “Interpreting the Constitution Contextually,” Navigator (Objectivist Center), October 2003.

[5] The term “suicide pact” comes from Supreme Court Associate Justice Robert Jackson, who, in his dissenting opinion in Terminiello v. Chicago (1949), wrote: “There is danger that, if the court does not temper its doctrinaire logic with a little practical wisdom, it will convert the constitutional Bill of Rights into a suicide pact.”

See also David Corn, “The ‘Suicide Pact’ Mystery,” Slate, January 4, 2002.

[6] Garrett Hardin, “The Tragedy of the Commons,” Science, December 13, 1968.

[7] Irfan Khawaja, “Japanese Internment: Why Daniel Pipes Is Wrong,” History News Network, January 10, 2005.

[8] Harry Roberts, Comments on Arthur Silber, “With Friends Like These, Continued—and Arguing with David Horowitz,” LightofReason.com, November 19, 2002.

[9] William Jefferson Clinton, Letter to the Senate, May 18, 1994.

[10] Burton Stevenson, The Home Book of Quotations (New York: Dodd, Mead, 1952), p. 2114.

[11] Ronald Reagan, First Inaugural Address, January 20, 1981.

[12] For years people have quoted these eloquent words—either “People sleep peaceably in their beds at night only because rough men stand ready to do violence on their behalf,” or, “We sleep safely at night because rough men stand ready to visit violence on those who would harm us”—and attributed them to George Orwell, which was the pseudonym of Eric Blair. Yet neither the standard quotation books, general and military, extensive Google searches, the Stumpers ListServ, nor the only Orwell quotation booklet, The Sayings of George Orwell (London: Duckworth, 1994), cites a specific source.

[13] Nathaniel Fick, “Don’t Dumb Down the Military,” New York Times, July 20, 2004, p. A19.

[14] Nathaniel Fick, “Don’t Dumb Down the Military,” New York Times, July 20, 2004, p. A19.

[15] William Broyles Jr., “A War for Us, Fought by Them,” New York Times, May 4, 2004.

[16] Lawrence F. Kaplan, “Apocalypse Kerry,” New Republic Online, July 30, 2004.

[17] Noel Koch, “Why We Need the Draft Back,” Washington Post, July 1, 2004, p. A23.

[18] Noel Koch, “Why We Need the Draft Back,” Washington Post, July 1, 2004, p. A23.

[19] Jean-Jacques Rousseau, “A Discourse on the Moral Effects of the Arts and Sciences,” in Jean-Jacques Rousseau, The Social Contract and Discourses (London: Everyman, 1993), p. 20.

[20] George W. Bush, Statement by the President in His Address to the Nation, White House, September 11, 2001.

[21] Ayn Rand, “The Wreckage of the Consensus,” in Ayn Rand, Capitalism: The Unknown Ideal.

Unpublished Notes

1. Bill Steigerwald, “Refusing to Submit to the State,” Pittsburgh Tribune-Review, September 19, 2004.
We [draft dodgers] exploited nearly 30 deferments, which—until the draft lottery was instituted in 1969 to make involuntary servitude an equal opportunity for every 18- to 26-year-old—were embarrassingly rigged in favor of the white and privileged and against minorities and working classes.

We went to college (2-S)—for as long as possible. We got married—and had kids ASAP (3-A). We faked diseases and psychoses, made ourselves too fat or over-did drugs (4-F).

We became preachers (4-D) and teachers and conscientious objectors (1-O). We fled to Canada or even committed suicide.

2. David M. Kennedy, “The Best Army We Can Buy,” New York Times, July 25, 2005.
But the modern military’s disjunction from American society is even more disturbing. Since the time of the ancient Greeks through the American Revolutionary War and well into the 20th century, the obligation to bear arms and the privileges of citizenship have been intimately linked.

When our deferments were refused or elapsed, we became draft bait (1-A). . . .

[I]t’s the military draft that’s morally wrong, not the [politician] . . . who dodges it.

3. Christopher Preble, “You and What Army?,” American Spectator, June 14, 2005.
A draft would succeed in getting bodies into uniforms, but conscription is morally reprehensible, strategically unsound, and politically unthinkable. The generals and colonels, but especially the junior officers and senior enlisted personnel who lead our armed forces, know that the military is uniquely capable because it is comprised of individuals who serve of their own free will.

4. Richard A. Posner, “Security vs. Civil Liberties,” Atlantic Monthly, December 2001.
Lincoln’s unconstitutional acts during the Civil War show that even legality must sometimes be sacrificed for other values. We are a nation under law, but first we are a nation. . . . The law is not absolute, and the slogan “Fiat justitia, ruat caelum” (Let justice be done, though the heavens fall) is dangerous nonsense. The law is a human creation . . . It is an instrument for promoting social welfare, and as the conditions essential to that welfare change, so must it change.

5. John Stuart Mill, On Liberty. As quoted in Michael Walzer, Just and Unjust Wars.
The only test . . . of a people’s having become fit for popular institutions is that they, or a sufficient portion of them to prevail in the contest, are willing to brave labor and danger for their liberation.

6. Mario Cuomo. As quoted in William Safire, “Cuomo on Iraq,” New York Times, November 26, 1990.
You can’t ask soldiers to fling their bodies in front of tanks and say, ‘We’ll take our chances on reinforcements.’

the latter means that your right to your own life is provisional—which means you don’t have that right. Instead, you must buy your rights by surrendering your life.

integrate idea of “shared sacrifice” into “poor man’s army” counterargument

Data on Draftees
According to Pentagon officials, draftees tend to serve shorter terms than volunteers, so the armed services get less use out of their training. Draftee military units also don’t jell as well into cohesive fighting forces (Mark Thompson, “Taking a Pass,” Time, September 1, 2003, p. 43).

the lack of unit cohesiveness from constant rotation

“With soldiers now serving 50% longer than they did in the Vietnam era, the Pentagon invests heavily in career-length education and training, helping the troops master the complicated technology that makes the U.S. military the envy of the world” (Mark Thompson, “Taking a Pass,” Time, September 1, 2003, p. 43).

7. Fred Kaplan, “The False Promises of a Draft,” Slate, June 23, 2004.
In 2002 (the most recent year for which official data have been compiled), 182,000 people enlisted in the U.S. military. Of these recruits, 16 percent were African-American. By comparison, blacks constituted 14 percent of 18-to-24-year-olds in the U.S. population overall. In other words, black young men and women are only slightly over-represented among new enlistees. Hispanics, for their part, are under-represented, comprising just 11 percent of recruits, compared with 16 percent of 18-to-24-year-olds.

Looking at the military as a whole, not just at those who signed up in a single year, blacks do represent a disproportionate share—22 percent of all U.S. armed forces. By comparison, they make up 13 percent of 18-to-44-year-old civilians. The difference is that blacks re-enlist at a higher rate than whites. (Hispanics remain under-represented: 10 percent of all armed forces, as opposed to 14 percent of 18-to-44-year-old civilians.)

Still, the military’s racial mix is more diverse than it used to be. In 1981, African-Americans made up 33 percent of the armed forces. So, over the past two decades, their share has diminished by one-third.

There is a still more basic question: What is the purpose of a military? Is it to spread the social burden—or to fight and win wars? The U.S. active-duty armed forces are more professional and disciplined than at any time in decades, perhaps ever. This is so because they are composed of people who passed comparatively stringent entrance exams—and, more important, people who want to be there or, if they no longer want to be there, know that they chose to be there in the first place. An Army of draftees would include many bright, capable, dedicated people; but it would also include many dumb, incompetent malcontents, who would wind up getting more of their fellow soldiers killed.

It takes about six months to put a soldier through basic training. It takes a few months more to train one for a specialized skill. The kinds of conflicts American soldiers are likely to face in the coming decades will be the kinds of conflicts they are facing in Iraq, Afghanistan, Kosovo, and Bosnia—“security and stabilization operations,” in military parlance. These kinds of operations require more training—and more delicate training—than firing a rifle, driving a tank, or dropping a bomb.

If conscription is revived, draftees are not likely to serve more than two years. Right now, the average volunteer in the U.S. armed forces has served five years. By most measures, an Army of draftees would be less experienced, less cohesive—generally, less effective—than an Army of volunteers. Their task is too vital to tolerate such a sacrifice for the cause of social justice, especially when that cause isn’t so urgent to begin with.

Mandatory National Service
A final spin on the conscription arguments appeals to compulsory national service.

With the phrase painted across the back of his jacket, a nineteen-year-old department store worker, Paul Robert Cohen, said it memorably: “Fuck the draft.” In Cohen v. California (1971), the U.S. Supreme Court ruled that the First Amendment protected this speech (which in full read, “Fuck the Draft. Stop the War”). Yet more than half a century earlier, in the Selective Draft Law Cases (1918)—the only time the high court has squarely reviewed conscription—the Court upheld the draft as constitutional.

Each must choose according to his own priorities.

Sacrifice
The word “sacrifice” apparently now applies only to our grandparents.

But I do not want to sacrifice; in fact I want to live selfishly, to protect my own freedom, not that of my 280 million compatriots.

Regimentation
“Soviet Russia took children away at an early age and indoctrinated them with ideas of war and the glories of the regimented life in which the individual does not count.”

Constitutional Arguments
On one hand, they may—though the argument that because something is constitutional, it is ipso facto moral, fails to question whether the Constitution, on the given issue, is itself immoral.

Under the ordinary rules that courts use to harmonize potentially conflicting laws, the more specific one typically governs (Adam Liptak, “In Limelight at Wiretap Hearing: 2 Laws, but Which Should Rule?,” New York Times, February 7, 2006).


April 30th, 2004

To Torture or Not to Torture

Jack Nicholson As Colonel Nathan R. Jessup in A Few Good Men

A version of this blog post appeared in the Hamilton College Spectator on April 30, 2004.

We know the general location, we know it will happen in the next 24 hours, and we’re confident the person we’ve nabbed knows what, where and when.[1] The question before us: to torture or not to torture?

Although we’ve now heard Attorney General-nominee Alberto Gonzales condemn the practice, seen Specialist Charles Graner sentenced to 10 years for committing it, and read half a dozen new books highlighting the route from Gonzales’ keyboard to Graner’s fists, it seems we are no closer to an answer. We urgently need one, but the very subject makes us wince and demur, insulated by the cliché, “Out of sight, out of mind.” Of course, this only perpetuates the problem, for without the check of a national debate, government defaults to its worst instincts. Here, then, is a modest start in addressing today’s moral imperative.

We should first remember that this hypothetical represents an emergency, and since emergencies distort context, they make it tortuous to reach a fully rational resolution. Similarly, emergencies are emergencies—people do not live in lifeboats—so such context should not form the basis for formulating official policy.

Nonetheless, torture advocates argue that the end justifies the means, which amounts to an often obvious but equally precarious utilitarian calculus: had we, say, captured a 20th 9/11 hijacker on 9/10, many would have doubtless approved his torture to elicit information. Advocates also argue that once we determine a suspect knows something, he thereby becomes a threat and forfeits his rights. Playing Colonel Nathan R. Jessup in A Few Good Men (1992), Jack Nicholson memorably crystallized the point: “[W]e live in a world that has walls, and those walls need to be guarded by men with guns. Who’s gonna do it? You?. . . . [D]eep down in places you don’t talk about at parties, you want me on that wall. You need me on that wall.” Thus, torture is a “necessary evil,”[2] made particularly imperative by a post-9/11—and now post-3/11 (Madrid)—world.

Of course, governments have always used the excuse of an emergency to broaden their powers. Referring to the French Revolution, Robespierre declared that one cannot “expect to make an omelet without breaking eggs.” The Soviets alleged that their purges were “temporary.” The Nazis said extraordinary times necessitated extraordinary measures. And, in the same way, 45 days after 9/11, in government’s characteristic distortion of words, Congress adopted the so-called Patriot Act (which in the heat of the moment many of the lawmakers voting for it did not even read, in whole or in part). Then, 13 months later, the Bush administration floated a second Patriot Act. Such is the pattern of and path toward despotism.

And yet the renowned civil libertarian Alan Dershowitz is perhaps torture’s most famous advocate. Dershowitz favors restricting the practice to “imminent” and “large-scale” circumstances. But, again, by such seemingly small steps we creep further toward the Rubicon: once we have surrendered such power, a precedent has been established, and the rest is only a matter of details and time.

Indeed, once we legitimate torture to save New York City, it becomes much easier to legitimate its use to save “just” Manhattan. And then “just” Times Square. And then “just” the World Trade Center. Before we let a judge issue what Dershowitz terms “torture warrants” on a case-by-case basis, we need to define our criteria precisely. Are they to save a million people? A thousand? A hundred? The President? Members of the Cabinet? Senators? Only in cases involving a “weapon of mass destruction”?

Similarly, if torture makes terrorists sing, as it often does in foreign countries, why shouldn’t we use it against potential terrorists? And then to break child pornography rings and to catch rapists? And then against drug dealers and prostitutes? After reading of endless abuses by government officials using forfeiture, I.R.S. audits, graft, payoffs, kickbacks and the like, it is naïve to think that once we collectively sanction torture, it would somehow be exempt from the temptress of absolute power. Do not say it cannot happen in America. It already has.

Footnotes

[1] This is the “ticking-bomb” hypothetical, which Michael Walzer described in “Political Action: The Problem of Dirty Hands,” Philosophy and Public Affairs 2, 1973, 166–67, and Alan Dershowitz popularized in Why Terrorism Works: Understanding the Threat, Responding to the Challenge (2002). But as Arthur Silber of the LightofReason blog notes, we should modify this Hollywood fantasy. For instance, has the suspect confessed to knowledge but refused to spill it, or does he profess not to know anything when we believe he does?

[2] The term “necessary evil” is contradictory. Explains psychotherapist Michael Hurd: “[T]here are no necessary evils. If something is truly evil, there’s no way it can be necessary, and if it is truly necessary to the well-being of a rational man’s life, it’s not evil, but good.”

Unpublished Notes

The conservative analyst Andrew Sullivan adds, “In practice, of course, the likelihood of such a scenario is extraordinarily remote. Uncovering a terrorist plot is hard enough; capturing a conspirator involved in that plot is even harder; and realizing in advance that the person knows the whereabouts of the bomb is nearly impossible.”

“What the hundreds of abuse and torture incidents have shown is that, once you permit torture for someone somewhere, it has a habit of spreading. Remember that torture was originally sanctioned in administration memos only for use against illegal combatants in rare cases. Within months of that decision, abuse and torture had become endemic throughout Iraq, a theater of war in which, even Bush officials agree, the Geneva Conventions apply. The extremely coercive interrogation tactics used at Guantánamo Bay ‘migrated’ to Abu Ghraib. In fact, General Geoffrey Miller was sent to Abu Ghraib specifically to replicate Guantánamo’s techniques. . . .

“[W]hat was originally supposed to be safe, sanctioned, and rare became endemic, disorganized, and brutal. The lesson is that it is impossible to quarantine torture in a hermetic box; it will inevitably contaminate the military as a whole. . . . And Abu Ghraib produced a tiny fraction of the number of abuse, torture, and murder cases that have been subsequently revealed. . . .

“What our practical endorsement of torture has done is to remove that clear boundary between the Islamists and the West and make the two equivalent in the Muslim mind. Saddam Hussein used Abu Ghraib to torture innocents; so did the Americans. Yes, what Saddam did was exponentially worse. But, in doing what we did, we blurred the critical, bright line between the Arab past and what we are proposing as the Arab future. We gave Al Qaeda an enormous propaganda coup, as we have done with Guantánamo and Bagram, the ‘Salt Pit’ torture chambers in Afghanistan, and the secret torture sites in Eastern Europe. In World War II, American soldiers were often tortured by the Japanese when captured. But FDR refused to reciprocate. Why? Because he knew that the goal of the war was not just Japan’s defeat but Japan’s transformation into a democracy. He knew that, if the beacon of democracy—the United States of America—had succumbed to the hallmark of totalitarianism, then the chance for democratization would be deeply compromised in the wake of victory. . . .

“What minuscule intelligence we might have plausibly gained from torturing and abusing detainees is vastly outweighed by the intelligence we have forfeited by alienating many otherwise sympathetic Iraqis and Afghans, by deepening the divide between the democracies, and by sullying the West’s reputation in the Middle East. Ask yourself: Why does Al Qaeda tell its detainees to claim torture regardless of what happens to them in U.S. custody? Because Al Qaeda knows that one of America’s greatest weapons in this war is its reputation as a repository of freedom and decency. Our policy of permissible torture has handed Al Qaeda this weapon—to use against us. It is not just a moral tragedy. It is a pragmatic disaster.[1]

Finally, we must decide whether our government should conceal its torture policies from us or inform us of them. Whether or not one opposes torture, I agree with Dershowitz that “[d]emocracy requires accountability and transparency”;[2] “painful truth,” as Michael Ignatieff, author of The Lesser Evil (2004), puts it, “is far better than lies and illusions.”[3] As such, the U.S. government should clarify which tactics it is using and which remain off limits, so that the American people can vote their views, via their representatives, into action.

“It is an axiom of governance that power, once acquired, is seldom freely relinquished.”[4]

“Another objection is that the torturers very swiftly become a law unto themselves, a ghoulish class with a private system. It takes no time at all for them to spread their poison and to implicate others in what they have done, if only by cover-up. And the next thing you know is that torture victims have to be secretly murdered so that the news doesn’t leak. One might also mention that what has been done is not forgiven, or forgotten, for generations.”[5]

“The chief ethical challenge of a war on terror is relatively simple—to discharge duties to those who have violated their duties to us. Even terrorists, unfortunately, have human rights. We have to respect these because we are fighting a war whose essential prize is preserving the identity of democratic society and preventing it from becoming what terrorists believe it to be. Terrorists seek to provoke us into stripping off the mask of law in order to reveal the black heart of coercion that they believe lurks behind our promises of freedom. We have to show ourselves and the populations whose loyalties we seek that the rule of law is not a mask or an illusion. It is our true nature.”[6]

Even if one exception is justified, further exceptions, increasingly unjustified, are likely to ensue.

[1] Andrew Sullivan, “The Abolition of Torture,” New Republic, December 19, 2005.

[2] Alan Dershowitz, “Is There a Torturous Road to Justice?,” Los Angeles Times, November 8, 2001.

[3] Michael Ignatieff, “Lesser Evils,” New York Times Magazine, May 2, 2004.

[4] Mark Danner, [Untitled], New Yorker, July 29, 1991.

[5] Christopher Hitchens, “Prison Mutiny,” Slate, May 4, 2004.

[6] Michael Ignatieff, “Lesser Evils,” New York Times Magazine, May 2, 2004.


April 20th, 2004

Defending the Disgusting

The judicial history of free speech in America is the story of how Supreme Court justices—whom the Constitution designates to check and balance the power of Congress and the president—are instead unwilling to act against them, lest a backlash against judicial activism ensue. As FDR put it while trying to pack the Court in 1937, the American people expect the unelected third branch of government to fall in line behind the elected other two.[1] Of course, the judiciary is a deliberately antidemocratic body; as the last bulwark against the tyranny of the majority, it tempers democracy’s excesses. In this way, judges should ensure that government’s powers remain wedded strictly to the protection of the Constitution, which, regarding free speech, means the protection of the First Amendment. The specific purpose of that Amendment, then, is the protection of minorities and dissent.

Alas, from its inception, the Supreme Court has viewed the First Amendment as subject, if not subordinate, to majority rule, or “democratic deliberation.” To be sure, the Court sometimes protects offensive speech, always lauds the value of free speech, and elevates the First Amendment above other constitutional guarantees. Yet the Court simultaneously undercuts free speech by acknowledging a higher value. That value goes by different names—“social utility” and “community standards” summarize them—and mandates the categorization of speech into “political” vs. “commercial” pigeonholes. This technicalized morass is today’s state of the First Amendment.

Now, just as we can best measure the strength of steel under stress, so the best tests of principle come over the most nauseating examples. As philosopher Ayn Rand observed, although it is uninspiring to “fight for the freedom of the purveyors of pornography or their customers . . . in the transition to statism, every infringement of human rights has begun with the suppression of a given right’s least attractive practitioners . . . [T]he disgusting nature of the offenders makes . . . a good test of one’s loyalty to a principle.”[2] The test here is Ashcroft v. Free Speech Coalition (2002), in which the latter challenged the constitutionality of the 1996 Child Pornography Prevention Act (C.P.P.A.). The C.P.P.A. criminalized sexually explicit images that depict minors but were produced, typically via computer imaging, without using any actual children.

C.P.P.A. supporters argue, in the Court’s summary, that “harm flows from the content of the images, not from the means of their production” (3). In other words, virtual child pornography threatens children in “less direct ways” than real-life child pornography (3). For instance, by increasing the chance that pedophiles become molesters, virtual images “whet the appetites of child molesters” (4).

But any way one rationalizes the C.P.P.A., such images fail to intrinsically harm any flesh-and-blood minor; the Ferber test requires an “intrinsic” connection. Viewing, after all, does not necessitate acting; one can be a pedophile but not a child molester. Similarly, whereas viewing virtual images coerces nobody and involves only the viewer and the producer, viewing real images, as per New York v. Ferber (1982), constitutes criminal coercion of children. The C.P.P.A. collapses these distinctions—distinctions that Ferber relied on as a major reason for its verdict—but the “causal link,” as the Court notes, “is contingent and indirect” (12); the government needs a “significantly stronger, more direct connection” (15-16).

Furthermore, the government’s arguments turn, not on any actual coercion, but on potential coercion; this is why the C.P.P.A. resorts to such hesitating, noncommittal words as images that appear to show minors in sexually explicit conduct (3), or that convey that impression (4). The cardinal principle of liberty, however, must always take precedence: so long as one refrains from directly initiating force against others, one must be free to pursue one’s own version of happiness—including, however despicable, taking pleasure from virtual pedophilia.

C.P.P.A. supporters argue next that child pornography “as a whole . . . lacks serious literary, artistic, political, or scientific value” (8). Any redeeming values are de minimis, since kiddie porn only perpetuates prurience, pedophilia, and child molestation. Yet however indecent one’s values may be—with the exception of child molestation, which necessitates coercion—freedom does not mean upholding a social consensus, but the autonomy of each individual to choose his own values. No, one man’s treasure is not another’s trash, but using the government to ban certain trash necessarily foists the values of some, usually the majority, on others, usually a minority. American history is rife with examples. The Comstock Act (1873) criminalized as pornography any information concerning birth control. The National Endowment for the Arts continually funds “art” that many taxpayers would consider unworthy of that name. In Vietnam, and in Iraq today, we have tried to impose Western values on many who simply do not want them (at least without their own adaptations).

Indeed, it is sheer folly to make government the arbiter of whether books, magazines, newspapers, radio, television, theater and film have value, let alone “literary, artistic, political, or scientific” value—or, most ominous of all, “serious value.”[3] Judges call such speech lacking “unprotected,” but this is the zenith of censorship. For when government takes it upon itself to decree which of its citizens’ values have value—to dictate which words deserve freedom and which make you a criminal—it exceeds its job of impartiality and assumes arbitrary power. As such, the First Amendment no longer derives from the Constitution but from popular predilections.

And yet, in his dissent, Chief Justice Rehnquist observes that although the “C.P.P.A. has been on the books, and has been enforced, since 1996,” movies produced thereafter, like American Beauty (1999) and Traffic (2000), which the defendants argue the C.P.P.A. would have banned, nonetheless proceeded unabated—and won Academy Awards (Rehnquist, dis. op., 5). Rehnquist thus argues that the C.P.P.A. “need not be” construed to ban such movies (Rehnquist, dis. op., 7). Of course, this is Rehnquist’s construal; one can easily envisage how Attorney General John Ashcroft, or some like-minded zealous puritan, would think otherwise. After all, America is not Alice in Wonderland, and words do not mean, as Humpty Dumpty said, “what I choose [them] to mean.” Rather—if we are to have a government of laws, not a government of men—words must mean what they actually say.

Finally, since laws are rarely repealed, ideological organizations have become notorious for mining case law digests to unearth some obscure precedent, whose language they construe, years if not decades later, to push for a ban on something else—and then something else.[4] Therefore, the alleged limits on censorship, the legalistic conditions of where and when, are insignificant. While the high court today may ban “only” nonvirtual child pornography, using the same nonabsolutist precedents, a future Court may well ban gay porn, and still another Court may ban pornography altogether. Since we have already surrendered such power, the principle has been established, and, as Ayn Rand observed, the “rest is only a matter of details—and of time.”[5] Censorship is the canary in the political coal mine, and the anti-minority, collectivist rationales, however piecemeal and whatever the occasional pullbacks, bring us ever closer to a Fahrenheit 451 society. Do not say “it” cannot happen in America. Having already criminalized defamation and “fighting words”—and with a legal history including Dred Scott, Prohibition, Bowers v. Hardwick, the Patriot Act, and now the Federal Marriage Amendment—it already has.

Footnotes

[1] The “American people . . . expect the third horse to pull in unison with the other two.” Franklin D. Roosevelt, Fireside Chat 46, March 9, 1937.

[2] Ayn Rand, “Censorship: Local and Express,” Ayn Rand Letter, August 13, 1973.

[3] Henry Mark Holzer, Sweet Land of Liberty? The Supreme Court and Individual Rights.

[4] For instance, the Federal Vocational Rehabilitation Act of 1973 prohibits discrimination against otherwise qualified handicapped people. Although the act did not address the specific issue of HIV and AIDS discrimination, subsequent court cases have held that the act protects people with AIDS as handicapped. See the movie Philadelphia (1993).

[5] Ayn Rand, “Censorship: Local and Express,” Ayn Rand Letter, August 13, 1973.

Unpublished Notes

Thus, banning such personless, harmless speech criminalizes mere thoughts and constitutes preemptive law.

Small but significant

the deep-seated, indelible destruction of pure and innocent children.

Virtual child porn is “neither obscene under Miller nor child pornography under Ferber” (2). Hence?

Possession of nonvirtual child pornography is a federal crime, and soliciting or engaging in sexual relations with minors constitutes statutory rape.

Higher Interest

In Chaplinsky v. New Hampshire (1942), the Court affirmed: “[A]ny benefit that may be derived from [such utterances] is clearly outweighed by the social interest in order and morality.”

Meaning of Free Speech

As Voltaire said (actually, it was Evelyn Beatrice Hall, under the pseudonym S[tephen] G. Tallentyre, in The Friends of Voltaire [1906]): “I disapprove of what you say, but I will defend to the death your right to say it.”

Marketplace of ideas theory of free speech: ideational diversity is an essential ingredient, the stew out of which bubble the best ideas, if not eventually truth itself.

Counterargument

Additionally, repealing the C.P.P.A. would embolden molesters, who, if indicted for child pornography, could evade liability by claiming that their images are computer-generated (Conner, 5).

Absolutism

In Dennis v. United States (1951), the Supreme Court declared: “[C]ertain kinds of speech are so undesirable as to warrant criminal sanction. Nothing is more certain in modern society than the principle that there are no absolutes.” Yet the First Amendment reads that “Congress shall make no law . . . abridging the freedom of speech.” Common sense tells you that these words constitute an absolute. The Amendment does not say no law except in wartime, or except when the speech gainsays community standards, or except when it lacks redeeming value, or social utility, or fails to serve a public interest.

Granted, the framers never intended the First Amendment to allow, nor has any Supreme Court allowed, absolute free speech. Yet the bottom line is that if speech is not absolute, it is arbitrary.

And don’t tell me that the slope may be slippery but that we can be reasonable about where it slides. The ever-growing number of restrictions shows otherwise.

If from the very first days of the republic restraints on speech were commonplace, if no less a patriot than Thomas Jefferson believed that states could censor speech and that a selective prosecution now and then of an unpopular speaker was just, if during World War One antidraft activists could be incarcerated for quoting the Ninth and Thirteenth Amendments, if American communists could be incarcerated not for throwing bombs but for merely agreeing to organize and advocate—if there are no absolutes—then it should not surprise us that truly free speech has never existed even in America, the country with the fewest restrictions thereon.

Indeed, if the First Amendment says that “Congress shall make no law . . . abridging the freedom of speech,” then Congress should make no such law, period.

The genie, once out of the bottle, can never be coaxed or stuffed back inside.

As Justice Frankfurter explained in Dennis v. United States (1951): “The language of the [Constitution] is to be read not as barren words found in a dictionary but as symbols of historic experience illumined by the presuppositions of those who employed them. Not what words did Madison and Hamilton use, but what was it in their minds which they conveyed?”

“’[P]rotecting the children,’” as columnist Robert Tracinski puts it, “is no excuse for muzzling the adults.”

Theoretically, in a democratic republic, politicians are supposed to represent the views of their constituents. Hence, if people disagree with how our representatives are voting, we can vote them out of office. But this principle assumes limited government in the classical liberal sense, not the leviathan we have today.


April 19th, 2004

Covering Dictatorships Means Covering the Truth

A version of this blog post was awarded the Hamilton College 2005 Cobb Essay Prize, appeared in the Utica Observer-Dispatch (April 19, 2004), and was noted on the Hamilton College Web site (April 21, 2004).

Most of us assume that what we read, watch or hear from well-established news organizations is trustworthy. But trustworthiness depends on the source—not only the organization, but also the origin of the information. For without freedom one cannot report the news freely. It is therefore fraudulent for a news agency to operate in a dictatorship without disclosing that fact.

What constitutes a dictatorship? First, if independent media exist, the state aggressively censors them. After all, news doesn’t mean much if citizens are privy only to propaganda. Second, if candidates for political office exist, the state shackles their activities. After all, news doesn’t mean much if the opposition is nonexistent. Third, the state cows its citizens. After all, news doesn’t mean much if people are afraid to speak.

As Iraqis and U.S. marines toppled the massive statue of Saddam Hussein in Baghdad two years ago, Eason Jordan, chief news executive of the Cable News Network (CNN), penned an op-ed for the New York Times. The headline was its own indictment: “The News We Kept to Ourselves.” For the past 12 years, Jordan confessed, there were “awful things that could not be reported because doing so would have jeopardized the lives of Iraqis, particularly those on our Baghdad staff.” This much is inarguable: the Hussein regime expertly terrorized, if not executed, any Iraqi courageous enough to slip a journalist an unapproved fact. Jordan relates one particularly horrifying story: “A 31-year-old Kuwaiti woman, Asrar Qabandi, was captured by Iraqi secret police . . . for ‘crimes,’ one of which included speaking with CNN on the phone. They beat her daily for two months, forcing her father to watch. In January 1991, on the eve of the [first] American-led offensive, they smashed her skull and tore her body apart limb by limb. A plastic bag containing her body parts was left on the doorstep of her family’s home.”[1]

As for the journalists, had one been “lucky” enough to gain a visa to Iraq, one then received a minder. An English-speaking government shadow, the minder severely circumscribed a journalist’s travels to a regime-arranged itinerary. Franklin Foer of the New Republic describes one typical account: when a correspondent unplugged the television in his hotel room, a man knocked on his door a few minutes later asking to repair the “set.” Another correspondent described an anti–American demonstration, held in April 2002 in Baghdad, to celebrate Saddam’s 65th birthday. When her colleagues turned on their cameras, officials dictated certain shots and, with bullhorns, instructed the crowd to increase the volume of their chants. Had the regime deemed one’s reports to be too critical, like those of recently retired New York Times reporter Barbara Crossette or CNN anchor Wolf Blitzer, it simply revoked one’s visa or shut down one’s bureau, or both.[2] Of course, this all depends on the definition of “critical”; referring to “Saddam,” and not “President Saddam Hussein,” got you banned for “disrespect.” At least until an Eason Jordan could toady his way back in.

And yet CNN advertises itself as the “most trusted name in news.” Truth, however, as the American judicial oath affirms, consists of the whole truth and nothing but the truth; what one omits is as important as what one includes. Thus, to have reported from Saddam’s Iraq as if Tikrit were Tampa was to abdicate a journalist’s cardinal responsibility. Indeed, if journalists in Iraq could not pursue, let alone publish, the truth, they should not have been concocting the grotesque lie that they could, and were. Any Baghdad bureau under Saddam is a Journalism 101 example of double-dealing. And any news agency worthy of the title wouldn’t have had a single person inside Iraq—at least officially. Instead, journalists could have scoured Kurdistan or Kuwait, even London, where many recently arrived Iraqis can talk without fear of death. According to former C.I.A. officer Robert Baer, who was assigned to Iraq during the Gulf War, Amman, the capital of Jordan, is a virtual pub for Iraqi expatriates.[3]

Why, then, were the media in Iraq? As columnist Mark Steyn observes, “What mattered to CNN was not the two-minute report of rewritten Saddamite press releases but the sign off: ‘Jane Arraf, CNN, Baghdad.’”[4] Today’s media value access above everything else and at any cost—access to the world’s most brutal sovereign of the last 30 years and his presidential palaces built with blood money, and at the costs of daily beatings, skull-smashings and limb-severings. Dictators, of course, understand this dark hunger, and for allowing one to stay in hell, they demand one’s soul, or unconditional obsequiousness. Thus did CNN become a puppet for disinformation, broadcasting the Baath Party line to the world without so much as a hint that “Jane Arraf, CNN, Baghdad” was not the same as “Jane Arraf, CNN, Washington.” In this way, far from providing anything newsworthy, let alone protecting Iraqis, the media’s presence there only lent legitimacy and credibility to Saddam’s dictatorship.

Alas, dictatorship neither begins nor ends with Iraq. According to Freedom House, America’s oldest human rights organization, comparable countries today include Burma, China, Cuba, Iran, Libya, North Korea, Pakistan, Saudi Arabia, Sudan, Syria, Uzbekistan and Vietnam.[5] How should we read articles with these datelines? In judging the veracity of news originating from within a dictatorship, the proper principle is caveat legens—reader beware. As Hamilton College history professor Alfred Kelly explains in a guidebook for his students, train yourself to think like a historian. Ask questions such as: Under what circumstances did the writer report? How might those circumstances, like fear of censorship or the desire to curry favor or evade blame, have influenced the content, style or tone? What stake does the writer have in the matters reported? Are his sources anonymous? What does the text omit that you might have expected it to include?[6] You need not be a conspiracy theorist to recognize the value of skepticism.

Footnotes

[1] Eason Jordan, “The News We Kept to Ourselves,” New York Times, April 11, 2003.

[2] Franklin Foer, “How Saddam Manipulates the U.S. Media: Air War,” New Republic, October 2002.

[3] Franklin Foer, “How Saddam Manipulates the U.S. Media: Air War,” New Republic, October 2002.

[4] Mark Steyn, “All the News That’s Fit to Bury,” National Post (Canada), April 17, 2003.

[5] As quoted in Joseph Loconte, “Morality for Sale,” New York Times, April 1, 2004.

[6] Alfred Kelly, Writing a Good History Paper, Hamilton College Department of History, 2003.

Bibliography

Chinni, Dante, “About CNN: Hold Your Fire,” Christian Science Monitor, April 17, 2003.
Collins, Peter, “Corruption at CNN,” Washington Times, April 15, 2003.
—, “Distortion by Omission,” Washington Times, April 16, 2003.
Da Cunha, Mark, “Saddam Hussein’s Real Ministers of Disinformation Come Out of the Closet,” Capitalism Magazine, April 14, 2003.
Fettmann, Eric, “Craven News Network,” New York Post, April 12, 2003.
Foer, Franklin, “CNN’s Access of Evil,” Wall Street Journal, April 14, 2003.
—, “How Saddam Manipulates the U.S. Media: Air War,” New Republic, October 2002.
Glassman, James K., “Sins of Omission,” TechCentralStation.com, April 11, 2003.
Goodman, Ellen, “War without the ‘Hell,’” Boston Globe, April 17, 2003.
Jacoby, Jeff, “Trading Truth for Access?,” Jewish World Review, April 21, 2003.
Kelly, Alfred, Writing a Good History Paper, Hamilton College Department of History, 2003.
Jordan, Eason, “The News We Kept to Ourselves,” New York Times, April 11, 2003.
Loconte, Joseph, “Morality for Sale,” New York Times, April 1, 2004.
de Moraes, Lisa, “CNN Executive Defends Silence on Known Iraqi Atrocities,” Washington Post, April 15, 2003.
Smith, Rick, “CNN Should Scale Back Chumminess with Cuba,” Capitalism Magazine, May 8, 2003.
Steyn, Mark, “All the News That’s Fit to Bury,” National Post (Canada), April 17, 2003.
Tracinski, Robert W., “Venezuela’s Countdown to Tyranny,” Intellectual Activist, April 2003.
Walsh, Michael, “Here Comes Mr. Jordan,” DuckSeason.org, April 11, 2003.

Addendum

Newsweek’s Christopher Dickey recently observed that the “media marketplace . . . long ago concluded [that] having access to power is more important than speaking truth to it.”


February 26th, 2004

Buckley v. Valeo

Does campaign finance reform restrict free speech?

In the aftermath of the Watergate scandal, Congress amended existing campaign finance laws to limit the amount that could be contributed to, or spent by, political campaigns. The Supreme Court considered these regulations in Buckley v. Valeo (1976) and made a momentous hash of the legislation. The verdict therefore both protects and violates free speech rights, though its arguments for the former (expenditures) apply equally to the latter (contributions).

Those who want to limit contributions argue that, in contrast to expenditures, contributions are less connected to my speech; only indirectly does my check, after proceeding through the local campaign office to the national office to an advertising firm, really express my views, or my own voice. Yet, as Chief Justice Burger observes in his dissent, the distinction between contributions and expenditures “simply will not wash.” It is more semantic than substantive. Limits are limits, regardless of their consequences or one’s intentions.

Second, political contributions of any size are still a form of speech, as the Court implicitly acknowledges in allowing up to $1,000 (now $2,000) of it. “Your contribution to a candidate,” notes the radio host Andrew Lewis, “is de facto the publication of your ideas.” Thus, however a candidate uses your money, however it reaches him, however “symbolic” it may be in constitutional parlance, it’s still your money—which means it’s still your speech. If you give money to a candidate, you bolster his candidacy; if you withhold your financial sanction or contribute to another candidate, you implicitly sap the former candidacy. This is how people communicate politically in a representative republic.

Thus, as writer Michael Hurd argues, the “extent to which we ban money from campaigns is the extent to which we ban our . . . ability to express ourselves”;[3] the only proper limits are each individual’s willingness to spend the fruits of his labor. A free society cannot survive as such without the unfettered expression of ideas.

Furthermore, as the Court itself argues regarding expenditure limits, a cap “naively underestimate[s] the ingenuity and resourcefulness” of those who seek vicarious political influence. According to Todd Gaziano, Director of the Center for Legal and Judicial Studies at the Heritage Foundation, caps are “like trying to dam a stream with a pile of sticks. Campaign spending eventually will flow through the dam, over the dam, or find another path.” Indeed, as Bradley Smith shows in Unfree Speech: The Folly of Campaign Finance Reform, caps affect the channels through which money reaches political campaigns, rather than the total amount of money.

Still, the Court argues that because its cap leaves people “free to engage in independent political expression,” pursuing other avenues such as resource-rich advertising, caps do not have “any dramatic adverse effect,” like undermining “the potential for robust and effective” campaigns “to any material degree.” But it doesn’t matter if caps preserve some speech. As Barry Goldwater declared, “[E]xtremism in the defense of liberty is no vice! And . . . moderation in the pursuit of justice is no virtue!” Accordingly—especially as the last bulwark against tyranny—free speech is too sacred to be restrained or subjected to a cost-benefit analysis; it needs no checks or balances, for it is its own.

Finally, the appellees argue that contributions exceeding $1,000 tend toward bribery. Since running for office requires significant donations, politicians increasingly offer pork barrels to those who underwrite their campaigns. Both the “actuality and appearance” of this influence peddling thus “undermine[s]” the “integrity” of and our “confidence” in the government. After all, how can I, a college student with a $25 check reserved for my favorite candidate, compete with Fortune 500 companies that contribute (however indirectly) hundreds of thousands of dollars—to multiple candidates?

Now, concerns that electoral contributions amount to quid pro quos are legitimate. The need to curtail the pressure-group warfare that engulfs Washington is urgent. Yet the criteria the Court employs take as their yardstick not the First Amendment—which should guide all discussion of free speech issues—but its consequences. Consequences are important, but we cannot eliminate a problem by manipulating its effects.

Rather, we must consider the root cause. The Court believes this cause is unlimited contributions, in which “corruption inhere[s].” But, in fact, corruption inheres in unlimited government, toward which ours increasingly tends. Thus, to take money out of politics, we should take politics out of money. As journalist Frank Pellegrini explains: “The thicket of bendable laws [and] targeted tax breaks . . . are what keeps the campaign checks in the mail and the lobbyists in the corridors of power. When one tweak in one bit of fine print can save a corporation millions, how can we expect them to stop trying to secure that advantage.” Concludes writer Edwin Locke: only when politicians “have no special favors to sell will lobbyists stop trying to buy their votes.”

Unpublished Notes

by reducing the scope of government, we reduce the power of politicians

Statism gives the state the power to dispense favors, and so compels entrepreneurs to secure, as Time magazine put it in 1992, the “backing of big, sophisticated companies that know which bureaucratic buttons to press and which deep [governmental] pockets to pick.”

Third, the appellees argue that a cap reduces the costs of campaigning and, hence, the chance for corruption. Similarly, the Court says bribes concern “only the most blatant and specific” acts, and that disclosure laws are only a “concomitant” (612). These scanty laws thus necessitate caps that prohibit all forms of corruption. But bribery, in whatever form, has been illegal since time immemorial. Moreover, in trying to stanch all corruption, the cap stanches the free speech rights of citizens who seek no influence, but simply to be left alone.

In this way, a cap also “necessarily reduces the quantity” and “diversity” of political speech and the “size of the audience reached” (610).

Compare caps to Prohibition

As Bill Safire once put it, “Money talks, but money is not speech.”

You can’t change the consequences without changing the cause.

What first strikes me are the criteria the Court uses. For instance, in its per curiam decision, the Court refers to state “interests,” which if “sufficiently important” (612), may override the rights of the people. Yet for a country whose Declaration of Independence proclaims man’s rights to be inalienable, invoking these illusory “interests” is utterly contradictory; dictatorships, not free countries, have “interests,” which they typically use to rationalize their despotic rule.

Rather, the state cannot ban anything except acts that violate individual rights.

Expenditures

Appellees argue that expenditure limits serve a “public interest” by equalizing the financial resources of candidates.

Court said money spent must necessarily vary according to the “size and intensity” of support for the candidates.

Ceilings also handicap minor candidates lacking name recognition

The definition of an “expenditure” is unconstitutionally vague.

Publicly Financed Elections

Additionally, presidential campaigns would, for the first time in American history, be eligible for public funds.

saddling the country with our present dysfunctional system of election finance.

Upheld a public-financing scheme for presidential elections that patently discriminates in favor of the established Republican and Democratic parties—by paying them in advance of the elections—and against third parties, who must gain at least five percent of the national vote before being compensated for any of their expenditures in the course of the campaign.

One consequence is that millionaires, constitutionally protected in unlimited spending on their own campaigns, are given significant advantages over less wealthy opponents. And, of course, the existing two parties were given a major hedge against possible third-party competition—unless headed by the Texas billionaire Ross Perot.

Buckley increased the prominence of many unusually wealthy candidates who swamped less affluent opponents, not to mention the disgust expressed by nonwealthy candidates over the increasing amount of time they had to spend raising money in self-defense.