Category Archives: American History/Constitutional Law

The Trouble With Transgender: The Liberal Argument

by Kenneth W. Krause.

Kenneth W. Krause was a contributing editor and “Science Watch” columnist for the Skeptical Inquirer, a contributing editor and books columnist for the Humanist, and wrote frequently for Skeptic as well.

Every human should enjoy the right to dress and act as he or she prefers. For example, if a human male — that is, one born to produce male gametes — wants to wear stereotypically female dress, or to behave in stereotypically female ways, I have no objection, and I wouldn’t expect anyone to care whether I do or not. Similarly, every human should enjoy the right to choose his or her name, whether typically male or female.

But of course the acts of dressing, behaving, and naming one’s self in stereotypically female ways, for example, do not necessarily imply that one is a human female.

It does, however, appear true that all concepts of “gender” and “transgender” are necessarily based in behavioral sex stereotypes — ones that many women, for example, have fought long and hard to disown over the decades. After all, how could one conceive of what it’s like to “behave as a woman,” if not through stereotypes? And, although I accept that most men and women have in fact adopted and will continue to adopt sex stereotypes, I can’t ignore their generally degrading natures. Sex stereotypes, I hope all will agree, have helped prevent girls and women from fulfilling the promises of their intrinsic talents throughout human history.

Anthropologists have argued that many of the behaviors I might refer to as sex stereotypes are, in fact, innate traits, exhibited by human females, for example, across both time and cultures. Perhaps so, depending on the trait. But Americans, in particular, have decided — or at least I’ve long thought so — that we have intellectual and moral duties to withhold such judgements with regard to any individual girl or woman because we know that, one, many females do not adhere to female stereotypes and, two, we want to encourage all persons to be precisely who they are as individuals. I’ll suggest as well that we should refrain from even encouraging sex stereotypical behavior because, frankly, stereotypes tend to be less than intelligent, productive, or otherwise flattering.

Neither are “gender” and “transgender” categories necessary. The traditional means of categorizing human males and females — according to gamete types — provide us with at least two critical advantages. First, they’re objective. All rational people employing the traditional criterion can readily agree on who is male or female. Yes, intersex persons exist, but they’re relatively rare and, of course, biologically abnormal, much as persons with six fingers are abnormal (but otherwise equally valuable as persons). Second, and far more importantly, traditional means of identifying human males and females, unlike their “gender” and “transgender” counterparts, make no implicit assumptions or judgments about how a human female, for example, is “supposed to behave.” A female produces ova and a male produces sperm — that’s it, nothing more in terms of cognition and behavior is presumed. As such, the traditional criterion allows everyone to dress, believe, and act as they please.

In any case, my objection to, and the true trouble with, “transgender” is two-fold. First, many “transgender” activists have apparently decided that it’s completely acceptable to force others to choose their sex stereotypical scheme of categorizing human males and females over the traditional, objective, and non-judgmental scheme of doing so. That’s not merely an intellectual, but also an ethical, mistake. For its benefits to all, I choose the latter scheme, and that’s why I type the word “transgender” here in quotation marks — because I don’t support sex stereotypes. Second, through government, law, and industry, some “transgender” activists have also attempted — quite predictably and understandably, given their beliefs — to institutionalize (or re-institutionalize, I suppose) those sex stereotypes inherent in all notions of “gender” and “transgender.” Such attempts are especially unfortunate for women, in particular, who have struggled against the discrimination facilitated by such stereotypes for far too long, and who have progressed entirely too far to be dragged down yet again to their level.

Those who consider themselves “transgender” deserve objectively equal rights, of course. But “trans” activists do not possess the right to impose their beliefs on others, or, much worse, to further institutionalize, for their own supposed benefit or for the alleged benefit of others, the stereotypical building blocks of sexual prejudice and discrimination based on sex. We don’t have to move backward to move forward, and we shouldn’t want to.

But perhaps the most tragic result of popular support for “transgenderism” is not merely that we’ve reinforced and begun to re-institutionalize the sex stereotypes inherent in all notions of “gender,” but also that we’ve effectively forced sex-stereotype non-conforming individuals into “trans” categories, even to the point of effectively forcing them to alter their appearances, physiologies, and even anatomies simply to satisfy our apparent and thoughtless need for sex-stereotypical categories.

“Transgenderism,” in other words, is more accurately described as just another (but opposing) stereotypical trap for non-conformers, rather than as a means or manifestation of liberation, as it is so often presumed to be. And the boring truth is that the concept of “gender” is useful, if at all, only to describe sex stereotypical behaviors “on average” among a given population or populations and was never responsibly intended to apply to individuals, none of whom adhere to every possible sex stereotype.

That much said, I anticipate certain broad criticisms. Some, for instance, will suggest that I’ve confused biological sex with “gender identity,” “gender roles,” and perhaps other now-popular “gender” nomenclature. But when a transgender activist suggests that a person born male, for example, should be recognized as a female because he identifies as such, that activist forces the issue for those of us who respect the traditional, objective, and non-judgmental means of identifying what it means to be a human male or female. Again, everyone is free to adopt whatever sex stereotypes they prefer, but they are not free to define for all others what male and female mean when their definition is subjective, far less parsimonious, based in sex stereotypes, and, as such, judgmental and prone to support discrimination based on sex.

Others might object that I’ve mistakenly conceived of “gender” as binary, perhaps correctly observing as well that various cultures have defined “gender” in significantly different ways throughout human history. Perhaps surprisingly to these dissenters, I see this objection as somewhat sluggish progress, because, carried to its logical conclusion, it appears to support individuality, rather than group identity. In other words, when someone suggests that “gender” is not binary, or that it’s fluid, or a tapestry, for example, what they’re really referring to is individuality and individual expression — which I fully support. But, in truth, ideas of “non-binary gender” have no relation to the sexes and, as such, are really not about alleged “gender” at all.

Science (Indeed, the World) Needs Fewer, Not More, Icons.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well.

The Sims statue and its protestors.

To the extent we are rational, we share the same identity.—Rebecca Goldstein.

September was an awkward month for Nature, perhaps the most influential and well-respected science publication on the planet.  In August, a group peacefully protested, and vandals subsequently defaced, a Central Park statue of J. Marion Sims, a 19th-century surgeon and founder of the New York Women’s Hospital often referred to as the “father of modern gynecology.”  Sims’ likeness was left with fiendish red eyes and the word “RACIST” scrawled across its back.

The quarrel stemmed from the mostly undisputed facts that, although Sims helped develop life-saving surgical techniques that allowed women to recover from particularly traumatic births, he also experimented on female slaves without providing anesthesia, and after seeking consent only from their owners. Unsurprisingly, commentators contest whether Sims’ methods were consistent with the customs and scruples of his time (Washington 2017).

Nature’s first inclination was to publish an editorial originally titled, “Removing the Statues of Historical Figures Risks Whitewashing History,” arguing that we should leave such icons in place to remind passers-by of the important historical lessons they might provide (The Editors 2017). The piece also recommended the installation of additional iconography to “describe the unethical behavior and pay respect to the victims of the experimentation.”

Given then-recent events in the emotionally explosive and divisive world of American popular culture especially, vigorous dissent was inevitable. A flurry of indignant letters descended on Nature’s editors.  Several writers suggested that, at least in America, the primary if not sole purpose of public statuary is to honor its subjects, not to inform curious minds of their historical significance (Comment 2017).  One contributor noted that the history of Nazi Germany has been well-documented in the very conspicuous absence of Nazi iconography.  Another reasoned that because written documentation always precedes statuary, removal of monuments would have “no impact on our understanding of the historical failings of those individuals.”

Other letters offered less restrained and, frankly, less disciplined commentary. One author submitted that the editorial “perpetuate[d] racist white supremacy.”  Two more branded it simply as “white privilege at its height” and as a “racist screed.”  Another found the article to support “unethical science” and to inform Nature’s minority readers that they “remain unwelcome in science because of their race.”

Vandals defaced the Sims likeness with red paint.

But more importantly for my purposes here, many writers contributed thoughts on the Sims monument itself that reveal quite plainly our human tendencies to interpret the inherent ambiguity of statues—indeed iconography and other symbolic expressions more generally—consistent with our fears, personal agendas, or ideological mindsets. One author, for example, confided that the Sims statue bid her to “Go away, woman.  You have no authority here,” and to “Go away, woman of African descent.  You cannot have the intellect to contribute to the science of your own healthcare” (Green 2017).  Another saw Sims’ likeness as a “signal” that the “accomplishments of a white man are more important than his methods or the countless people he victimized,” and that “the unwilling subjects of that research … are unimportant and should be washed away.” (Gould 2017; Comment 2017).  Yes, all of that from a motionless, voiceless sculpture.

In the end, Nature’s guests called consistently for the icon’s swift removal.  And given its and any other statue’s essential ambiguity, I agree.  Take it away, melt it down, and donate its metal to a more fruitful purpose.  But, regrettably, many writers also petitioned for additional iconography—this time to honor accomplished females in medicine and the victims of sexist and racist medical practices.  In other words, they would display more monuments of more humans, no doubt all with potentially hideous skeletons lurking in their so far sealed closets, likely to be scrutinized and challenged by any conceivable number of equally fault- and agenda-ridden human interpreters to come.

In the rush to colonize others’ minds, or perhaps to cast painful blows against cross-cultural enemies, has anyone actually taken the time and effort to think this through? Both duly and thoroughly reproved, Nature’s editors quickly apologized and revised their article, including its title, to comply with reader objections (Campbell 2017; The Editors 2017).  But glaring similarities between the Sims controversy and more widely publicized events involving statues of Confederate generals, for example (at least one of which resulted in meaningless violence), have attracted the attention of the general media as well.

Police protect Charlottesville’s statue of General Lee.

Writing for The Atlantic, Ross Anderson aptly observed that “the writing of history and building of monuments are distinct acts, motivated by distinct values” (Anderson 2017).  No serious person ever suggested, he continued, that statuary “purport[s] to be an accurate depiction of its history.”  So far, so good.  At that critical point, Anderson appeared well on his way to advancing the sensible argument that inherently simplistic and ambiguous iconography can only divide our society, and perhaps even inspire (more) pointless violence.

Unfortunately, that was also the point where the author stumbled and then strayed onto a perhaps well-worn, but nevertheless unsustainable, trail. The legitimate purpose of a society’s statuary, he argued, is “an elevation of particular individuals as representative of its highest ideals,” a collective judgment as to “who should loom over us on pedestals, enshrined in metal or stone ….”  But, honestly, no credible history has ever instructed that any individual, no matter how accomplished, whether male or female, black or white, can ever represent our “highest ideals.”  And is there anything about recent American history to suggest we could ever agree on what constitutes those ideals?  And, come to think of it, how do people tend to react when others choose which monuments and symbols will “loom over” them?  Indeed, wasn’t that the problem in Charlottesville, Virginia?

White supremacists march on Charlottesville.

According to Anderson, the activists demanding removal of the Sims statue and its replacement with iconography of presumptively more deserving subjects ask only “that we absorb the hard work of contemporary historians … and use that understanding to inform our choices about who we honor.” But, as any experienced historian knows, historical facts can be, and often are, responsibly parsed and interpreted in many different ways.  And why should common citizens blindly accept one credible historian’s perspective over that of any other?  Regardless, shouldn’t we encourage the public to consult the actual history, rather than convenient, but severely underdeveloped and necessarily misleading shortcuts?

Author Dave Benner argued, instead, that we should preserve our monuments (Benner 2017). Pointing to the New Orleans statue of Franklin Roosevelt, which, to this point, remains free of public derision and vandalism, Benner reminded us of Executive Order 9066, by which FDR displaced 110,000 American citizens of Japanese ancestry into internment camps, without due process, in “one of the saddest and most tyrannical forms of executive overreach in American History.”  Should the FDR monument (indeed, the dime) be purged according to the same reasoning offered by Nature’s revised editorial and those who oppose the Sims statue?  By such a standard, would iconography depicting any of the American founders survive?

Perhaps not. But to what supposedly disastrous end?  By Benner’s lights, the removal of cultural iconography would “simply make it harder for individuals to learn from the past.”  But, again, as the many dissenters to Nature’s original editorial observed, the purpose of statuary is not to inform.  And let’s be completely candid here: nor is it to “honor” the dead and insensible subjects of such iconography who no longer hold a stake in that or any other outcome.  Rather, the unspoken object is no less than to decree and dispense value judgments for the masses.

And some would no doubt argue the propriety of that object in the context of politics and government. But can and should science do better?  “As the statues and portraits of Sims make clear,” offers Harriet Washington, award-winning author of Medical Apartheid, “art can create beautiful lies” (Washington 2017).  “To find the truth,” she advises, “we must be willing to dig deeper and be willing to confront ugly facts.  No scientist, no thinking individual, should be content to accept pretty propaganda.”

Science’s battle is not with any particular ideological foe. It stands against all ideologies equally.  It has no interest in turning minds to any individual’s, or any coalition’s social cause because it has no agenda beyond the entire objective truth.  Science is incapable of pursuing ambiguity or any shortcut, especially where the potential for clarity, completion, and credibility persists.  And science certainly doesn’t need more icons; it needs fewer, or none.

A final thought on symbolic expression:

Yes, American history is saturated with political symbolism, from the flags of the colonial rebellion to the Tinker armbands and beyond.  As I wrote this column, however, the discussion of alleged “race” in America grew increasingly inane—dominated, in fact, by Donald Trump, our Clown in Chief, on one side, and mostly mute and under-studied NFL football players on the other.  The social, popular, and activist media, along with their rapacious followers, of course, seemed thoroughly enchanted by this absurd spectacle.

I take no position on this “debate,” if it can be so characterized. Indeed, comprehension of the contestants’ grievances is precluded by their irresponsible methods.  The President’s very involvement is inexplicable.  But, for me, it’s the players’ exclusively symbolic expressions that cause greater concern.  Again, not because I disagree with whatever they might be trying to say.  Rather, because their gestures are so ambiguous and amenable to any number of conceivable interpretations that, in the end, they say nothing.  Is this the future of all public discourse?

Waving or burning flags just isn’t impressive. Nor is standing, or sitting when others stand.  Nor is raising a fist or locking arms.  Because these expressions require no real investments, they amount to cheap, lazy, conveniently vague, and, thus, mostly empty gestures.  I’m old enough to know that they’ll persist, of course, and no doubt dominate the general public’s collective consciousness.  I only hope we can manage to maintain, perhaps even expand, spaces for more sober, motivated, and responsible discourse.  In any case, I’d prefer not to spend my remaining years watching them being torn down, especially from within.

References:

Anderson, R. 2017. Nature’s Disastrous ‘Whitewashing’ Editorial. Available online at https://www.theatlantic.com/science/archive/2017/09/an-unfortunate-editorial-in-nature/538998/; accessed September 27, 2017.

Benner, D. 2017. Why the Purge of Historic Monuments Is a Bad Idea. Available online at http://www.intellectualtakeout.org/23021; accessed September 27, 2017.

Campbell, P. 2017. Statues: an editorial response. Nature 549: 334.

Comment. 2017. Readers Respond to Nature’s Editorial on Historical Monuments. Available online at http://www.nature.com/news/readers-respond-to-nature-s-editorial-on-historical-monuments-1.22584; accessed September 26, 2017.

Gould, K.E. 2017. Statues: for those deserving respect. Nature 549: 160.

Green, M.H. 2017. Statues: a mother of gynaecology. Nature 549: 160.

The Editors. 2017. Science must acknowledge its past mistakes and crimes. Nature 549: 5-6.

Washington, H. 2017. Statues that perpetuate lies should not stand. Nature 549: 309.

What Next for Gay Marriage?

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.

(The US Supreme Court may very well rule on the legality of same-sex marriage bans in 2015.  Originally published in 2011, this article remains informative.)

Voters here in Wisconsin passed a ban on same-sex marriage in the fall of 2006. The following morning, I perhaps thoughtlessly tried to console a lesbian coworker by predicting a universal right to marry within a decade. “That’s fine,” she replied blankly, “but that doesn’t help us now.” For me, the gay marriage issue was and will always be no more than a legal and moral abstraction. For my coworker and her long-time partner, on the other hand, the marriage ban was a personal tragedy and a cold, hard slap in the face from their most trusted friends and neighbors.

In November 2008, the citizens of California passed Proposition 8, amending that state’s constitution to ban recognition of same-sex marriages performed thereafter. The initiative passed with 52 percent of the vote. Another lesbian, Kristin Perry, brought suit against California’s then governor, Arnold Schwarzenegger, and then Attorney General, Jerry Brown, asking the court to strike the law as unconstitutional.

On August 4, 2010, U.S. District Judge Vaughn Walker, a Republican appointed by George H. W. Bush, ruled that Proposition 8 had created an “irrational classification” and “unconstitutionally burden[ed] the exercise of the fundamental right to marry.” Thus, according to the federal court, voters in California had violated the Equal Protection and Due Process clauses of the Fourteenth Amendment. On August 16, the Ninth Circuit Court of Appeals ordered Walker’s judgment stayed pending the state’s appeal. One way or another, Perry v. Schwarzenegger is widely expected to reach the United States Supreme Court.

My prediction in 2006 may have been insensitively timed. But it was firmly based on clear trends toward public and legal acceptance of homosexuals in America. In 2003, for example, the Supreme Court overturned Bowers v. Hardwick—precedent from seventeen years past—to strike down a state law criminalizing gay sex. “[A]dults may choose to enter upon this relationship in the confines … of their own private lives and still retain their dignity,” Justice Anthony Kennedy wrote for the majority in Lawrence v. Texas, and “[p]ersons in a homosexual relationship may seek autonomy for these purposes, just as heterosexuals do.”

In isolation those words appear to bode well indeed for homosexuals. But other scraps of Lawrence muddy the jurisprudential waters considerably. First, although the Texas statute discriminated against gay sex on its face, the Court’s ruling was based on due process rather than equal protection grounds. Second, and more importantly, Kennedy’s opinion explicitly warned that his ruling did “not involve … formal recognition to any relationship that homosexual persons seek to enter.”

But Justice Antonin Scalia wasn’t convinced. In his dissent, Scalia protested that the majority’s opinion “dismantles the structure of constitutional law that has permitted a distinction to be made between heterosexual and homosexual unions, insofar as formal recognition in marriage is concerned.” Lawrence “‘does not involve’ the issue of homosexual marriage,” he carped, “only if one entertains the belief that principle and logic have nothing to do with the decision of this Court.”

So, assuming that a legislative ban on gay marriage like Proposition 8 will soon reach the Supreme Court, what’s the likely outcome? For Martha Nussbaum, University of Chicago professor of law and ethics, and author of From Disgust to Humanity: Sexual Orientation & Constitutional Law (Oxford, 2010), the answer depends upon the relative influence of two competing philosophical paradigms.

Based partially in right-wing collectivism, the “politics of disgust” defer to the group and sustain democratic domination of disfavored minorities. Perhaps epitomized by Englishman Lord Devlin’s and American Leon Kass’s views that the average person’s deep-seated aversion toward a given practice is reason enough to make it illegal, disgust cares not whether the practice is actually harmful.

The “politics of humanity,” by contrast, are founded in the tenets of classical liberalism and are categorically anti-collectivist. Exemplified by John Stuart Mill’s libertarian principle that individual freedoms should remain unrestricted except to avoid injury to others, humanity relies upon the imaginative skills inherent in compassion and sympathy and emphasizes equal respect for the dignity of all persons.

According to Nussbaum, the politics of disgust are slowly yielding to the politics of humanity in the U.S. And “[e]ven those who believe that disgust still provides a sufficient reason for rendering certain practices illegal,” she argues, “should agree … that disgust provides no good reason for limiting liberties or compromising equalities that are constitutionally protected.” But constitutional interpretation, of course, is precisely where the ethical rubber hits the political road.

One can certainly argue, as Nussbaum does, that American constitutional jurisprudence has already displayed an increasingly enthusiastic tendency to reject disgust in favor of humanity. In recent decades, especially during times of peace, the Court has afforded equal protection or substantive due process rights to a wide variety of disfavored minorities, including women, blacks, the mentally retarded, members of non-traditional families, and even prisoners.

Indeed, seven years prior to Lawrence, the Court granted a mammoth victory to homosexuals too. In 1992, Colorado passed Amendment 2, a ballot measure disqualifying gays from the benefits of antidiscrimination laws. Proponents justified the ban by contending that homosexuals shouldn’t be afforded “special rights.” Penning the majority opinion in Romer v. Evans as well, Justice Kennedy rejected that characterization of the ban’s effect. “This Colorado cannot do,” he ruled on equal protection grounds: “A state cannot so deem a class of persons a stranger to its laws.”

So, in light of Romer, are states precluded from deeming gay persons strangers to their laws of marriage? Nussbaum remains skeptical. In that case, she explains, “illegitimate intent was written all over the law and its defense.” Romer was “a very narrow holding,” she cautions, offering “little guidance for future antidiscrimination cases involving sexual orientation.” Importantly, Kennedy had subjected Colorado’s law to mere rational basis review as opposed to intermediate or strict scrutiny, meaning that he did not, in Romer, identify homosexuals as a suspect class deserving maximum protection. Which is not to suggest that he couldn’t or wouldn’t do so in a future case, but rather only to point out that other discriminatory state laws, if more shrewdly crafted, might survive the less demanding standard of review.

Thus, Nussbaum reasons, “The secure protection of gays … would seem to require a holding that laws involving that classification, like laws involving race or gender, warrant some form of heightened scrutiny.” In order to induce such a holding, a plaintiff would generally need to convince a court that homosexuality is an immutable characteristic (a contentious proposition nevertheless consistent with available scientific evidence), that homosexuals have suffered a long history of discrimination, and that they remain politically vulnerable.

Somewhat surprisingly, however, Nussbaum argues that state rather than federal courts should manage the issue of gay marriage until democratic majorities can be trusted to support inclusion. Local adjudication, she argues, would shield the U.S. Supreme Court from this particularly hazardous battle in the culture wars, and encourage the kind of robust experimentation inherent in federalism that, hopefully, will result in a more educated polity.

And certain states have already taken that initiative. Nussbaum offers Varnum v. Brien, a 2009 decision delivered by the Supreme Court of Iowa, as ample grounds for optimism. Although only 44 percent of Iowans presently support same-sex marriage, the seven-member court in Varnum struck the local Defense of Marriage Act, and, applying intermediate scrutiny, unanimously ruled that the state had no important interest in denying marriage licenses to its citizens based on sexual orientation.

What Nussbaum could not have known when writing From Disgust to Humanity was that on November 2, 2010, Iowa voters would oust each of the three Varnum justices who were up for retention. The facts surrounding the election make it clear that Iowans were reacting to the previous year’s ruling on gay marriage. The high court justices faced no opponents and needed only 50 percent of the vote to retain their seats. By contrast, each of the 71 lower court judges on the ballot was easily reelected. Incidentally, the anti-retention campaign was heavily financed by out-of-state special-interest groups, including the National Organization for Marriage and the American Family Association.

So, with Iowa in mind, might judges subject to reelection in other states be less inclined to stand up for homosexuals in defiance of local majorities? Gays might be forced to look to the U.S. Supreme Court once again, and to Justice Kennedy, who is likely to cast the deciding vote on a panel equally divided over several social issues. Would Kennedy extend his reasoning in Lawrence and Romer to cover same-sex marriage rights? Or would the committed Catholic Justice, appointed by Ronald Reagan in 1988, draw the line at marriage, a term still rife with religious connotations? Would he defer to democratic majorities, perhaps siding with Scalia the constitutional originalist?

In Justice Kennedy’s Jurisprudence: The Full and Necessary Meaning of Liberty (Kansas, 2009), Frank Colucci, political scientist at Purdue University—Calumet, dispels popular reports of Kennedy’s alleged inconsistency, dissecting the Justice’s public declarations to expose an underlying jurisprudential philosophy of individual rights. Kennedy “employs a moral reading of the Constitution,” Colucci finds, “to enforce individual liberty, [but] not equality, as the moral idea he finds central” to the document. Although he often sides with judicial minimalists and originalists, he does so for different reasons. In fact, Kennedy favors an expansive role for the Court and remains the justice most likely to strike legislation he deems contrary to the Constitution.

Much to Scalia’s irritation, Kennedy’s search for liberty’s parameters ends not in the Constitution’s text or tradition. Rather, his overriding concern seems to be whether government intrusion prevents the individual “from developing his or her distinctive personality or acting according to conscience,” according to Colucci, or demeans a person’s community standing and denigrates his or her “human dignity.” To provide “objective referents” for his constitutional interpretations, the Justice cites sociological research, international law, and emerging political consensus. His moral precepts, the author says, “have clear rhetorical roots in post-Vatican II Catholic social thought.”

In cases dealing with religion specifically, Kennedy has supported “noncoercive” government action, opining in Allegheny County v. Greater Pittsburgh ACLU, for example, that states should be given “some latitude in recognizing and accommodating the central role religion plays in our society.” Then again, in Lawrence, the Justice clearly emphasized a religiously denounced individual right, professing the founders’ insight that “later generations [would] see that laws once thought necessary and proper in fact serve only to oppress.” Similarly, Kennedy was swayed in Roper v. Simmons by recent trends among a very few states and in the world at large before concluding that death was a cruel and unusual punishment for minors.

Although he recognizes Kennedy’s potentially decisive impact on such questions, Colucci does not directly address the prospects for same-sex marriage. Nevertheless, his conclusions seem to portend well for gays. In Kennedy’s constitutional jurisprudence, personal autonomy has trumped democracy. Tradition and precedent are crucial, yes, but do not entirely define the Court’s dynamic and continuing role to “discover” the meaning of individual liberty, perhaps through recent expressions of moral advances made both at home and abroad.

All of which leads me briefly to Proud To Be Right: Voices of the Next Conservative Generation (Harper, 2010), a title unlikely to be cited, one might hastily presume, in support of any article predicting the relatively imminent legalization of homosexual marriage. Here, Jonah Goldberg—founding editor of the National Review Online, contributor to Fox News, and best-selling author of Liberal Fascism—has assembled an impressive band of young and unapologetically conservative writers—some religious, some secular—“who do not yet have a megaphone, but might deserve one.”

In the ironically titled “Liberals Are Dumb,” for example, evangelical Christian blogger Rachel Motte touts the value of a rigorous liberal arts education for every conservative activist who wants to be taken seriously. And in “The Politics of Authenticity,” social conservative Matthew Lee Anderson warns politicians that his peers are considerably less obsessed over sexual mores, but much more concerned about the ethics of conducting business and war than their older, value-voting predecessors. A more intellectual and less personally intrusive conservatism focused on economics and foreign policy? One can only hope.

But particularly relevant to the issue at hand is a refreshingly candid piece from James Kirchik, contributing editor to the New Republic, called “The Consistency of Gay Conservatives.” Though the GOP’s base—presently empowered by the religious Right—remains opposed to gay marriage, Kirchik predicts, support will likely increase as the Republican pool grows younger. Why? Because “the ‘gay agenda’ today,” he says, “is fundamentally conservative.”

Gay activists in California, after all, protest not for “free love,” but only for the right to marry their committed partners. “They want to join this bedrock institution,” Kirchik reminds us, “not tear it apart.” In fact, the prevailing scientific explanation for homosexuality—unmistakably deterministic—is repudiated not so much by conservatism, the author contends, but instead by “Left-wing ‘queer’ theorists, who argue that binary sexuality is a social construct.” A little more food, perhaps, for feminist thought.

Living in the largely rural Midwest, even the least bigoted person is tempted to write off homosexuals for inspiring too few vocal allies and entirely too many powerful foes. Gay marriage remains one of those annoying, distracting, “hot button” political skirmishes in a larger culture war that, quite frankly, never deserved Americans’ precious time and energy in the first place. But the forces of religious bigotry will soon lose this battle, as they have so many others in recent centuries.

Whether the courts recognize and respect it or not, public opinion from nearly every perspective appears to be converging on at least quiet support for same-sex marriage. Thus, the Earth continues to grow a little rounder, the solar system more heliocentric, and the universe ever more capacious. Meanwhile, humanity grows less childlike. As we continue to discover and realize our vast potential, we find less and less occasion for odium and pettiness.

When Faith Kills: Christian Healing vs. Medical Science.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer. Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.

I have fought a good fight. I have finished my course. I have kept the faith.—Dennis and Lorie Nixon’s dedication to their sixteen-year-old daughter, Shannon, on her headstone.

I should begin by pointing out that the Nixons of Altoona, Pennsylvania actually lost two children in the early 1990s. When their younger son, Clayton, fell victim to a common ear infection and pleaded with his parents to make the pain stop, the Nixons chose to address the problem with prayer rather than medical science, in keeping with their Christian faith. Sadly, Clayton never recovered; his tiny body eventually succumbed to extreme dehydration and malnutrition.

Blair County District Attorney, William Haberstroh, prosecuted Dennis and Lorie on charges of involuntary manslaughter and child endangerment. After a plea of no contest, the Nixons were sentenced to probation. When asked what he hoped to achieve, Haberstroh answered, “What I want to do is not change their belief, but change their conduct.” But—for better or for worse—as people believe, so shall they act.

So maybe it should have come as no surprise that, when Clayton’s older sister took ill, the Nixon family elected once again to forego secular medicine in favor of prayer and anointment with oil, consistent with scriptural teachings. Shannon’s mood improved at one point, so her parents praised God for His perceived mercy. But the child’s condition quickly worsened and, following another round of intense prayer, Shannon died at the very end of spring, 1995.

According to the autopsy, Shannon had suffered from diabetic ketoacidosis (DKA), which made her blood more acidic than its surrounding tissues. Although the disease is technically incurable, it can be successfully treated scientifically with regular insulin injections typically costing less than one dollar each. Without the shots, medical experts agree, victims of DKA will surely perish. Thus, the Nixons’ decision to treat their daughter with prayer alone sealed her regrettable fate. “They sacrificed this little girl for their religious beliefs,” Haberstroh concluded after filing a fresh set of criminal charges.

I decided to delve further into the facts of these unfortunate cases when I read about an eerily similar situation in my home state of Wisconsin. By March 26, 2008, the Associated Press had begun to publish reports concerning a little Weston girl—eleven-year-old Madeline Neumann—who had been obviously sick for approximately thirty days. Instead of seeking medical care, her parents, Leilani and Dale, chose to pray for Madeline’s recovery. She died from DKA on March 23, Easter Sunday. Eventually, the Neumanns were charged with second-degree reckless homicide. But Marathon County District Attorney, Jill Falstad, decided she could not charge them with child abuse because section 948 of the Wisconsin statutes provides a criminal exemption from that crime for religious parents who choose to treat their afflicted children with nothing but prayer.

The local newspapers referenced a pending case in Oregon as well, where Carl and Raylene Worthington have been charged with manslaughter and criminal mistreatment. Their fifteen-month-old daughter, Ava, became ill with pneumonia and a blood infection, both of which could have been effectively treated with antibiotics. Again, the parents rejected medical science in favor of prayer and, again, on March 2, an innocent child died.

I had no idea how frequently these cases occurred across America—or how bizarre and horrifying the details could be—until I read about literally hundreds of them in a newly published history of this collective national disaster, When Prayer Fails: Faith Healing, Children, and the Law (Oxford, 2008), written by Shawn Francis Peters, professor of history at the University of Wisconsin—Madison. Typical offenders, Peters instructs, are “intensely religious parents whose lives revolve around the doctrines and practices of small, close-knit Christian churches that ground their doctrines in narrowly literal interpretations of the Bible.” Young victims live anywhere from California to Massachusetts and, incredibly, their parents are seldom prosecuted because thirty-nine of our fifty states provide religious exemptions to child abuse or neglect charges, and nineteen states permit religion-based defenses to felony crimes against children.

One might reasonably presume that faith-healing fatalities had long ago become a vestige of our more primitive past, given relatively recent and profound scientific advances—the germ theory of disease in particular. But, in April of 1998, pediatrician Seth Asser and children’s advocate Rita Swan published an alarming study in the highly respected journal Pediatrics that proved otherwise. Asser and Swan investigated 172 child deaths in American faith-healing churches during a twenty-year period and discovered that the vast majority of them had ensued as a result of religion-based neglect. Of those, 140 deaths were caused by conditions for which medical science provided a ninety percent survival rate. Another eighteen resulted from diseases whose victims could have expected to survive better than fifty percent of the time.

Such incredible stubbornness, of course, could originate from only one source, beginning with the Old Testament. In Exodus 15:22-27, God purportedly advised the Israelites that it is “the Lord who heals you.” 2 Chronicles 16 warned that Asa was gravely mistaken in seeking assistance from a physician instead of God. In the New Testament, the author(s) of both Mark and Luke characterized doctors as inept buffoons (somewhat ironically, because Luke was supposedly a physician himself), while the author(s) of John depicted Jesus as a healer of unlimited capacity who could even raise Lazarus from the dead. Those themes reverberated throughout the book of Acts, and, finally, in the Epistle of James, Jesus was said to have prescribed prayer and anointment with oil as the ultimate remedy for all bodily afflictions.

Their forebears having wrested religious authority from the miserly grips of Catholic priests, many contemporary Protestant literalists have similarly attempted to seize medicine from the purview of their more educated and highly trained counterparts. Nineteenth-century Pentecostal healer, Carrie Judd Montgomery, condemned medical science to the “sin-stricken world,” and post-World War II-era evangelist, Jack Coe, once admonished Christians that any among them who sought care from a doctor would be seared with the “mark of the beast.” But most famous of all, perhaps, was Oral Roberts, who claimed that “dozens and dozens” of his followers had been dramatically rescued from the grave during his extravagant, fire-and-brimstone services. In 1987, renowned skeptic James Randi challenged Roberts to produce evidence of the alleged miracles, but Roberts responded with nothing better than a spiteful sermon and a few citations to scripture.

Even so, a handful of modern-day Christian prayer-healing apologists have attempted in vain to employ the rational methods of science to verify the success of religious involvement generally, and religious supplication in particular. Harold Koenig, director of Duke University’s Center for Spirituality, Theology, and Health, for example, alleged that churchgoers tended to be healthier than non-churchgoers. Unfortunately, according to Richard P. Sloan, Columbia University professor of behavioral medicine and author of Blind Faith: The Unholy Alliance of Religion and Medicine (St. Martin’s, 2006), Koenig and others have consistently ignored even the most basic scientific standards, failing to control for obvious “confounders,” including the genetic and behavioral traits of their subjects, and refusing to correct for “multiple comparisons,” thereby ignoring the statistical distinction between “chance” and “real” findings. Indeed, Sloan surmises, what these authors’ studies in fact demonstrate is “how weak the evidence actually is.”

Somewhat more credible are better-controlled evaluations of long-distance intercessory prayer (IP), where numerous patients are randomly and unwittingly assigned to one or more modes of treatment or control groups. Sometimes subjects are cared for in a variety of ways, as occurred in the MANTRA (Monitoring and Actualization of Noetic Training) and the MANTRA II studies, published by Mitchell Krucoff and others in 2001 and 2005. In the pilot program, 150 acute coronary patients undergoing angioplasty were selected to receive one of five treatments: imagery, stress relaxation, touch therapy, IP, or standard care. The results of MANTRA yielded not one statistically significant difference. Nevertheless, Krucoff elected to initiate a larger evaluation. MANTRA II, designed as the definitive study on IP and released in The Lancet, randomized 748 angioplasty or cardiac catheterization patients to one of four treatments: standard care, IP, MIT (a combination of music, guided imagery, and touch), and IP plus MIT. Again, the outcomes confirmed that prayer had no effect.

Finally, in 2006, Harvard’s Herbert Benson published the $2.4 million Study of the Therapeutic Effects of Intercessory Prayer (STEP) in The American Heart Journal. One might note that such funding originated from the John Templeton Foundation, the stated goal of which is to advance Christian ideology. In any case, 1802 patients recovering from coronary artery bypass surgery were randomized into one of three groups: those who unknowingly received prayer from three mainstream religious groups, those who unknowingly did not, and those who knew they would receive prayer. Either way, the well-intended prayers had absolutely no effect.

At some level, however, most of us realize that prayer is valuable, if at all, only to the individuals who do the praying, and only at an unconscious, therapeutic level—particularly during trying and desperate times. In other words, prayer-healing is best understood as a self-administered psychological and, in the end, neurobiological phenomenon. But I also suspect that informed common sense will continue to elude the very limited intellectual horizons of religious literalists who will no doubt always believe that worldly knowledge in contradiction of religious dogma—science, most conspicuously—is the enemy of God and the thief of souls. Which is fine, I suppose, at least for them.

But what about their innocent children; and what, if anything, should we do about the statutory exemptions that expressly invite fundamentalist parents to sacrifice their own kids on the pyre of personal religious conviction? In a 1944 decision, Prince v. Massachusetts, the United States Supreme Court addressed that very question. “Parents may be free to become martyrs themselves,” wrote Justice Wiley Rutledge. “But it does not follow that they are free, in identical circumstances, to make martyrs of their children.” Along with all rights—including parenthood and freedom of religion—come enforceable responsibilities. At some point, faith must summon the courage of humility, if not of rationality. And, surely, that point is traversed by some considerable distance when a little boy or girl dies for lack of a doctor’s pill.

Abortion’s Still-Unanswered Questions.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer. Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.

For thoughtful persons unburdened by ideological agendas, abortion remains a complex topic implicating tough legal, philosophical, and scientific questions. How should we characterize the fetus, for example—as part of the mother or as a separate human being? Which has superior rights? Is the “right to privacy” constitutionally defensible? If so, was the trimester system outlined in Roe v. Wade the most prudent approach to balancing the woman’s right against the state’s legitimate interests? Is abortion really about something else altogether?

Human Embryo

The jurisprudence of abortion is highlighted in Erwin Chemerinsky’s wide-ranging liberal rallying cry, The Conservative Assault on the Constitution (Simon & Schuster 2010). Founding Dean of the law school at the University of California, Irvine, Chemerinsky admits the right of privacy was never expressed in the Constitution’s text. Nor was it compelled by the Fourteenth Amendment’s equal protection clause, for example, as was the landmark ruling in Brown v. Board of Education.

Nonetheless, Roe’s revolutionary 1973 holding was not without precedent, loosely defined. The right of privacy was created—or “revealed,” as some might prefer—eight years earlier in Griswold v. Connecticut where the Supreme Court struck a law prohibiting the use of contraceptives by married couples. Writing for the Court, Justice William Douglas notoriously discovered the now fundamental right among the supposed “penumbras” emanating from the Bill of Rights—a still troublesome expression omitted from Chemerinsky’s account.

Then, in the 1972 decision of Eisenstadt v. Baird, the Court extended its more restrictive ruling in Griswold to cover all couples. Here, Chemerinsky accentuates the operative language: “If the right of privacy means anything,” Justice William Brennan pronounced, “it is the right of the individual … to be free from unwarranted governmental intrusion into matters so fundamentally affecting a person as the decision whether to bear or beget a child.”

Such was Roe’s immediate constitutional foundation, however one values it. But the privacy right’s roots run deeper yet. In 1923 and 1925, respectively, the Court toppled laws forbidding the teaching of German (Meyer v. Nebraska) and proscribing parochial school education (Pierce v. Society of Sisters). In 1942, the Court invalidated a law mandating sterilization of certain criminals (Skinner v. Oklahoma) and, in 1967, a statute prohibiting interracial marriage (Loving v. Virginia).

In each case, the Court concocted specific constitutional rights never enumerated in the founding document: to marry, procreate, and raise children. So “it was clear at the time of Roe,” Chemerinsky argues, “that the Constitution had long been interpreted as protecting basic aspects of personal autonomy,” especially those relating to family. Thus, he concludes, it’s actually the textualists and not the supporters of Roe who urge radical changes in constitutional law.

The author completely disregards other obvious questions. Why should the right to procreate imply a right not to do so following a woman’s decision to risk pregnancy? Are there any theoretical limits whatsoever to substantive due process and the right of privacy? Can we the people ever know those limits, except through judicial intervention or an unlikely constitutional amendment?

Even if we accept the right’s authenticity, we still must consider the state’s rationale for intruding on behalf of the fetus. In Roe, Justice Harry Blackmun located a compelling government interest at the point of viability “because the fetus then presumably has the capability of meaningful life outside the mother’s womb.” To fix the initiation of human life at conception, Chemerinsky concurs, would be to inappropriately base the law “not on consensus or science, but on religious views.”

But is viability a distinction without a difference? Inside or outside the womb, after all, a “viable” fetus still requires intensive care. Unfortunately, the author never considers a much simpler and more traditional option—the moment of birth. Thus, cautious readers are left wondering whether scientific advances might soon render viability a confusingly fluid standard on the one hand, or if the glaring arbitrariness of Roe’s trimester system could have been avoided on the other.

So if sustainability and bright-line clarity are crucial, some skeptics might ask, why not choose the “moment” of conception? In The Fetal Position: A Rational Approach to the Abortion Issue (Prometheus 2010), University of Southern Mississippi professor of philosophy and religion, Chris Meyers, challenges the very definability of that moment.

The problem, he contends, is that conception is a “gradual process with many steps extended over several hours.” When the sperm first breaches the egg, for example, the latter has yet to divide a second time and still holds 46 chromosomes. Even after meiosis, it takes about twelve hours for the DNA of both cells to completely fuse, and another eighteen for the zygote to begin dividing.

So at what precise point would the anti-abortionist fix conception? Meyers asks us to imagine a newly invented but hardly inconceivable birth control pill. It doesn’t thwart ovulation or prevent sperm from entering the uterus or egg cell. It only precludes dissolution of the sperm’s head and, thus, the intermingling of parental DNA. Is the pill contraceptive or abortive? The anti-abortionist can either admit he doesn’t know when morally significant life begins, or designate an arbitrary point in developmental time as the moment of conception.

The rarefied details of abnormal development are no less exasperating. When, for instance, does life—and thus ensoulment, for the religionist—commence for the second of two identical twins who doesn’t even exist until several days following fertilization? Do conjoined twins possess separate lives and souls even though many share vital organs, including brains, and couldn’t survive if separated? Does the genetic chimera—one human fused from two fertilized embryos—have one soul or two and, if only one, where did the second soul go?

“Instead of identifying what makes humans morally special with what makes us biologically alive,” Meyers argues, “we would do better by identifying it with that which makes us persons: consciousness, the capacity for rational thought, the ability to have human feelings,” and self-awareness. The metaphysical waters begin to clear, in other words, only when one abandons the supernatural association of ensoulment with moral significance.

In any case, let’s assume two human beings with conflicting metaphysical interests. Which attendant liberty interest should prevail—the mother’s right to control her body, or the fetus’s right to life? One might presume life—the right on which all others depend—to reign supreme. But not so fast, warns the author. What if the human seeking life can achieve it only at the expense of the human seeking bodily control?

Meyers begins with Judith Thomson’s legendary violinist hypothetical. You wake up in a hospital, the scenario goes, to find a supremely talented and thus valuable musician hooked up to your kidneys. You never consented to this burdensome union, but, if unplugged, the helpless violinist would perish. So far, the problem is relatively simple—you owe no duty to the musician or her adoring fans. So far, however, the analogy applies only to pregnant women who were raped.

But what if you bear partial responsibility for your predicament? Maybe you attended a party for the ailing violinist, Meyers continues, knowing that someone with your blood type might be drugged and recruited to the musician’s cause. No problem—we would still acknowledge your right to bodily integrity. The same reasoning would apply if you invited a homeless stranger into your house on a very cold day and later decided to evict her. The initial kindness would not imply a continuing duty to shelter and feed.

Sure, but what if the stranger was your child? You have “the right to be selfish when it comes to your own body,” Meyers resolves, “and no one can force you to let another use it.” Likewise, so long as the fetus is not viable, “the pregnant woman has the right to deny the fetus the use of her body, even if that means the fetus dies.”

Convinced? Rights comparisons resemble religious quarrels in their regrettable tendency toward insoluble emotional conflict. There must be a more rational way to resolve the predicament. What if the prevailing developmental science, for example, confirmed that no one capable of actually experiencing harm is injured during an abortion? Perhaps then we could broach more practical considerations, like the extent to which individual, family, and even national and international finances might be affected by compulsory childbirth?

Which carries us to the issue of fetal pain. In Ourselves Unborn: A History of the Fetus in Modern America (Oxford 2011)—easily the most sophisticated and engaging title of the three—Williams College historian Sara Dubow describes how the valuation of fetal life since the late nineteenth century has varied vis-à-vis intensely fought debates over gender roles and the relative authority of science and religion.

The post-Roe era was distinctly marked by the aforementioned disputes over conflicting rights. But beginning in the 1980s, two new claims—that women were psychologically traumatized by, and that fetuses experienced terrible pain during, abortion procedures—were woven together into a novel rhetorical strategy culminating in a popular, though conspicuously political, patriarchal, and anti-scientific, “compassionate” conservatism.

In their mission statement emphasizing both physical and emotional injury to mothers, Americans United for Life labeled abortion a “violent deception” producing two victims. AUL’s list of legislative objectives featured a mandate that clinics “protect the health and safety of women” and “inform women of the health risks of abortion including the link between abortion and breast cancer.” Similarly, the National Right to Life Committee circulated pamphlets warning women that, in addition to cancer, abortion can trigger “guilt, regret, divorce, promiscuity, child abuse, lesbianism, eating disorders, reckless behavior, substance abuse, and suicide.”

In his 1984 address to the National Religious Broadcasters convention, President Ronald Reagan insisted that “[m]edical science doctors confirm that when the lives of the unborn are snuffed out, they often feel pain, pain that is long and agonizing.” In response, Dr. Bernard Nathanson produced and narrated The Silent Scream, a graphic, twenty-eight-minute videotape of an abortion procedure performed on a twelve-week-old fetus, which resurfaced in the 1990s during congressional debates over late-term (“partial-birth”) abortion.

All of which proved emotionally rousing, to say the least. But the sober facts, Dubow reminds us, supported precious little of it. The American College of Obstetricians and Gynecologists knew of “no legitimate scientific information” in support of early pregnancy fetal pain. Certain prerequisites to discomfort, including a mature cerebellum, brain and spinal cord myelination, and neurotransmitter hormones, were absent. Leading neurologists instructed as well that twelve-week-old fetuses lack the necessary nerve cell circuitry. The National Cancer Institute and the American Psychological Association were equally incredulous about the alleged links to breast cancer and post-abortion trauma.

Nevertheless, such propaganda would permeate debates over late-term abortion procedures in Congress and, eventually, the Supreme Court. In 2007, Justice Anthony Kennedy penned the majority opinions in Gonzales v. Planned Parenthood and Gonzales v. Carhart, upholding the constitutionality of the Partial-Birth Abortion Ban Act of 2003. Therein, he presumed to shield fetuses from the “brutal and inhumane” dilation and extraction procedure and to protect women from the “[s]evere depression and loss of esteem” that follows.

In her dissent, Justice Ruth Bader Ginsburg first noted the glaring scientific reality gap and then scolded Kennedy for perpetuating what she deemed an embarrassingly antiquated ideology. Gonzales, she concluded, “reflects ancient notions about women’s place in the family that have long since been discredited.” But as Dubow suggests, opposition to abortion has always been less about saving lives than preserving a cultural norm at the expense of its emerging and somewhat ill-defined alternative.

Indeed, abortion might be the most inscrutable cultural issue of our time. Its current party politics are especially maddening. Republicans tend to base their stance on particular religious tenets, the implementation of which would be both morally and constitutionally amiss. Democrats, by contrast, seem confused at best—frequently confessing both their support for choice and their heartfelt desire to reduce the number of abortions. But why the latter? What do they suppose is wrong with abortion?

Editorial: Free Speech Should Be Celebrated, Not Just Tolerated. Yes, All Free Speech.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.

I defend the Dove World Outreach Center’s recent decision to burn a copy of the Koran as an exercise of its members’ First Amendment rights to free speech, and I do so without qualification.

On March 20, 2011, a few dozen members of the DWOC, a small fundamentalist Christian church in Gainesville, Florida, staged a “trial” of the Koran.  Pastor Terry Jones, who had threatened to burn 200 copies of the Koran on the last anniversary of the September 11 attacks, presided as “judge” over the event.

An Imam and an ex-Muslim debated the book’s merits before a “jury” of 12 congregants who later pronounced the text guilty of five crimes against humanity, including the promotion of terrorism, death, torture, and rape worldwide.  The penalty was determined via online poll.  Voters chose burning over shredding, drowning, and a firing squad, so the congregation ignited the Koran and watched it burn in a barbecue pit at the church altar. The event was preserved on the congregation’s website.

Although the burning received little international attention, beleaguered Afghan President Hamid Karzai—ever the opportunistic politician—publicized the event on March 24.  He tagged it “a crime against a religion and the entire Muslim umma,” and called on the United Nations and America to bring the offenders to justice.

On April 1, three mullahs at the Blue Mosque incited thousands of Afghan protestors who then stormed the U.N. compound in Mazar-i-Sharif.  Twelve people were killed, including seven U.N. workers—four Nepalese guards and three Europeans.  On April 2, ten more were slain and dozens wounded in Kandahar.  Angry demonstrations persisted across Afghanistan and, on April 4, an Afghan border policeman murdered two American soldiers in the northern province of Faryab.

The U.N.’s chief envoy to Afghanistan, Staffan de Mistura, blamed the violence on Pastor Jones.  “I don’t think we should be blaming any Afghan,” he recommended.  “We should be blaming the person … who burned the Koran.  Freedom of speech does not mean freedom from offending culture, religion, traditions.”

Back in the U.S., President Obama added, “The desecration of any holy text, including the Koran, is an act of extreme intolerance and bigotry.”  General Petraeus called the act “enormously intolerant.”  Senators Harry Reid and John Kerry condemned the burning, while Lindsey Graham suggested the imposition of speech restrictions to protect the troops.

The following facts are NOT RELEVANT to my defense:  Both Islam and Christianity are irrational, dangerous, and mind-numbing ideologies.  The events outlined above can teach us much about both religions.  The DWOC is a small congregation.  The U.S. government did not interfere with the DWOC’s act.  There is no logical connection between solving a social problem and symbolic speech—for example, burning a book, wearing a black armband, desecrating a flag, or torching a brassiere.  Some American Muslims suffer religious discrimination.  Worldwide, most politicians and popular press outlets either condemned or ignored the DWOC’s act.  The burning does not assist the U.S. government in winning hearts and minds in the Muslim world.  The Taliban may have benefited from the burning.  Terry Jones now receives death threats regularly.  The burning was one of many sine qua non (without which not) causes of lethal violence perpetrated by Afghan criminals.  Western secular and Muslim cultures are fundamentally different.

Conversely, the following facts are supremely RELEVANT to my defense:  The First Amendment right of free speech implies neither a popularity contest nor the speaker’s responsibility for the irrational and unlawful reactions of others.  The violence and slaughter in Afghanistan were proximately caused only by the brutality of Afghan criminals (and, thus, the burning was not tantamount to shouting “fire” in a crowded theater).

As soon-to-be Supreme Court Justice Louis Brandeis observed in 1913, “Sunlight is said to be the best of disinfectants.”  Free speech is perhaps the most precious of American values for two related but distinguishable reasons.   First, it allows speakers to express themselves without threat of government interference, which thankfully hasn’t been an issue here.

Second, free speech is valuable to all people because it encourages would-be speakers to emerge from society’s shadows, cracks, and crevices and expose themselves and their opinions for our surveillance and judgment.  Skokie, Illinois, was better served in 1977, for example, when the National Socialist Party of America chose a public march over yet another surreptitious, after-hours rant in the woods.  Thus, to condemn Terry Jones and his church for speaking—no matter what value we assign to their message—is also to discourage political, intellectual, and moral sunlight.

American and U.N. officials in this case should have reassessed the means by which they chose to pursue their political and foreign policy objectives.  And before they publicly passed judgment on the congregation’s speech, American politicians and popular press in particular should have stopped to reconsider the unique character of our constitutional tradition.  Compliance with the First Amendment’s restrictions against government interference simply isn’t good enough.

Free societies do not come and cannot be maintained cheaply.  If nothing else, present conditions in the Middle and Near East should highlight that reality.  The thought of enthusiastic support for unpopular free speech will make many feel uncomfortable.  But such support is also the primary source of America’s political eclecticism and entrepreneurial vigor.  It renders our democracy the ultimate economic, intellectual, and moral archetype for all freedom-hungry nations to emulate.  Elsewhere, dissent and heresy are unthinkable.  Here, they must be openly and passionately celebrated.

Book Review: Phillip E. Hammond, David W. Machacek, and Eric Michael Mazur, Religion on Trial: How Supreme Court Trends Threaten Freedom of Conscience in America (AltaMira 2004) 177 pp.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.

“The First Amendment,” scholars of religion Hammond, Machacek, and Mazur argue, “only reaffirmed what [James] Madison understood already to be the case, which was that the federal government was powerless over conscience . . ..”  The founders, they insist, conceived of “freedom of conscience” not simply as a restatement of religious liberty, but rather as a much more comprehensive franchise—“the freedom of the individual to decide for him- or herself questions of morality, truth, and beauty,” irrespective of the individual’s religious proclivities or lack thereof.  In broad terms, the authors rest their claim upon two distinct foundations, the first solid and enduring, the second flimsy and ephemeral.

History, they profess, clearly demonstrates the founders’ inclusive aspirations.  Madison understood that ratification of the Bill of Rights would not conclude America’s struggle to define freedom of conscience.  His fondest hope for the First Amendment, in fact, was that it “would set in motion a process of expanding liberty.”  Indeed, mounting freedoms would seem vital to the survival of any emerging republic, or, otherwise stated, crucial to political and philosophical commerce within any sincerely free marketplace of ideas.

But the Supreme Court, say the authors, quickly subverted whatever hopes the earliest generations of forward-thinking Americans may have entertained with respect to the founders’ original intent.  “[N]ineteenth-century jurists,” in fact, were the culprits “who laid the foundation for the argument that conscience necessarily meant religion (and that religion meant Christianity).”  In his 1833 Commentaries on the Constitution, for example, Justice Joseph Story wrote that the First Amendment was designed “not to countenance . . . infidelity, . . .” but rather “to exclude all rivalry among Christian sects, and to prevent any national ecclesiastical establishment . . ..”

Consistent with his Protestant faith, Story distinguished religious conduct from religious conviction, sacrificing the former to the discretion of state governments and awarding the latter to the exclusive prerogative of individuals.  More than the framers’ Enlightenment values, this non-preferential “Republican Protestantism” formed the basis of the Court’s initially stubborn reluctance to involve itself in religious issues.

As immigration fueled religious diversity, however, American Protestantism gradually and grudgingly ceded its privileged legal status.  By the end of the century, for example, local court decisions involving blasphemy prosecutions, Sunday closing laws, and church property disputes reflected decidedly less bias toward an exclusively Protestant worldview.  Somewhat ironically, as Catholics and Quakers threatened to take advantage of government funding, resentful Protestants began to clamor for the separation of church and state.

Even so, the authors point out, “it would take a revolution” on the Supreme Court “to bring about a fuller understanding of the individual rights of conscience . . . articulated by the Framers over one hundred years earlier.”  Such transformation commenced following Roosevelt’s reelection in 1936 and, given the prevailing force of Republican Protestantism, relied heavily upon certain religious organizations’ assertion of their rights to free speech and thought rather than religious liberty.  “[W]hen the organization based its arguments either on a combination of free speech and religious liberty rights, or on free speech alone, [it] was much more likely to triumph.”

The Jehovah’s Witnesses, of course, played a key role in this revolution, facilitating post-New Deal America’s penchant toward the conceptual affiliation between free speech, free thought, free conscience, and religious liberty.  By the end of the Warren and Burger eras, the Court had recognized that a tribunal’s regulation of minority religious behavior and its corresponding deference to majority actions could be seen as a patently unconstitutional “casting of judges as ‘theologians’ . . ..”  Forced to abandon Republican Protestantism’s belief-action distinction, the authors argue, the Court expanded the right to free exercise laterally to accommodate conscience or conviction, regardless of whether it was expressed in religious terms.

According to Justice Harry Blackmun, writing for the Court in 1989, “Perhaps in the early days of the Republic [the First Amendment was] understood to protect only the diversity within Christianity, but today [it is] recognized as guaranteeing religious liberty and equality to the infidel, the atheist, or the adherent of a non-Christian faith . . ..”  Implicit in Blackmun’s analysis, the authors maintain, was the Justice’s understanding that official endorsement also amounts to a violation of free exercise, “because [endorsement] communicates to non-Christians that their religions are not endorsed” and to dissenting Christians that their religion ought to be imagined and practiced in a particular way.

In the authors’ estimation, Blackmun’s expanded vision of what is protected under the Free Exercise Clause and proscribed under the Establishment Clause “represents the gradual unfolding of the true meaning and purpose of the First Amendment.”  Conversely, the approach of contemporary non-preferentialists, including the “regressive bloc” of the Supreme Court which, at the time of publication, consisted of Justices Rehnquist, Thomas, and Scalia, is marked by an ahistorical analysis and the rejection of the co-dependent nature of the First Amendment’s two religion clauses.

During recent years, of course, the “enlightened” trend has decelerated and, arguably, reversed itself.  Considering each Justice’s record of liberality with respect to free exercise and conservatism in relation to establishment, the authors conclude—not unpredictably—that William O. Douglas was the most and that Clarence Thomas has been the least progressive Supreme Court jurist in history.  Similarly, the Warren Court represented the apogee of enlightenment and the 2004 Rehnquist Court its nadir.

The Court is now inclined, say the authors, toward the abandonment of strict scrutiny in free-exercise cases and the abolishment of the Lemon test in establishment cases.  The trend, in other words, entails the continued and perhaps expanded constitutional validation of ostensibly “neutral” legislation on the one hand, and “devolution,” or the philosophical infatuation with local as opposed to federal power, on the other.  The 2004 Supreme Court, they conclude, “would gladly submit the rights to ‘life, liberty, and property’ to the popular vote—precisely what the Fifth and Fourteenth Amendments [were] designed to prevent.”

The authors’ historical argument, once again, is credible and well founded.  Clearly, the First Amendment’s primary authors anticipated both the expansion of religious liberty and a much more capacious separation of religion and government than was achievable in eighteenth-century America.  The authors’ second argument, however, that the necessity of free exercise and separation is and has always been based on every human’s “natural rights,” is seriously flawed.

Although the authors claim to conceive of natural rights in individualistic Lockean rather than religious Thomistic terms, their conclusion is no less irrational for the distinction.  “The U.S. Constitution,” they contend, “is fundamentally a ‘sacred,’ not a ‘secular’ document.”  “Freedom of conscience,” they add, “does not derive from our particular system of government; it is, rather, an aspect of ‘personality.’”

Such sentiments, if accepted as true, might console Americans who, regardless of politics or ideology, favor either a more expansive interpretation of the Free Exercise Clause or a greater separation of religion and government, and who wish to rest their hopes on the attendant claim that such separation is essential to expanded religious liberty.

But wanting a proposition to be true does not affect that proposition’s veracity, no matter how prevailing, passionate, or appreciable the desire.  So, while the authors are correct that “to be inviolate,” freedom of conscience must be held “free from the interference of the state,” both reason and history are clear that there exists no factual basis for the concept of natural rights and that freedom of conscience, in fact, can be violated by the government and, all too often, egregiously so.

Such is precisely the reason why our civil rights remain fragile, after all, and why every fresh generation of Americans must fight for their continued existence.  In a society built upon Enlightenment ideals, no claim is exempt from rational inquiry and, accordingly, no idea or opinion can ever be held sacred.  Absent a factual basis, the patronizing and pretentious concept of natural rights is “simply rhetorical nonsense,” as Jeremy Bentham once suggested—“nonsense upon stilts.”

Book Review: Francis Fukuyama, America at the Crossroads: Democracy, Power, and the Neoconservative Legacy (Yale UP 2006). 226 pp.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.

Johns Hopkins international studies guru Francis Fukuyama has jumped an embattled neoconservative ship.  In America at the Crossroads, the author proposes a more “realistic Wilsonianism,” rejecting both the isolationist tendencies of Jacksonian nationalism and the stubborn disregard of other nations’ internal affairs characteristic of Kissingeresque realism.  Like many contemporary neocons, Fukuyama defends American military power as an occasionally appropriate means to a moral end, but, like liberal internationalists, he emphasizes soft power, multilateralism, and the use of global institutions.

Nevertheless, neither preventive war, international social programs, nor the Bush Doctrine is compatible with Fukuyama’s eclectic worldview.  Consistent with the teachings of Leo Strauss, Fukuyama defines “regime” broadly to encompass not only a society’s political authority but its cultural underpinnings as well.  Even so, the author recommends that “certain political problems can be solved only through regime change.”  But such intercessions, he warns, are not easily accomplished, certainly much more difficult than the Bush administration appeared to believe before its invasion of Iraq.

Even a superficial understanding of Islam’s history and doctrine would have dissuaded the Bush administration from attempting to impose Enlightenment values upon Iraq in such brisk and brutish order.  As Benjamin Barber recently observed, no government can export democracy because no society can import civil rights.  Such institutions must evolve slowly and from deep inside a culture’s gut.

“This makes an exclusively military response to the challenge inappropriate,” Fukuyama concurs.  America’s record with regard to international democratization has been spotty at best.  Prior to the second Iraq war, Bush administration officials peddled Germany and Japan as obvious models of successful regime changes, but, as Fukuyama notes, those cultures were highly developed prior to WWII.  Instead, the professor offers American ventures in Cuba, the Philippines, the Dominican Republic, and Haiti as more relevant, noting that in none of those instances did our intervention result in timely and effective democratization.

Intercession becomes more promising, Fukuyama argues, when the democratic initiative emanates from within the target society.  Such initiative is unlikely, of course, unless the culture’s history supports it and unless the regime is already semi-authoritarian and at least somewhat tolerant of both political organization and economic liberalization.

On the one hand, the author appears to recognize the obstinacy of certain religious traditions, but, on the other, he refuses to acknowledge the degree to which Islamic culture is inherently irreconcilable with the Western values of freedom of conscience, expression and economic self-determination.

While confessing to a “large number of unknowns” concerning the “nature of the terrorist threat,” including the sources from which it draws new recruits and the parameters and geographic borders of its support, Fukuyama denies that Westerners are embroiled in what either Bernard Lewis or Samuel Huntington referred to as a clash of civilizations.  “We are not fighting Islam,” the author claims, “but a radical ideology that appeals to a distinct minority of Muslims.”

Given these admittedly numerous unknowns, and in light of the consistently surly and violent history of relations between the West and Islam, upon what facts and theory does Fukuyama base this conclusion?

“Genuine Muslim religiosity,” he contends, is and has always been a “local or national” phenomenon, and not the result of radical attempts to universalize or globalize doctrine.  Thus, for Fukuyama, religion is an insignificant part of the problem, and, because Westerners have no reason to deal with Islam as such or to impose democracy through force, all we can “hope for” is that “radical Islamists” will “eventually evolve into more responsible political parties willing to accept pluralism.”

Many have argued similarly on behalf of religious moderation or even evolution, and, of course, conscientious but intractable religionists must so argue in order to feel both rational and devout.  Nevertheless, if a religion is to survive, it must, at inevitable and critical moments, fall back upon its foundational texts.  As such, perhaps both Islam and Christianity will always be dogmatic, oppressive and universalist because, when the religious meme is threatened—either from without or from within—followers will expect their religious leaders to reveal clarity and authority, and, in turn, such leaders will feel compelled to return to the Qur’an and the Hadith, or to Deuteronomy, Ephesians, and Revelation.  Thus, others will argue, Abrahamic monotheisms will remain both dictatorial and violent and, on balance, much more harmful than beneficial.

Regardless, America at the Crossroads is an informed and thoughtful illumination of neoconservative origins, principles and internal complexity and a conscientious proposal for the future of American foreign policy in the Middle East and beyond.  Better yet, Francis Fukuyama once again effectively challenges American readers to rethink their traditional political affiliations, their democracy by proxy, and to tutor themselves regarding the leading theories of international relations, theories that, unfortunately, are seldom discussed openly among politicians or the popular media.

Book Review: Darren Staloff, Hamilton, Adams, Jefferson: The Politics of Enlightenment and the American Founding (NY: Hill and Wang 2005). 419 pp.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.

“America was forged in the crucible of the Enlightenment,” affirms Darren Staloff, professor of history at the City College of New York.  American ideals, in fact, at least as Alexander Hamilton, John Adams, and Thomas Jefferson developed them, are “inconceivable outside of an Enlightenment context.”  As much an infatuated celebration of eighteenth-century Western philosophy as it is a concise yet absorbing biographical triptych, Hamilton, Adams, Jefferson fulfills every promise one might reasonably infer from its generous title.

In his introductory chapter, Staloff wisely declines to define the Enlightenment, relaying instead brief synopses of its history, its “attitudes or cultural dispositions,” and its politics.  Even so, he appropriately observes that moderns who value or perhaps foolishly assume a secular worldview featuring individual rights, representative government, and commercial freedom are profoundly indebted to the philosophes and their movement.

The Enlightenment originated in classical traditions, of course, and its emphasis on “reasoning from observation” and a “code of personal refinement and civility” was clearly reflective of Renaissance precedents.  It was seventeenth-century Rationalism, however, from which the Enlightenment inherited the onerous struggle against “enthusiasm,” or religious fervor and proselytization.

But Rationalism “simply went too far,” according to Staloff.  Descartes overestimated the power of antiseptic reason.  Hobbes’ social contract theory terrified greedy monarchs and offended religionists and republicans alike.  Spinoza threw the Judeo-Christian baby “out with the bathwater” when he undermined the transcendent creator and, consequently, humanity’s free will and sense of supreme purpose.  In short, the author concludes, it was Rationalism’s seemingly unqualified vilification of human passion that the Enlightenment eventually rejected.

Because it “restrained religion without destroying it or replacing it with alternative truths,” Enlightenment architects chose science, not reason per se, as their remedy for superstitious excess.  As firmly rooted in his temporal milieu as he was devoted to science, Newton never plotted to annul the Bible even as he contradicted it.  Lockean empiricism preserved Christian viability while containing its exorbitance, and Hume’s “corrosive skepticism,” as brandished in his Enquiry Concerning Human Understanding, for example, demanded caution, modesty, and doubt “in all kinds of scrutiny.”

Like their predecessors, the philosophes eschewed religious dogma.  Arguably, however, Enlightenment method persevered where Rationalist doctrine failed because of its relative pragmatism and, more importantly, because of its dispassionate contempt for all ideology, religious or secular.  Similarly, the philosophes not only accepted human passions, but embraced them as “instincts that served human interest and survival.”

What the European Enlightenment lacked, quite predictably, was coherent and consistent politics.  Some luminaries defended monarchy while others pleaded for one or another form of popular rule.  Indeed, “it was not until the American revolutionary and constitutional epochs,” Staloff directs, “that this lacuna was finally filled and a distinctly enlightened practical ‘political sense’ was ultimately formulated.”

Although initially an idealistic, perhaps even a radical republican, the post-revolutionary Alexander Hamilton “fulfilled” the Enlightenment, Staloff professes, by emphasizing commercial capitalism, “corrupted” self-interest, and cultural expertise in national politics.  As most readers probably know, “freedom and commerce went hand in glove” for Hamilton, substantially consistent with Adam Smith’s theory regarding the wealth of nations.  Contrary to his Rationalist predecessors, however, a mature Hamilton entertained no illusions with respect to mankind’s intellectual perfectibility.  “[E]very man ought to be supposed a knave,” he warned his comrades at the Constitutional Convention, “and to have no other end . . . other than private interest.”

Accenting the philosophical contrasts between the New Yorker and Jefferson, a paradigmatic agrarian, Staloff notes that, although Hamilton’s projects dramatically expanded American commerce, “they also lined the pockets of financiers and speculators” who were “morally suspect to a nation of farmers.”  Even so, and perhaps contrary to popular understanding, “Hamilton was no fan of laissez-faire” economics.  Indeed, he typically pursued his economic agenda assuming the propriety of “a powerful, activist federal government.”

In Hamilton’s estimation, a superior cultural education was imperative for all federal statesmen.  Enlightened intellectuals, he averred, “lacked a distinct [self-]interest” and, as such, were the “natural arbiters of the public good.”  Neither senators, executives, nor judges were exempt from scholarly prerequisite.  Such was Hamilton’s nearly palpable disdain for intellectual mediocrity.

“[P]erhaps the least loved founding father,” Staloff supposes, the “[a]rrogant and imperious” Hamilton possessed “little of that warmth or folksiness that Americans have always embraced in their heroes.”  Consistent with his surly suspicion of the common, Hamilton became both America’s “greatest champion” of an independent federal judiciary and its most relentless critic of government designed and moved by popular will.  Nonetheless, his vision of commercial modernization and worldly realpolitik was truly both prescient and enlightened.

But Massachusetts’ John Adams eclipsed Hamilton, the author contends, by actually “transcending” the Enlightenment.  Though no less vain than his Federalist cohort, Adams was not born into an elite class.  “He lacked the polish, savoir-faire, and gentlemanly manners” of other founders, especially relative to his eventual Republican foils.

Additionally, it was Adams’ tendencies toward introspection and self-criticism, and perhaps most of all, his intellectual independence and passion for dispute that ultimately rendered him the least celebrated president from the revolutionary and constitutional eras.  Fortunately, however, popularity has not always dictated impact, at least evidently not in eighteenth-century America.

According to Staloff, Adams was “the first of a long line of Yankee gadflies, men [and women] with a strong contrarian streak” who “have kept our culture at least minimally honest.”  “Knowledge,” Adams urged, “is among the most essential Foundations of Liberty.”  Such unrelenting skepticism and insistence upon learning, writes Staloff of Adams’ early career, “bears witness to his complete immersion in the politics of Enlightenment.”

Perhaps revealing a subtle idealism that Hamilton clearly rejected and convinced that empiricism applied to the social as well as the physical sciences, Adams became a committed proponent of public education.  “[T]he preservation of the means of knowledge, among the lowest ranks,” he wrote in 1765, “is of more importance to the public, than all the property of all the rich men in the country.”  Congruously, he advocated for a free and vigilant press, demanding that it “be easy and cheap and safe for any person” to publish his thoughts.  For Adams, popular education and the birth of America were inextricable.  Indeed, he blasted Tories for attempting to undermine independence by taxing books and college documents and by displacing intellectuals from important political and legal positions.

But Adams’ confidence in popular education and elite virtue was “shaken” during the American Revolution.  Formerly quite “progressive,” in Staloff’s judgment, Adams regrouped philosophically and redirected his skepticism against the “whole notion of progress,” while remaining constant to social empiricism, basic public education, and intellectual statesmanship.  By 1790, Adams had admitted that “the laboring part of the people can never be learned,” and that the truly learned would never be completely disinterested.  Knowledge, thus, became less a prescription for mass liberty and more “a [natural] source of inequality and division.”

Surely, contemporary and subsequent events in France contributed to Adams’ disillusionment.  The Enlightenment gradually shaded into Romanticism, particularly in Paris, as Physiocrats and luminaries like Rousseau challenged civility and urbanity.  Adams “bristled” in response, perhaps too near the debate to distinguish between the Enlightenment and its creeping betrayal.  And perhaps Staloff errs when suggesting that Adams “transcended” the Enlightenment, given facts that reveal only his stubborn allegiance to it.  Regardless, Adams’ personal dedication to scientific inquiry never faltered.

Though no casual admirer of the physical sciences, for Staloff, Thomas Jefferson initiated the American Romantic transition.  Jefferson never consciously rejected the Enlightenment, of course, but like “other great proto-Romantics,” his influence began to “unravel” it.


While his Federalist contemporaries recognized nature as a terrible burden to be survived by many and overcome by few, if any, Jefferson viewed it as perfect in its “pristine innocence and inherent goodness,” even and perhaps especially in social contexts.  His ethics stressed tender-hearted compassion more than rational calculation.  Morality turned upon sentiment and “internal authenticity” rather than reason and utility.

Although Jefferson suffered the same brand of post-revolutionary disillusionment as did Hamilton and Adams, the Virginian’s experience was “crushing” by comparison.  “These injuries,” he gloomily reported to James Monroe in 1782, “will only be cured by the all-healing grave.”  No wonder, concludes Staloff, that Jefferson’s was the more “thorough and radical” philosophical transition.  Epitomized in his famous Notes on the State of Virginia, Jefferson’s “Romantic spirit was at last unfettered from the shackles of enlightened moderation and compromise.”

Staloff unequivocally characterizes Jefferson’s core principles as impractical.  Relative to Adams’ social skepticism, Jefferson’s political ideology often “seemed to float above the realm of the factual,” thus conveniently “insulat[ing itself] from empirical refutation.”

Which is not to imply that the Republican’s comprehension of human or majoritarian limitation was fundamentally incredible.  Although Jefferson often expressed popular confidence, his vision of American democracy “combined majority rule with a profound sense of civic humanism.”  Jeffersonianism, in other words, required vigorous and meaningful popular participation which, in turn, assumed significant and continuing popular education.  Clearly, the Sage of Monticello’s many attempts to reform and expand both popular and advanced secular education bear this out.

Nevertheless, as Staloff indicates, Jefferson recognized that such civic humanism was unfeasible “in any but the most local venues.”  Even then, as a proto-libertarian, he favored personal autonomy over government, though perhaps more because of an intense trust in nature than, as was the case with Hamilton and Adams, a prevailing distrust of the masses.  According to Staloff, Jefferson’s localism and libertarianism “explain[ his] initial hostility to the federal Constitution,” which he often expressed to James Madison, the document’s primary architect.  What Jefferson “failed to grasp” was America’s need for a considerable federal scheme, and what his friend failed to appreciate was that “Jefferson had embraced the principled politics of Romanticism.”

Indeed, many continue to misapprehend Jefferson’s writings and public professions.  How could he sponsor permanent revolution?  Why did he continue to support the Jacobins during their reign of terror?  Jefferson’s principles, Staloff answers, expressed his “moral sensibility” and “not a blueprint for legislation” or, for that matter, personal action.  Arguably, Jefferson’s primary legacy to America was his intellectual complexity.  At times, his “Romantic rhetoric” has been employed to rally Americans during national or international crises.  In other innumerable instances, however, it has been cynically exploited to “obscure the real sufferings and injustices in American society under patinas of glittering principles and abstract ideals.”

Perhaps America realized, at least to some extent, what Europe could only imagine.  Edward Gibbon coldly characterized the philosophe as one who “weighs, combines, doubts and decides.”  Denis Diderot portrayed her metaphorically as one who “walks through the night but is preceded by a torch.”  Benjamin Franklin described the enlightened person more pragmatically, more intimately, as a “heretic” with the “virtue of Fortitude.”

Studied en masse, the philosophies and ideals of the American founders inform us that “enthusiasm” arrives in both religious and secular forms.  These eminent lives reveal that true enlightenment, in fact, might never be “transcended,” and that effective communication and advancement can be achieved only through empirical inquiry.  As Staloff observes, the enlightened person “searches for the truth in a spirit of open rationality[,] free from dogma,” whatever, and no matter how near, its source.

Book Review: Susan Jacoby, Freethinkers: A History of American Secularism (NY: Metropolitan Books 2004). 417 pp.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.

“Talking against Religion is unchaining a Tyger,” warned Benjamin Franklin in 1751.  “The Beast let loose may worry his Deliverer.”

I found myself reflecting upon Franklin’s counsel one morning last spring when I first heard Susan Jacoby on Wisconsin Public Radio.  A former Pulitzer Prize finalist, director of the Center for Inquiry-Metro New York, and contributor to The Washington Post, The New York Times, Newsday, and Vogue, Jacoby compromised nothing for the sake of political correctness.  I grinned like a lizard, in fact, as she punished every insensible, squealing Christian rigorist in the state with a spare hour and the wherewithal to operate a touch-tone telephone.  Regrettably, Franklin’s admonition is as relevant today as it was in the eighteenth century; but I can assure everyone—Susan Jacoby isn’t scared.

In Freethinkers, Jacoby underpaints her historical portrait of American secularism with a sketch of our constitutional roots.  “It is impossible to overstate the importance of Virginia’s 1786 Act for Establishing Religious Freedom,” she writes, “for, much to the dismay of religious conservatives, it would become the template for the secularist provisions of the federal Constitution.”  As their model, the founders chose Virginia, “not the other states, with their crazy quilts of obeisance to a more restrictive religious past.”

Our Constitution was intended as a purely secular document, Jacoby surmises, “because of what it says and what it does not say.”  Article VI, section 3, of course, which was adopted in Philadelphia with little debate and no controversy, assured that our representatives and their appointed officials would be bound “by Oath or Affirmation” and that “no religious Test [would] ever be required as a Qualification to any Office or public Trust under the United States.”

And in no way, at no point, would the Constitution refer even passingly to a supernatural entity.  “The Constitution’s silence on the deity broke not only with culturally and historically distant precedent but with proximate and recent American precedents—most notably the 1781 Articles of Confederation, which acknowledged the beneficence of ‘the Great Governor of the World.’  With its refusal to invoke any form of divine sanction, even the vague deistic ‘Providence,’” Jacoby reasons, “the Constitution went even further than Virginia’s religious freedom act in separating religion from government.”

“The inefficacy of [religious] restraint on individuals is well known,” wrote James Madison to Thomas Jefferson on October 24, 1787.  “The conduct of every popular assembly, acting on oath, the strongest of religious ties, shows that individuals join without remorse in acts against which their consciences would revolt, if proposed to them separately in their closets.”  Clearly, it was the moral tenets of Enlightenment rationalism and not reactionary superstitionism that America’s revolutionary generation sought to instill in their new government.  Jacoby concurs: “Americans lived no longer in an age of faith, but in an age of faiths and an age of reason.”


The author layers her canvas sparingly, but appropriately emphasizes secular morality.  “The religiously correct version of American history has never given proper credit to the central importance of the Enlightenment concept of natural rights—or to the anticlerical abolitionists who advanced that concept before the public—in building the case against slavery.”  Indeed, but for the Bible, would slavery have ever infected American society in the first place?  “[S]ecularists are not value-free,” Jacoby writes.  “[T]heir values are simply grounded in earthly concerns rather than in anticipation of heavenly rewards or fear of infernal punishments.”

To American readers denied an honest, inclusive education with respect to their own past, Jacoby introduces a new cast of heroes from the nineteenth century—the likes of Lucretia Mott (“Truth for Authority, Not Authority for Truth”), Elizabeth Cady Stanton (“[E]very form of religion which has breathed upon this earth has degraded woman.”), and of course, the “Great Agnostic,” Robert Green Ingersoll (“Every fact has pushed superstition from the brain and a ghost from the clouds . . . and every schoolhouse is a temple.”).

But most conspicuous were Jacoby’s final strokes.  In the chapter entitled “Reason Embattled,” she brutalizes the most formidable contemporary proponents of American theocracy.  “The real underpinnings of [Antonin] Scalia’s support of the death penalty are to be found not in constitutional law but in the Justice’s religious convictions.”  Death, according to Scalia, is simply “no big deal” for Christians with faith in an afterlife.  And the venerated principle of separation of religion and government, so elemental to democracy itself, should apparently be of no concern to American citizens content to live under the law of Scalia’s god.   But some citizens, Jacoby protests, “might respect themselves enough to respect the authority of their elected officials—even without being threatened by the sword of the Lord of Hosts.”

Yet, for all his pompous, illicit evangelism, Scalia is not the principal threat to American secularism.  “It is fair to say,” writes Jacoby, “that the first six presidents of the United States did not invoke the blessings of the Deity as frequently in their entire public careers as President George W. Bush does each month. . . .  Short of erecting a cross atop the White House . . ., the current administration could hardly do more to demonstrate its commitment to pulverizing a constitutional wall that has served both religion and government well for more than two hundred years.”  The President’s faith-based initiatives, his constant, official yet furtive allusions to scripture and empty neo-Christian platitudes (exactly what is this “culture of life,” and does it ever apply to the living?) demand our unwavering political attention as much as more traditional and less subtle attacks on separation.

Like Franklin, Jacoby offers fair warning to the community of reason.  We must challenge the “unexamined assumption that religion per se is, and always must be, a benign influence on society. . . .  For secularists to mount an effective challenge to the basic premises of religious correctness, they must first stop pussyfooting around the issue of the harm that religion is capable of doing.”


And most importantly, rational citizens must educate themselves: “Nor is it enough for secularists to speak up in defense of the godless constitution; they must also defend the Enlightenment values that produced the legal structure crafted by the framers.  Important as separation of church and state is to American secularists, their case must be made on a broader plane that includes the defense of rational thought itself.”  We must “reclaim the passion and emotion from the religiously correct.  The revitalization of American secularism in the twenty-first century depends upon its ability to convey the passions of humanism as Ingersoll did in the nineteenth, to move hearts as well as to change minds.”

Free Inquiry magazine touted Freethinkers as “the freethought book of the year.  Make that the decade.  OK, the century.”  I would ask every secularist to consider a number of fine books in the same tradition, including Sidney Warren’s American Freethought, 1860-1914 (NY: Gordian Press, 1966), and editor Annie Laurie Gaylor’s Women Without Superstition: “No Gods—No Masters” (Madison, WI: Freedom From Religion Foundation, 1997).

I won’t fault Jacoby, as some might, for offering a polemical history.  Democracy, after all, is the free marketplace of arguments and ideas as well as simple facts; and Jacoby has demonstrated commendable facility with all three.  Besides, to whom should Americans look for political and social ideas, if not to those with a working knowledge of our history?

My only criticism of Jacoby’s book is that it insufficiently emphasizes the freethought organs and organizations of the late nineteenth century, America’s ‘golden age’ of reason.  Our citizens should know more about the struggles of that period, often lost but always fiercely fought, that gradually led to the enhanced (though inadequate) level of freethought we enjoy today.  The Truth Seeker (arguably, the era’s most radical freethought weekly, edited by D.M. Bennett) and The Index (published by the Free Religious Association) were the venerable precursors to contemporary atheist and humanist news services.  The National Liberal League, the American Secular Union, the Freethought Federation of America, the Infidel Association of the United States, and the New York State Freethinkers’ Association all serve as worthy examples of the ability of rational people to unite for a critical cause.

But I agree with Susan Jacoby: “It is time to revive the evocative and honorable freethinker, with its insistence that Americans think for themselves instead of relying on received opinion.”  Only democracy, freedom, and peace hang in the balance.