
Science (Indeed, the World) Needs Fewer, Not More, Icons.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.

The Sims statue and its protestors.

To the extent we are rational, we share the same identity.—Rebecca Goldstein.

September was an awkward month for Nature, perhaps the most influential and well-respected science publication on the planet.  In August, a group peacefully protested, and vandals subsequently defaced, a Central Park statue of J. Marion Sims, a 19th-century surgeon and founder of the New York Women’s Hospital often referred to as the “father of modern gynecology.”  Sims’ likeness was left with fiendish red eyes and the word “RACIST” scrawled across its back.

The quarrel stemmed from the mostly undisputed facts that, although Sims developed life-saving surgical techniques that allowed women to recover from particularly traumatic births, he also experimented on female slaves without providing anesthesia, having sought consent only from their owners. Unsurprisingly, commentators contest whether Sims’ methods were consistent with the customs and scruples of his time (Washington 2017).

Nature’s first inclination was to publish an editorial originally titled “Removing the Statues of Historical Figures Risks Whitewashing History,” arguing that we should leave such icons in place to remind passers-by of the important historical lessons they might provide (The Editors 2017). The piece also recommended the installation of additional iconography to “describe the unethical behavior and pay respect to the victims of the experimentation.”

Given recent events in the emotionally explosive and divisive world of American popular culture especially, vigorous dissent was inevitable. A flurry of indignant letters descended on Nature’s editors.  Several writers suggested that, at least in America, the primary if not sole purpose of public statuary is to honor its subjects, not to inform curious minds of their historical significance (Comment 2017).  One contributor noted that the history of Nazi Germany has been well documented in the very conspicuous absence of Nazi iconography.  Another reasoned that because written documentation always precedes statuary, removal of monuments would have “no impact on our understanding of the historical failings of those individuals.”

Other letters offered less restrained and, frankly, less disciplined commentary. One author submitted that the editorial “perpetuate[d] racist white supremacy.”  Two more branded it simply as “white privilege at its height” and as a “racist screed.”  Another found that the article supported “unethical science” and informed Nature’s minority readers that they “remain unwelcome in science because of their race.”

Vandals defaced the Sims likeness with red paint.

But more importantly for my purposes here, many writers contributed thoughts on the Sims monument itself that reveal quite plainly our human tendencies to interpret the inherent ambiguity of statues—indeed, iconography and other symbolic expressions more generally—consistent with our fears, personal agendas, or ideological mindsets. One author, for example, confided that the Sims statue bid her to “Go away, woman.  You have no authority here,” and to “Go away, woman of African descent.  You cannot have the intellect to contribute to the science of your own healthcare” (Green 2017).  Another saw Sims’ likeness as a “signal” that the “accomplishments of a white man are more important than his methods or the countless people he victimized,” and that “the unwilling subjects of that research … are unimportant and should be washed away” (Gould 2017; Comment 2017).  Yes, all of that from a motionless, voiceless sculpture.

In the end, Nature’s guests called consistently for the icon’s swift removal.  And given its and any other statue’s essential ambiguity, I agree.  Take it away, melt it down, and donate its metal to a more fruitful purpose.  But, regrettably, many writers also petitioned for additional iconography—this time to honor accomplished females in medicine and the victims of sexist and racist medical practices.  In other words, they would display more monuments of more humans, no doubt all with potentially hideous skeletons lurking in their as-yet-sealed closets, likely to be scrutinized and challenged by any conceivable number of equally fault- and agenda-ridden human interpreters to come.

In the rush to colonize others’ minds, or perhaps to cast painful blows against cross-cultural enemies, has anyone actually taken the time and effort to think this through? Both duly and thoroughly reproved, Nature’s editors quickly apologized and revised their article, including its title, to comply with reader objections (Campbell 2017; The Editors 2017).  But glaring similarities between the Sims controversy and more widely publicized events involving statues of Confederate generals, for example (at least one of which resulted in meaningless violence), have attracted the attention of the general media as well.

Police protect Charlottesville’s statue of General Lee.

Writing for The Atlantic, Ross Andersen aptly observed that “the writing of history and building of monuments are distinct acts, motivated by distinct values” (Andersen 2017).  No serious person ever suggested, he continued, that statuary “purport[s] to be an accurate depiction of its history.”  So far, so good.  At that critical point, Andersen appeared well on his way to advancing the sensible argument that inherently simplistic and ambiguous iconography can only divide our society, and perhaps even inspire (more) pointless violence.

Unfortunately, that was also the point where the author stumbled and then strayed onto a perhaps well-worn, but nevertheless unsustainable, trail. The legitimate purpose of a society’s statuary, he argued, is “an elevation of particular individuals as representative of its highest ideals,” a collective judgment as to “who should loom over us on pedestals, enshrined in metal or stone ….”  But, honestly, no credible history has ever instructed that any individual, no matter how accomplished, whether male or female, black or white, can ever represent our “highest ideals.”  And is there anything about recent American history to suggest we could ever agree on what constitutes those ideals?  And, come to think of it, how do people tend to react when others choose which monuments and symbols will “loom over” them?  Indeed, wasn’t that the problem in Charlottesville, Virginia?

White supremacists march on Charlottesville.

According to Andersen, the activists demanding removal of the Sims statue and its replacement with iconography of presumptively more deserving subjects ask only “that we absorb the hard work of contemporary historians … and use that understanding to inform our choices about who we honor.” But, as any experienced historian knows, historical facts can be, and often are, responsibly parsed and interpreted in many different ways.  And why should common citizens blindly accept one credible historian’s perspective over that of any other?  Regardless, shouldn’t we encourage the public to consult the actual history, rather than convenient, but severely underdeveloped and necessarily misleading shortcuts?

Author Dave Benner argued, instead, that we should preserve our monuments (Benner 2017). Pointing to the New Orleans statue of Franklin Roosevelt, which, to this point, remains free of public derision and vandalism, Benner reminded us of Executive Order 9066, by which FDR displaced 110,000 American citizens of Japanese ancestry into internment camps, without due process, in “one of the saddest and most tyrannical forms of executive overreach in American History.”  Should the FDR monument (indeed, the dime) be purged according to the same reasoning offered by Nature’s revised editorial and those who oppose the Sims statue?  By such a standard, would iconography depicting any of the American founders survive?

Perhaps not. But to what supposedly disastrous end?  By Benner’s lights, the removal of cultural iconography would “simply make it harder for individuals to learn from the past.”  But, again, as the many dissenters to Nature’s original editorial observed, the purpose of statuary is not to inform.  And let’s be completely candid here: nor is it to “honor” the dead and insensible subjects of such iconography, who no longer hold a stake in that or any other outcome.  Rather, the unspoken object is no less than to decree and dispense value judgments for the masses.

And some would no doubt argue the propriety of that object in the context of politics and government. But can and should science do better?  “As the statues and portraits of Sims make clear,” offers Harriet Washington, award-winning author of Medical Apartheid, “art can create beautiful lies” (Washington 2017).  “To find the truth,” she advises, “we must be willing to dig deeper and be willing to confront ugly facts.  No scientist, no thinking individual, should be content to accept pretty propaganda.”

Science’s battle is not with any particular ideological foe. It stands against all ideologies equally.  It has no interest in turning minds to any individual’s, or any coalition’s social cause because it has no agenda beyond the entire objective truth.  Science is incapable of pursuing ambiguity or any shortcut, especially where the potential for clarity, completion, and credibility persists.  And science certainly doesn’t need more icons; it needs fewer, or none.

 

A final thought on symbolic expression:

Yes, American history is saturated with political symbolism, from the flags of the colonial rebellion to the Tinker armbands and beyond.  As I wrote this column, however, the discussion of alleged “race” in America grew increasingly inane—dominated, in fact, by Donald Trump, our Clown in Chief, on one side, and mostly mute and under-studied NFL football players on the other.  The social, popular, and activist media, along with their rapacious followers, of course, seemed thoroughly enchanted by this absurd spectacle.

I take no position on this “debate,” if it can be so characterized. Indeed, comprehension of the contestants’ grievances is precluded by their irresponsible methods.  The President’s very involvement is inexplicable.  But, for me, it’s the players’ exclusively symbolic expressions that cause greater concern.  Again, not because I disagree with whatever they might be trying to say.  Rather, because their gestures are so ambiguous and amenable to any number of conceivable interpretations that, in the end, they say nothing.  Is this the future of all public discourse?

Waving or burning flags just isn’t impressive. Nor is standing, or sitting when others stand.  Nor is raising a fist or locking arms.  Because these expressions require no real investments, they amount to cheap, lazy, conveniently vague, and, thus, mostly empty gestures.  I’m old enough to know that they’ll persist, of course, and no doubt dominate the general public’s collective consciousness.  I only hope we can manage to maintain, perhaps even expand, spaces for more sober, motivated, and responsible discourse.  In any case, I’d prefer not to spend my remaining years watching those spaces torn down, especially from within.

 

References:

Andersen, R. 2017. Nature’s Disastrous ‘Whitewashing’ Editorial. Available online at https://www.theatlantic.com/science/archive/2017/09/an-unfortunate-editorial-in-nature/538998/; accessed September 27, 2017.

Benner, D. 2017. Why the Purge of Historic Monuments Is a Bad Idea. Available online at http://www.intellectualtakeout.org/23021; accessed September 27, 2017.

Campbell, P. 2017. Statues: an editorial response. Nature 549: 334.

Comment. 2017. Readers Respond to Nature’s Editorial on Historical Monuments. Available online at http://www.nature.com/news/readers-respond-to-nature-s-editorial-on-historical-monuments-1.22584; accessed September 26, 2017.

Gould, K.E. 2017. Statues: for those deserving respect. Nature 549: 160.

Green, M.H. 2017. Statues: a mother of gynaecology. Nature 549: 160.

The Editors. 2017. Science must acknowledge its past mistakes and crimes. Nature 549: 5-6.

Washington, H. 2017. Statues that perpetuate lies should not stand. Nature 549: 309.


Editing the Human Germline: Groundbreaking Science and Mind-numbing Sentiment.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.

The CRISPR Complex at work.

Should biologists use new gene-editing technology to modify or “correct” the human germline? Will our methods soon prove sufficiently safe and efficient and, if so, for what purposes?  Much-celebrated CRISPR pioneer, Jennifer Doudna, recently recalled her initial trepidations over that very prospect:

Humans had never before had a tool like CRISPR, and it had the potential to turn not only living people’s genomes but also all future genomes into a collective palimpsest upon which any bit of genetic code could be erased and overwritten depending on the whims of the generation doing the writing…. Somebody was inevitably going to use CRISPR in a human embryo … and it might well change the course of our species’ history in the long run, in ways that were impossible to foretell.

(Doudna and Sternberg 2017). And it didn’t take long.  Just one month after Doudna and others called for a moratorium on human germline editing in the clinical setting, scientists in Junjiu Huang’s lab at Sun Yat-sen University in Guangzhou, China, published a paper describing their exclusively in vitro use of CRISPR on eighty-six human embryos (Liang et al. 2015).  Huang’s goal was to edit mutated beta-globin genes that would otherwise trigger a debilitating blood disorder called beta-thalassemia.

But the outcomes were mixed, at best.  After injecting each embryo with a CRISPR complex composed of a guide RNA molecule, a gene-slicing Cas9 enzyme, a synthetic repair DNA template, and a “glow-in-the-dark” jellyfish gene that allows investigators to track their results as cells continue to divide, Huang’s team achieved a paltry five percent efficiency rate.  Some embryos displayed unintended, “off-target” editing.  In others, cells ignored the repair template and used the related delta-globin gene as a model instead.  A third group of embryos turned mosaic, containing cells with an untidy jumble of edits.  Part of the problem was that the CRISPR complex had begun cutting only after the fertilized egg had begun to divide.

By using non-viable triploid embryos containing three sets of chromosomes, instead of the usual two, Huang avoided objections that he had destroyed potential human lives.  Nevertheless, both Science and Nature rejected his manuscript based in part on ethical concerns.  Several scientific agencies also promptly reemphasized their stances against human germline modification in viable embryos, and, in the US, the Obama Administration announced its position that the human germline should not be altered at that time for clinical purposes.  Francis Collins, director of the National Institutes of Health, emphasized that the US government would not fund any experiments involving the editing of human embryos.  And finally, earlier this year, a committee of the US National Academies of Sciences and Medicine decreed that clinical use of germline editing would be allowed only when prospective parents had no other opportunities to birth healthy children.

Meanwhile, experimentation continued in China, with similarly grim results. But this past August, an international team based in the US—this time led by embryologist Shoukhrat Mitalipov at the Oregon Health and Science University in Portland—demonstrated that, under certain circumstances, genetic defects in human embryos can, in fact, be efficiently and safely repaired (Ma et al. 2017).

Embryologist Shoukhrat Mitalipov.

Mitalipov’s group attempted to correct an autosomal dominant mutation of the MYBPC3 gene—one where a single copy of the mutated gene results in disease symptoms.  Such mutations are responsible for an estimated forty percent of all genetic defects causing hypertrophic cardiomyopathy (HCM), along with ample portions of other inherited cardiomyopathies.  Afflicting one in every 500 adults, HCM cannot be cured and remains the most common cause of heart failure and sudden death among otherwise healthy young athletes.  These mutations have escaped the pressures of natural selection, unfortunately, due to the disorder’s typically late onset—that is, following reproductive maturity.

Prospective parents can, however, prevent HCM in their children during the in vitro fertilization/preimplantation genetic diagnosis (IVF/PGD) process.  Where only one parent carries a heterozygous mutation, fifty percent of the resulting embryos can be diagnosed as healthy contenders for implantation.  The remaining unhealthy fifty percent will be discarded.  As such, correction of mutated MYBPC3 alleles would not only rescue the latter group of embryos, but also improve pregnancy rates and save prospective mothers—especially older women with high aneuploidy rates and fewer viable eggs—from the risks associated with increasing numbers of IVF/PGD cycles.
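
The fifty-percent figure is simple Mendelian arithmetic for an autosomal dominant mutation.  As a minimal sketch of that arithmetic (the allele labels are mine, purely for illustration, not the studies’), the following Python snippet enumerates the cross:

    # One parent is a heterozygous carrier of the dominant MYBPC3
    # mutation ("M"); the other parent is homozygous healthy ("+").
    from itertools import product

    carrier_parent = ["M", "+"]
    healthy_parent = ["+", "+"]

    embryos = list(product(carrier_parent, healthy_parent))
    affected = [e for e in embryos if "M" in e]

    print(embryos)                       # [('M', '+'), ('M', '+'), ('+', '+'), ('+', '+')]
    print(len(affected) / len(embryos))  # 0.5: half the embryos inherit the mutation

Each of the four gamete pairings is equally likely, so on average half the embryos carry the mutation, which is why IVF/PGD discards roughly half.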

With these critical facts in mind, Mitalipov and colleagues employed a CRISPR complex generally similar to that used by Huang.  It included a guide RNA sequence, a Cas9 endonuclease, and a synthetic repair template.  In one phase of their investigation, the team fertilized fifty-four human oocytes (from twelve healthy donors) with unhealthy sperm carrying the MYBPC3 mutation (from a single donor), and injected the resulting embryos eighteen hours later with the CRISPR complex.  The result? Thirteen treated embryos became jumbled mosaics.

Mitalipov changed things up considerably, however, in the study’s second phase by delivering the complex much earlier than he and others had done in previous experiments—indeed, at the very brink of fertilization. More precisely, his colleagues injected the CRISPR components along with the mutated sperm cells into fifty-eight healthy, “wild-type” oocytes during metaphase of the second meiotic division.  Here, the results were impressive, to say the least.  Forty-two treated embryos were normalized, carrying two copies of the healthy MYBPC3 allele—a seventy-two percent rate of efficiency.  No “off-target” effects were detected, and only one embryo turned mosaic.
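
For the record, the stated efficiency is straightforward arithmetic over the treated oocytes:

    \[ \frac{42}{58} \approx 0.72 \]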

Mosaicism and Off-target Effects.

Mitalipov’s team achieved a genuine breakthrough in terms of both efficacy and safety.  Perhaps nearly as interesting—and, in fact, the study’s primary finding, according to the authors—is that, in both experimental phases, the embryos consistently ignored Mitalipov’s synthetic repair template and turned instead to the healthy maternal allele as their model.  Such is not the case when CRISPR is used to edit somatic (body) cells, for example.  Apparently, the team surmised, human embryos evolved an alternative, germline-specific DNA repair mechanism, perhaps to afford the germline special protection.

The clinical implications of this repair preference are profound and, at least arguably, very unfortunate.  First, with present methods, it now appears unlikely that scientists could engineer so-called “designer babies” endowed with trait enhancements.  Second, it seems nearly as doubtful that CRISPR can be used to repair homozygous disease mutations where both alleles are mutant.  Nevertheless, Mitalipov’s method could be applied to more than 10,000 diseases, including breast and ovarian cancers linked to BRCA gene mutations, Huntington’s, cystic fibrosis, Tay-Sachs, and even some cases of early-onset Alzheimer’s.

At least in theory.  As of this writing, Mitalipov’s results have yet to be replicated, and even he warns that, despite the new safety assurances and the remarkable targeting efficiencies furnished by his most recent work, gene-editing techniques must be “further optimized before clinical application of germline correction can be considered.” According to stem-cell biologist George Daley of Boston Children’s Hospital, Mitalipov’s experiments have proven that CRISPR is “likely to be operative,” but “still very premature” (Ledford 2017).  And while Doudna characterized the results as “one giant leap for (hu)mankind,” she also expressed discomfort with the new research’s unmistakable inclination toward clinical applications (Belluck 2017).

Indeed, within a single day of Mitalipov’s report, eleven scientific and medical organizations, including the American Society of Human Genetics, published a position statement outlining their recommendations regarding the human germline (Ormond et al. 2017).  Therein, the authors appeared to encourage not only in vitro research but public funding as well.  They advised against any contemporary gene-editing process intended to culminate in human pregnancy, but suggested that clinical applications might proceed in the future, subject to a compelling medical rationale, a sufficient evidence base, an ethical justification, and a transparent and public process to solicit input.

And of course researchers like Mitalipov will be forced to contend with those who claim that, regardless of purpose, the creation and destruction of human embryos is always ethically akin to murder (Mitalipov destroyed his embryos within days of their creation).  But others have lately expressed even less forward-thinking and, frankly, even more irrational and dangerous sentiments.

For example, a thoroughly galling article I can describe further only as “pro-disability” (in stark contrast to “pro-disabled”) was recently published, surprisingly to me, in one of the world’s most prestigious science publications (Hayden 2016).  It begins by describing a basketball game in which a nine-year-old girl, legally blind due to genetic albinism, scored not only the winning basket but, evidently—through sheer determination—all of her team’s points.  Odd, perhaps, but great!  So far.

But the story quickly turns sour with the girl’s father, who apparently had asked the child, first, whether she wished she had been born with normal sight, and, second (excruciatingly), whether she would ever help her own children achieve normal sight through genetic correction. Unsurprisingly, the nine-year-old is said to have echoed what we then learn to be her father’s heartfelt but nonetheless bizarre conviction: “Changing her disability … would have made us and her different in a way we would have regretted,” which, to him, would be “scary.”

To be fair, the article very briefly appends counsel from a man with Huntington’s, for instance, who suggests that “[a]nyone who has to actually face the reality … is not going to have a remote compunction about thinking there is any moral issue at all.”  But the narrative quickly circles back to a linguist, for example, who describes deaf parents who deny both their and their children’s disabilities and have even selected for deafness in their children through IVF/PGD, and a literary scholar who believes that disabilities have brought people closer together to create a more inclusive world (much as some claim Western terrorism has).  The author then laments the fact that, due to modern reproductive technology, fewer children are being born with Down’s syndrome.

To summarize, according to one disabilities historian, “There are some good things that come from having a genetic illness.”  Uh-huh.  In other words, disabilities are actually beneficial because they provide people with challenges to overcome—as if relatively healthy people are incapable of voluntarily and thoughtfully designing both mental and physical challenges for themselves and their kids.

I think not. Disabilities, by definition, are bad.  And, as even a minimally compassionate people, if we possess a safe and efficient technological means of preventing blindness, deafness, or any other debilitating disease in any child or in any child’s progeny, we also necessarily have an urgent ethical obligation to use it.

 

References:

Belluck, P. 2017. In breakthrough, scientists edit a dangerous mutation from genes in human embryos. Available online at https://nyti.ms/2hnZ9ey; accessed August 9, 2017.

Doudna, J.A., and S.H. Sternberg. 2017. A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution. Boston: Houghton Mifflin Harcourt.

Hayden, E.C. 2016. Tomorrow’s children. Nature 530:402-05.

Ledford, H. 2017. CRISPR fixes embryo error. Nature 548:13-14.

Liang, P., Y. Xu, X. Zhang, et al. 2015. CRISPR/Cas9-mediated gene editing in human tripronuclear zygotes. Protein and Cell 6(5):363-72.

Ma, H., N. Marti-Gutierrez, S. Park, et al. 2017. Correction of a pathogenic gene mutation in human embryos. Nature DOI:10.1038/nature23305.

Ormond, K.E., D.P. Mortlock, D.T. Scholes, et al. 2017. Human germline genome editing. The American Journal of Human Genetics 101:167-76.

Biological Race and the Problem of Human Diversity (Cover Article).

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.


Some would see any notion of “race” recede unceremoniously into the dustbin of history, taking its ignominious place alongside the likes of phlogiston theory, Ptolemaic geocentrism, or perhaps even the Iron Curtain or Spanish Inquisition.  But race endures, in one form or another, despite its obnoxious, though apparently captivating, dossier.

In 1942, anthropologist Ashley Montagu declared biological race “Man’s Most Dangerous Myth,” and, since then, most scientists have consistently agreed (Montagu 1942).  Nevertheless, to most Americans in particular, heritable race seems as obvious as the colors of their neighbors’ skins and the textures of their hair.  So too have a determined minority of researchers always found cause to dissent from the professional consensus.

Here, I recount the latest popular skirmish over the science of race and attempt to reveal a victor, if there be one.  Is biological race indeed a mere myth, as the academic majority has asked us to concede for more than seven decades?  Is it instead a scandalously inconvenient truth—something we all know exists but, for whatever reasons, prefer not to discuss in polite company?  Or is it possible that a far less familiar rendition of biological race could prove not only viable, but both scientifically and socially valuable as well?

Race Revived.

The productive questions pertain to how races came to be and the extent to which racial variation has significant consequences with respect to function in the modern world.—Vincent Sarich and Frank Miele, 2004.

I have no reason to believe that Nicholas Wade, long-time science editor and journalist, is a racist, if “racist” is to mean believing in the inherent superiority of one human race over any other.  In fact, he expressly condemns the idea.  But in the more limited and hopefully sober context of the science of race, Wade is a veritable maverick.  Indeed, his conclusions that biological human races (or subspecies, for these purposes) do exist, and conform generally to ancestral continental regions, appear remarkably more consistent with those of the general public.

In his most recent and certainly controversial book, A Troublesome Inheritance: Genes, Race and Human History, Wade immediately acknowledges that the vast majority of both anthropologists and geneticists deny the existence of biological race (Wade 2014).  Indeed, “race is a recent human invention,” according to the American Anthropological Association (AAA 2008), and a mere “social construct,” per the American Sociological Association (ASA 2003).  Among the first to decode the human genome, Craig Venter was also quick to announce during his White House visit in 2000 that “the concept of race has no genetic or scientific basis.”

But academics especially are resistant to biological race, or the idea that “human evolution is recent, copious, and regional,” Wade contends, because they fear for their careers in left-leaning political atmospheres and because they tend to be “obsessed with intelligence” and paralyzed by the “unlikely” possibility that genetics might one day demonstrate the intellectual superiority of one major race over others.

According to Wade, “social scientists often write as if they believe that culture explains everything and race [indeed, biology] explains nothing, and that all cultures are of equal value.”  But “the emerging truth,” he insists, “is more complicated.”  Although the author sees individuals as fundamentally similar, “their societies differ greatly in their structure, institutions and their achievements.”  Indeed, “contrary to the central belief of multiculturalists, Western culture has achieved far more” than others “because Europeans, probably for reasons of both evolution and history, have been able to create open and innovative societies, starkly different from the default human arrangements of tribalism or autocracy.”


Wade admits that much of his argument is speculative and has yet to be confirmed by hard, genetic evidence.  Nevertheless, he argues, “even a small shift in [genetically-based] social behavior can generate a very different kind of society,” perhaps one where trust and cooperation can extend beyond kin or the tribe—thus facilitating trade, for example—or one emphasizing punishment for nonconformity—thus advancing rule-orientation and isolationism, for instance.  “[I]t is reasonable to assume,” the author ventures, “that if traits like skin color have evolved in a population, the same may be true of its social behavior.”

But what profound environmental conditions could possibly have selected for more progressive behavioral adaptations in some but not all populations?  As the climate warmed following the Pleistocene Ice Age, Wade reminds, the agricultural revolution erupted around 10,000 years ago among settlements in the Near East and China.  Increased food production led to population explosions, which in turn spurred social stratification, wealth disparities, and more frequent warfare.  “Human social behavior,” Wade says, “had to adapt to a succession of makeovers as settled tribes developed into chiefdoms, chiefdoms into archaic states and states into empires.”

Meanwhile, other societies transformed far less dramatically.  “For lack of good soils, favorable climate, navigable rivers and population pressures,” Wade observes, “Africa south of the Sahara remained largely tribal throughout the historical period, as did Australia, Polynesia and the circumpolar regions.”

Citing economist Gregory Clark, Wade then postulates that, during the period between 1200 and 1800 CE—twenty-four generations and “plenty of time for a significant change in social behavior if the pressure of natural selection were sufficiently intense,”—the English in particular evolved a greater tendency toward “bourgeoisification” and at least four traits—nonviolence, literacy, thrift, and patience—thus enabling them to escape the so-called “Malthusian trap,” in which agrarian societies never quite learn to produce more than their expanding numbers can consume, and, finally, to lead the world into the Industrial Revolution.

In other words, according to this author, modern industrialized societies have emerged only as a result of two evolved sets of behaviors—initially, those that favor broader trust and contribute to the breakdown of tribalism, and, subsequently, those that favor discipline and delayed gratification and lead to increased productivity and wealth.  On the other hand, says Wade, Sub-Saharan Africans, for example, though well-adapted to their unique environmental circumstances, generally never evolved traits necessary to move beyond tribalism.  Only an evolutionary explanation for this disparity, he concludes, can reveal, for instance, why foreign aid to non-modern societies frequently fails and why Western institutions, including democracy and free markets, cannot be readily transferred to (or forced upon) yet pre-industrial cultures.

So how many races have evolved in Wade’s estimation?  Three major races—Caucasian, East Asian, and African—resulted from an early migration out of Africa some 50,000 years ago, followed by a division between European and Asian populations shortly thereafter.  Quoting statistical geneticist, Neil Risch, however, Wade adds Pacific Islanders and Native Americans to the list because “population genetic studies have recapitulated the classical definition of races based on continental ancestry” (Risch 2002).

To those who would object that there can be no biological race when so many thousands of people fail to fit neatly into any discrete racial category, Wade responds, “[T]o say there are no precise boundaries between races is like saying there are no square circles.”  Races, he adds, are merely “way stations” on the evolutionary road toward speciation.  Different variations of a species can arise where different populations face different selective challenges, and humans have no special exemption from this process.  However, the forces of differentiation can reverse course when, as now, races intermingle due to increased migration, travel, and intermarriage.

Race Rejected.

It is only tradition and shortsightedness that leads us to think there are multiple distinct oceans.—Guy P. Harrison, 2010.

So, if we inherit from our parents traits typically associated with race, including skin, hair, and eye color, why do most scientists insist that race is more social construct than biological reality?  Are they suffering from an acute case of political correctness, as Wade suggests, or perhaps a misplaced paternalistic desire to deceive the irresponsible and short-sighted masses for the greater good of humanity?  More ignoble things have happened, of course, even within scientific communities. But according to geneticist Daniel J. Fairbanks, the denial of biological race is all about the evidence.

In his new book, Everyone is African: How Science Explodes the Myth of Race, Fairbanks points out that, although large-scale analyses of human DNA have recently unleashed a deluge of detailed genetic information, such analyses have so far failed to reveal discrete genetic boundaries along traditional lines of racial classification (Fairbanks 2015).  “What they do reveal,” he argues, “are complex and fascinating ancestral backgrounds that mirror known historical immigration, both ancient and modern.”


In 1972, Harvard geneticist Richard Lewontin analyzed seventeen different genes among seven groups classified by geographic origin.  He famously discovered that subjects within racial groups varied more among themselves than their overall group varied from other groups, and concluded that there exists virtually no genetic or taxonomic significance to racial classifications (Lewontin 1972).  But Lewontin’s word on the subject was by no means the last. Later characterizing his conclusion as “Lewontin’s Fallacy,” for example, Cambridge geneticist A.W.F. Edwards reminded us how easy it is to predict race simply by inspecting people’s genes (Edwards 2003).

So who was right?  Both of them were, according to geneticist Lynn Jorde and anthropologist Stephen Wooding.  Summarizing several large-scale studies on the topic in 2004, they confirmed Lewontin’s finding that about 85-90% of all human genetic variation exists within continental groups, while only 10-15% lies between them (Jorde and Wooding 2004).  Even so, as Edwards had insisted, they were also able to assign all native European, East Asian, and sub-Saharan African subjects to their continent of origin using DNA alone.  In the end, however, Jorde and Wooding showed that geographically intermediate populations—South Indians, for example—did not fit neatly into commonly conceived racial categories.  “Ancestry,” they concluded, was “a more subtle and complex description” of one’s genetic makeup than “race.”
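
For readers who want the statistic behind these percentages: apportionment figures of this kind are conventionally expressed through Wright’s fixation index, a piece of standard population-genetics background that neither author pauses over here:

    \[ F_{ST} = \frac{H_T - H_S}{H_T} \]

where \(H_T\) is the expected heterozygosity of the pooled human population and \(H_S\) is the average expected heterozygosity within subpopulations.  A human \(F_{ST}\) of roughly 0.10 to 0.15 simply restates the 10-15% between-group share cited above.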

Fairbanks concurs.  Humans have been highly mobile for thousands of years, he notes.  As a result, our biological variation “is complex, overlapping, and more continuous than discrete.”  When one analyzes DNA from a geographically broad and truly representative sample, the author surmises, “the notion of discrete racial boundaries disappears.”

Nor are the genetic signatures of typically conceived racial traits always consistent between populations native to different geographic regions.  Consider skin color, for example.  We know, of course, that the first Homo sapiens inherited dark skin previously evolved in Africa to protect against sun exposure and folate degradation, which negatively affects fetal development.  Even today, the ancestral variant of the MC1R gene, conferring high skin pigmentation, is carried uniformly among native Africans.


But around 30,000 years ago, Fairbanks instructs, long after our species had first ventured out of Africa into the Caucasus region, a new variant appeared.  KITLG evolved in this population prior to the European-Asian split to reduce pigmentation and facilitate vitamin D absorption in regions of diminished sunlight.  Some 15,000 years later, however, another variant, SLC24A5, evolved by selective sweep as one group migrated westward into Europe.  Extremely rare in other native populations, this variant is carried by nearly 100% of modern native Europeans.  On the other hand, as their assorted skin tones demonstrate, African and Caribbean Americans carry either two copies of an ancestral variant, two copies of the SLC24A5 variant, or one of each.  Asians, by contrast, developed their own pigment-reducing variants—of the OCA2 gene, for example—via convergent evolution, whereby similar phenotypic traits result independently among different populations due to similar environmental pressures.

So how can biology support the traditional, or “folk,” notion of race when the genetic signatures of that notion’s most relied upon trait—that is, skin color—are so diverse among populations sharing the same or similar degree of skin pigmentation?  Fairbanks judges the idea utterly bankrupt “in light of the obvious fact that actual variation for skin color in humans does not fall into discrete classes,” but rather “ranges from intense to little pigmentation in continuously varying gradations.”

To Wade, Fairbanks offers the following reply: “Traditional racial classifications constitute an oversimplified way to represent the distribution of genetic variation among the people of the world. Mutations have been creating new DNA variants throughout human history, and the notion that a small proportion of them define human races fails to recognize the complex nature of their distribution.”


A Severe Response.

I use the term scientific racism to refer to scientists who continue to believe that race is a biological reality.—Robert Wald Sussman, 2014.

Since neither author disputes the absence of completely discrete racial categories, one could argue that part of the battle is really one over mere semantics, if not politics. Regardless, critical aspects of Wade’s analysis were quickly and sharply criticized by several well-respected researchers.

Former president of the AAA and co-drafter of its statement on race, Alan Goodman, for example, argues that Wade’s “speculations follow from misunderstandings about most everything, including the idea of race, evolution and gene action, culture and institutions, and most fundamentally, the scientific process” (Goodman 2014). Indeed, he compares Wade’s book to the most maligned texts on race ever published, including Madison Grant’s 1916 The Passing of the Great Race, Arthur Jensen’s 1969 paper proposing racial intelligence differences, and Herrnstein’s and Murray’s 1994 The Bell Curve.

But Wade’s “biggest error,” according to Goodman, “is his inability to separate the data on human variation from race.” He mistakenly assumes, in other words, “that all he sees is due to genes,” and that culture means little to nothing. A “mix of mysticism and sociobiology,” he continues, Wade’s simplistic view of human culture ignores the archeological and historical fact that cultures are “open systems” that constantly change and interact. And although biological human variation can sometimes fall into geographic patterns, Goodman emphasizes, our centuries-long attempt to force all such variation into racial categories has failed miserably.

Characterizing Wade’s analysis similarly as a “spectacular failure of logic,” population geneticist Jennifer Raff takes special issue with the author’s attempt to cluster human genetic variation into five or, really, any given number of races (Raff 2014). To do so, Wade relied in part on a 2002 study featuring a program called Structure, which is used to group people across the globe based on genetic similarities (Rosenberg 2002). And, indeed, when Rosenberg et al. asked Structure to bunch genetic data into five major groups, it produced clusters conforming to the continents.

But, as Raff observes, the program was capable of dividing the data into any number of clusters, up to twenty in this case, depending on the researchers’ pre-specified choice. When asked for six groups, for example, Structure provided an additional “major” cluster, the Kalash of northwestern Pakistan—which Wade arbitrarily, according to Raff, rejected as a racial category. In the end, she concludes, Wade seems to prefer the number five “simply because it matches his pre-conceived notions of what race should be.”
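
Raff’s complaint is easy to reproduce with any clustering method that, like Structure, takes the number of groups as an input parameter.  The Python sketch below is only an illustrative analogue (scikit-learn’s KMeans run on synthetic, smoothly clinal data), not the Structure program or the Rosenberg dataset:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Synthetic "genotypes": values drift smoothly along one geographic
    # axis, so the variation is clinal, with no discrete groups built in.
    geography = np.linspace(0, 1, 500)
    genotypes = geography[:, None] + rng.normal(0, 0.05, size=(500, 20))

    # The algorithm dutifully returns exactly as many clusters as we demand.
    for k in (2, 5, 6):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(genotypes)
        print(k, np.bincount(labels))

Whatever number we request is the number we get; the output says nothing about whether that many natural groups actually exist in the data, which is precisely the caveat Rosenberg et al. attached to their own results.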

Interestingly, when Rosenberg et al. subsequently expanded their dataset to include additional genetic markers for the same population samples, Structure simply rejected the Kalash and decided instead that one of Wade’s five human races, the Native Americans, should be split into two clusters (Rosenberg 2005). In any event, Rosenberg et al. expressly warned in their second paper that Structure’s results “should not be taken as evidence of [their] support of any particular concept of ‘biological race.’”

Structure was able to generate discrete clusters from a very limited quantity of genetic variation, adds population geneticist Jeremy Yoder, because its results reflect what his colleagues refer to as isolation-by-distance, or the fact that populations separated by sufficient geographic expanses will display genetic distinctions even if intimately connected through migration and interbreeding (Yoder 2014). In reality, however, human genetic variation is clinal, or gradual in transition between such populations. In simpler terms, people living closer together tend to be more closely related than those living farther apart.

In his review, biological anthropologist Greg Laden admits that human races might have existed in the past and could emerge at some point in the future (Laden 2014). He also concedes that “genes undoubtedly do underlie human behavior in countless ways.” Nevertheless, he argues, Wade’s “fashionable” hypothesis proposing the genetic underpinnings of racially-based behaviors remains groundless. “There is simply not an accepted list of alleles,” Laden reminds, “that account for behavioral variation.”

Chimpanzees, by contrast, can be divided into genetically-based subspecies (or races). Their genetic diversity has proven much greater than ours, and they demonstrate considerable cultural variation as well. Even so, Laden points out, scientists have so far been unable to sort cultural variation among chimps according to their subspecies. So if biologically-based races cannot explain cultural differences among chimpanzees, despite their superior genetic diversity as a species, why would anyone presume the opposite of humans?


None of which is to imply that every review of Wade has been entirely negative. Conservative journalist Anthony Daniels (a.k.a. Theodore Dalrymple), for example, praises the author lavishly as a “courageous man … who dares raise his head above the intellectual parapet” (Daniels 2014). While judging Wade’s arguments mostly unconvincing, he nevertheless defends his right to publish them: “That the concept of race has been used to justify the most hideous of crimes should no more inhibit us from examining it dispassionately … than the fact that economic egalitarianism has been used to justify crimes just as hideous …”

Similarly, political scientist and co-author of The Bell Curve, Charles Murray warned readers of the social science “orthodoxy’s” then-impending attempt to “not just refute” Wade’s analysis, “but to discredit it utterly—to make people embarrassed to be seen purchasing it in public” (Murray 2014). “It is unhelpful,” Murray predicts, “for social scientists and the media to continue to proclaim that ‘race is a social construct’” when “the problem facing us down the road is the increasing rate at which the technical literature reports new links between specific genes and specific traits.” Although “we don’t yet know what the genetically significant racial differences will turn out to be,” Murray contends, “we have to expect that they will be many.”

Perhaps; perhaps not. But race is clearly problematic from a biological perspective—at least as Wade and many before him have imagined it. Humans do not sort neatly into separate genetic categories, or into a handful of continentally-based groups. Nor have we discovered sufficient evidence to suggest that human behaviors match to known patterns of genetic diversity. Nonetheless, because no “is” ever implies an “ought,” the cultural past should never define, let alone restrain, the scientific present.

Characterizing Biological Diversity.

Instead of wasting our time “refuting” straw-man positions dredged from a distant past or from fiction, we should deal with the strongest contemporary attempts to rehabilitate race that are scientifically respectable and genetically informed.—Neven Sesardic, 2010.

To this somewhat belated point, I have avoided the task of defining “biological race,” in large measure because no single definition has achieved widespread acceptance. In any event, preeminent evolutionary biologist, Ernst Mayr, once described “geographic race” generally as “an aggregate of phenotypically similar populations of a species inhabiting a geographic subdivision of the range of that species and differing taxonomically from other populations of that species” (Mayr 2002). A “human race,” he added, “consists of the descendants of a once-isolated geographic population primarily adapted for the environmental conditions of their original home country.”

Sounds much like Wade, so far. But unlike Wade, Mayr firmly rejected any typological, essentialist, or folk approach to human race that denies profuse variability and mistakes non-biological attributes, especially those implicating personality and behavior, for racial traits. Accepting culture’s profound sway, Mayr warned that it is “generally unwise to assume that every apparent difference … has a biological cause.” Nonetheless, he concluded, recognizing human races “is only recognizing a biological fact”:

Geographic groups of humans, what biologists call races, tend to differ from each other in mean differences and sometimes even in specific single genes. But when it comes to the capacities that are required for the optimal functioning of our society, I am sure that any racial group can be matched by that of some individual in another racial group. This is what population analysis reveals.

So how might one rescue biological race from the present-day miasma of popular imparsimony and professional denialism, perhaps even to the advancement of science and benefit of society? Evolutionary biologist and professor of science philosophy, Massimo Pigliucci, thinks he has an answer.


More than a decade ago, he and colleague Jonathan Kaplan proposed that “the best way of making sense of systematic variation within the human species is likely to rely on the ecotypic conception of biological races” (Pigliucci and Kaplan 2003). Ecotypes, they specify, are “functional-ecological entities” genetically adapted to certain environments and distinguished from one another based on “many or a very few genetic differences.” Consistent with clinal variation, ecotypes are not always phylogenetically distinct, and gene flow between them is common. Thus, a single population might consist of many overlapping ecotypes.

All of which is far more descriptive of human evolution than even the otherwise agreeable notion of “ancestry,” for example. For Pigliucci and Kaplan, the question of human biological race turns not on whether there exists significant between-population variation overall, as Lewontin, for example, suggested, but rather on “whether there is [any] variation in genes associated with significant adaptive differences between populations.” As such, if we accept an ecotypic description of race, “much of the evidence used to suggest that there are no biologically significant human races is, in fact, irrelevant.”

On the other hand, as Pigliucci observed more recently, the ecotypic model implies the failure of folk race as well. First, “the same folk ‘race’ may have evolved independently several times,” as explained above in the context of skin color, “and be characterized by different genetic makeups” (Pigliucci 2013). Second, ecotypes are “only superficially different from each other because they are usually selected for only a relatively small number of traits that are advantageous in certain environments.” In other words, the popular notion of the “black race,” for example, centers on a scientifically incoherent unit—one “defined by a mishmash of small and superficial set of biological traits … and a convoluted cultural history” (Pigliucci 2014).

So, while the essentialist and folk concepts of human race can claim “no support in biology,” Pigliucci concludes, scientists “should not fall into the trap of claiming that there is no systematic variation within human populations of interest to biology.” Consider, for a moment, the context of competitive sports. While the common notion that blacks are better runners than whites is demonstrably false, some evidence does suggest that certain West Africans have a genetic edge as sprinters, and that certain East and North Africans possess an innate advantage as long-distance runners (Harrison 2010). As the ecotypic perspective predicts, the most meaningful biological human races are likely far smaller and more numerous than their baseless essentialist and folk counterparts (Pigliucci and Kaplan 2003).

So, given the concept’s exceptionally sordid history, why not abandon every notion of human race, including the ecotypic version? Indeed, we might be wise to avoid the term “race” altogether, as Pigliucci and Kaplan acknowledge. But if a pattern of genetic variation is scientifically coherent and meaningful, it will likely prove valuable as well. Further study of ecotypes “could yield insights into our recent evolution,” the authors urge, “and perhaps shed increased light onto the history of migrations and gene flow.” By contrast, both the failure to replace the folk concept of race and the continued denial of meaningful patterns of human genetic variation have “hampered research into these areas, a situation from which neither biology nor social policy surely benefit.”

References:

American Anthropological Association. 2008. Race continues to define America. http://new.aaanet.org/pdf/upload/Race-Continues-to-Define-America.pdf (last accessed November 12, 2015).

American Sociological Association. 2003. The importance of collecting data and doing social scientific research in race. http://www.asanet.org/images/press/docs/pdf/asa_race_statement.pdf (last accessed November 12, 2015).

Clark, G. 2007. A farewell to alms: a brief economic history of the world. Princeton, NJ: Princeton University Press.

Daniels, A. 2014. Genetic disorder. http://www.newcriterion.com/articleprint.cfm/Genetic-disorder-7903 (last accessed November 19, 2015).

Edwards, A.W.F. 2003. Human genetic diversity: Lewontin’s fallacy. BioEssays 25(8):798-801.

Fairbanks, D.J. 2015. Everyone is African: how science explodes the myth of race. Amherst, NY: Prometheus Books.

Goodman, A. 2014. A troublesome racial smog. http://www.counterpunch.org/2014/05/23/a-troublesome-racial-smog/print (last accessed November 17, 2015).

Harrison, G.P. 2010. Race and reality: what everyone should know about our biological diversity. Amherst, NY: Prometheus Books.

Jorde, L.B. and S.P. Wooding. 2004. Genetic variation, classification and ‘race.’ Nature Genetics 36(11 Suppl):S28-S33.

Laden, G. 2014. A troubling tome. http://www.americanscientist.org/bookshelf/id.16216,content.true,css.print/bookshelf.aspx (last accessed November 16, 2015).

Lewontin, R. 1972. The apportionment of human diversity. Evolutionary Biology 6:381-398.

Mayr, E. 2002. The biology of race and the concept of equality. Daedalus 131(1):89-94.

Montagu, A. 1942. Man’s most dangerous myth: the fallacy of race. NY: Columbia University Press.

Murray, C. 2014. Book review: ‘A Troublesome Inheritance’ by Nicholas Wade: a scientific revolution is under way—upending one of our reigning orthodoxies. http://www.wsj.com/articles/SB10001424052702303380004579521482247869874 (last accessed November 19, 2015).

Pigliucci, M. 2013. What are we to make of the concept of race? Thoughts of a philosopher-scientist. Studies in History and Philosophy of Biological and Biomedical Sciences. 44:272-277.

Pigliucci, M. 2014. On the biology of race. http://www.scientiasalon.wordpress.com/2014/05/29/on-the-biology-of-race/. (last accessed November 22, 2015).

Pigliucci, M. and J. Kaplan. 2003. On the concept of biological race and its applicability to humans. Philosophy of Science 70:1161-1172.

Raff, J. 2014. Nicholas Wade and race: building a scientific façade. http://www.violentmetaphors.com/2014/05/21/nicholas-wade-and-race-building-a-scientific-facade/ (last accessed November 16, 2015).

Risch, N., E. Burchard, E. Ziv, and H. Tang. 2002. Categorization of humans in biomedical research: genes, race and disease. Genome Biology 3(7):1-12.

Rosenberg, N., J.K. Pritchard, J.L. Weber, et al. 2002. Genetic structure of human populations. Science 298(5602):2381-2385.

Rosenberg, N., S. Mahajan, S. Ramachandran, et al. 2005. Clines, clusters, and the effect of study design on the inference of human population structure. PLOS Genetics 1(6):e70.

Sarich, V. and F. Miele. 2004. Race: the reality of human differences. Boulder, CO: Westview Press.

Sesardic, N. 2010. Race: a social deconstruction of a biological concept. Biology and Philosophy 25:143-162.

Sussman, R.W. 2014. The myth of race: the troubling persistence of an unscientific idea. Cambridge, MA: Harvard University Press.

Wade, N. 2014. A troublesome inheritance: genes, race and human history. NY: Penguin Press.

Yoder, J. 2014. How A Troublesome Inheritance gets human genetics wrong. http://www.molecularecologist.com/2014/05/troublesome-inheritance/ (last accessed November 16, 2015).

Nature, Nurture, and the Folly of “Holistic Interactionism.”

[Notable New Media]

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.

Most contemporary scientists, according to Harvard University experimental psychologist, Steven Pinker, have abandoned both the nineteenth-century belief in biology as destiny and the twentieth-century doctrine that the human mind begins as a “blank slate.”  In his new anthology, Language, Cognition, and Human Nature: Selected Articles (Oxford 2015), Pinker first reminds us of the now-defunct blank slate’s political and moral appeal:  “If nothing in the mind is innate,” he chides, “then differences among races, sexes, and classes can never be innate, making the blank slate the ultimate safeguard against racism, sexism, and class prejudice.”


Even so, certain angry ideologues continue to wallow in blank slate dogma. Gender differences in STEM professions, for example, are often attributed entirely to prejudice and hidden barriers. The mere possibility that women, on average, are less interested than men in people-free pursuits remains oddly “unspeakable,” says Pinker (but see a recent exception). The point, he clarifies, is not that we know for certain that evolution and genetics are relevant to explaining so-called “underrepresentation” in high-end science and math, but that “the mere possibility is often treated as an unmentionable taboo, rather than as a testable hypothesis.”

A similar exception to the general rule centers on parenting and the behavior of children. It may be true that parents who spank raise more violent children, and that more conversant parents produce children with better language skills. But why does “virtually everyone” conclude from such facts that the parent’s behavior causes that of the child? “The possibility that the correlations may rise from shared genes is usually not even mentioned, let alone tested,” says Pinker.

Equally untenable for the author is the now-popular academic doctrine he dubs “holistic interactionism” (HI).  Carrying a “veneer of moderation [and] conceptual sophistication,” says Pinker, HI is based on a few “unexceptional points,” including the facts that nature and nurture are not mutually exclusive and that genes cannot cause behavior directly.  But we should confront this doctrine with heightened scrutiny, according to Pinker, because “no matter how complex the interaction is, it can be understood only by identifying the components and how they interact.”  HI “can stand in the way of such an understanding,” he warns, “by dismissing any attempt to disentangle heredity and environment as uncouth.”

HI mistakenly assumes, for example, that heredity cannot constrain behavior because genes depend critically on the environment. “To begin with,” says Pinker, “it is simply not true that any gene can have any effect in some environment, with the implication that we can always design an environment to produce whatever outcome we value.” And even if some extreme “gene-reversing” environment can be imagined, it simply doesn’t follow that “the ordinary range of environments will [even] modulate that trait, [or that] the environment can explain the nature of the trait.” The mere existence of environmental mitigations, in other words, does not render the effects of genes inconsequential. To the contrary, Pinker insists, “genes specify what kinds of environmental manipulations will have what kinds of effects and with what costs.”

Although the postmodernists and social constructionists who tend to dominate humanities departments in American universities continue to tout HI as a supposedly nuanced means of comprehending the nature-nurture debate, it is in truth little more than a pseudo-intellectual “dodge,” Pinker concludes: a convenient means to “evade fundamental scientific problems because of their moral, emotional, and political baggage.”

Among intellectually honest, truly curious, and consistently rational thinkers (a small demographic indeed), Pinker’s reputation has long stood as something just short of heroic, in no small part because of his defense of politically incorrect but nonetheless scientifically viable hypotheses. What a shame that only academics of similar status (and tenure) can safely rise and demand the freedom required to mount such defenses. And what a tragedy that so few in such privileged company actually do.

 

Undeniably Nye.

[Notable New Media]

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.

In Undeniable: Evolution and the Science of Creation (St. Martin’s 2014), Bill Nye (“the Science Guy”), recently celebrated for debating creationists, has collected a few dozen frustratingly brief essays on a wide variety of scientific topics with special emphasis, of course, on evolution. Undeniable could inspire very casual readers of full-length non-fiction, if such creatures really exist. But it might disappoint certain others.

I can’t help but conclude that Nye’s primary motive was to cash in quickly on his recent popularity among science literates and left-leaning political ideologues. Which is absolutely fine; better him than Oprah’s spawn, for example. First, he includes precious few of the references that discriminating and truly curious readers not only crave but require. Second, Nye’s opinions on GMOs, for example, were apparently premature. Indeed, he changed his mind around February of this year, shortly after his visit to Monsanto’s headquarters. “GMOs are not inherently bad,” he finally concluded in an interview with HuffPost Live. “We are able to feed 7.2 billion people, which a century and a half ago you could barely feed 1 and a half billion people and [it’s] largely because of the success of modern farming.” True, but why would that fact surprise anyone, let alone Bill Nye?

In any case, Nye remains an exceptional science communicator, perhaps because of his wonderfully geeky bow-tie, or maybe because he claims to empathize with the many religiously abused deniers of evolution, including his debate opponent, Ken Ham, who still insists that the Earth is no more than 6,000 years old. Nye understands “the troubling nature of the shortness of our lives,” for instance. But human mortality can either “make you want to listen to old country western songs about how miserable life can be,” he says, “or it can fill you with joy.”

I don’t know about any of that; mortality is a pretty tough nut to crack, regardless of one’s musical tastes. But Nye is certainly correct that humans as a species (most individuals excluded, of course) have made great strides in comprehending the objective truths of our existence in just the last 150 of our 100,000 total years on planet Earth, thanks to the methods of science. “Think what lies ahead for our species,” he prescribes hopefully, “if we preserve biodiversity and raise the standard of living for everyone.” That could be great, I suppose, depending. But it may be a great deal to ask of a species whose adult members continue to think of “science” as merely a subject they studied (or not so much) in school.

Religion and Violence: A Conceptual, Evolutionary, and Data-Driven Approach (Cover Article).

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.

For a recent broadcast of Real Time with Bill Maher, the impish host arranged a brief “debate” between neuroscientist and popular religion critic Sam Harris and movie actor Ben Affleck.(1) The timely topic for consideration, of course, was Muslim violence. The exchange warmed up quickly. Harris pronounced Islam “the mother-lode of bad ideas” and Affleck scorned his opponent’s attitude as “gross” and “racist.” Sound-bites duly served, the discussion ended almost as soon as it began.

But the Harris-Affleck affair wasn’t a complete waste of electricity. If nothing else, it exposed a gaping intellectual void in the dialogue over the relationship between religion and hostility. Unfortunately, this debate has long been dominated by extreme or undisciplined claims on both sides. Some suggest, for example, that all organized violence is religiously inspired at some level, while others insist that all religion is entirely benevolent when practiced “correctly.” These arguments are plainly meritless and compel no response.

Nor can I credit the proposition that religion is often or ever the sole cause of violence.  Organized aggression—whether war, Crusade, Inquisition, lesser jihad, slavery, or terrorism, for instance—typically derives in some measure from greed or political machination.  Similarly, individual violence—honor killing, suicide bombing, genital mutilation, and faith healing, to name a few—usually results from jealousy, bigotry, ideology, or psychopathology in addition to religion.

Some social scientists have argued that religious belligerence ensues from simple prejudice, defined as judgment in the absence of accurate information.  Here, the customary prescription includes education and exposure to a broader diversity of religious tradition.  But as Rodney Stark, co-director at Baylor University’s Institute for Studies of Religion, recently observed, “it is mostly true beliefs about one another’s religion that separates the major faiths.”(2)  Muslims deny Christ’s divinity, for example, and Christians reject Muhammad’s claim as successor to Moses and Jesus.  As such, Stark reasons, education is unnecessary and “increased contact might well result in increased hostility.”


Religion Misunderstood?

More interesting, on the other hand, are a collection of perspectives that both diminish and subordinate the role of religion in violent contexts to that of mere pretense or veneer.  In other words, these writers contend that religion is seldom, if ever, the original or primary cause of aggression.  Rather, they suggest, the sacred serves only as an efficient means of either motivating or justifying what should otherwise be recognized as purely secular violence.

Such is the latest appraisal of Karen Armstrong, ex-Catholic nun and easily the twenty-first century’s most prolific popular historian of religion.  In rapid response to Harris’s televised vilification of Islam, Armstrong enlisted the popular press.  During an inexplicable interview with Salon, she echoed Affleck’s hyperbole, equating Harris’s criticism of Islam to Nazi anti-Semitism.(3)  Such comparisons are absurd, of course, because condemnation of an idea is categorically different from denigration of an entire population, or any member thereof.

But more to the point, Armstrong argued that the very idea of “religious violence” is flawed for two reasons.  First, ancient religion was inseparable from the state and, as such, no aspect of pre-modern life—including organized violence—could have been divided from either the state or religion.  Second, she continued, “all our motivation is always mixed.”  Thus, modern suicide bombing and Muslim terrorism, for example, are more personal and political, according to Armstrong, than religious.

The point was developed further in Fields of Blood, Armstrong’s new history of religious violence:

Until the modern period, religion permeated all aspects of life, including politics and warfare … because people wanted to endow everything with significance. Every state ideology was religious … [and thus every] successful empire has claimed that it had a divine mission; that its enemies were evil …. And because these states and empires were all created and maintained by force, religion has been [wrongly] implicated in their violence.(4)

To the contrary, says the author, religion has consistently stood against aggression.  The Priestly authors of the Hebrew Bible, for instance, believed that warriors were contaminated by violence, “even if the campaign had been endorsed by God.”  Similarly, the medieval Peace and Truce of God graciously “outlawed violence from Wednesday to Sunday.”  And in the past, Sunni Muslims were “loath to call their coreligionists ‘apostates,’ because they believed that God alone knew … a person’s heart.”

So both the ancient and modern problems, Armstrong contends, are not in religion per se, “but in the violence embedded in our human nature and the nature of the state.”  Thus, the “xenophobic theology of the Deuteronomists developed when the Kingdom of Judah faced political annihilation,” and the Muslim practices of al-jihad al-asghar and takfir (the process of declaring someone an apostate or unbeliever) were resuscitated “largely as a result of political tension arising from Western imperialism (associated with Christianity) and the Palestinian problem.”

Some of Armstrong’s claims are no doubt true, but far less relevant than she apparently imagines.  For example, that religion was conjoined with the state did not render it ineffectual in terms of bellicosity—perhaps quite the opposite, as we will soon see.  In other cases, the author’s claims are logically flawed.  For instance, an older version of a tradition is not more “authentic” than its successors simply by virtue of its age.  Also, that violence results from manifold causes does not negate or even diminish the accountability of any contributing influence, including religion.

Ultimately, Armstrong misrepresents the issue entirely by setting up her true intellectual adversaries as conveniently feeble straw men.  “It is simply not true,” she postures, “that ‘religion’ is always aggressive.”  Agreed, but no serious person has ever made that accusation.  If the author’s primary argument is that every (or any) religion isn’t always violent, I can’t help but conclude she wasted a great deal of time and energy supporting it.

Nevertheless, Armstrong’s most recent commentary reminds us that religion generally, and all major religious traditions collectively, are a well-mixed bag.  Indeed, both Buddhism and Jainism were at least founded on the principle of ahimsa, or non-violence.  And, yes, the sacred regularly intertwines with politics and government, sometimes to a degree rendering it indistinguishable from the state itself.  Finally, hostility in the name of religion, whether perpetrated by a state, group, or individual, is frequently motivated by a host of factors in addition to faith.  However, that religion is so often employed as a pretense or veneer to inspire people to violence only tends to confirm its hazardous nature.

A More Methodical Approach.

To more astutely characterize the relationship between religion and violence, and to distinguish between differentially aggressive traditions, we need to apply a more disciplined and less biased method. Cultural anthropologist David Eller proposes a comprehensive model of violence consisting of five contributing dimensions or conditions that, together, predict the source’s propensity to expand both the scope and scale of hostility.(5) These dimensions include group integration, identity, institutions, interests, and ideology.

Eller applies his model to religion as follows: First, religion is clearly a group venture featuring “exclusionary membership,”  “collective ideas,” and “the leadership principle, with attendant expectations of conformity if not strict obedience”—often to superhuman authorities deserving of special deference.  Second, sacred traditions offer both personal and collective identities to their adherents that stimulate moods, motivations, and “most critically, actions.”

Next, most faiths provide institutions, perhaps involving creeds, codes of conduct, rituals, and hierarchical offices which at some point, according to Eller, can render the religion indistinguishable from government.  Fourth, all religions aspire to fulfill certain interests.  Most crucially, they seek to preserve and perpetuate the group along with its doctrines and behavioral norms.  The attainment of ultimate good or evil (heaven or hell, for example), the discouragement or punishment of “dissent or deviance,” proselytization and conversion, and opposition to non-believers might be included as well.

Finally, “religion may be the ultimate ideology,” the author avers, “since its framework is so totally external (i.e., supernaturally ordained or given), its rules and standards so obligatory, its bonds so unbreakable, and its legitimation so absolute.”  For Eller, the “supernatural premise” is critical:

This provides the most effective possible legitimation for what we are ordered or ordained to do: it makes the group, its identity, its institutions, its interests, and its particular ideology good and right … by definition. Therefore, if it is in the identity or the institutions or the interests or the ideology of a religion to be violent, that too is good and right, even righteous.

Arguably, the author surmises, “no other social force observed in history can meet those conditions as well as religion.”  And when a given tradition satisfies multiple conditions, “violence becomes not only likely but comparatively minor in the light of greater religious truths.”

Confronting the question at hand, then, and with Armstrong’s historical observations and Eller’s generalized model of violence in mind, I propose a somewhat familiar, though perhaps distinctively limited two-part hypothesis describing potential relationships between religion and aggression.

First, I do not contend that religion is ever the sole, original, or even primary cause of bellicosity.  Such might be the case in any given instance, but for the purpose of determining generally whether faith plays a meaningful role in violence, we need only ask whether the religion is a sine qua non (without which not), or “cause-in-fact,” of the conflict.  Second, although all religions can and often do stimulate a variety of both positive and negative behaviors, clearly not all faiths are identical in their inherent inclination toward hostility.  Indeed, there should be little question that the traditions of Judaism, Christianity, and Islam have all satisfied each of Eller’s conditions with exceptional profusion.  Accordingly, I propose that the Abrahamic monotheisms are either uniquely adapted to the task or otherwise especially capable of inspiring violence from both their followers and non-followers.

Causation, Briefly.

Determining whether a violent act would have occurred absent religious belief can be difficult, to say the least.  Even so, it is insufficient to simply note, as some critics of religion often do, that the Bible prescribes death for a variety of objectively mundane offenses, including adultery (Leviticus 20:10) and taking the Lord’s name in vain (Leviticus 24:16).  And to merely remind us, for example, that Deuteronomy 13:7-11 commands the devoted to stone to death all who attempt to “divert you from Yahweh your God,” or that Qur’an 9:73 instructs prophets of Islam to “make war” on unbelievers, provides precious little evidence upon which to base an indictment of religious conviction.

Sam Harris’s vague declaration, “As man believes, so will he act,” seems entirely plausible, of course, but is also highly presumptive given the fact that humans are known to frequently hold two or more conflicting beliefs simultaneously.(6)  Nor can we casually assume that every suicide bomber or terrorist has taken inspiration from holy authority—even if he or she is a religious extremist.

On the other hand, there is substantial merit in Harris’s criticism of those faithful who, regardless of the circumstances, “tend to argue that it is not faith itself but man’s baser nature that inspires such violence.”  Again, there can be more than one cause-in-fact for any outcome, especially in the psychologically knotty context of human aggression.  Further, when an aggressor confesses religious inspiration, we should accept him at his word.

So when we are made aware, for example, that one of Francisco Pizarro’s companions, whose fellow soldiers brutalized the Peruvian town of Cajamarca in 1532, had written back to the Holy Roman Emperor Charles V (a.k.a. King Charles I of Spain), recounting that “for the glory of God … they have conquered and brought to our holy Catholic Faith so vast a number of heathens, aided by His holy guidance,” we should concede the rather evident possibility that the Spaniards slaughtered or forcibly converted these natives at least in part because of their religion.(7)

Monotheism Conceptually.

Eller denies that all religion is “inherently” violent.  Nonetheless, he recognizes monotheism’s tendency toward a dualistic, good versus evil, attitude that not only “builds conflict into the very fabric of the cosmic system” by crafting two “irrevocably antagonistic” domains “with the ever-present potential for actual conflict and violence,” but also “breeds and demands a fervor of belief that makes persecution seem necessary and valuable.”

Stark agrees.  Committed to a “doctrine of exclusive religious truth,” he writes, particularistic traditions “always contain the potential for dangerous conflicts because theological disagreements seem inevitable.”  Innovative heresy naturally arises from the religious person’s desire to comprehend scripture thought to be inspired by the all-powerful and “one true god.”  As such, Stark finds, “the decisive factor governing religious hatred and conflict is whether, and to what degree, religious disagreement—pluralism, if you will—is tolerated.”(8)

Indeed, many modern-era writers before me have distinguished monotheism as an exceptionally belligerent force. Sigmund Freud, for example, argued in 1939 that “religious intolerance was inevitably born with the belief in one God.”(9) More recently, Jungian psychologist James Hillman concurred: “Because a monotheistic psychology must be dedicated to unity, its psychopathology is intolerance of difference.”(10) Even Karen Armstrong agreed when writing in her late fifties. Of the faiths of Abraham, she reflected, “all three have developed a pattern of holy war and violence that is remarkably similar and which seems to surface from some deep compulsion inherent in this tradition of monotheism, the worship of only one God.”(11)

Author Jonathan Kirsch, however, addressed the issue directly in 2004, comparing the relative bellicosity of polytheistic and monotheistic traditions.  Noting the early dominance of the former over the latter, Kirsch described their most profound dissimilarity:

[F]atefully, monotheism turned out to inspire a ferocity and even a fanaticism that are mostly absent from polytheism. At the heart of polytheism is an open-minded and easygoing approach to religious belief and practice, a willingness to entertain the idea that there are many gods and many ways to worship them. At the heart of monotheism, by contrast, is the sure conviction that only a single god exists, a tendency to regard one’s own rituals and practices as the only proper way to worship the one true god.(12)

Former professor of religion Edward Meltzer adds that for the monotheist, “all divine volition must have one source, and this entails the attribution of violent and vengeful actions to one and the same deity and makes them an indelible part of the divine persona.” Meanwhile, polytheists “have the flexibility of compartmentalizing the divine” and are free to “place responsibility for … repugnant actions on certain deities, and thus to marginalize them.”(13)

For Kirsch, the Biblical tale of the golden calf reveals an exceptional belligerence in the faiths of Abraham. After convincing a pitiless and indiscriminate Yahweh not to obliterate every Israelite for worshiping the false idol, Moses nonetheless organizes a “death squad” to murder the 3,000 men and women (to “slay brother, neighbor, and kin,” according to Exodus 32:27) who actually betrayed their strangely jealous god.

In the Pentateuch and elsewhere, Kirsch elaborates, “the Bible can be read as a bitter song of despair as sung by the disappointed prophets of Yahweh who tried but failed to call their fellow Israelites to worship of the True God.”  “Fatefully,” the author continues, the prophets—like their wrathful deity—“are roused to a fierce, relentless and punishing anger toward any man or woman who they find to be insufficiently faithful.”

This ultimate and non-negotiable “exclusivism” of worship and belief, Kirsch concludes, comprises the “core value of monotheism.”  And “the most militant monotheists—Jews, Christians and Muslims alike—embrace the belief that God demands the blood of the nonbeliever” because the foulest of sins is not lust, greed, rape, or even murder, but “rather the offering of worship to gods and goddesses other than the True God.”

Indeed, the historical plight of these faiths’ Holy City seems to bear credible testimony to Kirsch’s rendering.  As Biblical archeologist Eric Cline observed a decade ago, Jerusalem has suffered 118 separate conflicts in the past four millennia.  It has been “completely destroyed twice, besieged twenty-three times, attacked an additional fifty-two times, and captured and recaptured forty-four times.”  The city has endured twenty revolts and “at least five separate periods of violent terrorist attacks during the past century.”  Ironically, the “Holy Sanctuary” has changed hands peacefully only twice during the last four thousand years.(14)

For anthropologist Hector Avalos, Jerusalem figures prominently in this discussion as a religiously defined “scarce resource.” Of course many social scientists have attributed hostility to competition over limited resources. Avalos, however, argues that the Abrahamic faiths have created from whole cloth four categories of scarce resource that render these faiths especially prone to recurrent and often shocking acts of violence.(15)

Sacred spaces and divinely inspired or otherwise authoritative scriptures comprise the author’s first and second categories.  Such spaces and scriptures are scarce because only certain people will ever receive access to or be ordained with the power to control or interpret them.  Group privilege and salvation constitute Avalos’ third and fourth categories, neither of which will be conferred on a person, consistent with religious tradition, except under extraordinary circumstances.  Obviously, all such resources are related and, in many ways, interdependent.

To emphasize the point, Regina Schwartz, director of the Chicago Institute for Religion, Ethics, and Violence, employs the Biblical story of Cain and Abel.  In the book of Genesis, the first brothers offer dissimilar sacrifices to God, who favors Abel’s offering, but not Cain’s.  And so the gifting is transformed into a competition for God’s blessing, apparently a commodity in very limited supply.  Denied God’s approval—and now God’s preference—Cain murders Abel in a jealous rage.  Here, Schwartz finds, “monotheism is depicted as endorsing exclusion and intolerance,” and the scarce resource of “divine favor” as “inspiring deadly rivalries.”(16)

In the religious milieu, Avalos argues, scarcity is markedly more tragic and immoral because the alleged existence of these resources is ultimately unverifiable and, according to empirical standards, not scarce at all. Even so, for religionists the stakes are not only real, but as high as one could possibly imagine. Control over such resources, after all, determines everlasting bliss or torment for both one’s self and all others. Assuming belief, at least in the context of scarce resource theory, what’s not to fight, kill, or even die for?

The Evolution of Monotheism.

The God of Abraham was created not only in the image of man, says professor of psychiatry Hector Garcia, but far more revealingly in the images of alpha-male humans and their non-human primate forebears. It is no accident (and certainly no indication of credibility), Garcia continues, that the majority of all religionists worship a god who is “fearsome and male,” who “demands reckoning” and “rains fury upon His enemies and slaughters the unfaithful,” and who is portrayed in the holy texts as “policing the sex lives of His subordinates and obsessing over sexual infidelity.”(17)

No more an accident, that is, than the evolutionary process of natural selection and differential reproduction.  Why would an eternal, non-material, and all-powerful divinity like Yahweh, Allah, or Christ, Garcia asks, preoccupy himself with “what are ultimately very human, and very apelike” concerns?  That such a god would need to assert and maintain dominance by threat or physical aggression, for example, or to use violence “to obtain evolutionary rewards such as food, territory, and sex,” seems unfathomable.

Until, that is, one comes to recognize the Abrahamic gods as the highest-ranking alpha-male apes of all time.  In that light, these divinities “reflect the essential concerns of our primate evolutionary past—namely, securing and maintaining power, and using that power to exercise control over material and reproductive resources.”  In other words, to help them cope during a particularly brutal era, the male authors of the Abrahamic texts fashioned a god “intuitive to their evolved psychology,” and, as history demonstrates, “with devastating consequences.”

Rules of reciprocity govern the social lives of non-human primates (which scientists routinely study as surrogates for the ancestors of modern humans).  When fights break out among chimpanzees, for instance, those who have previously received help from the victim are much more likely than others to answer his calls.  And apes that are called but fail to respond are far more likely to be ignored or even attacked rather than helped if and when they plead for assistance during future altercations.  Dominant males also rely on alliances to maintain rank and will punish subordinates that so much as groom or share food with their rivals.  In fact, many researchers calculate that the most common intra-society cause of ape aggression is the perceived infraction of social rules—many of which administer reciprocity and maintain alliances.

Like their primate ancestors, men have long sought alliances with their dominant alpha-gods. Extreme examples abound in our sacred texts. In Genesis 22:1-19, Abraham’s willingness to sacrifice Isaac, his own son, demonstrates his unflinching submissiveness to God, who “reciprocates in decidedly evolutionary terms,” according to Garcia, by offering Abraham and his descendants the ultimate ally in war. Similarly, in Judges 11:30-40, Jephthah sacrifices his daughter as a “burnt offering” to Yahweh for help in battle against the Ammonites.

But gods have rivals too; and strangely—except from an evolutionary perspective—so do omnipotent gods. Created by dominant men, these divinities are expressly jealous. And like their primate forebears, they build and enforce alliances with their followers against all divine rivals. As Exodus 22:20 warns, “He who sacrifices to any god, except to the LORD only, he shall be utterly destroyed.” But as an earthly extension of loyalty, God requires action as well. Muslims, for example, are expected to “fight those of the unbelievers who are near to you and let them find in you hardness.” (Sura 9:123).

Thus, monotheism not only establishes in- and out-groups with evolutionary efficiency, it also intensifies and legitimizes them. The founding texts are capable of removing all compassion from the equation (“thine eye shall have no pity on them” [Deut. 7:16]), thus leaving all manner of brutality permissible (“strike off their heads and strike off every fingertip of them” [Sura 8:12]). The first Crusade offers just one bloody case in point. Accounts of the Christian attack on Jerusalem in 1099 document the slaughter of nearly 70,000 Muslims. The faithful reportedly burned the Jews, raped the women, and dashed their babies’ wailing heads against posts. As a campaign waged against a religiously defined “other,” this assault was considered unequivocally righteous.

As a second, more sexually oriented illustration of the alpha-God parable, Garcia offers Catholic Spain’s late sixteenth- and early seventeenth-century conquest of the Pueblo Indians in New Mexico. Here, the incursion didn’t end with the violent acquisition of territory. In striking resemblance to the behaviors of dominant male non-human primates, Christian occupiers emasculated their native male rivals, cloistered their women, and appropriated their mating opportunities.

The Spaniards began, of course, by claiming the natives’ territory in the name of Christ and God. They destroyed their prisoners’ religious buildings and icons and, as many male animals do, marked their newly pilfered grounds. Catholic iconography was erected while the most powerful medicine men were persecuted and killed. Conquistador and governor of the New Mexico province Juan de Oñate neutralized all capable men over the age of twenty-five by hacking away one of their feet.(18)

Meanwhile, the Franciscan friars were tasked with their captives’ spiritual conquest.  To install themselves as earthly dominant males, the friars undermined the existing male rank structure through public humiliation.  Native sons were forced to watch helplessly as the Franciscans literally seized, twisted, and in some cases tore away their fathers’ penises and testicles, rendering them both socially submissive and sexually impotent.  “Indian men were to sexually acquiesce to Christ, the dominant male archetype,” says Garcia, “and the Franciscans exercised extreme brutality to accomplish such subservience, to include attacking genitalia in the style of male apes and monkeys.”

The friars hoarded the native women in cloisters, thus acquiring exclusive sexual access—which was sometimes but not always voluntary. Inquisitorial court logs documented numerous incidents of violence that were seldom if ever prosecuted. One example involved Fray Nicolas Hidalgo of the Taos Pueblo, who fathered a native woman’s child after strangling her husband and violating her. Another friar, Luis Martinez, was accused of raping a native girl, cutting her throat, and burying her body under his cell. In these brutal but, to primatologists, eerily familiar cases, Garcia writes, “we can easily spy male evolutionary paradigms grinding their way across the Conquista—the sexual domination of men, the sexual acquisition of females, and differential reproduction among despotic men—all strongly within a religious context.”

But the most unnerving evolutionary strategy among male animals, especially apes and monkeys, is infanticide. Typically only males attempt it, and often after toppling other males from power. The reproductive advantage is unmistakable. Killing another male’s offspring eliminates the killer’s (and his male progeny’s) future competition for females. In many species, the practice also sends the offended mother immediately into estrus, providing the killer with additional reproductive access. Perhaps counterintuitively, the mothers also have much to gain by mating with their infants’ slayers, because infanticidal males are genetically more likely to produce infanticidal, and thus more evolutionarily fit, offspring.

Unfortunately, this disturbing pattern is replicated in modern humans. As Garcia notes, the number of child homicides committed by stepfathers and boyfriends is substantially higher—in some instances, up to one hundred times higher—than those committed by biological fathers. And we know that genetics are involved in this pattern because it occurs across cultures and geographic regions, including the United States, Canada, and Great Britain.

Perhaps unsurprisingly at this point, the evolutionary strategy of infanticide is also reflected in religion. In the Bible, for example, God orders his followers to “kill every male among the little ones” along with “every woman who has known man lying with him.” (Numbers 31:17-18) The virgins, of course, are to be enslaved for sexual amusement. Also, in his prophecy against Babylon, God declares that the doomed city’s “infants will be dashed to pieces” as their parents look on. (Isaiah 13:16) This time, the hapless infants’ mothers will be “violated” as well.

It is no mere coincidence, Garcia argues, that mostly men have claimed to know what God wants.  Dominant human males have inherited their most basic desires from our primate ancestors.  Interestingly, their omnipotent and immortal God is frequently portrayed as possessing identical earthly cravings.  He demands territory and access to women, for example.  And from an objective perspective, this God’s desires serve only to justify the ambitions of the most powerful men.

As natural history would predict, human males have relentlessly pursued—and continue to pursue—the monopolization of territorial and sexual resources through “fear, submission, and unquestioning obeisance.”  The alpha-God expects and accepts no less.  Most regrettably, however, “men have claimed this dominant male god’s backing while perpetrating unspeakable cruelties—including rape, homicide, infanticide, and even genocide.”

Modern Islam.

Sam Harris believes we are at war with Islam.  “It is not merely that we are at war with an otherwise peaceful religion that has been ‘hijacked’ by extremists,” he argues.  “We are at war with precisely the vision of life that is prescribed to all Muslims in the Koran, and further elaborated in the literature of the hadith.”  “A future in which Islam and the West do not stand on the brink of mutual annihilation,” Harris portends, “is a future in which most Muslims have learned to ignore most of their canon, just as most Christians have learned to do.”(19)

Incendiary rhetoric aside, and given what we know about monotheism generally, is Harris naïve to emphasize Islamic violence? After all, Western history is saturated with exclusively Christian bloodshed. Pope Innocent III’s thirteenth-century crusade against the French Cathars, for example, may have ended a million lives. The French Religious Wars of the sixteenth century between Catholics and Protestant Huguenots left around three million slain, and the seventeenth-century Thirty Years’ War waged by French and Spanish Catholics against Protestant Germans and Scandinavians annihilated perhaps 7.5 million.

Islamic scholar and apostate Ibn Warraq doesn’t think so. Westerners tend to mistakenly differentiate between Islam and “Islamic fundamentalism,” he explains. The two are actually one and the same, he says, because Islamic cultures continue to receive their Qur’an and hadith literally. Such societies will remain hostile to democratic ideals, Warraq advises, until they permit a “rigorous self-criticism that eschews comforting delusions of a … Golden Age of total Muslim victory in all spheres; the separation of religion and state; and secularism.”(20)

Likely entailed in this hypothetical transformation would be a religious schism whose magnitude would resemble the Christian Reformation in its tendency to wrest scriptural control and interpretation from the clutch of religious and political elites and place them in the hands of commoners. Only then can a meaningful Enlightenment toward secularism follow. And as author Lee Harris has opined, “with the advent of universal secular education, undertaken by the state, the goal was to create whole populations that refrained from solving their conflicts through an appeal to violence.”(21)

In the contemporary West, Rodney Stark concurs, “religious wars seldom involve bloodshed, being primarily conducted in the courts and legislative bodies.”(22)  In the United States, for example, anti-abortion terrorism might be the only exception.  But such is clearly not the case in many Muslim nations, where religious battles continue and are now “mainly fought by civilian volunteers.”  In fact, data recently collected by Stark appears to support Sam Harris’s critique rather robustly.

Consulting a variety of worldwide sources, Stark assembled a list of all religious atrocities that occurred during 2012.(23)  In order to qualify, each attack had to be religiously motivated and result in at least one fatality.  Attacks committed by government forces were excluded.  In the process, Stark’s team “became deeply concerned that nearly all of the cases we were finding involved Muslim attackers, and the rest were Buddhists.”  In the end, they discovered only three Christian assaults—all “reprisals for Muslim attacks on Christians.”

The reports yielded 808 religiously motivated incidents. A total of 5,026 persons died: 3,774 Muslims, 1,045 Christians, 110 Buddhists, 23 Jews, 21 Hindus, and 53 seculars. Most were killed with explosives or firearms but, disturbingly, twenty-four percent died from beatings or torture perpetrated not by deranged individuals, but rather by “organized groups.” In fact, Stark details, many reports “tell of gouged out eyes, of tongues torn out and testicles crushed, of rapes and beatings, all done prior to victims being burned to death, stoned, or slowly cut to pieces.”

Table 1:  Incidents of Religious Atrocities by Nation (2012).

Nation | Number of Incidents
Pakistan | 267
Iraq | 119
Nigeria | 106
Thailand | 52
Syria | 44
Afghanistan | 27
Yemen | 22
India | 20
Lebanon | 20
Egypt | 15
Somalia | 14
Myanmar | 11
Kenya | 9
Russia | 7
Sudan | 7
Iran | 6
Israel | 6
Mali | 6
Indonesia | 5
Philippines | 5
China | 4
France | 4
Libya | 4
Palestinian Territories | 4
Algeria | 2
Bangladesh | 2
Belgium | 2
Germany | 2
Jordan | 2
Macedonia | 2
Saudi Arabia | 2
Bahrain | 1
Bulgaria | 1
Kosovo | 1
South Africa | 1
Sri Lanka | 1
Sweden | 1
Tajikistan | 1
Tanzania | 1
Turkey | 1
Uganda | 1
TOTAL | 808

As Table 1 shows, present-day religious terrorism almost always occurs within Islam. Seventy percent of the atrocities took place in Muslim countries, and seventy-five percent of the victims were Muslims slaughtered by other Muslims, often the result of the Sunni majority killing Shi’ah (who constitute the majority only in Iran and Iraq). Pakistan (80 percent Sunni) ranked first in 2012, likely due to its chronically weak central government and the contributions of al-Qaeda and the Taliban.
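Stark’s headline percentages can be recomputed from the victim counts reported above. A minimal sketch in Python (the figures are transcribed by hand from Stark’s totals; the rounding is mine):

```python
# Victims of 2012 religious atrocities, as reported in Stark's data (above).
victims = {
    "Muslims": 3774, "Christians": 1045, "Buddhists": 110,
    "Jews": 23, "Hindus": 21, "Seculars": 53,
}

total = sum(victims.values())  # 5,026, matching the reported total
for group, count in victims.items():
    print(f"{group}: {count / total:.0%} of all victims")
# Muslims come out at roughly 75%, as the text states.
```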

Christians were the targets in twenty percent (159) of the documented incidents. Eleven percent of those attacks (17) occurred in Pakistan, but nearly half (79) took place in Nigeria, often at the hands of Muslim members of Boko Haram, a name often translated from the Hausa language as “Western education is forbidden.” Formally known as the Congregation and People of Tradition for Proselytism and Jihad, Boko Haram was founded in 2002 to impose Muslim rule on 170 million Nigerians, nearly half of whom are Christian. Some estimate that Boko Haram jihadists—funded in part by Saudi Arabia—have slaughtered more than 10,000 people in the last decade.

Such attacks are indisputably perpetrated by few among many Muslims.  But whether the Muslim world condemns religious extremism, even religious violence, is another question.  According to Stark, “it is incorrect to claim that the support of religious terrorism in the Islamic world is only among small, unrepresentative cells of extremists.”  In fact, recent polling data tends to demonstrate “more widespread public support than many have believed.”

Shari’a, the religious law and moral code of Islam, is considered infallible because it derives from the Qur’an, tracks the examples of Muhammad, and is thought to have been given by Allah.  It controls everything from politics and economics to prayer, sex, hygiene, and diet.  The expressed goal of all militant Muslim groups, Stark argues, is to establish Shari’a everywhere in the world.

Table 2:  Percent of Muslims Who Think . . .

Country | Shari’a must be the ONLY source of legislation | Shari’a must be a source of legislation | Total
Saudi Arabia | 72% | 27% | 99%
Qatar | 70% | 29% | 99%
Yemen | 67% | 31% | 98%
Egypt | 67% | 31% | 98%
Afghanistan | 67% | 28% | 95%
Pakistan | 65% | 28% | 93%
Jordan | 64% | 35% | 99%
Bangladesh | 61% | 33% | 94%
United Arab Emirates | 57% | 40% | 97%
Palestinian Territories | 52% | 44% | 96%
Iraq | 49% | 45% | 94%
Libya | 49% | 44% | 93%
Kuwait | 46% | 52% | 98%
Morocco | 41% | 55% | 96%
Algeria | 37% | 52% | 89%
Syria | 29% | 57% | 86%
Tunisia | 24% | 67% | 91%
Iran | 14% | 70% | 84%

Gallup World Polls from 2007 and 2008 show that nearly all Muslims in Muslim countries want Shari’a to play some role in government.(24)  As Table 2 illustrates, the degree of desired implementation varies from nation to nation.  Strikingly, however, a clear majority in ten Muslim countries—and a two-thirds supermajority in five—want Shari’a to be the exclusive source of legislation.
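The “ten countries, five supermajorities” tally can be checked directly against Table 2. A minimal sketch in Python (figures transcribed by hand; reading “clear majority” as more than 50 percent and “two-thirds supermajority” as 66.7 percent or more is my assumption):

```python
# Percent wanting Shari'a as the ONLY source of legislation (Table 2 above).
only_source = {
    "Saudi Arabia": 72, "Qatar": 70, "Yemen": 67, "Egypt": 67,
    "Afghanistan": 67, "Pakistan": 65, "Jordan": 64, "Bangladesh": 61,
    "United Arab Emirates": 57, "Palestinian Territories": 52,
    "Iraq": 49, "Libya": 49, "Kuwait": 46, "Morocco": 41,
    "Algeria": 37, "Syria": 29, "Tunisia": 24, "Iran": 14,
}

majority = [c for c, pct in only_source.items() if pct > 50]
supermajority = [c for c, pct in only_source.items() if pct >= 200 / 3]

print(len(majority), "countries show a clear majority")       # 10
print(len(supermajority), "show a two-thirds supermajority")  # 5
```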

In 2013, an Egyptian criminal court sentenced Nadia Mohamed Ali and her seven children to fifteen years’ imprisonment for apostasy. One could argue, however, that Nadia got off easy, because in Egypt the decision to leave Islam is punishable by death. In fact, death is the mandatory sentence for apostasy in both Afghanistan and Saudi Arabia. But do such laws garner support from Muslims in general?

That same year, the Pew Forum on Religion and Public Life asked citizens in twelve Islamic nations whether they supported the death penalty for apostasy.(25) Their responses are reflected in Table 3. In Egypt, eighty-eight percent of Nadia’s fellow residents would have approved of her and her children’s executions, as would a majority of Jordanians, Afghans, Pakistanis, Palestinians, Djiboutians, and Malaysians.

Table 3:  Death Penalty for People Who Leave the Muslim Religion?

Country | Percent of Muslims Who Favor the Death Penalty for Apostasy
Egypt | 88%
Jordan | 83%
Afghanistan | 79%
Pakistan | 75%
Palestinian Territories | 62%
Djibouti | 62%
Malaysia | 58%
Bangladesh | 43%
Iraq | 41%
Tunisia | 18%
Lebanon | 17%
Turkey | 8%

But from a Western perspective, so-called “honor” killing ranks among the most incomprehensible of Muslim customs. Stark details four truly mind-boggling cases: In one, a young woman was strangled by her own family for the “offense” of being raped by her cousins. In the other three, girls who eloped, acquired a cell phone, or merely wore slacks that day were hanged or beaten to death. From 2012 alone, Stark isolated seventy-eight reported honor killings, forty-five of which were committed in Pakistan.

Many protest that simple domestic violence is often misclassified as honor killing. But, again, Pew survey data seems to suggest otherwise.(26) Table 4 shows the percentage of Muslims in eleven countries who believe it is often or sometimes justified to kill a woman for adultery or premarital sex in order to protect her family’s honor. Thankfully, only in Afghanistan and Iraq do a majority (sixty percent) agree. But in all other Muslim nations polled, a substantial minority—including forty-one percent in Jordan, Lebanon, and Pakistan—appear to approve of these horrific murders as well as their governments’ documented reluctance to prosecute them.

Table 4:  Is it justified for family members to end a woman’s life who engages in premarital sex or adultery in order to protect the family’s honor?

Country | Percent of Muslims Who Responded Sometimes/Often Justified
Afghanistan* | 60%
Iraq* | 60%
Jordan | 41%
Lebanon | 41%
Pakistan | 41%
Egypt | 38%
Palestinian Territories | 37%
Bangladesh | 36%
Tunisia | 28%
Turkey | 18%
Morocco | 11%

*In these countries, the question was modified to: “Some people think that if a woman brings dishonor to her family it is justified for family members to end her life in order to protect the family’s honor . . .”

Stark also cites a report from the Human Rights Commission of Pakistan.(27)  In 2012 alone, according to that organization, 913 Pakistani females were honor killed—604 following accusations of illicit sexual affairs, and 191 after marriages unapproved by their families.  Six Christian and seven Hindu women were included.

Monotheism Tamed?

Islam is not universally violent, of course. The same polls, for example, show that few if any British and German Muslims, and only five percent of French Muslims, agree that honor killing is morally acceptable. But the data from Islamic nations tend, first, to support the proposition that Abrahamic monotheism is uniquely adapted to inspire violence and, second, to demonstrate that the belief in one god continues to fulfill this exceptionally vicious legacy. It is no accident, for example, that nearly all Muslims in these countries are particularists, believing that “Islam is the one true faith leading to eternal life.”(28)

On the other hand, Westerners ought not to conclude from these polls that the perils of monotheism are confined to the geographic regions surrounding North Africa and the Middle East.  Even in the distant United States, for example, children continue to die needlessly because their Christian parents reject science-based medicine in favor of “prayer healing.”(29)  Enduring tragedies of this ilk would seem unimaginable in the absence of religious devotion to an allegedly all-powerful, ultra-dominant god.

References: 

(1)  Real Time with Bill Maher: Ben Affleck, Sam Harris and Bill Maher Debate Radical Islam (HBO). 2014. http://www.youtube.com/watch?v=vln9D81eO60 (posted October 6, 2014).

(2)  Stark, R. and K. Corcoran. 2014. Religious Hostility: A Global Assessment of Hatred and Terror. Waco, TX: ISR Books.

(3)  Schulson, M. 2014. Karen Armstrong on Sam Harris and Bill Maher. http://www.salon.com/2014/11/23/karen_armstrong_sam_harris_anti_islam_talk_fills_me_with_despair/ (posted November 23, 2014).

(4)  Armstrong, Karen. 2014. Fields of Blood: Religion and the History of Violence. NY: Knopf.

(5)  Eller, Jack David. 2010. Cruel Creeds, Virtuous Violence: Religious Violence across Culture and History. NY: Prometheus.

(6)  Harris, S. 2005. The End of Faith: Religion, Terror, and the Future of Reason. NY: W.W. Norton.

(7)  Diamond, J. 1997. Guns, Germs, and Steel: The Fates of Human Societies. NY: W.W. Norton.

(8)  Stark, R., and K. Corcoran. 2014. Religious Hostility.

(9)  Freud, S. 1967. Moses and Monotheism. NY: Vintage.

(10)  Hillman, J. 2005. A Terrible Love of War. NY: Penguin.

(11)  Armstrong, Karen. 2001. Holy War: The Crusades and Their Impact on Today’s World. NY: Anchor Books.

(12)  Kirsch, J. 2004. God Against the Gods: The History of the War Between Monotheism and Polytheism. NY: Viking Compass.

(13) Meltzer, E. 2004. “Violence, Prejudice, and Religion: A Reflection on the Ancient Near East,” in The Destructive Power of Religion: Violence in Judaism, Christianity, and Islam (Volume 2: Religion, Psychology, and Violence), ed. J. Harold Ellens. Westport, CT: Praeger.

(14)  Cline, E.H. 2004. Jerusalem Besieged: From Ancient Canaan to Modern Israel. Ann Arbor, MI: University of Michigan Press.

(15)  Avalos, H. 2005. Fighting Words: The Origins of Religious Violence. Amherst, NY: Prometheus.

(16)  Schwartz, R. 2006. “Holy Terror,” in The Just War and Jihad: Violence in Judaism, Christianity, & Islam, ed. R.J. Hoffman. Amherst, NY: Prometheus.

(17)  Garcia, H. 2015. Alpha God: The Psychology of Religious Violence and Oppression. Amherst, NY: Prometheus.

(18)  Gutiérrez, R. 1991. When Jesus Came, the Corn Mothers Went Away: Marriage, Sexuality, and Power in New Mexico, 1500-1846. Stanford, CA: Stanford University Press.

(19)  Harris, S. The End of Faith.

(20)  Warraq, Ibn. 2003. Why I Am Not a Muslim. Amherst, NY: Prometheus.

(21)  Harris, L. 2007. The Suicide of Reason: Radical Islam’s Threat to the West. NY: Basic Books.

(22)  Stark, R., and K. Corcoran. 2014. Religious Hostility.

(23)  Stark’s sources included thereligionofpeace.com, the Political Instability Task Force Worldwide Atrocities Data Set, Tel Aviv University’s annual report on worldwide anti-Semitic incidents, the U.S. Commission on International Religious Freedom’s annual report for 2013, and the U.S. State Department’s International Freedom Report, 2013.

(24)  The Gallup World Poll studies have surveyed at least one thousand adults in each of 160 countries (having about 97 percent of the world’s population) every year since 2005.

(25)   The World’s Muslims: Religion, Politics and Society. 2013. http://www.pewforum.org/2013/04/30/the-worlds-muslims-religion-politics-society-overview/ (posted April 30, 2013) and http://www.pewforum.org/files/2013/04/worlds-muslims-religion-politics-society-topline1.pdf

(26)  Ibid.

(27)  State of Human Rights in Pakistan in 2012. Islamabad, Pakistan, May 4, 2013.

(28)  Pew Forum on Religion and Public Life, The World’s Muslims: Religion Politics and Society. (Washington, DC, 2013).

(29)  Hall, H. 2013. Faith Healing: Religious Freedom vs. Child Protection. http://www.sciencebasedmedicine.org/faith-healing-religious-freedom-vs-child-protection/ (posted November 19, 2013).

Why Gay and Lesbian: A New Epigenetic Proposal.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.

The persistence of homosexuality among certain animal species, including humans, has bewildered scientists at least since the time of Darwin. Why should same-sex attraction persist when evolution assumes reproductive success? Does homosexuality—especially among humans—facilitate the intergenerational transfer of genetic material in some other way? Or perhaps it advances an entirely different objective that justifies its more obvious procreative disadvantage. Such questions have long attracted gene-based explanations for homosexuality.

Consider “kin selection,” for example. As E.O. Wilson first suggested in 1975, maybe human homosexuals are like sterile female worker bees that assist the queen in reproduction. One study of Samoan homosexual men, known locally as fa’afafine, revealed that gays are significantly more likely than straight men to help their siblings raise children.

But to satisfy the kin selection hypothesis, each gay man must account for the survival of at least two sibling-born children for every one he fails to reproduce—a difficult standard to attain. In any case, relevant studies in the U.S. and U.K. have failed to provide such evidence.
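The two-child threshold follows from standard kin selection arithmetic. A minimal sketch using Hamilton’s rule (the rule and the relatedness coefficients below are textbook assumptions, not spelled out in the studies cited above):

\[
rB > C, \qquad r_{\text{own child}} = \tfrac{1}{2}, \qquad r_{\text{niece/nephew}} = \tfrac{1}{4}
\]

\[
\tfrac{1}{4}\,B \;>\; \tfrac{1}{2} \times 1 \;\Longrightarrow\; B > 2
\]

That is, because a man shares half his genes with his own child but only a quarter with a sibling’s child, his helping must yield more than two additional surviving nieces or nephews for each child he does not father, which is the “at least two sibling-born children” standard described above.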

As a possible explanation for male homosexuality, other researchers have offered the “fertile female” hypothesis.  Here, a genetic tendency toward androphilia, or attraction to males—though problematic for men from an evolutionary perspective—is thought to enhance the reproductive success of their straight, opposite-sex relatives by rendering them hyper-sexual.

At least two studies have claimed results in support of the fertile female model.  Notably, this hypothesis is also capable of explaining why gayness persists at a constant but low frequency of about eight percent in the general global population.

A former faculty member at Harvard Medical School and the Salk Institute, neuroscientist Simon LeVay favors evidence suggesting a suite of several “feminizing” genes (LeVay 2011).  The inheritance of a limited number of these genes, LeVay proposes, will make males, for instance, more attractive to females—and thus presumably more successful in terms of reproduction—by rendering them less aggressive and more empathetic, for example.

But a few men in the family tree will receive “too many” feminizing genes and, as a result, be born gay.  Indeed, one Australian study has discovered that gender-atypical traits do enhance reproduction, and that heterosexuals with homosexual twins achieved more opposite-sex partnerships than heterosexuals without homosexual twins—though statistical significance was observed only among females.

Even so, most explanations are not based solely in genetics.  Evidence suggests as well, for example, that a variety of mental gender traits are shaped during fetal life by varying levels of circulating sex hormones.  Especially during certain critical periods of development, testosterone (T) levels in particular are thought to cause the brain to organize in a more masculine or feminine direction and, later in life, to influence a broad spectrum of gender traits including sexual preference.

For instance, women suffering from congenital adrenal hyperplasia, a condition marked by elevated levels of prenatal T and other androgens, are known to possess gender traits significantly shifted toward masculinity and lesbianism. Importantly, female fetuses most severely affected by CAH and, thus, most heavily exposed to prenatal androgens are the most likely to experience same-sex attraction later in life.

Similarly, the bodies of male fetuses afflicted with androgen insensitivity syndrome—a condition in which the gene coding for the androgen receptor has mutated—will fail to react normally to circulating T.  As a result, these XY fetuses will later appear as girls and, as adults, share an attraction to men.  In sum, although a number of other factors could be, and likely are, at play, it is now fairly well established that prenatal androgen levels have a substantial impact on sexual orientation in both men and women.

But three researchers working through the National Institute for Mathematical and Biological Synthesis have recently combined evolutionary theory with the rapidly advancing science of both androgen-dependent sexual development and molecular regulation of gene expression to propose a new and provocative epigenetic model to explain both male and female homosexuality (Rice et al. 2012).

According to lead author William Rice of the University of California, Santa Barbara, his group’s hypothesis succeeds not only in squaring homosexuality with natural selection; it also explains why same-sex attraction has proven substantially heritable even though, one, numerous molecular studies have so far failed to locate associated DNA markers and, two, concordance between identical twins—about twenty percent—is far lower than genetic causation would predict.

At the model’s heart are sex-specific epigenetic modifications, or epi-marks.  Generally speaking, epi-marks can be characterized as molecular regulatory switches attached to genes’ backbones that direct how, when, and to what degree genetic instructions are carried out during an organism’s development.  They are created anew during each generation and are usually “erased” between generations.

But because epi-marks are produced at the embryonic stem cell stage of development—prior to the division between soma and germline—they can in theory be transmitted across generations.  Indeed, some evidence suggests that on rare occasions (rare, but not negligibly so) they do carry over, thus mimicking the hereditary effect of genes.

Under typical circumstances, Rice instructs, sex-specific epi-marks serve our species’ evolutionary objectives well by canalizing subsequent sexual development.  In other words, they protect sexually essential developmental endpoints by buffering XX fetuses from the masculinizing effects and XY fetuses from the feminizing effects of fluctuating in utero androgen levels.  Significantly, each epi-mark will influence some sexually dimorphic traits—sexual orientation, for example—but not others.

According to the new model, however, when sex-specific epi-marks manage to escape intergenerational erasure and transfer to opposite-sex offspring, they become sexually antagonistic (SA) and thus capable of guiding the development of sexual phenotypes in a gonad-discordant direction.  As such, Rice hypothesizes, “homosexuality occurs when stronger-than-average SA-epi-marks (influencing sexual preference) from an opposite-sex parent escape erasure and are then paired with weaker-than-average de novo sex-specific epi-marks produced in opposite-sex offspring.”

To summarize, Rice’s team argues that differences in the sensitivity of XY and XX fetuses to the same levels of T might be caused by epigenetic mechanisms.  Normally, such mechanisms would render male fetuses comparatively more sensitive and female fetuses relatively less sensitive to exposure.  But if such epigenetic labels pass between generations, they can influence sexual development.  And if they pass from mother to son or from father to daughter, sexual development can proceed in a manner that is abnormal (or “atypical,” if you prefer).  In those very exceptional cases, offspring brain development can progress in a fashion more likely to result in homosexuality.
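For readers who think in code, the logic of that summary can be caricatured in a few lines of Python.  This is strictly a toy sketch of my own, not Rice’s model, data, or software, and every number in it (the hormone distributions, the sensitivity effects, the threshold) is invented purely for illustration, with the effects exaggerated so the contrast is visible:

import random

# Toy caricature of the epigenetic hypothesis -- all numbers invented.

def epi_mark(parent_sex):
    """A sex-specific epi-mark: XY marks push T sensitivity up
    (masculinizing), XX marks push it down (feminizing)."""
    strength = random.gauss(1.0, 0.2)  # marks vary in strength
    return (parent_sex, strength if parent_sex == "XY" else -strength)

def androphilic(fetal_sex, inherited_mark=None):
    """Does the simulated fetus develop attraction to males?"""
    # In utero testosterone, in arbitrary units; XY fetuses see more.
    t = random.gauss(1.0 if fetal_sex == "XY" else 0.3, 0.15)
    # The fetus's own de novo mark canalizes development sex-typically.
    _, own_effect = epi_mark(fetal_sex)
    effective_t = t + 0.3 * own_effect
    # A mark that escaped erasure from an opposite-sex parent is
    # sexually antagonistic: it pushes development gonad-discordantly.
    if inherited_mark and inherited_mark[0] != fetal_sex:
        effective_t += 0.6 * inherited_mark[1]
    # Caricature: below-threshold brain organization -> androphilia.
    return effective_t < 0.65

trials = 100_000
with_escape = sum(androphilic("XY", epi_mark("XX")) for _ in range(trials))
without = sum(androphilic("XY") for _ in range(trials))
print(f"sons, maternal mark escaped erasure: {with_escape / trials:.1%} androphilic")
print(f"sons, mark erased as usual:          {without / trials:.1%} androphilic")

In this caricature, sons carrying an unerased maternal mark turn out androphilic far more often than sons whose marks were erased as usual.  The point is only that the model requires no “gay gene,” just a normally protective switch landing in wrong-sex offspring.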

Rice’s observations and insights are fascinating, to say the least.  Indeed, popular news reports describe a scientific community highly appreciative of the new model’s theoretical power.   Nevertheless, a great deal of criticism has been tendered as well.

LeVay, for example, describes the authors’ hypothesis generally as “a reasonable one that deserves to be tested—for example by actual measurement of the epigenetic labeling of relevant genes in gay people and their parents.”  He reminded me, however, that Rice hasn’t actually discovered anything.  The new model is in fact pure speculation, says LeVay, and it never should have been reported—as some media have done—as “the cause” (or even as “a cause”) of homosexuality.

More specifically, LeVay offers three points of caution.  First, he warns that an epigenetic explanation is in no way implied by the current data on fetal T levels.  When based on single measurements, he concedes, male and female fetuses may indeed show some overlap.  But because T levels fluctuate in both males and females throughout development, allegedly anomalous individuals might easily average completely sex-typical T levels over time.  Second, LeVay sees “little or no evidence” that epi-marks ever escape erasure in humans.

Finally, LeVay continues to favor genetic explanations.  The incidence of homosexuality in some family trees, he says, is more consistent with DNA inheritance than with any known epigenetic mechanism.  Moreover, he warns, we should never underestimate the difficulty of identifying genetic influences—especially with regard to mental traits.  In such cases, complex polygenic origins are far more likely to be at play than single, magic genetic bullets.

Other neuroscientists have posed equally important questions.  How can we test whether the appropriate epi-marks—probably situated in the brain—have been erased?  Is it too simplistic to suggest identical or even similar mechanisms for both male and female homosexuality?  Why is it important to isolate the specific biological causes of same-sex attraction?  By doing so, do we run the risk of further stigmatizing an already beleaguered population?

Rice doesn’t deny his new model’s data deficit.  Nor does he portray the epigenetic influence on same-sex attraction as an exclusive one.  His team does, however, insist that epigenetics is “a probable agent contributing to homosexuality.”  We now have “clear evidence,” they maintain, that “epigenetic changes to gene promoters … can be transmitted across generations and … can strongly influence, in the next generation, both sex-specific behavior and gene expression in the brain.”

The authors contend as well that their hypothesis can be rapidly falsified because it makes “two unambiguous predictions that are testable with current technology.”  First, future large-scale association studies will not identify genetic markers correlated with most homosexuality.  Any such associations found, they say, will be weak.

Second, future genome-wide epigenetic profiles will distinguish differences between homosexuals and non-homosexuals, but only at genes associated with androgen signaling or in brain regions controlling sexual orientation.  Testing this second prediction, they admit, may proceed only with regard to lesbianism by comparing profiles of sperm from fathers with and without homosexual daughters.

To my knowledge, Rice and his colleagues have never squarely addressed the question of whether, for philosophical or sociological reasons, we should refrain from delving further into the dicey subject of same-sex attraction.  Such questions do, however, expose a tendency toward communal repression and a general lack of respect for the scientific enterprise.  These decisions should be left to the scientists and those who fund them.

References:

LeVay, Simon. 2011. Gay, Straight, and the Reason Why: The Science of Sexual Orientation. New York: Oxford University Press.

Rice, W., U. Friberg, and S. Gavrilets. 2012. Homosexuality as a consequence of epigenetically canalized sexual development. The Quarterly Review of Biology 87(4): 343–368.