Diet-Heart: A Hypothesis in Crisis?

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer. Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.

Was the Government Wrong?

Dietary recommendations were issued by the U.S. Select Committee on Nutrition and Human Needs and the UK National Advisory Committee on Nutritional Education in 1977 and 1983, respectively. In each case, the government focused heavily on reducing consumption of dietary fat to combat coronary heart disease. The official advice was, and remains into 2015, to restrict overall fat intake to thirty percent and saturated fat intake to ten percent of total calories.

One might assume that the national guidelines were based on a plethora of hard evidence. Not so, according to obesity researcher, Zoe Harcombe. Earlier this year, her team published a review and meta-analysis of randomized controlled trials accessible in 1983 to the U.S. and UK committee members. Their conclusion? The “best evidence” then available “did not support the contention that reducing dietary fat intake would contribute to a reduction in coronary heart disease or related mortality.”(1)

Harcombe et al. narrowed their collection of professional literature down to six relevant studies, together including 740 deaths (423 from coronary heart disease) among 2467 participants—all male. Every trial but one had examined only secondary prevention participants—in other words, those who had already experienced myocardial infarction. The remaining study comprised twenty percent secondary and eighty percent primary prevention subjects.

Interestingly, five of the six trials never examined the effects of dietary interventions set at levels matching the eventual recommendations. The one study that had examined the consequences of a diet limited to ten percent saturated fat, according to Harcombe, actually “reported a higher incidence of all-cause mortality and coronary heart disease in the intervention group.”

In fact, Harcombe et al.’s overall analysis revealed no statistically significant relationship between dietary intervention and either all-cause mortality or heart deaths, despite the fact that reductions in mean serum (blood) cholesterol levels were significantly greater among the intervention groups. This, they insist, “undermines the role of serum cholesterol levels as an intermediary to the development of coronary heart disease and contravenes the theory that reducing fat generally and saturated fat particularly potentiates a reduction in coronary heart disease.”

As such, Harcombe concludes, “it seems incomprehensible that dietary advice was introduced for 220 million Americans and 56 million UK citizens, given the contrary results from a small number of unhealthy men.” And because other reviews of more recent evidence have also questioned the same diet-heart hypothesis upon which these recommendations were based, her group contends, our governments’ “dietary advice not merely needs review; it should not have been introduced” in the first place.

Harcombe’s position on the diet-heart hypothesis is characteristic of an expanding and increasingly popular minority view that saturated dietary fats in particular have been wrongly demonized as killers. In this article, I’ll briefly discuss the debate’s necessary history and many of the most important studies and reviews published by nutrition researchers on each side of the scientific rift. In some cases—to reveal the debate’s curious ferocity on the one hand, and notable lack of scientific confidence on the other—I’ll also include certain experts’ personal opinions and criticisms. Finally, I’ll attempt to expose what the science really says, if anything, about the much-disputed diet-heart hypothesis and the foods we really ought to eat.

From Proposal to Paradigm to Policy.

In the January 4, 1985 issue of Science, journalist Gina Kolata covered the 47th consensus panel report from the National Institutes of Health, published some three weeks earlier. Since 1961, the American Heart Association had urged Americans to consume less saturated fat and cholesterol, and recommended its so-called “prudent diet” emphasizing fruits, vegetables, and vegetable oils.

The NIH had been hesitant to take a firm position on the controversial diet-heart hypothesis, according to Kolata, because the scientific literature focusing on the connection between dietary cholesterol and saturated fatty acids (SFA) on the one hand, and heart disease on the other, did “not show that lowering cholesterol makes a difference.”(2)

But the NIH’s reticence was finally overcome by the results of a then-new study conducted by the National Heart, Lung, and Blood Institute on the effects of a cholesterol-reducing diet along with a drug, cholestyramine, on about 4000 middle-aged men with elevated serum cholesterol levels.(3) On average, the intervention group’s cholesterol had plunged by 13.4 percent since the investigation began in 1973, or 8.5 percentage points better than the average decrease found among placebo-treated controls.

According to the NHLBI, its researchers had provided not only “strong evidence” of a “causal role” for low-density lipoprotein cholesterol, or LDL-C, in the pathogenesis of coronary heart disease (CHD), but good reason as well to extend its findings to “other age groups and women” along with “others with more modest elevations of serum cholesterol.”

Others were less impressed. In an interview with Kolata, Paul Meier, a University of Chicago statistician and frequent NIH adviser, deemed the new study’s findings “weak,” referring to the disappointing and statistically insignificant distinction between the intervention and control groups in terms of deaths from all causes. And although incidences of angina, bypass surgery, and abnormal exercise electrocardiograms all dropped in the modified diet-drug group, Kolata judged that the new study had “failed, as every other trial did, to prove that lowering blood cholesterol saves lives.”

But perhaps most provocative was the panel’s recommendation that all Americans from the age of two should reduce their consumption of SFA and cholesterol. Also interviewed by Kolata, Thomas Chalmers of the Mt. Sinai Medical School argued that the report “made an unconscionable exaggeration of the data,” emphasizing that “there is absolutely no evidence that it’s safe for children to be on a cholesterol-lowering diet.”

In a subsequent letter to Science, Daniel Steinberg, who had chaired the NIH consensus development conference, criticized Kolata for devoting the lion’s share of her article to “no more than a handful among some 600 conferees.”(4) “The panel’s recommendation,” he countered, “is sound when all of the evidence is taken into account.” Such evidence also led quickly in 1985 to the National Cholesterol Education Program, a new NIH program created to instruct physicians in identifying and treating “at-risk” patients.

In fact, there was much agreement among the conferees. But dissent in 1984 was both loud and determined, as it remains at the dawn of 2015. The differences between the debates then and now are as multifactorial as heart disease itself. While television continues to manipulate viewers’ emotions and bombard them with highly varied and always simplistic scientific interpretations and dietary advice, internet bloggers now flood every recess of the popular consciousness with harsh, often tactically ruthless diatribes against their nutritional adversaries.

Journalists and popular authors publish infuriated or one-sided narratives alleging conspiracies between researchers, big food, and government agencies to intentionally mislead the public for personal gain. And while new studies and reviews have lately called the very foundations of the diet-heart hypothesis into question, some leading scientists have entrenched themselves as well, separating into mutually antagonistic nutritional camps.

It might sound too melodramatic to be true. But the most regrettable result is unmistakable—the American public is bewildered and incredulous. What kind of diet will help us to not only lose or maintain our weight, but remain healthy and safe as well? Should we cast our lots with the low-fat or low-carbohydrate paradigm? Do saturated fats really raise the risk of heart disease, and, if so, are the officially endorsed replacements perhaps even more dangerous? Worst of all, Americans wonder whether nutrition science has anything at all of substance to offer them.

The classic diet-heart hypothesis (D-Hh) posits simply enough that diets high in SFA and cholesterol (and low in polyunsaturated fatty acids [PUFA]) raise serum total and LDL cholesterol levels and lead to the accumulation of atheromatous plaques. These plaques gradually narrow coronary arteries, reduce blood flow to the heart, and can eventuate in myocardial infarction.

Early evidence linking heart disease to foods rich in cholesterol (and SFA), including red meat, eggs, and shellfish, derived from animal experiments. In 1913, for example, a Russian pathologist reported the ability to induce atherosclerotic-like lesions in rabbits by feeding them copious amounts of cholesterol.(5) Others soon replicated these results, mainly with other herbivorous animals. Many researchers objected, however, that such creatures were naturally ill-suited to metabolize cholesterol. And when similar experiments were carried out on non-herbivorous (and more human-like) dogs, they added, the animals appeared to tolerate the cholesterol much better.

But the D-Hh wasn’t formally articulated until University of Minnesota physiologist Ancel Keys presented the concept in 1952 at Mt. Sinai in New York. Also published in a famous paper the following year, Keys employed a simple yet powerful graph correlating in precise curvilinear fashion total fat intake as a percentage of all calories with death rates from heart disease among men in six countries—Japan, Italy, England and Wales, Australia, Canada, and the United States.(6)

Detractors claimed that Keys intentionally ignored data from twenty-two other countries. And when researchers scrutinized additional data from those nations, they found not only that Keys’ correlation was greatly diminished, but that no association whatsoever existed between dietary fats and death from all causes.(7) Nevertheless, according to James DiNicolantonio, cardiovascular researcher at St. Luke’s Mid America Heart Institute, Keys’ early data “seemingly led us down the wrong ‘dietary-road’ for decades to follow.”(8)

Still, the most convincing early evidence for the D-Hh may have originated from Keys’ Seven Countries Study of sixteen cohorts (12,763 rural males aged 40–59) in Greece, Italy, the former Yugoslavia, the Netherlands, Finland, the U.S., and Japan. By this time, Keys had refined his initial proposal to impugn primarily SFA and animal products (dietary cholesterol not so much). Indeed, in this, the first multi-country epidemiological undertaking in history, coronary mortality and the five-year incidence of CHD were positively correlated with SFA but not with either total fat or PUFA intake.(9)

The cross-cultural results were striking. Upon 25-year follow-up, inter-population death rates from CHD differed dramatically. In East Finland, for example, 268 per 1000 lumberjacks and farmers living on diets high in meat and dairy had died. By contrast, among the Greeks of Crete, whose dietary fat came chiefly from olive oil and who ate very little meat, only twenty-five per 1000 had perished. Perhaps most notably, however, SFA had accounted for twenty-two percent of the Finns’ total calories, but only eight percent of the Cretans’.
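To put the gap in plain terms, the per-1000 figures quoted above imply a more-than-tenfold difference in 25-year CHD mortality between the two populations. A quick sketch (using only the numbers reported here, nothing from the underlying study) makes the comparison explicit:

```python
# 25-year CHD deaths per 1000 men, as quoted from the Seven Countries follow-up
east_finland = 268 / 1000
crete = 25 / 1000

# Ratio of the two mortality rates
rate_ratio = east_finland / crete
print(f"East Finland vs. Crete CHD mortality ratio: {rate_ratio:.1f}x")
# prints: East Finland vs. Crete CHD mortality ratio: 10.7x
```

The ratio alone, of course, says nothing about cause; it simply shows why the contrast between the Finnish and Cretan cohorts became the study’s signature finding.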

But Keys was, and continues to be, criticized for having “cherry-picked” his Seven Countries data. Some argue that information from populations in countries like France, Switzerland, Germany, Norway, or Sweden, for example, might have challenged Keys’ hypothesis. Others, like journalist Nina Teicholz, have recently gone so far as to charge that he deliberately selected “only those nations … that seemed likely to confirm it.”(10)

Indeed, Keys’ data was not chosen randomly. But Henry Blackburn, Keys’ colleague in Seven Countries, sees no reason why the populations should have been selected entirely by chance. “Demonstrating a lack of understanding of how scientists approach new questions,” he explains, the critics ignore the fact that “any savvy scientist at an early phase of questioning knows to look first not randomly but across wide variations of the cause under consideration, in this case diet.”(11)

Others observed of Keys’ results that CHD mortality varied widely within certain countries. “Despite similar risk factors and diet,” Danish independent researcher Uffe Ravnskov found that “the 5-year incidence of fatal CHD in Crevalcore, Italy, was more than twice that in Montegiorgio, while in Karelia it was five times higher than in West Finland; and on Corfu, 6-7 times higher than on Crete.”(12) Thus, for these and other dissenters, the supposed correlation between diet and heart disease was little more than a well-staged illusion.

On May 28, 1980—less than five years before the NIH’s consensus report on the same subject—the Food and Nutrition Board of the National Academy of Sciences issued a far more controversial paper, Toward Healthful Diets, finding no clear evidence that reducing serum cholesterol through dietary intervention could prevent CHD. Therein, the fifteen-member Board reproved those “who seek to change the national diet” for assuming a nominal risk in widespread dietary adjustment and for relying so heavily on epidemiological rather than experimental evidence.

Critics of the Board accused its members of maintaining inappropriately cozy relationships with big food organizations like the American Egg Board. In response to the report itself, Robert Levy, director of the NHLBI, offered the following faint-hearted guidance: “Existing information indicates that Americans should hedge their bets and seek a diet lower in saturated fats and cholesterol, at least until more evidence is available.”(13)

At that point, only the Academy and the American Medical Association stood in defiance of the U.S. government and at least eighteen distinguished health organizations. For better or worse, America had officially become a low-fat, low-cholesterol nation.

The Modern Macronutrient Wars Begin.

In her now-classic tome, Food Politics, Marion Nestle—NYU professor of nutrition and former nutrition science adviser to the DHHS, USDA, and FDA—affirmed that “scientists consistently have demonstrated the health benefits of diets rich in fruits and vegetables [and] limited in foods and fats of animal origin.”(14) She continued:

Decades ago, researchers discovered that high levels of cholesterol in the blood predispose individuals to coronary heart disease and that saturated fat (most prominent in meat and dairy products) raises blood cholesterol more than monounsaturated fats (typical of olive oil). They also observed that polyunsaturated fats (most prominent in vegetable seed oils) reduce blood cholesterol levels.

In fact, by the time Food Politics was first published in 2002, every major health organization, including the National Academy of Sciences, had agreed that saturated fatty acids (SFA) raise the risk of cardiovascular disease (CVD), including coronary heart disease (CHD), and that their replacement with either monounsaturated fatty acids (MUFA) or polyunsaturated fatty acids (PUFA)—including all Omega 3 and Omega 6 fatty acids—reduces that risk.

But when I contacted Nestle later, in the summer of 2014, her position seemed less definite. For example, when I inquired about the wisdom of current governmental and non-governmental guidelines relating to SFA intake, most if not all of which warn the entire public against consumption greater than ten percent—in some cases, more than six percent—of total calories, Nestle flatly replied, “I don’t think the jury is in yet on this one.”

So what’s the problem? The “jury” seemed to be “in” decades ago. Despite consistency among official guidelines, the research community now seems hopelessly—indeed, zealously—divided on the topic. Meanwhile, average Americans grow increasingly confused about dietary fats. Science is relentlessly progressive by nature, but its nutritional constituent appears paralyzed. Why?

Consider a recent wave of conflicting studies. In 2009, researchers led by Danish epidemiologist Marianne Jakobsen pooled data from eleven cohort studies to examine the effects of replacing SFA with MUFA, PUFA, or carbohydrates (CHO) on CHD risk.(15) Over four to ten years of follow-up, the 344,696 participants suffered 5249 coronary events and 2155 coronary deaths. Jakobsen et al. concluded that MUFA were unassociated, CHO were modestly but directly associated, and PUFA were inversely associated with coronary events. The effect was not modified by either age or gender.

Thus, consistent with conventional wisdom, this group argued that “replacing SFA intake with PUFA intake rather than MUFA or carbohydrate intake prevents CHD over a wide range of intakes and among all middle-aged and older men and women.” Although such distinctions were not tested, Jakobsen also suggested that CHO quality—i.e., fiber content, extent of processing, and glycemic index—might alter the analysis.

But the modern-day macronutrient wars positively erupted a year later when four American researchers led by Patty Siri-Tarino published a meta-analysis of twenty-one studies finding “insufficient evidence … to conclude that dietary saturated fat is associated with an increased risk of CHD, stroke, or CVD.”(16) In other words, this group’s rousing judgment had threatened to undercut the very foundation upon which the diet-heart hypothesis (D-Hh) had been built.

Arguably, however, the Americans’ findings were not entirely hostile to those of Jakobsen. They acknowledged, for example, that a decreased risk often results when SFA are replaced with PUFA. But their interpretation of the data suggested not that SFA were independently problematic, but rather that the benefit of replacement might derive from either increased PUFA or the PUFA to SFA ratio.

In a companion opinion piece, the Americans commented further on the relationships between SFA, CHO, and CVD risk.(17) Therein, Siri-Tarino assailed not only existing and proposed guidelines, but the long-standing use of low-density lipoprotein-cholesterol (LDL-C) as the primary biomarker for CVD risk as well:

Recommendations for further reductions in saturated fat intake (e.g., to ≤ 7% of total energy) are based primarily on the prediction of a progressive reduction in CVD risk associated with greater reductions in LDL cholesterol. However, from the standpoint of implementation, further reductions in saturated fat intake usually involve … increased proportion of carbohydrate…. [But] the effect of higher carbohydrate diets, particularly those enriched in refined carbohydrates, coupled with the rising incidence of overweight and obesity, creates a metabolic state characterized by elevated triglycerides, reduced HDL cholesterol, and increased concentrations of small, dense LDL particles.

Similarly, this group noted that CHO restriction (or weight loss absent restriction) had been shown to yield reductions in the total-cholesterol to high-density lipoprotein-cholesterol (HDL-C) ratio, apolipoprotein B, and the number of small, dense LDL particles. In Siri-Tarino’s estimation, these markers were more closely associated with reduced CVD risk than LDL-C, which “appears to be specific to [less dangerous and] larger, more buoyant particles.”

The Americans finally emphasized the data’s failure to support recommendations for reductions in SFA below ten percent of total calories. While conceding that SFA might raise CVD risk by increasing inflammation and reducing insulin sensitivity, they insisted that their relative effect should be reevaluated “given the changing landscape of CVD risk factors.” Dietary efforts to curb CVD, they urged, “should primarily emphasize limitation of refined carbohydrate intakes and reduction in excess adiposity.”

But the professional response was both swift and severe.(18) Oxford University nutritionist Peter Scarborough denounced the Americans’ methodological design along with their failure to recognize the “well established” association between SFA, serum cholesterol, and CVD. Siri-Tarino countered that her “overall results” remained “robust” and unaffected by “different analytic strategies” even after adjustment for the alleged methodological weakness.

Dutch researcher Martijn Katan criticized Siri-Tarino et al.’s reliance on the underlying studies’ use of single-day dietary assessments, as opposed to multiday diet records. He insisted as well that, since fat-reduction recommendations had been issued fifty years ago, falling LDL-C concentrations had resulted in conspicuous declines in CHD. Finally, he questioned the private interests of Siri-Tarino’s colleague, Ronald Krauss, relative to his “advisory activities for the dairy industry.” Siri-Tarino acknowledged the underlying studies’ limitations, but argued that all dietary assessments had been subjected to a “quality score” which confirmed her final results. She also informed Katan that Krauss had discontinued his dairy industry associations years prior to publication.

In a separate editorial, Northwestern University preventive-medicine specialist Jeremiah Stamler—who had served with Ancel Keys on the AHA nutrition committee during the early 1960s—vigorously defended the D-Hh against what he deemed an unwarranted attack:(19)

Do they doubt the validity of the equations of Keys … which are based on dozens of metabolic ward-type feeding experiments, showing independent relations of dietary SFA and cholesterol (direct) and [PUFA] (inverse) to cholesterol …, findings that are repeatedly confirmed in observational and interventional studies in free-living people? … Do they bring into question the classical findings?

Stamler accused Siri-Tarino’s group of ignoring several important investigations, including the Seven Countries, Ni-Hon-San, and National Diet-Heart studies, along with the DASH/Omni-Heart and Multiple Risk Factor Intervention trials. He also challenged Siri-Tarino’s emphasis on CHO-induced dyslipidemia, wondering how she might explain why native Japanese have demonstrated favorable lipid profiles relative to American Japanese, despite their distinctly low-fat, high-CHO diets.

Finally, Stamler denied that the “limited data” supporting the differential effects of SFA and CHO on smaller or larger LDL particles could justify significant changes to the official guidelines. Indeed, nothing Siri-Tarino had to say, Stamler maintained, could “warrant modification of recommendations … beyond intensified emphasis on prevention and control of obesity.”

But the macronutrient guidelines would be challenged again in 2010 by NIH biochemist Christopher Ramsden. Although the AHA, for example, recommended substantial replacement of SFA with Omega-6 (n-6) PUFA-rich vegetable oils, Ramsden suspected that the underlying literature had failed either to distinguish between interventions that increased n-6 PUFA specifically and those that boosted both Omega-3 (n-3) and n-6 PUFA, or to compare the relative effects of these interventions on CHD outcomes.

Indeed, following their meta-analysis of “all randomized controlled trials that increased PUFA and reported relevant CHD outcomes,” Ramsden’s team calculated a twenty-two percent reduction in CHD risk for mixed n-3/n-6 PUFA diets, but a thirteen percent increase in risk for specific n-6 diets.(20) “These analyses were thus not appropriate,” Ramsden decided, “for formulating advice specific to n-6 PUFA.” Recommendations to increase n-6 PUFA “should be reconsidered,” he warned, “because there is no indication of benefit, and there is a possibility of harm.”

One year later—in apparent disregard of Ramsden’s findings—a diverse group consisting of members from both Jakobsen’s and Siri-Tarino’s 2010 teams and led by Danish nutritionist, Arne Astrup, decided to reevaluate SFA in light of recently published evidence.(21) While now confessing the over-simplicity of the D-Hh, this group predictably confirmed prior claims that SFA should be replaced with PUFA, but not refined CHO. The data regarding MUFA, they added, were too limited to be instructive.

Astrup conceded as well that biomarkers other than LDL-C—to include the total cholesterol-to-HDL-C ratio, non-HDL-C, and apolipoprotein B—can be more informative about CVD risk. He also acknowledged evidence indicating that distinct SFA have different physiological effects, depending on their complement of carbon atoms. For example, in terms of raising serum cholesterol levels, stearic acid (found in meat and cocoa butter) appears neutral, while lauric and palmitic acids (found, for example, in tropical oils and dairy, respectively) might be far more problematic.

So the consensus as of 2011, according to Astrup, was that “the effect of a specific food on risk of CVD cannot be determined simply on the basis of the fatty acid profile.” Indeed, “the total matrix of a food is more important.” Nevertheless, he advised, a “healthy dietary pattern is primarily plant-based and low in SFA.”

Perhaps, but 2011 was a lifetime ago in the increasingly volatile world of nutrition science. Important questions about fats, CHO, and various biomarkers and risk factors for CHD were raised and recognized. But ensuing research would reveal quite clearly that the macronutrient wars were just warming up.

The Macronutrient Wars Rage On.

Controversy has beset the diet-heart hypothesis (D-Hh) from its very inception. Throughout the 1950s and 1960s, the idea’s founder, Ancel Keys, was accused of “cherry picking” evidence to advance his career. If the Masai warriors of Kenya and Tanzania can subsist healthfully on raw milk, meat, and blood, the detractors prodded, why should Westerners shun butter, omelets, and cheeseburgers?(22) Since then, every official recommendation urging Americans to consume less cholesterol and saturated fatty acids (SFA) to reduce their risks of coronary heart disease (CHD) and cardiovascular disease (CVD) has suffered incessant criticism.

In recent months, the attacks have only intensified. Cardiologist Aseem Malhotra scolded researchers for their allegedly outdated obsession over the link between SFA and low-density lipoprotein cholesterol (LDL-C). Yes, consumption of SFA tends to elevate blood LDL-C levels, he conceded. But that increase “seems to be specific to large, buoyant (type A) LDL particles,” and not the “small, dense (type B) particles” implicated in CVD.(23) More dangerous type B particles are actually “responsive to carbohydrate intake,” Malhotra insisted. Indeed, “saturated fat has been found to be protective.”

So, while the majority continues to condemn the SFA typically found in animal products and tropical oils, the minority has instead begun to impugn carbohydrates (CHO), particularly the refined variety favored by Western palates. Most researchers agree that excess energy from whatever sources leads to obesity, diabetes, and CHD. But authors of the official guidelines spurn minority influences and continue to recommend replacement of SFA with their polyunsaturated counterparts (PUFA).

In response, indignant writers have pummeled the professional literature with papers hostile to official policies. For example, following a meta-analysis of twelve studies involving 7150 participants, Austrian nutritionist, Lukas Schwingshackl, argued that replacing SFA with PUFA “showed no significant benefit in the secondary prevention of coronary heart disease.”(24) NIH biochemist Christopher Ramsden raised the stakes after examining newly-recovered data from the Sydney Diet Heart Study. He decided not only that linoleic acid (LA), a common omega-6 (n-6) PUFA, “did not provide the intended benefits,” but also that its substitution for SFA “increased all-cause mortality, cardiovascular death, and death from coronary heart disease.”(25) Uffe Ravnskov concurred, emphasizing studies associating consumption of PUFA with inflammation, immune system suppression, decreased high-density lipoprotein cholesterol (HDL-C) levels, and an increased risk of many cancers.(26)

At that point, CVD researcher (and Ravnskov co-author) James DiNicolantonio stepped in to summarize the detractors’ position. First, he stressed that the current outbreak of Western diabetes and obesity derives from overconsumption of CHO, not SFA. Second, the replacement of SFA with CHO only increases small, dense LDL particles and shifts the overall lipid profile toward decreased HDL-C, elevated triglycerides, and an increase in the apolipoprotein B-to-apolipoprotein A-1 ratio (ApoB/ApoA-1). Third, the substitution of SFA with n-6 PUFA only reduces HDL-C and raises the risk of cancer, CHD, and overall mortality. Finally, he argued, the PREDIMED and Lyon Diet Heart studies had demonstrated that Mediterranean-style diets reduce cardiovascular events, cardiovascular mortality, and all-cause mortality relative to either low-fat diets or the AHA’s “prudent” diet. Thus, DiNicolantonio concluded, those responsible for the official guidelines “should assess the totality of evidence and strongly reconsider their recommendations.”(27)

When I contacted him, DiNicolantonio maintained that the D-Hh “has never been proven.” SFA might raise LDL-C levels, he advised, but an increased risk of CHD simply doesn’t follow. Small, dense LDL particles are more atherogenic, or “more capable of penetrating a damaged endothelium”—which likely results from a “high refined carb/sugar diet,” among other things. To the contrary, he argued, the SFA found in whole, unprocessed foods like red meat can provide immune-boosting properties and stability against oxidation.

Meanwhile, minority support continued to mount. In his very controversial meta-analysis of forty-nine observational studies and twenty-seven randomized controlled trials, Cambridge University epidemiologist, Rajiv Chowdhury, found null associations between coronary risk and both total SFA and monounsaturated fatty acids (MUFA), along with a statistically non-significant association between coronary risk and PUFA supplementation.(28) Likewise, Tulane University epidemiologist, Lydia Bazzano, conducted a twelve-month randomized, parallel-group diet intervention trial of 148 healthy men and women, and found that participants who completed the low-CHO regime lost more weight and presented with fewer CVD risk factors compared to subjects who completed the low-fat program.(29)

Predictably, the majority’s response was immediate. Harvard University nutritionist, Maryam Farvid, performed her own systematic review and meta-analysis of prospective cohort studies to examine the effect of increased LA consumption in healthy subjects. Contrary to Ramsden’s and Chowdhury’s findings, Farvid revealed a linear inverse association between the predominant n-6 PUFA and CHD. LA’s “cardio-protective effects,” she argued, included a “9% lower risk of total CHD and 13% lower risk of CHD deaths” with a “5% increase in energy from LA, replacing SFA.”(30)

While acknowledging Ramsden’s and Ravnskov’s concerns relating to LA, Farvid nevertheless resolved that such fears remained unsupported by both prospective studies and randomized controlled feeding trials. Even so, she conceded, “the effects of LA on heart disease risks are difficult to predict,” and diets high in LA may increase lipid oxidation and “play a role in the pathogenesis of cancer.”

Farvid also distinguished her analysis from those of her minority predecessors. Ramsden, she noted, based his results primarily on one short-term trial from the 1960s restricted to a small sample of unhealthy men. And because the partial hydrogenation of vegetable oils was then common, she suggested, Ramsden’s findings might have been confounded by the trans-fats found in margarines high in LA. Chowdhury, on the other hand, had based his analysis on a limited number of studies and was unable to compare LA with SFA or any other macronutrient. In a separate critique, Walter Willett—Farvid’s co-author and Harvard colleague—accused Chowdhury of committing “multiple serious errors and omissions” and creating unnecessary confusion. When SFA are replaced with PUFA or MUFA in the form of olive oil, nuts and other plant oils, he countered, “we have much evidence that risk will be reduced.”(31)

The Bazzano study’s relevance to the D-Hh and the guidelines is less clear than many imagine. While certain popular media outlets seized on these results to extol the virtues of bacon-and-eggs breakfasts, for example, Bazzano had actually urged subjects in both the low-fat and low-CHO intervention groups to consume less SFA and more unsaturated fat.(32) Further, the low-fat cohort was advised merely to diminish overall fat intake to thirty percent of total calories, which is “hardly low,” as NYU professor of nutrition, Marion Nestle, reminded me.

Finally, an interesting variation on the minority theme alleges, first, that dietary SFA do not affect plasma SFA levels and, second, that excessive intake of refined CHO does raise either SFA or palmitoleic MUFA plasma levels via de novo lipogenesis in the liver, thereby increasing the risk of CHD.(33, 34) But James Kenney, nutrition researcher at the low-fat-advocacy Pritikin Longevity Center, judges this argument a “gross over-interpretation of the data.” Yes, the liver will convert sugar to both SFA and MUFA once its glycogen stores are full. Even then, Kenney told me, only trivial portions of ingested CHO will be converted, and only when it is consumed far in excess of energy needs. Regardless, he added, “low-fat/high-CHO diets composed of vegetables, whole-grains, and fruit decrease total cholesterol and ApoB-containing lipoproteins, reduce inflammation, and may actually reverse atherosclerosis.”

A Crossroads: What Should We Eat?

So where does the D-Hh stand at the dawn of 2015? My conversations with the experts could not have yielded more starkly conflicting opinions. According to Ravnskov, for example, “the diet-heart idea is the greatest medical scandal in modern time.” By contrast, Tufts University researcher Alice Lichtenstein—instrumental in generating the AHA’s most recent guidelines—wouldn’t dignify minority rhetoric with a response. She maintains that “the observational and intervention data are entirely consistent” and “support substituting PUFA for saturated fat to decrease the risk of CVD.” Kenney was less reluctant to characterize minority views as “fringe” and “pseudoscientific.”

Regrettably, the members of neither faction are likely to abandon their positions soon. So the general public is left shrugging its collective shoulders. Nonetheless, important inferences can be drawn from a century’s worth of diet-heart literature. With respect to biomarkers, for example, we might soon reconsider our preoccupations with LDL-C and HDL-C per se. Non-HDL-C (total cholesterol minus HDL-C) and total ApoB containing lipoproteins likely provide more revealing indicators of cardiovascular risk and, correspondingly, particle quantity is more critical than size. Concern also grows that, while SFA and MUFA increase HDL-C, much of that HDL-C can become dysfunctional and actually pro-atherogenic.
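The non-HDL-C figure mentioned above is simple arithmetic on a standard lipid panel. A minimal sketch in Python, with hypothetical example values (the function name and numbers are illustrative, not from any study cited here):

```python
def non_hdl_c(total_cholesterol: float, hdl_c: float) -> float:
    """Non-HDL cholesterol: total cholesterol minus HDL-C, in mg/dL.

    This single number captures the cholesterol carried by all
    ApoB-containing (potentially atherogenic) lipoproteins,
    which is why some researchers prefer it to LDL-C alone.
    """
    return total_cholesterol - hdl_c

# Hypothetical panel: total cholesterol 210 mg/dL, HDL-C 55 mg/dL
print(non_hdl_c(210, 55))  # prints 155
```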

As intense friction between research communities continues to mount, the USDA Dietary Guidelines Advisory Committee has proposed meaningful revisions for 2015.(35)  Recommended ceilings for the consumption of dietary cholesterol—currently set at 300 milligrams per day—will likely be rescinded, overturning forty years of advisory precedent.  Further restriction of refined sugars was also advised.  On the other hand, the Committee apparently remains committed to the D-Hh and long-standing recommendations to reduce intake of SFA.

In an editorial response to Zoe Harcombe’s recent condemnation of the original (and persisting) government guidelines, British cardiologist Rahul Bahl reflected on their empirical support overall. Given the results of more recent analyses detailed here, he found Harcombe et al.’s results and conclusions “unsurprising” but not necessarily convincing.(36) “There remain reasons to postulate a causal connection,” he argued, “between fat consumption and heart disease.” First, the epidemiological and ecological evidence suggest such a link and, second, we should expect certain randomized controlled trials to produce negative results given the capriciousness of human behavior. On the other hand, Bahl reasoned,

There is certainly a strong argument that an overreliance in public health on saturated fat as the main dietary villain for cardiovascular disease has distracted from the risks posed by other nutrients such as carbohydrates. Yet replacing one caricature with another does not feel like a solution. It is plausible that both can be harmful or indeed that the relationship between diet and cardiovascular risk is more complex than a series of simple relationships with the proportions of individual macronutrients.

Beyond that, the inherent limitations of nutrition science tend to frustrate the public’s demand for concrete conclusions. Confounding factors, including genetics, are difficult to both identify and account for, especially in cross-sectional observation studies. Assessing the significance of any single risk factor for a chronic disease of multifactorial etiology is a knotty problem at best. And because people consume foods rather than mere nutrients, intake itself complicates the issue. High-fat diets, for example, might be loaded with sugar too, and are often low in fiber, antioxidants, flavonoids, folate, and carotenes. Study methodologies vary considerably as well, and participant behaviors often render findings non-replicable and scientifically suspect. Finally, given the establishment of a paradigm like the D-Hh, ethical considerations often make meaningful test conditions impossible.

Even so, a sensible combination of nutrition science and sound reasoning proves very helpful. First, ignore the popular media. Some truths simply don’t sell advertising. Second, nutritionally irredeemable trans-fats should be eliminated from our diets unceremoniously. Otherwise, as Bahl and others suggest, focus on whole foods over fatty acids and remember that humans never evolved to consume processed foods, including refined fats and oils. Third, learn to prepare and spice lean, unprocessed meats and fresh vegetables to otherwise unembellished satisfaction. Add a few fruits and nuts and avoid everything else—especially refined sugars and starches. Finally, exercise regularly and vigorously, avoid excess adiposity, and don’t smoke. With that, what within our control could possibly go wrong?


(1)Harcombe, Z., J.S. Baker, S.M. Cooper, et al. 2015. Evidence from randomized controlled trials did not support the introduction of dietary fat guidelines in 1977 and 1983: a systematic review and meta-analysis. Open Heart 2015;2:e000196. DOI:10.1136/openhrt-2014-000196.

(2)Kolata, G. 1985. Heart panel’s conclusions questioned. Science 227:40-41.

(3)Lipid Research Clinics Program. 1984. The Lipid Research Clinics Coronary Primary Prevention Trial. 1. Reduction in incidence of coronary heart disease. Journal of the American Medical Association 251(3):351-364.

(4)Steinberg, D. 1985. Heart panel’s conclusions. Science 227:582.

(5)Anitschkow, N., S. Chalatov, C. Muller, et al. 1913. Über experimentelle Cholesterinsteatose: Ihre Bedeutung für die Entstehung einiger pathologischer Prozesse. Zentralblatt für Allgemeine Pathologie und Pathologische Anatomie 24:1-9.

(6)Keys, A. 1953. Atherosclerosis: a problem in newer public health. Journal of Mt. Sinai Hospital, New York 20(2):118-139.

(7)Yerushalmy, J. and H. Hilleboe. 1957. Fat in the diet and mortality from heart disease. A methodological note. New York State Journal of Medicine 57:2343-54.

(8)DiNicolantonio, J. 2014. The cardiometabolic consequences of replacing saturated fats with carbohydrates or omega-6 polyunsaturated fats: Do the dietary guidelines have it wrong? Open Heart 2014;1:e000032. doi:10.1136/openhrt-2013-000032.

(9)Keys, A. 1970. Coronary heart disease in seven countries. Circulation 41(Suppl. 1):1-211.

(10)Teicholz, N. 2014. The big fat surprise: why butter, meat & cheese belong in a healthy diet. NY: Simon & Schuster.

(11)Blackburn, H. 2014. In defense of U research: The Ancel Keys legacy. The Star Tribune (July 17). Online at

(12)Ravnskov, U. 1998. The questionable role of saturated and polyunsaturated fatty acids in cardiovascular disease. Journal of Clinical Epidemiology 51(6):443-460.

(13)Broad, W.J. 1980. Academy says curb on cholesterol not needed. Science 208:1354-55.

(14)Nestle, M. 2013. Food Politics: How the food industry influences nutrition and health. Berkeley: University of California Press.

(15)Jakobsen, M.U., E.J. O’Reilly, B.L. Heitmann, et al. 2009. Major types of dietary fat and risk of coronary heart disease: a pooled analysis of 11 cohort studies. American Journal of Clinical Nutrition 89:1425-32.

(16)Siri-Tarino, P.W., Qi Sun, F.B. Hu, and R.M. Krauss. 2010a. Meta-analysis of prospective cohort studies evaluating the association of saturated fat with cardiovascular disease. American Journal of Clinical Nutrition 91:535-46.

(17)Siri-Tarino, P.W., Qi Sun, F.B. Hu, and R.M. Krauss. 2010b. Saturated fat, carbohydrate, and cardiovascular disease. American Journal of Clinical Nutrition 91:502-09.

(18)Letters to the editor. 2010. American Journal of Clinical Nutrition. 92:458-61.

(19)Stamler, J. 2010. Diet-Heart: a problematic revisit. American Journal of Clinical Nutrition 91:497-99.

(20)Ramsden, C.E., J.R. Hibbeln, S.F. Majchrzak, and J.M. Davis. 2010. n-6 fatty acid-specific and mixed polyunsaturated dietary interventions have different effects on CHD risk: a meta-analysis of randomized controlled trials. British Journal of Nutrition 104:1586-1600.

(21)Astrup, A., J. Dyerberg, P. Elwood, et al. 2011. The role of reducing intakes of saturated fat in the prevention of cardiovascular disease: where does the evidence stand in 2010? American Journal of Clinical Nutrition 93:684-88.

(22) Indeed, Masai serum cholesterol levels were minimal. Their autopsies, however, later revealed severe atherosclerotic lesions.

(23)Malhotra, A. 2013. Saturated fat is not the major issue: Let’s bust the myth of its role in heart disease. British Medical Journal 347:f6340.

(24)Schwingshackl, L. and G. Hoffmann. 2014. Dietary fatty acids in the secondary prevention of coronary heart disease: A systematic review, meta-analysis and meta-regression. British Medical Journal Open 4:e004487.

(25)Ramsden, C.E., D. Zamora, L. Boonseng, et al. 2013. Use of dietary linoleic acid for secondary prevention of coronary heart disease and death: Evaluation of recovered data from the Sydney Diet Heart Study and updated meta-analysis. British Medical Journal 346:e8707.

(26)Ravnskov, U., J. DiNicolantonio, Z. Harcombe, et al. 2014. The questionable benefits of exchanging saturated fat with polyunsaturated fat. Mayo Clinic Proceedings 89(4):41-53.

(27)DiNicolantonio, J. 2014. The cardiometabolic consequences of replacing saturated fats with carbohydrates or Ω-6 polyunsaturated fats: Do the dietary guidelines have it wrong? Open Heart 1:e000032.

(28)Chowdhury, R., S. Warnakula, S. Kunutsor, et al. 2014. Association of dietary, circulating, and supplement fatty acids with coronary risk: A systematic review and meta-analysis. Annals of Internal Medicine 160:398-406.

(29)Bazzano, L.A., T. Hu, K. Reynolds, et al. 2014. Effects of low-carbohydrate and low-fat diets: A randomized trial. Annals of Internal Medicine 161(5):309-318.

(30)Farvid, M.S., D. Ming, P. An, et al. 2014. Dietary linoleic acid and risk of coronary heart disease: A systematic review and meta-analysis of prospective cohort studies. Circulation 130:1568-1578.

(31)Willett, W. 2014. Dietary fat and heart disease study is seriously misleading. (posted March 19, 2014).

(32)ABC News. 2014. Low-carb diet trumps low-fat diet in weight-loss study. (posted September 2, 2014).

(33)Kuipers, R.S., D.J. de Graff, M.F. Luxwolda et al. 2011. Saturated fat, carbohydrates and cardiovascular disease. Netherlands Journal of Medicine 69(9):372-378.

(34)Volk, B.M., L.J. Kunces, D.J. Freidenreich et al. 2014. Effects of step-wise increases in dietary carbohydrates on circulating saturated fatty acids and palmitoleic acid in adults with metabolic syndrome. PLOS ONE DOI:10.1371/journal.pone.0113605.

(35)Scientific Report of the 2015 DGAC; O’Connor, A. 2015. Nutrition panel calls for less sugar and eases cholesterol and fat restrictions. New York Times.

(36)Bahl, R. 2015. The evidence base for fat guidelines: a balanced diet. Open Heart 2015;2:e000229. DOI:10.1136/openhrt-2014-000229.


Religion and Violence: A Conceptual, Evolutionary, and Data-Driven Approach (Cover Article).

by Kenneth W. Krause.


For a recent broadcast of Real Time with Bill Maher, the impish host arranged a brief “debate” between neuroscientist and popular religion critic, Sam Harris, and movie actor Ben Affleck.(1)  The timely topic for consideration, of course, was Muslim violence.  The exchange warmed up quickly.  Harris pronounced Islam “the mother-lode of bad ideas” and Affleck scorned his opponent’s attitude as “gross” and “racist.”  Sound-bites duly served, the discussion ended almost as soon as it began.

But the Harris-Affleck affair wasn’t a complete waste of electricity.  If nothing else, it exposed a gaping intellectual void in the dialogue over the relationship between religion and hostility.  Unfortunately, this debate has long been dominated by extreme or undisciplined claims on both sides.  Some suggest, for example, that all organized violence is religiously inspired at some level, while others insist that all religion is entirely benevolent when practiced “correctly.”  These arguments are plainly meritless and compel no response.

Nor can I credit the proposition that religion is often or ever the sole cause of violence.  Organized aggression—whether war, Crusade, Inquisition, lesser jihad, slavery, or terrorism, for instance—typically derives in some measure from greed or political machination.  Similarly, individual violence—honor killing, suicide bombing, genital mutilation, and faith healing, to name a few—usually results from jealousy, bigotry, ideology, or psychopathology in addition to religion.

Some social scientists have argued that religious belligerence ensues from simple prejudice, defined as judgment in the absence of accurate information.  Here, the customary prescription includes education and exposure to a broader diversity of religious tradition.  But as Rodney Stark, co-director at Baylor University’s Institute for Studies of Religion, recently observed, “it is mostly true beliefs about one another’s religion that separates the major faiths.”(2)  Muslims deny Christ’s divinity, for example, and Christians reject Muhammad’s claim as successor to Moses and Jesus.  As such, Stark reasons, education is unnecessary and “increased contact might well result in increased hostility.”


Religion Misunderstood?

More interesting, on the other hand, are a collection of perspectives that both diminish and subordinate the role of religion in violent contexts to that of mere pretense or veneer.  In other words, these writers contend that religion is seldom, if ever, the original or primary cause of aggression.  Rather, they suggest, the sacred serves only as an efficient means of either motivating or justifying what should otherwise be recognized as purely secular violence.

Such is the latest appraisal of Karen Armstrong, ex-Catholic nun and easily the twenty-first century’s most prolific popular historian of religion.  In rapid response to Harris’s televised vilification of Islam, Armstrong enlisted the popular press.  During an inexplicable interview with Salon, she echoed Affleck’s hyperbole, equating Harris’s criticism of Islam to Nazi anti-Semitism.(3)  Such comparisons are absurd, of course, because condemnation of an idea is categorically different from denigration of an entire population, or any member thereof.

But more to the point, Armstrong argued that the very idea of “religious violence” is flawed for two reasons.  First, ancient religion was inseparable from the state and, as such, no aspect of pre-modern life—including organized violence—could have been divided from either the state or religion.  Second, she continued, “all our motivation is always mixed.”  Thus, modern suicide bombing and Muslim terrorism, for example, are more personal and political, according to Armstrong, than religious.

The point was developed further in Fields of Blood, Armstrong’s new history of religious violence:

Until the modern period, religion permeated all aspects of life, including politics and warfare … because people wanted to endow everything with significance. Every state ideology was religious … [and thus every] successful empire has claimed that it had a divine mission; that its enemies were evil …. And because these states and empires were all created and maintained by force, religion has been [wrongly] implicated in their violence.(4)

To the contrary, says the author, religion has consistently stood against aggression.  The Priestly authors of the Hebrew Bible, for instance, believed that warriors were contaminated by violence, “even if the campaign had been endorsed by God.”  Similarly, the medieval Peace and Truce of God graciously “outlawed violence from Wednesday to Sunday.”  And in the past, Sunni Muslims were “loath to call their coreligionists ‘apostates,’ because they believed that God alone knew … a person’s heart.”

So both the ancient and modern problems, Armstrong contends, are not in religion per se, “but in the violence embedded in our human nature and the nature of the state.”  Thus, the “xenophobic theology of the Deuteronomists developed when the Kingdom of Judah faced political annihilation,” and the Muslim practices of al-jihad al-asghar and takfir (the process of declaring someone an apostate or unbeliever) were resuscitated “largely as a result of political tension arising from Western imperialism (associated with Christianity) and the Palestinian problem.”

Some of Armstrong’s claims are no doubt true, but far less relevant than she apparently imagines.  For example, that religion was conjoined with the state did not render it ineffectual in terms of bellicosity—perhaps quite the opposite, as we will soon see.  In other cases, the author’s claims are logically flawed.  For instance, an older version of a tradition is not more “authentic” than its successors simply by virtue of its age.  Also, that violence results from manifold causes does not negate or even diminish the accountability of any contributing influence, including religion.

Ultimately, Armstrong misrepresents the issue entirely by setting up her true intellectual adversaries as conveniently feeble straw men.  “It is simply not true,” she postures, “that ‘religion’ is always aggressive.”  Agreed, but no serious person has ever made that accusation.  If the author’s primary argument is that every (or any) religion isn’t always violent, I can’t help but conclude she wasted a great deal of time and energy supporting it.

Nevertheless, Armstrong’s most recent commentary reminds us that religion generally, and all major religious traditions collectively, are a well-mixed bag.  Indeed, both Buddhism and Jainism were at least founded on the principle of ahimsa, or non-violence.  And, yes, the sacred regularly intertwines with politics and government, sometimes to a degree rendering it indistinguishable from the state itself.  Finally, hostility in the name of religion, whether perpetrated by a state, group, or individual, is frequently motivated by a host of factors in addition to faith.  However, that religion is so often employed as a pretense or veneer to inspire people to violence only tends to confirm its hazardous nature.

A More Methodical Approach.

To more astutely characterize the relationship between religion and violence, and to distinguish between differentially aggressive traditions, we need to apply a more disciplined and less biased method.  Cultural anthropologist David Eller proposes a comprehensive model of violence consisting of five contributing dimensions or conditions that, together, predict a group’s propensity to expand both the scope and scale of hostility.(5)  These dimensions include group integration, identity, institutions, interests, and ideology.

Eller applies his model to religion as follows: First, religion is clearly a group venture featuring “exclusionary membership,”  “collective ideas,” and “the leadership principle, with attendant expectations of conformity if not strict obedience”—often to superhuman authorities deserving of special deference.  Second, sacred traditions offer both personal and collective identities to their adherents that stimulate moods, motivations, and “most critically, actions.”

Next, most faiths provide institutions, perhaps involving creeds, codes of conduct, rituals, and hierarchical offices which at some point, according to Eller, can render the religion indistinguishable from government.  Fourth, all religions aspire to fulfill certain interests.  Most crucially, they seek to preserve and perpetuate the group along with its doctrines and behavioral norms.  The attainment of ultimate good or evil (heaven or hell, for example), the discouragement or punishment of “dissent or deviance,” proselytization and conversion, and opposition to non-believers might be included as well.

Finally, “religion may be the ultimate ideology,” the author avers, “since its framework is so totally external (i.e., supernaturally ordained or given), its rules and standards so obligatory, its bonds so unbreakable, and its legitimation so absolute.”  For Eller, the “supernatural premise” is critical:

This provides the most effective possible legitimation for what we are ordered or ordained to do: it makes the group, its identity, its institutions, its interests, and its particular ideology good and right … by definition. Therefore, if it is in the identity or the institutions or the interests or the ideology of a religion to be violent, that too is good and right, even righteous.

Arguably, the author surmises, “no other social force observed in history can meet those conditions as well as religion.”  And when a given tradition satisfies multiple conditions, “violence becomes not only likely but comparatively minor in the light of greater religious truths.”

Confronting the question at hand, then, and with Armstrong’s historical observations and Eller’s generalized model of violence in mind, I propose a somewhat familiar, though perhaps distinctively limited two-part hypothesis describing potential relationships between religion and aggression.

First, I do not contend that religion is ever the sole, original, or even primary cause of bellicosity.  Such might be the case in any given instance, but for the purpose of determining generally whether faith plays a meaningful role in violence, we need only ask whether the religion is a sine qua non (without which not), or “cause-in-fact,” of the conflict.  Second, although all religions can and often do stimulate a variety of both positive and negative behaviors, clearly not all faiths are identical in their inherent inclination toward hostility.  Indeed, there should be little question that the traditions of Judaism, Christianity, and Islam have all satisfied each of Eller’s conditions with exceptional profusion.  Accordingly, I propose that the Abrahamic monotheisms are either uniquely adapted to the task or otherwise especially capable of inspiring violence from both their followers and non-followers.

Causation, Briefly.

Determining whether a violent act would have occurred absent religious belief can be difficult, to say the least.  Even so, it is insufficient to simply note, as some critics of religion often do, that the Bible prescribes death for a variety of objectively mundane offenses, including adultery (Leviticus 20:10) and taking the Lord’s name in vain (Leviticus 24:16).  And to merely remind us, for example, that Deuteronomy 13:7-11 commands the devoted to stone to death all who attempt to “divert you from Yahweh your God,” or that Qur’an 9:73 instructs prophets of Islam to “make war” on unbelievers, provides precious little evidence upon which to base an indictment of religious conviction.

Sam Harris’s vague declaration, “As man believes, so will he act,” seems entirely plausible, of course, but it is also highly presumptive given that humans frequently hold two or more conflicting beliefs simultaneously.(6)  Nor can we casually assume that every suicide bomber or terrorist has taken inspiration from holy authority—even if he or she is a religious extremist.

On the other hand, there is substantial merit in Harris’s criticism of those faithful who, regardless of the circumstances, “tend to argue that it is not faith itself but man’s baser nature that inspires such violence.”  Again, there can be more than one cause-in-fact for any outcome, especially in the psychologically knotty context of human aggression.  Further, when an aggressor confesses religious inspiration, we should accept him at his word.

So when we are made aware, for example, that one of Francisco Pizarro’s companions, whose fellow soldiers brutalized the Peruvian town of Cajamarca in 1532, had written back to the Holy Roman Emperor Charles V (a.k.a. King Charles I of Spain), recounting that “for the glory of God … they have conquered and brought to our holy Catholic Faith so vast a number of heathens, aided by His holy guidance,” we should concede the rather evident possibility that the Spaniards slaughtered or forcibly converted these natives at least in part because of their religion.(7)

Monotheism Conceptually.

Eller denies that all religion is “inherently” violent.  Nonetheless, he recognizes monotheism’s tendency toward a dualistic, good versus evil, attitude that not only “builds conflict into the very fabric of the cosmic system” by crafting two “irrevocably antagonistic” domains “with the ever-present potential for actual conflict and violence,” but also “breeds and demands a fervor of belief that makes persecution seem necessary and valuable.”

Stark agrees.  Committed to a “doctrine of exclusive religious truth,” he writes, particularistic traditions “always contain the potential for dangerous conflicts because theological disagreements seem inevitable.”  Innovative heresy naturally arises from the religious person’s desire to comprehend scripture thought to be inspired by the all-powerful and “one true god.”  As such, Stark finds, “the decisive factor governing religious hatred and conflict is whether, and to what degree, religious disagreement—pluralism, if you will—is tolerated.”(8)

Indeed, many modern-era writers before me have distinguished monotheism as an exceptionally belligerent force.  Sigmund Freud, for example, argued in 1939 that “religious intolerance was inevitably born with the belief in one God.”(9)  More recently, Jungian psychologist, James Hillman, concurred: “Because a monotheistic psychology must be dedicated to unity, its psychopathology is intolerance of difference.”(10)  Even Karen Armstrong agreed when writing in her late fifties.  Of the faiths of Abraham, she reflected, “all three have developed a pattern of holy war and violence that is remarkably similar and which seems to surface from some deep compulsion inherent in this tradition of monotheism, the worship of only one God.”(11)

Author Jonathan Kirsch, however, addressed the issue directly in 2004, comparing the relative bellicosity of polytheistic and monotheistic traditions.  Noting the early dominance of the former over the latter, Kirsch described their most profound dissimilarity:

[F]atefully, monotheism turned out to inspire a ferocity and even a fanaticism that are mostly absent from polytheism. At the heart of polytheism is an open-minded and easygoing approach to religious belief and practice, a willingness to entertain the idea that there are many gods and many ways to worship them. At the heart of monotheism, by contrast, is the sure conviction that only a single god exists, a tendency to regard one’s own rituals and practices as the only proper way to worship the one true god.(12)

Former professor of religion, Edward Meltzer, adds that for the monotheist, “all divine volition must have one source, and this entails the attribution of violent and vengeful actions to one and the same deity and makes them an indelible part of the divine persona.”  Meanwhile, polytheists “have the flexibility of compartmentalizing the divine” and to “place responsibility for … repugnant actions on certain deities, and thus to marginalize them.”(13)

For Kirsch, the Biblical tale of the golden calf reveals an exceptional belligerence in the faiths of Abraham.  After convincing a pitiless and indiscriminate Yahweh not to obliterate every Israelite for worshiping the false idol, Moses nonetheless organizes a “death squad” to murder the 3000 men and women (to “slay brother, neighbor, and kin,” according to Exodus 32:27) who actually betrayed their strangely jealous god.

In the Pentateuch and elsewhere, Kirsch elaborates, “the Bible can be read as a bitter song of despair as sung by the disappointed prophets of Yahweh who tried but failed to call their fellow Israelites to worship of the True God.”  “Fatefully,” the author continues, the prophets—like their wrathful deity—“are roused to a fierce, relentless and punishing anger toward any man or woman who they find to be insufficiently faithful.”

This ultimate and non-negotiable “exclusivism” of worship and belief, Kirsch concludes, comprises the “core value of monotheism.”  And “the most militant monotheists—Jews, Christians and Muslims alike—embrace the belief that God demands the blood of the nonbeliever” because the foulest of sins is not lust, greed, rape, or even murder, but “rather the offering of worship to gods and goddesses other than the True God.”

Indeed, the historical plight of these faiths’ Holy City seems to bear credible testimony to Kirsch’s rendering.  As Biblical archeologist Eric Cline observed a decade ago, Jerusalem has suffered 118 separate conflicts in the past four millennia.  It has been “completely destroyed twice, besieged twenty-three times, attacked an additional fifty-two times, and captured and recaptured forty-four times.”  The city has endured twenty revolts and “at least five separate periods of violent terrorist attacks during the past century.”  Ironically, the “Holy Sanctuary” has changed hands peacefully only twice during the last four thousand years.(14)

For anthropologist Hector Avalos, Jerusalem figures prominently in this discussion as a religiously-defined “scarce resource.”  Of course many social scientists have attributed hostility to competition over limited resources.  Avalos, however, argues that the Abrahamic faiths have created from whole cloth four categories of scarce resource that render them especially prone to the inducement of recurrent and often shocking acts of violence.(15)

Sacred spaces and divinely inspired or otherwise authoritative scriptures comprise the author’s first and second categories.  Such spaces and scriptures are scarce because only certain people will ever receive access to or be ordained with the power to control or interpret them.  Group privilege and salvation constitute Avalos’ third and fourth categories, neither of which will be conferred on a person, consistent with religious tradition, except under extraordinary circumstances.  Obviously, all such resources are related and, in many ways, interdependent.

To emphasize the point, Regina Schwartz, director of the Chicago Institute for Religion, Ethics, and Violence, employs the Biblical story of Cain and Abel.  In the book of Genesis, the first brothers offer dissimilar sacrifices to God, who favors Abel’s offering, but not Cain’s.  And so the gifting is transformed into a competition for God’s blessing, apparently a commodity in very limited supply.  Denied God’s approval—and now God’s preference—Cain murders Abel in a jealous rage.  Here, Schwartz finds, “monotheism is depicted as endorsing exclusion and intolerance,” and the scarce resource of “divine favor” as “inspiring deadly rivalries.”(16)

In the religious milieu, Avalos argues, scarcity is markedly more tragic and immoral because the alleged existence of these resources is ultimately unverifiable and, according to empirical standards, not scarce at all.  Even so, for religionists the stakes are not only real, but as high as one could possibly imagine.  Control over such resources, after all, determines everlasting bliss or torment for both one’s self and all others.  Assuming belief, at least in the context of scarce resource theory, what is not worth fighting, killing, or even dying for?

The Evolution of Monotheism.

The God of Abraham was created not only in the image of man, says professor of psychiatry, Hector Garcia, but far more revealingly in the images of alpha-male humans and their non-human primate forebears.  It is no accident (and certainly no indication of credibility), Garcia continues, that the majority of all religionists worship a god who is “fearsome and male,” who “demands reckoning” and “rains fury upon His enemies and slaughters the unfaithful,” and who is portrayed in the holy texts as “policing the sex lives of His subordinates and obsessing over sexual infidelity.”(17)

No more an accident, that is, than the evolutionary process of natural selection and differential reproduction.  Why would an eternal, non-material, and all-powerful divinity like Yahweh, Allah, or Christ, Garcia asks, preoccupy himself with “what are ultimately very human, and very apelike” concerns?  That such a god would need to assert and maintain dominance by threat or physical aggression, for example, or to use violence “to obtain evolutionary rewards such as food, territory, and sex,” seems unfathomable.

Until, that is, one comes to recognize the Abrahamic gods as the highest-ranking alpha-male apes of all time.  In that light, these divinities “reflect the essential concerns of our primate evolutionary past—namely, securing and maintaining power, and using that power to exercise control over material and reproductive resources.”  In other words, to help them cope during a particularly brutal era, the male authors of the Abrahamic texts fashioned a god “intuitive to their evolved psychology,” and, as history demonstrates, “with devastating consequences.”

Rules of reciprocity govern the social lives of non-human primates (which scientists routinely study as surrogates for the ancestors of modern humans).  When fights break out among chimpanzees, for instance, those who have previously received help from the victim are much more likely than others to answer his calls.  And apes that are called but fail to respond are far more likely to be ignored or even attacked rather than helped if and when they plead for assistance during future altercations.  Dominant males also rely on alliances to maintain rank and will punish subordinates that so much as groom or share food with their rivals.  In fact, many researchers calculate that the most common intra-society cause of ape aggression is the perceived infraction of social rules—many of which enforce reciprocity and maintain alliances.

Like their primate ancestors, men have long sought alliances with their dominant alpha-gods.  Extreme examples abound in our sacred texts.  In Genesis 22:1-19, Abraham’s willingness to sacrifice Isaac, his own son, demonstrates his unflinching submissiveness to God, who “reciprocates in decidedly evolutionary terms,” according to Garcia, by offering Abraham and his descendants the ultimate ally in war.  Similarly, in Judges 11:30-40, Jephthah sacrifices his daughter as a “burnt offering” to Yahweh for help in battle against the Ammonites.

But gods have rivals too; and strangely—except from an evolutionary perspective—so do omnipotent gods.  Created by dominant men, these divinities are expressly jealous.  And like their primate forebears, they build and enforce alliances with their followers against all divine rivals.  As Exodus 22:20 warns, “He who sacrifices to any god, except to the LORD only, he shall be utterly destroyed.”  But as an earthly extension of loyalty, God requires action as well.  Muslims, for example, are expected to “fight those of the unbelievers who are near to you and let them find in you hardness.” (Sura 9:123).

Thus, monotheism not only establishes in- and out-groups with evolutionary efficiency, it also intensifies and legitimizes them.  The founding texts are capable of removing all compassion from the equation (“thine eye shall have no pity on them” [Deut. 7:16]), thus leaving all manner of brutality permissible (“strike off their heads and strike off every fingertip of them” [Sura 8:12]).  The First Crusade offers just one bloody case in point.  Accounts of the Christian attack on Jerusalem in 1099 document the slaughter of nearly 70,000 Muslims.  The faithful reportedly burned the Jews, raped the women, and dashed their babies’ wailing heads against posts.  As a campaign waged against a religiously-defined “other,” this assault was considered unequivocally righteous.

As a second, more sexually-oriented, illustration of the alpha-God parable, Garcia offers Catholic Spain’s late sixteenth- and early seventeenth-century conquest of the Pueblo Indians in New Mexico.  Here, the incursion didn’t end with the violent acquisition of territory.  In striking resemblance to the behaviors of dominant male non-human primates, Christian occupiers emasculated their native male rivals, cloistered their women, and appropriated their mating opportunities.

The Spaniards began, of course, by claiming the natives’ territory in the name of Christ and God.  They destroyed their prisoners’ religious buildings and icons and, as many male animals do, marked their newly pilfered grounds.  Catholic iconography was erected while the most powerful medicine men were persecuted and killed.  Conquistador and governor of the New Mexico province, Juan de Onate, neutralized all capable men over the age of twenty-five by hacking away one of their feet.(18)

Meanwhile, the Franciscan friars were tasked with their captives’ spiritual conquest.  To install themselves as earthly dominant males, the friars undermined the existing male rank structure through public humiliation.  Native sons were forced to watch helplessly as the Franciscans literally seized, twisted, and in some cases tore away their fathers’ penises and testicles, rendering them both socially submissive and sexually impotent.  “Indian men were to sexually acquiesce to Christ, the dominant male archetype,” says Garcia, “and the Franciscans exercised extreme brutality to accomplish such subservience, to include attacking genitalia in the style of male apes and monkeys.”

The friars hoarded the native women in cloisters, thus acquiring exclusive sexual access—which was sometimes but not always voluntary.  Inquisitorial court logs documented numerous incidents of violence which were seldom if ever prosecuted.  One example involved Fray Nicolas Hidalgo of the Taos Pueblo who fathered a native woman’s child after strangling her husband and violating her.  Another friar, Luis Martinez, was accused of raping a native girl, cutting her throat, and burying her body under his cell.  In these brutal but, to primatologists, eerily familiar cases, Garcia writes, “we can easily spy male evolutionary paradigms grinding their way across the Conquista—the sexual domination of men, the sexual acquisition of females, and differential reproduction among despotic men—all strongly within a religious context.”

But the most unnerving evolutionary strategy among male animals, especially apes and monkeys, is infanticide.  Typically only males attempt it, and often after toppling other males from power.  The reproductive advantage is unmistakable.  Killing another male’s offspring eliminates the killer’s (and his male progeny’s) future competition for females.  In many species, the practice also sends the offended mother immediately into estrus, providing the killer with additional reproductive access.  Perhaps counterintuitively, the mothers also have much to gain by mating with their infants’ slayers because infanticidal males are genetically more likely to produce infanticidal, and thus more evolutionarily fit, offspring.

Unfortunately, this disturbing pattern is replicated in modern humans.  As Garcia notes, the number of child homicides committed by stepfathers and boyfriends is substantially higher—in some instances, up to one hundred times higher—than the number committed by biological fathers.  And we know that genetics are involved in this pattern because it occurs across cultures and geographic regions, including the United States, Canada, and Great Britain.

Perhaps unsurprisingly at this point, the evolutionary strategy of infanticide is also reflected in religion.  In the Bible, for example, God orders his followers to “kill every male among the little ones” along with “every woman who has known man lying with him.” (Numbers 31:17-18)  The virgins, of course, are to be enslaved for sexual amusement.  Also, in his prophecy against Babylon, God declares that the doomed city’s “infants will be dashed to pieces” as their parents look on. (Isaiah 13:16)  This time, the hapless infants’ mothers will be “violated” as well.

It is no mere coincidence, Garcia argues, that mostly men have claimed to know what God wants.  Dominant human males have inherited their most basic desires from our primate ancestors.  Interestingly, their omnipotent and immortal God is frequently portrayed as possessing identical earthly cravings.  He demands territory and access to women, for example.  And from an objective perspective, this God’s desires serve only to justify the ambitions of the most powerful men.

As natural history would predict, human males have relentlessly pursued—and continue to pursue—the monopolization of territorial and sexual resources through “fear, submission, and unquestioning obeisance.”  The alpha-God expects and accepts no less.  Most regrettably, however, “men have claimed this dominant male god’s backing while perpetrating unspeakable cruelties—including rape, homicide, infanticide, and even genocide.”

Modern Islam.

Sam Harris believes we are at war with Islam.  “It is not merely that we are at war with an otherwise peaceful religion that has been ‘hijacked’ by extremists,” he argues.  “We are at war with precisely the vision of life that is prescribed to all Muslims in the Koran, and further elaborated in the literature of the hadith.”  “A future in which Islam and the West do not stand on the brink of mutual annihilation,” Harris predicts, “is a future in which most Muslims have learned to ignore most of their canon, just as most Christians have learned to do.”(19)

Incendiary rhetoric aside, and given what we know about monotheism generally, is Harris naïve to emphasize Islamic violence?  After all, Western history is saturated with exclusively Christian bloodshed.  Pope Innocent III’s thirteenth-century crusade against the French Cathars, for example, may have ended a million lives.  The French Religious Wars of the sixteenth century between Catholics and Protestant Huguenots left around three million slain, and the seventeenth-century Thirty Years’ War waged by French and Spanish Catholics against Protestant Germans and Scandinavians annihilated perhaps 7.5 million.

Islamic scholar and apostate, Ibn Warraq, doesn’t think so.  Westerners tend to mistakenly differentiate between Islam and “Islamic fundamentalism,” he explains.  The two are actually one and the same, he says, because Islamic cultures continue to receive their Qur’an and hadith literally.  Such societies will remain hostile to democratic ideals, Warraq advises, until they permit a “rigorous self-criticism that eschews comforting delusions of a … Golden Age of total Muslim victory in all spheres; the separation of religion and state; and secularism.”(20)

Likely entailed in this hypothetical transformation would be a religious schism resembling the Christian Reformation in magnitude, one that would wrest scriptural control and interpretation from the clutches of religious and political elites and place them in the hands of commoners.  Only then can a meaningful Enlightenment toward secularism follow.  And as author Lee Harris has opined, “with the advent of universal secular education, undertaken by the state, the goal was to create whole populations that refrained from solving their conflicts through an appeal to violence.”(21)

In the contemporary West, Rodney Stark concurs, “religious wars seldom involve bloodshed, being primarily conducted in the courts and legislative bodies.”(22)  In the United States, for example, anti-abortion terrorism might be the only exception.  But such is clearly not the case in many Muslim nations, where religious battles continue and are now “mainly fought by civilian volunteers.”  In fact, data recently collected by Stark appears to support Sam Harris’s critique rather robustly.

Consulting a variety of worldwide sources, Stark assembled a list of all religious atrocities that occurred during 2012.(23)  In order to qualify, each attack had to be religiously motivated and result in at least one fatality.  Attacks committed by government forces were excluded.  In the process, Stark’s team “became deeply concerned that nearly all of the cases we were finding involved Muslim attackers, and the rest were Buddhists.”  In the end, they discovered only three Christian assaults—all “reprisals for Muslim attacks on Christians.”

The reports yielded 808 religiously motivated incidents.  A total of 5026 persons died—3774 Muslims, 1045 Christians, 110 Buddhists, 23 Jews, 21 Hindus, and 53 seculars.  Most were killed with explosives or firearms but, disturbingly, twenty-four percent died from beatings or torture perpetrated not by deranged individuals, but rather by “organized groups.”  In fact, Stark details, many reports “tell of gouged out eyes, of tongues torn out and testicles crushed, of rapes and beatings, all done prior to victims being burned to death, stoned, or slowly cut to pieces.”
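Stark’s reported breakdown is internally consistent; as a quick sanity check (a minimal sketch, using only the figures quoted above):

```python
# Deaths by victims' religious affiliation, as quoted from Stark's 2012 tally.
deaths = {
    "Muslims": 3774,
    "Christians": 1045,
    "Buddhists": 110,
    "Jews": 23,
    "Hindus": 21,
    "Seculars": 53,
}

# The per-group figures sum exactly to the reported overall death toll.
total = sum(deaths.values())
print(total)  # 5026
```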

Table 1:  Incidents of Religious Atrocities by Nation (2012).

Nation Number of Incidents
Pakistan 267
Iraq 119
Nigeria 106
Thailand 52
Syria 44
Afghanistan 27
Yemen 22
India 20
Lebanon 20
Egypt 15
Somalia 14
Myanmar 11
Kenya 9
Russia 7
Sudan 7
Iran 6
Israel 6
Mali 6
Indonesia 5
Philippines 5
China 4
France 4
Libya 4
Palestinian 4
Algeria 2
Bangladesh 2
Belgium 2
Germany 2
Jordan 2
Macedonia 2
Saudi Arabia 2
Bahrain 1
Bulgaria 1
Kosovo 1
South Africa 1
Sri Lanka 1
Sweden 1
Tajikistan 1
Tanzania 1
Turkey 1
Uganda 1

As Table 1 shows, present-day religious terrorism almost always occurs within Islam.  Seventy percent of the atrocities took place in Muslim countries, and seventy-five percent of the victims were Muslims slaughtered by other Muslims, often the result of majority Sunni killing Shi’ah (the majority only in Iran and Iraq).  Pakistan (80 percent Sunni) ranked first in 2012, likely due to its chronically weak central government and the contributions of al-Qaeda and the Taliban.

Christians accounted for twenty percent (159) of all documented victims.  Eleven percent of those (17) were killed in Pakistan, but nearly half (79) were slain in Nigeria, frequently by members of Boko Haram, whose name is often translated from the Hausa language as “Western education is forbidden.”  Formally known as the Congregation and People of Tradition for Proselytism and Jihad, Boko Haram was founded in 2002 to impose Muslim rule on 170 million Nigerians, nearly half of whom are Christian.  Some estimate that Boko Haram jihadists—funded in part by Saudi Arabia—have slaughtered more than 10,000 people in the last decade.

Such attacks are indisputably perpetrated by few among many Muslims.  But whether the Muslim world condemns religious extremism, even religious violence, is another question.  According to Stark, “it is incorrect to claim that the support of religious terrorism in the Islamic world is only among small, unrepresentative cells of extremists.”  In fact, recent polling data tends to demonstrate “more widespread public support than many have believed.”

Shari’a, the religious law and moral code of Islam, is considered infallible because it derives from the Qur’an, tracks the examples of Muhammad, and is thought to have been given by Allah.  It controls everything from politics and economics to prayer, sex, hygiene, and diet.  The expressed goal of all militant Muslim groups, Stark argues, is to establish Shari’a everywhere in the world.

Table 2:  Percent of Muslims Who Think . . .

Nation  |  Shari’a must be the ONLY source of legislation  |  Shari’a must be a source of legislation  |  Total
Saudi Arabia 72% 27% 99%
Qatar 70% 29% 99%
Yemen 67% 31% 98%
Egypt 67% 31% 98%
Afghanistan 67% 28% 95%
Pakistan 65% 28% 93%
Jordan 64% 35% 99%
Bangladesh 61% 33% 94%
United Arab Emirates 57% 40% 97%
Palestinian Territories 52% 44% 96%
Iraq 49% 45% 94%
Libya 49% 44% 93%
Kuwait 46% 52% 98%
Morocco 41% 55% 96%
Algeria 37% 52% 89%
Syria 29% 57% 86%
Tunisia 24% 67% 91%
Iran 14% 70% 84%

Gallup World Polls from 2007 and 2008 show that nearly all Muslims in Muslim countries want Shari’a to play some role in government.(24)  As Table 2 illustrates, the degree of desired implementation varies from nation to nation.  Strikingly, however, a clear majority in ten Muslim countries—and a two-thirds supermajority in five—want Shari’a to be the exclusive source of legislation.
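Those counts can be read straight off Table 2; a minimal check in Python, with the figures transcribed from the table’s first data column:

```python
# Percent of Muslims, per Gallup, who want Shari'a as the ONLY source of
# legislation (first data column of Table 2).
only_source = {
    "Saudi Arabia": 72, "Qatar": 70, "Yemen": 67, "Egypt": 67,
    "Afghanistan": 67, "Pakistan": 65, "Jordan": 64, "Bangladesh": 61,
    "United Arab Emirates": 57, "Palestinian Territories": 52,
    "Iraq": 49, "Libya": 49, "Kuwait": 46, "Morocco": 41,
    "Algeria": 37, "Syria": 29, "Tunisia": 24, "Iran": 14,
}

# A "clear majority" means more than half; a supermajority, two-thirds or more.
majority = [nation for nation, pct in only_source.items() if pct > 50]
supermajority = [nation for nation, pct in only_source.items() if pct >= 200 / 3]

print(len(majority), len(supermajority))  # 10 5
```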

In 2013, an Egyptian criminal court sentenced Nadia Mohamed Ali and her seven children to fifteen years imprisonment for apostasy.  One could argue, however, that Nadia got off easy because in Egypt the decision to leave Islam is punishable by death.  In fact, death is the mandatory sentence for apostasy in both Afghanistan and Saudi Arabia.  But do such laws garner support from Muslims in general?

That same year, the Pew Forum on Religion and Public Life asked citizens in twelve Islamic nations whether they supported the death penalty for apostasy.(25)  Their responses are reflected in Table 3.  In Egypt, eighty-eight percent of Nadia’s fellow residents would have approved of her and her children’s executions, as would a majority of Jordanians, Afghans, Pakistanis, Palestinians, Djiboutians, and Malaysians.

Table 3:  Death Penalty for People Who Leave the Muslim Religion?

Percent of Muslims Who Favor the Death Penalty for Apostasy
Egypt 88%
Jordan 83%
Afghanistan 79%
Pakistan 75%
Palestinian Territories 62%
Djibouti 62%
Malaysia 58%
Bangladesh 43%
Iraq 41%
Tunisia 18%
Lebanon 17%
Turkey 8%

But from a western perspective, so-called “honor” killing ranks among the most incomprehensible of Muslim customs.  Stark details four truly mindboggling cases:  In one, a young lady was strangled by her own family for the “offense” of being raped by her cousins.  In the other three, girls who eloped, acquired a cell phone, or merely wore slacks that day were hanged or beaten to death.  From 2012 alone, Stark isolated seventy-eight reported honor killings, forty-five of which were committed in Pakistan.

Many protest that simple domestic violence is often misclassified as honor killing.  But, again, Pew survey data seems to suggest otherwise.(26)  Table 4 shows the percentage of Muslims in eleven countries who believe it is often or sometimes justified to kill a woman for adultery or premarital sex in order to protect her family’s honor.  Thankfully, only in Afghanistan and Iraq does a majority (sixty percent) agree.  But in all other Muslim nations polled, a substantial minority—including forty-one percent in Jordan, Lebanon, and Pakistan—appear to approve of these horrific murders as well as their governments’ documented reluctance to prosecute them.

Table 4:  Is it justified for family members to end a woman’s life who engages in premarital sex or adultery in order to protect the family’s honor?

Percent of Muslims Who Responded Sometimes/Often Justified
Afghanistan* 60%
Iraq* 60%
Jordan 41%
Lebanon 41%
Pakistan 41%
Egypt 38%
Palestinian Territories 37%
Bangladesh 36%
Tunisia 28%
Turkey 18%
Morocco 11%

*In these countries, the question was modified to: “Some people think that if a woman brings dishonor to her family it is justified for family members to end her life in order to protect the family’s honor . . .”

Stark also cites a report from the Human Rights Commission of Pakistan.(27)  In 2012 alone, according to that organization, 913 Pakistani females were honor killed—604 following accusations of illicit sexual affairs, and 191 after marriages unapproved by their families.  Six Christian and seven Hindu women were included.

Monotheism Tamed?

Islam is not universally violent, of course.  The same polls, for example, show that few if any British and German Muslims and only five percent of French Muslims agree that honor killing is morally acceptable.  But the data from Islamic nations tend, first, to support the proposition that Abrahamic monotheism is uniquely adapted to inspire violence and, second, to demonstrate that the belief in one god continues to fulfill this exceptionally vicious legacy.  It is no accident, for example, that nearly all Muslims in these countries are particularists, believing that “Islam is the one true faith leading to eternal life.”(28)

On the other hand, Westerners ought not to conclude from these polls that the perils of monotheism are confined to the geographic regions surrounding North Africa and the Middle East.  Even in the distant United States, for example, children continue to die needlessly because their Christian parents reject science-based medicine in favor of “prayer healing.”(29)  Enduring tragedies of this ilk would seem unimaginable in the absence of religious devotion to an allegedly all-powerful, ultra-dominant god.


(1)  Real Time with Bill Maher: Ben Affleck, Sam Harris and Bill Maher Debate Radical Islam (HBO). 2014. (posted October 6, 2014).

(2)  Stark, R. and K. Corcoran. 2014. Religious Hostility: A Global Assessment of Hatred and Terror. Waco, TX: ISR Books.

(3)  Schulson, M. 2014. Karen Armstrong on Sam Harris and Bill Maher. (posted November 23, 2014).

(4)  Armstrong, Karen. 2014. Fields of Blood: Religion and the History of Violence. NY: Knopf.

(5)  Eller, Jack David. 2010. Cruel Creeds, Virtuous Violence: Religious Violence across Culture and History. NY: Prometheus.

(6)  Harris, S. 2005. The End of Faith: Religion, Terror, and the Future of Reason. NY: W.W. Norton.

(7)  Diamond, J. 1997. Guns, Germs, and Steel: The Fates of Human Societies. NY: W.W. Norton.

(8)  Stark, R., and K. Corcoran. 2014. Religious Hostility.

(9)  Freud, S. 1967. Moses and Monotheism. NY: Vintage.

(10)  Hillman, J. 2005. A Terrible Love of War. NY: Penguin.

(11)  Armstrong, Karen. 2001. Holy War: The Crusades and Their Impact on Today’s World. NY: Anchor Books.

(12)  Kirsch, J. 2004. God Against the Gods: The History of the War Between Monotheism and Polytheism. NY: Viking Compass.

(13) Meltzer, E. 2004. “Violence, Prejudice, and Religion: A Reflection on the Ancient Near East,” in The Destructive Power of Religion: Violence in Judaism, Christianity, and Islam (Volume 2: Religion, Psychology, and Violence), ed. J. Harold Ellens. Westport, CT: Praeger.

(14)  Cline, E.H. 2004. Jerusalem Besieged: From Ancient Canaan to Modern Israel. Ann Arbor: University of Michigan Press.

(15)  Avalos, H. 2005. Fighting Words: The Origins of Religious Violence. Amherst, NY: Prometheus.

(16)  Schwartz, R. 2006. “Holy Terror,” in The Just War and Jihad: Violence in Judaism, Christianity, & Islam, ed. R.J. Hoffman. Amherst, NY: Prometheus.

(17)  Garcia, H. 2015. Alpha God: The Psychology of Religious Violence and Oppression. Amherst, NY: Prometheus.

(18)  Gutierrez, R. 1991. When Jesus Came, the Corn Mothers Went Away: Marriage, Sexuality, and Power in New Mexico, 1500-1846. Stanford: Stanford University Press.

(19)  Harris, S. The End of Faith.

(20)  Warraq, Ibn. 2003. Why I Am Not a Muslim. Amherst, NY: Prometheus.

(21)  Harris, L. 2007. The Suicide of Reason: Radical Islam’s Threat to the West. NY: Basic Books.

(22)  Stark, R., and K. Corcoran. 2014. Religious Hostility.

(23)  Stark’s sources included the Political Instability Task Force Worldwide Atrocities Data Set, Tel Aviv University’s annual report on worldwide anti-Semitic incidents, the U.S. Commission on International Religious Freedom’s annual report for 2013, and the U.S. State Department’s International Religious Freedom Report, 2013.

(24)  The Gallup World Poll studies have surveyed at least one thousand adults in each of 160 countries (having about 97 percent of the world’s population) every year since 2005.

(25)   The World’s Muslims: Religion, Politics and Society. 2013. (posted April 30, 2013) and

(26)  Ibid.

(27)  State of Human Rights in Pakistan in 2012. Islamabad, Pakistan, May 4, 2013.

(28)  Pew Forum on Religion and Public Life, The World’s Muslims: Religion Politics and Society. (Washington, DC, 2013).

(29)  Hall, H. 2013. Faith Healing: Religious Freedom vs. Child Protection. (posted November 19, 2013).

Why Gay and Lesbian: A New Epigenetic Proposal.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at

The persistence of homosexuality among certain animal species, including humans, has bewildered scientists at least since the time of Darwin.  Why should same-sex attraction persist when evolution assumes reproductive success?  Does homosexuality—especially among humans—facilitate the intergenerational transfer of genetic material in some other way?  Or perhaps it advances an entirely different objective that justifies its more obvious procreative disadvantage.  Such questions have long attracted gene-based explanations for homosexuality.

Consider “kin selection,” for example.  As E.O. Wilson first suggested in 1975, maybe human homosexuals are like sterile female worker bees that assist the queen in reproduction.  One study of homosexual men, known in Independent Samoa as fa’afafine, revealed that gays are significantly more likely than straight men to help their siblings raise children.

But to satisfy the kin selection hypothesis, each gay must account for the survival of at least two sibling-born children for every one he fails to reproduce—a difficult standard to attain.  In any case, relevant studies in the U.S. and U.K. have failed to provide such evidence.

As a possible explanation for male homosexuality, other researchers have offered the “fertile female” hypothesis.  Here, a genetic tendency toward androphilia, or attraction to males—though problematic for men from an evolutionary perspective—is thought to enhance the reproductive success of their straight, opposite-sex relatives by rendering them hyper-sexual.

At least two studies have claimed results in support of the fertile female model.  Notably, this hypothesis is also capable of explaining why gayness persists at a constant but low frequency of about eight percent in the general global population.

A former faculty member at Harvard Medical School and the Salk Institute, neuroscientist Simon LeVay favors evidence suggesting a suite of several “feminizing” genes (LeVay 2011).  The inheritance of a limited number of these genes, LeVay proposes, will make males, for instance, more attractive to females—and thus presumably more successful in terms of reproduction—by rendering them less aggressive and more empathetic, for example.

But a few men in the family tree will receive “too many” feminizing genes and, as a result, be born gay.  Indeed, one Australian study has discovered that gender-atypical traits do enhance reproduction, and that heterosexuals with homosexual twins achieved more opposite-sex partnerships than heterosexuals without homosexual twins—though statistical significance was observed only among females.

Even so, most explanations are not based solely in genetics.  Evidence suggests as well, for example, that a variety of mental gender traits are shaped during fetal life by varying levels of circulating sex hormones.  Especially during certain critical periods of development, testosterone (T) levels in particular are thought to cause the brain to organize in a more masculine or feminine direction and, later in life, to influence a broad spectrum of gender traits including sexual preference.

For instance, women who suffered from congenital adrenal hyperplasia, and were thus exposed to elevated levels of prenatal T and other androgens, are known to possess gender traits significantly shifted toward masculinity and lesbianism.  Importantly, female fetuses most severely affected by CAH and, thus, most heavily exposed to prenatal androgens are the most likely to experience same-sex attraction later in life.

Similarly, the bodies of male fetuses afflicted with androgen insensitivity syndrome—a condition in which the gene coding for the androgen receptor has mutated—will fail to react normally to circulating T.  As a result, these XY fetuses will later appear as girls and, as adults, share an attraction to men.  In sum, although a number of other factors could be, and likely are, at play, it is now fairly well established that prenatal androgen levels have a substantial impact on sexual orientation in both men and women.

But three researchers working through the National Institute for Mathematical and Biological Synthesis have recently combined evolutionary theory with the rapidly advancing science of both androgen-dependent sexual development and molecular regulation of gene expression to propose a new and provocative epigenetic model to explain both male and female homosexuality (Rice et al. 2012).

According to lead author William Rice at the University of California, Santa Barbara, his group’s hypothesis succeeds not only in squaring homosexuality with natural selection—it also explains why same-sex attraction has been proven substantially heritable even though, one, numerous molecular studies have so far failed to locate associated DNA markers and, two, concordance between identical twins—about twenty percent—is far lower than genetic causation might predict.

At the model’s heart are sex-specific epigenetic modifications, or epi-marks.  Generally speaking, epi-marks can be characterized as molecular regulatory switches attached to genes’ backbones that direct how, when, and to what degree genetic instructions are carried out during an organism’s development.  They are created anew during each generation and are usually “erased” between generations.

But because epi-marks are produced at the embryonic stem cell stage of development—prior to division between soma and germline—they can in theory be transmitted across generations.  Indeed, some evidence does suggest that on rare occasions (though not at scientifically trivial rates) they will carry over, and thus mimic the hereditary effect of genes.

Under typical circumstances, Rice instructs, sex-specific epi-marks serve our species’ evolutionary objectives well by canalizing subsequent sexual development.  In other words, they protect sexually essential developmental endpoints by buffering XX fetuses from the masculinizing effects and XY fetuses from the feminizing effects of fluctuating in utero androgen levels.  Significantly, each epi-mark will influence some sexually dimorphic traits—sexual orientation, for example—but not others.

According to the new model, however, when sex-specific epi-marks manage to escape intergenerational erasure and transfer to opposite-sex offspring, they become sexually antagonistic (SA) and thus capable of guiding the development of sexual phenotypes in a gonad-discordant direction.  As such, Rice hypothesizes, “homosexuality occurs when stronger-than-average SA-epi-marks (influencing sexual preference) from an opposite-sex parent escape erasure and are then paired with weaker-than-average de novo sex-specific epi-marks produced in opposite-sex offspring.”

To summarize, Rice’s team argues that differences in the sensitivity of XY and XX fetuses to the same levels of T might be caused by epigenetic mechanisms.  Normally, such mechanisms would render male fetuses comparatively more sensitive and female fetuses relatively less sensitive to exposure.  But if such epigenetic labels pass between generations, they can influence sexual development.  And if they pass from mother to son or from father to daughter, sexual development can proceed in a manner that is abnormal (or “atypical,” if you prefer).  In those very exceptional cases, offspring brain development can progress in a fashion more likely to result in homosexuality.

Rice’s observations and insights are fascinating, to say the least.  Indeed, popular news reports describe a scientific community highly appreciative of the new model’s theoretical power.   Nevertheless, a great deal of criticism has been tendered as well.

LeVay, for example, describes the authors’ hypothesis generally as “a reasonable one that deserves to be tested—for example by actual measurement of the epigenetic labeling of relevant genes in gay people and their parents.”  He reminded me, however, that Rice hasn’t actually discovered anything.  The new model is in fact pure speculation, says LeVay, and it never should have been reported—as some media have done—as “the cause” (or even as “a cause”) of homosexuality.

More specifically, LeVay offers three points of caution.  First, he warns that an epigenetic explanation is not to any degree implied from the current data on fetal T levels.  When based on single measurements, he concedes, male and female fetuses may indeed show some overlap.  But because T levels fluctuate in both males and females throughout development, allegedly anomalous individuals might easily average completely sex-typical T levels over time.  Second, LeVay sees “little or no evidence” that epi-marks ever escape erasure in humans.

Finally, LeVay continues to favor genetic explanations.  The incidence of homosexuality in some family trees, he says, is more consistent with DNA inheritance than with any known epigenetic mechanism.  Moreover, he warns, we should never underestimate the difficulty of identifying genetic influences—especially with regard to mental traits.  In such cases, complex polygenic origins are far more likely to be at play than single, magic genetic bullets.

Other neuroscientists have posed equally important questions.  How can we test whether the appropriate epi-marks—probably situated in the brain—have been erased?  Is it too simplistic to suggest identical or even similar mechanisms for both male and female homosexuality?  Why is it important to isolate the specific biological causes of same-sex attraction?  By doing so, do we run the risk of further stigmatizing an already beleaguered population?

Rice doesn’t deny his new model’s data deficit.  Nor does he portray the epigenetic influence on same-sex attraction as an exclusive one.  His team does, however, insist that epigenetics is “a probable agent contributing to homosexuality.”  We now have “clear evidence,” they maintain, that “epigenetic changes to gene promoters … can be transmitted across generations and … can strongly influence, in the next generation, both sex-specific behavior and gene expression in the brain.”

The authors contend as well that their hypothesis can be rapidly falsified because it makes “two unambiguous predictions that are testable with current technology.”  First, future large-scale association studies will not identify genetic markers correlated with most homosexuality.  Any such associations found, they say, will be weak.

Second, future genome-wide epigenetic profiles will distinguish differences between homosexuals and non-homosexuals, but only at genes associated with androgen signaling or in brain regions controlling sexual orientation.  Testing this second prediction, they admit, may proceed only with regard to lesbianism by comparing profiles of sperm from fathers with and without homosexual daughters.

To my knowledge, Rice and his colleagues have never squarely addressed the question of whether, for philosophical or sociological reasons, we should refrain from delving further into the dicey subject of same-sex attraction.  Such questions do, however, expose a tendency toward communal repression and a general lack of respect for the scientific enterprise.  These decisions should be left to the scientists and those who fund them.


LeVay, Simon. 2011. Gay, Straight, and the Reason Why: The Science of Sexual Orientation. NY: Oxford University Press.

Rice, W., Friberg, U., and Gavrilets, S. 2012. Homosexuality as a consequence of epigenetically canalized sexual development.  The Quarterly Review of Biology 87(4): 343-368.

What Next for Gay Marriage?

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at

(The US Supreme Court may very well rule on the legality of same-sex marriage bans in 2015.  Originally published in 2011, this article remains informative.)

Voters here in Wisconsin passed a ban on same-sex marriage in the fall of 2006. The following morning, I thoughtlessly tried to console a lesbian coworker by predicting a universal right to marry within a decade. “That’s fine,” she replied blankly, “but that doesn’t help us now.” For me, the gay marriage issue was and will always be no more than a legal and moral abstraction. For my coworker and her long-time partner, on the other hand, the marriage ban was a personal tragedy and a cold, hard slap in the face from their most trusted friends and neighbors.

In November 2008, the citizens of California passed Proposition 8, amending that state’s constitution to ban recognition of same-sex marriages performed thereafter. The initiative passed with 52 percent of the vote. Another lesbian, Kristen Perry, brought suit against California’s then governor, Arnold Schwarzenegger, and then Attorney General, Jerry Brown, asking the court to strike the law as unconstitutional.

On August 4, 2010, U.S. District Judge Vaughn Walker, a Republican appointed by George H. W. Bush, ruled that Proposition 8 had created an “irrational classification” and “unconstitutionally burden[ed] the exercise of the fundamental right to marry.” Thus, according to the federal court, voters in California had violated the Equal Protection and Due Process clauses of the Fourteenth Amendment. On August 16, the Ninth Circuit Court of Appeals ordered Walker’s judgment stayed pending the state’s appeal. One way or another, Perry v. Schwarzenegger is widely expected to reach the United States Supreme Court.

My prediction in 2006 may have been insensitively timed. But it was firmly based on clear trends toward public and legal acceptance of homosexuals in America. In 2003, for example, the Supreme Court overturned Bowers v. Hardwick—precedent from seventeen years past—to strike down a state law criminalizing gay sex. “[A]dults may choose to enter upon this relationship in the confines … of their own private lives and still retain their dignity,” Justice Anthony Kennedy wrote for the majority in Lawrence v. Texas, and “[p]ersons in a homosexual relationship may seek autonomy for these purposes, just as heterosexuals do.”

In isolation those words appear to bode well indeed for homosexuals. But other scraps of Lawrence muddy the jurisprudential waters considerably. First, although the Texas statute discriminated against gay sex on its face, the Court’s ruling was based on due process, not equal protection, grounds. Second, and more importantly, Kennedy’s opinion explicitly warned that his ruling did “not involve … formal recognition to any relationship that homosexual persons seek to enter.”

But Justice Antonin Scalia wasn’t convinced. In his dissent, Scalia protested that the majority’s opinion “dismantles the structure of constitutional law that has permitted a distinction to be made between heterosexual and homosexual unions, insofar as formal recognition in marriage is concerned.” Lawrence “‘does not involve’ the issue of homosexual marriage,” he carped, “only if one entertains the belief that principle and logic have nothing to do with the decision of this Court.”

So, assuming that a legislative ban on gay marriage like Proposition 8 will soon reach the Supreme Court, what’s the likely outcome? For Martha Nussbaum, University of Chicago professor of law and ethics, and author of From Disgust to Humanity: Sexual Orientation & Constitutional Law (Oxford, 2010), the answer depends upon the relative influence of two competing philosophical paradigms.

Based partially in right-wing collectivism, the “politics of disgust” defer to the group and sustain democratic domination of disfavored minorities. Perhaps epitomized by Englishman Lord Devlin’s and American Leon Kass’s views that the average person’s deep-seated aversion toward a given practice is reason enough to make it illegal, disgust cares not whether the practice is actually harmful.


The “politics of humanity,” by contrast, is founded in the tenets of classical liberalism and is categorically anti-collectivist. Exemplified by John Stuart Mill’s libertarian principle that individual freedoms should remain unrestricted except to avoid injury to others, humanity relies upon the imaginative skills inherent in compassion and sympathy and emphasizes equal respect for the dignity of all persons.

According to Nussbaum, the politics of disgust are slowly yielding to the politics of humanity in the U.S. And “[e]ven those who believe that disgust still provides a sufficient reason for rendering certain practices illegal,” she avers, “should agree … that disgust provides no good reason for limiting liberties or compromising equalities that are constitutionally protected.” But constitutional interpretation, of course, is precisely where the ethical rubber hits the political road.

One can certainly argue, as Nussbaum does, that American constitutional jurisprudence has already displayed an increasingly enthusiastic tendency to reject disgust in favor of humanity. In recent decades—especially during times of peace—the Court has afforded equal protection or substantive due process rights to a wide variety of disfavored minorities, including women, blacks, the mentally retarded, members of non-traditional families, and even prisoners.

Indeed, seven years prior to Lawrence, the Court granted a mammoth victory to homosexuals too. In 1992, Colorado passed Amendment 2, a ballot measure disqualifying gays from the benefits of antidiscrimination laws. Proponents justified the ban by contending that homosexuals shouldn’t be afforded “special rights.” Penning the majority opinion in Romer v. Evans as well, Justice Kennedy rejected that characterization of the ban’s effect. “This Colorado cannot do,” he ruled on equal protection grounds: “A state cannot so deem a class of persons a stranger to its laws.”

So, in light of Romer, are states precluded from deeming gay persons strangers to their laws of marriage? Nussbaum remains skeptical. In that case, she explains, “illegitimate intent was written all over the law and its defense.” Romer was “a very narrow holding,” she cautions, offering “little guidance for future antidiscrimination cases involving sexual orientation.” Importantly, Kennedy had subjected Colorado’s law to mere rational basis review as opposed to intermediate or strict scrutiny, meaning that he did not, in Romer, identify homosexuals as a suspect class deserving maximum protection. Which is not to suggest that he couldn’t or wouldn’t do so in a future case, but rather only to point out that other discriminatory state laws, if more shrewdly crafted, might survive the less demanding standard of review.

Thus, Nussbaum reasons, “The secure protection of gays … would seem to require a holding that laws involving that classification, like laws involving race or gender, warrant some form of heightened scrutiny.” In order to induce such a holding, a plaintiff would generally need to convince a court that homosexuality is an immutable characteristic (a contentious proposition nevertheless consistent with available scientific evidence), that homosexuals have suffered a long history of discrimination, and that they remain politically vulnerable.

Somewhat surprisingly, however, Nussbaum argues that state rather than federal courts should manage the issue of gay marriage until democratic majorities can be trusted to support inclusion. Local adjudication, she argues, would shield the U.S. Supreme Court from this particularly hazardous battle in the culture wars, and encourage the kind of robust experimentation inherent in federalism that, hopefully, will result in a more educated polity.

And certain states have already taken that initiative. Nussbaum offers Varnum v. Brien, a 2009 decision delivered by the Supreme Court of Iowa, as ample grounds for optimism. Although only 44 percent of Iowans presently support same-sex marriage, the seven-member court in Varnum struck the local Defense of Marriage Act, and, applying intermediate scrutiny, unanimously ruled that the state had no important interest in denying marriage licenses to its citizens based on sexual orientation.

What Nussbaum could not have known when writing From Disgust to Humanity was that on November 2, 2010, Iowa voters would oust each of the three Varnum justices who were up for retention. The facts surrounding the election make it clear that Iowans were reacting to the previous year’s ruling on gay marriage. The high court justices faced no opponents and needed only 50 percent of the vote to retain their seats. By contrast, all 71 lower court judges on the ballot were easily retained. Incidentally, the anti-retention campaign was heavily financed by out-of-state special-interest groups, including the National Organization for Marriage and the American Family Association.

So, with Iowa in mind, might judges subject to reelection in other states be less inclined to stand up for homosexuals in defiance of local majorities? Gays might be forced to look to the U.S. Supreme Court once again, and to Justice Kennedy, who is likely to cast the deciding vote on a panel equally divided over several social issues. Would Kennedy extend his reasoning in Lawrence and Romer to cover same-sex marriage rights? Or would the committed Catholic Justice, appointed by Ronald Reagan in 1988, draw the line at marriage, a term still rife with religious connotations? Would he defer to democratic majorities, perhaps siding with Scalia the constitutional originalist?

In Justice Kennedy’s Jurisprudence: The Full and Necessary Meaning of Liberty (Kansas, 2009), Frank Colucci, political scientist at Purdue University—Calumet, dispels popular reports of Kennedy’s alleged inconsistency, dissecting the Justice’s public declarations to expose an underlying jurisprudential philosophy of individual rights. Kennedy “employs a moral reading of the Constitution,” Colucci finds, “to enforce individual liberty, [but] not equality, as the moral idea he finds central” to the document. Although he often sides with judicial minimalists and originalists, he does so for different reasons. In fact, Kennedy favors an expansive role for the Court and remains the justice most likely to strike legislation he deems contrary to the Constitution.

Much to Scalia’s irritation, Kennedy’s search for liberty’s parameters ends not in the Constitution’s text or tradition. Rather, his overriding concern seems to be whether government intrusion prevents the individual “from developing his or her distinctive personality or acting according to conscience,” according to Colucci, or demeans a person’s community standing and denigrates his or her “human dignity.” To provide “objective referents” for his constitutional interpretations, the Justice cites sociological research, international law, and emerging political consensus. His moral precepts, the author says, “have clear rhetorical roots in post-Vatican II Catholic social thought.”

In cases dealing with religion specifically, Kennedy has supported “noncoercive” government action, opining in Allegheny County v. Greater Pittsburgh ACLU, for example, that states should be given “some latitude in recognizing and accommodating the central role religion plays in our society.” Then again, in Lawrence, the Justice clearly emphasized a religiously denounced individual right, professing the founders’ insight that “later generations [would] see that laws once thought necessary and proper in fact serve only to oppress.” Similarly, Kennedy was swayed in Roper v. Simmons by recent trends among a very few states and in the world at large before concluding that death was a cruel and unusual punishment for minors.

Although he recognizes Kennedy’s potentially decisive impact on such questions, Colucci does not directly address the prospects for same-sex marriage. Nevertheless, his conclusions seem to portend well for gays. In Kennedy’s constitutional jurisprudence, personal autonomy has trumped democracy. Tradition and precedent are crucial, yes, but do not entirely define the Court’s dynamic and continuing role to “discover” the meaning of individual liberty, perhaps through recent expressions of moral advances made both at home and abroad.

All of which leads me briefly to Proud To Be Right: Voices of the Next Conservative Generation (Harper, 2010), a title unlikely to be cited, one might hastily presume, in support of any article predicting the relatively imminent legalization of homosexual marriage. Here, Jonah Goldberg—founding editor of the National Review Online, contributor to Fox News, and best-selling author of Liberal Fascism—has assembled an impressive band of young and unapologetically conservative writers—some religious, some secular—“who do not yet have a megaphone, but might deserve one.”

In the ironically titled “Liberals Are Dumb,” for example, evangelical Christian blogger Rachel Motte touts the value of a rigorous liberal arts education for every conservative activist who wants to be taken seriously. And in “The Politics of Authenticity,” social conservative Matthew Lee Anderson warns politicians that his peers are considerably less obsessed over sexual mores, but much more concerned about the ethics of conducting business and war than their older, value-voting predecessors. A more intellectual and less personally intrusive conservatism focused on economics and foreign policy? One can only hope.

But particularly relevant to the issue at hand is a refreshingly candid piece from James Kirchik, contributing editor to the New Republic, called “The Consistency of Gay Conservatives.” Though the GOP’s base—presently empowered by the religious Right—remains opposed to gay marriage, Kirchik predicts, support will likely increase as the Republican pool grows younger. Why? Because “the ‘gay agenda’ today,” he says, “is fundamentally conservative.”

Gay activists in California, after all, protest not for “free love,” but only for the right to marry their committed partners. “They want to join this bedrock institution,” Kirchik reminds us, “not tear it apart.” In fact, the prevailing scientific explanation for homosexuality—unmistakably deterministic—is repudiated not so much by conservatism, the author contends, but instead by “Left-wing ‘queer’ theorists, who argue that binary sexuality is a social construct.” A little more food, perhaps, for feminist thought.

Living in the largely rural Midwest, even the least bigoted person is tempted to write off homosexuals for inspiring too few vocal allies and entirely too many powerful foes. Gay marriage remains one of those annoying, distracting, “hot button” political skirmishes in a larger culture war that, quite frankly, never deserved Americans’ precious time and energy in the first place. But the forces of religious bigotry will soon lose this battle, as they have so many others in recent centuries.

Whether the courts recognize and respect it or not, public opinion from nearly every perspective appears to be converging on at least quiet support for same-sex marriage. Thus, the Earth continues to grow a little rounder, the solar system more heliocentric, and the universe ever more capacious. Meanwhile, humanity grows less childlike. As we continue to discover and realize our vast potential, we find less and less occasion for odium and pettiness.


Gender Personality Differences: Planets or P.O. Boxes, Evidence or Ideology?

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at


Why are people so afraid of the idea that the minds of men and women are not identical in every way?—Steven Pinker, 2002.

The mere suggestion that one group of people is cognitively or emotionally distinct from another can leave many of us speechless and squirming in our seats.  The effect is intensified, of course, in the regrettable event of historical discrimination, and especially so when the differences are alleged to be innate.

Scientists of many stripes have bravely confronted, struggled with, and evidently resolved the issue as it pertains to “race.”  Such classifications lack sturdy biological bases, the current consensus holds, and their very existence relies on nothing more concrete or dependable than cultural convention and political expediency (1).

Gender or sex (I use the terms interchangeably here) is similar in some respects, but clearly distinct in others.  Some biological differences between men and women are both unmistakable and abundantly appreciated.  Combat can erupt, however—perhaps most furiously in intellectual circles—over questions involving mental differences and, assuming their existence, over their proposed causes.

In a recent column, I investigated the origins of female “underrepresentation” in high-end STEM fields.  The latest analyses had suggested that, rather than being discriminated against, qualified women tended to choose people-centered over thing-centered professions.  That is, the somewhat narrow mental trait examined was interest.

Other studies have explored broader gender differences in personality—a related and at least equally sensitive domain.  In a highly influential 2005 paper, for example, University of Wisconsin-Madison professor of psychology and women’s studies, Janet Hyde, rebuked the popular media and general public for their apparent fascination with an assumed profusion of deep psychological variances between genders (Hyde 2005).

After reviewing 46 meta-analyses on the subject, Hyde proposed a new model.  The gender similarities hypothesis (GSH) holds that “males and females are similar on most, but not all, psychological variables.”  Because most differences are negligible or small, and because very few are large, Hyde contended, “men and women as well as boys and girls are more alike than they are different.”  Physical aggression and sexuality were offered as exceptions.


But in a new study, “The Distance Between Mars and Venus,” Hyde’s renowned hypothesis was directly and expressly challenged by a trio of Europeans led by Marco Del Giudici, evolutionary psychologist at the University of Turin, Italy (Del Giudici 2012).  Having subjected a sample of 10,261 American men and women between ages 15 and 92 to an assessment of multiple personality variables, Del Giudici obtained results he and his team described as “striking.”

The “true extent of sex differences in human personality,” he argued, “has been consistently underestimated.”  Del Giudici now compares personality disparities to those of other psychological constructs like vocational interests and aggression.  When properly measured, he reports, gender personality differences are “large” and “robust.”  Indeed, roughly 82 percent of his cohort delivered personality profiles that could not be matched with any member of the opposite sex.

So by what method should researchers measure these distinctions?  The Europeans broke new ground by combining three techniques.  First, to enhance reliability and repeatability, they estimated differences based on latent factors rather than observed scores.  Second, instead of employing the so-called “Big Five” variables (extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience), Del Giudici and company applied 15 narrower traits in order to assess personality with “higher resolution.”  Finally, they chose multivariate over univariate effect sizes—thus aggregating rather than averaging variances—to more accurately reveal “global” sex differences.
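The contrast between the two approaches can be made concrete with a small simulation.  A standard multivariate effect size of the kind the Europeans advocate is the Mahalanobis distance D, which aggregates the per-trait differences rather than averaging them.  The sketch below is purely illustrative—the trait count matches the study’s 15, but the effect sizes and data are invented, not Del Giudici’s:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 15  # subjects per group, number of personality traits

# Hypothetical scenario: every trait differs by a modest d of about 0.4
deltas = np.full(k, 0.4)
males = rng.normal(0.0, 1.0, size=(n, k))
females = rng.normal(deltas, 1.0, size=(n, k))

# Univariate approach: compute Cohen's d per trait, then average
d_per_trait = (females.mean(0) - males.mean(0)) / np.sqrt(
    (females.var(0, ddof=1) + males.var(0, ddof=1)) / 2)
print("mean univariate d:", round(d_per_trait.mean(), 2))  # modest, ~0.4

# Multivariate approach: Mahalanobis distance D across all traits at once
diff = females.mean(0) - males.mean(0)
pooled_cov = (np.cov(males.T) + np.cov(females.T)) / 2
D = float(np.sqrt(diff @ np.linalg.solve(pooled_cov, diff)))
print("multivariate D:", round(D, 2))  # large, ~1.5 for independent traits
```

With independent traits, D equals the square root of the sum of the squared per-trait effect sizes, so fifteen individually “small” differences can aggregate into a “large” global one—which is the crux of the disagreement between Del Giudici and Hyde.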

Hyde swiftly posted her response.  Roundly disparaging Del Giudici’s statistical method, she charged that it “introduces bias by maximizing differences.”  In the end, she continued, the Europeans’ “global” result is merely a single and “uninterpretable” dimension that only “blur[s] the question rather than offering higher resolution.”  The GSH stands intact, she insists.  The true expanse between genders, Hyde argued, is anything but astronomical: Instead, it more resembles “the distance between North Dakota and South Dakota.”

Either way, a third researcher teased, “you’ll still have a mighty long way to walk.”  Richard Lippa, professor of psychology at California State University, Fullerton, proposed an attractive analogy in the Italian’s defense.  Consider sex differences in body shape, he suggested.  The approach underlying Hyde’s GSH would average certain trait ratios—shoulder-to-waist, waist-to-hip, torso-to-leg length, for example—and likely declare that men and women have similar bodies.  By contrast, Del Giudici’s multivariate method would probably generate the much more intuitive conclusion that “sex differences in human body shape are quite large, with men and women having distinct multivariate distributions that overlap very little.”  The Italian offered a similar and equally effective analogy comparing male and female faces.

Del Giudici finds Hyde’s “single dimension” criticism ironic indeed because his method’s essential point, he says, was to integrate multiple personality factors rather than isolate them.  Most dramatically in univariate terms, those traits included sensitivity, warmth, and apprehension (higher in women), and emotional stability, dominance, vigilance, and rule-consciousness (higher in men).

Nor does he see an interpretability problem.  The Italian’s “weighted blend” of 15 personality traits, he argues, provides a concrete and meaningful description of global differences informing us of a 10 to 20 percent overlap between male and female distributions.  He denies as well Hyde’s claim that his techniques were either controversial or prone to maximizing bias.  To the contrary, he told me, the Europeans simply “thought hard about the various artifacts that can deflate sex differences in personality, and took steps to correct them.”
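That 10 to 20 percent figure can be tied back to the multivariate effect size.  For two equal-variance normal distributions whose means sit D standard deviations apart, one common measure of overlap is the overlapping coefficient OVL = 2Φ(−D/2), where Φ is the standard normal cumulative distribution function.  A quick check (my illustration, not a computation from the paper) shows that an overlap of 10–20 percent corresponds to a distance of roughly D ≈ 2.6 to 3.3:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def overlap(D):
    # Overlapping coefficient of two unit-variance normal distributions
    # whose means are D standard deviations apart
    return 2.0 * norm_cdf(-D / 2.0)

for D in (2.0, 2.6, 3.3):
    print(f"D = {D}: overlap ≈ {overlap(D):.0%}")
```

By comparison, a typical univariate “small” difference of d = 0.4 leaves the two distributions overlapping by about 84 percent—another way of seeing how far apart the univariate and multivariate pictures can sit.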


Pinker’s provocative query denouncing our fear of sex differences was largely rhetorical, of course.  He answered the question soon after asking it: “The fear,” he acknowledged, “is that different implies unequal.”  If we momentarily assume that gender personality differences are substantial, the next issue to confront might be whether those differences are driven more by culture or biology.  In either case, certain groups may be forced to rethink some much-cherished ideas and practices.

Lippa recently probed the ultimate “nature vs. nurture” question in a review of two meta-analyses and three cross-cultural studies on gender differences in both personality and interests (Lippa 2010).  In the end, he discovered that women tend to score significantly higher over time and across cultures in the Big Five categories of agreeableness and neuroticism, and, as others have found since, that they gravitate more toward people-centered than thing-centered occupations.

The Californian then described two basic sets of non-exclusive theories under which such evidence is typically evaluated.  Biological theories, of course, focus on genes, hormones, neural development, and brain structure, for example.  These models are inspired by our knowledge of evolution.  Social-environmental theories, by contrast, concentrate on stereotypes, self-conceptualization, and social learning.  Here, cultural influences are thought to dominate.

Supporters of distinct sub-theories would no doubt evaluate the evidence in varying ways.  But significant gender differences that are consistent across cultures and over time, Lippa contends, are more likely to reflect underlying biological rather than social-environmental causes.  Similarly suggestive, the author says, is the fact that such divergences tend to be greater in relatively ‘modern,’ individualistic, and gender-egalitarian societies.

In his new paper, Del Giudici chose not to directly engage the difficult question of underlying causes.  Nonetheless, he reminds us that evolutionary principles—sexual selection and parental investment theories, in particular—provide us with ample grounds to “expect robust and wide-ranging sex differences in this area.”  “Most personality traits,” he continues, “have substantial effects on mating- and parenting-related behaviors.”

Even so, Hyde answers, more than one evolutionary force may be at play here.  Although sexual selection can produce sex differences, she admits, other forms of natural selection can render sex similarities.  “The evolutionary psychologists,” she reckons, “have forgotten about natural selection.”

On these limited questions, truly common ground seems scarce indeed.  Why should the authors interpret the evidence so differently?  Of course no member of any group or human institution is impervious to personal or philosophical biases.  One might reasonably expect academics to be more objective than others, but—for what it’s worth—that has seldom been my experience as a science writer.

In his review, Lippa argued generally that “[c]ontemporary gender researchers, particularly those who adopt social constructionist and feminist ideologies, often reject the notion that biologic factors directly cause gender differences.”  And more pertinently here, he claims that Hyde has long “ignored ‘big’ differences in men’s and women’s interests,” and that the GSH “is, in part, motivated by feminist ideologies and ‘political’ attitudes.”


Hyde denies the accusation categorically: “The GSH is not based on ideology,” she told me.  “It is a summary of what the data show … data from millions of subjects.”  One might note, however, that much of the concluding space in the Wisconsinite’s pioneering paper was devoted to decrying the perceived social costs of gender difference claims—especially to women—rather than to further illuminating or summarizing the data.

Del Giudice appears to find the issue of bias somewhat less motivating.  If sex differences are small, he suggests, we have little to explain and more time to discuss incorrect stereotypes—“this is the main appeal of the GSH.”  The author agrees that “ideology has played a part in the success of the GSH.”  Nonetheless, he maintains that the aforementioned “methodological limitations have played a larger role.”

In his closing comments to me, the Californian echoed much of what Steven Pinker has so courageously recognized in recent years with regard to the broader subject of group divergences.  The ongoing examination of sex differences in personality may or may not be tainted by feminism or other ideologies.  But given the inquiry’s great sensitivity and profound implications, Lippa’s comments—crafted in the finest tradition of true skepticism—bear repeating here:

“I believe this is not a topic where ‘ignorance is bliss.’  We have to examine the nature of sex differences objectively…  We should, as researchers, be open to all possible explanations.  And then, as a society, we have to decide whether we want to let the differences be whatever they may be, or work to reduce them.”

Words to inquire by.  So let the research into gender differences continue, as the Europeans urge, unfettered by irrelevant politics or pet, self-serving causes.  I suspect we have little to fear.  But let science characterize our differences objectively, whatever their nature and degree.  Then, if necessary, we’ll decide together—as an open and informed community—how best to cope with them.





Del Giudice, M., Booth, T., and Irwing, P. 2012. The distance between Mars and Venus: Measuring global sex differences in personality. PLoS ONE 7(1): e29265.

Hyde, J.S. 2005. The gender similarities hypothesis. American Psychologist 60: 581-592.

Lippa, R.A. 2010. Gender differences in personality and interests: When, where, and why? Personality and Social Psychology Compass 4(11): 1098-1110.

Pinker, S. 2002. The Blank Slate: The Modern Denial of Human Nature. New York: Viking.

Innate Morality? Human Babies Weigh In.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at

In 1762, Rousseau characterized the human baby as “a perfect idiot.”  In 1890, William James judged the infant’s mental life to be “one great blooming, buzzing confusion.”  We’ve learned much about early human cognition since the nineteenth century, of course, and the current trend is to credit young children with considerably richer mental capacities.  But intellectual battles continue to rage, for example, over the possibility of an innate and perhaps nuanced moral sensibility.

Indeed, psychologists split last summer over the question of whether preverbal infants are capable of evaluating the social value of others.  Back in 2007, Yale University researchers led by J. Kiley Hamlin claimed to have demonstrated that infants can morally assess individuals based on their behavior toward third parties (Hamlin et al. 2007).  Those findings were challenged last August, however, by postdoctoral research fellow Damian Scarf and his colleagues from the University of Otago in New Zealand (Scarf et al. 2012).

Hamlin’s pioneering study deployed three experiments on six- and ten-month-old babies to test her team’s hypothesis that social evaluation is a universal and unlearned biological adaptation.  In all trials infants observed characters shaped like circles and either squares or triangles moving two-dimensionally in a scene involving an artificial hill.  Parents held their children during the program, but were instructed not to interfere.

In experiment one, the characters were endowed with large “googly eyes” and made to either climb the hill, hinder the climber from above, or help the climber from below.  With looking times carefully measured, the infants observed alternating helping and hindering trials.  The question here was whether witnessing one character’s actions toward another would affect infants’ attitudes toward that character.

When encouraged to reach for either the helper or the hinderer, twelve of twelve six-month-olds and fourteen of sixteen ten-month-olds chose the helper.  But might the babies have responded to superficial perceptual, rather than social, aspects of the experiment?  For example, perhaps the infants merely preferred upward or downward movements.  In an attempt to rule out that possibility, Hamlin modified a single test condition and gathered a second group of children.
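The strength of those counts is easy to check with a simple sign test—my own back-of-the-envelope calculation, not part of Hamlin’s published analysis.  Under the null hypothesis that an infant picks the helper or the hinderer at random, choices as lopsided as those reported would be very improbable:

```python
from math import comb

def sign_test_p(successes: int, n: int) -> float:
    """One-sided binomial probability of at least `successes` helper
    choices out of `n` infants, assuming each infant picks helper or
    hinderer at random (p = 0.5)."""
    return sum(comb(n, k) for k in range(successes, n + 1)) / 2 ** n

# Counts reported for Hamlin's first experiment:
print(sign_test_p(12, 12))  # six-month-olds, 12 of 12: ~0.00024
print(sign_test_p(14, 16))  # ten-month-olds, 14 of 16: ~0.00209
```

Both probabilities fall well below the conventional 0.05 threshold, which is why the small sample sizes in these studies can still support strong claims—provided, of course, that the underlying design is sound.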

In experiment two, the object pushed was represented as inanimate.  Its googly eyes were detached and it was never made to appear self-propelled.  If the infants had chosen based on mere perceptual events in the first experiment, Hamlin proposed, they should express an analogous preference for the upward-pushing character in the second.  But that didn’t happen.  Only four of twelve six-month-olds and six of twelve ten-month-olds picked the upward-pushing shape.

So the team decided that three possibilities remained.  The infants might positively evaluate helpers, negatively evaluate hinderers, or both.  To determine which, Hamlin assembled a third group of children, reattached the googly eyes, and altered the experimental design to include a neutral character that would never interact with the climber.

In the final experiment, then, children first observed either a helper or a hinderer interact with a climber as in experiment one.  Thereafter, they witnessed a neutral, non-interactive character that moved uphill or downhill in the same way.

When prompted to choose, infants reacted differently toward the neutral shape depending on the character with which it was paired.  Seven of eight babies in each age group preferred the helper to the neutral character and the neutral character to the hinderer.  Hamlin thus inferred that her subjects were fond of those who facilitate others’ goals and disapproving of those who inhibit them.

“Humans engage in social evaluation,” the Yale researchers concluded, “far earlier in development than previously thought.”  The critical human ability to distinguish cooperators and reciprocators from free riders, they agreed, “can be seen as a biological adaptation.”

Having viewed recorded portions of these experiments, I felt compelled to question some of the program’s most basic assumptions and methods.  Can infants fathom, for instance, what artificial landscapes represent, or what “hills” look like?  Can they grasp the symbolic significance of squares, triangles, and circles adorned with “googly eyes”?

Hamlin assured me that her 2007 study, though groundbreaking in its own right, was built on a solid foundation of previous experiments employing both a hill and a helping/hindering paradigm.  Numerous analyses, she insisted, have shown that infants will interpret even two-dimensional animations as real, and often attribute goals and intentions to basic shapes engaging in apparently self-propelled movement—with or without artificial eyes.

I also wondered how the infants were “encouraged” to choose.  In the video, characters were shaken by the person holding them.  Could that have affected the outcome, perhaps combined with verbal inflection?  Was one character ever held closer to an infant than the other, or at least closer to the infant’s dominant hand?

Her presenting colleagues, Hamlin answered, were always blind to the condition—i.e., ignorant of which character was helper or hinderer for that particular baby.  So, if differences in proximity or emphasis existed, their effects would have been divided randomly across subjects.  Also, she noted, parents were instructed to close their eyes during choice phases.

Scarf responded quite differently.  He sees no reason to believe, for example, that six- and ten-month-olds would interpret Hamlin-esque displays as landscapes, or that they would be familiar with the concept of a hill.  Nor could they distinguish between helping and hindering, he argued.  And while infants may attribute intentions and goals to animate objects, he added, no convincing data suggests they might assign relevant feelings to them as well.

Five years passed before Scarf’s team would offer a conflicting explanation—the “simple association hypothesis”—for the infants’ remarkable behavior.  While inspecting Hamlin’s videos, Scarf distinguished “two conspicuous perceptual events” during the helper/hinderer trials: first, an “aversive collision” between the climber and either the helper or the hinderer, and second, a “positive bouncing” when the climber and helper reached the hill’s summit.

Rather than rendering complex social evaluations, Scarf proposed, Hamlin’s babies may have simply been reacting to a visual commotion.  The hinderer was perceived negatively, he hypothesized, because it was associated only with an aversive collision.  The helper, by contrast, was viewed more positively because it was linked with a positive bouncing in addition to a collision.

To test their suspicions, the New Zealanders devised two experiments.  In the first, eight ten-month-olds would be presented with googly-eyed characters on a Hamlin-esque stage.  Scarf would eliminate the climber’s bounce on help trials and then pair the helper with a neutral character.  If infants choose based on social evaluation, he reasoned, they should select the helper.  But if infants find the helper/climber collision aversive and react instead via simple association, they should pick the neutral character.

In the second experiment—this time involving forty-eight ten-month-olds—the team would manipulate whether the climber bounced during help trials (at the top), hinder trials (at the bottom), or both.  They would then present the children with a choice between hinderers and helpers.  Again, Scarf proposed, if infants choose based on social evaluation, they should select the helper universally.  But if driven by simple association instead, they should select whatever character was present in the trials when bouncing occurred, and show no preference in the bounce-at-both-top-and-bottom condition.

The results were striking.  In the first experiment, seven of eight children chose the neutral character over the colliding, non-bouncing helper.  In the second experiment, twelve of sixteen picked the helper in the bounce-at-the-top condition, another twelve of sixteen opted for the hinderer in the bounce-at-the-bottom condition, and choices split evenly—eight of sixteen each—between helper and hinderer in the bounce-at-both condition.

Thus, Scarf resolved, simple association can explain Hamlin’s 2007 results without resorting to the comparatively complicated notion of an innate moral compass.  In fact, he continued, his results were entirely inconsistent with Hamlin’s core conclusions.  Infants do not perceive collisions between hinderers and climbers as qualitatively different from those between helpers and climbers, and they do not prefer helpers regardless of bounce condition.

Invoking Darwin, Scarf claimed to add momentum to a movement in developmental psychology toward more parsimonious interpretations of infant behavior.  There is much “grandeur in the view,” he philosophized, that complex adult cognitive abilities can be discovered through a more sober comprehension of “these simple beginnings.”

On August 9, 2012, Hamlin—now at the University of British Columbia—posted her team’s unyielding response.  Generally, they found Scarf’s account “unpromising,” and were “bemused” by the New Zealanders’ attempt to recruit Darwin—who “wrote extensively about the powers (and the limits) of our inborn moral sense”—to their cause.  Hamlin criticized Scarf’s experimental design as well, and his failure to adequately address results she had obtained and published after 2007.

By Hamlin’s lights, Scarf’s stimuli had differed from her own in ways that left the climber’s goal—and thus the insinuation of being helped or hindered—unclear.  First, the googly eyes attached to Scarf’s climber were not fixed in an upward gaze.  Second, Scarf’s climber moved less organically, as if able to climb easily without the helper’s assistance.

Hamlin emphasized too that she had replicated the 2007 results in studies involving no climbing, colliding, or bouncing whatsoever.  In 2011, for instance, she found that infants prefer characters who return balls to others who drop them over characters who take the balls and run away (Hamlin and Wynn 2011).

More recently, Hamlin’s new team claimed to demonstrate that, like adults, babies interpret others’ actions as positive or negative depending on context (Hamlin et al. in press).  In this particularly chilling report, infants were found to prefer both individuals who helped others with attitudes (tastes in food) similar to their own, and individuals who hindered others with different attitudes.

Scarf stands firm, however.  He finds it implausible, for example, that ten-month-olds would consider such small differences between stimuli significant.  Regardless, his team had also replicated Hamlin’s 2007 results when the climber was made to bounce at the top of the hill—an unlikely outcome, Scarf chides, if their stimuli were—as Hamlin claims—somehow deficient.

He argues as well that the Canadian’s more recent experiments, though admittedly altered in design, suffer from the same general confound as the originals.  In one case, the protagonist was made to dive toward a rattle in helping trials—an “interesting” event, according to Scarf—while the hinderer was made to slam a box closed in hindering trials—an “aversive” event.

Though already extensive, Hamlin’s explorations into infant prosociality will continue.  For his part, Scarf intends to author a review of the existing literature.  Defending parsimony is an honorable cause, of course, and the New Zealanders have succeeded in raising important questions for further research.  Are we innately moral, or is prosociality primarily learned?  Are we naturally discriminatory and intolerant, or must those behaviors be taught and learned as well?


Hamlin, J.K., Mahajan, N., Liberman, Z., and Wynn, K. (in press). Not like me = bad: Infants prefer those who harm dissimilar others. Psychological Science.

Hamlin, J.K., and Wynn, K. 2011. Young infants prefer prosocial to antisocial others. Cognitive Development 26(1): 30-39.

Hamlin, J.K., Wynn, K., and Bloom, P. 2007. Social evaluation by preverbal infants. Nature 450: 557-560.

Scarf, D., Imuta, K., Colombo, M., and Hayne, H. 2012. Social evaluation or simple association? Simple associations may explain moral reasoning in infants. PLoS ONE 7(8): e42698.

In Vitro Meat: An Imminent Revolution in Food Production?

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at

Some ideas make so much sense that you know great minds somewhere must be working on them.  The impediments could be political, cultural, technological, or more often some formidable combination of all three.  But in extremely rare instances one can’t help but believe that a particularly powerful idea’s time has finally arrived.  Biologists, conservationists, and economists around the world are saying precisely that about the commercial production of cultured, or in vitro, meat.

The facts surrounding “slow-grown” meat are compelling, to say the least.  Conventional meat production is a $1.4 trillion industry globally.  We consumed 228 million tons of flesh in 2000, and that number is expected to more than double by 2050 as world population swells to 9 billion.  Gorging themselves on 40 percent of the planet’s cereal grain, livestock also use and despoil about 30 percent of the earth’s surface, 70 percent of its arable land, and 8 percent of its water supply.

The world’s 1.5 billion livestock are responsible for between 15 and 24 percent of all anthropogenic greenhouse gases—including 68 percent of ammonia, 65 percent of nitrous oxide, 37 percent of methane, and 9 percent of carbon dioxide.  Beef ranching accounts for 80 percent of Amazon deforestation, and cattle, which poop 130 times more by volume than humans, dump 64 million tons of sewage in the United States alone.  Pigs, of course, are no less prolific.

When we use antibiotics on intensively farmed animals, we contribute mightily to the emergence of multi-drug resistant strains of bacteria.  Animal diseases—avian flu, for example—can lead to novel epidemics or even pandemics capable of killing millions of people.  What are the most common causes of food-borne diseases in the U.S., EU, and Canada?  That’s right—contaminated meats and animal products.  And don’t forget that the nutritional maladies associated with animal fats—diabetes and cardiovascular disease, in particular—are now responsible for a full third of global mortality.

In rather stark contrast, meat grown in culture doesn’t poop, burp, fart, eat, overgraze, drink, bleed, or scream in agony—and it’s a great deal less likely to poison, infect, or kill us.  In those bright practical and ethical lights, a growing number of scientists are hopping onto the cultured meat bandwagon.  The conventional meat industry “no longer makes sense,” according to Zuhaib and Hina Bhat, Indian biotechnologists and authors of an enlightening new study on cultured meat (Bhat 2011).  All things considered, they argue, the transition to “an in vitro meat production system is becoming increasingly justifiable.”  And although the technology is still in its early stages, adds a seasoned trio of Dutch veterinary scientists, cultured meat “holds great promise as a solution” to reduce livestock’s horrific impact on the environment (Haagsman 2009).

To that noble end, Hanna Tuomisto and M. Joost Teixeira de Mattos from the Universities of Oxford and Amsterdam, respectively, calculated the likely energy use, greenhouse gas emissions, and land requirements associated with large-scale in vitro meat production (Tuomisto 2010).  When contrasted with the conventional industry in Europe, cultured meat would involve 35-60 percent less energy use for pork, sheep, and beef, they say, and 80-95 percent lower greenhouse gas emissions and 98 percent reduced land use overall.  Although in vitro chicken could require 14 percent more energy, if land use savings were partially converted to bioenergy production, the total energy efficiency of the cultured product would still prevail.

And because most greenhouse gas emissions caused by cultured meat production are associated with fuel and electricity use, such emissions could be further reduced through the application of renewable energy sources.  That potential doesn’t exist for the conventional industry, however, because most of its emissions are produced by methane from manure and enteric fermentation and nitrous oxide from the soil.

Cultured meat would also promote wildlife conservation, Tuomisto and de Mattos contend, because it shrinks economic pressure to convert natural habitats to agricultural lands, and because it provides an alternative means of producing meat from rare, endangered, or currently over-hunted or over-fished species.  And although neither transportation nor refrigeration expenses were figured into their study, they add that such costs would likely be less with in vitro meat.  Whole animals wouldn’t need to be hauled about, after all, production sites could be located closer to actual consumers, and the finished product would present fewer issues relating to microbial contamination.

Not that cultured meat is a new idea.  Back in 1931, in fact, Winston Churchill predicted its use within fifty years.  Following the discovery of stem cells and the development of in vitro tissue culture, Dutch scientist Willem van Eelen first patented the idea in 1999.  In 2002, NASA financed a study involving the culturing of a goldfish fillet to explore the possibility of growing meat for long-term space flight (Benjaminson 2002).

Since then, most of the research has taken place in the Netherlands.  Between 2005 and 2009, the Dutch government funded a study exploring the possibility of culturing skeletal muscle cells from farm animal stem cells.  The group was largely successful, but, unfortunately, the US$2.6 million grant has since expired without renewal.

The general process behind in vitro meat is relatively basic.  In theory, embryonic stem cells could provide a cheap and unending supply of cultured meat.  But scientists have yet to isolate and develop such cell lines from farm animals.  Thus, most of the research so far has involved myosatellites, or the adult stem cells that grow and repair muscle.

Myosatellites are extracted from a small biopsy—reasonably painless to the animal—using enzymes or pipetting.  A bacterial-based growth serum is applied to multiply the stem cells.  Researchers then coax them to differentiate into muscle cells, which are grown on an edible or biodegradable scaffold to form myofibers.  Those, in turn, are exercised under tension—as if in a miniature, high-tech gymnasium—to build bigger muscle tissues.  The appropriate level of stress can be achieved in a variety of ways, including electrical impulses, anchor points, or possibly microspheres.

Once the muscle strips are produced on a commercial scale using bioreactors, producers could grind them while adding spices, iron, and vitamins to taste.  In a nutshell, that’s the proposed method for creating processed meats like sausages and hamburger patties.  The fabrication of structured meats like steaks will be more complicated because, as muscle fibers grow larger—more than 200 micrometers thick—they tend to die off as their inner cell layers become isolated from the flow of nutrients and oxygen.

Regardless of the specific goal, scientists face difficult challenges at every phase of production.  As African food security expert Phillip Thornton explains, although in vitro meat currently represents a “perfectly feasible” “wildcard” driver of change in the livestock industry—indeed, in world culture more sweepingly—“at least another decade of research is needed” before we can even begin to effectively confront the critical issues of scale and cost (Thornton 2010).

Stem cells, of course, are a bountiful source of both amazement and frustration for everyone who works with them.  Scientists would love to culture the embryonic lines from farm animals because of their incomparable regenerative capacity—ten cells, according to the Dutch group, could produce 50 million kilograms of meat within two months.  But even if we develop that technology, embryonic stem cells must be specifically stimulated to produce myoblasts, and at present we have no way of guaranteeing they will do so reliably.
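The Dutch group’s ten-cells-to-50-million-kilograms figure sounds fantastic, but the arithmetic of exponential growth makes it plausible.  Here is a rough sanity check of my own—the one-nanogram cell mass is an assumed round number, not a figure from the study:

```python
from math import ceil, log2

CELL_MASS_G = 1e-9        # assumed mass of a single muscle cell (~1 ng)
START_CELLS = 10          # the Dutch group's starting count
TARGET_G = 50e6 * 1000    # 50 million kilograms, in grams

# How many population doublings take us from 10 cells to the target mass?
target_cells = TARGET_G / CELL_MASS_G
doublings = ceil(log2(target_cells / START_CELLS))

print(doublings)        # → 63 doublings
print(doublings / 60)   # just over one doubling per day across two months
```

At roughly one doubling per day—well within the range of cultured mammalian cells—the claim holds together, which is precisely why embryonic lines, with their effectively unlimited capacity to divide, are so coveted.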

Myosatellites, by contrast, have been successfully isolated from the muscle of cattle, chicken, turkey, pigs, and fish.  But in addition to their general rarity and severely limited regenerative abilities, myosatellites have different capacities to proliferate, differentiate, and respond to growth factors depending on their specific muscle of origin.  Adipose-derived adult stem cells provide an attractive potential alternative, the Indian team notes, because they can be obtained less invasively from subcutaneous fat and can differentiate into multiple cell lineages, including muscle.

As anchorage-dependent cells, myoblasts require some sort of substratum or scaffold upon which to proliferate and differentiate.  The challenge here is to develop structures that mimic the in vivo milieu.  They should have large surface areas for growth and be flexible enough to facilitate contraction.  And their by-products must be edible, natural, and derived from non-animal sources.  Researchers have proposed a number of inventive solutions, including porous collagen beads or meshworks, large sheets or thin filaments, and microspheres made of cellulose, alginate, chitosan, or collagen that fluctuate in size following slight changes in temperature or pH.

To commercialize the process, we’ll need new bioreactors as well—ones that maintain low shear and uniform perfusion of nutrients at large volumes.  Balancing centrifugal, drag, and gravitational forces, rotating bioreactors allow the structures inside to stay medium-submerged in a perpetual state of free fall.  In theory, research-size rotating systems can be scaled up to industrial capacity without affecting their physics.

Perhaps most crucial of all, however, is progress toward a cheap, clean, and consistently effective culture medium.  At this point, myoblast culturing usually occurs in animal (fetal calf or horse) sera, which are expensive, highly variable in composition, and potentially rife with infectious contamination.  They also raise familiar ethical concerns for some, and rather defeat the important point of creating an animal-free protein product.

Serum-free, chemically defined media have already been developed to support turkey, sheep, and pig myosatellites, and one particularly inventive researcher has employed a medium made from maitake mushroom extract.  Thus far, however, the price of these media remains inconsistent with mass production.  In addition, we need to formulate species- and cell-specific arrays of growth factors to effectively control proliferation and differentiation.

Clearly we have much left to achieve.  Then again, as a group of Dutch and American researchers observed six years ago in the very first peer-reviewed paper published on the subject, the technical challenges facing cultured meat producers are far less daunting than those facing scientists pursuing the application of engineered muscle tissue in a clinical setting (Edelman 2005).  And maybe the Dutch group put it best two years ago.  “It may seem somewhat premature to start a societal discussion,” they advised.  “However, food is a subject that evokes many emotions: it is, if we recall the turmoil associated with the introduction of genetically modified foods, a good idea to educate citizens about all aspects” of in vitro meat, and to do so now.

Compared to its conventional counterpart, cultured meat will allow us to lead significantly safer and more sustainable lives.  We will be able to control not only its flavor, but its nutritional composition as well.  It will free our valuable resources and our land, minimize animal suffering, and satisfy mounting consumer demand for protein across the globe.  How can we transform this truly great idea into a reality?  At this critical point, the experts contend, we require only the degree of public investment long lavished upon in vitro meat’s dirty, dangerous, inefficient, and plainly outdated predecessor.


Benjaminson, M.A., Gilchriest, J.A., and Lorenz, M. 2002. In vitro edible muscle protein production system (MPPS): Stage 1, fish. Acta Astronautica 51: 879-889.

Bhat, Z.F., Bhat, H. 2011. Animal-free meat biofabrication. American Journal of Food Technology 6(6): 441-459.

Edelman, P.D., McFarland, D.C., Mironov, V.A., and Matheny, J.G. 2005. Commentary: in vitro-cultured meat production. Tissue Engineering 11(5/6): 659-662.

Haagsman, H.P., Hellingwerf, K.J., Roelen, B.A.J. 2009. Production of animal proteins by cell systems: desk study on cultured meat (“kweekvlees”). University of Utrecht, Faculty of Veterinary Medicine.

Thornton, P.K. 2010. Livestock production: recent trends, future prospects. Philosophical Transactions of the Royal Society B 365: 2853-2867.

Tuomisto, H.L. and de Mattos, M.J.T. 2010. Life cycle assessment of cultured meat production. 7th International Conference on Life Cycle Assessment in the Agri-Food Sector, 22nd-24th September 2010, Bari, Italy.