
Obesity: “Fat Chance” or Failure of Sincerity?

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.


Man is condemned to be free.—Jean-Paul Sartre.

Beginning about five years ago, the chronically overweight and obese were offered a new paradigm, one more consistent with the experiences most of them share in the twenty-first century. Emerging science from diverse fields, certain experts argued, complicated—perhaps even contradicted—the established view that weight maintenance was a straightforward, if not simple, matter of volitional control and balancing energy intake against energy expenditure.

As a host of potential complexities materialized, the frustrated members of this still expanding demographic were notified that, contrary to conventional wisdom, they had little or no control over their conditions. The popular literature especially began to hammer two captivating messages deeply into the public consciousness.  First, from within, the overweight and obese have been overwhelmed by their genomes, epigenomes, hormones, brains, and gut microbiomes, to name just a few.  Second, from without, their otherwise well-calculated and ample efforts have been undermined, for example, by the popular media, big food, government subsidies, poverty, and the relentless and unhealthy demands of contemporary life.

In a 2012 Nature opinion piece, Robert Lustig, Laura Schmidt, and Claire Brindis—three public health experts from the University of California, San Francisco—compared the “deadly effect” of added sugars (high-fructose corn syrup and sucrose) to that of alcohol(1).  Far from mere “empty calories,” they added, sugar is potentially “toxic” and addictive.  It alters metabolism, raises blood pressure, causes hormonal chaos, and damages our livers.  Like both tobacco and alcohol (a distillation of sugar), it affects our brains as well, encouraging us to increase consumption.

Apparently unimpressed with Americans’ abilities to control themselves, Lustig et al. urged us to back restrictions on our own choices in the form of government regulation of sugar. In support of their appeal, the trio relied on four criteria—“now largely accepted by the public health community”—originally offered by social psychologist Thomas Babor in 2003 to justify the regulation of alcohol: The target substance must be toxic, unavoidable (or pervasive), produce a negative impact on society, and present potential for abuse.  Perhaps unsurprisingly, they discovered that sugar satisfied each criterion with ease.

Robert Lustig.

Lustig, a pediatric endocrinologist and, now, television infomercial star, contends that obesity results primarily from an intractable hormonal predicament. In his wildly popular 2012 book, Fat Chance, Lustig indicted simple, super-sweet sugars as chief culprits, claiming that sucrose and high-fructose corn syrup corrupt our biochemistry to render us hungry and lethargic in ways fat and protein do not(2).  In other words, he insisted that sugar-induced hormonal imbalances cause self-destructive behaviors, not the other way around.

Lustig’s argument proceeds essentially as follows: In the body, insulin causes energy to be stored as fat.  In the hypothalamus, it can cause “brain starvation,” or resistance to leptin, the satiety hormone released from adipose tissue.  Excess insulin, or hyperinsulinemia, thus causes our hypothalami to increase energy storage (gluttony) and decrease energy consumption (sloth).  To complete the process, add an increasingly insulin-resistant liver (which drives blood insulin levels even higher), a little cortisol (the adrenal stress hormone), and of course sugar addiction.  In the end, Lustig concludes, dieters hardly stand a chance.

Journalist Gary Taubes, author of the similarly successful Why We Get Fat, was in full agreement(3).  Picking up the theoretical mantle where Lustig dropped it, Taubes expanded the list of nutritional villains considerably to include all the refined carbohydrates that quickly boost consumers’ glycemic indices. In a second Nature opinion piece, he then blamed the obesity problem on both the research community, for failure to fully comprehend the condition, and the food industry, for exploiting that failure(4).

Gary Taubes with Dr. Oz.

To their credit, Lustig and Taubes provided us with some very sound and useful advice.  Credible nutrition researchers agree, for example, that Americans in particular should drastically reduce their intakes of added sugars and refined carbohydrates.  Indeed, most would be well-advised to eliminate them completely.  The authors’ claims denying self-determination might seem reasonable as well, given that, as much research has shown, most obese people who have tried to lose weight and keep it off have failed.

On the other hand, failure is common in the context of any difficult task, and evidence of “don’t” does not amount to evidence of “can’t.” One might wonder as well whether obesity is a condition easily amenable to controlled scientific study given that every solution—and of course many, in fact, do succeed(5)—is both multifactorial and as unique as every obese person’s biology.  So can we sincerely conclude, as so many commentators apparently have, that the overweight and obese are essentially powerless to help themselves?  Or could it be that the vast majority of popular authors and health officials have largely—perhaps even intentionally—ignored the true root cause of obesity, if for no other reasons, simply because they lack confidence in the obese population’s willingness to confront it?

Though far less popular, a more recently published text appears to suggest just that.  In The Psychology of Overeating, clinical psychologist Kima Cargill attempts to “better contextualize” overeating habits “within the cultural and economic framework of consumerism”(6).  What current research fails to provide, she argues, is a unified construct identifying overeating (and sedentism, one might quickly add) as “not just a dietary [or exercise] issue,” but rather as a problem implicating “the consumption of material goods, luxury experiences, … evolutionary behaviors, and all forms of acquisition.”

Kima Cargill.

To personalize her analysis, Cargill introduces us to a case study named “Allison.”  Once an athlete, Allison gained fifty pounds after marriage.  Now divorced and depressed, she regularly eats fast food or in expensive restaurants and rarely exercises.  Rather than learn about food and physical performance, Allison attempts to solve her weight problem by throwing money at it.  “When she first decided to lose weight,” Cargill recalls, “which fundamentally should involve reducing one’s consumption, Allison went out and purchased thousands of dollars of branded foods, goods, and services.” She hired a nutritionist and a trainer.  She bought a Jack LaLanne juicer, a Vitamix blender, a Nike FuelBand, Lululemon workout clothing, an exclusive gym membership, diet and exercise DVDs and iPhone apps, and heaping bags full of special “diet foods.”

None of it worked, according to the author, because Allison’s “underlying belief is that consumption solves rather than creates problems.”  In other words, like so many others, Allison mistook “the disease for its cure.”  The special foods and products she purchased were not only unnecessary, but ultimately harmful.  The advice she received from her nutritionist and trainer was based on fads, ideologies, and alleged “quick-fixes” and “secrets,” but not on actual science.  Yet, despite her failure, Allison refused to “give up or simplify a life based on shopping, luxury, and materialism” because any other existence appeared empty to her.  In fact, she was unable to even imagine a more productive and enjoyable lifestyle “rich with experiences,” rather than goods and services.

Television celebritism: also mistaking the disease for its cure.

Like Lustig, Taubes, and their philosophical progeny, Cargill recognizes the many potential biological factors capable of rendering weight loss and maintenance an especially challenging task.  But what she does not see in Allison, or in so many others like her, is a helpless victim of either her body or her culture.  Judging it unethical for psychologists to help their patients accept overeating behaviors and their inevitably destructive consequences, Cargill appears to favor an approach that treats the chronically overweight and obese like any other presumably capable, and thus responsible, adult population.

Compassion, in other words, must begin with uncommon candor.  As Cargill acknowledges, for example, only a “very scant few” get fat because of their genes rather than because they overeat.  After all, recently skyrocketing obesity rates cannot be explained by the evolution of new genes during the last thirty to forty years.  And while the food industry (along with the popular media that promote it) surely employs every deceit at its disposal to encourage overconsumption and the rejection of normal—that is, species-appropriate—eating habits, assigning the blame to big food only “obscures our collusion.”  Worse yet, positioning the obese as “hapless victims of industry,” Cargill observes, “is dehumanizing and ultimately undermines [their] sense of agency.”

Education is always an issue, of course. And, generally speaking, higher levels of education are associated with healthier eating behaviors.  But the obese are not stupid, and shouldn’t be treated as such.  “None of us is forced to eat junk food,” the author notes, “and it doesn’t take a college degree or even a high school diploma to know that an apple is healthier than a donut.”  Nor is it true, as many have claimed, that the poor live in “food deserts” wholly lacking in cheap, nutritious cuisine(7).  Indeed, low-income citizens tend to reject such food, Cargill suggests, because it “fails to meet cultural requirements,” or because of a perceived “right to eat away from home,” consistent with societal trends.

Certain foods, especially those loaded with ridiculous amounts of added sugars, do in fact trigger both hormonal turmoil and addiction-like symptoms (though one might reasonably question whether any substance we evolved to crave should be characterized as “addictive”).  And as the overweight continue to grow and habituate to reckless consumption behaviors, their tasks only grow more challenging.  I know this from personal experience, in addition to the science.  Nevertheless, Cargill maintains, “we ultimately degrade ourselves by discounting free will.”


Despite the now-fashionable and, for many, lucrative “Fat Chance” paradigm, the chronically overweight and obese are as capable as anyone else of making rational and intelligent decisions at their groceries, restaurants, and dinner tables. And surely overweight children deserve far more inspiring counsel.  But as both Lustig and Taubes, on the one hand, and Cargill, on the other, have demonstrated in different ways, the solution lies not in diet and exercise per se.  The roots of obesity run far deeper.

Changes to basic life priorities are key. To accomplish a more healthful, independent, and balanced existence, the chronically overweight and obese in particular must first scrutinize their cultural environments, and then discriminate between those aspects that truly benefit them and those that were designed primarily to take advantage of their vulnerabilities, both intrinsic and acquired.  Certain cultural elements can stimulate the intellect, inspire remarkable achievement, and improve the body and its systems.  But most, if not all, of culture’s popular component exists only to manipulate its consumers into further passive, mindless, and frequently destructive consumption.  The power to choose is ours, at least for now.

References:

(1)Lustig, R.H., L.A. Schmidt, and C.D. Brindis. 2012. Public health: the toxic truth about sugar. Nature 482: 27-29.

(2)Lustig, R. 2012. Fat Chance: Beating the Odds Against Sugar, Processed Food, Obesity, and Disease. NY: Hudson Street Press.

(3)Taubes, G. 2011. Why We Get Fat: And What to Do About It. NY: Knopf.

(4)Taubes, G. 2012. Treat obesity as physiology, not physics. Nature 492: 155.

(5)See, e.g., The National Weight Control Registry. http://www.nwcr.ws/Research/default.htm

(6)Cargill, K. 2015. The Psychology of Overeating: Food and the Culture of Consumerism. NY: Bloomsbury Academic.

(7)Maillot, M., N. Darmon, A. Drewnowski. 2010. Are the lowest-cost healthful food plans culturally and socially acceptable? Public Health Nutrition 13(8): 1178-1185.


Nature, Nurture, and the Folly of “Holistic Interactionism.”

[Notable New Media]

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.

Most contemporary scientists, according to Harvard University experimental psychologist, Steven Pinker, have abandoned both the nineteenth-century belief in biology as destiny and the twentieth-century doctrine that the human mind begins as a “blank slate.”  In his new anthology, Language, Cognition, and Human Nature: Selected Articles (Oxford 2015), Pinker first reminds us of the now-defunct blank slate’s political and moral appeal:  “If nothing in the mind is innate,” he chides, “then differences among races, sexes, and classes can never be innate, making the blank slate the ultimate safeguard against racism, sexism, and class prejudice.”


Even so, certain angry ideologues continue to wallow in blank slate dogma.  Gender differences in STEM professions, for example, are often attributed entirely to prejudice and hidden barriers.  The mere possibility that women, on average, are less interested than men in people-free pursuits remains oddly “unspeakable,” says Pinker.  The point, he clarifies, is not that we know for certain that evolution and genetics are relevant to explaining so-called “underrepresentation” in high-end science and math, but that “the mere possibility is often treated as an unmentionable taboo, rather than as a testable hypothesis.”

A similar exception to the general rule centers on parenting and the behavior of children.  It may be true that parents who spank raise more violent children, and that more conversant parents produce children with better language skills.  But why does “virtually everyone” conclude from such facts that the parent’s behavior causes that of the child?  “The possibility that the correlations may rise from shared genes is usually not even mentioned, let alone tested,” says Pinker.

Equally untenable for the author is the now-popular academic doctrine he dubs “holistic interactionism” (HI).  Carrying a “veneer of moderation [and] conceptual sophistication,” says Pinker, HI is based on a few “unexceptional points,” including the facts that nature and nurture are not mutually exclusive and that genes cannot cause behavior directly.  But we should confront this doctrine with heightened scrutiny, according to Pinker, because “no matter how complex the interaction is, it can be understood only by identifying the components and how they interact.”  HI “can stand in the way of such an understanding,” he warns, “by dismissing any attempt to disentangle heredity and environment as uncouth.”

HI mistakenly assumes, for example, that heredity cannot constrain behavior because genes depend critically on the environment.  “To begin with,” says Pinker, “it is simply not true that any gene can have any effect in some environment, with the implication that we can always design an environment to produce whatever outcome we value.”  And even if some extreme “gene-reversing” environment can be imagined, it simply doesn’t follow that “the ordinary range of environments will [even] modulate that trait, [or that] the environment can explain the nature of the trait.”  The mere existence of environmental mitigations, in other words, does not render the effects of genes inconsequential.  To the contrary, Pinker insists, “genes specify what kinds of environmental manipulations will have what kinds of effects and with what costs.”

Although the postmodernists and social constructionists who tend to dominate humanities departments, especially in American universities, continue to tout HI as a supposedly nuanced means of comprehending the nature-nurture debate, it is in truth little more than a pseudo-intellectual “dodge,” Pinker concludes: a convenient means to “evade fundamental scientific problems because of their moral, emotional, and political baggage.”

Among intellectually honest, truly curious, and consistently rational thinkers (a diminutive demographic indeed), Pinker’s reputation has long stood as something perhaps just short of heroic, in no small part due to his defense of politically incorrect but nonetheless scientifically viable hypotheses.  What a shame it is that only academics of similar status (and tenure) can safely rise and demand the freedom required to mount such defenses.  And what a tragedy that so few in such privileged company actually do.

 

Gender Personality Differences: Planets or P.O. Boxes, Evidence or Ideology?

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.


Why are people so afraid of the idea that the minds of men and women are not identical in every way?—Steven Pinker, 2002.

The mere suggestion that one group of people is cognitively or emotionally distinct from another can leave many of us speechless and squirming in our seats.  The effect is intensified, of course, in the regrettable event of historical discrimination, and especially so when the differences are alleged to be innate.

Scientists of many stripes have bravely confronted, struggled with, and evidently resolved the issue as it pertains to “race.”  Such classifications lack sturdy biological bases, the current consensus holds, and their very existence relies on nothing more concrete or dependable than cultural convention and political expediency (1).

Gender or sex (I use the terms interchangeably here) is similar in some respects, but clearly distinct in others.  Some biological differences between men and women are both unmistakable and abundantly appreciated.  Combat can erupt, however—perhaps most furiously in intellectual circles—over questions involving mental differences and, assuming their existence, over their proposed causes.

In a recent column, I investigated the origins of female “underrepresentation” in high-end STEM fields.  The latest analyses had suggested that, rather than being discriminated against, qualified women tended to choose people-centered over thing-centered professions.  That is, the somewhat narrow mental trait examined was interest.

Other studies have explored broader gender differences in personality—a related and at least equally sensitive domain.  In a highly influential 2005 paper, for example, University of Wisconsin-Madison professor of psychology and women’s studies, Janet Hyde, rebuked the popular media and general public for their apparent fascination with an assumed profusion of deep psychological variances between genders (Hyde 2005).

After reviewing 46 meta-analyses on the subject, Hyde proposed a new model.  The gender similarities hypothesis (GSH) holds that “males and females are similar on most, but not all, psychological variables.”  Because most differences are negligible or small, and because very few are large, Hyde contended, “men and women as well as boys and girls are more alike than they are different.”  Physical aggression and sexuality were offered as exceptions.


But in a new study, “The Distance Between Mars and Venus,” Hyde’s renowned hypothesis was directly and expressly challenged by a trio of Europeans led by Marco Del Giudice, evolutionary psychologist at the University of Turin, Italy (Del Giudice et al. 2012).  Having subjected a sample of 10,261 American men and women between ages 15 and 92 to an assessment of multiple personality variables, Del Giudice obtained results he and his team described as “striking.”

The “true extent of sex differences in human personality,” he argued, “has been consistently underestimated.”  Del Giudice now compares personality disparities to those of other psychological constructs like vocational interests and aggression.  When properly measured, he reports, gender personality differences are “large” and “robust.”  Indeed, roughly 82 percent of his cohort delivered personality profiles that could not be matched with any member of the opposite sex.

So by what method should researchers measure these distinctions?  The Europeans broke new ground by combining three techniques.  First, to enhance reliability and repeatability, they estimated differences based on latent factors rather than observed scores.  Second, instead of employing the so-called “Big Five” variables (extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience), Del Giudice and company applied 15 narrower traits in order to assess personality with “higher resolution.”  Finally, they chose multivariate over univariate effect sizes—thus aggregating rather than averaging variances—to more accurately reveal “global” sex differences.
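For readers curious how aggregating differs from averaging in practice, the short Python sketch below is my own illustration, not the authors’ code: the fifteen-trait setup, the simulated effect sizes, and the uniform 0.3 trait correlation are all assumed values chosen to mimic the study’s design rather than its data.  Averaging the per-trait effect sizes suggests a modest difference, while aggregating them into a single Mahalanobis distance D, the distance between the male and female trait centroids in the full fifteen-trait space, can come out several times larger.

    # A minimal sketch (assumed values, not the study's data) contrasting
    # "averaging" univariate effect sizes with "aggregating" them into a
    # multivariate Mahalanobis distance D.
    import numpy as np

    rng = np.random.default_rng(0)

    k = 15                                          # fifteen narrow traits
    d = rng.uniform(-0.5, 0.5, size=k)              # per-trait Cohen's d (assumed)
    R = 0.3 * np.ones((k, k)) + 0.7 * np.eye(k)     # assumed trait correlations

    mean_abs_d = np.abs(d).mean()                   # the "averaging" summary
    D = float(np.sqrt(d @ np.linalg.solve(R, d)))   # the "aggregating" summary

    print(f"mean |d| across traits: {mean_abs_d:.2f}")
    print(f"multivariate D:         {D:.2f}")       # typically much larger

On simulated examples like this one, the global D runs well above the average univariate effect, which is the arithmetic heart of the claim that trait-by-trait summaries understate the separation between the two multivariate distributions.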

Hyde swiftly posted her response.  Roundly disparaging Del Giudice’s statistical method, she charged that it “introduces bias by maximizing differences.”  In the end, she continued, the Europeans’ “global” result is merely a single and “uninterpretable” dimension that only “blur[s] the question rather than offering higher resolution.”  The GSH stands intact, she insists.  The true expanse between genders, Hyde argued, is anything but astronomical: Instead, it more resembles “the distance between North Dakota and South Dakota.”

Either way, a third researcher teased, “you’ll still have a mighty long way to walk.”  Richard Lippa, professor of psychology at California State University, Fullerton, proposed an attractive analogy in the Italian’s defense.  Consider sex differences in body shape, he suggested.  The approach underlying Hyde’s GSH would average certain trait ratios—shoulder-to-waist, waist-to-hip, torso-to-leg length, for example—and likely declare that men and women have similar bodies.  By contrast, Del Giudice’s multivariate method would probably generate the much more intuitive conclusion that “sex differences in human body shape are quite large, with men and women having distinct multivariate distributions that overlap very little.”  The Italian offered a similar and equally effective analogy comparing male and female faces.

Del Giudice finds Hyde’s “single dimension” criticism ironic indeed because his method’s essential point, he says, was to integrate multiple personality factors rather than isolate them.  Most dramatically in univariate terms, those traits included sensitivity, warmth, and apprehension (higher in women), and emotional stability, dominance, vigilance, and rule-consciousness (higher in men).

Nor does he see an interpretability problem.  The Italian’s “weighted blend” of 15 personality traits, he argues, provides a concrete and meaningful description of global differences informing us of a 10 to 20 percent overlap between male and female distributions.  He denies as well Hyde’s claim that his techniques were either controversial or prone to maximizing bias.  To the contrary, he told me, the Europeans simply “thought hard about the various artifacts that can deflate sex differences in personality, and took steps to correct them.”
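How a single global D translates into an overlap figure can be illustrated with a standard identity: under normal distributions with equal covariances, two groups separated by Mahalanobis distance D overlap by 2Φ(−D/2).  The D value in the sketch below is my assumed input, chosen only to land inside the 10 to 20 percent range the text reports; it is not taken from the paper.

    # Hedged illustration: converting a global Mahalanobis D into the
    # overlapping coefficient of the two group distributions.
    from scipy.stats import norm

    D = 2.7                         # assumed global difference (illustrative)
    overlap = 2 * norm.cdf(-D / 2)  # OVL = 2 * Phi(-D/2) under normality
    print(f"distribution overlap: {overlap:.0%}")   # about 18% for D = 2.7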


Pinker’s provocative query denouncing our fear of sex differences was largely rhetorical, of course.  He answered the question soon after asking it: “The fear,” he acknowledged, “is that different implies unequal.”  If we momentarily assume that gender personality differences are substantial, the next issue to confront might be whether those differences are driven more by culture or biology.  In either case, certain groups may be forced to rethink some much-cherished ideas and practices.

Lippa recently probed the ultimate “nature vs. nurture” question in a review of two meta-analyses and three cross-cultural studies on gender differences in both personality and interests (Lippa 2010).  In the end, he discovered that women tend to score significantly higher over time and across cultures in the Big Five categories of agreeableness and neuroticism, and, as others have found since, that they gravitate more toward people-centered than thing-centered occupations.

The Californian then described two basic sets of non-exclusive theories under which such evidence is typically evaluated.  Biological theories, of course, focus on genes, hormones, neural development, and brain structure, for example.  These models are inspired by our knowledge of evolution.  Social-environmental theories, by contrast, concentrate on stereotypes, self-conceptualization, and social learning.  Here, cultural influences are thought to dominate.

Supporters of distinct sub-theories would no doubt evaluate the evidence in varying ways.  But significant gender differences that are consistent across cultures and over time, Lippa contends, are more likely to reflect underlying biological rather than social-environmental causes.  Similarly suggestive, the author says, is the fact that such divergences tend to be greater in relatively ‘modern,’ individualistic, and gender-egalitarian societies.

In his new paper, Del Giudice chose not to directly engage the difficult question of underlying causes.  Nonetheless, he reminds us that evolutionary principles—sexual selection and parental investment theories, in particular—provide us with ample grounds to “expect robust and wide-ranging sex differences in this area.”  “Most personality traits,” he continues, “have substantial effects on mating- and parenting-related behaviors.”

Even so, Hyde answers, more than one evolutionary force may be at play here.  Although sexual selection can produce sex differences, she admits, other forms of natural selection can render sex similarities.  “The evolutionary psychologists,” she reckons, “have forgotten about natural selection.”

On these limited questions, truly common ground seems scarce indeed.  Why should the authors interpret the evidence so differently?  Of course no member of any group or human institution is impervious to personal or philosophical biases.  One might reasonably expect academics to be more objective than others, but—for what it’s worth—that has seldom been my experience as a science writer.

In his review, Lippa argued generally that “[c]ontemporary gender researchers, particularly those who adopt social constructionist and feminist ideologies, often reject the notion that biologic factors directly cause gender differences.”  And more pertinently here, he claims that Hyde has long “ignored ‘big’ differences in men’s and women’s interests,” and that the GSH “is, in part, motivated by feminist ideologies and ‘political’ attitudes.”


Hyde denies the accusation categorically: “The GSH is not based on ideology,” she told me.  “It is a summary of what the data show … data from millions of subjects.”  One might note of the Wisconsinite’s pioneering paper, however, that a great deal of concluding space was consumed decrying the perceived social costs of gender difference claims, especially to women, rather than further illuminating or summarizing the data.

Del Giudice appears to find the issue of bias somewhat less motivating.  If sex differences are small, he suggests, we have little to explain and more time to discuss incorrect stereotypes—“this is the main appeal of the GSH.”  The author agrees that “ideology has played a part in the success of the GSH.”  Nonetheless, he maintains that the aforementioned “methodological limitations have played a larger role.”

In his closing comments to me, the Californian echoed much of what Steven Pinker has so courageously recognized in recent years with regard to the broader subject of group divergences.  The ongoing examination of sex differences in personality may or may not be tainted by feminism or other ideologies.  But given the inquiry’s great sensitivity and profound implications, Lippa’s comments—crafted in the finest tradition of true skepticism—bear repeating here:

“I believe this is not a topic where ‘ignorance is bliss.’  We have to examine the nature of sex differences objectively…  We should, as researchers, be open to all possible explanations.  And then, as a society, we have to decide whether we want to let the differences be whatever they may be, or work to reduce them.”

Words to inquire by.  So let the research into gender differences continue, as the Europeans urge, unfettered by irrelevant politics or pet, self-serving causes.  I suspect we have little to fear.  But let science characterize our differences objectively, whatever their nature and degree.  Then, if necessary, we’ll decide together—as an open and informed community—how best to cope with them.


Note:

(1) Two excellent books have recently reviewed the scientific and cultural particulars of “race” for a popular audience: Ian Tattersall and Rob DeSalle. 2011. Race? Debunking a Scientific Myth. Texas A&M University Press, and Sheldon Krimsky and Kathleen Sloan, eds. 2011. Race and the Genetic Revolution: Science, Myth, and Culture. Columbia University Press.

References:

Del Giudice, M., Booth, T., and Irwing, P. 2012. The distance between Mars and Venus: Measuring global sex differences in personality. PLoS ONE 7(1): e29265.

Hyde, J.S. 2005. The gender similarities hypothesis. American Psychologist 60: 581-592.

Lippa, R.A. 2010. Gender differences in personality and interests: When, where, and why? Social and Personality Psychology Compass 4(11): 1098-1110.

Pinker, S. 2002. The Blank Slate: The Modern Denial of Human Nature. NY: Viking.

Innate Morality? Human Babies Weigh In.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.

In 1762, Rousseau characterized the human baby as “a perfect idiot.”  In 1890, William James judged the infant’s mental life to be “one great blooming, buzzing confusion.”  We’ve learned much about early human cognition since the nineteenth century, of course, and the current trend is to assign well-expanded mental capacities to young children.  But intellectual battles continue to rage, for example, over the possibility of an innate and perhaps nuanced moral sensibility.

Indeed, psychologists split last summer over the question of whether preverbal infants are capable of evaluating the social value of others.  Back in 2007, Yale University researchers led by J. Kiley Hamlin claimed to have demonstrated that infants can morally assess individuals based on their behavior toward third parties (Hamlin et al. 2007).  Those findings were challenged last August, however, by postdoctoral research fellow, Damian Scarf, and his colleagues from the University of Otago in New Zealand (Scarf et al. 2012).

Hamlin’s pioneering study deployed three experiments on six- and ten-month-old babies to test her team’s hypothesis that social evaluation is a universal and unlearned biological adaptation.  In all trials infants observed characters shaped like circles and either squares or triangles moving two-dimensionally in a scene involving an artificial hill.  Parents held their children during the program, but were instructed not to interfere.

In experiment one, the characters were endowed with large “googly eyes” and made to either climb the hill, hinder the climber from above, or help the climber from below.  With looking times carefully measured, the infants observed alternating helping and hindering trials.  The question here was whether witnessing one character’s actions toward another would affect infants’ attitudes toward that character.

When encouraged to reach for either the helper or the hinderer, twelve of twelve six-month-olds and fourteen of sixteen ten-month-olds chose the helper.  But might the babies have responded to superficial perceptual, rather than social, aspects of the experiment?  For example, perhaps the infants merely preferred upward or downward movements.  In an attempt to rule out that possibility, Hamlin modified a single test condition and gathered a second group of children.
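Before moving on, a rough sense of why such lopsided counts are taken seriously: if each infant simply picked a character at random, results this one-sided would be very improbable.  The binomial check below is my back-of-the-envelope sketch, not the authors’ analysis.

    # Back-of-the-envelope check (my illustration): probability of counts at
    # least this lopsided if every infant chose at random (p = 0.5).
    from scipy.stats import binomtest

    for chose_helper, n in [(12, 12), (14, 16)]:
        result = binomtest(chose_helper, n, p=0.5, alternative='greater')
        print(f"{chose_helper}/{n} chose the helper: p = {result.pvalue:.4f}")

Both counts fall well below conventional significance thresholds, which is why the debate turns on what the infants were responding to, not on whether the preference was real.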

In experiment two, the object pushed was represented as inanimate.  Its googly eyes were detached and it was never made to appear self-propelled.  If the infants had chosen based on mere perceptual events in the first experiment, Hamlin proposed, they should express an analogous preference for the upward-pushing character in the second.  But that didn’t happen.  Only four of twelve six-month-olds and six of twelve ten-month-olds picked the upward-pushing shape.

So the team decided that three possibilities remained.  The infants might positively evaluate helpers, negatively evaluate hinderers, or both.  To determine which, Hamlin assembled a third group of children, reattached the googly eyes, and altered the experimental design to include a neutral character that would never interact with the climber.

In the final experiment, then, children first observed either a helper or a hinderer interact with a climber as in experiment one.  Thereafter, they witnessed a neutral, non-interactive character that moved uphill or downhill in the same way.

When prompted to choose, infants reacted differently toward the neutral shape depending on the character with which it was paired.  Seven of eight babies in each age group preferred the helper to the neutral character and the neutral character to the hinderer.  Hamlin thus inferred that her subjects were fond of those who facilitate others’ goals and disapproving of those who inhibit them.

“Humans engage in social evaluation,” the Yale researchers concluded, “far earlier in development than previously thought.”  The critical human ability to distinguish cooperators and reciprocators from free riders, they agreed, “can be seen as a biological adaptation.”

Having viewed recorded portions of these experiments, I felt compelled to question some of the program’s most basic assumptions and methods.  Can infants fathom, for instance, what artificial landscapes represent, or what “hills” look like?  Can they grasp the symbolic significance of squares, triangles, and circles adorned with “googly eyes”?

Though groundbreaking in its own right, her 2007 study, Hamlin assured me, was built on a solid foundation of previous experiments employing both a hill and a helping/hindering paradigm.  Numerous analyses, she insisted, have shown that infants will interpret even two-dimensional animations as real, and often attribute goals and intentions to basic shapes engaging in apparently self-propelled movement—with or without artificial eyes.

I also wondered how the infants were “encouraged” to choose.  In the video, characters were shaken by the person holding them.  Could that have affected the outcome, perhaps combined with verbal inflection?  Was one character ever held closer to an infant than the other, or at least closer to the infant’s dominant hand?

Her presenting colleagues, Hamlin answered, were always blind to the condition—i.e., ignorant of which character was helper or hinderer for that particular baby.  So, if differences in proximity or emphasis existed, their effects would have been divided randomly across subjects.  Also, she noted, parents were instructed to close their eyes during choice phases.

Scarf responded quite differently.  He sees no reason to believe, for example, that six- and ten-month-olds would interpret Hamlin-esque displays as landscapes, or that they would be familiar with the concept of a hill.  Nor could they distinguish between helping and hindering, he argued.  And while infants may attribute intentions and goals to animate objects, he added, no convincing data suggests they might assign relevant feelings to them as well.

Five years passed before Scarf’s team would offer a conflicting explanation—the “simple association hypothesis”—for the infants’ remarkable behavior.  While inspecting Hamlin’s videos, Scarf distinguished “two conspicuous perceptual events” during the helper/hinderer trials: first, an “aversive collision” between the climber and either the helper or the hinderer, and second, a “positive bouncing” when the climber and helper reached the hill’s summit.

Rather than rendering complex social evaluations, Scarf proposed, Hamlin’s babies may have simply been reacting to a visual commotion.  The hinderer was perceived negatively, he hypothesized, because it was associated only with an aversive collision.  The helper, by contrast, was viewed more positively because it was linked with a positive bouncing in addition to a collision.

To test their suspicions, the New Zealanders devised two experiments.  In the first, eight ten-month-olds would be presented with googly-eyed characters on a Hamlin-esque stage.  Scarf would eliminate the climber’s bounce on help trials and then pair the helper with a neutral character.  If infants choose based on social evaluation, he reasoned, they should select the helper.  But if infants find the helper/climber collision aversive and react instead via simple association, they should pick the neutral character.

In the second experiment—this time involving forty-eight ten-month-olds—the team would manipulate whether the climber bounced during help trials (at the top), hinder trials (at the bottom), or both.  They would then present the children with a choice between hinderers and helpers.  Again, Scarf proposed, if infants choose based on social evaluation, they should select the helper universally.  But if driven by simple association instead, they should select whatever character was present in the trials when bouncing occurred, and show no preference in the bounce-at-both-top-and-bottom condition.

The results were striking.  In the first experiment, seven of eight children chose the neutral character over the colliding and non-bouncing helper.  In the second experiment, twelve of sixteen picked the helper in the bounce-at-the-top condition, another twelve of sixteen opted for the hinderer in the bounce-at-the-bottom condition, and an equal number (eight of sixteen) chose the helper and hinderer in the bounce-at-both condition.

Thus, Scarf resolved, simple association can explain Hamlin’s 2007 results without resorting to the comparatively complicated notion of an innate moral compass.  In fact, he continued, his results were entirely inconsistent with Hamlin’s core conclusions.  Infants do not perceive collisions between hinderers and climbers as qualitatively different from those between helpers and climbers, and they do not prefer helpers regardless of bounce condition.

Invoking Darwin, Scarf claimed to add momentum to a movement in developmental psychology toward more parsimonious interpretations of infant behavior.  There is much “grandeur in the view,” he philosophized, that complex adult cognitive abilities can be discovered through a more sober comprehension of “these simple beginnings.”

On August 9, 2012, Hamlin—now at the University of British Columbia—posted her team’s unyielding response.  Generally, they found Scarf’s account “unpromising,” and were “bemused” by the New Zealanders’ attempt to recruit Darwin—who “wrote extensively about the powers (and the limits) of our inborn moral sense”—to their cause.  Hamlin criticized Scarf’s experimental design as well, and his failure to adequately address results she had obtained and published after 2007.

By Hamlin’s lights, Scarf’s stimuli had differed from her own in ways that left the climber’s goal—and thus the insinuation of being helped or hindered—unclear.  First, the googly eyes attached to Scarf’s climber were not fixed in an upward gaze.  Second, Scarf’s climber moved less organically, as if able to climb easily without the helper’s assistance.

Hamlin emphasized too that she had replicated the 2007 results in studies involving no climbing, colliding, or bouncing whatsoever.  In 2011, for instance, she found that infants prefer characters who return balls to others who drop them over characters who take them and run away (Hamlin and Wynn 2011).

More recently, Hamlin’s new team claimed to demonstrate that, like adults, babies interpret others’ actions as positive or negative depending on context (Hamlin et al., in press).  In this particularly chilling report, infants were found to prefer both individuals who helped others with attitudes (tastes in food) similar to their own, and individuals who hindered others with different attitudes.

Scarf stands firm, however.  He finds it implausible, for example, that ten-month-olds would consider such small differences between stimuli significant.  Regardless, his team had also replicated Hamlin’s 2007 results when the climber was made to bounce at the top of the hill—an unlikely outcome, Scarf chides, if their stimuli were—as Hamlin claims—somehow deficient.

He argues as well that the Canadian’s more recent experiments, though admittedly altered in design, suffer from the same general confound as the originals.  In one case, the protagonist was made to dive toward a rattle in helping trials (an “interesting” event, according to Scarf), while the hinderer was made to slam a box closed in hindering trials (an “aversive” event).

Though already extensive, Hamlin’s explorations into infant prosociality will continue.  For his part, Scarf intends to author a review of the existing literature.  Defending parsimony is an honorable cause, of course, and the New Zealanders have succeeded in raising important questions for further research.  Are we innately moral, or is prosociality primarily learned?  Are we naturally discriminatory and intolerant, or must those behaviors be taught and learned as well?

References:

Hamlin, J.K., Mahajan, N., Liberman, Z., and Wynn, K. (in press). Not like me = bad: Infants prefer those who harm dissimilar others. Psychological Science.

Hamlin, J.K., and Wynn, K. 2011. Young infants prefer prosocial to antisocial others. Cognitive Development 26(1): 30-39.

Hamlin, J.K., Wynn, K., and Bloom, P. 2007. Social evaluation by preverbal infants. Nature 450: 557-560.

Scarf, D., Imuta, K., Colombo, M., and Hayne, H. 2012. Social evaluation or simple association? Simple associations may explain moral reasoning in infants. PLoS ONE 7(8): e42698.

Women and High-End Science: Nurture or Nature, Prejudice or Preference?

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.


In a letter to King Frederick II of Prussia, Voltaire wrote of his lover and French noblewoman, Émilie du Chatelet, that he considered her “a great man whose only fault was being a woman.”  Privately trained and, among eighteenth-century females, exceptionally well-versed in math and physics, du Chatelet produced the French translation of Newton’s Principia Mathematica that remains the definitive text.

Nonetheless, as Voltaire’s ironic letter suggests, du Chatelet was less than delighted with the plight of intelligent, science-minded women.  Writing in about 1735, she confessed to feeling “the full weight of prejudice which so universally excludes us from the sciences.”  Plainly, du Chatelet believed it was France’s culture and education system and not the female brain that was responsible for the inequity.

In 2005, the blazing-hot topic of female underrepresentation in high-end science was addressed by American economist and then Harvard University president, Lawrence Summers.  “In the special case of science and engineering,” he famously suggested, “there are issues of intrinsic aptitude, and … those considerations are reinforced by what are in fact lesser factors involving socialization and continuing discrimination.”

A notoriously impassioned debate ensued.  Within a few months, Harvard’s arts and sciences faculty passed a motion demonstrating a “lack of confidence” in their president’s leadership.  Conversely, when asked whether Summers’s comments were intellectually appropriate, prominent cognitive scientist, Steven Pinker, responded as follows:

Good grief, shouldn’t everything be within the pale of legitimate academic discourse, as long as it is presented with some degree of rigor?  That’s the difference between a university and a madrassa.  There is certainly enough evidence for the hypothesis to be taken seriously.

Summers resigned the following year, but his provocative challenge—“I would like nothing better than to be proved wrong”—was not forgotten.

No one seriously disputes the statistical facet of female underrepresentation among the higher echelons of the science, technology, engineering, and math (STEM) fields.  Recent data from the U.S.—collected by Cornell University researchers, Stephen Ceci and Wendy Williams—leave little room for disagreement.  In 2005, PhDs were awarded to women as follows: 30% in math, 21% in computer science, 14.3% in physics, and 8.4% in mechanical engineering.  Females were hired to tenure-track university positions as follows: 26.8% in math, 20% in computer science, 16.8% in physics, and 18% in mechanical engineering.  Finally, full professorships were awarded to women as follows: 7.1% in math, 10.3% in computer science, 6.1% in physics, and 4.4% in mechanical engineering (Ceci and Williams, 2011).

Rather, the real quarrel centers on the disparity’s likely causes.  Was du Chatelet correct to attribute the gender gap to persisting discrimination?  Was Summers justified in suggesting the possibility of disparate abilities?  Or perhaps the issue is substantially more complicated.  If so, do the statistics represent a serious social problem demanding intervention, or just a natural and acceptable dissimilarity between human genders?


In the opening pages of a new book on the subject, Ceci and Williams recognize that, although differences in aptitude occasionally register during early childhood, “the size of the male advantage accelerates” later, beginning in puberty (Ceci and Williams 2010, ix-x).  By the end of high school, the authors continue, boys are much more likely to be seated at the “right tail of the distribution”—in the top 10%, 1%, or 0.1%.

Ceci and Williams offer three examples to illuminate the phenomenon.  First, Honors Math 55 (Advanced Calculus and Linear Algebra) at Harvard University.  Reportedly the most intimidating math class in the country, it loses the majority of its enrolling students within a few weeks each year.  The distribution by the bitter end of 2006 was, for lack of a better word, dumbfounding: according to The Crimson newspaper, “45 percent Jewish, 18 percent Asian, 100 percent male.”

Second, the Scholastic Assessment Test—Mathematics (SAT-M).  Twice as many boys as girls achieve a score of 650 (19% versus 10%) or 700 (10% versus 5%).  According to Ceci and Williams, “The farther out on the right tail one goes (toward the top 0.01%, or 1 in 10,000), the fewer females there are.”  Males, in fact, “are sometimes overrepresented by a factor of 7 or more to 1.”

Finally, the Putnam Mathematical Competition—a 6-hour intercollegiate test for U.S. and Canadian students administered every December by the Mathematical Association of America.  Putnam winners have gone on to lead illustrious careers in math, and several have become Nobelists and Fields medalists.  Predictably, by this point, females are rare among the top five scorers, who are dubbed Putnam Fellows.  Since 2000, in fact, only three of 51 Fellows were women.

Clearly, we have cause to be concerned.  But researchers aren’t convinced that girls lack the necessary skills.  In 2008, for example, a team led by University of Wisconsin psychologist Janet Hyde concluded of American students that, at least “for grades 2 to 11, the general population no longer shows a gender difference in math skills, consistent with the gender similarities hypothesis,” which proposes that males and females are similar on most, but not all, psychological variables (Hyde et al., 2008).

After analyzing the math scores of more than seven million students from across the U.S., Hyde found “trivial differences” on average, coupled with some “unexplained” evidence of slightly greater variability among males.  Unfortunately, the state-administered tests were incapable of assessing the students’ relative abilities to solve more complex problems—in other words, to test for skills most crucial for advanced work in STEM careers.

In early 2010, two members of Hyde’s team collaborated with Villanova University psychologist Nicole Else-Quest to probe the issue more broadly and inclusively by evaluating data gathered from two previous studies, the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA) (Else-Quest et al., 2010).  TIMSS tests were generally easier and more sensitive to curricula or institutions; PISA exams were more difficult and emphasized math literacy and its practical application.  Together, these data sets represented 493,495 students aged 14-16 from 69 nations.

In terms of math achievement, Else-Quest’s results, like Hyde’s, substantially supported the gender similarities hypothesis because the sizes of all mean effects were “very small, at best.”  The largest effect was in the space/shape domain of the PISA, consonant with historical evidence of male superiority in mental rotation skills.  The team was quick to point out, however, that gender disparities in this area can be mediated through appropriate education, as other studies have shown.

Even so, Else-Quest offered two possible explanations for the PISA gender gap.  The first is rooted in the greater male variability hypothesis, which predicts no discrepancies on average, but more top performers among males.  Boys, however, didn’t outperform girls on the most challenging TIMSS problems demanding creative or strategic reasoning.  “Thus,” the team cautioned, “comparisons between TIMSS and PISA regarding test difficulty should not be overplayed as support for the greater male variability hypothesis.”
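To make concrete what the greater male variability hypothesis actually predicts, a hedged bit of arithmetic helps: with identical means, even a modest difference in spread inflates male-to-female ratios far out in the right tail.  The 10 percent standard-deviation gap in the sketch below is an assumed value for illustration, not an estimate from the studies discussed here.

    # Illustrative arithmetic (assumed values): equal means, male SD 10%
    # larger, and the resulting male/female ratio in the far right tail.
    from scipy.stats import norm

    female_sd, male_sd = 1.0, 1.1          # assumed spreads; both means are 0
    for cutoff in (2, 3, 4):               # cutoffs in female-SD units
        ratio = norm.sf(cutoff / male_sd) / norm.sf(cutoff)
        print(f"cutoff {cutoff} SD: male/female tail ratio = {ratio:.1f}")

The ratio grows with the cutoff, which is why the hypothesis predicts near-parity on average yet male overrepresentation at the extreme right tail.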

The second explanation spotlights society-based gender inequities.  More pertinent here, Else-Quest suggests, is the gender stratification hypothesis, which posits performance gaps closely related to cultural variations in opportunities for females.  Indeed, effect sizes revealed considerable variability across nations, and, despite similar achievement levels, boys regularly reported more positive attitudes and affects toward math.  So, do societal valuations of math proficiency among young females affect achievement?  Consistent with stratification, the team judged that “girls will perform at the same level as their male counterparts when they are encouraged to succeed, are given the necessary educational tools, and have visible female role models excelling in mathematics.”


At about the same time, Ceci, Williams, and Cornell colleague Susan Barnett reviewed more than 400 articles and book chapters to reconcile competing claims of biological and sociocultural causation (Ceci et al., 2009).  In the end, they pronounced the evidence for each contention to be both contradictory and inconclusive.

First, if underrepresentation were solely the function of ability, women should still occupy at least twice as many high-end science positions as they do.  Second, although women still experience unequal childrearing responsibilities in many or all cultures, if that inequity were the cause, it should leave women with inadequate time for all demanding professional careers to the same degree, which doesn’t seem to be the case.

Disparate abilities and cultural attitudes might play important roles, the trio agreed, but only a “confluence of factors” can account for all salient data.  “Of these factors,” they concluded, “personal lifestyle choices, career preferences, and social pressures probably account for the largest portion of variance.”  Math-proficient women tend to prefer non-math careers and are more likely to relinquish them as they advance.  They are also more likely than men to possess outstanding verbal competence and, thus, the additional option to flourish in law, the humanities, or medicine.

According to Ceci and Williams, “The tenure structure in academe demands that women who have children make their greatest intellectual achievements contemporaneously with their greatest physical and emotional achievements, a feat fathers are never expected to accomplish,” resulting in career choices “men are not required to make.”  But in order to counteract the “childbearing penalty,” as they term it, the authors suggest that universities consider deferred start-up tenure-track positions and part-time work that segues into full-time tenure-track employment as children mature.

Finally, on February 7 of this year, Ceci and Williams published a hard-hitting and no doubt divisive paper addressing persistent and pervasive claims of sex discrimination in interviewing, hiring, and grant and manuscript reviewing (Ceci and Williams, 2011).  After reviewing twenty years of data, Ceci and Williams—married with three daughters of their own—decided that the evidence of discrimination against women in math-intensive fields is “aberrant, of small magnitude” and “superseded by larger, more sophisticated analyses showing no bias, or occasionally, bias in favor of women.”

In agreement with their most recent work, Ceci and Williams surmised instead that the gender gap results primarily from women’s career preferences and fertility and lifestyle choices, “both free and constrained.”  Adolescent girls tend to gravitate toward careers focusing on people as opposed to things, the couple found, and female PhDs interested in childrearing are less likely to apply for or maintain tenure track positions.  As a secondary explanation, Ceci and Williams again pointed to evidence for upper tail disparities in cognitive ability.

The authors briefly addressed the thorny question of solutions as well, emphasizing the need to move beyond historical causes.  But if the existing bases of female underrepresentation are mostly a function of female preferences—for non-math or less math-intensive careers, or for reproduction and childrearing—is it really “underrepresentation” in any meaningful sense of the word?  If so, does it represent a problem justifying remedies involving sacrifices from others, average taxpayers in particular?  Perhaps some arrangements would benefit many and harm none.  But others implicating the commitment or reallocation of valuable resources ought to be vetted thoroughly at all levels of society.

References:

Ceci, S.J., Williams, W.M. 2010. The Mathematics of Sex: How Biology and Society Conspire to Limit Talented Women and Girls. New York: Oxford University Press.

Ceci, S.J., Williams, W.M. 2011. Understanding current causes of women’s underrepresentation in science. Proceedings of the National Academy of Sciences, USA, DOI: 10.1073/pnas.1014871108.

Ceci, S.J., Williams, W.M., Barnett, S.M. 2009. Women’s underrepresentation in science: sociocultural and biological considerations. Psychological Bulletin 135(2): 218-261.

Else-Quest, N.M., Hyde, J.S., Linn, M.C. 2010. Cross-national patterns of gender differences in mathematics: a meta-analysis. Psychological Bulletin 136(1): 103-127.

Hyde, J.S., Lindberg, S.M., Linn, M.C. 2008. Gender similarities characterize math performance. Science 321: 494-495.

Mind over Metaphor: This Is Your Brain on Figurative Language.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.

“Children may not understand political alliances or intellectual argumentation, but they surely understand rubber bands and fistfights.”—Steven Pinker, from The Stuff of Thought: Language as a Window into Human Nature (Viking, 2007).

Sometimes a “cigar” is just a cigar.  Then again, mischief is the hot smoke curling off the end of a lit intellect.  Sometimes a “diamond in the rough” is indeed just an ancient deposit of highly compressed carbon.  But no facet of humanity’s evolved “genius,” as Aristotle put it more than 2300 years ago, sparkles so brilliantly as our unique capacities for extra-literal description and comprehension.

Until recently, most professional sources attributed our proficiency with language to a pair of knuckle-sized regions on the brain’s left side called “Broca’s area” and “Wernicke’s area”—the former responsible for grammar, the latter for word meanings.  But new technologies, most notably functional Magnetic Resonance Imaging (fMRI), have allowed scientists to probe the brains of healthy volunteers non-invasively and to discover, first, that other parts of the brain share in these responsibilities and, second, that Broca’s and Wernicke’s regions contribute to other important tasks as well.

For some, such frustrating complexity spells the end of the “modularity” hypothesis, which posits the functional specialization of certain identifiable neural systems.  For others, including psychologist Gary Marcus, author of The Birth of the Mind (Basic Books, 2004), new evidence suggests “not that we should abandon modules (the Swiss Army Knife view of the brain) but that we should rethink them—in light of evolution.”

While he acknowledges that the brain’s left hemisphere appears to be devoted to both language and problem solving, R. Grant Steen, psychologist, neurophysiologist, and author of The Evolving Brain (Prometheus, 2007), agrees with Marcus, defining language (as opposed to mere communication) in very practical, adaptive terms:

[L]anguage is a system of communication that enables one to understand, predict, and influence the action of others.  Inherent in this definition is a concept of theory of mind: if communication is instinctual rather than having a purpose, then it should probably not be considered a language.  If communication has a purpose, this assumes an awareness of other independent actors, whose actions can potentially be influenced.… [F]or communication to serve the needs of the listener as well as the needs of the speaker, the listener must be able to understand what the speaker is “really” saying.  It is not enough to understand the literal meaning of speech.

Broadly stated, then, experts seek out the neural substrates and processes of figurative language comprehension in order to distinguish the biological bases of what makes humans most exceptional among animals.  In the end, they hope as well to develop more effective means of restoring these extraordinary abilities to those who have lost them and, perhaps, to enhance such talents for the benefit of our collective future.  Although the search has just begun, we have already learned a great deal.

Two intimately associated paradigms have come under intense scrutiny in recent years.  The standard model of figurative language processing—sometimes referred to as the “indirect” or “sequential” view—maintains that the brain initially analyzes passages for literal meaning and, only if the literal interpretation makes no sense, reprocesses the words for access to an appropriate figurative meaning.  According to the related dichotomous model of “laterality,” the brain’s left hemisphere (LH) is responsible for processing literal language while its right hemisphere (RH) is enlisted only to decode figurative expressions.
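Stated baldly, the indirect view is a two-pass algorithm: attempt a literal reading first, and only upon failure reprocess for a figurative meaning.  A minimal Python sketch of that logic follows—the sentences and “sense” tables are invented placeholders of mine, not materials from any of the studies discussed here:

    # Toy sketch of the "indirect"/"sequential" model: attempt a literal
    # reading first; only if that fails, reprocess the words for a
    # figurative meaning. Both lookup tables are invented placeholders
    # for whatever the brain actually computes.
    LITERAL = {"that fighter is a boxer": "a person who boxes"}
    FIGURATIVE = {"that lawyer is a shark": "a ruthless, aggressive person"}

    def comprehend(sentence):
        key = sentence.lower()
        if key in LITERAL:               # pass 1: literal analysis
            return "literal", LITERAL[key]
        if key in FIGURATIVE:            # pass 2: figurative reanalysis
            return "figurative", FIGURATIVE[key]
        return "unresolved", None

    print(comprehend("That lawyer is a shark"))

Note the behavioral prediction built into the model: figurative sentences require a second pass and so should take measurably longer to understand—exactly the sort of response-time disparity that, as we will see, the newer studies often fail to detect.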

Such paradigms were based on classic lesion studies, beginning with those conducted in 1977 by Ellen Winner and Howard Gardner, who showed that patients with RH damage had much more difficulty processing metaphors than subjects with LH damage.  However, in an editorial from the February 2007 issue of Brain and Language, linguistics expert Rachel Giora argued that Winner and Gardner’s results had been widely misinterpreted.  Although only the patients with LH damage were able to competently match metaphorical figures with their corresponding pictures, Giora explained, it was not true that the patients with RH damage were unable to make such connections when asked to do so verbally.  Indeed, a number of studies published at the turn of the century challenged the notion that RH damage selectively impairs people’s command of verbal figurative language.

During the last few years, researchers have begun to dissect the old paradigms more systematically.  In August of 2004, Alexander Rapp’s team of German scientists published a report in Cognitive Brain Research titled “Neural Correlates of Metaphor Processing.”  They used event-related fMRI technology to detect brain activity in sixteen healthy subjects as they read short, simple sentences with either a literal or a metaphorical meaning.

Consistent with the laterality model, Rapp had predicted that metaphorical sentences, compared with literal ones, would induce more vigorous activation in his participants’ right lateral temporal cortices.  Instead, the strongest signal disparities occurred in the subjects’ LH—the left inferior frontal and temporal gyri, or cortical folds, in particular.  In possible contradiction to the indirect or sequential view of metaphor processing, Rapp’s study noted as well that neither response times nor accuracy diverged between the two conditions.  In summary, the team advised their colleagues to reassess the RH theory of figurative language comprehension, suggesting that, although the RH appeared to play some important role, factors other than figurativity per se might be involved.

Two years later, cognitive scientists Zohar Eviatar and Marcel Adam Just published a similar study, “Brain Correlates of Discourse Processing: An fMRI Investigation of Irony and Conventional Metaphor Comprehension” in Neuropsychologia.  There, sixteen subjects digested ironic sentences in addition to literal and simple metaphorical expressions.

As one might guess, the results were considerably more complicated.  First, all three types of statements stimulated the classical language areas of the LH: moving roughly from front to back, the left inferior frontal gyrus, the left inferior temporal gyrus, and the left inferior extrastriate region.  Second, metaphorical sentences activated these same areas to a significantly higher degree than did either literal or ironic statements.  Third, the right superior and middle temporal gyri were significantly more sensitive to ironic statements than to any others and, finally, the right inferior temporal gyrus was differentially sensitive to metaphorical meanings.

From these varied results, Eviatar and Just drew three major conclusions.  Because all kinds of stimuli had activated the same classical language regions of the LH, the exclusive RH theory of figurative language as such was deemed untenable.  In addition to this general pattern, however, both metaphor and irony had triggered further brain activation—metaphor most conspicuously in the LH and less forcefully in one part of the RH, and irony quite vigorously in a rather disparate region of the RH.  For whatever reasons, then, the metaphors used in this experiment were processed somewhat differently than the literals were and, perhaps most significantly, the metaphorical and ironical expressions were processed differently in relation to one another.

The authors proposed a number of possible causes for this last distinction, but seemed inclined to attribute it to the sentences’ character rather than to their category.  Recall that Eviatar and Just had chosen conventional (sometimes called “salient”) metaphors.  Long hackneyed, such expressions have been “lexicalized” to the point where people really don’t have to think about them in order to understand them.  In this experiment, for example, a fast worker was compared to a “hurricane” and a conscientious sister was likened to an “angel from heaven.”  Simple, idiomatic metaphors like these, the authors speculated, might be processed most efficiently in the LH as a unit, not unlike long words and literal phrases.

Irony, on the other hand, is always more interpretive and complex because it implicates an association between the speaker’s thoughts and the thoughts of someone else.  Citing developmental studies of theory-of-mind mechanisms, the authors noted that healthy children and adults who can correctly attribute first-order beliefs (modeling what another person knows) can comprehend metaphor but not necessarily irony, while subjects who can make second-order attributions (modeling what another person knows about what a third person knows) are usually capable of understanding irony as well.  As such, Eviatar and Just prodded, the possibility that complexity rather than figurativity per se might be responsible for RH involvement raised “an extremely interesting set of issues for future research.”

Psychologist Gwen L. Schmidt apparently concurred before she and her team of American researchers announced the results of their study, “Right Hemisphere Metaphor Processing? Characterizing the Lateralization of Semantic Processes,” in the February 2007 edition of Brain and Language.  Instead of fMRI, the authors used a divided visual field technique, measuring the reaction times of 81 subjects after they read the final, experimentally relevant portions of sentences presented either in their left visual fields (which project to the RH) or in their right visual fields (which project to the LH).

Three different phases were designed to investigate how the brain processes various types of figurative and literal sentences.  Phases one and two compared reaction times between moderately unfamiliar (or “non-salient”) metaphors and both familiar and unfamiliar literals.  Phase three compared times between familiar and highly unfamiliar metaphors.  During the first two phases, the team recorded a RH processing-time advantage for moderately unfamiliar metaphor sentence endings and a LH advantage for literal-familiar sentence endings.  Literal-unfamiliar sentences, like novel metaphors, produced an advantage for the RH.  During the final phase, the authors found a LH advantage for familiar metaphors and a RH advantage for their highly unfamiliar counterparts.

In other words, Schmidt and company got exactly what they had expected, consistent with the “coarse coding model” of semantic processing.  Displacing the old indirect/sequential processing and dichotomous laterality paradigms, the coarse coding model predicts that any sentence depending on a close semantic relationship (for example, The camel is a desert animal) will activate the LH, and that any sentence relying on a distant semantic relationship (for example, either The camel is a desert taxi, or The camel is a good friend) will activate the RH, regardless of whether the expression is intended metaphorically or literally.  More hackneyed stimuli can be efficiently processed in a fine semantic field in the LH.  Novel ones with multiple possible meanings, however, must be dealt with more methodically in a much coarser field in the RH.
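The coarse coding prediction can be caricatured in a few lines of code.  In this toy sketch of mine, hand-made feature vectors stand in for whatever the cortex actually encodes, and a cosine-similarity threshold stands in for the fine/coarse semantic field distinction—illustrative assumptions only, not anything Schmidt’s team computed:

    # Toy caricature of coarse coding: topic-vehicle pairs with CLOSE
    # semantic relationships route to the fine-grained left hemisphere;
    # DISTANT pairs route to the coarser right hemisphere, whether the
    # sentence is meant literally or metaphorically. The feature
    # vectors and threshold are invented for illustration.
    import math

    FEATURES = {  # crude features: (animal, desert, vehicle, person)
        "camel":         (1.0, 1.0, 0.0, 0.0),
        "desert animal": (1.0, 1.0, 0.0, 0.0),
        "desert taxi":   (0.0, 1.0, 1.0, 0.0),
        "good friend":   (0.0, 0.0, 0.0, 1.0),
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norms if norms else 0.0

    def predicted_hemisphere(topic, vehicle, threshold=0.7):
        close = cosine(FEATURES[topic], FEATURES[vehicle]) >= threshold
        return "LH (fine field)" if close else "RH (coarse field)"

    for vehicle in ("desert animal", "desert taxi", "good friend"):
        print("The camel is a %s -> %s" % (vehicle, predicted_hemisphere("camel", vehicle)))

Run as written, the close pairing routes to the LH while both distant pairings route to the RH—the metaphorical “desert taxi” and the literal-but-novel “good friend” alike, which is precisely the model’s point.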

All of which makes good, practical sense from an evolutionary point of view.  Adaptations are cumulative, of course, and nature builds ever so slowly and imperfectly, if at all, upon existing structures.  While theoretically possible, we should never assume a priori that an isolated region of the brain would take on sole responsibility for any behavior or the accomplishment of any task.  Recent investigations make it clear that the human brain has evolved into a highly integrated (not to mention surprisingly plastic) organ.

But why should cognitive scientists of all people agonize over literary minutiae normally regarded only in university humanities departments?  Generally, because the days are long past when science could be neatly segregated from “other subjects.”  More specifically, because significant clinical interests are at stake as well.  Several patient populations reliably suffer from diminished or otherwise altered comprehension of irony, humor, metonymy, and non-salient metaphors in particular.  Certain diseases, therefore, might well find their causes in brain anomalies also responsible for linguistic deficiencies.  Regardless, such deficiencies surely exacerbate the existing social impairments experienced among patients overwhelmed by serious psycho- and neuropathologies.

One of the more unfortunate features of schizophrenic thought disturbance, for example, is the still mysterious problem of “concretism,” the inability to grasp non-literal language.  In the January 2007 issue of NeuroImage, Tilo Kircher’s team (the same group as Rapp’s, under a different lead author) published “Neural Correlates of Metaphor Processing in Schizophrenia,” an fMRI study involving twelve subacute in- and outpatients and twelve control subjects who inspected brief sentences with either a literal or novel metaphorical connotation.  Kircher’s goal, of course, was to begin the process of exposing the disease’s neural bases.

As predicted, all participants’ brains activated in the left inferior frontal gyrus more forcefully for metaphors than for literals.  With respect to metaphors only, controls clearly reacted more strongly than patients in the RH (more specifically, the right precuneus and right middle/superior temporal gyrus).  LH results were more complicated.  Healthy subjects activated most vigorously in the anterior portion of the left inferior frontal gyrus, a locus equivalent to what researchers call Brodmann’s areas 45 and 47 (which, incidentally, lie just anterior-inferior to the classical Broca’s area).  Notably, this region has been closely associated with sentence-level semantic language comprehension.  By contrast, patients activated most impressively in Brodmann’s area 45, three centimeters dorsal to peak stimulation among controls.

While first acknowledging prior evidence demonstrating the RH’s valuable role in complex syntactic and semantic processing, Kircher’s team stressed their findings that the inferior frontal and superior temporal gyri “are key regions in the neuropathology of schizophrenia,” and that “[t]heir dysfunction seems to underlie the clinical symptom of concretism, reflected in the impaired understanding of non-literal, semantically complex language structures.”  In other words, the patients’ shared failure to recruit now specifically identified areas in the LH appears to be at least pertinent if not vital to our struggle against this horribly debilitating illness.

In an even more recent edition of Brain and Language, a group of Italian psychologists and neurologists led by Martina Amanzio published “Metaphor Comprehension in Alzheimer’s Disease: Novelty Matters,” a study comparing conventional and novel metaphor comprehension among twenty probable Alzheimer’s sufferers and twenty matched controls.  Based in part on some of the above-referenced experiments, Amanzio successfully predicted that patients would perform relatively well with salient metaphors but significantly less so with non-salient ones.

While maintaining a healthy skepticism, the team hypothesized that the distinction might this time involve the prefrontal cortex, the brain’s executive center, because prefrontal dysfunction is a common symptom of Alzheimer’s disease and because the comprehension of non-salient metaphors requires the executive ability to compare and combine vehicles and topics in order to appreciate figurative meanings.  “These findings,” the Italians concluded, “may have some clinical implications for the real life communication with [Alzheimer’s] patients.  Salience matters.”

And, thus, so does metaphor.  Figurative language is surely more than an intellectual extravagance.  It is as much a fiber of our very being as each of the countless neurons contained in our big, beautiful brains.  Fortunately, comprehension of novel expression serves as a useful barometer of our personal and communal health as well.  So one might permit oneself the guilty pleasure of mixing metaphors on occasion, despite academic decorum.

How Deep Is Your Love? Human Morality and the Question of Altruism Among Non-human Primates.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.

I looked into his eyes. It was like looking into the eyes of a man. And the message was: Won’t anybody help me?—A zoo visitor who had rescued a chimpanzee from drowning in the enclosure’s moat.

The depth and instinctual force of human altruism is a complicated matter.  Today’s leading anthropologists, primatologists, and comparative psychologists—together with their ever-enigmatic primate subjects—are slogging it out in laboratories around the world.  Will apes and monkeys, our nearest evolutionary relatives, think to bestow charity upon other beings, and, if so, under what specific circumstances?  As the experiments progress, the debate grows ever more contentious.

Some say that altruism is primarily a cultural phenomenon, unique to humans.  Others insist that its roots extend much deeper, past the common ancestors of humans and chimpanzees that flourished in Africa some six million years ago.  If the latter is true, goodness can be characterized as a given, at least to some predictable and very comforting extent.  If not, human morality might be only a veneer—a thin gauze nearly soaked by the gaping, untreatable wounds of greed and insensitivity, or something to that general effect.

So given the metaphysical stakes, it should come as no great surprise that the possibility of so-called “other regarding preferences” in non-human primates is among the hottest of hot scientific topics.  For many years, the clear consensus had been that humans were the only genuinely altruistic species on earth.  On June 25, 2007, however, the journal Nature reported gathering evidence that “we might not be alone.”  Similarly, in January of this year, Discover magazine summarized the first experimental support for “spontaneous altruism in chimpanzees, toward both non-related chimps and humans.”

These articles described recent methodological breakthroughs made by Felix Warneken, a leading researcher at the Max Planck Institute of Evolutionary Anthropology in Leipzig, Germany.  To that point, experimenters had mostly used food rewards as a means of probing the prosocial tendencies of chimpanzees, and the results had been largely unimpressive.  Journal headlines had carped, for example, that “chimpanzees are indifferent to the welfare of unrelated group members” (Nature, 2005, 437, 1357-1359), and that “self-regard precludes altruism and spite in chimpanzees” (Proc. R. Soc. B, 2006, 273, 1013-1021).  But Warneken decided to break ranks with his colleagues and confront the question from an entirely different angle, judging that the nutritional imperative might simply be too strong for chimps to resist.

By 2006, he had tested three young chimps’ desires to assist their human caretakers in obtaining useful objects just beyond their beckoning reach—ballpoint pens, for example (Science, 311, 1301-1303).  Warneken’s subjects performed admirably, even without the possibility of reward.  And by 2007, he had designed two additional experiments, each of which applied improvements to his innovative “instrumental helping” paradigm (PLoS Biology, 5(7), e184).

First, he replicated the 2006 study—this time using unfamiliar human partners—and got similar results.  Then, he positioned his subjects to watch unfamiliar partner chimps, or “conspecifics,” as they struggled to open a locked door.  Fully unaware that food had been placed beyond that door, the subjects chose to aid their partners by freeing the chain attached thereto nearly 80 percent of the time.  Thus, in all three studies, Warneken’s animals had demonstrated an eagerness to indulge others, even when doing so required them to expend a little extra effort.

When I asked Warneken about these intriguing results, he emphasized that food sharing “is only one type of potentially altruistic behavior,” and, indeed, that food exchange might not be the greatest method of assessing altruism, given that chimpanzees are generally hyper-competitive over food.  In other words, researchers relying on the erstwhile paradigm may have set the altruism bar a bit too high, even for our closest cousins.

But not according to Frans de Waal, perhaps the most accomplished primatologist on the planet—and certainly the most celebrated one since the pop-media heyday of Jane Goodall.  In a primer to Warneken’s 2007 study, de Waal remarked of previous trials involving chimps and food techniques that “all that these experiments really showed was that humans can create situations in which apes focus on their own interests” (PLoS Biology, 2007, 5(7), e190).  After all, de Waal pointed out, although human bargain hunters will mercilessly trample their fellow holiday shoppers to hoard insanely cheap microwave ovens and television sets, that doesn’t necessarily mean we are completely or chronically indifferent to one another’s welfare.

Although he doesn’t deny that kinship and reciprocation play major roles in the prosocial tendencies of primates, de Waal proposes a more philosophically nuanced analysis that distinguishes a behavior’s ultimate from its proximate cause.  The former might explain why actions are favored by natural selection, while the latter illuminates the psychological or physiological mechanisms triggered by animals’ present situations.  De Waal offers sex as an especially helpful and engaging analogy.  Without the slightest interest in reproduction—sex’s ultimate cause—men and women crave physical intimacy and carnal knowledge of one another’s nearly irresistible bodies simply for the immediate emotional and physical ecstasy of it all.

Once evolved, in other words, behaviors can rebel against their Darwinian overlords, seizing motivational autonomy from their ultimate goals.  Thus, de Waal argues, “empathy evolved in animals as the main proximate mechanism for directed altruism,” and it is empathy—not self interest—that “causes altruism to be dispensed in accordance with predictions from kin selection and reciprocal altruism theory” (Annu. Rev. Psych., 2008, 59, 279-300).  Having originated in parental care, empathy in general is as old as the storied mammalian class itself.  When coupled with the perspective-taking abilities intrinsic to a very few large-brained species, however, empathic instincts take on an entirely different character, producing spontaneous, yet intentional, other-regarding responses.

De Waal and Sarah Brosnan, de Waal’s student at Emory University until very recently, became famous in anthropological circles for their token-exchange experiments revealing aversion to inequity in both brown capuchin monkeys and chimpanzees (Nature, 2003, 425, 297-299, and Proc. R. Soc. B, 2005, 272, 253-258).  Importantly, the team’s monkeys appeared to be sensitive only to their own relative disadvantages, while their chimps’ reactions depended on each individual’s social history—members of older, more tightly-knit groups tending to be more accepting of inequitable outcomes.

De Waal’s most recent token-exchange experiment, however, was designed to more directly probe the alleged predilection for altruism in capuchins (Proc. Natl. Acad. Sci., 2008, 105, 13685-13689).  Subject monkeys were given two simple options: they could selfishly reward only themselves or, more philanthropically, both themselves and their hungry partners.  In the end, they tended to choose the prosocial option regardless of condition, but did so more consistently when their partners were either familiar or genetically related.  Because his subjects were predominantly other-regarding in all situations, and without fear of belated group reprisals, de Waal again inferred that the underlying impetus for capuchin prosociality had to be empathy—as predicted by his theory of motivational autonomy.

But other, perhaps less celebrated, researchers have arrived at very different conclusions.  In 2007, Keith Jensen’s team, also at the Max Planck Institute in Leipzig, set eleven enthusiastic animals loose on a chimp-friendly adaptation of the legendary ultimatum game (Science, 318, 107-109).

In the game’s standard human version, of course, two unfamiliar players are assigned the roles of proposer and responder.  The proposer provisionally receives a gift of money and decides whether and how to divide it with the expectant responder.  The responder, in turn, resolves whether to accept the proposer’s offer.  If she rejects it, neither player receives any money whatsoever.  With those rules in mind, proposers tend to graciously volunteer 40 to 50 percent of the sum and responders routinely reject offers of less than 20 percent.  In other words, human responders are sensitive to unfairness and will punish inconsiderate proposers—even at a significant cost to themselves—and human proposers, realizing this, tend to make relatively fair offers that are more likely to be tolerated.
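The contrast between typical human play and the canonical self-interested agent is easy to make concrete.  Here is a minimal simulation of mine, using the rough thresholds quoted above (the exact figures vary from study to study):

    # Minimal ultimatum-game sketch. The typical human responder
    # punishes stingy offers at a cost to herself; the canonical
    # self-interested agent accepts any nonzero amount. Thresholds
    # follow the rough figures quoted in the text.
    def play(pot, offer, accepts):
        """Return (proposer_payoff, responder_payoff)."""
        if accepts(offer, pot):
            return pot - offer, offer
        return 0, 0  # rejection: neither player receives anything

    human = lambda offer, pot: offer >= 0.20 * pot  # rejects offers under ~20%
    canonical = lambda offer, pot: offer > 0        # accepts any nonzero offer

    pot = 100
    for offer in (50, 40, 10, 1):
        print("offer %d: human %s, canonical %s"
              % (offer, play(pot, offer, human), play(pot, offer, canonical)))

The question Jensen posed, in effect, was which of these two agents the chimpanzee responder more closely resembles.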

According to Jensen’s results, by contrast, chimpanzees play the ultimatum game more in keeping with the canonical economic model of pure self-interest.  Using a relatively uncomplicated apparatus featuring ropes, rods, and baited trays, Jensen discovered that his chimp proposers—supposedly accustomed to the apparatus and, thus, familiar with the game’s uncomplicated rules—tended to make coldly inequitable offers, and, conversely, that his chimp responders were apt to accept all nonzero tenders, ostensibly without umbrage.

More recently, Jennifer Vonk, along with Brosnan and several others, published their study of eighteen chimps at the University of Louisiana’s Cognitive Evolution Group laboratory (Animal Behaviour, 2008, 75, 1757-1770).  In her introduction to the study, Vonk commented on de Waal’s “anecdotal” accounts of primate prosociality, cautioning her readers that “conclusions about chimpanzees’ capacity for empathy and other-regarding sentiments rests on subjective interpretations of behavior and have not been subjected to systematic analysis.”

She also referred to and, indeed, credited Felix Warneken’s instrumental helping experiments, but carefully distinguished them from her studies, which had been calculated to address Warneken’s concerns about food as an experimental medium.  In two separate trials involving two different apparatuses—one featuring ramps, the other trays—Vonk’s subjects were provided opportunities either to reward only themselves with fruit, to reward only their conspecific partners, or to furnish food for both themselves and their partners.  Crucially, the chimps were allowed to make their fateful choices either before or after consuming their own rewards, thereby, according to Vonk, “avoiding the possibility that obtaining food for themselves distracted them from obtaining food for their partners.”

Following the training, and then all of the rolling, pulling, and consumption of tasty treats, Vonk’s team finally surmised that chimpanzees do not reliably take advantage of low-cost opportunities to nourish their hopeful peers.  In nearly every case, the presence or absence of a partner conspecific had no effect on the subject’s decision to send fruit to the other animal’s enclosure.  Vonk’s subjects seemed wholly indifferent to the desires of others.  Chimpanzee behavior, she concluded, “is consistent with standard evolutionary models based on kinship and reciprocity.”

But could it be that some chimps and monkeys—like some humans we might know—are more callous or self-absorbed than others?  Every experiment, after all, tests only a very limited number of individuals.  Maybe Jensen and Vonk just happened to assemble a group of stingy, egocentric misers, and Warneken a rare cache of Good Samaritans.  “Normally, negative results would be largely ignored,” de Waal told me, “but the negative results on animal altruism are hyped over and over” because they support the dominant “strong reciprocity” school of thought that insists on human uniqueness.

Now Assistant Professor of Psychology at Georgia State University, Sarah Brosnan remains fundamentally ambivalent about the sticky issue of primate altruism.  On the one hand, her personal encounters with primates inform her that both apes and monkeys at least seem other-regarding.  On the other, she admits that to this point her intuitions remain largely unsubstantiated by solid evidence.  Even so, Brosnan is confident of at least two things.  First, despite her and Vonk’s findings in Louisiana, chimp altruism is “more likely to be elicited in contexts which do not involve food,” and, second, it is “much more likely to be based on emotion and relationships than on cognitive calculations.”

Unsurprisingly, Keith Jensen endorses a more objective and skeptical attitude.  Like de Waal, he suspects that even chimpanzees lack the mental capacity for delayed, calculated reciprocity.  But he also criticizes de Waal’s reliance on subjective evidence of primate altruism and his “somewhat contentious” use of the term “empathy.”  In the end, Jensen believes that kinship is usually, if not always, the dominant underlying mechanism of animal prosociality, and, thus, that empathy and other-regarding preferences “only emerged somewhere during human evolution.”

The current preponderance of hard, empirical support appears to weigh in on the side of reservation, despite recent media exuberance.  Then again, as de Waal has been quick to point out, non-human primates have been known to achieve extraordinary, if exceptionally rare, prosocial performances that, if committed by humans, would doubtless be characterized as nothing short of heroic.

One chimpanzee in particular is known to have overcome its species’ intense dread of water in order to save a drowning infant chimp’s life—only to surrender its own.  I can no more than speculate on what the Midwestern farmer would have done all those years ago had he been unable to tread water.  And in August of 1996, a valiant female gorilla named Binti Jua was actually filmed as she rescued a three-year-old boy who had fallen eighteen feet into the primate exhibit at Chicago’s Brookfield Zoo.  Binti quickly whisked the boy away to a safe location, cradling and pampering him there, before finally delivering him to the zoo’s dumbfounded and no doubt nervous staff.

The skeptics would do well to develop and refine their explanations of these amazing behaviors, just as the champions of primate altruism should be set to the task of providing empirical evidence for their predominantly philosophical and intuitional hypotheses.  Either way, human morality is no mere act; clearly, it is the evolutionary extrapolation of our innate, cooperative tendencies.  The question, rather, is one of degree—how deep are our moral foundations, and how much more can we reasonably expect from one another?