Category Archives: Evolution

Obesity: “Fat Chance” or Failure of Sincerity?

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.


Man is condemned to be free.—Jean-Paul Sartre.

Beginning about five years ago, the chronically overweight and obese were offered a new paradigm, one more consistent with the shared experiences of most of them in the twenty-first century. Emerging science from diverse fields, certain experts argued, complicated—perhaps even contradicted—the established view that weight maintenance was a straightforward, if not simple, matter of volitional control and balancing energy intake against energy expenditure.

As a host of potential complexities materialized, the frustrated members of this still expanding demographic were notified that, contrary to conventional wisdom, they had little or no control over their conditions. The popular literature especially began to hammer two captivating messages deeply into the public consciousness.  First, from within, the overweight and obese have been overwhelmed by their genomes, epigenomes, hormones, brains, and gut microbiomes, to name just a few.  Second, from without, their otherwise well-calculated and ample efforts have been undermined, for example, by the popular media, big food, government subsidies, poverty, and the relentless and unhealthy demands of contemporary life.

In a 2012 Nature opinion piece, Robert Lustig, Laura Schmidt, and Claire Brindis—three public health experts from the University of California, San Francisco—compared the “deadly effect” of added sugars (high-fructose corn syrup and sucrose) to that of alcohol(1).  Far from mere “empty calories,” they added, sugar is potentially “toxic” and addictive.  It alters metabolism, raises blood pressure, causes hormonal chaos, and damages our livers.  Like both tobacco and alcohol (a distillation of sugar), it affects our brains as well, encouraging us to increase consumption.

Apparently unimpressed with Americans’ abilities to control themselves, Lustig et al. urged us to back restrictions on our own choices in the form of government regulation of sugar. In support of their appeal, the trio relied on four criteria—“now largely accepted by the public health community”—originally offered by social psychologist Thomas Babor in 2003 to justify the regulation of alcohol: The target substance must be toxic, unavoidable (or pervasive), produce a negative impact on society, and present potential for abuse.  Perhaps unsurprisingly, they discovered that sugar satisfied each criterion with ease.

Robert Lustig.

Lustig, a pediatric endocrinologist and, now, television infomercial star, contends that obesity results primarily from an intractable hormonal predicament. In his wildly popular 2012 book, Fat Chance, Lustig indicted simple, super-sweet sugars as chief culprits, claiming that sucrose and high-fructose corn syrup corrupt our biochemistry to render us hungry and lethargic in ways fat and protein do not(2).  In other words, he insisted that sugar-induced hormonal imbalances cause self-destructive behaviors, not the other way around.

Lustig’s argument proceeds essentially as follows: In the body, insulin causes energy to be stored as fat.  In the hypothalamus, it can cause “brain starvation,” or resistance to leptin, the satiety hormone released from adipose tissue.  Excess insulin, or hyperinsulinemia, thus causes our hypothalami to increase energy storage (gluttony) and decrease energy consumption (sloth).  To complete the process, add an increasingly insulin-resistant liver (which drives blood insulin levels even higher), a little cortisol (the adrenal stress hormone), and of course sugar addiction.  In the end, Lustig concludes, dieters hardly stand a chance.

Journalist Gary Taubes, author of the similarly successful Why We Get Fat, was in full agreement(4).  Picking up the theoretical mantle where Lustig dropped it, Taubes expanded the list of nutritional villains considerably to include all the refined carbohydrates that quickly raise consumers’ blood glucose. In a second Nature opinion piece, he then blamed the obesity problem on both the research community, for failing to fully comprehend the condition, and the food industry, for exploiting that failure(3).

Gary Taubes with Dr. Oz.

To their credit, Lustig and Taubes provided us with some very sound and useful advice.  Credible nutrition researchers agree, for example, that Americans in particular should drastically reduce their intakes of added sugars and refined carbohydrates.  Indeed, most would be well-advised to eliminate them completely.  The authors’ claims denying self-determination might seem reasonable as well, given that, as much research has shown, most of the obese who have tried to lose weight and keep it off have failed.

On the other hand, failure is common in the context of any difficult task, and evidence of “don’t” does not amount to evidence of “can’t.” One might wonder as well whether obesity is a condition easily amenable to controlled scientific study given that every solution—and of course many, in fact, do succeed(5)—is both multifactorial and as unique as every obese person’s biology.  So can we sincerely conclude, as so many commentators apparently have, that the overweight and obese are essentially powerless to help themselves?  Or could it be that the vast majority of popular authors and health officials have largely—perhaps even intentionally—ignored the true root cause of obesity, if for no other reasons, simply because they lack confidence in the obese population’s willingness to confront it?

Though far less popular, a more recently published text appears to suggest just that.  In The Psychology of Overeating, clinical psychologist Kima Cargill attempts to “better contextualize” overeating habits “within the cultural and economic framework of consumerism”(6).  What current research fails to provide, she argues, is a unified construct identifying overeating (and sedentism, one might quickly add) as “not just a dietary [or exercise] issue,” but rather as a problem implicating “the consumption of material goods, luxury experiences, … evolutionary behaviors, and all forms of acquisition.”

Kima Cargill.

To personalize her analysis, Cargill introduces us to a case study named “Allison.”  Once an athlete, Allison gained fifty pounds after marriage.  Now divorced and depressed, she regularly eats fast food or in expensive restaurants and rarely exercises.  Rather than learn about food and physical performance, Allison attempts to solve her weight problem by throwing money at it.  “When she first decided to lose weight,” Cargill recalls, “which fundamentally should involve reducing one’s consumption, Allison went out and purchased thousands of dollars of branded foods, goods, and services.” She hired a nutritionist and a trainer.  She bought a Jack LaLanne juicer, a Vitamix blender, a Nike FuelBand, Lululemon workout clothing, an exclusive gym membership, diet and exercise DVDs and iPhone apps, and heaping bags full of special “diet foods.”

None of it worked, according to the author, because Allison’s “underlying belief is that consumption solves rather than creates problems.”  In other words, like so many others, Allison mistook “the disease for its cure.”  The special foods and products she purchased were not only unnecessary, but ultimately harmful.  The advice she received from her nutritionist and trainer was based on fads, ideologies, and alleged “quick-fixes” and “secrets,” but not on actual science.  Yet, despite her failure, Allison refused to “give up or simplify a life based on shopping, luxury, and materialism” because any other existence appeared empty to her.  In fact, she was unable to even imagine a more productive and enjoyable lifestyle “rich with experiences,” rather than goods and services.

Television celebritism: also mistaking the disease for its cure.

Like Lustig, Taubes, and their philosophical progeny, Cargill recognizes the many potential biological factors capable of rendering weight loss and maintenance an especially challenging task.  But what she does not see in Allison, or in so many others like her, is a helpless victim of either her body or her culture.  Judging it unethical for psychologists to help their patients accept overeating behaviors and their inevitably destructive consequences, Cargill appears to favor an approach that treats the chronically overweight and obese like any other presumably capable, and thus responsible, adult population.

Compassion, in other words, must begin with uncommon candor.  As Cargill acknowledges, for example, only a “very scant few” get fat because of their genes, without overeating.  After all, recently skyrocketing obesity rates cannot be explained by the evolution of new genes during the last thirty to forty years.  And while the food industry (along with the popular media that promote it) surely employs every deceit at its disposal to encourage overconsumption and the rejection of normal—that is, species appropriate—eating habits, assigning the blame to big food only “obscures our collusion.”  Worse yet, positioning the obese as “hapless victims of industry,” Cargill observes, “is dehumanizing and ultimately undermines [their] sense of agency.”

Education is always an issue, of course. And, generally speaking, the least healthy eating behaviors are associated with the lowest levels of education.  But the obese are not stupid, and shouldn’t be treated as such.  “None of us is forced to eat junk food,” the author notes, “and it doesn’t take a college degree or even a high school diploma to know that an apple is healthier than a donut.”  Nor is it true, as many have claimed, that the poor live in “food deserts” wholly lacking in cheap, nutritious cuisine(7).  Indeed, low-income citizens tend to reject such food, Cargill suggests, because it “fails to meet cultural requirements,” or because of a perceived “right to eat away from home,” consistent with societal trends.

Certain foods, especially those loaded with ridiculous amounts of added sugars, do in fact trigger both hormonal turmoil and addiction-like symptoms (though one might reasonably question whether any substance we evolved to crave should be characterized as “addictive”).  And as the overweight continue to gain and to habituate to reckless consumption behaviors, their task only grows more challenging.  I know this from personal experience, in addition to the science.  Nevertheless, Cargill maintains, “we ultimately degrade ourselves by discounting free will.”


Despite the now-fashionable and, for many, lucrative “Fat Chance” paradigm, the chronically overweight and obese are as capable as anyone else of making rational and intelligent decisions at their groceries, restaurants, and dinner tables. And surely overweight children deserve far more inspiring counsel.  But as both Lustig and Taubes, on the one hand, and Cargill, on the other, have demonstrated in different ways, the solution lies not in diet and exercise per se.  The roots of obesity run far deeper.

Changes to basic life priorities are key. To accomplish a more healthful, independent, and balanced existence, the chronically overweight and obese in particular must first scrutinize their cultural environments, and then discriminate between those aspects that truly benefit them and those that were designed primarily to take advantage of their vulnerabilities, both intrinsic and acquired.  Certain cultural elements can stimulate the intellect, inspire remarkable achievement, and improve the body and its systems.  But most if not all of popular culture exists only to manipulate its consumers into further passive, mindless, and frequently destructive consumption.  The power to choose is ours, at least for now.

References:

(1) Lustig, R.H., L.A. Schmidt, and C.D. Brindis. 2012. Public health: the toxic truth about sugar. Nature 482: 27-29.

(2) Lustig, R. 2012. Fat Chance: Beating the Odds Against Sugar, Processed Food, Obesity, and Disease. NY: Hudson Street Press.

(3) Taubes, G. 2012. Treat obesity as physiology, not physics. Nature 492: 155.

(4) Taubes, G. 2011. Why We Get Fat: And What to Do About It. NY: Knopf.

(5) See, e.g., The National Weight Control Registry. http://www.nwcr.ws/Research/default.htm

(6) Cargill, K. 2015. The Psychology of Overeating: Food and the Culture of Consumerism. NY: Bloomsbury Academic.

(7) Maillot, M., N. Darmon, and A. Drewnowski. 2010. Are the lowest-cost healthful food plans culturally and socially acceptable? Public Health Nutrition 13(8): 1178-1185.

Dog Behavior: Beneath the Veneer of “Man’s Best Friend.”

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.

We love our dogs—often more than many fellow humans. In Homer’s 8th century BCE Odyssey, Odysseus referred to the domestic dog as a “noble hound,” and it may have been Frederick II, King of Prussia, who in 1789 first characterized Canis lupus familiaris as “man’s best friend.”  More recently, Emily Dickinson judged that dogs are “better than human beings” because they “know but do not tell.”  Dogs are capable creatures, certainly, but are they as intelligent and considerate as most humans apparently believe?  Regardless of breed, can their levels of consciousness truly support qualities like nobility, loyalty, and friendship?

Ethologists attempt to assess animal behavior objectively by emphasizing its biological foundations. Classic ethology was founded on the notion that animals are driven by intrinsic motor-patterns, or species-specific, stereotyped products of natural selection (Lorenz 1982).  Modern practitioners, however, often introduce additional factors into the ethological equation.  Many suggest, for example, that intrinsic motor-patterns can be accommodated to developmental and environmental influences.  Some argue as well that complex and otherwise confusing behaviors can emerge from interactions between two or more simpler behavioral rules.

Evolving Motor-Patterns.

Biologist Raymond Coppinger and cognitive scientist Mark Feinstein, both ethologists, argue that much, if not all, dog behavior can be explained by reference to these three ideas—without resorting to more romantic notions of consciousness or sentience, let alone loyalty and friendship (2015). In the crucial context of foraging, for instance, intrinsic motor-patterns manifest similarly in all canids.

When born, both dogs and wolves spontaneously demonstrate a characteristic mammalian neonatal foraging sequence: ORIENT (toward mom) > LOCOMOTION (to mom) > ATTACHMENT (to her teat) > FOREFOOT-TREAD (stimulating lactation) > SUCK. Here, pups’ mouths and digestive systems are well-adapted to challenges imposed by the foraging environment—that is, mom.  But despite the cozy evolutionary relationship between dogs and wolves, puppyhood is the point after which precise foraging parallels end.

As adults, canid predators display the following generalized foraging sequence: ORIENT > EYE (body still, gaze fixed, head lowered) > STALK (slowly forward, head lowered) > CHASE (full speed) > GRAB-BITE (disabling the prey) > KILL-BITE > DISSECT > CONSUME. But some species, and some individual dogs and wolves, might substitute one motor-pattern for another.  Tending to hunt small prey, coyotes might occasionally swap the FOREFOOT-STAB pattern for the CHASE pattern, and HEADSHAKE for KILL-BITE. Big cats like the puma, by contrast, might substitute FOREFOOT-SLAP for GRAB-BITE to bring larger prey down from behind.

The coyote "forefoot-stab" rule.

The precise form of GRAB- or KILL-BITE can vary between species as well, often based on the predator’s evolved anatomy. The puma usually kills with a bite to the neck, crushing its prey’s trachea, or to the muzzle, suffocating the victim.  But the wolf often GRAB-BITES the prey’s hind legs, shredding its arteries and slowly bleeding it to death. Puma and wolf anatomies—jaw structure, dentition, and musculature, in particular—apply different mechanical forces, and thus demand the evolution of at least slightly different foraging behaviors.

Domestic dogs, on the other hand, tend to be far less purposeful. Having long relied on humans for sustenance, they rarely demonstrate complete predatory foraging patterns.  Instead, different breeds have retained programs for different partial sequences.  Border collies, for instance, are famous for obeying the EYE pattern. I, in fact, once owned an Akita that employed FOREFOOT-STAB with astonishing expertise to capture mice foraging deep beneath the snow.

The Border collie "eye" rule.

Nevertheless, say Coppinger and Feinstein, “certain commonalities have long persisted in the predatory motor-pattern sequences of all carnivores, reflecting their shared ancestry” and “an intrinsic ‘wired-in’ program of rules.” Learning is neither necessary to these sequences nor capable of overriding them.  Indeed, as soon as their foraging sequences are interrupted for whatever reasons, some wild-types are rendered incapable of continued pursuit.  Pumas can’t consume an animal that’s already dead, for example, and, although wolves generally can enter their foraging sequences at any point, they often can’t perform GRAB-BITE if interfered with following expression of the partial EYE > STALK > CHASE sequence.

Dogs have similar limitations. Coppinger and Feinstein recall two conversations with different sheep ranchers.  The first shepherd commended his livestock-guarding dog for independently standing watch over a sick ewe for days without consuming it.  The second, however, complained because his guarding dog ate a lamb that tore itself open on a barbed-wire fence.  To the ethologists, these seemingly inconsistent behaviors were anything but.  Livestock-guarding dogs generally do not express the DISSECT motor-pattern. In fact, their only foraging rule is CONSUME. The first dog wasn’t a “good” dog, necessarily—it was just an unlucky dog.  And the second animal wasn’t really a “bad” dog—it simply performed its intrinsic program when afforded the opportunity to do so.

Accommodating Environment.

However, that many motor-patterns are stereotyped and non-modifiable does not imply that dogs and other animals are mere automata driven solely by internal programs. Certain behaviors can arise as well from an accommodation of the intrinsic to the contingencies of external forces.  Under the “right” circumstances, in other words, animals commonly act in species- or breed-specific manners.  But when exposed to other environmental conditions, their behaviors can look very different (Twitty 1966).

Indeed, if animals encounter such conditions during a developmentally “critical” or “sensitive” period—that is, a species-specific, time-bound stage of growth—they might never display certain typical behaviors. For example, many prospective service dogs flunk out merely because they can’t negotiate stairs, curbs, or even sewer grates.  Why not?  According to Coppinger and Feinstein, their vision systems never fully developed because they were raised in kennels that were sterile and spacious, but nevertheless lacking in three-dimensional structures.

Research also suggests that canids have critical periods for social bonding, during which exposure to a given stimulus will reduce the animal’s fear of that stimulus in the future (Scott and Fuller 1998). Some argue further that certain conspicuous behavioral differences between canid species can be explained, at least in part, by distinct onsets and offsets of these periods (Lord 2013).  For instance, the sensitive bonding period for dogs begins at about four weeks and ends at about eight weeks, while the same period begins and ends two weeks earlier for wolves.

Crucially, dogs and wolves develop their sensory abilities at about the same time—sight and hearing at six weeks, smell much earlier. As such, wolves have only their sense of smell to rely on during their sensitive bonding period.  One general result is that more stimuli, including humans, will remain unfamiliar and thus frightening to them as adults.  But dogs can suffer similar consequences when raised in the absence of direct human contact.

With bonding periods in mind, Coppinger and Feinstein invite us to guess why the Maremma guarding dogs they studied in Italy would abandon their human shepherds to trail their flocks. Were they merely obeying their evolved, gene-based intrinsic motor-patterns—or perhaps their training?  Did they actually understand the importance of their job?  None of the above, say the authors: guarding dog behavior can be “explained by accommodation to particular environmental factors during a critical period in the development of socialization.”

Maremma guarding dog pups.

During his famous, Nobel Prize-winning experiments in 1935, Konrad Lorenz was able to transfer the social allegiance of newly-hatched greylag geese from their mothers to not only Lorenz himself, but to inanimate objects including a water faucet as well. Coppinger and Feinstein produced similar effects with their Maremmas.  When raised with sheep instead of humans, the dogs usually stuck with the sheep.  Interestingly, however, a few Maremmas preferred to remain at home when both the sheep and their shepherds left for the fields.  These dogs had actually bonded with milk cans that no doubt smelled very much like the sheep.

Emerging Complexities.

Yet other canine behaviors cannot be easily explained by reference to either intrinsic motor-patterns or their accommodations to environmental influences. Consider the collaborative hunting of large prey in wolves, for example.  Here, individuals within the pack appear to work closely together according to a preconceived plan, synchronizing their movements and relative positions to prevent prey escape.  At first glance, the spectacle tempts us to infer not only extraordinary intelligence, but insight as well.

Coppinger and Feinstein, however, suggest an explanation relying not on naïve anthropomorphisms, but rather on our knowledge of canine behavior plus the intriguing concept of emergence.  Nothing new under the intellectual sun, the idea of emergence holds that complex and novel phenomena can arise from the accidental, “self-organizing” interaction of far simpler rules and processes.

Two classic examples illustrate the principle. Known for their towering, complicated, yet surprisingly well-ventilated structures, termite mounds obviously are not designed and erected by hyper-intelligent insects.  Similarly, many species of migrating birds, Canada geese in particular, tend to fly in a conspicuous V-pattern, but not because individual geese possess advanced senses of aesthetics.  The more likely explanation, say the authors, is that members of each species act only according to very simple rules.  Termites transport sand grains to a central location; perhaps their movements are sensitive to humidity levels and the relative concentration of gases.  Geese evolved to fly long distances and to draft behind others to lessen their burdens.  The impressive results were never planned; they simply emerged from the interaction of much humbler, species-specific rules.

The practice of collaborative hunting among wolves is no different, according to Coppinger. He and his colleagues recently created a computational model including digital agents representing both pack and prey (Muro et al. 2011).  When they imposed three basic rules on individual predators—move toward the prey, maintain a safe distance, and move away from other wolves—the model produced a successful pattern of prey capture appearing remarkably similar to the real thing.  But in no way were Coppinger’s results dependent on agent purpose, intelligence, or cooperation.

Collaborative hunting among wolves.
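Those three rules are concrete enough to simulate. Below is a minimal, assumption-laden Python sketch of a rule-driven pack, inspired by, but in no way reproducing, the Muro et al. model; the step sizes, the inverse-square repulsion, and the randomly drifting prey are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (assumptions, not values from Muro et al.).
n_wolves, safe_dist, steps = 5, 1.0, 500
wolves = rng.uniform(-10, 10, (n_wolves, 2))   # random 2-D starting positions
prey = np.array([0.0, 0.0])

for _ in range(steps):
    to_prey = prey - wolves
    dist = np.linalg.norm(to_prey, axis=1, keepdims=True)
    # Rules 1 and 2: move toward the prey, but stop closing inside safe_dist.
    approach = np.where(dist > safe_dist, to_prey / (dist + 1e-9), 0.0)
    # Rule 3: move away from every other wolf (inverse-square repulsion).
    sep = wolves[:, None, :] - wolves[None, :, :]
    d = np.linalg.norm(sep, axis=2, keepdims=True) + 1e-9
    repulse = (sep / d**2).sum(axis=1)
    wolves += 0.05 * approach + 0.02 * repulse
    prey += 0.03 * rng.standard_normal(2)      # prey drifts at random

# No rule mentions encirclement, yet the pack ends up ringed around the prey.
final = wolves - prey
print("distances to prey:", np.linalg.norm(final, axis=1).round(2))
print("bearings (deg):", np.sort(np.degrees(np.arctan2(final[:, 1], final[:, 0]))).round(0))
```

Run it with different seeds and the same qualitative result appears: the agents settle spread around the prey at roughly the safe distance, an encircling pattern no individual agent ever "planned," which is precisely the emergent point at issue.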

Consider dog “play” as well. One popular explanation of play generally is that it evolved as a “practice” motor-pattern to prepare animals for escape.  But play only partially resembles any given adult motor-pattern sequence.  And to be even minimally effective, escape has to be performed correctly the first time, which is precisely why motor-patterns are intrinsic, stereotyped, and automatic—as Coppinger and Feinstein observe, “no practice is ever required.”

As any dog owner will attest, canine play is commonly manifested in the “play bow”—a posture in which the animal halts, lowers its head, raises its rear-end, and stretches its front legs forward. Many have interpreted the bow as a purposeful invitation to engage in play.  But more careful observations and experiments suggest otherwise.  Play bows often result when dogs and wolves enter into an EYE > STALK motor-pattern sequence only to be interrupted by their targets’ failure to react—that is, to run. As such, the bow might actually reveal a combination of two conflicting rules: stalk and retreat.  If so, the posture itself is neither an adaptive motor-pattern nor a signal of intent.  Rather, say Coppinger and Feinstein, “it is an emergent effect of a dog (or wolf) simultaneously displaying two motor-pattern components when it is in multiple or conflicting states.”

None of which should diminish the love and allegiance we typically bestow upon our dogs. That they have no desire to please us—indeed, that their conscious goals are severely limited in general—is no reason to deny ourselves the great pleasure we so often derive from their company.  Even so, our failure to pursue a more objective understanding of dog behavior frequently results in disaster, for both ourselves and our pets.  For some, ignorance might be bliss.  But it’s never a solution for anyone or anything.

References:

Coppinger, R. and M. Feinstein. 2015. How Dogs Work. Chicago: University of Chicago Press.

Lord, K.A. 2013. A comparison of the sensory development of wolves and dogs. Ethology 119:110-120.

Lorenz, K. 1982. The Foundations of Ethology: The Principal Ideas and Discoveries in Animal Behavior. NY: Simon and Schuster.

Muro, C., R. Escobedo, L. Spector, et al. 2011. Wolf-pack hunting strategies emerge from simple rules in computational simulations. Behavioural Processes 88: 192-197.

Scott, J.P. and J.L. Fuller. 1998. Genetics and the Social Behavior of the Dog. Chicago: University of Chicago Press.

Twitty, V. 1966. Of Scientists and Salamanders. San Francisco: W.H. Freeman.


The Evolutionary Foundations of Dog Behavior.

[Notable New Media]

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.

The Border collie “eye” rule.

Ethologists like Mark Feinstein and Raymond Coppinger attempt to study animal behavior objectively by emphasizing its biological foundations. In their new book, How Dogs Work, the authors scrutinize, for example, the adaptive motor-pattern foraging sequences of both dogs and wolves.  In all such species, different patterns—not learned, but genetically based—emerge at different stages of life.

When born, both dogs and wolves demonstrate a characteristic mammalian neonatal foraging sequence: orientation (toward mom) > locomotion (to mom) > attachment (to her teat) > forefoot-tread (stimulating lactation) > suck. Here, the pups’ mouths and digestive systems are well-adapted to challenges imposed by the foraging environment—that is, mom.  But despite the close evolutionary relationship between dogs and wolves, puppyhood is the point after which foraging parallels end.

Adult predators exhibit the following generalized foraging pattern: orient > eye (still, with gaze fixed and head lowered) > stalk (slowly forward with head still lowered) > chase (full speed) > grab-bite (disabling the prey) > kill-bite > dissect > consume. But some species, and some individual dogs and wolves, might substitute one element, or “rule,” for another.  Coyotes that tend to hunt small prey, for instance, might occasionally substitute the forefoot-stab rule for the chase rule, and the headshake rule for the kill-bite rule.  Large cats like the puma, by contrast, might substitute the forefoot-slap rule for the grab-bite rule in order to bring larger prey down from behind.

The coyote “forefoot-stab” rule.

The form of grab- or kill-bite can vary between species as well, often based on the predator’s evolved anatomy. The puma usually kills with a bite to the neck, crushing its prey’s trachea, or to the muzzle, suffocating the prey.  But the wolf often grab-bites its prey’s hind legs, shredding its arteries and slowly bleeding it to death.  Puma and wolf anatomies—jaw structure, dentition, and musculature, in particular—apply different mechanical forces, and thus demand the evolution of at least slightly different foraging behaviors.

Domestic dogs, on the other hand, have long relied on humans for food and now rarely demonstrate complete predatory foraging patterns. Rather, different breeds have retained different partial sequences or distinct individual rules.  Border collies, for instance, are famous for following the eye rule.  I once owned an Akita that employed the forefoot-stab rule with astonishing expertise to catch mice rummaging deep beneath the snow.

Nevertheless, say Feinstein and Coppinger, “certain commonalities have long persisted in the predatory motor-pattern sequences of all carnivores, reflecting their shared ancestry” and “an intrinsic ‘wired-in’ program of rules” that at least partly governs their foraging behaviors. Learn much more about why our beloved pets and working partners really behave as they do in the authors’ fascinating new work, How Dogs Work (University of Chicago Press 2015).


The Serengeti Rules.

by Kenneth W. Krause.

[Notable New Media]

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well.  He can be contacted at krausekc@msn.com.


Science has saved countless lives in strangely uncelebrated ways. How did military doctors first learn to treat shock? That’s an interesting story in its own right.

In the early 20th century, Harvard physiologist Walter Cannon coined the term “fight-or-flight” following his observations in animal studies that digestive functions were strongly affected by stress. The sympathetic nervous system, he surmised, works in concert with the adrenal glands to modulate body organs during tough times.

As casualties mounted during WWI, Cannon was asked to figure out why the wounded so often went into shock and died. These soldiers exhibited some of the same symptoms as his beleaguered animal subjects: rapid pulse, dilated pupils, and heavy sweating. He quickly volunteered to treat the wounded overseas in the Harvard Hospital Unit.

Cannon decided to measure the soldiers’ blood pressure, not just their pulse. Shock patients, he discovered, had abnormally low BPs, usually under 90 mmHg. After measuring the concentration of bicarbonate ions in their bloodstreams, he found it similarly depleted. The patients’ normally alkaline blood had become more acidic, and the more acidic it was, the lower the patient’s BP.

So, to raise their blood pH, Cannon began administering sodium bicarbonate to shock victims. And it worked. Innumerable soldiers were saved before WWI finally came to a grisly end. Later, emphasizing how most bodily organs receive dual nervous-system inputs that generally oppose one another, Cannon coined another term: “homeostasis.” This dual regulation, he concluded, “is the central problem of physiology,” and thus the physician’s role is to reinforce or restore homeostasis.

Sean B. Carroll, professor of molecular biology at the University of Wisconsin-Madison, uses this story and others to illustrate a worthy point: regulation is critical not just to human health, but to the health of entire ecosystems as well. Also the author of Endless Forms Most Beautiful (one of my all-time favorites), Carroll explains in his new book, The Serengeti Rules: The Quest to Discover How Life Works and Why It Matters (Princeton University Press 2016), why the overarching logic of the small and familiar also applies to the large and far-flung.


Out of Southern East Asia: The Origin and Evolution of the Domestic Dog.

by Kenneth W. Krause.

[Notable New Media]

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer. Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.


Despite biologists’ great interest and effort, a detailed history of the domestic dog’s evolution has remained elusive. Scientists have estimated the date of divergence between dogs (Canis lupus familiaris) and wolves (or wolf-like canids) at anywhere between 10,000 and 32,000 years ago.  Similarly, using maternally transmitted mitochondrial DNA and haplotype analyses, researchers have proposed a number of possible regions as the dog’s birthplace, including Europe and the Middle East.

But a new study on dog origins suggests a more definitive answer.  An international team of biologists led by geneticist Ya-Ping Zhang recently collected the whole genome sequences of 58 canids, including 12 grey wolves from Europe, 11 dogs from southern East Asia, 12 dogs from northern East Asia, 4 dogs from Nigeria, and 19 diverse dog breeds from across the Old World and the Americas (Wang et al. 2015).

Following examination of these sequences, Zhang’s team discovered that the highest genetic diversity—a strong signal of species origination—occurred among dogs indigenous to southern East Asia. Other populations demonstrated a progressive gradient in ancestry away from wolves beginning in southern East Asia.  These findings, the group noted, tend to corroborate earlier work based on mitochondrial DNA and paternally transmitted Y-chromosomal DNA.

As for the timing of dog-wolf divergence and the subsequent dispersal of dogs globally, Zhang and colleagues used various genomic techniques that, in their estimation, revealed a two-step process. First, dog and wolf populations began to separate about 33,000 years ago in southern East Asia.  Then, around 15,000 years ago, dog subgroups began to radiate westward, reaching the Middle East, Africa, and finally Europe about 10,000 years ago.  Meanwhile, one Asian population backtracked to northern China, they suggest, mixed with northern East Asian dogs, and eventually made its way to the New World.

So how and when were dogs actually domesticated? A number of non-exclusive evolutionary theories of varying plausibility have been advanced over the years.  According to Hungarian ethologist Adam Miklosi, however, only a handful of these theories are consistent with the scientific evidence.  In his new book, Miklosi suggests that early humans might have plucked canid cubs from their dens, for example, selecting only those with the most affiliative temperaments.  Or perhaps humans and canids co-evolved, each species exerting selective pressures on the other.  Group selection may have played a role as well, if early dogs somehow boosted the survival rate and reproductive fitness of some human groups over those of others (Miklosi 2015).

Zhang’s group, however, favors the scenario in which an ancient dog-wolf split comprised the first step in both the domestication of wolves and the evolution of domestic dogs. Humans and the ancestors of dogs probably shared an ecological niche in southern East Asia, they argue, that offered refuge to both species during the last glacial period, which peaked between 26,500 and 19,000 years ago.  The long process of domestication may have begun with a group of wolves that became “loosely associated and scavenged with” humans before undergoing “self-domestication”—that is, “waves of selection for phenotypes [in other words, behaviors and physical traits] that gradually favored stronger bonding with humans.”

References:

Miklosi, A. 2015. Dog Behaviour, Evolution, and Cognition (second edition). Oxford, UK: Oxford University Press.

Wang, G., W. Zhai, H. Yang, et al. 2015. Out of southern East Asia: the natural history of domestic dogs across the world. Cell Research 15 December 2015; doi:10.1038/cr.2015.147.


Biological Race and the Problem of Human Diversity (Cover Article).

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.


Some would see any notion of “race” recede unceremoniously into the dustbin of history, taking its ignominious place alongside the likes of phlogiston theory, Ptolemaic geocentrism, or perhaps even the Iron Curtain or Spanish Inquisition.  But race endures, in one form or another, despite its obnoxious, though apparently captivating, dossier.

In 1942, anthropologist Ashley Montagu declared biological race “Man’s Most Dangerous Myth,” and, since then, most scientists have consistently agreed (Montagu 1942).  Nevertheless, to most Americans in particular, heritable race seems as obvious as the colors of their neighbors’ skins and the textures of their hair.  So too have a determined minority of researchers always found cause to dissent from the professional consensus.

Here, I recount the latest popular skirmish over the science of race and attempt to reveal a victor, if there be one.  Is biological race indeed a mere myth, as the academic majority has asked us to concede for more than seven decades?  Is it instead a scandalously inconvenient truth—something we all know exists but, for whatever reasons, prefer not to discuss in polite company?  Or is it possible that a far less familiar rendition of biological race could prove not only viable, but both scientifically and socially valuable as well?

Race Revived.

The productive questions pertain to how races came to be and the extent to which racial variation has significant consequences with respect to function in the modern world.—Vincent Sarich and Frank Miele, 2004.

I have no reason to believe that Nicholas Wade, long-time science editor and journalist, is a racist, if “racist” is to mean believing in the inherent superiority of one human race over any other.  In fact, he expressly condemns the idea.  But in the more limited and hopefully sober context of the science of race, Wade is a veritable maverick.  Indeed, his conclusions that biological human races (or subspecies, for these purposes) do exist, and conform generally to ancestral continental regions, appear remarkably more consistent with those of the general public.

In his most recent and certainly controversial book, A Troublesome Inheritance: Genes, Race and Human History, Wade immediately acknowledges that the vast majority of both anthropologists and geneticists deny the existence of biological race (Wade 2014).  Indeed, “race is a recent human invention,” according to the American Anthropological Association (AAA 2008), and a mere “social construct,” per the American Sociological Association (ASA 2003).  First to decode the human genome, Craig Venter was also quick to announce during his White House visit in 2000 that “the concept of race has no genetic or scientific basis.”

But academics especially are resistant to biological race, or the idea that “human evolution is recent, copious, and regional,” Wade contends, because they fear for their careers in left-leaning political atmospheres and because they tend to be “obsessed with intelligence” and paralyzed by the “unlikely” possibility that genetics might one day demonstrate the intellectual superiority of one major race over others.

According to Wade, “social scientists often write as if they believe that culture explains everything and race [indeed, biology] explains nothing, and that all cultures are of equal value.”  But “the emerging truth,” he insists, “is more complicated.”  Although the author sees individuals as fundamentally similar, “their societies differ greatly in their structure, institutions and their achievements.”  Indeed, “contrary to the central belief of multiculturalists, Western culture has achieved far more” than others “because Europeans, probably for reasons of both evolution and history, have been able to create open and innovative societies, starkly different from the default human arrangements of tribalism or autocracy.”


Wade admits that much of his argument is speculative and has yet to be confirmed by hard, genetic evidence.  Nevertheless, he argues, “even a small shift in [genetically-based] social behavior can generate a very different kind of society,” perhaps one where trust and cooperation can extend beyond kin or the tribe—thus facilitating trade, for example—or one emphasizing punishment for nonconformity—thus advancing rule-orientation and isolationism, for instance.  “[I]t is reasonable to assume,” the author avers, “that if traits like skin color have evolved in a population, the same may be true of its social behavior.”

But what profound environmental conditions could possibly have selected for more progressive behavioral adaptations in some but not all populations?  As the climate warmed following the Pleistocene Ice Age, Wade reminds, the agricultural revolution erupted around 10,000 years ago among settlements in the Near East and China.  Increased food production led to population explosions, which in turn spurred social stratification, wealth disparities, and more frequent warfare.  “Human social behavior,” Wade says, “had to adapt to a succession of makeovers as settled tribes developed into chiefdoms, chiefdoms into archaic states and states into empires.”

Meanwhile, other societies transformed far less dramatically.  “For lack of good soils, favorable climate, navigable rivers and population pressures,” Wade observes, “Africa south of the Sahara remained largely tribal throughout the historical period, as did Australia, Polynesia and the circumpolar regions.”

Citing economist Gregory Clark, Wade then postulates that, during the period between 1200 and 1800 CE—twenty-four generations and “plenty of time for a significant change in social behavior if the pressure of natural selection were sufficiently intense”—the English in particular evolved a greater tendency toward “bourgeoisification” and at least four traits—nonviolence, literacy, thrift, and patience—thus enabling them to escape the so-called “Malthusian trap,” in which agrarian societies never quite learn to produce more than their expanding numbers can consume, and, finally, to lead the world into the Industrial Revolution.

In other words, according to this author, modern industrialized societies have emerged only as a result of two evolved sets of behaviors—initially, those that favor broader trust and contribute to the breakdown of tribalism, and, subsequently, those that favor discipline and delayed gratification and lead to increased productivity and wealth.  On the other hand, says Wade, Sub-Saharan Africans, for example, though well-adapted to their unique environmental circumstances, generally never evolved traits necessary to move beyond tribalism.  Only an evolutionary explanation for this disparity, he concludes, can reveal, for instance, why foreign aid to non-modern societies frequently fails and why Western institutions, including democracy and free markets, cannot be readily transferred to (or forced upon) yet pre-industrial cultures.

So how many races have evolved in Wade’s estimation?  Three major races—Caucasian, East Asian, and African—resulted from an early migration out of Africa some 50,000 years ago, followed by a division between European and Asian populations shortly thereafter.  Quoting statistical geneticist Neil Risch, however, Wade adds Pacific Islanders and Native Americans to the list because “population genetic studies have recapitulated the classical definition of races based on continental ancestry” (Risch 2002).

To those who would object that there can be no biological race when so many thousands of people fail to fit neatly into any discrete racial category, Wade responds, “[T]o say there are no precise boundaries between races is like saying there are no square circles.”  Races, he adds, are merely “way stations” on the evolutionary road toward speciation.  Different variations of a species can arise where different populations face different selective challenges, and humans have no special exemption from this process.  However, the forces of differentiation can reverse course when, as now, races intermingle due to increased migration, travel, and intermarriage.

Race Rejected.

It is only tradition and shortsightedness that leads us to think there are multiple distinct oceans.—Guy P. Harrison, 2010.

So, if we inherit from our parents traits typically associated with race, including skin, hair, and eye color, why do most scientists insist that race is more social construct than biological reality?  Are they suffering from an acute case of political correctness, as Wade suggests, or perhaps a misplaced paternalistic desire to deceive the irresponsible and short-sighted masses for the greater good of humanity?  More ignoble things have happened, of course, even within scientific communities. But according to geneticist Daniel J. Fairbanks, the denial of biological race is all about the evidence.

In his new book, Everyone is African: How Science Explodes the Myth of Race, Fairbanks points out that, although large-scale analyses of human DNA have recently unleashed a deluge of detailed genetic information, such analyses have so far failed to reveal discrete genetic boundaries along traditional lines of racial classification (Fairbanks 2015).  “What they do reveal,” he argues, “are complex and fascinating ancestral backgrounds that mirror known historical immigration, both ancient and modern.”


In 1972, Harvard geneticist Richard Lewontin analyzed seventeen different genes among seven groups classified by geographic origin.  He famously discovered that subjects within racial groups varied more among themselves than their overall group varied from other groups, and concluded that there exists virtually no genetic or taxonomic significance to racial classifications (Lewontin 1972).  But Lewontin’s word on the subject was by no means the last. Later characterizing his conclusion as “Lewontin’s Fallacy,” for example, Cambridge geneticist A.W.F. Edwards reminded us how easy it is to predict race simply by inspecting people’s genes (Edwards 2003).

So who was right?  Both of them were, according to geneticist Lynn Jorde and anthropologist Stephen Wooding.  Summarizing several large-scale studies on the topic in 2004, they confirmed Lewontin’s finding that about 85-90% of all human genetic variation exists within continental groups, while only 10-15% exists between them (Jorde and Wooding 2004).  Even so, as Edwards had insisted, they were also able to assign all native European, East Asian, and sub-Saharan African subjects to their continent of origin using DNA alone.  In the end, however, Jorde and Wooding showed that geographically intermediate populations—South Indians, for example—did not fit neatly into commonly conceived racial categories.  “Ancestry,” they concluded, was “a more subtle and complex description” of one’s genetic makeup than “race.”
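A toy calculation makes the Lewontin-Edwards reconciliation concrete. The Python sketch below simulates two hypothetical populations whose allele frequencies differ only slightly at each of many loci; the numbers are invented for illustration and make no attempt to reproduce the actual datasets of any of the studies above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_loci, n_per_group = 1000, 200

# Invented allele frequencies for two hypothetical populations: each locus
# differs only slightly between groups (an assumption, not real data).
p_a = rng.uniform(0.4, 0.6, n_loci)
p_b = np.clip(p_a + rng.normal(0, 0.05, n_loci), 0.01, 0.99)

# Genotypes: count of one allele (0, 1, or 2) per individual per locus.
geno_a = rng.binomial(2, p_a, (n_per_group, n_loci)).astype(float)
geno_b = rng.binomial(2, p_b, (n_per_group, n_loci)).astype(float)
all_geno = np.vstack([geno_a, geno_b])

# Lewontin's side: per locus, how much variance lies BETWEEN the two groups?
grand_mean = all_geno.mean(axis=0)
between = 0.5 * ((geno_a.mean(0) - grand_mean) ** 2 +
                 (geno_b.mean(0) - grand_mean) ** 2)
share = between / all_geno.var(axis=0)
print(f"mean between-group share of variance: {share.mean():.1%}")  # small

# Edwards' side: aggregating many weak per-locus signals still sorts
# individuals by group almost perfectly (nearest group centroid, for brevity).
c_a, c_b = geno_a.mean(0), geno_b.mean(0)
pred_a = np.linalg.norm(all_geno - c_a, axis=1) < np.linalg.norm(all_geno - c_b, axis=1)
truth = np.arange(len(all_geno)) < n_per_group
print(f"classification accuracy: {(pred_a == truth).mean():.1%}")  # near 100%
```

Per locus, the between-group share of variance comes out tiny, the pattern Lewontin reported; aggregated across a thousand such loci, those same weak signals sort individuals by group almost without error, the point Edwards insisted upon.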

Fairbanks concurs.  Humans have been highly mobile for thousands of years, he notes.  As a result, our biological variation “is complex, overlapping, and more continuous than discrete.”  When one analyzes DNA from a geographically broad and truly representative sample, the author surmises, “the notion of discrete racial boundaries disappears.”

Nor are the genetic signatures of typically conceived racial traits always consistent between populations native to different geographic regions.  Consider skin color, for example.  We know, of course, that the first Homo sapiens inherited dark skin previously evolved in Africa to protect against sun exposure and folate degradation, which negatively affects fetal development.  Even today, the ancestral variant of the MC1R gene, conferring high skin pigmentation, is carried uniformly among native Africans.


But around 30,000 years ago, Fairbanks instructs, long after our species had first ventured out of Africa into the Caucasus region, a new variant appeared.  A mutation in the KITLG gene evolved in this population prior to the European-Asian split to reduce pigmentation and facilitate vitamin D absorption in regions of diminished sunlight.  Some 15,000 years later, another variant, in the gene SLC24A5, evolved by selective sweep as one group migrated westward into Europe.  Extremely rare in other native populations, this variant is carried by nearly 100% of modern native Europeans.  On the other hand, as their assorted skin tones demonstrate, African and Caribbean Americans carry either two copies of an ancestral variant, two copies of the SLC24A5 variant, or one of each.  Asians, by contrast, developed their own pigment-reducing variants—of the OCA2 gene, for example—via convergent evolution, whereby similar phenotypic traits result independently among different populations due to similar environmental pressures.

So how can biology support the traditional, or “folk,” notion of race when the genetic signatures of that notion’s most relied upon trait—that is, skin color—are so diverse among populations sharing the same or similar degree of skin pigmentation?  Fairbanks judges the idea utterly bankrupt “in light of the obvious fact that actual variation for skin color in humans does not fall into discrete classes,” but rather “ranges from intense to little pigmentation in continuously varying gradations.”

To Wade, Fairbanks offers the following reply: “Traditional racial classifications constitute an oversimplified way to represent the distribution of genetic variation among the people of the world. Mutations have been creating new DNA variants throughout human history, and the notion that a small proportion of them define human races fails to recognize the complex nature of their distribution.”


A Severe Response.

I use the term scientific racism to refer to scientists who continue to believe that race is a biological reality.—Robert Wald Sussman, 2014.

Since neither author disputes the absence of completely discrete racial categories, one could argue that part of the battle is really one over mere semantics, if not politics. Regardless, critical aspects of Wade’s analysis were quickly and sharply criticized by several well-respected researchers.

Former president of the AAA and co-drafter of its statement on race, Alan Goodman, for example, argues that Wade’s “speculations follow from misunderstandings about most everything, including the idea of race, evolution and gene action, culture and institutions, and most fundamentally, the scientific process” (Goodman 2014). Indeed, he compares Wade’s book to the most maligned texts on race ever published, including Madison Grant’s 1916 The Passing of the Great Race, Arthur Jensen’s 1969 paper proposing racial intelligence differences, and Herrnstein and Murray’s 1994 The Bell Curve.

But Wade’s “biggest error,” according to Goodman, “is his inability to separate the data on human variation from race.” He mistakenly assumes, in other words, “that all he sees is due to genes,” and that culture means little to nothing. A “mix of mysticism and sociobiology,” he continues, Wade’s simplistic view of human culture ignores the archeological and historical fact that cultures are “open systems” that constantly change and interact. And although biological human variation can sometimes fall into geographic patterns, Goodman emphasizes, our centuries-long attempt to force all such variation into racial categories has failed miserably.

Characterizing Wade’s analysis similarly as a “spectacular failure of logic,” population geneticist Jennifer Raff takes special issue with the author’s attempt to cluster human genetic variation into five or, really, any given number of races (Raff 2014). To do so, Wade relied in part on a 2002 study featuring a program called Structure, which is used to group people across the globe based on genetic similarities (Rosenberg 2002). And, indeed, when Rosenberg et al. asked Structure to bunch genetic data into five major groups, it produced clusters conforming to the continents.

But, as Raff observes, the program was capable of dividing the data into any number of clusters, up to twenty in this case, depending on the researchers’ pre-specified desire. When asked for six groups, for example, Structure provided an additional “major” cluster, the Kalash of northwestern Pakistan—which Wade arbitrarily, according to Raff, rejected as a racial category. In the end, she concludes, Wade seems to prefer the number five “simply because it matches his pre-conceived notions of what race should be.”
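Raff’s underlying point, that the number of clusters is an input the researcher chooses rather than a discovery the program makes, is easy to demonstrate. The sketch below substitutes k-means for Structure (which is a Bayesian, model-based program, so this is only a loose stand-in) and runs it on synthetic, smoothly clinal genotype-like data; every parameter here is an illustrative assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical genotype-like data drawn from a smooth geographic gradient
# (clinal variation): there are NO discrete groups in here by construction.
n, n_loci = 300, 50
positions = rng.uniform(0, 1, n)          # each individual's place on the cline
freqs = 0.3 + 0.4 * positions             # allele frequency shifts smoothly
data = rng.binomial(2, freqs[:, None], (n, n_loci)).astype(float)

# Whatever K we request, we get exactly K clusters back.
for k in (2, 5, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    print(f"{k} clusters requested -> {len(set(labels))} clusters returned")
```

Even though the simulated individuals sit on a continuous gradient with no discrete groups at all, the algorithm dutifully returns exactly as many clusters as it is asked for.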

Interestingly, when Rosenberg et al. subsequently expanded their dataset to include additional genetic markers for the same population samples, Structure simply rejected the Kalash and decided instead that one of Wade’s five human races, the Native Americans, should be split into two clusters (Rosenberg 2005). In any event, Rosenberg et al. expressly warned in their second paper that Structure’s results “should not be taken as evidence of [their] support of any particular concept of ‘biological race.’”

Structure was able to generate discrete clusters from a very limited quantity of genetic variation, adds population geneticist Jeremy Yoder, because its results reflect what his colleagues refer to as isolation-by-distance, or the fact that populations separated by sufficient geographic expanses will display genetic distinctions even if intimately connected through migration and interbreeding (Yoder 2014). In reality, however, human genetic variation is clinal, or gradual in transition between such populations. In simpler terms, people living closer together tend to be more closely related than those living farther apart.

In his review, biological anthropologist Greg Laden admits that human races might have existed in the past and could emerge at some point in the future (Laden 2014). He also concedes that “genes undoubtedly do underlie human behavior in countless ways.” Nevertheless, he argues, Wade’s “fashionable” hypothesis proposing the genetic underpinnings of racially-based behaviors remains groundless. “There is simply not an accepted list of alleles,” Laden reminds, “that account for behavioral variation.”

Chimpanzees, by contrast, can be divided into genetically-based subspecies (or races). Their genetic diversity has proven much greater than ours, and they demonstrate considerable cultural variation as well. Even so, Laden points out, scientists have so far been unable to sort cultural variation among chimps according to their subspecies. So if biologically-based races cannot explain cultural differences among chimpanzees, despite their superior genetic diversity as a species, why would anyone presume the opposite of humans?


None of which is to imply that every review of Wade has been entirely negative. Conservative journalist Anthony Daniels (a.k.a. Theodore Dalrymple), for example, praises the author lavishly as a “courageous man … who dares raise his head above the intellectual parapet” (Daniels 2014). While judging Wade’s arguments mostly unconvincing, he nevertheless defends his right to publish them: “That the concept of race has been used to justify the most hideous of crimes should no more inhibit us from examining it dispassionately … than the fact that economic egalitarianism has been used to justify crimes just as hideous …”

Similarly, political scientist Charles Murray, co-author of The Bell Curve, warned readers of the social science "orthodoxy's" then-impending attempt to "not just refute" Wade's analysis, "but to discredit it utterly—to make people embarrassed to be seen purchasing it in public" (Murray 2014). "It is unhelpful," Murray predicted, "for social scientists and the media to continue to proclaim that 'race is a social construct'" when "the problem facing us down the road is the increasing rate at which the technical literature reports new links between specific genes and specific traits." Although "we don't yet know what the genetically significant racial differences will turn out to be," Murray contended, "we have to expect that they will be many."

Perhaps; perhaps not. But race is clearly problematic from a biological perspective, at least as Wade and many before him have imagined it. Humans do not sort neatly into separate genetic categories, or into a handful of continentally based groups. Nor have we discovered sufficient evidence to suggest that human behaviors map onto known patterns of genetic diversity. Nonetheless, because no "is" ever implies an "ought," the cultural past should never define, let alone restrain, the scientific present.

Characterizing Biological Diversity.

Instead of wasting our time “refuting” straw-man positions dredged from a distant past or from fiction, we should deal with the strongest contemporary attempts to rehabilitate race that are scientifically respectable and genetically informed.—Neven Sesardic, 2010.

To this somewhat belated point, I have avoided the task of defining "biological race," in large measure because no single definition has achieved widespread acceptance. In any event, the preeminent evolutionary biologist Ernst Mayr once described "geographic race" generally as "an aggregate of phenotypically similar populations of a species inhabiting a geographic subdivision of the range of that species and differing taxonomically from other populations of that species" (Mayr 2002). A "human race," he added, "consists of the descendants of a once-isolated geographic population primarily adapted for the environmental conditions of their original home country."

Sounds much like Wade, so far. But unlike Wade, Mayr firmly rejected any typological, essentialist, or folk approach to human race that denies profuse variability or mistakes non-biological attributes, especially those implicating personality and behavior, for racial traits. Accepting culture's profound sway, Mayr warned that it is "generally unwise to assume that every apparent difference … has a biological cause." Nonetheless, he concluded, recognizing human races "is only recognizing a biological fact":

Geographic groups of humans, what biologists call races, tend to differ from each other in mean differences and sometimes even in specific single genes. But when it comes to the capacities that are required for the optimal functioning of our society, I am sure that any racial group can be matched by that of some individual in another racial group. This is what population analysis reveals.

So how might one rescue biological race from the present-day miasma of popular imparsimony and professional denialism, perhaps even to the advancement of science and the benefit of society? Evolutionary biologist and philosopher of science Massimo Pigliucci thinks he has an answer.

More than a decade ago, he and colleague Jonathan Kaplan proposed that “the best way of making sense of systematic variation within the human species is likely to rely on the ecotypic conception of biological races” (Pigliucci and Kaplan 2003). Ecotypes, they specify, are “functional-ecological entities” genetically adapted to certain environments and distinguished from one another based on “many or a very few genetic differences.” Consistent with clinal variation, ecotypes are not always phylogenetically distinct, and gene flow between them is common. Thus, a single population might consist of many overlapping ecotypes.

All of which is far more descriptive of human evolution than even the otherwise agreeable notion of “ancestry,” for example. For Pigliucci and Kaplan, the question of human biological race turns not on whether there exists significant between-population variation overall, as Lewontin, for example, suggested, but rather on “whether there is [any] variation in genes associated with significant adaptive differences between populations.” As such, if we accept an ecotypic description of race, “much of the evidence used to suggest that there are no biologically significant human races is, in fact, irrelevant.”

On the other hand, as Pigliucci observed more recently, the ecotypic model implies the failure of folk race as well. First, “the same folk ‘race’ may have evolved independently several times,” as explained above in the context of skin color, “and be characterized by different genetic makeups” (Pigliucci 2013). Second, ecotypes are “only superficially different from each other because they are usually selected for only a relatively small number of traits that are advantageous in certain environments.” In other words, the popular notion of the “black race,” for example, centers on a scientifically incoherent unit—one “defined by a mishmash of small and superficial set of biological traits … and a convoluted cultural history” (Pigliucci 2014).

So, while the essentialist and folk concepts of human race can claim “no support in biology,” Pigliucci concludes, scientists “should not fall into the trap of claiming that there is no systematic variation within human populations of interest to biology.” Consider, for a moment, the context of competitive sports. While the common notion that blacks are better runners than whites is demonstrably false, some evidence does suggest that certain West Africans have a genetic edge as sprinters, and that certain East and North Africans possess an innate advantage as long-distance runners (Harrison 2010). As the ecotypic perspective predicts, the most meaningful biological human races are likely far smaller and more numerous than their baseless essentialist and folk counterparts (Pigliucci and Kaplan 2003).

So, given the concept’s exceptionally sordid history, why not abandon every notion of human race, including the ecotypic version? Indeed, we might be wise to avoid the term “race” altogether, as Pigliucci and Kaplan acknowledge. But if a pattern of genetic variation is scientifically coherent and meaningful, it will likely prove valuable as well. Further study of ecotypes “could yield insights into our recent evolution,” the authors urge, “and perhaps shed increased light onto the history of migrations and gene flow.” By contrast, both the failure to replace the folk concept of race and the continued denial of meaningful patterns of human genetic variation have “hampered research into these areas, a situation from which neither biology nor social policy surely benefit.”

References:

American Anthropological Association. 2008. Race continues to define America. http://new.aaanet.org/pdf/upload/Race-Continues-to-Define-America.pdf (last accessed November 12, 2015).

American Sociological Association. 2003. The importance of collecting data and doing social scientific research on race. http://www.asanet.org/images/press/docs/pdf/asa_race_statement.pdf (last accessed November 12, 2015).

Clark, G. 2007. A farewell to alms: a brief economic history of the world. Princeton, NJ: Princeton University Press.

Daniels, A. 2014. Genetic disorder. http://www.newcriterion.com/articleprint.cfm/Genetic-disorder-7903 (last accessed November 19, 2015).

Edwards, A.W.F. 2003. Human genetic diversity: Lewontin’s fallacy. BioEssays 25(8):798-801.

Fairbanks, D.J. 2015. Everyone is African: how science explodes the myth of race. Amherst, NY: Prometheus Books.

Goodman, A. 2014. A troublesome racial smog. http://www.counterpunch.org/2014/05/23/a-troublesome-racial-smog/print (last accessed November 17, 2015).

Harrison, G.P. 2010. Race and reality: what everyone should know about our biological diversity. Amherst, NY: Prometheus Books.

Jorde, L.B. and S.P. Wooding. 2004. Genetic variation, classification and 'race.' Nature Genetics 36:S28-S33.

Laden, G. 2014. A troubling tome. http://www.americanscientist.org/bookshelf/id.16216,content.true,css.print/bookshelf.aspx (last accessed November 16, 2015).

Lewontin, R. 1972. The apportionment of human diversity. Evolutionary Biology 6:381-398.

Mayr, E. 2002. The biology of race and the concept of equality. Daedalus 131(1):89-94.

Montagu, A. 1942. Man’s most dangerous myth: the fallacy of race. NY: Columbia University Press.

Murray, C. 2014. Book review: ‘A Troublesome Inheritance’ by Nicholas Wade: a scientific revolution is under way—upending one of our reigning orthodoxies. http://www.wsj.com/articles/SB10001424052702303380004579521482247869874 (last accessed November 19, 2015).

Pigliucci, M. 2013. What are we to make of the concept of race? Thoughts of a philosopher-scientist. Studies in History and Philosophy of Biological and Biomedical Sciences. 44:272-277.

Pigliucci, M. 2014. On the biology of race. http://www.scientiasalon.wordpress.com/2014/05/29/on-the-biology-of-race/ (last accessed November 22, 2015).

Pigliucci, M. and J. Kaplan. 2003. On the concept of biological race and its applicability to humans. Philosophy of Science 70:1161-1172.

Raff, J. 2014. Nicholas Wade and race: building a scientific façade. http://www.violentmetaphors.com/2014/05/21/nicholas-wade-and-race-building-a-scientific-facade/ (last accessed November 16, 2015).

Risch, N., E. Burchard, E. Ziv, and H. Tang. 2002. Categorization of humans in biomedical research: genes, race and disease. Genome Biology 3(7):1-12.

Rosenberg, N., J.K. Pritchard, J.L. Weber, et al. 2002. Genetic structure of human populations. Science 298(5602):2381-2385.

Rosenberg, N., S. Mahajan, S. Ramachandran, et al. 2005. Clines, clusters, and the effect of study design on the inference of human population structure. PLOS Genetics 1(6):e70.

Sarich, V. and F. Miele. 2004. Race: the reality of human differences. Boulder, CO: Westview Press.

Sesardic, N. 2010. Race: a social deconstruction of a biological concept. Biology and Philosophy 25:143-162.

Sussman, R.W. 2014. The myth of race: the troubling persistence of an unscientific idea. Cambridge, MA: Harvard University Press.

Wade, N. 2014. A troublesome inheritance: genes, race and human history. NY: Penguin Press.

Yoder, J. 2014. How A Troublesome Inheritance gets human genetics wrong. http://www.molecularecologist.com/2014/05/troublesome-inheritance/ (last accessed November 16, 2015).

“Race” in 2015: Myth or Reality? (part 2)

[Notable New Media]

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes regularly to Skeptic as well.  He may be contacted at krausekc@msn.com.

If we inherit from our parents traits typically associated with “race,” including skin, hair, and eye color, why do most scientists insist that race is more social construct than biological reality?  Are they suffering from an acute case of political correctness, perhaps, or a misplaced paternalistic desire to deceive the irresponsible and short-sighted masses for the greater good of humanity?  More ignoble things have happened, of course, even within scientific communities.  But according to geneticist Daniel J. Fairbanks, the denial of biological “race” is all about the evidence.

In Everyone is African: How Science Explodes the Myth of Race (Prometheus 2015), Fairbanks points out that, although large-scale analyses of human DNA have recently unleashed a deluge of detailed genetic information, such analyses have so far failed to reveal discrete genetic boundaries along traditional lines of racial classification.  “What they do reveal,” he argues, “are complex and fascinating ancestral backgrounds that mirror known historical immigration, both ancient and modern.”

In 1972, Harvard geneticist Richard Lewontin analyzed seventeen different genes among seven groups classified by geographic origin.  He famously discovered that subjects within racial groups varied more among themselves than their overall group varied from other groups, and concluded that there exists virtually no genetic or taxonomic significance to racial classifications.  In 2003, Cambridge geneticist A.W.F. Edwards branded that conclusion "Lewontin's fallacy," countering that while any single gene varies little between groups, the correlations among many genes considered together make it easy to predict a person's ancestral group from DNA alone.

So who was right?  Both of them were, according to Lynn Jorde and Stephen Wooding at the University of Utah School of Medicine.  Summarizing several large-scale studies on the topic in 2004, they confirmed Lewontin's finding that about 85-90% of all human genetic variation exists within continental groups, while only 10-15% exists between them.  Even so, as Edwards had insisted, they were also able to assign all native European, east Asian, and sub-Saharan African subjects to their continent of origin using DNA alone.  In the end, however, Jorde and Wooding revealed that geographically intermediate populations (South Indians, for example) did not fit neatly into commonly conceived racial categories.  "Ancestry," they concluded, was "a more subtle and complex description" of one's genetic makeup than "race."
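
A toy calculation shows how both claims can be true at once. In the sketch below (synthetic data, with the per-locus frequency gap chosen merely so the between-group share lands near the cited 10-15% figure), most variation at any single locus falls within groups, yet a few hundred weakly informative loci classify individuals almost perfectly:

```python
# Illustrative reconciliation of Lewontin (1972) and Edwards (2003)
# using synthetic data. The frequency gap of 0.35 is an assumption
# tuned only to reproduce a roughly 10-15% between-group share.
import numpy as np

rng = np.random.default_rng(2)
n, n_loci, gap = 200, 300, 0.35

p = rng.uniform(0.3, 0.7, n_loci)        # shared baseline frequencies
pA = np.clip(p - gap / 2, 0.01, 0.99)    # group A allele frequencies
pB = np.clip(p + gap / 2, 0.01, 0.99)    # group B allele frequencies
gA = rng.binomial(2, pA, (n, n_loci))    # genotypes: 0/1/2 copies
gB = rng.binomial(2, pB, (n, n_loci))

# Lewontin's side: at each locus, the between-group share of variation
# (an F_ST-style statistic) is small; most variation is within groups.
pbar = (pA + pB) / 2
fst = ((pA - pbar) ** 2 + (pB - pbar) ** 2) / (2 * pbar * (1 - pbar))
print(f"mean between-group share per locus: {fst.mean():.1%}")

# Edwards's side: hundreds of weakly informative loci, taken together,
# assign individuals to their group of origin almost perfectly.
muA, muB = gA.mean(axis=0), gB.mean(axis=0)
def nearer_A(g):
    # True where a genotype row sits closer to group A's mean profile.
    return ((g - muA) ** 2).sum(axis=1) < ((g - muB) ** 2).sum(axis=1)
accuracy = np.concatenate([nearer_A(gA), ~nearer_A(gB)]).mean()
print(f"classification accuracy using {n_loci} loci: {accuracy:.1%}")
```

Lewontin and Edwards, in other words, described the same data accurately; they simply asked different questions of it.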

Fairbanks concurs.  Humans have been highly mobile for thousands of years, he notes.  As a result, our biological variation "is complex, overlapping, and more continuous than discrete."  When one analyzes DNA from a geographically broad and truly representative sample, the author surmises, "the notion of discrete racial boundaries disappears."

Nor are the genetic signatures of typically conceived racial traits always consistent between populations native to different geographic regions.  Take skin color, for example.  We know, of course, that the first Homo sapiens inherited dark skin, which had evolved in Africa to protect against intense sun exposure and the resulting degradation of folate, a nutrient critical to fetal development.  Even today, the ancestral variant of the MC1R gene, conferring high skin pigmentation, is carried uniformly among native Africans.

But around 30,000 years ago, long after our species had first ventured out of Africa into the Caucasus region, a new variant appeared.  A variant of KITLG evolved in this population, prior to the European-Asian split, to reduce pigmentation and facilitate vitamin D absorption in regions of diminished sunlight.  Some 15,000 years later, another variant, SLC24A5, spread by selective sweep as one group migrated west into Europe.  Though extremely rare in other native populations, this variant is now carried by nearly 100% of native Europeans.  As their varied skin tones demonstrate, African and Caribbean Americans carry either two copies of an ancestral variant, two copies of the SLC24A5 variant, or one of each.  Asians, by contrast, developed their own pigment-reducing variants (of the OCA2 gene, for example) via convergent evolution, the process by which similar external traits arise independently in different populations under similar environmental pressures.

So how can biology support traditional notions of race when the genetic signatures of those notions' most relied-upon trait (that is, skin color) are so diverse among people sharing the same or similar degree of skin pigmentation?  Fairbanks finds such ideas utterly bankrupt "in light of the obvious fact that actual variation for skin color in humans does not fall into discrete classes," but rather "ranges from intense to little pigmentation in continuously varying gradations."

To longtime science journalist Nicholas Wade, who in his recent book A Troublesome Inheritance judged that biological races are real and can be distinguished genetically at the continental level, Fairbanks offers the following reply: "Traditional racial classifications constitute an oversimplified way to represent the distribution of genetic variation among the people of the world.  Mutations have been creating new DNA variants throughout human history, and the notion that a small proportion of them define human races fails to recognize the complex nature of their distribution."