Editing the Human Germline: Groundbreaking Science and Mind-numbing Sentiment.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.

The CRISPR Complex at work.

Should biologists use new gene-editing technology to modify or “correct” the human germline? Will our methods soon prove sufficiently safe and efficient and, if so, for what purposes?  Much-celebrated CRISPR pioneer Jennifer Doudna recently recalled her initial trepidation over that very prospect:

Humans had never before had a tool like CRISPR, and it had the potential to turn not only living people’s genomes but also all future genomes into a collective palimpsest upon which any bit of genetic code could be erased and overwritten depending on the whims of the generation doing the writing…. Somebody was inevitably going to use CRISPR in a human embryo … and it might well change the course of our species’ history in the long run, in ways that were impossible to foretell.

(Doudna and Sternberg 2017). And it didn’t take long.  Just one month after Doudna and others called for a moratorium on human germline editing in the clinical setting, scientists in Junjiu Huang’s lab at Sun Yat-sen University in Guangzhou, China, published a paper describing their exclusively in vitro use of CRISPR on eighty-six human embryos (Liang et al. 2015).  Huang’s goal was to edit mutated beta-globin genes that would otherwise trigger a debilitating blood disorder called beta-thalassemia.

But the outcomes were mixed, at best.  After injecting each embryo with a CRISPR complex composed of a guide RNA molecule, a gene-slicing Cas9 enzyme, a synthetic repair DNA template, and a “glow-in-the-dark” jellyfish gene that allows investigators to track their results as cells continue to divide, Huang’s team delivered a paltry five percent efficiency rate.  Some embryos displayed unintended, “off-target” editing.  In others, cells ignored the repair template and used the related delta-globin gene as a model instead.  A third group of embryos turned mosaic, containing cells with an untidy jumble of edits.  Part of the problem was that CRISPR had begun cutting only after the fertilized egg had begun to divide.

By using non-viable triploid embryos containing three sets of chromosomes, instead of the usual two, Huang avoided objections that he had destroyed potential human lives.  Nevertheless, both Science and Nature rejected his manuscript based in part on ethical concerns.  Several scientific agencies also promptly reemphasized their stances against human germline modification in viable embryos, and, in the US, the Obama Administration announced its position that the human germline should not be altered at that time for clinical purposes.  Francis Collins, director of the National Institutes of Health, emphasized that the US government would not fund any experiments involving the editing of human embryos.  And finally, earlier this year, a committee of the US National Academies of Sciences and Medicine decreed that clinical use of germline editing would be allowed only when prospective parents had no other opportunities to birth healthy children.

Meanwhile, experimentation continued in China, with similarly grim results. But this past August, an international team based in the US—this time led by embryologist Shoukhrat Mitalipov at the Oregon Health and Science University in Portland—demonstrated that, under certain circumstances, genetic defects in human embryos can, in fact, be efficiently and safely repaired (Ma et al. 2017).

Embryologist Shoukhrat Mitalipov.

Mitalipov’s group attempted to correct an autosomal dominant mutation—where a single copy of a mutated gene results in disease symptoms—of the MYBPC3 gene.  Crucially, such mutations are responsible for an estimated forty percent of all genetic defects causing hypertrophic cardiomyopathy (HCM), along with ample portions of other inherited cardiomyopathies.  Afflicting one in every 500 adults, HCM cannot be cured, and remains the most common cause of heart failure and sudden death among otherwise healthy young athletes.  These mutations have escaped the pressures of natural selection, unfortunately, due to the disorder’s typically late onset—that is, following reproductive maturity.

Prospective parents can, however, prevent HCM in their children during the in vitro fertilization/preimplantation genetic diagnosis (IVF/PGD) process.  Where only one parent carries a heterozygous mutation, fifty percent of the resulting embryos, on average, can be diagnosed as healthy contenders for implantation.  The remaining unhealthy fifty percent will be discarded.  As such, correction of mutated MYBPC3 alleles would not only rescue the latter group of embryos, but improve pregnancy rates and save prospective mothers—especially older women with high aneuploidy rates and fewer viable eggs—from risks associated with increasing numbers of IVF/PGD cycles as well.
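That fifty-fifty figure is simple Mendelian segregation at work. As a minimal illustration (my own sketch, not anything from the study), one can enumerate the possible allele combinations when one parent is heterozygous for the dominant mutation and the other carries two healthy copies:

```python
from itertools import product

# "M" marks the healthy (wild-type) allele, "m" the dominant MYBPC3 mutation.
carrier_gametes = ["M", "m"]   # the heterozygous parent passes each with probability 1/2
healthy_gametes = ["M", "M"]   # the unaffected parent can only pass a healthy copy

offspring = [a + b for a, b in product(carrier_gametes, healthy_gametes)]
affected = sum("m" in genotype for genotype in offspring)

print(f"{affected} of {len(offspring)} possible genotypes carry the mutation")  # 2 of 4
```

Because the mutation is dominant, every embryo that inherits the single mutant allele is affected, hence the expected fifty-fifty split.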

With these critical facts in mind, Mitalipov and colleagues employed a CRISPR complex generally similar to that used by Huang.  It included a guide RNA sequence, a Cas9 endonuclease, and a synthetic repair template.  In one phase of their investigation, the team fertilized fifty-four human oocytes (from twelve healthy donors) with unhealthy sperm carrying the MYBPC3 mutation (from a single donor), and injected the resulting embryos eighteen hours later with the CRISPR complex.  The result? Thirteen of the fifty-four treated embryos became jumbled mosaics.

Mitalipov changed things up considerably, however, in the study’s second phase by delivering the complex much earlier than he and others had done in previous experiments—indeed, at the very brink of fertilization. More precisely, his colleagues injected the CRISPR components along with the mutated sperm cells into fifty-eight healthy, “wild-type” oocytes during metaphase of the second meiotic division.  Here, the results were impressive, to say the least.  Forty-two of the fifty-eight treated embryos were normalized, carrying two copies of the healthy MYBPC3 allele—a seventy-two percent efficiency rate.  No “off-target” effects were detected, and only one embryo turned mosaic.
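The arithmetic behind those figures is straightforward; a quick check using the counts reported above:

```python
# Phase two of Ma et al. (2017): CRISPR delivered at fertilization.
treated, repaired, mosaic = 58, 42, 1

print(f"targeting efficiency: {repaired / treated:.1%}")  # ~72.4%
print(f"mosaicism rate:       {mosaic / treated:.1%}")    # ~1.7%
```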

Mosaicism and Off-target Effects.

Mitalipov’s team achieved a genuine breakthrough in terms of both efficacy and safety.  Perhaps nearly as interesting—and, in fact, the study’s primary finding, according to the authors—is that, in both experimental phases, the embryos consistently ignored Mitalipov’s synthetic repair template and turned instead to the healthy maternal allele as their model.  Such is not the case when CRISPR is used to edit somatic (body) cells, for example.  Apparently, the team surmised, human embryos evolved an alternative, germline-specific DNA repair mechanism, perhaps to afford the germline special protection.

The clinical implications of this repair preference are profound and, at least arguably, very unfortunate.  First, with present methods, it now appears unlikely that scientists could engineer so-called “designer babies” endowed with trait enhancements.  Second, it seems nearly as doubtful that CRISPR can be used to repair homozygous disease mutations where both alleles are mutant.  Nevertheless, Mitalipov’s method could be applied to more than 10,000 diseases, including breast and ovarian cancers linked to BRCA gene mutations, Huntington’s, cystic fibrosis, Tay-Sachs, and even some cases of early-onset Alzheimer’s.

At least in theory.  As of this writing, Mitalipov’s results have yet to be replicated, and even he warns that, despite the new safety assurances and the remarkable targeting efficiencies furnished by his most recent work, gene-editing techniques must be “further optimized before clinical application of germline correction can be considered.” According to stem-cell biologist George Daley of Boston Children’s Hospital, Mitalipov’s experiments have proven that CRISPR is “likely to be operative,” but “still very premature” (Ledford 2017).  And while Doudna characterized the results as “one giant leap for (hu)mankind,” she also expressed discomfort with the new research’s unmistakable inclination toward clinical applications (Belluck 2017).

Indeed, within a single day of Mitalipov’s report, eleven scientific and medical organizations, including the American Society of Human Genetics, published a position statement outlining their recommendations regarding the human germline (Ormond et al. 2017).  Therein, the authors appeared to encourage not only in vitro research but public funding as well.  They advised against any contemporary gene-editing process intended to culminate in human pregnancy but suggested that clinical applications might proceed in the future, subject to a compelling medical rationale, a sufficient evidence base, an ethical justification, and a transparent and public process to solicit input.

And of course researchers like Mitalipov will be forced to contend with those who claim that, regardless of purpose, the creation and destruction of human embryos is always ethically akin to murder (Mitalipov destroyed his embryos within days of their creation).  But others have lately expressed even less forward-thinking and, frankly, even more irrational and dangerous sentiments.

For example, a thoroughly galling article I can describe further only as “pro-disability” (in stark contrast to “pro-disabled”) was recently published, surprisingly to me, in one of the world’s most prestigious science publications (Hayden 2016).  It begins by describing a basketball game in which a nine-year-old girl, legally blind due to genetic albinism, scored not only the winning basket, but, evidently—through sheer determination—all of her team’s points.  Odd, perhaps, but great!  So far.

But the story quickly turns sour, shifting to the girl’s father, who apparently had asked the child, first, whether she wished she had been born with normal sight and, second (excruciatingly), whether she would ever help her own children achieve normal sight through genetic correction. Unsurprisingly, the nine-year-old is said to have echoed what we then learn to be her father’s heartfelt but nonetheless bizarre conviction: “Changing her disability … would have made us and her different in a way we would have regretted,” which, to him, would be “scary.”

To be fair, the article very briefly appends counsel from a man with Huntington’s, for instance, who suggests that “[a]nyone who has to actually face the reality … is not going to have a remote compunction about thinking there is any moral issue at all.”  But the narrative quickly circles back to a linguist, for example, who describes deaf parents who deny both their and their children’s disabilities and have even selected for deafness in their children through IVF/PGD, and a literary scholar who believes that disabilities have brought people closer together to create a more inclusive world (much as some claim Western terrorism has).  The author then laments the fact that, due to modern reproductive technology, fewer children are being born with Down’s syndrome.

To summarize, according to one disabilities historian, “There are some good things that come from having a genetic illness.”  Uh-huh.  In other words, disabilities are actually beneficial because they provide people with challenges to overcome—as if relatively healthy people are incapable of voluntarily and thoughtfully designing both mental and physical challenges for themselves and their kids.

I think not. Disabilities, by definition, are bad.  And, as even a minimally compassionate people, if we possess a safe and efficient technological means of preventing blindness, deafness, or any other debilitating disease in any child or in any child’s progeny, we also necessarily have an urgent ethical obligation to use it.

References:

Belluck, P. 2017. In breakthrough, scientists edit a dangerous mutation from genes in human embryos. Available online at https://nyti.ms/2hnZ9ey; accessed August 9, 2017.

Doudna, J.A., and S.H. Sternberg. 2017. A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution. Boston: Houghton Mifflin Harcourt.

Hayden, E.C. 2016. Tomorrow’s children. Nature 530:402-405.

Ledford, H. 2017. CRISPR fixes embryo error. Nature 548:13-14.

Liang, P., Y. Xu, X. Zhang, et al. 2015. CRISPR/Cas9-mediated gene editing in human tripronuclear zygotes. Protein & Cell 6(5):363-72.

Ma, H., N. Marti-Gutierrez, S. Park, et al. 2017. Correction of a pathogenic gene mutation in human embryos. Nature DOI:10.1038/nature23305.

Ormond, K.E., D.P. Mortlock, D.T. Scholes, et al. 2017. Human germline genome editing. The American Journal of Human Genetics 101:167-76.

The Delectable Myths of Healthy and Healthier Obesity.

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.

Why, sometimes I’ve believed as many as six impossible things before breakfast.–The White Queen, to Alice, in Through the Looking-Glass.

Wouldn’t it be splendid to have our cakes and eat them too? Arguably, both ideology and popular culture allow their followers to do just that.  Until they don’t, of course.  At that point, when facts and logic can no longer be denied, the rudely awakened find themselves confronted with difficult choices.

The concept of healthy obesity, for example, has gained much traction during the last fifteen years. At one end of the continuum, members of the popular but clearly flawed “Health at Every Size (HAES)” movement profess the nonexistence of excess adiposity and suggest that even the most obese people can lead perfectly healthy lives (“Every size”—really?).  On the other end, and somewhat more credibly, others allege the existence of an “obesity paradox” and a “metabolically healthy obesity.”  Such are the tantalizing subjects of this column.

Cardiologist and obesity researcher Carl J. Lavie has described the paradox as follows: “Overweight and moderately obese patients with certain chronic diseases … often live longer and fare better than normal-weight patients with the same ailments” (Lavie 2014). In addition to his own research, Lavie’s conclusions are based on a revolutionary and, in some circles, much-celebrated JAMA study led by Katherine Flegal at the US Centers for Disease Control and Prevention, who reviewed 97 studies of more than 2.88 million individuals to calculate all-cause mortality hazard ratios for standard body mass index (BMI) classifications (Flegal et al. 2013).

Katherine Flegal

Flegal’s team reported as follows: Relative to normal weight, all combined grades of obesity were associated with an 18 percent higher incidence of all-cause mortality. In cases of more extreme obesity, the association rose to 29 percent.  By itself, however, the mildest grade of obesity was not correlated with a significantly elevated risk, and the overweight but not obese category was actually associated with a 6 percent lower incidence of all-cause mortality.  Predictably, the popular media quickly seized on the overweight population’s presumed appetite for these tempting results.
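For readers keeping score at home, BMI is simply weight in kilograms divided by the square of height in meters. A minimal sketch of the conventional cutoffs (the standard WHO-style grades; Flegal’s analysis combined grades 2 and 3 into its more extreme category):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify(b: float) -> str:
    """Bin a BMI value into the conventional categories."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "normal weight"
    if b < 30:
        return "overweight"
    if b < 35:
        return "obesity, grade 1"
    if b < 40:
        return "obesity, grade 2"
    return "obesity, grade 3"

value = bmi(95, 1.75)
print(f"BMI {value:.1f}: {classify(value)}")  # BMI 31.0: obesity, grade 1
```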

Metabolically healthy, or “benign,” obesity, on the other hand—which Lavie dubs the “ultimate paradox”—appears to have no standard definition or list of qualifying criteria, but is often characterized generally as “obesity without the presence of metabolic diseases such as type 2 diabetes, dyslipidemia or hypertension” (Munoz-Garach et al. 2016). Retained insulin sensitivity, however, is the hallmark trait of this subpopulation.  Researchers have assigned up to 32 percent of the obese population to this phenotype.  It occurs more prevalently among women than men, but is thought to decrease with age in both sexes.  Researchers have yet to determine whether these obese individuals are genetically predisposed to decreased risks of disease or mortality.  But their existence, along with that of the metabolically unhealthy normal-weight population, suggests that factors other than excess adiposity are at play.

All of which might sound at least somewhat comforting to the now 600 million obese worldwide (and still growing) who have been told for decades that obesity per se will significantly increase one’s susceptibility to heart disease, stroke, cancer, diabetes, and arthritis, for example. Preferences and popular reports aside, however, it appears we may yet be forced to choose between possessing our cakes and consuming them, because an impressive body of new and well-conceived research has called both the paradox and healthy obesity into serious question.

Consider, for example, a truly enormous international meta-analysis published last July in The Lancet by the Global BMI Mortality Collaboration (GBMC 2016).  Led by Harvard professor of nutrition and epidemiology Frank Hu, this study pored over data from more than 10.6 million participants who were followed for up to 14 years across 239 large studies conducted in 32 countries.  Importantly, the Collaboration attempted to control for a “reverse causation bias,” in which low BMI is the result, rather than the cause, of an underlying or preclinical illness, by excluding current or former smokers, those who suffered from chronic disease at the study’s inception, and those who died during the initial five years of follow-up.  In other words, Hu’s team addressed the potential for potent confounders that Flegal’s team, for lack of data, was forced to ignore.

The Collaboration’s results were startling. Interestingly, Hu “was able to reproduce [Flegal’s results] when conducting crude analyses with inadequate control of reverse causality, but not when [he] conducted appropriately strict analyses.”  In the end, then, the Collaboration found that, worldwide, participants with a normal BMI in the 22.5 to 25 range enjoyed the lowest risk of mortality and that such risk increased significantly throughout the overweight and obese ranges.  In fact, every five units of BMI in excess of 25 was associated generally with a 31 percent greater risk of premature death—specifically, 49 percent for cardiovascular-related, 38 percent for respiratory-related, and 19 percent for cancer-related mortality.  According to Hu, his team had succeeded in “challeng[ing] previous suggestions that overweight and grade 1 obesity are not associated with higher mortality, bypassing speculations about hypothetical protective metabolic effects of increased body fat in apparently healthy individuals.”
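To get a feel for what a 31 percent increase per five BMI units implies, one can compound the ratio multiplicatively (an illustrative extrapolation of mine, not a calculation from the paper):

```python
def relative_hazard(bmi_value: float, per_five_units: float = 1.31) -> float:
    """All-cause mortality hazard relative to BMI 25, assuming the reported
    per-5-unit hazard ratio compounds multiplicatively (a simplification)."""
    excess = max(bmi_value - 25.0, 0.0)
    return per_five_units ** (excess / 5.0)

for b in (27.5, 30, 35, 40):
    print(f"BMI {b}: ~{relative_hazard(b):.2f}x the mortality risk at BMI 25")
# BMI 30 -> ~1.31x; BMI 35 -> ~1.72x; BMI 40 -> ~2.25x
```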

Frank Hu

Consider, too, a large prospective cohort study published last October in the BMJ in which about 115,000 participants—free of cardiovascular disease and cancer at baseline—were followed for up to 32 years (Veronese et al. 2016).  Evaluating the combined associations of BMI and lifestyle factors—diet, exercise, alcohol consumption, and smoking—with the risk of all-cause and cause-specific mortality, this study was also designed to address Flegal’s peculiar 2013 results.  A lead author here as well, Frank Hu first noted, once again, that previous examinations suggesting an obesity paradox, including Flegal’s, had allowed for potentially confounding bias by failing to distinguish between healthy normal-weight individuals and a “substantial proportion of the US population” in which “leanness is driven by other factors that can increase risk of mortality,” including existing or preclinical chronic diseases and smoking.

Contrary to the alleged paradox, Hu discovered that when lifestyle factors were taken into serious consideration, the lowest risk of all-cause and cardiovascular mortality was enjoyed by participants in the slightly low-to-normal, 18.5 to 22.4 BMI range—that is, when those subjects also displayed at least three out of four healthy lifestyle factors, including healthy eating, adequate exercise, moderate alcohol intake, and no smoking. In the end, according to Hu’s team, “the U-shaped relation between BMI and mortality observed in many epidemiological studies is driven by an over-representation in our societies of individuals who are lean because of chronic metabolic and pathological conditions caused by exposure to smoking, a sedentary lifestyle, and/or unhealthy diets.”  The optimal human condition, in other words, is not overweight of any kind or to any degree, but rather “leanness induced by healthy lifestyles.”

So much for the obesity paradox, at least for now. But what of its somewhat less voracious cousin, the notion of metabolically healthy obesity?

Recognizing prior support for so-called “benign obesity,” a trio of Canadian diabetes researchers led by Caroline Kramer conducted a systematic review and meta-analysis of eight studies evaluating over 61,000 subjects—many of whom were classified as metabolically healthy obese—for all-cause mortality and cardiovascular events (Kramer et al. 2013). When all studies were considered, regardless of follow-up duration, the healthy obese subjects displayed risks similar to those of healthy normal-weight participants.  However, when considering only those studies that followed up for at least ten years, Kramer and colleagues discovered that the purportedly healthy obese were significantly more likely than their normal counterparts to perish or suffer serious cardiovascular trouble.

Caroline Kramer

Should we infer, then, that the healthy obese are, in fact, healthy until circumstances render them otherwise a decade later? Not according to Kramer.  Regardless of metabolic status, she warned, even in the short term, obesity is associated with subclinical vascular disease, left-ventricular abnormalities, chronic inflammation, and increased carotid artery intima-media thickness and coronary calcification.  In the end, the Canadians found no support for the “benign obesity” phenotype and declared in no uncertain terms that “there is no ‘healthy’ pattern of obesity.”

Most recently, however, a diverse and impressively creative group of Swedish scientists used transcriptomic profiling in white adipose tissue to contrast responses to insulin stimulation among never-obese, unhealthy obese, and, again, supposedly healthy obese subjects (Ryden et al. 2016). Led by Mikael Ryden at the Karolinska Institutet, this group revealed, first, clear distinctions between the never-obese and both groups of obese participants, and, second, nearly identical and abnormal patterns of gene expression among both insulin-resistant and insulin-sensitive obese subjects, independent of other cardiovascular or metabolic risk factors.

Said Ryden during a post-publication interview: “Insulin-sensitive obese individuals may not be as metabolically healthy as previously believed” (ScienceDaily 2016). His team’s findings, he continued, “suggest that vigorous interventions may be necessary for all obese individuals, even those previously considered … healthy.”

To Lavie’s credit, he generally acknowledges obesity’s proven hazards. He also recognizes serious and consistent exercise as the most reliable strategy for attaining and maintaining good health.  Far less defensible, however, is Lavie’s insistence that exercise can render obesity a benign condition.  First, as much of the research presented here demonstrates, the chronic diseases strongly associated with obesity are, by definition, progressive and apt to cause damage down the road.  Second, in the real world, excess adiposity always renders meaningful exercise a far more difficult and, thus, far less likely prospect.

Obese or not, our health continues to be undermined by the popular, ever-emotion-manipulating media, the misguided and oppressive forces of political correctness, and, most crucially, our own subjective prejudices and appetites. But as their numbers continue to swell, the overweight and obese grow increasingly vulnerable to seductive messages inviting self-deception and failure.  As in all other contexts, their liberation from these influences derives only from an unflinching appreciation for the methods of science—that is, empiricism, rationality, candor, and the assumption of responsibility for individual experimentation.  In a word, skepticism.

References:

Flegal, K.M., B.K. Kit, H. Orpana, et al. 2013. Association of all-cause mortality with overweight and obesity using standard body mass index categories. Journal of the American Medical Association 309(1): 71-82.

Global BMI Mortality Collaboration. 2016. Body-mass index and all-cause mortality: individual-participant-data meta-analysis of 239 prospective studies in four continents. The Lancet 388: 776-786.

Kramer, C.K., B. Zinman, and R. Retnakaran. 2013. Are metabolically healthy overweight and obesity benign conditions? Annals of Internal Medicine 159(11): 758-769.

Lavie, C.J. 2014. The Obesity Paradox: When Thinner Means Sicker and Heavier Means Healthier. NY: Plume.

Munoz-Garach, A., I. Cornejo-Pareja, and F.J. Tinahones. 2016. Does metabolically healthy obesity exist? Nutrients 8: 320.

Ryden, M., O. Hrydziuszko, E. Mileti, et al. 2016. The adipose transcriptional response to insulin is determined by obesity, not insulin sensitivity. Cell Reports 16: 2317-2326.

ScienceDaily. 2016. More evidence that “healthy obesity” may be a myth. 18 August 2016. Available online at https://www.sciencedaily.com/releases/2016/08/160818131127.htm.

Veronese, N., L. Yanping, J.E. Manson, et al. 2016. Combined associations of body weight and lifestyle factors with all cause and cause specific mortality in men and women: prospective cohort study. BMJ. DOI:10.1136/bmj.i5855.

Obesity: “Fat Chance” or Failure of Sincerity?

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.

Man is condemned to be free.—Jean-Paul Sartre.

Beginning about five years ago, the chronically overweight and obese were offered a new paradigm, one more consistent with their majority’s shared experiences in the twenty-first century. Emerging science from diverse fields, certain experts argued, complicated—perhaps even contradicted—the established view that weight maintenance was a straightforward, if not simple, matter of volitional control and balancing energy intake against energy expenditure.

As a host of potential complexities materialized, the frustrated members of this still expanding demographic were notified that, contrary to conventional wisdom, they had little or no control over their conditions. The popular literature especially began to hammer two captivating messages deeply into the public consciousness.  First, from within, the overweight and obese have been overwhelmed by their genomes, epigenomes, hormones, brains, and gut microbiomes, to name just a few.  Second, from without, their otherwise well-calculated and ample efforts have been undermined, for example, by the popular media, big food, government subsidies, poverty, and the relentless and unhealthy demands of contemporary life.

In a 2012 Nature opinion piece, Robert Lustig, Laura Schmidt, and Claire Brindis—three public health experts from the University of California, San Francisco—compared the “deadly effect” of added sugars (high-fructose corn syrup and sucrose) to that of alcohol(1).  Far from mere “empty calories,” they added, sugar is potentially “toxic” and addictive.  It alters metabolisms, raises blood pressures, causes hormonal chaos, and damages our livers.  Like both tobacco and alcohol (a distillation of sugar), it affects our brains as well, encouraging us to increase consumption.

Apparently unimpressed with Americans’ abilities to control themselves, Lustig et al. urged us to back restrictions on our own choices in the form of government regulation of sugar. In support of their appeal, the trio relied on four criteria—“now largely accepted by the public health community”—originally offered by social psychologist Thomas Babor in 2003 to justify the regulation of alcohol: the target substance must be toxic, unavoidable (or pervasive), produce a negative impact on society, and present potential for abuse.  Perhaps unsurprisingly, they discovered that sugar satisfied each criterion with ease.

Robert Lustig.

Lustig, a pediatric endocrinologist and, now, television infomercial star, contends that obesity results primarily from an intractable hormonal predicament. In his wildly popular 2012 book, Fat Chance, Lustig indicted simple, super-sweet sugars as chief culprits, claiming that sucrose and high-fructose corn syrup corrupt our biochemistry to render us hungry and lethargic in ways fat and protein do not(2).  In other words, he insisted that sugar-induced hormonal imbalances cause self-destructive behaviors, not the other way around.

Lustig’s argument proceeds essentially as follows: In the body, insulin causes energy to be stored as fat.  In the hypothalamus, it can cause “brain starvation,” or resistance to leptin, the satiety hormone released from adipose tissue.  Excess insulin, or hyperinsulinemia, thus causes our hypothalami to increase energy storage (gluttony) and decrease energy consumption (sloth).  To complete the process, add an increasingly insulin-resistant liver (which drives blood insulin levels even higher), a little cortisol (the adrenal stress hormone), and of course sugar addiction.  In the end, Lustig concludes, dieters hardly stand a chance.

Journalist Gary Taubes, author of the similarly successful Why We Get Fat, was in full agreement(3).  Picking up the theoretical mantle where Lustig dropped it, Taubes expanded the list of nutritional villains considerably to include all the refined carbohydrates that quickly boost consumers’ glycemic indices. In a second Nature opinion piece, he then blamed the obesity problem on both the research community, for failure to fully comprehend the condition, and the food industry, for exploiting that failure(4).

Gary Taubes with Dr. Oz.

To their credit, Lustig and Taubes provided us with some very sound and useful advice.  Credible nutrition researchers agree, for example, that Americans in particular should drastically reduce their intakes of added sugars and refined carbohydrates.  Indeed, most would be well-advised to eliminate them completely.  The authors’ claims denying self-determination might seem reasonable as well, given that, as much research has shown, most obese people who have tried to lose weight and keep it off have failed.

On the other hand, failure is common in the context of any difficult task, and evidence of “don’t” does not amount to evidence of “can’t.” One might wonder as well whether obesity is a condition easily amenable to controlled scientific study given that every solution—and of course many, in fact, do succeed(5)—is both multifactorial and as unique as every obese person’s biology.  So can we sincerely conclude, as so many commentators apparently have, that the overweight and obese are essentially powerless to help themselves?  Or could it be that the vast majority of popular authors and health officials have largely—perhaps even intentionally—ignored the true root cause of obesity, if for no other reasons, simply because they lack confidence in the obese population’s willingness to confront it?

Though far less popular, a more recently published text appears to suggest just that.  In The Psychology of Overeating, clinical psychologist Kima Cargill attempts to “better contextualize” overeating habits “within the cultural and economic framework of consumerism”(6).  What current research fails to provide, she argues, is a unified construct identifying overeating (and sedentism, one might quickly add) as “not just a dietary [or exercise] issue,” but rather as a problem implicating “the consumption of material goods, luxury experiences, … evolutionary behaviors, and all forms of acquisition.”

Kima Cargill.

To personalize her analysis, Cargill introduces us to a case study named “Allison.”  Once an athlete, Allison gained fifty pounds after marriage.  Now divorced and depressed, she regularly eats fast food or in expensive restaurants and rarely exercises.  Rather than learn about food and physical performance, Allison attempts to solve her weight problem by throwing money at it.  “When she first decided to lose weight,” Cargill recalls, “which fundamentally should involve reducing one’s consumption, Allison went out and purchased thousands of dollars of branded foods, goods, and services.” She hired a nutritionist and a trainer.  She bought a Jack LaLanne juicer, a Vitamix blender, a Nike FuelBand, Lululemon workout clothing, an exclusive gym membership, diet and exercise DVDs and iPhone apps, and heaping bags full of special “diet foods.”

None of it worked, according to the author, because Allison’s “underlying belief is that consumption solves rather than creates problems.”  In other words, like so many others, Allison mistook “the disease for its cure.”  The special foods and products she purchased were not only unnecessary, but ultimately harmful.  The advice she received from her nutritionist and trainer was based on fads, ideologies, and alleged “quick-fixes” and “secrets,” but not on actual science.  Yet, despite her failure, Allison refused to “give up or simplify a life based on shopping, luxury, and materialism” because any other existence appeared empty to her.  In fact, she was unable to even imagine a more productive and enjoyable lifestyle “rich with experiences,” rather than goods and services.

Television celebritism: also mistaking the disease for its cure.

Like Lustig, Taubes, and their philosophical progeny, Cargill recognizes the many potential biological factors capable of rendering weight loss and maintenance an especially challenging task.  But what she does not see in Allison, or in so many others like her, is a helpless victim of either her body or her culture.  Judging it unethical for psychologists to help their patients accept overeating behaviors and their inevitably destructive consequences, Cargill appears to favor an approach that treats the chronically overweight and obese like any other presumably capable, and thus responsible, adult population.

Compassion, in other words, must begin with uncommon candor.  As Cargill acknowledges, for example, only a “very scant few” get fat without overeating because of their genes.  After all, recently skyrocketing obesity rates cannot be explained by the evolution of new genes during the last thirty to forty years.  And while the food industry (along with the popular media that promote it) surely employs every deceit at its disposal to encourage overconsumption and the rejection of normal—that is, species appropriate—eating habits, assigning the blame to big food only “obscures our collusion.”  Worse yet, positioning the obese as “hapless victims of industry,” Cargill observes, “is dehumanizing and ultimately undermines [their] sense of agency.”

Education is always an issue, of course. And, generally speaking, higher levels of education are inversely associated with unhealthy eating behaviors.  But the obese are not stupid, and shouldn’t be treated as such.  “None of us is forced to eat junk food,” the author notes, “and it doesn’t take a college degree or even a high school diploma to know that an apple is healthier than a donut.”  Nor is it true, as many have claimed, that the poor live in “food deserts” wholly lacking in cheap, nutritious cuisine(7).  Indeed, low-income citizens tend to reject such food, Cargill suggests, because it “fails to meet cultural requirements,” or because of a perceived “right to eat away from home,” consistent with societal trends.

Certain foods, especially those loaded with ridiculous amounts of added sugars, do in fact trigger both hormonal turmoil and addiction-like symptoms (though one might reasonably question whether any substance we evolved to crave should be characterized as “addictive”).  And as the overweight continue to grow and habituate to reckless consumption behaviors, their tasks only grow more challenging.  I know this from personal experience, in addition to the science.  Nevertheless, Cargill maintains, “we ultimately degrade ourselves by discounting free will.”

Despite the now-fashionable and, for many, lucrative “Fat Chance” paradigm, the chronically overweight and obese are as capable as anyone else of making rational and intelligent decisions at their groceries, restaurants, and dinner tables. And surely overweight children deserve far more inspiring counsel.  But as both Lustig and Taubes, on the one hand, and Cargill, on the other, have demonstrated in different ways, the solution lies not in diet and exercise per se.  The roots of obesity run far deeper.

Changes to basic life priorities are key. To accomplish a more healthful, independent, and balanced existence, the chronically overweight and obese in particular must first scrutinize their cultural environments, and then discriminate between those aspects that truly benefit them and those designed primarily to take advantage of their vulnerabilities, both intrinsic and acquired.  Certain cultural elements can stimulate the intellect, inspire remarkable achievement, and improve the body and its systems.  But most, if not all, of popular culture exists only to manipulate its consumers into further passive, mindless, and frequently destructive consumption.  The power to choose is ours, at least for now.

References:

(1)Lustig, R.H., L.A. Schmidt, and C.D. Brindis. 2012. Public health: the toxic truth about sugar. Nature 482: 27-29.

(2)Lustig, R. 2012. Fat Chance: Beating the Odds Against Sugar, Processed Food, Obesity, and Disease. NY: Hudson Street Press.

(3)Taubes, G. 2011. Why We Get Fat: And What to Do About It. NY: Knopf.

(4)Taubes, G. 2012. Treat obesity as physiology, not physics. Nature 492: 155.

(5)See, e.g., The National Weight Control Registry. http://www.nwcr.ws/Research/default.htm

(6)Cargill, K. 2015. The Psychology of Overeating: Food and the Culture of Consumerism. NY: Bloomsbury Academic.

(7)Maillot, M., N. Darmon, A. Drewnowski. 2010. Are the lowest-cost healthful food plans culturally and socially acceptable? Public Health Nutrition 13(8): 1178-1185.

Dog Behavior: Beneath the Veneer of “Man’s Best Friend.”

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.

We love our dogs—often more than many fellow humans. In Homer’s 8th century BCE Odyssey, Odysseus referred to the domestic dog as a “noble hound,” and it may have been Frederick II, King of Prussia, who in 1789 first characterized Canis lupus familiaris as “man’s best friend.”  More recently, Emily Dickinson judged that dogs are “better than human beings” because they “know but do not tell.”  Dogs are capable creatures, certainly, but are they as intelligent and considerate as most humans apparently believe?  Regardless of breed, can their levels of consciousness truly support qualities like nobility, loyalty, and friendship?

Ethologists attempt to assess animal behavior objectively by emphasizing its biological foundations. Classic ethology was founded on the notion that animals are driven by intrinsic motor-patterns, or species-specific, stereotyped products of natural selection (Lorenz 1982).  Modern practitioners, however, often introduce additional factors into the ethological equation.  Many suggest, for example, that intrinsic motor-patterns can be accommodated to developmental and environmental influences.  Some argue as well that complex and otherwise confusing behaviors can emerge from interactions between two or more simpler behavioral rules.

Evolving Motor-Patterns.

Ethologists Raymond Coppinger, a biologist, and Mark Feinstein, a cognitive scientist, argue that much, if not all, dog behavior can be explained by reference to these three ideas—without resorting to more romantic notions of consciousness or sentience, let alone loyalty and friendship (2015). In the crucial context of foraging, for instance, intrinsic motor-patterns manifest similarly in all canids.

When born, both dogs and wolves spontaneously demonstrate a characteristic mammalian neonatal foraging sequence: ORIENT (toward mom) > LOCOMOTION (to mom) > ATTACHMENT (to her teat) > FOREFOOT-TREAD (stimulating lactation) > SUCK. Here, pups’ mouths and digestive systems are well-adapted to challenges imposed by the foraging environment—that is, mom.  But despite the cozy evolutionary relationship between dogs and wolves, puppyhood is the point after which precise foraging parallels end.

As adults, canid predators display the following generalized foraging sequence: ORIENT > EYE (body still, gaze fixed, head lowered) > STALK (slowly forward, head lowered) > CHASE (full speed) > GRAB-BITE (disabling the prey) > KILL-BITE > DISSECT > CONSUME. But some species, and some individual dogs and wolves, might substitute one motor-pattern for another.  Tending to hunt small prey, coyotes might occasionally swap the FOREFOOT-STAB pattern for the CHASE pattern, and HEADSHAKE for KILL-BITE. Big cats like the puma, by contrast, might substitute FOREFOOT-SLAP for GRAB-BITE to bring larger prey down from behind.
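The rule-substitution idea is easy to make concrete. In a toy sketch (my own representation, not the authors’ notation), each species’ foraging sequence is an ordered list of motor-patterns, and a variant is produced by swapping individual elements:

```python
# The generalized adult canid predatory sequence described above.
CANID_SEQUENCE = ["ORIENT", "EYE", "STALK", "CHASE",
                  "GRAB-BITE", "KILL-BITE", "DISSECT", "CONSUME"]

def substitute(sequence, swaps):
    """Build a species' variant by swapping individual motor-patterns."""
    return [swaps.get(pattern, pattern) for pattern in sequence]

# A coyote hunting small prey: FOREFOOT-STAB in place of CHASE,
# HEADSHAKE in place of KILL-BITE.
coyote = substitute(CANID_SEQUENCE,
                    {"CHASE": "FOREFOOT-STAB", "KILL-BITE": "HEADSHAKE"})
print(" > ".join(coyote))
```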

The coyote "forefoot-stab" rule.

The precise form of GRAB- or KILL-BITE can vary between species as well, often based on the predator’s evolved anatomy. The puma usually kills with a bite to the neck, crushing its prey’s trachea, or to the muzzle, suffocating the victim.  But the wolf often GRAB-BITES the prey’s hind legs, shredding its arteries and slowly bleeding it to death. Puma and wolf anatomies—jaw structure, dentition, and musculature, in particular—apply different mechanical forces, and thus demand the evolution of at least slightly different foraging behaviors.

Domestic dogs, on the other hand, tend to be far less purposeful.  Having long relied on humans for sustenance, they rarely demonstrate complete predatory foraging patterns.  Instead, different breeds have retained programs for different partial sequences.  Border collies, for instance, are famous for obeying the EYE pattern. I, in fact, once owned an Akita that employed FOREFOOT-STAB with astonishing expertise to capture mice foraging deep beneath the snow.

The Border collie "eye" rule.

Nevertheless, say Coppinger and Feinstein, “certain commonalities have long persisted in the predatory motor-pattern sequences of all carnivores, reflecting their shared ancestry” and “an intrinsic ‘wired-in’ program of rules.” Learning is neither necessary nor optional.  Indeed, as soon as their foraging sequences are interrupted for whatever reasons, some wild-types are rendered incapable of continued pursuit.  Pumas can’t consume an animal that’s already dead, for example, and, although wolves generally can enter their foraging sequences at any point, they often can’t perform GRAB-BITE if interfered with following expression of the partial EYE > STALK > CHASE sequence.

Dogs have similar limitations. Coppinger and Feinstein recall two conversations with different sheep ranchers.  The first shepherd commended his livestock-guarding dog for independently standing watch over a sick ewe for days without consuming it.  The second, however, complained because his guarding dog ate a lamb that tore itself open on a barbed-wire fence.  To the ethologists, these seemingly inconsistent behaviors were anything but.  Livestock-guarding dogs generally do not express the DISSECT motor-pattern. In fact, their only foraging rule is CONSUME. The first dog wasn’t a “good” dog, necessarily—it was just a lucky one.  And the second animal wasn’t really a “bad” dog—it simply performed its intrinsic program when afforded the opportunity to do so.

Accommodating Environment.

However, that many motor-patterns are stereotyped and non-modifiable does not imply that dogs and other animals are mere automata driven solely by internal programs. Certain behaviors can arise as well from an accommodation of the intrinsic to the contingencies of external forces.  Under the “right” circumstances, in other words, animals commonly act in species- or breed-specific manners.  But when exposed to other environmental conditions, their behaviors can look very different (Twitty 1966).

Indeed, if animals encounter such conditions during a developmentally “critical” or “sensitive” period—that is, a species-specific, time-bound stage of growth—they might never display certain typical behaviors. For example, many prospective service dogs flunk out merely because they can’t negotiate stairs, curbs, or even sewer grates.  Why not?  According to Coppinger and Feinstein, their vision systems never fully developed because they were raised in kennels that were sterile and spacious, but nevertheless lacking in three-dimensional structures.

Research also suggests that canids have critical periods for social bonding, during which exposure to a given stimulus will reduce the animal’s fear of that stimulus in the future (Scott and Fuller 1998). Some argue further that certain conspicuous behavioral differences between canid species can be explained, at least in part, by distinct onsets and offsets of these periods (Lord 2013).  For instance, the sensitive bonding period for dogs begins at about four weeks and ends at about eight weeks, while the same period begins and ends two weeks earlier for wolves.

Crucially, dogs and wolves develop their sensory abilities at about the same time—sight and hearing at six weeks, smell much earlier. As such, wolves have only their sense of smell to rely on during their sensitive bonding period.  One general result is that more stimuli, including humans, will remain unfamiliar and thus frightening to them as adults.  But dogs can suffer similar consequences when raised in the absence of direct human contact.

With bonding periods in mind, Coppinger and Feinstein invite us to guess why the Maremma guarding dogs they studied in Italy would abandon their human shepherds to trail their flocks. Were they merely obeying their evolved, gene-based intrinsic motor-patterns—or perhaps their training?  Did they actually understand the importance of their job?  None of the above, say the authors: guarding dog behavior can be “explained by accommodation to particular environmental factors during a critical period in the development of socialization.”

Maremma guarding dog pups.

During his famous, Nobel Prize-winning experiments in 1935, Konrad Lorenz was able to transfer the social allegiance of newly hatched greylag geese from their mothers not only to Lorenz himself but also to inanimate objects, including a water faucet. Coppinger and Feinstein produced similar effects with their Maremmas.  When raised with sheep instead of humans, the dogs usually stuck with the sheep.  Interestingly, however, a few Maremmas preferred to remain at home when both the sheep and their shepherds left for the fields.  These dogs had actually bonded with milk cans that no doubt smelled very much like the sheep.

Emerging Complexities.

Yet other canine behaviors cannot be easily explained by reference to either intrinsic motor-patterns or their accommodations to environmental influences. Consider the collaborative hunting of large prey in wolves, for example.  Here, individuals within the pack appear to work closely together according to a preconceived plan, synchronizing their movements and relative positions to prevent the prey’s escape.  At first glance, the spectacle tempts us to infer not only extraordinary intelligence, but insight as well.

Coppinger and Feinstein, however, suggest an explanation relying not on naïve anthropomorphisms, but rather on our knowledge of canine behavior plus the intriguing concept of emergence.  Nothing new under the intellectual sun, emergence proposes that complex and novel phenomena can arise from the accidental, “self-organizing” interaction of far simpler rules and processes.

Two classic examples illustrate the principle. Known for their towering, complicated, yet surprisingly well-ventilated structures, termite mounds obviously are not designed and erected by hyper-intelligent insects.  Similarly, many species of migrating birds, Canada geese in particular, tend to fly in a conspicuous V-pattern, but not because individual geese possess advanced senses of aesthetics.  The more likely explanation, say the authors, is that members of each species act only according to very simple rules.  Termites transport sand grains to a central location.  Perhaps their movements are sensitive to humidity levels and the relative concentration of gases.  Geese evolved to fly long distances and to draft behind others to lessen their burdens.  The impressive results were never planned; they simply emerged from the interaction of much humbler, species-specific rules.

The practice of collaborative hunting among wolves is no different, according to Coppinger. He and his colleagues recently created a computational model including digital agents representing both pack and prey (Muro et al. 2011).  When they imposed three basic rules on individual predators—move toward the prey, maintain a safe distance, and move away from other wolves—the model produced a successful pattern of prey capture appearing remarkably similar to the real thing.  But in no way were Coppinger’s results dependent on agent purpose, intelligence, or cooperation.
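A minimal sketch of such a model (my own toy reconstruction of the three rules, not Muro et al.’s published code): each wolf closes on the prey, stops at a safe distance, and is pushed away from its nearest packmate. Encirclement emerges without any coordination:

```python
import random

def step(wolves, prey, attract=0.05, safe=2.0, repel=0.5):
    """One tick of the three rules for every wolf."""
    updated = []
    for i, (x, y) in enumerate(wolves):
        # Rules 1 and 2: move toward the prey until safely close.
        dx, dy = prey[0] - x, prey[1] - y
        dist = ((dx * dx + dy * dy) ** 0.5) or 1e-9
        if dist > safe:
            x += attract * dx
            y += attract * dy
        # Rule 3: move away from the nearest other wolf.
        ox, oy = min((w for j, w in enumerate(wolves) if j != i),
                     key=lambda w: (w[0] - x) ** 2 + (w[1] - y) ** 2)
        d = (((x - ox) ** 2 + (y - oy) ** 2) ** 0.5) or 1e-9
        x += repel * (x - ox) / d ** 2
        y += repel * (y - oy) / d ** 2
        updated.append((x, y))
    return updated

wolves = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(5)]
prey = (0.0, 0.0)
for _ in range(300):
    wolves = step(wolves, prey)
print(wolves)  # the pack settles in a ring around the (stationary) prey
```

No agent in this sketch plans, signals, or even knows the others exist beyond a repulsive push; the surrounding formation is purely emergent.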

Collaborative hunting among wolves.

Consider dog “play” as well. One popular explanation of play generally is that it evolved as a “practice” motor-pattern to prepare animals for escape.  But play only partially resembles any given adult motor-pattern sequence.  And to be even minimally effective, escape has to be performed correctly the first time, which is precisely why motor-patterns are intrinsic, stereotyped, and automatic—as Coppinger and Feinstein observe, “no practice is ever required.”

As any dog owner will attest, canine play is commonly manifested in the “play bow”—a posture in which the animal halts, lowers its head, raises its rear-end, and stretches its front legs forward. Many have interpreted the bow as a purposeful invitation to engage in play.  But more careful observations and experiments suggest otherwise.  Play bows often result when dogs and wolves enter into an EYE > STALK motor-pattern sequence only to be interrupted by their subjects’ failures to react—that is, to run. As such, the bow might actually reveal a combination of two conflicting rules: stalk and retreat.  If so, the posture itself is neither an adaptive motor-pattern nor a signal of intent.  Rather, say Coppinger and Feinstein, “it is an emergent effect of a dog (or wolf) simultaneously displaying two motor-pattern components when it is in multiple or conflicting states.”

None of which should diminish the love and allegiance we typically bestow upon our dogs. That they have no desire to please us—indeed, that their conscious goals are severely limited in general—is no reason to deny ourselves the great pleasure we so often derive from their company.  Even so, our failure to pursue a more objective understanding of dog behavior frequently results in disaster, for both ourselves and our pets.  For some, ignorance might be bliss.  But it’s never a solution for anyone or anything.

References:

Coppinger, R. and M. Feinstein. 2015. How Dogs Work. Chicago: University of Chicago Press.

Lord, K.A. 2013. A comparison of the sensory development of wolves and dogs. Ethology 119:110-120.

Lorenz, K. 1982. The Foundations of Ethology: The Principal Ideas and Discoveries in Animal Behavior. NY: Simon and Schuster.

Muro, C., R. Escobedo, L. Spector, et al. 2011. Wolf-pack hunting strategies emerge from simple rules in computational simulations. Behavioural Processes 88: 192-197.

Scott, J.P. and J.L. Fuller. 1998. Genetics and the Social Behavior of the Dog. Chicago: University of Chicago Press.

Twitty, V. 1966. Of Scientists and Salamanders. San Francisco: W.H. Freeman.

The Evolutionary Foundations of Dog Behavior.

[Notable New Media]

by Kenneth W. Krause.

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well. He can be contacted at krausekc@msn.com.

The Border collie “eye” rule.

Ethologists like Mark Feinstein and Raymond Coppinger attempt to study animal behavior objectively by emphasizing its biological foundations. In their new book, How Dogs Work, the authors scrutinize, for example, the adaptive motor-pattern foraging sequences of both dogs and wolves.  In all such species, different patterns—not learned, but genetically based—emerge at different stages of life.

When born, both dogs and wolves demonstrate a characteristic mammalian neonatal foraging sequence: orientation (toward mom) > locomotion (to mom) > attachment (to her teat) > forefoot-tread (stimulating lactation) > suck. Here, the pups’ mouths and digestive systems are well-adapted to challenges imposed by the foraging environment—that is, mom.  But despite the close evolutionary relationship between dogs and wolves, puppyhood is the point after which foraging parallels end.

Adult predators exhibit the following generalized foraging pattern: orient > eye (still, with gaze fixed and head lowered) > stalk (slowly forward with head still lowered) > chase (full speed) > grab-bite (disabling the prey) > kill-bite > dissect > consume. But some species, and some individual dogs and wolves, might substitute one element, or “rule,” for another.  Coyotes that tend to hunt small prey, for instance, might occasionally substitute the forefoot-stab rule for the chase rule, and the headshake rule for the kill-bite rule.  Large cats like the puma, by contrast, might substitute the forefoot-slap rule for the grab-bite rule in order to bring larger prey down from behind.

The coyote “forefoot-stab” rule.

The form of grab- or kill-bite can vary between species as well, often based on the predator’s evolved anatomy. The puma usually kills with a bite to the neck, crushing its prey’s trachea, or to the muzzle, suffocating the prey.  But the wolf often grab-bites its prey’s hind legs, shredding its arteries and slowly bleeding it to death.  Puma and wolf anatomies—jaw structure, dentition, and musculature, in particular—apply different mechanical forces, and thus demand the evolution of at least slightly different foraging behaviors.

Domestic dogs, on the other hand, have long relied on humans for food and now rarely demonstrate complete predatory foraging patterns. Rather, different breeds have retained different partial sequences or distinct individual rules.  Border collies, for instance, are famous for following the eye rule.  I once owned an Akita that employed the forefoot-stab rule with astonishing expertise to catch mice rummaging deep beneath the snow.

Nevertheless, say Feinstein and Coppinger, “certain commonalities have long persisted in the predatory motor-pattern sequences of all carnivores, reflecting their shared ancestry” and “an intrinsic ‘wired-in’ program of rules” that at least partly governs their foraging behaviors. Learn much more about why our beloved pets and working partners really behave in the ways they do in the authors’ fascinating new work, How Dogs Work (University of Chicago Press 2015).

The Laws of Immunity.

by Kenneth W. Krause.

[Notable New Media]

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well.  He can be contacted at krausekc@msn.com.

So how did humans overcome smallpox? Well, that’s an interesting story.

Toward the end of the 18th century, Edward Jenner noticed that, after recovery from infection with the cowpox virus (vaccinia), milkmaids rarely contracted smallpox. Importantly, vaccinia is very closely related to variola, the smallpox virus.

Edward Jenner

So in 1796, Jenner extracted fluid from the pustules on one Sarah Nelmes, suffering from cowpox, and injected it into a healthy James Phipps. After Phipps recovered from a mild case of cowpox, Jenner then intentionally injected him with fluid from the pustules of a smallpox patient. Phipps didn’t develop signs of smallpox infection because the cowpox virus (and the process of “adaptive immunity”) had protected him.

Jenner’s methods obviously wouldn’t pass ethical muster today. But such practices were not uncommon in the 18th century, and understandably so. Smallpox was responsible for more human deaths than any other infectious agent.

Regardless, smallpox vaccination became common in Europe, and infection rates had dropped dramatically by 1820. In 1853, the UK required every healthy child to be vaccinated against smallpox within 3 or 4 months of birth. In 1980, smallpox was formally declared eradicated worldwide.

Read more about the subject generally in William Paul’s new book, Immunity (Johns Hopkins University Press 2015).

The Serengeti Rules.

by Kenneth W. Krause.

[Notable New Media]

Kenneth W. Krause is a contributing editor and “Science Watch” columnist for the Skeptical Inquirer.  Formerly a contributing editor and books columnist for the Humanist, Kenneth contributes frequently to Skeptic as well.  He can be contacted at krausekc@msn.com.

Science has saved countless lives in strangely uncelebrated ways. How did military doctors first learn to treat shock? Well, that’s another interesting story.

In the early 20th century, Harvard physiologist Walter Cannon coined the term “fight-or-flight” following his observations in animal studies that digestive functions were strongly affected by stress. The sympathetic nervous system, he surmised, works in concert with the adrenal glands to modulate body organs during tough times.

As casualties mounted during WWI, Cannon was asked to figure out why the wounded so often went into shock and died. These soldiers exhibited some of the same symptoms as his beleaguered animal subjects—rapid pulse, dilated pupils, and heavy sweating. He quickly volunteered to treat casualties overseas in the Harvard Hospital Unit.

Cannon decided to measure the soldiers’ blood pressure, instead of just their pulse. Shock patients, he discovered, had abnormally low BPs—usually under 90 mmHg. After measuring the concentration of bicarbonate ions in their bloodstreams, he found it similarly lacking. The patients’ normally alkaline blood had become more acidic, and the more acidic it was, the lower the patients’ BPs.

So, to raise their pH levels, Cannon began administering sodium bicarbonate to shock victims. And it worked. Innumerable soldiers were saved before WWI finally came to a grisly end. Later, emphasizing how most bodily organs receive dual nervous system inputs that generally oppose one another, he coined another term—“homeostasis.” This dual regulation, he concluded, “is the central problem of physiology,” and thus the physician’s role was to reinforce or restore homeostasis.

University of Wisconsin-Madison professor of molecular biology Sean B. Carroll uses this story and others to illustrate a worthy point: regulation is critical not just to human health, but to the health of entire ecosystems as well. Also the author of “Endless Forms Most Beautiful” (one of my all-time favorites), Carroll explains in his new book, The Serengeti Rules: The Quest to Discover How Life Works and Why It Matters (Princeton University Press 2016), why the overarching logic of the small and familiar also applies to the large and far-flung.
