Sabrina Little

Sabrina Little is a Medieval History and Physics teacher at Live Oak Classical School in Waco, Texas. She is a 2009 graduate of The College of William & Mary in Pre-Medical Philosophy and Psychology, with a concentration in Neuroscience, and a 2012 graduate of Yale Divinity School in Philosophical Theology. Sabrina is an ultramarathon runner and a member of the 24-Hour U.S. National Team.

On Parakeets and Human Nature

One Mother’s Day in elementary school, my family arrived home from church to find that my sister’s parakeet, Duncan, had eaten my parakeet, Blue Jeans, and had discarded the remains of his skeleton in the water dish. Duncan sat there unrepentant, our very own avian harbinger of doom, while my mother gathered Blue Jeans’ carcass and put it in a coffee can. Blue Jeans was buried that afternoon under our rose bush in a harried, child-run ceremony. Duncan ate another bird later that year. She died shortly after, professedly of natural causes. I would suggest she died of self-loathing.

I present to you Duncan as the counterargument to all that follows.

On Human Nature

AMC’s popular show, The Walking Dead, features a post-apocalyptic world of zombies. The zombies are horrible, flesh-eating creatures—dead human detritus-feeders especially eager to prey on living, non-zombie humans. The show, developed by Frank Darabont, follows a group of non-zombie protagonists as they seek refuge. It is a survival tale.

The Walking Dead is of sociological interest because it depicts the spontaneous government that arises among the protagonists, best described as a Humean, common-good constellation of social laws and contingencies erected ad hoc as the group works together to survive. At times, they become like beasts, acting as predator and prey in turn. They wield weapons. They share meals together. And as life around them unfolds in destruction, Guillermo declares, “[The world] is the same as it ever was: the weak get taken.”

Perhaps most interesting are the basic assumptions of zombie-ism itself. At the end of season one, the group finds itself at the Centers for Disease Control (CDC) in Atlanta, with the sole remaining doctor, Edwin Jenner. Dr. Jenner shows the group a video scan of a person being overtaken by the zombie virus: first, brain activity arrests and the person dies; then the base of the brain (the site of basic living processes—respiration, circulation, etc.) resurrects. The zombie is born. “The frontal lobe, the neocortex, the human part—that doesn’t come back. The you part,” Jenner explains. “Just a shell driven by mindless instinct.”

Essentially, once your higher-order rational superintendence is gone, you become a zombie, which is not a morally neutral state. Zombies are aggressive human-eaters because “mindless instinct” is understood to be savage. The assumption here is that human cognition is tiered, not strictly in terms of anatomical positioning, but rather with regard to functional primacy. The outer, cortical layers are those we associate with our humanity, and ethics is a higher-order “cultural overlay,” cloaking deeper, darker impulses. This is called Veneer Theory—the popular hypothesis so dubbed (and dethroned) by the Dutch primatologist and ethologist Frans de Waal. It is the central notion that animates the Hobbesian state of nature, wherein men live in bellum omnium contra omnes—the war of all against all—unless tempered by a social regime that cloaks their instinct (Hobbes, De Cive).

And veneer ethics is not assumed exclusively in zombie shows. It is the governing assumption of most popular media. In Breaking Bad, a devoted family man and high school chemistry teacher is diagnosed with lung cancer and cracks. He loses his moral reservations and becomes a crystal methamphetamine dealer to provide for his family. The story is post-ethical; moral calculus becomes an auxiliary task—incidental, not integral to his daily affairs. Likewise, the political drama House of Cards depicts an American government rife with hostility, indecency, and revenge, both revealing and disguising the tumult of key political players’ lives in turn. In this show, the American government—transcribed by its founders in the words of John Locke—startles by being more Hobbesian in praxis, at least just beneath the surface.

Ironically, while screenwriters are preoccupied with the savage human animal stifled beneath a veneer, there is a radical inversion occurring in the narrative of our popular media regarding non-human animals—that they are not all that brutish themselves.

On Animal Nature 

In recent months, there has been an influx of articles on the topic of animal altruism—death behaviors, ensoulment, and apparent morality—sharing a common thesis that non-human animals possess greater sentience and a greater potential for goodness than we might have imagined. Last year, Caitrin Nicol asked readers of The New Atlantis, “Do Elephants Have Souls?”, a question few ask of humans anymore. In October, Gregory Berns proclaimed that “Dogs Are People, Too.” In June, The New York Times published Maggie Koerth-Baker’s article, “Want to Understand Morality? Look to the Chimps,” an anecdote-based survey of chimpanzee death behavior that projects outward onto animal morality more generally.

In her article, Koerth-Baker explores the question of animal mortality rituals through the lens of a 2010 account in Current Biology of Pansy, an elderly chimpanzee who died in the company of friends. At the outset of Koerth-Baker’s descriptions, we are duty-bound to care, as we read the flowery descriptions of chimps with human names growing old together and holding hands. The zoologists are the least human figures in Koerth-Baker’s account: “When the scientists at the park realized Pansy’s death was imminent, they turned on video cameras, capturing the intimate moments during her last hours.” It seems insensitive, like tactless chaplaincy.

We are in the process of realizing that animals are more capable of moral-like action than we had previously perceived. So the statements we make about ourselves in popular media need to be adjusted, because Veneer Theory does not accommodate a beast that is not exceptionally beastly. The cortical veneer, in this case, would merely be covering a scaled level of goodness.

As things stand, the combined statements of our media are: 1) Beasts are actually quite noble. 2) Humans are rather savage. But we are covered in a tissue paper layer of human moral cognition that disguises this (until perhaps we become zombies). We are veiled, appetitive terrors.

Is anyone paying attention? Not only have we collapsed the gap between human and animal morality, we have inverted them so that animals are our moral superiors.

An Alternative

Frans de Waal, a contemporary primatologist and ethologist, has made it his life’s work to study the gap between primates and people, specifically in terms of morality, operationally defined as the capacity for empathy. In this gap, you find what distinguishes human nature from non-human animal nature.

The primary innovation of de Waal’s project is his objection to Veneer Theory. Instead of the moral cloak, he offers a view of “nested dolls” of prehuman selves—layers upon layers of increasing aptitude for social interdependence among organisms. He does so by drawing out the moral-like impulses of animals, primarily our closest nonhuman relatives, the chimpanzees, to show that our evolutionarily inherited instincts are not as brutish and self-serving as we imagined.

De Waal’s work is given legs by cognitive modularity theories, which presume that a significant amount of our moral processing takes place in locked-down mental modules (or processing units), reified by our biological inheritance to uphold basic kinship altruism. These are very fast moral shortcuts that happen without reflection; they produce gut-response, impulse reactions. With animals, we can stop there. But human moral cognition extends beyond this. Set apart in a secondary tier of processing are the rational, utilitarian, divine-command, and revelation-based modes of moral cognition we typically associate with humanness.

For de Waal, the non-human animal pre-moral landscape stops just short of attribution, the ability to fully step outside of oneself and see things from another’s perspective. Humans alone have the highest order of empathic capacity, a robust self- and social-cognizance paired with language and memory, which, combined, enable a true morality. We transcend the pre-moral landscape and can uniquely act in ways that are not dictated by the demands of adaptive proliferation. We feel accountability toward moral laws we cannot satisfy (a phenomenon John Hare calls the “performance gap”), and we feel compelled to establish our own laws. In The Roots of American Order, Russell Kirk remarks,

“Even the simplest human communities cannot endure without some form of laws, consciously held and enforced. Ants and bees may cooperate by instinct; men must have revelation and reason.”

Humans have a desire for external coherence and a more stable sense of identity, which elevates accountability and prevents us from being a theatre of passing appetites. We are different, though not in a cloaked way. Our departure from non-human animal nature is continuous with it—a departure that advances and refines a preexisting capacity for moral conviction, one that does not radically dismiss the good of the creatures below us, but affirms it.

Interestingly, de Waal’s stance better accommodates both the Judeo-Christian notion of creation’s goodness and modern evolutionary theory. St. Augustine paints an analogous picture of the human as moral actor. In his chain of being, he collapses morality onto ontology and illustrates how virtue affirms being. By living rightly—in obedience to God—we affirm our proper place on the chain of being. Conversely, when we disobey or seek lesser goods than God, we become less of who we were created to be. We “[sink] to the animal” (Augustine, Confessions). Likewise, modern evolutionary theory is better served, because Veneer Theory is antiparsimonious (evolutionarily uneconomical). With de Waal’s moral scaling, we no longer have to posit a reason why humans departed from their inherited natures, because those natures are fairly good to begin with.

As things stand, there is a strange disjunction between depictions of animal nature and human nature in contemporary media, though this might change. Perhaps as we grow in our understanding of animal nature, we will learn to refine the questions we ask about ourselves.

 

Don’t Tread On Me, Neuroscience

In second grade, my older sister stole my diary. I found her in the kitchen reading an entry to my mom, and it felt like the deepest form of betrayal. It wasn’t that I had secrets worth keeping; there wasn’t much going on in my life besides constructing dinosaur models and attending to a (heart-stoppingly beautiful) My Little Pony collection. The problem was that I intended my diary to be for my eyes alone. I didn’t want my sister and my mom to have that sort of privileged access to my deepest, most personal thoughts.

I didn’t want them to know how much I cared for dinosaurs.

Privacy is important. This is ripe fruit. The government is reading our emails, and many people feel exposed. It isn’t necessarily that we have secrets worth keeping. We’re just reliving that time in the kitchen with our moms and sisters and our dinosaur diary betrayal. But these conversations are also happening in an entirely different context: popular neuroscience.

When I read David Brooks’ recent column, “Beyond the Brain,” and various responses to it, I was struck not by the recognition of the reductive nature of the sciences in general—and neuroscience in particular—but by the implications of that reductiveness. One is the invasion of privacy. Another is the truncation of the human person from a conscious, surprising, autonomous enigma capable of grasping beauty and participating in goodness, to a predictable, exposed network of causal circuits.

Privacy

A major touchstone in Brooks’ article is the mereological fallacy: The mind is not merely the brain. If my mind is merely my brain, then let’s get some fMRIs done and get to know one another. If you track my neural circuitry, I stand fully exposed because you can see my core. This is obviously a lot worse than email-tapping. Emails are sent with dignity, tact, and proper salutations. And emoticons. But your pure mind—unfunneled through email—is the deepest you there is. Sans neuroscientific insight, this self is hidden from everyone. What is shared is what you choose to release.

Furthermore, most of you cannot feasibly be transcribed into email form. Try it. Attempt to write an email about the “what it’s like” phenomenological qualia of your experience of a color. You can’t. Not all of the contents of your mind can be exported, because language is limited. There are some thoughts you can never express. We have “tacit knowledge,” philosopher Michael Polanyi’s term for “knowing more than we can say.” Beyond that, Nietzsche offers, “We knowers are unknown to ourselves.” We cannot see ourselves as we are. Our motivations are hidden. Our hearts are deceptive. Our thoughts stem from responses to unconscious stimuli we never rationally encounter. It would be awkward if I couldn’t know myself but my fMRI technician could, and we had unilateral mental intimacy. That would be the most embarrassing doctor’s appointment in the world. Is that what is making us upset? Unilateral exposure? The government gets to read our emails, but we don’t get to read theirs.

Brooks addresses this. Have no fear! Your secrets are safe. There is no brain-tapping yet, or possibly ever. “It is probably impossible to look at a map of brain activity and predict or even understand the emotions, reactions, hopes and desires of the mind.” Here, he spends a great deal of time on neurological complexity.

Brooks surveys the literature and explains that brain states are not typically generated in isolated regions of the brain, and that they differ over time and in different situations, like fatigue, nostalgia, or thirst. Also, diverse sets of activities occur in each region, so it is difficult to isolate specific patterns. It is worth noting that even with all of the complexity offered here, this is too tidy an account, because it treats the brain as if it were isolated from the rest of the human body and its being-in-the-world. There are also the spinal cord and the peripheral nervous system. In The Second Brain, Dr. Michael Gershon writes about the enteric nervous system, the neurons embedded in the lining of our digestive systems. That’s about one hundred million additional neurons to consider. We should count them. And what about shared consciousness, mirror neurons, coerced action, electrical interference, lesions, and group dynamics? What about what I ate today, how loved I feel, and whether or not I am effectively maintaining homeostasis? Truly, the mind is not merely the brain. The brain is not a closed system.

In “Beyond the Brain,” Brooks identifies and upends science without boundaries. He draws our attention to the dangers of neuroscience’s efforts to define all of human behavior. He indicts “nothing buttery,” the conversations that begin by saying, “The human being is nothing but…” Because if there is one thing we know of ourselves, it is that we are complex. Brooks ends by beckoning us forth to “harvest the exciting gains made by science and data while understanding the limits of science and data.”

Scientism

His article expertly introduces the problem of scientism—the attempt to apply empirical principles and methods universally, to areas that extend beyond empirical boundaries. Under scientism, smaller, more repeatable things are presumed to have greater explanatory power, and the scientific method is taken to guide all modes of inquiry. But scientism is not a problem unique to neuroscience. It is, as Brooks writes, an age-old problem of human progress. “[P]eople get caught up in the excitement of [a] breakthrough and try to use it to explain everything.” Yes, and it never works. Consider John Lennox’s illustration of Aunt Matilda’s cake:

Let us imagine that my Aunt Matilda had baked a beautiful cake and we take it along to be analyzed by a group of the world’s top scientists. I, as master of ceremonies, ask them for an explanation of the cake and they go to work. The nutrition scientists will tell us about the number of calories in the cake and its nutritional effect; the biochemists will inform us about the structure of the proteins, fats etc. in the cake; the chemists, about the elements involved and their bonding; the physicists will be able to analyze the cake in terms of fundamental particles; and the mathematicians will no doubt offer us a set of elegant equations to describe the behaviour of those particles. Now that these experts, each in terms of his or her scientific discipline have given us an exhaustive description of the cake, can we say that the cake is completely explained?

No, none of the scientists can answer the question of why the cake was made—its final cause. It is outside the proper boundaries of their disciplines. It is outside the boundaries even of their combined disciplines. Yet scientism is insidious because science is excited and hopeful.

Scientism is the noble error of enthusiastic nerds. It is erroneous and haughty, but it is also bold and hopeful. Science is animated by an optimism that there is a closed set of information in our world, and that we will reach the end of it if we keep on trucking with our empirical tool belts. Scientism keeps on trucking. But scientism is also dangerous because of what it says about the world, and what it fails to say. Nowhere is this felt more keenly than when we turn the lens inward, upon ourselves, and attempt to craft a science of humanity with the same tools we use to explore everything else.

It seems the central issue of reductive neuroscience (scientism as evinced in neuroscience) is most illuminated in Brooks’ remark about the “effort to take the indeterminacy of life and reduce it to measurable, scientific categories.” It does not deeply offend us when the world outside of us—trees, mountains, and reptiles—is made measurable.

Yet when the same principles are applied inward, we are forced to reckon with a science of man that discounts the full persons we believe ourselves to be. We have transcendent, intangible qualities that cannot be fully captured—like consciousness, free will, and an ability to depart from the dictates of a material, self-proliferating nature—and that means something big for humans. To apply Frans de Waal’s language, non-human animals participate in “moral precursors,” such as kinship altruism or scaled empathy for group fitness. Humans, however, are set apart in some fundamental way, through language and an ability to rationally superintend our appetites and physical natures. There is a transcendent space for belief, religion, and morality. We have ideas and emotional lives, and we are different. We surprise ourselves. I didn’t know I was going to write that.

There are merits to a science of the human: finding that we are affixed to a past grounds us here on earth in a way that makes us an empathetic part of the world we live in. Science teaches us how to treat our bodies better and offers medicine to heal. It gives us the delight of getting to know ourselves at the level of Aunt Matilda’s cake.

The most interesting thing about Brooks’ article is how much we care. There are no botany rallies or mycology debates in the media, but we are worried about neuroscience. This is because science applied to humanity, specifically to the human mind, somehow leaves us feeling unsettled and unduly exposed. And this means one of two things: either the human is a higher being, different from the other creatures we readily explore with reductive tools, or our sensitivity to science applied inward indicates that we should adjust the tools we use when looking outside ourselves, because it is equally possible that we are reducing reality as it stands apart from us.

In any case, neuroscience is an exciting new frontier that can offer rich insights into the human person, even though it cannot teach us everything. It should be encouraged with tempered enthusiasm and a respect for its boundaries. And we can rest assured that it has not yet advanced to the point where it can detect our deepest thoughts, predict our every action, or discern whether or not we love dinosaurs.