It is becoming increasingly
difficult to argue that human morality is entirely the product of culture. Christopher Boehm presents a very strong case
in his two books Hierarchy in the Forest
and Moral Origins that morality
emerged as a product of social selection.
Human groups imposed strong pressure on their individual members to
suppress selfish and especially bullying behavior. Over periods of evolutionary adaptation, individuals who better internalized the moral rules of their groups were more successful than those who did not, and groups that contained more honest cooperators were more successful than groups that contained fewer. Frans de Waal and Michael Tomasello have shown that human morality builds on cooperative behaviors observed in other primates, and Tomasello has further shown that moral instincts emerge reliably in children at a young age.
Those who hold that morality is
entirely artificial, the product of learning and culture, haven’t yet given up
the game. Francisco J. Ayala contributes
a chapter in a book he co-edited with Robert Arp (Contemporary Debates in Philosophy of Biology) that makes just that
case. Ayala argues that the capacity for ethics is indeed part of human nature, though the particular norms are not. That capacity is grounded in three products of natural selection: the ability to anticipate the consequences of our actions, the ability to make value judgments, and the ability to choose between courses of action. All moral norms are the consequence of the application
of these basic human capacities to recurrent problems. If some moral norms seem to be universal
(e.g. return favors) that is only because the problems are universal. All moral behaviors are, however, exaptations
rather than adaptations: they emerge in the course of human evolution not
because they were selected for but because they are secondary consequences of
traits that were selected for.
I have to say that this smacks of
an almost Cartesian view of human beings.
Descartes famously believed that animals were mere biological machines,
whereas human beings alone, having souls, possessed the metaphysical capacity for
consciousness and freedom. If Ayala is
right, human beings achieved by hard work alone what the other social primates were granted by way of instinct: altruistic norms promoting cooperative behaviors.
Whether such a dualism is tenable
is not worth discussing. I would point
out here that it neglects the distinction between moral reasoning and moral
emotions, about which we now know a lot.
Consider the two basic versions of the infamous trolley problem. In the “trolley problem,” a trolley is headed toward a switch. As the switch is currently set, the trolley will continue onto a track where it will kill five innocent people. If you throw the switch, you will divert the trolley and save those five people; however, it will then kill one innocent person on the other track. In the “footbridge problem,” you can save the five people only by pushing one innocent person off a bridge into the path of the trolley, thus stopping it.
I have frequently presented these
two dilemmas to my philosophy classes and asked for a vote. My results always conform closely to the general statistics. Most
of my students (but not all of them) throw the switch in the first case but
refuse to push the fellow off in the second.
The most fascinating thing about these two scenarios is that the two popular results (throw the switch, but don’t push the fellow off the bridge) are each easily explained, but by two mutually exclusive theories of ethics. Utilitarians (or consequentialists) argue that an action is just if it secures the most good for the most people. Throwing the switch in the first case is an easy utilitarian calculation, and most people make it. Deontologists (or Kantians) argue that if
something is wrong then you just shouldn’t do it, regardless of the
consequences. Most people confronted
with the second scenario come to precisely that conclusion. Yet the body count is the same in either
case.
The key to understanding why the
jury splits here is to consider the roles of moral reasoning and moral
emotions. This has been done in a
remarkably precise way by Joshua D. Greene et al. in “An fMRI Investigation of
Emotional Engagement in Moral Judgment” (Science
vol. 293, 14 September 2001). The result was that the two problems engage different parts of the brain. The reason people react differently to the footbridge problem than to the trolley problem is that the former engages moral emotions whereas the latter does not.
Human beings are clearly capable
of exercising abstract moral reasoning.
That’s a good thing. It allows us
to make impartial judgments and to arbitrate in controversies where we have no
interests. It makes it possible for us
to respond with flexibility to unexpected circumstances. It makes it possible to legislate. If Ayala were right, that would be all there is to morality.
Of course, he is wrong. We also have strong moral emotions that, when
engaged, channel our moral reasoning and place limits on our choices. That’s a good thing too. In the absence of moral instincts we would
not be impartial judges all the time, for we are also subject to selfish instincts and conflicts of interest. Moral emotions make us better partners,
friends, and citizens by making it uncomfortable for us to bend our moral
reasoning to selfish ends. It is
difficult if not impossible to explain our moral emotions except as evolved
dispositions in a social primate species.
What too many don't realize is that one can simultaneously have a universal moral system, common to all human beings (meaning it is part of human nature), and cultural variations of those morals. People today seem to notice only the variations, falsely concluding that there is no commonality. If one group's definition of murder differs from ours, that doesn't mean murder isn't universal; the fact that there is a definition at all proves it is!
Unfortunately, Troy, you have said nothing with which I disagree.
Why doesn't the trolley problem evoke moral emotions? Since the sacrifice is the same in both cases, I would have expected it to be equally emotionally troubling in both. Is the idea of physical distance (provided by the lever) really significant enough to activate a different part of the brain?
Yes, apparently. In some people, the two problems do feel the same. For most, however, changing the direction of the train feels different from pushing the guy off the footbridge. In all fairness, it's not merely the "distance". In the trolley problem, the death of the one innocent is a consequence of the act that saves the five. In the footbridge problem, you are using an innocent person as a train stop. For Kantians, using a person in such a way to achieve an end, however laudable, is fundamentally immoral. It does look like this distinction depends on different parts of the brain.
This makes a great deal of sense. Thanks, Dr. Blanchard!