David Barash has an interesting piece in Aeon on the evolution of human consciousness. It is adapted from his book Homo Mysterious: Evolutionary Puzzles of Human Nature. I note that Barash is described as an “aspiring Buddhist”. As I sit zazen myself, I have a connection to the author. Just sayin’.
Consciousness is the single most obvious existential fact. As in Descartes’ famous Cogito Ergo Sum, I know that I think (which means that I am conscious of sensations, emotions, and concepts) better than I know anything else. However, I also think that I am aware of a world around me. The problem of consciousness (the big problem in philosophy of mind) is how to describe consciousness in a way that fits in with our knowledge of the external world.
Barash takes the route followed by many in the field: consciousness means self-consciousness.
I propose that consciousness can be defined as a particular state of awareness, characterised by a curious recursiveness in which individuals are not only aware, but aware that they are aware. By this conception, many animals are aware but not strictly conscious. My two German shepherd dogs, for example, are exquisitely aware of and responsive to just about everything around them — more so, in many cases, than me. I know, however, that I am conscious because I am aware of my own internal mental state, sometimes even paradoxically aware of that about which I am unaware.
I am unconvinced that self-consciousness is nearly as important as Barash thinks it is. He distinguishes between awareness in the case of his dogs, who are aware of their surroundings, and consciousness, which involves self-awareness. However, this is a mere distinction between the objects of awareness. If one mental concept, “tree”, is about something in front of me and another mental concept, “me”, is about myself, these are only two concepts. My ability to distinguish myself and my experience from other things of which I am aware surely indicates a highly developed mind. It does not, however, indicate that I am more conscious than my beagle. She feels pain as I feel pain, or so it seems reasonable to assume. That existential fact is the great mystery.
Awareness can be modeled without any recourse to consciousness. As a virus responds to the type of cell that it evolved to target, so an alligator responds to the wildebeest. You could, conceivably, have creatures that respond in this way without any need for existential pain and pleasure. Assuming that viruses never suffer disappointment or pain, why do wildebeests have to put up with it?
Barash offers us the distinction between proximate and ultimate causes. The proximate causes of consciousness are, no doubt, to be sought in the mechanics of the brain. I am sure this is right, though I am less optimistic than Barash about the prospects of neuroscience here. The ultimate cause must be evolution by natural selection.
Even on a strictly biological basis, consciousness seems hard to justify, if only because it evidently requires a large number of neurons, the elaboration and maintenance of which is bound to be, in terms of energy, expensive. What is the compensating payoff? Bear in mind that, for consciousness to have been selected for over evolutionary time, individuals (and their consciousness-promoting genes) would have to be more successful in propagating copies of those genes in the future than owners of alternative genes that generated less consciousness, or none at all. The bottom line is that consciousness should have paid its way.
Or maybe the big C is not selected for at all. Barash offers the best explanation of the alternative (eliminative materialism) that I have yet seen.
Maybe [consciousness] is just a nonadaptive by-product of having brains bigger than is strictly necessary for bossing our bodies around. A single molecule of water, for example, isn’t wet. Neither are two, or, presumably, a few thousand, or even a million. But with enough of them, we get wetness — not because wetness is adaptively favoured over, say, dryness by the evolutionary process, but simply as an unavoidable physical consequence of piling up enough H2O molecules. Could consciousness be like that? Accumulate enough neurons — perhaps because they permit its possessor to integrate numerous sensory inputs and generate complex, variable behaviour — wire them up and, hey presto, they’re conscious?
I note, though Barash does not, that this makes sense of eliminative materialism without saving it. “Wetness” may be a mere product of a sufficient number of water molecules, but it is not a mere by-product. It has big consequences for the behavior of water at the human scale. There is no reason to assume that consciousness does not have big consequences for the behavior of conscious organisms. If it does not, we have something like Cartesian dualism, with its weird attempts to resolve the problem of interaction. What a strange world it would be if human beings think they consciously influence their material bodies but in fact do not.
Barash suggests that the adaptive function of consciousness may lie in two human capacities. One is the ability to distinguish between what looks good in the short run and what is better over the long run, thus allowing us to make better decisions (i.e., decisions that favor reproductive success). That was one of the basic functions of logos, according to Aristotle. Logos is the capacity that distinguishes human beings from other animals.
The other is the ability to form a “theory of mind.” As I am aware of myself and my thoughts, so I can model the thoughts of other people. This promotes social cooperation, which can pay big Darwinian benefits.
I have no quarrel with any of that. I would suggest, however, a simpler account of the adaptive function of consciousness, one which focuses on pleasure and pain, happiness and disappointment. Animals capable of creative and unpredictable responses to their environments had a distinct advantage over those who were less flexible. That creativity had, however, to be directed toward reproductive success if it were to remain in business. Once you have existentially free animals, if you want them to behave in a way that is reproductively fit, you have to present them with bribes (pleasure) and threats (pain).
I think that the fact of pain and pleasure indicates a dimension of freedom that was opened up by the first emergence of animal consciousness. Only if consciousness is a cause of animal behavior as much as wetness is a cause of rivers can one make sense of pain and pleasure. If it hurts to do something that is contraindicated from a Darwinian point of view, that can only be because the consciousness of pain makes that thing less likely.
I am dubious about the distinction between animal awareness and human consciousness. If the latter is more sophisticated than the former, it is nonetheless just a more sophisticated version of the same consciousness. Consciousness is always an indicator of some measure of existential freedom.