Consciousness & Other Stories —

When we left off last time we were marveling at the hoops that philosophers like John Searle and David Chalmers seemed willing to jump through in hopes of finding that mind is not simply reducible to gross materiality. And we closed by suggesting that what justifies all those conceptual gymnastics is a simple, albeit inconvenient, truth — namely, that the one aspect of reality which has so far eluded reductionist explanation is our conscious experience of that same reality.

The Feeling of What Happens

The problem with thinking and talking about conscious experience, though, is that — given how totally immersed in it we are — it’s hard to gain any perspective on it. Might as well ask a fish about its experience of water.

But let’s give it a try.

To begin with, some insight into consciousness can be gained by considering the question of who and/or what can be said to have it. People, obviously (unless, of course, you’re a solipsist, in which case nobody does, except you!). But what about dogs? Field mice? Rocks?

Thomas Nagel offered a useful touchstone in his 1974 Philosophical Review article “What Is It Like to Be a Bat?” — namely, that a thing is conscious if there is something it is like to be that thing.

Now, a bat’s sensorium gives it a very different set of experiences from a human’s — gliding through the crepuscular air, hearing in the ultrasonic, navigating by echolocation rather than sight. Indeed, Nagel argued that we humans could never fully imagine what being a bat is like from the inside. Nonetheless, there is something it is like to be a bat — and by Nagel’s touchstone, that makes the bat conscious.

On the other hand, it doesn’t seem as if it would be like anything at all to be, say, a rock or a mud puddle. There just wouldn’t be any internal experience whatsoever with which to connect.

To clarify things a bit further, the sort of consciousness Nagel is talking about here is relatively low level, a rung or two down the ladder from the higher, more human-like quality of true self-awareness. There are plenty of creatures who would seem to pass the “what is it like to be” test, but only a few (chimpanzees and other great apes, elephants, and dolphins among them) who can also ace the so-called “mirror test” — who can, that is, look into a mirror and realize that what they’re seeing is themselves.

In any case, when I had Jon Knox wondering whether Dualism’s resident artificial intelligence Nietzsche is really conscious, it’s this whole spectrum of experience he has in mind: Would it be like anything to be Nietzsche? Would an AI have any inner life at all? Would it be like John Searle’s (essentially empty) Chinese Room, or would it truly be, as Knox himself imagines, an experience of “no algorithms, just mysteries”?

The stumbling block here, though, is that nowadays — having been raised on a media diet of Star Trek’s Commander Data and Spike Jonze’s Her — folks may find it difficult to recognize the nature of the problem, or even that there is a problem. After all, why shouldn’t a sufficiently complex machine intelligence attain consciousness? What’s the big deal?

From this standpoint, the difference between Siri or Alexa and Nietzsche begins to look more like one of degree than of kind.

The issue would take on a different complexion if it could be shown that David Chalmers is right — that conscious experience is not equivalent to, or emergent from, any physical process at all, be it ever so sophisticated; that conscious experience is something existing out there alongside physical reality, at right angles to it, so to speak.

But Can That Be Shown?

Maybe, maybe not. For my money, the closest anybody’s come is Frank Jackson, with his 1982 Philosophical Quarterly article “Epiphenomenal Qualia” (where qualia — not Frank’s coinage, but a philosophers’ term of art going back at least to C. I. Lewis — refers to the felt qualities of conscious experiences). The notion is related to Tom Nagel’s bat question, except here we’re not exploring what it would feel like to inhabit some other, alien consciousness, but what it feels like to inhabit our own while we’re undergoing some experience: what it feels like to see the color red, for instance.

No one who’s read my previous blog series on “The Why of Stories” should be surprised to learn that Frank drives the point home with a story — in this case, the parable of Mary. Mary is, in Frank’s telling, one of the world’s leading neurophysiologists, living at some future time when the physics and neurophysiology of color vision have been explored to the point of being completely understood. But, owing to some odd whim of her parents, Mary has been raised since birth in a totally black-and-white environment. It’s not that she’s color-blind — her color vision is entirely unimpaired — it’s just that she has never been exposed to anything other than black and white, and hence has never experienced firsthand the perception of color which she, as a neurophysiologist, otherwise understands so completely. In the fullness of time, Mary is finally released from her chromatically impoverished prison and encounters her first rose. We can imagine her exclaiming to herself, “Oh, so that’s what red looks like!” — but what does this mean, really?

In particular, does Mary learn anything new about the world or her experience of it when she beholds that rose for the first time? If so, it can’t be any new physical fact, can it? Because, by hypothesis, she already knows all the physics and physiology, all the physical facts, involved in color vision. For that matter, if knowing the physics and physiology is all it takes, shouldn’t Mary have been able to conjure up for herself the experience of seeing red without ever leaving her chiaroscuro confinement?

Note, however, that the price of establishing a mind/body distinction in this manner is that we find ourselves back in the old interaction quandary — the very one that drove Frank to epiphenomenalism in the first place: if mind is in fact wholly different from matter, how can the two ever interact?

They can’t. Not as long as we’re stuck with the mind/body dichotomy.

But are we?