Tuesday, April 29, 2014

Baby You Can Drive My Car, Part 5: The Argument from Knowledge

Chalmers is very clear about his challenge to the interactionist.  Chalmers wants an argument to “show us why the explanatory irrelevance of consciousness simply cannot be true.” (Page 194)

Chalmers suggests three arguments against epiphenomenalism: the argument from knowledge, the argument from memory, and the argument from reference.  My argument is none of these, but in particular, it is not the argument from knowledge.

My lack of interest in the knowledge argument is based on broader theoretical considerations.  What is knowledge, after all?  Philosophers have traditionally accepted four conditions for knowledge.  To say that epistemological agent A knows proposition P implies four things:

1. A believes P.
2. A has certainty, or high confidence, in P.
3. P is true.
4. A’s belief in P is justified.

Conditions 1, 2 and 3 look good to me, but I can’t accept condition 4.  Some of our knowledge is knowledge of axioms, such as the reliability of induction and memory.  Axioms seem to have no justification at all.  Furthermore, much of our non-axiomatic knowledge is derived, at least in part, from the axioms themselves, so it’s hard to see how the non-axiomatic knowledge can be any more justified than the axioms it derives from.  (How did we come to be in possession of these axioms?  That’s an interesting and important question, but for the current discussion, it is beside the point.)

Once we dispense with condition 4, there doesn’t seem to be any reason why an epiphenomenal mind couldn’t possess knowledge of its own consciousness.  A nonconscious epistemological agent could believe (with any level of certainty you like) that there are phenomenal properties associated with its functional states.  If we could build such an agent, we could simply hard-wire it with the assumption that there are such properties.  And since the epiphenomenal mind is conscious by assumption, we get condition 3 for free.
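To make this concrete, here is a minimal sketch (in Python) of the weakened analysis: a knowledge check that tests conditions 1 through 3 and ignores justification entirely.  Everything in it (the Agent class, the hard_wire method, the confidence threshold) is a hypothetical illustration of my assumptions, not a serious piece of epistemology.

```python
# A minimal sketch of knowledge without condition 4 (justification).
# All names here are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Agent:
    # Maps each proposition to the agent's confidence in it (0.0 to 1.0).
    beliefs: dict = field(default_factory=dict)

    def hard_wire(self, proposition, confidence=1.0):
        """Install a belief directly, with no justification required."""
        self.beliefs[proposition] = confidence

def knows(agent, proposition, is_true, threshold=0.9):
    """Conditions 1-3 only: belief, high confidence, and truth."""
    return (proposition in agent.beliefs                 # 1. A believes P
            and agent.beliefs[proposition] >= threshold  # 2. high confidence
            and is_true)                                 # 3. P is true

robot = Agent()
robot.hard_wire("there are phenomenal properties associated with my states")
# If the proposition happens to be true (as epiphenomenalism assumes),
# the hard-wired agent counts as knowing it.
print(knows(robot, "there are phenomenal properties associated with my states",
            is_true=True))  # True
```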

Monday, April 28, 2014

Baby You Can Drive My Car, Part 4: Theories and Theorists

Now let’s go back to the text of “The Conscious Mind”, this time bearing in mind that the book was written by a nonconscious epistemological agent, David Chalmers’ nonconscious physical brain.  Chalmers’ brain writes: “To take the line that explaining our judgments about consciousness is enough...is most naturally understood as an eliminativist position about consciousness...As such it suffers from all the problems eliminativism naturally faces.  In particular, it denies the evidence of our own experience.  This is the sort of thing that can only be done by a philosopher -- or by someone else tying themselves in intellectual knots.”  (Page 185)

Who is the “we” of “our own experience”?  Who or what is “denying the evidence”?  Grammatically, the sentence would seem to be attributing the “denial” to the eliminativist theory itself; surely, that’s not what Chalmers means.  Theories don’t deny evidence; theorists deny evidence.  Theorists such as Armstrong, Dennett, Lewis and Ryle (see Chalmers’ list on page 163).  That is, epistemological agents deny evidence.

But you can’t deny evidence you don’t have.  It is not Armstrong’s conscious mind that is denying experience.  The conscious properties of Armstrong’s mind, whatever they may be, do not add any “denying” competence that is not already present in Armstrong’s nonconscious physical brain.  It is not Armstrong’s conscious mind that is “taking the line that explaining our judgments about consciousness is enough” -- it is Armstrong’s physical nonconscious brain.  And Armstrong’s physical nonconscious brain has no access to the facts, whatever they may be, about Armstrong’s own experiences.  So it’s simply not possible that Armstrong is denying evidence -- there is no such evidence to be denied.

Evidence is only evidence in the hands of an epistemological agent, who has the degrees of freedom to weigh or deny it.

The processes that determine what “line to take” on the question of consciousness are confined to the physical nonconscious brain; and the phenomenal facts, whatever they may be, are simply not inputs to those processes.

Given that the only parties to Armstrong’s mind that have access to the information about consciousness are barred from speaking, not to mention voting, it is no wonder that the Parliament of Armstrong’s Mind votes to deny.  Notwithstanding the fact (that is, the assumption) that they are right, it is the realists (that is, the nonconscious brains of the realists) who are tying themselves in intellectual knots, isn’t it?

Sunday, April 27, 2014

Baby You Can Drive My Car, Part 3: The Nonconscious Brain

Thinking about epiphenomenalism can be very frustrating.  Until very recently, I would often slip into the linguistic pattern of referring to the epiphenomenal brain as “unconscious”.  But Chalmers would surely object to such a usage.  Zombies are unconscious.  Zombie brains are unconscious.  But our brains are not unconscious, because they have conscious experiences associated with them.

Chalmers would probably want to say that the epiphenomenal brain is conscious.  Metaphysically speaking, Chalmers might argue, the conscious experiences might not just be associated with the brain; they might be properties of the brain itself, phenomenal qualities of the functional organization of, or the information encoded in, the brain.  On this view, the brain would be an ontological object with a physical aspect and a phenomenal aspect.

In my opinion, to refer to the physical brain as conscious would be grossly misleading, and would only add to the confusion.  As in the psychons argument (see http://mccomplete.blogspot.co.il/2014/03/psychons-and-intentyons-part-1-chalmers.html), we can subtract the phenomenal properties from Chalmers’ brain and arrive at Chalmers’ physical brain.  The physical brain is self-contained and self-sufficient, and it has no access to the phenomenal properties.

Given this vexing situation, I hope that Chalmers would agree to call the physical brain nonconscious.

If we can agree on that, maybe we can agree that the epiphenomenal mind is a nonconscious epistemological agent.  Epistemological agents take inputs (assumptions) and return outputs (conclusions).  The epistemological functions of the mind are exhausted by the nonconscious brain.  The nonconscious brain is a self-sufficient epistemological agent, and any conscious experiences that may be associated (in some sense) with the brain add nothing to its epistemological agency.

The epiphenomenal mind is an epistemological subject, but not an epistemological agent.  The epiphenomenal mind is a consumer of beliefs, but not a producer of beliefs.  The conscious mind will believe a proposition if and only if the nonconscious brain has generated an isomorphic judgment.
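Here is a toy sketch of that producer/consumer relationship.  The class names and the one-way dependence are my own illustration, under the assumption that the brain is the only party generating judgments:

```python
# A toy sketch: the "brain" is the only epistemological agent, and the
# "mind" passively mirrors whatever judgments the brain generates.
# Purely illustrative; all names are invented.

class NonconsciousBrain:
    """Takes inputs (assumptions) and returns outputs (conclusions)."""
    def __init__(self):
        self.judgments = set()

    def reason(self, assumptions):
        # Stand-in for whatever physical/computational processing occurs.
        for a in assumptions:
            self.judgments.add("conclusion drawn from: " + a)

class EpiphenomenalMind:
    """A consumer of beliefs, not a producer of beliefs."""
    def __init__(self, brain):
        self._brain = brain

    def believes(self, proposition):
        # Belief iff the brain has generated an isomorphic judgment.
        # Note: there is deliberately no method for the mind to add judgments.
        return proposition in self._brain.judgments

brain = NonconsciousBrain()
mind = EpiphenomenalMind(brain)
brain.reason(["I seem to see red"])
print(mind.believes("conclusion drawn from: I seem to see red"))  # True
```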

Thursday, April 10, 2014

Is David Chalmers an Epiphenomenalist?


On page 158, Chalmers writes, “I do not describe my view as epiphenomenalism.”  His preferred terms are “natural supervenience” and “explanatory irrelevance”.  If Chalmers doesn’t describe his view as epiphenomenalism, why do I describe it as epiphenomenalism?

Chalmers continues: “The question of the causal relevance of experience remains open, and a more detailed theory of both causation and experience will be required before the issue can be settled.  But [my] view implies at least a weak form of epiphenomenalism, and it may end up leading to a stronger sort.”

A few pages earlier, Chalmers writes: “It remains the case that natural supervenience feels epiphenomenalistic.  We might say that the view is epiphenomenalistic to a first approximation: if it allows some causal relevance for experience, it does so in a subtle way. I think we can capture this first-approximation sense by noting that the view makes experience explanatorily irrelevant. We can give explanations of behavior in purely physical or computational terms, terms that neither involve nor imply phenomenology.” (Page 154)

Chalmers is saying something like: “If you want to call me an epiphenomenalist, go right ahead.”  To categorize Chalmers’ theory as a variety of epiphenomenalism is not essential to my argument, but it is a greatly simplifying move, and it makes my flow chart work.  So, for the rest of the essay, I will accept Chalmers’ “first approximation”, and simply assume that Chalmers is an epiphenomenalist.

Wednesday, April 9, 2014

Baby You Can Drive My Car, Part 2: Lennon and McCartney

Most Beatles songs are attributed to “Lennon and McCartney”, but it is well known that many of these songs were not really collaborations.  In fact, some of the songs attributed to “Lennon and McCartney” were written by John Lennon, and some of them were written by Paul McCartney, and usually you can find out which one by looking up the song on Wikipedia.

Let’s assume that the song “Baby You Can Drive My Car” was written entirely by Paul McCartney, and John Lennon made no contributions whatsoever.  (Actually, Lennon did contribute some lyrics to “Baby You Can Drive My Car”.)

Given that assumption, would it be true to say that “Baby You Can Drive My Car” was written by Lennon and McCartney?  The statement does have some truth to it: after all, if you add up all of the contributions from John Lennon and all the contributions from Paul McCartney, you get the song.  But usually, we would assume that the attribution to “Lennon and McCartney” is a convention, a fiction.  You might as well say that “Baby You Can Drive My Car” was written by Starr and McCartney, or by Stalin and McCartney.  The true story can be found on Wikipedia: the song was written by Paul McCartney.

Similarly, if epiphenomenalism is true, the book “The Conscious Mind” was not written by the conscious mind of David Chalmers; it was written by the physical brain of David Chalmers.  David Chalmers’ physical brain, according to epiphenomenalism, is self-contained and self-sufficient.  It takes no input from Chalmers’ conscious mind, it needs no help from Chalmers’ conscious mind, and it has no access to Chalmers’ conscious mind.  Chalmers’ conscious mind is no better than a reader of “The Conscious Mind”, certainly not a co-author.

“The Conscious Mind” was written by Chalmers’ physical brain in two important senses.  First, Chalmers’ physical brain is a natural language engine that generated the English sentences that comprise “The Conscious Mind”.  Even more important, however, is that Chalmers’ physical brain is an epistemological engine -- an epistemological agent -- that arrived at the conclusions presented in “The Conscious Mind”.

Monday, April 7, 2014

Baby You Can Drive My Car, Part 1: Whatever Doesn't Kill You

Epiphenomenalism comes with a standard objection, one that is straightforward and almost obvious.  If consciousness really has no effect on the physical world, how can we talk and write about consciousness?  Chalmers refers to this objection as “the paradox of phenomenal judgment”.  He writes:

It is one thing to accept that consciousness is irrelevant to explaining how I walk around the room; it is another to accept that it is irrelevant to explaining why I talk about consciousness. One would surely be inclined to think that the fact that I am conscious will be part of the explanation of why I say that I am conscious, or why I judge that I am conscious; and yet it seems that this is not so...If phenomenal judgments arise for reasons independent of consciousness itself, does this not mean that they are unjustified?

(Page 180)

Chalmers responds by conceding that phenomenal judgments arise for reasons independent of phenomenology itself, but argues that this is not a sufficient reason to reject epiphenomenalism.  He writes: “Epiphenomenalism may be counterintuitive, but it is not obviously false, so if a sound argument forces it on us, we should accept it.”  (Page 157)  And later: “This paradoxical situation is at once delightful and disturbing.  It is not obviously fatal to the nonreductive position, but it is at least something to come to grips with...If we can cope with this paradox, we may be led to valuable insights about the relationship between consciousness and cognition.” (Page 178)

If consciousness did not exist, Chalmers argues, people might still think that it exists.  The same factors that would lead to such a delusion lead, in our case, to the correct conclusion, that we are conscious.  He writes:

To get some feel for the situation, imagine that we have created computational intelligence in the form of an autonomous agent that perceives its environment and has the capacity to reflect rationally on what it perceives...Would it have any concept of consciousness, or any related notions? ...If such a system were reflective, it might start wondering how it is that things look red, and why it is that red just is a particular way, and blue another.

(Page 183)

The robot that Chalmers describes might think that it is conscious even if it is actually not conscious.  An analogous process, he argues, is going on in our brains (except for the brains of the eliminativists, of course; they come to the opposite conclusion).  The fact that (some of) our brains get the right answer here is a kind of coincidence, but sometimes coincidences do happen.  Chalmers writes:

Nietzsche said, “What does not kill us, makes us stronger.” If we can cope with this paradox, we may be led to valuable insights about the relationship between consciousness and cognition.

(Page 179)

In my next few blog posts, I intend to kill David Chalmers.  I mean, I intend to kill Chalmers’ theory.

A Hefty Bet, Part 3: Reasons and Evidence

Chalmers writes: “Giving in to this temptation...[that is, interactionism]...requires a hefty bet on the future of physics, one that does not currently seem at all promising.”  (Page 154)

Interactionism may be a “hefty bet”, but sometimes physics is about making big bets.  When Einstein proposed the theory of general relativity, there was approximately no evidence for it.  Supposedly, gravitational waves were just verified last week, almost a hundred years after Einstein predicted them.

We don’t have any scientific evidence yet for interactionism, but is the lack of evidence, in this case, evidence of a lack?  Since we can’t yet simulate brains, and we don’t yet know the computer architecture of the brain, it is no surprise that we haven’t yet found the intentyon receptors.

I hope that one day science will answer the question of whether there is psychophysical interaction.  Or maybe it never will.  One thing is clear: we are currently far away from that goal.

We have no evidence of interactionism, but we have reasons -- philosophical reasons -- to believe that it is *very* promising.  Some materialists think that there are no reasons to believe in dualism, but Chalmers is not a materialist.  Chalmers agrees with me that there are reasons to believe in dualism.  Once you accept dualism, your only two choices are epiphenomenalism and interactionism.  And epiphenomenalism, as I hope to show in my next few blog posts, is not coherent.

Sunday, April 6, 2014

A Hefty Bet, Part 2: Testable in Principle

What would count as scientific evidence against interactionism?  I could imagine neuroscience arriving at an exact model of the computer architecture of the brain, an exact transcript of the sequence (I know, the brain does parallel processing -- I don’t mean “sequence” literally) of primitive instructions that constitutes the brain’s binary code, and a reverse engineering of the binary code.  However, there are two problems with that fantasy.

First of all, neuroscience today is not anywhere close to such a theory.  Understanding complex software systems written in abstract programming languages like Java is almost impossible unless the author went to a great deal of trouble to make the system understandable.  Reverse engineering binary code is much more difficult, especially if the system was written in binary code (as the brain’s binary code presumably was), rather than being written in an abstract language and compiled into binary code.

Second of all, it just begs the question.  What would be the evidence that the theory really models the brain’s computer architecture?  And what would be the evidence that the proposed reverse engineering is correct?

There is a shortcut, however: simulation.  If we could simulate the brain -- replicate neuron by neuron the exact functional organization of the brain --  and show that our simulation doesn’t need any intentyon receptors to function properly, that would constitute a strong empirical case against interactionism.
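To illustrate just the logic of that comparison (not, of course, anything like a real brain simulation), here is a toy sketch: the same functional organization is run with a hypothetical intentyon-guided decision source and then with a purely random one, and the behavior is compared.  All of the numbers and names are invented for illustration.

```python
# A toy illustration of the test's logic.  Nothing here models a brain;
# the 0.9 bias is an arbitrary stand-in for "guided" behavior.

import random

def simulate(decision_source, steps=10_000):
    """Run a toy agent and score how often its micro-decisions come out apt."""
    apt = sum(decision_source() for _ in range(steps))
    return apt / steps

def guided():
    # Hypothetical intentyon-guided source: biased toward apt decisions.
    return 1 if random.random() < 0.9 else 0

def unguided():
    # Control: the same micro-decisions made by a purely random process.
    return random.randint(0, 1)

print(f"guided:   {simulate(guided):.2f}")    # roughly 0.90
print(f"unguided: {simulate(unguided):.2f}")  # roughly 0.50

# The empirical question is whether a full brain simulation behaves like
# the first line (behavior degrades without guidance) or shows no
# difference at all (which would count against interactionism).
```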

In his attack on interactionism, Chalmers plays the simulation card:

Secondly, in order that this theory allows that consciousness does any interesting causal work, it needs to be the case that the behavior produced by these microscopic decisions is somehow different in kind than that produced by most other sets of decisions that might have been made by a purely random process.  Presumably the behavior is more rational than it would have been otherwise, and leads to remarks such as “I am seeing red now”, that the random process would not have produced.  This again is testable in principle, by running a simulation of a brain with real random processes determining those decisions.  Of course we do not know for certain which way this test would come out, but to hold that the random version would lead to unusually degraded behavior would be to make a bet at long odds.

(Page 155)

Chalmers says this as if a testable prediction were a bad thing!

Note that Chalmers says that this is testable in principle.  Of course, it is not currently testable in practice.  We do not have simulated brains at this point in time.  (Does a simulated brain have rights?  Can you be tried for murder if you turn it off?)

Since this kind of simulation has not been tested in practice, it certainly does not count as evidence against interactionism.

I think we will not succeed in simulating the brain.  I think that the brain probably has intentyon receptors that depend on the fine details of chemistry, as chlorophyll does, and a simulated brain would lack those details.  If you simulate a network port on a virtual machine, you will get nonsense unless you also simulate the server.  If you write a computer program to simulate chlorophyll and then put your laptop in direct sunlight, you won’t get any carbohydrates.

(Note: when I say that we will not succeed in simulating the brain, I do not mean to say that we will not build computer programs that can mimic conscious people, as in the “Turing Test”.  I mean that a computer program that models the functional organization of the brain to a high degree of detail will not work, because it lacks the hardware to receive and generate intentyons.)

Friday, April 4, 2014

A Hefty Bet, Part 1: Is Interactionism Unscientific?

Interactionism also comes with a standard objection, straightforward and almost obvious.  On its face, interactionism sounds unscientific, almost superstitious.  Our brains are made of protons, neutrons, and electrons -- GOEPs -- and science seems to tell us that nothing can move GOEPs except for other GOEPs.

Chalmers writes that interactionism “requires a hefty bet on the future of physics, one that does not currently seem at all promising; physical events seem to be inexorably explained in terms of other physical events.  It also requires a large wager on the future of cognitive science, as it suggests that the usual kinds of physical/functional models will be insufficient to explain behavior.” (Page 154)

Wednesday, April 2, 2014

Psychons and Intentyons, Part 4: Client-Server Dualism

The stories in my previous two posts are fantasies, of course.  But in my opinion, they are particularly coherent fantasies, and they are the curious kind of fantasy that might be true, in the sense that we have no evidence that the possible world they describe is not in fact our world.

We currently have no evidence of the details of the real psychophysical interaction mechanism.  But maybe the real mechanism is similar to my two stories in that it is not consciousness itself that has a direct influence on matter; rather, there is a medium of communication of meaningful messages between the physical domain and the conscious domain.

This theory could be called “client-server dualism”.  According to client-server dualism, the conscious mind is like a server that exchanges meaningful messages with a client program running in the brain.

Tuesday, April 1, 2014

Psychons and Intentyons, Part 3: The Dictionary Theory of Meaning

There is a naive theory of meaning which holds that words have meanings, and if you want to know the meaning of a word, you can look it up in the dictionary.

This theory comes with a ready objection.  Definitions themselves are made of words.  If you take a sentence and replace each word with its dictionary definition, you will just get a longer sentence with many more words.  If you take each word in your second sentence and replace it with its dictionary definition, you will get a very long sentence, and you will be no closer to “meaning” than when you started; perhaps much further away, since you have many more words to interpret.

There is a simple response to this objection, which is almost as naive as the original theory.  Maybe there is a base case to stop the recursion.  Maybe some words are not really defined by their dictionary definitions.  Maybe some words (such as “good”, “evil”, “pleasure”, “pain” and “consciousness”) are primitive, or irreducible, as David Chalmers would say.  These special words have meanings that are pure meanings, disembodied concepts, almost like elementary particles: intentyons.

There is a straightforward recursive function from a sentence to a proposition: for each word, if it has a dictionary definition, replace it with the dictionary definition.  If it has a characteristic intentyon, replace it with the intentyon.  Eventually, you reduce the sentence to a string of intentyons.  The reverse function, natural language generation, is more difficult.  It’s easy to replace each intentyon with a word, but then you get a very long, very unnatural sentence.  The hard work of natural language generation is compressing the long sentence into a shorter, natural sentence that would reduce to the same intentyonic string.  (Or at least, a sufficiently similar intentyonic string.)
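Here is a minimal sketch of that recursive function, with a toy dictionary and invented intentyon names standing in for the real (hypothetical) primitives:

```python
# A minimal sketch of the reduction from words to intentyons.  The
# dictionary entries and intentyon names are invented for illustration.

DEFINITIONS = {
    "agony": ["extreme", "pain"],
    "extreme": ["very", "great", "in", "degree"],
}

# Primitive words map directly to their characteristic intentyons.
INTENTYONS = {
    "pain": "<PAIN>", "very": "<VERY>", "great": "<GREAT>",
    "in": "<IN>", "degree": "<DEGREE>",
}

def reduce_to_intentyons(words):
    """Recursively expand definitions until only intentyons remain."""
    result = []
    for word in words:
        if word in INTENTYONS:        # base case: an irreducible word
            result.append(INTENTYONS[word])
        elif word in DEFINITIONS:     # recursive case: expand and recurse
            result.extend(reduce_to_intentyons(DEFINITIONS[word]))
        else:
            raise ValueError("no definition or intentyon for " + repr(word))
    return result

print(reduce_to_intentyons(["agony"]))
# ['<VERY>', '<GREAT>', '<IN>', '<DEGREE>', '<PAIN>']
```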

There are a lot of intentyons, but it’s possible that each one has its own physical signature, that is, its own unique structural pattern of interaction with the GOEPs.  The computational and sensory equipment of the brain has no access to, or appreciation of, the intentional properties of the intentyons, but when the brain learns a natural language, it can build a kind of map between the intentyons and the words.

When the soul receives (or generates) a stream of intentyons, it experiences understanding.  When the client program running in the brain sends a request to the soul that includes the information about the auditory signature of a sentence along with the string of intentyons carrying the meaning of the sentence, the soul understands the sentence.

So maybe Chalmers’ soul doesn’t know English.  Maybe the thought that originates in Chalmers’ soul is not the English sentence “Consciousness is the biggest mystery”, but the meaning of that sentence as a stream of intentyons.  Perhaps the stream of intentyons is picked up by the intentyon receptor in Chalmers’ brain, and it is the brain that generates the English sentence.  Once his brain generates the sentence, Chalmers can “hear himself think”.

Psychons and Intentyons, Part 2: Zeros and Ones

Suppose that David Chalmers’ soul experiences the thought “Consciousness is the biggest mystery.”  If interactionism is correct, the thought “Consciousness is the biggest mystery” may not, at least initially, be represented by a computational judgment in Chalmers’ brain.  The sentence might originate in Chalmers’ soul.

Chalmers’ soul is not satisfied with thinking this thought; it wants to get the thought in writing, and eventually publish it as the first sentence of a book called “The Conscious Mind”.  But to do that, it needs Chalmers’ fingers to type the sentence on the computer.  And Chalmers’ fingers are controlled by Chalmers’ brain.  So in order to put its thoughts in writing, Chalmers’ soul must find a way to communicate this thought to Chalmers’ brain.

How could this be accomplished?  Could psychons somehow be pressed into service to deliver this message from the soul to the brain?  Text that is stored in a computer, or sent over a computer network, is usually encoded in ASCII or Unicode.  ASCII and Unicode are abstract protocols.  They are conventions for interpreting sequences of bits -- zeros and ones -- as sequences of text characters (that is, letters, numbers, punctuation, etc.).  ASCII doesn’t care what the bits are made of.  They can be made of flip-flops, capacitors, photons, checkers pieces, or psychons.

Of course, we don’t currently have devices that can encode text in psychons, because we don’t have hardware that can generate psychons.  But if the psychophysical interaction mechanism could generate psychons, it could generate a series of psychons that could encode text in the ASCII protocol.  Chalmers’ brain could then detect those psychons, decode the message, and instruct Chalmers’ fingers to type it on the computer.
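To make the substrate-independence point concrete, here is a short sketch of the encoding and decoding steps.  The same ASCII bit string could in principle be carried by flip-flops, photons, or (hypothetically) psychons; nothing in the logic depends on what the bits are made of.  The functions below are my own illustration.

```python
# Encode and decode text as a flat ASCII bit string.  The physical
# carrier of the bits (here, a Python list) is irrelevant to the protocol.

def text_to_bits(text):
    """Encode text as a flat list of ASCII bits (8 per character)."""
    return [int(b) for ch in text for b in format(ord(ch), "08b")]

def bits_to_text(bits):
    """Decode a flat list of ASCII bits back into text."""
    chars = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return "".join(chr(int("".join(map(str, byte)), 2)) for byte in chars)

message = "Consciousness is the biggest mystery"
bits = text_to_bits(message)          # the hypothetical psychon stream
assert bits_to_text(bits) == message  # the brain's decoding step
print(bits[:16])  # the first two characters' worth of "psychons"
```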

Is this mechanism -- oversimplified and fanciful as it is -- vulnerable to the psychons objection?  The “story about the causal relations between psychons and physical processes” indeed does not “invoke the fact that psychons have phenomenal properties”.  In fact, the psychons no longer have phenomenal properties; they have intentional properties.  (That is, they have semantics, or meaning.)  At this point, “psychon” is a misnomer; a better term would be “intentyon”.

Could we somehow subtract the intentional properties of the intentyons from this story, “yielding a situation where the causal dynamics are isomorphic”?  Could we subtract the phenomenal facts about Chalmers’ experiences and obtain a story that is somehow isomorphic or equivalent?

If we subtract the intentional properties from the intentyons, we get a random stream of particles.  We would be two steps away from explanation: we would be oblivious to the pattern that needs to be explained.  If we were to subtract the conscious thought that the intentyons were generated to encode, we would be subtracting the explanation for the pattern.  So both the intentional properties of the intentyon stream and the phenomenal facts about Chalmers’ experiences are “explanatorily relevant”.

The most outlandish element in my story is the suggestion that the message is in ASCII.  The real message is probably not in ASCII or Unicode, but any character encoding will do; A could be 0, B could be 1, and so on.

I am not claiming that this story is true in all its details.  Rather, this story is an example of a class of possible psychophysical mechanisms that are not vulnerable to the psychons argument.  In the following post, I intend to give another such example.