Sunday, April 6, 2014

A Hefty Bet, Part 2: Testable in Principle

What would count as scientific evidence against interactionism?  I could imagine neuroscience arriving at an exact model of the computer architecture of the brain, an exact transcript of the sequence of primitive instructions that constitutes the brain’s binary code (I know, the brain does parallel processing -- I don’t mean “sequence” literally), and a reverse engineering of that binary code.  However, there are two problems with that fantasy.

First, neuroscience today is nowhere close to such a theory.  Understanding a complex software system written in an abstract programming language like Java is almost impossible unless the author went to a great deal of trouble to make the system understandable.  Reverse engineering binary code is much more difficult still, especially if the system was written directly in binary code (as the brain’s presumably was), rather than written in an abstract language and compiled into binary code.
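
To see the gap concretely, here is a toy illustration (a made-up example of mine, nothing to do with actual neuroscience).  Even for a trivial Python function, the compiled form strips away the names and structure that make the source readable; Python’s standard dis module will print it:

    import dis

    def fahrenheit_to_celsius(f):
        # The source is self-explanatory: the name and the formula
        # announce the intent.
        return (f - 32) * 5 / 9

    # The compiled bytecode for the same function is a flat list of
    # stack-machine instructions (LOAD_FAST and friends).  Raw machine
    # code is lower-level still, with no names at all.
    dis.dis(fahrenheit_to_celsius)

Recovering the intent from that instruction listing is already work, and the brain hands us only the machine-code level.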

Second, it merely pushes the question back.  What would be the evidence that the theory really models the brain’s computer architecture?  And what would be the evidence that the proposed reverse engineering is correct?

There is a shortcut, however: simulation.  If we could simulate the brain -- replicate its exact functional organization, neuron by neuron -- and show that the simulation doesn’t need any intentyon receptors to function properly, that would constitute a strong empirical case against interactionism.
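
What would the atomic unit of such a simulation look like?  Here is a minimal sketch, using the standard leaky integrate-and-fire model (my toy, wildly simpler than a real neuron):

    # One simulated neuron.  A whole-brain simulation would wire up
    # tens of billions of these with realistic parameters and
    # connectivity; the point is only that each unit is purely
    # functional -- there is no chemistry in it.
    class Neuron:
        def __init__(self, threshold=1.0, leak=0.9):
            self.potential = 0.0
            self.threshold = threshold   # firing threshold
            self.leak = leak             # per-step decay of potential

        def step(self, input_current):
            # Advance one time step; return True if the neuron fires.
            self.potential = self.potential * self.leak + input_current
            if self.potential >= self.threshold:
                self.potential = 0.0     # reset after firing
                return True
            return False

    n = Neuron()
    spikes = [n.step(0.3) for _ in range(10)]
    print(spikes)  # fires whenever accumulated input crosses threshold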

In his attack on interactionism, Chalmers plays the simulation card:

Secondly, in order that this theory allows that consciousness does any interesting causal work, it needs to be the case that the behavior produced by these microscopic decisions is somehow different in kind than that produced by most other sets of decisions that might have been made by a purely random process.  Presumably the behavior is more rational than it would have been otherwise, and leads to remarks such as “I am seeing red now”, that the random process would not have produced.  This again is testable in principle, by running a simulation of a brain with real random processes determining those decisions.  Of course we do not know for certain which way this test would come out, but to hold that the random version would lead to unusually degraded behavior would be to make a bet at long odds.

(Page 155)

Chalmers says this as if a testable prediction were a bad thing!

Note that Chalmers says this is testable in principle.  Of course, it is not currently testable in practice; we do not yet have simulated brains.  (Does a simulated brain have rights?  Can you be tried for murder if you turn it off?)

Since this test has never actually been run, it certainly does not count as evidence against interactionism.
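
Still, the shape of Chalmers’ experiment can be written down even though it cannot yet be run.  Here is a minimal harness (my sketch: the brain simulation is a toy stand-in, and intentyon_channel is a hypothetical name for the receptor mechanism nobody knows how to build):

    import random

    def simulated_brain(decide, steps=1000):
        # Toy stand-in for a whole-brain simulation.  Each call to
        # decide() settles one microscopic, quantum-level decision;
        # the returned number is a crude proxy for overt behavior.
        state = 0
        for _ in range(steps):
            state += 1 if decide() else -1
        return state

    # Chalmers' proposed test: settle the decisions with a genuinely
    # random process and see whether behavior degrades.
    random.seed(0)
    print(simulated_brain(lambda: random.random() < 0.5))

    # The interactionist comparison run would plug in the channel that
    # consciousness supposedly uses -- entirely hypothetical:
    # print(simulated_brain(decide=intentyon_channel))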

I think we will not succeed in simulating the brain.  The brain probably has intentyon receptors that depend on the fine details of chemistry, the way chlorophyll does, and a simulation would lack them.  If you simulate a network port on a virtual machine, you will get nonsense unless you also simulate the server.  If you write a computer program to simulate chlorophyll and then put your laptop in direct sunlight, you won’t get any carbohydrates.
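
The network-port analogy can be made concrete.  A simulated port can reproduce the socket interface exactly, but with no server on the other end, what comes back is noise (a toy example of mine, not real networking code):

    import os

    class SimulatedPort:
        # Reproduces the *interface* of a network socket faithfully,
        # but there is nothing real on the other end of the wire.
        def send(self, data):
            pass  # the bytes go nowhere

        def recv(self, n):
            return os.urandom(n)  # nonsense: no server produced this

    port = SimulatedPort()
    port.send(b"GET / HTTP/1.1\r\n\r\n")
    print(port.recv(16))  # random bytes, not an HTTP response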

(Note: when I say that we will not succeed in simulating the brain, I do not mean that we will never build computer programs that can mimic conscious people, as in the “Turing Test.”  I mean that a computer program that models the functional organization of the brain to a high degree of detail will not work, because it lacks the hardware to receive and generate intentyons.)
