Wednesday, March 26, 2025

Biology, Cosmology and the Direction of Time

 1. Introduction

Why does biological evolution move "forward" in time, despite the fact that the fundamental laws of nature seem to be more or less time-symmetrical? Why are we born before we die, and not after?

Physicists and philosophers often seem to assume that all, or almost all, time-asymmetric (or irreversible) processes can be reduced to, or explained by, the Second Law of Thermodynamics. In this essay, I will argue that the biological arrow of time may not be reducible in any straightforward manner to the thermodynamic arrow of time, and that we may need a new cosmological/thermodynamic explanatory framework.

2. The Limits of Thermodynamics

Evolution enables the emergence of more complex beings over time, which, in and of themselves, have relatively low entropy. This doesn't violate the second law, because "mother nature" pays for the reduced local entropy by compensating for it elsewhere, for example with infrared radiation.

That explains how forward evolution is possible despite the second law. That's a far cry from invoking the second law to explain why backwards evolution is impossible!

For evolution to go "forward" in time, it would seem that it needs to go thermodynamically "uphill". Wouldn't it be easier to go downhill? If it were going downhill, then, in principle, it wouldn't need to bother with the infrared radiation, because there would be nothing to compensate for.

3. Niwrad's Universe

Let me introduce you to Niwrad's universe, which I just made up: the universe in which everything else is the same as ours, except biological evolution runs in the opposite time direction. In Niwrad's universe, human beings assemble themselves over a period of years, say, in the ground, and, when they're ready, they're raised from their graves, taken somewhere to come to life, and walk away. Then they get younger until they turn into babies, and the doctors put them into their mothers' wombs.

Something seems wrong with that picture -- but what, really, is the problem? Is the second law of thermodynamics somehow being violated? Let's focus on reverse-decomposition, which is equivalent to spontaneous assembly. Does it violate the second law? No, it doesn't, for the simple reason that, although spontaneous assembly decreases local entropy, mother nature is compensating for it elsewhere. How does mother nature do it? Who knows, that's beside the point. The second law certainly doesn't care. All you have to do to placate the Entropy Police is to pay for the entropy you use. There are no requirements on how you pay.

In fact, the local entropy loss generated by spontaneous assembly is approximately the same entropy loss that happens when a man and a woman have a child (in our universe) -- and mother nature pays for it. How? The second law doesn't care.

So let's put the second law aside for a moment. (Put the second law aside? How could we? That's against the rules! You can't put the second law aside, even for a moment! The second law explains everything! Or maybe it doesn't?) This process still seems astronomically improbable. It should never happen in a million years. And in Niwrad's universe it's happening all over the world all the time.

Does that explain what's wrong with Niwrad's Universe? It certainly seems like it should, but that's just because of our forward-centric bias. After all, look at our own universe.

4. Darwin's Universe

In our own universe, we tend to think that spontaneous assembly doesn't happen, but it all depends on your point of view. If you look at our universe from the other direction of time, "backwards", then spontaneous assembly is happening all over the world, all the time. That's not fair! Why is it OK for astronomically improbable events to happen if you look at our world backwards, but not if you look at our world forwards? Can this somehow be reduced to the second law of thermodynamics? (Hint: it can't.)

We could generalize by saying that in forward-directed time, improbable things rarely happen, and astronomically improbable things approximately never happen, whereas in backwards-directed time, improbable things happen all the time. And this is *not* reducible to the second law of thermodynamics. With apologies to all the real physicists, I'm going to call this the first law of chronodynamics, just so that I'll be able to refer to it later.

So we established that, in the backwards direction of time, astronomically improbable things are happening all the time. But not just any random astronomically improbable things. More specifically, in backwards time, astronomically improbable things happen *only if* they *are* probable in the forwards direction. Let's call that the second law of chronodynamics.

Both of these laws seem to be true in our universe. But why? What is really going on?

5. Did the Big Bang Really Have Low Entropy?

The consensus among physicists seems to be that the early universe had (improbably) low entropy. This would seem to be an inescapable consequence of the second law of thermodynamics. After all, if entropy rises with time, then the universe in the past must have had lower entropy than the universe of today -- there's no getting around that. And almost by definition, low entropy is improbable, certainly more improbable than high entropy.

However, I believe that the early universe actually had high entropy -- so high that it was actually at equilibrium. How is that possible? Because the early universe was smaller than the current universe -- not just a bit smaller but way, way smaller. A smaller universe means fewer places particles can be -- smaller positional "phase space", if I'm using the term correctly.

The argument can be found here: https://mccomplete.blogspot.com/2025/03/did-big-bang-really-have-low-entropy.html

6. Macrostate Transitions and Explanation

Take a snapshot of the universe (our universe) at some moment of time, T -- say, 1 billion years ago, with life already flourishing on Earth. This macrostate is far from equilibrium — full of complex structures, gradients, and processes like evolution and metabolism.

If you take the forward-looking view, the macrostate at time T follows -- is caused by -- the macrostate at T - 1. The macrostate transition from T - 1 to T is a "probable" transition, whereas the macrostate transition from T to T - 1 is an improbable transition.

The macrostate transition from T - 1 to T is a "probable" transition, even though neither T - 1 nor T is a probable macrostate. Neither T nor T - 1 is at thermodynamic equilibrium -- maximum entropy -- far from it, and therefore, they are both improbable, in and of themselves.

The macrostate at T is an "improbable" macrostate -- unless you *explain* it by invoking the macrostate at T - 1. But that kind of explanation just "begs the question". It pushes the "question" back to T - 1.

But the improbability of T - 1 can be "explained" by invoking T - 2. And so on, back to the big bang.

That's where the high entropy big bang comes in. According to my argument referenced above, the big bang *was* at equilibrium. It was a *probable* macrostate, so it needs no explanation. (What does need explanation is the cosmological arrow of time, the expanding universe.)

7. Conclusion

I have argued that the biological arrow of time is traceable to a boundary condition in the past, but contrary to the standard view, it is traceable to a high entropy -- in fact, equilibrium -- boundary condition, not a low entropy boundary condition. 

Because the big bang was at equilibrium, relative to its small size, it was able to "kick off" a chain of moments where each moment explains the next -- but only in the forward direction of time.


Did the Big Bang Really Have Low Entropy?

 1. Introduction

We’re often told that the early universe was in a state of "inexplicably low entropy." This idea appears everywhere from textbooks to pop science: it’s the puzzle behind the arrow of time. If entropy always increases, and the future is higher-entropy than the past, then the early universe must have started in a low-entropy state. And low entropy states are highly ordered states, right? Why would the universe have started in a highly ordered state?

2. What is Entropy?

Entropy is often misunderstood as "disorder," but more precisely, it’s a count of how many microscopic states are compatible with the macroscopic conditions of a system. More possible configurations = more entropy.

In a box of gas, high entropy means the particles are spread out randomly, not clustered in a corner. As time passes, the gas tends to spread — not because it’s trying to "disorder" itself, but because there are vastly more ways to be spread out than to be concentrated. Entropy increases not by intention, but by statistics.
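This counting argument can be made concrete with a toy sketch (the cell and particle counts here are arbitrary illustration values, not a physical model): divide the box into cells and count the ways of placing indistinguishable particles, at most one per cell.

```python
from math import comb

CELLS = 1000  # cells in the whole box (arbitrary toy value)
N = 100       # indistinguishable particles, at most one per cell

# Microstates with the particles allowed anywhere in the box.
spread_out = comb(CELLS, N)

# Microstates with the particles confined to one corner
# occupying an eighth of the box.
clustered = comb(CELLS // 8, N)

# "Spread out" corresponds to vastly more microstates than
# "clustered in a corner" -- that's all the entropy difference is.
print(spread_out // clustered)  # an astronomically large ratio
```

Even in this crude model the ratio has over a hundred digits; the gas "spreads out" simply because almost every microstate is a spread-out one.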

3. Why was the Big Bang Hot?

In statistical mechanics, high temperature corresponds to high entropy. If you have a lot of energy in a small volume — as in the early universe — the most probable, highest-entropy state is a hot (smooth?) radiation bath. The energy gets distributed among many short-wavelength, high-energy particles. That’s what high entropy looks like in a small universe.

In other words: the early universe was hot because it was high entropy. In fact, I believe that one could argue that the early universe was at equilibrium -- it had maximum entropy for its size.

4. Then Why Does Entropy Keep Increasing?

Because the universe didn’t stay small. As space expands, it creates more room — not just in a literal sense, but in "phase space", the abstract space of all possible configurations. More volume means more ways particles can be arranged, more available microstates, and thus a higher maximum entropy.

So even if the early universe started with the highest entropy available to it, the expansion of space allowed entropy to keep rising. The second law of thermodynamics doesn’t demand that the early universe was "low" entropy — only that entropy increases from whatever value it started with. And that’s exactly what happened, because space itself was growing.

The growth of "max entropy" due to expansion of space far outpaced the growth of actual entropy, even though both were growing. That gave the universe thermodynamic "elbow room" to undergo processes that unfolded by leveraging the gap -- such as gravitational clumping.
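That widening gap can be sketched with the same kind of crude cell-counting model (all numbers are arbitrary illustration values): keep the particle number fixed, freeze the "actual" entropy at its small-box value, and let the number of cells grow.

```python
from math import comb, log

N = 100  # particle number stays fixed as the "universe" expands

# "Actual" entropy frozen at its value for a small box of 200 cells
# (a stand-in for a universe that started at equilibrium for its size).
S_actual = log(comb(200, N))

# As the box grows, the maximum possible entropy grows with it,
# opening up a widening gap -- the thermodynamic "elbow room".
for cells in (200, 400, 800, 1600):
    S_max = log(comb(cells, N))
    print(cells, round(S_max - S_actual, 1))
```

In this toy version the gap grows simply because enlarging the volume multiplies the number of available arrangements; nothing about the frozen "actual" state has to change.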

5. Do Clumps Really Have High Entropy?

It’s often said that the early universe was "too smooth," and that a clumpier configuration — with stars and planets already formed — would’ve had higher entropy. But there would seem to be a problem with that idea. In the hot early universe, clumps wouldn’t have lasted. They would have instantly disintegrated under the enormous pressure and thermal motion. Smoothness wasn’t a fragile, special arrangement — it was the stable, high-entropy state under those conditions.

Later, as the universe cooled and expansion made clumping possible, gravitational structures emerged. But that was a change in what kinds of configurations were entropically favored — not a sign that the early universe had been low entropy to begin with. And that change itself was a direct result of the expansion of the universe.

6. Do Black Holes Really Have High Entropy?

The current consensus among physicists seems to be that black holes are very highly entropic objects. If that's true, then it would be a mystery why the universe didn't start out as one big black hole, or maybe a collection of smaller black holes.

It could be that the universe was too small at the big bang to mathematically support black holes, or it could be that the universe did start out with black holes but they evaporated -- small black holes evaporate more quickly, and if there were black holes at the big bang, they would have to have been extremely small.

But I think we should be a bit skeptical of the claim that black holes are highly entropic, for reasons I've outlined here: https://mccomplete.blogspot.com/2025/03/do-black-holes-really-have-high-entropy.html

7. The Arrow(s) of Time

Instead of saying "the past is when entropy was lower," maybe we should say something deeper: The past is when space was smaller. The future is when space will be larger. Entropy increases because there’s more room to grow.

First of all, this reduces the "early universe low entropy" problem to the "expanding universe" problem -- a problem we already had. Second of all, it takes two arrows of time -- the thermodynamic arrow and the cosmological arrow -- and unifies them, and argues that the cosmological arrow is more fundamental. In a sense, it reinterprets thermodynamics as geometry.

8. Conclusion

The early universe wasn’t cold and clumpy. It was hot and smooth, and — it would seem to me — typical for a small, newly born universe. As space expanded, entropy increased. The universe of the past was not improbably ordered -- it was just kind of small.


Do Black Holes Really Have High Entropy?

1. Introduction

The idea that black holes are highly entropic objects originates in the Bekenstein-Hawking entropy formula:

S = (k_B * c^3 * A) / (4 * G * ħ)

where A is the area of the event horizon. This formula has been deeply influential, leading to the development of black hole thermodynamics. However, it relies on an analogy between Hawking temperature and classical temperature that is difficult, if not impossible, to test empirically. In this post, I question the assumption that black hole entropy must be as high as the Bekenstein-Hawking formula suggests.
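For a sense of the scale involved, here is a quick back-of-the-envelope evaluation of the formula for a solar-mass black hole, using standard SI constants; the result is the familiar textbook figure of roughly 10^77 in units of k_B.

```python
import math

# Standard SI constants (CODATA values, rounded)
c    = 2.99792458e8     # m/s
G    = 6.67430e-11      # m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # J s

M_sun = 1.989e30  # kg, one solar mass

# Schwarzschild radius and horizon area
r_s = 2 * G * M_sun / c**2
A = 4 * math.pi * r_s**2

# Bekenstein-Hawking entropy, expressed in units of k_B
S_over_kB = c**3 * A / (4 * G * hbar)
print(f"{S_over_kB:.2e}")  # roughly 1e77
```

It is precisely this enormous number, dwarfing the ordinary thermodynamic entropy of a star of the same mass, that the rest of this post treats with skepticism.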

2. Gravitational Collapse and the Asymptotic Horizon

From the frame of a distant observer, a collapsing star never crosses its Schwarzschild radius in finite time. Due to extreme gravitational time dilation, the surface of the star appears to freeze ever closer to the would-be event horizon, asymptotically approaching it.

There seems to be no reason that the entropy of the object should suddenly jump, or, as was assumed before Bekenstein, suddenly drop. Instead, the entropy of the object should evolve continuously as more matter collapses or falls in. It would seem that the entropy of the object should be traceable to its matter content and configuration, much as in any other thermodynamic system.

3. Rethinking Bekenstein's Argument

Bekenstein originally introduced black hole entropy to resolve apparent violations of the second law of thermodynamics. His concern was that objects with entropy could fall into a black hole, causing the total entropy of the universe to decrease unless the black hole itself was assigned an entropy. However, Bekenstein was largely concerned with avoiding the conclusion that black holes had zero entropy.

From the perspective of a distant observer, collapsing matter retains its identity outside the horizon for all time. Assigning it its original entropy—without invoking an area law—is sufficient to satisfy the second law in this frame.

4. Hawking Radiation and Thermodynamic Analogies

Hawking's conclusion that black hole entropy is extraordinarily large -- proportional to the area of the event horizon -- relies on the formal analogy between the surface gravity of a black hole and the temperature of a classical thermodynamic system. This analogy, while elegant, is still a conjecture. It is not derived from a statistical mechanics account of microstates, but from a semi-classical field theory calculation. The identification of entropy as proportional to the area is not a necessity of the theory but an interpretive leap.

5. Relevance to the Information Paradox

The question of whether black holes are highly entropic is often linked to the black hole information paradox. However, this link may be overstated. It is generally accepted that even asymptotically collapsing black holes emit Hawking radiation and can evaporate, regardless of whether a true event horizon forms. The existence of Hawking radiation and black hole evaporation depends on quantum field behavior in curved spacetime, not on entropy accounting.

Therefore, the claim that black holes must be highly entropic is not essential for understanding or resolving the information paradox. Even if the entropy remains equivalent to that of the infalling matter in the frame of a distant observer, the mechanisms for evaporation and information retention (or loss) remain intact. The paradox should be addressed on its own terms, without assuming a horizon-based entropy law.

6. Observational Considerations

It remains unclear whether any empirical observation could definitively distinguish between the standard claim that black holes have an enormous entropy proportional to horizon area and the alternative proposal that their entropy corresponds to the infalling matter. Both perspectives make the same predictions for external observers in terms of Hawking radiation and gravitational dynamics. As a result, the entropy-area law may be more a theoretical convention than an empirically testable fact.

7. Conclusion

I have presented a conceptual argument for skepticism about the claim that black holes are intrinsically highly entropic. From the frame of a distant observer, where an event horizon is never fully realized, there is no compelling reason to assume that black hole entropy must obey the area law. Instead, it may be more natural to assign entropy based on the matter content that is collapsing, without invoking an enormous entropy jump.


Monday, January 1, 2024

That's What I Call a Compliment: A Parable by MC Complete

There once was a commoner transporting a lot of heavy packs on his donkey. Suddenly the donkey stumbled and all the packs fell off. Oh no, thought the commoner. It will take forever to reload this donkey. Just then, he saw his friend running over to help him. Together, they were able to reload the donkey in half the time. The commoner was so grateful to his friend. "You're the king," he said.

The next day, the commoner went to a show where a musician was playing the harp. The music was so beautiful and so skillfully played that it almost brought the commoner to tears. After the show, the commoner ran up to the musician. "You're the king!" he said.

The next day, the commoner went to pick up a shirt from the tailor. He couldn't believe his eyes. It looked as good as new. "You're the king!" he said to the tailor.

The next day, the palace guards grabbed him and dragged him to the throne room. "What did I do?" he asked the king.

"Nothing," said the king. "I just want you to sing my praises."

"But I can't sing," said the commoner.

The king rolled his eyes. "You don't need to sing," said the king. "Just say something nice about me."

"You're the king!" said the commoner.

"Excuse me?" said the king.

"I said, you're the king."

"Off with his head," said the king.

"What?" said the commoner. "What did I do?"

"Obviously I'm the king," said the king. "That's not a compliment, that's just stating a fact."

The executioner was summoned. The guards put the commoner's neck on the chopping block. The executioner raised his axe. "Wait, wait!" said the king.

"What now?" said the executioner.

"This idiot deserves to die," said the king. "But I'm going to let him go home."

"Home?" said the executioner. "You're not even going to like throw him in the dungeon or something?"

"You heard me," said the king.

The executioner shrugged. "You're the king," he said.

The commoner fell to his knees and bowed. "Thank you, thank you, thank you!" he said. "You truly are a merciful king!"

The king smiled. "Now that's what I call a compliment," he said.


Saturday, July 1, 2023

Promises

In Pirkei Avos 1:15, Shammai Hazaken says, "say little and do much."

Devarim 23:23 says, "if you don't make vows, you won't end up doing sins." (Meaning, you won't end up doing the sin of breaking your vow.)

We learn from this that keeping promises is good, and making promises is bad.

Thursday, June 1, 2023

A Plague of Sound

 Why didn't the dogs bark on the night of the Exodus?

I never thought about it much until recently. I'd usually assumed that what the Torah really means by saying that the dogs didn't bark was that the Egyptians didn't protest when the Jews sacrificed lambs. Earlier, Moses had predicted that if the Jews were to sacrifice lambs in Egypt, then the Egyptians would stone them, because the lamb was considered a god (Shmos 8:26). So I'd thought that the Torah was using a kind of hyperbole: when the Jews actually did sacrifice lambs, not only did they not get stoned, the Egyptians didn't even raise their voices in protest, and even the dogs didn't bark.

It also reminded me of the trope that often appears in ghost stories, that humans can't see ghosts, but dogs can (and the dogs usually bark at the ghosts). During the plague of the firstborn, the angel of death came to Egypt, so you might have thought that the dogs would have barked at him, but they didn't.

When you read the pasuk in context, however, a different explanation suggests itself. (It's Shmos 11:6-7.) The previous pasuk talks about the screaming and wailing that will take place in Egypt when the firstborn die. The lack of dogs barking seems to describe the silence (peace and quiet) that will prevail among the Jews. As if the plague of the firstborn is not only a plague of death, but also a plague of sound.

This makes an interesting juxtaposition to the previous plague, which was a plague of darkness, so in a sense, a plague of light, or lack thereof: "For all of the children of Israel, there was light in their dwelling places." (Shmos 10:23)

The Artscroll commentary on the Chumash says basically the same thing: "In sharp contrast to the grief and death that will engulf the Egyptians, the Jews will enjoy complete tranquility; not even a dog will bark or howl against them."

Thursday, July 28, 2022

Why Did Moshe Hit the Rock?

Moshe dies right before the Israelites invade the land of Canaan. The Torah makes it clear that this is not an accident; it's a punishment. What is it a punishment for? There are actually two versions in the Torah. The less familiar one, stated in parshas Devarim, is that Moshe is being punished along with the generation of the desert for the sin of the spies. The one most of us are more familiar with is the sin of Mei Meriva, when Moshe hit the rock.

What exactly did Moshe do wrong in the story of Mei Meriva? There are actually multiple opinions about this, and the Torah does not say explicitly. Most of us assume that Moshe's sin was hitting the rock after Hashem had told him to speak to the rock; presumably the reason that this theory is so common is that it is endorsed by Rashi.

In this case, I find Rashi's position compelling. So let's assume that Rashi is right. Hashem told Moshe to talk to the rock, and Moshe disobeyed a direct order and hit the rock instead.

That raises a big question. What was Moshe's motive? Why didn't he just talk to the rock?

Another question -- not so big, but still a question -- why does Hashem say that Moshe (and Aharon) didn't believe in Him, and desecrated His name? I can understand how disobeying a direct order would lead to the punishment of not going into the land -- it's arguably not such a big punishment, Moshe lived 120 years -- but why would hitting the rock constitute 1. a desecration of Hashem's name and 2. a lack of faith?

To find out why Moshe might have hit the rock instead of talking to it, let's take a look at what he says to the Israelites right before he hits the rock. "Listen you rebels, from this rock, shall we bring forth water?"

Grammatically, this is a question. But what kind of question is it? It doesn't seem to be the kind of question where Moshe is actually asking for information. It sounds like a rhetorical question. Right?

Whenever there is a rhetorical question, there is an implied answer. What is the implied answer here?

One possibility is that Moshe is implicitly asking whether the Israelites want him to draw water from the rock. Maybe it could be rephrased as "Hey guys, want me to draw water from this rock?" in which case the implied answer would be "yes" -- Moshe knows that the Israelites want water, they've made that painfully clear.

But there's another possibility. Imagine that there's a natural disaster today that creates a water shortage in some area of the world. There's no running water, no water in the stores. Imagine a family where all the kids are whining and complaining to the father (or the mother) that they're thirsty and they want water. In a fit of frustration, the father picks up a broom and says, "What do you want from me? Do you think I can just hit the wall with a broom and water's gonna come out?" Maybe Moshe was saying something like that. "What do you want from me? Do you think I can just hit a rock and make water flow?"

The big difference between the story I just told and the story of Mei Meriva is that the father in the story has no magical powers and has no reason to believe that a miracle will happen. Moshe, on the other hand, was told by Hashem that if he talks to the rock, the rock will indeed bring forth water.

Which would explain why Moshe hit the rock and didn't talk to it -- he didn't want the rock to bring forth the water.

Which raises the question, of course -- why not? Why didn't he want the rock to bring forth the water?

In the story of Mei Meriva, the Israelites were asking for water. That's a reasonable request -- a very reasonable request. Water is one of our most basic needs. Without water, they might all have died. However, they didn't ask very nicely. They said "why did you take us out of Egypt" and other not-so-nice things. They started a riot. Without divine intervention, Moshe himself might have been killed.

This probably bothered Moshe very much. But somehow, it didn't seem to bother Hashem at all. Hashem didn't even condemn the Israelites' behavior. He just said "give them water" and He told Moshe how. At that point, maybe Moshe snapped. He didn't think they deserved water, at least not without a good talking-to.

If this interpretation is correct, it makes perfect sense that Hashem would say to Moshe (and Aharon, who was apparently in on Moshe's plan, though the Torah is coy on those details) that he failed to sanctify Hashem's name. In this instance, Hashem -- for whatever reason -- was focusing on performing the miracle of producing the water, which would be a spectacular show of power. Moshe tried -- and ultimately failed -- to get in the way of that.

Sunday, June 5, 2016

Abstraction Considered Harmful (In Unit Tests)

If you were building an ALU, how would you test addition?  You'd probably write tests like

# Figure 1.
self.assertEqual(1 + 2, 3)
self.assertEqual(1 + (-2), -1)

etc. etc.  But those tests cover exactly those numbers, and nothing more; they're very specific to the use case.  What your ALU really does is add numbers in general, so maybe that's what the test should check?  So why not write a test like this:

# Figure 2.
for x in range(MIN_INT, MAX_INT + 1):
    for y in range(MIN_INT, MAX_INT + 1):
        self.assertEqual(x + y, x + y)

Now that test tests what your code does, not just specific use cases.  But there's an obvious problem: that test does absolutely nothing.  If there's a bug in the ALU, both expressions will return the same wrong answer, and the test will pass.  So how about a test like this?


# Figure 3.
def my_alternative_addition_function(x, y):
    # The best software arithmetic code ever
    ...

def test_addition(self):
    for x in range(MIN_INT, MAX_INT + 1):
        for y in range(MIN_INT, MAX_INT + 1):
            self.assertEqual(
                x + y,
                my_alternative_addition_function(x, y)
            )

That code most definitely is testing something.  And it’s definitely better than the code in Figure 2.  It’s testing what the ALU does, not a specific use case.  So it looks like a pretty good test, right?

Maybe, but I would argue that there’s still a problem.  The trouble is that it’s very tempting to use similar procedures in my_alternative_addition_function(), so that the code in your “alternative” function looks very similar to the code that you’re effectively using in your ALU.  If that’s the case, then the code in Figure 3 is not really much better than the code in Figure 2; the bugs in your ALU are likely to be present as well in your “alternative” function.

Sure, if you (or someone else) later changes the ALU, then this test might catch the regression.  On the other hand, this test won’t catch the bugs in the first version of the ALU; also, when the test fails in the future, the code maintainer will be tempted to fix the test by “fixing” the alternative function.  After all, tests need to be updated when code changes, right?

What if you make sure that the code for my_alternative_addition_function() is totally different than the code effectively used by your ALU?  What if you use a different addition algorithm, or hand the test off to a third party who’s never seen your ALU design?  Would that be sufficient?

Maybe, but maybe not.  If the test fails, you don’t know whether the bug is in the ALU or in the alternative addition function.  And the temptation is always there to fix the test by “fixing” the alternative, even when the real bug is in the ALU, thereby breaking them both, and effectively putting us back where we started, in Figure 2.

That’s why the code in Figure 1 is probably the best.  Unit testing is about test cases.  Production code is abstract, because it’s supposed to implement a single abstract contract for an effectively infinite domain of inputs.  What unit tests do is to select a meaningful, representative set of sample inputs, where the expected output is known in advance.  Most good unit tests specify that expected output precisely; that is to say, the output is “hard coded”.
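To illustrate, here is a minimal sketch of what such hard-coded, representative cases might look like. The 16-bit width, the wrap-around semantics, and the alu_add stand-in are my own hypothetical choices for the sake of a runnable example, not part of the figures above.

```python
import unittest

MIN_INT, MAX_INT = -2**15, 2**15 - 1  # hypothetical 16-bit ALU

def alu_add(x, y):
    # Stand-in for the ALU under test: 16-bit two's-complement
    # addition with wrap-around on overflow.
    return (x + y + 2**15) % 2**16 - 2**15

class TestAluAdd(unittest.TestCase):
    def test_representative_cases(self):
        # Expected outputs are hard coded -- worked out in advance,
        # not recomputed by the test itself.
        self.assertEqual(alu_add(1, 2), 3)
        self.assertEqual(alu_add(1, -2), -1)
        self.assertEqual(alu_add(0, 0), 0)
        self.assertEqual(alu_add(MAX_INT, 0), MAX_INT)
        self.assertEqual(alu_add(MAX_INT, 1), MIN_INT)   # overflow wraps
        self.assertEqual(alu_add(MIN_INT, -1), MAX_INT)  # underflow wraps
```

A handful of cases like these pin down the contract at its interesting edges (zero, sign change, overflow) with outputs that can't silently drift when the implementation changes.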

Some suites of unit tests might have a mix of abstractions and concrete cases.  Sometimes, expressing expected outputs concretely can actually be much more complex than making abstract assertions, and is not worth the effort.  But we shouldn’t lose sight of the fact that when it comes to unit tests, hard coding is good, and abstraction is at best a mixed blessing.

(Note: if you missed the literary reference from the title, see https://en.wikipedia.org/wiki/Considered_harmful)

Tuesday, May 27, 2014

Appearance and Reality


Section 11.8 of "Consciousness Explained" is a Platonic dialogue between Daniel Dennett and his reluctant (and fictional) student, Otto.  Here is an excerpt from the dialogue:

Dennett: These additions are perfectly real, but they are just more “text” -- there is nothing more to phenomenology than that.

Otto: But there seems to be!

Dennett: Exactly!  There seems to be phenomenology.  That’s a fact that the heterophenomenologist enthusiastically concedes.  But it does not follow from this undeniable, universally attested fact that there really is phenomenology.

(Page 366)

It sounds very reasonable, doesn’t it?  “There seems to be phenomenology” doesn’t imply “there is phenomenology.”  Appearance does not imply reality.

The problem is with Dennett’s unorthodox theory of appearances, which he gives a few pages earlier in the same dialogue.

Now you’ve done it.  You’ve fallen into a trap, along with a lot of others.  You seem to think there’s a difference between thinking (judging, deciding, being of the heartfelt opinion that) something seems pink to you and something really seeming pink to you.  But there is no difference.  There is no such phenomenon as really seeming -- over and above the phenomenon of judging in one way or another that something is the case.

(Page 364)

Dennett seems to be saying that (for some proposition P) “P appears to be true” is equivalent to “I believe that P is true”.  If that is the definition of seeming, then “P seems to be true but it is actually false”, is equivalent to “I believe that P is true but it is actually false”, which makes no sense.

Saturday, May 24, 2014

Rotating Images


In Sweet Dreams (sorry I’m not giving page numbers in this essay -- Kindle isn’t showing me page numbers) Daniel Dennett argues that practicing experimental psychologists work under the assumptions of heterophenomenology, and not classical phenomenology (which Dennett sometimes calls “autophenomenology”).  He cites as an example experiments by Roger Shepard where subjects are shown drawings of two three-dimensional shapes and asked if they are actually the same shape in two different positions.  Apparently, the subjects got the correct answer a lot of the time.  Dennett writes:

Most subjects claimed to solve the problem by rotating one of the two figures in their “mind’s eye” or imagination, to see if it could be imposed on the other.

Shepard argued that the subjects actually did solve the problem by rotating the shapes in their imagination, and he supported this claim by trying to show that the time it took to solve the problem correlated well with the “rotation distance” between the two shapes, that is, how many degrees the shape would need to be rotated from the position of the first shape to the position of the second shape.  “This didn’t settle the issue,” Dennett writes, “since Pylyshyn and others were quick to compose alternative hypotheses.”

Pylyshyn, Dennett argues, is clearly practicing heterophenomenology.  The subjects claim to be rotating mental images -- they report a conscious experience -- and Pylyshyn’s hypotheses suggest that no such experience actually happens.  Even Shepard seems to be practicing heterophenomenology.  If he simply assumed that his subjects actually experienced what they think they are experiencing, there would be no reason to find further evidence of those experiences.

Dennett writes:

Subjects always say they are rotating their mental images, so if agnosticism were not the tacit order of the day, Shepard and Kosslyn would never have needed to do their experiments to support subjects’ claims that what they were doing (at least if described metaphorically) really was a process of image manipulation.

This doesn’t quite follow, IMAO.  Dennett doesn’t seem to consider the possibility that there is no “order of the day”, that some experimental psychologists identify with heterophenomenology and some identify with classical phenomenology.

Furthermore, I don’t think that Pylyshyn’s alternative hypotheses are actually relevant to heterophenomenology.  Dennett assumes that if the alternative hypotheses were true, it would imply that the subjects’ reports were false, but IMAO this doesn’t follow.  It’s possible that the brain subconsciously solved the problem through some non-rotational algorithm, but people still imagined the images rotating -- the experience may have reflected the nature of the problem being solved, rather than the process by which it was solved.

In general, these image rotation experiments are not phenomenology experiments at all -- their relevance to phenomenology is indirect.  The experiments analyze the brain’s computational competence, rather than the person’s conscious experiences.

Thursday, May 15, 2014

Unbelievable


Is it possible that there are true propositions that we are incapable of believing?

I don’t see why not.  If we are information processing machines, it’s reasonable to assume that we come hard-coded with assumptions or modes of thought that we are not capable of stepping away from.  So it seems reasonable that there are propositions that we are not capable of believing.  We could call these the “unbelievable” propositions.  We can hope that most of the unbelievable propositions are actually false, but maybe some of them are true.  We could call these propositions the “unbelievably true” propositions.

If there are unbelievably true propositions, we will never be able to identify what they are.  To identify an unbelievably true proposition, we would have to realize that it is true, which is impossible by definition.

Therefore, if you can prove that some proposition P is unbelievable, that’s almost as good as proving it false.  It might be true, but if it were, it would do us no good, since we couldn’t believe it anyway.

What does all of this have to do with heterophenomenology?  Maybe nothing.  Consider, however, an interesting passage from the heterophenomenology chapter in Consciousness Explained: “People undoubtedly do believe that they have mental images, pains, perceptual experiences, and all the rest, and these facts -- the facts about what people believe, and report when they express their beliefs -- are phenomena that any scientific theory of the mind must account for.” (Page 98)

What does Dennett mean when he says “people undoubtedly believe that they have mental images...and all the rest”?  Does he mean:

1. Everyone necessarily believes that he has mental images and all the rest.
2. Most people currently happen to believe that they have mental images and all the rest.

Undoubtedly, Dennett doesn’t mean #1.  After all, the question of whether people “have mental images and all the rest” is exactly what heterophenomenologists are supposed to remain “agnostic” about.  So if we are going to be good heterophenomenologists, which Dennett seems to think all sensible people can be without too much trouble, we are required to not believe that people have mental images and all the rest (and, of course, not to believe the negation, which would be that people do not have mental images and all the rest).

So Dennett must mean #2.  The belief that people have mental images and all the rest must be a hunch, similar to what Dennett calls “the zombic hunch”, that will soon experience (so to speak) “the death of an illusion” (see the title of Sweet Dreams, Chapter 1).  We could call it “the phenomenological hunch”.  As Dennett writes at the end of Chapter 1:

I anticipate a day when philosophers and scientists and laypersons will chuckle over the fossil traces of our earlier bafflement about consciousness: "It still seems as if these mechanistic theories of consciousness leave something out, but of course that's an illusion. They do, in fact, explain everything about consciousness that needs explanation."

Does Dennett also anticipate a day when philosophers, scientists and laypersons will chuckle about our earlier confusion about experiences?  Will they say something like, “It still seems as if we have mental images, pains, perceptual experiences, and all the rest, but of course that’s an illusion”?

Is that all?  Is Dennett’s science of mind about nothing more than explaining the mistaken beliefs of some people?

Tuesday, May 13, 2014

The Dinosaurs of Deuteronomy


According to the numbers given in the Torah, human beings have walked the Earth for about 5000 years, and the Earth predates us by about one week.  Modern science, on the other hand, has accumulated a wealth of evidence showing that human beings have been around for millions of years, and the Earth predates us by a few billion years.

How can these different positions be reconciled?  One possible solution would be to speculate that the scientific evidence favoring millions and billions of years, for humans and the Earth respectively, was planted by Hashem to test our faith.  However, most people who take the question seriously don’t take the “testing our faith” answer seriously, for a simple reason.  Hashem is infinitely good, therefore it’s impossible that He would deliberately mislead us.

A priori, this makes a certain amount of sense. But before projecting our own categories on Hashem, maybe we should take a minute to see what the Torah has to say.  Particularly relevant is the passage of the False Prophet, from Dvarim 13:1 to 13:5:

If there arise among you a prophet, or a dreamer of dreams, and gives you a sign or a wonder, and the sign or the wonder comes to pass; and he says to you, “Let us go after other gods, whom you do not know, and let us serve them”; do not listen to the words of that prophet, or that dreamer of dreams.  For Hashem your God is testing you, to know whether you love Hashem your God with all your heart and all your soul.

The Torah tells us that sometimes, Hashem allows false prophets to provide convincing evidence of their false religions, in order to test our faith.  If Hashem doesn’t see anything wrong with sending false prophets to test our faith, why would He have any qualms about planting a few dinosaur bones in the ground?

Thursday, May 8, 2014

I Believe in Miracles


A miracle is an event that violates the laws of nature.

The conservation of mass-energy is a law of nature.  (For simplicity, we can assume that mass is a form of energy and call it “the conservation of energy”.)  Energy is never created or destroyed.  No lab experiment has ever succeeded in creating or destroying energy.

Energy was created in the Big Bang.

Therefore, the Big Bang was a miracle.

Tuesday, May 6, 2014

Baby You Can Drive My Car, Part 6: The Argument from Self-Evidence

This is the final installment in a series responding to "The Conscious Mind" by David Chalmers: http://www.amazon.com/Conscious-Mind-Search-Fundamental-Philosophy-ebook/dp/B004SL4KI0/

The previous installment can be found here: http://mccomplete.blogspot.co.il/2014/04/baby-you-can-drive-my-car-part-5.html

As I explained in my previous post, I don't think knowledge needs to be justified; it just needs to be true.  David Chalmers does not agree.  He wants us (the epiphenomenal us) to have justified knowledge of our own consciousness.  He writes:

Intuitively, our access to conscious experience is not mediated at all.  Conscious experience lies at the center of our epistemic universe; we have access to it directly…What is it that justifies our beliefs about our experiences?...It is having the experiences that justifies the beliefs...There is something intrinsically epistemic about experience.  To have an experience is automatically to stand in some sort of intimate epistemic relation to the experience -- a relation that we might call “acquaintance”.

(Page 194)

Chalmers is saying that experience is self-evident, and I couldn’t agree more.  But if experience is self-evident, then epiphenomenalism must be false.

If epiphenomenalism were true, experiences would be facts.  But facts can only become evidence if they fall into the hands of an epistemological agent.  And if epiphenomenalism is true, the epistemological agents are all out to lunch.

If epiphenomenalism is true, then the only (relevant) epistemological agent is the nonconscious brain.  The nonconscious brain has no access to phenomenal facts.  Since no epistemological agent has access to the phenomenal facts, the phenomenal facts never become evidence.

Epistemological agents have inputs and outputs.  My experiences are inputs to me as an epistemological agent.  Not a functional organization isomorphic to my experiences, not information about my experiences, but my experiences themselves.  That is how I know that in the real world, the world that I inhabit, epiphenomenalism is false.

Tuesday, April 29, 2014

Baby You Can Drive My Car, Part 5: The Argument from Knowledge

Chalmers is very clear about his challenge to the interactionist.  Chalmers wants an argument to “show us why the explanatory irrelevance of consciousness simply cannot be true.” (Page 194)

Chalmers suggests three arguments against epiphenomenalism: the argument from knowledge, the argument from memory, and the argument from reference.  My argument is none of these, but in particular, it is not the argument from knowledge.

My lack of interest in the knowledge argument is based on broader theoretical considerations.  What is knowledge, after all?  Philosophers have traditionally accepted four conditions for knowledge.  To say that epistemological agent A knows proposition P implies four things:

1. A believes P.
2. A has certainty, or high confidence, in P.
3. P is true.
4. A’s belief in P is justified.

Conditions 1, 2 and 3 look good to me, but I can’t accept condition 4.  Some of our knowledge is knowledge of axioms, such as the reliability of induction and memory.  Axioms seem to have no justification at all.  Furthermore, much of our non-axiomatic knowledge is derived, at least in part, from the axioms themselves, so it’s hard to see how the non-axiomatic knowledge can be any more justified than the axioms it derives from.  (How did we come to be in possession of these axioms?  That’s an interesting and important question, but for the current discussion, it is beside the point.)

Once we dispense with condition 4, there doesn’t seem to be any reason why an epiphenomenal mind couldn’t possess knowledge of its own consciousness.  A nonconscious epistemological agent could believe (with any level of certainty you like) that there are phenomenal properties associated with its functional states.  If we could build such an agent, we could simply hard-wire it with the assumption that there are such properties.  And since the epiphenomenal mind is conscious by assumption, we get condition 3 for free.
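The hard-wired agent described above can be sketched in code.  This is only a toy illustration under my own assumptions -- the class and its axiom are hypothetical, not anything Chalmers proposes -- but it shows how conditions 1 and 2 can be satisfied purely functionally, with the belief built in as an unrevisable axiom rather than derived from evidence:

```python
class HardWiredAgent:
    """A toy nonconscious epistemological agent with a built-in axiom."""

    def __init__(self):
        # Condition 1: the agent "believes" P -- the belief is hard-wired.
        # Condition 2: confidence is set to certainty and never updated.
        self.axioms = {
            "my functional states have phenomenal properties": 1.0
        }

    def believes(self, proposition):
        return proposition in self.axioms

    def confidence(self, proposition):
        return self.axioms.get(proposition, 0.0)

    def update(self, proposition, evidence_strength):
        # Axioms are immune to revision: no input can dislodge them,
        # mirroring the claim that axioms need no justification.
        if proposition in self.axioms:
            return


agent = HardWiredAgent()
p = "my functional states have phenomenal properties"
assert agent.believes(p)            # condition 1
assert agent.confidence(p) == 1.0   # condition 2
agent.update(p, 0.0)                # contrary evidence changes nothing
assert agent.confidence(p) == 1.0
```

Condition 3 (truth) is stipulated by the epiphenomenalist assumption itself, and condition 4 (justification) is exactly what we have dispensed with, so nothing in the sketch needs to supply it.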

Monday, April 28, 2014

Baby You Can Drive My Car, Part 4: Theories and Theorists

Now let’s go back to the text of “The Conscious Mind”, this time bearing in mind that the book was written by a nonconscious epistemological agent, David Chalmers’ nonconscious physical brain.  Chalmers’ brain writes: “To take the line that explaining our judgments about consciousness is enough...is most naturally understood as an eliminativist position about consciousness...As such it suffers from all the problems eliminativism naturally faces.  In particular, it denies the evidence of our own experience.  This is the sort of thing that can only be done by a philosopher -- or by someone else tying themselves in intellectual knots.”  (Page 185)

Who is the “we” of “our own experience”?  Who or what is “denying the evidence”?  Grammatically, the sentence would seem to be attributing the “denial” to the eliminativist theory itself; surely, that’s not what Chalmers means.  Theories don’t deny evidence; theorists deny evidence.  Theorists such as Armstrong, Dennett, Lewis and Ryle (see Chalmers’ list on page 163).  That is, epistemological agents deny evidence.

But you can’t deny evidence you don’t have.  It is not Armstrong’s conscious mind that is denying experience.  The conscious properties of Armstrong’s mind, whatever they may be, do not add any “denying” competence that is not already present in Armstrong’s nonconscious physical brain.  It is not Armstrong’s conscious mind that is “taking the line that explaining our judgments about consciousness is enough” -- it is Armstrong’s physical nonconscious brain.  And Armstrong’s physical nonconscious brain has no access to the facts, whatever they may be, about Armstrong’s own experiences.  So it’s simply not possible that Armstrong is denying evidence -- there is no such evidence to be denied.

Evidence is only evidence in the hands of an epistemological agent, who has the degrees of freedom to weigh or deny it.

The processes that determine what “line to take” on the question of consciousness are confined to the physical nonconscious brain; and the phenomenal facts, whatever they may be, are simply not inputs to those processes.

Given that the only parties to Armstrong’s mind that have access to the information about consciousness are barred from speaking, not to mention voting, it is no wonder that the Parliament of Armstrong’s Mind votes to deny.  Notwithstanding the fact (that is, the assumption) that they are right, it is the realists (that is, the nonconscious brains of the realists) who are tying themselves in intellectual knots, isn’t it?

Sunday, April 27, 2014

Baby You Can Drive My Car, Part 3: The Nonconscious Brain

Thinking about epiphenomenalism can be very frustrating.  Until very recently, I would often slip into the linguistic pattern of referring to the epiphenomenal brain as “unconscious”.  But Chalmers would surely object to such a usage.  Zombies are unconscious.  Zombie brains are unconscious.  But our brains are not unconscious, because they have conscious experiences associated with them.

Chalmers would probably want to say that the epiphenomenal brain is conscious.  Metaphysically speaking, Chalmers might argue, the conscious experiences might not just be associated with the brain, but might be properties of the brain itself.  Conscious experiences might be phenomenal qualities of the functional organization of, or the information encoded in, the brain.  Metaphysically speaking, the brain might be an ontological object with a physical aspect and a phenomenal aspect.

In my opinion, to refer to the physical brain as conscious would be grossly misleading, and would only add to the confusion.  As in the psychons argument (see http://mccomplete.blogspot.co.il/2014/03/psychons-and-intentyons-part-1-chalmers.html), we can subtract the phenomenal properties from Chalmers’ brain and arrive at Chalmers’ physical brain.  The physical brain is self-contained and self-sufficient, and it has no access to the phenomenal properties.

Given this vexing situation, I hope that Chalmers would agree to call the physical brain nonconscious.

If we can agree on that, maybe we can agree that the epiphenomenal mind is a nonconscious epistemological agent.  Epistemological agents take inputs (assumptions) and return outputs (conclusions).  The epistemological functions of the mind are exhausted by the nonconscious brain.  The nonconscious brain is a self-sufficient epistemological agent, and any conscious experiences that may be associated (in some sense) with the brain add nothing to its epistemological agency.

The epiphenomenal mind is an epistemological subject, but not an epistemological agent.  The epiphenomenal mind is a consumer of beliefs, but not a producer of beliefs.  The conscious mind will believe a proposition if and only if the nonconscious brain has generated an isomorphic judgment.

Thursday, April 10, 2014

Is David Chalmers an Epiphenomenalist?


On page 158, Chalmers writes, “I do not describe my view as epiphenomenalism.”  His preferred terms are “natural supervenience” and “explanatory irrelevance”.  If Chalmers doesn’t describe his view as epiphenomenalism, why do I describe it as epiphenomenalism?

Chalmers continues: “The question of the causal relevance of experience remains open, and a more detailed theory of both causation and experience will be required before the issue can be settled.  But [my] view implies at least a weak form of epiphenomenalism, and it may end up leading to a stronger sort.”

A few pages earlier, Chalmers writes: “It remains the case that natural supervenience feels epiphenomenalistic.  We might say that the view is epiphenomenalistic to a first approximation: if it allows some causal relevance for experience, it does so in a subtle way. I think we can capture this first-approximation sense by noting that the view makes experience explanatorily irrelevant. We can give explanations of behavior in purely physical or computational terms, terms that neither involve nor imply phenomenology.” (Page 154)

Chalmers is saying something like: “If you want to call me an epiphenomenalist, go right ahead.”  To categorize Chalmers’ theory as a variety of epiphenomenalism is not essential to my argument, but it is a greatly simplifying move, and it makes my flow chart work.  So, for the rest of the essay, I will accept Chalmers’ “first approximation”, and simply assume that Chalmers is an epiphenomenalist.

Wednesday, April 9, 2014

Baby You Can Drive My Car, Part 2: Lennon and McCartney

Most Beatles songs are attributed to “Lennon and McCartney”, but it is well known that many of these songs were not really collaborations.  In fact, some of the songs attributed to “Lennon and McCartney” were written by John Lennon alone, and some of them were written by Paul McCartney alone, and usually you can find out which one by looking up the song on Wikipedia.

Let’s assume that the song “Baby You Can Drive My Car” was written entirely by Paul McCartney, and John Lennon made no contributions whatsoever.  (Actually, Lennon did contribute some lyrics to “Baby You Can Drive My Car”.)

Given that assumption, would it be true to say that “Baby You Can Drive My Car” was written by Lennon and McCartney?  The statement does have some truth to it: after all, if you add up all of the contributions from John Lennon and all the contributions from Paul McCartney, you get the song.  But usually, we would assume that the attribution to “Lennon and McCartney” is a convention, a fiction.  You might as well say that “Baby You Can Drive My Car” was written by Starr and McCartney, or by Stalin and McCartney.  The true story can be found in Wikipedia: the song was written by Paul McCartney.

Similarly, if epiphenomenalism is true, the book “The Conscious Mind” was not written by the conscious mind of David Chalmers; it was written by the physical brain of David Chalmers.  David Chalmers’ physical brain, according to epiphenomenalism, is self-contained and self-sufficient.  It takes no input from Chalmers’ conscious mind, it needs no help from Chalmers’ conscious mind, and it has no access to Chalmers’ conscious mind.  Chalmers’ conscious mind is no better than a reader of “The Conscious Mind”, certainly not a co-author.

“The Conscious Mind” was written by Chalmers’ physical brain in two important senses.  First, Chalmers’ physical brain is a natural language engine that generated the English sentences that comprise “The Conscious Mind”.  Even more important, however, is that Chalmers’ physical brain is an epistemological engine -- an epistemological agent -- that arrived at the conclusions presented in “The Conscious Mind”.

Monday, April 7, 2014

Baby You Can Drive My Car, Part 1: Whatever Doesn't Kill You

Epiphenomenalism comes with a standard objection, one that is straightforward and almost obvious.  If consciousness really has no effect on the physical world, how can we talk and write about consciousness?  Chalmers refers to this objection as “the paradox of phenomenal judgment”.  He writes:

It is one thing to accept that consciousness is irrelevant to explaining how I walk around the room; it is another to accept that it is irrelevant to explaining why I talk about consciousness. One would surely be inclined to think that the fact that I am conscious will be part of the explanation of why I say that I am conscious, or why I judge that I am conscious; and yet it seems that this is not so...If phenomenal judgments arise for reasons independent of consciousness itself, does this not mean that they are unjustified?

(Page 180)

Chalmers responds by conceding that phenomenal judgments arise for reasons independent of phenomenology itself, but argues that this is not a sufficient reason to reject epiphenomenalism.  He writes: “Epiphenomenalism may be counterintuitive, but it is not obviously false, so if a sound argument forces it on us, we should accept it.”  (Page 157)  And later: “This paradoxical situation is at once delightful and disturbing.  It is not obviously fatal to the nonreductive position, but it is at least something to come to grips with...If we can cope with this paradox, we may be led to valuable insights about the relationship between consciousness and cognition.” (Page 178)

If consciousness did not exist, Chalmers argues, people might still think that it exists.  The same factors that would lead to such a delusion lead, in our case, to the correct conclusion, that we are conscious.  He writes:

To get some feel for the situation, imagine that we have created computational intelligence in the form of an autonomous agent that perceives its environment and has the capacity to reflect rationally on what it perceives...Would it have any concept of consciousness, or any related notions? ...If such a system were reflective, it might start wondering how it is that things look red, and why it is that red just is a particular way, and blue another.

(Page 183)

The robot that Chalmers describes might think that it is conscious even if it is actually not conscious.  An analogous process, he argues, is going on in our brains (except for the brains of the eliminativists, of course; they come to the opposite conclusion).  The fact that (some of) our brains get the right answer here is a kind of coincidence, but sometimes coincidences do happen.  Chalmers writes:

Nietzsche said, “What does not kill us, makes us stronger.” If we can cope with this paradox, we may be led to valuable insights about the relationship between consciousness and cognition.

(Page 179)

In my next few blog posts, I intend to kill David Chalmers.  I mean, I intend to kill Chalmers’ theory.