Are Roses Really Red?



I’m grateful to Edward Feser for commenting on my work. We share a common starting point, namely our conviction that the qualities we encounter in experience cannot be fully accounted for in the purely quantitative terms of physical science, and hence that our conventional scientific worldview misses something out. But we disagree on how to rectify this error. I agree with Galileo (ironic, given the title of my book) that the qualities aren’t really out there in the world but exist only in consciousness. So I don’t think we need to account for the redness of the rose any more than we need to account for the Loch Ness monster (neither exist!); but we do need to account for the redness in my experience. Following Russell and Eddington I do this by incorporating the qualities of experience into the intrinsic nature of matter, ultimately leading me to a panpsychist theory of reality. Feser, in contrast, rejects Galileo’s initial move of taking the qualities out of the external world. The redness really is in the rose, the greenness really in the grass, etc., and hence we have a ‘hard problem’ not just about consciousness but also about the qualities in external objects.

A very similar critique was made by Michelle Liu and Alex Moran in this special issue of the Journal of Consciousness Studies on my work, which is coming out later this year as a book called Is Consciousness Everywhere? I reply to Liu and Moran (and all the other contributors) here.

Feser’s view is standardly called ‘direct realism’ or ‘naïve realism’: the view that conscious perception (when it goes right) is essentially a relationship between the mind and the external world. According to direct realism, experiences are not inside the head but are rather ‘world-involving’: the red rose out there in the world is literally inside of (or at least partly constitutes) my visual experience of the red rose. The redness I directly encounter in my experience is not a property of my experience but of the rose itself.

I have a familiar philosophical concern with this view, which arises from thinking about hallucinations. If I’m hallucinating a red rose, then there’s no red rose out there in the world for me to be related to. So when it comes to hallucinations, at least, the experience must be in the head. The direct realist, then, is led to the view that veridical experiences (‘veridical’ is the technical term for experiences that present things as they really are) are radically different kinds of thing from hallucinations: the former are world-involving relationships, the latter are in the head. This view is known in philosophy of perception as ‘disjunctivism.’

Feser’s direct realism thus entails disjunctivism, and I think there’s a pretty good argument against disjunctivism. It was first formulated by my good friend Howard Robinson and further developed by Mike Martin. It goes as follows.

Consider a moment when Sara is veridically seeing a red rose at precisely 2pm. Now let’s imagine an evil genius scientist kidnaps Sara later that day, removes her brain and puts it in a vat, and then fiddles with it so that it’s in the exact same state it was in at 2pm that day. Presumably, Sara’s brain in the vat will now be having an internal experience that makes it seem to her that it’s seeing a red rose (even though the brain doesn’t have any eyes, so isn’t seeing anything). But given that at 2pm Sara’s brain was in the same state, her brain at 2pm must also have generated an internal experience that made it seem to Sara that she was seeing a red rose. Strictly speaking this doesn’t rule out direct realism: at 2pm Sara’s brain might have generated an internal experience (that made it seem to her that she was seeing a red rose) and in addition Sara might also have had a world-involving experience (that also made it seem to her that she was seeing a red rose). But the latter experience seems redundant, given that the former is sufficient to make it seem to Sara that she’s seeing a red rose. This argument persuades me that there aren’t really any world-involving experiences: experiences are all in the head (which is not to deny that experiences inside our heads can put us in contact with reality outside of our heads).

(In fact, Mike Martin, who developed this argument, goes on to reject it, but I think the position he ultimately goes for is incompatible with the kind of anti-materialist view Feser and I agree on…I will write a paper about this at some point…).

Feser also suggests I don’t have good justification for panpsychism, but he doesn’t in fact consider my main argument for panpsychism, which is a simplicity-based argument. I have argued that panpsychism is the most parsimonious theory able to account for both the reality of consciousness and the data of third-person science. All things being equal, we should go for more parsimonious theories. So unless there’s some data we can point to that panpsychism is unable to account for, then panpsychism is the view we ought to go for.

More on Fine-Tuning & the Multiverse


I’ve thought of a simpler way of making the argument from my last post. Suppose Jack is the defendant, and the lawyer for the prosecution shares with the jury that Jack carries a knife around with him. In fact, as the lawyer for the prosecution well knows, Jack carries a butter knife around with him, but the lawyer chooses not to share this detail. Obviously the jury are going to be misled. It’s not that they’ve been told a lie: it’s true that Jack carries a knife around with him. The lawyer has misled the jury by giving them a less ‘filled in’ account of the evidence than is available.

This example (modified from an example due to Paul Draper) reveals a very important principle in probabilistic reasoning, a principle I would define as follows:

The Requirement of Total Evidence (RTE) – Never bypass the evidential implications of specific evidence in order to focus on weaker evidence.

(That ‘Jack goes around with a knife’ is weaker evidence than ‘Jack goes around with a butter knife’ because the latter entails the former but not vice versa).
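In Bayesian terms (my gloss on the example, not Draper’s own formulation): writing E_s for the specific evidence ‘Jack carries a butter knife’ and E_w for the weaker ‘Jack carries a knife’, entailment guarantees that the weaker claim is at least as probable,

$$E_s \models E_w \;\Rightarrow\; P(E_w) \ge P(E_s),$$

but the two can bear very differently on guilt: plausibly P(guilty | E_w) > P(guilty), while P(guilty | E_s) ≈ P(guilty). Inviting the jury to condition on E_w when E_s is available is precisely how they get misled.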

It is this principle that is violated by those inferring a multiverse from the fine-tuning. The evidence that our universe is fine-tuned is striking to us because it raises the probability of something god-ish: the simulation hypothesis, Nagel-style teleological laws, cosmopsychism, etc. (I would not go for the omni-God, as the existence of suffering is powerful evidence against the omni-God). However, our culture tells us that god-ish hypotheses are ridiculous. And so, in general, those scientists who do find something compelling about fine-tuning bypass the god-ish evidential implications of ‘our universe is fine-tuned’ in order to focus on the weaker evidence that ‘a universe is fine-tuned.’ This is in violation of RTE and hence is a fallacious inference.

It’s exactly the same error we find in the classic Inverse Gambler’s Fallacy (IGF) case:

You walk into a casino and see someone roll all sixes with twenty dice. You infer that there must be lots of people playing in the casino tonight, as it’s more likely that someone will make such an incredible roll if there are many players.

The problem in this case is that the evidential implications of the specific evidence that ‘this roll was all sixes’ (such an extraordinary roll raises the probability that the dice are loaded) are bypassed in order to assess the probability of the weaker evidence that ‘someone in the casino rolled all sixes with twenty dice.’
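Put formally (my notation, not anything in the original discussion): let M be ‘there are many players in the casino tonight’ and R be ‘the particular roll I observed came up all sixes’. My stumbling on this roll has nothing to do with how many other people are rolling, so

$$P(R \mid M) = P(R \mid \neg M) = (1/6)^{20},$$

and R therefore gives no support to M. What R does support is the loaded-dice hypothesis, and that is exactly the evidential implication RTE forbids us to bypass.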

It is commonly pointed out that there is a selection effect in the fine-tuning case which isn’t present in the casino case: we couldn’t have observed a universe that wasn’t life-conducive, but we could have observed someone making a terrible roll. In my last post, I sketched the Jane analogy, which has a selection effect but is still fallacious. However, a more direct response is to point out that the presence or absence of a selection effect has no relevance to the explanation of why the casino example is fallacious (namely, that the reasoner bypasses specific evidence to focus on weaker evidence), and hence does nothing to block the multiverse inference being fallacious in the same way (as this inference also involves bypassing specific evidence to focus on weaker evidence).

If the multiverse theorist wants to resist this argument, they not only have to deal with the Jane analogy from my last post, but they also need to explain why the inference to the multiverse is exempt from RTE. I’ve seen some attempts at the former (although none I’m as yet convinced by) but I haven’t seen any attempts at the latter.

[I should credit a tweet from Thomas Metcalf with making me think that maybe we can simply modify our understanding of RTE rather than qualifying it, although I don’t quite agree with his way of doing that, which I hope to talk about in future work. This has been a ‘Eureka’ moment in my thinking on this. In his original article on this, White appealed to RTE, but I was mistakenly thinking it entailed we always have to focus on the strongest evidence we have, which is obviously false, and so I thought White must have got the theoretical underpinnings of his argument wrong (in a postscript to a re-published version of the paper, White gave up on RTE and offered instead a new theoretical justification, which I don’t agree with). But now I see that we can make the argument in terms of RTE, so long as we define it as I do above. Finally, for those following these Twitter discussions, I should also confess that I haven’t yet got around to reading Quentin Ruyant’s second response to me.]

Why the Multiverse Can’t Explain Fine-Tuning


A startling discovery of recent decades is that the laws of physics are fine-tuned for the possibility of life. That is to say, for life to be possible, certain numbers in physics had to fall in a certain narrow range. Some scientists and philosophers try to explain this by postulating an enormous number of universes, exemplifying a huge range of different numbers in their physics, making it statistically likely that at least one will have the right numbers for life by chance. The trouble is that this kind of inference is fallacious; specifically, it commits the inverse gambler’s fallacy.

Here’s the classic example of the Inverse Gambler’s Fallacy (IGF):

You walk into a casino and see someone roll a double six. You infer that there must be lots of people playing in the casino tonight, as it’s more likely that someone will roll a double six if there are many players.

This is a fallacious inference. You’ve only observed one roll, and postulating many other rolls in the casino does not make it any more likely that the roll you observed would be a double six. The challenge for the multiverse theorist is to explain why the inference they make does not commit the same fallacy. We have only observed one universe, and the postulation of many other universes does not make it any more likely that the universe we have observed would be fine-tuned.
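A quick simulation makes the point concrete (an illustrative sketch of my own, not anything from the original post): adding players raises the chance that someone or other rolls a double six, but leaves the chance that the one roll you actually observed is a double six fixed at 1/36.

```python
import random

def double_six():
    # One roll of two fair dice.
    return random.randint(1, 6) == 6 and random.randint(1, 6) == 6

def p_observed_roll(trials=20_000):
    # Estimated probability that the single roll we observe is a double six.
    return sum(double_six() for _ in range(trials)) / trials

def p_someone(n_players, trials=20_000):
    # Estimated probability that at least one of n_players rolls a double six.
    return sum(any(double_six() for _ in range(n_players))
               for _ in range(trials)) / trials

for n in (1, 10, 100):
    print(f"{n:>3} players: observed roll {p_observed_roll():.3f}, "
          f"someone {p_someone(n):.3f}")
# 'observed roll' hovers around 1/36 ≈ 0.028 however many players there are;
# only 'someone' climbs towards 1 as n grows.
```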

The answer standardly given is that the fine-tuning case, in contrast with the classic IGF case, involves a selection effect: we could not have observed a universe that was not fine-tuned, whereas we could have observed someone rolling something other than a double six. This clearly does mark a difference from the classic IGF case. The trouble is that when we introduce an artificial selection effect into IGF cases, the fallacy doesn’t go away. Consider the following case:

Jane was conceived through IVF. One day she discovers that the doctor who performed the IVF had a nervous breakdown around that time, and as a result rolled five dice to determine whether she would fertilise the egg, committing to do so only if five sixes were rolled. The doctor only rolled the dice once, and subsequently got some therapy and never did this again.

There is a selection effect here. Jane could not have discovered that the doctor failed to roll all sixes, because, if the doctor hadn’t rolled all sixes, Jane would not exist. And yet it would be fallacious for Jane to explain the remarkable improbability of her birth with the following hypothesis:

The Many Doctors Hypothesis: Many IVF doctors have been rolling dice to decide whether to fertilise eggs, in most cases failing to get the right numbers to proceed.

This would be another case of the inverse gambler’s fallacy: Jane’s evidence is that her doctor rolled five sixes, and the postulation of many other doctors rolling dice does not make it any more likely that her doctor would roll five sixes. If the multiverse theorist wants to defend the inference from fine-tuning to the multiverse, they need to tell us why their inference is relevantly different from Jane’s clearly fallacious inference.
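For concreteness (my arithmetic, not part of the original thought experiment): the chance of five fair dice all coming up six on a single roll is

$$(1/6)^5 = 1/7776 \approx 0.00013,$$

and that figure is untouched by how many other doctors, in other clinics, happened to be rolling dice for other eggs.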

More Detailed Analysis

Part of what is objectionable about IGF cases, as pointed out by Roger White in his classic article on this, is that they involve setting aside a specific piece of evidence – this universe is fine-tuned – for the sake of a weaker piece of evidence – a universe is fine-tuned. White offers a nice example to illustrate how problematic this can be:

Suppose I’m wondering why I feel sick today, and someone suggests that perhaps Adam got drunk last night. I object that I have no reason to believe this hypothesis since Adam’s drunkenness would not raise the probability of me feeling sick. But, the reply goes, it does raise the probability that someone in the room feels sick, and we know that this is true, since we know that you feel sick, so the fact that someone in the room feels sick is evidence that Adam got drunk. Clearly something is wrong with this reasoning. Perhaps if all I knew (by word of mouth, say) was that someone or other was sick, this would provide some evidence that Adam got drunk. But not when I know specifically that I feel sick. This suggests that in the confirming of hypotheses, we cannot, as a general rule, set aside a specific piece of evidence in favor of a weaker piece (White 2002: 264).

Following Peter Epstein, we can call the general rule White is expressing here ‘the Requirement of Total Evidence,’ or RTE. As Epstein and others have noted, RTE does have exceptions. Take the example of the inference from the existence of complex organisms to the hypothesis of evolution by natural selection. The Darwinian hypothesis does not raise the probability that certain specific animals, e.g. Tony the Tiger, will come to exist, and hence if we take very specific information about which particular animals exist as our evidence, then we will not get evidential support for the Darwinian hypothesis.

I propose that we are permitted to violate RTE if doing so moves us from a fact that isn’t surprising to a fact that is. How do we define when an event is ‘surprising’? Here White adopts Paul Horwich’s account, which he describes as follows:

The crucial feature of surprising events seems to be that they challenge our assumptions about the circumstances in which they occurred. If at first we assume that the monkey is typing randomly, then her typing “nie348n sio 9q” does nothing to challenge this assumption. But when she types “I want a banana” we suspect that this was more than an accident. The difference is that in the second case there is some alternative but not wildly improbable hypothesis concerning the conditions in which the event took place, upon which the event is much more probable. On the assumption that the monkey is typing randomly, it is just as improbable that she types “nie348n sio 9q” as it is that she types “I want a banana.” But that the second sequence is typed is more probable on the hypothesis that it was not merely a coincidence, but that an intelligent agent had something to do with it, either by training the monkey or rigging the typewriter, or something similar. There is no such hypothesis (except an extremely improbable ad hoc one) which raises the probability that the monkey would type the first sequence. Of course by P1, the human intervention hypothesis is confirmed in the case of “I want a banana.” So what makes the event surprising is that it forces us to reconsider our initial assumptions about how the string of letters was produced (of course someone who already believes that the typewriter was rigged should not be surprised). (White 2000: 270)

The existence of intelligent organisms is surprising to a pre-Darwinian atheist, because the existence of complex organisms is much more likely on a design hypothesis than on the chance hypothesis that organisms came about through random interactions of particles. Of course, Darwin gave us an alternative to both chance and design: natural selection.

Whilst the fact that there are complex organisms is surprising, the fact that these specific organisms exist (e.g. Tony the Tiger) is not surprising. And that’s because there is no non ad hoc hypothesis that raises the probability that Tony the Tiger exists (the hypothesis that there is a designer who specifically wanted to create Tony the Tiger would raise the probability that Tony exists, but that’s ad hoc in the way that a designer who wanted a monkey to type “nie348n sio 9q” would be ad hoc). This, I suggest, is why it’s permissible to violate RTE by moving from ‘Tony the Tiger exists’ (conjoined with a long list of all the other particular organisms that exist) to ‘complex organisms exist.’

I propose, then, that we qualify RTE as follows:

RTE*: It is not permissible to set aside a piece of specific evidence in favour of a piece of weaker evidence, unless in doing so one moves from a piece of evidence that is not surprising to a piece of evidence that is surprising.

Turning to the fine-tuning, is it surprising that this universe is fine-tuned? One might think: ‘No, because there’s nothing special about this universe, as opposed to any other possible or actual universe.’ I see how that can seem to make sense at a very intuitive level. But the fact that our universe is fine-tuned is surprising in the sense defined above. This is because when we run the Bayesian fine-tuning argument, everything but the values of the constants is in the background information of the calculation. And so it’s already assumed that this universe exists. Against that background, the evidence that this universe is fine-tuned is more likely on design/teleology than it is on a chance hypothesis. This explains why it is not permissible to violate RTE in the fine-tuning case by moving from this universe is fine-tuned to a universe is fine-tuned: because this universe is fine-tuned is in itself surprising.
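To make the Bayesian structure explicit (a sketch in my own notation; the details of any real fine-tuning calculation are of course contested): let B be background information that includes the existence of this universe, F the evidence that its constants fall in the life-permitting range, D a design/teleology hypothesis, and C the chance hypothesis. The claim is that

$$P(F \mid D, B) \gg P(F \mid C, B),$$

which is precisely what it takes, on the Horwich account quoted above, for F to be surprising: there is an alternative, not wildly improbable hypothesis on which F is much more probable.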

This at any rate is my attempt to formulate a theoretical principle to explain why IGF is fallacious. I may be wrong, and I would welcome potential counterexamples. But the point I am more confident about is that we should expect the correct theoretical principle to rule that the inference from fine-tuning to a multiverse is fallacious, given its similarity to the clearly fallacious Jane case. Or to put it another way, the correct explanation of why IGF is fallacious is unlikely to have anything to do with the presence of a selection effect, given that the presence of a selection effect in the Jane case does not undermine the fallacy.

Response to Quentin Ruyant

Quentin Ruyant has just written an extensive blog post responding to this argument, with lots of very interesting thought experiments. It’s a long piece, and time constraints don’t allow me to answer all of his objections, but I’ll give here my reaction to some of his thought experiments, and then use that to develop in more detail my analysis of the IGF.

[I have made some edits after initial publication, in response to Tweets from Quentin.]

Quentin claims that it would be equally fallacious for Jane to infer that the dice were loaded, which would seem to imply that the IGF applies not only to the inference from fine-tuning to the multiverse but also to inferences from fine-tuning to design or teleology (for those who don’t know, I reject the omni-God hypothesis but think the fine-tuning supports something god-ish, e.g. the simulation hypothesis, or Nagel-style teleological laws). That seems to me wrong. Suppose the doctor rolled the five dice over and over for an hour, determining to fertilise the egg only if all sixes came up every single time. This is wildly improbable, and it would surely be rational for Jane to infer that the dice must have been loaded.

Quentin then tries out a twist on the Jane experiment which is supposed to be analogous to the inference from fine-tuning to the multiverse:

Jane and Tarzan are the last humans on earth. They are the product of IVF. One day they discover that just before the apocalypse, the aliens that were in control of the earth at the time obliged all doctors performing an IVF to roll dice each time to see whether to fertilise the eggs, determining to do so only if they rolled a double six. Doctors can only perform two IVFs in their career. Given that they exist, Jane and Tarzan conclude that many different doctors must have made trials before to succeed.

I would say that Jane and Tarzan’s inference does not commit IGF because it does not violate RTE*. It is not surprising that Jane and Tarzan were conceived, as there is no non ad hoc hypothesis that would raise the probability of Jane and Tarzan in particular being conceived. But it is surprising that some humans were conceived on the hypothesis that there are few doctors. My account thus allows Jane and Tarzan to set aside the specific evidence that Jane and Tarzan exist in favour of the weaker evidence that some humans were conceived.

Here’s another thought experiment Quentin raises:

Jane* is the product of IVF.  She learns that the success of IVF depends on the co-presence of many contingent factors. Any trial has only one chance over a thousand to be successful, and gives rise to a different baby when it succeeds. A doctor can only make an IVF once, but parents can see many doctors. Given that she exists, Jane* concludes that her parents saw many doctors before her conception.

I would say that in this case, Jane* is not committing IGF because she has some observational data that is made more likely by postulating many doctors, namely that her mother got pregnant. This contrasts with the real-world fine-tuning case, in which our observational evidence, namely that this universe is fine-tuned, is not made more likely by the existence of multiple universes.

Quentin expects this response, based on our Twitter discussion, and objects as follows:

Philip argues that in this case, the relevant evidence is not Jane’s existence, but her mother’s pregnancy. There’s something right in this diagnostic: relevance matters. But it cannot be the whole story, because the mother’s pregnancy can be deduced from Jane’s existence, and so, if we can infer many trials from pregnancy, we can also infer many trials from Jane’s existence.

It should now be clear that Quentin’s point that Jane*’s existence entails that her mother was pregnant is beside the point. Jane* doesn’t need to violate RTE* in order to get a piece of evidence that supports a many doctors hypothesis, because she can take as evidence that her mother got pregnant. The Jane of my original thought experiment, by contrast, would need to violate RTE*. The many doctors hypothesis would not raise the probability that Jane’s mother got pregnant, and hence, to try to get evidence for the many doctors hypothesis, Jane would need to set aside the specific evidence that her doctor rolled all sixes to decide whether to perform IVF in favour of the weaker evidence that a doctor rolled all sixes to decide whether to perform IVF. That move would not take her from a piece of evidence that isn’t surprising to a piece of evidence that is surprising, because the evidence that her doctor rolled all sixes is itself surprising: it raises the probability that the dice were loaded. Similarly, the multiverse theorist violates RTE* by setting aside the specific evidence that this universe is fine-tuned in favour of the weaker evidence that a universe is fine-tuned, a move which likewise does not take us from unsurprising to surprising evidence: the evidence that this universe is fine-tuned is in itself surprising, as it raises the probability of teleology/design.

At the end of the day, what Quentin needs to show is that my Jane analogy is relevantly different to the real-world fine-tuning case. I’m afraid I found the discussion here a little bit hard to follow. He claims that what marks the instances of fallacious inference is that ‘the random process we are interested in is causally related to us (its reference is fixed) in a way that is independent of its outcome.’ So, for example, in the classic IGF case, my latching on to the particular roll I observe has nothing to do with whether it is a double six. In contrast, according to Quentin, in the fine-tuning case, ‘…we came to refer to our universe not by some direct acquaintance with the process of selection of physical constants, but by acquaintance with what this universe produced after the constants were selected, and the existence of the selection process is theoretically inferred from its products.’

But how is this different from my Jane case? Just as in the real-world fine-tuning case, Jane is only able to refer to herself and the improbable circumstances of her conception after she comes to exist as a result of the right numbers coming up. Maybe it’s my fault for not understanding the point Quentin was making, but I’m not yet seeing a relevant disanalogy here, and in the absence of that, we ought to conclude that the person who infers a multiverse from fine-tuning commits the inverse gambler’s fallacy.

In a Tweet after initial publication of this post, Quentin clarified that his point is that an instance of IVF needn’t be caused by the rolling of dice, whereas a universe’s having the constants it does had to be caused by the kind of probabilistic processes referred to by multiverse theorists. But the latter claim seems to me false. A universe could exist, having certain constants, without there being any more fundamental explanation of why it has the constants it does.

I know I haven’t responded to all of Quentin’s objections, but I’ve run out of time, and I hope my detailed analysis of IGF will enable readers to work out (if they’re very bored!) how I would respond to the other points he raises.

Can You Prove a Miracle?


I’m currently taking a week off work, having submitted a draft of my book manuscript (‘The Purpose of Existence: Between God and Atheism’). Sometimes when I take time off, I get lost in a rabbit hole or two. On Easter Sunday, I listened to a debate on my favourite Christian vs. Atheist debate podcast on whether you could demonstrate historically that Jesus rose from the dead. I Tweeted brief thoughts about this, and someone tweeted back at me about a recent six-hour debate on this topic between atheist New Testament scholar Bart Ehrman and Christian New Testament scholar Mike Licona. I ended up paying $50 to watch all six hours of it, and would like to share some thoughts about it.

I found the debate incredibly frustrating. There was very little disagreement on the historical facts, apart from whether a miracle occurred. Both accepted that after Jesus died several of his followers had experiences which persuaded them that Jesus had physically risen from the dead (Ehrman thinks there’s good evidence that at least Peter, Paul and Mary Magdalene had such experiences). Rather than disputing the history, they spent a lot of time arguing about whether it is possible for a historian, as a ‘historian’, to argue for a miracle. This seems to me a very silly thing to argue about. We could define the word ‘historian’ however we wish. Surely the interesting question is what we have reason to believe.

Moreover, there was also no mention of our mathematically precise way of understanding how evidential support works, namely Bayes theorem. According to Bayes theorem, the probability of a hypothesis is determined by two things: evidence and prior probability. The ‘prior probability’ is simply how likely the hypothesis is before we take the evidence into account. It seems clear that whether a case can be made for the resurrection is going to hang on the prior probability of the resurrection. Suppose you’re an atheist who thinks the odds of God existing are one in a billion. You’re obviously going to attach an even lower probability than that to God having raised Jesus from the dead. From that starting point, you’re going to need quite extraordinary evidence to get the probability of the resurrection up anywhere reasonable. And even if there is evidence for the resurrection that is fairly strong by the standards of ancient history, it’s clearly not that impressive. For this reason, I don’t think atheists should be worried about conceding to Christian apologists that there is non-negligible evidence for the resurrection. There is strong evidence for all kinds of things that we nonetheless have absolutely no reason to believe, precisely because the prior probability is so low.
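For reference, here is the theorem in its standard form, and in the odds form that makes the tug-of-war between prior and evidence explicit:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \qquad \frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \times \frac{P(H)}{P(\neg H)}.$$

The posterior odds are the Bayes factor (how much better the hypothesis predicts the evidence than its negation does) multiplied by the prior odds.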

Bayes theorem tells us that if a hypothesis renders the evidence less improbable than it would otherwise be, then the evidence supports that hypothesis. Suppose, for example, that Joan’s DNA was found on the body, and that that’s really unlikely unless Joan is the murderer. It follows that we have evidence that Joan is the murderer, as that hypothesis renders the DNA evidence less improbable than it would otherwise be. On the face of it, it is fairly improbable that several people would hallucinate Jesus being back from the dead, especially when one of them (Paul) was a zealous and violent opponent of the Christian movement. Whereas if Jesus really did rise from the dead and appear to people, then it’s not so surprising that those people would have experiences that persuaded them that he was risen from the dead. It’s pretty plausible, then, that the resurrection hypothesis renders the evidence less improbable than it would otherwise be, and that’s all that’s required, according to Bayes theorem, for there to be evidence for the resurrection hypothesis.

But that doesn’t mean, by any stretch of the imagination, that we should believe in the resurrection. Perhaps the evidence for the resurrection is strong enough to get the probability up from one in a trillion to one in a billion; this still ends up not being a hypothesis we should take seriously. Both Ehrman and Licona agreed that a skeptic about the resurrection needs to have some alternative hypothesis to explain the evidence. That’s simply not true. If we’re dealing with an event that ends up, even after the evidence has been taken into account, being very improbable, then it’s quite rational to say ‘I don’t know what happened, but that didn’t happen.’
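To put rough numbers on that (a back-of-envelope illustration using the figures above, not anything from the debate): start with prior odds of one in a trillion, and suppose the evidence is a thousand times more likely if the resurrection happened than if it didn’t. Then

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = 10^{3} \times 10^{-12} = 10^{-9},$$

so the hypothesis is strongly confirmed and yet still sits at roughly one in a billion: evidence for a hypothesis, but no reason to believe it.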

To be fair, Ehrman did press the old Carl Sagan line ‘Extraordinary claims require extraordinary evidence’, arguing that the resurrection is very improbable because it would require violating a law of nature.* But without bringing in Bayesian notions, this is just a rhetorical slogan, and it ended up sounding like Ehrman took naturalism to be unfalsifiable. The only mention of Bayes in the whole six hours was Licona saying that evidence can overcome low prior probabilities. That’s right, in principle. But given that, what this debate should have been about is:

(A) What is the prior probability?

(B) How strong is the evidence?

(C) What probability do we end up with as a function of (A) and (B)?

Because they didn’t engage with the Bayesian framework, this discussion spent six hours getting nowhere.

Improbable things happen. And when they do, we often end up with evidential support for whacky hypotheses. If Christian apologists want to make a case for the resurrection of Jesus, then they need to argue for a worldview in which the resurrection of Jesus should be assigned a non-negligible prior probability.

*Actually, I was more on Licona’s side on the specific point of whether a miracle would violate the laws of nature. That’s because I follow my colleague Nancy Cartwright in conceiving of the laws of physics as ‘ceteris paribus’ laws, which tell us what would happen in the absence of other causal factors. I’m going to be arguing with Sean Carroll about this (again!) on next month’s Mind Chat.

Are There Objective Values?


A lot of my Twitter debating time recently has been spent defending the claim that there are facts about value that hold independent of our desires. The example I always press is that there is an objective distinction between things that are worth doing (e.g. pursuing pleasure, learning, being creative) and things that are pointless (e.g. counting blades of grass, for its own sake, when you don’t enjoy it or get any further good from it). In order to set out my case and debate it a bit, I cheekily invited myself on Kane B’s YouTube channel (he has the opposing view) and we had a really good chat about it: https://www.youtube.com/watch?v=HL2D6Mxkvh8

In hindsight, I think there’s one argument I briefly raised that I should’ve pressed more. I have two kids. Suppose the older one of them grows up to hate philosophy and love football. Now, I hate football and love philosophy. Suppose I really worry about my elder child’s preferences, even though she’s clearly very happy, and I try to talk her round, even suggest therapy to try to align her preferences to mine. I think we’d all think my behaviour would be deeply unreasonable: I’d just be imposing my personal preferences on her.

But now suppose my younger child grows up to have the basic goal of counting how many yellow cars there are in our neighbourhood every day. Suppose this makes her really unhappy (it takes several hours) and often unwell (it’s very tiring, and she does it even when the weather is really bad), but it’s her main freely chosen goal in life, and it’s not among her life-goals to be healthy or happy. Now in this case, if I really worry, try to talk her around, maybe suggest therapy, I think we’d think my behaviour was understandable and reasonable.

Why the difference? In both cases, the child is pursuing a freely chosen goal. The younger child is miserable and unwell, but if there are no objective value facts, then health and happiness are goals just as arbitrary as counting yellow cars. Surely, then, my concern for the younger child should be seen as me imposing my values on her just as much as in the case of the older child? Or at least, that is what we should think if we deny the existence of objective value.

Kane claimed not only that he doesn’t believe that some goals are more worthwhile than others, but that he just doesn’t even feel the intuition. I suggest that someone who differentiates between these two cases in a normal way is implicitly committed to objective values. They are effectively committed to the idea that there’s something objectively problematic about not being concerned with your own health and happiness; it’s this implicit assumption that explains why we think concern and intervention are reasonable with the younger child but not the older. If we totally don’t feel the pull of the intuition that there are objective values, then we should be mystified by the intuition that my treatment of the younger child is somehow ‘more reasonable’. But is anyone really mystified by this?

We could put the argument like this:

  1. Anyone who would judge that concern and intervention are reasonable in the case of the younger child but unreasonable in the case of the older child implicitly believes in objective values.
  2. Almost everyone would judge that concern and intervention are reasonable in the case of the younger child but unreasonable in the case of the older child.
  3. Therefore, almost everyone implicitly believes in objective values.

It’s a further question as to whether this belief is justified (I try to argue that it is in the video), but the case I’m trying to make here is that many people who think they don’t believe in objective values, in fact do.

Genuinely interested in hearing from value anti-realists on what they think about this case.

A Surprise Point of Agreement With Sean Carroll


I’ve had some great philosophical interactions with Sean Carroll, of late. I was on Sean’s podcast a while back, and more recently he kindly contributed to a volume of essays responding to my book Galileo’s Error: Foundations for a New Science of Consciousness (I counter-responded to all of the essays, including Sean’s, here). We then debated this for three hours on the Mind Chat podcast I host with Keith Frankish. Finally, Sean wrote this blog post summarising his reflections on the Mind Chat discussion.

At the end of the post, Sean conceded that, if panpsychism is true, consciousness underlies my behaviour in the same way that the hardware of my computer underlies its behaviour. However, he then went on to make a surprising statement: because of substrate independence, the panpsychist can’t claim that ‘consciousness gets any credit at all for our behavior in the world.’

Why not? I really don’t get where Sean’s coming from here. The term ‘substrate independence’ just means that the same function can be realised by different hardware. It certainly doesn’t mean that hardware doesn’t do anything! If my consciousness underlies my behaviour in the same way the hardware of my laptop underlies Microsoft Word, that’s as much of a causal role for consciousness as anyone could reasonably want.

I’m so glad Sean and I ended on a point of agreement: consciousness does ground behaviour on a panpsychist worldview.

Can Calculators Add? Can Brains Add?


Suppose you had a rubbish calculator that could only deal with numbers below 100. Assuming that, apart from that limitation, it behaved as calculators normally behave, you’d naturally interpret it as performing the addition function when you type in ‘1+1’ and get the answer ‘2’. But is it? Consider the following mathematical function, which, following the philosopher Saul Kripke, I will call ‘quus’:

QUUS: The quus function is just like the plus function when the numbers inputted are below 100, but when any of the numbers inputted is 100 or greater, the output of the function is 5.

Given that the rubbish calculator can’t deal with numbers of 100 or more, there’s no fact of the matter as to whether it’s performing the plus function or the quus function. So we can’t really say that the calculator is adding. Rather, it’s indeterminate whether it’s adding or quadding.
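Here’s the point in code (a minimal sketch, with the threshold taken from the toy definition above; Kripke’s own example uses a different cutoff): on every input the limited calculator can actually handle, plus and quus return identical answers, so nothing in the calculator’s behaviour settles which function it is computing.

```python
LIMIT = 100  # the rubbish calculator only handles numbers below 100

def plus(x, y):
    return x + y

def quus(x, y):
    # Matches plus when both inputs are below 100; otherwise outputs 5.
    if x >= 100 or y >= 100:
        return 5
    return x + y

# Check every computation the limited calculator can actually perform:
assert all(plus(x, y) == quus(x, y)
           for x in range(LIMIT) for y in range(LIMIT))
# The calculator's behaviour is consistent with both functions:
# no behavioural fact distinguishes adding from 'quadding'.
```

The same template works for any finite machine: replace 100 with a bound N beyond the machine’s capacity and the check still passes.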

Okay, but we can still say that normal calculators add, right? I’m afraid not. Once we’ve conceded the above, essentially the same point is going to apply to all calculators. That’s because, for any calculator, there’s going to be some number N that is too huge for that calculator to deal with. And so we can just define the quus function in terms of N, yielding essentially the same problem:

QUUS*: The quus* function is just like the plus function when the numbers inputted are below N, but when any of the numbers inputted is N or greater, the output of the function is 5.

For the calculator in question, given that it can’t deal with numbers that big, there’s no fact of the matter as to whether it’s performing the plus function or the quus* function. We cannot definitely say that the calculator is adding rather than quadding*!

Does any of this matter? As long as we get the right answer when we’re doing our accounts, who cares about the deep metaphysics? I don’t think the above would be of deep interest if it was just a problem for calculators. Things start hotting up when we ask whether the same problem applies to us. Can our brains add?

You might see where this is going. There is going to be some number so huge that my brain can’t deal with it, and if we define the quus function in terms of that number, we’ll reach the conclusion that there’s no fact of the matter as to whether my brain is performing the plus function or the quus function.

The trouble is that there certainly is a fact of the matter as to whether I perform the plus function or the quus function. When I do mathematics, it is determinately the case that I’m adding rather than quadding. It follows that my mathematical thought cannot be reduced to the physical functioning of my brain. We could put the argument as follows:

  1. If my mathematical thought were reducible to the physical functioning of my brain, then there would be no fact of the matter as to whether I perform the plus function or the quus function when I do maths.
  2. There is a fact of the matter as to whether I perform the plus function or the quus function when I do maths.
  3. Therefore, my mathematical thought is not reducible to the physical functioning of my brain.

The problem is that conscious thought, e.g. about mathematics, has a specificity that finite physical mechanisms cannot deliver.

This is just one of the deep problems raised by what philosophers call cognitive phenomenology: the distinctive kinds of experience involved in thought. It’s broadly agreed that there is a ‘hard problem’ of consciousness. But when people think about consciousness, they tend to think about sensory consciousness, things like colours, sounds, smells and tastes. But consciousness also incorporates conceptual thought and understanding, and these forms of experience raise distinctive philosophical challenges of their own. I believe that we’re not even at first base in appreciating, never mind addressing, the challenges raised by cognitive consciousness. If you thought it was hard to explain the feeling of pain, you ain’t seen nothing yet!

Happy Christmas!