Can Calculators Add? Can Brains Add?


Suppose you had a rubbish calculator that could only deal with numbers below 100. Assuming that, apart from that limitation, it behaved as calculators normally behave, you’d naturally interpret it as performing the addition function when you type in ‘1+1’ and get the answer ‘2’. But is it? Consider the following mathematical function, which, following the philosopher Saul Kripke, I will call ‘quus’:

QUUS: The quus function is just like the plus function when both numbers inputted are below 100, but when either of the numbers inputted is 100 or greater, the output of the function is 5.
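In code, the contrast is easy to state. Here is a minimal Python sketch (the handling of an input of exactly 100 follows the reading on which reaching the threshold already triggers the deviant output):

```python
def plus(x, y):
    return x + y

def quus(x, y):
    # Kripke-style quus: agrees with plus while both inputs stay below 100,
    # but outputs 5 as soon as either input reaches the threshold.
    if x < 100 and y < 100:
        return x + y
    return 5
```

On every pair of inputs the rubbish calculator can accept, `plus` and `quus` return the same answer, which is exactly why the calculator's behaviour cannot settle which of the two it computes.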

Given that the rubbish calculator can’t deal with numbers above 100, there’s no fact of the matter as to whether it’s performing the plus function or the quus function. So we can’t really say that the calculator is adding. Rather, it’s indeterminate whether it’s adding or quadding.

Okay, but we can still say that normal calculators add, right? I’m afraid not. Once we’ve conceded the above, essentially the same point is going to apply to all calculators. That’s because, for any calculator, there’s going to be some number N that is too huge for that calculator to deal with. And so we can just define the quus function in terms of N, yielding essentially the same problem:

QUUS*: The quus* function is just like the plus function when both numbers inputted are below N, but when either of the numbers inputted is N or greater, the output of the function is 5.

For the calculator in question, given that it can’t deal with numbers bigger than N, there’s no fact of the matter as to whether it’s performing the plus function or the quus* function. We cannot determinately say that the calculator is adding rather than quadding*!
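The same observational equivalence can be sketched for an arbitrary bound. In the toy Python below, `N` stands in for whatever limit a given calculator has; the value `10**10` is purely illustrative:

```python
N = 10**10  # illustrative stand-in for a given calculator's limit

def calculator_add(x, y):
    # The calculator simply errors out on inputs it cannot handle.
    if x >= N or y >= N:
        raise OverflowError("number too large for this calculator")
    return x + y

def quus_star(x, y):
    # quus*: like plus below the bound N, but 5 once either input reaches N.
    if x < N and y < N:
        return x + y
    return 5

# On every input the calculator accepts, plus and quus* agree, so its
# input-output behaviour cannot distinguish the two hypotheses.
for x, y in [(1, 1), (99, 99), (12345, 678910)]:
    assert calculator_add(x, y) == (x + y) == quus_star(x, y)
```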

Does any of this matter? As long as we get the right answer when we’re doing our accounts, who cares about the deep metaphysics? I don’t think the above would be of deep interest if it was just a problem for calculators. Things start hotting up when we ask whether the same problem applies to us. Can our brains add?

You might see where this is going. There is going to be some number so huge that my brain can’t deal with it, and if we define the quus function in terms of that number, we’ll reach the conclusion that there’s no fact of the matter as to whether my brain is performing the plus function or the quus function.

The trouble is there certainly is a fact of the matter as to whether I perform the plus function or the quus function. When I do mathematics, it is determinately the case that I’m adding rather than quadding. It follows that my mathematical thought cannot be reduced to the physical functioning of my brain. We could put the argument as follows:

  1. If my mathematical thought was reducible to the physical functioning of my brain, then there would be no fact of the matter as to whether or not I performed the plus function or the quus function when I do maths.
  2. There is a fact of the matter as to whether or not I perform the plus function or the quus function when I do maths.
  3. Therefore, my mathematical thought is not reducible to the physical functioning of my brain.

The problem is that conscious thought, e.g. about mathematics, has a specificity that finite physical mechanisms cannot deliver.

This is just one of the deep problems raised by what philosophers call cognitive phenomenology: the distinctive kinds of experience involved in thought. It’s broadly agreed that there is a ‘hard problem’ of consciousness. But when people think about consciousness, they tend to think about sensory consciousness, things like colours, sounds, smells and tastes. But consciousness also incorporates conceptual thought and understanding, and these forms of experience raise distinctive philosophical challenges of their own. I believe that we’re not even at first base in appreciating, never mind addressing, the challenges raised by cognitive consciousness. If you thought it was hard to explain the feeling of pain, you ain’t seen nothing yet!

Happy Christmas!

The Author

I am a philosopher and consciousness researcher at Durham University, UK. My research focuses on how to integrate consciousness into our scientific worldview.

26 Comments

  1. Alex Popescu says

    Hey Philip,

    Even if we accept the semantic indeterminacy thesis, I’m not sure that’s enough to prove premise 1. Yes, there’s a problem of underdetermination in assigning any semantic interpretation of a function to a fixed set of outputs by a calculator or brain, but we’ve got more evidence than just that fixed set of outputs. For instance, we don’t just believe a calculator is adding things, as opposed to quadding, because we’ve observed the numbers that pop up on the screen. We (meaning engineers) can also have a deeper understanding of the mechanistic structure of the calculator itself. If we understand how the mechanism works, then we can see that a calculator can’t be doing the quadding function when we press the ‘+’ symbol.

    Naturally, it might be objected that we might not really know the true mechanism the calculator employs, and this objection can in turn be pushed back into skepticism about our ability to know the laws of physics that underpin the calculator’s behavior. We can always push for skepticism of that kind, but at that point our objections become epistemological in nature. There is still going to be a fact of the matter as to which function the calculator implements (assuming there are laws of physics), even if we may not know it. Of course, the brain is a different matter; however, the difference in complexity between a brain and a calculator is again only an issue for our practical knowledge.

    In other words, because we can keep pushing back the reasons for our thinking that the calculator is doing addition instead of quadding, the only way you can make the objection stick is to adopt radical skepticism, such that there would be no good reason even to think that there are set rules that govern our local universe. But that’s too heavy a price to pay (I feel) in order to make your kind of argument.

  2. Hi Philip,

    > The trouble is there certainly is a fact of the matter as to whether I perform the plus function or the quus function

    Why?

    All we can say is that both you and the calculator are implementing some function equivalent to addition for small numbers and undefined for large numbers. We can describe it as implementing either addition or quaddition or neither or both at the same time. These are just different ways of talking about it. From my point of view there is no fact of the matter as to which function you are really implementing.

    The only difference between you and the calculator is that you represent to yourself a generalised abstract function of addition, and you believe yourself to be implementing it. We do not need calculators to have such abstract models or beliefs, so they don’t. But a functional account of such things is possible, and so is building a system that could describe and appear to understand how addition works in the abstract as well as demonstrating it for small numbers just as you do. I think we have just as much reason to say this system is adding as that you are.

    • Alex Popescu says

      Hey DM,

      I just wanted to say that I’m glad to see you’re still alive and thinking philosophy. I miss our previous conversations, and I hope you are well and that everything went well with the move.

      I know yours was a question for Philip, but since I already responded, I might as well. I’m pretty sure when he speaks of internally implementing a function, he means nothing more than “you represent to yourself a generalised abstract function of addition”. Which you yourself concede is true.

      So, the point is that we have a fixed internal conception of a function (supposedly) and there is a fact of the matter as to what function we are thinking about. But when we externally implement this function by actually adding out the numbers in the real world, the final output (as you rightly pointed out) is underdetermined.

      • Cheers Alex!

        Still haven’t moved, unfortunately. Got delayed by a number of things. First, waiting for a chance to get vaccinated (NZ was a bit behind on this), then the Auckland lockdown amongst other things.

        Hoping to move in January now, unless further Omicron developments mess up plans.

        > Which you yourself concede is true

        I don’t regard it as a concession. I don’t think this is a problem for my view or a reason to think that humans really implement addition and calculators don’t. It’s just that humans just believe they are implementing addition and calculators don’t have any such beliefs.

        > and there is a fact of the matter as to what function we are thinking about

        To a point. I could get into subtleties here about whether the objects of human beliefs are ever completely determined — quite possibly they are not, but to a first approximation, sure. My point is that calculators are not thinking about functions at all, and that this is the salient difference between calculators and us, not that we are really implementing addition and calculators are not.

        > But when we externally implement this function by actually adding out the numbers in the real world, the final output (as you rightly pointed out) is underdetermined.

        I don’t think this is what you mean. If you’re saying the output is underdetermined, this literally means that we can’t tell what the output will be, but this is not the case. I think you might mean that the interpretation of what function was carried out is underdetermined. But I think this is also true for humans. It’s just that humans may also have beliefs about what calculation they just carried out. They have beliefs about what the function was supposed to be. But you could say something similar about calculators. It’s just that the intentions here are in the minds of the calculator’s designers and users. Calculators themselves don’t have such intentions, but I can imagine that a more advanced algorithmic mathematician might.

    • Alex Popescu says

      Hi DM,

      Wow, I didn’t realize New Zealand was so behind in terms of vaccination! Unless you meant you were waiting for your booster shot. Stay safe out there.

      “I don’t think this is what you mean… I think you might mean that the interpretation of what function was carried out is underdetermined”

      Yes, that’s what I meant. I meant to say “underdetermines” and not “is underdetermined”.

      “My point is that calculators are not thinking about functions at all, and that this is the salient difference between calculators and us, not that we are really implementing addition and calculators are not.”

      I agree with this; I just view Philip’s argument as being about the impossibility of reducing our inner thoughts and beliefs to our external implementations of mathematical functions. So, when I read Philip as talking about people performing mathematical functions in their heads, I read him as talking about our beliefs and mental processes. This is also the standard non-reductionist argument that is put forward by philosophers of mind in the field.

      Remember, Philip is talking about conscious thought. I quote, “The problem is that conscious thought, e.g. about mathematics, has a specificity that finite physical mechanisms cannot deliver.”

      I think perhaps you are intrinsically defining human thought as being some external process, something that the brain is engaged in, in other words. But that’s not the argument that Philip et al. are making.

      So once again to clarify,
      1) There is a fact of the matter as to what mathematical function I’m thinking about/holding a belief about when I’m implementing some internal mental arithmetical operation.
      2) There is no fact of the matter as to what mathematical function my brain or body is implementing when it’s engaged in some behavioral activity.
      C) The mind is not reducible to the brain/body.

      Now of course it’s possible to push back on 1, as it may not be the case that our inner mental representations are truly fixed (as you yourself point out). And I’m sympathetic to those lines of argument, but that seems to be something that is different from what you’re saying.

      • NZ is not as behind as all that — we were vaccinated in July. But then shortly after this Auckland went into lockdown, then my wife was too busy with work stuff to move, etc. We’re now going to get our boosters before we leave, I think.

        > Remember, Philip is talking about conscious thought. I quote, “The problem is that conscious thought, e.g. about mathematics, has a specificity that finite physical mechanisms cannot deliver.”

        OK, so my issue with this is that calculators are not engaged in anything like conscious thought. Calculators don’t have beliefs about what they are doing. Any system which functionally mimics conscious thought, e.g. a detailed computer simulation of a human engaged in mental calculation, would in my view have as much specificity as an actual human does because it would have beliefs about what it is doing.

    • Alex Popescu says

      DM,

      “NZ is not as behind as all that — we were vaccinated in July. But then shortly after this Auckland went into lockdown, then my wife was too busy with work stuff to move, etc.”

      I see. Hopefully we are now in agreement that when Philip says, “The trouble is there certainly is a fact of the matter as to whether I perform the plus function or the quus function” (this was the premise you were questioning), he just means conscious thought and beliefs.

      “Any system which functionally mimics conscious thought, e.g. a detailed computer simulation of a human engaged in mental calculation, would in my view have as much specificity as an actual human does because it would have beliefs about what it is doing”

      It’s not clear to me why you think this is a refutation of the above, or for premise 1 of my updated argument. In your example, both premise 1 (its beliefs would have specificity) and premise 2 (any interpretation of its actions would be underdetermined) would remain true for the hypothetical computer simulation. The truth of functionalism is irrelevant; the argument goes through regardless.

      Unless you mean to say that the computer simulation has beliefs but isn’t itself conscious. But as I pointed out, we are defining beliefs and thoughts at the conscious level. In any case, Philip and co. are committed to the specificity of conscious beliefs, not unconscious beliefs.

      • Hi Alex,

        Most of what I’ve written has been to explain my point regarding Philip’s article. I haven’t really engaged with your argument so far. I see it as distinct.

        I’m not sure I’m willing to concede that there really is a fact of the matter about what function a human is performing. I’m making a more subtle point.

        I’m saying that if we consider an electronic system X which seems to have beliefs about what it is doing the same way a human does, then claims about what function X is really implementing are of the same status as claims about what function a human is really implementing. If there is a fact of the matter for a human, so for X. If there isn’t, then likewise so for X.

        On whether there actually is an objective fact of the matter on which all rational observers are obliged to agree, my view is basically that there isn’t. But I’m not relying on this in my criticism of Philip’s article.

        > It’s not clear to me why you think this is a refutation of the above, or for premise 1 of my updated argument.

        It undermines Philip’s point because a calculator is not analogous to a human. We can’t draw conclusions about what is possible on functionalism by considering only a simple system like a calculator.

        On your argument, I would

        * question premise 1
        * ask how you get the conclusion from the two premises
        * agree with your conclusion regardless because I personally believe that the mind is not quite the same as the brain. I think the mind is an abstract mathematical structure corresponding to whatever a causal/neural model of the brain would look like if it were sufficiently detailed to produce similar behaviour to the brain.

    • Alex Popescu says

      DM,

      “Most of what I’ve written has been to explain my point regarding Philip’s article. I haven’t really engaged with your argument so far. I see it as distinct.”

      They are the same argument. The only difference is that Philip’s argument has a conditional premise (his premise 1) which is an implicit premise in my argument, and also my argument’s premise 2 is an implicit premise in Philip’s argument. But Philip’s premise 2 is qualitatively identical to my premise 1. When he speaks of “I”, he means his conscious mind.

      “It undermines Philip’s point because a calculator is not analogous to a human. We can’t draw conclusions about what is possible on functionalism by considering only a simple system like a calculator.”

      So again, given the above, none of this serves as a defeater for Philip’s argument.

      The point about the calculator is just to serve as an analogy. It’s meant to be a simple stand-in for the human brain. The conclusions we reach about the calculator are meant to be the same ones we reach about the human brain (or any physical functional system). We are supposed to conclude that all of the above are systems that output data which underdetermines the final result (at the external physical level).

      At the internal level (if functional system X is conscious), both system X and the human mind have a specificity associated with their thought. This is equivalent to what you are saying here, “then claims about what function X is really implementing are of the same status as claims about what function a human is really implementing.”

      • Ok, so without the calculator analogy, then I would ask why should I accept that there is a fact about what function you are implementing and why this would not be reproduced by an electronic functional duplicate of your brain.

    • Alex Popescu says

      “Ok, so without the calculator analogy, then I would ask why should I accept that there is a fact about what function you are implementing and why this would not be reproduced by an electronic functional duplicate of your brain.”

      I don’t think it helpful to talk of what ‘you’ or ‘I’ are doing in this context, because there might be some equivocation going on here (i.e. are you referring to your brain, or mind, or both?).

      I also hope it’s evident by now that, on the argument, the outputs of both an electronic functional replica of my brain and my real brain underdetermine the function. There’s no fact of the matter in either case. Assuming this point is clear (I’ve made it many times), allow me to rephrase your question:

      ‘Why is there a fact of the matter as to what function both my mind and the conscious mind of an electronic functional replica of me implement (assuming it’s conscious), but no such fact of the matter regarding what my brain or the electronic brain implements?’

      The answer is because of this, (quoting you) “you represent to yourself a generalised abstract function of addition, and you believe yourself to be implementing it” and that furthermore “I could get into subtleties here about whether the objects of human beliefs are ever completely determined — quite possibly they are not, but to a first approximation, sure.”

      We are assuming and have been assuming all along for the sake of the argument, that the objects of my thoughts are fixed.

      • Hi Alex,

        When I say “I” or “you” you can take me to be referring to the mind, because that’s what I identify with.

        Philip’s argument seems to me to be against functionalism. But you don’t seem to be particularly against functionalism, because you are willing to assume that the electronic functional equivalent might have a mind. Whether it has a mind is what I take to be the question.

        I’m with you that the mind and the brain are not identical, because I think the mind is an abstract structure and the brain is a physical object which can be interpreted to be implementing that structure. My point re Philip’s article is really just that the calculator is not a good analogy, which you seem to accept.

        So I’m not clear where we disagree.

    • Alex Popescu says

      Hey DM,

      “Philip’s argument seems to me to be against functionalism. But you don’t seem to be particularly against functionalism,”

      That’s not how I read Philip. As I said, I take Philip to be basically making the same argument as I am making. His reasoning is compatible with functionalism being true, but also works in case functionalism is false. Can you maybe elaborate on why you think he’s arguing against functionalism?

      • Hi Alex,

        Part of it is that I know that Philip is not a functionalist and doesn’t think that functionalism can account for subjective experience. I read this as an argument illustrating why.

        Although in the article Philip does conclude that the mathematical functioning of his mind cannot be reduced to the physical functioning of his brain, so I agree that your reading is reasonable. Perhaps my reading is wrong.

      • From Philip on Twitter:

        Me: “Minor disagreement between Alex Popescu and I on how to interpret your calculator argument. Would you see it more as an argument against functionalism or against physicalism or both? It’s presented as an argument against “physical functionalism” so not clear.”

        “Good question! I suppose it’s an argument against functionalism about thought…but then it’s hard to see how a physicalist might account for thought without some form of functionalism in the mix”

    • Alex Popescu says

      Hey DM,

      Thanks for that. I find it interesting that Goff takes his argument to be against functionalism. I’m guessing that he has a particular variant of functionalism in mind, identity functionalism (i.e. physical functional states ARE conscious states). But there are other forms, for example we can believe in a type of functionalism where conscious states are not identified as being functional states, but rather are distinct entities that supervene on those functional states. I don’t see how Philip’s argument could have shown that that type of functionalism is false, so I agree with you there.

      Since you claim that you reject physicalism but accept functionalism (I checked out the twitter thread), I’m guessing you had that variety of functionalism in mind and not the identity version. If that’s true, then I would hazard a guess that you and Philip are just talking about different things, and that Philip wasn’t really trying to argue against your specific position.

      Merry Christmas and happy holidays!

      Best,

      Alex

      • Hi Alex,

        I think I’d be inclined to agree that it’s incorrect to say that physical states are conscious states, but I think I would say that conscious states just are abstract functional states.

        Merry Christmas!

  3. I feel you have been rather hasty in limiting your comparison to calculators. Among Turing machines, there are those for which it is a fact that they calculate the plus function, and not the quus function (or any variant, for any value of the threshold.) Your calculator is indeed rubbish – for refuting physicalism!

    • What could possibly ground this fact? For any system, there is going to be some number it cannot compute, and we can just define the quus function in terms of that.

      • araybold says

        For any given human, there are numbers they will never be able to count to or compute with. We can reasonably assume this is so for 10^80 (10 raised to the power 80), for example – approximately the number of protons in the visible universe.

        Perhaps you are thinking that an abstraction of a human, freed from its physical limitations by being granted an unbounded lifetime and supply of paper and pencils, could count to or compute with any number? The thing is, though, that Turing machines are an example of the same abstraction with respect to calculating machines. Of course this abstract human will outperform any physical realization of a calculator, just as it will outperform any physical human, but this is completely compatible with physicalism.

        Perhaps you are thinking that notations such as the exponential one I used above allow physical humans to calculate up to and beyond 10^80? Indeed, to some extent, but they cannot, in general, add two arbitrary numbers of that magnitude, any more than a calculator could. No human will ever calculate the sum of 10^-80 and the number given by the first 10^80 decimal digits of pi.

      • araybold says

        In my previous replies, I made the mistake of not carefully reading what you wrote.  I hope to correct that now, not only with respect to this article, but also the paper of which it appears to be a summary:  ‘Does Mary know I experience Plus rather than Quus?’

        We agree that for both any given human and also for any given calculator, there are mathematical functions she or it, respectively, will be unable to calculate, and I have no problem extending that to finite Turing-equivalent devices.

        You say that it is indeterminate whether a calculator calculates a particular function, as opposed to another function that yields the same result in every case that the calculator is capable of performing, but which differs somewhere outside of this domain. I am not sure you have accounted for all the ways in which such a determination might be made, but as I have other doubts about the overall argument, I will put this concern aside for now [1].

        It is different for a person, you say: when she performs an operation that yields the sum of two numbers, it is determinate whether she is performing plus, rather than some quus-like calculation, even though there are quus-like functions in her case. In this post, you do not explain how we know this; in the paper, you say it is because she experiences ‘plus’ as plus, and I must admit that I am not experiencing that as clarifying anything.

        For one thing, it seems to me that this leads to Gettier-style questions of whether or not her idea of ‘plus’ is what we mean by it – perhaps she has somehow come to think that ‘plus’ means what we understand to be one of the quus* functions? (Is that the sort of issue footnote 15 is referring to?)

        I will leave that hanging, as I am willing to accept that she differs from your calculator in that she is doing the operation for a purpose, and intends to calculate plus and not something else; somewhere in her mind is the concept that she is adding two numbers. Furthermore, she knows this is what she wants to do, and believes that this is what she is doing. The closest your calculator comes to this is that its designer probably intended it to calculate plus and not quus.

        Physicalism, however, neither entails nor adopts the premise that a larger version of your calculator could be conscious. Even the proponents of Strong AI require a suitably-programmed Turing-equivalent machine, which is strictly and vastly more powerful than any mere calculator, while other physicalists suppose something yet more powerful would be required (leveraging some as-yet unidentified quantum effect, in Penrose’s case.) I am not seeing anything here ruling out the possibility of these machines determinately calculating plus in the same sense as a human does.

        One thing that clearly does not make that case is the existence of quus-like functions for such devices. As you say, they exist for humans too, so not even primitive calculators can be differentiated from humans on this basis.

        As far as I can discern, the premise that no machine could experience ‘plus’ as plus and not quus is an unargued-for premise, and a tacit one in this article. When you write “When I do mathematics, it is determinately the case that I’m adding rather than quadding. It follows that my mathematical thought cannot be reduced to the physical functioning of my brain”, that “It follows” is standing in for a lot that has not been said; it looks like a leap of faith rather than a logical conclusion. 

        It took me a while to see how this article so smoothly glosses over this omission. It starts in the argument for there being no fact of the matter for calculators: as not even physicalists propose a mere calculator could be conscious, this goes through without raising the feeling that it is incomplete – and it is not, so far as it goes. Therefore, when you say it is determinate for humans – again, something we can easily accept – it is easy to feel that a line has been drawn between all machines and humans. Even if we wonder about Turing-equivalent machines, it is easy to think – too hastily – that, as they also have quus-like functions, they are, for the purpose of this discussion, the same as calculators, overlooking that this could also be said of humans. The structure of the argument guides one into drawing the line between humans and machines – especially if one’s intuition is that this is where the line lies – while the argument can actually only draw a line between the conscious and the non-conscious.

        When I looked into how you set out the argument in the paper, it seems, in ‘Mary Strikes Back’, that you are explicitly claiming the plus/quus (rule-following) issue rules out Mary determining what Cuthbert makes of ‘plus’ from the physical facts about what’s going on in his head – so here, too, we seem to have the tacit assumption that whatever it is that makes human calculation determinate, despite their having quus-like functions, it cannot be physically observable. This may not technically amount to begging the question, but it feels close to it, in that we effectively have “machines cannot think like people” as a premise in an argument against physicalism.

        On the very last page, you first mention psycho-physical laws, where you write “Therefore, physicalists must try to explain how the physical facts determine semantic phenomenology without appeal to psycho-physical laws of nature which obtain in some worlds but not others. But without such an appeal, it is hard to see how we can give an internalist physicalist account of why semantic phenomenology has, as it generally does have, straight rather than quus-like content.” Am I right in thinking that this reference to contingent psycho-physical laws is an invocation of the zombie argument, and its contention that no matter how much evidence we could accumulate that, in the actual world, minds are explicable as physical phenomena, there are possible worlds where this is not the case? If so, then your concluding argument appears to be preaching to the choir, and has little to say to me or Sean Carroll, let alone the approximately 50% of western academic philosophers who apparently do not accept that the zombie argument succeeds.

        —-

        [1] We could examine the calculator’s circuitry and determine whether or not it compares the operands to 100, and if it does not, we can say that it is not performing the calculation of quus, as the comparison is a necessary part of that computation. You might reply that nevertheless, its input-output mapping will be as much a partial quus function as it is a partial plus function, but that applies equally in the human case. There is more to a calculation than the function it calculates.
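        The footnote’s point can be illustrated with a deliberately simplified toy contrast in Python (real calculator circuitry is of course nothing like this; the names and the threshold are illustrative): two mechanisms with identical input-output behaviour on the restricted domain, only one of which contains the operand comparison that a quus computation requires.

```python
THRESHOLD = 100

def adder_mechanism(x, y):
    # A 'plus-style' mechanism: no comparison against any threshold appears.
    return x + y

def quus_mechanism(x, y):
    # A 'quus-style' mechanism: explicitly compares its operands to the threshold.
    if x >= THRESHOLD or y >= THRESHOLD:
        return 5
    return x + y

# Restricted to inputs below the threshold, the two mechanisms are behaviourally
# indistinguishable; inspecting their definitions, however, reveals structurally
# different computations.
assert all(adder_mechanism(x, y) == quus_mechanism(x, y)
           for x in range(THRESHOLD) for y in range(THRESHOLD))
assert adder_mechanism(THRESHOLD, 1) != quus_mechanism(THRESHOLD, 1)
```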

