Announcing a new newsletter on mechanizing mathematics

I have finally got around to creating a newsletter, tentatively entitled Silicon Reckoner, to be published on Substack. This will be a continuation of the recurring discussion on this blog of the implications of projects to mechanize mathematics, for example in this post or this post.

You can read more about the goals of the newsletter below, in excerpts from the first entry. There will be no additions to the MWA blog (the blog you are now reading) for the foreseeable future. However, at least at the outset I plan not to allow comments on Substack; instead, the comments section of this post will be reserved for discussion of the newsletter. As always, I will decide whether or not to approve comments. This is a form of censorship but the purpose is not to exclude (legitimate) points of view but to keep control of the amount of time I spend on this part of my agenda.

I don’t expect to set up paid subscriptions on Substack, but that may change at some point.

And the disclaimer, to appear in the first newsletter entry:

I will not claim familiarity with any of the formal systems used in the design of automated proof checkers, nor any understanding of the software that implements the actual automatic verification, much less of the details of current or future work on AI, whether or not it is applied to mathematics.  Even when I have a pretty good idea of what is going on with some of these systems, I will fiercely deny any technical understanding whatsoever, because my understanding of the technicalities should never be an issue.

Here, then, is what Silicon Reckoner will be about:

Is artificial intelligence on track to meet the expectations of its investors, who just in 2020 poured $50 billion into the industry?  AI’s record of missed deadlines for predicted milestones is as old as its name.  But literary production on the subject could hardly be more extensive.  Reading all the non-technical books on my local bookstore’s AI shelf would be more than a full-time job, leaving less than no time for my real job, which AI has not yet eliminated.  Even the sub- or parallel discipline of AI ethics now occupies 10 pages of footnotes on the English-language Wikipedia page and 1400 pages published in the last two years by Oxford University Press, on my own bookshelf; practically every day I discover another 100 pages or so.   I have nevertheless forced myself to dip into a representative sample as preparation for an experiment that is beginning to take shape with this text. 

Most of what I’ve read tries to address the question of just how “intelligent” the products of this industry have been up to now, or will be in the near future, or what it would take for actually existing AI to deserve to be called “intelligent,” or whether it would be a good thing, or whether it’s even possible.  None of these is my problem.  Or rather, they are my problem, but only as a citizen of my country, or of borderless civilization, concerned, like everyone else, by what the massive implementation of ostensibly intelligent artificial systems would entail for what matters to me — not least, whether it would make sense for these things to continue to matter to me, or perhaps more accurately whether what matters to me would still matter to anyone or anything else, if the ambitions of AI’s promoters even minimally come to fruition.…

My motivation in undertaking this experiment is to understand the consequences of this way of thinking for my own vocation of pure mathematics, which is marginal to the concerns of most of those at risk of the AI project’s collateral damage but which has been central to the project’s imagination and its aspirations from the very outset. 

It is possible to view the growing interest in automated proof verification and artificial theorem proving, two aspects of a still largely hypothetical AI future of mathematics, as stemming from purely internal factors that govern the profession’s development as it evolves to meet its autonomously defined goals.  The ideal of incontrovertible proof has been bound up with mechanization since it was first articulated, and the logic that ultimately made digital computers possible is a direct outgrowth of the attempt to perfect this ideal in the development of symbolic and philosophical logic in the late 19th and early 20th centuries; it can even be seen as a byproduct of the proof of the absolute impossibility of realizing that ideal.  I don’t find this view plausible, however, given the saturation of our culture with AI themes and memes, a saturation that goes well beyond bookstores’ overloaded AI shelves.…
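It may help to make “automated proof verification” concrete. The sketch below is my own illustration, not drawn from the newsletter: two elementary statements as a proof assistant such as Lean 4 would check them. The kernel either accepts the proof term or rejects the file; no appeal to a referee’s judgment is involved.

```lean
-- A statement with a fully formal proof, checked mechanically by the kernel;
-- Nat.add_comm is the library lemma asserting commutativity of addition.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n

-- A statement the kernel certifies by pure computation: n + 0 reduces to n,
-- so reflexivity (rfl) closes the goal.
example (n : Nat) : n + 0 = n := rfl
```

This is the “incontrovertible proof” of the paragraph above in its most literal form: a proof is a formal object whose correctness is decided by a small trusted checker.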

This post is meant to be the first of a series of texts exploring the reasons for the absence of any sustained discussion of these issues on the part of mathematicians, in contrast to the very visible public debate about the perils and promises of AI.  Much of my book Mathematics without Apologies was devoted to a critique of claims regarding the “usefulness” of mathematics when, as is nearly always the case, they are not accompanied by close examination of the perspectives in which an application of mathematics may or may not be seen as “useful.”  The similarity to the critique intended here—of the uncritical use of words (like “progress”) that accompany the ideology surrounding mechanization, mechanical proof verification and automated theorem proving in particular—will be apparent.  The reason should be obvious:  unless we can conceive an alternative to conventional measures of utility for which human mathematics is a positive good, the forces that make decisions about this sort of thing will declare my vocation obsolete.  Most of my colleagues who are involved in advancing the mechanization program have conceded the rhetorical battle, and some are already forecasting the demise of human mathematics.  So the plan is to continue the discussion in this new format, and gradually to phase out the blog that I launched when Mathematics without Apologies was published, as I have already tried and failed to do once before.

Because I will be forced to draw on so many different disciplinary perspectives in the course of exploring the topic of mechanization, there is a real danger that these texts will lose any chance of forming a coherent whole.  For my own sake, then, as much as for the sake of potential readers, I propose a slogan that is meant to hold everything together until I come up with a better slogan.  Here it is: 

Current trends in mechanization belong to the history of mathematics, both as events in a historical process and in the creation of common narratives about the meaning of the process. …

26 thoughts on “Announcing a new newsletter on mechanizing mathematics”

  1. assaf

    Hi Michael,

    I wrote a 10-15 line comment at the bottom of your new newsletter just a few minutes ago. Then when I tried to read what I wrote, by clicking on “See all comments”, I was not able to retrieve it. Nor was I able to read any of the other comments, if there are any.



    1. mathematicswithoutapologies Post author

      I disabled comments on the newsletter, or at least I think I did, because I don’t know how to filter comments on Substack. The plan is to collect all comments here.

      Here is the comment you asked me to include:

      Just a brief comment, as I am really looking forward to reading the exchanges on your newsletter.

The expression “mechanizing mathematics” sounds like a little threat to me, which it is, if it means something like changing the nature of mathematics. But let me assume it is not. Moreover, placing the project of “mechanizing mathematics”—for me this should be a project of getting mathematics to take advantage of electronic computing devices developed on principles of mathematical logic, not of changing mathematics into something it is not—inside AI will color it in a particular way which, I think, is fraught with misunderstandings and bad connotations. This is so partly because too many things labelled “AI” have a bad smell, and deservedly so, like the many ways of intruding on people’s privacy, of spying and surveillance, etc. The particular aspects of “mechanizing mathematics” which are very much worth pursuing by mathematicians, in my view, like automated theorem provers (ATPs) and interactive proof assistants (IPAs), I would place outside AI.


      1. Loïc Merel

To be clear, you can rename the fortified camps around the village of the irreducible Gauls: Coq, Lean, HOL and the like.

Also, as I prefer to avoid the Substack system, which seems to me to further undermine the open internet, I will remain in touch with your publications via an RSS feed, but I will not subscribe to the newsletter.


      2. mathematicswithoutapologies Post author

        I have seen claims to the effect that traditional mathematics is the (elitist) empire and the formalizers are the insurgents. In future texts I will try to dig up some of these claims.

        I share your misgivings about Substack but I chose to publish there in the belief that the audience includes people outside the profession who are concerned about implications of AI in a variety of domains. Soon enough I’ll know whether or not this belief is mistaken.


  2. Bhupinder Singh Anand

    Dear Michael,

    Such a newsletter/forum is, indeed, overdue and could prove invaluable for investigating the decision-making constraints of any artificial intelligence.

    I’m particularly interested in how such a forum—from the perspective of my 2016 paper—could help investigate further, for instance, the distinction between:

    (i) evidence-based validation of the proof of a formal theorem as a `true’ (algorithmically verifiable) mathematical proposition, which could be taken to lie within, and define, the ambit of human intelligence (`validation’ in the sense of Per Martin-Loef’s `SCHEMATIC FIGURE’ of a `true’ proposition in his 1987 paper);

    [SCHEMATIC FIGURE which I interpret as: A proposition ‘A’ is a ‘truth’ if, and only if, it can be ‘asserted/judged’ as ‘A is true’ by appeal to some ‘evidence/proof’ for the truth of A which can be ‘validated’.]

    (ii) automated proof verification of the proof of a formal theorem as a `true’ (algorithmically computable) mathematical proposition, which could be taken to lie within, and define, the ambit of an artificial intelligence;

    The above distinction reflects the distinction between the algorithmically verifiable truth, and the algorithmically computable truth, of a mathematical proposition under a well-defined interpretation (see Definitions 1 and 2 in my 2016 paper).

I see the significance of such a distinction reflected in an implicit dogma of conventional wisdom, which dictates that proofs of mathematical propositions should be treated as necessary, and sufficient, for entailing significant mathematical truths ONLY if the proofs are expressed in a formal mathematical theory—at a minimum, one deemed consistent—in terms of:

    * Axioms/Axiom schemas
    * Rules of Deduction
    * Definitions
    * Lemmas
    * Theorems
    * Corollaries.

However, the above distinction suggests that we could, alternatively, posit that pre-formal evidence-based reasoning—in Markus Pantsar’s sense (see this 2009 paper)—is necessary not only for treating a theorem as `formally’ significant (which is what differentiates it from Axioms, Lemmas and Corollaries), but also for the insight necessary for a human intelligence to understand why the theorem can be treated as an intuitively significant, `true’, arithmetical proposition under a well-defined interpretation.

    This would support the argument that proving a theorem formally (or mechanically by a theorem-proving artificial intelligence) from explicit premises/axioms using currently accepted rules of deduction is a meaningless game, of little scientific value, in the absence of evidence that has already established—unambiguously by a human intelligence—why the premises/axioms and rules of deduction can be treated, and categorically communicated, as pre-formal truths.

    Pantsar’s `pre-formal mathematics’ (in this 2009 paper) could then be viewed as evidencing the philosophy that an evidence-based definition of `pre-formal mathematical truth’ is a (necessarily transparent to a human intelligence) prerequisite for determining, in a formal proof theory, which axiomatic assumptions of a formal theory underlie the truth of pre-formal, evidence-based, reasoning.

    Consequently, only evidence-based, pre-formal, truth can entail formal provability; and the formal proof of any significant mathematical theorem cannot entail its pre-formal truth as evidence-based.

    It can only identify the explicit/implicit premises that have been used to evidence the, already established, pre-formal truth of a mathematical proposition; and validate them in Martin-Loef’s sense only if the theorem is, further, true under a well-defined Tarskian interpretation.

Hence visualising and understanding the evidence-based, pre-formal, truth of a mathematical proposition is the only raison d’être for subsequently seeking a formal proof of the proposition within a formal mathematical language (whether first- or second-order set theory, arithmetic, geometry, etc.).

    The philosophical and mathematical significance of such a perspective is that it then becomes imperative to differentiate between:

    (a) Plato’s unfalsifiable `knowledge as justified true belief’; which seeks a formal proof in a first-order mathematical language in order to justify a mathematical proposition as `true’ within a well-defined community constrained by the ambit of artificial intelligence; and

    [Comment: The `constraints’ defining the ambit of an artificial intelligence refer to the entailments of Theorem 7.1 of this 2016 paper]

    (b) Gualtiero Piccinini’s falsifiable `knowledge as factually grounded belief’ within a well-defined community constrained by the ambit of human intelligence; which seeks a `pre-formal’ proof of a mathematical proposition in order to justify the axioms and rules of inference of a first-order mathematical language that can, then, formally prove the mathematical proposition as justifiably `true’, within the community, under an `evidence-based’ interpretation of the language.

    Such a perspective implicitly appeals to the view that Pestalozzi’s `Principle of Anschauung’ can be viewed as the basis of a `common intuition’—characteristic of human intelligences—which could, then, be treated as the necessary, and sufficient, evidence for well-defining the concept of `truth’ in `pre-formal’ reasoning/mathematics/proof within a well-defined community constrained by the ambit of human intelligence.

    Kind regards,



    1. mathematicswithoutapologies Post author

      Dear Bhup,
      Thank you very much for your contribution. I hope my newsletter will stimulate discussion along the lines of the questions you raise. I probably will not contribute much to such a discussion, because I have no training in logic, and in the hundreds of papers I have read and the hundreds of seminar talks I have attended, considerations from logic have only arisen very rarely, and then nearly always (except in some applications of model theory to algebraic geometry, for example) only to be dismissed.

      The fact that my colleagues refer to “theorems” rather than “mathematical truths” is an indication that the two expressions should not be taken to be synonymous. But the relation between the two notions definitely needs to be explored, in connection with mechanization in particular.


  3. Bhupinder Singh Anand

    Dear Michael,

    I was intrigued by your remark that:

“… I have read and the hundreds of seminar talks I have attended, considerations from logic have only arisen very rarely, and then nearly always (except in some applications of model theory to algebraic geometry, for example) only to be dismissed.”

    It was only when I read your earlier post that I could place it in a context that reflects my own perspective: questioning the scientific value of a classical logic that admits mathematical propositions as `true’ if, and only if, such `truth’ can be defined in terms of algorithmically computable functions/relations; which, by Theorem 7.1 of my 2016 paper, would entail that mathematical `truths’ are countable and can, therefore, be `mechanized’.

    However, such an, essentially inherited and uncritical, perspective is, at best, misleading if not false. Theorem 2.1 of my 2016 paper entails that the algorithmically computable mathematical `truths’ are only the tip of the iceberg since, by Cantor’s diagonal argument, mathematical `truths’ which are constructively verifiable, but not finitarily computable, can be viewed as essentially uncountable.

    The significance of this is reflected in my response to the following intriguing comment by Carey Carlson in this discussion on Patrizia Piredda’s Academia Letter `Heisenberg’s Dynamic Concepts. The Metaphor beyond the Limits of Language’:

    `Reliance on metaphor should become a thing of the past, now that physics [can be] constructed from graphs of temporal/causal succession.’

    My response was:

    “Surely use of metaphor is invaluable to a human intelligence for appreciating the vastness of the universe in which it exists; a universe that has continually revealed itself to be even stranger than any fiction which a human intelligence could conceive.

    Reason: A metaphor admits of multiple, equally valid, interpretations; corresponding to, say, the multitude of human intelligences that seek to symbolise the metaphor’s referent in a language of unambiguous expression (whether for its own subsequent reference, or for communication to an other).

    What can then be accepted further, by consensus, as a definition—in a language of categorical communication—of that which is common to the multitude of symbolic expressions, can consequently be treated as a non-metaphorical representation of that which was sought to be alluded to by the metaphor.

Moreover, it would not be entirely unreasonable to posit that it is this ability to crystallise a metaphor into a categorically communicable definition, alluding to some property of commonly perceived natural phenomena, that has empowered human intelligence into conjecturing falsifiable hypotheses for furthering its perception of, and relation to, the nature of the universe it cohabits with what it perceives as both animate and inanimate matter.

    It would also not be entirely unreasonable to posit that, in sharp contrast, what characterises a mechanical intelligence is that it has no reliance on metaphor; but only on definitions unambiguously expressed in a language of categorical communication.”

    My alluding to metaphor as `invaluable to a human intelligence’ reflected another entailment of my 2016 paper; which is that whereas the values of algorithmically computable functions/relations are completely `knowable’ by a mechanical intelligence, since they are both determinate and predictable, the values of algorithmically verifiable functions/relations that are not algorithmically computable are not completely `knowable’ by any intelligence, human or mechanical since, though they are determinate, they are not predictable.

    The significance of this is that only the classical laws of physics can be exploited by a mechanical intelligence, since they are expressed only in terms of algorithmically computable functions/relations, and are thus both determinate and predictable.

    As we now know, only the limited macroscopic behaviour of the Universe that is directly accessible to human sensory perception can be expressed mathematically, and communicated categorically, in terms of classical laws that are both determinate and predictable.

    The vastness of the Universe beyond human sensory perception is, however, of a nature that can only be expressed mathematically, and communicated categorically, in terms of quantum laws that are essentially unpredictable; but which—contrary to accepted paradigms, as I argue in Chapter 22 of my 2020 book—are determinate since they can be expressed mathematically in terms of algorithmically verifiable functions/relations.

    Kind regards,



  4. mathematicswithoutapologies Post author

    Dear Bhup,
    You’ll have to excuse me for not responding in depth to your long and thoughtful comment, much of which is beyond the scope of the material I intend to discuss in this newsletter. I would like to make a few observations. First, as I already hinted, the word “truth” may be important for philosophers of mathematics but it plays next to no role in mathematical practice. On the other hand, the word “metaphor” is central to my earlier article on mechanization. I write, for example, that

    “Foundations” is an example of a metaphor for normative mathematical practice that somehow stuck.

    I would suggest that Carey Carlson’s “graphs” are also metaphors.


  5. David J. Littleboy

This won’t be much help in thinking about mathematics and computers (even though I had a part-time job as an undergrad in the Mathlab group at MIT, where I used MACSYMA to do magnet design, and thus got into grad school in Materials Science).

As a freshman at MIT in 1972, enthused about the new field of computer science, I sat in on (audited) Minsky and Papert’s graduate AI seminar. It was in one of MIT’s larger classrooms, and impressing the profs got one into the MIT AI grad program at the time; it was intense. Much of the time was spent on Minsky’s proof that perceptrons couldn’t do much of anything, including, for example, telling the difference between a closed circle (letter “O”) and an open one (letter “C”). Shortly after that, the perceptron fans gussied up the model to be able to handle that particular case, claimed that the objection had been disproved, and blithely went on working with models that didn’t include the gussied-up aspects and still couldn’t do anything. But since some of the AI types got it that the human ability to reason logically and flexibly about the world was really amazing and not trivially solvable by one-word gimmicks, AI looked kewl, I thought, and (among other things) I did an all-but-thesis under Roger Schank in 1984. (But I wasn’t coming up with anything useful, so I punted.) Fast forward to the 2010s, and we’re seeing a repeat performance. (See the book “Rebooting AI” for a discussion of what neural nets can’t do; long story short: neural nets recognize textures, not shapes.) Someone points out that the neural net model doesn’t work, some researcher figures out a gussie to fix a particular problem, and then everyone goes back to having fun with their GPU computations without said gussie. Similarly, a textbook by Newell and Simon (also around 1972) included a comic of a robot climbing a tree, pointing to the moon, and screaming “I’m getting closer”; yet the whole game in current AI is (local) gradient descent search. Give me a break, I want to scream. Sigh. Again.
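The perceptron limitation described above is easy to exhibit directly. The sketch below is a minimal illustration of my own, using XOR rather than the O-vs-C example from the seminar: it trains a single-layer perceptron with the classic learning rule. On AND, which is linearly separable, the rule converges to a perfect classifier; on XOR, no line separates the classes, so no choice of weights can get all four points right.

```python
def train_perceptron(data, epochs=50, lr=1.0):
    """Classic perceptron learning rule on 2-input boolean data."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(w, b, data):
    hits = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == y
        for (x1, x2), y in data
    )
    return hits / len(data)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w_and, b_and = train_perceptron(AND)  # linearly separable: converges
w_xor, b_xor = train_perceptron(XOR)  # not separable: can never be perfect
```

The “gussying up” the comment describes corresponds to adding hidden layers, which does escape this particular limit while leaving the broader objection untouched.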

There’s an enormous amount of money and work going on in AI nowadays, and there are a few people still trying to figure out how human reasoning works, but it sure looks to me as though (a) there’s no there there and (b) at some point someone really ought to notice that. But the problem is that we humans are humongously fond of science fiction, and really want to think and talk about “computers that are smarter than people”, and really don’t want to be told that intelligence is really amazing and kewl and that we don’t have a clue as to how it works. Figuring out how human intelligence works (using computation to keep us honest: if we can’t code it, it’s not a theory) is hard. We failed during the first two runs (early 1970s, late 1980s). But we may not even notice that we’re failing this time around.


    1. mathematicswithoutapologies Post author

      Thanks for this long comment. My suspicion, which I will be developing throughout, is that if and when the AI industry concludes that human intelligence is not a realistic goal as a source of profit, then humanity as such will lose much of its appeal.

      I do plan to review Rebooting AI at some point next winter. In the meantime, I am competent neither to confirm nor to dispute your conclusions, but I do hope another reader will take up your challenge.


  6. Bhupinder Singh Anand

    Dear Michael,

One could reasonably speculate that: `… if and when the AI industry concludes that human intelligence is not a realistic goal as a source of profit then [AI] as such will lose much of its appeal’.

    However, the Turing Test, Are You a Man or a Machine, would argue against such a `conclusion’ being proffered other than as a polemical thesis.

    Reason: Kurt Goedel’s argumentation in his seminal 1931 paper, on formally `undecidable’ arithmetical propositions, entails that a human intelligence can `sense’—and subsequently `recognise’—natural phenomena which can be expressed as determinate, but not predictable, in a well-defined mathematical language by means of functions/relations which are algorithmically verifiable, but not algorithmically computable.

    However, Alan Turing’s paper on computable numbers entails that any Turing-machine based AI can only `sense’—and subsequently `recognise’—natural phenomena that can be expressed as determinate, and predictable, in a well-defined mathematical language by means of functions/relations which are algorithmically computable (and, ipso facto, algorithmically verifiable).

    Moreover, since—as entailed by Goedel’s Theorem VII in his seminal 1931 paper—such an AI could, in principle, be designed, programmed, and constructed to record and express in such an algorithmically computable language:

    (a) not only all that a human intelligence can experience;

    (b) but also natural phenomena that lie beyond the sensory perceptions of any organic intelligence;

    it is unlikely—as the industrial revolution has already demonstrated—that AI will lose any of its appeal as a means of translating this capability into practical applications which meet the needs, and desires, of a humanity that not only tolerates, but increasingly respects, the economic, and social, imperative of profitable enterprise.

    Kind regards,



    1. mathematicswithoutapologies Post author

      Thank you for this comment. If I’m not mistaken, Proposition VII in Gödel’s paper asserts that “Every recursive relation is arithmetical.” It’s quite a leap to claim that this Proposition accounts for everything a human intelligence can experience.


      1. Bhupinder Singh Anand

        The argument, as addressed in Is the brain a Turing machine?, is that all a human/organic intelligence can experience with awareness is reflected in the neuronic activity evidenced within a human/organic brain.

        Activity which can be observed, recorded, and expressed in terms of algorithmically verifiable/computable functions/relations in the digital language of a mechanical intelligence that can, moreover, be treated as the recursive arithmetical language of a Turing machine in the sense that:

        “… if we posit that all outputs of sensory organs can only be received/perceived and/or transmitted as digital pulses to/by the brain then, from the evidence-based perspective of this investigation, one could speculate that an organic brain can be modeled by a Turing machine, and strongly hypothesise that:

        Hypothesis 1. Whilst an organic brain can evidence that an arithmetical proposition is algorithmically computable as true under an interpretation, only the sensory organs (such as those of sight, smell, hearing, taste and touch) can evidence that an arithmetical proposition is algorithmically verifiable as true under an interpretation.

        In other words, whilst the brain functions can be treated as essentially digital, and representable completely by a Turing machine, the functions of the sensory organs could be treated as essentially analog, and representable only by geometrical models that cannot always be represented completely in their limiting cases by a Turing machine.”


      2. mathematicswithoutapologies Post author

        I expect to be addressing computational metaphors for the mind in the course of some future book reviews. I gather that some researchers are finding the Turing machine metaphor an obstacle in attempts to develop artificial general intelligence (which itself is a poorly-defined notion). Even from the standpoint of the older metaphors of cybernetics, it seems to me that isolating the brain from the body in which it is… embodied… is a step backwards.

        I’m more interested, however, in addressing mathematics as a social rather than individual practice.


      3. David J. Littleboy

“Analog” arguments have the problem that such use of the term “analog” has implicit within it the idea that “analog” means infinite resolution (“cannot always be represented completely in their limiting cases by a Turing machine”), and that’s simply wrong. All analog systems (especially biological ones!) have noise, and thus can be described, with no loss of generality (or anything else), as digital systems. Analog isn’t a get-out-of-jail-free card: the human mind is either a computer or it’s magic, and “magic” isn’t an explanation. (FWIW, Jerry Fodor has some great examples of how good human cognition is, but then he goes one step too far and claims that we have _complete knowledge of everything_, whereas it’s merely _an amazing amount of knowledge about a lot of things_.)

Also, talking about Turing Machines doesn’t help much. The mind clearly has memory, and a lot of it, and remembers what it’s done (within limits, of course). So the problems that mathematicians use Turing Machines to think about (e.g. whether a computation terminates) are finessed by simply remembering what you’ve done and recalling that you already tried that. “Embodiment” ideas are important here: how much work one did on a problem is an _index_ to one’s episodic memory: the “Sheesh, I’m just as exhausted as I was yesterday” state tells us to check whether we’ve made any progress or not. (Building a digital system that logs its actions isn’t a big deal, but it’s not in the realm of things Turing Machines are used to deal with.)

        (FWIW, Turing’s conception of the “Turing Test” was way more subtle and sophisticated than pretty much anyone understands.)

Of course, I’m smack dab in the middle of a fight between folks with strong gut intuitions (“the mind can’t be as simple as a computer, it must be magic” vs. “if I do zillions and zillions of things in massive parallelism, intelligence must magically appear”). There’s too much magic there; I prefer my reality to be real.


      4. mathematicswithoutapologies Post author

        The fourth entry (three weeks from now) will include a slogan that I find useful in thinking about the relevance of the digital to mathematics (and “anything else”) but it will have to wait. But the more general lesson I have learned as a mathematician is that the impulse, or even the compulsion, to get to the bottom of things will always be frustrated, because nothing of interest has a bottom. I’ve written “Physicists like Steven Weinberg can ‘dream of a final theory,’ but mathematicians can only realistically dream of an endlessly receding horizon,” and I suspect cognitive science is no different.

        I’m also planning to devote a few entries to the kinds of stories that appeal to the kinds of mathematicians I know. It’s probably possible to find a role for Turing machines in these stories, but that’s not what people do in practice, and I don’t think it’s helpful to suggest that this is because the mathematicians are not paying sufficient attention.


  7. Bhupinder Singh Anand

    Dear Michael,

    1. It is not immediately obvious what one would seek to gain by `addressing mathematics as a social rather than individual practice’, since mathematics seeks to function equally—and inseparably—as a language of unambiguous, individual expression and as a language of categorical, social communication.

    In other words, mathematics could be viewed functionally as:

    * merely a set of complementary, symbolic, languages,

    * intended to serve Philosophy and the Natural Sciences,

    * by seeking to provide the necessary tools for adequately expressing our sensory observations—and their associated perceptions (and abstractions)—of a `common’ external world;

    * corresponding to what some cognitive scientists, such as Lakoff and Nunez term as primary and secondary `conceptual metaphors’,

    * in a symbolic language of unambiguous expression and, ideally, categorical communication.

    2. The social dimensions of such a perspective are highlighted if, as posited in What is mathematics, we make a distinction between:

    The natural scientist’s hat, whose wearer’s responsibility is recording—as precisely and as objectively as possible—our sensory observations;

    The philosopher’s hat, whose wearer’s responsibility is abstracting a coherent—albeit informal and not necessarily objective—holistic perspective of the external world from the natural scientist’s sensory observations and their associated perceptions/metaphors;

    The mathematician’s hat, whose wearer’s responsibility is providing the tools for adequately expressing such recordings and abstractions in a symbolic language of, ideally, unambiguous communication.

    Comment: We could view this distinction as seeking to address:

    * What we do in scientific disciplines;

    * Why we do what we do in scientific disciplines; and

    * How we express and communicate whatever it is that we do in scientific disciplines.

    Kind regards,



    1. mathematicswithoutapologies Post author

      I will just answer your first point, regarding “what one would seek to gain,” by repeating that “addressing mathematics as a social rather than individual practice” is precisely the reason I write about mathematics. What my readers may seek to gain by reading what I write is a question I cannot answer.

      But I will add that you will have to search for quite a long time to find a pure mathematician willing to define the mathematician’s “responsibility” as “providing tools” for philosophers or natural scientists or anyone else. Texts addressed to funding agencies don’t count. This specific misunderstanding has a distinguished, if distressing, lineage in European philosophy, and I would be grateful if you could provide any quotations of classic Indian philosophers who make such claims. I address this misconception at great length in my book Mathematicians without Apologies, but since (unsurprisingly) I didn’t succeed in rooting it out, least of all in the literature on AI, it will be one of the recurrent themes of the new newsletter.


  8. Bhupinder Singh Anand

    Dear David,

    1. Some examples of non-analog mathematical representations—of hypothetical real-life cases—that cannot be represented `completely’ in their limiting cases by a Turing machine are detailed in Mythical `set-theoretical’ limits of fractal constructions:

    Case 1: Interpretation as a virus cluster;

    Case 2: Interpretation as an elastic string;

    Case 3: Interpretation as a quantum chimera;

    Case 4: Interpretation as a political revolution;

    Case 5: Modelling the states of the total energy in a universe that recycles.

    2. You are right in highlighting that the term `analog’ ought not to be used as `a get out of jail free card’. One way of addressing this issue could be to define the terms `Analog process’ and `Digital process’ mathematically as detailed in Is the brain a Turing machine?:

    “Definition 39. (Analog process) A physical process is analog if, and only if, its states can be represented mathematically by a number-theoretic function that is algorithmically verifiable.

    Definition 40. (Digital process) A physical process is digital if, and only if, its states can be represented mathematically by a number-theoretic function that is algorithmically computable.”
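    Since the two definitions differ essentially in the order of quantifiers, it may help to set them side by side. The following is my own gloss on that standard distinction, not a quotation from the paper; $f$ is a number-theoretic function and $A$ ranges over algorithms:

```latex
% Gloss (not a quotation): computable demands one algorithm for all
% arguments; verifiable demands, for each bound n, some algorithm
% that settles the values up to n.
\begin{align*}
\text{computable:} &\quad \exists A\; \forall n\;\; A(n) = f(n)\\
\text{verifiable:} &\quad \forall n\; \exists A_n\; \forall m \le n\;\; A_n(m) = f(m)
\end{align*}
```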

    Kind regards,



  9. Bhupinder Singh Anand

    Dear Michael,

    1. M. D. Srinivas in his 2005 paper Proofs in Indian Mathematics, and P. P. Divakaran in his 2018 paper The Mathematics of India, both address the perspective of classic Indian philosophers/mathematicians towards the `responsibility’ of mathematicians in asserting a mathematical argument as proven in the absence of a (physically?) falsifiable model.

    For instance—as detailed in The foundational significance of the Complementarity Thesis and of evidence-based reasoning—according to Divakaran, Nilakantha was “uncompromising about the need to subject prior knowledge, whether revealed or merely uttered by mortals and lodged in an abstract communal memory (smrti), to the tests of observation and logical inference and rejected if found wanting”.

    2. I agree that perceiving a `mathematician’s “responsibility” as “providing tools” for philosophers or natural scientists or anyone else’ in pejorative terms would not only be demeaning, but misleading.

    It would not do justice to the likes of, say, Isaac Newton, Albert Einstein or Roger Penrose, each of whom can be seen to have comfortably worn a natural scientist’s hat, or a philosopher’s hat, or a mathematician’s hat, as appropriate to the demands of its associated responsibilities.

    Kind regards,



