It’s clear to me that, in spite of the best of intentions, this has been mainly a dialogue of the deaf, much like the situation described in this post, which in spite of its name is definitely part of this thread. It can be characterized as a dialogue between one group who are convinced that mathematics is ripe for a conceptual revolution or paradigm shift, which would take the form of adoption of new foundations, and a second group who see no need for such a revolution. The dialogue is complicated by the fact that the overwhelming majority of the members of the first group come from programming or proof theory, and by the fact that the overwhelming majority of mathematicians are completely indifferent to the proposed new foundations.
I would draw your attention to the irony that I have been the target of a variety of reactions, ranging from abuse to well-meaning attempts to explain matters about which I have already written at length; and this is because, unlike the overwhelming majority of my colleagues, I am talking and writing about the new proposals rather than dismissing them out of hand. I naturally anticipated that by speaking out I would be making myself a target, simply because it’s easier to talk back (to kill the messenger, so to speak) to someone who is paying attention than to someone who is ignoring you.
So let me explain that, while I am naturally curious about new intellectual developments and while I anticipate that this new research program will have very interesting consequences, which may or may not have anything to do with “Foundations,” the arguments put forward in support of the new movement are essentially irrelevant to my practice as a mathematician — except in one important way, which I will explain below. If you want to know what factors I do consider relevant to my research, I’m afraid you’ll have to read my book; they are not easy to summarize and they don’t fit into a neat logical sequence.
Here’s a passage in my book in which I quote Mike Shulman.
“I never would have guessed,” wrote Mike Shulman in 2011, “that *the* computerization of mathematics would be best carried out… by an enrichment of mathematics” [my emphasis] along the lines of Voevodsky’s univalent foundations project, to which Shulman is an active contributor.
I italicized the definite article in the Shulman quotation to emphasize how mechanization of mathematics — with or without human participation in the long term — is viewed in some circles as inevitable. To my mind, the fixation on mechanical proof checking is less interesting as a reminder that standards of proof evolve over time, which is how it’s usually treated by philosophers and sociologists, than as a chapter in the increasing qualification of machines as sources of validation, to the detriment of human rivals. In Yevgeny Zamyatin’s novel We, whose protagonist is a mathematician named D-503, the scourge of human subjectivity — what virtual reality pioneer Jaron Lanier calls “the unfathomable penumbra of meaning that distinguishes a word in natural language from a command in a computer program” — is ultimately eliminated by an X-ray operation that turns people into machines.
Here is a dangerously oversimplified model of scientific revolution: In the absence of external forces (like the mission civilisatrice to which I allude in the last post) a scientific revolution takes place as a response to a crisis. Voevodsky’s article made it clear that he perceived the need for new foundations in response to a contemporary crisis in the verification of mathematical proofs. I respect both his perception and his response, but very few of my colleagues share that perception. To express “surprise” or “disappointment” that this is the case is to display a fundamental misunderstanding of the sociology of mathematical practice. And if you are not actually a mathematician, I’m not sure that “disappointment” with the attitudes of mathematicians is an appropriate reaction, even if you are Plato.
The notion of revolution may be broadened to include radical innovations like Grothendieck’s new (small f) foundations of algebraic geometry, or even important innovations that largely remain within an established framework, like derived algebraic geometry or Scholze’s perfectoid geometry. In striking contrast to HoTT/UF, these have been adopted eagerly and rapidly by highly motivated groups of mathematicians all over the world. Surely there is a sociological lesson in this contrast, for those willing to see it?
To make my own motivations clear, in my book I treated projects to mechanize mathematics, including (one aspect of) HoTT/UF, as one of two “proposals for reconfiguration” that challenge “the professional autonomy to which we have grown attached.” The other one is the pressure to subordinate pure research to an “entrepreneurial mindset.” The book briefly mentions a possible convergence between these proposals: if the providers of the material conditions — the “external goods” — on which mathematics depends become convinced (1) that supporting pure research is necessary only because of its eventual commercial or industrial spinoffs and (2) that computers can provide these spinoffs more reliably than pure researchers, then the future of pure research in mathematics is by no means guaranteed.
From this perspective, one is entitled to wonder whether the conscious or unconscious intention of mathematicians and computer scientists who argue along the lines of point (2) above isn’t to redirect some of the limited supply of “external goods” from pure mathematics to automated proof. I don’t know anyone who is actually arguing for such a shift, of course. But it’s easy to imagine that it would be welcomed by decision-makers, and rationalized as progress in Silicon Valley.