“No Comment” was Jacob Lurie’s reaction when the panel of Breakthrough Prize laureates was asked, at last November’s Breakthrough Prize Symposium at Stanford, what they thought of the “prospects for the Univalent Foundations program and whether that is a contender for changing” foundations of mathematics. The other panelists had deferred to Lurie, in some amusement; after Lurie’s “No Comment,” Terry Tao gave the diplomatic answer that “the person you need to ask is not sitting here.” Of course this can also be understood to mean that the five laureates don’t feel the question concerns them.
The same questioner continued: “Is anyone willing to bet against” the prediction that computer-verified proofs will be of “widespread use in mathematics” within 25 years? Lurie immediately replied: “I’ll take that!” to which Richard Taylor added “Yes, me too.” Terry Tao thought that some people, at least, would be working with computer verification by that point.
From some recent comments on this blog it seems not all readers are happy with this attitude; please don’t blame me!
Simon Donaldson made a point (around 30:15) with which most of my colleagues would agree: “One doesn’t read a mathematical paper; what one gets is the idea, to reconstruct the argument. It’s not that people (generally speaking) would be … checking the logic line by line — they would go and extract the fundamental idea; that’s really the essential thing.” This is very similar to one of my earlier quotations from Atiyah: the proof “isn’t the primary thing at all.” I hope it’s clear just how radically different this is from the mainstream philosophical position, that the written proof is an imperfect approximation to an ideal formalized proof. Donaldson (and elsewhere Atiyah, and even earlier Hardy) is saying that the written proof is an imperfect approximation to a pure stream of ideas. In that optic, not only the language of a formal proof, but even the standardized language of a publishable proof, is an obstacle to understanding, a necessary evil in the absence of a pure language of ideas.
Richard Thomas, the geometer from Imperial College, reminded me of this exchange on Monday; but I shouldn’t have needed reminding, because I was sitting in the room during the whole Symposium, and I probably even took notes. The Symposium — some 40 minutes of answers to questions posed by Breakthrough Prize co-founder, the billionaire Yuri Milner, followed by 20 minutes of answers to questions from the audience — can be viewed on YouTube and is quite interesting from start to finish. (The sequence described in the previous paragraph starts at about 54:50; the “No Comment” is at 55:19.) As I wrote in my Slate piece, Milner really does know what mathematics is about, though I remember at the time thinking that he seemed a little disappointed that the laureates showed so little interest in computer-assisted proof. Maxim Kontsevich, who comes as close as any mathematician I know to speaking in pure streams of ideas, made some subtle and unexpected comments; maybe I’ll return to them later. He did say (around 41:40) that AI is not so hard and that there may be mathematically intelligent computers in our lifetime. When Milner asked what made him so optimistic, Kontsevich answered that he was actually pessimistic: he doesn’t see any fundamental difficulty, but “it would be immoral to work on that.”
Tao was the most positive; at about 31:20 he expressed the hope that “some day we may actually write our papers, not in LaTeX … but in language that some smart software could convert into a formal language, and every so often you get a compilation error: ‘THE COMPUTER DOES NOT UNDERSTAND HOW YOU DERIVE THIS STEP.'” To Milner’s question, “Do you think computer verification will get to that stage where it will verify every new paper?” Tao replied “Oh, I hope so.”
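For readers who haven’t seen what such a “formal language” looks like in practice, here is a toy example in the style of Lean, one of the proof assistants currently in use (the example is mine, not anything discussed at the Symposium). Each step must type-check against the kernel’s rules; a step the computer cannot justify produces exactly the kind of compilation error Tao imagines:

```lean
-- A machine-checked proof that addition on the natural numbers
-- is commutative, by appeal to a lemma in the standard library.
example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- If the justification were wrong or missing, the proof would not
-- compile: the computer "does not understand how you derive this step."
```

The gap between this and ordinary mathematical prose — and whether “smart software” can someday bridge it automatically — is precisely what the panelists were betting on.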
And this reminds me of a quite different concern I have — this is MH speaking now, not the panelists — about computer-verifiable proofs: the prospect of technological lock-in. The Wikipedia article has a perfectly good summary of the pitfalls. For better or for worse, mathematical typesetting is now locked in to LaTeX, but even if it weren’t as generally satisfactory as it is, it doesn’t constrain the imagination. Algebraic geometers had the flexibility to slough off (small-f) foundations when they grew too confining or inflexible; but imagine what would happen if the technology of mathematical publication were yoked to a specific proof verification system (25, or 50, or 100 years from now). Suppose (as is extremely likely) it’s a system that is well-adapted to verifying that a computer program will run as it should (the original, and perfectly understandable, motivation for automated proof verification). Would mathematics then have to adapt in turn to the system, and would mathematicians who refuse to adapt — for example, because they find the system an obstacle to expressing their ideas — be forced to resort to samizdat, or even blogs, to get their work published?