The Helix Center has now posted the video of Saturday’s round table on “Mechanization of Mathematics”. The discussion was lively and everyone agreed that we should meet again, or even that we should organize a conference on the theme.

Since concern about correctness of proofs is one of the primary motivations of mathematicians who are active in automated proof verification, it was interesting to hear several colleagues at the IAS quote the following remark about Solomon Lefschetz:

He had marvelous intuition, and so far as I know, all of the results he claimed in algebraic geometry have now been proved. When I was a graduate student at Princeton, it was frequently said that “Lefschetz never stated a false theorem nor gave a correct proof.”

This is Philip Griffiths reminiscing, in his contribution to the biographical memoirs of Lefschetz (on p. 289). The Helix Center discussion did raise the question of mechanizing mathematical intuition, but didn’t reach any conclusions. The mathematicians I know would prefer to have correct proofs of correct theorems, but if our choice were between mechanical generators of false proofs of false theorems and false proofs of true theorems I guess we would pick the latter — especially if they were as consequential as the hard Lefschetz theorem.

And indeed, I was surprised to learn — but perhaps I should not have been — that Griffiths’s description of Lefschetz fits quite a few of the mathematicians I have most admired (I won’t name names).

Several people I admire were in the audience and others were watching the livestream. Kevin Buzzard congratulated me for finding a way to quote William Burroughs (at 53′). I repeat the quotation for the reader’s convenience:

[The] junk merchant does not sell his product to the consumer, he sells the consumer to his product. He does not improve and simplify his merchandise. He degrades and simplifies the client.

—William Burroughs, *Naked Lunch*

**UPDATE**: A point I was trying to make at the roundtable, and also in the middle of this article, and in this post, about the inevitability of mechanization of mathematics (and of everything else, and of the monetization of the resulting data by tech companies), is made much more clearly and forcefully by Rose Eveleth in an article published today on *Vox*.



Jon Awbrey: Ah, yes, the Inescapable Algebra of Need …


Robin Allison: Excellent discussion. But I disagree with the frame that these conversations always take.

If you are going to frame the role of computers in math as the “mechanization of mathematics,” it’s no wonder that a major theme of the discussion will be the threat computers pose to the humanism of math. The frame presupposes the form in which computers can contribute to mathematics, and honestly, that is far from a given.

The reason we frame the role of computers this way is to some extent circular. The frame is inspired by existing technologies and research programs, but those in turn define what we regard as the role of computers in math. The frame is also a function of our biases and of what we personally have at stake. A big motivating factor seems to be proof verification, for cases where nobody is able to check a proof, where the experts die off, and so on. Also, most mathematicians working on this project are supremely adept at abstract math and can much more fully appreciate the beauty of, for instance, arithmetic geometry. They are closer to Lefschetz as you describe him: they already have mathematical intuition, so for them it is something to be lost, whereas for most other people computers offer the only possibility of making those subjects intelligible.

Given these biases we are much less likely to recognize the work of someone like Bret Victor as having transformative potential for research math. I would, however, recommend Michael Nielsen’s essays, which disseminate some of Bret’s ideas in a more palatable form. See, for instance, “Reinventing Explanation”.

Another reason we have the frame we have now is that the people working outside it had no ambition to use their work to meet the challenges of abstract math, so people doing serious math could easily ignore it. For instance, Seymour Papert, in *Mindstorms*, talked about creating a “mathland” (with the Logo turtle) that children could inhabit to learn math just as children in France learn French. The idea that such a mathland could be used to teach children honest differential geometry is naive, but the spirit is profound. We create “mathlands” of sorts with our formal theories and we “play” in them all the time, but some of us are more adept in these kinds of environments than others. We never ask how computers could be used to really enrich these environments.

To give a concrete example, think about a person learning spectral sequences for the first time. They consult a number of resources, including Hatcher, Bott and Tu, Chow, and Mosher and Tangora. These resources take a variety of approaches and use different notation. At the same time, each offers complementary perspectives that one would want to integrate into a coherent and complete picture. Let’s focus on the spectral sequence of a filtered complex, comparing the expositions in Chow’s article and in Mosher and Tangora’s book. The schtick of Chow’s article is an intuitive development of the concept (“you could have invented spectral sequences”), taking the reader step by step through successively closer approximations, via the graded complex associated to the filtration, to the actual homology we are interested in. Mosher and Tangora, by contrast, begin with the construction of exact couples. They then apply it to an exact couple derived from a filtered complex and show that it converges under some circumstances. Ideally one would like to “line up” the two developments, realizing exact couples as the seat-of-the-pants method in Chow. What kind of computer-enabled tools could help with this “lining up”? Certainly nothing resembling “mechanizing” (unless you want to solve Hilbert’s 24th problem while you’re at it).

Here is a much more modest example. In any subject there is a “primordial soup” of foundations and of different ways to develop the subject. In linear algebra you can nearly enumerate all of these, yet the whole always escapes intelligibility. Is a matrix a set of row vectors stacked on top of each other, a set of column vectors side by side, or just an m by n array? Are row vectors and column vectors primitives, or are they just 1 by n and m by 1 matrices? On which definition of a matrix is matrix multiplication defined? How can we interpret matrix multiplication under those different definitions? How are they equivalent? How do the higher-level abstractions parameterize lower-level interpretations? For example, once you start talking about abstract vector spaces and linear maps, you can think of a column vector as a linear map from the field to the vector space. You can then think of a linear map evaluated on a vector as a composition of linear maps, corresponding to the multiplication of matrices. From this, depending on your interpretation of (compositions of) linear maps, you can recover the matrix/row/column-vector interpretations. I can almost spell this out, but not quite.
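The equivalence of these interpretations can at least be exhibited numerically. Here is a minimal sketch in Python (the matrix, the vector, and all function names are mine, purely for illustration): the same product of a matrix with a vector computed three ways — as row-by-row dot products, as a linear combination of the columns, and as a composite of linear maps evaluated at 1.

```python
# Three readings of the same matrix-vector product.
# A is a 2-by-3 matrix, v a vector with 3 entries; all names illustrative.

A = [[1, 2, 3],
     [4, 5, 6]]
v = [7, 8, 9]

# Reading 1: A is a stack of row vectors; each entry of the product
# is the dot product of a row with v.
def matvec_rows(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Reading 2: A is a list of column vectors side by side; the product
# is the linear combination of those columns with coefficients v.
def matvec_columns(A, v):
    cols = list(zip(*A))          # transpose to recover the columns
    result = [0] * len(A)
    for coeff, col in zip(v, cols):
        result = [r + coeff * c for r, c in zip(result, col)]
    return result

# Reading 3: identify v with the linear map k -> k^3, t |-> t*v, and A
# with a linear map k^3 -> k^2; the product is the composite at t = 1.
def matvec_composition(A, v):
    v_map = lambda t: [t * x for x in v]    # k -> k^3
    A_map = lambda x: matvec_rows(A, x)     # k^3 -> k^2
    return A_map(v_map(1))

assert matvec_rows(A, v) == matvec_columns(A, v) == matvec_composition(A, v)
print(matvec_rows(A, v))  # [50, 122]
```

Of course, this only shows the three readings agree on one example; the conceptual “lining up” the comment asks for is exactly what the code does not capture.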

I find it extremely hard to put in words what I want to describe, because in the words of George Orwell, “the existing dialect will come rushing in and do the job for you, at the expense of blurring or even changing your meaning”. Whatever I can put in sufficiently clear terms gets flattened to the existing dialect (abstraction, axioms, linear logical development) while that is just what I want to break away from. Consequently, I can only point towards some tension, which I hope you and others can see.

Here is a poor metaphor. Each logical development is like finding a maximal subtree of a graph. This graph is embedded in a space with various axes of abstraction, and understanding the subject matter is a matter of being able to navigate this space quickly. Bourbaki, in “The Architecture of Mathematics,” talks about abstraction and the axiomatic method as a way of restoring the intelligibility of the subject, but when the problem is navigating details, or moving up and down the ladder of abstraction, this is inadequate for all but a few. A case in point is an eclectic subject like algebraic topology.

This suggests that the entire architecture of math (à la Bourbaki), with theorems deduced in a linear fashion using the axiomatic method, should be rethought. The reason we cannot break away from this type of architecture, I claim, is not intrinsic to math but to the media we do math in. With computer-enabled “dynamic” media I can conceive of an architecture of mathematics that admits nonlinear logical development, one that preserves the power of math while greatly extending its accessibility.

In less grandiose terms: there are some structures we can reasonably abstract and others we cannot. You can’t abstract the latter, because the real problem is navigating the details, while abstraction is about hiding the details. We have the infrastructure to establish the truth of claims, but we don’t have the infrastructure for exploring why those claims are true. I suppose I believe math would be a lot more ‘democratic’ if such infrastructure existed and it took less extraordinary minds to fill in the understanding-gaps, given their obvious implications for filling in the logical-gaps.

Anyway, that’s my take on the potential of computers in math. Even if you don’t agree, I’d hope you and others take a more aggressive stance toward understanding how to use computers to actively extend the humanity of mathematics, not just preserve it.
