I had promised myself that I had already said my last word on this question, but I want to tie up at least a few of the loose ends. Everyone is welcome to continue to comment while I move on to other topics.

I apologize to Dimitris if I took his comments out of context, but I have been reading them over and over for the last half hour and I don’t see how they could possibly mean anything other than what I thought they meant. I have also tried to read the previous posts but was stymied by the analytic/synthetic jargon that I have never encountered in mathematics and doesn’t seem to have much to do with the Kantian distinction.

(By the way, if I were Rick Deckard sitting in a bar in *Blade Runner* and I heard the person next to me say “I certainly do not claim that the notion of a set is objectively clearer than that of an ∞-groupoid” then I would begin to get nervous. Of course we may wonder what the word “objective” has to do with ZFC, but someone who has never owned a set of dishes or stayed at a club for the last set or, if the ambient language happened to be German, had never heard this song, is almost certainly a replicant. I like to reserve the word “objective” for the objects of our shared human perceptual environment, and in this sense the notion of a set is adapted to our sense of objectivity in a way that a Grothendieck topology doesn’t seem to be. Learning how to use Grothendieck topologies was worth the trouble because it answered specific questions of interest. Whether the same can be said of replacing sets by types remains to be seen.)

Lenstra’s comment was probably meant as a joke, or as a rhetorical flourish. He was well aware that, twenty years before he wrote those words, Mazur had announced his proof of the bound on torsion of elliptic curves over Q, and he did it by representing a functor and closely studying the geometry of the result. As a rule, mathematicians don’t realize that their offhand remarks are frequently read as philosophical position statements. I deliberately wrote much of my book in an elliptical style, in order to avoid misunderstandings; of course, the result is a different kind of misunderstanding.

I’m sure Dimitris sincerely believes that “no reasonable practitioner/admirer of HoTT that I know of is trying to convince anyone to abandon the way they think of things, especially number theorists and other practicing mathematicians.” I quote Voevodsky in Part 2 asserting that “these foundations [HOTT/UF] will soon replace set theory.” In my book I quote his prediction (in 2011) that within five years no journal would publish papers not accompanied by a machine-verifiable equivalent. There are a number of similar comments in my book and in my chapter in *Circles Disturbed*. I think the expression “militant enthusiasm” is in many cases an accurate description of this attitude. (Tim Gowers predicts that computers will have replaced human mathematicians by 2099; but he’s not a militant enthusiast; he writes that he’s “not particularly happy” at the prospect.)

The most interesting point of contention is one that has nothing to do with HOTT or automated proof-verification or theorem-proving. Dimitris writes that logic

is detached from “real” mathematics because its object of study *is* “real” mathematics (rather than the object of study *of* “real” mathematics).

I have no objection to this sentence if by “‘real’ mathematics” Dimitris means an idealized and formalized simulacrum of human mathematics. If, however, the mathematics of practicing mathematicians is seen as an imperfect approximation to the “real” mathematics of formal systems, then we have a philosophical disagreement. Too many philosophers of mathematics hold this position (and philosophers who see things differently have a very hard time getting jobs); I don’t know what logicians believe. My own position is stated in Chapter 3:

Unlike those of my colleagues who predict that computers will soon replace human mathematicians by virtue of their superior skill and reliability in proving theorems, I am inclined to respond that the goal of mathematics is to convert rigorous proofs to heuristics. The latter are in turn used to produce new rigorous proofs, a necessary input (but not the only one) for new heuristics.

I had one more comment in response to Tim Campion. He wrote that

Univalence says that isomorphic things are equal.

Of course I have already seen that, but I don’t know what that entails (when representing functors, for example). It seems to be related to his later comment:

If you have to work with (higher) groupoids, currently you have to go read something like the Stacks project or Lurie’s book, learn a kind of elaborate formalism, and develop a whole suite of intuitions about the right ways to formulate statements about higher groupoids that are going to be correct and provable, and you have to learn lots of technical details about how to go about actually proving such statements. With the utmost respect to the authors of existing texts, this is a somewhat painful and time-consuming process.

I am actually in the throes of just such a process, and while I think the Stacks project is pretty good for a 4000-page manuscript, I would naturally like to save time and avoid pain. But a lot of hard work has gone into proving the theorems in the Stacks project and Lurie’s books, and I seriously doubt that introducing new foundations will simply make that work go away.

Loïc Merel: A light-hearted and ignorant comment that came to me on the Univalence Axiom, the only axiom of HOTT. It sounds so much like a liberating revelation that it seems to hold a place similar to that of the Ātman-Brahman identity, proclaimed as a great truth in Hindu metaphysics.


Richard Williamson: I have no particular wish to contribute to this debate, but since you quoted something that I wrote in the n-café discussion, namely that “I certainly do not claim that the notion of a set is objectively clearer than that of an ∞-groupoid”, as a point for criticism, I feel obliged to clarify my point of view.

Before I do so, I should say that I largely have no idea what you are talking about in the second sentence of the paragraph in which this quote appears. So the clarification is just in response to what I believe (from the rest of the paragraph, and my general impression of your thoughts on foundations) is your feeling about the matter, which is standard: I wrote the sentence in full knowledge that most mathematicians would react to it exactly as you have (but some of the mathematicians reading the discussion at the n-café would, I believe, understand it better, and some might understand it fully).

The point, which Dimitris also made clearly in the n-café discussion, is that one has to separate familiarity, the weight of a hundred years or so of domination in foundational thinking, and other sociological factors, from more fundamental considerations. And when I reflect deeply, trying to remove the baggage of convention and get to the heart of things, I find that there is _nothing_ in the notion of a set, either in a naive sense or in a formal sense (a set within a foundations such as ZFC) that means that it is, in its very nature, a clearer concept than that of an $\infty$-groupoid. This is what I mean by ‘objectively’: you may or may not like the choice of word, but what I am getting at does in fact involve the question of understanding of concepts, one human to another, beyond mathematics itself.

The choice of $\infty$-groupoids to compare with sets here can be confusing, because, as you can see from the n-café discussion, I do not regard the notion of an $\infty$-groupoid as conceptually clear _at the present time_. But please do not trivialise this statement or take it out of the context that I elaborated upon in the n-café discussion: I have worked for a long time with $\infty$-groupoids, and it is not a question of my understanding of the notion as mathematicians view it/the expectations that mathematicians have of it at the present time.

A simpler choice for the comparison would be the notion of a category (or groupoid). As I claim in the n-café discussion, I regard this notion as as conceptually clear as that of a set, and I believe I could argue that it is even clearer. I believe (and this is not ‘fanatical’ belief, this is belief based on experience and having thought seriously about this kind of thing and carried it out in practice) that all teaching that is based on sets could be discarded and be carried out equally clearly (I would say more clearly) with the notion of a category. I do not expect most mathematicians to understand this statement, but I make it to emphasise that, in whatever sense you would regard the notion of a set as ‘objectively clear’, I believe (again, this is a reasoned opinion, not a ‘fanatical’ one) that I would be able to justify the notion of a category or groupoid as equally ‘objectively clear’.

In summary, my point of view is as follows: the notion of a weak $\infty$-groupoid is not as conceptually clear as that of a set at the present time, but there is no ‘objective’ (use whatever word you prefer: fundamental, inherent, …) reason that the notion of a weak $\infty$-groupoid will not _come to be_ regarded as equally conceptually clear as the notion of a set; and to illustrate the point, I _do_ regard the notion of a category or groupoid as as good and clear a notion to base a foundations upon as that of a set, all the way to being able to explain the notion (i.e. give a semantics) in a way that appeals only to human understanding beyond mathematics, not in a way that is ‘internal to mathematics’.


sure: I would disagree that the concept of category is a good foundational one. What is *really* foundational is the concept of graph. As simple as that: some objects and some links/relations. A category is a special kind of graph, a set is a special kind of graph, a function is a special kind of graph, a groupoid is a special kind of graph, and so on.

PS: category theory is a theory of composition. I believe that cocomposition (that is, cocategory theory) and its interaction with composition (that is, bicategory theory where bi has to be understood in the same sense as the one in bialgebra) is something lacking. Notice that if one has a good notion of bicategory in the above sense (which is easy to get formally by gluing the concept of internal category and internal cocategory, that is, with a pushout), one automatically has an infinite layer of compositions.


Shon Feder: I am very interested in this position. Could you point me towards any books, articles, sites—whatever—that elaborate on it?


Dimitris: Dear Michael,

My explanation of what my initial quote was suggesting and in what context it originated seems to me adequate, and I will not explain myself further.

“As a rule, mathematicians don’t realize that their offhand remarks are frequently read as philosophical position statements.”

The fact that you feel comfortable dismissing e.g. Lenstra’s comments as rhetorical flourish (which they probably are) but are inclined to take Voevodsky’s (or Gowers’) words at face value and make points with them (even though they too can be seen as rhetorical flourishes) shows, if anything, that your mind is already made up about all this. I suppose I am also guilty of the converse, and so perhaps my mind is made up too. In any case, it hardly seems likely we’ll get anywhere by throwing quotes around – our choice of emphasis (which quotes we pick to take seriously) merely reveals, as I just said, in what way exactly we have already made up our minds.

By “real” mathematics I simply meant the kind of mathematical arguments and constructions that practicing mathematicians are coming up with, e.g. Grothendieck topologies are real mathematics in the way I was using the term. It is irrelevant whether or not these arguments and constructions are conceived of as approximations to an “ideal mathematics” or merely parts of a chaotic and beautiful network of human inquiry (as you seem to believe, and I tend to agree) – logic itself takes no stance on this question. This is similar to how you can model chemical processes on a computer without taking a stance on the fundamental constituents of matter. This is what I believe the attitude of logic is (or should be) towards mathematics. Logic-as-applied-to-Foundations is a different, more complicated (chaotic?) matter that would take longer to get into, but I have roughly the same kind of attitude there.

The only regret I have is that you still seem to insist on dragging HoTT/UF into this “man vs. machine” debate. My reasons for thinking that this is unfair were outlined in my previous comment and I will not repeat them here. This “man vs. machine” debate is a deep and troubling issue, I admit – but using Voevodsky and the HoTT community as somehow representatives of the kind of view that “mathematicians will become machines” is, as far as I’m concerned, deeply inaccurate. I hope that you at least reconsider this aspect of your view on HoTT/UF.

In any case, best of luck with your book – and thanks for hosting this discussion here. Combative or otherwise, I, for one, enjoyed it.

Best,

Dimitris


Richard Séguin: “In my book I quote his prediction (in 2011) that within five years no journal would publish papers not accompanied by a machine-verifiable equivalent.”

This is sadly naive. As I discussed in a previous post, the software and machine-verifiable equivalent code would undoubtedly be obsolete and unusable within twenty years. The original software probably wouldn’t run on current operating systems, and the “equivalent” code would probably be nonsense to current verification software. Hence, it would be difficult or impossible to duplicate and/or verify the original proof guarantee, essentially nullifying its original usefulness.

To the credit of Gowers and his collaborator (name?), they are (if I recall correctly) trying to make proof generators that actually produce output in standard human mathematical writing. However, even if initially successful, I think it would be an extremely long time before such software could produce well organized big proofs or new theory in very long papers or books. Proofs are just the beginning. Following that are all the structural and aesthetic decisions, hair pulling, rewriting, etc. that the powerful human intuition struggles with just to make such things enjoyable to read.


mathematicswithoutapologies (Post author): I did promise to myself that I wouldn’t return to this discussion, but some interesting points were made, and some no less interesting misinterpretations of my intentions, and I can use them as a bridge to the next topics.

First, it’s easy enough to distinguish Lenstra’s rhetorical flourish from the announced intentions of Gowers and Voevodsky, because the latter have given numerous lectures on this very topic; Gowers has even managed inadvertently to convince one anonymous blogger that his projects pose such a danger to mathematical practice that he has named his blog “Stop Timothy Gowers!”

I also want to flag this sentence:

“one has to separate familiarity, the weight of a hundred years or so of domination in foundational thinking, and other sociological factors, from more fundamental considerations.”

This reflects a real difference in perspective, because I don’t believe there are considerations more fundamental than sociology and history. For better or worse, mathematics is what human beings choose to make it. There is no external guidance, certainly not from an operating system. I do sympathize with Richard’s concern with meaning, and in my book I suggest that meaning in mathematics is derived from the historically situated practice. I suggest that but I don’t try to demonstrate it, because I see that as the role of philosophers, and I’m not a philosopher. (And I also sympathize with Mike Shulman’s point that the role of philosophers is not to complain about mathematicians’ choice of semantics — or I would, if I were sure that I had actually chosen semantics.)

I did scroll down after Mike Shulman’s post to try to understand the context of the discussion, and I came to what looks like a substantial disagreement about whether or not HOTT is a genuinely synthetic theory for etc. What I don’t understand is why it’s so important to have a synthetic theory. What’s called synthetic differential geometry has little relevance, as far as I can tell, to the proof of the Poincaré conjecture; of course I may be wrong, but that’s the sort of problem I recognize as giving meaning to differential geometry.

And let me hasten to say, once again, that if so many talented people are putting their minds to HOTT/UF then something interesting is going to come of it, though it may not have the metaphysical import that some are seeking.

Now as for the “man vs. machine” question, I’m not really worried that computer scientists are conspiring to put mathematicians out of work. (Though I was intrigued to learn that, according to A.G., some of them are whispering among themselves that HOTT is a plot to take some of their funding.) The scenario that worries me is that those who hold real power, operating on the basis of what I have called “market rationality” or the “entrepreneurial mindset,” but which Wendy Brown calls by the more appropriate name of “neoliberal reason” in her new book “Undoing the Demos,” may seize on some of the more ideological claims to substitute machine-generated mathematics for pure mathematics. I am certain that it is perfectly consistent with neoliberal reason, more so than the free autonomous practice in which I chose to participate.

So there will be more politics when I return to this blog.


Richard Williamson: “This reflects a real difference in perspective, because I don’t believe there are considerations more fundamental than sociology and history. For better or worse, mathematics is what human beings choose to make it.”

I just wish to say that I agree entirely with the second sentence, and feel that this is not appreciated enough in thinking about the nature of mathematical foundations. Regarding sociological factors, I was referring specifically to the sociology of the mathematical community in the sentence that you discussed. Without entering into a philosophical debate, I would just say, regarding the first sentence, that I have a lot of sympathy for the point of view that I think you are suggesting, but that the ‘fundamental’ in my sentence was intended in the sense that, recognising the influence of the factors I listed, we should try to free ourselves from them in our thinking, and try to get to the heart of things without extraneous ‘baggage’.

“There is no external guidance, certainly not from an operating system.”

Again, I agree with the underlying point that I think you are making, but feel that this sentence goes too far. Human thinking about mathematics could perfectly well be informed by work on formalising mathematics on a computer, as it could by any other activity. And actually, I have worked on formalising mathematics in practice (in Coq), and have found this to be a highly rewarding activity from a mathematical point of view: one swiftly realises that one takes all kinds of things for granted, and thinking more deeply about these things can lead to very interesting mathematics (and interesting independently of any foundational considerations).

“What I don’t understand is why it’s so important to have a synthetic theory. ”

People differ in what they mean by a ‘synthetic’ theory of something, but an important aspect which I think most people would agree on is that the theory should deal directly with the gadgets involved, and prescribe some fundamental things that we can do with them, rather than define the gadget in terms of some more primitive theory, and then demonstrate within that more primitive theory that we can do the fundamental things. In synthetic differential geometry, for instance, one takes the notion of a ‘smooth object’ as primitive, and prescribes some fundamental things that we can do with these smooth objects: a smooth object is not a set with some extra structure.

The motivation can differ, but one important idea is that a synthetic theory should capture the essence of something according to our intuition and experience, removing ‘technical baggage’ that seems rather just a means to express that intuition and experience in some particular language (set theory, or whatever). Rather than always carry around and have to deal with this technical language, we just do it once and for all (constructing a ‘model’), and after that we can forget about it and just work in the elegant synthetic theory.

The case that we were discussing at the n-café is, I would say, rather specific, and one should not draw too much from it about synthetic theories in general. The idea of considering HoTT/UF as a synthetic theory of weak infinity-groupoids is that, as above, one should be able to work nicely with the notion as we expect to be able to do, without worrying about technicalities. Now the technicalities of weak infinity-groupoids as an algebraic/structural notion are not well understood: I think that Mike and all the other contributors to the discussion would agree with that. Where we disagree is the following: I feel that these technicalities cannot be brushed under the carpet, and that until we understand them better, we cannot claim to have a good understanding of weak infinity-groupoids. To say that we have a synthetic theory of them seems then to me to be little more than a vacuous statement. In particular, I do not regard the univalence axiom, which is central to the claim that we have a synthetic theory of weak $\infty$-groupoids, as ‘magically’ resolving all the technicalities and leading us to a good understanding of the notion. Most of the discussion involves me attempting to explain this in a broader context, namely the question of what does or does not give the primitive gadgets in a synthetic theory meaning.

In summary, regarding the ‘importance’ of a synthetic theory of weak infinity-groupoids, I would say that it is important to me because I think that the notion of a weak infinity-groupoid is very deep, and that a good understanding of the notion, which _would_ allow us to construct what I would consider to be a synthetic theory of weak infinity-groupoids, would lead to all kinds of very deep mathematical breakthroughs (yes, in number theory as well!). I am concerned that a superficial understanding of what a synthetic theory of weak infinity-groupoids should be (I do not refer to Mike and the other contributors to the n-café discussion here) could lead to a lack of understanding of the significance of weak $\infty$-groupoids in this deeper sense; all the more because this is a trend which already is dominant amongst a very large community of mathematicians, who take Lurie’s work unquestioningly as their starting point in $\infty$-category theory.

“What’s called synthetic differential geometry has little relevance, as far as I can tell, to the proof of the Poincaré conjecture”

Whilst I do not work actively on synthetic differential geometry myself, there is no reason why synthetic differential geometry could not be developed far enough to carry out those parts of Perelman’s proof of the Poincaré conjecture that fall within its remit. In particular, the fact that it is a synthetic theory has absolutely nothing to do with it.


Jules: You have strong opinions about HoTT, but at the same time you admit that you don’t know much about it. Well, I don’t know much about it either, but I do know a little bit.

Homotopy type theory is really an incremental improvement over dependent type theory, adding a more flexible notion of equality. Your statements seem to be as applicable to dependent type theory in general as to homotopy type theory in particular.

There are several different points of discussion. One is how useful foundations are in the first place. Most mathematicians do not care much about set theory since they work at a higher level of abstraction. However, I’m going to assume that you are convinced about the importance of having some foundation, whether it’s ZFC or type theory or something else.

So let’s look at ZFC. You define a function A -> B as a set of ordered pairs (x,y) with x in A and y in B. Then you require that for each x in A there is exactly one pair (x,y) in the set. However, ordered pairs are not primitive in ZFC, so you need to define them somehow on top of sets. We can use Kuratowski’s method and define (x,y) as {{x}, {x,y}}. Do you see how ugly this is getting? Next we define natural numbers. The usual way is to define 0 as the empty set, and define n as {0,1,2,…,n-1}, so that for example 3 = {{},{{}},{{},{{}}}}. You can go on and define integers, rationals, groups, fields, and everything else on top of sets. The issue is that everything lives in the same “space”: the space of sets that contain sets (ignore paradoxes for now). So you could compute the union of 3 and 8, or compute the union of 3 and the function x -> x/2, since they’re all just sets. If you want to do formal proofs in ZFC down to the axioms it gets even uglier.
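The encodings just described can be sketched executably. A minimal illustration in Python, with `frozenset` standing in for a pure set (an analogy, of course, not actual ZFC):

```python
def kuratowski_pair(x, y):
    """Kuratowski encoding: (x, y) := {{x}, {x, y}}."""
    return frozenset({frozenset({x}), frozenset({x, y})})

def von_neumann(n):
    """von Neumann naturals: 0 := {}, n+1 := n ∪ {n}, so n = {0, 1, ..., n-1}."""
    acc = frozenset()
    for _ in range(n):
        acc = acc | frozenset({acc})
    return acc

three = von_neumann(3)   # {{}, {{}}, {{}, {{}}}}

# Because everything is "just a set", meaningless hybrids are perfectly legal:
# the union of the number 3 with an ordered pair raises no objection at all.
weird = three | kuratowski_pair(1, 2)
```

The last line is the point of the comment: nothing in the untyped world of sets stops you from forming it, even though it means nothing.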

In type theory on the other hand the situation is far better! Different kinds of objects live in different types, and you can’t do something silly like union 3 and the function x -> x/2. Furthermore, the usual mathematical structures like natural numbers, groups, etcetera have short definitions in type theory. For example one defines the natural numbers simply as Nat = Z | S Nat, i.e. a natural number is either zero (Z) or the successor of a natural number (S Nat). This is much closer to how mathematicians really think about natural numbers. Proofs are also much shorter and closer to the intuitive proofs than formal proofs in ZFC. Another advantage of type theory is that the number of concepts is smaller. Sets and propositions are both captured and generalized by the concept of type. This probably hasn’t convinced you, but hopefully you see why many people think that if you’re going to do formal proofs, type theory is much better than ZFC.
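The inductive definition Nat = Z | S Nat can likewise be sketched; here is a rendering as a Python sum type (in a typed functional language or a proof assistant the definition really is the one-liner above):

```python
from dataclasses import dataclass
from typing import Union

# Nat = Z | S Nat: a natural number is zero, or the successor of a natural number.
@dataclass(frozen=True)
class Z:
    pass

@dataclass(frozen=True)
class S:
    pred: "Nat"

Nat = Union[Z, S]

def add(m: Nat, n: Nat) -> Nat:
    """Addition by structural recursion: Z + n = n, (S m) + n = S (m + n)."""
    return n if isinstance(m, Z) else S(add(m.pred, n))

def to_int(n: Nat) -> int:
    """Convert back to a Python int, for display only."""
    return 0 if isinstance(n, Z) else 1 + to_int(n.pred)

two, three = S(S(Z())), S(S(S(Z())))
```

Unlike the set encoding, `add(two, kuratowski_pair(...))` is simply ill-typed here: the definition itself rules out the meaningless unions of the previous paragraph.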

The next question is whether we should do formal proofs. Note that in some areas of computer science, proofs are already routinely formalized in type theory, and checked by computer. Type theory / HoTT as it stands is not adequate for formalizing all of mathematics. However, the obstacles are mainly of a practical nature, not of a fundamental one. Most of modern mathematics relies on a huge collection of definitions and proofs. To formalize such a piece of mathematics one would first need to formalize all definitions and theorems it depends on. That is a lot of work, but it is work that only needs to be done once. However, the actual formalization is probably much more achievable than you might expect. Proofs are not too long, and simple proofs, even those that use induction, can often be completely automated. That said, the user interface of proof assistants is not good enough yet. Formal proofs are much less readable than informal ones, and a lot more work to write too, but the gap continues to get closed more and more. In some cases formal proof has an advantage already, because certain proofs or steps of proofs can be completely automated, either because some subset of the theory is decidable (such as Presburger arithmetic, or the theory of uninterpreted functions), or because good heuristics exist (such as for evaluating integrals, like Maple/Mathematica already do). Just as happened with chess, eventually computers will become as good or better than humans at larger and larger classes of proofs and calculations.

Once you have a formal notion of proof, and a way to check proofs, finding the proof of a theorem becomes a search problem. One could enumerate all proofs one by one and check whether each proves the desired theorem T. Of course the set of proofs is infinite, so this could take forever if the theorem is false. One could simultaneously try to search for a proof of (not T). This still does not give you a terminating procedure, because the theorem could be independent of the formal system. However, most theorems we are interested in are not independent. Still, the search space is huge and it’s not practical to search for proofs like this, but there is no reason to believe that humans are fundamentally better at this than computers. Better heuristics and better ways of cutting down the search space will be developed. The game tree of chess can also not be completely enumerated because it’s too large, but computers are still far better at chess than humans. At one point in history this would be very hard to believe, but we now live in that reality. In my mind it’s almost certain that the same thing will happen for mathematical proofs eventually (though probably not in our lifetime). If this does happen it does not mean that humans are irrelevant in mathematics. You still need humans to state the right theorems and definitions to build a beautiful and useful mathematics, and humans will still want to understand why a theorem is true, not simply that it is true.
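The proof-search-as-search idea can be made concrete with a toy example (my illustration, not from the comment): Hofstadter's MIU system, where a "theorem" is any string derivable from the axiom MI by four rewrite rules, and a "proof" is the chain of rewrites. Proof search is then literally breadth-first search:

```python
from collections import deque

def successors(s):
    """Apply each MIU rewrite rule everywhere it fits."""
    out = set()
    if s.endswith("I"):                      # rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):                    # rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):              # rule 3: replace III by U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):              # rule 4: delete UU
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def prove(goal, axiom="MI", limit=20000):
    """BFS for a derivation of `goal`; None if none is found within the budget."""
    queue, seen = deque([[axiom]]), {axiom}
    while queue and len(seen) < limit:
        path = queue.popleft()
        if path[-1] == goal:
            return path          # the "proof": a chain of rewrites from the axiom
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

`prove("MIIU")` finds the derivation MI → MII → MIIU, while `prove("MU")` exhausts its budget: MU is famously underivable, which illustrates the comment's caveat that naive search never terminates on a false (or independent) statement without an external bound.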


mathematicswithoutapologiesPost authorI have strong opinions about many things, but not about HOTT! What I said is that I don’t have a strong opinion in favor of HOTT, and this is treated by some people (along the lines of foie gras) as hostility. I feel quite strongly that people should suspend their prejudices and read what I am actually saying before leaping to conclusions.

I also am far from convinced that HOTT will be adopted as foundations by the majority of mathematicians, and I have tried to outline what (in my reasonably strong sociological opinion) would be needed for that to take place. Some readers may see this as a sign of hostility or some other regrettable reaction, but I can’t help that.

Certainly I don’t see the relevance of dependent type theory to my own work as a mathematician, and so whatever I wrote about HOTT is indeed just as relevant to any kind of type theory.

You write

“I’m going to assume that you are convinced about the importance of having some foundation, whether it’s ZFC or type theory or something else.”

But that’s an incorrect assumption. Chapter 7 is addressed (among other things) to the relevance of the metaphor of foundations to mathematical practice. Yu. I. Manin has written on this to great effect; I quote him and other mathematicians on the role of foundations and the different meanings this word has in mathematical use. Weil wrote a book called “Foundations of Algebraic Geometry” which has nothing to do with ZFC or anything like it.

Regarding the definition of natural numbers in ZFC, here is what one of my alter egos says in a dialogue with the other alter ego:

N.T.: You sound suspiciously like a logician. When logicians started worrying about using infinite numbers to count things, they convinced themselves they already didn’t know how to explain what they were doing when they counted finite collections of objects. Even one object, for that matter. But then they realized that just because they didn’t know how to count one object, they did in fact know how to count zero objects, because that’s how many objects you have counted before you have figured out how to start counting. On that insight, starting with zero the logicians built a whole theory of counting that includes a place for the square root of 2 as well as any other number you can think of. Some people still claim this is the only reliable way to talk about numbers. But I find it most helpful to think of a number as an answer to a question.

I have a lot of respect for logicians, and I have a lot of respect (in general) for philosophers. But philosophers who think mathematics is somehow “founded” on logic are, in my (strong) opinion, missing the point. This wouldn’t be such a problem if this attitude didn’t make it very difficult for philosophers attached to other perspectives to find university positions.

The other alter ego replies:

P.A.: I don’t find that at all helpful. Most questions can’t be answered by numbers.

N.T.: Of course not. A number is an answer to a question about numbers.
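The construction N.T. alludes to, building the whole number system out of nothing starting from zero, is the von Neumann encoding of the naturals in set theory. A minimal sketch (the function names are mine, not from the post):

```python
# A sketch (not from the post) of the von Neumann encoding that the
# dialogue alludes to: 0 is the empty set ("how many objects you have
# counted before you have figured out how to start counting"), and each
# successor adds the number itself as a new element: n + 1 = n ∪ {n}.

def zero():
    return frozenset()

def succ(n):
    return n | frozenset([n])  # n + 1 = n ∪ {n}

def von_neumann(k):
    """The set that encodes the natural number k."""
    n = zero()
    for _ in range(k):
        n = succ(n)
    return n

# The encoding of k is a set with exactly k elements, so cardinality
# recovers the number; for instance 3 is encoded as {0, 1, 2}.
assert len(von_neumann(3)) == 3
assert von_neumann(3) == frozenset(von_neumann(i) for i in range(3))
```

The point of the dialogue survives the formalism: every number here really is built by counting upward from the empty act of counting.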

Jules
I certainly agree that mathematics is not and will never be founded on logic, set theory, or type theory in the sense that mathematics *follows* from the foundations, but I do think that mathematics is built on those foundations. Or perhaps it is more accurate to say that we use foundations to *model* different kinds of mathematics.

Take the natural numbers. Natural numbers are out there, and depending on your school of philosophy they are either an inherent feature of our universe or a concept natural to the human psyche. We can state theorems about natural numbers, and they can in some sense be empirically verified. For example, the theorem that there are no solutions to a^n + b^n = c^n in positive integers for n > 2 could be empirically disproven by exhibiting a, b, c, n that solve the equation. So natural numbers as a concept are largely independent of whatever foundations we pick.
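Jules's point about empirical refutability can be made concrete: a counterexample to Fermat's Last Theorem would be a finite, mechanically checkable certificate. A hedged sketch (the function name and bounds are arbitrary choices of mine):

```python
# A sketch (names and bounds are mine, not from the comment) of the
# "empirical" point: Fermat's Last Theorem asserts a^n + b^n = c^n has no
# positive-integer solutions for n > 2, so a single quadruple (a, b, c, n)
# found by brute force would refute it.

def fermat_counterexample(max_base, max_exp):
    """Search exhaustively; return (a, b, c, n) solving a^n + b^n = c^n
    with n > 2, or None if no solution exists within the bounds."""
    for n in range(3, max_exp + 1):
        # Precompute n-th powers so each candidate sum is a dict lookup.
        # c^n = a^n + b^n <= 2 * max_base^n, hence c < 2 * max_base.
        powers = {k ** n: k for k in range(1, 2 * max_base)}
        for a in range(1, max_base + 1):
            for b in range(a, max_base + 1):
                c = powers.get(a ** n + b ** n)
                if c is not None:
                    return (a, b, c, n)
    return None

# By Wiles's theorem the search must come up empty.
assert fermat_counterexample(40, 6) is None
```

The exact integer arithmetic matters here: floating-point "near misses" (such as the famous 3987^12 + 4365^12 ≈ 4472^12) pass approximate checks but fail this one.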

That does not mean that foundations are useless. They are a system in which we try to model things we are interested in. We can axiomatize the natural numbers in a foundational system, and then prove theorems about them in the system. We would then expect/hope that those theorems also empirically hold. This situation is quite similar to what Descartes did for geometry. He modeled geometry inside high school algebra, just like we can model natural numbers inside set theory. It’s the same story for limits / continuous functions: Weierstrass modeled them inside logic with his epsilon-delta definition. The same again when Lebesgue formalized the concept of volume with measure theory. There are of course numerous other examples. My point is that modeling something inside some other system is useful because it allows us to make more precise arguments. Of course one has to be careful that our model really corresponds to the thing we wanted to model.
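Weierstrass's "model" of continuity inside logic, mentioned above, is the standard epsilon-delta formula (supplied here for concreteness; it is not spelled out in the original comment):

```latex
f \text{ is continuous at } a \;\iff\;
\forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x :\;
|x - a| < \delta \;\Longrightarrow\; |f(x) - f(a)| < \varepsilon
```

Every appeal to a "limit" thereby becomes a first-order statement about real numbers, which is exactly the sense in which a vague geometric notion was modeled inside a more basic system.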

In my view this is exactly the role of a foundation: it’s just a system that formalizes the basic rules of reasoning, within which one can model the rest of mathematics. It is the ground floor of the tower of models built upon models. Even if nothing else, finding a good foundational system is an interesting mathematical activity in itself. In my opinion that sells foundations short though. Foundations do have an effect on other mathematics. For example look at the open/closed set definition of topology vs the groupoid model of homotopy type theory. Or look at the definition of a category in type theory vs set theory.

mathematicswithoutapologies (Post author)
I certainly agree that the study of foundations is a legitimate and important area of mathematics, and I consider the perspective that “mathematics is built on foundations” to be a legitimate philosophical position, although my own position is different.

I don’t believe that the case has been made convincingly that mathematics is in urgent need of new foundations. It remains to be seen whether type theory will eventually come to be accepted as more convenient, more natural, more fruitful, or more intellectually credible than set theory as a common language for mathematical research. At present, few of those studying type theory are mathematicians other than logicians. That may or may not change, but I don’t expect it to change on the basis of rhetoric alone.

Mike Shulman
I certainly agree that there are people out there with unreasonable hopes or expectations. But please don’t interpret such statements by a few people as representative of the HoTT/UF community as a whole. There are plenty of us in that community who strive to be more reasonable and less militant.

Personally, I think it’s fairly likely that eventually most or all mathematical proofs will be verified with computers, but my guess (which I’ve said elsewhere too) is that it’ll take a century or so to get to that point, because it won’t happen until checking your proof with a computer is as easy as, or easier than, typesetting it in LaTeX. I don’t believe that computers will ever replace human mathematicians; I think they will always be only a tool that we use, though one of ever-increasing power, and that we need to (and will) maintain the humanness of mathematics.

David Corfield
I’m sorry to have come late to the discussion over the past three posts on this topic. For what they are worth, some comments from a philosopher interested in HoTT:

* The computer-checking aspect I find a distraction.

* That HoTT only allows you to speak in a ‘structural’ way I find interesting (https://golem.ph.utexas.edu/category/2015/04/the_structure_of_a.html). If it spares me having to hear from philosophers using their usual tools to capture what it means to speak about ‘the structure of’ a mathematical entity, I’ll be grateful.

* Plain HoTT currently reconstructs results only from the early days of homotopy theory, and is probably many years from the research front. On the other hand, Charles Rezk thought it worth his while to reverse-engineer a HoTT proof of the Blakers-Massey theorem (http://www.math.uiuc.edu/~rezk/freudenthal-and-blakers-massey.pdf).

* Adding extra features to HoTT allows for a language that comes closer to the research front. Informal use of one such variant, cohesive HoTT, can be seen in Schreiber’s ‘Parameterized higher WZW terms’ (https://dl.dropboxusercontent.com/u/12630719/cwzw.pdf). No doubt it could all be done ‘externally’ in terms of infinity-toposes, but it wasn’t.

* Most ambitiously, there’s a chance to stimulate a much more interesting space for philosophical dialogue concerning language, perception, cognition, mathematics, computer science and physics. I intend to spell out at some point the kind of thing I have in mind at the n-Category Café. But we should be excited that there’s a new language on the block, one which has traditional predicate logic (though now typed) and set theory (though now structural rather than material) as truncations (in a formal sense). How did more of us not realise sooner that dependent type theory is so much better a way to represent natural language than predicate logic (A. Ranta, Type Theoretical Grammar, Oxford University Press, Oxford, 1994)?

David Corfield
One more thing:

* It’s generally a good thing to find ways for different communities to talk to each other, since constructions often travel well, and this allows ‘triangulation’. So linear logic, beloved of a certain kind of computer scientist, suggests that one look for a linear variant of HoTT. This is found via dependent linear type theory (http://ncatlab.org/nlab/show/dependent+linear+type+theory) and linear HoTT, which has a semantics in Grothendieck’s yoga of six functors in the Wirthmüller-context case.

A slogan emerges (at that page): twisted generalized cohomology theory is ∞-categorical semantics of linear homotopy type theory.

Now we have opportunities: linear logic’s ‘exponential modality’ relates to the Fock space construction, and we’re now not far from the Goodwillie calculus and tangent ∞-toposes.

Ω∞ ◦ Σ∞₊ is now seen, via a decomposition in linear HoTT, as an adjunction whose “left adjoint is the process of forming spaces of sections, hence quantum states (and whose induced comonad encodes free second quantization)”.

We’re seeing mathematical physics, computer science and algebraic topology talk to one another.

mathematicswithoutapologies (Post author)
I’ll be brief because I’m leaving soon for England. (I’d be delighted if David made the trip to Cambridge on Tuesday for my presentation at the Newton Institute — link below.) I’m glad David mentioned Lurie’s work in his chapter. To judge by his brief remarks on the Breakthrough Prize video, and in his talk at the Breakthrough Prize Symposium at Stanford last November, Lurie has thought more subtly than most mathematicians about philosophy; he pointedly avoided some of the typical shortcuts, although he didn’t formulate a positive philosophy of mathematics, nor, I imagine, would he want to.

I’ve mentioned to David in private that I have been struggling with Lurie’s book (physically as well as intellectually). Early in my career I made the decision to get by with only a superficial understanding of homotopy theory, and now this has returned to haunt me. But it’s interesting to see just how this came about. The beginnings of derived algebraic geometry date back to the 60s, and Kapranov and Voevodsky (still a student at the time) were already working on higher categories when I was in Moscow in 1989-90. But I understand that the field really took off when several groups of mathematicians concluded that it provided the most reliable formalism for the kinds of counting problems that arise in physics (especially in mirror symmetry) and, from a somewhat different direction, for formalizing higher versions of topological quantum field theory. I hesitate to say anything more specific, because I am only aware of these developments as a distant spectator, but it was clear to me from the talks at last fall’s Geometric Representation Theory program at MSRI that these strands have merged, and that familiarity with the full range of techniques is now a requirement for entry into this field.

Derived algebraic geometry, and therefore infinity-categories, are becoming mainstream in the geometric Langlands program; Gaitsgory and his students and close collaborators are leading the charge, with Drinfeld caught up in the action. Around the same time Beilinson found a DAG proof of the comparison theorem in p-adic Hodge theory. These two circumstances led me to hope that the current dependence of the p-adic Langlands program on endless computations with complicated algebraic structures could be replaced by a geometric formalism in which the pathologies find a conceptual explanation. To understand my role in this you should try to imagine how a dog would express an intuition about some major social challenge — police brutality, say, or global warming. Whether or not the dog’s intuition is correct, it lacks the language to express it, and can only keep barking at the competent authorities until it is satisfied that its message has been acknowledged. My own barking has recently taken the form of a speculative paper that I have submitted to a special issue of a journal, and that I will post on my home page and arxiv once the editors confirm that it’s fit for serious consideration.

Two more points: just as the counting problems inspired by mirror symmetry favored the development of new geometric formalisms, the problem of counting points of varieties over finite fields was the spur for Grothendieck’s new (small-f) foundations for algebraic geometry. The foundations were adopted in response to a perceived need in a conventional counting problem. Nowadays everyone uses étale cohomology, but the actual counting of points of varieties over finite fields is a relatively small specialty, and is usually informed by a geometric outlook inspired by Grothendieck’s ideas about motives.

And it’s interesting to see that Rezk has reverse-engineered a HOTT proof. Rezk is a hardcore mainstream algebraic topologist, and this says two things: that such people are paying attention to HOTT, and also that, as I would expect, they are the ones best placed to do so, because the intuitions derive from homotopy theory and not from other branches of mathematics. I don’t think it’s a coincidence that the mathematicians active on the n-Category Café are practically all homotopy theorists. There’s actually a third thing: Rezk’s construction (which I don’t pretend to understand) reminds me of a similar incident in arithmetic geometry about 15 years ago. After Hrushovski gave a model-theoretic proof of the Manin-Mumford conjecture, Pink and Rössler reverse-engineered it in algebraic geometry, eliminating the logic. This was seen at the time as a double gift: a conjecture in arithmetic geometry was proved by the methods of arithmetic geometry, which is consistent with an epistemological virtue that is common in mathematics; and Pink and Rössler learned model theory so that the rest of us wouldn’t have to.

David Corfield
I’m sorry, but the term begins here with plenty of meetings next week, so I won’t be able to make the trip to Cambridge.

As I think Mike Shulman said on another thread, HoTT’s fate goes hand in hand with the homotopification of maths (or something to that effect). No doubt one can then do things ‘simply’ in terms of infinity-toposes, but given that you admit to finding Lurie’s book hard going, it is just possible that HoTT presents an easier path. Urs Schreiber said of the Blakers-Massey result:

“I feel like highlighting the curious aspect that its, say, publication history now serves to make an additional case for HoTT all in itself: for it was most curious to see Charles Rezk – originator of higher topos theory – ask Favonia Mlatus to explain the new proof of Blakers-Massey to him, with Favonia – I think that’s fair to say – being unsure as to what might possibly be non-obvious and in need of explanation about his formal proof, while at the same time – I think that’s still fair to say – not being by far an expert on homotopy theory as Charles is. You see what I mean: with HoTT in his hands a relative newbie to homotopy theory was able to produce a proof that leading expert(s) in homotopy theory had sought in vain.”

Fun, if true.

But it’s not just homotopy theory in the strict sense that might be aided. As soon as one realises that physics needs not just gauge equivalence but gauge equivalenceS, and then gauge-of-gauge equivalences, and so on, there’s an evident role for something that can express paths-of-paths-…-of-paths. A little cohesion in the mix is then needed. (By the way, Rezk came up with a new cohesion: http://www.math.uiuc.edu/~rezk/global-cohesion.pdf)

I wonder if Minhyong Kim could find an application of HoTT in number theory. On the one hand he did once speak of an ‘arithmetic QFT’. On the other hand, I seem to remember him saying that π_1 was enough.

Then there’s also the dream of ‘topological Langlands’, which generated some interesting answers, including one (http://mathoverflow.net/q/7283/447) where you get a mention:

“Unfortunately, I don’t have a clear idea myself about what Topological Langlands should be. The best I can say is that a number of elements that show up in the story of Langlands (especially local langlands, a la Harris and Taylor) show up in homotopy theory: things like Lubin-Tate formal groups, p-divisible groups, automorphic forms, Hecke operators.

One thought. The basic case of the Langlands correspondence is supposed to be class field theory. In topology, the analogue of class field theory (over the rationals, at least), appears (to me) to be the story about the “Adams conjecture”, which was proved around 1969 by Quillen and Sullivan.”

(Not quite sure why I keep mentioning Rezk.)

mathematicswithoutapologies (Post author)
Sorry I won’t be seeing you this time.

Lurie’s book is extremely clear. I find it difficult because of my limited experience with homotopy theory and because I don’t see where it’s going. Also because it’s much too heavy to carry around in the subway, as I did in Paris for two weeks last year. At some point a problem will appear that is directly relevant to my work and that requires Lurie’s methods to solve, and then, with the help of a few trusted specialists, I’ll be able to work my way through the vastness of Lurie’s works to the results I actually need. I anticipate that this will be much easier than trying to work through HOTT because none of my intuitions are aligned with type theory.

There has been periodic speculation about a Langlands program over fields of higher dimension, not only over number fields and function fields. This speculation involves algebraic K-theory, higher category theory, and the like. There is already a higher-dimensional class field theory involving K-theory and algebraic cycles (see http://www.renyi.hu/~szamuely/bour09.pdf) but a non-commutative version remains to be formulated. This is where one might expect homotopy theory to become relevant to number theory, and maybe the homotopy groups of spheres and class field theory will be seen as two specializations of a single theory. This is the sort of thing I had in mind in the last few paragraphs of my Chapter 7.

David Corfield
A couple of years later, note ‘A Generalized Blakers-Massey Theorem’ (https://arxiv.org/abs/1703.09050):

We prove a generalization of the classical connectivity theorem of Blakers-Massey, valid in an arbitrary higher topos and with respect to an arbitrary modality, that is, a factorization system (L,R) in which the left class is stable by base change. We explain how to rederive the classical result, as well as the recent generalization by Chachólski-Scherer-Werndli. Our proof is inspired by the one given in Homotopy Type Theory.

And a follow-up: https://arxiv.org/abs/1703.09632.

Tha Illuminator
You write: “[If] the mathematics of practicing mathematicians is seen as an imperfect approximation to the ‘real’ mathematics of formal systems, then we have a philosophical disagreement. Too many philosophers of mathematics hold this position (and philosophers who see things differently have a very hard time getting jobs).”

Can you provide citations for these ‘too many philosophers’? I’m a philosopher who travels in philosophy-of-math and logic circles, and to be honest this doesn’t really match up with my impressions/experiences. (The simple historical argument is: real mathematics was around for centuries before Frege.) But I am certainly not the most sociologically aware person, so I’m happy to be corrected…

mathematicswithoutapologies (Post author)
My claim was based on a number of informal discussions with “philosophers who see things differently” and was not independently verified. Will you accept this citation from the Stanford Encyclopedia of Philosophy as circumstantial evidence?

“First-order Zermelo-Fraenkel set theory is widely regarded as the standard of mathematical correctness, in the sense that a proof is correct if and only if it can be formalised as a formal proof in set theory.”

http://plato.stanford.edu/entries/tarski-truth/

If that doesn’t suffice, you’ll have to wait a few weeks until I have time to go through my archives. Some of my old writings on this topic were met with disapproval, even astonishment, on one of the philosophy of mathematics online discussion groups, but I don’t have the link handy.

Tha Illuminator
Thank you! That’s a very clear statement of the view — exactly what I was hoping for.

I’ll just add that many logicians and philosophers of math see formal systems more as (something like) idealizations or artificial models of the (real) mathematical proofs that appear in journals and textbooks. However, the analogy is not quite perfect with idealizations in the natural sciences, because an idealization in e.g. physics usually(?) involves _simplifying_ the physical system being modeled (e.g. omitting friction or wind resistance, or letting the number of molecules in the system go to infinity), whereas an everyday proof in a journal article is usually far less complicated than its formalized analogue in the formal system.

But those ‘many logicians and philosophers of math’ I just described may be in the minority in the field as a whole; I’m honestly not certain.
