Univalent Foundations: “No Comment.”

“No Comment” was Jacob Lurie’s reaction when the panel of Breakthrough Prize laureates was asked, at last November’s Breakthrough Prize Symposium at Stanford, what they thought of the “prospects for the Univalent Foundations program and whether that is a contender for changing” foundations of mathematics.  The other panelists had deferred to Lurie, in some amusement; after Lurie’s “No Comment,” Terry Tao gave the diplomatic answer that “the person you need to ask is not sitting here.”  Of course this can also be understood to mean that the five laureates don’t feel concerned by the question.

The same questioner continued:  “Is anyone willing to bet against” the prediction that computer-verified proofs will be of “widespread use in mathematics” within 25 years?  Lurie immediately replied:  “I’ll take that!”  to which Richard Taylor added “Yes, me too.”  Terry Tao thought that some people, at least, would be working with computer verification at that point.

From some recent comments on this blog it seems not all readers are happy with this attitude; please don’t blame me!

Simon Donaldson made a point (around 30:15) with which most of my colleagues would agree:  “One doesn’t read a mathematical paper; what one gets is the idea to reconstruct the argument. It’s not that people (generally speaking) would be … checking the logic line by line — they would go and extract the fundamental idea; that’s really the essential thing.”  This is very similar to one of my earlier quotations from Atiyah:  the proof “isn’t the primary thing at all.”  I hope it’s clear just how radically different this is from the mainstream philosophical position, that the written proof is an imperfect approximation to an ideal formalized proof.  Donaldson (and elsewhere Atiyah, and even earlier Hardy) is saying that the written proof is an imperfect approximation to a pure stream of ideas.  In that optic, not only the language of a formal proof, but even the standardized language of a publishable proof, is an obstacle to understanding, a necessary evil in the absence of a pure language of ideas.

Richard Thomas, the geometer from Imperial College, reminded me of this exchange on Monday; but I shouldn’t have needed reminding, because I was sitting in the room during the whole Symposium, and I probably even took notes.  The Symposium — some 40 minutes of answers to questions posed by Breakthrough Prize co-founder, the billionaire Yuri Milner, followed by 20 minutes of answers to questions from the audience — can be viewed on Youtube and is quite interesting from start to finish.  (The sequence described in the previous paragraph starts at about 54:50; the “No Comment” is at 55:19.)  As I wrote in my Slate piece, Milner really does know what mathematics is about, though I remember at the time thinking that he seemed a little disappointed that the laureates showed so little interest in computer-assisted proof.  Maxim Kontsevich, who comes as close as any mathematician I know to speaking in pure streams of ideas, made some subtle and unexpected comments; maybe I’ll return to them later.  He did say (around 41:40) that AI is not so hard and that there may be mathematically intelligent computers in our lifetime.  When Milner asked what made him so optimistic, Kontsevich answered that he was actually pessimistic:  he doesn’t see any fundamental difficulty, but “it would be immoral to work on that.”

Tao was the most positive; at about 31:20 he expressed the hope that “some day we may actually write our papers, not in LaTeX … but in language that some smart software could convert into a formal language, and every so often you get a compilation error:  THE COMPUTER DOES NOT UNDERSTAND HOW YOU DERIVE THIS STEP.”  To Milner’s question, “Do you think computer verification will get to that stage where it will verify every new paper?” Tao replied “Oh, I hope so.”

And this reminds me of a quite different concern I have — this is MH speaking now, not the panelists — about computer-verifiable proofs:  the prospect of technological lock-in.  The Wikipedia article has a perfectly good summary of the pitfalls.  For better or for worse, mathematical typesetting is now locked-in to LaTeX, but even if it weren’t as generally satisfactory as it is, it doesn’t constrain the imagination.  Algebraic geometers had the flexibility to slough off (small f-) foundations when they grew too confining or inflexible; but imagine what would happen if the technology of mathematical publication were yoked to a specific proof verification system (25, or 50, or 100 years from now).  Suppose (as is extremely likely) it’s a system that is well-adapted to verifying that a computer program will run as it should (the original, and perfectly understandable, motivation for automated proof verification).  Would mathematics then have to adapt in turn to the system, and would mathematicians who refuse to adapt — for example, because they find the system an obstacle to expressing their ideas — be forced to resort to samizdat, or even blogs, to get their work published?

101 thoughts on “Univalent Foundations: “No Comment.””

  1. DT

    The current system is already full of obstacles to expressing our ideas. My understanding is that Voevodsky’s project developed out of frustration with some of those obstacles.

    In any field it’s a struggle to turn informal arguments into formal ones. In algebraic topology over the decades they have been having to struggle harder and harder — I don’t think it’s the same in every field. Every so often the formalism of six operations, or something, has to be developed from scratch. Those redevelopments involve some cake and a lot of spinach.

    Lurie’s book “Higher Topos Theory” is an expanded form of his paper “On \infty-Topoi.” The paper is 50 pages of pure cake, beautifully and informally written. The book is 1000 pages long. There is some new cake there, but not 20 times as much. Even at a thousand pages, it doesn’t take as long to read as it took Lurie to write: the paper was on the arxiv in 2003. The book was published in 2009.

    Voevodsky might have gone in the direction he did, because he didn’t want to take that long to communicate what seem to him to be simple ideas. The dream is that the computer assistant will eat the spinach for you. The nightmare is that it will force-feed it to you.

    Some software developers are living in the nightmare; ask them about “Error establishing a database connection.” But a lot of software consumers are living in the dream: the modern system of buying diapers or using Mathematical Reviews really is a marvel.

    I’m sure it will make its march through the publication process eventually, but meanwhile the majority of Lurie’s output is samizdat. Not because he’s refused to adapt to the current system, but because adapting is taking a long time. Any idea that is, say, 10 times as difficult to wrangle into an Annals of Math book as Lurie’s ideas are, is for now beyond the reach of our discipline. Isn’t it conceivable that those ideas could exist, and be important? I’m rooting for Voevodsky.

    1. Jacob Lurie

      To clarify: for the most part, I didn’t want to comment because I haven’t really been following the development of homotopy type theory and didn’t feel qualified to speak about it (especially on camera).

      But since I’m weighing in now, I think that DT’s comment is misleading. I agree that algebraic topologists (and other practitioners of what might be called “homotopy coherent mathematics”) face special obstacles in expressing their ideas. To a large extent, these difficulties come from the fact that the notion of “commutative diagram” which appears in ordinary mathematics needs to be replaced by a more sophisticated notion of “homotopy coherent diagram”. And while the commutativity of a diagram is a property, the homotopy coherence of a diagram involves an additional datum (a “witness”) which can be of a very complicated nature (depending on the diagram). The theory of quasicategories is an attempt to address this issue by packaging and manipulating these “witnesses” in an efficient way, so that you can talk about them without losing your mind.

      It does not seem to me that this issue is addressed by homotopy type theory at all. To my knowledge (and an expert should correct me if I’m wrong), it is an open question whether or not the notion of “homotopy coherent diagram” can even be formulated in the language of homotopy type theory. And I think that the expectation is that this question will have a negative answer unless the theory is modified in some fundamental way (at least this is my expectation based on thinking about a similar issue in a different context). The theory of quasicategories was developed in order to have a good language for talking about certain types of mathematical structures (for example, structured ring spectra) which can perhaps not even be defined in the setting of homotopy type theory (at least in a way that uses “types” to play the role of “spaces”).

      Even apart from the above issue about expressive power (which is on some level a technical point, though probably an important one), I also think it is very strange to suggest that the introduction of a formal system would simplify the expression of ideas that are already difficult to communicate in an informal way. “Homotopy coherent” mathematics is difficult to communicate largely because the datum of a homotopy coherent diagram is elaborate. It’s elaborate both for diagrams which commute for interesting reasons (which you might want to spell out) and for those which commute for obvious reasons (which, when writing informally, you would probably want to ignore). Working in a formal system, more or less by definition, means that you can’t ignore steps which are routine and focus attention on the ones that contain the fundamental ideas.

      By analogy: there are many very intricate proofs by induction in the mathematical literature. There’s also a first-order theory (Peano arithmetic) which can be regarded as a formalization of what it means to give a “proof by induction”. But I think it would be very strange to suggest that a very complicated inductive argument should become simple if it was rewritten as a formal deduction in Peano arithmetic (more likely, exactly the opposite would happen). Of course, the formalization of that argument might address other concerns (if, like Voevodsky, you are concerned that the argument might contain mistakes). And that’s not to say that Peano arithmetic isn’t interesting: just that it is primarily interesting for investigating questions in mathematical logic, not as a tool for proving new theorems about natural numbers.

      1. Mike Shulman

        As someone who has been closely involved with the development of HoTT, I think that what Jacob says is mostly correct. The only thing I would like to point out is that when speaking about a “formal system” we might mean only a way to formalize existing arguments by “reducing them to primitives”, but we might also mean a system which allows us to prove things in new ways, such as by “packaging” things that used to be complicated individual arguments into powerful but easy-to-use tools. If the former, then yes, I agree that it would be weird to expect any simplification to result. But the latter is expressly designed to achieve simplification; indeed, the theory of quasicategories (mentioned in the previous paragraph) is roughly a formal system of this sort, although not a “fully formal” one since it is expressed in set-theoretic language. The point of the latter sort of system is specifically to allow you to ignore the “routine” steps, and moreover to expand the collection of things that count as “routine”.

        HoTT is both sorts of formal system: it allows us to formalize existing arguments, but it also gives us a new way to talk about homotopical objects, and so far it seems that that new way does sometimes give us simpler ways to say things. This is what gives some of us hope that eventually, a theory like HoTT could be made powerful enough to achieve even more useful simplifications. The obstacles mentioned in Jacob’s third paragraph are real, and it’s possible we won’t ever solve them; so current techniques such as those used in Higher Topos Theory are by no means out of date, and won’t be for quite some time. But I also don’t think there are a priori reasons to think that those obstacles are insurmountable, so it’s at least a direction worth pursuing.

      2. Urs Schreiber

        Homotopy type theory is the internal language of higher toposes. It is as fundamental to higher topos theory as traditional intuitionistic logic is to topos theory. That HoTT was found by computer scientists instead of by topos theorists or by homotopy theorists already doesn’t reflect well on the latter two groups, but disregarding HoTT for how computer scientists happen to think about using it is worse. That’s like disregarding the theory of elliptic curves for the fact that computer scientists use it for something as mundane as encrypting online bank transfers.

        While the world is a better place since “Higher Topos Theory” exists, the evident omission in the book, given its title, is any actual topos theory beyond the fundamentals of non-elementary higher toposes, even if these took on the order of 10^3 pages to set up. Homotopy type theory is, once brought to fruition, just that: the actual topos theory of higher elementary toposes. It is true that the computer scientists are unlikely to develop this to any satisfactory degree. But this does not imply that homotopy type theory is not a topic of major interest in the intersection of topos theory and homotopy theory. I am hoping the distraction that the presence of computers in this discussion is causing will go away and make room for what is going to be the glorious future of elementary higher topos theory.

      3. Jacob Lurie

        So are you saying that the analogy with Peano arithmetic is flawed because homotopy type theory is “modular” in a way that classical first order logic is not, and therefore the costs associated with writing “fully formal” proofs are substantially lower? (And may someday be low enough that they are outweighed by accompanying benefits?)

        Or are you making a stronger claim that formalization via homotopy type theory could have advantages over “working informally” even for the purposes of communicating mathematical ideas to other people (as opposed to proof checking)?

        The latter seems like a claim that I’d have trouble believing, since the idea of “packaging things that used to be complicated individual arguments into powerful but easy-to-use tools” seems like something that we do all the time when working informally.

        Though one could make the case that some aspects of the culture of mathematics discourage us from taking full advantage of this. For example, there’s a sense that the payoff in a mathematics paper is supposed to be a theorem rather than a construction. I think people often write papers where the “real” goal is to highlight some interesting construction, but the “stated” goal is some propositional consequence of the existence of that construction. It may be that we could communicate certain ideas more effectively if we weren’t encouraged to “bend the narrative” in this way.

      4. DT

        Thanks for your reply. I should have explained better my remark about “simple ideas.” I didn’t mean to say that Voevodsky could have done it better than you!

        About simplicity and complexity, I had this VV quotation in mind:

        “the question of the convergence of applied and pure mathematics, that I have here what the picture is. Pure mathematics works with abstract models of high-level and low complexity (mathematics this small difficulty like to call elegance). Applied mathematics is working with more specific models, but a high level of complexity (number of variables, equations, etc.). Interesting application ideas of modern pure mathematics most likely lie in the field of high abstraction and high complexity. This area is now virtually inaccessible, largely due to the limited capacity of the human brain to work with such models.”

        That quotation is available to me because of Google Translate, a very complicated tool (but very simple for the user) that has done sensational things for human communication. Maybe one day a tool like that could do something sensational for mathematical communication. It’s a dream, not a prediction.

        If homotopy type theory has a flaw that makes it less suitable for reasoning about homotopy types than Peano arithmetic is for reasoning about numbers, I’m disappointed to hear it. But the criticism of formalism, that a fully formalized argument would be very complicated, doesn’t bother me. C code compiles to assembly. Python code compiles in a more complicated way, and runs about a thousand times slower than C. Both compilation processes have complicated output, but the processes are purely mechanical, and the uncompiled code is easy for a human to read. Python is especially easy for mathematicians.

        So far, there is no useful high-level language in which you can write a proof that then “compiles” to a proof in a formal system. I have no expertise, but I think such a high-level language is desirable, and I hope that it’s possible. I’ve heard that one obstacle to creating such a language is that it is difficult to formalize the metamathematical principle that an isomorphism between X and Y provides a procedure for turning theorems and proofs about X into theorems and proofs about Y — a very homotopy-theoretic problem. But why in principle should it be impossible?
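
        (For what it’s worth, this principle is roughly what the univalence axiom is meant to supply. In the notation of the HoTT book — a sketch of the principle, not of any particular proof assistant — one has

        \[ \mathsf{ua} : (X \simeq Y) \to (X = Y), \qquad \mathsf{transport}^{P} : (X = Y) \to P(X) \to P(Y), \]

        so an equivalence e : X ≃ Y yields an identification ua(e), and transporting any property or construction P along ua(e) turns theorems and proofs about X into theorems and proofs about Y.)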

        Anyway “I’m rooting for Voevodsky” is an emotion, not an opinion. There’s no contradiction between rooting for Voevodsky’s program, and rooting for its alternatives. I just hope somebody someday can get me out of the flybottle.

      5. Eric Finster

        Jacob said: “It does not seem to me that this issue is addressed by homotopy type theory at all. To my knowledge (and an expert should correct me if I’m wrong), it is an open question whether or not the notion of “homotopy coherent diagram” can even be formulated in the language of homotopy type theory. And I think that the expectation is that this question will have a negative answer unless the theory is modified in some fundamental way (at least this is my expectation based on thinking about a similar issue in a different context). The theory of quasicategories was developed in order to have a good language for talking about certain types of mathematical structures (for example, structured ring spectra) which can perhaps not even be defined in the setting of homotopy type theory (at least in a way that uses “types” to play the role of “spaces”).”

        Many of us who are interested in HoTT are very much interested in and concerned about this problem. And I share your doubt that there is some kind of clever trick which will resolve it in the system we have now. It seems likely that we will need to, as you say, modify the theory in some fundamental ways. But I rather think this is a good thing.

        For my part, I have always viewed HoTT as a part of the program to understand higher categories, and from this point of view, it is certainly not a finished product. But I do think it is a significant step forward in this larger program, a useful piece of the puzzle: our first really compelling example of a “higher categorical logic”. Just as ordinary category theory has enriched, and been enriched by, its connections with logical systems, so I think this connection will prove useful in the study of higher categories, and in this regard, HoTT is an excellent first point of contact.

        Almost certainly there will be more such theories, and having HoTT as an example of what such a thing might look like is definitely a first step to finding them.

    2. Jacob Lurie

      DT, I certainly didn’t take any offense (and am perfectly open to criticisms of my book; I have several of my own).

      I think the statement “homotopy type theory has a flaw that makes it less suitable for reasoning about homotopy types than Peano arithmetic is for reasoning about numbers” could merit a bit of explanation. A better analogy would be this: in number theory, one effective way to study questions about the integers is to relate them to questions about rings of integers in larger number fields. To some extent, this type of reasoning can be translated into Peano arithmetic because elements of these rings of integers can be “coded” as n-tuples of integers. Similarly, in algebraic topology, we might want to study spaces by relating them to other types of “homotopy-theoretic” structures (like, say, simplicial commutative rings). These ought to be things that we can “encode” in terms of spaces (simplicial commutative ring = space with a ring structure), but at present it is not clear how to carry this out in the language of homotopy type theory if what needs to be encoded involves an infinite hierarchy of coherence conditions (which is common). I think this is probably a side point, though… it could be that the problem just requires a clever idea (I would guess not), or it could be that it can be resolved by introducing a more elaborate theory which has the same basic structural features as the present one (I would guess so).

      I think where I mainly disagree is in the assumption that formalization is a remedy for the problem of communicating complex mathematics to other people. It seems to me that these are two different goals: making mathematics readable to humans, and making it readable to computers. These goals have some overlap (for example, elimination of ambiguities in mathematical definitions), but I see them primarily as pulling in opposite directions. This isn’t an objection to the pursuit of the second goal, just a statement that I wouldn’t conflate it with the first.

  2. mathematicswithoutapologies Post author

    Since I live in a big city, I perceive no advantage in ordering diapers online; as for MathSciNet, it’s extremely convenient for writing letters of recommendation and so on, but I note that there are already signs of lock-in, in that use of MathSciNet and the associated classification systems is increasingly required by some funding agencies. Is the use of DOIs or the AMS classification more ominous than the Library of Congress classification? Probably not, but it’s good to be aware of what’s happening.

    No one has asked me to take a position on the future of Voevodsky’s program, so I don’t have to do so. Maybe Lurie is just declining to be seen as a prophet.

  3. Pingback: “Whatever is rigorous is insignificant.” | Mathematics without Apologies, by Michael Harris

  4. Pingback: Jacob Lurie explains and elaborates on his “No Comment” | Mathematics without Apologies, by Michael Harris

  5. Pingback: Lurie and Categorifying the Fourier Transform | Not Even Wrong

  6. mathematicswithoutapologies Post author

    I’m delighted that this enlightening exchange among specialists in homotopy theory is taking place on this blog. As long as it continues, I’m happy to stay in the background and cease my flawed attempts to summarize positions on questions that are far from my area of expertise. I also want to point out that Mike Shulman is making a separate attempt to sort these questions out on the n-Category Café, at

    https://golem.ph.utexas.edu/category/2015/05/the_revolution_will_not_be_for.html

    Just a few quick reactions to Urs Schreiber’s post. My internal logic (or metaphysics) does not allow the use of the definite article in sentences like

    Homotopy type theory is the internal language of higher toposes.

    I can only read the sentence as a prediction and a prescription, but it is syntactically an assertion of a state of affairs and simultaneously a claim that the state of affairs could not be otherwise.

    Regarding “the distraction that the presence of computers in this discussion is causing,” I was told that 80-90% of participants in last spring’s program on HOTT at the IHP in Paris were either computer scientists or proof theorists. My source may have been mistaken, but probably not by much, and I would say that proportion reflects something other than a distraction.

    1. Urs Schreiber

      It is a theorem that HoTT is the internal language of infinity-toposes. For HoTT without univalence one direction was shown independently by Shulman-Cisinski and by Gepner-Kock; the other direction is due to Kapulkin. For HoTT with univalence included the remaining ingredients are due to Shulman. See here: http://ncatlab.org/homotopytypetheory/show/model+of+type+theory+in+an+%28infinity,1%29-topos

      I was one of the participants of that IHP meeting ( https://dl.dropboxusercontent.com/u/12630719/SchreiberParis2014.pdf ), one of the minority of people looking into HoTT who are not computer scientists.

      To repeat, that was my point above: it is unfortunate, and not justified by the content of the topic, that more topos theorists and homotopy theorists do not realize that HoTT is genuinely their own subject, not to be left to the computer scientists. Notable exceptions include Andre Joyal and Bjørn Dundas.

      1. mathematicswithoutapologies Post author

        Sorry to be pedantic, but neither “internal” nor “language” appears anywhere on the page you indicated. I was unable to find any joint paper by Shulman and Cisinski; what appears to be the relevant paper by Gepner-Kock does not include the word “language.” It appears that you are using the word “internal language” as in this excerpt from n-lab:

        “We now consider the internal language of a locally cartesian closed category as a dependent type theory.”

        The explanation of this sentence has not yet been added to the site, and I have no idea what it means. But I must say that I find the tree structure of n-lab hopelessly labyrinthine and really only navigable by native speakers of its internal language.

        I do suspect there is a way to reformulate the “internal language” claim for HOTT in a less peremptory way, for example “There exists a model of ** in the axiomatic system **” which is the version I found in what may be the relevant paper by Cisinski (the one entitled “Univalent universes for elegant models of homotopy types”).

        The word “elegant,” I suspect, is another technical term, and I sincerely hope homotopy theorists don’t find themselves in a linguistic trap similar to the one Grothendieck left for commutative algebra when he introduced once and for all the notion of an “excellent ring.” Suppose somebody finds an even better class of rings? Then what?

      2. Jacob Lurie

        Urs, you seem to be making a number of assumptions:

        1) That there exists a unique “correct” notion of elementary infty-topos.

        2) That everyone who is interested in the notion of Grothendieck infty-topos (and perhaps, more generally, everyone who is interested either in the classical notion of topos or in algebraic topology) should also be interested in the notion of elementary infty-topos.

        3) That all of these people should therefore also be interested in the “syntactic” approach provided by homotopy type theory.

        I would dispute all of these assumptions.

        1): It’s true that there’s an established definition of “elementary topos” in the literature, which axiomatizes some of the features enjoyed by Grothendieck topoi. But there’s not a unique “correct” way to do this: for example, one often adds additional assumptions like the existence of a natural numbers object. I’m also quite fond of the notion which goes by the unwieldy name of “list-arithmetic pretopos”, which I think would be a great candidate for an even more “elementary” version of topos theory if the name weren’t already taken.

        2): The theory of (Grothendieck) topoi was originally developed by the Grothendieck school, motivated by applications in algebraic geometry. In practice, algebraic geometers who want to understand things like etale cohomology only need a small fraction of the theory that was developed by Grothendieck, such as the notion of a Grothendieck topology and the associated notion of sheaf. Many do not see the need to learn the version of topos theory presented in SGA, and I suspect that the number who are interested in the subsequently developed theory of elementary topoi is vanishingly small. I do not see this as a failing of the algebraic geometry community: the notion of Grothendieck topology was quite useful to them, so they internalized it. Related ideas may be interesting for other reasons, but have had limited relevance to algebraic geometry.

        I see the “higher” version of this as quite parallel. The notion of “sheaf of homotopy types” on a topological space X is a useful idea for people working in algebraic topology and related fields. So such people make use of specific examples of Grothendieck infty-topoi. It is not clear to me that these people really need a name for this structure, even less clear that they need to be familiar with a general theory incorporating “non-spatial” examples such as the one that is developed in my book, and even less clear that they would need to be familiar with an elementary version of the same theory.

        3) This seems to me to be on some level an aesthetic question. I personally have always found use of an “internal language” for phrasing proofs in topos theory to be distracting, and find it more elegant to phrase things directly in terms of diagrams.

        Now if you wanted to use some theory of elementary higher topoi as a foundation for mathematics, you could argue that it doesn’t make sense to take a “semantic” approach. In other words, it doesn’t make sense to write axioms for mathematics in the form “The collection of all types forms an infty-category with such and such properties” because the notion of infty-category isn’t sufficiently primitive itself. But a “syntactic” approach via an internal language would not share this problem.

        However, if you’re not trying to press the notion of “higher elementary topos” into service as a foundation of mathematics, but are just interested in investigating it as a mathematical structure in its own right, then you might be perfectly happy to proceed in this way, using one of the many existing mathematical formalizations of the notion of infty-category.

        As for me, the notion of elementary infty-topos is one that I’ve spent a bit of time thinking about from the “semantic” perspective (or, more precisely, I’ve thought about formulating a higher analogue of the notion of list-arithmetic pretopos). I did not include any discussion of this in my book because
        a) I’m not sure I know a good definition.
        b) I didn’t see any compelling applications or interesting things to say about them.
        c) The book was plenty long already.
        The main reason for a) is that I run into the same technical issue as the homotopy type theory community, alluded to in my previous post. I’d be interested to see this issue resolved (my bet would be that a satisfactory resolution requires modifying basic concepts on some level, like trying to axiomatize infty-categories rather than infty-groupoids, but I would be very happy if this were not true).

      3. Ulrik Buchholtz

        Jacob said: “I’m also quite fond of the notion which goes by the unwieldy name of “list-arithmetic pretopos”, which I think would be a great candidate for a even more “elementary” version of topos theory if the name weren’t already taken.”

        You’re not alone in being fond of this notion! As you know, a natural internal language for list-arithmetic pretoposes is extensional type theory with Sigma types, disjoint sum types, quotient types and list types (cf. Maietti 2010). This can be seen as an expressive type theory at the level of primitive recursive arithmetic. Similarly, we’re currently investigating expressive, primitive recursive versions of HoTT. Several new complications occur, because of the intensional (i.e., homotopical) setting and because to make use of higher inductive types, we often need large eliminations, so these need to be regimented in order not to leave the realm of primitive recursion. On the other hand, it’s nice that they unify and generalize the list and quotient types.

      4. Jacob Lurie

        Ulrik,

        “As you know, a natural internal language for list-arithmetic pretoposes is extensional type theory with Sigma types, disjoint sums types, quotient types and list types (cf. Maietti 2010).”

        I’ve looked at this paper, but without speaking the language of type theory I wasn’t able to make heads or tails of it.

        “Similarly, we’re currently investigating expressive, primitive recursive versions of HoTT.”

        That sounds quite interesting to me, but I have a basic confusion about it. I had thought that the dictionary relating category theory to type theory required the existence of function objects (so that morphisms from A to B can be represented by terms of the type given by functions from A to B). How does this work if I’m trying to think about a setting in which I don’t have function objects? (For example, the “initial” list-arithmetic pretopos which encodes primitive recursive arithmetic?)

      5. David Roberts

        @Jacob

        It’s Pi-types that need the LCC assumption, not type theory in toto.

        Dropping local Cartesian closedness is no problem, cf. Weber’s arXiv:1106.1983, which can be used to sensibly define W-types without LCC. Maietti’s work assumes W-types only for polynomial functors arising from, I gather, coproduct inclusions, with the standing assumption that one has a pretopos.

      6. Ulrik Buchholtz

        Jacob,

        as David said, we only need the correspondence between functions-at-large and Hom-sets to have a translation between type theory and category theory, and certainly primitive recursive functions compose. So we don’t need (L)CC structure. Similarly, we can talk about internal (proof-irrelevant) logic by referring just to monomorphisms, without needing a subobject classifier.

        At first sight, the same is true in the intensional/proof-relevant setting, where the “logic” is any slice category of fibrations, but to be able to say interesting things in this case it seems we do actually need something like exponential objects and object classifiers. Only, these have to be ramified/stratified or otherwise restricted in order to control the complexity.

      7. Mike Shulman

        Kapulkin’s result is nice, but it is only one step along the road to proving that homotopy type theory is the internal language of any kind of (∞,1)-category. He shows that from type theory one obtains a lcc (∞,1)-category, but as far as I can see he says nothing about the initiality of this (∞,1)-category, which is an essential part of an internal language correspondence.

    2. spitters37

      Urs is pointing out that HoTT can be interpreted in higher *Grothendieck* toposes.
      Following Mike Shulman and Andre Joyal, people are now working on developing the theory of *elementary* higher toposes. Once this is hammered out, Urs’s definitive statement is expected to become precise.

  7. Urs Schreiber

    Jacob, I believe it suffices to assume that people are interested in better understanding higher topos theory in general. Not every algebraic geometer will have the ambition, of course, but various general abstractly minded people ought to. The comparison to draw is rather with, say, Johnstone’s topos theory book “Sketches of an Elephant”. Since the aim of the book is to understand toposes generally it switches freely from diagrammatics in external presentations over sites to internal Lawvere-Tierney language and internal logic, depending on whatever tool works best in a given case.

    A good example of how this can work in higher topos theory is the recent elementary proof of the Blakers-Massey theorem. It was remarkable to see a complete homotopy theory newbie, who had the proof in HoTT, explain it to no less than Charles Rezk, who said he had tried to find it using traditional means but failed. Charles then “reverse engineered” a traditional proof from the HoTT proof; see http://ncatlab.org/nlab/show/Blakers-Massey+theorem#Rezk14

    For what it’s worth, these days, when I need some universal construction in an infinity-topos, I am routinely bouncing the problem off Mike Shulman, and typically he gives me the syntactic proof in HoTT considerably more quickly than I can arrange the relevant diagrams, and then I use that to guide me in reverse-engineering my diagrammatics.

    1. Jacob Lurie

      Urs,

      I have no objection to Peter Johnstone including whatever material he finds relevant in his texts about topos theory, or about Mike Shulman communicating arguments to you using the syntax of homotopy type theory.

      I don’t presently see any need for this language in my work, and I’m not sure why you feel that all “abstractly minded people ought to”. The idea that homotopy type theory was intended to provide an internal language for infty-topoi is one that has been familiar to me ever since browsing through the univalent foundations book (which has kindly been made freely available online), and I’m sure that many other people understand this as well.

      To assert that “homotopy theorists [don’t] realize that HoTT is genuinely their own subject” seems misleading to me. The analogy with Peano arithmetic seems apt here: I’m sure that there are many number theorists who are familiar with the axioms of Peano arithmetic and understand that they are intended to describe the natural numbers. But I think very few of them (perhaps none of them?) would regard the study of the formal system of Peano arithmetic as “their subject”, or feel compelled to write out theorems and proofs as formal deductions from the Peano axioms. Most people would regard the study of the Peano axioms and their consequences as a part of mathematical logic, and would recognize that it has a very different goals than, for example, the Langlands program (or anything else that number theorists are concerned with).

      I’m not sure that I’d draw as strong a conclusion from the Blakers-Massey example as you have. It’s true that people were interested in seeing an “elementary” proof of this result, and it’s a good thing that one has been provided. But I’m not inclined to infer from the fact that this proof was written in the syntax of homotopy type theory that it constitutes an application of homotopy type theory to algebraic topology, just as I wouldn’t consider a theorem of Peano arithmetic to be an “application of mathematical logic to number theory” because it happened to be written down in that format.

  8. David Corfield

    Looking in as a philosopher from the sidelines, I would have been far less impressed with HoTT had it not allowed for extensions, such as the addition of cohesion or the addition of linearity, both of which provide the kind of expressivity to reach towards modern physics. Whether or not it aids an individual’s intuition to see the fracture squares for generalized differential cohomology expressed with cohesive notation (http://ncatlab.org/nlab/show/differential%20cohomology%20diagram), or whether it helps with the comprehension of the pullpush of integral transforms to see it through the eyes of linear HoTT (http://arxiv.org/abs/1402.7041), is no doubt dependent on one’s path. But the fact that I can now see the family resemblance between the philosopher’s humble concept of necessity (as truth over possible worlds) and jet spaces, homotopy invariants, and integral transforms is something I find delightful.

    In all cases of new formalisms, surely their worth depends on whether absorption provides the support for a more powerful intuition. Whether linear HoTT is worth it depends on the value of the constructions it suggests. So having just watched Jacob’s beautifully explained second Oregon lecture on ambidexterity (http://media.uoregon.edu/channel/2014/05/23/moursund-mathematics-lecture-series-dr-jacob-lurie-lecture-2-ambidexterity/), do I profit from understanding the relationship between, say, (i) in linear HoTT, dependent sum coinciding with dependent product up to invertible type, (ii) in algebraic topology, the twisted ambidexterity of a Wirthmuller isomorphism and (iii) in physics, anomalous systems of inner product state spaces (section 1.4.1 of http://arxiv.org/abs/1402.7041)?

  9. Mike Shulman

    The threading on this blog seems to be a little bit broken — perhaps replies nested more than 2 deep don’t get recognized? — so I’m going to reply to everything all the way down here.

    @Urs: It is not true that “HoTT was found by computer scientists”. Voevodsky, who constructed the simplicial model and formulated the univalence axiom, is a mathematician. Awodey and Warren, who interpreted identity-elimination in terms of model-categorical lifting, are mathematicians (well, Awodey has a philosopher’s hat too, but I don’t think he has a computer scientist’s hat). HITs were invented by Lumsdaine, Bauer, Warren, and myself, all of whom are mathematicians. It’s true that overall more computer scientists have been drawn to the subject than mathematicians, and that computer scientists (e.g. Coquand and Licata) have made quite significant contributions as well, but mathematicians have been at the center since the beginning.

    @Urs 2: Jacob has already replied very clearly (and impressively politely) to this, so I’ll just add that I don’t think it is true (nor very nice) to say that his book doesn’t contain any actual topos theory. Internal logic is only 1/4 (or perhaps eventually 1/6) of Sketches of an Elephant, and as Jacob has said, even for 1-toposes there are yet other ways to look at the elephant than those which Johnstone sketched. At the same time, I do agree with what I take to be the intended point (which I don’t think Jacob replied specifically to, although I may have missed it) that developing higher analogues of the rest of the Elephant is interesting and even potentially useful, and that we should expect such a programme to be bound up with HoTT, since the latter is, at present, the best candidate we have for an “∞-Part-D”.

    @Michael (aka. mathematicswithoutapologies): It’s too bad that the nlab is hard to navigate, but Urs’s sentence is in fact true as an assertion of a state of affairs, at least if you ignore the implication that there could not be any other such internal language. Despite your habit of giving unintended emphasis to definite articles in quotations (those who have read your book may know what else I am referring to (-: ) I don’t think Urs intended that implication; the point is just that it’s the one we have at present.

    @Jacob: I need to think a bit more before I can answer you, except to say that I agree entirely that it is not even clear that there is a unique “correct” notion of “elementary 1-topos”; yet of course that doesn’t prevent there from being an interesting “elementary 1-topos theory” to study.

  10. Mike Shulman

    @Jacob, I think that what Urs and I are failing to communicate is that HoTT is not just a syntax to formalize informal arguments that make sense independently of it. It’s a different “world of mathematics” in which the basic objects, called “types”, happen to behave kind of like the objects of a higher topos do in “good old” mathematics. There is indeed an underlying formal system to this world, just as Peano arithmetic is an underlying formal system for the world of natural numbers and ZFC is an underlying formal system for the world of sets in which “good old” mathematics is done. But one can do mathematics informally in that world as well — this is what we were trying to get across by writing The Book — and the resulting “informal mathematics” is not the same as the informal mathematics that a traditionally minded mathematician would write down, even when talking about homotopy theory. Translating between the two worlds is possible, but nontrivial, and this is completely independent of the respective formalizations of the two worlds. It so happens that with HoTT the formal system was discovered first and the informal one “back-formed” from it (and is, arguably, still in the process of formation), but that’s a historical distinction rather than a mathematical one.

    This is why it’s possible that an (informal!) proof in HoTT can be easier or simpler than an analogous (also informal) proof in set-theory-based higher topos theory — at least, to someone familiar with the language. I totally respect the fact that you find diagrammatic proofs easier to understand than internal-language ones, but as you say this is a matter of aesthetics on which reasonable individuals can differ. As Urs mentioned, I personally often find it easier to think internally now. And in general, I think having different points of view available for talking about the same things is always a plus: something that seemed hard from one viewpoint may seem easy from another. This is, I think, the point of the Blakers-Massey story: because HoTT teaches us a different way to think about higher topoi, it made it easier to find a proof that had eluded people thinking in the classical way. The proof can be translated back into diagrammatic language, as Charles has done, and in principle the resulting proof could have been found by higher topos theorists — but as a matter of historical fact it was not.

    I would not go so far as to claim that you and other homotopy theorists and higher topos theorists “should” care about or learn HoTT. At the present state of development of HoTT, the purely pragmatic short-term payoff, in terms of concrete new results or simplifications, is probably quite small relative to the amount of effort required to familiarize oneself with a new “world of mathematics”. It’s possible this will be the case for some time to come; after all, the internal language of ordinary 1-toposes hasn’t made many inroads into the communities of people applying toposes to other fields, even after decades. But there are homotopy theorists and higher topos theorists who are interested in HoTT, perhaps for other than purely pragmatic reasons (maybe they are interested in the philosophical or foundational implications, maybe they like computer formalization, or maybe they just find it fun to do something different), and once they’ve put in the effort, they may find something of value to them. Urs is one example. So I do think HoTT — informal and formal both — should be regarded as “part of”, or at least closely related to, the subjects of homotopy theory and higher topos theory.

    1. Jacob Lurie

      Mike,

      “and the resulting “informal mathematics” is not the same as the informal mathematics that a traditionally minded mathematician would write down, even when talking about homotopy theory.”

      I would be interested to understand better what this means. Are you saying that “informal homotopy type theory” cannot be faithfully translated as “work in an infty-category satisfying certain axioms (existence of function objects, products, …)”?

      If so, could you give a concrete example which illustrates the difference?

      1. Mike Shulman

        Well, it depends on what you mean by “translated”. There is a formal “translation” in the sense used by logicians. But I would say that this translation does not “faithfully” preserve the “intuitive content” of an argument: it makes it into a different, related, argument.

        For example, in HoTT we have the notion of a “type family” B:A→Type. This “translates” essentially into a morphism with codomain A; but inside HoTT, type families indexed by A behave differently from functions with codomain A. For instance, in the HoTT proof that π₁(S¹) = Z, we define a type family code:S¹→Type by S¹-recursion, so that code(base) = Z and code(loop) = λx.x+1. This definition is entirely analogous to the definition of, say, the factorial function by N-recursion so that 0! = 1 and (n+1)! = n! (n+1). But when translated into category theory, it looks quite different: it becomes an application of the descent theorem for ∞-colimits.
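
        Spelled out in the notation of the HoTT book (a sketch; the informal equations above are shorthand for this), the two recursions read

        \[ \mathsf{code}(\mathsf{base}) :\equiv \mathbb{Z}, \quad \mathsf{ap}_{\mathsf{code}}(\mathsf{loop}) = \mathsf{ua}(\mathsf{succ}) \qquad \text{versus} \qquad 0! :\equiv 1, \quad (n+1)! :\equiv n!\cdot(n+1), \]

        where ua(succ) is the path Z = Z that univalence associates to the successor equivalence.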

        There are other examples of this sort of thing. The proof of π₁(S¹)=Z has been generalized to a style of argument called “encode-decode”, which was used in the proof of Blakers-Massey. Another common style of argument is to reduce a complicated construction to one depending on a path (identification) with one free endpoint and then apply identity elimination. I blogged about another here. All of these can be “translated” into higher category theory, but the results look different and have to be thought about differently.

      2. Jacob Lurie

        “But I would say that this translation does not “faithfully” preserve the “intuitive content” of an argument: it makes it into a different, related, argument.”

        If I’m understanding your example correctly, I guess I disagree. The formal translation of this argument into category theory (using the description of S^1 as a homotopy colimit and the universal property of the “object classifier”) seems like it captures exactly what your notation is suggesting.

        As a broader point, this assertion:
        “This is why it’s possible that an (informal!) proof in HoTT can be easier or simpler than an analogous (also informal) proof in set-theory-based higher topos theory”
        seems to me to contradict the assertion that homotopy type theory has “semantics” in simplicial sets.

        For the Blakers-Massey story, the following vague parallel comes to mind: in 1973-1975, Quillen published several papers developing the foundations of algebraic K-theory. In 1976, he published a proof of the Quillen-Suslin theorem (every finitely generated projective module over a polynomial ring is free). I think it’s safe to say that this isn’t a coincidence: Quillen probably did a lot of thinking about projective modules over those years. But the proof of the Quillen-Suslin theorem didn’t need or use the apparatus of K-theory that Quillen developed, though the process of developing that apparatus might have given him some insights about projective modules which led him to the proof.

      3. Mike Shulman

        The point is that, as explained in the blog post I linked to, in HoTT the types literally have “elements”, and the elements of the “object classifier” literally are types. Thus we don’t have to go through the process of pulling them back along classifying morphisms but can work with them directly. Of course it’s similar; I’m just saying that it’s not the same. The informal version of that argument written in HoTT does not, on its face, make sense in an ∞-category, even informally; you have to do the translation.

        I also don’t see why you think “A can be easier than B” contradicts “A has semantics in B”. The “semantics” are a packaging up of a bunch of operations into a nice theorem which tells us that anything we do in one world becomes something in the other world, but that “something” might be a lot more complicated than the thing we started from.

      4. Jacob Lurie

        “Thus we don’t have to go through the process of pulling them back along classifying morphisms but can work with them directly.”

        This seems to be suggesting a distinction that I don’t really see in “informal” mathematics.
        When I think about an infty-category C (with a final object 1, say), the objects x of C “have elements” given by points of the mapping space Map(1,x), and the “elements” of a (small) object classifier are the (small) objects of C.

        “that “something” might be a lot more complicated than the thing we started from.”

        In the abstract that is true, but in this particular case it seems intuitively clear how the semantics are supposed to work, at least in principle (even if it is difficult to iron out the details).

        To put it another way, the rules of homotopy type theory all seem to encode the idea that we have various ways to build spaces with certain universal properties, and how to produce examples of points of those spaces. None of these constructions is mysterious in homotopy theory, so it’s hard for me to see how working within the (formal or informal) constraints of homotopy type theory could represent a gain (at least for purposes of proving theorems about ordinary spaces).

      5. Jacob Lurie

        Let me try again to put my finger on where our disagreement lies.

        Let me distinguish between three “styles” of doing mathematics: fully formal (computer-verifiable deductions from a formal system), semi-formal (where we do not employ a formal system, but also do not allow ourselves to apply our intuition to any mathematical concept) and informal (the way we actually think about things).

        Let me also pick a very basic example that is far from homotopy theory. Suppose I wanted to write out the definition of a semigroup.

        Definition 1: A semigroup is a set S with a multiplication map m: S x S -> S satisfying the associative law.

        Definition 2: A semigroup is a set S with a collection of maps m_n: S^n -> S, written
        as m_n( x_1, …, x_n) = x_1 x_2 … x_n, satisfying the obvious “multi-associative law”.
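
        One way to spell out the “multi-associative law”, for definiteness (a sketch; any equivalent axiomatization would serve the argument equally well): m_1(x) = x for all x, and for all n and all arities k_1, …, k_n,

        \[ m_n\bigl(m_{k_1}(x_{1,1},\dots,x_{1,k_1}),\ \dots,\ m_{k_n}(x_{n,1},\dots,x_{n,k_n})\bigr) \;=\; m_{k_1+\cdots+k_n}(x_{1,1},\dots,x_{n,k_n}). \]

        Taking n = 2 and unwinding recovers the ordinary associative law of Definition 1.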

        One way of taking the “fully formal” perspective is to say that these definitions should be encoded in first order logic, where they give distinct first order theories (the first has a single binary operation and only one axiom, the second has many higher arity operations and many axioms).

        From a “semi-formal” perspective (in which I have written Definition 1 and partially written Definition 2), they are still not the same. If we had no experience with semigroups or the associative law, it might not be clear that Definitions 1 and 2 describe the same structure. Probably this is a theorem that appears early on in many books on abstract algebra.

        But from an “informal” perspective, I would say these definitions are the same. Or rather there’s a theorem that says they are equivalent, but this theorem has so thoroughly permeated our understanding that we don’t even notice using it in ordinary mathematical practice.

        If I’m understanding you correctly, your use of “informal” (for purposes of comparing homotopy type theory with category theory) corresponds to what I’ve described above as “semi-formal”.
        Even without going all the way to a formal system, if we compare the “basic maneuvers” that are permitted by the rules of homotopy type theory to the “basic maneuvers” that you can make with a category (or a quasi-category), say, they don’t look exactly the same, in the same way that Definitions 1 and 2 are not the same.

        But would you argue that they are still different from the point of view of what I’ve called “informal” mathematics above? If you permit me not just the bare definition of quasicategory, object classifier, homotopy colimit, etcetera, but also an understanding of how they are used, then I don’t see what distinguishes the argument you sketched from its “categorical” counterpart.

        Perhaps you would argue that my analogy is off because the equivalence of Definitions 1 and 2 is much more evident than a comparison theorem between homotopy type theory and higher category theory (which, if I understand correctly, has not yet reached a final form?) But I would argue that if such a comparison is at least expected to be true, then one should not see a difference between working with the two theories “informally” (though it may be difficult to establish this definitively from a “semi-formal” or “fully formal” point of view).

        Liked by 1 person

      6. Mike Shulman

        Jacob, I’m somewhat confused by your distinction between “semi-formal” and “informal”, which I’ve never encountered before. What do you mean by “we do not allow ourselves to apply our intuition to any mathematical concept”? If you mean using intuitions to guide our proofs, then I can’t imagine even doing fully formalized mathematics without that. But if you mean treating intuitions as proofs, then I can’t imagine calling that any sort of mathematics.

        I would say that your definitions 1 and 2 are distinct theories (although the meaning of the word “theory” in this context is subject to some dispute) that are “Morita-equivalent”, in that they have equivalent categories of models. And since equivalent categories are the same, we generally identify semigroups(1) with semigroups(2). I would call all of that “informal”. At a formal level one would have to keep track of which notion of “semigroup” one was using at every place and transport all theorems across the equivalence between them as necessary. (I have to wonder whether it is a coincidence that this sort of transport is exactly one of the places where HoTT, and specifically the univalence axiom, is more convenient for formalization than ZFC is.)

        However, this explicit transport occasionally infects our informal language as well. The point of knowing that those two definitions are equivalent is that sometimes one of them is more convenient and sometimes the other one is, and we can choose which we prefer in any context. E.g. to prove that a given set is a semigroup, it may be easier to define the binary multiplication and prove associativity; but when stating and proving a theorem about all semigroups, it may be convenient to use the fact that we have n-ary multiplications for all n. But now suppose we want to apply the latter theorem to the former semigroup; we need to know and use that e.g. the 3-ary multiplication is obtained from the binary one as m(x,y,z) = m(x,m(y,z)). In other words, the “lower boundary” of informality is not fixed; sometimes we have to dip a bit further down into formality than other times.
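
        To make that concrete in the semigroup example, the definitional unfolding involved looks something like this (a throwaway Lean 4 sketch; the names are invented here, not taken from any library):

            -- the 3-ary multiplication induced by a binary one: m₃(x,y,z) := m(x, m(y,z))
            def mul3 {S : Type} (m : S → S → S) (x y z : S) : S :=
              m x (m y z)

            -- more generally, the right-nested product of a nonempty list of factors;
            -- applying an “n-ary” theorem to a binary semigroup means unfolding
            -- definitions like these
            def prodOfBinary {S : Type} (m : S → S → S) (x : S) : List S → S
              | []      => x
              | y :: ys => m x (prodOfBinary m y ys)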

        Similarly, suppose for the sake of argument that HoTT and ∞-category theory really are two equivalent views of the same thing. (I think this is not quite accurate, but for the present discussion let’s suppose it is.) Just as with semigroups, the whole point of knowing this equivalence is that sometimes one may be more convenient to use than the other. And sometimes when working informally we can ignore the details of how this equivalence operates, but other times we have to “look under the hood”. And since, as you say, this equivalence is “less evident” than in the case of semigroups, there is more difference between the theories, and translation between them is less trivial.

        In sum, the fact that two things are equivalent does not mean there is no difference between working with them, even informally. For a more familiar example, the model categories of topological spaces, simplicial sets, and various others such as Thomason’s model structure on small categories, are all (Quillen) equivalent — but that doesn’t mean there’s no difference between working with them, or reason to prefer one over the other in some context.

        Perhaps the example of the codes for π₁(S¹), which I chose for its simplicity, was too simple to exhibit any very interesting differences. A slightly more complicated example is the theory of n-types and n-truncation. In HoTT, a morphism is (by definition, if we like) n-truncated precisely when all of its fibers are n-types. In higher category theory this is not true for the usual meaning of “all”, and in consequence we find ourselves talking explicitly about n-truncated morphisms more often. I find it easier and more convenient to be able to think and talk only about n-types. Of course, if we translate the HoTT statement into category theory we get a true statement, but the word “all” becomes a dependent product operation and the phrase “is an n-type” becomes a monomorphism into the base, and so the result looks rather different, less familiar and intuitive and easy to use.
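
        For reference, the HoTT statement being used here can be written out in informal type-theoretic notation (my rendering, not a quotation): a map f : A → B is n-truncated when

            is-n-truncated(f)  :=  Π (b : B), is-n-type( Σ (a : A), f(a) = b )

        holds, i.e. when every fiber Σ (a : A), f(a) = b is an n-type; the “all” is the internal dependent product Π.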

      7. Steve Awodey

        Hi Jacob,

        This statement:

        “To put it another way, the rules of homotopy type theory all seem to encode the idea that we have various ways to build spaces with certain universal properties, and how to produce examples of points of those spaces. None of these constructions is mysterious in homotopy theory, so it’s hard for me to see how working within the (formal or informal) constraints of homotopy type theory could represent a gain (at least for purposes of proving theorems about ordinary spaces).”

        would seem to apply to pretty much any axiomatization, and maybe that’s a good way to understand HoTT: as a (somewhat unusual) axiomatization. It isolates and emphasizes certain features, and completely ignores others. I take it there’s no question that it can sometimes be useful to do that?

      8. Jacob Lurie

        Steve,

        “I take it there’s no question that it can sometimes be useful to do that?”

        I am not disputing that, or that this is an interesting thing to axiomatize, or that this is an interesting way to axiomatize it.

        I’m only disputing the idea that this might be an effective tool for proving new theorems in classical homotopy theory (by the specific mechanism of verifying them within homotopy type theory and then using the semantics in simplicial sets).

        Take the 1-categorical analogue. The notion of elementary topos is obtained by axiomatizing some features of the category of sets and ignoring others. Any theorem you can prove for an arbitrary elementary topos is a theorem about the category of sets. But I don’t believe that you should expect to learn new things that you didn’t know about sets via this method. I’d even argue it’s impossible in principle for this to happen: take any proof that uses the words “elementary topos”, cross it out everywhere and replace it by “category of sets”, and now you have a new “topos-theory free” proof of the same theorem.

      9. Mike Shulman

        Any theorem you can prove for an arbitrary elementary topos is a theorem about the category of sets. But I don’t believe that you should expect to learn new things that you didn’t know about sets via this method. I’d even argue it’s impossible in principle for this to happen: take any proof that uses the words “elementary topos”, cross it out everywhere and replace it by “category of sets”, and now you have a new “topos-theory free” proof of the same theorem.

        Learning a new thing about sets that you didn’t previously happen to know about sets is in no way incompatible with the fact that it was possible to prove that thing using language that you previously knew.

        This reminds me of how people used to criticize (maybe some still do?) category theory as “just a language”. Any theorem in the language of category theory applied to a particular category (say, groups) can have all the categories excised from it by crossing out “object” and writing in “group” and crossing out “morphism” and writing in “homomorphism”. But that doesn’t mean that category theory was worthless in finding and proving that theorem.

        I personally believe that the “killer apps” of HoTT are likely to lie not in classical homotopy theory but in higher topos theory, where the distance between HoTT and the classical tools is greater and the benefits are more pronounced. However, I may as well mention that I personally have already used HoTT to prove things that I didn’t know even about classical homotopy theory (although it’s reasonably likely that you or someone else knew them); one example is the statement here that when we split a partially-coherent idempotent, we can recover the proof of idempotency from the splitting but not (in general) the original partial-coherence datum.

      10. Jacob Lurie

        Mike,

        “What do you mean by “we do not allow ourselves to apply our intuition to any mathematical concept”? If you mean using intuitions to guide our proofs, then I can’t imagine even doing fully formalized mathematics without that. But if you mean treating intuitions as proofs, then I can’t imagine calling that any sort of mathematics.”

        I guess I strongly disagree with this, and I think my semigroup example is a good one to illustrate the point.

        I think that in practice, there is a big difference between the way we treat concepts that have just been introduced (because we do not yet trust that they behave the way we intuitively expect) and the way we treat concepts that we understand very well. So, for example, if we were writing a textbook on abstract algebra, then right after we introduced the notion of group we might be very careful to write parentheses everywhere and highlight all of our applications of the associative law.

        However, once we’re satisfied that we properly understand a concept (like the idea that associativity allows us to write parentheses any way we want), we stop being careful and simply regard those manipulations as routine. Our intuition tells us that we could use the associative law (say) to prove that two expressions are equal, and we trust our intuition on that point to a sufficient degree that we don’t bother to justify it (either in writing or in thinking about it to ourselves).

        I think the claim that anyone who uses their intuition in this way is not really doing mathematics is an extreme one, and it would disqualify virtually all mathematics that is done in practice.

        “Similarly, suppose for the sake of argument that HoTT and ∞-category theory really are two equivalent views of the same thing. (I think this is not quite accurate, but for the present discussion let’s suppose it is.)”

        I think maybe this philosophical discussion is missing the point, because what I really wanted to understand was your view about this.

        Are the following statements known to be true, not known but expected to be true, unknown, or known to be false:

        1) From the syntax of homotopy type theory, one can construct an infty-category C whose objects are types.
        2) Statement 1) is true, and within C types that are given by various type-constructors enjoy universal properties suggested by the type-constructors (product types are products, etcetera).
        3) Statements 1) and 2) are true, and C is universal among such infty-categories.

        I have been operating under the assumption (which may be incorrect) that some version of these statements is at least expected to be true. If that is so, then I would disagree with this assertion:

        “HoTT is not just a syntax to formalize informal arguments that make sense independently of it. It’s a different “world of mathematics” in which the basic objects, called “types”, happen to behave kind of like the objects of a higher topos do in “good old” mathematics.”

        If statements 1), 2), and 3) are true (or even if some weaker versions of them were true), I would say that one -can- make sense of informal arguments about the subject matter independently of homotopy type theory. Namely, I would say that there is a particular mathematical object (the infty-category C) which makes perfect sense in “good old” mathematics, that homotopy type theory is a particular tool for studying it, but that it could be studied from other points of view.

        I’m asking because I suspect that I have thought about this object, or some variant of it, and this gives me some intuition for what you’re talking about despite the fact that I don’t really understand the syntax you’re using. So I’m genuinely curious as to whether or not we are talking about the same thing, and if not I’d like to know what the actual difference is. The examples that you’ve given so far don’t seem like substantive differences, and the question of whether something is “intuitive and easy to use” is in the eye of the beholder.

        Liked by 1 person

      11. Mike Shulman

        Jacob, your description of how we treat parentheses and semigroups seems exactly right to me; I think we’re just using words in slightly different ways. I wouldn’t say that someone who does this is “treating intuitions as proofs” (which was, of course, my phrase and not yours to begin with); I would say instead that they are relying on intuition to convince themselves and others that the unstated gaps in the proof could easily (or at least straightforwardly) be filled by anyone with the time and inclination. But that’s just a confusion of words that was caused by my attempt to understand what you meant by “semi-formal”, and now I think I do understand.

        Thus, returning to the point at which you introduced that term, I would reply that when I say “informal” I mean the same thing that you do. Although I don’t think (and I don’t think you think either, given how you discussed the semigroup example) that there is always a clear line demarcating where “intuition/gaps” are allowed in “informal” reasoning and where they are not; for one thing, the line moves over time and is different for different people.

        Moreover, as I said, in cases where the gap involves something like an equivalence of categories, even when being as informal as possible we do have to know and use the definition of that equivalence. In the case at hand, the syntax of HoTT and its interpretation in ∞-categories are distinct enough that applying that equivalence is a nonvacuous operation, especially for complicated theorems, and even for those of us who are familiar enough with it to do it easily or straightforwardly.

        Your statements (1)–(3) are indeed expected to be true in some form. (At present we don’t have proofs of them, but we do have partial results which almost imply that C is “weakly universal” in a certain sense — “almost” because even for the weak result there are a lot of i’s and t’s that haven’t yet been dotted and crossed, though everyone expects them to be straightforwardly analogous to those that have.) And I agree entirely that C is an object which makes perfect sense in good old mathematics and can be studied from other points of view.

        I think you misunderstood what I meant by “HoTT is not just a syntax to formalize informal arguments that make sense independently of it.” I didn’t mean that there are no informal arguments about the subject matter of HoTT outside of HoTT. What I meant is that arguments inside of HoTT (formal or informal) are not simply formalizations of pre-existing informal arguments outside of HoTT. There are translations back and forth, since the two are studying “the same thing” in a sense, but the inside and outside are “parallel points of view” with independent tools, techniques, and insights, so that some question may be easier to answer on one side or on the other.

        (I’m not going to continue the thread about usefulness of category theory; I think it’s gone down a wrong turning. I could go find examples in the published literature of the particular syllogism “X is a Y-category, all Y-categories are Z, therefore X is Z”, but that would be continuing to miss the point. The potential usefulness of HoTT does not consist simply of observing that something is an “elementary ∞-topos” or whatever and applying general theory about such things; rather, as I said above, it’s that HoTT is a different way of talking about and working with such ∞-categories.)

      12. Steve Awodey

        I’m having trouble getting into the right thread here — this is in reply to Jacob on the analogy HoTT : elementary topos :: sSets : Sets. I think this is a pretty good summary of the issue (modulo the difference between logical and categorical style axiomatics). The main point is that we won’t learn much about sets by just using the elementary topos laws, and similarly, we won’t learn much about classical homotopy theory by using the system of HoTT, and that’s probably true. But the value of proving something just from the topos laws — to state the obvious — is that it’s also true in other toposes, and we seem to have the same situation with respect to HoTT and other higher toposes. That’s the program that at least some of us are currently interested in working out. Whether we will end up being able to use this method to prove anything new (or prove anything already known in a new way) in classical homotopy theory, or even in higher topos theory, simply remains to be seen. The logical methods used in 1-topos theory may not seem to be really worth the trouble from the geometric standpoint — but they are interesting as a new field of application of logic, and maybe the same will turn out to be the case with HoTT.

      13. Gershom B

        It seems to me the following quote, from the famous Bourbaki article “The Architecture of Mathematics” (1950) is relevant to this discussion:

        “Each structure carries with it its own language, freighted with special intuitive references derived from the theories from which the axiomatic analysis described above has derived the structure. And, for the research worker who suddenly discovers this structure in the phenomena which he is studying, it is like a sudden modulation which orients at one stroke in an unexpected direction the intuitive course of his thought, and which illumines with a new light the mathematical landscape in which he is moving about.”

        Jacob is of course quite right to point out that category-theoretic insights usually arise in the context of different categories and the functors relating them, at a minimum. And the examples that come to mind for me typically involve presheaf structures and functor categories for that matter. This is similar to how it is typically trivial to observe that something forms a set — obviously more structure is needed to develop richer statements.

        But this in turn suggests one promising direction of HoTT research, which has been alluded to at various times in this discussion. There is no need for a single “HoTT”, just as there are already many variant type theories bearing some lineage to MLTT. So axiomatization in this form offers the potential of bringing many well-known tools to bear directly on the relations between the various structures in which “initial HoTT” has models.

        It has already been established that importing the tools developed by homotopy theorists into type theory and logic has yielded new insights and constructions in that realm. The degree to which type-theoretic technology (outside of the system of MLTT itself, and the work on models of it) can cast questions in homotopy theory or higher topos theory in a new light seems a more open question. The idea of linear homotopy type theory seems one promising avenue in this regard. The resurrection of cubical techniques, impelled by the desire to make HoTT “compute”, could well be another — for example, the “uniform Kan condition” introduced on cubes could prove to be of broader interest and significance.

        In another direction entirely, the development of techniques for defining saturated structures more generally (i.e. not just categories) points to the possibility of univalent approaches entering directly into the mathematical vernacular in a variety of subjects. This is to say, if it is more convenient (and it often is) to work with saturated categories than with what the HoTT book calls “precategories” (and everyone else just calls “categories”), then people may start to do so more broadly. And if the same holds true for other constructions, then this too, over time, may start to change some elements of broader mathematical practice.

        Let me give two other examples of “type-theoretic ideas” finding use in the broader mathematical world (neither of which necessarily requires HoTT as opposed to plain old MLTT). In the NY HoTT reading group we’ve had a few presentations recently that tried to cast ancient results in new light to good effect. The first was on very basic concepts of algebraic geometry. It explained, among other things, how it is pleasant to treat the rational points of a scheme as a dependent product. The other was on a HoTT construction of differential and integral calculus. In this, the type-theoretic (and constructive) approach helped make especially evident the well-known fact that differential operators do not necessarily send functions to functions of precisely the same type, but rather map between function spaces. Neither speaker presented anything deeply new, but nonetheless both presentations underlined, in their own way, the utility of making function types, including indexing, syntactically explicit.

        (Insert the usual disclaimer here about my relative lack of expertise compared to other participants in this discussion. I don’t think I’m saying anything outright wrong, but I am likely repeating things that are already well known to all parties involved.)

      14. Jacob Lurie

        Mike,

        “Although I don’t think (and I don’t think you think either, given how you discussed the semigroup example) that there is always a clear line demarcating where ‘intuition/gaps’ are allowed in ‘informal’ reasoning and where they are not; for one thing, the line moves over time and is different for different people.”

        I entirely agree with this. In the context of the broader discussion, I guess the point that I wanted to make is that steps like translating “morphisms into the classifier of X’s” into “having an X” by “pulling back the universal X” are very routine, at least for those who do types of mathematics where it is relevant (algebraic geometers and their moduli spaces, algebraic topologists and classifying spaces of vector bundles, topos theorists and their subobject classifiers). Not at the level of elimination of parentheses, but sufficiently routine that it’s hard for me to understand the distinction that you’re drawing between different ways of thinking. But perhaps this is just, as you say, because you’ve chosen very simple examples.
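
        In symbols (my shorthand for the routine step), the correspondence is

            maps  B → Class_X    ↔    X’s over B,        f  ↦  f*(U_X) ,

        where U_X is the universal X living over the classifier Class_X, and pulling back U_X along a classifying map f is exactly the translation being described.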

        I suspect that our disagreement, to the extent that we have one, is just the aesthetic one mentioned earlier. It maybe doesn’t have much to do with homotopy theory and is just a question of preference for diagrammatic or syntactic reasoning. In the 1-categorical setting, I prefer the former and don’t really feel like it misses intuitions given by the latter: I think those intuitions survive the “translation procedure” perfectly intact. In the higher categorical case the (expected) translation procedure is more elaborate. I would be surprised if there really were basic intuitions that did not translate, but even so I can appreciate how syntactic tools have the potential to be more useful than in the 1-categorical case (since I imagine that the “translation procedure” amounts to proving a nontrivial rectification theorem).

      15. Mike Shulman

        I’m happy to leave the discussion at a conclusion of aesthetic disagreement — at least until and unless HoTT finds its “killer app”. (-: I won’t claim that there are basic intuitions that don’t translate, but I will claim that the basic intuitions of HoTT are different from the usual intuitions used in higher category theory. One could translate them and learn to use the corresponding intuitions diagrammatically rather than syntactically, but as you say the syntactic approach is likely to retain an advantage.

      16. mathematicswithoutapologies Post author

        Jacob and Mike,

        Jacob just wrote,

        “In the higher categorical case the (expected) translation procedure is more elaborate. I would be surprised if there really were basic intuitions that did not translate…”

        If you’ll allow me to play the devil’s advocate, and bear in mind that I have no opinions of my own, let me ask: what compelling reason might there be to pursue a line of inquiry that did not admit translation of basic intuitions? Maybe it’s the basic intuitions that determine what one wants to call mathematics.

        I’m aware that basic intuitions evolve and have no necessary continuity from one generation to the next, but since Gershom B. [sorry about the incorrect attribution, and thanks to Matt Emerton for pointing it out] wrote “how it is pleasant to treat the rational points of a scheme as a dependent product,” I thought I might mention that yesterday I attended a lecture on the final step of the proof of the André-Oort conjecture on rational points (more precisely, special points) on Siegel modular varieties, undoubtedly one of the all-time great theorems on rational points, whose separate steps are due to a great many people. I’m not going to make the obvious point that none of the steps treats the rational points in question as a dependent product, but rather that, although they make use of techniques from automorphic forms, analytic number theory, geometry of numbers, diophantine geometry, and algebraic geometry, among other nearby branches of mathematics, they are all recognizably part of contemporary algebraic number theory and practically all the constituent parts can be traced back to Gauss’s Disquisitiones. This shared language for thinking about numbers, which predates any thought of formalization or set theory, has been sufficiently robust to remain recognizable as it absorbed practically every available technique of modern mathematics, including Grothendieck’s categorical approach to algebraic geometry. This is why I feel comfortable not taking a position on foundational questions; because they don’t have any real bearing on what I do. This may change if higher categories proved indispensable in creating a complete theory of motives, for example, but for the moment I see no prospect of that.

      17. Jacob Lurie

        @Michael,

        I think that the disagreement that I was having with Mike centers not so much on what you would call the “large f” foundational issues but on the “small f” ones (analogous, say, to the initial skepticism of “traditional” algebraic geometers to the theory of schemes).

        I don’t think that I disagree with anything you wrote, except that I think that focus on “homotopy type theory as a foundation for mathematics” misses the point. I think that the problem of finding a foundation for mathematics (meaning a small number of concepts and assumptions from which the rest of mathematics can be derived) is an interesting one, from both a mathematical and philosophical point of view (perhaps more the latter than the former). It’s also a problem which I would regard as having been solved in the early 20th century. As with many mathematical problems, once they have been solved, it’s much easier to solve them again. So at this point, there are lots of things that could be “foundations” for mathematics. I don’t think you should mistake the claim that homotopy type theory is such a thing for a claim that it is something that all mathematicians need to know, or that it should change the way mathematics is practiced. It’s just the claim (as I think someone put it earlier) that it is “mathematics-complete”, analogous (say) to the way that the “traveling salesman problem” is NP-complete. There’s also the idea that you can “do mathematics in homotopy type theory”, but I don’t think you should take this as a prescription for reforming mathematical practice so much as an axiomatization that captures certain mathematical principles which are universally valid in certain contexts. So, for example, any theorem that you prove about groups in this setting would also be valid for sheaves of groups on a topological space. This means you can prove fewer theorems, but pointing out that there is something special about this smaller class of theorems isn’t to deny the validity or usefulness of theorems that do not belong to it.

        @Mike,

        I wonder if you would endorse the following way of looking at things. Let me distinguish between three “styles” in which homotopy theory can be studied:

        Traditional (as exemplified by, say, the theory of model categories): Statements are phrased in ways that make sense “on the nose”. Has the advantage of being most readily accessible from the perspective of “non-homotopy-theoretic” mathematics. Has the disadvantage that, because everything is required to hold “on the nose”, homotopy coherent constructions need to be rectified, and this is often a painful process.

        Higher category theoretic (as exemplified by, say, the theory of quasicategories): Nothing holds “on the nose”. Has the advantage that it’s no harder to talk about homotopy coherent diagrams than it is to talk about commutative diagrams. Has the disadvantage that it’s no easier to talk about commutative diagrams than it is to talk about homotopy coherent diagrams.

        Homotopy type theoretic: Finds some kind of middle ground between the two, where most of the discourse is “homotopy-invariant” but there’s a special class of demands that you can make “on the nose” (formally encoded by the notion of judgmental equality of terms).

        If so, then maybe I can understand better what kind of intuitions you might have been talking about earlier. It seems like “rectification” for a single example of judgmental equality isn’t likely to be impressive, but that homotopy type theory singles out a sizable (but precisely circumscribed) class of “equalities” that are all supposed to be -simultaneously- “rectifiable”. And the intuition that -that- should be possible seems like one that you might only be sensitive to after studying the formalism.
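
        A standard toy example of that contrast, written here in Lean 4 (whose definitional equality plays the role of judgmental equality; the definitions below are local to this sketch, not Lean’s own):

            -- addition defined by recursion on the first argument, as in most
            -- type-theory texts (a local definition, not Lean’s built-in Nat.add)
            def add : Nat → Nat → Nat
              | 0,     n => n
              | m + 1, n => add m n + 1

            -- “0 + n = n” holds on the nose: both sides compute to the same normal
            -- form, so the proof is literally `rfl`
            example (n : Nat) : add 0 n = n := rfl

            -- “n + 0 = n” is only propositionally true: `rfl` fails, and the equality
            -- has to be established by induction, i.e. by a small “rectification”
            theorem add_zero_right : ∀ n : Nat, add n 0 = n
              | 0     => rfl
              | n + 1 => congrArg Nat.succ (add_zero_right n)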

        Liked by 1 person

      18. Dimitris

        Dear Michael,

        (I hope this reply ends up under the right comment).

        “This is why I feel comfortable not taking a position on foundational questions; because they don’t have any real bearing on what I do. This may change if higher categories proved indispensable in creating a complete theory of motives, for example, but for the moment I see no prospect of that.”

        By “foundational” above do you mean “little-f foundational” or Foundational? (To use the f/F convention you had suggested a while back). Namely, are you talking about the foundations of a certain mathematical discipline (e.g. foundations of algebraic geometry) or about the discipline of the Foundations of mathematics (e.g. set theory or (dare I say) UF)?

        If you mean foundations, then your first sentence seems too strong. Surely Gauss’ Disquisitiones is also a kind of foundation of a certain mathematical discipline? But if you mean Foundations then I agree completely with the first sentence but no longer understand the second sentence: why would a higher-categorical theory of motives increase your interest in Foundations (be they UF or set theory)?

        Or are you just making a point about any kind of formalism in general, be it foundational or Foundational?

        Liked by 1 person

      19. mathematicswithoutapologies Post author

        This is in answer to Dimitris’s comment on foundations vs. Foundations. (I agree that the interface is hard to use when there are several exchanges going on simultaneously, but I’m in no position to fix that.) Dimitris points out, accurately, an apparent inconsistency between the post on which he is commenting and positions I have taken elsewhere. What I meant, concretely, is that if a mathematical formalism based on HOTT/UF provided a way to construct the algebraic cycles needed to solve the outstanding questions about motives (including the Hodge and Tate conjectures, and while we’re at it why not throw in the Birch-Swinnerton-Dyer conjecture and the Bloch-Kato conjectures on special values of L-functions?) then I would be willing to start writing mathematics with types rather than sets; indeed, I would have no choice.

        At the same time, I think I agree with Jacob when he suggests (if I understand him correctly) that if the cycles can be constructed in HOTT/UF then it must be the case that they can be captured as well in set theory, and therefore I would have the choice of sticking with the latter after all.

        But in either case, my choice would involve no metaphysical commitment on the order of what is usually understood as Foundations. In practice I would be choosing to use one formal system or the other as a common language for communicating with colleagues. In other words, I would be choosing a small-f foundations in a way that might be interpreted as adherence to a Foundational program. I think this is the apparent inconsistency noticed by Dimitris.

        If I wanted to go out on a limb, I would like to say that, on aesthetic grounds — and here I mean the word in the original etymological sense, in other words as in Kant’s transcendental aesthetic, and not as a synonym for harmony or elegance — set theory seems closer than HOTT or any kind of type theory or homotopy theory to the primitive perceptions underlying the practice of mathematics. I say this because set theory arises both historically and in mathematical training in the study of functions (and therefore relations), and because my early exposure to structuralism left me irremediably programmed to believe that culture arises from the organization of perceptions of relations between relations. But it would be hopeless to try to defend this as a positive thesis, so I won’t even try.

      20. Jacob Lurie

        @Michael,

        “If you’ll allow me to play the devil’s advocate, and bear in mind that I have no opinions of my own, let me ask: what compelling reason might there be to pursue a line of inquiry that did not admit translation of basic intuitions? Maybe it’s the basic intuitions that determine what one wants to call mathematics.”

        In the context of the present discussion (not having to do with foundations), I would disagree with this. I would say that if there is some translation procedure (establishing the equivalence of two ways of looking at the same piece of mathematics), then it would be substantially more interesting and useful if basic intuitions did -not- translate. That’s the kind of situation where you could expect to have real gains from multiple points of view, because a problem might be substantially easier to solve or think about on one side than the other (for a concrete example, take the Fourier transform of square-integrable functions). And that’s why I’m interested in this debate, from a purely pragmatic point of view. If type theory really does supply new ideas and intuitions that are useful for thinking about homotopy theory, then I want to know what they are! But none of the examples provided so far has been convincing in this regard: they’ve all been descriptions of things which are -already- part of the standard intuition.

        “I’m not going to make the obvious point that none of the steps treats the rational points in question as a dependent product”

        For clarification, I believe this is synonymous with “treating rational points of M as sections of the projection map M -> Spec Q”. Though I agree with the (perhaps implied) sentiment that we do not really need a new term for this.
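
        Spelled out in generic notation (mine, just to record the identification): for a projection p : E → B with fiber E_b over b : B, a section of p picks a point of E_b for each b, so

            { s : B → E  |  p ∘ s = id_B }   ≅   Π (b : B), E_b ,

        and “rational points as a dependent product” is this identification applied to M → Spec Q.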

        Liked by 1 person

      21. mathematicswithoutapologies Post author

        Jacob,

        “I would say that if there is some translation procedure (establishing the equivalence of two ways of looking at the same piece of mathematics), then it would be substantially more interesting and useful if basic intuitions did -not- translate.”

        I don’t think we mean the same thing by “basic intuitions,” because I agree with you, specifically with your example of the Fourier transform, and in this sense I’m already on record as disagreeing with myself, for example when I wrote (in Chapter 7):

        “Langlands conjectured (and in many cases proved) such tight relations between the two kinds of structures that one can only hope to be successful in either field [automorphic forms or algebraic number theory] today by learning to see each side of the Langlands correspondence as the avatar of the other—as exhibiting the other side’s otherwise inaccessible properties.”

        But this learning process precisely means incorporating the translation into the basic intuition, as I’m sure analysts do all the time when they use the Fourier transform. Many of the most difficult steps in the proof of the (stable) Arthur-Selberg trace formula depend on playing off the properties of the spectral decomposition against the geometric decomposition, which is precisely the analogue of the Fourier transform for non-commutative groups.

        What I had in mind was a more radical failure of translation of basic intuitions. But it’s a purely hypothetical scenario that by its very nature is inexpressible, so it may just be complete nonsense. (Or at least that’s what we’ll think until the computers get their own ideas, at which point it will probably be too late.)

    2. Jacob Lurie

      “Any theorem in the language of category theory applied to a particular category (say, groups) can have all the categories excised from it by crossing out “object” and writing in “group” and crossing out “morphism” and writing in “homomorphism”. But that doesn’t mean that category theory was worthless in finding and proving that theorem.”

      I guess I disagree with this. I don’t know any examples of difficult questions about groups that have been resolved by observing that groups form a category, proving a theorem about categories, and specializing that theorem to the category of groups.

      Liked by 2 people

      1. Mike Shulman

        I don’t know any examples of difficult questions about groups that have been resolved by observing that groups form a category, proving a theorem about categories, and specializing that theorem to the category of groups.

        Why did the word “difficult” suddenly appear in there? It wasn’t there before. I chose groups because they’re familiar and easy to illustrate the point, not because they’re difficult. If you’re looking for difficult theorems, then we need to pick a category of more difficult objects. For a random example that comes to mind, consider the category of algebraic Kan complexes (and structured maps). Does it have coequalizers? Yes, because it’s the category of algebras for a finitary monad on a cocomplete category, so it is cocomplete. You could “compile that out” into a theorem whose statement and proof don’t mention categories and monads, but that doesn’t mean the language of category theory was worthless in finding and proving that result.

      2. Jacob Lurie

        “Does it have coequalizers? Yes, because it’s the category of algebras for a finitary monad on a cocomplete category, so it is cocomplete. You could “compile that out” into a theorem whose statement and proof don’t mention categories and monads, but that doesn’t mean the language of category theory was worthless in finding and proving that result.”

        If I needed that fact and was writing for a category-savvy audience, then that might well be a good proof. But I think it’s a terrible example to use as an illustration of the usefulness of category theory to a skeptic: it’s a jargon-heavy proof of a concrete statement which is easy to establish directly.

        Liked by 1 person

      3. Mike Shulman

        Are you, then, a skeptic about the usefulness of category theory? If I were trying to convince a skeptic, I would have to write something much longer. I assumed I was writing to someone who believed in the usefulness of category theory, so that I could use it as an analogy to try to explain how HoTT might be useful. I can’t give specific instances where HoTT has been useful, because as has already been pointed out, it’s an extremely young field; I’m just arguing that it’s not unreasonable to hope that it may be useful in the future, by analogy to how category theory is useful.

      4. Jacob Lurie

        Mike,

        I wouldn’t describe myself as a skeptic of the usefulness of category theory in general. But I’m skeptical that it can be useful by the specific mechanism that we’re discussing (that the very fact that something forms a category should lead to new insights about it).

        To put my objection to the algebraic Kan complex example a different way: it’s hard for me to imagine an audience which would accept a statement like
        “it’s the category of algebras for a finitary monad on a cocomplete category”
        as self-evident, but regard the existence of coequalizers as something that requires proof (on the other hand, it’s quite easy to imagine the reverse). If an undergraduate student taking an abstract algebra class from me came to my office hours with a question about how to understand amalgamated products of groups, and I said something like “well, first you need to understand finitary monads on the category of sets”, then I ought to be fired for mathematical malpractice.

        I think maybe you and I are using the word “application” in different ways. For me, to call something an application of category theory to group theory, it would not be sufficient to give an example of a theorem about categories and deduce from it a theorem about groups. The latter theorem would have to be something that was substantially more difficult (or impossible) to prove by more direct means.

        I think that there are applications of this nature: for example, there are lots of theorems about groups that are proven using ideas about group homology and cohomology, and that the language of category theory plays an essential role in setting this up. In this example I see a potential gain from the theory because there’s not one category in play but several, related by various functors and natural transformations, and it would be crazy not to have some axiomatic framework for describing the structural features that we see.

        I also agree with you that there’s value in thinking about “applications” understood in the more liberal sense that I think you mean: seeing specific results as instances of general ones. I’m just not sure I’d want to call them “applications” unless they result in simplifications or give new insights.

        Liked by 1 person

      5. Jacob Lurie

        One more thought on this:

        “You could ‘compile that out’ into a theorem whose statement and proof don’t mention categories and monads, but that doesn’t mean the language of category theory was worthless in finding and proving that result.”

        Maybe the reason that I disagree with you is that I think this is probably not an accurate account of how this result was “found”. I don’t know who first formulated the category-theoretic statement you’re citing, but my guess is that the existence of colimits was first observed in the case of concrete algebraic structures like groups, rings, etcetera by direct “generators and relations” arguments, and the category-theoretic generalization was obtained by a process of abstracting those concrete arguments. So I don’t see it as appreciably different from those arguments, even if it is stated in greater generality.

        For a simpler analogy, consider the following:
        Theorem 1: The kernel of a homomorphism f: A -> B of abelian groups is a subgroup of A.
        Theorem 2: The kernel of a homomorphism f: G -> H of arbitrary groups is a subgroup of G.

        Clearly Theorem 2 implies Theorem 1. Moreover, one might arrive at Theorem 2 by analyzing a proof of Theorem 1 and eliminating hypotheses which were not needed. But I would not call a deduction of Theorem 1 from Theorem 2 an “application of the theory of nonabelian groups to the theory of abelian groups”. If I’m only interested in Theorem 1, there’s no gain from deducing it in this way: it doesn’t make the proof any shorter and it doesn’t give any greater insight into the structure of abelian groups.

        Liked by 1 person

  11. Urs Schreiber

    Mike, I’ll resist the temptation to debate whether type theorists are computer scientists or mathematicians, and instead count that as one point on which I agree with our host; but no matter how badly I may be expressing myself, you know as well as I do that the remarkable fact remains that the formal homotopy theory we are talking about was discovered by people far removed from traditional homotopy theory, thinking about constructive identity back in the 70s. Univalence, HITs, etc. are “just”, if I may use this word, the add-ons needed to go from this locally cartesian closed homotopy theory further to elementary homotopy toposes.

    Jacob, I have no illusions that any of the results I alluded to have a chance of impressing you as mathematical results, but my point is that if a kid can give a new elementary proof of Blakers-Massey using homotopy type theory, then this shows that HoTT as a tool has rather more potential than the poor Peano arithmetic that keeps being mentioned.

    I want to apologize if my previous comment on your book was rude, as Mike suggested; that was far from my intention. But maybe, to avoid further such pitfalls, I’ll bow out now.

  12. Pingback: Voevodsky trending | Mathematics without Apologies, by Michael Harris

  13. mathematicswithoutapologies Post author

    Let me suggest to Urs that he not bow out just yet. Something philosophically very interesting is going on in this discussion, and I am not sure whether it is obscured by all the references to homotopy theory or whether on the contrary it is inseparable from the homotopy theory. The various attempts at analogy lead me to believe it’s the former.

    1. sntx

      At the risk of sounding nonsensical and invoking terminology from elsewhere, I’d say that one side is looking at ‘mathematical things’ as present-at-hand while Lurie et al. are looking at them as ready-to-hand.

      Liked by 1 person

      1. mathematicswithoutapologies Post author

        I don’t think that’s nonsensical at all; on the contrary, I think it’s very much to the point. Although my book doesn’t use that terminology, the presumption is that mathematical practice is based on adopting the attitude that its referents are ready-to-hand.

  14. Jules

    “It does not seem to me that this issue is addressed by homotopy type theory at all. To my knowledge (and an expert should correct me if I’m wrong), it is an open question whether or not the notion of “homotopy coherent diagram” can even be formulated in the language of homotopy type theory.”

    Forgive me if this is stupid, since I know hardly anything about either subject, but isn’t the point of HoTT that a “homotopy coherent diagram” is simply a function in HoTT? A function must map equivalent values to equivalent values, and equivalent equivalences to equivalent equivalences, etc.

    1. Eric Finster

      In HoTT, types are regarded as infty-groupoids, and thus, for groupoids, this is correct: a “diagram” is just a map. But in this quote, Jacob is referring to a “system of types indexed by a (higher) category”, not just a groupoid. And this notion suffers from coherence problems in HoTT: a length-n path of composable arrows in your category will require an n-dimensional cell witnessing coherence of the maps involved. For simple indexing categories (say, those of pushouts or pullbacks) where there are relatively few compositions, this can be done by hand. But a general theory is still lacking.
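
      To see why the groupoid case comes for free: in type theory every function automatically acts on equalities, on equalities between equalities, and so on. A Lean-style shadow of this (only a shadow, since Lean’s propositional equality is proof-irrelevant, whereas in HoTT the tower carries real information):

          -- “ap”: any function respects paths (in Lean this is congrArg)
          def ap {A B : Type} (f : A → B) {x y : A} (p : x = y) : f x = f y :=
            congrArg f p

          -- and it respects paths between paths as well, and so on up the tower
          def ap2 {A B : Type} (f : A → B) {x y : A} {p q : x = y}
              (h : p = q) : ap f p = ap f q :=
            congrArg (@ap A B f x y) h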

  15. Dimitris

    Let me elaborate on a remark I made in a previous comment on this blog (about HoTT/UF having both a mathematical and metamathematical side and it’s best not to confuse the two) that could help frame the discussion above. The way I see it there are (at least) four main layers of “discourse” in HoTT/UF. For each layer I will also list (a) an analogous result belonging to the analogous layer in set theory/traditional mathematics (b) a result in HoTT/UF that belongs to the layer in question:

    1) PHILOSOPHICAL: This is mainly about thinking of the basic objects of mathematics as homotopy types instead of sets (I personally prefer the term “abstract shapes” in order to dissociate them from homotopy theory and emphasize the fact that they are supposed to have some pre-formal, intuitive content on their own but this is a matter of taste – nevertheless I recommend reading everything I say in this paragraph with “abstract shapes” instead of homotopy types). The idea is that instead of the cumulative hierarchy of sets we have the hierarchy of homotopy types. Therefore, the same way that ZFC can be seen as a formalization of a pre-formal, intuitive cumulative hierarchy of sets so UF ( e.g. as MLTT+UA) can be seen as a formalization of a pre-formal, intuitive hierarchy of homotopy types. There could be many different formalizations of this hierarchy of homotopy types the same way that there are many (competing/complementary) formalizations of the cumulative hierarchy of sets. So at this level the “discourse” is not about formalizations but maybe about philosophical issues to do with sets vs. homotopy types e.g. which is more intuitive, which is more fundamental etc. etc.
    (a) Questions on the “real” universe of sets, e.g. is the Continuum Hypothesis (CH) in some sense true/false/unsettleable.
    (b) N/A — but something that e.g. could belong here is the question whether categories or groupoids are the “correct” higher-level version of sets.

    2) METAMATHEMATICAL: Here we have HoTT/UF as the study of certain formal systems, e.g. extensions of intensional Martin-Löf Type Theory, Cubical Type Theory etc. This involves the usual study of a formal system, i.e. its models and model theory, its syntactic properties (e.g. canonicity, consistency strength) etc. Essentially what is going on here is the study of a new class of formal systems using well-known methods extended to more complicated settings. Some of these formal systems will of course be seen as formalizations of the hierarchy of homotopy types described in (1). But one can simply study them for their own sake, ignoring (1) altogether. Using HoTT/UF itself as a system in which to *study* other formal systems (as e.g. ZFC is used to study formal systems, including itself) is still in its very early stages for the technical reasons alluded to above. But such investigations would also belong here. Also, the model theory of these HoTT/UF formal systems is more properly called mathematical (i.e. belongs to (3) below), the same way that e.g. the model theory of groups is more properly called mathematical even if it can also be seen as the study of models of a formal system.
    (a) Independence of CH from ZFC.
    (b) The fact that MLTT+Univalence has a model in simplicial sets. More syntactically: the fact that MLTT+Univalence is consistent relative to ZFC+2 inaccessibles.

    3) MATHEMATICAL: This is what most of the debate above has been about. Here HoTT/UF can be put to mathematical use by virtue of its being the internal language of a class of very interesting structures (infty-topoi). There could be other applications, exemplified e.g. by Urs’ cohesive HoTT and other extensions (cf. David’s comment), and other mathematical uses, expository or otherwise, that we are not currently aware of.
    (a) Pick your favourite theorem in algebra about an algebraic object defined as a set-with-structure
    (b) Elementary Blakers-Massey as described above

    4) COMPUTER-SCIENTIFIC (for lack of a better term): Here HoTT/UF is implemented in proof assistants. The main aim here is to create proof assistants better calibrated to deal with modern algebra.
    (a) The formalization of the proof of Kepler’s conjecture by Hales et al. (Flyspeck)
    (b) The synthetic calculation of the fundamental group of the circle as described e.g. in the HoTT book.
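
    (Roughly, as in the HoTT book, the synthetic calculation in (b) defines a family code : S¹ → Type by circle recursion, with

        code(base) := Z,        ap code (loop) := ua(succ),

    and an encode/decode argument then shows that the loop space of the circle is equivalent to Z.)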

    (1)-(4) for me are “minimally independent” in the following sense: one can have interest in any one of them without having an interest in any of the others. This hopefully may help convince people that interest in HoTT/UF does not commit you to being a computer scientist, or a set-theoretic skeptic, or someone who says that number theorists should change the way they think about math etc. etc.

    Most of the debate above seems to me to confuse the distinction between (2) and (3). Of course there are undeniable connections and interplay between (2) and (3) but one thing I feel requires caution: the study *of* HoTT (which belongs to (2)) is not the same as the *use of the model theory of* HoTT (which belongs to (3)). This is why, I feel, the Peano arithmetic example is slightly misleading, because PA has a unique intended model (the natural numbers). The fact that it fails to have only one model up to isomorphism is an (inescapable, by Gödel) formal “defect”. Whereas HoTT’s multitude of models is a virtue. Of course if you accept (1) then that probably forces you to accept some intended model that UF will then try to describe (e.g. simplicial sets, if you want to think set-theoretically). But the study of HoTT does not commit you to such a thing, in much the same way that the study of theories in first-order logic does not commit you to ZFC. The study of Peano arithmetic on the other hand does commit you (in some sense) to an intended model, which is why the analogy is slightly misleading. (Another way to put this: the model theory of Peano arithmetic is interesting only metamathematically. But the model theory of HoTT/UF is also interesting mathematically.)

    Also the above distinctions hopefully help clarify the point I made in my previous comments on this blog i.e. that to say that anyone interested in HoTT/UF is automatically invested in the mechanization of mathematics seems to me as extreme a caricature as saying that anyone who is interested in ZFC is committed to some sort of Platonism about sets. This, I feel, is just as unfair a caricature even for people exclusively or mainly interested in (4). So from this perspective it is certainly a distraction (as Urs suggested) to say that only computer scientists are interested in HoTT.

    Finally, let me say that as someone that (wrt HoTT/UF) is mainly interested in (2) and (1) it seems to me absurd and hopeless to go about telling mathematicians that they should change the way they think of things. This is because “core mathematics” is about structures that are sufficiently canonical (e.g. the natural numbers, the rationals, the reals) that *any* reasonable foundation will more or less agree on their properties/how to think of them. Whether you want to think of N as an h-set in HoTT or as a model of PA in ZFC is a question that does not and should not arise in mathematical practice. The most that seems reasonable to do (and I think neither Jacob, Mike or Urs would disagree with this) is to say to *some* mathematicians (e.g. people interested in abstract homotopy theory) “Look here’s a new tool that may come in handy for such and such reasons.” Take it, leave it, study it, ignore it — up to you…or up to discussion. “Is HoTT/UF in the sense of (3) useful and important?” is a matter of opinion/intuition and even (as suggested above) aesthetics. But there should at least be some agreement that the (3) aspect of HoTT is genuinely some kind of *tool* even if people disagree over its usefulness. (And with respect to the above discussion: it is certainly consistent to engage in (3) while maintaining a firm belief that ZFC is the “correct”/”preferred” foundation of mathematics.)

    To quickly summarize: it is certainly true that HoTT/UF comes with an all-encompassing philosophical/mathematical vision which includes all of (1)-(4). But it is certainly false that one cannot profitably pick up only some of (1)-(4) without accepting (or even outright rejecting) this all-encompassing vision.

    Liked by 1 person

    1. David Corfield

      These are helpful distinctions, Dimitris, though since you could sum up the thrust of my work as pushing philosophy to take much more interest in (3), I’d probably want to rename (1) (perhaps ‘metaphysical’). In any case, I couldn’t imagine a useful discussion of the nature of the fundamental entities of mathematics that wasn’t informed by the state of the practice. There would hardly have been such a lengthy debate about the nature of the set concept, had mathematicians not developed and embraced such a concept. We should always have been receptive to the idea that they might devise something new, most likely over the course of decades.

      1. Dimitris

        Fair enough. I wrote “philosophical” in order to emphasize that there’s an interesting pre-formal aspect to HoTT/UF, i.e. the view that the hierarchy of h-levels can play a similar role to the cumulative hierarchy of sets. So perhaps better to read “philosophical” as “non-technical”. (1) should be informed by (3), of course. But they are independent in the sense that one need not be an active contributor to (3) in order to have something to say about (1). Understanding what is going on in (2), (3), and (4) is of course preferable (if not essential) if you’re going to say anything about (1).


  16. Chris Austin

    With reference to Jacob Lurie’s comment:

    Working in a formal system, more or less by definition, means that you can’t ignore steps which are routine and focus attention on the ones that contain the fundamental ideas.

    Perhaps I could point out that on http://verish.info/, I have tried to develop a framework that overcomes precisely this problem. You can see how it works on the knowledge base pages, for example http://verish.info/ThingsAndFeatures.html. You click on a green Method, and the Application of that Method comes into view. Click on the Method again, and the Application disappears. The HTML and JavaScript markup that achieves this can be generated automatically by the Verish program, whose C++ source code is provided on the website.

  17. Pingback: Shakespeare without topology | Mathematics without Apologies, by Michael Harris

  18. Vladimir Voevodsky

    Hello,

    I have not read all that has been written in this discussion, so maybe someone has already pointed out what I am going to say. “Homotopy type theory” is a field that is in its infancy.

    The Calculus of Inductive Constructions, the type theory that we are currently using to formalize mathematics in the univalent style, is surprisingly convenient for doing mathematics at the level of sets and categories (and maybe 2-categories). As for mathematics of higher h-levels, it is, in practice, of very little use. Note that this is still a great advance over what came before, as previously we did not even have a system for doing abstract mathematics both formally and conveniently at the level of sets.

    At the moment a new type theory (or a group of related type theories) is being developed. These type theories are known as cubical type theories. They provide, or so we hope, a solution to some of the problems that make it difficult to use univalent formalization at higher h-levels.

    Still, they do not provide all that is needed, and more advances and probably unexpected discoveries lie ahead before we will be able to say that we have a formal system for working, univalently, with mathematical objects of all h-levels (BTW “univalently” means “in a formal language that does not contain statements that are not invariant under equivalences”).

    Vladimir.

  19. Jack Morava

    Having learned of this fascinating site only this morning (thanks to a heads-up from Emily Riehl), and having a serious understanding of neither higher category theory nor type theory, I’m writing to suggest that this discussion reads to an outsider as a passionate (and extremely well-informed) lovers’ quarrel.

    In the early 1970s, driven (I think) by a terrible job market, US topology split, destructively and bitterly, into `algebraic’ and `geometric’ camps. I suspect that what we’re seeing now is a coming together rather than a pulling apart.

  20. Richard Séguin

    Some random thoughts.
    (1) This thread is getting increasingly difficult to follow. When I went to bed last night there were 28 replies, and now there are 48. As it grows I have to scan the entire now-lengthy page to see where the new stuff is. It would be helpful if this blog had a recent comments feature.
    (2) I have a feeling that there are now people, say analysts or dynamical systems folks, visiting this site wondering what all the fuss is about, but going away scratching their heads wondering what’s in it for them.
    (3) I’m with Michael’s suspicion that there are some tense philosophical differences barely under the surface of, and obscured by, the technical discussion. If everyone were more forthright about this, this discussion would be even more useful and would bring in a wider audience (see (2)).
    (4) Vladimir said: “At the moment a new type theory (or a group of related type theories) is being developed. These type theories are known as cubical type theories. They provide, or so we hope, a solution to some of the problems that make it difficult to use univalent formalization at higher h-levels.” It’s my best guess that anyone working on new foundations will find themselves eventually running into new walls that become increasingly difficult to surmount with ongoing tweaks, and someone is always going to be unhappy. There is probably a someone-is-always-going-to-be-unhappy theorem in The Book, but we haven’t found that page yet. I’m not totally negative about this though. I think it’s always good that someone is challenging the status quo and exploring new ways of thinking.
    (5) I have more to say about use of computers, but that will have to wait.

    1. Jacob Lurie

      Richard,

      “I’m with Michael’s suspicion that there are some tense philosophical differences barely under the surface of, and obscured by, the technical discussion. If everyone were more forthright about this, this discussion would be even more useful and would bring in a wider audience (see (2)).”

      I’m not sure I would describe the differences as “tense”. I think they can be made less technical, but that to do so runs the risk of caricature. Let me take a stab nonetheless.

      Consider the following two definitions of smooth manifold:

      Definition 1: A smooth manifold is a topological space X equipped with a covering by open sets U_i which are equipped with homeomorphisms f_i: U_i -> R^n for which the associated “transition functions” are smooth.

      Definition 2: A smooth manifold is a topological space X equipped with a sheaf of rings O_X such that the pair (X, O_X) is locally isomorphic to (R^n, O), where O denotes the sheaf of smooth functions on R^n.
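
      (For concreteness, here is a minimal sketch of the routine translation between the two; the formula is a standard gloss and not part of either definition as stated. Given an atlas {(U_i, f_i)} as in Definition 1, one obtains the structure sheaf of Definition 2 by setting

      O_X(V) = { h : V -> R | h ∘ f_i^{-1} is smooth on f_i(U_i ∩ V) for every chart (U_i, f_i) },

      and conversely the charts of Definition 1 can be recovered from local isomorphisms (U, O_X|_U) ≅ (R^n, O).)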

      Thesis 1: These two definitions introduce what are for all practical purposes the same mathematical concept. Though they are not formulated identically, the equivalence between the two is something sufficiently routine (at least for those mathematicians that have digested both) that we use it without even noticing.

      Thesis 2: These two definitions define two distinct mathematical concepts. Though these concepts are equivalent in some sense, each approach provides essential intuitions that are lost in the procedure of translating between them.

      I would say that in (one of the) discussion(s) above, I am defending some version of Thesis 1 while Mike is defending some version of Thesis 2, but in a context where “smooth manifold” is replaced by a mathematical concept that we have a poorer understanding of, and where the equivalence of the two definitions is not yet a theorem. (Note: I’m not sure I completely agree with Thesis 1 in the more complicated context, and I’m not claiming that Mike would endorse Thesis 2 in the simpler one.)

      1. Jacob Lurie

        On reflection, I think my analogy is weak. Some aspect of the homotopy theory is probably essential to the nature of the disagreement. The point Mike is making (if I understand him correctly now) is that while two formalisms which attempt to describe a certain structure might be equivalent in the sense that all homotopy invariant ideas on one side can be translated to homotopy invariant ideas on the other, there might well be ideas which are -not- homotopy invariant, which do not translate across the equivalence but are nonetheless useful, giving one formalism an advantage over the other.

        I certainly agree that this sort of thing can happen, although I would contend that what is “lost in translation” is not basic intuitions but perhaps the convenience with which those intuitions can be converted into rigorous proofs.


      2. Mike Shulman

        @Jacob:

        The point Mike is making (if I understand him correctly now) is that while two formalisms which attempt to describe a certain structure might be equivalent in the sense that all homotopy invariant ideas on one side can be translated to homotopy invariant ideas on the other, there might well be ideas which are -not- homotopy invariant, which do not translate across the equivalence but are nonetheless useful, giving one formalism an advantage over the other.

        I don’t think that’s what I’m saying. It might be true, but the main point I’m making is what I already said here:

        I won’t claim that there are basic intuitions that don’t translate, but I will claim that the basic intuitions of HoTT are different from the usual intuitions used in higher category theory. One could translate them and learn to use the corresponding intuitions diagrammatically rather than syntactically, but as you say the syntactic approach is likely to retain an advantage.

        I already described an example from HoTT which, as far as I have ever seen, is not part of the usual intuition of higher category theory: the internal formulation and relationship of n-types and n-truncated maps. Of course I can’t say for sure that anything is not part of someone else’s intuition, but nowhere in the literature on higher category theory have I seen anything that looks like what you get when you translate this definition from HoTT (which I haven’t actually given; the exact definition is due to Vladimir) into category theory.
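
        For readers curious about the shape of that internal definition, here is a minimal sketch in Lean-like syntax; the names are mine, and Lean itself is not a HoTT system (its equality is proof-irrelevant), so this only conveys the form of the standard inductive definition of h-levels and n-truncated maps, not its homotopical content.

          universe u

          -- A path type valued in Type rather than Prop, so iterated
          -- identity types remain types.
          inductive Path {A : Type u} : A → A → Type u
            | refl (a : A) : Path a a

          -- Contractibility: a center together with a path to every point.
          structure IsContr (A : Type u) where
            center : A
            paths  : ∀ b : A, Path center b

          -- h-levels: level 0 is contractible, and level n+1 means every
          -- path type has level n (propositions sit at 1, sets at 2).
          def IsOfHLevel : Nat → Type u → Type u
            | 0,     A => IsContr A
            | n + 1, A => ∀ a b : A, IsOfHLevel n (Path a b)

          -- The homotopy fiber of f over b, and "n-truncated map":
          -- a map all of whose fibers are n-types.
          structure Fiber {A B : Type u} (f : A → B) (b : B) where
            point : A
            path  : Path (f point) b

          def IsTruncMap (n : Nat) {A B : Type u} (f : A → B) : Type u :=
            ∀ b : B, IsOfHLevel n (Fiber f b)

        The relationship alluded to above, that a map to the point is n-truncated exactly when its domain is an n-type, can then be formulated and proved entirely inside the theory.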

        I know that you backed off from your Theses 1 and 2, but let me state anyway an analogous thesis that I would defend, both for ∞-toposes and for manifolds:

        “These two definitions define two distinct ways of looking at the same mathematical concept. Once fully digested, the equivalence between the two can be used in a routine way. Nevertheless, each definition provides distinct intuitions that are sometimes lost by this translation.”

        If we distill it down from the categorical heights, this brings us back to John Baez’s pithy quote about equality: “Every interesting equation is a lie.” The only obviously true equation is x=x, but it carries no (or little) information. Any other equation, like x=y, is “a lie” because x and y are not the same thing; yet what is interesting about the equality is that they are nevertheless the same in some way. But knowing the equation doesn’t mean that we should forget all about y and use only x; the point of having the equation is that we can pass back and forth between x and y, according to which is most appropriate in any given context.

        Someone who doesn’t know about sheaves of smooth functions is missing an important item in their toolbox for understanding manifolds; but so is someone who doesn’t know about transition functions. Even more elementarily, an equation like (x+1)²=x²+2x+1 doesn’t mean that there’s no point to the expression (x+1)² because we can always write x²+2x+1 instead; the point of the equation is that sometimes (x+1)² arises naturally while in other places we see x²+2x+1, and knowing that they are equal gives us a way to relate these situations.

        HoTT gives us a new point of view on higher categories, which is (at least conjecturally) “equivalent” to a more traditional category-theoretic perspective. But that equivalence doesn’t mean we ought to disregard one or the other. On the contrary, the whole point is that in studying the same objects, we can choose whichever point of view is more convenient for a given purpose (or, even, more aesthetically pleasing to whoever is talking).

        Since this many-tentacled discussion is becoming next to impossible to follow (and since I, at least, have now been reduced to quoting myself (-: ), I suggest that we break it off, or at least break it up. I don’t personally feel any great need to continue it; I’ve made my points, and I’m satisfied with the conclusions. I suggest that anyone who wants to continue some thread of the discussion make a new blog post somewhere about that particular topic, and post a link here.


      3. Urs Schreiber

        That HoTT is a different but equivalent formulation of more traditional homotopy theory: this is just what is meant by saying that it is the internal language of lcc infinity-categories, respectively infinity-toposes. It is the internal perspective on something that, for historical reasons, already happens to be better known externally. This is just as, for ordinary 1-categories resp. 1-toposes, there is their external category theory in parallel with their internal intuitionistic logic, and both perspectives are equivalent but have different advantages for different applications. In the 1-categorical case these equivalences have been fully nailed down; there is an equivalence of 2-categories of, on the one hand, dependent type theories and, on the other, locally cartesian closed categories, and as with any equivalence of (higher) categories, instead of implying that one side of the equivalence is thereby redundant, it means that there is something to be gained by retaining both sides and having the freedom of moving back and forth.

        The thrust of the internal perspective is — and this is the reverse of what has been suggested in some comments above — that it allows one to take the “naive” reasoning that everyone employs when thinking about sets and, provided only the law of excluded middle and the uniqueness of identity proofs are omitted, this naive reasoning is guaranteed to translate into a statement that holds true in every lcc infinity-category resp. infinity-topos.

        For instance in HoTT one makes *verbatim* the naive statement that a fiber product of two maps f : A -> C and g : B -> C is “the collection of all a in A and b in B with identification of f(a) with g(b)” but this naive statement now comes out as the correct concept of homotopy fiber product in any lcc infinity-category, in any infinity-topos.
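
        (In Lean-like syntax, and purely as an illustration with names of my choosing, the verbatim statement reads:

          structure FiberProduct {A B C : Type} (f : A → C) (g : B → C) where
            a    : A
            b    : B
            path : f a = g b

        In plain Lean the identification f a = g b is a proof-irrelevant equality, so one only sees the set-level pullback; read in a univalent system, the very same expression is the homotopy fiber product just described.)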

        Accordingly, the interest in HoTT from this point of view is not that it will provide new theorems in the traditional homotopy theory (of simplicial sets, say) but conversely that every *elementary* proof in the traditional theory now becomes a proof of a new theorem in every other lcc infinity-category resp. infinity-topos.

        This is what the new elementary proof of the Blakers-Massey theorem is an example of. That it gives a new proof of the old theorem in simplicial sets/topological spaces is nice; that it gives a new proof of the previous corollary that the statement therefore also holds in infinity-sheaf infinity-toposes is nicer; but the real gain is that it is an unprecedented proof of a theorem that holds more generally in any “elementary infinity-topos”. It’s a new theorem that will have its external proofs, too, but the internal HoTT proof happens to be easy enough that even type theorists with no training in homotopy theory (or infinity-category theory) could find it. This suggests that much more will be possible once genuine homotopy-theory experts combine their expertise with the advantages that the internal language may offer.


      4. mathematicswithoutapologies Post author

        I’m sure this is interesting, but until someone explains to me the difference between an internal language and an external language I find this considerably less useful than the Heideggerian distinction between “present-at-hand” and “ready-to-hand” mentioned by sntx the day before yesterday. I get the impression that some contributors see us number theorists as sleepwalkers ignorant of the most basic notions of logic, and refrain from waking us for fear that we would fall off a cliff, like in one of those old Roadrunner cartoons, and bring all our theorems with us. But it really doesn’t work like that. We all speak dialects of a single language, which is neither internal nor external, and when the dialects are too different for easy communication we resort to what Peter Galison calls “pidgins” (and there’s a quotation in my Androids article in which David Corfield speculates that pidgins can be used to communicate with computers as well).

        But I don’t even have a pidgin for communicating with any type of type theorists.


      5. Richard Séguin

        Michael said: “until someone explains to me the difference between an internal language and an external language I find this considerably less useful than …”

        I had problems with this as well. Based on my own experience, those words do trigger vague notions in my head, but nothing quite concrete enough to get traction on.


      6. dcorfield

        Regarding this ‘internal language’ issue, Michael, did you read the nLab page ‘internal logic’ (http://ncatlab.org/nlab/show/internal+logic)? It’s far from perfect, but hopefully it conveys the gist of the idea: inference systems come in different strengths, to a given system there correspond settings which ‘support’ that kind of reasoning, and so inferences within the system can be interpreted in any example of the corresponding setting.

        So, for example, if you have a rich enough logic in which to define the concept of the reals, symmetric matrices and diagonalization, and to prove some result about diagonalization, you could look for suitable interpretations. Someone (I think Scedrov here http://www.sciencedirect.com/science/article/pii/0168007286900060) showed you could do something of this sort constructively, so in the kind of language that is internal to toposes. He could then interpret his argument in any topos, such as the topos of sheaves over the reals where it gives a result about diagonalizing matrices of continuous functions.

        Now, if HoTT is the internal language of infinity-toposes, any proof in HoTT has an interpretation in any infinity-topos.


  21. mathematicswithoutapologies Post author

    I’m about to switch off my computer for the night. Richard Séguin is right to observe that this exchange is getting hard to follow, and it’s in part because several distinct themes are being discussed simultaneously. If I could see clearly how to separate them I would move the discussions to distinct headings and let them evolve independently (mainly during daylight hours in Paris). I’m open to suggestions.

    I was briefly convinced that one of the philosophical differences animating this exchange can be captured by Ian Hacking’s distinction between Leibnizian and Cartesian proofs. The former is the prototype of the formal proof, and Leibniz was already dreaming of a language that neatly matched the rules of thought and that could decide any question mechanically (and syntactically). The latter are the “surveyable” proofs of Wittgenstein, the ones that impose themselves as evident upon the understanding. But upon reflection, I decided that defenders of HOTT/UF as well as skeptics are looking for proofs that can be immediately apprehended, in other words, everyone is looking for Cartesian proofs. So the difference must be elsewhere.

  22. TheMonster

    “I chose groups because they’re familiar and easy to illustrate the point, not because they’re difficult. If you’re looking for difficult theorems, then we need to pick a category of more difficult objects.”
    Last time I checked, groups were pretty hard.

  23. Vijay Ganesh

    I think there are two interrelated but distinct issues here:

    1) The merits and demerits of HOTT as a new foundation for mathematics:

    This is not an easy question to settle. Some are unconvinced that HOTT is a useful foundation because there is a cost associated with HOTT, namely formalism. Additionally, the payoff is not clear to those who are used to their own informal ways of doing math. Some argued that the payoff is clear because they were able to prove some known theorems using HOTT which would otherwise have been difficult for them. Jacob Lurie et al. think that that is not sufficient payoff. For them, the real payoff is when you solve a hard open problem using HOTT that would otherwise be too difficult to solve.

    It is really hard to predict what the payoff will be in the long run.

    2) The use of computer-aided proof checkers to check math proofs

    Here there are two points. First, do we need them? Second, what is the cost? Put differently, can the mathematician still use informal language while using a formal, computer-aided way to check proofs?

    2.1) Do we need computer-aided proof checkers: Here I don’t see what the debate is about. Mathematicians should welcome a “second opinion” or a “debugger” for their work. Those who are promoting the use of computer-aided proof checkers (HOTT + Coq is one way to do this) are simply saying “I need to double-check your work. This requires us to translate your work into a specific formalism, and check it using a proof assistant”.

    I don’t see what the issue is here. If someone is willing to translate your work and debug it for you, what is the complaint? If this approach finds serious errors (and there are examples of it), then we should promote this extra layer of scrutiny.

    2.2) The cost: It is clear there is a significant cost associated with using new, unfamiliar formalisms. Perhaps the solution is to somehow compile informal mathematics down to formal mathematics. Of course, at this point in time it is totally unclear to me how this can be done.

    1. Jacob Lurie

      “Jacob Lurie et al. think that that is not sufficient payoff. For them, the real payoff is when you solve a hard open problem using HOTT that would otherwise be too difficult to solve.”

      I don’t think that’s an accurate statement of my position.

      First, I wasn’t commenting on the usefulness of homotopy type theory as a foundation for mathematics (if “usefulness” is even the right word in that context), but as a tool for studying particular mathematical structures.

      Second, I would agree that -one- real payoff is giving the solution to a hard open problem which is difficult to solve, and that I am skeptical that homotopy type theory will be useful to classical homotopy theory in this way. But I certainly don’t think that this is a test that all mathematics needs to pass in order to be worthwhile. Plenty of mathematics is worthwhile simply because it is interesting, even if it has limited utility for the purpose of resolving questions in the mathematics that came before it.

  24. Max

    Hi Jacob.

    //…except that I think that focus on “homotopy type theory as a foundation for mathematics” misses the point. I think that the problem of finding a foundation for mathematics (meaning a small number of concepts and assumptions from which the rest of mathematics can be derived) is an interesting one, from both a mathematical and philosophical point of view (perhaps more the latter than the former). It’s also a problem which I would regard as having been solved in the early 20th century. As with many mathematical problems, once they have been solved once, it’s much easier to solve them again …

    I agree with your point here. But my scepticism is wider. Namely, if one understands mathematics as a world of formal systems (a small number of concepts and assumptions), then the Foundation for this mathematics is “the formal system for the formal systems”, that is, an impredicative closure unresolvable within this world of formal systems.

    HoTT and HoTT+UA are examples of formal systems and so suffer from this “conceptual bug”: undecidability of consistency.

    In this light, “informal” or “semi-formal” metatheories look like more plausible candidates for a Foundation of this mathematics (the formal-systems conception).
    But what would they consist of? Mathematical concepts?
    If mathematics is “formal”, then no! 😉
    Maybe it is for these reasons that Brouwer was intuitively “averse” to logic and formalization, and that Heyting “…regretted, when old, that he spent all his time formalizing intuitionistic mathematics rather than doing it”?

    //…It’s also a problem which I would regard as having been solved in the early 20th century. As with many mathematical problems, once they have been solved once, it’s much easier to solve them again…

    Here I disagree.
    In this sense (treating mathematics as a variety of formal systems and corresponding formal proofs), the Foundation is not a “mathematical” problem, and thus it cannot “have been solved once …or again…” by means of formal proofs with complete consistency.

    Formal proofs are inseparable from some kind of formalisation (a formal (meta)theory). And despite the usefulness of proof-assistant software, there is no guarantee that a computer will not hang on a paradox (a runtime error in the form of unbounded recursion, an “infinite loop” caused by an impredicativity, for example).

  25. Steve Awodey

    Folks,
    It has become very difficult to follow this discussion because of the way the comments are threaded on the blog. I suggest we put all future comments at the very end and use the @personsname convention (or direct quotation) to indicate the referent of the comment.

  26. mathematicswithoutapologies Post author

    @Steve,

    Although I find putting random punctuation marks in the wrong place ungainly in the extreme, I have no better idea, so I have to second your suggestion.

    @Max,

    I find it no easier to “understand the mathematics as a world of the formal systems” than to understand myself as a formal system.

    But I’m not going to pursue the point, because I’m going to spend the weekend with my family; I will check in from time to time to approve comments, but I’ll refrain from adding more of my own.

  27. Mike Shulman

    @Jacob:

    I had thought that the dictionary relating category theory to type theory required the existence of function objects (so that morphisms from A to B can be represented by terms of the type given by functions from A to B). How does this work if I’m trying to think about a setting in which I don’t have function objects?

    Technically, the morphisms from A to B are represented by terms of type B containing a free variable of type A. When there are function types (especially function types satisfying definitional η), such a term is roughly equivalent to a term of type A -> B by λ-abstraction. However, the notion of term-containing-a-free-variable still makes sense in the absence of function types.

    By the way, let me also take this opportunity to say that to everyone complaining about the difficulty and confusingness of type theory: I hear you! I was that person myself once. I’m not sure why type theory is so hard for us mathematicians to understand. I think partly it has to do with the notation, and partly with a set of implicit assumptions about the nature of mathematics that type theorists share but most mathematicians don’t, but there’s more to it than that too and I don’t know what it is. It took me a long time to really understand type theory (and there are still parts of it that I think I don’t understand the way a “real” type theorist does). The HoTT Book was an attempt to explain type theory to mathematicians, and I’ve also written several blog posts trying to do the same thing; but I’ve gathered that all of these attempts “work” for some readers but not for others. So I don’t know any “magic bullet” to understand type theory that will work for everyone; probably your best bet, if you really want to understand it, is to read lots of stuff written by different people and eventually it will start to click together.

  28. Jules

    In fact there are categories without function objects, and they still have a corresponding type system. Depending on how much structure you have in your category you get a certain amount of structure in the corresponding type system / programming language. If you don’t have exponentials in your category you don’t have what a computer scientist calls “first-class functions” in your programming language. The advantage of type theory is that it allows one to think on the level of what computer scientists call values. For example in Set you have sets, functions between sets, but those functions take as input a value in one set and give as output a value in another. In category theory one might think of f ∘ g, in type theory of λx. f(g(x)). There is a mechanical way to translate the latter into the former. This alternative perspective of working directly with values rather than indirectly through function composition can be a nice way to think. Type theory allows one to work with “values” in other categories. For example in HoTT a value is a point in a space. Things like λx. f(g(x)) still make sense by translating them into f ∘ g. Some constructions that are natural in type-theory language, like λy. h(λx. f(x,y)), can be more unwieldy when expressed in category language. If your category does not have function objects then things like f(λx. x) are type errors, and the program that does the mechanical translation will tell you that you did an illegal construction.
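
    A small Lean-like sketch of the contrast, with types chosen only for illustration:

      variable {A B C D : Type} (f : B → C) (g : A → B)

      -- category style: point-free composition of arrows
      example : A → C := f ∘ g
      -- type-theory style: the same map written on values
      example : A → C := fun x => f (g x)

      -- a construction that is natural on values but clumsier point-free
      variable (h : (A → D) → C) (k : A → B → D)
      example : B → C := fun y => h (fun x => k x y)

    The mechanical translation mentioned above is what turns such value-level expressions back into composites, using exponentials where (as in the last example) they are genuinely needed.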

    1. Jacob Lurie

      @Michael,

      Regarding the meaning of the term “internal language”: I’d prefer to phrase it as a distinction between “syntactic” and “semantic” ways of thinking, and it’s a distinction that one could make for reasoning about much more familiar objects.

      So, for example, suppose I wanted to prove a theorem about groups. Let me distinguish between two “styles” in which such a proof could be understood:

      a) In the “syntactic” style of first-order logic. A proof in this sense consists of a sequence of sentences in the first-order language of groups (meaning they are formal expressions built out of “and”, “or”, “for all”, etcetera), each of which is required either to be an axiom of group theory or to be obtained from previous elements of the sequence using a rule of inference.

      b) In the “semantic” style, the way that mathematics is usually practiced. Here we imagine that we have a group and we use whatever tools we like in order to study it and draw the conclusion that we want. So we might use concepts that can’t be formulated in the first-order language of groups: for example, we might draw conclusions about G by studying subgroups of G, or linear representations of G, or ideals in the group ring Z[G].

      Gödel’s completeness theorem asserts that any first-order statement about groups is provable in the syntactic sense if and only if it is true in all groups. So for such statements, any theorem that can be proven “semantically” can also be proven “syntactically” (the fine print, of course, is that this applies only to statements which can be formulated in the first-order language of groups. So, for example, it tells you nothing about theorems which concern -finite- groups, since the finiteness of a group is not a hypothesis that can be formulated in the first-order language of groups).
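
      (A concrete example of the sort of statement the theorem covers, mine for illustration: “if every element of G squares to the identity, then G is abelian” is the single first-order sentence

      (∀x (x·x = e)) → (∀x ∀y (x·y = y·x)),

      and since it is true in every group, completeness guarantees a purely syntactic derivation of it from the group axioms. By contrast, no first-order sentence has exactly the finite groups as its models, which is why finiteness falls outside the theorem’s scope.)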

      I think that the debate that Mike and I were having above was about the utility of these two different approaches for thinking about a different sort of mathematical structure (an “elementary infinity topos”). Homotopy type theory can be viewed (and I think Mike will endorse this?) as an approach to formulating and proving theorems about these objects in the “syntactic” style (with the caveat that it is not the syntax of first order logic, but of type theory). My skepticism is about whether this approach can really have substantive advantages over the “semantic” style for the purpose of proving theorems (ignoring issues related to computer verification of proofs or philosophical objections to using transcendental methods to study something that is supposed to be foundational).

      1. Mike Shulman

        Well, maybe I didn’t make my point sufficiently clear. I’d like to be done, but I don’t want the thread to end with misinformation. (-:

        I dispute your claim that mathematics is “usually practiced” in a “semantic” style. I would say that in practice, both syntax and semantics are an integral part of mathematics and cannot be separated. When we do mathematics, we generally have in mind that we are “talking about things”. However, what we actually write on paper are strings of symbols that follow (usually informal) rules. In other words, we “think semantically” but “prove syntactically”. The “soundness theorem” about the semantics of any formal system then tells us that if we have a model of the system, then everything we’ve proven syntactically is indeed true about that model. (What it means to “have” a model is variable; a formalist would say it just means we’ve proven an appropriate statement working syntactically in a different formal system, the “metatheory”.)

        In the example of groups, we “imagine that we have a group”, but what we write down when actually doing the proof are strings of symbols, including variables representing elements of that group, plus all the standard mathematical constructions, and also the apparatus of logic (e.g. and, or, for all, there exists). This is true even if we use higher-order constructions; in that case we just use a syntax that is a higher-order theory of groups rather than a first-order one. The presence of higher-order constructions has hardly anything to do with syntax vs semantics.

        Similarly, when working with an ∞-topos in the categorical way, we “imagine that we have an ∞-topos”, but what we write down are strings of symbols, including variables representing objects, morphisms, homotopies, and so on in that ∞-topos, plus various mathematical constructions and the apparatus of logic. These strings of symbols are not the syntax of homotopy type theory.

        To explain the “internal language” method from this point of view, consider first “the theory of nothing”. This is like the theory of groups, but instead of imagining that we have a group, we imagine that we have nothing at all. However, we still have (assuming we are using a higher-order theory) all the standard mathematical constructions and the apparatus of logic, so we can build all of mathematics. (In particular, we can prove theorems like “for any group G, blah blah”. The way we prove such a theorem is, of course, to assume given an arbitrary group G and proceed; this can be considered to be passing temporarily into the “theory of a group” and then back out again at the end of the proof. Type theorists describe this by talking about “contexts”.)

        Applied to “the theory of nothing”, the soundness theorem tells us that whenever we have nothing (which, of course, we always do), all of our theorems are true. In other words, proofs in mathematics are valid.

        The point now is that the phrase “whenever we have” can be interpreted in many different categories. Category-theoretically, the way the soundness (and its converse, completeness) theorems work is that, as you mentioned earlier, from the syntax of a given theory we can construct the initial category with appropriate structure containing a model of that theory. (The choice of first-order or higher-order logic is what determines the meaning of “appropriate structure”.) Thus, whenever we have a model of the theory of groups in some category, i.e. a group object in that category, then any theorem that we proved in the theory of groups is “true” (in the appropriate sense) about that group object. The “classical” semantics arises when the category in question is the category of sets; but we also get a sort of “free theorem” by choosing other categories. That is, when we prove a theorem about groups, we could easily have been thinking only about ordinary groups (in Set); but as long as our proof fits in an appropriate theory, the result we obtain is automatically true about topological groups, Lie groups, Hopf algebras, group schemes, and so on.

        Note that this is different from how we would naively proceed to prove a theorem about group objects in categories. That would lead us to work in “the theory of a category containing a group object” (without, of course, explicitly recognizing this, since mathematicians don’t usually think about their syntax). We would then use strings of symbols containing variables representing things like objects and morphisms of the category to prove our theorem. And we would apply it to a particular example — say, a topological group as a group object in Top — by regarding Top as an internal category in Set (deal with universe sizes however you wish) and our topological group as a group object in that internal category. Note that in this case, even though we are talking about arbitrary categories, we still only use semantics in Set.

        By contrast, when working with groups from the internal-language point of view, our strings of symbols contain variables representing things like elements of our group. E.g. associativity is stated as “for all x,y,z in G, we have x(yz)=(xy)z”; whereas for an explicitly described group object, associativity must be stated as the equality of two morphisms G×G×G -> G. Nevertheless, the machine of interpretation tells us that this element-wise associativity automatically gets interpreted in any category by the equality of these two morphisms.
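
        (In symbols: the element-wise axiom ∀ x,y,z: x(yz) = (xy)z becomes the equation of morphisms m ∘ (id_G × m) = m ∘ (m × id_G) : G × G × G -> G, where m : G × G -> G is the multiplication; that is the translation the interpretation machinery performs automatically.)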

        A nice example of the usefulness of this is that it is quite easy to prove that the inverse of an element in a group is unique. Therefore, it follows automatically that the inverse morphism of a group object in a category is unique. In particular, since (commutative) Hopf algebras are the same as group objects in the opposite category of (commutative) rings, it follows that the antipode of a Hopf algebra is unique. This can, of course, be proven by hand, but it’s surprisingly tricky to figure out what you have to do if you attack it directly. Probably the easiest way to do it is to take the usual proof for groups, translate it into commutative diagrams, and dualize it. The point of the internal language is that you don’t have to do all that at all; there’s a machine that does it for you automatically.
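
        (The easy internal proof, for the record: if y and z are both inverses of x, then

        y = y·e = y·(x·z) = (y·x)·z = e·z = z,

        and it is this short chain of equations, not a diagram chase, that the machinery then reinterprets as the uniqueness of the antipode.)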

        If, as conjectured (and partially proven), HoTT is an internal language of ∞-toposes (or more general sorts of (∞,1)-categories), that would mean that we can expand the above description to use (∞,1)-categories instead of 1-categories. Proving something about an ∞-topos in the categorical way means working in “the theory of an ∞-topos” and looking at its models in Set, which are ∞-toposes. By contrast, using HoTT would mean working in “the theory of nothing”, but where the “ambient logic” includes not just first-order and higher-order constructions but “higher-homotopy-theoretic” ones, and then interpreting it into (∞,1)-categories. For instance, we could replay the example of groups but for, say, A_4-spaces (or, once the problems of infinite homotopy coherence are overcome, A_∞-spaces).


  29. Pingback: Language about language | Mathematics without Apologies, by Michael Harris

  30. Antoine Chambert-Loir

    Dear Michael,

    I am certainly not a specialist in homotopy theory, nor in homotopy type theory, but I perceive two distinct advantages in a new formalization system for mathematics such as HOTT.

    First, it is a fact that we basically all use the ZFC formalism. Of course, a lot of us use it like Monsieur Jourdain did prose. And maybe even more of us are kind of set-theoretical agnostics. But the truth is that ZFC-idiosyncrasies can be found everywhere in published mathematics. Do we really think that an equivalence class is the totality of the equivalent elements (hence a subset)? In some definitions of a graph, you have vertices and edges which are cardinality-2 subsets of vertices — what is the real significance of the remark “To avoid notational ambiguities, we shall tacitly assume that V \cap E = \emptyset.” (found in a classic book on graph theory)? Doesn’t it simply witness a flaw of our formalization system, which allows for absurd but well-formed sentences such as “Pi is a group” (I think it is not, in fact), or “3 is a topology on the space 2” (I think it is)?

    Second, a lot of the development of mathematics in the 20th century has shown that formalizing, by which I mean algebraizing, is extremely powerful; the most basic example could be the replacement of Betti numbers by (co)homology groups, then replaced by objects of derived categories, now replaced by ∞-topoi. That homotopy type theory allows one to think of homotopy theory in a purely syntactic way is promising.

    Regards,

    Antoine

    1. Mike Shulman

      Thanks for “3 is a topology on 2” — that’s an excellent example of a “nonsensical” statement permitted by ZFC which actually happens to be true (at least, under the von Neumann encoding of natural numbers).
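
      (Spelled out, for anyone who wants to check: with the von Neumann encoding, 2 = {0, 1} and 3 = {0, 1, 2} = {∅, {0}, {0, 1}}. This family of subsets of 2 contains the empty set and the whole space and is closed under unions and intersections, so it really is a topology on 2 (the Sierpiński topology).)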

  31. Pingback: abc in and foundations | Mathematics without Apologies, by Michael Harris

  32. Pingback: Mathematica without apologies : le programme « Univalent foundations  | «HENOSOPHIA τοποσοφια μαθεσις υνι√ερσαλις οντοποσοφια

  33. Pingback: Le blog du Professeur John Armstrong : « The unapologetic mathematician  | «HENOSOPHIA τοποσοφια μαθεσις υνι√ερσαλις οντοποσοφια

  34. Pingback: #HhigherToposTheory: le livre de Jacob Lurie et son étude dans le cadre de l’Ouvert | Autrement qu'être : plan vital -ontologique vs plan spirituel d'immanence ενοσοφια μαθεσις

  35. Pingback: De #HigherToposTheory à #HomotopyTypeTheory | Autrement qu'être Mathesis uni∜ersalis Problema Universale Heidegger/Husserl être/conscience : plan vital-ontologique vs plan spirituel d'immanence CLAVIS UNIVERSALIS HENOSOPHIA PANSOPHIA ενο

  36. Sandro

    I know I’m late to the party, but I just wanted to address this point:

    And this reminds me of a quite different concern I have — this is MH speaking now, not the panelists — about computer-verifiable proofs: the prospect of technological lock-in. […] Would mathematics then have to adapt in turn to the system, and would mathematicians who refuse to adapt — for example, because they find the system an obstacle to expressing their ideas — be forced to resort to samizdat, or even blogs, to get their work published?

    Part of the point of computer science’s exploration of a wide variety of syntax-directed proof systems is to understand how they relate to each other, and how to build or customize such systems to solve specific problems. Still early days yet, but by the time mathematicians at large are using tools like this regularly, they will have a toolset for building syntax-directed proof systems, not necessarily one all-encompassing proof system. So they would not be restricted to a fixed syntax, but could effectively define their own, suitable to their problem domain.

  37. Pingback: Better than excellent – totallydisconnected
