This morning I had breakfast at the n-Category Café and was pleased to see that Mike Shulman had posted a draft of his chapter on Homotopy Type Theory and Univalent Foundations (HOTT/UF) for a book called *Categories for the Working Philosopher*, edited by Elaine Landry of UC Davis. I have been trying for several years to grasp the meaning of Voevodsky’s univalence axiom, and Shulman’s earlier explanations were as comprehensible as anything I had seen, but the meaning kept slipping away every time I turned my attention to something else (like number theory). Shulman’s draft has an explanation for that as well:

we should beware of judging a theory to be more intuitive or fundamental merely because we are familiar with it: intuition is not fixed, but can be (and is) trained and developed. At present, most mathematicians think of ∞-groupoids in terms of sets because they learned about sets early in their mathematical education; but even in its short existence the HOTT/UF community has already observed that graduate students who are “brought up” thinking in HOTT/UF form a direct understanding and intuition for it that sometimes outstrips that of those who “came to it late.”

Shulman’s new draft is as clear as one could wish, but I still can’t stay focused on the special role of univalence. It may just be that I don’t care enough. Shulman writes

over the past few decades ∞-groupoids have come to play a more and more central role in mathematics and even physics, starting from algebraic topology and metastasizing outwards into commutative algebra, algebraic geometry, differential geometry, gauge field theory, computer science, logic, and even combinatorics.

Note that number theory is not on that list; nor are geometric analysis, Fourier analysis, dynamical systems, or PDEs. I think this is no accident, but the reasons I am tempted to give don’t really stand up to scrutiny, so I’ll just leave it at that. As a matter of fact, though, ∞-groupoids have been creeping closer — “metastasizing” would be an ominous way of putting it — to number theory, by way of the geometric Langlands program. Increasingly I hear my colleagues confess to having been sneaking looks at Lurie’s *Higher Topos Theory* — at least the first chapter, which, according to some geometric Langlands specialists, contains everything we really need. (I note in passing that none of the chapters of Landry’s book is devoted to the philosophical significance of the use of categories by Grothendieck, much less to Lurie’s more recent work, and I find this surprising.) But none of the number theorists I know seems to feel an urgent need for new foundations — I should write Foundations, because the new (lower case f) foundations of p-adic geometry being developed by Peter Scholze and his collaborators are seen by many, at least retrospectively, as long overdue. I suspect this comment on Shulman’s post is the reason:

the driving force behind HoTT/UF as a foundation is not necessarily for it to become the “ultimate” foundation e.g. like ZFC. The driving force, rather, is to provide foundations formalizable in a proof assistant so that the use of proof-assistants can actually become a tool for working mathematicians. From this perspective it is clearly preferable to rely on a formal system that was tailor-made to be implemented in computers.

I don’t know of anyone in my branch of number theory who especially wants this kind of tool.

I once met Mike Shulman and was hoping to ask him to explain the univalence axiom, but long before we reached that point he stumped me by claiming that I was taking “set” as a primitive notion subject to axioms and that I should be able to do the same with “type.” Now that I’ve had some time to think about it, I’m sure I disagree, because I didn’t actually learn the ZFC axioms until I had been a mathematician for many years. So I must have picked up a non-axiomatic way of working with sets. As I write in the book,

The [mathematical] socialization outlined in chapter 2 had less to do with assimilating norms, values, and respect for a transhistorical hierarchy than with learning to use the word “exist” appropriately and effortlessly, to display a pragmatic “belief in” the world of mathematical objects adequate for seamless interaction with the community.

The HOTT/UF community is not pragmatist. I would call them militant enthusiasts. The militancy would explain their insistence on using the unwieldy and unfamiliar notation a:A for “a is an element of [the type] A” rather than the set-theoretic notation a ∈ A, presumably to help overcome the habit of “thinking in terms of sets.” Enthusiasm (and pantheism) are more complicated and are postponed to Part 2.

Voynich: Probably most mathematicians think of a set – if they think of it at all – as a ‘collection of things’, right? Outside a small cross-section of current mathematical activities, it seems hard for this intuition to truly go wrong.

Looking forward to Part 2.


Michael Maloney: As with anything else, we develop our understanding of things by successively better approximations. Everyone has an intuitive idea of what a set should be, since our everyday lives involve collections. But our raw intuition must be honed over time. Generalizing our intuition of sets to include infinite collections is not something we learn in a weekend.

I actually made an interesting mistake today that might be relevant here. I tutor math at the elementary and high school level. Today, I tried to give a student a bit of a brainteaser, something that is “obvious” to a student who has taken analysis: what is the difference between the length of a line segment with its endpoints included and a segment with its endpoints removed? This high-school-grade paradox seemed fun to me, since the latter has fewer points, yet their lengths are the same. But I realized pretty quickly that the student had not yet studied intervals. To him, the question was nonsense. “What does it even mean to remove the end? It’s still filled in!”

Thinking about my experiences with precalc students, I realize that even the idea that the line is made up of a set of individual points is largely taught. When you draw a curve, there are no “points” on it. It is smooth and continuous.

Similarly, we all remember being wowed by Cantor’s diagonal argument. It seemed so counter-intuitive, and from then on, we started paying attention to when we wanted to extend our intuitions about finite sets to infinite sets.

I guess my point is that even something as run-of-the-mill as infinite sets should not be treated as intuitive and obvious. They are incredibly powerful abstractions that we have cultivated after hundreds of years of trying to tackle analysis.

Infinite sets are modeled on the idea of finite sets and of membership. Given a grocery list, you can go through and check to see if you have everything in your cart. Because finite sets have decidable membership, it is baked into our very intuition that we can ask whether a given object is included. This may be a good part of why the law of excluded middle seems so obviously true.

If set theory is about inclusion of objects in collections, then type theory is about construction of objects. A type is nothing more than a set of rules for how to create objects, how to use them, and how to identify them.

It does no immediate harm to think of types as sets. You can just imagine the collection of all objects you can construct with the rules. This is a good way to net a lot of intuition, just as always picturing finite sets is. But, just as with finite sets, you can miss some of the interesting (“weird”) behavior of more general types. While every valid application of its constructors produces an object of a given type, it is definitely not necessary that the constructors generate *all* objects of that type.

For instance, the circle type is the freely generated groupoid with one point and a loop on that point. I got into a debate a while ago about how many “points” make up the circle. The other party refused to admit there was more than just the base point, but my argument was that the loop consisted of many points itself. I also contend that “the number of points on the loop” is an invalid question: we can’t count them inside the type theory. It’s the answer my high school student would have given me if I had asked him how many points are on the parabola. (The undergraduate math student would tell me “infinitely many”, while the graduate would tell me “the cardinality of the continuum”.)
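For readers who haven’t seen it, the circle being described here is a higher inductive type. A sketch in HoTT-book notation, written as Lean-style pseudocode (plain Lean 4 has no path constructors, so this is illustrative only):

```lean
-- The circle S¹, HoTT-book style (pseudocode; not valid plain Lean 4):
--
--   inductive S¹ where
--     | base : S¹            -- the single point constructor
--     | loop : base = base   -- a path constructor: an identification
--                            -- of base with itself, not a new point
--
-- Its recursion principle: a map S¹ → A is the same as a point a : A
-- together with a path p : a = a. Nothing in the theory lets one
-- enumerate "the points on loop", which is why "how many points are on
-- the loop?" has no internal answer.
```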

The points in the loop are, in some sense, still points. But they are points which we cannot inspect directly. They must always be treated as a continuum. And that restriction is exactly what gives type theory a lot of its computational and topological robustness. Like a holomorphic function, it is so rigidly constrained that all of our results come out beautifully.

Another example of these intangible points would be infinitesimals in smooth infinitesimal analysis and synthetic differential geometry. The only infinitesimal we may construct in this theory in an empty context is 0. But when we universally quantify over all infinitesimals, our arguments must work for nonzero infinitesimals as well. Whether any infinitesimals actually exist or not is somewhat immaterial. The point is that if we *did* have them, our functions would have to respect them. And it is thanks to this added stipulation that all functions become automatically smooth. I like to think of these points as “ghost” points. You can’t see them, but they give the theory its shape.
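For the record, the axiom doing the work here — the Kock–Lawvere axiom of synthetic differential geometry — can be stated in one line, writing $R$ for the smooth line and $D = \{d \in R : d^2 = 0\}$ for the infinitesimals:

```latex
% Kock–Lawvere axiom: every map out of D is affine, with a unique slope.
\forall f\colon D \to R\ \ \exists!\, b \in R\ \ \forall d \in D:\qquad
f(d) \;=\; f(0) + b \cdot d
% The unique b deserves the name f'(0); applying this to d \mapsto f(x+d)
% gives every function R -> R a derivative at every x, hence smoothness.
```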

So our sets from set theory are very close to a very narrow (but still useful) sort of type: namely, types with decidable equality. Decidable equality (that is, LEM) keeps the types in perfect focus. There is simply no room for things like infinitesimals or any kind of higher-dimensional structure, because it allows you to isolate points. The smoothness axiom used in smooth infinitesimal analysis works exactly because you cannot isolate infinitesimals. (That is, there is no way to define a non-smooth function piecewise.) They are ghost points for a reason: you’re not allowed to look at them.
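The parenthetical can be made concrete in Lean 4, whose `Int` has decidable equality: case-splitting on equality is what licenses a piecewise (hence non-smooth) definition, and it is exactly what the infinitesimals of smooth infinitesimal analysis refuse to support. A minimal sketch:

```lean
-- Decidable equality lets us branch on "n = 0?", producing a function
-- that no smooth world could contain:
def step (n : Int) : Int :=
  if n = 0 then 1 else 0   -- uses the DecidableEq Int instance

#eval step 0   -- 1
#eval step 7   -- 0

-- In synthetic differential geometry the type D of infinitesimals has
-- no such instance, so "if d = 0 then ... else ..." is not even a
-- well-formed definition there; the ghost points stay out of focus.
```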

So what is the univalence axiom? To a first approximation, there is little harm in saying that it allows you to turn a bijection into an equality. With a little song and dance about proof-relevance, that’s half the reason you even want to consider multiple equality proofs between two objects. When your type has higher-dimensional data or ghost points, an equivalence must respect them as well. That way, two equivalent types have the same shape as each other, in addition to being point-for-point substitutable.

From a pragmatic point of view, univalence is merely a computational workhorse. It allows us to work with whichever presentation of a problem is most suitable. At least morally, any theorem that can be proved with univalence can be proved without it, with a great deal of suffering. Univalence gives us the right way to equate types.


jared: My interest is panentheism, but I look forward to your essay. The notation a:A comes from the type systems of programming languages.


Urs Schreiber: Regarding univalence, allow me to suggest that it’s not all that subtle and elusive:

Given homotopy types in the presence of a universe type, there are a priori two different concepts of equivalence: on the one hand, homotopy equivalence between two types, and on the other, a homotopy (path) between their “names” in the type universe. Univalence just says that these a priori different concepts of equivalence are equivalent.
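The two notions and the axiom relating them fit in a few lines. A sketch in Lean-style notation (hedged: `idtoeqv` and `ua` are the HoTT book’s names, and the axiom is of course not part of Lean itself, whose types are set-like):

```lean
-- Two notions of sameness for types A B in a universe U:
--
--   A ≃ B        -- homotopy equivalence of the types themselves
--   A = B        -- a path between their "names" in the universe U
--
-- There is always a canonical map from the second to the first:
--   idtoeqv : (A = B) → (A ≃ B)
--
-- Univalence: idtoeqv is itself an equivalence, so in particular
--   ua : (A ≃ B) ≃ (A = B)
-- and the two a priori different notions of sameness coincide.
```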

It’s what everyone who considers universes for homotopy types would come up with. And indeed this concept was found by Charles Rezk and Jacob Lurie independently (and maybe earlier) as a theorem that holds in infinity-toposes.

Regarding thinking of sets and types:

Of course nearly everyone in practice uses just an informal concept of set, because the fully formal one is essentially useless in practice. But there is an equally informal concept of type, and so it makes good sense to ask whether somebody takes the informal concept of set as primitive, or that of type. The difference is easily described informally, too: for a set, any two elements are either equal or not, while for a type there is, for any two of its elements, a type of ways in which they are equivalent.

As a bonus, if you ever do opt to reason fully formally, then doing so with types is actually practicable.


Andreas Bogk: Nice gauntlet you have there; please allow me to pick it up.

So, in one of the paragraphs, you deny thinking in set-theoretic terms, and then two paragraphs later, you claim that “a : A” is a notation for “a is an element of [the type] A”?

Don’t you realize that “is an element of” is set-speak? Don’t you realize that even before explicitly being aware of ZFC, you had an intuitive idea of sets and were appying it?

“a : A” is a type judgement. It says “the expression a is of type A”, or “a inhabits the type A”. That’s not just a different notation from “a ∈ A”, it’s a completely different idea. For starters, while set theory allows “a ∈ A” and “a ∈ B”, with “A ≠ B”, in type theory this is not the case. Every expression a inhabits exactly one type A. In fact, given the structure of a, it is possible to derive the type A. This structure rises out of lambda calculus, which obviously you have failed to study.

You know, I’m just a random programmer, but I grew up believing that mathematicians are interested in a precise science, not the hand-waving kind. I’m shocked to find that this seems to be not the case, that you argue for glossing over essential differences in axiomatic theories, justifying that by calling people who study these precise differences “militants”.

And this in the face of interesting results, like the axiom of choice becoming derivable, and thus the lemma of choice, for cases which are morally equivalent to simple sets. This makes you sound like the person pointing out that in non-euclidean geometry, angles don’t sum up to 180° in a triangle, or the person explaining that the planetary orbit can’t possibly be elliptic, because this contradicts the earth being the center of the universe.

And as for the univalence axiom: the most surprising and useful result of it is that it is possible to show that, given UA, equivalence between types becomes decidable. Which of course doesn’t matter to people choosing to ignore machine-checked proofs of sufficient complexity.

So, here’s your gauntlet back. Looking forward to part two.


mathematicswithoutapologies (Post author): These are all very interesting comments, especially the last one, which in its extreme aggressiveness exemplifies what I mean by “militant enthusiast” better than any fictional thought experiment I could have devised.

I intend to get back to some of these items after I have finished part 2, but right now I would just like to comment on this specimen of a certain kind of militancy:

“I grew up believing that mathematicians are interested in a precise science, not the hand-waving kind. I’m shocked to find that this seems to be not the case,…”

One of the reasons I wrote my book is to help dispel such misconceptions and thus to reduce the number of victims of gratuitous and possibly life-threatening shock. The problem has been around for a long time; Plato complained in the Republic that geometers were speaking nonsensically. But mathematicians resist the lesson. A common reaction of mathematicians reading my book — especially Chapter 7, which is the one most relevant to this discussion — is that my description of the attitude of (most) working mathematicians to philosophical logic is too obvious to be mentioned. Thanks to Andreas Bogk for demonstrating that it’s not the case.

Everyone from Plato on down is of course welcome to try to reform the practice of mathematics; my very modest goal is to help potential reformers know more about that practice. I’m not taking any position (why should anyone care what I think?); I’m just saying what I believe to be the common attitude of my profession. For example, it must be relevant that neither ZFC nor lambda-calculus is a standard component of either the undergraduate or the graduate mathematics curriculum. My undergraduate advisor gave me his mimeographed copy of the notes for the first (set-theoretic) part of SGA 4 on the express condition that I promise not to read it. Fume about it all you like; but you can’t fume about what you don’t know.


Richard Séguin: There are more examples of militancy and condescension in Bogk’s comment: “Don’t you realize that ‘is an element of’ is set-speak?” and “This structure rises out of lambda calculus, which obviously you have failed to study”.

In an old cowboy movie the appropriate response to these would be, “Them’s fightin’ words, boy!”


mathematicswithoutapologies (Post author): I’m not looking for a showdown, certainly not with an opponent prone to shooting himself in the foot.

I did revise my claim about number theorists looking for proof assistants. One of the tweets about this post had such funny wording — someone I have never met claims to be able to falsify my assertion that I don’t know any number theorists “who especially wants this kind of tool” — that I failed to note that the statement was too broad. Tim Gowers and Tom Hales are both at least part-time number theorists, and Tom’s interests overlap considerably with mine. I don’t think either of them is looking for proof assistants to work on my kind of number theory. The people I meet at specialized conferences are not.


Pingback: Math is from eros, programming is from logos? | Mathematics without Apologies, by Michael Harris

Gershom B: I hope you don’t take Andreas’ comment as representative. Most of the people who have been working on HoTT have extensive backgrounds in classical mathematics and are well familiar with set theory. As a whole, I don’t think people are unfamiliar with common mathematical practice. One of the most interesting lines of discussion about HoTT is that in fact it helps us to more directly represent existing practice — capturing, for example, the distinction between “exists” and “merely exists” that mathematicians informally are aware of all the time, but don’t tend to make explicit. In this “constructivism 2.0” we don’t have _restrictions_ of existing mathematical practice, but rather an extension of mathematical language to more accurately capture distinctions that have lacked good syntax — not only truncation, but also higher equality structures, etc.

You write of “learning to use the word “exist” appropriately and effortlessly” and thus displaying a “pragmatic “belief in” the world of mathematical objects adequate for seamless interaction with the community.”

But I would suggest that this is more suited to the synthetic approach given by type theories than by ZFC. An object exists because we posit its axioms or because we, in some sense, construct it from objects already posited. In ZFC-foundations an object “exists” because we build it out of sets. Isn’t giving objects more directly _closer_ and not _further_ from informal mathematical practice?

To me, a key distinction between on the one hand older categorical approaches and HoTT approaches and on the other a “ZFC-notion of foundations” is that the former tend to be _open systems_. Mathematics is not a “finished product” but an ongoing, inventive practice, so it stands to reason that a good approach for capturing mathematics should allow new mathematical constructions to be introduced _directly_ and also _sensibly_. Encoding everything in ZFC provides the latter (as an “etalon of consistency”) but not the former.

From this standpoint, we can see the preference for `:` over `∈` (Martin-Löf actually initially developed his dependent type theory over the latter, following Howard). The former does not fix a model. Some people have a first-order approximation of types as sets, and this gets them pretty far. But it may be better to have a first-order approximation of types as spaces, or types as propositions, etc. And depending on what field one comes from, one or another approximation may feel more natural. By choosing the neutral `:` we do not have to pick — and in fact a beautiful result is that the theory holds over all these cases (and others!).

Along these lines, within HoTT (or any old type theory actually) we can develop an internal theory of sets, as the HoTT book does (see chapter 10). It therefore makes sense to reserve the membership notation for the theory of sets constructed internally to HoTT, which corresponds to classical set theory.

Furthermore, if we have something like “base_S1 ∈ S1” we would also be able to ask questions like “base_S1 ∈ S2” — but with types we can exclude this a priori as not well formed, before even seeking a proof derivation. So we can have a formal system where certain sorts of “category errors” are not possible from the very start. The end result is a reduction in the “paperwork” necessary to fully characterize a mathematical statement — a very practical thing to have around.

This (I hope) is not an argument over the “one true way” — just a halting attempt to explain the choices in the use of one syntax over another. These choices are not about “set theory vs. type theory” but in fact were made in the context of the history of type theory, in the move from what is called an “extensional” to an “intensional” account of its workings — one that was accompanied by certain apparently small but very significant changes in its metatheory (with practical consequences for the resultant systems).


Gershom B: (Apologies for the unicode issue above — obviously, for the question mark, read the membership relation symbol ∈; I don’t know why it failed to go through.)


mathematicswithoutapologies (Post author): Thank you (Gershom B) for these most recent comments. I haven’t yet said how I would like this to play out; I do say in the book what I think would need to happen in order for HOTT/UF to be adopted widely by mathematicians. Just two observations. First, the distinction between “informal mathematical practice” and “mathematical practice” looks artificial to me, unless you mean the difference between what mathematicians say and what we write. But then the former is what most of my colleagues would identify as the point; I quote Atiyah: “the proof is the check that I have understood, that’s all … it isn’t the primary thing at all.” That’s not what you would hear from a philosopher of mathematics — unless the philosopher is David Corfield (who reminded me of the quotation) or one of a small group of the like-minded.

The second observation is that the very notion of “Foundations” is based on a metaphor that, I argue in Chapter 7, is not necessarily the most appropriate to account for mathematical practice. Instead of building mathematics from the bottom up, we have a tendency to build from the top down; except that the top is dimly perceived and unattainable. That observation is consistent with any choice of axiomatic starting point.


Gershom B: I did not mean to draw a distinction between “informal mathematical practice” and “mathematical practice” in what I wrote, or at least not a division. In the one place I used “informal mathematical practice”, I did so simply to distinguish it not only from written proofs, but from fully formal proofs in the ZFC style. The idea was to contrast the HoTT and ZFC approaches from the standpoint of “languages in which to make more precise things which we say with words”.

On the “notion of foundations” — I think you are closer to the approach held by at least some HoTT folks (I should be clear, many people involved in research related to HoTT have no foundational axe to grind at all, and are happy to simply view it as an interesting subfield of mathematics with a variety of nice properties and potential implications and applications) than you may realize — part of the difficulty has been precisely that the notion of a “foundations” suggested by the HoTT book is of a different _sort_ than what many “ZFC-foundations” people mean by foundations (and this has generated a fair amount of confusion and talking-past-one-another in the past). For example, consider the section “Top-down versus Bottom-up” in Awodey (2003) [http://www.andrew.cmu.edu/user/awodey/preprints/awodeyVhellman.pdf].


Mike Shulman: Gershom, I think most of your comments are spot-on, but I disagree that the notion of foundations suggested by the HoTT book is any different from that supplied by ZFC. Both are formal systems into which mathematics can be encoded. (I elaborated on this point somewhat in my chapter draft.) Awodey’s point in the paper you mention is that category theory is not a “foundation” of this sort, but HoTT can be.

Michael, I have to say it is a little frustrating to me when you write only “I explained X in the book”, because then, lacking a copy of your book, I am unable to respond to you. (-:


Gershom B: Mike, let me try to make the idea that “the notion of foundations suggested by the HoTT book is different from that supplied by ZFC” more precise. I don’t mean to speak of “HoTT as a foundation” vs. “ZFC as a foundation”. Rather, I mean to suggest that the idea that a foundation is a “formal system into which mathematics can be encoded” (which in your draft chapter you describe as “reasonably precise and objective, and agree[ing] with its common usage by most mathematicians”) is a notion of foundation that is _not_ necessarily shared by those who consider ZFC the “true” foundation. Since I do not agree with them, I hesitate to characterize their arguments. But it relates, for example, to the idea that a synthetic theory is not a proper theory, etc., and in some sense to ideas about the ontology of mathematical knowledge, etc. So by your definition of a foundation, yes, of course HoTT and ZFC both fulfill it. But there are other definitions of what constitutes a “foundation” (perhaps more _felt_ than _precisely stated_) by which ZFC fulfills the goal much better than HoTT. And perhaps a clever person can put forward a plausible definition of “foundation” which HoTT satisfies and ZFC does not.

One way in which I tend to see this, is “open” and “closed” systems, with HoTT like MLTT (and earlier categorical foundations programs) tending towards the former, and ZFC clearly being the latter. But it seems to me we can see the issue even if we look _only_ at the world of set theorists — consider the discussions of the “multiverse interpretation” as put forward by J.D. Hamkins.

By the way, it is my hunch that the groupoid interpretation of MLTT actually moves it closer to a system that more people would be willing to accept as “foundational” by some other criteria. In particular, just as much as we might feel that “sets” have some given “pre-mathematical” meaning, or “lambda terms” similarly, the same arguably holds for “spaces”. One could even argue that the notions of “here” and “there” and “getting there from here” are more primitive than “a collection of things” and thus better suited for a “pre-mathematical justification” of some formal system.


Mike Shulman: Thanks for your reply, Gershom! I haven’t yet heard anyone describe a notion of “foundation” that I could see applying to ZFC any more than to HoTT. I don’t really know what you mean by “open” versus “closed”; it’s true that “HoTT” is a subject rather than a specific formal theory, but any particular such formal theory can be just as well-defined as ZFC, and on the other side ZFC is not fixed in stone either — e.g. people add things to it like large cardinals.


Gershom B: By an open system, I mean a system designed to allow the introduction of new types — i.e., one where we cannot enumerate the universe. So we can give rules for W-types once and for all, or other ways to introduce inductive types, etc. And we could say our universe was “all such types” — but in fact, in the classic formulation, we only say “at least all such types.” This goes to a philosophical point I touched on above — it isn’t just that we don’t yet have a “good enough” account of “all the types we might want”; the argument goes that we cannot have such an account, because we don’t know what we may want in the future — it is up to us to invent new things, and we certainly will. So in an “open system” we just admit new types freely whenever we feel the need arises, as long as we can provide some account of why these types and their attendant rules are not nonsense (i.e., I suppose, at a minimum that they do not prove false).

There are “closed system” accounts of type theory, and some of them are neat, because you can do various sorts of “metaprogramming” by limiting yourself to say a universe of strictly positive types derived from sums, products, arrows and fixpoints or the like. In such formulations you can automatically derive various operations on any type by inductively defining such operations on the various components which the type must decompose into. McBride is very fond of these techniques. (See for example “The Gentle Art of Levitation” with Chapman, Dagand, and Morris, which gives “a closed, self-describing presentation of datatypes, where datatypes are just data”). The cost is that you now can’t admit other types, types for which those operations might not even make sense.

In MLTT it seemed tractable to say “all the types we’ll want are of a certain sort” but HoTT really underlines the open nature — it seems pretty clear to me that we’re not even close to imagining “all the useful HITs” — and then there are the possibilities regarding coinduction, etc. And who is to say that “higher dimensional path data” is the last word on the additional structure that we give our types anyway? It merely seems to be the most interesting additional structure that we have found thus far.

I agree that set theory in a sense has the same issues — clearly people extend ZFC with various large cardinals. This was what I was referring to with the “multiverse interpretation”. Some people (most?) think there is “the” ZFC + some axioms in which one builds “real math” and that there are other “curiosities” off to the side. To them, in “the” ZFC, CH either holds or it does not, even if there are models of the axioms that go both ways. In turn you then levy an argument about which “must be” the case because of how things “really” work, even if such an argument, clearly, can’t be advanced within ZFC itself, but only from some “prior” notion of how the world works which is considered to justify the choice of ZFC, etc. In the multiverse interpretation, you just pick the universe you’re interested in working in.

It occurs to me that, in a constructivist setting, the same “realism” that people use to argue for ZFC as “the” foundation, typically leads, of all things, to some sort of ultrafinitism.


Urs Schreiber: À propos: as someone who just happened upon this blog following the smell of HoTT, not knowing any background about the blog or its author, maybe this data point is of interest: it was unclear to me for a good while what “the book” is that I see you keep referring to. (I know this phenomenon: if one works on something intensively, then after a while it seems it must be visible to everyone.) Your blog front page might be lacking a prominent announcement saying that there is a book with the same title as the blog, and that the two are meant to go together to some extent (or something — I don’t know!). Right now there is a prominent link “Excerpts”, but it is not made clear what one gets to see excerpts of; then there is a link to Princeton University Press which takes one to a page that still doesn’t say that there is a book. One has to have the impulse to follow a link to “official page” to finally see a book displayed, and to conclude (if that indeed is the case — I don’t know!) that this is the book being referred to elsewhere, for instance in the section on excerpts.

Tim Campion: Urs has already basically said this, but let me try to be as pithy as possible:

Univalence says that isomorphic things are equal.

There. That’s the idea. For example, the field extension $Q[x]/(x^2+1)$ of $Q$ is _equal_ to the field extension defined on the vector space $Q^2$ with multiplication determined by the ring laws (with $1$ as unit) and $i^2=-1$. The upshot is that if we reason in HoTT, then we can’t say things that are not isomorphism-invariant (because we can’t say things that are not equality-invariant).

In order to do this consistently, though, we have to keep track of the _ways_ things are equal. For example, $Q(i)$ is equal to itself in two ways: one for each element of the Galois group. This idea is familiar to anyone who has ever thought about moduli problems.

As an added bonus, “isomorphism” should really be read as “the relevant notion of equivalence”. For example, equivalent categories (defined correctly) are equal. So are equivalent homotopy types.
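In Lean 4 syntax (Lean itself does not assume univalence), the interchangeability of equal types is literally the eliminator for equality; a minimal sketch, where `P`, `α`, `β`, `h` are illustrative names:

```lean
-- Transport along an equality of types: if α = β, any property of α
-- holds of β. Univalence would *produce* such an equality h from any
-- equivalence α ≃ β, which is exactly what makes the slogan work.
example (P : Type → Prop) (α β : Type) (h : α = β) (hP : P α) : P β :=
  h ▸ hP
```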

John Baez: It will be fun, later, to look back at people getting in a fight over whether to use x:X or x∈X. To computer scientists, “typing judgements” like x:X are perfectly familiar: they often start a program with some “type declarations” that amount to saying stuff like “let x be an integer”. Mathematicians use them a lot too, like when they start a paper with “Let G be a group.” But most mathematicians who study just a little logic don’t get around to type theory, so they won’t have seen a formal treatment of typing judgements.
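For concreteness, a hedged Lean 4 illustration of the status of such a judgement: the declaration below is checked by the elaborator, not proved as a proposition inside the theory (`x` is an illustrative name):

```lean
-- "x : Int" is a typing judgement, the formal counterpart of
-- "let x be an integer"; it is accepted or rejected by the
-- type-checker rather than established by a proof.
def x : Int := 3

#check x   -- x : Int
```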

Mike Shulman: Interesting post! There’s a lot for me to digest in your post and the comments, but to start with, I hope you noticed that I specifically objected to the comment which you quoted about “the driving force”. I’m sorry that our conversation (when was it?) was unsuccessful; I’d love to take it up again. Personally I am continually surprised by the indifference and ignorance of mathematicians about foundations and by the absence of logic from mathematics curricula. I’ve been being surprised by it for years, so maybe I should be over it by now, but I’m not. (-:

Tim Campion: Well, the tone of this post and the author’s comments really doesn’t seem to be soliciting friendly debate, but let me try anyway. (One point of befuddlement — the notation a: A may be unfamiliar, sure, but I don’t understand how it can be called unwieldy! It’s the same number of characters as a \in A and it’s easier to type on a platform without latex…)

It’s true that most mathematicians don’t think about foundations much – thinking about foundations too much gets in the way of doing mathematics! So it’s natural, when someone suggests you think about foundations, to counter that they are doing something pointless.

But foundations are important for at least two reasons –

1. (most obviously) Some questions have a surprising dependence on foundations. Think of Whitehead’s problem, or the projective dimension of various modules. Arguably the dependence on foundations makes these questions “uninteresting”, but then we wouldn’t know that they were uninteresting without being careful about foundations.

2. (more subtly) The foundational system in use affects the way we think at all levels. Let me give two examples.

(a.) You give the impression that mathematical maturity has primarily to do with understanding what “exists” means mathematically. It’s not clear what you mean by this (but please explain!), but one of the bugs of the word “exists” is that it doesn’t give you everything you want it to — there’s a big difference between proving that something exists satisfying some given property versus actually constructing an explicit example of something satisfying that property. Every mathematician needs to be sensitive to this. But we don’t really have the language to express this difference. This is an artifact of the foundational system we’re using. As Gershom B alludes to, though, one of the features of a system like HoTT is that it formalizes this distinction.

(b.) Another example is when we consider isomorphic objects to be interchangeable. Every mathematician needs to make arguments which rely on this idea — think of every time you invoke Noether’s isomorphism theorems, or when you talk about “the” absolute Galois group. It can be a big headache to go through such an argument and make sure that everything you’re doing is kosher — technically, you need to keep transferring things along isomorphisms and making sure that various diagrams commute. One of the features of HoTT (and univalence in particular) is that it formalizes the sense in which isomorphic objects are “the same”. So if you internalize the reasoning rules of HoTT, you can make these arguments naturally.

Point (2b.) is the entrypoint into the rising tide of infinity-groupoids, where we think not just about isomorphisms, but about higher isomorphisms. If you’re doing number theory, you’re doing a kind of geometry, so you care about some kind of moduli problems. And if you’re doing moduli problems, sooner or later you’re going to need groupoids, and then inevitably higher groupoids. There’s no escaping it. Homotopy type theorists are pragmatically laying the groundwork for the new reality that we all need to internalize how to think about higher equivalences correctly.
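The bookkeeping described in (2b.) can be sketched in Lean 4; `Iso` and `transportOp` are hypothetical names for this illustration, and this manual transport is precisely what univalence would let one replace by a rewrite along an equality:

```lean
-- A bare-bones bijection between two types (an illustrative stand-in
-- for "isomorphism"; a real development would also carry proofs that
-- toFun and invFun are mutually inverse).
structure Iso (α β : Type) where
  toFun  : α → β
  invFun : β → α

-- Transferring a binary operation on α to one on β "along the
-- isomorphism", by hand: pull back, operate, push forward.
def transportOp {α β : Type} (e : Iso α β) (op : α → α → α) : β → β → β :=
  fun x y => e.toFun (op (e.invFun x) (e.invFun y))
```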

David Roberts: Aside to Tim C:

I guess you are using the word ‘kosher’ in the sense of satisfying the principle of equivalence [1]? If so, nice to see it in the wild. (I _think_ I proposed this term in place of the other, provocative term that was originally an in-joke generated by John Baez, but I can’t find the original reference.)

[1] http://ncatlab.org/nlab/show/principle+of+equivalence

Dimitris: The quote from the comment section is mine and is taken out of context. The discussion there was very specific: I was replying to the objection that UF is not a synthetic theory of homotopy types (∞-groupoids) because it involves “hacking” Martin-Lof type theory (MLTT) via the Univalence Axiom, and I was indicating “one perspective” from which such a choice of formal system was justified. In particular, I was referring to the fact that Voevodsky’s initial motivation was not a new Foundation, but rather to work on improving proof-assistants. This motivation comes from the fact that errors in his own work took years to be discovered – a situation he found unacceptable. The fact that MLTT (which has a good implementation in computers) became the formal system of choice in this particular pursuit is somewhat inevitable, with hindsight. The fact that MLTT+UA is an interesting enough system to provide an alternative proposal for a complete Foundation came later and more gradually. So much is clear e.g. if you look here:

http://www.math.ias.edu/vladimir/files/univalent_foundations_project.pdf

In summary, people are not working on UF in order to develop proof assistants. But the need/desire for better proof assistants is (historically) *one of the many things* that drove and still drives the development of UF.

With respect to:

“I don’t know of anyone in my branch of number theory who especially wants this kind of tool.”

I’m sure this is the kind of thing number theorists would have said about many of the developments that e.g. led to Wiles’ proof of FLT. It’s impossible to judge what kind of tools – however unrelated – end up becoming useful, indeed essential in the future. What we “want” in the present is often a terrible guide about what we will *need* in the future.

Matthew Emerton: Dear Dimitris,

Actually, the developments that led to Wiles’s proof of FLT were all very much in the mainstream of number theory (going back to Fermat, Euler, Gauss, and Kummer, among other luminaries in the history of number theory). I realize that I’m nit-picking at your example in a way that it doesn’t really deserve (in that I’m being much more literal and specific than you likely intended to be), but I think there is some validity in doing so:

I think that the sense in which many working mathematicians are not that interested in foundations, or the use of proof assistants, is stronger than the sense in which certain mathematical developments may or may not be the flavour of the day/month/year at any particular moment. Similarly, I think that the disconnection between logic/foundations and other areas of maths (e.g. that Mike Shulman alluded to in a comment above), although not absolute (there are people who know about both areas, and e.g. there are periodic conferences combining logic and number theory that try to bridge the gap between these two topics), is real, and wider than many of the other apparent gaps between different areas of mathematics.

Whether this is good or bad, and justified or not, I don’t want to weigh in on. But I’m not sure that just claiming “it could be useful in the future” is necessarily strong enough of an argument to convince many non-foundations-oriented mathematicians to pay attention to HTT (or any other foundational developments). Maybe that’s okay, too — I don’t know whether or not the HTT community wants more mathematicians to pay attention to it.

But if one looks at the Voevodsky quote discussed in the follow-up post, about using his proof assistant and no longer worrying about mistakes, I think many people (including the owner of this blog?) would interpret it as saying that he’s replaced the goal of human understanding by the goal of machine understanding. And that doesn’t necessarily appeal to everyone — and the lack of appeal might be quite deep. (It’s not at the level of analysis vs. algebra, which is a typical gap between different mathematicians’ views of what’s important or interesting, but at the level of human vs. machine, which resonates at a more visceral level, and goes beyond the minutiae of any particular technical area of mathematics.)

Dimitris: Dear Matthew,

Thanks for your reply. Firstly, let me say that after re-reading my post, I admit that I do sound unusually aggressive – but that’s only because I reacted (in the heat of the moment, as it were) to my words being used to make a point that was not at all consistent with what I was saying in the context in which I said it. Perhaps I should’ve phrased things in less cutting a fashion in my post above, but I’m sure you’ll agree that there’s nothing more frustrating than being taken out of context.

Now let me get to all your interesting points. Let me just give my two cents worth and explain myself a little better.

“Actually, the developments that led to Wiles’s proof of FLT were all very much in the mainstream of number theory (going back to Fermat, Euler, Gauss, and Kummer, among other luminaries in the history of number theory).”

In writing what I did I was thinking of a remark attributed to Lenstra (found e.g. on p. 245 of “Modular Forms and Fermat’s Last Theorem”) who said that twenty years ago he was firm in his conviction that he wanted to solve diophantine equations and he did not want to represent functors and was surprised to find himself representing functors in order to solve diophantine equations. In any case, mainstream or not mainstream, when all this machinery was being developed by Grothendieck in its full abstraction I don’t think it was fully clear how useful it would turn out to be in mainstream, concrete number theory. I don’t think I’m too far off the mark in making this point. But this is something that it’s better not to dwell on too much, since as you say I was merely drawing an analogy to illustrate why the poster’s statement sounded a bit too strong to me.

“Similary, I think that the disconnection between logic/foundations and other areas of maths (e.g. that Mike Shulman alluded to in a comment above), although not absolute (there are people who know about both areas, and e.g. there are periodic conferences combining logic and number theory that try to bridge the gap between these two topics), is real, and wider than many of the other apparent gaps between different areas of mathematics.”

Sure – but this sociological fact is often only used against logicians to somehow suggest that what logicians are doing is “out of touch” with “real” mathematics. Whereas the point is that, yes, of course it is detached from “real” mathematics because its object of study *is* “real” mathematics (rather than the object of study *of* “real” mathematics.) Few logicians (as I understand the term) would argue or even want to eliminate this “disconnect” that you mention – there being this “disconnect” is exactly what makes logic/foundations a discipline in its own right, distinct from “real” mathematics. I suppose you’re thinking primarily of mathematical logic/model theory when you speak of these conferences, since you refer to logic as an “area of mathematics”. The logic that I have in mind on the other hand is not an area of mathematics. It is metamathematics. HoTT/UF has both a metamathematical component and a mathematical component (as does set theory.) The metamathematical part is what we may call Univalent Foundations. The mathematical part is that it provides a synthetic theory of ∞-groupoids. (The division is simplistic, but made merely in order to illustrate my point.) So there is no need to interpret HoTT/UF (as I think – but correct me if I’m wrong – you are) as basically a metamathematical endeavour trying to disguise itself as new mathematics. It is one among many endeavours providing new insights, mathematical and metamathematical alike.

“But I’m not sure that just claiming “it could be useful in the future” is necessarily strong enough of an argument to convince many non-foundations-oriented mathematicians to pay attention to HTT (or any other foundational developments). Maybe that’s okay, too — I don’t know whether or not the HTT community wants more mathematicians to pay attention to it.”

I of course completely agree that anyone on the “inside” saying that something could be useful in the future cannot be sufficient justification to convince anyone on the “outside”. But I’m drawing attention, rather, to the contrapositive of this: what appears to those on the “outside” to be useful or not is equally unconvincing an argument for anyone on the “inside”. Your last sentence above hits the nail on the head. I have no idea where this idea of HTT people as “militant enthusiasts” comes from – no reasonable practitioner/admirer of HTT that I know of is trying to convince anyone to abandon the way they think of things, especially number theorists and other practicing mathematicians. Of course any new field brings with it an excess of excitement and hope and eagerness – but that doesn’t mean it’s filled with rabid zealots. It simply means that, as with all developing infants, it is struggling for survival and relevance…and learning in the process.

“But if one looks at the Voevodsky quote discussed in the follow-up post, about using his proof assistant and no longer worrying about mistakes, I think many people (including the owner of this blog?) would interpret it as saying that he’s replaced the goal of human understanding by the goal of machine understanding.”

Here I think there is some real confusion – or if not that, then some real disagreement – or perhaps I’m misunderstanding you. I can’t speak for Voevodsky, obviously, so I’ll speak for myself. I don’t know what you mean by “machine understanding” but it sounds to me like what you have in mind is something like *automated theorem-proving*, i.e. the idea that machines will prove new theorems and that we will defer to them the work usually done by mathematical intuition/aesthetic sense etc. But *proof assistants* (about which Voevodsky is speaking) are something completely different. You cannot use a proof assistant unless you’ve already come up with a proof. So the use of proof assistants changes next-to-nothing about the actual human mathematical creative process. The only thing proof assistants are supposed to change is the actual *checking* of a proof after it’s been *discovered*, thus assisting in eliminating or revealing technical errors, etc. In doing so they are supposed to optimize a process that already takes place in the mathematical community, i.e. the refereeing of new arguments. But, of course, at the current state of knowledge this process is extremely sub-optimal: it is still far more efficient (believe it or not) to rely on a human referee than to have a machine check an argument. In summary, an ideal proof assistant would *at most* be a kind of ideal referee, NOT some kind of mathematician-robot. Humans will still be responsible for coming up with the arguments to be checked and the whole creative process will still be up to them. Machines will help in verifying them. I don’t see how this could possibly be interpreted as some kind of “man vs. machine” scenario any more than (to use a standard analogy) typesetting in LaTeX can be considered a “man vs. machine” scenario. So I think to use HoTT/UF to make a point about “man vs. machine” is, at best, a stretch and, at worst, unfair – I know you yourself are not making such a point, but merely suggesting why such a point might appear reasonable to make. I’ve given some reasons to think that it’s not that reasonable.

Matthew Emerton: In reply to Tim Campion:

Dear Tim,

One difficulty I (and maybe others) have in trying to understand the significance of the univalence axiom is the following:

I am fairly used to the distinction between equality and isomorphism (in the sense that isomorphisms may not be unique), to using Galois groups in arguments, and even to using groupoids (having worked a little bit with stacks). I don’t particularly think of myself as using ZFC foundations, although I do typically think in terms of (naive) sets and categories. (My mental image of my reasoning is human language naive reasoning starting with a naive understanding of what the natural numbers are, in some sort of Kroneckerian spirit: although I’m open to the idea that I’m unconsciously employing a particular kind of set-theoretic foundations, rather than just truly working in a foundations-free manner.)

Coming to univalence: I don’t see what problem it’s solving, or (more or less the same thing) I don’t see how it is used (or supposed to be used). Is it something that only plays a role in formal reasoning (at the level of writing out truly formal proofs), in the same way that particular axioms of ZFC or predicate logic are never invoked in everyday mathematics (except perhaps implicitly), but only in formalizations that most of us never engage in? If so, that would explain why I don’t understand its role. (Just as I don’t know all the ZFC axioms, and I’m not confident I would understand the roles of all of them.) Or is it something different?

If it does play a role only at the level of writing out formal proofs, then one way to explain it to people not engaged in foundations would be to give toy examples of how it’s used, or to explain what aspect of informal mathematical reasoning it is trying to capture. Also, in imposing this axiom: does it allow us to prove more, or to prove less? Does it make it less likely that one will make a mistake in an argument (because it forbids you from focussing on aspects of the structures under discussion that are not relevant, i.e. that are not preserved by isomorphism), or more likely to make a mistake (because there is one more axiom you are in danger of contradicting)? Or are these sorts of questions barking up the wrong tree?

Of course, people working in any foundations (or on anything else) are doing their own work, and aren’t obliged to explain anything to random people posting on blogs, so feel free to ignore my questions!

Best wishes,

Matthew

Tim Campion: There are definitely people who have thought a lot more about these things than I have — I hope we’ll hear from them! I’m also not actually working in HoTT, just a (hopefully not too militant!) enthusiast. Let me discuss two problems that HoTT attempts to address. An important caveat is that the details of HoTT are still being worked out — but I will pretend in what follows that it is a fully-worked out system with all the properties that we can reasonably expect it to have.

1. Foundational problems.

One argument for HoTT is that it is in a lot of ways closer to the actual, informal foundations that many mathematicians use. For example, in HoTT the natural numbers are _defined_, essentially, as “the type with an element 0, a successor operation, and satisfying an induction principle”. Whereas in ZFC, you do something arbitrary, like defining the natural numbers to be “the smallest set \omega containing the empty set and, for each x \in \omega, containing x \cup \{x\}” (I think this is called the von Neumann definition). I think (but I’m interested whether you’d agree, or if I haven’t been clear enough to say) that the HoTT definition sounds a lot closer to the Kroneckerian picture you describe than the ZFC picture does. In general, the objects of HoTT are _defined_ by universal properties like this, which I think accords with mathematical practice. (But this feature is more a feature of type theory in general rather than HoTT – and univalence – in particular.)
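The induction-style definition can be written down directly in Lean 4 syntax, a language in the type-theoretic tradition being discussed; this is a sketch, and `N` and `add` are illustrative names:

```lean
-- The natural numbers specified by their introduction rules: a zero
-- and a successor. The induction/recursion principle comes with the
-- definition, rather than being derived from an encoding as in ZFC.
inductive N : Type where
  | zero : N
  | succ : N → N

-- Example use of the recursion principle: addition by recursion on
-- the second argument.
def add : N → N → N
  | n, N.zero   => n
  | n, N.succ m => N.succ (add n m)
```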

There are a bunch of other ways in which HoTT seems closer to mathematical practice and to mathematicians’ informal views of mathematics, some of which are discussed in the comments, and which I could talk about more. These solve the _problem_ that many important ideas which are _meta-principles_ in standard foundations (e.g. I can “basically” define an object by a universal property, or I can “basically” treat isomorphic objects as the same) become actual theorems in HoTT (I can literally define an object by a universal property, or I can literally treat isomorphic objects as the same). It makes the practice of mathematics more precise. Why should we care about solving this problem? Well, maybe others can chime in, but I have a vague notion that the better formalized a principle is, the more likely it is that we will be able to find ways to teach it efficiently, and thus learn it at an earlier stage in our mathematical careers. I’m not sure how well this stands up to scrutiny.

2. A synthetic theory of \infty-groupoids

This is a more concrete problem. If you have to work with (higher) groupoids, currently you have to go read something like the Stacks project or Lurie’s book, learn a kind of elaborate formalism, and develop a whole suite of intuitions about the right ways to formulate statements about higher groupoids that are going to be correct and provable, and you have to learn lots of technical details about how to go about actually proving such statements. With the utmost respect to the authors of existing texts, this is a somewhat painful and time-consuming process. It’s like reasoning about the natural numbers without having formalized the Peano postulates, but instead having to work with the von Neumann definition. HoTT aims to streamline this process, by providing something akin to the Peano postulates for higher groupoids — a synthetic theory as opposed to an analytic one. The gap between the intuitions you have to develop and the methods of proof you have available shrinks.

I should admit that I’m basically an undergrad and I haven’t actually learned this stuff, but here’s what I think should be an example. Intuitively, stacks are sheaves, but with sets replaced by groupoids, and notions of equivalence “appropriately modified”. I think that in HoTT, this should literally be true. And any heuristic argument that you arrive at from this picture should translate directly into a proof, just as heuristic arguments about isomorphic objects being interchangeable translate into proofs. And the same goes for higher stacks, replacing sets by higher groupoids.

Matthew EmertonDear Tim,

Thanks a lot for this reply.

Cheers,

Matthew

Gershom B: Matthew, let me try to address some of your questions, as best as I can. I am by no means as much of an expert as many on this thread, but I’ll see what I can do.

“In imposing this axiom: does it allow us to prove more, or to prove less?” Like most synthetic axioms, it allows us to prove more than without it. However, it _also_ rules out certain other axioms we might wish to introduce. So in MLTT + HITs + UA I can prove things that I cannot with just MLTT + HITs. For example, I can define S1 as a higher inductive type even without univalence. But without univalence I cannot prove that there is more than one path from the basepoint to itself, and following from that I cannot prove that the fundamental group of S1 (as a HIT) is Z. I think of it as providing structure to the universe by “inflating” it, and thus allowing us to observe potentially latent structure. But of course now I cannot assume the opposite — that the _only_ path from the basepoint to itself is reflexivity. (But why would you want to prove that, you may ask. Well, sure. But there are other occasions in which it is extremely convenient to assume similar things.)
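The interface of the circle HIT can be sketched in Lean 4 syntax; plain Lean cannot *define* higher inductive types, so this states the pieces axiomatically (following the HoTT book's presentation; `S1`, `base`, `loop` are the standard names there):

```lean
-- The circle as a higher inductive type, stated axiomatically:
-- a point, and a path from that point to itself.
axiom S1 : Type
axiom base : S1
axiom loop : base = base
-- Without univalence one cannot prove that loop differs from rfl;
-- with it, one can, and π₁(S¹) can be computed to be ℤ.
```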

I think a very good argument for using univalence comes in Chapter 9 of the HoTT book — on categories. Typically, people had considered category theory non-constructive in the sense that many basic theorems, such as the adjoint functor theorem, require the axiom of choice. Solutions to this, such as anafunctors, that had been developed, required a fair amount of book-keeping. By Example 9.1.7 in the HoTT book, we can apply univalence to directly demonstrate that Set, and also set-level structures, give “saturated” categories that have an “internalized univalence.” So in this setting, we can get a very nice, direct, formulation of category theory, including the usual theorems, but without requiring choice.

So, in this setting, rather than using univalence to observe additional structure, we use it to “rule out” extra structure we don’t want in a uniform way — similarly to how one may use HITs to provide a nicer construction of quotients than is usual.

So I would suggest that mathematicians “informally” (or in non-formal discussions, if you prefer) use univalence-like principles all the time. For example, category theorists will often walk through constructions and then, tag on at the end “up to isomorphism, of course”. In other realms, as I understand it, one might construct a proper subfield of C that is isomorphic to C, and then proceed to just call it “C” as well, then on out, passing over that isomorphism as a given. Univalence gives a _much more convenient_ way to make this precise than the usual constructions, since it keeps track of all the passages across the isomorphism for you (or rather, as an axiom it asserts that you _can_ keep track of such things, and in a setting where univalence computes [i.e. is no longer an axiom but a constructive theorem] then in fact it actually keeps track).

By this mechanism, one hope might be that we can close the gap between formal and informal reasoning — allowing us to use high-level language while having a more direct and obvious path to turning that into a directly checkable form (and hence, having better rules-of-thumb by which to sanity-check the correctness of our high level reasoning, short of full formalization). So one would hope it would make you less likely to make a mistake. But, of course, if one is in the habit of reasoning elementwise, or of using principles which may non-obviously violate univalence, then, until you shake the habit, then mistakes may well occur more frequently (although one would expect that, except in very occasional situations, they would be of the easily repairable form).

Matthew Emerton: Dear Gershom,

Thanks very much for this; it definitely gives me some more concrete ideas to reflect on.

Cheers,

Matthew

David Corfield: Michael wrote in the post “I note in passing that none of the chapters of Landry’s book is devoted to the philosophical significance of the use of categories by Grothendieck, much less to Lurie’s more recent work, and I find this surprising.”

“devoted” was perhaps hoping for too much, but Lurie does at least make an appearance in my chapter.

Martin Escardo: The slogan “Isomorphic types are equal” is a misrepresentation of the univalence axiom.

Martin-Lof type theory, well before HoTT or univalent foundations, had a so-called identity type Id.
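That identity type can be sketched as an inductive family in Lean 4 syntax; `Id'` is an illustrative name (Lean's own version is called `Eq`):

```lean
-- Martin-Lof's identity type: the only stated way to inhabit
-- Id' a b is reflexivity, when a and b are the same term.
inductive Id' {α : Type} : α → α → Type where
  | refl (a : α) : Id' a a
-- Crucially, nothing in this definition forces Id' a b to have at
-- most one inhabitant; that underdetermination is what univalence
-- consistently resolves in favour of equivalence.
```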

What the univalence axiom says, together with the models that validate it, is not that isomorphism is equality, but rather that it is consistent that Martin-Lof identity is isomorphism, or rather the refined version of isomorphism that works for omega-groupoids, called equivalence:

The univalence axiom says that Martin-Lof identity is equivalence (the higher generalization of isomorphism for sets).

It is consistent in Martin-Lof type theory, which has existed since the early 1970s, that type identity is the same thing as equivalence.

So the identity type never, ever was equality. Its nature was always rather undecided. The consistency of the univalence axiom just says that one way to decide it is to let it be equivalence. And it is a good way to decide it, but I am not going to discuss this here, or defend it (although I am tempted to). Moreover, after univalence has been postulated, one can argue that identity types give a good notion of equality. I am not going to preach this here either, again resisting temptation.

The only point I wish to make here is this: the univalence axiom doesn’t say that isomorphic types are equal. It merely postulates that equivalent types are related by a construction called Id in Martin-Lof type theory, which was intended to be equality, but never was (not without the univalence axiom, and much less with it) equality, at least not in the usual sense of equality.

So, at the very least, the univalence axiom contributes to understanding what Martin-Lof identity types could have been, and in particular it shows that they couldn’t have been equality in the traditional sense, e.g. that understood by the eminent writer of the above post and the majority of the mathematical community.

We need to do a much better job explaining what the univalence axiom is.

To begin with, it is not that it is false in traditional type theory. It can’t even be formulated in ZFC.

The univalence axiom is not even wrong in ZFC.

mathematicswithoutapologies (Post author): Thank you for this clarification. This is exactly the point I have understood, or rather failed to understand, for several years. My understanding has been that, whereas the kind of category theory to which algebraic geometers are accustomed works with a notion of equivalence that cannot be formalized well within ZFC (because, for example, the first thing an algebraic geometer does is define functors with target the category of sets, without a second thought), HOTT offers an alternative based on collapsing equivalence to something called the Martin-Lof identity type. In the established discourse of Grothendieck-style algebraic geometry, which has been extraordinarily successful for well over 50 years, there is no obvious place to fit the Martin-Lof identity type, nor is there any obvious need for it. Voevodsky is a major figure in algebraic geometry and he is enthusiastic about the univalence axiom, but his stated motivation is to make proofs machine-readable. Several people have denied on this blog that this motivation is widespread among promoters of HOTT; but as far as I know, they are not algebraic geometers. So I still don’t understand what algebraic geometers have to gain from adopting HOTT/UF, apart from new friends in their local computer science departments — and I’m the first to agree that it’s not a small thing to make friends in other departments.

I think I did a fairly good job in Chapter 7 explaining what’s at stake with identity and isomorphism in usual category theory — not for professional philosophers or homotopy theorists, but they’re not the majority of my intended audience.

Martin Escardo: Joyal once joked in my presence (but I think he seriously meant what he said) that identity types in MLTT should have been called equivalence types, and written Eq rather than Id, where Eq could be conceived as “equality” with a stretch of the mind. Then the univalence axiom would just say that equivalence is equivalence, as it should be.

Regarding the reasons why people are interested in HoTT/UF, they are rather varied, and it suffices to say that the community includes mathematicians, logicians, and computer scientists.

The mathematicians are skeptical, because it involves logicians and computer scientists. The logicians are skeptical, because it involves computer scientists. The computer scientists are skeptical because it involves mathematicians. This is of course a joke: it was surprising to see all of them together for a whole year at the IAS making definite progress. This is still going on in a collegial way.

As for what this will lead to, all we can do is make wrong predictions.

Martin Escardo: I meant to say at the very end:

To begin with, it is not that it is false in traditional *set* theory. It can’t even be formulated in ZFC.
