Language about language

I’m cutting loose the “internal language” strand of the Gordian knot of the “No Comment” comment section.  Here is David Corfield’s last contribution:

Regarding this ‘internal language’ issue, Michael, did you read nLab’s page ‘internal logic’ (http://ncatlab.org/nlab/show/internal+logic)? It’s far from perfect, but hopefully it conveys the gist of the idea that inference systems come in different strengths, that to each system there correspond settings which ‘support’ that reasoning, and so that inferences within a given system can be interpreted in any example of the corresponding setting.
So, for example, if you have a rich enough logic in which to define the concept of the reals, symmetric matrices and diagonalization, and to prove some result about diagonalization, you could look for suitable interpretations. Someone (I think Scedrov here http://www.sciencedirect.com/science/article/pii/0168007286900060) showed you could do something of this sort constructively, that is, in the kind of language that is internal to toposes. He could then interpret his argument in any topos, such as the topos of sheaves over the reals, where it gives a result about diagonalizing matrices of continuous functions.
Now, if HoTT is the internal language of infinity-toposes, any proof in HoTT has an interpretation in any infinity-topos.

Indeed I tried to read the section on “internal logic” at nLab, and had my usual experience: after at most 3 lines I am referred to another page for a term I don’t know; the same thing happens on the second page, and so on, until I either reach a dead end or return to my starting point. I have very rarely learned anything at nLab, and I draw the conclusion that it was not designed for people with my basic understanding of mathematics.

Before I continue, I want to build on the “linguistic” metaphor I used in an earlier comment.  Although the architectural imagery underlying the “Foundations” metaphor persists in Bourbaki’s Architecture of Mathematics text, the imagery of communications is more appropriate to account for Bourbaki’s actual accomplishment, which was to provide a common language for most central branches of mathematics.  For a variety of historical reasons, Bourbaki chose to base the common language on the dialect of algebra, just as the national language of Italian unity was based on the Florentine dialect.  Until David’s comment, I was under the impression that nLab was written in a completely different language (Albanian, for example), but having read his explanation, I now recognize it as (a logically-inflected variant of) the dialect of homotopy theory.  In this optic, the proposal to adopt HOTT/UF as a new foundation for mathematics is analogous to the proposal to replace the Florentine dialect by the Neapolitan dialect as the national language of Italy.  This would be disorienting in the short term — imagine the BBC instructing its news anchors that they are henceforth to speak in Glasgow dialect — but might bring benefits (preserving the Union, say, in the BBC example).

(Here is a traditional Neapolitan song about the natural numbers.  Giuseppe Peano, however, was Piemontese.)

The nLab page on infinity topoi is not very informative, but I found Jacob Lurie’s 2003 article on the subject on the arxiv, and I’m pretty sure I could make sense of that if I needed to (with the help of a pidgin).  As a number theorist I have never actually had to use a topos as such, except when I wanted to look inside the black box of some theory of cohomology of schemes, which is practically never.  This may well change in the near future, because of the work of Peter Scholze and his collaborators, and especially the introduction of the proétale topology by Scholze and Bhargav Bhatt.

This represents the highest level of topos-theoretic sophistication attained by (large groups of) number theorists in quite some time, but the following excerpt from the notes to Scholze’s course last fall in Berkeley may come as a revelation to those who assume mathematics is not divided into dialects.

We are going to use the fact that in a topos, taking fibre products commutes with colimits [citation needed].

Jacob’s latest, extremely helpful, comment arrived as I was finishing the above nonsense and is copied below, in order to keep the thread separate.  I’ll probably comment on this later, but in the meantime I’ll return to my planned weekend with my family.

Regarding the meaning of the term “internal language”: I’d prefer to phrase it as a distinction between “syntactic” and “semantic” ways of thinking, and it’s a distinction that one could make for reasoning about much more familiar objects.
So, for example, suppose I wanted to prove a theorem about groups. Let me distinguish between two “styles” in which such a proof could be understood:
a) In the “syntactic” style of first-order logic. A proof in this sense consists of a sequence of sentences in the first-order language of groups (meaning they are formal expressions built out of “and”, “or”, “for all”, etcetera), each of which is required either to be an axiom of group theory or to be obtained from previous elements of the sequence using a rule of inference.
b) In the “semantic” style, the way that mathematics is usually practiced. Here we imagine that we have a group and we use whatever tools we like in order to study it and draw the conclusion that we want. So we might use concepts that can’t be formulated in the first-order language of groups: for example, we might draw conclusions about G by studying subgroups of G, or linear representations of G, or ideals in the group ring Z[G].
Gödel’s completeness theorem asserts that any first-order statement about groups is provable in the syntactic sense if and only if it is true in all groups. So for such statements, any theorem that can be proven “semantically” can also be proven “syntactically” (the fine print, of course, is that this applies only to statements which can be formulated in the first-order language of groups. So, for example, it tells you nothing about theorems which concern -finite- groups, since the finiteness of a group is not a hypothesis that can be formulated in the first-order language of groups).
I think that the debate that Mike and I were having above was about the utility of these two different approaches for thinking about a different sort of mathematical structure (an “elementary infinity topos”). Homotopy type theory can be viewed (and I think Mike will endorse this?) as an approach to formulating and proving theorems about these objects in the “syntactic” style (with the caveat that it is not the syntax of first order logic, but of type theory). My skepticism is about whether this approach can really have substantive advantages over the “semantic” style for the purpose of proving theorems (ignoring issues related to computer verification of proofs or philosophical objections to using transcendental methods to study something that is supposed to be foundational).
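
For concreteness, a standard example of a sentence in the first-order language of groups, in the sense of (a), is

\[
\Big( \forall x\, \forall y\;\; (x \cdot y) \cdot (x \cdot y) = (x \cdot x) \cdot (y \cdot y) \Big) \;\longrightarrow\; \Big( \forall x\, \forall y\;\; x \cdot y = y \cdot x \Big).
\]

It holds in every group (cancel x on the left and y on the right), so by Gödel’s completeness theorem it also admits a purely formal derivation from the group axioms; “G is finite”, by contrast, is not a sentence of this language at all.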

45 thoughts on “Language about language”

  1. Jon Awbrey

    There is a whole language and a corresponding literature that approaches logic and mathematics as a species of communication and information gathering, namely, the pragmatic-semiotic tradition passed on to us through the lifelong efforts of C.S. Peirce. It is by no means a dead language, but it continues to fly beneath the radar of many trackers in logic and math today. Still, the resource remains for those who are ready to dip into it.

  2. Dimitris

    @Jacob (wrt his comment quoted above)

    Internal language/logic is certainly about using a formal syntax to talk about semantic objects. This can be seen from the fact that we say “L is the internal language of X”, so that indicates that we have some pre-linguistic grasp (formal or otherwise) of the object X in question in whose internal language L we are interested. That said, the group example is slightly misleading (I feel) in one sense: we want to call L an internal language if there’s at least some precise sense in which it captures *everything* about X that we are interested in. One precise way of saying this is by requiring there to be an “equivalence” between the “category” of languages L and the “category” of objects X (I’m using quotation marks because these notions might not directly correspond to the technical notions, as indeed will be the case for infty-topoi.) For example, in the case of (elementary) topoi there is such a correspondence (established in Lambek-Scott’s “Higher-Order Categorical Logic”) between the category of elementary topoi and logical morphisms and higher-order intuitionistic (simple) type theories (ITT) and interpretations between them. In the case of groups, there isn’t a similarly “tight” correspondence between the first-order axioms (say T_g) of groups and groups themselves. To illustrate: if you start with a model of these axioms, i.e. a group G, and then produce the first-order theory T_G of all sentences made true by G (in the language of groups), you won’t necessarily get a conservative extension of T_g — e.g. if the group is abelian. But in the ITT / elementary topoi correspondence you actually do get a conservative extension in the (somewhat analogous) situation (the analogy here is not perfect but I won’t get into it more.) In the case of infty-topoi and HoTT this correspondence will have to be a correspondence between (infty,2)-categories, and so vastly more complicated. But, at least, there seems to be a precise technical way to spell out *in what way* HoTT will be the internal language of infty-topoi.
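
    To spell out the abelian example in one line (taking G = Z/2, say): T_G then contains the sentence

    \[
    \forall x\, \forall y\;\; x \cdot y = y \cdot x,
    \]

    which T_g does not prove, since T_g also has non-abelian models such as S_3; so the extension T_G of T_g is not conservative.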

    “Homotopy type theory can be viewed (and I think Mike will endorse this?) as an approach to formulating and proving theorems about these objects in the “syntactic” style (with the caveat that it is not the syntax of first order logic, but of type theory). My skepticism is about whether this approach can really have substantive advantages over the “semantic” style for the purpose of proving theorems (ignoring issues related to computer verification of proofs or philosophical objections to using transcendental methods to study something that is supposed to be foundational).”

    I think there is little doubt that the syntactic approach will not have substantive advantages over the semantic approach when we are interested in *specific* models of infty-topoi, e.g. those studied in classical homotopy theory. When X is a specific infty-topos of mathematical interest and one wants to prove something like: “The particular infty-topos X has property P” this will very likely involve using specific properties of X that far outstrip any axiomatic description of infty-topoi in general. But if one wants to prove theorems of the form “All infty-topoi have property P” then it seems to me that the syntactic approach is well-matched with the semantic one. Similarly, if we want to prove that the monster group has a specific property P it is useless to try to axiomatize the monster group and then deduce P syntactically. Or when we want to prove that a particular topos arising in algebraic geometry has a certain property P it’s doubtful that viewing it as an ITT will help. But if we are interested in a theorem of the form “All abelian groups have P”, “All infty-topoi have P” etc. then proving it syntactically might not be such a bad idea — and indeed this is (essentially) the way in which such general results are proven in textbooks. So I guess your disagreement with Mike (if there’s still any) depends on what kind of theorems you have in mind. I’m guessing the kind of theorems you have in mind are of the first kind, about special properties of specific models? Or perhaps this distinction doesn’t make a difference to your point of view?

    1. Jacob Lurie

      @Dimitris,

      “But if we are interested in a theorem of the form ‘All abelian groups have P’, ‘All infty-topoi have P’ etc. then proving it syntactically might not be such a bad idea — and indeed this is (essentially) the way in which such general results are proven in textbooks.”

      I disagree with this. I imagine that someone writing a textbook on group theory might not pay much attention to whether or not their treatments of general theorems can be formulated in first-order logic. It’s probably easy to find examples of treatments where a theorem about a group G is proven by reasoning that involves thinking about all subgroups of G, for example. Of course, Gödel’s theorem tells us that in principle we don’t need such reasoning (so long as the end result is a first-order statement). But the author and reader of such a textbook might not find that point to be particularly relevant, because they’re interested in thinking about groups rather than first-order logic.

      I also think you’re mistaking the point I was making. I wasn’t arguing that “semantic” methods can prove theorems that “syntactic” ones cannot (as you point out, there’s a sense in which this is impossible in principle, at least for certain types of theorems). It’s that I’m skeptical that the reverse could be true (or that something could really be substantially easier to see on the “syntactic” side). Consequently, I personally don’t see the advantages to thinking about the subject in this particular way (just as I wouldn’t see the advantages of writing a textbook on group theory in a way that -required- the reader to have a background in first-order logic, even if I was only going to discuss theorems and concepts that did admit first-order formulations).

      1. Dimitris

        “It’s that I’m skeptical that the reverse could be true (or that something could really be substantially easier to see on the “syntactic” side). Consequently, I personally don’t see the advantages to thinking about the subject in this particular way (just as I wouldn’t see the advantages of writing a textbook on group theory in a way that -required- the reader to have a background in first-order logic, even if I was only going to discuss theorems and concepts that did admit first-order formulations).”

        Of course I cannot argue (nor do I want to) against your personal take on the subject — I gladly defer to authority. So allow me merely to point at one disanalogy between the case of groups and infty-toposes that might be relevant when making this argument. The machinery of first-order logic came much after the utility of groups in mathematics was firmly established. People had been using groups (implicitly or otherwise) long before the modern foundations of model theory were established with the theorems of Gödel and the work of Tarski etc. So by the time syntactic tools became available, group theory was already quite mature as a mathematical discipline. Thinking syntactically in the case of groups thus proved counterproductive (for many of the reasons that you have already outlined). But the case of HoTT/infty-toposes is different in this (historical) regard: syntactic methods for infty-toposes are becoming available almost at the same time as the theory of infty-toposes is being developed.

        This doesn’t take away from your point that using “second-order” methods to prove things about infty-toposes might be a better way to go about things even in the case when we want to establish “first-order” statements. But it does indicate that maybe the mathematical development of this particular discipline might benefit from syntactic methods in ways that previous disciplines could not. This is to say that in some sense the HoTT/infty-toposes situation is unique: I can’t think of many examples where syntax and semantics were being developed simultaneously (except perhaps axiomatic set theory, which is not a very relevant example here). For example Schreiber’s work on “cohesive HoTT” (which is “semantic” and properly mathematical since he mainly cares about physically-relevant models) is being developed together with its syntactic axiomatization as an extension of HoTT, with insights going back and forth.

        (None of this of course deals with your point about it being “substantially easier”, about which I have nothing precise (or imprecise) to say.)

      2. moziburullah

        I used to hear that the language of maths is sets, and then logic, and recently I have heard quite a bit that the language is actually category theory. Personally, since mathematics is made by humans, for the benefit of other humans, and since humans communicate via natural language, it seems to me that the language of mathematics is natural language. Proof: take any mathematical paper and remove all natural language from it. Is it now a) more readable or b) unreadable? The answer is obviously b.

        The only good reason I can discern to formalise mathematics into a wholly formal language is so that computers can read and ‘reason’ in it, and so that they can assist us in coping with the humongous amount of mathematical material that mathematicians have come up with. But at the rate these large language models are improving in parsing natural language, we might find that this becomes unnecessary – but I don’t think so.

        I’m glad to hear that I’m not the only one having trouble navigating the nLab. Life is too short. It needs a well-paid team of editors. I say well paid, since I don’t work in academia, but I was horrified to learn that 75% of teaching in US universities is carried out by poorly paid adjuncts. This is crazy. We are just hollowing out the grand tradition of universities. Presumably we’ll be replacing them with ChatMath bots. They’re cheaper. But of course to our own immense loss.

    2. Jacob Lurie

      @Dimitris

      “We want to call L an internal language if there’s at least some precise sense in which it captures *everything* about X that we are interested in.”

      This also seems a bit misleading. The statement “I am a Grothendieck topos” is one that can’t be formulated in the internal language of a topos, just like “I am a finite group” can’t be formulated in the first-order language of groups. But Grothendieck-ness of a topos and finiteness of a group are both conditions that we might be interested in.
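
      (A quick reminder of why finiteness fails to be first-order in the group case: by compactness, no set Σ of sentences in the first-order language of groups has exactly the finite groups as its models, because every finite subset of

      \[
      \Sigma \;\cup\; \{\, \exists x_1 \cdots \exists x_n \, \textstyle\bigwedge_{i \neq j} x_i \neq x_j \;:\; n \geq 2 \,\}
      \]

      is satisfied by some Z/N with N large enough, so Σ would also have an infinite model.)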

      (Of course, it’s true that you can say something precise about what the internal language does for you. I just wouldn’t paraphrase it as “it captures *everything* we might be interested in”.)

      1. Dimitris

        @Jacob

        “The statement “I am a Grothendieck topos” is one that can’t be formulated in the internal language of a topos, just like “I am a finite group” can’t be formulated in the first-order language of groups. But Grothendieck-ness of a topos and finiteness of a group are both conditions that we might be interested in.”

        Absolutely, I completely agree. Perhaps this is a good distinction to make that (my use of) natural language blurs, so allow me to reiterate your point for the benefit of this discussion: the internal language of X is not really *about* X, but rather about the things that X consists of. The internal language of groups doesn’t talk *about* groups directly, but rather about elements of groups (better: elements that stand in a certain group-like relation to one another.) Similarly, HoTT as an internal language of infty-toposes will not talk *about* infty-toposes directly but about their objects, i.e. infty-groupoids.

        (But it is also worth pointing out that properties such as “I am a Grothendieck topos” *can* be expressed as properties of an internal language, just not the internal language of Grothendieck toposes themselves (at least not without extra features). I doubt anyone has ever worked this out explicitly, but it seems to me a straightforward exercise to express Grothendieck-toposness of a category as a property in the internal language of the category of categories (or an appropriately structured 2-category) using e.g. Giraud’s characterization.)

        Talking about the elements of groups/objects of infty-groupoids will of course allow us to express (indirectly) many properties of groups/infty-toposes — in particular those properties that are expressible in terms of elements/objects (e.g. “I am abelian”). But we will not be able to express “global” or “external” properties (e.g. “I am a finite group”) that we might care about. So I certainly agree with you (if I understand you correctly) that not all properties of X that we might be interested in will be captured by the internal language of X. What we *can* demand of an internal language however is that it captures “everything” that we want to say about the “objects” of X (elements, sheaves, infty-groupoids — whatever they may be) and about properties of X expressible in terms of the “objects” of X. (And “everything” here should best be understood as an appropriate technical criterion, e.g. an equivalence of syntactic and semantic categories of some sort, not some kind of metaphysical claim.)

        But now this makes me think that your disagreement is of a different sort altogether: you are essentially saying that the syntactic approach to infty-topoi (at least in the style of HoTT) will not be of interest/use when trying to prove theorems of a specific sort that you and other homotopy theorists are interested in because these theorems are *about* infty-toposes themselves (or how they relate to other structures.) If that’s the case then there’s no debate at all: I (for one) completely agree that HoTT will not be the appropriate tool for that kind of question. (But extensions of HoTT that can serve as the internal language of (∞,2)-toposes might — though this is science fiction.) Is this accurate? And if not then can I ask, if I may, what kind of theorems you have in mind here? Perhaps having a concrete example might help wrap up this discussion in a satisfying manner.

      2. Jacob Lurie

        @Dimitris,

        I agree with you that the analogy with “first order language of groups” isn’t perfect. But I think it captures some of the essence of the discussion in a way that’s more broadly accessible.

        “What we *can* demand of an internal language however is that it captures ‘everything’ that we want to say about the ‘objects’ of X (elements, sheaves, infty-groupoids — whatever they may be) and about properties of X expressible in terms of the ‘objects’ of X.”

        I don’t disagree, and that’s why I would characterize the nature of this disagreement as an aesthetic one. From my perspective, there’s a more efficient way to capture “everything that we want to say about the objects of X”: by thinking about X itself. I don’t personally see the appeal of repackaging this information to make it fit the syntax of first order logic or higher order logic or type theory (but clearly others do see some appeal in this).

        “will not be of interest/use when trying to prove theorems of a specific sort that you and other homotopy theorists are interested in because these theorems are *about* infty-toposes themselves”

        I don’t want to claim here to be representing homotopy theorists (or any other group larger than myself). And I’m not even sure that I can answer this question for myself: like I said earlier, I’m not confident that I yet know a good definition of “elementary infty-topos” or see a compelling application. I do find it to be an interesting and natural idea, but not from the point of view of “a homotopy theorist who is looking for new tools”. More like “someone with an interest in categorical logic who knows enough homotopy theory to understand what the program is about”.

    3. Max

      @Dimitris:

      “Internal language/logic is certainly about using a formal syntax to talk about semantic objects. This can be seen from the fact that we say ‘L is the internal language of X’, so that indicates that we have some pre-linguistic grasp (formal or otherwise) of the object X in question in whose internal language L we are interested. …”

      Can I rephrase this as: the internal language/logic L is a metatheory that provides a Tarski-style truth predicate for X, as a syntactically justified construction in this internal language L?
      By “syntactically justified construction” I mean a constructive Boolean-valued function on propositions/sentences of X.

      Thanks

      1. Max

        From the recent discussion I learn that the internal language is indeed a “big” one.
        Taken literally, given the adjective “internal”, this sounds paradoxical. Nevertheless, it has a “formal” justification in the form of Tarski’s undefinability of truth (semantics).

        But what does it mean from a foundational perspective?
        Is it a bug or a feature?

        In my opinion, on the traditional understanding of what a foundation is, this is a bug: it shifts the domain of discussion from a “small problem” to a “big problem”, or, more formally, to a higher-order theory with a more difficult consistency problem (if “more difficult” is applicable here at all).

        But this is only half of the bad news. It is remarkable that Tarski’s undefinability of truth is itself only a conditional statement (as is any (meta)theorem): it is formally proven only up to the consistency of its own higher-order metatheory (the diagonal trick, associativity/substitution).
        In the reality of formal systems, where consistency is undecidable, this has no definite semantics.

        So unbounded recursions (the cumulative hierarchy), like “language about language…” or “formal system (metatheory) of formal system…”, have no closures (a language about all languages, or a formal system of all formal systems) with definite semantics within the formal-system concept.

        My point is that mathematics is not only the formal-system concept, and that foundations should be placed where there is no “truth pathology” (undecidability of consistency).

        The process concept, as a “generalized unbounded recursion”, is very expressive for representing all the “interesting” effects in formal systems: in particular, the cumulative hierarchy, impredicativity…

        One more indicative example: a stochastic Markov process (or even Brownian motion) as a constructive (“first-order”, “small”) model of higher categories/groupoids: states as objects, shifts as morphisms.
        In such a process there are effects of all orders (loops/higher morphisms of all orders), and they are well-ordered along time!

  3. dcorfield

    It’s very unlikely that merely using the dialect ‘internal logic’ will lead to important new insights. It seems to me highly probable that it is necessary to speak a number of languages so as to be able to make them combine creatively. The most creative use of such a blend which includes a variant HoTT dialect is Urs Schreiber’s mix of cohesive HoTT, the external semantic infinity-topos dialect and the family of languages that is current mathematical physics, including Witten-ian, as in his note (https://dl.dropboxusercontent.com/u/12630719/cwzw.pdf), a ‘digest’ of the lengthy (http://ncatlab.org/schreiber/show/differential+cohomology+in+a+cohesive+topos).

    I’m in no great position to judge, but at least involving this particular form of internal language dialect gives me a greater chance of glimpsing some of these constructions. It’s not unrelated to the attraction of a certain attitude I first encountered in John Baez’s writings that current research ideas can be simple. So I could even take his ideas and make what I think was the first suggestion that we look for a higher Klein geometry (http://ncatlab.org/nlab/show/higher+Klein+geometry), quotienting Lie 2-groups.

    Of course, there’s a question of whether ‘simple’ is largely a reflection of familiarity. But even if it’s just a matter of a collection of equally hard to learn languages brought together, the combination may allow questions to arise, even in people who don’t work as research mathematicians.

    How about this question, which came to me the other day? In the differential cohesive set-up there naturally appears a certain comonad (http://ncatlab.org/nlab/show/jet+comonad) which forms jet spaces. The coalgebras for this comonad are (prolongations of) partial differential equations. Now, were we to work this out in the setting of the differential cohesion to be found in arithmetic (http://ncatlab.org/nlab/show/differential+cohesion+and+idelic+structure), would it lead to the kind of arithmetic differential equations found in the work of Buium (http://ncatlab.org/nlab/show/arithmetic+jet+space)?

    It’s not

  4. Richard Séguin

    Jacob’s comment definitely clarified things for me.

    Mathematics Without Apologies arrived in my mailbox this afternoon. It looks even more interesting than I expected, and I’ve moved it up to the top of my book list.

  5. Mike Shulman

    Oops, I didn’t see this post. Let me copy my last reply to Jacob (feel free to delete it from there), so we can continue here.

    I dispute the claim that mathematics is “usually practiced” in a “semantic” style. I would say that in practice, both syntax and semantics are an integral part of mathematics and cannot be separated. When we do mathematics, we generally have in mind that we are “talking about things”. However, what we actually write on paper are strings of symbols that follow (usually informal) rules. In other words, we “think semantically” but “prove syntactically”. The “soundness theorem” about the semantics of any formal system then tells us that if we have a model of the system, then everything we’ve proven syntactically is indeed true about that model. (What it means to “have” a model is variable; a formalist would say it just means we’ve proven an appropriate statement working syntactically in a different formal system, the “metatheory”.)

    In the example of groups, we “imagine that we have a group”, but what we write down when actually doing the proof are strings of symbols, including variables representing elements of that group, plus all the standard mathematical constructions, and also the apparatus of logic (e.g. and, or, for all, there exists). This is true even if we use higher-order constructions; in that case we just use a syntax that is a higher-order theory of groups rather than a first-order one. The presence of higher-order constructions has hardly anything to do with syntax vs semantics.

    Similarly, when working with an ∞-topos in the categorical way, we “imagine that we have an ∞-topos”, but what we write down are strings of symbols, including variables representing objects, morphisms, homotopies, and so on in that ∞-topos, plus various mathematical constructions and the apparatus of logic. These strings of symbols are not the syntax of homotopy type theory.

    To explain the “internal language” method from this point of view, consider first “the theory of nothing”. This is like the theory of groups, but instead of imagining that we have a group, we imagine that we have nothing at all. However, we still have (assuming we are using a higher-order theory) all the standard mathematical constructions and the apparatus of logic, so we can build all of mathematics. (In particular, we can prove theorems like “for any group G, blah blah”. The way we prove such a theorem is, of course, to assume given an arbitrary group G and proceed; this can be considered to be passing temporarily into the “theory of a group” and then back out again at the end of the proof. Type theorists describe this by talking about “contexts”.)
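
    Schematically, the type-theoretic bookkeeping for this in-and-out move is just abstraction over the context: a proof p of a statement P(G), carried out in the context of a variable G of type Group, becomes, on leaving the context, a closed proof of the universally quantified statement,

    \[
    G : \mathrm{Group} \;\vdash\; p : P(G)
    \qquad\rightsquigarrow\qquad
    \vdash\; \lambda G.\, p \;:\; \prod_{G : \mathrm{Group}} P(G).
    \]

    (This is only a schematic rendering of the “contexts” remark, not a claim about any particular formal system.)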

    Applied to “the theory of nothing”, the soundness theorem tells us that whenever we have nothing (which, of course, we always do), all of our theorems are true. In other words, proofs in mathematics are valid.

    The point now is that the phrase “whenever we have” can be interpreted in many different categories. Category-theoretically, the way the soundness (and its converse, completeness) theorems work is that, as you mentioned earlier, from the syntax of a given theory we can construct the initial category with appropriate structure containing a model of that theory. (The choice of first-order or higher-order logic is what determines the meaning of “appropriate structure”.) Thus, whenever we have a model of the theory of groups in some category, i.e. a group object in that category, then any theorem that we proved in the theory of groups is “true” (in the appropriate sense) about that group object. The “classical” semantics arises when the category in question is the category of sets; but we also get a sort of “free theorem” by choosing other categories. That is, when we prove a theorem about groups, we could easily have been thinking only about ordinary groups (in Set); but as long as our proof fits in an appropriate theory, the result we obtain is automatically true about topological groups, Lie groups, Hopf algebras, group schemes, and so on.

    Note that this is different from how we would naively proceed to prove a theorem about group objects in categories. That would lead us to work in “the theory of a category containing a group object” (without, of course, explicitly recognizing this, since mathematicians don’t usually think about their syntax). We would then use strings of symbols containing variables representing things like objects and morphisms of the category to prove our theorem. And we would apply it to a particular example — say, a topological group as a group object in Top — by regarding Top as an internal category in Set (deal with universe sizes however you wish) and our topological group as a group object in that internal category. Note that in this case, even though we are talking about arbitrary categories, we still only use semantics in Set.

    By contrast, when working with groups from the internal-language point of view, our strings of symbols contain variables representing things like elements of our group. E.g. associativity is stated as “for all x,y,z in G, we have x(yz)=(xy)z”; whereas for an explicitly described group object, associativity must be stated as the equality of two morphisms G×G×G -> G. Nevertheless, the machine of interpretation tells us that this element-wise associativity automatically gets interpreted in any category by the equality of these two morphisms.
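
    Spelled out, the morphism-level statement is a single equation: for a group object (G, m : G×G → G, e : 1 → G, i : G → G) in a category with finite products, associativity reads

    \[
    m \circ (m \times \mathrm{id}_G) \;=\; m \circ (\mathrm{id}_G \times m) \;:\; G \times G \times G \longrightarrow G,
    \]

    which is exactly what the interpretation machinery produces from the element-wise sentence above.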

    A nice example of the usefulness of this is that it is quite easy to prove that the inverse of an element in a group is unique. Therefore, it follows automatically that the inverse morphism of a group object in a category is unique. In particular, since (commutative) Hopf algebras are the same as group objects in the opposite category of (commutative) rings, it follows that the antipode of a Hopf algebra is unique. This can, of course, be proven by hand, but it’s surprisingly tricky to figure out what you have to do if you attack it directly. Probably the easiest way to do it is to take the usual proof for groups, translate it into commutative diagrams, and dualize it. The point of the internal language is that you don’t have to do all that at all; there’s a machine that does it for you automatically.
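
    For the record, the easy element-wise proof is the familiar three-step calculation: if y and y' are both inverses of x, then

    \[
    y \;=\; y e \;=\; y (x y') \;=\; (y x) y' \;=\; e y' \;=\; y',
    \]

    and it is this little argument that the machinery transports, via commutative diagrams in the opposite of the category of commutative rings, into the uniqueness of the antipode.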

    If, as conjectured (and partially proven), HoTT is an internal language of ∞-toposes (or more general sorts of (∞,1)-categories), that would mean that we can expand the above description to use (∞,1)-categories instead of 1-categories. Proving something about an ∞-topos in the categorical way means working in “the theory of an ∞-topos” and looking at its models in Set, which are ∞-toposes. By contrast, using HoTT would mean working in “the theory of nothing”, but where the “ambient logic” includes not just first-order and higher-order constructions but “higher-homotopy-theoretic” ones, and then interpreting it into (∞,1)-categories. For instance, we could replay the example of groups but for, say, A_4-spaces (or, once the problems of infinite homotopy coherence are overcome, A_∞-spaces).

  6. Mike Shulman

    By the way, I’m sorry that you are unhappy enough with the nLab that you feel the need to make snide comments about it. Keep in mind its stated purpose:

    The purpose of the nLab is to provide a public place where people can make notes about stuff. The purpose is not to make polished expositions of material; that is a happy by-product.

    Perhaps it would be even better to say “that is a happy by-product if and when it ever occurs”. The nLab is not wikipedia, not even “the wikipedia of mathematics” and not even “the wikipedia of higher category theory”. There’s no point to complaining about it not being something it was never intended to be.

    1. mathematicswithoutapologies Post author

      I only make such comments when someone (usually David Corfield, but sometimes Google) refers me to nLab as a source of information. As long as no one expects me to understand anything that’s written there, I’m happy to ignore it.

  7. mathematicswithoutapologies Post author

    Assuming I can map the “internal/external language” distinction onto the “syntax/semantics” distinction going back to Wittgenstein’s Tractatus and logical positivism, I can go with this in several directions. I could, for example, suggest that the main interest of the syntactic approach historically was to develop a means to ascertain the validity of a deduction mechanically (meaning by an automatic procedure; digital computers are unnecessary). I could also try to compare it to “isolating the formal aspects of a proof that generalize,” for example when one knows a theorem for GL(n) and tries to find a generalization that works for all reductive groups.

    But I’d rather wonder how people feel about comparing the syntactic approach to trying to ride a bicycle, or play a piano, by fully conscious application of Newton’s laws. In my “Avatars” talk at the conference organized by David Corfield and Brendan Larvor in 2009, I compared mathematics to a “consensual hallucination,” like virtual reality, and I continue to believe that the aim is to get (consensually) to the point where that hallucination is second nature.

    1. Jacob Lurie

      @Mike,

      I’m not sure I agree with your first point (that the syntax is essential to the practice of mathematics and cannot be separated from the semantics), but that’s part of a broad “philosophy of mathematics” discussion which doesn’t have much to do with homotopy type theory specifically. The rest I don’t disagree with exactly, but I think it reveals a difference in how you and I are thinking about the subject.

      For you, an elementary infty-topos is something “big”: it’s a “world” where you can do mathematics. And you want to imagine yourself as someone who lives in that “world”, in which case the “internal language” provided by homotopy type theory is appropriate for describing the phenomena that you see there.

      For me, an elementary infty-topos is something “small” (in part I mean this literally: the theory I’d like to study would throw away some of the axioms, like the existence of function objects, so that it would admit smaller models). I think of it not -primarily- as a “place where one can do mathematics” (though I am aware of that point of view), but as a mathematical object of a specific kind. And many of the questions that seem relevant involve how it relates to other objects of the same kind, and these questions might not really “translate” into things that the internal logic can investigate.

      To take a simple example: let’s suppose that we’ve formulated a version of “elementary infty-topos theory” where we can talk about diagrams (meaning the technical obstacles alluded to earlier have been overcome, either by cleverness or by modifying the theory). Then one could develop a theory of “internal infty-categories” by mimicking the theory of complete Segal spaces. But in an elementary infty-topos X, one could also study simplicial objects of X that satisfy a Segal condition and the completeness axiom: let’s call these “quasi-internal infty-categories”. Roughly speaking, quasi-internal infty-categories would be things that have n-morphisms for every standard natural number n, but internal infty-categories would be things that have n-morphisms for any n that X thinks is a natural number. So I’d expect these notions to be different, but for the difference to live “infinitely high up”. It seems to me that studying the relationship between these notions is an example of what a theory of elementary infty-topoi (conceived as a “theory which mixes logic with homotopy theory”) should allow me to do. But of course, this can’t be done “internally” to X: it concerns the way that X sits inside the ambient world of “good old” mathematics.

      1. Mike Shulman

        @Jacob: It’s true that there are both “big” and “little” aspects to topos theory. As you are undoubtedly aware (though other readers may not be), it’s traditional to fail to translate the French words gros and petit when talking about this distinction (those who are able to read the nLab can find more here). I would not say that I personally prefer the “big” perspective; in fact I was originally “converted” to higher topos theory by the “little” perspective (see e.g. here). But it is definitely true that internal languages are mostly on the “big” side, and so through my recent involvement in HoTT I’ve learned to appreciate the advantages of that point of view as well. The “little” perspective on topoi definitely benefits less from internal languages; so if your claim was only that you don’t expect HoTT to yield important insight on a particular class of questions about ∞-toposes, then that seems reasonably justified.

        I would, however, like to point out (briefly!) several ways in which the big and little perspectives cross-pollinate and benefit from each other.

        1. When studying “little” toposes we nevertheless often need to “do mathematics” with their objects to a certain extent. For instance, the cohomology of a topos is defined using abelian group objects (or perhaps spectra) inside that topos, in an internalized sense.

        2. Frequently, a bunch of “little toposes” (those that “have the same local models”) can be collected as the objects of a “big topos”.

        3. Even very small relationships between a few “little toposes” can be captured internally to an auxiliary topos. For instance, if we have a geometric morphism f:X->Y, then by gluing along the functor f^* we obtain a new topos, inside which X and Y sit as open and closed subtoposes, and in which both parts of the morphism f are visible by composing the relevant reflectors. The internal logic of this glued topos can then be used to reason about the geometric morphism f. In particular, the relationship between a topos X and the standard ∞-topos ∞Gpd, which is described by the unique geometric morphism X -> ∞Gpd and encodes information about X such as its local connectedness, compactness, and cohomology, can be discussed internally by gluing along that geometric morphism. More complicated gluing constructions for more complicated diagrams of toposes and morphisms are also possible. In one direction, this leads to Urs’s work on cohesive ∞-toposes and cohesive HoTT; but other than that, this avenue seems to be largely unexplored to date.
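
        For what it’s worth, in the 1-categorical case the glued category in point 3 has a concrete description as a comma category (the ∞-version should be analogous): its objects are triples

        \[
        (A,\; B,\; \alpha : A \to f^{*}B), \qquad A \in X,\ B \in Y,
        \]

        with morphisms the evident commuting squares, and with X and Y embedded via A ↦ (A, 1, !) and B ↦ (f^*B, B, id) respectively.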

    2. Jon Awbrey

      I think that’s called coherentism, normally contrasted with or complementary to objectivism. It’s the philosophy of a gang of co-conspirators who think, “We’ll get off scot-free so long as we all keep our stories straight.”

      1. mathematicswithoutapologies Post author

        And if we don’t, who puts us away?

        From the Stanford Encyclopedia:

        “There is an obvious objection that any coherence theory of justification or knowledge must immediately face. It is called the isolation objection: how can the mere fact that a system is coherent, if the latter is understood as a purely system-internal matter, provide any guidance whatsoever to truth and reality? Since the theory does not assign any essential role to experience, there is little reason to think that a coherent system of belief will accurately reflect the external world.”

        But in mathematics the external world is not involved (except perhaps as a distant memory), and there is no distinction between the objects and the (collective) experience.

      2. Jon Awbrey

        “And if we don’t, who puts us away?”

        One’s answer, or at least one’s initial response, to that question will turn on how one feels about formal realities. As I understand it, reality is that which persists in thumping us on the head until we get what it’s trying to tell us. Are there formal realities, forms that drive us in that way?

        Discussions like these tend to begin by supposing we can form a distinction between external and internal. That is a formal hypothesis, not yet borne out as a formal reality. Are there formal realities that drive us to recognize them, to pick them out of a crowd of formal possibilities?

      3. mathematicswithoutapologies Post author

        Between Jon Awbrey’s comments and Holybear’s comment on a different post, I see I’m going to have to plan a discussion on serious philosophy. That’s something I try to avoid, because it’s so much more natural for me to treat philosophical positions, acknowledged or apparent, as data for sociological analysis. The fact that I have a personal private philosophical position should be a matter of no interest whatsoever.

        But I do quote Peirce in the book:

        “…the intellectual powers essential to the mathematician [are] ‘concentration, imagination, and generalization.’ Then, after a dramatic pause, he cried: ‘Did I hear someone say demonstration? Why, my friends,’ he continued, ‘demonstration is merely the pavement on which the chariot of the mathematician rolls.’”

        To which I add:

        It can be argued whether, in the long war at the heart of the Indian epic Mahabharata, Arjuna the archer or his charioteer Krishna, avatar of the god Vishnu, is the true hero. The case could even be made that the chariot and Arjuna’s bow, not to mention the horses, should be granted equal standing. But only a certain philosophical cast of mind would think of giving top billing, Foundational as it were, to the pavement. . . .

    3. DT

      Formal logic is supposed to be a mathematical model for mathematical reasoning. Like any model, it’s not to be confused with the real thing. But it’s a good model if it gives us knowledge about the real thing, and a bad model otherwise.

      The mathematical models for bicycles (here is Francis Whipple’s from 1899) are so good that they have led to engineering advances that make the experience of riding a bicycle better. The stars are not much improved by our astronomical models, but the experience of looking at them is. Maybe likewise the consensual hallucination can be improved.

      (But I don’t like the hallucination metaphor. Hallucinations are more transient, inconsistent, and dangerous than the objects we study.)

      1. mathematicswithoutapologies Post author

        The second definition of “hallucination” in my OED, abbreviated, is “the apparent perception of an external object when no such object is actually present.” With a sufficiently liberal interpretation of perception, this could be a good description of the ability we seek to instill in our students, or of virtual reality.

        I agree with the first paragraph, though I’m skeptical about the knowledge obtained, both in my own practice as a mathematician and in how the model is reflected in mainstream philosophy of mathematics.

        An ethical stance is concealed in the second paragraph, and in many other posts. It is presumed that mathematics needs to be improved, like bicycles. (The Whipple model didn’t come through, but anyway I wouldn’t want to consult it the next time I ride a bicycle.) What is the reason for this presumption? Voevodsky made his reasons clear, but many other comments don’t want to be associated with his reasons. I have a very straightforward reason for dissatisfaction: I have spent much of my career proving special cases of conjectures that I have reason to believe will some day be proved in general, instantly rendering my own work futile. Unless it somehow turns out to have pointed in the direction relevant to a general proof. In most cases, I’d say the odds of this are pretty low. But this particular source of dissatisfaction is unlikely to be addressed by an improved bicycle.

  8. DT

    @Jacob,

    Out of curiosity, even if you don’t know a good definition of elementary infty-topos, do you know a good example of one? I’m not very familiar with these things, but I take it that finite sets form an elementary 1-topos. What in algebraic topology is similar to a finite set?

    A basic theorem about finite sets is that the set of functions between two finite sets is also finite. (Maybe it is on topic to mention that Voevodsky’s predecessor Ed Nelson wrote a beautiful book “Predicative Arithmetic,” premised on his radical skepticism about this theorem.) I wonder what stands in for this theorem in higher toposes.

    @mathematicswithoutapologies,

    Also not exactly on topic, sorry:

    “I have a very straightforward reason for dissatisfaction: I have spent much of my career proving special cases of conjectures that I have reason to believe will some day be proved in general, instantly rendering my own work futile.”

    Proofs of special cases of the Langlands conjectures must eventually be somewhere between two extreme kinds of obsolescence: log tables killed by the calculator, and representational painting killed by the camera. Ray Davies in 1971 still liked Rembrandt, Titian, da Vinci and Gainsborough.

    If mathematics is a hallucination, then are you dosing to get high, or because you want to find out whether those conjectures are true? In my opinion there is already no doubt about whether the conjectures are true, so we must be waiting for them to be proved, eagerly or anxiously, for another reason.

    1. mathematicswithoutapologies Post author

      “If mathematics is a hallucination, then are you dosing to get high, or because you want to find out whether those conjectures are true?”

      How about, “Because it’s my nature?” My last comment (about dissatisfaction and so on) was apparently personal but in fact it was intended to be ideal-typical, by which I mean that in my rational reconstruction of the society of mathematicians the kind of dissatisfaction I described is a constitutive element. But there’s no reason for anyone who doesn’t know me to care about my private psychological motivations; they’re not ideal-typical. So I can only answer the question with a tautology.

      On the other hand, I take this reply of Dennis Gaitsgory to a question by Ofer Gabber to be ideal-typical. (It’s around 34:30 of https://www.youtube.com/watch?v=S7f_SSZIXQQ.)

      If it was the Day of Judgment and you were God, you would ask me “What do you mean by derived algebraic geometry.” I’ll say “I don’t know and I don’t care. I just use it.”

      I have the feeling some of the people reading this blog are rooting for God not to let Dennis get away with this answer.

      1. mathematicswithoutapologies Post author

        Watching the second half of Gaitsgory’s talk — which is really an excellent and leisurely introduction to how to use derived algebraic geometry for people like me — it occurs to me that arithmetic geometers may not be interested in proof assistants because we have Ofer Gabber. Here’s an excerpt from the nautil.us article:

        Whenever [Voevodsky] is tempted to take a shortcut—to say something like, ‘The proof of this is similar to the proof of that’ or ‘This is straightforward’ … or ‘This is obvious’ …—he is forced to ask himself, ‘Would I be able to get away with such shaky reasoning in the company of my trusty proof assistant?’

        Those of us who are used to speaking with Gabber in the audience know we can’t get away with shaky reasoning.

    2. Jacob Lurie

      DT,

      “Out of curiosity, even if you don’t know a good definition of elementary infty-topos, do you know a good example of one? I’m not very familiar with these things, but I take it that finite sets form an elementary 1-topos. What in algebraic topology is similar to a finite set?”

      The category of finite sets is an example of an elementary topos, but one which does not have a “natural numbers object” (which is a hypothesis that you’ll see attached to many of the theorems in the literature on elementary topoi).

      One idea which is sometimes relevant to thinking about these things is to have some dichotomy between objects which are “small” and “large”. In ZFC, you have the dichotomy of sets (small) vs definable classes (large). I think they build this into homotopy type theory too: instead of having an “object classifier” (which I think necessarily leads to Russell-type paradoxes) there’s a “small object” classifier, along with an assumption that the collection of small objects has lots of good closure properties (for example, closure under forming function objects).

      I think one can go a long way with weaker assumptions than this: assume that there’s a dichotomy of “large vs small” and that the class of large objects has some modest closure properties (for example: given a small object X and a large object Y, there’s a large object Y^X classifying maps from X into Y), but that the class of small objects has relatively few. In this context, one could imagine “turning homotopy theory on” for the large objects but not for the small ones. The resulting theory would have models where the “large objects” are, say, spaces with countable homotopy groups, or spaces which admit a “recursive presentation” in some sense. But for “small objects” you could just take finite sets.

      1. DT

        “In this context, one could imagine “turning homotopy theory on” for the large objects but not for the small ones.”

        Just to clarify, in the world you are sketching both the circle and the classifying space of 2-element sets are large objects?

      2. Jacob Lurie

        @DT,

        I would say that I’m thinking of an axiomatization which would imply the existence of S^1 and BZ/2 as large objects but which would not guarantee that they are small. However, it would still have many models in which they were small.

        As I’m guessing you know (based on the question) the notion of “finite” in homotopy theory bifurcates: sometimes one is interested in the fact that a space can be built from finitely many cells (like S^1), and sometimes one is interested in the fact that a space has finite homotopy (like BZ/2). These finiteness conditions are orthogonal: the only spaces which possess both properties are (up to homotopy equivalence) finite and discrete.

        For purposes of having a good theory of “small” objects, allowing either of the above examples might lead to undesirable consequences. For example, if S^1 is small and small objects are closed under homotopy fiber products, then Z is also small (which you might not want). If BZ/2 is small and you assume the existence of X^Y whenever Y is small, then you can form homotopy fixed points for actions of Z/2: this is a somewhat “transcendental” construction which you might not want to allow (for example, in the setting of classical homotopy theory, it can take countable inputs and produce uncountable outputs).
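
        To spell out the first of these: the homotopy fiber product of the two basepoint inclusions into the circle is its loop space,

        \[
        {*} \times_{S^1} {*} \;\simeq\; \Omega S^1 \;\simeq\; \mathbb{Z},
        \]

        so declaring S^1 small and closing the small objects under homotopy fiber products forces the infinite discrete space Z to be small as well.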

        I should say that I only know how to get these definitions “off the ground” (meaning, to prove that the resulting theory has nice features) for n-categories where n is finite (because of the same issues that homotopy type theory has). Spaces like BZ/2 aren’t finite complexes but they can be built with finitely many cells in each dimension. As a consequence, I imagine it’s probably harmless (in the n-category setting) to add the kind of axioms which would guarantee that BZ/2 is also small.

      3. Mike Shulman

        We might conjecture other ways to produce non-Grothendieck ∞-toposes, paralleling some known ways to produce non-Grothendieck 1-toposes:

        1. Glue two Grothendieck ∞-toposes along an inaccessible left exact functor (i.e. take its comma category).
        2. More generally, consider the category of coalgebras for an inaccessible left exact comonad on a Grothendieck ∞-topos.
        3. Consider ∞-groupoids with a “uniformly continuous” action by some topological group (whatever that might mean).
        4. Consider the category of diagrams of small ∞-groupoids indexed by a large ∞-groupoid.
        5. Consider sheaves of ∞-groupoids on a large ∞-site satisfying some “local smallness” condition.
        6. Take a “free exact completion” (whatever that means) of a well-enough-behaved ∞-category.
        7. There should also be “realizability” ∞-toposes whose objects are “computable”, perhaps obtained as free exact completions.
        8. Take a filterquotient of a Grothendieck ∞-topos by a filter of subterminal objects.
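
        For item 1, the 1-categorical construction being generalized is Artin gluing, written out here only for orientation (whether an ∞-version satisfies any given definition of elementary ∞-topos is exactly the open question): given a left exact functor f between two toposes, form the comma category

        \[
          \mathrm{Gl}(f) \;=\; \mathcal{F} \downarrow f, \qquad
          \text{objects: } \bigl(A \in \mathcal{E},\ X \in \mathcal{F},\ X \to f(A)\bigr), \qquad f : \mathcal{E} \to \mathcal{F},
        \]

        which is again an elementary 1-topos; taking f inaccessible is the known way this yields non-Grothendieck examples.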

        Whether or not any of these operations do in fact produce an elementary ∞-topos will, of course, depend on exactly how “elementary ∞-topos” is defined. In particular, the hardest thing to ensure when constructing an elementary 1-topos is usually the impredicative subobject classifier; if we allow our elementary ∞-toposes to be more predicative, then this might be less of an issue. On the other hand, it could be that some constructions of subobject classifiers are hard to generalize to construct (small-)object classifiers, even predicatively.

    3. David Roberts

      @DT

      The point of such proofs is not so much the binary yes/no answer as the techniques and constructions that go into getting that answer. The accumulation of such techniques will eventually (or possibly not!) lead to a proof of the general result. Even if Fermat’s Last Theorem, and indeed the Modularity Theorem, can now be proved by other means, at least in part, Wiles’ original proof is still valuable.

  9. mathematicswithoutapologies Post author

    Let me try to ask a constructive question. In principle all of algebraic geometry (over general fields) can be reformulated in terms of commutative algebra (and homological algebra, if that’s separate) and therefore the geometry is redundant. But of course that’s not at all helpful if you want to be an algebraic geometer. Can either of these be seen as the internal language of the other?

    Or a more specific example. L. Lafforgue proved the Langlands correspondence for function fields in the language of algebraic stacks, but at the time I had the impression this was only for concision, and that everything could have been proved using schemes (as in Drinfeld’s original papers) if necessary. On the other hand, Ngo’s proof of the Fundamental Lemma was based on constructions and intuitions that do not seem natural in the setting of schemes, even though logically it’s all just commutative algebra. Is this in any way parallel to the distinction between syntax and semantics, or internal/external language, or is it all semantics?

    I pose the question in this way because, by a mysterious process on which existing philosophy provides very little insight, theories that are supposedly logically equivalent give rise to radically different intuitions. I repeat what I have already written: if intuitions leading to the proof of the Hodge conjecture could be provided by HOTT but not (or not easily) by ZFC, then I’m sure there would be a wave of defections to the former.

    (On the other hand, a first-rate arithmetic geometer confessed to me yesterday, although he was in no way ashamed of the fact, that he had no idea what was or was not contained in ZFC or the Peano axioms.)

    1. Jacob Lurie

      @Michael,

      It’s true that stacks are defined in terms of schemes, schemes are defined in terms of commutative rings, commutative rings are defined in terms of sets, and so on. So for any theorem stated or proved using these concepts, one can “compile out” (to use Mike’s terminology) their use and arrive at statements or proofs that don’t mention stacks, schemes, or commutative rings. But in this case, I think it would make many of those statements and proofs much more difficult to understand.

      I certainly wouldn’t want to argue that, in general, there isn’t a potential payoff for defining new concepts on top of old ones or having different ways to look at things. The theory of schemes is a perfect example: the notion of “affine scheme” is equivalent to the notion of “commutative ring” in some sense, but carries an intuition that you can otherwise miss (namely, that localization is an extremely powerful idea/technique in commutative algebra). My skepticism is more narrowly focused: I want to know if something similar is likely to happen in the case of homotopy type theory.

      Of course, this is a subjective question, whose answer (to paraphrase Mike) might move over time and be different for different people. For the moment, at least, I’m not convinced that learning the language of type theory is worth the effort that it would require, even though I am interested in the subject that it is used to study. But this might change: let’s say that next week someone posts a homotopy type theory paper on the arXiv that resolves the technical problems associated with defining (semi)simplicial types. This is a problem that I’ve encountered and thought about without any type theory, and I’m curious enough about the solution that I’d certainly want to know how it works. So if I were to download and look at that paper, I imagine that my experience would lie somewhere on a spectrum between two extremes:

      a) The difficulties in understanding what is written are purely notational and terminological. After scratching my head a few times wondering things like “what the hell is an inductive-recursive type?”, I’m able to translate the essential ideas of the paper into a category-theoretic language that is more familiar to me, and this doesn’t have much impact on the length or essential content of proofs.

      b) The process of reading the paper forces me to grapple with a lot of new ideas, many of which are unfamiliar and perhaps cannot even be translated into the language I’m familiar with. In the end, the relevant proofs and constructions seem substantially longer and poorly motivated when spelled out in category-theoretic terms. And perhaps it’s not even possible to see directly why or how these constructions work on the category-theoretic side without proving a difficult “homotopy type theory to category theory translation theorem” of the kind that is expected to be possible.

      My present guess is that the reality would lie much closer to a) than to b). But it’s a question which can be settled empirically, at least if homotopy type theory manages to solve this problem (or to prove interesting new theorems in algebraic topology or something like that).

      1. mathematicswithoutapologies Post author

        I agree with everything Jacob wrote; but I would set the bar higher — at a question I consider fundamental, such as the construction of a category of motives, or the Hodge conjecture, rather than a technical point. This may just be because homotopy theory is already a challenge for me. (When I run out of things to write I’ll tell the story of my three attempts to learn homotopy theory, the first two of which ended in ignominious failure, the third still in progress.)

        But then inevitably I find myself wondering whether a proof assistant, or even a formal system, can make the distinction between “technical” and “fundamental” questions. There seems to be no logical distinction. The formalist answer might involve algorithmic complexity, but I don’t think that sheds any useful light on the question. The materialist answer (often? usually?) amounts to just-so stories involving Darwin, and lions on the savannah, and maybe an elephant, or at least a mammoth. I don’t find these very satisfying either and would prefer to find something in between, and I would feel vindicated if it could be proved (in I don’t know what formal system) that the capacity to make such a distinction entails appreciation of music.

    2. Jon Awbrey

      Peirce proposed a distinction between corollarial and theorematic reasoning in mathematics that strikes me as similar to your distinction between technical and fundamental questions.

      I can’t say I have a lot of insight into how the line is drawn, but I recall a number of traditions pointing to the etymology of theorem as having to do with the observation of objects and practices whose depth of detail always escaped full accounting by any number of partial views.

  10. Pingback: Signs Of Signs • 1 | Inquiry Into Inquiry

  11. Pingback: Signs Of Signs • 2 | Inquiry Into Inquiry

  12. Pingback: Signs Of Signs • 3 | Inquiry Into Inquiry

  13. Pingback: Signs Of Signs • 4 | Inquiry Into Inquiry

  14. Pingback: Lenguajes internos | Mathematics without Apologies, by Michael Harris

  15. Pingback: Performing truth | Mathematics without Apologies, by Michael Harris
