Ethics in Mathematics at Vassar


Slides from this colloquium talk are now available online.


Videos of the 2018 Mazur conference


The videos filmed and (barely) edited by filmmaker Oliver Ralfe were recently put online on the IHES YouTube channel.   Only about half of the events were filmed, but these include all the panel discussions (listed on the right-hand side of the poster).   Here is the complete playlist:



Manjul BHARGAVA ‘Coming soon’



Poetry Panel



Persi DIACONIS – Barry Mazur as an Applied Mathematician



Philosophy/Law/Physics Panel



Jordan ELLENBERG – Heights on Stacks



History of Science Panel



Haruzo HIDA – Galois Deformation Ring and its Base Change to a Real Quadratic Field



Glenn STEVENS – Modular Symbols, K-theory, and Eisenstein Cohomology



Alexandra SHLAPENTOKH – Defining Valuation Rings and Other Definability Problems in Number Theory



Akshay VENKATESH – Derived Hecke Algebra for Weight One Forms and Stark Units



Wei ZHANG – Selmer Groups for Rankin-Selberg L-functions of GL(2)xGL(3)

Ethical Engagement



The webpage of the Cambridge University Ethics in Mathematics Project is still under construction, but the first three Discussion Papers are already available at this page, with more on the way.  The title page of the first of these papers, by Maurice Chiodo and Piers Bursill-Hall, is reproduced above.  The authors propose a four-level sequence of increasing ethical engagement on the part of mathematicians:

Level 1: Realising there are ethical issues inherent in mathematics.

Level 2: Doing something: speaking out to other mathematicians.

Level 3: Taking a seat at the tables of power.

Level 4: Calling out the bad mathematics of others.

Although the authors hint at a preference for engagement at the highest level by as many mathematicians as possible, they are realistic about the obstacles in the near term.  Some readers of this blog may nevertheless be ready to get involved.  Reading the Discussion Papers is an obvious first step.

Genetic determinism once more obnubilates French readers

After a French friend informed a few of his American colleagues that the center-right weekly magazine Le Point had printed a translation of Ted Hill’s article in Quillette, in which he alleges that an article of his had been censored by both the Mathematical Intelligencer and the New York Journal of Mathematics on political grounds, I decided I had no alternative but to waste half an hour familiarizing myself with a few of the details.  Having done so, I am just going to reproduce the message I sent to my French friend.

But first:  the French verb obnubiler is usually translated “to obsess,” which has nothing in common with the English cognate obnubilate, which means literally to cloud.  But in fact, French dictionaries interpret obnubiler quite differently:  someone is described as obnubilé whose judgment is clouded or impeded by an obsession.  The obsessive and repeated attempts to explain differences in power and status by genetic factors are a good example of obnubilation in this sense.

Now for my message:

I really don’t want to be wasting my time on this, but I’m afraid I’m going to have to.  Here is a description of Quillette:

and here is an article by Gowers analyzing the claims in Hill’s alleged study.
There is a second post, in which Gowers goes to extreme lengths to give Hill’s theses the benefit of the doubt, while remaining unconvinced.
I’m not going to comment on the editorial process at the Intelligencer or the New York Journal of Mathematics, which is a matter of very little interest.  What I see is just one more strained effort to disguise as scientific inquiry a thoroughly artificial and simplistic framing of a complex interaction of phenomena for which one has nothing resembling a coherent model, motivated solely by the desire to demonstrate that the present distribution of power and resources has a natural basis.  All of this has dramatic political implications and the “libertarians” with whom Quillette identifies may belong to all kinds of tendencies — Dawkins used to be some kind of leftist, Pinker is a [censored!] liberal, Charles Murray is definitely right-wing — but the organized forces overlap significantly with the alt-right.
The problem with this sort of online debate is that it’s presented as intellectual censorship, while in fact it’s something else entirely.  Most of the liberals who are confused by this framing would never defend the right of creationists or climate change deniers — or holocaust deniers — to equal time, in the name of freedom of expression.  But there is a surprising openness to polemics disguised as scientific analysis when the aim is to prove that women are inferior at one thing or another.
To my mind, the best response to claims about hereditary differences in intelligence is still Gould’s The Mismeasure of Man, which illustrates the lengths to which defenders of inequality will go in attempting to prove their theses.  That was written nearly 40 years ago.  (You may remember that he mentioned that at one point IQ tests had been used as scientific proof that Jews were intellectually inferior to northern Europeans.)  Gould wrote a shorter but no less devastating review of The Bell Curve in 1994.
Unfortunately this particular vampire has not yet been nailed to its tomb once and for all.  Here is what I wrote about this in the middle of an article about the responsibility of mathematicians, for the celebration of Reuben Hersh’s 90th birthday.

I want to discuss an older story, one in which the mathematical sciences play at most a supporting role, but that I think illustrates well how philosophical confusion about the nature of mathematics can interfere with informed judgment. Here is a sentence that, syntactically at least, looks like a legitimate question to which scientific investigation can be applied:

Does mathematical talent have a genetic basis?

On the one hand the answer is obviously yes: bonobos and dolphins are undoubtedly clever but they are unable to use the binomial theorem. The question becomes problematic only when the attempt is made to measure genetic differences in mathematical talent. Then one is forced to recognize that it is not just one question innocently chosen from among all the questions that might be examined by available scientific means. It has to be seen against the background of persistent prejudices regarding the place of women and racially-defined groups in mathematics. I sympathize as much as anyone with the hope that study of the cognitive and neurological basis of mathematical activities can shed light on the meaning of mathematics — and in particular can reinforce our understanding of mathematics as a human practice — but given how little we know about the relation between mathematics and the brain, why is it urgent to establish differences between the mathematical behavior of male and female brains? The gap is so vast between whatever such studies measure and anything resembling an appreciation of the difficulties of coming to grips with the conceptual content of mathematics that what really needs to be explained is why any attention, whatsoever, is paid to these studies. Ingrained prejudice is the explanation that Occam’s razor would select. But I’ve heard it argued often enough, by people whose public behavior gives no reason to suspect them of prejudice, that it would be unscientific to refuse to examine the possibility that the highlighted question has an answer that might be politically awkward. 
It’s the numerical form of the data, I contend, and the statistical expertise brought to bear on its analysis, that provide the objectivity effect, the illusion that one’s experiment is actually measuring something objective (and that also conveniently forestalls what ought to be one’s first reaction: why has Science devoted such extensive resources to just this kind of question?).  The superficially mathematical format of the output of the experiment is a poor substitute for thought. Maybe something is being measured, but we have only the faintest idea of what it might be.

More concisely:  if the question is not scientific, then the answer won’t be scientific either.  Or even more concisely:  garbage in, garbage out.
I added some emphasis that was not, I think, in the original article.  I just want to conclude with a particularly helpful paragraph from Gould’s review of The Bell Curve.
Like so many conservative ideologues who rail against the largely bogus ogre of suffocating political correctness, Herrnstein and Murray claim that they only want a hearing for unpopular views so that truth will out. And here, for once, I agree entirely. As a card–carrying First Amendment (near) absolutist, I applaud the publication of unpopular views that some people consider dangerous. I am delighted that The Bell Curve was written–so that its errors could be exposed, for Herrnstein and Murray are right to point out the difference between public and private agendas on race, and we must struggle to make an impact on the private agendas as well. But The Bell Curve is scarcely an academic treatise in social theory and population genetics. It is a manifesto of conservative ideology; the book’s inadequate and biased treatment of data display its primary purpose—advocacy.
I think, though, that Gould would not have been so delighted to see the publication of the theses of The Bell Curve in a journal that seeks to maintain editorial standards.

Justice, finally, for Maurice Audin

Place Audin

Photo taken in Paris by Ammine, May 26, 2004.


It was announced today that French President Emmanuel Macron

would acknowledge that Audin “died under torture stemming from the system instigated while Algeria was part of France.”

More details can be found in the article published today in the Guardian, in a 3-minute video, and in a long article by the inevitable Cédric Villani.  The role of Laurent Schwartz in the story was recalled on this blog in 2015.

There is also a Place Maurice Audin in Algiers:

Place Audin, Algiers.  From Cédric Villani’s blog.

Guest post by Kevin Buzzard

Kevin Buzzard wrote to let me know that WordPress rejected his comment on an earlier post, presumably because it was too long.  I reproduce it verbatim below.  It deserves to be read closely, in its entirety.  I have some thoughts about it, and I will write about them at some point, but for now I just want to leave you with this question:  do you agree with the claim in the last line that mathematicians “will have to come to terms with” the distinction he identifies, and will the “terms” necessarily be those defined by computer scientists?

This comment will somehow sound ridiculous to mathematicians, but since learning about how to formalise mathematics in type theory my eyes have really been opened to how subtle the notion of equality is.

A few months ago I formalised the notion of a scheme in dependent type theory, and whilst this didn’t really teach me any algebraic geometry that I didn’t already know, it did teach me something about how sloppy mathematicians are. Mathematicians think of a presheaf on a topological space as a functor from the category of open sets of the space to somewhere else (sets, groups, whatever). I have a very clear model of this category in my head — the objects are open sets, and there’s at most one morphism between any two open sets, depending on whether or not one is a subset of the other (to fix ideas, let’s say there’s a morphism from U to V iff U is a subset of V, rather than the opposite category, so presheaves are contravariant functors). But actually when formalising this definition you find that mathematicians do not use this category, they use an equivalent category (whose definition I’ll explain in a second). When formalising maths on a computer, this is a big deal.

Of course mathematicians are very good *indeed* at identifying objects which are “the same to all intents and purposes, at least when it comes to what we are doing with them right now”, e.g. two groups which are canonically isomorphic or two categories which are equivalent, and conversely I would like to suggest that actually computer scientists are quite bad at doing this — they seem to me to be way behind in practice (I had terrific trouble applying a lemma about rings in an application where I “only” had rings which were canonically isomorphic to the rings in the lemma, because in the system I was using, Lean, the automation enabling me to do this sort of thing is not quite ready, although progress is being made quickly). My gut feeling is that this situation is because there are too many computer scientists and not enough mathematicians involved in the formalisation process, and that this will change. In fact one of the reasons for my current push to formalise the notion of a perfectoid space in dependent type theory (note: not homotopy type theory) is to get more mathematicians interested in this sort of thing.

But back to the equivalence. Here is the surprising thing I learnt. Let X be a topological space. The actual category mathematicians use when doing sheaf theory is this. An object is a string of symbols in whatever foundational system you’re using, which evaluates to an open set. For example, X is an open set, as is (X intersect X), as is the empty set, as is (X intersect (the empty set)). Mathematicians instantly regard things like X intersect X as equal to X, because….well…actually why are they equal? They’re equal because two sets are equal if and only if they have the same elements — this is an axiom of mathematics. But when formalising maths on a computer, keeping track of the axioms you’re using is exactly what you have to do (or more precisely, getting the computer to invoke the axioms automatically when you need them is what you have to do). So X equals X intersect X, because of a *theorem* (or in this case an axiom, which is a special case of a theorem if you like; most theorems use several axioms put together in clever ways, this is a bit of a degenerate case). Mathematicians are so used to the concept of sets behaving like the intuitive notion of “a collection of stuff” that it’s very easy to forget that X = X intersect X is *not true by definition in ZFC*, it is true by the very first axiom of ZFC, but this is still a theorem. The elements are the same by definition, but equality of the elements implying equality of the sets is a theorem.
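Buzzard’s point about X = X intersect X can be made concrete in Lean 4 (a minimal sketch of my own, not part of his comment; the mathlib import is an assumption):

```lean
import Mathlib.Data.Set.Basic

-- `X ∩ X = X` is *not* definitional: `rfl` fails here.
-- It follows from set extensionality, the very axiom Buzzard mentions,
-- which the `ext` tactic invokes explicitly.
example (α : Type) (X : Set α) : X ∩ X = X := by
  ext x                                  -- reduce set equality to membership
  exact ⟨fun h => h.1, fun h => ⟨h, h⟩⟩  -- x ∈ X ∩ X ↔ x ∈ X
```

Mathlib records this fact as `Set.inter_self`; the point is that the computer must be told to apply extensionality, whereas a mathematician applies it without noticing.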

So the computer scientist’s version of the category of open sets is something like this: objects are valid strings of characters which one can prove are equal to open subsets of X, and there’s a morphism between U and V if and only if there’s a proof that U is a subset of V. In particular, in the example above there’s a morphism from X to X intersect X, and also a morphism from X intersect X to X, because both inclusions are theorems of ZFC (let me stress again that whilst both theorems are trivial, neither one is “true by definition” — both theorems need axioms from the underlying theory, absurd though it may sound to stress it). This makes the objects isomorphic, but not equal. Equality is a subtle thing for them!

The conclusion of the above (which of course a mathematician would regard as a fuss about nothing) is that computer scientists don’t work with the mathematician’s “skeleton” category, they work with an equivalent category, and hence get a notion of a sheaf which is canonically isomorphic to, but not strictly speaking equal to (in this extremely anal sense), the mathematician’s notion.

And how did I notice this? Why do I even care? It was when trying to prove that the pushforward of a sheaf F via the identity map id : X -> X was isomorphic to the sheaf you started with. I needed to come up with an isomorphism to prove this, and my first attempt failed badly in the sense that it caused me a lot of work. In practice one needs a map from F(U) to F(id U), for U any open set, with id U the image of U under the identity map (which equals U, by a theorem, which uses an axiom, and hence which is not true by definition). My first attempt was this: “prove id U = U, deduce that F(id U) = F(U), and use the identity map”. I then had to prove that a bunch of diagrams commuted to prove that this was a morphism of sheaves, and it was a pain because I really wanted this to be a complete triviality (as it is to a mathematician). I ran this past Reid Barton and he instantly suggested that instead of using equality to map F(U) to F(id U), I use the restriction map instead, because id U is provably a subset of U so there’s a natural induced map. I was wrong to use equality! I had too quickly identified U and id U because I incorrectly thought they were equal by definition. They are actually equal because of a trivial theorem, but to a computer scientist they are equal, but not definitionally equal, subsets of X, and this makes all the difference. Switching to res, all the diagrams commuted immediately from basic properties of the restriction map and indeed the computer proved commutativity of the diagrams for me. I was stunned.
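The failed first attempt, transporting along a proved equality, can be sketched in miniature (hedged: `F` here is just a type-valued function standing in for the sheaf, not an actual sheaf formalisation):

```lean
import Mathlib.Data.Set.Basic

-- Given a proof h : U = V, the term `h ▸ x` casts an element of F U
-- to an element of F V. Every such cast later resurfaces in the
-- commutativity diagrams, which is why replacing the cast by the
-- restriction map (a genuine map, not a transport) was the better move.
example (α : Type) (F : Set α → Type) (U V : Set α)
    (h : U = V) (x : F U) : F V :=
  h ▸ x
```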

In dependent type theory, there is at most one map from U to V, depending on whether or not there is a proof that U is a subset of V — all proofs of this give the same map. In homotopy type theory, different proofs give different maps, and in this particular situation this is not what we want — we actually get the wrong category this way — so presumably the homotopy type theory people have to do something else. I am not yet convinced that homotopy type theory is the right way to do all of mathematics (it works great for some of it, for sure). I am now convinced that dependent type theory can do all “normal” mathematics (analysis, algebra, number theory, geometry, topology) so I’m sticking here, but what I have learnt in the last year is that computer scientists seem to have several (competing!) notions of equality, and it is a subtlety which mathematicians are conditioned to ignore from an early age and which they will have to come to terms with one day.
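The claim that “all proofs of this give the same map” is proof irrelevance, which in Lean 4 holds definitionally. A minimal sketch (the `Set` import from mathlib is an assumption):

```lean
import Mathlib.Data.Set.Basic

-- Proof irrelevance: `U ⊆ V` is a proposition, so any two proofs of it
-- are equal, and in Lean 4 the equality even holds by `rfl`. The map
-- F V → F U induced by an inclusion therefore cannot depend on which
-- proof of the inclusion produced it.
example (α : Type) (U V : Set α) (h₁ h₂ : U ⊆ V) : h₁ = h₂ := rfl
```

In homotopy type theory, by contrast, propositions are not automatically proof-irrelevant, which is exactly the divergence Buzzard describes.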