I’ve been saying for some time that most articles about controversies regarding the AI future of mathematics focus primarily on two questions — “Is it good or bad?” and “Will it work or not?” — while neglecting to reflect on the presuppositions that underlie these questions — what is the good of mathematics, and work to what end? — not to mention what should always be the first question to be addressed to any significant social development — cui bono, in whose interest? Within the limitations imposed by this conventional frame of reference, last week’s article in Quanta by Stephen Ornes was a good one. It provided a clear introduction to the subject matter for the well-informed amateurs as well as professionals who read Quanta, recalled the history of attempts to automate proofs — with a helpful reminder that these attempts were originally motivated by the need for computer scientists to verify the reliability of their programs, a theme treated in depth by Donald MacKenzie in his classic Mechanizing Proof — and surveyed some of the most ambitious contemporary projects and their applications within mathematics.
When I agreed to be interviewed for the article it was in the hope of nudging the discussion in the direction of the questions that are typically neglected. In responding to Ornes’s first message I made only one request:
If you decide you do want to use one or more quotations from me, I would want at least one of them to address this point: that what pure mathematicians find valuable about what we do is precisely that it provides a kind of understanding whose value is not determined by the logic of the market.
Artificial Intelligence (AI) has advanced tremendously and today promises personalized healthcare; enhanced national security; improved transportation; and more effective education, to name just a few benefits. Increased computing power, the availability of large datasets and streaming data, and algorithmic advances in machine learning (ML) have made it possible for AI research and development to create new sectors of the economy and revitalize industries. Continued advancement, enabled by sustained federal investment and channeled toward issues of national importance, holds the potential for further economic impact and quality-of-life improvements.

The 2019 update to the National Artificial Intelligence Research and Development Strategic Plan [1], informed by visioning activities in the scientific community as well as interaction with the public, identifies as its first strategic objective the need to make long-term investments in AI research in areas with the potential for long-term payoffs in AI. The President’s Council of Advisors for Science and Technology has published Recommendations for Strengthening American Leadership in Industries of the Future [2], including AI, and calls for new and sustained research in AI to drive science and technology progress. The National AI Research Institutes program enables longer-term research and U.S. leadership in AI through the creation of AI Research Institutes.
This program is a joint government effort between the National Science Foundation (NSF), U.S. Department of Agriculture (USDA) National Institute of Food and Agriculture (NIFA), U.S. Department of Homeland Security (DHS) Science & Technology Directorate (S&T), and the U.S. Department of Transportation (DOT) Federal Highway Administration (FHWA). New to the program this year are contributions from partners in U.S. industry who share in the government’s goal to advance national competitiveness through National AI Research Institutes. This year’s industry partners are Accenture, Amazon, Google, and Intel Corporation.
This program solicitation invites proposals for full institutes that have a principal focus in one or more of the following themes, detailed in the Program Description:
• Theme 1: Human-AI Interaction and Collaboration
• Theme 2: AI Institute for Advances in Optimization
• Theme 3: AI and Advanced Cyberinfrastructure
• Theme 4: Advances in AI and Computer and Network Systems
• Theme 5: AI Institute in Dynamic Systems
• Theme 6: AI-Augmented Learning
• Theme 7: AI to Advance Biology
• Theme 8: AI-Driven Innovation in Agriculture and the Food System
Even if computers understand, they don’t understand in a human way.
Even if we do want to attribute understanding to computers, it won’t be the kind of understanding we attribute to human beings.
Even if we do want to attribute understanding to computers, it won’t be the kind of understanding we currently attribute to human beings.
With this reorientation from knowledge to power, it is no longer enough to automate information flows about us; the goal now is to automate us.
(S. Zuboff, The Age of Surveillance Capitalism, p. 15.) On Zuboff’s telling, Google, naturally, was the pioneer in this process.
Re: “Even if computers understand, they don’t understand in a human way.”
I like the simple-mindedness of that. (A simple mind is one with no proper normal submind.)
The answer to the question in Quanta’s article’s title is: very, very far.

As is often the case with modern technology, there are perpetual fake promises, mainly for reasons of ego and the search for funding. Mathematics is not just proof; it is also definitions and theories. Give me a computer that writes down the Langlands program or homotopy type theory from scratch and I’ll buy it. Tim Gowers seems not to know how far we are from a general AI. That makes the usual Quanta articles on these topics close to science fiction, though useful for AI labs’ funding.
I consider the title of the article to be a good question — the best question is the one to which the appropriate reply is another question.
The fake promises may have something to do with ego, but I find that most of the time they are more like a marketing brochure for venture capitalists: simply an attempt to get money. It is as strong a proof as any that science and R&D need to be completely and robustly publicly funded. The very notion that one needs to appeal to private capital in order to advance science and technology needs to die a very hard death.
I certainly agree — provided there is robust democratic oversight of public funding…
Fifty-plus years of roundhouse discussions about AI + ATP leave me with nothing new to say about it all, so maybe I’ll revisit my earliest thoughts on the subject. I used to like Ashby’s pre-AI notion of IA = Intelligence Amplification and I often used the catchword Intelliscope to sum up my sense of the project worth pursuing. We invented the telescope on analogy with the human eye by studying the anatomy and function of our naturally evolved organ of vision and gradually at first, astronomically over time extending its power to augment and correct our natural faculty and frailty. I think everyone gets the drift of that … It doesn’t mean we have to become cyborgs in any dystopian way — if we do it will be reckoned to some other factor in our erroneous essence or accidents of history.
Would you call Google an accident of history?
I see the warp driving Googly Eyes towards Panopticon … if it goes that way it will be more like the angry ape in our glassy essence than anything else external.
About your question “I don’t know what was in the back of Google’s mind when they decided to sponsor research into automating mathematical proof”: you can listen to Christian Szegedy’s answer at https://youtu.be/p_UXra-_ORQ?t=865. They see it as a nice intermediate goal towards general AI.
And what, I wonder, does Google have in mind for general AI? A patent or two, perhaps?
This is my chance to put in a plug for Rebooting AI by Gary Marcus and Ernest Davis. From the blurb:
Mathematical proof is not mentioned as an intermediate goal to anything in Rebooting AI. They also don’t address the vexed question of how to guarantee the “we” rather than “they” will be doing the creating. But the book, like the AI the authors hope to see, is full of common sense and deep understanding!
Davis, you may remember, panned MWA in SIAM News, but reconsidered some of his criticisms afterwards.
re: general AI
https://deepmind.com/about
re: patents, here are a few recent filings
ACTION SELECTION FOR REINFORCEMENT LEARNING USING NEURAL NETWORKS https://www.freepatentsonline.com/y2020/0265313.html
TRAINING NEURAL NETWORKS USING A PRIORITIZED EXPERIENCE MEMORY https://www.freepatentsonline.com/y2020/0265312.html
Generating Target Sequences From Input Sequences Using Partial Conditioning https://www.freepatentsonline.com/y2020/0251099.html
Image processing with recurrent attention https://www.freepatentsonline.com/10748041.html
MULTI-AGENT REINFORCEMENT LEARNING WITH MATCHMAKING POLICIES https://www.freepatentsonline.com/y2020/0244707.html
LEARNING VISUAL CONCEPTS USING NEURAL NETWORKS
TRAINING DISTILLED MACHINE LEARNING MODELS
Training action selection neural networks using a differentiable credit function
Recurrent neural networks for online sequence generation
ENVIRONMENT NAVIGATION USING REINFORCEMENT LEARNING
ACTION SELECTION NEURAL NETWORK TRAINING USING IMITATION LEARNING IN LATENT SPACE
SAMPLING FROM A GENERATOR NEURAL NETWORK USING A DISCRIMINATOR NEURAL NETWORK
I get notifications for these by subscribing to researchers on Google Scholar.
re: Marcus https://webcache.googleusercontent.com/search?q=cache:ozQw-KAiHqgJ:https://twitter.com/ChrSzegedy/status/1289402060018864130+&cd=2&hl=en&ct=clnk&gl=us
I sympathize with his view (e.g. in The Algebraic Mind) about the necessity of symbols (without which math is impossible, as every mathematician knows), but I don’t see why deep learning couldn’t conquer this, and am disappointed with his current AI/GPT-3 denialist attitude.
Michael, AI is going to help our understanding. It will quickly find counterexamples to over-optimistic conjectures for us. It will enable us to check that diagrams commute. It will enhance us, just as programs such as Stockfish enhance the chess community. And as the machines get better, they will just enhance us more.
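The counterexample-hunting already works at a modest scale today, with no learning at all, just decision procedures. A sketch in Lean 3 with mathlib (the system discussed below; exact tactic behavior may vary across versions):

```lean
import tactic

-- Euler's polynomial n^2 + n + 41 takes prime values for
-- n = 0, 1, …, 39, so a hasty conjecture would call it always prime.
-- The machine refutes the conjecture at n = 40, where the value is
-- 40^2 + 40 + 41 = 1681 = 41 * 41: `norm_num` checks compositeness
-- by computation.
example : ¬ nat.prime 1681 :=
by norm_num
```

Nothing deep is happening here, but it is exactly the shape of "checking an over-optimistic conjecture": state it, and let the machine do the arithmetic.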
Is that Kevin? You’re answering the first of the two inevitable questions: good or bad? That’s like the question: are you for or against chemotherapy? Maybe that sounds too hostile. How about: are you for or against surgery? The answer can only be: it depends on the circumstances and it depends on who owns the knife.
My sole reason for participating in this discussion is to stress that there are other much more interesting questions. Last week’s experience with Quanta illustrates the difficulties of incorporating them in a journalistic setting, which is structured to handle only facile yes-or-no questions, not questions like “what is understanding?”

This is not a timeless category, not even for mathematicians. In fact, the question “what is a mathematician?” — which you’d think needs to be answered before you can begin to answer “what does it mean for a mathematician to understand?” — has had radically different answers at different times. During the period that produced mathematicians like you and me, the answer was conditioned by a variety of material factors, like the massive expansion of university education following WWII, and cultural norms, like the development of uniform practices of mathematical proof.

The first thing you learn when you look into the prehistory of automated theorem proving is that the practice of “proof” was very different in the middle of the 20th century from what it had been in the middle of the 19th century, just as the connotations of “proof” were different before and after Principia Mathematica and before and after Gödel’s theorems. One reason philosophy of mathematics has scaled back its ambitions since Russell is that it is unable to explain in what ways, if any, “our understanding” has remained constant through these changes in the material and conventional aspects of mathematical practice.
The word enhance is a good one because it conjures up a future of robocop mathematicians endowed with turbocharged faculties of understanding, judgment, and reason. I suspect Google analytics enhance my ability to track the frequency of use of the word “enhance” in various connections, perhaps even to identify peak users. The word also has a technical meaning in contemporary category theory, and it would be interesting to analyze the parallels with Silicon Valley’s use of the term (as in: will my AI shell help me enhance Langlands duality to an equivalence of infinity-categories?). When you ask whether Bourbaki is enhanced relative to Riemann, say, you are asking whether and how material conditions and cultural norms transformed the meaning of “our understanding” over the course of a century.
The analogous question with respect to automated proof is addressed, explicitly, in Stephanie Dick’s 2015 dissertation, starting with its subtitle: “(Re)configuring Minds, Proof, and Computing in the Postwar United States.” The introduction directly addresses the prehistory of the Xena Project and all real and potential similar projects:
(my emphasis)
Will the robomathematicians who are our evolutionary successors remember us, even dimly, and identify their “understanding” with ours? That question, and the one about who owns the knife, are the ones that I would like to have seen discussed in the Quanta article.
Sorry — yes, it is Kevin. I hadn’t realised I would be encrypted. I am enhanced by my Swiss army knife and my phone, and I fully intend to enhance myself with these machines too, the moment I can teach them algebraic geometry.
You’re going to teach algebraic geometry to your Swiss army knife? I would never object. The phone more obviously illustrates my point. Look around the next time you take the tube and ask yourself whether the transformation the phone has facilitated, that some people have compared to programming the consumers, doesn’t involve diminishing as well as enhancing.
I’m personally sympathetic to the idea of regulating Google and social media more generally as public utilities, but of course the regulators would have to be protected against interference from an unprincipled administration, and of course the meaning of “unprincipled” is also contested. Your Swiss army knife is your own business (as long as it doesn’t have internet connectivity, of course).
Kevin, are you still there? I have some more thoughts about “enhancement.”
https://arxiv.org/abs/1609.03543 Tai-Danae Bradley, who writes the blog math3ma.com, now has a job at ‘x company’ training the computers to develop deep self-reflection properties and empathy for the ‘computer in the mirror’. My view is that prime numbers have elementary feelings; composites don’t, they are just mashups.

I’d like to teach algebraic geometry, set theory, or category theory to my smartphone. I had to get a new one yesterday (I returned the one I bought the day before since it was too small, so I got a different one for $10 more; they just added it to my old bill). But I don’t have any money to turn it on. (It reminds me of the people who went to the Amazon jungle and sold indigenous people refrigerators, washing machines, etc., without realizing they needed a power source; I guess those are just art pieces now.)
Michael, I am still here. I think you’re being a Luddite! When I was commuting on the tube 20 years ago people weren’t glued to their phones, but they were glued to the freebie papers, which were full of ads and celebrity gossip. What’s the difference? It’s just different companies controlling the people now. People are just using different devices to get entertainment, and with phones the entertainment is more interactive. I sit on the tube and write Lean code and I definitely regard this as an enhancement. We have schemes, Michael. Schemes, done properly, in mathlib. I remember 14 years ago when Johan de Jong told me he wanted to start the Stacks project and it seemed like an immense undertaking. Now I look 14 years into the future and imagine what it will be like when Lean knows all that stuff. Can you really argue against teaching that material to a machine? We have no idea what it will be able to do with it.
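For readers who haven’t seen it, “teaching material to a machine” concretely means writing definitions and lemmas the proof assistant can manipulate. A toy illustration in Lean 3 (everything here is invented for illustration; mathlib’s actual treatment of schemes is vastly more elaborate):

```lean
-- A toy definition "taught" to the machine: a function is constant
-- if it takes the same value everywhere.
def is_constant {X Y : Type*} (f : X → Y) : Prop :=
∀ x₁ x₂ : X, f x₁ = f x₂

-- Once the definition exists, facts about it accumulate and can be
-- checked by the kernel: composing with any map preserves constancy.
lemma is_constant.comp {X Y Z : Type*} {f : X → Y} (g : Y → Z)
  (hf : is_constant f) : is_constant (g ∘ f) :=
λ x₁ x₂, congr_arg g (hf x₁ x₂)
```

The Stacks-project-in-Lean vision is this same activity iterated over thousands of definitions and tens of thousands of lemmas.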
Kevin Buzzard AKA
There are two narratives about Luddism: the one about “refusal of inevitable progress” (which is somehow conceived as a neutral process independent of human agency) and the one about “resistance to dispossession.” The former, to quote E. P. Thompson’s The Making of the English Working Class, is about
Jacobin explains Thompson’s version:
I have no problem identifying with Luddism as seen by Thompson. But I was thinking again about the word “enhance.” Since I know music is important to you, I wonder whether you see the progression live performance → LPs → CDs → mp3s/Spotify/YouTube as an example of the kind of enhancement you have in mind for mathematics. Each stage in the progression brings an expansion of the audience, a deterioration in sound quality (by this point my eardrums are shot, but I’m told this is the case), and, most importantly, a concentration of the profits.
Technology transforms the user. We have adapted to low resolution mp3s and expropriation of our private data by Google and Facebook. (I would guess that the tabloids on the tube 20 years ago allowed greater scope for individual imagination than Facebook’s Newsfeed, if only because the latter is designed and constantly adjusted in order to maximize behavior that is profitable to the owner of the platform.) From an article about the European Court of Justice ruling on the right to be forgotten:
I use Google’s search engine every day, and I would continue to use it if it were a public utility and if it didn’t collect my behavioral data. I do not trust Google to develop general AI in the general interest.
There is a debate to be had about the role of google etc in society, but independent of that there is the question of whether we would like a gigantic database of algebro-geometric facts which we can query in a far more powerful way than google can query a database of web pages — we will be able to ask SMT solvers or hammers if they can prove our result by combining results in the database. The technology is there, and the more mathematicians that get involved the quicker we will be able to get a useful working tool into the hands of young researchers. Whether they choose to abuse this tool and stop learning algebraic geometry “properly” is an interesting question, but I think it’s right to give them the choice. Nobody knows how to do 2-descent on an elliptic curve any more apart from people who are my age or older, because if you want to find all the rational points on an explicit elliptic curve you just type it into a machine. I remember Richard Pinch once bemoaning this, but I think of it as progress. I multiply numbers together on my phone instead on a piece of paper. It’s all the same thing. It makes us faster. It makes us better.
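The “query the database” idea is not hypothetical; a primitive version already exists in mathlib’s search tactics. A hedged sketch in Lean 3 (tactic names as in mathlib circa 2020; output depends on the library version):

```lean
import tactic

-- Instead of remembering the name of a lemma, ask the machine to
-- find a proof from everything already in the library.
-- `library_search` scans mathlib for a lemma that closes the goal
-- (here it reports a proof along the lines of `mul_comm a b`).
example (a b : ℕ) : a * b = b * a :=
by library_search
```

Hammers and SMT back-ends generalize this from single-lemma lookup to combining several results from the database, which is exactly the query interface described above.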
To enliven the walks I’ve been taking for the past 6 months along the same 50 or so streets within walking distance of my apartment I started bringing my cellphone and headset along in order to retreat into my own virtual space. At first I was listening to mp3s and recordings from YouTube; then I tuned in to podcasts on France Culture to keep up with my other life; and most recently I’ve been listening to an audiobook version of Shoshana Zuboff’s The Age of Surveillance Capitalism, which I e-borrowed from the New York Public Library. Have you read it? It’s a nightmare in the format of pure poetry. Every idea I’ve had on Silicon Valley industrial strategy and marketing rhetoric is developed there to all its logical conclusions, with full documentation. Although the words “theorem” and “Luddite” seem not to be anywhere in the book, one finds “enhance” on 23 pages and “progress” on 40 pages. Its 717 pages contain at least a paraphrase of every sentence I would write in connection with this discussion, and this evening I found myself wishing I had an AI that could just extract the best quotations in response to any comment on this thread — it would make me even happier than the AI that would automate my searches in the Stacks project just by reading the statements of my theorems.
I have no objection to your vision of queries and I’m not in the habit of bemoaning social developments, as opposed to the uncontrolled accumulation of wealth and power, to which the proper response is organizing rather than bemoaning. On the other hand, historians of science have taught me to be suspicious of words like “progress” or “better” — even when they are not being deployed by contemporary robber barons to justify their uncontrolled accumulation of wealth and power. The values of our profession evolve continuously in response to social and political factors, with technology playing a relatively small direct role. My productivity was multiplied by a factor of 2 or 3 as a result of word processing and instantaneous e-mail, but since I come from a union family I’m reluctant to call that “progress.” I’d just point out that the logic of the phenomena that Zuboff analyzes tends not only to concentration of power over future developments but also to forcing increasing ranges of behavior, including apparently autonomous scientific research, along tracks that the concentrations of power can exploit for extraction of additional “behavioral surplus,” to use Zuboff’s expression. So I see “the role of google etc” as actually central to the concerns expressed in the rest of your comment and not incidental.
Continuing my listening to Zuboff’s book as I walk back and forth beside the Hudson, I heard this analysis of the conflation of aggressive technological innovation with “progress”:
(Chapter 7, VIII)
I’ve tried to make this point several times, but not nearly as effectively.
Kevin and Patrick: if you are willing to sign a pledge not to use or develop technology that is being used or can reasonably be expected to be used to increase inequality and “enhance” surveillance, I’ll work with specialists to formulate a pledge that is adapted to proof verification. It might take a few months.
I don’t think it’s possible to guarantee that some random technology being developed can never be used for ill. But I’m pretty sure that developing an algebraic geometry database in a system capable of reasoning is not going to be high on the list of tools being used by the bad guys…
In real life, as opposed to automated proof verification, there are no guarantees. “Reasonably expected” means according to your best judgment. In other words, you get to change your mind about the technology if you see it’s being abused. Can I take that as a “yes”?
What if the NSA found some devious way to use the Langlands Philosophy? Would you stop thinking about it? Isn’t this equally likely?
The NSA has probably tried, and GCHQ must still be trying; otherwise why would they be supporting the London-Paris Number Theory Seminar?
I could take refuge in the fine print and argue that the Langlands program is only treated as “technology” when a speaker wants to emphasize a contrast with “ideas” — although I suppose general AI will blur the boundary between technology and concepts. Or I could insist on the “reasonable expectation” and point to Patrick’s comment on Google’s interest as evidence that is lacking in connection with the Langlands program.
The conclusion of my Times Higher Education Supplement article answers your question in one way:
As far as pledges are concerned, much of the work has already been done for us, as I pointed out on this blog in 2015. The 1984 Uppsala Code of Ethics for Scientists contains a four-part pledge, whose fourth part answers your question.
The Uppsala Code was published in 1984 and needs to be updated to emphasize the implications of information technology. I would like to see a version of this pledge adopted by professional associations of mathematicians. Who could object?
I should think that people who have forgotten their immense privilege and just want a quiet life unencumbered by having to worry about political issues related to their work/funding might object 🙂
The ones who want a quiet life are in for a big surprise!
“Nietzsche saw clearly, having lived it himself, what makes up the mystery of a philosopher’s life. The philosopher takes hold of the ascetic virtues (humility, poverty, chastity) in order to make them serve entirely particular, unheard-of ends, in truth hardly ascetic at all. He makes them the expression of his singularity. For him these are not moral ends, nor religious means to another life, but rather the ‘effects’ of philosophy itself. For the philosopher there is no other life at all. Humility, poverty, and chastity become, from then on, the effects of a particularly rich and superabundant life, a life powerful enough to have conquered thought and subordinated every other instinct to itself (what Spinoza calls Nature): a life no longer lived on the basis of need, in terms of means and ends, but on the basis of a production, a productivity, a potency, in terms of causes and effects. Humility, poverty, chastity: this is the philosopher’s own way of being a Grand Vivant, and of making his own body the temple of a cause that is too proud, too rich, too sensual.”
Replace “philosopher” by “mathematician” and it gives a purpose to mathematical activity that does not refer to the market. Thanks for your articles! And your book!