**Source:** Xinzhiyuan
**Guide:** Although many people do not want to admit it, it is very likely that AI will overtake human mathematicians within ten years.
A few days ago, a paper by Caltech and MIT researchers who used ChatGPT to prove mathematical theorems went viral and attracted great attention in the mathematics community.
Jim Fan, chief scientist at Nvidia, excitedly reposted it, saying that the AI mathematics Copilot has arrived, and that the next new theorem may be discovered by a fully automatic AI mathematician!
The New York Times also published an article recently asking whether mathematicians are ready: AI may catch up with or even surpass the best human mathematicians within ten years.
And Terence Tao himself reposted this article.
Siobhan Roberts attended this year's IPAM workshop on machine-assisted proofs, and then wrote this article on AI and mathematics based on her own experience and interviews.
Today, mathematicians have to face up to the latest revolutionary force — AI.
In 2019, computer scientist Christian Szegedy, a former Google employee who now works at a Bay Area startup, predicted that computer systems would match or exceed the problem-solving abilities of the best human mathematicians within a decade. Last year, he revised the target date to 2026.
Carnegie Mellon University logician Jeremy Avigad (in blue) with students at the Formal Mathematics Summer School
Akshay Venkatesh, a 2018 Fields Medal winner and a mathematician at the Institute for Advanced Study in Princeton, is currently not interested in using AI, but he is very keen to discuss AI-related topics.
In an interview last year, Venkatesh said, “I want my students to realize that this field is going to change a lot.”
And his recent attitude is: "I have no objection to the thoughtful, deliberate use of AI to assist human understanding. But I firmly believe that we need to be mindful and cautious about the way we use it."
In February of this year, the Institute for Pure and Applied Mathematics (IPAM) at UCLA held a workshop on "Machine-Assisted Proofs."
The main organizer of the workshop was Terence Tao, a mathematician at UCLA who won the Fields Medal in 2006.
He pointed out that the use of AI to assist mathematical proofs is actually a phenomenon worthy of attention.
It is only in recent years that mathematicians have begun to worry about AI's potential threats, whether to mathematical aesthetics or to mathematicians themselves.
And outstanding community members are putting these issues on the table and starting to explore how to “break the taboo.”
The organizers of the summer school, from left to right: Avigad, Patrick Massot and Heather Macbeth
For millennia, mathematicians have adapted to the latest advances in logic and reasoning. But are they ready for AI?
A 17th-century portrait of the ancient Greek mathematician Euclid in the Getty Museum in Los Angeles: dressed in rags, he holds up his treatise on geometry, the Elements
For more than 2,000 years, the text of Euclid has been the paradigm for mathematical argument and reasoning.
Carnegie Mellon University logician Jeremy Avigad says Euclid began with almost poetic "definitions" on which he built the mathematics of his time: using basic concepts, definitions, and previous theorems, each successive step "clearly follows" from the previous ones, and in this way things are proved.
Some complained that some of Euclid’s “obvious” steps were not so obvious, but Dr. Avigad said the system worked.
But by the 20th century, mathematicians were no longer willing to base mathematics on this intuitive geometric foundation.
Instead, they developed formal systems with precise symbolic representations and mechanical rules.
Eventually, under such a system, mathematics could be translated into computer code.
In 1976, the four-color theorem became the first major theorem to be proved with the help of brute force calculations.
The four-color theorem: four colors are enough to color any map so that no two adjacent regions share the same color
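The 1976 proof relied on machine-checking a huge number of configurations. As a toy illustration of that brute-force flavor (not the actual 1976 method), the sketch below searches for a proper 4-coloring of a small map represented as an adjacency graph; the map and function names here are made up for this example:

```python
from itertools import product

def is_proper_coloring(edges, coloring):
    """Check that no two adjacent regions share a color."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def four_color(edges, regions):
    """Brute-force search for a proper coloring using at most 4 colors.

    Returns a region -> color dict, or None if no 4-coloring exists
    (which, by the four-color theorem, cannot happen for a planar map).
    """
    for colors in product(range(4), repeat=len(regions)):
        coloring = dict(zip(regions, colors))
        if is_proper_coloring(edges, coloring):
            return coloring
    return None

# Example: a "wheel" map -- a hub region touching five outer regions
# that form a ring.  An odd ring needs 3 colors, so the hub forces a 4th.
regions = ["hub", "a", "b", "c", "d", "e"]
edges = ([("hub", r) for r in regions[1:]]
         + [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "a")])
```

Exhaustive search like this scales as 4^n in the number of regions, which is exactly why the real proof needed careful case reduction rather than raw enumeration.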
One such mathematical tool is the proof assistant, also known as an interactive theorem prover.
Step-by-step, mathematicians convert proofs into code, and then use software programs to check that the reasoning is correct.
The verified proofs accumulate in a living reference library that is available to others.
Dr. Avigad, director of the Hoskinson Center for Formal Mathematics, said this kind of formalization lays a foundation for today's mathematics, just as Euclid's codification of the mathematics of his era provided a foundation for it.
Recently, Lean, an open-source proof assistant system, has once again attracted a lot of attention.
Lean was developed by current Amazon computer scientist Leonardo de Moura while at Microsoft.
Lean uses automated reasoning powered by old-school "good old-fashioned AI" (GOFAI): symbolic AI inspired by logic.
So far, Lean has verified a fun theorem about turning a sphere inside out, as well as a pivotal theorem in a program to unify disparate mathematical realms.
However, the proof assistant also has shortcomings: it often complains that it does not understand the definitions, axioms, or reasoning steps a mathematician enters, which has earned it the nickname "proof complainer."
Those complaints would make research cumbersome, but the kind of functionality that provides line-by-line feedback would also make the system useful for teaching, says Heather Macbeth, a mathematician at Fordham University.
This spring, Dr. Macbeth designed a "bilingual" course: she translated every problem on the blackboard into Lean code in the lecture notes, and students submitted solutions both in Lean and in natural language.
“It gave them confidence,” Dr. Macbeth said, because they would receive instant feedback on when the proof was complete and whether each step along the way was right or wrong.
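As a minimal sketch of the kind of feedback Lean gives (Lean 4 syntax; `Nat.add_comm` is a standard library lemma), each tactic line below is checked as it is written, and a wrong step is flagged immediately:

```lean
-- A tiny Lean 4 proof: addition of natural numbers is commutative.
theorem add_comm_example (m n : Nat) : m + n = n + m := by
  -- `exact` closes the goal with a known library lemma;
  -- citing a lemma of the wrong type would be rejected on this line.
  exact Nat.add_comm m n
```

This line-by-line checking is what gives students the instant right/wrong feedback described above.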
And after attending a workshop, mathematician Emily Riehl of Johns Hopkins University gave it a go.
Emily Riehl, a mathematician at Johns Hopkins University, has been using an experimental proof assistant
She used an experimental proof assistant to re-prove theorems from her previously published articles.
After using it, she was shocked. “I now understand the proof process much deeper than I ever did. My thinking is so clear that I can explain it to the dumbest computer.”
A group project that students participated in during the Mathematical Formalization Summer School
Another tool computer scientists often use on mathematical problems is "brute-force" reasoning, though the mathematics community often scoffs at the method.
However, AI scientists don’t seem to care much about the ideas of mathematicians, and continue to use their own familiar methods to capture the “highlands” of mathematics.
In 2016, Marijn Heule, a computer scientist at Carnegie Mellon University, used a SAT solver to settle the "Boolean Pythagorean triples problem," producing a 200-terabyte proof file.
Nature reported that the 200-terabyte proof was the largest in history, and asked: is solving problems with such tools really mathematics?
But in the view of Heule, the computer scientist and author of the paper, "this approach is needed to solve problems that are beyond human capabilities."
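Heule's actual proof used a highly engineered SAT encoding and massive parallel search. As a much smaller illustration of the underlying question, the Python sketch below brute-forces whether {1, ..., n} can be split into two sets with no Pythagorean triple entirely in one set (the function names are made up here):

```python
from itertools import product

def pythagorean_triples(n):
    """All triples (a, b, c) with a < b < c <= n and a^2 + b^2 = c^2."""
    return [(a, b, c)
            for c in range(1, n + 1)
            for b in range(1, c)
            for a in range(1, b)
            if a * a + b * b == c * c]

def exists_two_coloring(n):
    """Can {1..n} be 2-colored so that no triple is monochromatic?

    Pure brute force over all 2^n colorings, so it is only feasible
    for small n; Heule's SAT-based result settled the real question,
    showing the answer is yes up to n = 7824 and no from 7825 on.
    """
    triples = pythagorean_triples(n)
    for coloring in product((0, 1), repeat=n):
        if all(not (coloring[a - 1] == coloring[b - 1] == coloring[c - 1])
               for a, b, c in triples):
            return True
    return False
```

The exponential blow-up of this naive search is precisely why a SAT solver, with its learned-clause pruning, was necessary at the scale of the actual problem.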
Similarly, after defeating humans at chess and Go (AlphaZero), DeepMind designed machine learning algorithms to solve protein folding (AlphaFold).
DeepMind published a paper arguing that the way they achieved these results was by using AI to guide human intuition to advance mathematics.
Yuhuai Wu, a former Google computer scientist who is now starting a business in the Bay Area, also said that the direction of his business is to use machine learning to solve mathematical problems.
His current project, Minerva, is a large language model fine-tuned to solve mathematical problems.
In the future, he hopes the project will grow into an “automated mathematician” who can “solve math problems independently” as a general research assistant.
On the other hand, many mathematicians who have worked closely with AI technology have raised a concern about its role in mathematical research: AI can often "directly" help mathematicians "find" the answers they want, even though neither the mathematicians nor the AI experts have any idea how the AI found them.
Geordie Williamson, a mathematician who has collaborated with DeepMind, shared one such experience: during their collaboration, a neural network predicted, with extreme accuracy, the values of a quantity he considered very important.
He tried hard to understand how the AI did it, because that could form the basis of a theorem. But in the end he could not grasp its logic, and neither could the people at DeepMind.
Like Euclid, neural networks somehow find the truth, but the logical reasons are hard to fathom.
Reasoning, from the mathematician's point of view, is the essence of mathematics, yet it is the missing piece of the machine-learning puzzle.
The tech world would be perfectly content with a black box that provides a solution to a problem most of the time.
AI is such a black box.
But mathematicians are not satisfied with this situation.
In mathematicians' view, trying to understand how neural networks work raises fascinating mathematical questions.
And solving these problems will allow mathematicians to “make meaningful contributions to the world.”
What would we do if the world was flooded with AI-generated hypotheses?
Netizens posed some pointed questions in response. One doubted that generating new hypotheses or formulas is really the hard first step for an AI system, since DeepMind has already done this in knot theory.
I wonder how the community will respond to the flood of new assumptions that AI outputs. It’s one thing to check a logical argument created by an AI; it’s another to be overwhelmed by millions of “oh, this might be true” suggestions. I don’t think our existing review and publication systems are ready for this.
How does this affect people’s trust in mathematics?
Some argue that machines will not be able to do mathematics anytime soon, but that they could change how research is done, much as machine learning models and computational power have changed the field of biology.
One netizen said: "I have been thinking about this problem since AlphaDev. A program that can build sorting algorithms can also prove mathematical theorems using automatic proof checkers. The real question is whether it can be used to prove something important, not just trivial discoveries."
However, some netizens are still skeptical about whether GPT-like tools can really discover valuable truths.
Some netizens also pointed out that humans and AI may differ in how they understand and attend to mathematics: AI proves what is true, while humans focus on why it is true.