It is said that toward the end of the nineteenth century Paul Wolfskehl, a German industrialist and amateur mathematician, was on the point of suicide. Some historians claim his depression was the result of a failed romance, others believe it was due to the onset of multiple sclerosis.
He appointed a date for his suicide and intended to shoot himself through the head at the stroke of midnight. In the hours before his planned suicide Wolfskehl visited his library and began reading about the latest research on the Last Theorem. Suddenly, he believed he could see a way of proving the theorem, and he became engrossed in exploring his newfound strategy.
After hours of algebra Wolfskehl realised that his method had reached a dead end, but the good news was that the appointed time of his suicide had passed. Despite his failure, Wolfskehl had been reminded of the beauty and elegance of number theory, and consequently he abandoned his plan to kill himself. Mathematics had renewed his desire for life. Soon after his death, the Wolfskehl Prize was announced, generating an enormous amount of publicity and introducing the problem to the general public.
Within the first year proofs were sent in, most of them from amateur problem-solvers, and all of them flawed. Even the advent of computers was of no help because, although they could be employed to perform sophisticated calculations, they could at best deal with only a finite number of cases. Soon after the Second World War computers helped to prove the theorem for all values of n up to five hundred, then one thousand, then ten thousand, and eventually up to four million. In other words, for the first four million exponents mathematicians had proved that there were no numbers that fitted the equation.
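This kind of finite verification is easy to mimic. The sketch below (a toy of my own, not the historical programs; the function name and bounds are illustrative) exhaustively checks, for one exponent n, whether any whole numbers below a bound satisfy x^n + y^n = z^n:

```python
# Toy version of the finite checks early computers performed:
# search for whole numbers with x**n + y**n == z**n.
def fermat_counterexamples(n, bound):
    hits = []
    # precompute n-th powers so membership tests are fast
    powers = {z**n: z for z in range(1, 2 * bound)}
    for x in range(1, bound):
        for y in range(x, bound):
            total = x**n + y**n
            if total in powers:
                hits.append((x, y, powers[total]))
    return hits

print(fermat_counterexamples(3, 50))  # [] -- no solutions for n = 3
print(fermat_counterexamples(2, 20))  # Pythagorean triples for n = 2
```

Each run settles only finitely many cases, which is exactly why such computations could never amount to a proof.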
This may seem to be a significant contribution toward a complete proof, but the standard of mathematical proof demands certainty for every value of n, and so on ad infinitum. Infinity is unobtainable by the mere brute force of computerised number crunching, and evidence from finitely many cases can mislead. A cautionary example is Euler's conjecture that the equation x⁴ + y⁴ + z⁴ = w⁴ has no whole-number solutions. First manual searches and then years of computer sifting failed to find a solution, and the lack of a counter-example appeared to be strong evidence in favour of the conjecture. Then Noam Elkies of Harvard University discovered the following solution: 2,682,440⁴ + 15,365,639⁴ + 18,796,760⁴ = 20,615,673⁴. In fact Elkies proved that there are infinitely many solutions to the equation. The moral of the story is that you cannot use evidence from the first million numbers to prove absolutely a conjecture about all numbers. The problem still held a special place in the hearts of number theorists, but now they viewed it in the same way that chemists thought about alchemy: both were foolish, impossible dreams from a bygone age.
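Elkies' counterexample involves enormous numbers, but verifying it is trivial for a computer; a one-line check (in Python, for illustration) of his published identity:

```python
# Direct verification of Elkies' counterexample to Euler's conjecture.
lhs = 2682440**4 + 15365639**4 + 18796760**4
rhs = 20615673**4
print(lhs == rhs)  # True
```

Finding such a solution required deep theory; verifying it requires none.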
The incident which began everything happened in post-war Japan, when Yutaka Taniyama and Goro Shimura, two young academics, decided to collaborate on the study of elliptic curves and modular forms. These entities come from opposite ends of the mathematical spectrum, and had previously been studied in isolation.
Elliptic curves, which have been studied since the time of Diophantus, concern cubic equations of the form y² = x³ + ax² + bx + c. The challenge is to identify and count the whole-number solutions to such equations, the solutions differing according to the values of a, b, and c. Modular forms are a much more modern mathematical entity, born in the nineteenth century. They are functions, not so different from functions such as sine and cosine, but modular forms are exceptional because they exhibit a high degree of symmetry. For example, the sine function is only slightly symmetrical, because 2π can be added to any number x and yet the result of the function remains unchanged, i.e. sin(x + 2π) = sin(x).
However, for modular forms the number x can be transformed in an infinite number of ways and yet the outcome of the function remains unchanged, hence they are said to be extraordinarily symmetric. I will not describe the transformations in any further detail, because they involve relatively complicated mathematics and the numbers in question, x, are so-called complex numbers, composed of real and imaginary parts. Despite the two subjects belonging to completely different areas of mathematics, Shimura and Taniyama began to suspect that elliptic curves might be related to modular forms in a fundamental way.
It seemed that the solutions for any one of the infinite number of elliptic curves could be derived from one of the infinite number of modular forms.
Each elliptic curve seemed to be a modular form in disguise. This apparent unification became known as the Shimura-Taniyama conjecture, reflecting the fact that mathematicians were confident that it was true but as yet unable to prove it. The conjecture was considered important because, if it were true, problems about elliptic curves which hitherto had been insoluble could potentially be solved by using techniques developed for modular forms, and vice versa.
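The challenge mentioned earlier for elliptic curves, finding their whole-number solutions, can at least be illustrated naively (a toy sketch assuming the standard form y² = x³ + ax² + bx + c; real number theory uses far subtler methods):

```python
import math

# Naive search for whole-number points (x, y) on the curve
# y**2 = x**3 + a*x**2 + b*x + c within a range of x values.
def integer_points(a, b, c, x_range):
    points = []
    for x in x_range:
        rhs = x**3 + a * x**2 + b * x + c
        if rhs < 0:
            continue  # y**2 cannot be negative
        y = math.isqrt(rhs)  # exact integer square root
        if y * y == rhs:
            points.append((x, y))
            if y != 0:
                points.append((x, -y))
    return points

# Example: y**2 = x**3 - 2 has the classical point (3, 5).
print(integer_points(0, 0, -2, range(-10, 11)))  # [(3, 5), (3, -5)]
```

Even this tiny example hints at the difficulty: nothing in the search tells you when to stop looking.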
Relationships between apparently different subjects are as creatively important in mathematics as they are in any discipline. The relationship hints at some underlying truth that enriches both subjects. For example, in the nineteenth century theorists and experimentalists realised that electricity and magnetism, which had previously been studied in isolation, were intimately related.
This resulted in a deeper understanding of both phenomena. Electric currents generate magnetic fields, and magnets can induce electricity in wires passing close to them. This led to the invention of dynamos and electric motors, and ultimately the discovery that light itself is the result of magnetic and electric fields oscillating in harmony.
Even though the Shimura-Taniyama conjecture could not be proved, as the decades passed it gradually became increasingly influential, and mathematicians would begin papers by assuming the conjecture and then deriving some new result. In due course many major results came to rely on the conjecture being proved, but these results could themselves only be classified as conjectures, because they were conditional on the proof of the Shimura-Taniyama conjecture.
Despite its pivotal role, few believed the conjecture would be proved this century. Then Gerhard Frey showed how a hypothetical solution to Fermat's equation would give rise to a peculiar elliptic curve, and Ribet demonstrated that this elliptic curve could not possibly be related to a modular form, and as such would defy the Shimura-Taniyama conjecture. Hence, if the conjecture were true, Frey's curve, and with it any solution to Fermat's equation, could not exist: by proving one of the most important conjectures of the twentieth century, mathematicians could solve a riddle from the seventeenth century. Some mathematicians joked that, if anything, the Shimura-Taniyama conjecture was now even further out of reach, because anything that led to a proof of the Last Theorem must, by definition, be impossible.
But for Wiles, anything that would lead to the Last Theorem was worth pursuing. He knew that this might be his only chance to realise his childhood dream and he had the audacity to attack the Shimura-Taniyama conjecture. As a graduate student at Cambridge University, he had concentrated on studying elliptic curves, and then as a professor at Princeton University he had continued his research, putting him in an ideal position for attempting a proof. As he embarked on his proof, Wiles made the extraordinary decision to conduct his research in complete secrecy. He did not want the pressure of public attention, nor did he want to risk others copying his ideas and stealing the prize.
In order not to arouse suspicion, Wiles devised a cunning ploy that would throw his colleagues off the scent. Some years earlier he had been working on a major piece of research on a particular type of elliptic curve, which he was about to publish in its entirety until the discoveries of Ribet and Frey made him change his mind. Wiles decided to publish his research bit by bit, releasing another minor paper every six months or so.
This apparent productivity would convince his colleagues that Wiles was still continuing with his usual research. For as long as he could maintain this charade Wiles could continue working on his true obsession without revealing any of his breakthroughs. For the next seven years he worked in isolation, and his colleagues were oblivious to what he was doing.
The only person who knew of his secret project was his wife; he told her during their honeymoon. The number of elliptic curves and modular forms is infinite, and the Shimura-Taniyama conjecture claimed that each elliptic curve could be matched with a modular form.

Could such a proof ever be checked by machine? Indeed, there are several different theorem provers: computer applications that can be used to formalize mathematical theories and to verify their proofs.
One first has to encode the theory into a language that the proof tool can interpret. In a second stage, one has to show the tool how the validity of the theorem is proved. In fact, the term "theorem prover" is inadequate: the tool is almost never able to prove a theorem independently. It can, however, verify the validity of the arguments a human user offers to it. Working with a theorem prover feels like convincing the prover, step by step.
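As a flavour of what this convincing looks like, here is a minimal sketch in Lean 4 (chosen purely for illustration; the provers discussed below each have their own input language). Every claim must either be checkable by computation or justified explicitly:

```lean
-- A fact the prover can verify by pure computation.
example : 2 + 2 = 4 := rfl

-- A general fact: the user must supply an explicit justification,
-- here an appeal to the library lemma Nat.add_comm.
theorem sum_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

Leave out the justification, or supply a wrong one, and the prover rejects the step.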
One distinguishes different quality criteria for theorem provers. A prover is called sound if it does not allow invalid theorems to be proved.
The intelligence of the prover is a measure of its ability to find proofs or proof steps independently. A third important criterion is user-friendliness, which covers several aspects. First, for mathematicians the language of the prover should be easy to learn, and indeed close to the mathematical language they are acquainted with. Second, it should be relatively easy to guide the prover with correct arguments towards correct conclusions. Third, when the prover is unable to accept a step, it should give a clear indication of the reason for failure. According to an estimate of J Moore, in more than half of the cases his theorem prover was asked to prove theorems that were actually invalid.
Every sound prover rejects invalid or insufficient arguments, but user-friendliness is also about how it rejects them. Every prover has a type system to distinguish different kinds of objects, e.g. numbers from sets of numbers. Rigidity and expressiveness of the type system are quality attributes of theorem provers, but what is best depends on the kind of proofs you need to verify. One needs a rich type system to express complicated matters.
Theorem provers with a rich type system typically have less intelligence: the more possibilities you have, the harder it is to choose between them. The theorem prover NQTHM has a poor type system but is very ingenious in using mathematical induction. The prover PVS has a richer type system and is much more user-friendly, but its release before the current one was unsound. The mathematics used in Wiles' proof of Fermat's Last Theorem is very complicated, and I doubt that NQTHM's type system could express it; I assume that the same holds for PVS. I do not know the provers Coq and Mizar well enough, but I think they are adequate to express the mathematics. I also expect them to be sound, but with less inbuilt intelligence than PVS. Theorem provers should not be confused with computer algebra packages like Maple and Mathematica.
Since the nineties, many mathematicians have used such tools from computer algebra to investigate and manipulate their formulas. These tools usually do not offer soundness: their manipulations require side conditions that cannot be checked by the tool. To apply such a tool in computer verification, one needs a form of integration of the tool into the prover. Indeed, I think that computer verification of Wiles' proof will be very difficult without a computer algebra tool integrated into the theorem prover.
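A toy illustration of such an unchecked side condition (a deliberately naive "rewrite", not the behaviour of any particular package): the simplification t/t → 1 is only valid under the side condition t ≠ 0, which the rewrite silently drops.

```python
# "Simplify" f(x) = x/x to g(x) = 1, silently dropping the side
# condition x != 0 -- the kind of step a prover would make explicit.
def f(x):
    return x / x

def g(x):  # the simplified form
    return 1

print(g(0))  # 1 -- the simplified form answers happily
try:
    f(0)
except ZeroDivisionError:
    print("but the original expression is undefined at x = 0")
```

A theorem prover would refuse the rewrite until the user discharges the side condition.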
In principle, there are two things that can go wrong: the encoding of the theory into the language of the prover, and unsoundness of the prover. Ultimately, computer verification of mathematical theories is no more than a considerable strengthening of the social process by which the mathematical community accepts a result with its accompanying proof. Human verification will be needed to ensure that the theory under consideration is correctly encoded into the language of the prover. This is quite possible when the language of the prover is reasonably readable and understandable.
The point of unsoundness is even less problematic. An unsound release of a theorem prover will soon be replaced, hopefully by a sound one, and it is difficult and risky to exploit an unsoundness of a prover to validate invalid theorems. There is a more fundamental way to forestall the unsoundness problem: in principle, a theorem prover should deliver a certificate for each proof. This is a file or set of files that contains the assertion that is proved, with all definitions and all axioms that are used in the proof, and with a witness of the proof that can be verified by a certifier.
A certifier is a much simpler tool than a theorem prover: it needs no intelligence, and its soundness should be much easier to verify. Integration of computer algebra into a theorem prover, however, may cause problems for the certificates or the certifiers. I regard the points raised in the previous item as minor, and therefore answer this question with yes.
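The division of labour can be sketched with a toy certificate format (entirely my own invention, not any prover's actual format): the prover may use any amount of cleverness to produce a chain of checkable steps, while the certifier merely replays them.

```python
# A toy certificate: a claimed result plus intermediate steps, each
# of which the certifier re-checks by direct evaluation.
def certify(certificate):
    for expr, value in certificate["steps"]:
        if eval(expr) != value:  # no search, no intelligence
            return False
    # the final step must establish the claimed result
    return certificate["claim"] == certificate["steps"][-1][1]

cert = {
    "claim": 70,
    "steps": [
        ("7 * 8", 56),     # the "prover's" intermediate results
        ("56 + 14", 70),
    ],
}
print(certify(cert))  # True
```

The certifier's entire trusted base is the evaluator itself, which is why its soundness is so much easier to establish than a prover's.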
Computer verification requires a single tool for the whole project. Indeed, if some theorem were proved with one theorem prover and then transferred to a different prover, the risk of applying the theorem in a slightly different context would undo the whole purpose of the project.
It is therefore important to choose a theorem prover that can support the whole project. This requires a very rich type system, and possibly also an integrated computer algebra tool. As far as I know, such a theorem prover is not yet available. On the other hand, I do think that the challenge is doable, and that it will be done in the coming fifty years.
It is definitely not within reach. The project will have to cover large parts of current pure mathematics. Its scope is comparable with the Bourbaki project, in which almost all of pure mathematics was given a new foundation by the efforts of an originally French group of mathematicians who published some twenty books under the pseudonym N. Bourbaki. The Fermat-Wiles project could be adopted by the Mizar group. This group has been active for 32 years.
They have developed many theories. I need to look into their work more seriously. Let me draw a fanciful sketch. If I were to start the project, I would first investigate the limits of the type system by trying to prove the first ten pages of [EGA1]. This amounts to a proof of Yoneda's Lemma for big categories. Yoneda's Lemma itself has been treated in Mizar and Coq, but I don't know about the applicability of these versions to big categories.
I would not start by building a new theorem prover with integrated computer algebra facilities. Instead, the next part of the project would be the treatment of complex function theory and modular forms, in particular the Weierstrass ℘-function and Eisenstein series. Once these preliminary theories have been built, we turn to elliptic curves over the rational numbers (with good and bad reduction), and the action of the Galois group on the Tate module.
All this should be doable within ten years. Still, we would only be in the foothills; the real mountaineering is yet to come. Do we need Deligne's proof of the Weil conjectures? (To be corrected, and to be continued.) At this moment, one can say that all groups working on the formalization of mathematics work in this direction, as do all groups who try to improve theorem provers and computer algebra systems.
World-wide, this is quite a large community. When I focus on the verification of Wiles' proof of Fermat's Theorem, however, the simple answer is no. As mentioned above, computer verification of mathematical proofs started in the Netherlands, in the Automath project of N. G. de Bruijn. The current Dutch work on formalizing mathematics is centered in Nijmegen; see the sites of Barendregt and Geuvers.
A related field is computer verification of computer algorithms. This did not start in the Netherlands, but significant work has been done here. Fermat's Last Theorem as such is just Arithmetic (number theory). Due to the work of Frey, Serre, and Ribet, the problem is transferred to Algebraic Geometry and Analysis (elliptic curves and modular forms).
Wiles' proof belongs to these fields. Computer verification of mathematical proofs belongs to Computational Logic, a joint offspring of Mathematical Logic and Computer Science. Computer algebra, or symbolic computing, is also a new field, somewhere in between mathematics and computer science. Since Wiles' proof uses many results from various branches of mathematics, computer verification may need semantic database technology to search for relevant theorems. The total body of knowledge to be encoded is huge. The project may therefore be regarded as an exercise in software engineering.
Indeed, one can say that it is a matter of refactoring half of pure mathematics. Information systems, databases, and software engineering belong to computer science. Yet, all in all, I tend to regard the problem discussed on this page as a mathematical one, although mathematicians are generally more interested in the development of new mathematics than in the formalization and verification of "old" mathematics. Eventually a simpler proof of Fermat's Last Theorem may be found. That is a challenge for mathematicians, not for computer scientists.
Of course, were it to happen, then we would preferably treat the simpler proof first. Yet, for me, Bergstra's problem is to formalise and verify the theorem of Wiles that every semistable elliptic curve is modular. Treatment of this theorem requires formalisation of large parts of the theories of elliptic curves and of modular curves.
Of course, mathematicians will try to smooth out Wiles' proof as much as possible, but whether this will yield sizeable simplifications remains to be seen.
Well, perhaps, but this is an even bigger challenge, and perhaps more for the field of artificial intelligence than for computer science proper. You may have a look at my home page. I did my PhD thesis in the field of Algebraic Geometry (singularities and actions of linear algebraic groups). In the eighties, I developed special-purpose computer algebra programs to classify upper triangular matrices.
While doing this I moved to Computer Science. In recent years, I have turned to the theorem prover PVS to develop and prove the semantics of a refinement theory for concurrency. So, I am not a specialist in the field of this virtual project, but I have experience in several related fields.