On “the sequencing of the mathematical genome”
Mathematics is funnier than it gets credit for, and the best laugh I ever had about math involved a college friend of mine, Mark, and a course so intimidating he almost quit his mathematics major after hearing the name. “Advanced Calculus—A rigorous approach,” it was called, a title that betrayed a martinet attitude. Whereas your average multivariable calc class was flabby and slack-ass, here you’d finally get some goddamn discipline, boy. You will throw up.
Word around the chalkboard was that every homework problem in “Advanced Calculus—A rigorous approach” required six sheets of paper, because you wrote out every nitpicky step and made every assumption explicit, no matter how obvious—not even arithmetic was taken for granted. For some reason I took endless delight in terrorizing Mark by pointing out all the horrid, spindly-legged theorems in other books that he would have to dissect in “Advanced Calculus—A rigorous approach,” predicting the logic would drive him actually mad. Every time I mentioned the class, I used its full draconian title, “Advanced Calculus—A rigorous approach,” and fear of it drove him to the brink both of hyperventilation and of dropping his major, which probably would have meant his dropping out of college.
Mark was spared by an administrative overhaul of the department, so he never took the class. For my part, I’d almost forgotten the whole incident until I came across a curious bundle of papers in a recent issue of the Notices of the American Mathematical Society—four treatises on the future of mathematical proofs, and specifically on how computers were going to take over a large share of the work of proving theorems. However unpromising that topic sounds, it soon had my thoughts dilating, like a marijuana smoker’s, into all sorts of wild conjectures, because it turns out (1) “Advanced Calculus—A rigorous approach” was flabby and slack-ass compared to what’s coming in formal mathematics, and (2) the idea of mathematical beauty might soon be extinct.
First, a few lines of history (cribbed from the papers, natch): The first great revolution in math came from Pythagoras, Euclid, and the rest of the Greeks, who introduced the concept of proofs. You saw examples of this in your high-school geometry class. Then, in the 1800s, came the next big thing: rigor. Oddly, rigor in math is most easily recognized as a feeling—the scrotum-shrinking embarrassment that even people really, really good at college math feel upon realizing that some people are way the hell smarter. Namely, the people who do original work in rigorous mathematics.
The next and latest revolution in math was the subject of the Notices papers—formalization. Formalization means tracing math back to fundamental axioms, the kind of migrainous set theory that takes pages to explicate just why two is the successor of one. It turns out there’s currently a movement in mathematics—one that, the authors of the quartet claim, most mathematicians don’t even realize is happening, or at least won’t admit to—to make all mathematical proofs fully formal. To basically take a six-page homework problem from “Advanced Calculus—A rigorous approach” and apply even more ruthless dissection to every single line of that problem, expanding the amount of rigor geometrically if not exponentially.
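If you want to see what that bookkeeping looks like when a machine is the one reading it, here is a tiny sketch of the “two is the successor of one” business in the Lean proof assistant. Lean is my own choice for the illustration, not a system the Notices papers single out, but any modern proof checker plays the same game.

```lean
-- Lean 4. The natural numbers are built from exactly two ingredients:
-- `Nat.zero` and `Nat.succ` ("the successor of"). The numeral 2 is,
-- by definition, the successor of 1, and the checker accepts the claim
-- as a pure unfolding of definitions (`rfl`).
example : 2 = Nat.succ 1 := rfl
```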
Why? Because when you tease apart every atom of every line, you can actually convert very hard mathematical concepts into a series of very simple steps. The steps might seem stunted and overly obvious and useless, but, lo and behold, they add up to something in the end. It’s like someone explaining how a car engine works by starting with the theory of bolts screwing onto threads, going into incredibly arcane detail about it, and then repeating that nut-and-bolt explanation every time you come across another screw. You’d get pissed off, but you’d also probably understand how a car engine works if he kept breaking everything down to concepts that simple. That’s formal mathematics. And once you’ve checked all the ticky-tack steps between lines of a proof, you can be pretty darn sure it’s correct. One paper’s author called this “the sequencing of the mathematical genome.”
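And here, in the same illustrative spirit, is what “not even arithmetic taken for granted” looks like: a proof that 1 + 1 = 2 in which every step cites the exact rule that licenses it. The two rules named below (Nat.add_succ and Nat.add_zero) are standard pieces of Lean’s library, not anything drawn from the Notices papers.

```lean
-- A fully explicit proof that 1 + 1 = 2, written with the bare Peano
-- constructors, so 1 is `Nat.succ 0` and 2 is `Nat.succ (Nat.succ 0)`.
-- Each `rw` step invokes exactly one rule, and the checker verifies it.
example : Nat.succ 0 + Nat.succ 0 = Nat.succ (Nat.succ 0) := by
  rw [Nat.add_succ]   -- rule: n + succ m = succ (n + m)
  rw [Nat.add_zero]   -- rule: n + 0 = n
```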
Computers enter the scene because there are so unbelievably many lines to check—in one admittedly extreme formalization scheme, it’s estimated it would take a trillion symbols to define the concept “1”—that only computers can even think about where to start. That means that computers would really, for the first time in math history, be running the show. There are all sorts of consequences for mathematicians here, including the mundane consequence of possibly turning over peer review to proof-checking software. (This would at least avoid embarrassments like the 1993 announcement that Andrew Wiles had cracked Fermat’s legendary Last Theorem. Except he hadn’t. He’d left a gap—a gap that, in a fully formalized proof at least, would have been obvious to a computer. It took another year of work to nail the sucker down.)
But of all the issues surrounding computerized proofs, I’d like to focus on beauty.
There’s general consensus that really genius-level mathematics is beautiful—purely and uncorruptedly beautiful, the way colored light is, or angels. More particularly, it’s regarded as beautiful in a way that science is not. With a few exceptions—Einstein’s theories of relativity, string theory, maybe Newton and Darwin—no matter how much science impresses people, it rarely moves them aesthetically. Science and mathematics stand in roughly the same relation as journalism and fiction—the latter in each pair being more admired because it gives us the sense of having moved in a wholly different realm of being.
Computers, to be blunt, threaten that beauty. One of the four authors in the bundle of papers, Freek Wiedijk, assures mathematicians that existing computers are good for checking proofs only, simply tidying up the real work and never, never ever conjuring up original theorems. The current generation of mathematicians will continue to live by their wits alone. But beyond that …? Contradicting Wiedijk, one of the other authors, Thomas C. Hales, admits in an aside that someday, who knows how soon, computers will be doing original work. Doing the work of human mathematicians.
To understand what work computers will horn in on, think again of the relationship between math and science. The former is uncannily, eerily prescient about the latter, and many seemingly esoteric mathematical ideas, ideas pursued for the pure fun and beauty of them, have turned out to have applications in the real world. That’s partly because mathematics sets out to describe all the “rules” of all the logically consistent universes that could exist. Give mathematicians a few select starting rules (axioms), and they’ll show you how to manipulate them. Change the rules, and they’ll show you how things are different. In this sense, mathematicians are working in the multiverse, and so of course sometimes they’ll have done work that applies to our lonely, local universe, even if it’s not obvious at first.
Computers doing formalized proofs will go even farther. The key thing to understand is that when you’re writing up a formalized proof, you are only allowed a certain repertoire of moves from any given configuration, no different from calculating endgames in chess. Some steps are valid, some aren’t. And once computers have learned how to take all the possible ticky-tack steps in a formal proof, they should be able—simply because of brute processing muscle—to map out even the seemingly illogical and inconsistent universes. A much larger set. They’ll be able to run in every direction without stopping to think at the beginning—as any human would—that this angle looks idiotic. Computers don’t get embarrassed. Naturally, the computers will run into errors and contradictions in their attempts at proofs, at which point they’ll stamp those attempts invalid. And some of what they prove will be truly banal. But sometimes during their exploring, the computers will bust through some unexpectedly subtle opening in some arcane string of symbols, and something incredible will open up, an underground cave. And when they’ve run this new idea all the way to the end—poof, a new mathematical proof, ex nihilo.
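To get a feel for “running in every direction,” here is a toy of my own (nothing like it appears in the Notices papers): a brute-force crawl over Hofstadter’s little MIU formal system, in which a program applies every legal rewrite rule to every string it has derived so far and keeps whatever falls out. Real proof search is unimaginably bigger, but the shape of the activity is the same.

```python
# Toy illustration: brute-force exploration of a tiny formal system
# (Hofstadter's MIU puzzle). Start from one axiom, apply every legal
# move to everything derived so far, repeat. Most of what comes out
# is banal; all of it is certified legal.
from collections import deque

AXIOM = "MI"

def legal_moves(s: str) -> set[str]:
    """Every string derivable from s in one legal step."""
    out = set()
    if s.endswith("I"):                 # rule 1: xI  -> xIU
        out.add(s + "U")
    if s.startswith("M"):               # rule 2: Mx  -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):         # rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):         # rule 4: UU  -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def derivable(max_steps: int = 6) -> set[str]:
    """Breadth-first search: every 'theorem' provable within max_steps moves."""
    seen = {AXIOM}
    frontier = deque([(AXIOM, 0)])
    while frontier:
        s, depth = frontier.popleft()
        if depth == max_steps:
            continue
        for t in legal_moves(s):
            if t not in seen:
                seen.add(t)
                frontier.append((t, depth + 1))
    return seen

if __name__ == "__main__":
    theorems = derivable()
    print(f"{len(theorems)} strings derivable within 6 steps, e.g.:")
    print(sorted(theorems, key=len)[:10])
```

The program never pauses to ask whether a direction looks idiotic; it just takes every legal step and lets the contradictions and banalities sort themselves out afterward.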
This isn’t going to make mathematicians happy. Not so much because they’ll be suddenly useless (who’s more useless now, as they’d be the first to admit?) or because the computers will be somehow “smarter” than they are, but because they’ll have to cede control of beauty. You can imagine a comparable teary-eyed-in-frustration scene with a novelist reading a book that was wholly fabricated from a few hundred lines of code and having to admit it’s deeper and richer than anything she could have come up with.
The only solace for the world of mathematics is that computerized proofs may open up the game again for the rest of us schlubs, the people who never even attempted classes like “Advanced Calculus—A rigorous approach.” For, once proofs are fully formalized—once every last step is s-p-e-l-l-e-d o-u-t on a level that insults the intelligence—then we too can follow along. Just like with the car engine and the didactic mechanic, we could, if we wanted to, walk through every step and see how it all works and fits together. And if we’ve forgotten how the engine works overall by the end of the demonstration … well, at least we got to see a little, to understand for a moment.
Of course, we may not want to. Though fully formalized proofs really only got going in the very late 1900s, the idea of excessively detailed mathematics traces back to Alfred North Whitehead and Bertrand Russell and their seminal tome Principia Mathematica, published starting in 1910. Russell later admitted that his intellect never recovered from the strain that writing out the Principia put on him. Going through mathematics at that level of rigor stripped his gears. All he was good for afterward was winning the Nobel Prize in Literature.