The Means Justify the Ends, or, Mathematicians are Sherlocks and Physicists are Mycrofts

by Jonathan Kujawa

A few weeks ago the Numberphile website posted a short video. The video discussed an “astounding” sum and got considerable press (the video has 1,523,719 views so far). It appeared on both Slate and 3QD. The sum? It's:

$$1 + 2 + 3 + 4 + 5 + \cdots = -\frac{1}{12}$$
3QD included links to the firestorm the video created (I know, I know, I too was shocked that the Internet was up in arms over something). But I was surprised by the kerfuffle. The Numberphile videos I had seen featured James Grime giving well thought out discussions of interesting bits of math. I am happy to recommend them to anyone.

This video, however, is complete rubbish [1]. The hosts cram a remarkable number of mathematical outrages into 8 minutes. But they all come from a single anthropological source:

Mathematicians are Sherlocks and physicists are Mycrofts.

Both Sherlock and Mycroft Holmes are brilliant, devoted to reason, socially awkward, and sometimes downright unpleasant. But just the same we appreciate and even sometimes like them. Where they differ is in means versus ends.

Sherlock follows his reasoning wherever it leads. He never hesitates along the path of reason no matter the final outcome. As he says, “when you have excluded the impossible, whatever remains, however improbable, must be the truth.” Sherlock believes that he serves only The Truth. The means justify the ends.

Sherlock's brother, on the other hand, is perfectly willing to commit all manner of sins as long as the end result is the desired one. The ends justify the means. In the TV series at least, Mycroft serves Queen and Country. He has the luxury of an ultimate authority to judge what is right and what is wrong.

Just so, physicists have Mother Nature. They are free to commit all manner of (mathematical) abuses because they know at the end of the day Mother Nature will judge their work as right or wrong. All sins are forgiven if they give an answer which matches experimental data. Indeed, less than a minute into the Numberphile video the hosts show our seemingly ridiculous sum on page 24 of a standard reference on string theory. And indeed it's true that this sum is used in string theory, quantum physics, etc.

It is entirely reasonable for physicists to take this view. By playing fast and loose they travel farther and see more. They can do this safe in the knowledge that Mother Nature will eventually catch them if they go too far. And it should certainly be said that physics has always been a rich source of new ideas and insights for mathematics.

So what about the video? The hosts of the video are physicists and it shows. They do various fishy things and end up with the desired answer. My main complaint is that the video perpetuates the stereotype that mathematics is arbitrary, that you have to follow rules without explanation, and that if you get something in the end which flies in the face of common sense all you can do is shake your head and marvel at the craziness of it all. All of which, of course, is nonsense.

What about the math? First, what do we even mean by an infinite sum? Ask me to add two or two million numbers and I know what to do. With sufficient patience (or a fast computer) I can sum the numbers and give you the total. I initiate a process, that process eventually stops, and the outcome is the answer to your question.

But what if the process never stops? The short, unimaginative, and unsatisfying answer is that you can't add an infinite list of numbers. But let's at least give it a try. For example, for fun let's consider the following sum:

$$1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \cdots$$
While we can't add all these numbers, we can at least add the first few and keep an eye on the running total:

$$1,\quad 1.25,\quad 1.3611\ldots,\quad 1.4236\ldots,\quad 1.4636\ldots,\ \ldots$$
With enough patience (or with Wolfram Alpha) we can sum the first 10,000 numbers:

$$1 + \frac{1}{4} + \frac{1}{9} + \cdots + \frac{1}{10000^2} \approx 1.6448\ldots$$
Remarkably the running total seems to be growing slower and slower and perhaps even heading towards a number. It was a famous open problem in the 17th and 18th centuries to determine if this was indeed true and, if so, to determine that number. Euler (the absolute master of infinite sums) finally showed in 1735 that the running total of this sum becomes arbitrarily close (in a certain precise sense) to a surprising number:

$$\frac{\pi^2}{6} = 1.644934\ldots$$
What's pi doing in there?

In any case, in modern terms we say this sum converges and that it converges to pi squared over six. We write it as:

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots = \frac{\pi^2}{6}$$
The equals sign is misleading. It is easy to forget ourselves and take the equation to be saying that the infinite sum is pi squared over six. But really all it means is that the running total of the sum becomes arbitrarily close to pi squared over six. In the end I guess it all depends on what the meaning of the word “is” is.
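To make “arbitrarily close” concrete, here is a quick numerical check of the running totals (a minimal sketch in Python; the cutoffs are arbitrary):

```python
import math

# Running totals of 1 + 1/4 + 1/9 + ... compared against pi^2/6.
target = math.pi ** 2 / 6

total = 0.0
for n in range(1, 10_001):
    total += 1.0 / (n * n)
    if n in (1, 10, 100, 1_000, 10_000):
        print(f"first {n:>6} terms: {total:.6f}   (still {target - total:.6f} short)")
```

The gap after N terms shrinks like 1/N, so the running total creeps toward pi squared over six but never actually reaches it at any finite stage.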

As an aside, let me mention that if you replace the power 2 with another even number then the sum still converges and we have a beautiful formula for the number it converges to. If the power is an odd number we still know the sum converges, but it is considered a tremendously hard open problem to determine the number to which the sum converges, even for the simplest case when the power is 3!

Many infinite sums have running totals which don't head to a particular number. In particular, the two stars of the video are just such sums. Our hosts start with the famous Grandi sum:

$$1 - 1 + 1 - 1 + 1 - 1 + \cdots$$
Here when we do the running totals we see that they oscillate between 1 and 0 and never converge to a number. In a calculus class and for most mathematicians this series fails to converge and that's the end of the story. But curiosity and need drove mathematicians to expand our point of view to include more flexible notions of what it means to converge.

For example, Cesàro summation looks instead at the averages of the running totals. Though they don't say it, this is what they do in the video when they say that Grandi's sum is 1/2. It's very nicely explained in this Numberphile video. This is not an unreasonable thing to do, but it should come with big flashing warning lights that they are leading us off the map into wild and exotic territory (here be dragons!). In these new lands you can also find Abel summation, Borel summation, analytic continuation, and, indeed, an entire book on how to deal with divergent sums (“Divergent Series” by G. H. Hardy).
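In code, Cesàro summation of Grandi's series looks like this (a small sketch in Python; the 1,000-term cutoff is arbitrary):

```python
# Grandi's series 1 - 1 + 1 - 1 + ...: the running totals oscillate 1, 0, 1, 0, ...
# and never settle down, but the *averages* of the running totals do settle, at 1/2.
terms = [(-1) ** n for n in range(1_000)]      # 1, -1, 1, -1, ...

running = []
total = 0
for t in terms:
    total += t
    running.append(total)                      # 1, 0, 1, 0, ...

cesaro = [sum(running[:k + 1]) / (k + 1) for k in range(len(running))]
print(cesaro[-1])   # 0.5
```

The running totals never converge, but their averages do; that limit of averages is what Cesàro summation assigns to the series.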

Even with the standard notion of convergence based upon the running total we find ourselves in a bizarre new world. It is an easy and dangerous mistake to start thinking of this “sum” as an actual sum. Ordinary algebra rules don't apply. For example, in ordinary addition we can group terms however we like. But if we, in the words of Hardy, “operate on the formula in an entirely uncritical spirit” we can group terms to obtain both this:

$$(1 - 1) + (1 - 1) + (1 - 1) + \cdots = 0 + 0 + 0 + \cdots = 0$$

and this:

$$1 + (-1 + 1) + (-1 + 1) + \cdots = 1 + 0 + 0 + \cdots = 1$$
I think we can all agree that zero doesn't equal one!

In fact, things are even worse. Let's consider the infinite sum:

$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots$$
This sum converges to, surprisingly enough, the natural logarithm of 2. But if instead we consider the same sum with all positive numbers:

$$1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots$$
then this sum's running total grows larger and larger without limit.
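That the running total outgrows every bound, despite the terms shrinking to zero, is easy to check numerically; it grows roughly like the natural logarithm of N (a quick Python sketch):

```python
import math

# Running totals of the harmonic series 1 + 1/2 + 1/3 + ... grow without bound,
# but only logarithmically: the total through N terms is roughly ln(N).
for N in (10, 1_000, 100_000):
    total = sum(1.0 / n for n in range(1, N + 1))
    print(f"N = {N:>7}: running total = {total:.4f},  ln(N) = {math.log(N):.4f}")
```

The difference between the two columns settles near 0.5772…, Euler's constant, so passing a bound like 100 would take more terms than anyone could ever add by hand.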

Riemann proved the truly astonishing result that in this situation the original sum can be made to converge to any number you like simply by rearranging the order in which the numbers are added. That is, sum the same numbers in a different order and it will now converge to -17, or root 2, or 6,789,341, or any other number you like!
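Riemann's proof is really a recipe, and it is easy to act out: take positive terms 1, 1/3, 1/5, … while the running total sits below your target, and negative terms -1/2, -1/4, … while it sits above. A sketch in Python, with root 2 as the (arbitrary) target:

```python
import math

target = math.sqrt(2)     # any number you like works here

pos, neg = 1, 2           # next unused odd / even denominator
total = 0.0
for _ in range(100_000):
    if total <= target:   # below target: take the next positive term, 1/odd
        total += 1.0 / pos
        pos += 2
    else:                 # above target: take the next negative term, -1/even
        total -= 1.0 / neg
        neg += 2

print(total)   # ≈ 1.4142...
```

Every term of the original series is eventually used exactly once, and each overshoot is at most the size of the last term taken, which shrinks to zero; that is why the rearranged running totals home in on the target.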

With these results in mind it's no wonder mathematicians are queasy over the fast and loose algebra of our physics friends. But, of course, physicists have the comfort of knowing Mother Nature will tell them in the end if they got it right or wrong.

Let's turn back to our original sum. The running totals plainly grow larger and larger without bound. The standard notion of convergence matches our intuition: the sum should be considered infinite if it's anything at all.

So in what sense is -1/12 actually a reasonable interpretation as well? Euler came to this value through clever calculus arguments. We'll follow a more modern path as it leads us right through the famous Riemann Zeta function.

First let's talk about a more familiar sum. Let r be any positive real number and consider the sum of the powers of r:

$$1 + r + r^2 + r^3 + r^4 + \cdots$$
As we learn in calculus, this sum converges whenever r is less than 1. Indeed, there is a simple formula for the number to which it converges:

$$1 + r + r^2 + r^3 + \cdots = \frac{1}{1-r}$$
For example, when r is 1/2 we find the sum converges to 2 and find ourselves in the midst of Zeno's Dichotomy Paradox. But we must be vigilant! This formula is correct for any r less than 1 but, for example, if we plug in 2 we get the nonsensical sum:

$$1 + 2 + 4 + 8 + 16 + \cdots = \frac{1}{1-2} = -1$$
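The running totals tell the two cases apart immediately (a quick Python check; the 50-term cutoff is arbitrary):

```python
# Running totals of 1 + r + r^2 + ... compared with the formula 1/(1 - r).
def running_total(r, n):
    return sum(r ** k for k in range(n))

print(running_total(0.5, 50), 1 / (1 - 0.5))   # both ≈ 2.0: the formula is honest
print(running_total(2.0, 50), 1 / (1 - 2.0))   # ≈ 1.1e15 versus -1.0: it is not
```

For r = 1/2 the formula reports exactly where the running totals are heading; for r = 2 it reports a number the running totals race away from.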
The sum we started our story with works analogously. If we let s be a positive real number, then we can consider the sum:

$$1 + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \frac{1}{5^s} + \cdots$$
In calculus you prove this sum converges whenever s is greater than 1. For example, when we plug 2 in for s we just get back the sum we started with at the beginning of this story.

Riemann proved there is a function (his famous Zeta function) which gives the number to which this sum converges whenever s is greater than 1. So we again might write this as:

$$\zeta(s) = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \cdots \qquad (s > 1)$$
Now Riemann's function can be evaluated at any value of s we choose (except s = 1). If we are sloppy like before we can plug -1 into s on both sides. Riemann's Zeta function gives us -1/12 and the right side becomes the sum of the natural numbers. We finally obtain our infamous sum:

$$1 + 2 + 3 + 4 + 5 + \cdots = \zeta(-1) = -\frac{1}{12}$$
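There are rigorous ways to compute the value ζ(-1) without ever writing down a divergent sum. One of them, a globally convergent series for the Zeta function due to Hasse in 1930 (not the route Euler or the video took), needs only finitely many nonzero terms at s = -1. A sketch in Python using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def zeta(s, rows=30):
    """Hasse's globally convergent series for the Riemann Zeta function,
    evaluated at an integer s != 1 with exact rational arithmetic."""
    total = Fraction(0)
    for n in range(rows):
        inner = sum((-1) ** k * comb(n, k) * Fraction(k + 1) ** (-s)
                    for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - Fraction(2) ** (1 - s))

print(zeta(-1))   # -1/12
print(zeta(0))    # -1/2
```

For s greater than 1 the same function produces better and better rational approximations to the convergent sums above; at s = -1 the inner sums vanish after the first two rows and -1/12 falls out of finite arithmetic. That is the sense in which the video's sum has a value: not as a running total, but as the value of a perfectly well-behaved function.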
A final note: s can be any complex number (other than 1) and the Zeta function still makes sense. The most famous open problem in mathematics is the Riemann Hypothesis. It's a simple question: apart from the so-called trivial zeros at the negative even integers, is it true that the only complex numbers s which make the Zeta function equal to zero have real part equal to 1/2? It was one of Hilbert's 23 problems a century ago and is now a Millennium Prize Problem.

Divergent series are the invention of the devil, and it is shameful to base on them any demonstration whatsoever.

— N. H. Abel

[1] To be fair, the other Numberphile videos are a great service to mathematics and in this one they tackle a particularly difficult topic. While this video is shady, they surely don't deserve the heaps and heaps of white-hot scorn they've received. Here you can read an essay by one of the hosts, Tony Padilla, where he explains the thinking behind the video.