Jordan Ellenberg in Wired:
In October 2006, Netflix announced it would give a cool seven figures to whoever created a movie-recommending algorithm 10 percent better than its own. Within two weeks, the DVD rental company had received 169 submissions, including three that were slightly superior to Cinematch, Netflix’s recommendation software. After a month, more than a thousand programs had been entered, and the top scorers were almost halfway to the goal.
But what started out looking simple suddenly got hard. The rate of improvement began to slow. The same three or four teams clogged the top of the leaderboard, inching forward decimal by agonizing decimal. There was BellKor, a research group from AT&T. There was Dinosaur Planet, a team of Princeton alums. And there were others from the usual math powerhouses — like the University of Toronto. After a year, AT&T’s team was in first place, but its engine was only 8.43 percent better than Cinematch. Progress was almost imperceptible, and people began to say a 10 percent improvement might not be possible.
Then, in November 2007, a new entrant suddenly appeared in the top 10: a mystery competitor who went by the name “Just a guy in a garage.” His first entry was 7.15 percent better than Cinematch; BellKor had taken seven months to achieve the same score. On December 20, he passed the team from the University of Toronto. On January 9, with a score 8.00 percent higher than Cinematch, he passed Dinosaur Planet.
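For readers keeping score at home: the contest's percentages are reductions in root-mean-squared error (RMSE) on a held-out set of ratings, measured against Netflix's Cinematch baseline. Here is a minimal sketch of that arithmetic, with illustrative numbers; the function names and the sample submission RMSE are mine, while Cinematch's roughly 0.9525 baseline comes from the contest's published figures.

```python
import math

# Illustrative sketch, not Netflix's code: the contest scored submissions by
# root-mean-squared error (RMSE) on a held-out set of ratings, and "X percent
# better than Cinematch" meant an X percent reduction in RMSE.

def rmse(predicted, actual):
    """Root-mean-squared error between predicted and true star ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def improvement(submission_rmse, cinematch_rmse):
    """Percent reduction in RMSE relative to the Cinematch baseline."""
    return 100.0 * (1.0 - submission_rmse / cinematch_rmse)

# Cinematch's RMSE on the contest's test data was roughly 0.9525, so the
# million-dollar 10 percent target meant driving RMSE down to about 0.857.
print(improvement(0.8722, 0.9525))  # ~8.43 percent, BellKor's year-one mark
```

The takeaway from the arithmetic: each "agonizing decimal" on the leaderboard corresponds to shaving thousandths of a star off the average prediction error, which is why progress slowed so dramatically near the top.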