Scientific ideas can have a life of their own. They can be forgotten, lauded or reworked into something very different from their creators’ original expectations. Personalities and peccadilloes and the unexpected, buffeting currents of history can take scientific discoveries in unpredictable directions. One telling example is provided by a paper that appeared in the September 1, 1939 issue of the “Physical Review”, the leading American journal of physics.
The paper had been published by J. Robert Oppenheimer and his student, Hartland Snyder, at the University of California at Berkeley. Oppenheimer was then a 35-year-old professor and had been teaching at Berkeley for ten years. He was widely respected in the world of physics for his brilliant mind and remarkable breadth of interests ranging from left-wing politics to Sanskrit. He had already made important contributions to nuclear and particle physics. Over the years Oppenheimer had collected around him a coterie of talented students. Hartland Snyder was regarded as the best mathematician of the group.
Oppenheimer and Snyder’s paper was titled “On Continued Gravitational Contraction”. It tackled the question of what happens when a star runs out of the material whose nuclear reactions make it shine. It postulated a bizarre, wondrous, wholly new object in the universe that must be created when massive stars die. Today we know that object as a black hole. Oppenheimer and Snyder’s paper was the first to postulate it (although an Indian physicist named Bishveshwar Datt had tackled a similar case before without explicitly considering a black hole). The paper is now regarded as one of the seminal papers of 20th century physics.
But when it was published, it sank like a stone.
This is the fifth in a series of essays on the life and times of J. Robert Oppenheimer. All the others can be found here.
Between December 1941, when the United States entered the Second World War, and August 1945, when two revolutionary weapons were used against Japan and the war ended, Robert Oppenheimer underwent an astonishing transformation that stunned his colleagues. From being an ivory tower intellectual who quoted French and Sanskrit poetry and who had led nothing bigger than an adoring group of graduate students and postdocs – not even a university department – he became the successful leader of the largest scientific and industrial enterprise in history, rubbing shoulders with cabinet secretaries and generals and directing the work of tens of thousands of individuals – Nobel laureates and janitors, physicists and chemists and mathematicians, engineers and soldiers and administrative staff. One cannot understand this transformation without tracing its seed back to momentous scientific and political world events in that troubled decade of the 1930s. I can barely scratch the surface of these events here; there is no better source that describes them than Richard Rhodes’s seminal book, “The Making of the Atomic Bomb.”
In December 1938, working at the Kaiser Wilhelm Institute in Berlin, the chemists Otto Hahn and Fritz Strassmann found that uranium, when bombarded by neutrons, split into two smaller, almost equal fragments, a process that came to be called nuclear fission. This transformation was completely unexpected – the atomic nucleus was thought to be relatively stable. While physicists had bombarded elements with neutrons since the discovery of that elementary particle in 1932, all they had seen was the chipping off or building up of nuclei into elements one or two places away in the periodic table; the breaking up of uranium into much smaller elements like barium and xenon was stunning. When Hahn wrote to his colleague Lise Meitner – an Austrian Jewish physicist in exile in Sweden – about this result, she and her nephew Otto Frisch prophetically figured out on a hike that the process would release energy that could be explained by Einstein’s famous equation, E = mc^2. When uranium breaks up, the two resulting pieces weigh slightly less than the parent uranium nucleus – that tiny difference in mass translates to a huge difference in energy according to Einstein’s formula. How huge? Several million times more than in the most energetic chemical reactions.
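The arithmetic is easy to check with a back-of-the-envelope calculation. The figures below are illustrative, not taken from Meitner and Frisch’s actual estimate: a mass defect of roughly 0.2 atomic mass units per fission, converted to energy via E = mc^2 (one atomic mass unit is equivalent to about 931.5 MeV), compared against a generous ~10 eV for a very energetic chemical reaction.

```python
# Back-of-the-envelope check of the fission energy estimate.
# All figures are illustrative round numbers, not historical values.

AMU_IN_EV = 931.5e6   # energy equivalent of one atomic mass unit (~931.5 MeV)

delta_m = 0.2         # assumed mass defect of one uranium fission, in amu
fission_energy = delta_m * AMU_IN_EV   # energy released per fission, in eV

chemical_energy = 10.0                 # generous per-molecule chemical reaction, in eV
ratio = fission_energy / chemical_energy

print(f"fission releases ~{fission_energy / 1e6:.0f} MeV per nucleus")
print(f"that is ~{ratio:,.0f} times a typical chemical reaction")
```

The result, on the order of two hundred MeV per nucleus and roughly ten million times the energy of a chemical bond, is consistent with the “several million times” figure above.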
This is the fourth in a series of posts about J. Robert Oppenheimer’s life and times. All the others can be found here.
Robert Oppenheimer, said Hans Bethe, “created the greatest school of theoretical physics America has ever known.” Coming from Bethe, a physicist of legendary stature who received the Nobel Prize for figuring out what makes the stars shine and who published papers well into his nineties, this was high praise. Before Oppenheimer, it was almost mandatory for young American physics students to go to Europe to study at the feet of masters like Bohr or Born. After Oppenheimer brought back the fire from the continent, they only had to go to California to bask in its glow. Today, while Oppenheimer is most famous as the father of the bomb, it is very likely that posterity will judge his creation of the American school of modern physics as his most important accomplishment.
When he graduated from Göttingen with his Ph.D. in 1927, Oppenheimer’s reputation preceded him. He received ten job offers from universities like Harvard, Princeton and Yale. He chose to go to the University of California, Berkeley. Two reasons drew him to what was then a promising but not superlative outpost of physics far from the Eastern centers. Berkeley was, in his words, “a desert”, a place with enormous potential but one which did not yet have a flourishing tradition of physics. The physics department there had already hired Ernest Lawrence, an experimentalist who would become, with his cyclotron, the father of ‘big science’ in the country. Now it wanted a theorist to match Lawrence’s experimental acumen. Oppenheimer, who had proven that he could hold his own with the most important physicists in Europe, was a logical choice.
This is the third in a series of posts about J. Robert Oppenheimer’s life and times. All the others can be found here.
In 1925, there was no better place to do experimental physics than Cambridge, England. The famed Cavendish Laboratory there had been created in 1874 with funds donated by a relative of the eccentric scientist-millionaire Henry Cavendish. It had been led by James Clerk Maxwell and J. J. Thomson, both physicists of the first rank. In 1924, the booming voice of Ernest Rutherford reverberated in its hallways. During its heyday and even beyond, the Cavendish would boast a record of scientific accomplishments unequalled by any other single laboratory before or since; the current roster of Nobel Laureates associated with the institution stands at thirty. By the 1920s Rutherford was well on his way to becoming the greatest experimental physicist in history, having discovered the laws of radioactive transformation, the atomic nucleus and the first example of artificially induced nuclear reactions. His students, half a dozen Nobelists among them, would include Niels Bohr – one of the few theorists the string-and-sealing-wax Rutherford admired – and James Chadwick, who discovered the neutron.
Robert Oppenheimer returned to New York in 1925 after a vacation in New Mexico to disappointment. While he had been accepted into Christ’s College, Cambridge, as a graduate student, Rutherford had rejected his application to work in his laboratory in spite of – or perhaps because of – the recommendation letter from his undergraduate advisor, Percy Bridgman, which painted a lackluster portrait of Oppenheimer as an experimentalist. Instead it was recommended that Oppenheimer work with the physicist J. J. Thomson. Thomson, a Nobel Laureate, was known for his discovery of the electron, a feat he had accomplished in 1897; by 1925 he was well past his prime. Oppenheimer sailed for England in September.
I’ve been thinking again about the relationship of scientists to the history of science. Lorraine Daston, the historian and philosopher of science, was recently interviewed for Marginalia, where her interviewer asked a strongly worded question. “Scientists are—I don’t want to put it too provocatively—but frankly they’re afraid of the history of their own discipline. What do you think that means?”
Daston was not quite willing to put all the blame at the feet of scientists. Historians of science, she remarked, have become more specialized, making their work less useful to scientists. Likewise, philosophers have failed to remake “the concept of truth” in a way “that does justice to the historical dynamism of science.” But then there are the scientists themselves, who “consider almost anything which is not within their discipline, including other sciences, to be blather. So, there’s quite enough blame to go around in terms of explaining […] this impasse of mutual incomprehension.”
When this interview was released, it prompted some online chatter. Some scientists reading the interview did not see themselves in Daston’s characterization, since scientists do not, in general, consider themselves uninterested in the history of science—quite the opposite. The problem, for such history-interested scientists, was of approach rather than content.
To explore the basic distinction between “science history for scientists” and “science history for historians-and-philosophers-of-science,” I’ll use two complementary books. Einstein’s Fridge: How the Difference Between Hot and Cold Explains the Universe, out last year from the science writer Paul Sen, exemplifies the former approach, where history provides a narrative scaffold to lead us gently toward our modern scientific theories. Inventing Temperature: Measurement and Scientific Progress, the 2004 work by the historian and philosopher of science Hasok Chang, is a classic example of the latter, where history is used as a proving ground to show that science and its history are more complicated than most scientists care to admit.
Werner Heisenberg was on a boat with Niels Bohr and a few friends, shortly after he discovered his famous uncertainty principle in 1927. A bedrock of quantum theory, the principle states that one cannot determine both the position and the momentum of a particle like the electron with arbitrary accuracy. Heisenberg’s discovery revealed an intrinsic opposition between these quantities; better knowledge of one necessarily meant worse knowledge of the other. Talk turned to physics, and after Bohr had described Heisenberg’s seminal insight, one of his friends quipped, “But Niels, this is not really new, you said exactly the same thing ten years ago.”
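In modern notation, the trade-off is usually written as an inequality relating the statistical spreads (standard deviations) of position and momentum, with $\hbar$ the reduced Planck constant:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

However a measurement is arranged, squeezing $\Delta x$ necessarily drives $\Delta p$ up, and vice versa.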
In fact, Bohr had already convinced Heisenberg that his uncertainty principle was a special case of a more general idea that Bohr had been expounding for some time – a thread of Ariadne that would guide travelers lost in the quantum world; a principle of great and general import named the principle of complementarity.
Complementarity arose naturally for Bohr after the strange discoveries of subatomic particles revealed a world that was fundamentally probabilistic. The positions of subatomic particles could not be assigned with definite certainty but only with statistical odds. This was a complete break with Newtonian classical physics, where particles had a definite trajectory, a place in the world order that could be predicted with complete certainty if one had the right measurements and mathematics at hand. Heisenberg, Bohr’s most important protégé, had invented quantum theory in 1925, working at Bohr’s theoretical physics institute in Copenhagen, when he was only twenty-four. Two years later came uncertainty; Heisenberg grasped that foundational truth about the physical world while Bohr was away on a skiing trip in Norway and Heisenberg was taking a walk at night in the park behind the institute.
Victor Weisskopf (Viki to his friends) emigrated to the United States in the 1930s as part of the windfall of Jewish European émigré physicists that the country inherited thanks to Adolf Hitler. In many ways Weisskopf’s story was typical of his generation’s: born to well-to-do parents in Vienna at the turn of the century, educated in the best centers of theoretical physics – Göttingen, Zurich and Copenhagen – where he learnt quantum mechanics from masters like Wolfgang Pauli, Werner Heisenberg and Niels Bohr, and finally escaping the growing tentacles of fascism to make a home for himself in the United States, where he flourished, first at Rochester and then at MIT. He worked at Los Alamos on the bomb, then campaigned against it as well as against the growing tide of red-baiting in the United States. A beloved teacher and researcher, he was also the first director-general of CERN, a laboratory which continues to work at the forefront of particle physics and rack up honors.
But Weisskopf also had qualities that set him apart from many of his fellow physicists; among them were an acute sense of human tragedy and triumph and a keen interest in music and the humanities that allowed him to appreciate human problems and connect ideas from various disciplines. He was also renowned as a wonderfully warm teacher. Many of these qualities are on full display in his underappreciated memoir, “The Joy of Insight: Passions of a Physicist”.
The memoir starts by describing Weisskopf’s upbringing in early twentieth century Vienna, which was then a hotbed of revolutions in science, art, psychology and music. The scientifically inclined Weisskopf came of age at the right time, when quantum mechanics was being developed in Europe. He was fortunate to study first at Göttingen, which was the epicenter of the new developments, and then in Zurich under the tutelage of the brilliant and famously acerbic Wolfgang Pauli. It was at Göttingen that Max Born and Heisenberg had invented quantum mechanics; by the time Weisskopf came along, in the early 1930s, physicists were in a frenzy to apply quantum mechanics to a range of well-known, outstanding problems in nuclear physics, solid state physics and other frontier branches of physics. Weisskopf made important contributions to quantum electrodynamics, which underlies much of modern physics.