John von Neumann emigrated from Hungary in 1933 and settled in Princeton, NJ. During World War II, he contributed a key idea to the design of the plutonium bomb at Los Alamos. After the war he became a highly sought-after government consultant and did important work kickstarting the United States’ ICBM program. He was known for his raucous parties and love of children’s toys.
Enrico Fermi emigrated from Italy in 1938 and settled first in New York and then in Chicago, IL. At Chicago he built the world’s first nuclear reactor. He then worked at Los Alamos where there was an entire division devoted to him. After the war Fermi worked on the hydrogen bomb and trained talented students at the University of Chicago, many of whom went on to become scientific leaders. After coming to America, in order to improve his understanding of colloquial American English, he read Li’l Abner comics.
Hans Bethe emigrated from Germany in 1935 and settled in Ithaca, NY, becoming a professor at Cornell University. He worked out the series of nuclear reactions that power the sun, work for which he received the Nobel Prize in 1967. During the war Bethe was the head of the theoretical physics division of the Manhattan Project. He spent the rest of his long life working extensively on arms control, advising presidents to make the best use of the nuclear genie he and his colleagues had unleashed, and advocating peaceful uses of nuclear energy. He was known for his hearty appetite and passion for stamp collecting.
Victor Weisskopf, born in Austria, emigrated from Germany in 1937 and settled in Rochester, NY. After working on the Manhattan Project, he became a professor at MIT and director-general of CERN, the European particle physics laboratory that has discovered many new fundamental particles, including the Higgs boson. He was also active in arms control. A gentle humanist, he would entertain colleagues with his renditions of Beethoven sonatas on the piano.
Von Neumann, Fermi, Bethe and Weisskopf were all American patriots.
This is the eighth in a series of essays on the life and times of J. Robert Oppenheimer. All the others can be found here.
After his shameful security hearing, many of Oppenheimer’s colleagues thought he was a broken man, “like a wounded animal” as one colleague said. But Freeman Dyson, a young physicist who was as perceptive of human nature as anyone, saw it differently: “As far as we were concerned, he was a better director after the hearing than he was before.”
Director of what? Of the “one, true, platonic heaven”, the Institute for Advanced Study in Princeton, a place where the world’s leading thinkers could think and toil in unfettered surroundings. It was here that Oppenheimer entered the fourth and final act of his life, one that was to thrust him on the national and international stages. There is no doubt that the hearing deeply affected him, but instead of dooming him to a life of obscurity and seclusion, it invested him with a new persona, a new role as a public intellectual in which he performed magnificently. Far from being the end of his life, the hearing signaled a new beginning.
It had been an unpromising start. “Princeton is a madhouse”, Oppenheimer had written to his brother Frank in a 1935 letter, “its solipsistic luminaries shining in separate and helpless desolation.” The institute had been set up with funds from a wealthy brother and sister, Louis and Caroline Bamberger, who, just before the Depression hit, had fortuitously sold their department store to R. H. Macy’s for $11 million. The philanthropic Bambergers wanted to give back to the community and sought the advice of a leading educator, Abraham Flexner, as to how they should put the money to good use. Flexner dissuaded them from starting a medical school in Newark. Instead he had a novel idea. As an educator he knew the importance of pure, curiosity-driven research that may or may not yield practical dividends. Later, in 1939, he laid out his vision in an influential article for Harper’s Magazine titled “The Usefulness of Useless Knowledge”.
This is the seventh in a series of essays on the life and times of J. Robert Oppenheimer. All the others can be found here.
The Bohrian paradox of the bomb – the manifestation of unlimited destructive power making future wars impossible – played into the paradoxes of Robert Oppenheimer’s life after the war. It was mirrored in the arena of political and human affairs, a very different arena from the orderly, predictable world of physics that Oppenheimer had known in the first act of his life. As Hans Bethe once said, one reason many scientists gravitate toward science is that unlike politics, science can actually give you right or wrong answers; in politics, an answer that may be right from one viewpoint may be wrong from another.
In the second act of his life, like Prometheus, who stole fire from the gods and was punished for it, Oppenheimer reached too close to the centers of power and was burnt. In this act we also see a different Oppenheimer, one who could be morally inconsistent, even devious, and complicated. His past came back to haunt him. The same powers of persuasion that had worked their magic on his students at Berkeley and fellow scientists at Los Alamos failed to work on army generals and zealous Washington bureaucrats. The fickle world of politics turned out to be one that the physicist with the velvet tongue wasn’t quite prepared for.
This is the sixth in a series of essays on the life and times of J. Robert Oppenheimer. All the others can be found here.
Colonel Leslie Groves, son of an Army chaplain who held discipline sacrosanct, had finished fourth in his class at West Point and studied engineering at MIT. He had excelled over the course of a long career in building and coordinating large-scale projects, culminating in his building of the Pentagon, then the largest building under one roof anywhere in the world. In September 1942, Groves was wrapping up and eager to get an overseas assignment when he was summoned by his superior, Lieutenant General Brehon Somervell. Somervell told Groves that he had been reassigned to an important project. When Groves irritably asked which one, Somervell told him that it was a project that could end the war. Groves had learned enough about the fledgling bomb program through the grapevine that his reaction was very simple – “Oh”.
Robert Oppenheimer is the most famous person associated with the Manhattan Project, but the truth of the matter is that there was one person even more important than him for the success of the project – Leslie Groves. Without Groves the project would likely have been impossible, or delayed so much as to be useless. Groves was the ideal man for the job. By the fall of 1942, the basic theory of nuclear fission had been worked out and the key goal was to translate theory into practice. Enrico Fermi’s pioneering experiment under the football stands at the University of Chicago – effectively the world’s first nuclear reactor – would soon make it clear that a chain reaction in uranium could be initiated and controlled. The rest would require not just theoretical physics but experimental physics, chemistry, ordnance and engineering. Most importantly, it would need large-scale project and personnel management and coordination between dozens of private and government institutions. Accomplishing this required the talents of a go-getter, a no-nonsense operator who could clear seemingly insurmountable obstacles and move people by the sheer force of his personality, someone who might not be popular but who was feared and respected and got the job done. Groves was that man and more.
This is the fifth in a series of essays on the life and times of J. Robert Oppenheimer. All the others can be found here.
Between December 1941, when the United States entered the Second World War, and August 1945, when two revolutionary weapons were used against Japan and the war ended, Robert Oppenheimer underwent an astonishing transformation that stunned his colleagues. From being an ivory tower intellectual who quoted French and Sanskrit poetry and who had led nothing bigger than an adoring group of graduate students and postdocs – not even a university department – he became the successful leader of the largest scientific and industrial enterprise in history, rubbing shoulders with cabinet secretaries and generals and directing the work of tens of thousands of individuals – Nobel laureates and janitors, physicists and chemists and mathematicians, engineers and soldiers and administrative staff. One cannot understand this transformation without tracing its seed back to momentous scientific and political world events in that troubled decade of the 1930s. I can barely scratch the surface of these events here; there is no better source that describes them than Richard Rhodes’s seminal book, “The Making of the Atomic Bomb.”
In December 1938, working at the Kaiser Wilhelm Institute in Berlin, chemists Otto Hahn and Fritz Strassmann found that uranium, when bombarded by neutrons, split into two smaller, almost equal fragments, a process that came to be called nuclear fission. This transformation was completely unexpected – the atomic nucleus was thought to be relatively stable. While physicists had bombarded elements with neutrons since that particle’s discovery in 1932, all they had seen was the chipping off or building up of nuclei into elements one or two places away in the periodic table; the breaking up of uranium into much smaller elements like barium and xenon was stunning. When Hahn wrote about this result to his colleague Lise Meitner – an Austrian Jewish physicist in exile in Sweden – she and her nephew Otto Frisch prophetically figured out on a hike that the process would release energy that could be explained by Einstein’s famous equation, E = mc^2. When uranium breaks up, the two resulting pieces weigh slightly less than the parent uranium – that tiny difference in mass translates to a huge difference in energy according to Einstein’s formula. How huge? Several million times more than in the most energetic chemical reactions.
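The scale of that last claim can be checked with back-of-the-envelope arithmetic. The sketch below uses rough textbook values – the ~0.2 u mass defect and the few-eV-per-molecule chemical scale are my illustrative assumptions, not figures from the essay:

```python
# Rough estimate of the energy released when one uranium nucleus fissions,
# using E = mc^2 via the mass defect. Approximate textbook values.

U_TO_MEV = 931.494        # energy equivalent of one atomic mass unit, in MeV
mass_defect_u = 0.2       # roughly 0.2 u is "lost" when uranium splits

energy_per_fission_mev = mass_defect_u * U_TO_MEV  # about 186 MeV

# An energetic chemical reaction releases on the order of a few eV
# per molecule; 1 MeV = 1e6 eV.
chemical_ev = 5.0
ratio = energy_per_fission_mev * 1e6 / chemical_ev

print(f"~{energy_per_fission_mev:.0f} MeV per fission")
print(f"~{ratio:.1e}x a typical chemical reaction")
```

The answer lands in the tens of millions, comfortably consistent with the essay’s “several million times” figure.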
This is the fourth in a series of posts about J. Robert Oppenheimer’s life and times. All the others can be found here.
Robert Oppenheimer, said Hans Bethe, “created the greatest school of theoretical physics America has ever known.” Coming from Bethe, a physicist of legendary stature who received the Nobel Prize for figuring out what makes the stars shine and who published papers well into his nineties, this was high praise. Before Oppenheimer, it was almost mandatory for young American physics students to go to Europe to study at the feet of masters like Bohr or Born. After Oppenheimer brought back the fire from the continent, they only had to go to California to bask in its glow. Today, while Oppenheimer is most famous as the father of the bomb, it is very likely that posterity will judge his creation of the American school of modern physics as his most important accomplishment.
When he graduated from Göttingen with his Ph.D. in 1927, Oppenheimer’s reputation preceded him. He received ten job offers from universities like Harvard, Princeton and Yale. He chose to go to the University of California, Berkeley. Two reasons drew him to what was then a promising but not superlative outpost of physics far from the Eastern centers. Berkeley was, in his words, “a desert”, a place with enormous potential but one which did not yet have a flourishing tradition of physics. The physics department there had already hired Ernest Lawrence, an experimentalist who would become, with his cyclotron, the father of ‘big science’ in the country. Now they wanted a theorist to match Lawrence’s experimental acumen. Oppenheimer, who had proven that he could hold his own with the most important physicists in Europe, was a logical choice.
This is the third in a series of posts about J. Robert Oppenheimer’s life and times. All the others can be found here.
In 1925, there was no better place to do experimental physics than Cambridge, England. The famed Cavendish Laboratory there had been created in 1874 with funds donated by a relative of the eccentric scientist-millionaire Henry Cavendish. It had been led by James Clerk Maxwell and J. J. Thomson, both physicists of the first rank. By the mid-1920s, the booming voice of Ernest Rutherford reverberated in its hallways. During its heyday and even beyond, the Cavendish would boast a record of scientific accomplishments unequalled by any other single laboratory before or since; the current roster of Nobel Laureates associated with the institution stands at thirty. Rutherford was by then well on his way to becoming the greatest experimental physicist in history, having discovered the laws of radioactive transformation, the atomic nucleus and the first example of artificially induced nuclear reactions. His protégés, half a dozen Nobelists among them, included Niels Bohr – one of the few theorists the string-and-sealing-wax Rutherford admired – and James Chadwick, who discovered the neutron.
Robert Oppenheimer returned to New York in 1925 after a vacation in New Mexico to disappointment. While he had been accepted into Christ’s College, Cambridge, as a graduate student, Rutherford had rejected his application to work in his laboratory in spite of – or perhaps because of – the recommendation letter from his undergraduate advisor, Percy Bridgman, which painted a lackluster portrait of Oppenheimer as an experimentalist. Instead it was recommended that Oppenheimer work with the physicist J. J. Thomson. Thomson, a Nobel Laureate, was known for his discovery of the electron, a feat he had accomplished in 1897; by 1925 he was well past his prime. Oppenheimer sailed for England in September.
This is the second in a series of posts about J. Robert Oppenheimer’s life and times. All the others can be found here.
In the fall of 1922, after the New Mexico sojourn had strengthened his body and mind, Oppenheimer entered Harvard with an insatiable appetite for knowledge; in the words of a friend, “like a Goth looting Rome”. He wore his clothes on a spare frame – he weighed no more than 120 pounds at any time during his life – and had striking blue eyes. Harvard required its students to take four classes every semester for a standard graduation schedule. Robert would routinely take six classes every semester and audit a few more. Nor were these easy classes; a typical semester might include, in addition to classes in mathematics, chemistry and physics, ones in French literature and poetry, English history and moral philosophy.
The best window we have into Oppenheimer’s personality during his time at Harvard comes from the collection of his letters from those years, edited by Alice Kimball Smith and Charles Weiner. They are mostly addressed to his Ethical Culture School teacher, Herbert Smith, and to his friends Paul Horgan and Francis Fergusson. Fergusson and Horgan were both from New Mexico, where Robert had met them during his earlier trip. Horgan was to become an eminent historian and novelist who would win the Pulitzer Prize twice; Fergusson, who soon left Harvard as a Rhodes Scholar, became an important literary and theater critic. They were to be Oppenheimer’s best friends at Harvard.
The letters to Fergusson, Horgan and Smith are fascinating and provide penetrating insights into the young scholar’s scientific, literary and emotional development. In them Oppenheimer exhibits some of the traits he was to become well known for later, including a prodigious diversity of reading and knowledge and a tendency to dramatize things. Most of the letters are about literature rather than science, which indicates that Oppenheimer had still not set his heart on becoming a scientist. He also regularly wrote poetry that he tried to get published in various outlets.
Freeman Dyson combined a luminous intelligence with a genuine sensitivity toward human problems that was unprecedented among his generation’s scientists. In his contributions to mathematics and theoretical physics he was second to none in the 20th century, but in the range of his thinking and writing he was probably unique. He made seminal contributions to science, advised the U.S. government on critical national security issues and won almost every award a scientist could. His understanding of human problems found expression in elegant prose dispersed in an autobiography and in essays and book reviews in the New Yorker and other venues. Along with being a great scientist he was also a cherished friend and family man who raised six children. He was one of a kind. Those of us who could call him a friend, colleague or mentor were blessed.
Now there is a volume from MIT Press commemorating his remarkable mind that is a must-read for anyone who wants to appreciate the sheer diversity of ideas he generated and lives he touched. From spaceships powered by exploding nuclear bombs to the eponymous “Dyson spheres” that advanced alien civilizations could use to capture energy from their suns, from his seminal work in quantum electrodynamics to his unique theories of the origins of life, from advising the United States government to writing far-ranging books for the public that were in equal parts science and poetry, Dyson’s roving mind roamed across the physical and human universe. All these aspects of his life and career are described by a group of well-known scientists and science writers, including his son George and daughter Esther. Edited by the eminent physicist and historian of science David Kaiser, the volume brings it all together. I myself was privileged to write a chapter about Dyson’s little-known but fascinating foray into the origins of life.
As someone who has been interested in both classical music and the history of physics for a long time, I have been intrigued by comparisons between the styles of the two art forms. I use the term “art form” for physics deliberately, since most of the best physics that has been done represents high art.
Just like classical music, physics has been populated by architects and dreamers, careful workmen and inspired explorers, bursts of genius and sustained acts of creativity. It is worth spending some time discussing what the word “style” might even mean in a supposedly objective, quantitative field like physics, where truth is divined through precise measurements and austere theories. Here style simply means a way of thinking, calculating and experimenting, an idiosyncratic method that lends itself, individually or collectively, to figuring out the facts of nature. The fact is that there is no one style of doing physics, just as there is no one style of doing classical music. Physics has blossomed when it has benefited from an unpredictable diversity of styles; it has stagnated when a particular style hardened into the status quo. And just as classical music goes through periods of convention and experimentation, deaths and rebirths, so does physics.
If we take the three great eras of classical music – baroque, classical and romantic – and the leading composers pioneering these styles, it’s instructive to find parallels with the styles of some great physicists of yore. Johann Sebastian Bach, my favorite classical composer, was known for his precise, almost mathematical fugues, variations and concertos.
I’ll start this column with an over-generalization. Speaking roughly, scientific models can be classed into two categories: mechanical models, and actuarial models. Engineers and physical scientists tend to favor mechanical models, where the root causes of various effects are specified by their formalism. Predictable inputs, in such models, lead to predictable outputs. Biologists and social scientists, on the other hand, tend to favor actuarial models, which can move from measurements to inferences without positing secret causes along the way. By calling these latter models “actuarial,” I’m encouraging readers to think of the tabulations of insurance analysts, who have learned to appreciate that individuals may be unpredictable, even as they follow predictable patterns in the aggregate.
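The actuarial point – individuals unpredictable, aggregates predictable – can be made concrete with a toy simulation (my illustration, not the columnist’s):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Each "individual" is a coin flip: unpredictable on its own.
flips = [random.random() < 0.5 for _ in range(100_000)]

# In the aggregate, the frequency settles near the underlying rate of 0.5 --
# the insurance analyst's predictable pattern, with no hidden mechanism posited.
frequency = sum(flips) / len(flips)
print(f"aggregate frequency: {frequency:.3f}")
```

No model of any single flip is offered or needed; the regularity lives entirely in the tabulation.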
Operationally, these categories refer to different scientific practices. What I’ve called a difference between mechanical vs. actuarial models could just as well be sketched as a difference between theory-driven vs. data-driven models. Both strains have coexisted in science for the past few centuries.
Just for fun, we might attempt to caricature the history of modern science in the mechanical vs. actuarial terms introduced above. In the seventeenth century, Isaac Newton proposed a law of universal gravitation, applicable everywhere throughout the universe, which allowed naturalists to imagine that all physical effects, everywhere and for all time, were caused by physical laws just waiting to be discovered. This view was developed to its philosophical extreme in the eighteenth century by the French mathematician Pierre-Simon Laplace, who imagined that the universe at any particular moment implicitly contained the specifications for its entire past and future.
But in the nineteenth century, Charles Darwin introduced his theory of natural selection, which allowed naturalists to take actuarial models more seriously. Just as hidden order could cause the appearance of randomness, hidden randomness could cause the appearance of order.
Scientists like to think that they are objective and unbiased, driven by hard facts and evidence-based inquiry. They are proud of saying that they only go wherever the evidence leads them. So it might come as a surprise to realize that not only are scientists as biased as non-scientists, but that they are often driven as much by belief as non-scientists are. In fact they are driven by more than belief: they are driven by faith. Science. Belief. Faith. Seeing these words in one sentence might make most scientists bristle and want to throw something at the wall, or at the writer of this piece. Surely, they might say, you aren’t painting us with the same brush as those who profess religious faith?
But there’s a method to the madness here. First consider what faith is typically defined as – it is belief in the absence of evidence. Now consider what science is in its purest form. It is a leap into the unknown, an extrapolation of what is into what can be. Breakthroughs in science by definition happen “on the edge” of the known. Now what sits on this edge? Not the kind of hard evidence that is so incontrovertible as to dispel any and all questions. On the edge of the known, the data is always wanting, the evidence always lacking, even if not absent. On the edge of the known you have wisps of signal in a sea of noise, tantalizing hints of what may be, with never enough statistical significance to nail down a theory or idea. At the very least, the transition from “no evidence” to “evidence” lies on a continuum. In the absence of good evidence, what does a scientist do? He or she believes. He or she has faith that things will work out. Some call it a sixth sense. Some call it intuition. But “faith” fits the bill equally.
If this reliance on faith seems like heresy, perhaps it’s reassuring to know that such heresies were committed by many of the greatest scientists of all time. All major discoveries, when they are made, at first rely on small pieces of data that are loosely held. A good example comes from the development of theories of atomic structure.
Werner Heisenberg was on a boat with Niels Bohr and a few friends, shortly after he discovered his famous uncertainty principle in 1927. A bedrock of quantum theory, the principle states that one cannot determine both the velocity and the position of particles like electrons with arbitrary accuracy. Heisenberg’s discovery pointed to an intrinsic opposition between these quantities; better knowledge of one necessarily meant worse knowledge of the other. Talk turned to physics, and after Bohr had described Heisenberg’s seminal insight, one of his friends quipped, “But Niels, this is not really new, you said exactly the same thing ten years ago.”
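For reference, the trade-off is usually written today in the standard textbook form (my gloss, not a quotation from the essay):

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
```

where \Delta x and \Delta p are the uncertainties in position and momentum and \hbar is the reduced Planck constant; squeezing one uncertainty down forces the other up.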
In fact, Bohr had already convinced Heisenberg that his uncertainty principle was a special case of a more general idea that Bohr had been expounding for some time – a thread of Ariadne that would guide travelers lost through the quantum world; a principle of great and general import named the principle of complementarity.
Complementarity arose naturally for Bohr after the strange discoveries of subatomic particles revealed a world that was fundamentally probabilistic. The positions of subatomic particles could not be assigned with definite certainty but only with statistical odds. This was a complete break with Newtonian classical physics, where particles had a definite trajectory, a place in the world order that could be predicted with complete certainty if one had the right measurements and mathematics at hand. In 1925, working at Bohr’s theoretical physics institute in Copenhagen, Heisenberg – Bohr’s most important protégé – had invented quantum theory when he was only twenty-four. Two years later came uncertainty; Heisenberg grasped that foundational truth about the physical world while Bohr was away on a skiing trip in Norway and Heisenberg was taking a walk at night in the park behind the institute.
Considered the epitome of genius, Albert Einstein appears like a wellspring of intellect gushing forth fully formed from the ground, without precedents or process. There was little in his lineage to suggest genius; his parents Hermann and Pauline, while having a pronounced aptitude for mathematics and music, gave no inkling of the off-scale progeny they would bring forth. His career itself is now the stuff of legend. In 1905, while working on physics almost as a side project and sustaining a day job as technical expert, third class, at the patent office in Bern, he published five papers that revolutionized physics and can only be compared to Isaac Newton’s burst of high creativity as he sought refuge from the plague. Among these were papers heralding his famous equation, E = mc^2, along with ones describing special relativity, Brownian motion and the basis of the photoelectric effect that cemented the particle nature of light. In one of history’s ironic episodes, it was the photoelectric effect paper rather than the one on special relativity that Einstein himself called revolutionary and that won him the 1921 Nobel Prize in physics, awarded in 1922.
But in judging Einstein’s superlative achievements, both in terms of his birth and his evolution as a physicist, it is easy to think of him as an entirely self-made genius. Nothing could be further from the truth. Einstein stood on the proverbial shoulders of giants – Newton, Mach, Faraday, Maxwell, Lorentz, among others – men who had laid the foundations of physics for two centuries before him and for whom he always had effusive praise. But quite apart from learning from his intellectual ancestry, Einstein also honed useful habits and personal qualities that enabled him to triumph in his work. Too often when we read about brilliant men and women, there’s a tendency to enshrine and emphasize pure intellect and discard the personal qualities, as if the two were cleanly separable. But the fact of the matter is that raw brilliance and personal qualities are like genes and culture, each feeding off the other and nurturing the other’s growth and success.
As psychologist Angela Duckworth described in her book “Grit”, genius without effort and determination can fail, or at the very least fail to live up to its great promise. And so it was for Einstein. Which makes it a matter of curiosity at the minimum, and more promisingly a tool for measurably enhancing the efficiency of our own more modest work, to survey the personal qualities Einstein embodied that made him successful. So what were these?
Two men walking in Princeton, New Jersey on a stuffy day. One shaggy-looking with unkempt hair, avuncular, wearing a hat and suspenders, looking like an old farmer. The other an elfin man, trim, owl-like, also wearing a fedora and a slim white suit, looking like a banker. The elfin man and the shaggy man used to make their way home from work every day. Passersby and motorists would strain their heads to look. Everyone knew who the shaggy man was; almost nobody knew who his elfin companion was. And yet when asked, the shaggy man would say that his own work no longer meant much to him, and the only reason he came to work was to have the privilege of walking home with the elfin man. The shaggy man was Albert Einstein. His walking companion was Kurt Gödel.
What made Gödel, a figure unknown to the public, so revered among his colleagues? The superlatives kept coming. Einstein called him the greatest logician since Aristotle. The legendary mathematician John von Neumann who was his colleague argued for his extraction from fascism-riddled Europe, writing a letter to the director of his institute saying that “Gödel is absolutely irreplaceable; he is the only mathematician about whom I dare make this assertion.” And when I made a pilgrimage to Gödel’s house during a trip to his native Vienna a few years ago, the plaque in front of the house made his claim to posterity clear: “In this house lived from 1930-1937, the great mathematician and logician Kurt Gödel. Here he discovered his famous incompleteness theorem, the most significant mathematical discovery of the twentieth century.”
The reason Gödel drew gasps of awe from colleagues as brilliant as Einstein and von Neumann was that he revealed a seismic fissure in the foundations of that most perfect, rational and crystal-clear of all creations – mathematics. Of all the fields of human inquiry, mathematics is considered the most exact. Unlike politics or economics, or even the more quantifiable disciplines of chemistry and physics, every question in mathematics was thought to have a definite yes or no answer. The answer to a question such as whether there is an infinitude of prime numbers leaves absolutely no room for ambiguity or error – it’s a simple yes or no (yes in this case). Not surprisingly, mathematicians around the beginning of the 20th century started thinking that every mathematical question that can be posed should have a definite yes or no answer, and that no mathematical question should have both answers. The first requirement was called completeness, the second consistency.
‘Areopagitica’ was a famous polemic addressed by the poet John Milton to the English Parliament in 1644, arguing for the unlicensed printing of books. It is one of the most famous defenses of freedom of expression. Milton was arguing against a parliamentary ordinance requiring authors to get a license for their works before they could be published. Writing at the height of the English Civil War, Milton was well aware of the power of words to inspire as well as incite. He said,
For books are not absolutely dead things, but do preserve as in a vial the purest efficacy and extraction of that living intellect that bred them. I know they are as lively, and as vigorously productive, as those fabulous Dragon’s teeth; and being sown up and down, may chance to spring up armed men…
What Milton was saying is not that books and words can never incite, but that it would be folly to restrict or ban them before they have been published. This argument against prior restraint found its way into the United States Constitution and has been a pillar of freedom of expression and the press ever since.
Why was Milton opposed to pre-publication restrictions on books? Not just because he saw them as a matter of personal liberty, but because he realized that restricting a book’s contents means restricting the very power of the human mind to come up with new ideas. He powerfully reminded Parliament,
Who kills a man kills a reasonable creature, God’s image; but he who destroys a good book, kills reason itself, kills the image of God, as it were, in the eye. Many a man lives a burden to the earth; but a good book is the precious lifeblood of a master spirit, embalmed and treasured up on purpose to a life beyond life.
Milton saw quite clearly that the problem with limiting publication is, in significant part, the impossibility of predicting in advance all the places a book’s ideas can go. The same problem arises with science. Read more »
During a wartime visit to England in early 1943, John von Neumann wrote a letter to his fellow mathematician Oswald Veblen at the Institute for Advanced Study in Princeton, saying:
“I think I have learned a great deal of experimental physics here, particularly of the gas dynamical variety, and that I shall return a better and impurer man. I have also developed an obscene interest in computational techniques…”
This seemingly mundane communication foreshadowed von Neumann’s decisive influence on two overwhelmingly important strands of 20th and 21st century technology – the development of computing and the development of nuclear weapons.
Johnny von Neumann was the multifaceted intellectual diamond of the 20th century. He contributed so many seminal ideas to so many fields so quickly that it would be impossible for any one person to summarize them, let alone understand them all. He may have been the last universalist in mathematics, having almost complete command of the subject, pure and applied alike. But he didn’t stop there. After making fundamental contributions to operator algebra, set theory and the foundations of mathematics, he revolutionized at least two different and disparate fields – economics and computer science – and made contributions to a dozen others, each of which would have been important enough to enshrine his name in scientific history.
But by the end of his relatively short life, cruelly cut short by cancer, von Neumann had acquired another identity – that of an American patriot who had done more than almost anyone else to make sure that his country was well defended and stayed ahead of the Soviet Union in the rapidly intensifying Cold War. Like most other contributions of this sort, this one had a distinctly Faustian gleam to it, bringing both glory and woe to humanity’s experiments in self-elevation and self-destruction. Read more »
Progress in science often happens when two or more fields productively meet. Astrophysics got a huge boost when the tools of radio and radar met the age-old science of astronomy; from this fruitful marriage came discoveries like the cosmic microwave background radiation left over from the big bang. Another example was the union of biology with chemistry and quantum mechanics that gave rise to molecular biology. There is little doubt that some of the most important scientific discoveries of the future will similarly arise from the accidental fusion of multiple disciplines.
One such fusion sits on the horizon, largely underappreciated and unseen by the public. It is the fusion between physics, computer science and biology. More specifically, this fusion will likely see its greatest manifestation in the interplay between information theory, thermodynamics and neuroscience. My prediction is that this fusion will be every bit as important as any potential fusion of general relativity with quantum theory, and at least as important as the development of molecular biology in the mid 20th century. I also believe that this development will likely happen during my own lifetime.
The roots of this predicted marriage go back to 1867. In that year the great Scottish physicist James Clerk Maxwell proposed a thought experiment that later came to be called ‘Maxwell’s Demon’. Maxwell’s Demon was purportedly a way to defy the second law of thermodynamics, which had been formulated a few years earlier. The second law of thermodynamics is one of the fundamental laws governing everything in the universe, from the birth of stars to the birth of babies. It basically states that, left to itself, an isolated system will tend to go from a state of order to one of disorder. A good example is how perfume from an open bottle gradually wafts throughout a room. This degree of order and disorder is quantified by a quantity called entropy. Read more »
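The perfume example can be made concrete with a toy simulation (a sketch for illustration only; all names and parameters below are invented, not taken from the essay). The "molecules" all start in one bin of a box, then random-walk between neighboring bins, and the Shannon entropy of their distribution grows as they spread:

```python
import math
import random

def shannon_entropy(counts):
    """Shannon entropy (in bits) of a histogram of molecule counts per bin."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def histogram(positions, n_bins):
    counts = [0] * n_bins
    for p in positions:
        counts[p] += 1
    return counts

random.seed(0)
n_molecules, n_bins, steps = 1000, 10, 5000

# All perfume molecules start in bin 0 (the open bottle): perfect order.
positions = [0] * n_molecules
h_start = shannon_entropy(histogram(positions, n_bins))  # 0.0 – one bin only

# Random walk: at each step one molecule hops to a neighboring bin.
for _ in range(steps):
    i = random.randrange(n_molecules)
    positions[i] = min(max(positions[i] + random.choice([-1, 1]), 0), n_bins - 1)

h_end = shannon_entropy(histogram(positions, n_bins))  # larger: disorder grew

print(f"entropy before: {h_start:.3f} bits, after: {h_end:.3f} bits")
```

Running the walk longer only pushes the entropy closer to its maximum of log2(10) bits, the fully mixed state; the reverse (molecules spontaneously re-collecting in the bottle) is what the second law rules out in practice, and what Maxwell's Demon was imagined to achieve.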
“All experience shows that even smaller technological changes than those now in the cards profoundly transform political and social relationships. Experience also shows that these transformations are not a priori predictable and that most contemporary “first guesses” concerning them are wrong.” – John von Neumann
Is the coronavirus crisis political or technological? Most present analysis says that the pandemic was a result of gross political incompetence, lack of preparedness and impulsive responses by world leaders and governments. But this view is narrow, because it privileges the proximate cause over the ultimate one. The true, deep cause underlying the pandemic is technological. The pandemic spread as it did because of a hyperconnected world in which global communication and the movement of goods and people across international borders far outpace human reaction times. For all our skill in creating these technologies, we did not equip ourselves to manage the network effects and sudden failures they create in social, economic and political systems. An even older technology, the transfer of genetic information between disparate species, was what enabled the whole crisis in the first place.
This privileging of political forces over technological ones is typical of the mistakes we make in seeking the root causes of problems. Political causes, greatly amplified by the twenty-four-hour news cycle and social media, may loom large in the short term, but there is little doubt that the slow but sure grind of technological change, penetrating ever deeper into social and individual choices, will be responsible for most of the important transformations we face during our lifetimes and beyond. On scales of a hundred to five hundred years, it is science and technology, rather than any political or social event, that cause the biggest changes in the fortunes of nations and individuals: as Richard Feynman once put it, a hundred years from now the American Civil War will pale into provincial insignificance compared to that other development of the 1860s – the crafting of the basic equations of electromagnetism by James Clerk Maxwell. The former led to a new social contract for the United States; the latter underpins all of modern civilization – including politics, war and peace.
The question, therefore, is not whether we can survive this or that political party or president. The question is, can we survive technology? Read more »
Neil Shubin’s “Some Assembly Required” is a delightful book whose thesis can be summarized in one word – “repurposing”. As Steve Jobs, channeling Picasso, once put it, “Good artists copy; great artists steal.” By that reckoning Nature is undoubtedly the most magnificent thief and the greatest artist of all time. Repurposing in the history of life will surely become one of the great paradigms of science, and its discovery has not only provided immense insights into evolutionary biology but also promises to make key contributions to our understanding and treatment of human disease.
Among the many achievements of Darwin’s great theory was the explanation and prediction that similar parts of different organisms had similar functions even if they looked different. One of the truly remarkable features of “On the Origin of Species” is how Darwin gets almost everything right, how even throwaway lines attest to a level of understanding of life that was solidified only decades after his death. The idea of repurposing came about in the “Origin” partly as a reply to objections raised by a man named St. George Jackson Mivart. Mivart was in the curious position of being a man of the cloth who had first wholeheartedly embraced Darwin’s theory and studied with Thomas Henry Huxley, Darwin’s most ardent champion, before rejecting it and mounting an attack on it, timidly at first and then vociferously. Mivart’s own tract on the subject, “On the Genesis of Species”, made his not-so-subtle dig at Darwin’s book clear.
Mivart’s basic objection was similar to those raised then and later by creationists. Darwin’s theory crucially relied on transitional forms that enabled major leaps in life’s history – from fish to amphibian, for instance, or from arboreal to terrestrial life. But in Mivart’s view, any such major transition would involve not just a sudden change in one crucial body part, say from gills to lungs, but changes in multiple body parts at once. The transition from water to land, for instance, clearly involved hundreds if not thousands of changes in the organs and structures used for locomotion, feeding and breathing. But how could all these changes arise out of thin air? How could gills suddenly turn into lungs in the first lucky fish that crawled out of the water and learnt how to survive on land? This problem, according to Mivart, was insurmountable and a fatal flaw in Darwin’s theory. Darwin took Mivart’s objections seriously enough to include a substantial section addressing them in the sixth and definitive edition of his book, first published in 1872. In it he acknowledged Mivart’s problems with his theory, and then did away with them succinctly: there is no difficulty in imagining organs being put to new uses in different species, Darwin said, as long as the change is “accompanied by a change in function.” In writing this, Darwin was even further ahead of his time than he imagined. Read more »