September 1, 1939: A tale of two papers

by Ashutosh Jogalekar

Scientific ideas can have a life of their own. They can be forgotten, lauded or reworked into something very different from their creators’ original expectations. Personalities and peccadilloes and the unexpected, buffeting currents of history can take scientific discoveries in very unpredictable directions. One very telling example of this is provided by a paper that appeared in the September 1, 1939 issue of the “Physical Review”, the leading American journal of physics.

The paper was written by J. Robert Oppenheimer and his student Hartland Snyder at the University of California at Berkeley. Oppenheimer was then a 35-year-old professor and had been teaching at Berkeley for ten years. He was widely respected in the world of physics for his brilliant mind and for a remarkable breadth of interests ranging from left-wing politics to Sanskrit. He had already made important contributions to nuclear and particle physics. Over the years Oppenheimer had gathered around him a coterie of talented students; Hartland Snyder was regarded as the best mathematician of the group.

Hartland Snyder (Image credit: Niels Bohr Library and Archives)

Oppenheimer and Snyder’s paper was titled “On Continued Gravitational Contraction”. It tackled the question of what happens when a star runs out of the material whose nuclear reactions make it shine. It postulated a bizarre, wondrous, wholly new object in the universe that must be created when massive stars die. Today we know that object as a black hole. Oppenheimer and Snyder’s paper was the first to postulate it (although an Indian physicist named Bishveshwar Datt had tackled a similar case before without explicitly considering a black hole). The paper is now regarded as one of the seminal papers of 20th century physics.

But when it was published, it sank like a stone. Read more »

On change

by Ashutosh Jogalekar

Ceanothus moth

Two weeks ago, outside a coffee shop near Los Angeles, I discovered a beautiful creature, a moth. It was lying still on the pavement and I was afraid someone might trample on it, so I gently picked it up and carried it to a clump of garden plants on the side. Before that I showed it to my 2-year-old daughter, who let it walk slowly over her arm. The moth was brown and huge, about the size of my hand. It had the feathery antennae typical of a moth and two black eyespots on the ends of its wings. It moved slowly and gradually disappeared into the protective shadow of the plants when I put it down.

Later I looked up the species on the Internet and found that it was a male Ceanothus silk moth, common in the Western United States. I found out that the reason it’s not seen very often is that the males live only for about a week or two after they take flight. During that time they don’t eat; their only purpose is to mate and die. When I read about it I realized that I had held in my hand a thing of indescribable beauty, indescribable precisely because of the brevity of its life. Then I realized that our lives are perhaps not all that long compared to the Ceanothus moth’s. Assuming that an average human lives for about 80 years, the moth’s lifespan is about 2,000 times shorter than ours. But our lifespans are much shorter than those of redwood trees. Might we not appear to redwood trees the way Ceanothus moths or ants appear to us: brief specks of life fluttering for an instant and then disappearing? The difference, as far as we know, is that unlike redwood trees we can consciously understand this impermanence. And our lives are no less beautiful for being, on a relative scale of events, just as brief. They are brief instants between the lives of redwood trees, just as redwood trees’ lives are brief instants in the intervals between the lives of stars.
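The “2000 times” figure is easy to check with rough numbers. A quick sketch, with the moth’s two-week flight period, an 80-year human life and a 2,000-year redwood lifespan all taken as round illustrative assumptions:

```python
# Back-of-the-envelope check of the lifespan ratios discussed above.
# All figures are rough, round-number assumptions for illustration.
WEEKS_PER_YEAR = 52

moth_weeks = 2                          # male Ceanothus silk moth's flight period
human_weeks = 80 * WEEKS_PER_YEAR       # an 80-year human life, in weeks
redwood_weeks = 2000 * WEEKS_PER_YEAR   # a long-lived redwood, in weeks

human_to_moth = human_weeks / moth_weeks        # how many moth lifetimes fit in ours
redwood_to_human = redwood_weeks / human_weeks  # how many human lifetimes fit in a redwood's

print(f"A human life spans roughly {human_to_moth:.0f} moth lifetimes")
print(f"A redwood's life spans roughly {redwood_to_human:.0f} human lifetimes")
```

With these assumptions the first ratio comes out near 2,000, matching the figure in the text; the second shows the same relation holding, less dramatically, one rung up the ladder.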

I have been thinking about change recently, perhaps because it’s the standard thing to do for someone in their forties. But as a chemist I have thought about change a great deal in my career. The gist of a chemist’s work deals with the structure of molecules and their transformations into each other. The molecules can be natural or synthetic. They can be as varied as DNA, nylon, chlorophyll, rocket fuel, cement and aspirin. But what connects all of them is change. At some point in time they did not exist and came about through the union of atoms of carbon, oxygen, hydrogen, phosphorus and other elements. At some point they will cease to be and those atoms will become part of some other molecule or some other life form. Read more »

The Incommensurable Legacy of Thomas Kuhn

by David Kordahl

Left: Thomas Kuhn (1990). Right: His new book (2022).

Thomas Kuhn’s epiphany

In the years after The Structure of Scientific Revolutions became a bestseller, the philosopher Thomas S. Kuhn (1922-1996) was often asked how he had arrived at his views. After all, his book’s model of science had become influential enough to spawn persistent memes. With over a million copies of Structure eventually in print, marketers and businesspeople could talk about “paradigm shifts” without any trace of irony. And given the contradictory descriptions that attached to Kuhn—was he a scientific philosopher? a postmodern relativist? another secret third thing?—the question of how he had come to his views was a matter of public interest.

Kuhn told the story of his epiphany many times, but the most recent version in print is collected in The Last Writings of Thomas S. Kuhn: Incommensurability in Science, released in November 2022 by the University of Chicago Press. The book gathers an uncollected essay, a lecture series from the 1980s, and the existing text of his long-awaited but never-completed follow-up to Structure, all presented with a scholarly introduction by Bojana Mladenović.

But back to that epiphany. As Kuhn was finishing up his Ph.D. in physics at Harvard in the late 1940s, he worked with James Conant, then the president of Harvard, on a general education course that taught science to undergraduates via case histories, a course that examined episodes that had altered the course of science. While preparing a case study on mechanics, Kuhn read Aristotle’s writing on physical science for the first time. Read more »

Should a scientist have faith?

by Ashutosh Jogalekar

Niels Bohr took a classic leap of faith when postulating the quantum atom (Image: Atomic Heritage Foundation)

Scientists like to think that they are objective and unbiased, driven by hard facts and evidence-based inquiry. They are proud of saying that they go only where the evidence leads them. So it might come as a surprise to realize not only that scientists are as biased as non-scientists, but that they are often driven as much by belief as non-scientists are. In fact they are driven by more than belief: they are driven by faith. Science. Belief. Faith. Seeing these words together in one sentence might make most scientists bristle and want to throw something at the wall, or at the writer of this piece. Surely you aren’t painting us with the same brush as those who profess religious faith, they might say?

But there’s a method to the madness here. First consider what faith is typically defined as – it is belief in the absence of evidence. Now consider what science is in its purest form. It is a leap into the unknown, an extrapolation of what is into what can be. Breakthroughs in science by definition happen “on the edge” of the known. Now what sits on this edge? Not the kind of hard evidence that is so incontrovertible as to dispel any and all questions. On the edge of the known, the data is always wanting, the evidence always lacking, even if not absent. On the edge of the known you have wisps of signal in a sea of noise, tantalizing hints of what may be, with never enough statistical significance to nail down a theory or idea. At the very least, the transition from “no evidence” to “evidence” lies on a continuum. In the absence of good evidence, what does a scientist do? He or she believes. He or she has faith that things will work out. Some call it a sixth sense. Some call it intuition. But “faith” fits the bill equally.

If this reliance on faith seems like heresy, perhaps it’s reassuring to know that such heresies were committed by many of the greatest scientists of all time. All major discoveries, when they are made, at first rely on small pieces of data that are loosely held. A good example comes from the development of theories of atomic structure. Read more »

Complementarity and the world: Niels Bohr’s message in a bottle

by Ashutosh Jogalekar

Niels Bohr (Getty Images)

Werner Heisenberg was on a boat with Niels Bohr and a few friends, shortly after he discovered his famous uncertainty principle in 1927. A bedrock of quantum theory, the principle states that one cannot determine both the position and the momentum of particles like electrons with arbitrary accuracy. Heisenberg’s discovery revealed an intrinsic opposition between these quantities; better knowledge of one necessarily meant worse knowledge of the other. Talk turned to physics, and after Bohr had described Heisenberg’s seminal insight, one of his friends quipped, “But Niels, this is not really new, you said exactly the same thing ten years ago.”
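Stated symbolically (the standard modern textbook form, not given in the passage itself), the principle bounds the product of the uncertainties in a particle’s position $x$ and momentum $p$:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

where $\hbar$ is the reduced Planck constant. Squeezing $\Delta x$ toward zero forces $\Delta p$ to grow without bound, which is precisely the intrinsic opposition the anecdote turns on.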

In fact, Bohr had already convinced Heisenberg that his uncertainty principle was a special case of a more general idea that Bohr had been expounding for some time – a thread of Ariadne that would guide travelers lost through the quantum world; a principle of great and general import named the principle of complementarity.

Complementarity arose naturally for Bohr after the strange discoveries of subatomic particles revealed a world that was fundamentally probabilistic. The positions of subatomic particles could not be assigned with definite certainty but only with statistical odds. This was a complete break with Newtonian classical physics, in which particles had a definite trajectory, a place in the world order that could be predicted with complete certainty if one had the right measurements and mathematics at hand. In 1925, working at Bohr’s theoretical physics institute in Copenhagen, Heisenberg, Bohr’s most important protégé, had invented quantum theory when he was only twenty-four. Two years later came uncertainty; Heisenberg grasped that foundational truth about the physical world while Bohr was away on a skiing trip in Norway and Heisenberg was taking a walk at night in the park behind the institute. Read more »

The Problem Of The Inner: On The Subject-Ladenness Of Objectivity

by Jochen Szangolies

Figure 1: Cutting into a cake does not reveal the interior, but simply creates more—delicious—surface. Image credit: Caitlyn de Wild on Unsplash

Children, they say, are natural scientists (although opinion on what it is that makes them so appears divided). Each of us has probably been stumped by a question asked, out of the blue, that gives a sudden glimpse into the workings of a mind encountering the world for the first time, faced with the impossible task of making sense of it. While there may be an element of romanticisation at play, such moments also, on occasion, show us the world from a point of view where the assumptions that frame the adult world have not yet calcified into comforting certainties.

The questions I asked as a child were probably mostly of the ‘neverending chain of why’ sort (a habit I still haven’t entirely shed). But there was one idea that kept creeping up on me with an almost compulsive quality: how do I know what’s inside things? Is there anything, or is there just a dark nothing behind their flimsy outer skin? Granted, I probably didn’t phrase it in these terms, but there was a sort of vaguely realized worry that things might just suddenly go pop like an unsuspecting balloon pricked by a prankster’s needle, exposing themselves as ultimately hollow, mere shells.

It’s not such an easily dismissed idea. All we ever see of things are surfaces reflecting light. All we ever touch are exteriors. Even the tasting tongue, a favorite instrument of probing for the curious child, tastes nothing but what’s on the outside (incidentally, here’s something I always found sort of creepy: look at anything around you—your tongue knows exactly what it feels like).

You might think it’s a simple enough exercise to discover the inner nature of things—faced with, say, the deliciously decorated exterior of a cake, in the best analytic tradition, heed your inner lobster, whip out a knife and cut right into it to expose the sweet interior. But are you then truly faced with the cake’s inner nature? No—rather, you’re presented with the surface of the piece you cut out, and the rest remaining on the cake platter.

The act of cutting, rather than revealing the inner, just creates new exterior, by separating the cut object—you can’t cut your cake and leave it whole. Whenever threatened with exposure, the inner retreats behind fresh surface. Read more »

Justification and the Value-Free Ideal in Science

by Fabio Tollon

One of the cornerstones of good science is that its results furnish us with an objective understanding of the world. That is, science, when done correctly, tells us how the world is, independently of how we might feel the world to be (based, for example, on our values or commitments). It is thus central to science, and to its claims to objectivity, that values do not override facts. An important feature of this view of science is the distinction between epistemic and non-epistemic values. Simply put, epistemic values are those which would seem to make for good science: external coherence, explanatory power, parsimony, etc. Non-epistemic values, on the other hand, concern things like our value judgements, biases, and preferences. In order for science to work well, so the story goes, it should only be epistemic values that matter when we assess the legitimacy of a given scientific theory (this is often termed the “value-free ideal”). A central presupposition underpinning this value-free ideal is thus that we can in fact mark a distinction between epistemic and non-epistemic values. Unfortunately, as with most things in philosophy, things are not that simple.

The first thing to note is the various ways that the value-free ideal plays out in the contexts of discovery, justification, and application. With respect to the context of discovery, it doesn’t seem to matter if we find that non-epistemic values are operative. While decisions about funding lines, the significance we attach to various theories, and the choice of questions we might want to investigate are all important insofar as they influence where we might choose to look for evidence, they do not determine whether the theories we come up with are valid or not.

Similarly, in the context of application, we could invoke the age-old is-ought distinction: scientific theories cannot justify value-laden beliefs. For example, even if research shows that taller people are more intelligent, it would not follow that taller people are more valuable than shorter people. Such a claim would depend on the value that one ascribes to intelligence beforehand. Therefore, how we go about applying scientific theories is influenced by non-epistemic values, and this is not necessarily problematic.

Thus, in both the context of discovery and the context of application, we find non-epistemic values to be operative. This, however, is not seen as much of a problem, so long as these values do not “leak” into the context of justification, as it is here that science’s claims to objectivity are preserved. Is this really possible in practice, though? Read more »

Should we Disregard the Norms of Assertion in Inter-scientific Discourse? A Response to a False Dilemma

by George Barimah, Ina Gawel, David Stoellger, and Fabio Tollon*

"Assertion" by Ina Gawel

When thinking about the claims made by scientists you would be forgiven for assuming that such claims ought to be true, justified, or at the very least believed by the scientists themselves. When scientists make assertions about the way they think the world is, we expect these assertions to be, on the balance of things, backed up by the local evidence in that field.

The general aim of scientific investigation is to uncover the truth of the matter: in physics, this might involve discovering a new particle, or realizing that what we once thought was a particle is in fact a wave, for example. This process, however, is a collective one. Scientists are not lone wolves who isolate themselves from other researchers. Rather, they work in coordinated teams, which are embedded in institutions, which have a specific operative logic. Thus, when an individual scientist “puts forward” a claim, they are making this claim to a collection of scientists, those being other experts in their field. These are the kinds of assertions that Haixin Dang and Liam Kofi Bright deal with in a recent publication: what are the norms that govern inter-scientific claims (that is, claims between scientists)? When scientists assert that they have made a discovery they are making a public avowal: these are “utterances made by scientists aimed at informing the wider scientific community of some results obtained”. The “rules of the game” when it comes to these public avowals (such as the process of peer review) presuppose that there is indeed a fact of the matter concerning which kinds of claims are worthy of being brought to the collective attention of scientists. Some assertions are proper and others improper, and there are various norms within scientific discourse that help us make such a determination.

According to Dang and Bright we can distinguish three clusters of norms when it comes to norms of assertion more generally. First, we have factive norms, the most famous of which is the knowledge norm, which essentially holds that assertions are only proper if they are true. Second, we have justification norms, which focus on the reason-responsiveness of agents. That is, can the agent provide reasons for believing their assertion? Last, there are belief norms. Belief norms suggest that for an assertion to be proper it simply has to be the case that the speaker sincerely believes in their assertion. Each norm corresponds to one of the conditions introduced at the beginning of this article, and it seems natural to suppose that scientists should maintain at least one (if not all) of these norms when making assertions in their research papers. The purpose of Dang and Bright’s paper, however, is to show that each of these norms is inappropriate in the case of inter-scientific claims. Read more »

The Science of Empire

by N. Gabriel Martin

1870 Index of Great Trigonometrical Survey of India

Henry Ward Beecher was one of the most prominent and influential abolitionists in the US before and during the Civil War. He campaigned against the “Compromise of 1850,” in which the new state of California, annexed in the Mexican-American War, was to be admitted as a state without slavery in exchange for tougher laws against aiding fugitive slaves in the non-slavery states. Of Beecher’s argument against the compromise, “Shall we compromise,” his biographer Debby Applegate writes: “No lasting compromise was possible between Liberty and Slavery, Henry argued, for democracy and aristocracy entailed such entirely different social and economic conditions that ‘One or the other must die.’”[1]

In her Voice From the South, African-American author Anna Julia Cooper writes about hearing Beecher say “Were Africa and the Africans to sink to-morrow, how much poorer would the world be? A little less gold and ivory, a little less coffee, a considerable ripple, perhaps, where the Atlantic and Indian Oceans would come together—that is all; not a poem, not an invention, not a piece of art would be missed from the world.”[2]

Opposed to the enslavement of Africans on the one hand, utterly dismissive of their value on the other, for Beecher the problem of slavery would be just as well resolved if Thanos snapped his fingers and disappeared all Africans, as it would if slavery were abolished. Perhaps better. Beecher’s position isn’t atypical of human rights advocates, even today (although the way he puts it would certainly be impolitic today). When charities from Oxfam to Save The Children feature starving African children in their ads, the message isn’t that the impoverishment of those children inhibits their potential as the inheritors of a rich cultural endowment that goes back to the birth of civilisation, mathematics, and monotheism in Ancient Egypt. The message these humanitarian ads send is that the children are suffering and that you have the power to save them. As Didier Fassin writes: “Humanitarian reason pays more attention to the biological life of the destitute and unfortunate, the life in the name of which they are given aid, than to their biographical life, the life through which they could, independently, give a meaning to their own existence.”[3] Read more »

Computer Simulations And The Universe

by Ashutosh Jogalekar

There is a sense in certain quarters that both experimental and theoretical fundamental physics are at an impasse. Other branches of physics, like condensed matter physics and fluid dynamics, are thriving. But the fundamental composition of matter, the origins of the universe and the unification of quantum mechanics with general relativity have long been held to be foundational matters in physics, so this lack of progress rightly bothers its practitioners.

Each of these two aspects of physics faces its own problems. Experimental physics is in trouble because it now relies on energies that cannot be reached even by the biggest particle accelerators around, and building new accelerators will require billions of dollars at a minimum. Even before, it was difficult to get this kind of money; in the 1990s the Superconducting Super Collider, an accelerator which would have reached energies greater than those of the Large Hadron Collider, was shelved because of its multibillion-dollar cost, a lack of consensus among physicists and political foot-dragging. The next proposed particle accelerator, projected to cost $10 billion, is seen as a bad investment by some, especially since previous expensive experiments in physics have confirmed prior theoretical predictions rather than discovering new phenomena or particles.

Fundamental theoretical physics is in trouble because it has become unfalsifiable, divorced from experiment and entangled in mathematical complexities. String theory, which was thought to be the most promising approach to unifying quantum mechanics and general relativity, has come under particular scrutiny, and its lack of falsifiable predictive power has become so visible that some philosophers have suggested that traditional criteria for a theory’s success, like falsification, should no longer be applied to it. Not surprisingly, many scientists as well as philosophers have frowned on this proposed novel, postmodern model of scientific validation. Read more »