by Ashutosh Jogalekar
There is a sense in certain quarters that both experimental and theoretical fundamental physics are at an impasse. Other branches of physics, like condensed matter physics and fluid dynamics, are thriving, but because the fundamental constituents of matter, the origins of the universe and the unification of quantum mechanics with general relativity have long been held to be foundational questions in physics, this lack of progress rightly bothers its practitioners.
Each of these two aspects of physics faces its own problems. Experimental physics is in trouble because it now relies on energies that cannot be reached even by the biggest particle accelerators around, and building new accelerators will require billions of dollars at a minimum. Even in the past it was difficult to get this kind of money; in the 1990s the Superconducting Super Collider, an accelerator which would have reached energies greater than those of the Large Hadron Collider and whose projected cost had climbed past $10 billion, was cancelled because of a lack of consensus among physicists, political foot-dragging and budget concerns. The next proposed particle accelerator, projected to cost some $10 billion, is seen by some as a bad investment, especially since previous expensive experiments in physics have confirmed prior theoretical expectations rather than discovered new phenomena or particles.
Fundamental theoretical physics is in trouble because it has become unfalsifiable, divorced from experiment and entangled in mathematical complexities. String theory, which was thought to be the most promising approach to unifying quantum mechanics and general relativity, has come under particular scrutiny, and its lack of falsifiable predictions has become so visible that some philosophers have suggested that traditional criteria for a theory’s success, such as falsifiability, should no longer be applied to string theory. Not surprisingly, many scientists as well as philosophers have frowned on this proposed novel, postmodern model of scientific validation.
Quite aside from specific examples in theory and experiment, perhaps the most serious roadblock that fundamental physics seems to be facing is that it might have reached the end of “Why”. That is to say, the causal framework for explaining phenomena that has been a mainstay of physics since its very beginnings might have ominously hit a wall. For instance, the Large Hadron Collider found the Higgs boson, but this particle had already been predicted almost fifty years before. Similarly, the gravitational waves detected by LIGO were a logical prediction of Einstein’s general theory of relativity, proposed almost a hundred years before. Both these experiments were technical tours de force, but they did not make startling, unexpected new discoveries. Other “big physics” experiments before the LHC had likewise validated the predictions of the Standard Model, our best theoretical framework for the fundamental constituents of matter.
The problem is that the fundamental constants in the Standard Model, like the masses of the elementary particles and their number, are ad hoc quantities. Nobody knows why they have the values they do. This dilemma has led some physicists to propose that while our universe happens to be one in which the fundamental constants have certain specific values, there might be other universes in which they have different values. This need to explain the values of the fundamental constants is part of the reason why theories of the multiverse are popular. Even if true, this scenario does not bode well for the state of physics. In his collection of essays “The Accidental Universe”, physicist and writer Alan Lightman says:
Dramatic developments in cosmological findings and thought have led some of the world’s premier physicists to propose that our universe is only one of an enormous number of universes, with wildly varying properties, and that some of the most basic features of our particular universe are mere accidents – random throws of the cosmic dice. In which case, there is no hope of ever explaining these features in terms of fundamental causes and principles.
Lightman also quotes the reigning doyen of theoretical physicists, Steven Weinberg, who recognizes this watershed in the history of his discipline:
We now find ourselves at a historic fork in the road we travel to understand the laws of nature. If the multiverse idea is correct, the style of fundamental physics will be radically changed.
Although Weinberg does not say this, what’s depressing about the multiverse is that its existence might always remain postulated and never proven since there is no easy way to experimentally test it. This is a particularly bad scenario because the only thing that a scientist hates even more than an unpleasant answer to a question is no answer at all.
Do the roadblocks that experimental and theoretical physics have hit, combined with the lack of explanation of the fundamental constants, mean that fundamental physics is stuck forever? Perhaps not. Here one must remember the admonition attributed to Einstein: “Our problems cannot be solved with the same thinking that created them.” Physicists may have to think in wholly different ways, to change the fundamental style that Weinberg refers to, in order to overcome the impasse.
Fortunately there is one tool, in addition to theory and experiment, which has not been used prominently by physicists but which has been embraced by biologists and chemists, and which could help physicists do new kinds of experiments. That tool is computation. Computation is usually regarded as separate from experiment, but computational experiments can be performed the same way lab experiments can, as long as the parameters and models underlying the computation are well defined and valid. In the last few decades, computation has become as legitimate a tool in science as theory and experiment.
Interestingly, this problem of trying to explain fundamental phenomena without being able to resort to deeper explanations is familiar to biologists: it is the old problem of contingency and chance in evolution. Just as physicists want to explain why the proton has a certain mass, biologists want to explain why marsupials have pouches that carry their young or why Blue Morpho butterflies are a beautiful blue. While proximate explanations for such phenomena are available, the ultimate explanations hinge on chance. Biological evolution could have followed any of an infinite number of pathways, and the ones it did follow arose simply from natural selection acting on random mutations. Similarly one can postulate that while the fundamental constants could have had different values, the ones they do have in our universe came about simply through random perturbations, each of which produced a different universe. Physics turns into biology.
Is there a way to test this kind of thinking in the absence of concrete experiments? One way would be to think of different universes as different local minima in a multidimensional landscape. This scenario would be familiar to biochemists, who are used to thinking of the different folded structures of a protein as lying in different local energy minima; a few years back the biophysicist Collin Stultz in fact offered this comparison as a helpful way to think about the multiverse. Computational biophysicists probe this protein landscape by running computer simulations in which they allow an unfolded protein to explore the different local minima until it finds the global minimum that corresponds to its true folded state. In the last few years, thanks to growing computing power, thousands of such proteins have been simulated.
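To make the landscape picture concrete, here is a minimal sketch in Python of the kind of exploration such simulations perform. The one-dimensional “energy” function and the cooling schedule are invented purely for illustration; they stand in for the vastly more complicated energy surfaces of real proteins (or, by analogy, of possible universes).

```python
import math
import random

def energy(x):
    # Toy one-dimensional "landscape": a broad bowl decorated with ripples,
    # giving several local minima and one global minimum near x = -0.5.
    return 0.05 * x**2 + math.sin(3.0 * x)

def anneal(x0=8.0, steps=20000, t_start=2.0, t_end=0.01):
    """Metropolis-style search that gradually lowers the temperature, so the
    walker can escape shallow local minima early and settle into the deepest
    basin later -- a cartoon of a protein exploring its folding landscape."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)   # geometric cooling
        x_new = x + random.gauss(0.0, 0.5)               # propose a random move
        e_new = energy(x_new)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if e_new < e or random.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

print(anneal())  # settles near the global minimum of the toy landscape
```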
Similarly, I propose that computational physicists could perform simulations in which they simulate universes with different values of the fundamental constants and evaluate which ones resemble our real universe. Because the values of the fundamental constants dictate chemistry and biology, one could well imagine completely fantastic physics, biology and chemistry arising in universes with different values of Planck’s constant or of the fine-structure constant. A 0.001% difference in some values might lead to a lifeless universe of total silence, one with only black holes or spectacularly exploding supernovae, or one which bounced between infinitesimal and infinite length scales in a split second. Smaller variations in the constants could result in a universe with silicon-based life, or one with liquid ammonia rather than water as life’s essential solvent, or one with a few million Earth-like planets in every galaxy. With a slight tweaking of the cosmic calculator, one could even have universes where Blue Morpho butterflies are the dominant intelligent species or where humans have the capacity to photosynthesize.
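As a deliberately cartoonish illustration of what such a sweep might look like, here is a Python sketch that rescales a single constant, the fine-structure constant, and asks whether a hydrogen-like atom would still bind electrons with an energy in a “chemistry-friendly” window. The scaling of the binding energy with alpha is standard physics; the 1–100 eV window and the verdict labels are entirely hypothetical scoring choices made up for this example.

```python
M_E_C2_EV = 510998.95            # electron rest energy in eV
ALPHA_0 = 1 / 137.035999         # measured fine-structure constant

def binding_energy_ev(alpha_scale):
    # Hydrogen ground-state binding energy: E = (1/2) * alpha^2 * m_e c^2,
    # so (with the electron mass held fixed) it scales as alpha squared.
    alpha = alpha_scale * ALPHA_0
    return 0.5 * alpha**2 * M_E_C2_EV

def looks_like_our_chemistry(alpha_scale, window=(1.0, 100.0)):
    # Hypothetical criterion: binding energies between ~1 eV and ~100 eV
    # leave room for stable atoms whose bonds can still be made and broken.
    e = binding_energy_ev(alpha_scale)
    return window[0] <= e <= window[1]

for scale in [0.1, 0.5, 0.9, 1.0, 1.1, 2.0, 10.0]:
    verdict = "chemistry-like" if looks_like_our_chemistry(scale) else "alien"
    print(f"alpha x {scale:>4}: E_bind = {binding_energy_ev(scale):8.3f} eV -> {verdict}")
```

A real version would of course vary many constants at once and score whole histories of structure formation rather than one number, but the logic of sweeping and scoring would be the same.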
All these alternative universes could be simulated and explored by computational physicists without the need to conduct billion-dollar experiments and negotiate with politicians for funding. I believe that both the technology and the knowledge base required to simulate entire universes on a computer could be well within our means in the next fifty years, and certainly within the next hundred. In some sense the technology is already within reach: we can already perform climate and protein structure simulations on mere desktop computers, so simulating whole universes should be possible on supercomputers or distributed cloud computing systems. Crowdsourcing of the kind done for the search for extraterrestrial intelligence or for protein folding would be readily feasible. Another alternative would be to do the computation using DNA or quantum computers; because of DNA’s high storage and permutation capacity, computation using DNA could multiply the available computational resources manyfold. One can also imagine taking advantage of natural phenomena like electrical discharges in interstellar space or in the clouds of Venus or Jupiter to perform large-scale computation; in fact an intelligence based on communication using electrical discharges was the basis of Fred Hoyle’s science fiction novel “The Black Cloud”.
On the theoretical side, the trick is to have enough knowledge about fundamental phenomena to be able to abstract away the details, so that the simulation can be run at the right emergent level. For instance, physicists can already simulate the behavior of entire galaxies and supernovae without worrying about the behavior of every single subatomic particle in the system. Similarly, biologists can simulate the large-scale behavior of ecosystems without worrying about the behavior of every single organism in them. In fact physicists are already quite familiar with such an approach from statistical mechanics, where they can simulate quantities like temperature and pressure in a system without simulating every individual atom or molecule in it. And the values of the fundamental constants have been measured to enough decimal places to be used confidently in such simulations.
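A small, standard example of simulating at an emergent level is the two-dimensional Ising model, sketched below in Python. Each “spin” coarse-grains away all atomic detail, yet a macroscopic, measurable quantity, the magnetization, still emerges from the simulation; the lattice size, temperature values and sweep count here are arbitrary choices for illustration.

```python
import math
import random

def ising_magnetization(L=20, T=1.5, sweeps=1000):
    """Metropolis simulation of a 2D Ising model on an L x L grid with
    periodic boundaries. Returns the absolute average magnetization, an
    emergent quantity that no single spin knows anything about."""
    spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = random.randrange(L), random.randrange(L)
        # Energy cost of flipping spin (i, j), from its four neighbors.
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1
    m = sum(sum(row) for row in spins) / (L * L)
    return abs(m)

# Below the critical temperature (about 2.27 in these units) the lattice
# magnetizes; above it, the magnetization stays near zero.
print(ising_magnetization(T=1.5), ising_magnetization(T=3.5))
```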
In our hypothetical simulated universe, all the simulator would have to do is input slightly different values of the fundamental constants and then hard-code some fundamental emergent laws like evolution by natural selection and the laws of chemical bonding. In fact, a particularly entertaining enterprise would be to run the simulation and see whether these laws emerge by themselves. The whole simulation would in one sense largely be a matter of adjusting the initial values, setting the boundary conditions and then sitting back and watching the ensuing fireworks. It would simply be an extension of what scientists already do with computers, albeit on a much larger scale. Once validated, the simulations could be turned into user-friendly tools or toys that children could use to simulate their own universes, holding contests to see whose creates the most interesting physics, chemistry and biology. Adults as well as children could thus participate in extending the boundaries of our knowledge of fundamental physics.
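The workflow I have in mind might look something like the skeleton below. Every name in it, ToyUniverse, its fields and its observables, is hypothetical, and the “dynamics” is a throwaway placeholder; the point is only the pattern of choosing constants, setting initial and boundary conditions, stepping the system forward and reading off emergent summaries rather than particle-by-particle detail.

```python
from dataclasses import dataclass, field
import random

@dataclass
class ToyUniverse:
    constants: dict                      # e.g. {"alpha": 0.0073}; all values are inputs
    n_particles: int = 1000
    boundary: str = "periodic"           # only the periodic case is implemented below
    state: list = field(default_factory=list)

    def initialize(self, seed=0):
        # The "initial values": random positions in a unit box.
        random.seed(seed)
        self.state = [random.random() for _ in range(self.n_particles)]

    def step(self):
        # Placeholder dynamics: a random walk whose step size depends
        # (arbitrarily) on one constant, wrapped with periodic boundaries.
        jump = 0.01 * self.constants.get("alpha", 0.0073) / 0.0073
        self.state = [(x + random.uniform(-jump, jump)) % 1.0 for x in self.state]

    def observables(self):
        # Emergent summary statistics instead of per-particle bookkeeping.
        return {"mean_position": sum(self.state) / len(self.state)}

u = ToyUniverse(constants={"alpha": 2 * 0.0072973525693})   # a "tweaked" universe
u.initialize()
for _ in range(100):
    u.step()
print(u.observables())
```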
Large-scale simulation of multiple universes could help break the impasse that both experiment and theory in fundamental physics are facing. Computation cannot completely replace experiment when the underlying parameters and assumptions are not well validated, but there is no reason this validation cannot grow along with our knowledge of the world from small-scale experiments. In fields like theoretical chemistry, weather prediction and drug development, computational predictions are becoming as important as experimental tests. At the very least, the results of such computational studies would constrain the number of potential experimental tests and provide more confidence when asking governments to allocate billions of dollars for the next generation of particle accelerators and gravitational wave detectors.
I believe that the ability to simulate entire universes is imminent, will be part of the future of physics and will undoubtedly lead to many exciting results. But the most exciting ones will be those that even our best science fiction writers cannot imagine. That is something we can truly look forward to.