Many scientists don’t know what they are doing. That is, they are so immersed in science that they often do not step outside it for a wider philosophical perspective on what it is they do, while remaining convinced that science is somehow more correct than other ways of doing things. For example, a scientist might argue that she can treat malaria better than a witch doctor can. The witch doctor, of course, will say the opposite. If you ask the scientist why she thinks she is right, she will say that she can demonstrate her efficacy with an experiment: take a large sample of malaria cases, treat some by her method and some by the witch doctor’s (and maybe keep a control group as well), then perform a sophisticated statistical analysis on the data collected from all these cases, thus showing that her method is better. Now suppose you object that her reasoning is circular: she has just used the scientific method to show that the scientific method is correct (thereby only really showing that the scientific method is self-consistent). If you refuse to let her use science to prove science right (after all, if the scientific method of proving things were already acceptable to you, you wouldn’t be questioning her in the first place), she will tend to get desperate and appeal to common sense, or even question your sanity (“Are you crazy? It’s obvious that witch doctor is a thieving fraud, taking people’s money and pretending to help them with his wacky chants,” etc.). And she will be left with a lingering suspicion that you have somehow tricked her with some sneaky rhetorical sophistry; she will continue to think that of course science is right: just look at what it can do!
So what’s going on here? I am not claiming that witch doctors (or astrologers, or parapsychologists, or faith-healers, or Uri Geller, or Deepak Chopra, or other charlatans) are just as good as scientists, or even that they are right about anything at all (they are not); what I am saying is that there is no neutral ground on which to stand and, from the outside as it were, proclaim the supremacy of science as the best avenue to truth. One must learn to live without such an absolute grounding. Even as clear-headed and careful a thinker as Richard Dawkins can sometimes get confused about this. At the end of an otherwise fascinating and inventive essay entitled “Viruses of the Mind” (Dawkins’s contribution to the volume Dennett and His Critics), in which he uses viruses as a metaphor for the various bad ideas (or memes) that “infect” brains in a culture (particularly the “virus” of religion), and also draws a parallel analogy with computer viruses, Dawkins asks if science itself might be a kind of virus in this sense. He then answers his own question:
No. Not unless all computer programs are viruses. Good, useful programs spread because people evaluate them, recommend them and pass them on. Computer viruses spread solely because they embody the coded instructions: ‘Spread me.’ Scientific ideas, like all memes, are subject to a kind of natural selection, and this might look superficially virus-like. But the selective forces that scrutinize scientific ideas are not arbitrary or capricious. They are exacting, well-honed rules, and . . . they favour the virtues laid out in textbooks of standard methodology: testability, evidential support, precision, . . . and so on.
Daniel Dennett spares me the need to respond to this very uncharacteristic bit of wishful silliness from Dawkins by doing so himself (and far better than I could):
When you examine the reasons for the spread of scientific memes, Dawkins assures us, “you find they are good ones.” This, the standard, official position of science, is undeniable in its own terms, but question-begging to the mullah and the nun–and to [Richard] Rorty, who would quite appropriately ask Dawkins: “Where is your demonstration that these ‘virtues’ are good virtues? You note that people evaluate these memes and pass them on–but if Dennett is right, people (persons with fully-fledged selves) are themselves in large measure the creation of memes–something implied by the passage from Dennett you use as your epigram. How clever of some memes to team together to create meme-evaluators that favor them! Where, then, is the Archimedean point from which you can deliver your benediction on science?”
[The epigram Dawkins uses and Dennett mentions above is this:
The haven all memes depend on reaching is the human mind, but a human mind is itself an artifact created when memes restructure a human brain in order to make it a better habitat for memes. The avenues for entry and departure are modified to suit local conditions, and strengthened by various artificial devices that enhance fidelity and prolixity of replication: native Chinese minds differ dramatically from native French minds, and literate minds differ from illiterate minds. What memes provide in return to the organisms in which they reside is an incalculable store of advantages — with some Trojan horses thrown in for good measure. . .
Daniel Dennett, Consciousness Explained
Below, Dennett continues his response to Dawkins…]
There is none. About this, I agree wholeheartedly with Rorty. But that does not mean (nor should Rorty be held to imply) that we may not judge the virtue of memes. We certainly may. And who are we? The people created by the memes of Western rationalism. It does mean, as Dawkins would insist, that certain memes go together well in families. The family of memes that compose Western rationalism (including natural science) is incompatible with the memes of all but the most pastel versions of religious faith. This is commonly denied, but Dawkins has the courage to insist upon it, and I stand beside him. It is seldom pointed out that the homilies of religious tolerance are tacitly but firmly limited: we are under no moral obligation to tolerate faiths that permit slavery or infanticide or that advocate the killing of the unfaithful, for instance. Such faiths are out of bounds. Out of whose bounds? Out of the bounds of Western rationalism that are presupposed, I am sure, by every author in this volume. But Rorty wants to move beyond such parochial platforms of judgment, and urges me to follow. I won’t, not because there isn’t good work for a philosopher in that rarefied atmosphere, but because there is still so much good philosophical work to be done closer to the ground.
Now, I happen to agree more with Rorty on this, but that is not the point. What is important is that Rorty, Dennett, and I all agree that there is no neutral place (for Archimedes to stand with his lever) from which we can make absolute judgments about science (the way Dawkins is doing), or about anything else. We must jump into the nitty-gritty of things, be pragmatists, and give up the hope of knowing with logical certainty that we are right.
So how do scientists go about their business then? How do they know when they are onto something? These are questions that many sociologists, anthropologists, psychologists, philosophers of science, and scientists themselves have tried to answer, and the answers have filled many books. One thing comes up again and again, however, especially when scientists themselves talk about what they do and how they do it: the importance of beauty. Scientists don’t just sit there dreaming up random hypotheses and then testing them to see if they are true; there are far too many possible hypotheses to work that way. Instead, they try to think of beautiful things. This intrusion of the aesthetic into the hard, cold, austere realm of science surprises many people, but it is a remarkably consistent theme. When Albert Einstein was asked what he would do if the measurements of starlight bending during the 1919 eclipse contradicted his general theory of relativity, he famously replied, “Then I would feel sorry for the good Lord. The theory is correct.” What he meant was that the theory is far too beautiful to be wrong. How do you tell when something is beautiful? That, I’m afraid, is a question too big for me. (Though if that kind of thing interests you, you may wish to have a look at this recent Monday Musing essay by Morgan Meis and the ensuing discussion in the comments area.) For now, we’ll have to make do with some you-know-it-when-you-see-it notion of beauty. (Kurt Vonnegut once said that to know if a painting is good, all you have to do is look at a million paintings. I can only mimic him and say that if you want to know what is beautiful in science, all you have to do is look at a lot of science.)
Yes, yes, I am slowly coming to my subject. (Hey, it’s my Monday Musing and I’m allowed to ramble on a bit!) We are now approaching the first anniversary of 3 Quarks Daily. The very first day that 3QD went online, July 31, 2004, I posted the sad news of Francis Crick’s death. Crick, of course, along with James Watson (and Rosalind Franklin, and Maurice Wilkins), was the co-discoverer of the molecular structure of DNA. (In possibly the most coy understatement ever published in the history of science, at the end of the momentous paper in which Watson and Crick detailed their discovery of the double helix–which can be unwound, each strand then pairing with fresh bases to form a new double helix identical to the original, thereby solving the problem of DNA replication–they wrote: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”) Crick won a Nobel for this work, but that is not all he did. He spent the latter part of his life as a distinguished neuroscientist, publishing much in this new field, including the book The Astonishing Hypothesis.
The years following the discovery of the structure of DNA were busy ones, not just for molecular biologists, but also for physicists and mathematicians (Crick himself had come to biology after obtaining a degree in physics), and specialists in codes, because the code instantiated in the double helix took some time to understand. George Gamow made significant contributions, and other physicists took a crack at the problem too, including a young Richard Feynman; even Edward Teller proposed a wacky scheme.
Let me now, finally, attempt to deliver on the promise of my title. At some point in time, this much was clear: the molecular code consisted of four bases, A, T, C, and G. These form the alphabet of the code. Somehow, they encode the sequences of amino acids which specify each protein. There are twenty amino acids but only four bases, so you need more than one base to specify each amino acid. Two bases will still not be enough, because there are only 4², or 16, possible combinations. A sequence of three bases, however, has 4³, or 64, possible combinations, enough to encode the twenty amino acids and still have 44 combinations left over. Such a triplet of bases specifying an amino acid is known as a codon. So how exactly is it done? Which combinations stand for which amino acids? Nature is seldom wasteful, so people wondered why a combinatorial scheme that allows 64 possibilities would be used to specify a set of only 20 amino acids. Francis Crick had a beautiful answer. As we will see, it was also wrong.
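Just to make the counting concrete, here is a tiny Python sketch of the arithmetic (my own illustration, nothing from the original papers; it simply enumerates the combinations):

```python
from itertools import product

bases = "ATCG"

pairs    = ["".join(p) for p in product(bases, repeat=2)]
triplets = ["".join(p) for p in product(bases, repeat=3)]

print(len(pairs))     # 16 -- too few to name 20 amino acids
print(len(triplets))  # 64 -- more than enough
print(64 - 20)        # 44 combinations apparently going spare
```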
What Crick thought was something like this: suppose you have a sequence of 15 bases (or 5 codons) which specifies some protein (remember, each codon specifies an amino acid), like GAATCGAACTAGAGT. This means the codon GAA (or physically, whatever amino acid that stands for), followed by the codon TCG, followed by AAC, and so on. But there are no commas or spaces to mark the boundaries of codons, so if you started reading this sequence after the first letter, you might think that it is the codon AAT, followed by CGA, followed by ACT, and so on. It is as if, in an English with no spaces and only three-letter words, you might read the first word in the string PATENT as PAT, or, if by mistake you started reading at the second letter (easy to do if you had whole books filled with three-letter words and no spaces in between), as ATE, or, starting at the third letter, as TEN, etc. Do you see the difficulty? This is known as the frame-shift problem. Now Crick thought: what if only a subset of the 64 possible codons is valid, and the rest are nonsense? Then it would be possible for the code to work in such a way that if you shift the reading frame in the sequence over by one or two places, what results are nonsense codons, which are not translated into protein or anything else. Again, let me try to explain by example. In the earlier English case, suppose you banned the words ATE and TEN (but allowed ENT to mean something); then PATENT could be deciphered easily, because if you start reading at the wrong place you just end up with meaningless words, and you can adjust your frame to the right or left. In other words, it would work like this: if ATG and GCA are meaningful codons, then TGG and GGC cannot be valid codons, because we could frame-shift ATGGCA and get those. Similarly, if we combine the two valid codons above in the other order, we get GCAATG, which if shifted gives CAA and AAT, which also must be eliminated as nonsense. This kind of scheme is known as a comma-free code, as it allows sense to be made of strings without the use of delimiters such as commas.
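To see the frame-shift problem and the comma-free condition in action, here is a small Python sketch (my own illustration; the function names and the toy codon sets are mine, not anything from the historical papers):

```python
def read_codons(seq, offset=0):
    """Read a base sequence as consecutive triplets, starting at the given offset."""
    return [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]

seq = "GAATCGAACTAGAGT"
print(read_codons(seq, 0))  # ['GAA', 'TCG', 'AAC', 'TAG', 'AGT'] -- the intended reading
print(read_codons(seq, 1))  # ['AAT', 'CGA', 'ACT', 'AGA']        -- shifted by one base
print(read_codons(seq, 2))  # ['ATC', 'GAA', 'CTA', 'GAG']        -- shifted by two bases

def is_comma_free(codons):
    """Crick's condition: no shifted reading of two valid codons written
    side by side may itself contain a valid codon."""
    codons = set(codons)
    for a in codons:
        for b in codons:
            pair = a + b
            if pair[1:4] in codons or pair[2:5] in codons:
                return False
    return True

print(is_comma_free({"ATG", "GCA"}))         # True: the shifted readings (TGG, GGC, CAA, AAT) are all nonsense
print(is_comma_free({"ATG", "GCA", "TGG"}))  # False: ATGGCA shifted by one base reads the "valid" codon TGG
```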
Now, Crick worked out the combinatorial math (I won’t bore you with the details, Josh) and found that, with triplets of 4 possible bases, one has to eliminate at least 44 of the 64 possibilities as nonsense codons to make a comma-free code, and that a code with the full 20 valid codons is in fact possible. Voila! That leaves exactly 20 valid codons for the 20 amino acids, saving parsimonious Nature from any sinful profligacy! This is what beauty in science is all about. Now, Crick had no evidence that this is indeed how the genetic code works, but the beauty of the idea convinced him that it must be true. In fact, the elegance of the scheme was such that, for many years afterward, attempts to work out the actual genetic code tried to remain compatible with it. Alas, it turned out to be wrong.
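For the curious, the “details” are short enough to sketch (this is the standard counting argument in my own words and code, not Crick’s notation). A codon like AAA can never be valid, because AAAAAA reads as AAA in every frame; and no two cyclic rotations of the same triplet (say ATG and TGA) can both be valid, because ATGATG shifted by one base reads TGA. That leaves at most one codon from each rotation class of the remaining 60 triplets:

```python
from itertools import product

def rotations(codon):
    """The cyclic rotations of a codon, e.g. ATG -> {ATG, TGA, GAT}."""
    return frozenset(codon[i:] + codon[:i] for i in range(3))

all_codons = {"".join(p) for p in product("ATCG", repeat=3)}

# Codons made of a single repeated base (AAA, TTT, CCC, GGG) are hopeless.
homogeneous = {c for c in all_codons if len(set(c)) == 1}

# Every other codon has three distinct rotations, at most one of which
# can appear in a comma-free code.
classes = {rotations(c) for c in all_codons - homogeneous}

print(len(all_codons))   # 64
print(len(homogeneous))  # 4
print(len(classes))      # 20 -- the upper bound on the size of a comma-free code
```

(The bound of 20 is in fact attainable, which is what made the coincidence with the 20 amino acids so seductive.)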
In the 1960s, when the actual genetic code was finally worked out in labs where people managed to perform protein synthesis outside the cell using strings of RNA, it turned out that there are real codons which the comma-free code theory would have eliminated, and this nailed the coffin of Crick’s lovely idea shut forever. In fact, more than one codon often codes for the same amino acid, while other codons act as start and stop markers, punctuation in the sentences of genetic sequences. It is now understood that nature is not being prodigal after all: it uses this redundancy as an error-correction measure, and computer simulations show that the actual code is nearly optimal when error correction is taken into account. So it is quite beautiful, after all. Still, why did so many scientists think for so long that Crick must be right? Because in science, as in life, beauty is hard to resist.
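For a concrete feel of what was actually found, here is a small, well-known slice of the standard codon table (a handful of entries out of 64, written in DNA letters; an illustration only, not the complete table):

```python
# A few entries from the standard genetic code (DNA alphabet); this is an
# illustrative slice, not the full 64-entry table.
codon_table = {
    "GAA": "Glu", "GAG": "Glu",                    # redundancy: two codons, one amino acid
    "GGT": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "AAA": "Lys", "AAG": "Lys",
    "ATG": "Met",                                  # also serves as the usual start signal
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",   # punctuation: end of the protein
}

# AAA is a perfectly good codon in the real code, yet no comma-free code can
# contain it (AAAAAA reads as AAA in every frame): the data flatly contradicted
# Crick's beautiful scheme.
```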
Have a good week!
My other recent Monday Musings:
The Man With Qualities
Special Relativity Turns 100
Vladimir Nabokov, Lepidopterist
Stevinus, Galileo, and Thought Experiments
Cake Theory and Sri Lanka’s President