This post continues my series on the symmetry principle, with apologies to anyone who has been holding their breath since my last post five weeks ago. That was a piece of conceptual ground-clearing in which I argued that esoteric-seeming distinctions can make a big difference to the answers we give to important questions. In this post and the next I want to illustrate the point by distinguishing between different senses of the claim—which historians sometimes equate with the symmetry principle—that "people don't believe things just because they are true."

This may seem like an academic exercise, but the stakes are high. One is the professional competence of historians of science: if the symmetry principle is a central tenet of our field, and we are unable to give a clear account of it, we look bad. A second is how historians of science engage with the public. I sometimes get the impression that, in the eyes of historians and sociologists of science, anyone who uses the words "truth," "evidence," or "reality" when discussing past or present science must be doing something wrong. If we do not make an effort to be more discriminating, we should not be surprised if the public is confused by our work or suspicious of it. A third is how scientific debates have been, or should be, carried out. This is the issue that motivated the Guardian post by Vanessa Heggie that prompted this series.

As I read that post, Vanessa used the symmetry principle to draw attention to the social, personal, political and institutional factors behind beliefs that scientists had in the past and that we now consider true. She seemed to be saying that we should focus on those factors instead of, or at least in preference to, factors such as the evidence that scientists advanced for their beliefs or the facts that they had in their favour. The present-day analogue, Vanessa suggested, is that when we debate issues like climate change and religion we should consider how our social or political situation might have shaped our personal convictions on those topics.

I expect that many present-day historians of science would agree with these sentiments. I disagree. I do think that social or political explanations for beliefs are important. But I think they are on a par with, rather than preferable to, more traditional explanations such as evidence or argument. More importantly, I think that the symmetry principle is irrelevant to that question. All the symmetry principle says is that we should give the same sort of explanation for true beliefs as we do for false ones. It is perfectly consistent with this principle to (for example) explain Galileo's belief that the moon was mountainous by appeal to the evidence he had in his favour—as long as we explain the beliefs of his rivals in the same way. The lesson for present-day debates, one might think, is not that we should pay attention to our own biases but that we should pay attention to the evidence or arguments advanced by people we disagree with.

But isn't evidence more or less the same thing as truth? And isn't it bad practice for a historian or a sociologist of science to let questions of truth and falsity interfere with their naturalistic explanations of past beliefs? The only way to respond to such concerns—and to clarify the symmetry principle and to engage properly with the public—is to distinguish carefully between different ways in which "questions of truth and falsity" can enter into our explanations of scientists' beliefs. So here goes...
1. People believe things because they believe they have evidence in favour of those things.

As I asserted in my first post in this series, it is silly to deny that evidence can play a major role in scientific debates. The evidence might not be very good, of course. And there are not many beliefs, if any, that are fully explained by the evidence possessed by the believer. But there is nothing wrong-headed about saying (for example) that Darwin believed in evolution partly because of his observations of the distribution of finches on the Galapagos islands. Likewise, there is nothing wrong-headed about trying to shake someone's confidence in an omniscient, benevolent God by observing the amount of needless suffering in the world. Anyone who denies this is likely to be accused, in my view justly, of being "tainted with relativism" or of "misunderstand[ing] how science works at a rather fundamental level" (as per the comment by DavidColquhoun in reply to Vanessa's post).

2. People believe things to be true because they believe them to be true.

In contrast to 1., this is a terrible explanation of what people believe. It is just wrong-headed to say, for example, that someone believes in God because they believe in God. It is like saying that the death of Maggie Thatcher was caused by the death of Maggie Thatcher. That's just not how explanations work. Nor is it how arguments work—people don't go around saying "God exists, therefore God exists."

3. The fact that something is the case can explain why people believe it to be the case.

It is quite common to lump this explanation together with the obviously defective 2. This is a mistake, because 3. is much more plausible than 2. Here's an illustration. It is silly to say (as per 2.) that Galileo appealed to the rockiness of the moon in order to justify his belief that the moon was rocky. But it is not silly to say (as per 3.) that the rockiness of the moon caused his belief that the moon was rocky. After all, the moon is rocky; and the rockiness of the moon was (partly) responsible for the shapes that Galileo saw in his telescope, which were in turn (partly) responsible for his belief that the moon was rocky. (I owe this point, though not the illustration, to this article [paywall] by the philosopher Nick Tosh; see especially pp. 691-92.)

In fact, it is hard to see how scientists could consistently say true things about the natural world if their beliefs were not partly caused by the natural world. If there were no causal link between nature and scientists' beliefs about it, it would be a remarkable coincidence if the latter accurately described the former. (Again I owe this point to Nick Tosh, who discusses it on pp. 187-88 of this paper. I expect that this intuition, or something like it, underlay TonyLloyd's remark that "the trouble with this symmetry approach is that it does not just exclude truth but any relation of theory to the world.")

Nevertheless, in my experience this sort of explanation is rarely used by even the most "Whiggish" historians of science. This may be because a state of affairs can explain the false things, and not just the true things, that people believe about it. For example, one of Galileo's opponents was a Jesuit professor who thought that the blotches on the moon were due to density variations inside the moon (rather than to its surface relief). This belief, no less than Galileo's, was partly caused by the shapes the Jesuit professor saw in a telescope, which in turn were partly caused by the rockiness of the moon.
4. The fact that something is the case (about nature) can explain why people believe something else is the case.

Here is an example of what I have in mind. William Gilbert was an English natural philosopher who believed that electrostatic repulsion, unlike electrostatic attraction, was not a real effect. A historian has explained this by noting that Gilbert used chaff and paper rather than metal objects as detectors of electricity. If he had used gold leaf instead, the argument goes, he would probably have observed the gold leaf leaping energetically from charged objects, rather than simply falling off them as chaff and paper tend to do. This explanation uses a present-day scientific commonplace (that metals are unusually good electrical conductors) to account for a different but related belief that Gilbert held (that there is no such thing as electrostatic repulsion). To me this explanation is unobjectionable. In fact, without it we do not have a full explanation of the historical fact that Gilbert did not believe that repulsion was a real electrical effect.

5. People believe things because of factual evidence.

This one is easily confused with 3., especially when it is expressed as "people believe things because of the facts." That phrase could mean a) that people believe X because X is the case, or b) that people believe X because they have access to facts that count as evidence for X. The distinction matters: a) is plausible, but rarely used by historians (as per 3. above). By contrast, b) is often used by historians and laypeople, and with good reason—it is just a special case of 1., a special case in which the evidence comes in the form of "facts."

Now, people in science studies tend to be as suspicious of "facts" as they are of "truth," as indicated by their tendency to surround both words with scare-quotes. Here as elsewhere, ambiguity has done much mischief. There are at least four senses of "fact" in common usage:
I. Any true statement. Here "fact" is to be contrasted with "opinion."

II. A particular kind of statement, namely one that is about raw data or empirical observations. Here the contrast is with an abstract or theoretical statement, as when we tell someone to "get their facts right." (Note that this phrase only makes sense if there is such a thing as a "false fact.")

III. A sub-set of the statements in II., namely those that are true rather than false, as when we use "it's a fact" to mean "it's true." (When "fact" is used in this sense, there is no such thing as a "false fact.")

IV. A "matter of fact," i.e. a state of affairs in the world rather than a statement about the world.

Sometimes popular authors exploit this four-fold ambiguity for rhetorical advantage. For example, in his book Why Evolution is True, the biologist Jerry Coyne often writes "evolution is a fact" when he means to make the (perfectly reasonable) claim that "the theory of evolution is almost certainly true." Here he is using "fact" in the first sense listed above. But by using that word he invites readers to think that the statement "animals evolve by natural selection" is a datum rather than a theory inferred from the data. This kind of sleight-of-hand is annoying, whether deliberate or not. But it is no more annoying than the practice, fairly common in science studies, of saying "facts are socially constructed" or "facts change over time" or "facts can always be disputed" without saying which sense of "fact" is intended.

To get back to the point: "X believed Y because of the factual evidence" is a perfectly acceptable explanation if "factual" is intended in sense II above. This is because the distinction between "fact" and "theory" (or something similar) dates back at least to Aristotle, and because people routinely appeal to data or observations or phenomena to support their theories or conjectures or explanations. Of course we should not forget that it usually requires quite a bit of work to go from raw sense data (like "there is a yellow patch in the top-left of my visual field") to a fact in sense II (like "a third of Americans are obese"). And it takes more work again to go from a set of facts to a theory based on them. But these truisms do not mean that it is an error to say that people believe things because of the factual evidence.

6. People believe us when we are dogmatic about our beliefs.

Perhaps the thing that historians of science mean to reject, by way of the symmetry principle, is the practice of believing one's own position so strongly as to disregard all counter-arguments. They are against the dogmatist who reasons like this: I am right; therefore all arguments against my position must be misleading arguments; therefore I can safely ignore everything my critics say. (If I read her correctly, Vanessa was describing this kind of dogmatism when she tweeted that there was "a clear link between 'it's true, therefore there's evidence, therefore belief.'")

There is obviously something wrong with this dogmatic line of reasoning: if it were sound, it would give us grounds for never changing our minds in the light of new evidence. But it is not obvious exactly what is wrong with the argument—so much so, in fact, that philosophers have given it a name: the "Kripke-Harman dogmatism paradox." All the more reason, then, to think that people are vulnerable to that way of thinking, whether they realise it or not.
But note that the avoidance of dogmatism does not require us to turn away from questions of evidence and towards questions of social, political, or institutional bias. The solution to dogmatism, one might think, is not to set the evidence aside but to give a fairer appraisal of a greater range of evidence. The historiographical equivalent is to give as much attention to the evidence advanced by the "losers" (i.e. past scientists whose theories we consider false) as we do to the "winners."

***

It should be clear by now that the plausibility of the claim "people believe things because they are true" depends crucially on how you interpret that claim. In the next post I'll distinguish another five interpretations, beginning with one of the most important ones, namely "the truth-value of any given theory is obvious once you decide to consider the evidence."