Rabid Quill / A personal blog

No, Actually Falsifiability is Really Important

2018-04-07

I often compare the work of my favorite non-fiction authors to more concrete experiences to describe what it's like to read them. Reading G.K. Chesterton is like biting into a perfectly cooked steak. Reading C.S. Lewis is like being brought to the top of a high mountain for a clear view of things that used to be too close and confusing. Reading Ludwig von Mises is like watching a chess master patiently set up a surprise checkmate.

Reading Adam Becker’s article against falsification felt like slogging hip-deep through a swamp at night.

His thesis is something like this: in science, people "fetishize" the scientific method's imperatives of falsifiability and observability, insisting on them when the path to new ideas is actually less rigid. He uses the historical examples of Mercury and the neutrino to make the case that science doesn't follow those principles to the letter, nor should it. Newtonian laws couldn't explain the motion of Mercury. Mercury "falsified" Newton, but scientists stuck with the theory for a long time anyway, hypothesizing a planet Vulcan that was at the time "unobservable." When Einstein finally published his theory, Mercury's orbit could be explained without any invisible extra planets. And Wolfgang Pauli, to deal with observations that apparently falsified the conservation of energy, hypothesized an extra particle, the neutrino, which remained unobservable for decades until unprecedented technology was brought to bear on the problem.

If he were saying that the scientific method is not the only path to useful forms of knowledge and truth, I would completely agree. But here he is denigrating falsifiability – the absolute core of establishing legitimate claims about the material world – because it apparently stifles our creativity in finding new theories.

He says, “So invoking the invisible to save a theory from falsification is sometimes the right scientific move. Yet Pauli certainly didn’t believe that his particle could never be observed. He hoped that it could be seen eventually, and he was right. Similarly, Einstein’s general relativity was vindicated through observation. Falsification just can’t be the answer, or at least not the whole answer, to the question of what makes a good theory. What about observability?

“It’s certainly true that observation plays a crucial role in science. But this doesn’t mean that scientific theories have to deal exclusively in observable things. For one, the line between the observable and unobservable is blurry – what was once ‘unobservable’ can become ‘observable’, as the neutrino shows. Sometimes, a theory that postulates the imperceptible has proven to be the right theory, and is accepted as correct long before anyone devises a way to see those things.”

He then cites Ernst Mach's insistence that atoms weren't real – now considered incorrect – as an example of hewing too closely to "observability."

He concludes, “Some of the most interesting scientific work gets done when scientists develop bizarre theories in the face of something new or unexplained. Madcap ideas must find a way of relating to the world – but demanding falsifiability or observability, without any sort of subtlety, will hold science back. It’s impossible to develop successful new theories under such rigid restrictions. As Pauli said when he first came up with the neutrino, despite his own misgivings: ‘Only those who wager can win.’”

All of this looks to me like harping on those two ideas at the expense of the scientific method – of which they are only two pieces. I think Becker wouldn't see a conflict in these historical examples if he thought of the scientific method as a way of gradually whittling away errors while using the best-available model given current knowledge. Scientists of the past clung to Newton, imagined neutrinos, and argued over atoms because falsifiability and observability are only rubrics in service of science's main goal: the refinement of conceptual models of the world that accurately predict future observations. Becker even notes in the article that Newtonian physics was only abandoned when a viable alternative emerged with Einstein.

There is a principle of subsidiarity at work here. When you get an observation your model doesn't predict, are you dealing with a strange exception involving variables you didn't know about, or do you need to throw out the whole theory? If Newtonian physics accurately predicted Neptune and seemed to handle everything else just fine, how likely is it that Mercury is a red flag that the whole model is off? You don't start by undermining the entire framework. You start by trying to explain the anomaly in terms of the framework. The disagreements over neutrinos and atoms were merely disagreements about what level of analysis a problem had risen to. Falsifiability, properly applied, didn't demand throwing Newton's laws out the first time Mercury was a few arcseconds off in the sky – and it shouldn't have. What would they have replaced them with? The anomaly did signal that something was amiss, which is why some went hunting for the planet Vulcan, and why relativity was celebrated as the glorious eureka moment that it was.
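To put the size of that anomaly in perspective (my back-of-the-envelope figures, not Becker's): general relativity predicts an extra perihelion advance per orbit of

\[
\Delta\phi = \frac{6\pi G M_\odot}{c^2\, a\,(1 - e^2)}
\]

which for Mercury (semi-major axis a ≈ 5.79 × 10¹⁰ m, eccentricity e ≈ 0.206) comes to roughly 5 × 10⁻⁷ radians per orbit – about 43 arcseconds per century, the famous unexplained residue. Against a framework that had just predicted the existence of Neptune, a discrepancy that small was never plausible grounds for scrapping everything.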

I am not really involved in the science community, so maybe there are falsifiability fetishists who pan perfectly reasonable speculations on the grounds that they aren't falsifiable this instant. Maybe there are some who insist on only entertaining new models that are observable right now, rather than observable and falsifiable in principle. But it might also be that post-modernism has penetrated the sciences, and now even scientists are in the dark as to why they ever followed the scientific method in the first place. A post-modern view has a hard time choosing a level of analysis – or even seeing that there are levels of analysis, or ranking the usefulness of various analyses – since it treats all analyses as potentially equally valid. The fact that Neil deGrasse Tyson rolls his eyes at the very mention of philosophy makes me wonder.

There actually is a right way to attack a gap in our knowledge – a failed experiment, an anomalous observation – and the scientific method describes it.

Addendum: Someone made the point that Einsteinian physics never falsified or refuted Newton. Rather, Einstein showed Newtonian mechanics to still be correct for all practical purposes, but only at scales above the quantum and below the relativistic. All the effects are always present, but depending on the scale of your analysis, some can be safely ignored. This strengthens the argument that in the Newton–Einstein case, falsification and observability were always tools in service of progressively eliminating errors. Nothing was overturned; a great deal was preserved while a new dimension of understanding was added.
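One compact way to see this (my gloss, using the standard weak-field limit, not the commenter's words): for weak gravity and speeds far below light, general relativity's equations reduce to Newton's,

\[
g_{00} \approx -\left(1 + \frac{2\Phi}{c^2}\right)
\quad\Longrightarrow\quad
\frac{d^2 \mathbf{x}}{dt^2} \approx -\nabla\Phi,
\]

with corrections of order v²/c² and Φ/c² – around 10⁻⁸ for planetary orbits. Newton isn't overturned at those scales; he's recovered as the limiting case.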