Ready, fire, aim: the myth of the scientific method

Scientific literature is like social media: its content disproportionately comprises successes and achievements. Just as social media seldom features mundane necessities like trips to the supermarket, scientific papers seldom feature abandoned experiments or fruitless pursuits. In fact, we generally work backwards from the results and conclusions when writing these papers. We start with the answer and present only the relevant methods. To a non-scientist, this may sound dishonest and deceptive, but it’s not. It’s for the reader’s benefit: a linear narrative is much easier to follow than the actual story with its many tangents, setbacks, and realisations.

Anyone exposed to the process of scientific research quickly learns that it seldom follows the so-called scientific method, those dispassionate experiments meant to objectively test hypotheses. Science is better described as Ready, Fire, Aim. Yes, in that order. This phrase, borrowed from Neil Gershenfeld, concisely captures how science is an iterative procedure without a specific target. Put in some groundwork to figure out the general direction (Ready), but take what might otherwise be a shot in the dark (Fire), then spend time making sense of what you hit (Aim). You may well fail and hit nothing, but as Gershenfeld elaborates, you can’t hit anything unexpected by aiming first.

If the result confirms the hypothesis, then you’ve made a measurement. If the result is contrary to the hypothesis, then you’ve made a discovery. – Enrico Fermi

Failure is central to science, so much so that Stuart Firestein used the word as the title of one of his two books about how science works. This post borrows heavily from the book and its prequel Ignorance. The Fermi quote above, for example, is borrowed.

Firestein elaborates on the idea behind the Fermi quote with the story of Nobel prize winner Alan Hodgkin. If his students or postdocs had results that confirmed the expected, he would continue about his day. But unexpected results made him sit down and light his pipe. To paraphrase Isaac Asimov (probably), the most exciting phrase in science is “That’s funny …”

Of the scientific papers I’ve written, the one I’m most proud of stemmed from a more modern version of a “that’s funny” moment, which is “that looks wrong, there must be a bug in my code”. Turns out there wasn’t a bug, but rather an excellent premise for what became a largely unplanned paper. Ironically, the proposed work that preceded and led to this “that’s funny” moment was described as following the scientific method. My PhD committee member who said this meant it in a complimentary way, presumably because using the scientific method implies a level of diligence. Fortunately for me, the scientific method didn’t prevail and I discovered something new rather than making what might have been just an incremental advance.

So if I wasn’t applying the scientific method, what was I doing? I was doing science. As Firestein describes it, “science is an accumulation of procedures and modes of thinking together with facts. A mechanism for making mistakes that are productive and not catastrophic”. There’s no one way to do it. Everyone applies their own idiosyncrasies. Regardless of the exact procedures, continued investigation up many dark alleys and down many rabbit holes often progresses toward facts and scientific consensus or, at least, reasonable evidence and new ideas.

Don’t get me wrong: there’s a time and place for the scientific method. Certain fields need strictly defined hypotheses and rigorous protocols. In these cases, developments like registered reports (peer review before and after data collection) make sense. You can’t unsee data. In many fields, however, it doesn’t make sense to set up controlled experiments, nor do you need a hypothesis in order to write a paper. Take evolution, for example. Darwin didn’t just think of an evolution hypothesis and then decide to artificially separate some finches to see if they would change over generations. That would be awfully time-consuming. Instead, he noted some curious differences between the birds on different islands and worked backward to explain why. In doing so, he developed his famous theory. (This is my poorly researched, non-biologist’s impression about how his theory developed. It’s likely incorrect. Don’t quote me on it.)

I’m glad to be in a field (oceanography) in which, like evolution, it doesn’t make sense to set up controlled experiments. Heck, in oceanographic fieldwork, if you get your gear back, it was a successful program. If it recorded data, that’s icing on the cake. There’s something liberating about having elements of an experiment out of your control. The constraints of working with only what you have can drive both creativity and productivity. (It’s hard to let perfect be the enemy of good when you’re not in complete control of the outcome.)

“Let’s get the data, and then figure out the hypothesis” is Firestein’s advice to his own grad students. Not only is this quote reminiscent of the titular Ready, Fire, Aim, it also resonates clearly with my own experience and, I imagine, that of many others. I’ll often have half a paper written before really understanding what the data tell me. Writing is when ideas are clarified and concepts begin to make sense. It’s a much more fluid process than the scientific method with its clearly defined steps and binary outcome of whether or not the hypothesis was correct. This flexibility is analogous to an interview in which, as Deborah Blum notes, if you’re too rigid in your prep work, too obsessive about your written questions, you lose those moments where the story may open up into something more.

But so what if there’s a disconnect between the scientific method and how science is actually practised? Well, there are three major problems: science is largely funded as if the scientific method is predominant; undergrad science programs that overly embrace the scientific method give a false impression of actual scientific research; and the general public end up with misconceptions about the veracity of individual scientific studies. Let’s consider each in turn.

Science is expensive, but sporadic breakthroughs more than make up for the cost. However, grant funding agencies want certainty that their money is going to good use. Perhaps counter-intuitively, by encouraging scientists to formulate long-term plans with measurable targets, accepted methodologies, and explicit hypotheses, grant agencies are spending their money ineffectively. In an open letter to The Guardian entitled We need more scientific mavericks, a number of well-established scientists called for a change in funding toward more high-risk, high-reward projects. They note that “nowadays, fields where understanding is poor are usually neglected because researchers must convince experts that working in them will be beneficial.” But big discoveries are just as likely, if not more likely, to arise outside of fashionable fields. (This is a paraphrasing of Richard Feynman that I’ve used before to help explain how the idea of scientific genius has evolved.)

Firestein devotes a whole chapter to this issue of funding. He notes that the National Institutes of Health gave out 78 grants in 2013 for high-risk, high-reward projects. It’s not hard to infer what that implies about the other 5000 grants funded. Related to this lack of risk is “curiosity-driven research”. Unfortunately, this label is used as a criticism and a reasonable basis for rejecting an application even though, as I’ve argued before, curiosity-driven research is primarily what scientists do. Firestein has an elegant suggestion as a compromise between high-risk ventures and the granting agencies’ desire for ostensible productivity: hedge our bets. Try many approaches, expecting that many will fail. His analogy is finding a lost camper. We shouldn’t go looking for the camper in a single-file line. We should fan out. Sure, most people won’t be successful individually, but together there’s a better probability of combined success.

The rigid and procedural approach to research imposed by grant funding is similar to the way it is often conveyed in undergraduate science programs, particularly the lab components. Labs typically start with a list of instructions that the students follow in a recipe-like fashion. There’s certainly an element of discovery and important experience with equipment and protocols, but the generalisable concept of deciding what methods to apply and how to answer the question is missing. Similarly, a lab report is expected to contain a step-by-step guide that would hypothetically allow someone else to recreate the experiment. Although that may seem like the point of a methods section in a scientific paper, it isn’t. Direct replications are few and far between and of limited value. Instead, a methods section highlights the validity of the procedures used and how reasonable measures have been taken to account for possible biases and confounding variables. The section should succinctly demonstrate that the conclusions are reached in a credible manner.

It’s not well recognised by the general public that scientific experiments may still contain bias, confounding variables, or incorrect interpretations. While efforts are made to avoid these, the results that follow do not constitute a fact, but merely evidence. Given enough evidence, built up over numerous studies, we reach scientific consensus. But individual studies should be treated with caution. Peer review is not a silver bullet, even though it’s sometimes considered a gold standard. In fact, the label peer review can confer a false sense of security to those unfamiliar with the process. This is particularly important when it applies to policy decisions. Peer review merely means that the paper has been vetted by two or three other scientists. It is immediately following this process that scientific results are disseminated to the public via mass media. This is months or years before the scientific community as a whole will have enough time to truly corroborate or dismiss the conclusions.

Ultimately, there’s a lot more imprecision, conjecture, and misstepping in science than you might expect. But scientific research is also much more fun, subjective, and creative than it gets credit for. And even when things don’t go as planned, it all still works out in the long run, because an expert is simply someone who has made all the mistakes they can in a narrow field. These are Niels Bohr’s words, by the way, which I discovered (of course) in one of Firestein’s books. Seriously, go read them.

Author: Ken Hughes

Post-doctoral research scientist in physical oceanography
