As scientists, we record our findings in perpetuity in PDFs: literally, simulations of pieces of paper. It’s time to be more dynamic and embrace a proliferation of media types. We don’t need to get rid of the notion of a paper or stop using a PDF as the version of record. But we do need to complement them with something less static. What follows is an approach I recently took using video.
The final sentence of my latest paper (preprint) steers the reader to a video that stands in place of a Conclusion section. And I’m guessing this video is a much more compelling Conclusion than any possible combination of words.
Here’s the gist of the final paragraph (paraphrased to avoid jargon):
Our simulation was made possible by tuning against measurements from a new instrument. This observation-informed simulation depicts instabilities as they evolve throughout the day. It is best appreciated as an animation (doi.org/10.5281/zenodo.4306935).
Too many scientific figures are ugly. I see three possible reasons:
Laziness: scientists could make nice figures, but don’t put in the effort
Obliviousness: scientists are unaware their figures are ugly
Indifference: scientists care only about the data, but not their presentation
Take the following published scientific figure (suitably disguised):
Let’s list the problems: (1) Space is poorly used and data are cramped. (2) Text is bold for no reason. (3) Multiple fonts are used. (4) Tick marks are barely visible. (5) Some labels don’t fit in their respective boxes. (6) Axis values are unnecessarily repeated. (7) Dashed and dash-dotted lines are ugly. (8) Mathematical symbols are not italicised.
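Several of these fixes take only a line or two in a plotting library. Below is a minimal sketch in matplotlib; the data and labels are invented, but the settings shown address real items from the list: one font at regular weight, visible tick marks, no repeated axis values, solid lines, and italicised mathematical symbols.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

# One font family, regular weight, and visible tick marks, set once
# for the whole figure rather than tweaked per axes
plt.rcParams.update({
    "font.family": "sans-serif",
    "font.weight": "normal",
    "xtick.major.size": 5,
    "ytick.major.size": 5,
})

t = np.linspace(0, 10, 200)                  # made-up data
fig, axes = plt.subplots(2, 1, sharex=True)  # sharex avoids repeating axis values
for ax, freq in zip(axes, (1, 2)):
    ax.plot(t, np.sin(freq * t), linestyle="solid")  # solid, not dash-dotted
    ax.set_ylabel(r"$\eta$ (m)")             # mathtext italicises the symbol
axes[-1].set_xlabel(r"$t$ (s)")
fig.tight_layout()                           # stops labels overflowing their boxes
```

None of this is a substitute for judgement about what the figure should show, but it removes the excuse that good defaults are hard work.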
Save for the occasional pun in the title, scientific papers seldom contain intentional humour. But there’s entertainment to be had if you have the right mindset. Let me show you.
Relatability can be the basis of a good laugh. And as a scientist who routinely uses time series data, I can relate to the struggle of unwanted gaps in a dataset. So I was entertained when I came across the following sentence:
No data are available for 1991 and 1992 because the volcanic eruption of Mt Pinatubo in 1991 contaminated the signal. (ref)
Why, exactly, am I entertained, you ask? Partly, it’s the notion of a very expensive satellite being thwarted by a bit of ash. More so, it’s that the sentence is the epitome of scientific writing. A freakin’ volcanic eruption messes up two years’ worth of data, and yet it’s described in the same matter-of-fact tone as the other technical details like the satellite’s pixel resolution. Good luck finding any other type of writer who recounts a long-lived effect of a natural disaster in a single sentence.
“A computer gives the average person, a high school freshman, the power to do things in a week that all the mathematicians who ever lived until thirty years ago couldn’t do.” That’s Ed Roberts quoted in Hackers, a book published in 1984. So let me update his quote with my own: “My laptop gives me the power to run simulations in an afternoon that the fastest computers thirty years ago would have struggled with”.
This power has a downside. Computers are so fast these days that I’ve become lazy—mathematically speaking. A few decades ago, in my field of physical oceanography, it was routine to manipulate partial differential equations and solve complex integrals. I can do these things, if I put my mind to it. But I seldom do; there’s no need. These days, even ordinary differential equations that I learned to solve in undergrad get plugged into Mathematica most of the time or relegated to some less-than-perfect numerical method. And I can’t remember the last time I did multiplication longhand.
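To make the point concrete, here is the kind of less-than-perfect numerical method that now stands in for pencil-and-paper work: a forward-Euler integration of a damped harmonic oscillator. The equation and all parameters are generic illustrations, not anything from my own research.

```python
import math

# Forward-Euler integration of the damped oscillator
#     x'' + 2*zeta*omega*x' + omega**2 * x = 0
# (parameters are made up for illustration)
omega, zeta = 2.0, 0.1
dt, steps = 1e-4, 100_000        # integrate out to t = 10
x, v = 1.0, 0.0                  # initial displacement and velocity

for _ in range(steps):
    a = -2 * zeta * omega * v - omega**2 * x   # acceleration
    x, v = x + dt * v, v + dt * a

# The analytic solution decays within the envelope exp(-zeta*omega*t),
# a quick sanity check on the numerics
envelope = math.exp(-zeta * omega * dt * steps)
```

An undergraduate version of me would have written down the exponential solution directly; the numerical version takes thirty seconds and no thought, which is exactly the laziness I’m describing.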
During her rise, but before becoming a poker champion, Maria Konnikova was counselled by her coach that she was winning prize money in too many tournaments.
Wait, why wouldn’t she want to win prize money in every tournament? And what’s that got to do with a post about productivity in science? A loose answer to both questions: nonlinearity.
Maria’s initial goal in tournaments was to survive until enough other players had lost so that she reached the threshold, say the top 15%, to earn prize money. To reach this threshold, she was playing cautiously. Too cautiously, that is, for a realistic shot at the big money that goes to the top-placed finishers. Given how poker and its payouts work, a good player is better served by aiming high and winning a few large prizes (hence incurring many failures) compared to having many small wins.
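A toy expected-value calculation makes the nonlinearity explicit. The probabilities and prize amounts below are invented purely for illustration; the point is only that rare large payouts can dominate frequent small ones.

```python
# Invented numbers: a cautious strategy cashes often for small prizes;
# an aggressive one usually busts but occasionally places near the top.
cautious = {0: 0.85, 100: 0.15}                 # prize -> probability
aggressive = {0: 0.95, 100: 0.02, 2000: 0.03}

def expected_value(strategy):
    """Expected prize for a dict mapping prize -> probability."""
    return sum(prize * p for prize, p in strategy.items())

ev_cautious = expected_value(cautious)      # frequent small wins
ev_aggressive = expected_value(aggressive)  # rare large wins
```

With these made-up numbers, the aggressive strategy loses far more often yet has the higher expected payout, because the top prizes are disproportionately large.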
Science poses the same conundrum. Instead of poker chips, we’re betting time. You can spend years on a high-risk, high-reward project and, if you’re lucky, you make a big breakthrough. Or you play it safe and produce incremental contributions.
Novelists use an average of 100 clichés for every 100 000 words, or about one every four pages. That’s what Ben Blatt found by comparing a range of novels against a list of 4000 clichés. How does scientific writing compare?
In one sense, scientific writing avoids clichés. A scientist isn’t going to write that their new results put the nail in the coffin of the outgoing theory, that they were careful to dot their i’s and cross their t’s so as to follow the methods of Jones et al. by the book, that Brown et al.’s finding is a diamond in the rough, or that two possible interpretations are six of one and half a dozen of the other.
In another sense, scientific writing is full of clichés. Our writing often feels like filling in the blanks: the results of this study show X, these findings are in good agreement with Y, or Z is poorly understood and needs further study. Need more examples? Check out the Manchester Academic Phrasebank, a collection of phrases from the academic literature that are “content neutral and generic in nature.”
“There is this scientific convention of: ‘You put the images on one side, then you put the text to decipher it on the other side.’” That’s Jonathan Corum, science graphics editor for the New York Times, politely critiquing one of the ways in which a typical scientific paper creates unnecessary work for the reader, or “cognitive overhead.”
Decipher is the key word above (and a word I’ll use again below). Where deciphering is necessary, it must precede understanding; but that doesn’t mean it should be necessary in the first place. “No one intends to build a product with large cognitive overhead, but it happens if there isn’t forethought and recognition for it.”
Einstein had it easy as a scientist. His most famous paper had no references and his work was seldom peer reviewed. In one instance in 1936, he withdrew a paper submitted to Physical Review on the grounds that he had not authorised it to be shown to a specialist before publication. In another instance, he asserted:
Other authors might have already elucidated part of what I am going to say. […] I felt that I should be permitted to forgo a survey of the literature, […] especially since there is good reason to hope this gap will be filled by other authors.
Einstein, of course, didn’t actually have it easy—being forced to flee his native Germany is the obvious counterexample. And he faced stiff competition in the scientific arena. I mean, have you ever been to a scientific conference in which half of the attendees had or would win a Nobel prize?
Feeling like your scientific papers aren’t getting the attention they deserve? Wanna bump up your citation counts for the next decade? Then consider dying young. It apparently helps: a posthumous spike in recognition arises owing to the promotional efforts of colleagues.
This morbid example is but one of many arguments that citations in the scientific literature are not a true meritocracy. Another example: last month I hypothesised that many papers are cited only because they’re new, not because their content is new. It makes me think there’s a better way to rank references.
Scorning citation metrics is a favourite pastime of scientists (up there with scorning p values). The standard argument is that distilling a study’s quality to a single number is simplistic. But what if we double down? What if we focus more on numbers when it comes to citations?