The 100-scientific-papers rule

If you’re ready to submit a scientific paper, you will have read 100 related papers.

Why 100? Well, that advice has no basis more reliable than my own meandering experience. It’s my take on what it takes these days to be well versed in a specific topic and its broader background.

A typical scientific paper these days includes 30–50 references. Personally, I’ve gone as low as 24 and as high as 77. Twenty years ago, these numbers would’ve been lower, perhaps half as many. But rather than dwell on issues of inflation of the academic coin, we’ll just stick with 30–50 papers as our rough guess for now.

By the time you’re writing your own paper, you should’ve read more papers than you cite. And if you do the math, I’m perhaps implying that you should read 2–3 papers for every one that gets cited. Explore the literature beyond the essentials, but only until you hit the point of diminishing returns. Reasonable advice, right?

Continue reading “The 100-scientific-papers rule”

Jupyter Notebooks are gone from my scientific workflow

TL;DR: I’ve just learned that the text editor Sublime Text can display images within Markdown files. Gone therefore is my need to use Jupyter Notebooks.


I was never a true convert to Jupyter Notebooks. I used them for several years, and saw their appeal, but they just didn’t quite feel right to me.

Most complaints against Notebooks are technical ones: they’re awkward to version control, they’re hard to debug, and they promote poor programming practices. But these issues are tangential to my complaints against Notebooks, which are less concrete:

  • I’m always scrolling. It’s inefficient.
  • I don’t want to do work in a browser. Maybe it’s a weak reason, but I like keeping my scientific and programming tools separate from the browser.
  • Editing and navigating Notebooks feels clumsy. Maybe it’s a lack of practice, but I’d rather leverage the time I’ve invested in learning and setting up my text editor than spend time learning a bunch of new shortcuts specific to Notebooks.
Continue reading “Jupyter Notebooks are gone from my scientific workflow”

Non-scientific software that helps me get science done

This is a shout-out to all the software that helps my science happen despite not necessarily being developed for scientific purposes.

Fair warning, the list skews toward Linux programs since that’s what I use in my day-to-day work.

Tmux

I spend a lot of time at the command line. Or rather, command lines (note the plural). I often have four open at once. And I want to see all four at once, and jump back and forth between them all. Separate terminal windows or tabs don’t cut it. But Tmux does.

Here’s a pared-down example of how I might typically use Tmux: two panes, one for editing text and the other for exploring directories.

Not gonna lie, Tmux is awkward to start with. The default keyboard shortcuts aren’t intuitive, simple things like copy/paste don’t necessarily work as you’d expect them to, and many online resources are outdated because older versions of Tmux used configuration commands that are no longer compatible.

But Tmux is well worth the learning curve.

Continue reading “Non-scientific software that helps me get science done”

Introductions in scientific papers can give warped and inflated perspectives

A direct and quantifiable impact on science to come out of my PhD was the 50-odd times that I brewed coffee for the department morning tea. Scientists turned up and got coffee; I got thanked for helping make that happen.

Despite its impact, brewing coffee is not listed on my CV. Instead, I have publications. Yet, compared to coffee, the direct impacts of these publications are hard to define.

Continue reading “Introductions in scientific papers can give warped and inflated perspectives”

Does your scientific paper smell?

In computer programming, code smells are “surface indications that usually correspond to deeper problems in the system”. Duplicated code is one example. Copying a code fragment into many different places is generally considered bad form; Don’t Repeat Yourself is a well-known principle of software development. However, duplicating code can be beneficial if, say, it makes the code easier to read and maintain.
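To make the duplication smell concrete, here’s a minimal Python sketch (the temperature-conversion example is invented purely for illustration):

```python
# Smelly: the same conversion formula is copied wherever it's needed.
temp_air_c = 18.5
temp_sea_c = 14.2

temp_air_f = temp_air_c * 9 / 5 + 32
temp_sea_f = temp_sea_c * 9 / 5 + 32

# The usual fix: factor the repeated logic into one function (Don't Repeat Yourself).
def celsius_to_fahrenheit(temp_c):
    return temp_c * 9 / 5 + 32

temp_air_f = celsius_to_fahrenheit(temp_air_c)
temp_sea_f = celsius_to_fahrenheit(temp_sea_c)
```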

Although code smells are undesirable, “they are not technically incorrect and do not prevent the program from functioning.”

By this description, I’d argue that smells also exist in scientific papers. Hence, I’m proposing a few of these easy-to-spot (aka sniffable) features that may point to a deeper underlying issue.

Continue reading “Does your scientific paper smell?”

Don’t write about your scientific paper, just write it

Scientific writing is obsessed with other scientific writing and itself.

Phrases like ‘this paper’ and ‘this study’ are everywhere in scientific writing—which is not a problem per se. Used well, these phrases concisely differentiate the current study from others. Used poorly, they fill the word count without adding value for the reader.

Never, for example, start a Conclusion with ‘In this paper, we showed . . .’ or ‘The main conclusions of this paper are . . .’. The first few words of a Conclusion (any section, in fact) are precious. Don’t waste them reminding me that I’m reading a paper in which you’ve shown or concluded something. Tell me something profound—something about your science.

‘In this paper, we showed . . .’ is a signpost (aka metadiscourse). It’s writing about the writing. And it’s one of the main reasons that so much scientific writing, like so much academic writing, is so boring.

Continue reading “Don’t write about your scientific paper, just write it”

Line graphs: the best and worst way to visualise data

Line graphs are the Swiss army knives of data visualisation. They can be almost anything… which is both good and bad.

Line graphs are slow to interpret

Many graphs serve one clear purpose. Take the five graphs below:

Even without labels, it’s clear what role each of these graphs serves:

  • Pie chart—components of a total
  • Thermometer—progress toward a goal amount
  • Speedometer—percentage of the largest possible value
  • Histogram—distribution of values
  • Box plot—statistical summaries of several datasets

In other words, if I’m presented with one of the graphs above, I have an immediate head start on interpreting it. If, instead, I’m presented with a line graph, I’m forced to read the axis labels and limits first.

Deciphering text is the slowest way to take in information: we register shape fastest, then colour, and only then text. This so-called Sequence of Cognition, popularised by Alina Wheeler, is something marketers need to know about.

Continue reading “Line graphs: the best and worst way to visualise data”

A better way to code up scientific figures

I typically write 100–200 lines of code each time I develop a scientific figure that is destined for publication. This is a dangerous length because it’s easy to create a functioning mess. With shorter code fragments, it’s feasible to start over from scratch, and with thousands of lines of code, it makes sense to invest time upfront to organise and plan. But between these extremes lurks the temptation to write a script that feels coherent at the time but just creates problems for future you.

Let’s say you want to create a moderately complicated figure like this:

A script for this figure could be envisaged as a series of sequential steps (sketched in code after the list):

  1. Read data in from a csv file
  2. Remove any flagged data
  3. Create four subplots
  4. Plot the first line of data against time
  5. Label the y axis
  6. Set the y axis limit
  7. Repeat steps 4–6 for the second and third lines of data
  8. Add the coloured contours and grey contour lines
  9. Label the time axis
  10. Add various annotations
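
Translated into code, that sequence might look something like the minimal Python/Matplotlib sketch below. The file name, column names, axis labels, and flag convention are all made up for illustration, and the contour step is only stubbed in:

```python
import pandas as pd
import matplotlib.pyplot as plt

# 1. Read data in from a csv file (hypothetical file and column names)
df = pd.read_csv("timeseries.csv")

# 2. Remove any flagged data
df = df[df["flag"] == 0]

# 3. Create four subplots sharing a time axis
fig, axes = plt.subplots(4, 1, sharex=True, figsize=(7, 9))

# 4-6. Plot the first line of data against time, label the y axis, set its limit
axes[0].plot(df["time"], df["temperature"])
axes[0].set_ylabel("Temperature (°C)")
axes[0].set_ylim(0, 25)

# 7. Repeat steps 4-6 for the second and third lines of data
for ax, column, label, limits in zip(
    axes[1:3],
    ["salinity", "oxygen"],
    ["Salinity", "Oxygen (µmol/kg)"],
    [(33, 36), (150, 320)],
):
    ax.plot(df["time"], df[column])
    ax.set_ylabel(label)
    ax.set_ylim(limits)

# 8. Add the coloured contours and grey contour lines
#    (stub: needs gridded data, e.g. axes[3].contourf(t, z, v) followed by
#     axes[3].contour(t, z, v, colors="grey"))

# 9. Label the time axis
axes[-1].set_xlabel("Time")

# 10. Add various annotations
axes[0].annotate("deployment start", xy=(0.02, 0.85), xycoords="axes fraction")

fig.savefig("figure.pdf")
```

Written this way, the script mirrors the list above one step at a time, which is exactly the kind of structure that feels coherent now but becomes awkward to modify later.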
Continue reading “A better way to code up scientific figures”

Captioning a scientific figure is like commenting code

Comments within code are harmless, right? They don’t affect runtime, so you might as well use them whenever there’s a chance something might be unclear.

I hope you aren’t nodding your head, because a liberal use of comments is the wrong approach. Not all types of code comments are evil, but many are rightfully despised by programmers as (i) band-aid solutions to bad code, (ii) redundant, or even (iii) worse than no comment at all.
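
Here’s a quick, contrived Python illustration of the ‘redundant’ flavour (the variable names are invented for the example):

```python
samples_removed = 3

# Redundant: the comment merely restates what the code says.
total_removed = samples_removed + 1  # add 1 to samples_removed

# More useful: the comment explains why, not what.
total_removed = samples_removed + 1  # also count the reference sample, which is flagged separately
```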

The same is true for scientific figures and their captions. In fact, many of the rules discussed in the post Best Practices for Writing Code Comments remain valid when we replace comments and code with captions and figures, respectively.

Continue reading “Captioning a scientific figure is like commenting code”