Line graphs are the Swiss army knives of data visualisation. They can be almost anything… which is both good and bad.
Line graphs are slow to interpret
Many graphs serve one clear purpose. Take the five graphs below:
Even without labels, it’s clear what role each of these graphs serves:
Pie chart—components of a total
Thermometer—progress toward a goal amount
Speedometer—percentage of the largest possible value
Histogram—distribution of values
Box plot—statistical summaries of several datasets
In other words, if I’m presented with one of the graphs above, I have an immediate head start on interpreting it. If, instead, I’m presented with a line graph, I’m forced to read the axis labels and limits first.
Deciphering text is a slow way to take in information. Shape is fastest, then colour, and only then text. This so-called Sequence of Cognition, popularised by Alina Wheeler, is something marketers need to know about.
I typically write 100–200 lines of code each time I develop a scientific figure destined for publication. This is a dangerous length because it’s easy to create a functioning mess. With shorter code fragments, it’s feasible to start over from scratch, and with thousands of lines of code, it makes sense to invest time upfront to organise and plan. But in between these extremes lurks the temptation to write a script that feels coherent at the time but just creates problems for future you.
Let’s say you want to create a moderately complicated figure like this:
A script for this figure could be envisaged as a series of sequential steps:
1. Read data in from a CSV file
2. Remove any flagged data
3. Create four subplots
4. Plot the first line of data against time
5. Label the y axis
6. Set the y axis limit
7. Repeat steps 4–6 for the second and third lines of data
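As a sketch, the steps above might look like the following in Python with matplotlib (the column names, labels, and axis limits are hypothetical stand-ins, and the CSV is inlined so the sketch runs on its own):

```python
import io
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Step 1: read the data (inlined here so the sketch runs on its own;
# in practice this would be np.genfromtxt("data.csv", ...))
csv_text = """time,flag,temp,salinity,oxygen
0,0,10.1,35.0,210
1,0,10.3,35.1,212
2,1,99.9,99.9,999
3,0,10.2,35.0,211
"""
data = np.genfromtxt(io.StringIO(csv_text), delimiter=",", names=True)

# Step 2: remove any flagged data
data = data[data["flag"] == 0]

# Step 3: create four subplots (the fourth panel is left free for
# whatever else the figure needs)
fig, axes = plt.subplots(4, 1, sharex=True)

# Steps 4-6, repeated for each line of data (step 7)
for ax, name, label, ymax in zip(
    axes,
    ["temp", "salinity", "oxygen"],
    ["Temperature", "Salinity", "Oxygen"],
    [15, 40, 250],
):
    ax.plot(data["time"], data[name])  # step 4: plot against time
    ax.set_ylabel(label)               # step 5: label the y axis
    ax.set_ylim(0, ymax)               # step 6: set the y axis limit

buf = io.BytesIO()
fig.savefig(buf, format="png")  # in practice: fig.savefig("figure.png")
```

Written this way, the script mirrors the sequential steps one-to-one, which is exactly the coherent-at-the-time structure that can become a mess as the figure evolves.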
Comments within code are harmless, right? They don’t affect run-time, so you might as well use them whenever something might be unclear.
I hope you aren’t nodding your head, because a liberal use of comments is the wrong approach. Not all types of code comments are evil, but many are rightfully despised by programmers as (i) band-aid solutions to bad code, (ii) redundant, or even (iii) worse than no comment at all.
The same is true for scientific figures and their captions. In fact, many of the rules discussed in the post Best Practices for Writing Code Comments remain valid when we replace comments and code with captions and figures, respectively.
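To make the distinction concrete, here is a contrived sketch (the scenario is mine, not from the post referenced above): the first comment merely restates the code, while the second explains why.

```python
row = 0

# Redundant: restates what the code already says
row = row + 1  # add one to row

# Useful: explains *why*, which the code alone cannot
row = row + 1  # skip the header row of the CSV file
```

The same test applies to a caption: if it only restates what the figure already shows, it is the captioning equivalent of `# add one to row`.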
Anscombe’s quartet
Four distinct datasets (x vs y) that produce the same summary statistics (mean, variance, correlation coefficient, and line of best fit)
HSL colour space
A colour space that defines colours in terms of their Hue (e.g., red or blue), Saturation (vivid to washed out), and Lightness (white to black)
White space
The area within a design (website, poster, figure, etc) that lacks text, images, or other elements
HARKing
A questionable approach to research: Hypothesising After the Results are Known
Fermi problems
Problems in which an answer cannot be estimated outright but is instead derived as the product of more easily estimated quantities (e.g., how many grains of rice are eaten across the world every year?)
Use cases of .png and .jpg images
The JPG format is optimised for photos, whereas PNGs are for graphs and diagrams
That vs which
That and which, although similar, have opposing implications: that introduces a restrictive clause (the lines that are dashed), whereas which introduces a non-restrictive one (the lines, which are dashed)
Basing a decision only on numbers or other objective measures, without reference to any qualitative factors
Version control
A tool for tracking and recording all changes to software and other digital files as they evolve
Serial position effect
The human tendency to better remember what happened at the start and the end and forget what happened in the middle
Zenodo and Figshare
Online repositories for datasets, code, and other research output
Your carbon footprint
A typical person living in a western country will have an annual footprint of 5–20 tonnes of CO₂
The Matthew effect
Well-known scientists get cited more often than lesser-known ones, leading to a positive feedback loop
A metaphor for an answer that might gloss over details, be vague, or rely on many approximations
Logarithmic scales
Scales that increase geometrically (e.g., 1, 2, 4, 8, 16, …) rather than linearly (2, 4, 6, 8, …)
“Data” is a plural
It can sound odd, but “data were collected” is correct and “data was collected” is not
Garden-path sentences
A sentence structure to avoid because the initial words only make sense as the sentence nears its end
Regression to the mean
A statistical tendency for outliers in an initial experiment to deviate less in a subsequent experiment
ImageMagick
Software for all manner of image manipulations and conversions that can be run from the command line
The default command line interface
A widely used approach for smoothing time-series data
The golden ratio
The value 1.618…; an aesthetically pleasing aspect ratio for a rectangle among many other claims to fame
Types of map projections
Flattening the earth to a two-dimensional image can be achieved in numerous ways, each with its own pros and cons
DOI and PMID
Unique digital identifiers that can point to publications, datasets, software, and more
Edward Tufte
An early name in data visualisation and author of several books on the topic
Widows and orphans
A line at the beginning or end of a paragraph that is separated from the rest by a page break
Construction cost of the Large Hadron Collider
One of the most expensive scientific experiments took ~3 billion Swiss Francs to build (or ~5 billion US dollars back in 2001)
Tweaking a fundamentally flawed theory in a last-ditch effort to make it explain observations
Why governments fund basic scientific research
Among many reasons, basic scientific research (i) lowers the barrier for firms that want to develop new products and (ii) develops skilled scientists and engineers who can capitalise on research undertaken elsewhere
William Shockley’s thoughts on productivity
Shockley speculated that a small number of scientists can be exponentially more productive in total because the creation of a scientific paper is the combination of many individual tasks, and productivity in each of these tasks multiplies together to give overall productivity.
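A toy calculation illustrates the multiplicative argument (the numbers here are invented for illustration): a scientist who is 50% better at each of eight subtasks is not 50% more productive overall, but roughly 25 times more productive.

```python
# Toy illustration of Shockley's multiplicative model of productivity:
# overall productivity is the product of per-task productivities.
n_tasks = 8               # e.g., spotting a problem, analysing, writing, ...
advantage_per_task = 1.5  # 50% better at each individual task (made up)

overall = advantage_per_task ** n_tasks
print(round(overall, 1))  # roughly a 25.6-fold advantage overall
```

Small per-task advantages compounding multiplicatively is what makes the distribution of scientific output so skewed in this model.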
Kerning
Adjusting the spacing between individual letters in text to improve aesthetics
How last authorship varies across fields
Depending on scientific field, the last author either did the least work, is the group leader, obtained funding for the project, or has a surname near the end of the alphabet
Project Jupyter
An open-source project that simplifies and promotes interactive use of many programming languages
Propagation of uncertainty
The uncertainty of a derived quantity (e.g., kinetic energy derived from speed and mass) can be calculated from the uncertainty of the input quantities following simple—though sometimes tedious—arithmetic
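As a sketch of that arithmetic, take the kinetic energy example: for KE = ½mv², the relative uncertainties of m and v add in quadrature, with v weighted by its exponent of 2 (the measurement values below are invented):

```python
import math

# Hypothetical measurements: mass and speed with their uncertainties
m, sigma_m = 2.0, 0.1   # kg
v, sigma_v = 10.0, 0.5  # m/s

ke = 0.5 * m * v**2  # kinetic energy: 100 J

# For a product of powers, relative uncertainties add in quadrature,
# each weighted by its exponent (1 for m, 2 for v)
rel_unc = math.sqrt((sigma_m / m) ** 2 + (2 * sigma_v / v) ** 2)
sigma_ke = ke * rel_unc  # about 11 J
```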
SSH (Secure Shell)
The standard way to access a remote server via the command line
Difference between a hyphen and a minus sign
Although similar, they should not be confused; a hyphen (-) is a short dash used to combine words, whereas a minus sign is longer (−)
Optimal number of characters per line
A line of text should have 60–70 characters (counting spaces) for a single-column layout and 40–50 for multiple columns (see page 32 of Detail in Typography)
The .eps file type
A predecessor to PDF that was developed in the late 1980s and is almost obsolete
Stroke and fill
For line drawings, the edge is known as the stroke and the interior is known as the fill
The Greek alphabet
The order doesn’t matter, but knowing the individual letters is worthwhile
Anti-aliasing
The smoothing of text to improve its appearance (especially relevant at coarse resolution)
Loops
The simplest way in most programming languages to make a computer do something again and again
Illusion of explanatory depth
Most people are overconfident in their understanding of a complex phenomenon or procedure until they try to explain it step by step
How regression works
Calculating a line of best fit is one of those things everyone should do manually at least once to understand the procedure that can otherwise be a black box
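For anyone who wants to open that black box, here is the manual procedure as a short sketch (the data are made up): the slope is the covariance of x and y divided by the variance of x, and the intercept follows from the means.

```python
# Least-squares line of best fit, computed by hand
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# slope = covariance(x, y) / variance(x)
cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
var_x = sum((xi - mean_x) ** 2 for xi in x)
slope = cov_xy / var_x
intercept = mean_y - slope * mean_x  # the line passes through the means
```

Doing this once by hand makes it obvious why the fitted line always passes through the point (mean x, mean y).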
Bayesian statistics
An approach to statistics in which probabilities are continually updated as new information is obtained
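A single update can be sketched in a few lines of Bayes’ rule (the probabilities below are invented for illustration):

```python
# One Bayesian update: a diagnostic-style test for a rare condition
prior = 0.01           # 1% of people have the condition (made up)
sensitivity = 0.90     # P(positive | condition)
false_positive = 0.05  # P(positive | no condition)

# Bayes' rule: P(condition | positive test)
evidence = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / evidence  # about 0.15
```

A positive result raises the probability from 1% to roughly 15%; a second positive result would update again from that new starting point.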
System 1 and 2 thinking
Two distinct ways of thinking: system 1 is fast and driven by intuition and emotion, whereas system 2 is slower and more deliberate
Use-inspired basic research, or the view that basic and applied research aren’t mutually exclusive
Sayre’s law
In any dispute the intensity of feeling is inversely proportional to the value of the issues at stake
The planning fallacy
The tendency to underestimate the time needed to complete a task (e.g., writing a scientific paper) even with prior experience in the same or similar tasks
Floating point numbers
The system used by computers that allows a small number of bits (each a zero or one) to represent a wide range of numbers (e.g., 64 bits can closely approximate any number, positive or negative, up to 1.8×10³⁰⁸)
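Two one-liners make the trade-off concrete: the range of a 64-bit float is enormous, but most decimal fractions are only approximated.

```python
import sys

# 64-bit floats span an enormous range...
print(sys.float_info.max)  # about 1.8e308

# ...but only approximate most decimal fractions
print(0.1 + 0.2)         # 0.30000000000000004, not exactly 0.3
print(0.1 + 0.2 == 0.3)  # False
```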
The Dobzhansky Template
A format coined by scientist-turned-filmmaker Randy Olson that aims to drill down to the essence of an idea: Nothing in ___ makes sense except in light of ___ (e.g., nothing in biology makes sense except in light of evolution)
Newman design squiggle
A visual metaphor for the design process that works equally well for the process of doing science
Gestalt principles
Design laws, grounded in psychology, for how humans perceive combinations of objects or elements
An inefficient problem solving technique where you rely on your previous approaches that worked in the past despite there being better methods
Bike-shedding
Also known as the Law of Triviality, bike-shedding is giving undue emphasis to minor matters, such as the design of the bike sheds to be included within the development of a nuclear power plant
Simpson’s paradox
Subsets of a dataset, all of which have a negative statistical trend, can still produce a positive trend in the overall dataset
Parkinson’s law
Work expands to fill the time available for its completion
Epistemic trespassing
When an expert in a given field trespasses into another and makes claims where they lack expertise
The decline effect
The strength or effect size of a scientific result tends to decline over successive replications
Base rate neglect
Misjudging the probability of an event because intuitive, individuating information overshadows the underlying base rate (e.g., assuming someone who is 6-foot-8 most likely plays basketball professionally, when the chances are a fraction of 1%)
Identifiable victim effect
The desire to assist a specific individual facing a certain hardship but not a large, unknown group of people facing the same hardship
I used my laptop to scan the text of 360 scientific papers for use of the word exciting (and excited and other variants). I got 195 matches. That’d suggest that scientists imbue their writing with their own excitement for science. Except that 191 of those matches are physics jargon (as in wind excites ocean waves) rather than the everyday meaning. Remove those and we’re left with ~1% of papers indicating any excitement.
That’s a weird thing to look into is what you’re thinking, so two bits of context. First, there’s lots to be learned about scientific writing by looking at word usage statistics; see my two previous posts. Second, I came across one of these rare uses of exciting with its everyday meaning, and it stood out! Which is messed up. It’s a common word, yet it struck me as out of place in a scientific paper. Not because I think it should be, but because it is.
For comparison, I looked at the words interesting and interestingly. The result: 237 matches, all of which correspond to their everyday usage. (Including interest and interested in my search more than doubles the number of matches.)
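The search itself needs nothing fancy. A minimal sketch with a regular expression (the sample text and pattern are my own, not the actual pipeline used for the papers):

```python
import re

# Sample text standing in for a scanned paper
sample = (
    "The wind excites near-inertial waves. "
    "Interestingly, the response is weak. "
    "These are exciting results."
)

# Count excit* variants: excite, excited, exciting, excites, excitation, ...
matches = re.findall(r"\bexcit\w*", sample, flags=re.IGNORECASE)
```

The hard part, as the numbers above show, is not counting the matches but separating jargon senses (wind excites waves) from the everyday one, which takes reading each match in context.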
As scientists, we record our findings in perpetuity in PDFs—literally simulations of pieces of paper. It’s time to be more dynamic and invoke a proliferation of media types. We don’t need to get rid of the notion of a paper or stop using a PDF as the version of record. But we do need to complement them with something less static. What follows is an approach I recently took using video.
The final sentence of my latest paper (preprint) steers the reader to a video that stands in place of a Conclusion section. And I’m guessing this video is a much more compelling Conclusion than any possible combination of words.
Here’s the gist of the final paragraph (paraphrased to avoid jargon):
Our simulation was made possible by tuning against measurements from a new instrument. This observation-informed simulation depicts instabilities as they evolve throughout the day. It is best appreciated as an animation (doi.org/10.5281/zenodo.4306935).
Too many scientific figures are ugly. I see three possible reasons:
Laziness: scientists could make nice figures, but don’t put in the effort
Obliviousness: scientists are unaware their figures are ugly
Indifference: scientists care only about the data, but not their presentation
Take the following published scientific figure (suitably disguised):
Let’s list the problems: (1) Space is poorly used and data are cramped. (2) Text is bold for no reason. (3) Multiple fonts are used. (4) Tick marks are barely visible. (5) Some labels don’t fit in their respective boxes. (6) Axis values are unnecessarily repeated. (7) Dashed and dash-dotted lines are ugly. (8) Mathematical symbols are not italicised.
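Several of these problems can be prevented up front rather than fixed after the fact. Here is a minimal sketch using matplotlib’s rcParams; the specific values are my own suggestions, not settings recovered from the original figure:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Set figure-wide defaults once, before plotting anything
plt.rcParams.update({
    "font.family": "sans-serif",  # one font throughout (problem 3)
    "font.weight": "normal",      # no bold text without reason (problem 2)
    "xtick.major.size": 5,        # tick marks long enough to see (problem 4)
    "ytick.major.size": 5,
    "mathtext.default": "it",     # italicised mathematical symbols (problem 8)
})
```

Setting these once per script also guarantees consistency across every panel, which ad hoc per-axes tweaks rarely achieve.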
Save for the occasional pun in the title, scientific papers seldom contain intentional humour. But there’s entertainment to be had if you have the right mindset. Let me show you.
Relatability can be the basis of a good laugh. And as a scientist who routinely uses time series data, I can relate to the struggle of unwanted gaps in a dataset. So I was entertained when I came across the following sentence:
No data are available for 1991 and 1992 because the volcanic eruption of Mt Pinatubo in 1991 contaminated the signal. (ref)
Why, exactly, am I entertained, you ask? Partly, it’s the notion of a very expensive satellite being thwarted by a bit of ash. More so, it’s that the sentence is the epitome of scientific writing. A freakin’ volcanic eruption messes up two years’ worth of data, and yet it’s described in the same matter-of-fact tone as other technical details like the satellite’s pixel resolution. Good luck finding any other type of writer who recounts a long-lived effect of a natural disaster in a single sentence.
“A computer gives the average person, a high school freshman, the power to do things in a week that all the mathematicians who ever lived until thirty years ago couldn’t do.” That’s Ed Roberts quoted in Hackers, a book published in 1984. So let me update his quote with my own: “My laptop gives me the power to run simulations in an afternoon that the fastest computers thirty years ago would have struggled with”.
This power has a downside. Computers are so fast these days that I’ve become lazy—mathematically speaking. A few decades ago, in my field of physical oceanography, it was routine to manipulate partial differential equations and solve complex integrals. I can do these things, if I put my mind to it. But I seldom do; there’s no need. These days, even ordinary differential equations that I learned to solve in undergrad get plugged into Mathematica most of the time or relegated to some less-than-perfect numerical method. And I can’t remember the last time I did multiplication longhand: