What does scientific genius look like in the 21st century?

The names we typically associate with scientific genius are from several centuries or millennia ago. Think Newton, Einstein, Archimedes, Galileo, or Darwin. Even famed scientists who are modern by comparison (Richard Feynman, Francis Crick, or Linus Pauling) made their discoveries many decades ago. Just as any sports fan will tell you it is pointless to compare athletes from different eras, the same is true, if not more so, for scientists. Whereas athletes are largely playing the same game as they were decades ago, science has changed. We now aim to answer new questions, address ever more complex and interdisciplinary issues, and occasionally build experiments costing billions of dollars. How, then, does scientific genius manifest in the 21st century? Which circumstances are most conducive to developing it? And what traits does a genius in the modern scientific realm exhibit?

Genius is not necessarily correlated with the current metrics of success as a scientist: citation and publication counts, name recognition, or sometimes even influence. These days, authorship can easily add up for a scientist who (i) leads several teams, (ii) has many collaborators, or (iii) has extensive resources. Alternatively, many scientists are prolific simply by virtue of their field. In both high-energy physics and the subset of medical science involving long-term cohort studies, scientists may average a new paper each week. Indeed, the prevalence of hyper-prolific authors has increased markedly over the last 20 years.

The lack of an obvious quantitative measure makes genius difficult to define. The dictionary definition, “exceptional intellectual or creative power or other natural ability”, is conspicuously holistic and vague. Although there is some agreement around a threshold IQ of 160, the issue immediately arises of how a non-genius could ever devise a test capable of measuring such a score.

It can be difficult even to decide which of two alternatives better befits a genius. Are they a naturally gifted problem solver who finds an answer quickly by an original and inventive route? Or simply a persistent problem solver who accumulates many failures along the way? A combination, of course, but one that leans heavily toward persistence. Genius doesn’t exist without dedicated study, often years spent on the same problem. To quote the 18th-century naturalist Georges-Louis Leclerc, genius is only a greater aptitude for patience. Similarly, Thomas Edison’s failures far outnumbered his successes.

At least by comparison with the past, the 21st-century culture of science, which promotes prolific publication and interdisciplinary collaboration, seems at odds with the patience and persistence necessary to cultivate genius. Some pushback against this culture is therefore necessary if a genius is to emerge. Beginning in the late 1980s, Princeton faculty member Andrew Wiles spent seven years in his attic chasing a proof of Fermat’s Last Theorem, a three-century-old problem. To keep up the appearance of the usual productivity, he delayed publication of some earlier work and instead released it as a series of minor papers every six months or so. Meanwhile, he apparently avoided cycling in case he became too distracted thinking about mathematics.

Another unusual approach to finding enough time to think deeply about a given problem comes from Stanford computer scientist Donald Knuth: forgo email, as he did at the start of 1990. It’s a sensible choice given that email likely makes professors stupid. In fact, many of the tasks we now rely on computers for are probably making us stupid. “Build a machine to improve human performance, and it will lead, ironically, to a reduction in human ability” is Hannah Fry’s succinct summary of the paper Ironies of Automation. This doesn’t imply that computers have slowed scientific progress; that would be absurd. But I’m sure they’ve eroded skills like mental arithmetic. Perhaps most famous for such skills is Richard Feynman. Reading about what he could do in his head (like estimate, to within 10%, the answer to nearly any mathematical problem that could be posed in under ten seconds) makes mental arithmetic seem not a mundane, rote task, but a kind of portal to a truly authentic understanding of math and numbers. Indeed, I’ve known several older scientists who are, or at least were back in the day, proud of their mental arithmetic, perhaps for this very reason.
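To give a flavour of the kind of trick involved, take the often-retold story of Feynman estimating the cube root of 1729.03 in his head. A minimal sketch of the reasoning (my reconstruction using the standard first-order binomial approximation, not a method spelled out above): knowing that $12^3 = 1728$,

$$
\sqrt[3]{1729.03} = 12\left(1 + \frac{1.03}{1728}\right)^{1/3} \approx 12\left(1 + \frac{1.03}{3 \times 1728}\right) \approx 12.002,
$$

which agrees with the true value to better than one part in a million. The skill is less about memorised arithmetic than about knowing which approximation turns a hard problem into an easy one.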

Unlike professors, postdocs and PhD students have no problem finding uninterrupted time to think (I speak from experience here). They aren’t necessarily making ground-breaking discoveries, however, because they are still on their way to base camp, while the true discoveries lie at the summit. If the Everest analogy isn’t clear, consider a passage from Feynman’s biography noting that it was no longer possible, as it had been a generation earlier, to bring undergraduates up to the live frontier of a field like biology or physics. That was in the 1960s. These days it is a task just to keep up with the literature at the rate it is published.

Perhaps the answer is to ignore the literature. This was famously Feynman’s approach (along with ignoring other things, like standard administrative and supervisory roles). He had two persuasive arguments. First, to discover a new law, even one that produces only a slightly different result, you may need to start from scratch. For example, Newtonian mechanics and Einstein’s relativity agree almost exactly at everyday speeds and differ appreciably only close to the speed of light, yet they are conceptually very different theories (see the sketch below). Second, the biggest discoveries likely lie outside the fashionable questions. Being a genius, or aspiring to be one, is therefore a gamble, a high-risk, high-reward pursuit. Ignoring the literature may give you just the right combination of time and blissful ignorance. These days, however, you’ll more than likely just slowly reinvent someone else’s wheel.
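To make the “slightly different result” concrete (a standard textbook expansion, not something taken from the sources above): the relativistic kinetic energy reduces to the familiar Newtonian expression when the speed $v$ is much less than the speed of light $c$,

$$
E_{\text{kin}} = (\gamma - 1)mc^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \approx 1 + \frac{v^2}{2c^2} \quad (v \ll c),
$$

so $E_{\text{kin}} \approx \tfrac{1}{2}mv^2$. The numbers barely change at everyday speeds, yet the theory that produces them rests on entirely different foundations.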

The time required to reach the forefront of knowledge is largely why Nobel prize winners are getting older. The average age at which prize-winning discoveries were made increased from the late 30s to the late 40s between the beginning and end of the 20th century. Part of this increase may reflect a shift from prizes for theoretical work, which tends to come from younger scientists, toward prizes for experimental work, which demands more experience and accumulated knowledge. At least that’s the message from a 2007 study of economics Nobel laureates. Depressingly for many of us, the same study suggests that 25 is the peak age for producing the best conceptual work. That seems surprisingly low, but it fits with the idea above that accumulated knowledge can be an obstacle to genius.

As well as ignoring the literature, an aspiring genius may want to push back against the current trend toward large collaborations. Smaller teams, and by extension individuals, tend to disrupt (that ubiquitous but fuzzy business term) science and technology more than larger teams, which tend to produce incremental progress.

Incremental progress is the tortoise of Aesop’s famous fable. By contrast, the disruptive individual scientist aspiring to genius fits the bill as the hare. Yet, as noted earlier, a genius also needs years of patience to make a truly innovative discovery; in that respect, they need to be a tortoise. Maybe this combination of extremes, together with the need to explore unfashionable questions, is what makes genius so elusive.

Author: Ken Hughes

Post-doctoral research scientist in physical oceanography
