Computers make me a worse mathematician, but a better scientist

“A computer gives the average person, a high school freshman, the power to do things in a week that all the mathematicians who ever lived until thirty years ago couldn’t do.” That’s Ed Roberts quoted in Hackers, a book published in 1984. So let me update his quote with my own: “My laptop gives me the power to run simulations in an afternoon that the fastest computers thirty years ago would have struggled with”.

This power has a downside. Computers are so fast these days that I’ve become lazy—mathematically speaking. A few decades ago, in my field of physical oceanography, it was routine to manipulate partial differential equations and solve complex integrals. I can do these things, if I put my mind to it. But I seldom do; there’s no need. These days, even ordinary differential equations that I learned to solve in undergrad get plugged into Mathematica most of the time or relegated to some less-than-perfect numerical method. And I can’t remember the last time I did multiplication longhand.

I just checked that I still remember how to do this. I did get the right answer, though not on the first try. Eight-year-old me would have schooled current-day me if it were a race.

My field of physical oceanography requires a reasonable base level of mathematical knowledge. At the least, some vector calculus of the kind taught at a senior undergrad level. But it’s possible to take the math far beyond that. This is where my laziness kicks in.

Understanding the mathematical approaches in papers I read is a big time investment. Developing my own is even more so. And yet, equations are often beside the point. They’re “just the boring part of mathematics” as Stephen Hawking put it. Of course, boring isn’t the same as useless. A lot of the elegant theories can be described only in equations. In fact, such a description is often the best evidence that a given problem has been truly understood or solved.

Times have changed though. Looking for new mathematical results is less enticing. Much of the low-hanging fruit was picked years ago. I don’t want to waste my time developing a mathematical argument only to find out it was put forward decades ago. Fortunately, computers have made available a whole new orchard of fruit: questions involving the kind of arithmetic that computers excel at. Think trillions of operations, if not many more.

The optimal combination of theory, data, and computation

In Average is Over, Tyler Cowen discusses at length the changes he’s observed in economics (as a research field). The changes parallel those I see in physical oceanography. He suggests that economic theories have progressed little since the 90s. Instead of theorists, more economists are becoming empiricists. So much so that the theories being used are returning to a simpler level. A simple theory that holds up to empirical data is better than an elaborate theory not grounded in reality.

Powerful data crunching, and careful data gathering, is pushing out theoretical intuition.

Tyler Cowen

Chess is another pursuit that Cowen delves into. The gameplay of chess, with its strict rules and innumerable arrangements, favours computers. Humans lost their dominance in the game back in 1997, when IBM’s Deep Blue defeated world champion Garry Kasparov. So, if you can’t beat ’em, join ’em. Let both players have access to a computer and you end up with Freestyle Chess. This format hands the advantage to players who can best profit from their chess engine. A grandmaster who can’t exploit the power of their computer will lose to a computer-savvy, but otherwise lesser, player.

Okay, the above may not be true anymore. These days, the computer teammate carries the team so much so that the human might only get in the way. But the moral of the story can apply elsewhere: a tech-savvy human and their computer are still a formidable combination outside the structured confines of a chess game. As in, y’know, the unstructured endeavour of tackling a complex scientific problem.

I’m not the only lazy one when it comes to math

Presumably, some physical oceanographers from days gone by relied heavily on elaborate mathematics, not because they wanted to, but because they had to. They might be able to look back smugly—back in my day, people knew how to invoke Fourier series and Laplace transforms to analytically integrate initial and boundary value problems—but would they have willingly learned and practised these skills if it wasn’t a pre-requisite at the time?

As per this post’s title, I know that I’m becoming a worse mathematician given my reliance (perhaps over-reliance) on computers. I expect the same is true for my peers. At the same time, computers and the Internet might actually make us better at math by facilitating rapid trial-and-error learning and giving us instant access to content on whatever theory we need to understand. But I doubt this is the case.

Let’s test this. My hypothesis is that the use of math in my field of physical oceanography has declined in a measurable sense. Just as Cowen suggested for economics, I expect that physical oceanographers are giving up on more elaborate theories and instead invoking a combination of simpler theory and brute-force computation.

My proxy for “use of math” is the number of equations in a paper. Specifically, I’m counting numbered equations, which are those given their own line in a paper. (Are there flaws with this? Of course. As I’ve said before, this is a blog post not a scientific paper.) My question is whether this has changed over the last 50 years. Conveniently, there’s a Journal of Physical Oceanography that was first published 50 years ago. There, that’s my methods. Like I said, it’s a blog post.
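I did the counting by eye, but a rough automated version could look like the sketch below. The regex, and the assumption that a numbered equation ends its line with something like “(12)” in text extracted from a PDF, are guesses about the typesetting, not part of my actual method.

```python
import re

# Hypothetical sketch: count numbered equations in a paper's extracted text.
# Assumes the paper has been converted to plain text and that each numbered
# equation ends its line with its number in parentheses.
EQ_NUMBER = re.compile(r"\((\d{1,3})\)\s*$", re.MULTILINE)

def count_numbered_equations(paper_text: str) -> int:
    """Return the highest equation number found, a proxy for the count."""
    numbers = [int(m.group(1)) for m in EQ_NUMBER.finditer(paper_text)]
    return max(numbers, default=0)

sample = """The momentum balance reduces to
    du/dt = -g dh/dx    (1)
which, integrated over depth, gives
    U = -g H dh/dx t    (2)
"""
print(count_numbered_equations(sample))  # 2
```

Taking the highest number rather than the number of matches guards against the regex missing the odd equation mid-paper.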

Papers with 30 or more equations came up 30% of the time between 1970 and 1990. This dropped to 19% for 2000–2020. For 50 or more equations, the percentages are 13% and 7%, respectively. Seventy papers were checked for each date range.
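As a back-of-the-envelope check on whether drops like these could be chance, here’s a two-proportion comparison in plain Python. It assumes the rounded percentages correspond to 21, 13, 9, and 5 papers out of 70, which is my reading back from the percentages rather than the raw tallies.

```python
from math import sqrt, erf

def two_proportion_z(successes1, n1, successes2, n2):
    """Normal-approximation z-statistic for the difference of two proportions."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def one_sided_p(z):
    """P(Z > z) for a standard normal, via the error function."""
    return 0.5 * (1 - erf(z / sqrt(2)))

# 30% vs 19% of 70 papers with 30+ equations: roughly 21 vs 13 papers.
z30 = two_proportion_z(21, 70, 13, 70)   # z ~ 1.6
# 13% vs 7% of 70 papers with 50+ equations: roughly 9 vs 5 papers.
z50 = two_proportion_z(9, 70, 5, 70)     # z ~ 1.1
print(round(z30, 2), round(one_sided_p(z30), 3))
print(round(z50, 2), round(one_sided_p(z50), 3))
```

Neither difference quite clears the conventional 5% threshold on its own, which seems fair to flag: the trend points the right way, but 70 papers per period is a small sample.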

The data bear out my hypothesis. Mostly. Papers with, say, 50 or more equations haven’t disappeared, but they now appear only half as often.

An escape from painstaking mathematics

Speaking of decades past, Gary Smith recognises that “researchers worked hard to gather good data and thought carefully before spending hours, even days, on painstaking calculations.” I’m glad I don’t have to subject myself to this. Instead, thanks to computers, I can spend hours, even days, on painstaking analyses. Oh, wait…

Author: Ken Hughes

Post-doctoral research scientist in physical oceanography

4 thoughts on “Computers make me a worse mathematician, but a better scientist”

  1. I very much enjoyed this thought-provoking post — thanks for writing it.

    I’m a physicist but I am not the most mathematically adept. My mantra throughout my degree was “If I can’t code this, I don’t understand it.” After teaching undergrads for close to 25 years, I would now argue that computing is at least as important a skill as an aptitude for analytical maths in UG physics courses. Many students can churn their way through pages upon pages of, for example, commutator algebra or expectation value integrals in quantum mechanics, and yet emerge the other side with little understanding of just what the maths represents.

    Coding instead can play a key role in building understanding and intuition; students can see in “real time” the effects of changing parameters, especially if the code generates an animation/simulation. For example, I would argue that students will learn a lot more about measurement in quantum mechanics by coding a simulation along the lines of the one at the bottom of this post than from any number of lengthy pen-and-paper calculations. Similarly, I used to teach a course on Fourier analysis and found there were many students who could solve tricky Fourier integrals yet were stumped by a question as simple as “If I narrow the width of the top hat function, what happens to its Fourier transform?” Getting them to calculate and plot the Fourier transform with Python (or whatever their preferred language might be) provides a layer of understanding that is too often buried beneath the mathematical machinery.
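    The top-hat exercise above can be sketched in a few lines of NumPy; the grid spacing and the two half-widths below are arbitrary choices for illustration.

```python
import numpy as np

def tophat_transform_width(half_width, n=4096, dt=0.01):
    """Half-power width (in frequency) of the transform of a top hat."""
    t = (np.arange(n) - n / 2) * dt
    signal = np.where(np.abs(t) <= half_width, 1.0, 0.0)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(signal)))
    freq = np.fft.fftshift(np.fft.fftfreq(n, dt))
    # Sidelobes of the sinc peak well below half power, so this picks out
    # the main lobe only.
    above = freq[spectrum >= spectrum.max() / 2]
    return above.max() - above.min()

wide = tophat_transform_width(2.0)
narrow = tophat_transform_width(0.5)
print(wide < narrow)  # the narrower top hat has the broader transform
```

    Measuring the half-power width of the main lobe shows the inverse relationship directly: a top hat a quarter as wide has a transform roughly four times as broad.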

    1. I completely agree with how helpful the real-time parameter changing is for learning. Your mention of the Fourier transform in this context makes me think of one of the best textbooks that I’ve come across called ThinkDSP. It teaches digital signal processing primarily through coding exercises rather than mathematically. The concepts become so much easier to learn as a result.

  2. The question of the method of analysis arises: should we just consider the average?

    Personally, I would suggest fitting an exponential, exp(−k × nbeq), where nbeq is the number of equations in the paper and k is a constant that depends on the year.

    The idea behind an exponential is simple: writing an equation requires effort, and each additional equation is a decision that becomes less and less likely as the number already written grows.

    With this model, it should be straightforward to measure how k varies over time.

