Is the Pacific Northwest Really Overdue for Its Next Big Earthquake?
Many believe the Cascadia Subduction Zone ruptures every 250 years or so, but maybe there’s reason to believe that recurrence interval isn’t all that reliable.
Ask anyone in the Pacific Northwest, and they’ll likely tell you that we’re overdue for the “big one.” A large-scale catastrophic earthquake, most likely generated in the Cascadia Subduction Zone (CSZ) off the coast, is set to happen at any moment. They might even cite a recurrence interval of CSZ events to back up their claim: one is purported to happen every 240–500 years, and, since the last one was in the year 1700, we are right in that sweet spot for the next one. The New Yorker in 2015 said there were 41 CSZ earthquakes in the last 10,000 years, and 10,000 divided by 41 is 243.9. Therefore, every 243 years or so we can expect another big earthquake. It’s been 320 years since the last one, so we’re overdue.
This feels like a massive oversimplification, though, right?
Whether that Pacific Northwest resident knows it or not, the recurrence interval they cited is likely based on an influential 2012 study by Goldfinger et al., not The New Yorker. That widely discussed recurrence interval is never actually stated in the Goldfinger et al. study, though; it's largely a misconception about the research they did. I revisited that 2012 study recently to take a deeper look.
The results in that study are based on sediment cores collected off the coast of the Pacific Northwest that show layers of turbidite material that the researchers believe indicate earthquake events. Their assumption is that thicker layers of turbidite material in the cores equate to a stronger and/or longer earthquake, including whether the earthquake was a full rupture of the CSZ or a partial one (though this assumption has been questioned by other studies). They carbon-dated the turbidites to estimate when each deposit was made and developed a timeline of CSZ earthquakes from that. They found that 41 large-scale CSZ earthquakes have occurred over the last 10,000 years, 19 of which were the worst-of-the-worst full-rupture type. This was truly remarkable research when it came out and is still vastly important for anyone working in the natural hazards field. Among other breakthroughs, it established the CSZ as more tectonically active than previously thought.
Plotting those 41 occurrences on a timeline made things tricky, though. It led to the conclusion that, on average, a major CSZ earthquake happens somewhere between the Strait of Juan de Fuca and Northern California every 246.5 years. This value is based on the average number of years between events, only slightly more sophisticated than dividing 10,000 by 41.
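To make that "only slightly more sophisticated" point concrete, here is a minimal sketch of how a mean interevent period falls out of an event chronology. The dates below are illustrative placeholders, not the actual turbidite-derived chronology from Goldfinger et al. (2012):

```python
# Sketch: computing a mean interevent period from a list of event dates.
# These dates are made up for illustration; they are NOT the real
# Goldfinger et al. (2012) chronology.
event_years = [0, 200, 500, 750, 1000]

# Interevent periods are the gaps between consecutive events.
gaps = [b - a for a, b in zip(event_years, event_years[1:])]
mean_gap = sum(gaps) / len(gaps)
print(gaps)      # [200, 300, 250, 250]
print(mean_gap)  # 250.0
```

Note that the mean collapses four different gaps into one number; the spread around it vanishes, which is exactly the problem discussed below.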
But averages are not always very telling, and historical averages can’t always be used for predicting what might happen in the future (or when something might happen, as in this case).
Let’s talk statistics for a bit
There are lots of ways of exploring the time span between CSZ earthquakes. Regression is an especially powerful statistical analysis that is also pretty accessible and easy to perform. In fact, Goldfinger et al. performed a handful of regressions in their 2012 study and found that there really is no discernible trend in years between CSZ events. There is so much variation in the gap years (a.k.a. interevent periods) that it’s not really predictable at all. The regression they performed found that there is no relationship between the strength of an earthquake and the time between events, calling into question the idea that strong earthquakes are followed by longer interevent periods. What this meant in their own words:
“We infer that the most likely interpretation is that Cascadia data may at times have a weak tendency to follow a time-predictable model.”
I ran a simple linear regression model using the interevent times they discovered and found that there is also no discernible relationship between earthquake occurrence and interevent time, regardless of earthquake magnitude (R² = 0.0571). In practical terms, event order explains only about 6% of the variation in interevent times, so the length of time between historical CSZ earthquakes is not very useful for predicting when the next one will occur. Another way to say it: the time between CSZ events is pretty random. That randomness makes the average a less reliable statistic for predicting the next big one.
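A minimal version of that kind of regression check looks like the sketch below. The interevent times here are randomly generated stand-ins drawn from an exponential (a memoryless distribution), not the published core data, so the specific R² will differ from the 0.0571 above:

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in interevent times (years): exponential draws with a ~250-year
# mean, mimicking a memoryless process -- NOT the published core data.
gaps = rng.exponential(scale=250.0, size=40)

# Regress interevent time on event order and compute R^2 by hand.
x = np.arange(len(gaps), dtype=float)
slope, intercept = np.polyfit(x, gaps, deg=1)
predicted = slope * x + intercept
ss_res = np.sum((gaps - predicted) ** 2)
ss_tot = np.sum((gaps - gaps.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(r_squared)  # typically small: event order explains little of the variation
```

An R² near zero here means knowing which event you're on tells you almost nothing about how long the wait to the next one will be.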
Another way to test that 246.5 number (rounded up to 247) is to compare how long interevent periods actually were against our expectation (i.e., 247 years) and test the discrepancies for statistical significance. Based on the interevent times Goldfinger et al. discovered, I found that the 247-year expectation can be rejected. In the test I used, it isn't until I treat any interevent period between 197 and 297 years (247 ± 50 years) as on-target with the average that I get a result that fails to prove the 247-year average wrong. That's because the differences between the observed interevent periods and the 247-year expectation vary widely in both directions. Some events happened pretty close to that 247-year expectation. Most did not, as you can see in the below figure of those observed-expectation differences. In fact, going back 10,000 years, it's basically a 50–50 tossup whether an interevent period was longer or shorter than 247 years.
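One common form of that kind of test is a one-sample t statistic against the 247-year expectation (scipy's `ttest_1samp` does the same thing and adds a p-value). The interevent times below are made-up values for illustration only, not the data from the study:

```python
import math
import statistics

# Made-up interevent times (years), for illustration only.
gaps = [100, 200, 300, 400]
expected = 247

# One-sample t statistic: how far the observed mean sits from the
# 247-year expectation, measured in standard-error units.
n = len(gaps)
mean = statistics.mean(gaps)
se = statistics.stdev(gaps) / math.sqrt(n)
t_stat = (mean - expected) / se
print(round(t_stat, 3))  # 0.046 for these made-up values
```

A t statistic near zero fails to reject the expectation; values beyond roughly ±2 (for a decent sample size) would reject it. With the real interevent times, per the test described above, the 247-year expectation gets rejected.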
This is the fundamental issue with focusing on averages only. There is so much variation around that 247-year average, and that crucial variation gets lost when all we look at is one statistic. Focusing only on averages does a real disservice to the great work published by Goldfinger et al. in 2012. They put their data through the statistical wringer, and different tests yielded different results. For example, in addition to the regressions mentioned above, they checked whether CSZ events occur in clusters, and the statistics suggest they do (although those clusters can span hundreds of years). They found that quiet periods between clusters can last 1,000 years or more, and that the most recent event, in 1700, was part of a cluster with four other earthquakes spanning roughly 1,500 years. No cluster had more than five earthquakes, so maybe the 1700 event closed out the most recent cluster and we have another 700 years or so to prep for the next one.
Or not. We don’t really know.
Determinism vs. randomness
Numbers tend to suggest accuracy and precision, but that's not always the case. The statistics presented in studies like Goldfinger et al. (2012) are the results of sophisticated calculations based on proxy data and a few assumptions. This isn't to downplay the work involved or the quality of the research (both were outstanding!), but the results need to be qualified a little bit, I think, since the public has latched onto this recurrence interval idea. Recurrence intervals are great for simplifying stochastic (i.e. random) processes, but we can't forget the underlying processes are still stochastic.
I think the point here is that the 247-year recurrence interval based on 41 data points over 10,000 years is too deterministic for such a random event, and I don’t think that was supposed to be the real takeaway from the 2012 Goldfinger et al. study. Fitting a barbed, odd-shaped stochastic process into a nice and neat square box of determinism means having to shave off its complexities and accept a certain level of inaccuracy, error, or assumption. This happens often, like every time we plot a stochastic event on a timeline. But earthquakes, especially the rare large ones, do not have an exactly predictable recurrence interval, and to suggest they do is oversimplifying a process that we are still very much learning about.
This isn’t always a bad thing. Some level of simplification is important when communicating scientific results that have real public benefit and implications, very much the case for Goldfinger et al. (2012). But we need to be careful not to let the simplifications get away scot-free. In this case, it gives too much weight to an average that really isn’t very good at predicting what happened in the past, let alone the future. The Goldfinger et al. study does not contradict the notion that earthquakes are unpredictable. It merely applies statistics to explore the temporal distribution of the 41 events they discovered over a 10,000-year timespan to help us understand them a little better. Some of these statistics are more helpful than others. The average interevent period, to me, is less helpful as a statistic to hang our hats on. At the end of the day, it’s 41 happenings over 10,000 years. I’m not sure that’s enough to start throwing down deterministic recurrence intervals for an event that is notoriously unpredictable (to be clear, I don’t think Goldfinger et al. did that, but the public surely has).
We could be cheeky, invert what The New Yorker did, and base our probability on the number of events divided by years: a 0.41% annual chance of a CSZ earthquake. Or do it by day, and the likelihood that any given day is a CSZ earthquake day is somewhere around 0.001%. But these are much too simplistic as well. A better interpretation of the numbers is probably that a catastrophic earthquake can happen at any time.
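The back-of-the-envelope arithmetic behind those percentages is just this, under the crude assumption that an event is equally likely in any year or on any day:

```python
# Crude "events per unit time" probabilities from the 10,000-year record.
# This assumes every year/day is equally likely, which the rest of this
# piece argues is an oversimplification.
events = 41
years = 10_000

annual_p = events / years            # 0.0041, i.e. a 0.41% chance per year
daily_p = events / (years * 365.25)  # ~0.0000112, i.e. ~0.001% per day
print(f"{annual_p:.2%} per year, {daily_p:.4%} per day")
```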
I encourage you to read the whole study (long, I know) because there really are some great statistical analyses in it, and the authors explain which ones they feel are the most appropriate. Spoiler alert: the 247-year recurrence interval is not it. For those of us in the Seattle region, the big takeaway from Goldfinger et al. comes from their analysis using a Weibull distribution. It showed an average recurrence interval of 530 years, but what they highlight in their conclusions is that if we don't have a CSZ event by 2060, we'll have outlasted only 25% of the previous interevent periods for our section of the CSZ. For the more active section along northern California and southern Oregon, this number grows to 85% by 2060. These statistics are more useful in my opinion, showing just how rare (or not rare) a CSZ event would be within our near future. Every year we don't have one becomes more and more of a statistical rarity, especially along the southern portions of the CSZ. But does that mean the probability increases after 2060? It's hard to say.
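For intuition, the Weibull piece boils down to a survival function, P(gap > t) = exp(−(t/λ)^k). The shape and scale below are placeholder values chosen for illustration, not the parameters Goldfinger et al. actually fit to their data:

```python
import math

def weibull_survival(t, shape, scale):
    """Probability that an interevent period exceeds t years under a Weibull model."""
    return math.exp(-((t / scale) ** shape))

# Placeholder parameters, NOT the fitted values from the study.
# A shape > 1 means the hazard rises the longer we go without an event.
p = weibull_survival(360, shape=1.5, scale=500)  # 360 years: roughly 1700 to 2060
print(round(p, 3))  # 0.543 with these made-up parameters
```

The interesting property is the shape parameter: with shape > 1, each quiet year makes the next interval increasingly unusual, which is the flavor of the "outlasted 25% of previous interevent periods" framing, without pretending we know the exact date.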
Even still, what is most exciting about the study is not really about stats. It’s that turbidites have been shown to be excellent proxies of earthquake dates and magnitude, as well as their potential for recording the original earthquake sequence, rupture pattern, and directivity. And Goldfinger et al. showcase a brilliant method for their collection and analysis. I’m not a geologist, but that’s an exciting development for anyone working to reduce natural hazard risk.
Finally, let’s be real here: trustworthy recurrence intervals or no, the CSZ will rupture again. We can nitpick statistics, but it is an inevitability. And the destruction outlined in The New Yorker article is in line with our best guess of how it will impact those of us in the Pacific Northwest. There is no questioning just how catastrophic it will be or how vastly important it is to make ourselves more resilient to it. Local, state, and federal agencies are working every day to mitigate earthquake and tsunami hazards around here, but they can’t do it all. If you’re living in the Pacific Northwest now, do yourself a favor and acquaint yourself with your personal seismic and tsunami risk. Make a plan that will save your life when that day comes.