Earthquake prediction

Earthquake prediction "is usually defined as the specification of the time, location, and magnitude of a future earthquake within stated limits", and particularly of "the next strong earthquake to occur in a region." This can be distinguished from earthquake forecasting, which is the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes, in a given area over periods of years or decades. This can be further distinguished from real-time earthquake warning systems, which, upon detection of a severe earthquake, can provide neighboring regions a few seconds warning of potentially significant shaking.

To be useful, an earthquake prediction must be precise enough to warrant the cost of increased precautions, including disruption of ordinary activities and commerce, and timely enough that preparation can be made. Predictions must also be reliable, as false alarms and canceled alarms are not only economically costly, but seriously undermine confidence in, and thereby the effectiveness of, any kind of warning.

With over 13,000 earthquakes of magnitude 4.0 or greater occurring around the world each year, trivial success in earthquake prediction is easily obtained using sufficiently broad parameters of time, location, or magnitude. However, such trivial "successful predictions" are not useful. Useful prediction of large (damaging) earthquakes in a timely manner is generally notable for its absence, the few claims of success being controverted. Extensive searches have reported many possible earthquake precursors, but none have been found to be reliable.

In the 1970s there was intense optimism amongst scientists that some method of predicting earthquakes might be found, but by the 1990s continuing failure led many scientists to question whether it was even possible. While many scientists still hold that, given enough resources, prediction might be possible, many others maintain that earthquake prediction is inherently impossible.

The problem of earthquake prediction

"Only fools and charlatans predict earthquakes."

Charles Richter

Definition and validity

The prediction of earthquakes is plagued from the outset by two problems: the definition of "prediction", and the definition of "earthquake". This might seem trivial, especially regarding the latter: it would seem that either the ground shakes, or it doesn't. But in seismically active areas the ground frequently shakes, just not hard enough for most people to notice.

Approximate number of earthquakes per year, globally

  Magnitude   Class      Number per year
  M ≥ 8       Great      1
  M 7–7.9     Major      15
  M 6–6.9     Large      134
  M 5–5.9     Moderate   1,319
  M 4–4.9     Small      ~13,000

Notable shaking of the earth's crust typically results from one earthquake of Richter magnitude scale 8 or greater (M ≥ 8) somewhere in the world each year (the four M ≥ 8 quakes in 2007 being exceptional), and another 15 or so "major" M ≥ 7 quakes (but 23 in 2010). The USGS reckons another 134 "large" quakes above M 6, and about 1300 quakes in the "moderate" range, from M 5 to M 5.9 ("felt by all, many frightened"). In the M 4 to M 4.9 range – "small" – it is estimated that there are 13,000 quakes annually. Quakes less than M 4 – noticeable to only a few persons, and possibly not recognized as an earthquake – number over a million each year, or roughly 150 per hour.
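The roughly tenfold increase in the number of earthquakes for each unit decrease in magnitude follows the well-known Gutenberg–Richter relation, log10 N = a - bM. As a rough illustration, here is a minimal sketch; the constants a and b are assumptions chosen to approximate the table above (b is close to 1 globally), not values from the source:

```python
def annual_count(magnitude, a=8.0, b=1.0):
    """Gutenberg-Richter estimate of the annual number of earthquakes
    of at least the given magnitude: log10(N) = a - b*M.
    a and b here are illustrative values only, chosen so the output is
    roughly consistent with the global counts tabulated above."""
    return 10 ** (a - b * magnitude)

for m in (8, 7, 6, 5, 4):
    print(f"M >= {m}: roughly {annual_count(m):,.0f} per year")
```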

With such a constant drumbeat of earthquakes, various kinds of chicanery can be used to deceptively claim "predictions" that appear more successful than is truly the case. E.g., predictions can be made that leave one or more parameters of location, time, and magnitude unspecified; these are subsequently adjusted to include whatever earthquakes do occur, and would more properly be called "postdictions". Alternatively, "pandictions" can be made, with parameters so broad as to likely match some earthquake, sometime, somewhere. These are indeed predictions, but trivial, meaningless for any purpose of foretelling, and quite useless for making timely preparations for "the next big one". Or multiple predictions – "multidictions" – can be made, each of which, alone, seems statistically unlikely; "success" derives from revealing, after the event, only those that prove successful.

To be meaningful, an earthquake prediction must be properly qualified, with unambiguous specification of time, location, and magnitude. These should be stated either as ranges ("windows", error bounds), with a weighting function, or with some definitive inclusion rule, so that there is no question whether any particular event is or is not included in the prediction. A prediction then cannot be retrospectively expanded to include an earthquake it would otherwise have missed, nor contracted to appear more significant than it really was. To show that a prediction is not post-selected ("cherry-picked") from a number of generally unsuccessful and unrevealed multidictions, it must be published in a manner that reveals all attempts at prediction, failures as well as successes.
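One way to picture such a "properly qualified" prediction is as a fixed record whose windows reduce inclusion to an unambiguous yes/no test. A minimal, hypothetical sketch (the class and field names are mine, not from the source):

```python
import math
from dataclasses import dataclass

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

@dataclass(frozen=True)  # frozen: the windows cannot be adjusted after the fact
class Prediction:
    t_start: float      # time window (e.g., Unix timestamps)
    t_end: float
    lat: float          # center of the predicted region
    lon: float
    radius_km: float    # location window
    mag_min: float      # magnitude window
    mag_max: float

    def includes(self, t, lat, lon, mag):
        """Definitive inclusion rule: an event either satisfies all three
        windows (time, location, magnitude) or it is not a success."""
        return (self.t_start <= t <= self.t_end
                and self.mag_min <= mag <= self.mag_max
                and haversine_km(self.lat, self.lon, lat, lon) <= self.radius_km)
```

Freezing the record is the point: a published prediction of this form cannot be retrospectively widened into a "postdiction" or narrowed to look more impressive.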

To be deemed "scientific" a prediction should be based on some kind of natural process, and derived in a manner such that any other researcher using the same method would obtain the same result. Scientists are also expected to state their confidence in the reliability of the prediction, and their estimate of an earthquake happening in the prediction window by chance (discussed below).

2x2 contingency table showing the four possible outcomes

A prediction ("alarm") can be made, or not, and an earthquake may occur, or not; these basic possibilities are shown in the contingency table at right. Once the various outcomes are tabulated various performance measures can be calculated. E.g., the success rate is the proportion of all predictions which were successful , while the Hit rate (or alarm rate) is the proportion of all events which were successfully predicted . The false alarm ratio is the proportion of predictions which are false . This is not to be confused with the false alarm rate, which is the proportion of all non-events incorrectly "alarmed" .

These performance measures can be manipulated by adjusting the level (threshold) at which a prediction (alarm) is made. Raising the level improves the success rate (fewer predictions, but a greater percentage of them are successful), but also results in more missed earthquakes (type II errors). Lowering the level improves the hit rate (more likely to catch an earthquake), but also results in more false alarms (type I errors). There is no inherently "right" level; the acceptable trade-off between missed quakes and false alarms depends on the societal valuation of these outcomes. As either the success rate or the hit rate can be improved at the expense of the other, both should be considered when evaluating a prediction method.
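A minimal sketch of these four measures, using the conventional 2x2 cell counts (the variable names a, b, c, d are mine: hits, false alarms, misses, and correct non-alarms respectively):

```python
def performance(a, b, c, d):
    """Performance measures from a 2x2 contingency table:
    a = alarm and earthquake (hit)
    b = alarm but no earthquake (false alarm)
    c = earthquake but no alarm (miss)
    d = no alarm, no earthquake (correct negative)"""
    return {
        "success rate":      a / (a + b),   # successful fraction of all predictions
        "hit rate":          a / (a + c),   # fraction of all events predicted
        "false alarm ratio": b / (a + b),   # false fraction of all predictions
        "false alarm rate":  b / (b + d),   # fraction of all non-events alarmed
    }

# Illustrative counts only: note that the success rate and hit rate can
# differ greatly, which is why both must be reported.
print(performance(a=3, b=7, c=12, d=978))
```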

Significance

"All predictions of the future can be to some extent successful by chance."

Mulargia & Gasperini 1992

While the actual occurrence – or non-occurrence – of a specified earthquake might seem sufficient for evaluating a prediction, scientists understand there is always a chance, however small, of getting lucky. A prediction is significant only to the extent it is successful beyond chance. They therefore use statistical methods to determine the probability that an earthquake such as the one predicted would happen anyway (the null hypothesis). They then evaluate whether the prediction – or a series of predictions produced by some method – correlates with actual earthquakes better than the null hypothesis.

A null hypothesis must be chosen carefully. E.g., many studies have naively assumed that earthquakes occur randomly. But earthquakes do not occur randomly: they often cluster in both space and time. In southern California it has been estimated that about 6% of M≥3.0 earthquakes are "followed by an earthquake of larger magnitude within 5 days and 10 km." It has been estimated that in central Italy 9.5% of M≥3.0 earthquakes are followed by a larger event within 30 km and 48 hours. While such statistics are not satisfactory for purposes of prediction (in giving ten to twenty false alarms for each successful prediction) they will skew the results of any analysis that assumes a random (Poisson) distribution of earthquakes. It has been shown that a "naive" method based solely on clustering can successfully predict about 5% of earthquakes. Perhaps not as successful as a stopped clock, but still better, however slightly, than pure chance.
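The probability of a "lucky" success can be estimated against a Poisson null hypothesis, as in the sketch below. Per the caveat above, real seismicity is clustered, so a purely Poisson null will understate how often naive predictions succeed by chance; the rate and window here are assumptions for illustration:

```python
import math

def chance_of_hit(rate_per_year, window_days):
    """Probability that at least one qualifying earthquake falls within
    a prediction window purely by chance, under a Poisson (unclustered)
    null hypothesis with the given background rate."""
    lam = rate_per_year * window_days / 365.25
    return 1 - math.exp(-lam)

# With, say, 5 qualifying quakes per year in the prediction region,
# a 30-day window "succeeds" by chance about a third of the time:
print(f"{chance_of_hit(5, 30):.0%}")
```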

Use of an incorrect null hypothesis is only one of the ways in which many studies claiming a low but significant level of success in predicting earthquakes are statistically flawed. To avoid these and other problems, the Collaboratory for the Study of Earthquake Predictability (CSEP) has developed a means of rigorously and consistently conducting and evaluating earthquake prediction experiments: scientists submit a prediction method, which is then prospectively evaluated against an authoritative catalog of observed earthquakes.

Consequences

Predictions of major earthquakes by those claiming psychic premonitions are commonplace, generally not credible, and create little disturbance. Predictions by those with scientific or even pseudo-scientific credentials, however, often cause serious social and economic disruption, and pose a great quandary for both scientists and public officials.

Some possibly predictive precursors – such as a sudden increase in seismicity – may give only a few hours of warning, allowing little deliberation and consultation with others. As the purpose of short-term prediction is to enable emergency measures to reduce death and destruction, failure to give warning of a major earthquake that does occur, or at least an adequate evaluation of the hazard, can result in legal liability, or even political purging. But giving a warning – crying "wolf!" – of an earthquake that does not occur also incurs a cost: not just the cost of the emergency measures themselves, but of major civil and economic disruption. Geller describes the arrangements made in Japan:

... if ‘anomalous data’ are recorded, an ‘Earthquake Assessment Committee’ (EAC) will be convened within two hours. Within 30 min the EAC must make a black (alarm) or white (no alarm) recommendation. The former would cause the Prime Minister to issue the alarm, which would shut down all expressways, bullet trains, schools, factories, etc., in an area covering seven prefectures. Tokyo would also be effectively shut down.

The cost of such measures has been estimated at US$7 billion per day.

The quandary is that even when increased seismicity suggests that an earthquake is imminent in a given area, there is no way of knowing definitely whether there will be a larger quake of any given magnitude, or when. If scientists and the civil authorities knew that (for instance) in some area there was an 80% chance of a large (M > 6) earthquake in a matter of a day or two, they would see a clear benefit in issuing an alarm. But is it worth the cost of civil and economic disruption and possible panic, and the corrosive effect a false alarm has on future alarms, if the chance is only 5%?

The Dilemma: To Alarm? or Not to Alarm?

Some of the trade-offs can be seen in the chart at right. By lowering "The Bar" – the threshold at which an alarm is issued – the chances of being caught off-guard are reduced, and the potential loss of life and property may be mitigated. But this also increases the number, and cost, of false alarms. If the threshold is set for an estimated one chance in ten of a quake, the other nine chances will be false alarms. Such a high rate of false alarms is a public policy issue itself, which has not yet been resolved.
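In its crudest economic form, the alarm decision is an expected-value comparison: alarm when probability times preventable loss exceeds the cost of the alarm itself. A toy sketch; all figures are placeholders except the $7 billion per day shutdown cost quoted above and the 80% and 5% probabilities from the preceding discussion:

```python
def worth_alarming(p_quake, preventable_loss, alarm_cost):
    """A purely economic alarm criterion: alarm when the expected loss
    prevented exceeds the alarm's own cost. This ignores the
    harder-to-price erosion of public confidence from false alarms."""
    return p_quake * preventable_loss > alarm_cost

# Assuming $100e9 of preventable losses (a placeholder) and a $7e9
# one-day shutdown cost, an 80% probability justifies an alarm
# but a 5% probability does not:
print(worth_alarming(0.80, 100e9, 7e9))  # True
print(worth_alarming(0.05, 100e9, 7e9))  # False
```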

To avoid the all-or-nothing ("black/white") kind of response the California Earthquake Prediction Evaluation Council (CEPEC) has used a notification protocol where short-term advisories of possible major earthquakes (M ≥ 7) can be provided at four levels of probability.

Prediction methods

Earthquake prediction, as a branch of seismology, is an immature science in the sense that it cannot predict from first principles the location, date, and magnitude of an earthquake. Research in this area therefore seeks to empirically derive a reliable basis for predictions in either distinct precursors, or some kind of trend or pattern.

Precursors

"... there is growing empirical evidence that precursors exist."

— Frank Evison, 1999

"The search for diagnostic precursors has thus far been unsuccessful."

— ICEF, 2011

An earthquake precursor could be any anomalous phenomenon that gives effective warning of the imminence or severity of an impending earthquake in a given area. Reports of premonitory phenomena – though generally recognized as such only after the event – number in the thousands, some dating back to antiquity. In the scientific literature there have been around 400 reports of possible precursors, of roughly twenty different types, running the gamut "from aeronomy to zoology". But the search for reliable precursors has yet to produce a convincing success. When the IASPEI solicited nominations for a "Preliminary List of Significant Precursors" in the early 1990s, 40 nominations were made; five were selected as possible significant precursors, with two of those based on a single observation each.

After a critical review of the scientific literature the International Commission on Earthquake Forecasting for Civil Protection (ICEF) concluded in 2011 there was "considerable room for methodological improvements in this type of research." Particularly:

In many cases of purported precursory behavior, the reported observational data are contradictory and unsuitable for a rigorous statistical evaluation. One related problem is a bias towards publishing positive rather than negative results, so that the rate of false negatives (earthquake but no precursory signal) cannot be ascertained. A second is the frequent lack of baseline studies that establish noise levels in the observational time series.

Although none of the following precursors has been convincingly successful, they illustrate both the various kinds of phenomena that have been examined and the optimism that generally attaches to any report of a possible precursor.

Animal behavior

There are many accounts of unusual phenomena prior to an earthquake, especially reports of anomalous animal behavior. One of the earliest is from the Roman writer Claudius Aelianus concerning the destruction of the Greek city of Helike by earthquake and tsunami in 373 BC:

For five days before Helike disappeared, all the mice and martens and snakes and centipedes and beetles and every other creature of that kind in the city left in a body by the road that leads to Keryneia. ... But after these creatures had departed, an earthquake occurred in the night; the city subsided; an immense wave flooded and Helike disappeared....

Aelianus wrote this in the Second Century, some 500 years after the event, so his somewhat fantastical account is necessarily about the myth that developed, not an eyewitness account.

Scientific observation of such phenomena is limited by the difficulty of performing an experiment, let alone repeating one. Yet there was a fortuitous case in 1992: some biologists happened to be studying the behavior of an ant colony when the Landers earthquake struck only 100 km (60 mi) away. Despite severe ground shaking, the ants seemed oblivious to the quake itself, as well as to any precursors.

In an earlier study, researchers monitored rodent colonies at two seismically active locations in California. In the course of the study there were several moderate quakes, and there was anomalous behavior. However, the latter was coincident with other factors; no connection with an earthquake could be shown.

Given these results one might wonder about the many reports of precursory anomalous animal behavior following major earthquakes. Such reports are often given wide exposure by the major media, and almost universally cast in the form of animals predicting the subsequent earthquake, often with the suggestion of some "sixth sense" or other unknown power. However, the time element is critically important: how much warning? Earthquakes radiate multiple kinds of seismic waves. The "P" (primary) waves travel through the earth's crust about twice as fast as the "S" (secondary) waves, so they arrive first; the greater the distance, the greater the delay between them. For an earthquake strong enough to be felt over several hundred kilometers (approximately M > 5) this can amount to some tens of seconds of difference. The P waves are also weaker, and often go unnoticed by people. Thus the signs of alarm reported in the animals at the National Zoo in Washington, D.C., some five to ten seconds prior to the shaking from the M 5.8 2011 Virginia earthquake, were undoubtedly prompted by the P waves. This was not so much a prediction as a warning of shaking from an earthquake that had already happened.
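The warning time available from this wave-speed difference is easy to estimate: with typical crustal velocities the S wave lags the P wave by very roughly one second for every 8 km of distance. A sketch with assumed velocities (Vp/Vs is near 1.7 for typical crustal rock, so "about twice as fast" is an approximation):

```python
def s_minus_p_delay_s(distance_km, vp=6.0, vs=3.5):
    """Seconds between the P-wave and S-wave arrivals at a given distance.
    vp and vs (km/s) are typical crustal values, assumed for illustration."""
    return distance_km / vs - distance_km / vp

# At 150 km the stronger S-wave shaking arrives about 18 seconds after
# the P wave, consistent with the 'tens of seconds' figure in the text:
print(f"{s_minus_p_delay_s(150):.0f} s")
```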

As to reports of longer-term anticipations, these are rarely amenable to any kind of study. There was an intriguing story in the 1980s that spikes in lost pet advertisements in the San Jose Mercury News portended an increased chance of an earthquake within 70 miles of downtown San Jose. This was a very testable hypothesis, being based on quantifiable, objective, publicly available data, and it was tested by Schaal (1988) – who found no correlation. Another study looked at reports of anomalous animal behavior reported to a hotline prior to an earthquake, but found no significant increase that could be correlated with a subsequent earthquake.

After reviewing the scientific literature the ICEF concluded in 2011 that

there is no credible scientific evidence that animals display behaviors indicative of earthquake-related environmental disturbances that are unobservable by the physical and chemical sensor systems available to earthquake scientists.

Changes in Vp/Vs

Vp is the symbol for the velocity of a seismic "P" (primary or pressure) wave passing through rock, while Vs is the symbol for the velocity of the "S" (secondary or shear) wave. Small-scale laboratory experiments have shown that the ratio of these two velocities – represented as Vp/Vs – changes when rock is near the point of fracturing. In the 1970s it was considered a significant success and likely breakthrough when Russian seismologists reported observing such changes in the region of a subsequent earthquake. This effect, as well as other possible precursors, has been attributed to dilatancy, where rock stressed to near its breaking point expands (dilates) slightly.
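In practice Vp/Vs can be estimated for a single earthquake, without knowing its origin time, by means of a Wadati diagram: across several stations, the S−P arrival-time difference plotted against the P arrival time falls on a line of slope Vp/Vs − 1. A minimal sketch with synthetic arrivals (all numbers are illustrative assumptions):

```python
def vp_vs_ratio(p_arrivals, s_arrivals):
    """Wadati-diagram estimate of Vp/Vs for one event: the least-squares
    slope of (tS - tP) against tP, plus one."""
    n = len(p_arrivals)
    diffs = [s - p for p, s in zip(p_arrivals, s_arrivals)]
    mean_p = sum(p_arrivals) / n
    mean_d = sum(diffs) / n
    num = sum((p - mean_p) * (d - mean_d) for p, d in zip(p_arrivals, diffs))
    den = sum((p - mean_p) ** 2 for p in p_arrivals)
    return 1 + num / den

# Synthetic test: origin time 0, Vp = 6.0 km/s, Vs = 3.46 km/s
distances = [20, 45, 80, 120]                 # station distances, km
p = [d / 6.0 for d in distances]              # P arrival times, s
s = [d / 3.46 for d in distances]             # S arrival times, s
print(f"Vp/Vs = {vp_vs_ratio(p, s):.2f}")     # ~1.73
```

A precursory drop in Vp/Vs, as reported in the dilatancy studies, would show up as a temporary decrease in this slope for quakes in the affected volume.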

Study of this phenomenon near Blue Mountain Lake (New York) led to a successful prediction in 1973. However, no additional successes there have followed, and it has been suggested that the prediction was only a lucky fluke. A Vp/Vs anomaly was the basis of Whitcomb's 1976 prediction of an M 5.5 to 6.5 earthquake near Los Angeles, which failed to occur. Other studies relying on quarry blasts (more precise, and repeatable) found no such variations, and an alternative explanation has been reported for such variations as have been observed. Geller (1997) noted that reports of significant velocity changes have ceased since about 1980.

Radon emissions

Most rock contains small amounts of gases that can be isotopically distinguished from the normal atmospheric gases. There are reports of spikes in the concentrations of such gases prior to a major earthquake; this has been attributed to release due to pre-seismic stress or fracturing of the rock. One of these gases is radon, produced by radioactive decay of the trace amounts of uranium present in most rock.

Radon is attractive as a potential earthquake predictor because, being radioactive, it is easily detected, and its short half-life (3.8 days) makes it sensitive to short-term fluctuations. A 2009 review found 125 reports of changes in radon emissions prior to 86 earthquakes since 1966. But as the ICEF found in its review, the earthquakes with which these changes were supposedly linked were up to a thousand kilometers away, months later, and at all magnitudes. In some cases the anomalies were observed at a distant site, but not at closer sites. The ICEF found "no significant correlation". Another review concluded that in some cases changes in radon levels preceded an earthquake, but a correlation is not yet firmly established.
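Radon's 3.8-day half-life is what makes it a short-term indicator: an emission spike decays away within a couple of weeks rather than accumulating, so a sustained anomaly implies continued release. A worked example of the standard decay law:

```python
RADON_HALF_LIFE_DAYS = 3.8  # radon-222

def fraction_remaining(days):
    """Fraction of an initial radon quantity remaining after `days`,
    from the exponential decay law N(t) = N0 * 2**(-t / half_life)."""
    return 2 ** (-days / RADON_HALF_LIFE_DAYS)

# A spike decays to half in 3.8 days, ~16% in 10 days, and well
# under 1% in a month:
for d in (1, 3.8, 10, 30):
    print(f"after {d:>4} days: {fraction_remaining(d):.1%}")
```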

Electro-magnetic variations

Various attempts have been made to identify pre-seismic variations in electrical, electric-resistive, or magnetic phenomena. The most touted, and most criticized, is the VAN method of professors P. Varotsos, K. Alexopoulos and K. Nomicos – "VAN" – of the National and Capodistrian University of Athens. In a 1981 paper they claimed that by measuring geoelectric voltages – what they called "seismic electric signals" (SES) – they could predict earthquakes of magnitude larger than 2.8 within all of Greece up to 7 hours beforehand. Later the claim changed to being able to predict earthquakes larger than magnitude 5, within 100 km of the epicentral location, within 0.7 units of magnitude, and in a 2-hour to 11-day time window. Subsequent papers claimed a series of successful predictions. Despite these claims, the VAN group generated intense public criticism in the 1980s by issuing telegram warnings, a large number of which were false alarms.

Objections have been raised that the physics of the claimed process is not possible. For example, none of the earthquakes which VAN claimed were preceded by SES generated SES themselves, as would have been expected. Further, an analysis of the wave propagation properties of SES in the Earth’s crust showed that it would have been impossible for signals with the amplitude reported by VAN to have been generated by small earthquakes and transmitted over the several hundred kilometers distances from the epicenter to the monitoring station.

Several authors have pointed out that VAN's publications are characterized by a failure to account for (identify and eliminate) possible sources of electromagnetic interference (EMI) in their measuring system. Taken as a whole, the VAN method has been criticized for inconsistency in the statistical testing of its hypotheses. In particular, there has been some contention over which catalog of seismic events to use in vetting predictions. Depending on the catalog chosen, it can be concluded that, for example, of 22 claims of successful prediction by VAN, 74% were false, 9% correlated at random, and for 14% the correlation was uncertain.

In 1996 the journal Geophysical Research Letters presented a debate on the statistical significance of the VAN method; the majority of reviewers found the methods of VAN to be flawed, and the claims of successful predictions statistically insignificant. In 2001, the VAN method was modified to include time series analysis, and Springer published an overview in 2011. This updated method has not yet been independently critiqued or verified.

Further information: VAN method

In addition to terrestrial electro-magnetic variations, earthquake activity is correlated with some electromagnetic atmospheric phenomena. Notably, ionospheric disturbances producing electromagnetic signals precede some major seismic events by a few hours to days. These signal anomalies are small in magnitude and therefore difficult to study. A satellite launch is planned by China in 2014 to provide ionospheric data that may be compared with a ground-based seismic monitoring network, as part of China's earthquake program.

Trends

Instead of watching for anomalous phenomena that might be precursory signs of an impending earthquake, other approaches look for trends or patterns that lead up to an earthquake. As these trends may be complex and involve many variables, advanced statistical techniques are often needed to understand them, which is why these are sometimes called statistical methods. These approaches also tend to be more probabilistic, and to cover longer time periods, and so verge into earthquake forecasting.

Elastic rebound

Even the stiffest of rock is not perfectly rigid. Given a large enough force (such as between two immense tectonic plates moving past each other) the earth's crust will bend or deform. What happens next is described by the elastic rebound theory of Reid (1910): eventually the deformation (strain) becomes great enough that something breaks, usually at an existing fault. Slippage along the break (an earthquake) allows the rock on each side to rebound to a less deformed state, but now offset, thereby accommodating inter-plate motion. In the process energy is released in various forms, including seismic waves. The cycle of tectonic force being accumulated in elastic deformation and released in a sudden rebound is then repeated. As the displacement from a single earthquake ranges from less than a meter to around 10 meters (for an M 8 quake), the demonstrated existence of large strike-slip displacements of hundreds of miles shows the existence of a long-running earthquake cycle.
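The elastic rebound cycle also implies a back-of-the-envelope recurrence estimate: the repeat time is roughly the slip released in one earthquake divided by the long-term rate at which the plates load the fault. A sketch with illustrative numbers (the 4 m and 25 mm/yr below are assumptions, not figures from the source):

```python
def recurrence_years(slip_per_event_m, loading_rate_mm_per_yr):
    """Mean repeat time implied by elastic rebound: the fault must
    re-accumulate the slip released in one earthquake at the
    long-term relative plate rate."""
    return slip_per_event_m * 1000 / loading_rate_mm_per_yr

# E.g., ~4 m of slip per event on a fault loading at 25 mm/yr implies
# a repeat time on the order of 160 years:
print(f"{recurrence_years(4.0, 25):.0f} years")
```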

Characteristic earthquakes

The most studied earthquake faults (such as the Wasatch fault and San Andreas fault) appear to have distinct segments. The characteristic earthquake model postulates that earthquakes are generally constrained within these segments. As the lengths and other characteristics of the segments are fixed, earthquakes that rupture the entire fault should have similar characteristics, including the maximum magnitude (which is limited by the length of the rupture) and the amount of accumulated strain needed to rupture the fault segment. Since strain accumulates steadily, it seems a fair inference that seismic activity on a given segment should be dominated by earthquakes of similar characteristics recurring at somewhat regular intervals. For a given fault segment, identifying these characteristic earthquakes and measuring their recurrence interval should therefore indicate when to expect the next rupture; this is the approach generally used in forecasting seismic hazard.

This is essentially the basis of the Parkfield prediction: fairly similar earthquakes in 1857, 1881, 1901, 1922, 1934, and 1966 suggested a pattern of breaks every 21.9 years, with a standard deviation of ±3.1 years. Extrapolation from the 1966 event led to a prediction of an earthquake around 1988, or before 1993 at the latest (at a 95% confidence level). The appeal of such a method is that it is derived entirely from the trend, which supposedly accounts for the unknown and possibly unknowable earthquake physics and fault parameters. However, in the Parkfield case the predicted earthquake did not occur until 2004, a decade late. This seriously undercuts the claim that earthquakes at Parkfield are quasi-periodic, and suggests the individual events differ sufficiently in other respects to question whether they have distinct characteristics.
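The Parkfield arithmetic can be reproduced in outline from the listed dates: compute the intervals, take their mean, and extrapolate from 1966. Note that this naive calculation gives the same roughly 22-year mean but a larger scatter than the published ±3.1 years, which involved adjustments (notably special treatment of the 1934 event, discussed later in this article):

```python
import statistics

years = [1857, 1881, 1901, 1922, 1934, 1966]
intervals = [b - a for a, b in zip(years, years[1:])]  # [24, 20, 21, 12, 32]

mean = statistics.mean(intervals)   # 21.8 years
sd = statistics.stdev(intervals)    # ~7.2 years: the naive scatter, larger
                                    # than the published +/- 3.1
print(f"mean interval: {mean:.1f} yr, scatter: {sd:.1f} yr")
print(f"next expected: ~{1966 + mean:.0f}")  # ~1988, as in the prediction
```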

The failure of the Parkfield prediction has raised doubt as to the validity of the characteristic earthquake model itself. Some studies have questioned the various assumptions, including the key one that earthquakes are constrained within segments, and suggested that the "characteristic earthquakes" may be an artifact of selection bias and the shortness of seismological records (relative to earthquake cycles). Other studies have considered whether other factors need to be considered, such as the age of the fault. Whether earthquake ruptures are more generally constrained within a segment (as is often seen), or break past segment boundaries (also seen), has a direct bearing on the degree of earthquake hazard: earthquakes are larger where multiple segments break, but in relieving more strain they will happen less often.

Seismic gaps

At the contact where two tectonic plates slip past each other every section must eventually slip, as (in the long-term) none get left behind. But they do not all slip at the same time; different sections will be at different stages in the cycle of strain (deformation) accumulation and sudden rebound. In the seismic gap model the "next big quake" should be expected not in the segments where recent seismicity has relieved the strain, but in the intervening gaps where the unrelieved strain is the greatest. This model has an intuitive appeal; it is used in long-term forecasting, and was the basis of a series of circum-Pacific (Pacific Rim) forecasts in 1979 and 1989–1991.

It has been asked: "How could such an obvious, intuitive model not be true?" Possibly because some of the underlying assumptions are not correct. A close examination suggests that "there may be no information in seismic gaps about the time of occurrence or the magnitude of the next large event in the region"; statistical tests of the circum-Pacific forecasts show that the seismic gap model "did not forecast large earthquakes well". Another study concluded: "The hypothesis of increased earthquake potential after a long quiet period can be rejected with a large confidence."

Seismicity patterns (M8, AMR)

Various heuristically derived algorithms have been developed for predicting earthquakes. Probably the most widely known is the M8 family of algorithms (including the RTP method) developed under the leadership of Vladimir Keilis-Borok. M8 issues "Time of Increased Probability" (TIP) alarms for a large earthquake of a specified magnitude upon observing certain patterns of smaller earthquakes. TIPs generally cover large areas (up to a thousand kilometers across) for up to five years. Such large parameters have made M8 controversial, as it is hard to determine whether any hits that happen were skillfully predicted, or only the result of chance.

M8 gained considerable attention when the 2003 San Simeon and Hokkaido earthquakes occurred within a TIP. But a widely publicized TIP for an M 6.4 quake in Southern California in 2004 was not fulfilled, nor were two other lesser-known TIPs. A detailed study of the RTP method in 2008 found that of some twenty alarms only two could be considered hits (and one of those had a 60% chance of happening anyway). It concluded that "RTP is not significantly different from a naïve method of guessing based on the historical rates of seismicity."

Accelerating moment release (AMR, "moment" being a measurement of seismic energy), also known as time-to-failure analysis, or accelerating seismic moment release (ASMR), is based on observations that foreshock activity prior to a major earthquake not only increased, but increased at an exponential rate. That is: a plot of the cumulative number of foreshocks gets steeper just before the main shock.

Following its formulation by Bowman et al. (1998) into a testable hypothesis, and a number of positive reports, AMR seemed to have a promising future. This was despite several problems, including that it was not detected for all locations and events, and the difficulty of projecting an accurate occurrence time when the tail end of the curve gets steep. But more rigorous testing has shown that apparent AMR trends likely result from how data fitting is done, and from failing to account for the spatiotemporal clustering of earthquakes; the AMR trends are statistically insignificant. Interest in AMR (as judged by the number of peer-reviewed papers) is reported to have fallen off since 2004.
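The AMR hypothesis is usually formalized as a power-law "time-to-failure" relation for the cumulative Benioff strain, of the form strain(t) = A - B*(tf - t)^m with m well below 1, so the curve steepens sharply as t approaches the failure time tf. A sketch of the functional form (all parameter values here are illustrative, not fitted):

```python
def cumulative_benioff_strain(t, t_failure, A=100.0, B=30.0, m=0.3):
    """Power-law time-to-failure form used in accelerating moment release:
    strain(t) = A - B * (t_failure - t)**m. With m < 1 the cumulative
    curve steepens sharply as t approaches t_failure."""
    return A - B * (t_failure - t) ** m

# Most of the acceleration is concentrated near the end, which is why
# projecting t_failure from the earlier, nearly linear part of the
# curve is so difficult:
for t in (0, 5, 9, 9.9):
    print(f"t = {t:>4}: strain = {cumulative_benioff_strain(t, 10):.1f}")
```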

Notable predictions

These are predictions, or claims of predictions, that are notable either scientifically or because of public notoriety, and claim a scientific or quasi-scientific basis, or invoke the authority of a scientist. To be judged successful, a prediction must be a proper prediction, published before the predicted event, and the event must occur within the specified time, location, and magnitude parameters. As many predictions are held confidentially, or published in obscure locations, and become notable only when success is claimed, there may be some selection bias in that hits get more attention than misses.

1973: Blue Mountain Lake, USA

A team studying earthquake activity at Blue Mountain Lake (BML), New York, made a prediction on August 1, 1973, that "an earthquake of magnitude 2.5–3 would occur in a few days." And: "At 2310 UT on August 3, 1973, a magnitude 2.6 earthquake occurred at BML". According to the authors, this is the first time the approximate time, place, and size of an earthquake were successfully predicted in the United States.

It has been suggested that the pattern they observed may have been a statistical fluke, that just happened to get out in front of a chance earthquake. It seems significant that there has never been a second prediction from Blue Mountain Lake; this prediction now appears to be largely discounted.

Earthquake prediction ... appears to be on the verge of practical reality....

Scholz, Sykes & Aggarwal 1973

1974: Hollister, USA

On the evening of 27 November 1974, Malcolm Johnston described to an informal group of earth scientists some data collected near Hollister, California, showing deformation of the earth's surface such as might portend an earthquake. Asked when this might happen, colleague John H. Healy jested: "Maybe tomorrow." The timing could not have been better, as the next day (Thanksgiving) there was an M 5.2 earthquake near Hollister.

Though described as successful, this informal statement was not made as a prediction, and fell short as one: it was indefinite in regard to time (maybe "tomorrow" – or maybe a decade later?) and magnitude. Yet the expectation was significant, as it was prompted by observation of two kinds of possible precursors (tilt and geomagnetic), and there was a sense that had they been able to process their data sooner they might have been able to make a formal prediction. Later a third possible (although somewhat doubtful) precursor (a Vp/Vs variation) was discovered. That three possible precursors seemed to be in accord was considered "most encouraging" at the USGS. Coupled with the success at Blue Mountain Lake a year earlier and the reports from Haicheng six months later, this fostered much optimism among earth scientists in the late 1970s that short-term earthquake prediction would soon be attainable. It prompted a conference in 1975 to consider the public policy implications of earthquake forecasts in the United States.

1975: Haicheng, China

The M 7.3 Haicheng (China) earthquake of 4 February 1975 is the most widely cited "success" of earthquake prediction, and is cited as such in several textbooks. The putative story is that study of seismic activity in the region led the Chinese authorities to issue both a medium-term prediction in June 1974 and, following a number of foreshocks up to M 4.7 the previous day, a short-term prediction on February 4 that a large earthquake might occur within one to two days. The political authorities (so the story goes) therefore ordered various measures taken, including enforced evacuation of homes, construction of "simple outdoor structures", and the showing of movies out of doors. Although the quake, striking at 19:36, was powerful enough to destroy or badly damage about half of the homes, the "effective preventative measures taken" were said to have kept the death toll under 300, in a population of about 1.6 million where tens of thousands of fatalities might otherwise have been expected.

However, although a major earthquake certainly occurred, there has been some skepticism about this narrative of effective measures taken on the basis of a timely prediction. This was during the Cultural Revolution, when "belief in earthquake prediction was made an element of ideological orthodoxy that distinguished the true party liners from right wing deviationists" and record keeping was disordered, making it difficult to verify details of the claim, even as to whether there was an ordered evacuation. The method used for either the medium-term or short-term predictions (other than "Chairman Mao's revolutionary line") has not been specified. It has been suggested that the evacuation was spontaneous, following the strong (M 4.7) foreshock that occurred the day before.

Many of the missing details have been filled in by a 2006 report that had access to an extensive range of records. This study found that the predictions were flawed. "In particular, there was no official short-term prediction, although such a prediction was made by individual scientists." Also: "it was the foreshocks alone that triggered the final decisions of warning and evacuation". The light loss of life (which they set at 2,041) is attributed to a number of fortuitous circumstances, including earthquake education in the previous months (prompted by elevated seismic activity), local initiative, timing (occurring when people were neither working nor asleep), and local style of construction. The authors conclude that, while unsatisfactory as a prediction, "it was an attempt to predict a major earthquake that for the first time did not end up with practical failure."

Further information: 1975 Haicheng earthquake

"... routine announcement of reliable predictions may be possible within 10 years...."

— NAS Panel on Earthquake Prediction, 1976

1976: Southern California, USA (Whitcomb)

On April 15, 1976, Dr. James Whitcomb presented a scientific paper that found, based on changes in Vp/Vs (seismic wave velocities), an area northeast of Los Angeles along the San Andreas fault was "a candidate for intensified geophysical monitoring". He presented this not as a prediction that an earthquake would happen, but as a test of whether an earthquake would happen, as might be predicted on the basis of Vp/Vs. This distinction was generally lost; he was and has been held to have predicted an earthquake of magnitude 5.5 to 6.5 within 12 months.

The area identified by Whitcomb was quite large, and overlapped the area of the Palmdale Bulge, an apparent uplift (later discounted) which was causing some concern as a possible precursor of a large earthquake on the San Andreas fault. Both the uplift and the changes in seismic velocities were predicted by the then-current dilatancy theory, although Whitcomb emphasized that his "hypothesis test" was based solely on the seismic velocities, and that he regarded the theory as unproven.

Whitcomb subsequently withdrew the prediction, as continuing measurements no longer supported it. No earthquake of the specified magnitude occurred within the specified area or time.

1976–1978: South Carolina, USA

Towards the end of 1975 a number of minor earthquakes in an area of South Carolina (USA) not known for seismic activity were linked to the filling of a new reservoir (Lake Jocassee). Changes in the Vp/Vs ratios of the seismic waves were observed with these quakes. Based on additional Vp/Vs changes observed between 30 December 1975 and 7 January 1976 an earthquake prediction was made on 12 January. A magnitude 2.5 event occurred on 14 January 1976.

In the course of a three-year study a second prediction was claimed successful for an M 2.2 earthquake on November 25, 1977. However, this was only "two-thirds" successful (in respect of time and magnitude) as it occurred 7 km outside of the prediction location.

This study also evaluated other precursors, such as the M8 algorithm, which "predicted" neither of these two events, and changes in radon emissions, which showed possible correlation with other events, but not with these two.

"... at least 10 years, perhaps more, before widespread, reliable prediction of major earthquakes is achieved."

– Richard Kerr, 1978

1978: Izu-Oshima-Kinkai, Japan

On 14 January 1978 a swarm of intense microearthquakes prompted the Japan Meteorological Agency (JMA) to issue a statement suggesting that precautions for the prevention of damage might be considered. This was not a prediction, but coincidentally it was made just 90 minutes before the M 7.0 Izu-Oshima-Kinkai earthquake. It was subsequently, but incorrectly, claimed as a successful prediction by Hamada (1991), and again by Roeloffs & Langbein (1994).

1978: Oaxaca, Mexico

Ohtake, Matumoto & Latham (1981) claimed:

The rupture zone and type of the 1978 Oaxaca, southern Mexico earthquake (Ms = 7.7) were successfully predicted based on the premonitory quiescence of seismic activity and the spatial and temporal relationships of recent large earthquakes.

However, the 1977 paper on which the claim is based said only that the "most probable" area "may be the site of a future large earthquake"; a "firm prediction of the occurrence time is not attempted." This prediction is therefore incomplete, making its evaluation difficult.

After re-analysis of the region's seismicity Garza & Lomnitz (1979) concluded that though there was a slight decrease in seismicity, it was within the range of normal random variation, and did not amount to a seismic gap (the basis of the prediction). The validity of the prediction is further undermined by a report that the apparent lack of seismicity was due to a cataloging omission.

This prediction had an unusual twist in that its announcement by the University of Texas (UT) – for a destructive earthquake in Oaxaca at an undetermined date – came just ten days before the date given by another prediction for a destructive earthquake in Oaxaca. This other prediction had been made by a pair of Las Vegas gamblers; the local authorities had deemed it not credible, and decided to ignore it. The announcement from a respectable institution unfortunately confirmed (for many people) the more specific prediction; this appears to have caused some panic. On the day of the amateur prediction there was an earthquake, but only a distinctly non-destructive M 4.2, which, as one mayor said, they get all the time.

" ... no general and definite way to successful earthquake prediction is clear."

— Ziro Suzuki, 1982

1981: Lima, Peru (Brady)

In 1976 Dr. Brian Brady, a physicist then at the U.S. Bureau of Mines, where he had studied how rocks fracture, "concluded a series of four articles on the theory of earthquakes with the deduction that strain building in the subduction zone might result in an earthquake of large magnitude within a period of seven to fourteen years from mid November 1974." In an internal memo written in June 1978 he narrowed the time window to "October to November, 1981", with a main shock in the range of 9.2±0.2. In a 1980 memo he was reported as specifying "mid-September 1980". This was discussed at a scientific seminar in San Juan, Argentina, in October 1980, where Brady's colleague, Dr. W. Spence, presented a paper. Brady and Spence then met with government officials from the U.S. and Peru on 29 October, and "forecast a series of large magnitude earthquakes in the second half of 1981." This prediction became widely known in Peru, following what the U.S. embassy described as "sensational first page headlines carried in most Lima dailies" on January 26, 1981.

On 27 January 1981, after reviewing the Brady-Spence prediction, the U.S. National Earthquake Prediction Evaluation Council (NEPEC) announced it was "unconvinced of the scientific validity" of the prediction, and had been "shown nothing in the observed seismicity data, or in the theory insofar as presented, that lends substance to the predicted times, locations, and magnitudes of the earthquakes." It went on to say that while there was a probability of major earthquakes at the predicted times, that probability was low, and recommended that "the prediction not be given serious consideration."

Unfazed, Brady subsequently revised his forecast, stating there would be at least three earthquakes on or about July 6, August 18 and September 24, 1981, leading one USGS official to complain: "If he is allowed to continue to play this game ... he will eventually get a hit and his theories will be considered valid by many."

On June 28 (the date most widely taken as the date of the first predicted earthquake), it was reported that "the population of Lima passed a quiet Sunday". The headline on one Peruvian newspaper: "NO PASO NADA" ("Nothing happened").

In July Brady formally withdrew his prediction on the grounds that prerequisite seismic activity had not occurred. Economic losses due to reduced tourism during this episode have been roughly estimated at one hundred million dollars.

"Recent advances ... make the routine prediction of earthquakes seem practicable."

— Stuart Crampin, 1987

1985–1993: Parkfield, USA (Bakun-Lindh)

The "Parkfield earthquake prediction experiment" was the most heralded scientific earthquake prediction ever. It was based on an observation that the Parkfield segment of the San Andreas Fault breaks regularly with a moderate earthquake of about M 6 every several decades: 1857, 1881, 1901, 1922, 1934, and 1966. More particularly, Bakun & Lindh (1985) pointed out that, if the 1934 quake is excluded, these occur every 22 years, ±4.3 years. Counting from 1966, they predicted a 95% chance that the next earthquake would hit around 1988, or 1993 at the latest. The National Earthquake Prediction Evaluation Council (NEPEC) evaluated this, and concurred. The U.S. Geological Survey and the State of California therefore established one of the "most sophisticated and densest nets of monitoring instruments in the world", in part to identify any precursors when the quake came. Confidence was high enough that detailed plans were made for alerting emergency authorities if there were signs an earthquake was imminent. In the words of the Economist: "never has an ambush been more carefully laid for such an event."

1993 came, and passed, without fulfillment. Eventually there was an M 6.0 earthquake, on 28 September 2004, but without forewarning or obvious precursors. While the experiment in catching an earthquake is considered by many scientists to have been successful, the prediction was unsuccessful in that the eventual event was a decade late.

Further information: Parkfield earthquake

1987–1995: Greece (VAN)

Professors P. Varotsos, K. Alexopoulos and K. Nomicos – "VAN" – claimed in a 1981 paper an ability to predict M ≥ 2.6 earthquakes within 80 km of their observatory (in Greece) approximately seven hours beforehand, by measurements of 'seismic electric signals'. In 1996 Varotsos and other colleagues claimed to have predicted impending earthquakes within windows of several weeks, 100–120 km, and ±0.7 of the magnitude.

The VAN predictions have been severely criticised on various grounds, including being geophysically implausible, "vague and ambiguous", failing to satisfy prediction criteria, and retroactive adjustment of parameters. A critical review of 14 cases where VAN claimed 10 successes showed only one case where an earthquake occurred within the prediction parameters, more likely a lucky coincidence than a "success". In the end the VAN predictions not only fail to do better than chance, but show "a much better association with the events which occurred before them."

1989: Loma Prieta, USA

On October 17, 1989, the Mw 6.9 (Ms 7.1) Loma Prieta ("World Series") earthquake (epicenter in the Santa Cruz Mountains northwest of San Juan Bautista, California) caused significant damage in the San Francisco Bay area of California. The U.S. Geological Survey (USGS) reportedly claimed, twelve hours after the event, that it had "forecast" this earthquake in a report the previous year. USGS staff subsequently claimed this quake had been "anticipated"; various other claims of prediction have also been made.

Harris (1998) reviewed 18 papers (with 26 forecasts) dating from 1910 "that variously offer or relate to scientific forecasts of the 1989 Loma Prieta earthquake." (Forecast is often limited to a probabilistic estimate of an earthquake happening over some time period, distinguished from a more specific prediction. However, in this case this distinction is not made.) None of these forecasts can be rigorously tested due to lack of specificity, and where a forecast does bracket time and location it is because of such a broad window (e.g., covering the greater part of California for five years) as to lose any value as a prediction. Predictions that came close (but given a probability of only 30%) had ten- or twenty-year windows.

Of the several prediction methods used perhaps the most debated was the M8 algorithm used by Keilis-Borok and associates in four forecasts. The first of these forecasts missed both magnitude (M 7.5) and time (a five-year window from Jan. 1, 1984, through Dec. 31, 1988). They did get the location, by including most of California and half of Nevada. A subsequent revision, presented to the NEPEC, extended the time window to July 1, 1992, and reduced the location to only central California; the magnitude remained the same. A figure they presented had two more revisions, for M ≥ 7.0 quakes in central California. The five-year time window for one ended in July 1989, and so missed the Loma Prieta event; the second revision extended to 1990, and so included Loma Prieta.

Harris describes two differing views about whether the Loma Prieta earthquake was predicted. One view argues it did not occur on the San Andreas fault (the focus of most of the forecasts), and involved dip-slip (vertical) movement rather than strike-slip (horizontal) movement, and so was not predicted. The other view argues that it did occur in the San Andreas fault zone, and released much of the strain accumulated since the 1906 San Francisco earthquake; therefore several of the forecasts were correct. Hough states that "most seismologists" do not believe this quake was predicted "per se". In a strict sense there were no predictions, only forecasts, which were only partially successful.

Iben Browning claimed to have predicted the Loma Prieta event, but (as will be seen in the next section) this claim has been rejected.

Further information: 1989 Loma Prieta earthquake

1990: New Madrid, USA (Browning)

Dr. Iben Browning (a scientist by virtue of a Ph.D. degree in zoology and training as a biophysicist, but no training or experience in geology, geophysics, or seismology) was an "independent business consultant" who forecast long-term climate trends for businesses, including publication of a newsletter. He seems to have been enamored of the idea (scientifically unproven) that volcanoes and earthquakes are more likely to be triggered when the tidal force of the sun and the moon coincide to exert maximum stress on the earth's crust. Having calculated when these tidal forces maximize, Browning then "projected" what areas he thought might be ripe for a large earthquake. An area he mentioned frequently was the New Madrid Seismic Zone at the southeast corner of the state of Missouri, the site of three very large earthquakes in 1811-1812, which he coupled with the date of December 3, 1990.

Browning's reputation and perceived credibility were boosted when he claimed in various promotional flyers and advertisements to have predicted (among various other events) the Loma Prieta earthquake of October 17, 1989. The National Earthquake Prediction Evaluation Council (NEPEC) eventually formed an Ad Hoc Working Group (AHWG) to evaluate Browning's prediction. Its report (issued October 18, 1990) specifically rejected the claim of a successful prediction of the Loma Prieta earthquake; examination of a transcript of his talk in San Francisco on October 10 showed he had said only: "there will probably be several earthquakes around the world, Richter 6+, and there may be a volcano or two" – which, on a global scale, is about average for a week – with no mention of any earthquake anywhere in California.

Though the AHWG report thoroughly demolished Browning's claims of prior success and the basis of his "projection", it made little impact, coming after a year of continued claims of a successful prediction, the endorsement and support of geophysicist David Stewart, and the tacit endorsement of many public authorities in their preparations for a major disaster, all of which was amplified by massive exposure in all major news media. The result was predictable. According to Tierney:

... no forecast associated with an earthquake (or for that matter with any other hazard in the U. S.) has ever generated the degree of concern and public involvement that was observed with the Browning prediction.

On December 3, despite tidal forces and the presence of some 30 TV and radio crews: nothing happened.

1998: Iceland (Crampin)

Crampin, Volti & Stefánsson (1999) claimed a successful prediction – what they called a stress forecast – of an M 5 earthquake in Iceland on 13 November 1998 through observations of what is called shear wave splitting. This claim has been disputed; a rigorous statistical analysis found that the result was as likely due to chance as not.

"The 2004 Parkfield earthquake, with its lack of obvious precursors, demonstrates that reliable short-term earthquake prediction still is not achievable."

Bakun et al. 2005

2004 & 2005: Southern California, USA (Keilis-Borok)

The M8 algorithm (developed under the leadership of Dr. Vladimir Keilis-Borok at UCLA) gained considerable respect by the apparently successful predictions of the 2003 San Simeon and Hokkaido earthquakes. Great interest was therefore generated by the announcement in early 2004 of a predicted M ≥ 6.4 earthquake to occur somewhere within an area of southern California of approximately 12,000 sq. miles, on or before 5 September 2004. In evaluating this prediction the California Earthquake Prediction Evaluation Council (CEPEC) noted that this method had not yet made enough predictions for statistical validation, and was sensitive to input assumptions. It therefore concluded that no "special public policy actions" were warranted, though it reminded all Californians "of the significant seismic hazards throughout the state." The predicted earthquake did not occur.

A very similar prediction was made for an earthquake on or before August 14, 2005, in approximately the same area of southern California. The CEPEC's evaluation and recommendation were essentially the same, this time noting that the previous prediction and two others had not been fulfilled. This prediction also failed.

"Despite over a century of scientific effort, the understanding of earthquake predictability remains immature."

— ICEF, 2011

2009: L'Aquila, Italy (Giuliani)

At 3:32 in the morning of 6 April 2009, the Abruzzo region of central Italy was rocked by a magnitude M 6.3 earthquake. In the city of L'Aquila and the surrounding area some sixty thousand buildings, including many homes, collapsed or were seriously damaged, resulting in 308 deaths and 67,500 people left homeless. Hard upon the news of the earthquake came news that one Giampaolo Giuliani had predicted this earthquake, had tried to warn the public, but had been muzzled by the Italian government.

Closer examination shows a more subtle story. Giampaolo Giuliani is a laboratory technician at the Laboratori Nazionali del Gran Sasso. As a hobby he has for some years been monitoring radon (a short-lived radioactive gas that has been implicated as an earthquake precursor), using instruments he has designed and built. Prior to the L'Aquila earthquake he was unknown to the scientific community, and had not published any kind of scientific work. Giuliani's rise to fame may be dated to when he was interviewed on March 24, 2009, by an Italian-language blog, Donne Democratiche, about a swarm of low-level earthquakes in the Abruzzo region that had started the previous December. He reportedly said that this swarm was normal, and would diminish by the end of March. On March 30 L'Aquila was struck by a magnitude 4.0 temblor, the largest to date.

One source says that on the 27th Giuliani warned the mayor of L'Aquila there could be an earthquake within 24 hours. As indeed there was – but none larger than about M 2.3.

On March 29 he made a second prediction. The details are hazy, but apparently he telephoned the mayor of the town of Sulmona, about 55 kilometers southeast of L'Aquila, to expect a "damaging" – or even "catastrophic" – earthquake within 6 to 24 hours. This is the incident in which loudspeaker vans warned the inhabitants of Sulmona (not L'Aquila) to evacuate, with consequent panic. Nothing ensued, except that Giuliani was cited for procurato allarme (inciting public alarm) and enjoined from making public predictions.

After the L'Aquila event Giuliani claimed that he had found alarming rises in radon levels just hours before. Although he reportedly claimed to have "phoned urgent warnings to relatives, friends and colleagues" on the evening before the earthquake hit, the International Commission on Earthquake Forecasting for Civil Protection, after interviewing Giuliani, found that there had been no valid prediction of the mainshock before its occurrence.

Further information: 2009 L'Aquila earthquake

Difficulty or impossibility

As the preceding examples show, the record of earthquake prediction has been disappointing. Even where earthquakes have unambiguously occurred within the parameters of a prediction, statistical analysis has generally shown these to be no better than lucky guesses. The optimism of the 1970s that routine prediction of earthquakes would come "soon", perhaps within ten years, was coming up disappointingly short by the 1990s, and many scientists began wondering why. By 1997 it was being positively stated that earthquakes cannot be predicted, which led to a notable debate in 1999 on whether prediction of individual earthquakes is a realistic scientific goal. For many the question is whether the prediction of individual earthquakes is merely hard, or intrinsically impossible.

Earthquake prediction may have failed only because it is "fiendishly difficult" and still beyond the current competency of science. Despite the confident announcement four decades ago that seismology was "on the verge" of making reliable predictions, there may yet be an underestimation of the difficulties. As early as 1978 it was reported that earthquake rupture might be complicated by "heterogeneous distribution of mechanical properties along the fault", and in 1986 that geometrical irregularities in the fault surface "appear to exert major controls on the starting and stopping of ruptures". Another study attributed significant differences in fault behavior to the maturity of the fault. These kinds of complexities are not reflected in current prediction methods.

Seismology may even yet lack an adequate grasp of its most central concept, elastic rebound theory. A simulation that explored assumptions regarding the distribution of slip found results "not in agreement with the classical view of the elastic rebound theory". (This was attributed to details of fault heterogeneity not accounted for in the theory.)
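For context, the classical elastic rebound picture reduces to a simple recurrence estimate: if tectonic loading accumulates slip deficit at a steady rate, and each characteristic earthquake releases a fixed amount of slip, the expected interval between earthquakes is just the ratio of the two. A minimal sketch, in Python, with illustrative numbers not drawn from any cited study:

    # Classical elastic rebound reduced to arithmetic. Both numbers are
    # assumptions for illustration, not figures from any study cited here.
    loading_rate_mm_per_yr = 25.0   # assumed steady accumulation of slip deficit
    slip_per_event_m       = 1.0    # assumed slip released per characteristic quake

    recurrence_yr = (slip_per_event_m * 1000.0) / loading_rate_mm_per_yr
    print(f"Expected recurrence interval: {recurrence_yr:.0f} years")  # 40 years

The simulation result mentioned above suggests that real faults violate precisely this tidy picture: if the slip released, and its distribution along the fault, vary from event to event, the ratio does not yield a reliable clock.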

Earthquake prediction may be intrinsically impossible. It has been argued that the Earth is in a state of self-organized criticality "where any small earthquake has some probability of cascading into a large event". It has also been argued on decision-theoretic grounds that "prediction of major earthquakes is, in any practical sense, impossible."
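The self-organized criticality argument is commonly illustrated with a toy model, the Abelian sandpile, in which identical small additions trigger cascades with no characteristic size, so the eventual extent of a cascade is not encoded in its trigger. The following minimal sketch, in Python, illustrates the SOC idea only, with no claim to represent fault mechanics:

    import random

    # Minimal Abelian sandpile: a cell holding 4 or more grains "topples",
    # sending one grain to each neighbor (grains fall off the edges).
    SIZE, TOPPLE_AT, DROPS = 20, 4, 20000
    grid = [[0] * SIZE for _ in range(SIZE)]

    def relax(r, c, sizes):
        """Topple until stable, counting topplings (the avalanche size)."""
        stack, size = [(r, c)], 0
        while stack:
            i, j = stack.pop()
            while grid[i][j] >= TOPPLE_AT:
                grid[i][j] -= TOPPLE_AT
                size += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < SIZE and 0 <= nj < SIZE:
                        grid[ni][nj] += 1
                        stack.append((ni, nj))
        sizes.append(size)

    random.seed(0)
    sizes = []
    for _ in range(DROPS):
        r, c = random.randrange(SIZE), random.randrange(SIZE)
        grid[r][c] += 1          # every perturbation is identical...
        relax(r, c, sizes)       # ...yet avalanche sizes vary enormously

    sizes.sort()
    print(f"median avalanche: {sizes[len(sizes) // 2]}, largest: {sizes[-1]}")

Once the pile reaches its critical state, most added grains do nothing while an occasional one cascades across much of the grid, with no observable difference in the trigger – the crux of the argument that individual large events may be unpredictable in principle.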

That earthquake prediction might be intrinsically impossible has been strongly disputed. But the best disproof of impossibility – effective earthquake prediction – has yet to be demonstrated.

"... predicting earthquakes is challenging and maybe possible in the future ..."

Amoruso & Crescentini 2012

See also

Notes

  1. Geller et al. 1997, p. 1616, following Allen (1976, p. 2070), who in turn followed Wood & Gutenberg (1935). In addition to specification of time, location, and magnitude, Allen suggested three other requirements: 4) indication of the author's confidence in the prediction, 5) the chance of an earthquake occurring anyway as a random event, and 6) publication in a form that gives failures the same visibility as successes.
  2. Kagan 1997b, p. 507.
  3. Kanamori 2003, p. 1205. See also ICEF 2011, p. 327. Not all scientists distinguish "prediction" and "forecast", but it is useful, and will be observed in this article.
  4. Thomas 1983.
  5. Atwood & Major 1998.
  6. Mabey 2001. Mabey cites 7,000, using some older data. The yearly distribution of earthquakes by magnitude, in both the United States and worldwide, can be found at the U.S. Geological Survey Earthquake Statistics page.
  7. E.g., the most famous claim of a successful prediction is that alleged for the 1975 Haicheng earthquake (ICEF 2011, p. 328), and is now listed as such in textbooks (Jackson 2004, p. 344). A later study concluded there was no valid short-term prediction (Wang et al. 2006), as described in more detail below.
  8. Geller 1996.
  9. Geller 1997, §2.3, p. 427; Console 2001, p. 261.
  10. Kagan 1997b; Geller 1997. See also Nature Debates.
  11. Quoted by Hough 2007, p. 253. Scholz 1997 quotes a variant: "Bah, no one but fools and charlatans try to predict earthquakes!"
  12. From USGS: Earthquake statistics and Earthquakes and Seismicity, following Noson, Qamar & Thorsen 1988, p. 11.
  13. The intensity (force) of shaking felt at a given location depends on the magnitude (energy released), distance from the hypocenter, orientation of the fault plane of the rupture, and local ground conditions.
  14. USGS: Earthquake Facts and Statistics
  15. USGS: Modified Mercalli Intensity Scale, level VI.
  16. Mabey 2001.
  17. See Jackson 1996a, p. 3772, for an example.
  18. Allen 1976, p. 2070; PEP 1976, p. 6.
  19. Geller 1997, §4.7, p. 437.
  20. Allen 1976, p. 2070.
  21. See Jolliffe & Stephenson 2003, §3.2.2, Nurmi 2003, §4.1, and Zechar 2008, Table 2.6, for details.
  22. This is a point which many scientific papers get wrong. See Barnes et al. 2009.
  23. These are the basic performance measures. Mason (2003) describes additional performance measures.
  24. Mason 2003, p. 48 and throughout.
  25. Mulargia & Gasperini 1992, p. 32; Luen & Stark 2008, p. 302.
  26. Luen & Stark 2008; Console 2001.
  27. Jackson 1996a, p. 3775.
  28. Jones 1985, p. 1669.
  29. Console 2001, p. 1261.
  30. Luen & Stark 2008. This was based on data from Southern California.
  31. Hough 2010b relates how several claims of successful predictions are statistically flawed. For a deeper view of the pitfalls of the null hypothesis see Stark 1997 and Luen & Stark 2008.
  32. Zechar et al. 2010.
  33. The manslaughter convictions against the seven scientists and technicians in Italy were not for failing to predict the L'Aquila earthquake (where some 300 people died), but for giving undue assurance to the populace – one victim called it "anaesthetizing" – that there would not be a serious earthquake, and therefore no need to take precautions. Hall 2011; Cartlidge 2011. Additional details in Cartlidge 2012.
  34. It has been reported that members of the Chinese Academy of Sciences were purged for "having ignored scientific predictions of the disastrous Tangshan earthquake of summer 1976." Wade 1977.
  35. In January 1999 there was a report (Saegusa 1999) that China was introducing "tough regulations intended to stamp out ‘false’ earthquake warnings, in order to prevent panic and mass evacuation of cities triggered by forecasts of major tremors." This was prompted by "more than 30 unofficial earthquake warnings ... in the past three years, none of which has been accurate."
  36. Geller 1997, §5.2, p. 437.
  37. The L'Aquila earthquake came after three months of tremors, but many devastating earthquakes hit with no warning at all.
  38. One study (Zechar 2008, p. 18, table 2.5) calculated (for the method and data studied) a result distribution of 2 correct predictions, 2 misses, and 19 false alarms. The report from China (Saegusa 1999) mentioned "more than 30" predictions, all of which were false alarms.
  39. Details at Southern SAF Working Group 1991, pp. 1–2. See Jordan & Jones 2010 for examples.
  40. Kagan 1999, p. 234, and quoting Ben-Menahem (1995) on p. 235; ICEF 2011, p. 360.
  41. PEP 1976, p. 9.
  42. Evison 1999, p. 769.
  43. ICEF 2011, p. 338.
  44. The IASPEI Sub-Commission for Earthquake Prediction defined a precursor as "a quantitatively measurable change in an environmental parameter that occurs before mainshocks, and that is thought to be linked to the preparation process for this mainshock." Console 2001, §2.
  45. Geller 1997, p. 429, §3.
  46. E.g., Claudius Aelianus, in De natura animalium, book 11, commenting on the destruction of Helike in 373 BC, but writing five centuries later.
  47. Rikitake 1979, p. 294. Cicerone, Ebel & Britton 2009 has a more recent compilation.
  48. Jackson 2004, p. 335.
  49. Geller (1997, p. 425): "Extensive searches have failed to find reliable precursors." Jackson (2004, p. 348): "The search for precursors has a checkered history, with no convincing successes." Zechar & Jordan (2008, p. 723): "The consistent failure to find reliable earthquake precursors...". ICEF (2009): "... no convincing evidence of diagnostic precursors."
  50. Wyss & Booth 1997, p. 424.
  51. ICEF 2011, p. 338.
  52. ICEF 2011, p. 361.
  53. From De natura animalium, book 11, quoted by Roger Pearse at A myth-take about Helice, the earthquake, and Diodorus Siculus. See also http://www.helike.org/.
  54. As an illustration of how myths develop: the destruction of Helike is thought by some to be the origin of the story of Atlantis.
  55. Lighton & Duncan 2005.
  56. Lindberg, Skiles & Hayden 1981.
  57. ABC News reported: "Sixth Sense? Zoo Animals Sensed Quake Early". Miller, Patrick & Capatides 2011
  58. According to a press release from the Zoo (National Zoo Animals React to the Earthquake, August 23, 2011; see also Miller, Patrick & Capatides 2011) most of the activity was co-seismic. It was also reported that the red lemurs "called out 15 minutes before the quake", which would be well before the arrival of the P waves. Lacking any other details it is impossible to say whether the lemur activity was in any way connected with the quake, or was merely a chance activity that was given significance for happening just before the quake, a failing typical of such reports.
  59. Geller (1997, p. 432) calls such reports "doubly dubious": they fail to distinguish precursory behavior from ordinary behavior, and also depend on human observers who have just undergone a traumatic experience.
  60. Otis & Kautz 1979.
  61. ICEF 2011, p. 336.
  62. Hammond 1973. Additional references in Geller 1997, §2.4.
  63. Scholz, Sykes & Aggarwal 1973; Smith 1975a.
  64. Aggarwal et al. 1975.
  65. Hough 2010b, p. 110.
  66. Allen 1983, p. 79; Whitcomb 1977.
  67. McEvilly & Johnson 1974.
  68. Lindh, Lockner & Lee 1978.
  69. ICEF 2011, p. 333. For a fuller account of radon as an earthquake precursor see Immè & Morelli 2012.
  70. Giampaolo Giuliani's claimed prediction of the L'Aquila earthquake was based on monitoring of radon levels.
  71. Cicerone, Ebel & Britton 2009, p. 382.
  72. ICEF 2011, p. 334. See also Hough 2010b, pp. 93–95.
  73. Immè & Morelli 2012, p. 158.
  74. Park 1996.
  75. Varotsos, Alexopoulos & Nomicos 1981, described by Mulargia & Gasperini 1992, p. 32, and Kagan 1997b, §3.3.1, p. 512.
  76. Varotsos et al. 1986.
  77. Varotsos et al. 1986; Varotsos & Lazaridou 1991.
  78. Bernard 1992; Bernard & LeMouel 1996.
  79. Mulargia & Gasperini 1992; Mulargia & Gasperini 1996; Wyss 1996; Kagan 1997b.
  80. Varotsos & Lazaridou 1991.
  81. Wyss & Allmann 1996.
  82. Geller 1996.
  83. See the table of contents.
  84. Varotsos, Sarlis & Skordas 2011.
  85. Shen et al. 2011.
  86. Reid 1910, p. 22; ICEF 2011, p. 329.
  87. Wells & Coppersmith 1994, Fig. 11, p. 993.
  88. Zoback 2006 provides a clear explanation. Evans 1997, §2.2 also provides a description of the "self-organized criticality" (SOC) paradigm that is displacing the elastic rebound model.
  89. These include the type of rock and fault geometry.
  90. Schwartz & Coppersmith 1984; Tiampo & Shcherbakov 2012, p. 93, §2.2.
  91. UCERF 2008.
  92. Bakun & Lindh 1985, p. 619. Of course these were not the only earthquakes in this period. The attentive reader will recall that, in seismically active areas, earthquakes of some magnitude happen fairly constantly. The "Parkfield earthquakes" are either the ones noted in the historical record, or were selected from the instrumental record on the basis of location and magnitude. Jackson & Kagan (2006, p. S399) and Kagan (1997a, pp. 211–212, 213) argue that the selection parameters can bias the statistics, and that sequences of four or six quakes, with different recurrence intervals, are also plausible.
  93. Bakun & Lindh 1985, p. 621.
  94. Jackson & Kagan 2006, p. S408 say the claim of quasi-periodicity is "baseless".
  95. Jackson & Kagan 2006.
  96. Kagan & Jackson 1991, p. 21,420; Stein, Friedrich & Newman 2005; Jackson & Kagan 2006; Tiampo & Shcherbakov 2012, §2.2, and references there; Kagan, Jackson & Geller 2012. See also the Nature debates.
  97. Young faults are expected to have complex, irregular surfaces, which impedes slippage. In time these rough spots are ground off, changing the mechanical characteristics of the fault. Cowan, Nicol & Tonkin 1996; Stein & Newman 2004, p. 185.
  98. Stein & Newman 2004.
  99. Scholz 2002, p. 284, §5.3.3; Kagan & Jackson 1991, p. 21,419; Jackson & Kagan 2006, p. S404.
  100. Kagan & Jackson 1991, p. 21,419; McCann et al. 1979; Rong, Jackson & Kagan 2003.
  101. Jackson & Kagan 2006, p. S404.
  102. Lomnitz & Nava 1983.
  103. Rong, Jackson & Kagan 2003, p. 23.
  104. Kagan & Jackson 1991, Summary.
  105. See details in Tiampo & Shcherbakov 2012, §2.4.
  106. CEPEC 2004a. The CEPEC said these two quakes were properly predicted, but this is questionable, as there is no documentation of pre-event publication. The point is important because M8 tends to generate many alarms, and without irrevocable pre-event publication of all alarms it is difficult to determine if there is a bias towards publishing only the successful results. Lack of reliable documentation regarding these predictions is also why they are not included in the list below.
  107. Hough 2010b, pp. 142–149.
  108. Zechar 2008; Hough 2010b, p. 145.
  109. Zechar 2008, p. 7. See also p. 26.
  110. Tiampo & Shcherbakov 2012, §2.1. Hough 2010b, chapter 12, provides a good description.
  111. Hardebeck, Felzer & Michael 2008, par. 6.
  112. Hough 2010b, pp. 154–155.
  113. Tiampo & Shcherbakov 2012, §2.1, p. 93.
  114. Hardebeck, Felzer & Michael (2008, §4) show how suitable selection of parameters shows DMR: Decelerating Moment Release.
  115. Hardebeck, Felzer & Michael 2008, par. 1, 73.
  116. Mignan 2011, Abstract.
  117. Aggarwal et al. 1975, p. 718. See also Smith 1975 and Scholz, Sykes & Aggarwal 1973.
  118. Aggarwal et al. 1975, p. 719. The statement is ambiguous as to whether this was the first such success by any method, or the first by the method they used.
  119. Hough 2010b, p. 110.
  120. Suzuki (1982, p. 244) cites several studies that did not find the phenomena reported by Aggarwal et al. See also Turcotte (1991), who says (p. 266): "The general consensus today is that the early observations were an optimistic interpretation of a noisy signal."
  121. Hammond 1975, p. 419.
  122. Predictions need to be declared as such beforehand so they cannot be adjusted retrospectively.
  123. Hamilton 1976, p. 7.
  124. USGS Circular 729 1976.
  125. E.g.: Davies 1975; Whitham et al. 1976, p. 265; Hammond 1976; Ward 1978; Kerr 1979, p. 543; Allen 1982, p. S332; Rikitake 1982; Zoback 1983; Ludwin 2001; Jackson 2004, p. 335; ICEF 2011, pp. 328, 351.
  126. Jackson 2004, p. 344.
  127. Whitham et al. 1976, p. 266 provide a brief report. The report of the Haicheng Earthquake Study Delegation (Anonymous 1977) has a fuller account. Wang et al. (2006, p. 779), after careful examination of the records, set the death toll at 2,041.
  128. Raleigh et al. (1977), quoted in Geller 1997, p. 434. Geller has a whole section (§4.1) of discussion and many sources. See also Kanamori 2003, pp. 1210–11.
  129. Quoted in Geller 1997, p. 434. Lomnitz (1994, Ch. 2) describes some of the circumstances attending the practice of seismology at that time; Turner 1993, pp. 456–458 has additional observations.
  130. Measurement of an uplift has been claimed, but that was 185 km away, and likely surveyed by inexperienced amateurs. Jackson 2004, p. 345.
  131. Kanamori 2003, p. 1211. According to Wang et al. 2006 foreshocks were widely understood to precede a large earthquake, "which may explain why various [local authorities] made their own evacuation decisions" (p. 762).
  132. Wang et al. 2006.
  133. Wang et al. 2006, p. 785.
  134. "... in well instrumented areas." PEP 1976, p. 2. Further on (p. 31) the Panel states: "A program for routine announcement of reliable predictions may be 10 or more years away, although there will be, of course, many announcements of predictions (as, indeed, there already have been) long before such a systematic program is set up." According to Allen (1982, p. S331) "a certain euphoria of imminent victory pervaded the earthquake-prediction community...." See Geller 1997 §2.3 for additional quotes.
  135. "Time-dependent Vp and Vp/Vs in an area of the Transverse Ranges of southern California", presented to the American Geophysical Union annual meeting. Allen 1983.
  136. Shapley 1976.
  137. Kerr 1981c.
  138. Allen 1983, p. 78.
  139. Whitcomb 1976; Allen 1983, p. 79.
  140. Stevenson, Talwani & Amick 1976.
  141. Talwani 1981, pp. 385, 386. These results are somewhat problematic, as in the study ten Vs/Vp anomalies and ten M ≥ 2.0 events occurred, but only six coincided (Talwani 1981, pp. 381, 386, and see table), and there is no explanation of why only two were selected as predictions.
  142. Kerr 1978.
  143. Geller 1997, §4.3, p. 435.
  144. Geller 1991.
  145. Ohtake, Matumoto & Latham 1981, p. 53.
  146. Ohtake, Matumoto & Latham 1977, p. 375. See also pp. 381–383.
  147. Whiteside & Haberman 1989, quoted in Geller 1997, §4.2. Lomnitz (1994, p. 122) says: "We had neglected to report our data in time for inclusion".
  148. The UT administrative spokesman reportedly suggested an event on the order of the M 6.2 1972 Managua earthquake, in which over 5,000 people died. Lomnitz 1983, p. 30.
  149. Lomnitz (1983), McNally (1979, p. 30), and Lomnitz (1994, pp. 122–127) provide details.
  150. Suzuki 1982, p. 235.
  151. Roberts 1983, §4, p. 151.
  152. Hough 2010, p. 114.
  153. Gersony 1982b, p. 231.
  154. Roberts 1983 §4, p. 151.
  155. Gersony 1982b, document 85, p. 247.
  156. Quoted by Roberts 1983, p. 151. Copy of statement in Gersony 1982b, document 86, p. 248.
  157. The chairman of the NEPEC later complained to the Agency for International Development that one of its staff members had been instrumental in encouraging Brady and promulgating his prediction long after it had been scientifically discredited. See Gersony (1982b), document 146 (p. 201) and following.
  158. Gersony 1982b, document 116, p. 343; Roberts 1983, p. 152.
  159. John Filson, deputy chief of the USGS Office of Earthquake Studies, quoted by Hough 2010, p. 116.
  160. Gersony 1982b, document 147, p. 422, U.S. State Dept. cablegram.
  161. Hough 2010, p. 117.
  162. Gersony 1982b, p. 416; Kerr 1981.
  163. Giesecke 1983, p. 68.
  164. Crampin 1987. The "recent advances" to which Crampin refers are his work with shear wave splitting (SWS).
  165. Geller (1997, §6) describes some of the coverage. The most anticipated prediction ever is likely Iben Browning's 1990 New Madrid prediction (discussed below), but it lacked any scientific basis.
  166. Near the small town of Parkfield, California, roughly half-way between San Francisco and Los Angeles.
  167. Bakun & McEvilly 1979; Bakun & Lindh 1985; Kerr 1984.
  168. Bakun et al. 1987.
  169. Kerr 1984, "How to Catch an Earthquake". See also Roeloffs & Langbein 1994.
  170. Roeloffs & Langbein 1994, p. 316.
  171. Quoted by Geller 1997, p. 440.
  172. Kerr 2004; Bakun et al. 2005, Harris & Arrowsmith 2006, p. S5.
  173. Hough 2010b, p. 52.
  174. It has also been argued that the actual quake differed from the kind expected (Jackson & Kagan 2006), and that the prediction was no more significant than a simpler null hypothesis (Kagan 1997a).
  175. Varotsos, Alexopoulos & Nomicos 1981, described by Kagan 1997b, §3.3.1, p. 512, and Mulargia & Gasperini 1992, p. 32.
  176. Jackson 1996b, p. 1365; Mulargia & Gasperini 1996, p. 1324.
  177. Geller 1997, §4.5, p. 436: "VAN’s ‘predictions’ never specify the windows, and never state an unambiguous expiration date. Thus VAN are not making earthquake predictions in the first place."
  178. Jackson 1996b, p. 1363. Also: Rhoades & Evison (1996), p. 1373: No one "can confidently state, except in the most general terms, what the VAN hypothesis is, because the authors of it have nowhere presented a thorough formulation of it."
  179. Kagan & Jackson 1996, p. 1434.
  180. Geller 1997, Table 1, p. 436.
  181. Mulargia & Gasperini 1992, p. 37. They continue: "In particular, there is little doubt that the occurrence of a ‘large event’ (Ms ≥ 5.8) has been followed by a VAN prediction with essentially identical epicentre and magnitude with a probability too large to be ascribed to chance."
  182. Ms is the surface wave magnitude, a measure of an earthquake's size based on the amplitude of the surface waves it generates.
  183. Harris 1998, p. B18.
  184. Garwin 1989.
  185. USGS staff 1990, p. 247.
  186. Kerr 1989; Harris 1998.
  187. E.g., ICEF 2011, p. 327.
  188. Harris 1998, p. B22.
  189. Harris 1998, Table 1, p. B5.
  190. Harris 1998, pp. B10–B11.
  191. Harris 1998, p. B10, and figure 4, p. B12.
  192. Harris 1998, p. B11, figure 5.
  193. Geller (1997, §4.4) cites several authors to say "it seems unreasonable to cite the 1989 Loma Prieta earthquake as having fulfilled forecasts of a right-lateral strike-slip earthquake on the San Andreas Fault."
  194. Harris 1998, pp. B21–B22.
  195. Hough 2010b, p. 143.
  196. Spence et al. 1993 (USGS Circular 1083) is the most comprehensive and thorough study of the Browning prediction, and appears to be the main source of most other reports.
  197. A report on Browning's prediction cited over a dozen studies of possible tidal triggering of earthquakes, but concluded that "conclusive evidence of such a correlation has not been found". AHWG 1990, p. 10. It also found Browning's identification of a particular high tide as triggering a particular earthquake "difficult to justify".
  198. According to a note in Spence et al. (p. 4): "Browning preferred the term projection, which he defined as determining the time of a future event based on calculation. He considered 'prediction' to be akin to tea-leaf reading or other forms of psychic foretelling." See also Browning's own comment on p. 36.
  199. Including "a 50/50 probability that the federal government of the U.S. will fall in 1992." Spence et al. 1993, p. 39.
  200. Spence et al. 1993, pp. 9–11, and see various documents in Appendix A, including The Browning Newsletter for November 21, 1989 (p. 26).
  201. AHWG 1990, p. iii. Included in Spence et al. 1993 as part of Appendix B, pp. 45–66.
  202. AHWG 1990, p. 30.
  203. Previously involved in a psychic prediction of an earthquake for North Carolina in 1975 (Spence et al. 1993, p. 13), Stewart sent a 13-page memo to a number of colleagues extolling Browning's supposed accomplishments, including predicting Loma Prieta. Spence et al. 1993, p. 29.
  204. See Spence et al. 1993 throughout.
  205. Tierney 1993, p. 11.
  206. A subsequent brochure for a Browning video tape stated: "the media got it wrong." Spence et al. 1993, p. 40. Browning died of a heart attack seven months later (p. 4).
  207. Jordan & Jones 2011.
  208. Seher & Main 2004.
  209. CEPEC 2004a; Hough 2010b, pp. 145–146.
  210. CEPEC 2004a.
  211. CEPEC 2004a.
  212. CEPEC 2004b.
  213. ICEF 2011, p. 360.
  214. ICEF 2011, p. 320.
  215. Alexander 2010, p. 326.
  216. The Telegraph, 6 April 2009. See also McIntyre 2009.
  217. Hall 2011, p. 267.
  218. Kerr 2009.
  219. The Guardian, 5 April 2010.
  220. The ICEF (2011, p. 323) alludes to predictions made on February 17 and March 10.
  221. Kerr 2009; Hall 2011, p. 267; Alexander 2010, p. 330.
  222. Kerr 2009; The Telegraph, 6 April 2009.
  223. The Guardian, 5 April 2010; Kerr 2009.
  224. ICEF 2011, p. 323, and see also p. 335.
  225. Geller 1997 found "no obvious successes".
  226. PEP 1976, p. 2.
  227. Kagan (1997b, p. 505) said: "The results of efforts to develop earthquake prediction methods over the last 30 years have been disappointing: after many monographs and conferences and thousands of papers we are no closer to a working forecast than we were in the 1960s".
  228. Geller et al. 1997.
  229. Main 1999, "Nature debates".
  230. Geller et al. 1997, p. 1617.
  231. Scholz, Sykes & Aggarwal 1973.
  232. Kanamori & Stewart 1978, abstract.
  233. Sibson 1986.
  234. Cowan, Nicol & Tonkin 1996. More mature faults presumably slip more readily because they have been ground smoother and flatter.
  235. Schwartz & Coppersmith (1984, pp. 5696–7) argued that the characteristics of fault rupture on a given fault "can be considered essentially constant through several seismic cycles". The expectation of a regular rate of recurrence implied by this was rather disappointed by the lateness of the Parkfield earthquake.
  236. Ziv, Cochard & Schmittbuhl 2007.
  237. Geller et al. 1997, p. 1616; Kagan 1997b, p. 517. See also Kagan 1997b, p. 520, Vidale 1996 and especially Geller 1997, §9.1, "Chaos, SOC, and predictability".
  238. Matthews 1997.
  239. E.g., Sykes, Shaw & Scholz 1999 and Evison 1999.
  240. "Despite over a century of scientific effort, the understanding of earthquake predictability remains immature. This lack of understanding is reflected in the inability to predict large earthquakes in the deterministic short-term sense." ICEF 2011, p. 360.

References

External links
