There have been a lot of interesting papers about the "manslaughter trial" of six seismologists and a government official in Italy, where the court ruled that there had been a failure to warn the population before the deadly 2009 earthquake; see e.g. "Trial Over Earthquake in Italy Puts Focus on Probability and Panic", "Italian scientists convicted of manslaughter for earthquake risk report", "Italian court ruling sends chill through science community", "Scientists on trial: At fault?" or (probably the most interesting one) "The Verdict of the l’Aquila Earthquake Trial Sends the Wrong Message".

First of all, I started working on earthquake series and models less than 15 months ago, and I am still working on the second paper; so far, what I have seen is that those series are very noisy. Even on a large spatial scale (say ±500 km), it is very difficult to estimate the probability that there will be a large earthquake over a long period of time (from one year to a decade), even when including covariates such as foreshocks. So I can imagine that it is almost impossible to predict anything accurate on a smaller scale, and over a short time frame. A second point is that I did not have time to look carefully at what was said during the trial: I have only been through what can be found in the articles mentioned above.

But as a statistician, I really believe, as claimed by Niels Bohr (among many others), that "prediction is very difficult, especially about the future". Especially with a 0/1 model (warning versus no warning). In that case, you face the usual type I and type II errors (see e.g. for more details),

  • a type I error is a "false positive": you issue a warning, for nothing. A "false alarm" error. In standard testing terms, it is like a pregnancy test predicting that someone is pregnant when she is not.
  • a type II error is a "false negative": you fail to detect something that is actually there. Here, it is like a pregnancy test predicting that someone is not pregnant, while she actually is.

The main problem is that statisticians would like to design a test with both errors as small as possible. But usually, you can't: you have to make a trade-off. The more you want to protect yourself against type I errors (by choosing a low significance level), the greater the chance of a type II error. This is actually the most important message in all Statistics 101 courses.
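That trade-off is easy to see on the simplest possible test. As a minimal sketch (not related to any earthquake model, just the textbook setting), assume a one-sided z-test of H0: μ = 0 against a true mean shift, with unit variance; the effect size and sample size below are purely illustrative,

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def type_ii_error(alpha, effect, n):
    """Type II error (beta) of a one-sided z-test of H0: mu = 0,
    when the true mean is `effect`, with unit variance and n observations."""
    z_alpha = N.inv_cdf(1 - alpha)             # critical value of the test
    return N.cdf(z_alpha - effect * n ** 0.5)  # P(fail to reject | H1 is true)

# Lowering the significance level (fewer false alarms)...
for alpha in (0.10, 0.05, 0.01):
    print(f"alpha = {alpha:.2f} -> beta = {type_ii_error(alpha, 0.5, 10):.3f}")
# ...mechanically raises the probability of missing a real effect.
```

With the effect and sample size fixed, beta increases as alpha decreases: protecting yourself against false alarms makes you miss real signals more often. There is no setting of alpha that shrinks both errors at once.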

Another illustration comes from the course I am currently teaching this semester, precisely on prediction and forecasting techniques. Consider e.g. the following series

Here, we wish to make a forecast on this time series (involving a confidence interval, or region). Something like

The aim of the course is to be able to build that kind of graph, to analyze it, and to know exactly what assumptions were used to derive those confidence bands. But if you might go to jail for missing something, you can still make the following forecast
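The joke above can be made concrete. As a toy sketch (a simulated random walk standing in for the series from the course, with the naive random-walk forecasting rule, not the methods actually taught), compare an honest 95% band with a "never go to jail" band,

```python
import random
from statistics import NormalDist, stdev

random.seed(1)
# A toy random walk standing in for the observed series
steps = [random.gauss(0, 1) for _ in range(200)]
series = [sum(steps[:i + 1]) for i in range(len(steps))]

def forecast_band(series, horizon, level):
    """Naive random-walk forecast: the point forecast is the last value,
    and the band grows like sqrt(h) with the estimated step volatility."""
    sigma = stdev(b - a for a, b in zip(series, series[1:]))
    z = NormalDist().inv_cdf(0.5 + level / 2)
    last = series[-1]
    return [(last - z * sigma * h ** 0.5, last + z * sigma * h ** 0.5)
            for h in range(1, horizon + 1)]

band95 = forecast_band(series, 20, 0.95)    # an honest, analyzable band
band999 = forecast_band(series, 20, 0.999)  # the "never wrong" band
```

Cranking the confidence level up makes the band so wide that it almost never misses the realized value, so you almost never commit a type II error; the price is a band that warns about everything and informs about nothing, which is exactly the type I side of the trade-off.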

From this trial, we know that researchers can go to jail for making a type II error. So, if you do not want to go to jail, make frequent type I errors (given this necessary trade-off): so far, you're far less likely to go to jail for that kind of error (the boy-who-cried-wolf kind). Then you may be a shyster, a charlatan, but you shouldn't spend six years in jail! As mentioned on Twitter, that might be a reason why economists keep announcing crises! That might actually be a coherent strategy...