Here’s an interesting discussion between Nassim Taleb and Andrew Gelman on Taleb’s The Black Swan. First, Gelman’s criticisms:
The book is about unexpected events (“black swans”) and the problems with statistical models such as the normal distribution that don’t allow for these rarities. From a statistical point of view, let me say that multilevel models (often built from Gaussian components) can model various kinds of black-swan behavior. In particular, self-similar models can be constructed by combining scaled pieces (such as wavelets or image components) and then assigning a probability distribution over the scalings, sort of like what is done in classical spectrum analysis of 1/f noise in time series. For some interesting discussion in the context of “texture models” for images, see the chapter by Yingnian Wu in my book with Xiao-Li on applied Bayesian modeling and causal inference. (Actually, I recommend this book more generally; it has lots of great chapters in it.)
That said, I admit that my two books on statistical methods are almost entirely devoted to modeling “white swans.” My only defense here is that Bayesian methods allow us to fully explore the implications of a model, the better to improve it when we find discrepancies with data. Just as a chicken is an egg’s way of making another egg, Bayesian inference is just a theory’s way of uncovering problems that can lead to a better theory.
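To make the scale-mixture point above concrete, here is a minimal sketch (mine, in Python, not from Gelman or Taleb) of the simplest case: Gaussian components combined with a probability distribution over their scales give a Student-t, whose extreme quantiles dwarf those of any single Gaussian component.

```python
# Minimal sketch (not from either author): a scale mixture of Gaussians
# produces heavy tails. Drawing the variance from an inverse-gamma
# distribution and then drawing a normal with that variance yields a
# Student-t, whose extreme quantiles are far wider than a single Gaussian's.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
nu = 3.0  # degrees of freedom; smaller nu -> heavier tails

# Step 1: a probability distribution over the scalings.
variances = nu / rng.chisquare(nu, size=n)          # inverse-gamma scales

# Step 2: Gaussian components, each with its own scale.
mixture = rng.normal(0.0, np.sqrt(variances))        # marginally Student-t(nu)
gaussian = rng.normal(0.0, 1.0, size=n)              # one fixed-scale Gaussian

for q in (0.999, 0.9999, 0.99999):
    print(f"{q:.5f} quantile: mixture {np.quantile(mixture, q):7.1f}   "
          f"single Gaussian {np.quantile(gaussian, q):5.2f}")
```

The same device applied across many scales at once, rather than a single scale parameter, is what underlies the self-similar and 1/f-type constructions mentioned above.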
Next, Taleb responds to Gelman:
[Gelman] On pages 127-128, Taleb discusses the distinction between uncertainty and randomness (in my terminology, the boxer, the wrestler, and the coin flip). I’d only point out that coins and dice, while maybe not realistic representations of many sources of real-world uncertainty, do provide useful calibration. Similarly, actual objects rarely resemble “the meter” (that famous metal bar that sits, or used to sit, in Paris), but it’s helpful to have an agreed-upon length scale. We have some examples in Chapter 1 of Bayesian Data Analysis of assigning probabilities empirically (for football scores and record linkage).
N[assim] – The way I see it is that a framework that accepts no metaprobabilities is defective, period. This is where a Bayesian like you will never accept the “Knightian / non-Knightian” distinction. The only one I accept is qualitative: ludic/non-ludic.
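One way to read Taleb’s “metaprobability” in the Bayesian vocabulary Gelman uses is as a distribution over the probability itself. The sketch below (again mine, not part of the exchange) compares a coin with a fixed probability of heads to one whose probability is itself uncertain, with the same mean; the second predictive distribution puts far more mass on extreme outcomes.

```python
# Minimal sketch (not from the discussion): "metaprobability" read as a
# distribution over the probability itself. A coin with fixed p = 0.5 versus
# a coin whose p is drawn from Beta(2, 2) (same mean 0.5): the resulting
# beta-binomial predictive puts much more mass on extreme head counts.
import numpy as np

rng = np.random.default_rng(0)
trials, flips = 200_000, 100

fixed_p = rng.binomial(flips, 0.5, size=trials)      # p known exactly

p_uncertain = rng.beta(2.0, 2.0, size=trials)        # metaprobability over p
meta_p = rng.binomial(flips, p_uncertain)            # beta-binomial predictive

for k in (70, 80, 90):
    print(f"P(heads >= {k}): fixed p {np.mean(fixed_p >= k):.5f}   "
          f"uncertain p {np.mean(meta_p >= k):.5f}")
```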