Black Swan theory isn't about denying science; it's about recognizing the limits of science.
Taleb's position (and argument in this paper) is that we need to recognize instances where we don't know the answer instead of making decisions based on assumptions that we do.
You can't discount the apocalypse. You just can't. And this pan-phobic concept "black swan" is just various degrees of unknowable apocalypses that we are supposed to account for based upon their very unaccountableness. I don't buy it. Read the tea leaves first, then tell me about swan feathers.
The housing bubble was BLATANT. No black swans involved. I remember talking about it with friends in 2005. No big deal, really.
"The housing bubble was BLATANT. No black swans involved."
Yes, but what was not blatant was that the housing bubble popping would kill Lehman Brothers, cause runs on banks and almost bring down the whole world economy.
Taleb's point is that you can't go around behaving like any system is, to borrow a phrase, "too big to fail" - or rather, too cleverly thought out and regulated to fail. To some extent this is simple engineering. I can test the resilience of my cluster to network failure by pulling out an ethernet cable and seeing what happens - maybe there's a small hiccup while systems fall back onto local resources, maybe nothing fails over properly and I'm hosed for 12 hours. As I understand Taleb, his argument is that the economic powers (bankers, traders, politicians) spend too much time trying to figure out how to prevent network failure, rather than making sure that when the network does go out, it doesn't take the whole mission down with it. Because, no matter how well you think you have engineered the system, the chance of your network going out somehow, sometime, is non-zero.
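The pull-the-cable test above can be sketched in a few lines. This is a toy (all names here are hypothetical) illustrating the design goal: graceful degradation when the network fails, rather than trying to prevent the failure itself.

```python
class Cluster:
    """Toy cluster whose fetch() degrades to a local cache when the network is down."""

    def __init__(self):
        self.network_up = True
        self.local_cache = {"config": "last-known-good"}

    def fetch(self, key):
        if self.network_up:
            # Pretend this value came over the wire.
            return "fresh-" + key
        # Network outage: fall back to last-known-good local state
        # instead of failing the whole mission.
        return self.local_cache.get(key, "unavailable")


cluster = Cluster()
assert cluster.fetch("config") == "fresh-config"

cluster.network_up = False  # "pull the ethernet cable"
assert cluster.fetch("config") == "last-known-good"
assert cluster.fetch("other") == "unavailable"
```

The point of running the test is not that the fallback is clever - it's that you find out *before* the outage whether the hiccup is small or whether you're hosed for 12 hours.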
I do agree with the posters that Taleb overstates the cleverness of his own insight; but unfortunately, in his field, it seems that people really were too dumb/greedy to grasp this. Irrespective of what you think of his interpretation of events or the validity of his suggestions (and I too question them), I think as systems engineers we can appreciate the risk of designing political and economic systems under the assumption that they can't fail.
The problem isn't that people were too dumb or greedy to notice; it's that they weren't punished for not noticing. You'd be amazed what a little incentive does to people's willingness to self-educate.
If it really was that blatant, why did it happen? Doesn't the fact that it burst by definition mean that most people did not recognize it?
But Taleb makes a different point: Discounting the apocalypse is exactly what he's arguing against, because this is essentially what LTCM et al were doing with their models. They calculated probabilities to make sure that the estimated payoff was greater than the estimated loss, but they got the distributions wrong. What you should try to do is make sure that the apocalypse can't happen, that you aren't exposed so that one outlier will take you down. Because due to the nature of the system (there is only a sample of one of the world economy) you simply can't gather data that can pin down the wings of the distributions to the precision you need.
According to the models used at the time, Black Monday 1987 was something like a 30-sigma event. If the distributions truly were Gaussian, such an event should never happen in the history of the Universe. So you have a choice between thinking that we just had incredibly bad luck, or that the model is wrong. I'd go with the latter.
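A quick back-of-the-envelope check of that claim. The 30-sigma figure is taken from the comment above; the rest is standard normal-tail arithmetic, nothing more.

```python
import math


def gaussian_tail(k: float) -> float:
    """P(X > k*sigma) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))


# Probability of a single 30-sigma down day under a Gaussian model.
p = gaussian_tail(30)

# Roughly 13.8 billion years of daily observations since the Big Bang:
days = 13.8e9 * 365.25
expected_events = p * days

# p is on the order of 1e-197, so expected_events is still effectively zero -
# under a Gaussian, such a day really "should never happen in the history
# of the Universe". Observing one is strong evidence against the model.
```

Which is exactly the choice posed above: either the universe got astronomically unlucky, or the tails aren't Gaussian.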
There are 127 million housing units in the U.S. At the peak of the bubble, 7 million houses/year were sold. A total of roughly 30 million houses were sold in the boom years of 2002-2007, assuming there's no double-counting (people who sold a house multiple times), which is probably a pessimistic assumption. That means that over half of U.S. households took absolutely no part in the bubble - they just sat tight.
Now look at the inventory numbers from the last two years. They go up rather dramatically. This means there were more sellers than buyers, i.e. a large number of people realized that homes were selling for more than they thought they were worth, and quickly put their houses on the market to take advantage. Sure looks like people recognized the bubble to me; it's just that by definition they can't all get out before it bursts.
My interpretation is that most people did recognize the bubble - it's just that they're not the ones that the New York Times writes stories about. They held onto their home and neither bought nor sold, instead just going about their business. Or they sold at the peak, rented for a bit, and are now buying back in at half price and pocketing a few hundred grand. Smart people tend not to brag about how they just made a killing off other people's stupidity. It prevents people from being stupid again, which limits the opportunities for future killings. Besides, it tends to kill the mood at parties.
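The housing arithmetic above is easy to sanity-check. The figures are copied from the comment, not independently sourced:

```python
housing_units = 127e6     # total U.S. housing units (per the comment)
sold_2002_2007 = 30e6     # rough total sold in the boom years, assuming no double-counting

share_sat_tight = 1 - sold_2002_2007 / housing_units
# share_sat_tight comes out around 0.76, so "over half took no part in the
# bubble" is conservative - and any double-counting of flipped houses would
# only push the share of households that sat tight higher still.
```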
...I wouldn't have made any connection to US housing and foreign equities, but I was looking at both, and I knew both were bubbly. I've since lost money on a few stupid trades, so it's not like I'm an expert or anything. But the fundamentals are usually clear to all. Right now, the macros on S&P PEs, long-term treasuries, and high-yield,
"Taleb's position (and argument in this paper) is that we need to recognize instances where we don't know the answer instead of making decisions based on assumptions that we do."
Have you ever worked at an investment bank? I suspect you have not. Wall Street does not run on the scientific method. It runs on greed, fear and politics. The ones who check the assumptions and know the limits of their models don't have the power to make the decisions. This is the problem. And the Taleban apparently haven't yet figured it out. Quoting Janet Tavakoli: "malfeasance, not models, disrupted the global economy, and Taleb still gets that part wrong."
Last but not least: there's no such thing as a "black swan theory". You can't steal ideas from probability theory and statistics that have been known for centuries, rename them, wrap them up nicely, and call them new or original.
Malfeasance enabled the models, the models enabled malfeasance. Declaring one thing "the culprit" and not the other is to fail to recognize the dynamic nature of the system. Even in a world of perfect angels, Taleb shows that broken models would still bring the system down. Even in a world of perfect models, unscrupulous jerks can bring the system down. It's all too related to pull apart.
So, what is the solution? Destroy the financial industry? Good luck.
Taleb is an anti-intellectual. He believes that all models are wrong and useless. To make it worse, he advocates that no further knowledge can be attained. The only knowledge and truth that is right is the "black swan theory". Come on, gimme a break!!! Aren't we HN'ers smarter than that? Why is Taleb's drivel being upvoted?!
The idea I've gotten from reading Taleb (articles, not books) is not that all models are useless, it's that assuming a normal distribution is often unjustified, and that once-in-a-century events come along quite a bit more often than that. Sure; he invented a catchphrase, not the exponential distribution--but "Black Swan" is more than just a cheap Australian Cabernet if it'll generate more positive awareness of systemic risks.
"You can't steal ideas from probability theory and statistics that have been known for centuries, rename it, wrap it nicely and call it a new, original idea."
Do you have any references to work centuries old that describes what Taleb calls 'Black Swans'?
No need for references. Probability was started by a bunch of bored French aristocrats who wanted to understand risk and uncertainty. Read what they wrote. They're the original thinkers.
Last but not least: in the late 1990s, Lehman Brothers had a trading desk dedicated solely to extremely unlikely events. That was before the so-called "black swan" theory came along. They made money only once. On 9/11.
Taleb's good ideas are not original, and his original ideas are no good.
To be fair, the work of Pascal and the Chevalier de Méré does not give much insight into how a water leak may turn into a mudslide. Whatever you may think of Taleb, his work is more a popularization of catastrophe theory than it is basic probability. He even says as much in his writings.