This was my second time reading The Black Swan by Nassim Taleb, although admittedly I think I was a bit young the first time to fully absorb the content. That is not to say that I didn't get the TL;DR of "hey, sometimes meaningful stuff happens that you can't predict," but what I missed was a lot of the nuance in actually applying the principles to my life.

This blog post does not claim to explain everything Taleb hopes to communicate, nor can I claim to fully understand it at this point. This post is primarily for my own benefit. By writing it, I hope to capture my thoughts for future reference and to formalize my understanding of the concepts.

Some Core Ideas

He starts off quite strong, describing the book as a "philosophical treatise on the weaknesses of the human mind". This is a powerful framing that sets a great tone for the discussions and examples in the book.

Absence of evidence is not evidence of absence

This is probably my favorite point and the one that has most changed how I make decisions. The statement is quite simple; the implications and the application to your life are not.

We fall into this trap most often because of the narrative fallacy, human nature's desire to spin tales to help us comprehend the world. For one reason or another, this concept did not stick with me the first time I read the book. This time, however, this particular fallacy has clung to me like a barnacle on a ship's hull. Powerfully, this fallacy applies directly to history (my college major).

Our narrative of history is wrong. Period. History is written by the winners, by those who choose to impose their biases, thoughts, preconceptions, and desires on future audiences. A narrative is simply too powerful for people to avoid. In fact, sense-making is a narrative process; thus we are likely committing this fallacy simply by trying to make sense of events that are fundamentally insensible. "The more we try to turn history into anything other than an enumeration of accounts to be enjoyed with minimal theorizing, the more we get into trouble".

This narrative landmine is exactly why the point is so important. It's easy for us to slide into arguing that an absence of evidence is evidence of absence. I've never seen a million dollars in cash, so a million dollars in cash must not exist! I've only seen white swans, so there must not be any other colored swans! These simple examples communicate the point. The problem is that when you treat the former as the latter, especially when you're putting something on the line, you're exposing yourself to a risk that cannot be predicted.

As another example, the turkey problem perfectly illustrates what this can mean.

The Turkey Problem

You are a turkey. You wake up every day and are fed. This happens for days, weeks, and then months. As the leaves turn and winter approaches, what can you reasonably expect?

If you are a turkey, you expect to be fed. That is all you have ever known, and any "reasonable" forecast would say that food will come at the same time the next day. Unfortunately, the next day is Thanksgiving, and on that day an event more significant than all the others happens: you are killed.

The absence of evidence for your demise is not evidence of the absence of said demise. It just means you haven't seen it. Your model of the world ("food in the morning") may be right, but the game might not be what you think it is.
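To make this concrete, here's a tiny Python sketch of the turkey's forecast. The numbers are my own toy illustration, not from the book: every piece of evidence points one way, right up until the day that matters most.

```python
# A minimal sketch of the turkey problem (illustrative numbers, not from the book).
# The turkey's "model" is pure induction: tomorrow will look like the average of the past.

feedings = [1] * 1000        # 1,000 peaceful days: fed every single morning
thanksgiving = -1000         # the one day that matters, which the data never hints at

history = feedings                               # everything the turkey has ever observed
naive_forecast = sum(history) / len(history)     # the "reasonable" forecast for tomorrow

print(f"Forecast for day 1001 based on all evidence so far: {naive_forecast:+.2f}")
print(f"What actually happens on day 1001:                  {thanksgiving:+d}")
# The forecast is most confident exactly when the risk is greatest.
```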

Mediocristan vs Extremistan

There are two worlds of randomness: Mediocristan and Extremistan.

Mediocristan is a well-behaved world. In this world, Gaussian randomness prevails. This means that the law of large numbers rules and that the sum of many random parts results in smoothness. These are the well-behaved random variables; they're smooth and easy to model.

Mediocristan is human height. If you were to take all the people on the plane I am on and line them up by height, we could then calculate an average. This average would be well-behaved and reflective of the population we are modeling. Removing a single individual (or adding another one) won't affect the average drastically.

Extremistan, on the other hand, is a strange world. In this world, Mandelbrotian randomness prevails. The law of large numbers does not apply here. Recursive patterns of randomness generate arbitrary complexity, making it impossible (shelve this for later...) to predict what exactly will happen. In this world, "to understand the future enough to predict it, you need to incorporate elements of this future itself".

Extremistan is human income. If you were to take all the people on the plane I am on and line them up by income, we could then calculate an average. This average, however, would be very fragile to the removal of outlier individuals. For instance, one might wager that there are individuals on this plane who have more money than the entire rest of the plane combined.
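To see how fragile the average becomes in Extremistan, here's a rough Python sketch. The numbers are invented for illustration: dropping the single largest observation barely moves the mean height but collapses the mean income.

```python
# A rough sketch contrasting Mediocristan (height) with Extremistan (income).
# The numbers are made up; the point is the fragility of the average to one outlier.

heights_cm = [152, 160, 165, 168, 170, 172, 175, 178, 183, 198]
incomes = [30_000, 40_000, 45_000, 50_000, 55_000, 60_000,
           70_000, 90_000, 120_000, 50_000_000]   # one outlier dwarfs everyone else

def mean(xs):
    return sum(xs) / len(xs)

for name, xs in [("height (cm)", heights_cm), ("income ($)", incomes)]:
    full = mean(xs)
    without_top = mean(sorted(xs)[:-1])   # drop the single largest observation
    print(f"{name:12} mean: {full:>12,.0f}   without the top person: {without_top:>12,.0f}")

# Removing one person barely changes the average height,
# but the average income collapses once the outlier is gone.
```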

These two worlds behave drastically differently and have different implications. For instance, losses in Extremistan can be huge and impossible to model. Losses in Mediocristan are predictable and, by the properties of the standard deviation, bounded in practice. While in theory there could exist a person 12 feet tall, in practice I will never encounter this individual and so I have no reason to worry about it. The same cannot be said for wealth and other things that are calculated rather than natural. Wealth can disappear or appear in an instant; your height cannot.

The Ludic Fallacy

This one is particularly important for the student of data analysis. It's quite simple. The Ludic Fallacy is thinking that a game is a good representation of the world you are trying to model. For instance, the random walk model in modern finance is not justified - even if prices appear to conform to that model based on historical data. "A model may be right but maybe the game is different than anticipated".

Quite simply, it doesn't matter much whether your model is right or wrong. "What matters is not how often you are right, but how large your cumulative errors are". If you have a super simple model that prevents massive errors, you're better off than with a complex model that is very accurate in the common case but wildly off in the all-important outlier case.

As soon as the assumptions embedded in your model stop holding, the model can fail spectacularly. Therefore, rely on as few of these assumptions as you can.
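Here's a toy Python illustration of that point (my own construction, not Taleb's). A flexible model that nails the familiar cases can still rack up a far larger cumulative error once an outlier falls outside its comfort zone.

```python
# A toy illustration of "what matters is not how often you are right, but how large
# your cumulative errors are". The "true" process is roughly linear; a high-degree
# polynomial fits the familiar range very well but extrapolates catastrophically.

import numpy as np

rng = np.random.default_rng(0)

x_train = np.linspace(0, 10, 30)
y_train = 2 * x_train + rng.normal(0, 1, x_train.size)   # roughly linear, a little noise

complex_fit = np.polyfit(x_train, y_train, deg=9)   # very accurate on the familiar range
simple_fit = np.polyfit(x_train, y_train, deg=1)    # a crude straight line

# Evaluate on the familiar range plus a few out-of-range "outlier" points.
x_test = np.concatenate([np.linspace(0, 10, 30), np.array([12.0, 15.0, 20.0])])
y_test = 2 * x_test

err_complex = np.abs(np.polyval(complex_fit, x_test) - y_test).sum()
err_simple = np.abs(np.polyval(simple_fit, x_test) - y_test).sum()

print(f"Cumulative error, complex model: {err_complex:,.1f}")
print(f"Cumulative error, simple model:  {err_simple:,.1f}")
# The polynomial is closer on most points, but its extrapolation errors at
# x = 12, 15, 20 blow up, so its *cumulative* error dwarfs the simple line's.
```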

Skin in the Game and Expert Problems

The expert problem exists in "professions that deal with the future and base studies on the non-repeatable past". These professions are perfectly set up for the fallacies we discussed earlier and for the retroactive explainability of extreme events.

Finance is a perfect example (for Taleb): you simply cannot repeat the past, and therefore "to understand the future enough to predict it, you need to incorporate elements of this future itself". However, I would argue that startups and technology are other examples. It's super hard to predict which companies will be successful; in fact, the assumptions in these models are often wrong. Startups are also a perfect case of "retroactive explainability", where hindsight bias makes it seem like it was all so obvious that a particular thing (or company) would succeed.

Skin in the game arises as a solution to the expert problem. "Experts" in these sorts of professions or areas should put their money where their mouth is. In short, "advice is cheap". These experts don't actually have any skin in the game, and because they do not, they do not suffer the consequences of their losses. This incentivizes the wrong things; in fact, it incentivizes talk, not action.

Swans

The title alludes to this, so it's somewhat obvious, but a final takeaway is that of black swans - events that are unpredictable, consequential, and retroactively explainable. These events cannot be predicted, period. The most we can hope to do is turn them into grey swans, "modelable extreme events", something akin to known unknowns. Black swans are the unknown unknowns.

Conclusion and my key takeaways

While the style of writing does not match my own exactly, I do appreciate it. One thing that seems to repeat itself is that the people I respect often focus on the importance of decision making. What makes me admire someone even more is their ability to openly discuss the faults in their own decision making. Taleb seems to fall into the latter category, though his Twitter feed sometimes suggests that that is not the case.

Let me return to my takeaways. I think the most concrete is that being practical beats being theoretical. Prove something out with repeatable data and go from there. Success consists of avoiding losses, not necessarily of trying to derive profit. Therefore it behooves an individual to invest in preparedness, not in prediction, and to rank beliefs not by how certain we are of them but by the harm they may cause if our assumptions are wrong.

One key takeaway, one that makes me fault my own decision making, is that of "retroactive explainability". I dove into this point under the "absence of evidence" section, and I think it exposes a fault in my own decision making. Having studied history, I try to explain things as a narrative - because that's how we reason about things. This is not a good habit because these events are not repeatable - it's an excellent example of the expert problem! This shortcoming of mine is something I am actively unlearning.

There are lots of other lessons that would bore others to read (if anyone even reads this :)), but they're still there. Fittingly, an absence of evidence is not evidence of absence.