Skin in the Game, by Nassim Nicholas Taleb

whitecoast

The Living Force
FOTCM Member
I haven't seen a thread on this book yet, so I wanted to start one because I always find Nassim Nicholas Taleb to be such an engrossing read. Skin in the Game is the fifth and latest volume in his main opus INCERTO (the others being Fooled by Randomness, The Black Swan, The Bed of Procrustes, and Antifragile). Taleb worked as a derivatives trader for over a decade, after which he became so incensed by how academia dealt with the subject of risk taking in statistics -- namely, in a way that didn't match at all the way things worked in the Real World -- that he eventually changed careers to become a mathematical researcher and essayist.

Skin in the Game is by far my favorite so far, since it provides in essence a mathematical framework for understanding ethics -- namely, that people need to be responsible for their actions in order for any system in which they exist (which can be as small as a town council or as large as civilizations and ecosystems) to remain viable, with responsibility meaning you partake in the gains AND the losses of the risks your actions entail. Just hearing that kind of definition makes the book sound sterile and dry, but I can assure you it is riddled with hundreds of practical examples from linguistics, the restaurant industry, journalism, politics, legal tradition, and the like that make it very easy to grasp what Taleb means. He recently penned an introduction to Skin in the Game on his Medium blog:

When selecting a surgeon for your next brain procedure, should you pick a surgeon who looks like a butcher or one who looks like a surgeon? The logic of skin in the game implies you need to select the one who (while credentialed) looks the least like what you would expect from a surgeon, or, rather, the Hollywood version of a surgeon.

The same logic mysteriously answers many vital questions, such as 1) the difference between rationality and rationalization, 2) that between virtue and virtue signaling, 3) the nature of honor and sacrifice, 4) religion and signaling (why the pope is functionally atheist), 5) the justification for economic inequality that doesn’t arise from rent seeking, 6) why never to tell people your forecasts (only discuss publicly what you own in your portfolio), and 7) even how and from whom to buy your next car.

What is Skin in the Game? The phrase is often mistaken for one-sided incentives: the promise of a bonus will make someone work harder for you. But the central attribute is symmetry: the balancing of incentives and disincentives. People should also be penalized if something for which they are responsible goes wrong and hurts others: he or she who wants a share of the benefits needs to also share some of the risks.

My argument is that there is a more essential aspect: filtering and the facilitation of evolution. Skin in the game – as a filter – is the central pillar for the organic functioning of systems, whether human or natural. Unless consequential decisions are taken by people who pay for the consequences, the world would be vulnerable to total systemic collapse. And if you wonder why there is a current riot against a certain class of self-congratulatory “experts”, skin in the game will provide a clear answer: the public has viscerally detected that some “educated” but cosmetic experts have no skin in the game and will never learn from their mistakes, whether individually or, more dangerously, collectively.

Have you wondered why, on high-speed highways, there are surprisingly few rogue drivers who could, with a simple maneuver, kill scores of people? Well, they would also kill themselves, and most dangerous drivers are already dead (or have suspended licenses). Driving is done under the skin-in-the-game constraint, which acts as a filter. It’s a risk-management tool of society, ingrained in the ecology of risk sharing in both human and biological systems. The captain who goes down with the ship will no longer have a ship. Bad pilots end up at the bottom of the Atlantic Ocean; risk-blind traders become taxi drivers or surfing instructors (if they traded their own money).

Systems don’t learn because people learn individually – that’s the myth of modernity. Systems learn at the collective level by the mechanism of selection: by eliminating those elements that reduce the fitness of the whole, provided these have skin in the game. Food in New York improves from bankruptcy to bankruptcy, rather than through the chefs’ individual learning curves – compare the food quality in mortal restaurants to that in an immortal governmental cafeteria. And in the absence of the filtering of skin in the game, the mechanisms of evolution fail: if someone else dies in your stead, the build-up of asymmetric risks and misfitness will cause the system to eventually blow up.

Yet social science and the bureaucrato-BSers have missed, and keep missing, that skin in the game is an essential filter. Why? Because, outside of hard science, scholars who do not have skin in the game fail to get that while in academia there is no difference between academia and the real world, in the real world, there is. They teach evolution in the classroom but, because they are not doers, they don’t believe that evolution applies to them; they almost unanimously vote in favor of a large state and advocate what I’ve called “Soviet-Harvard top-down intelligent design” in social life.

As illustrated by the story of the surgeon, you can tell from the outside whether a discipline has skills and expertise, from the presence of the pressures of skin in the game and some counterintuitive consequences. But what we call “empty suits”, of the kind you see in think tanks or large corporations – those who want to increasingly run our lives or intervene in Libya – look like actors playing the part, down to their vocabulary and the multiplicative meetings. Talk is cheap, and people who talk and don’t do are easily detectable by the public because they are too good at talking.

Plumbers, bakers, engineers, and piano tuners are judged by their clients, doctors by their patients (and malpractice insurers), and small-town mayors by their constituents. The works of mathematicians, physicists, and hard scientists are judged according to rigorous and unambiguous principles. These are experts, plus or minus a margin of error. Such selection pressures from skin in the game apply to perhaps 99% of the population. But it is hard to tell whether macroeconomists, behavioral economists, psychologists, political “scientists” and commentators, and think-tank policymakers are experts. Bureaucrato-academics tend to be judged by other bureaucrats and academics, not by the selection pressure of reality. This judgment by peers only, not survival, can lead to the pestilence of academic citation rings. The incentive is to be published on the right topic in the right journals, with well-sounding arguments, under some easily contrived empiricism, in order to beat the metrics.

Accountants (that is, bankruptcy or its absence), not other “peer” forecasters or referees using metrics, should be judging forecasters.

Metrics are always gamed: a politician can load the system with debt to “improve growth and GDP” and let his successor deal with the delayed results.

Alas, you can detect the degradation of the aesthetics of buildings when architects are judged by other architects. So the current rebellion against bureaucrats, whether in DC or Brussels, simply comes from the public’s detection of a simple principle: the more micro, the more visible one’s skills. To use the language of complexity theory, expertise is scale dependent. And, ironically, the more complex the world becomes, the more the role of macro-deciders, “empty suits” with disproportionate impact, should be reduced: we should decentralize (so actions are taken locally and visibly), not centralize as we have been doing.

In addition, owning one’s risk was an inescapable moral code for the past four millennia, until very recent times. Warmongers were required to be warriors. Fewer than a third of Roman emperors died in their beds (assuming those weren’t skillfully poisoned). Status came with increased exposure to risk: Alexander, Hannibal, Scipio, and Napoleon were not only first in battle, but derived their authority from a disproportionate exhibition of courage in previous campaigns. Courage is the only virtue that can’t be faked (or gamed like metrics). Lords and knights were individuals who traded their courage for status, as their social contract was an obligation to protect those who granted them their status. This primacy of the risk-taker, whether warrior (or, critically, merchant), prevailed almost all the time in almost every human civilization; exceptions, such as Pharaonic Egypt or Ming China, in which the bureaucrat-scholar moved to the top of the pecking order, were followed by collapse.

Many chapters of the book are available on Medium here. Some of my favorite chapters there were An Expert Called Lindy (on how Time is often the best judge of veracity or truth), Inequality and Skin in the Game (which in truth is about why the masses despise some rich people and love others), Why One Should Eat His Own Turtles (on the history of trade, disclosure, and risk-sharing enterprises in the ancient world as they related to ethical practices at the time), and On Interventionistas and Their Mental Defects (Taleb is rather 'woke' on the topic of Libya and Syria -- this post was also made part of the preface to the introductory chapter).

The last chapter is here:
The Logic of Risk Taking
A central chapter that crystallizes all my work.

Time to explain ergodicity, ruin, and (again) rationality. Recall from the previous chapter that doing science (and other nice things) requires survival, but not the other way around.



[Figure: cartoon of gamblers at a casino]

The difference between 100 people going to a casino and one person going to a casino 100 times, i.e. between path-dependent probability and conventionally understood (ensemble) probability. The mistake has persisted in economics and psychology since time immemorial.
Consider the following thought experiment.

First case: one hundred people go to a casino to gamble a certain set amount each, with complimentary gin and tonics – as shown in the cartoon in Figure x. Some may lose, some may win, and we can infer at the end of the day what the “edge” is; that is, we can calculate the returns simply by counting the money left with the people who return. We can thus figure out whether the casino is pricing the odds properly. Now assume that gambler number 28 goes bust. Will gambler number 29 be affected? No.

You can safely calculate from your sample that about 1% of the gamblers will go bust. And if the gamblers keep playing and playing, you can expect about the same ratio, 1% going bust, over any similar time window.

Now compare this to the second case in the thought experiment. One person, your cousin Theodorus Ibn Warqa, goes to the casino a hundred days in a row, starting with a set amount. On day 28, cousin Theodorus Ibn Warqa is bust. Will there be a day 29? No. He has hit an uncle point; there is no game no more.

No matter how good or how alert your cousin Theodorus Ibn Warqa is, you can safely calculate that he has a 100% probability of eventually going bust.
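
To spell out the arithmetic behind that claim (my gloss, not Taleb's notation): if each casino day carries any fixed bust probability p > 0, survival compounds multiplicatively and eventual ruin becomes certain:

```latex
% Survival over n independent days, each with bust probability p > 0
P(\text{solvent after } n \text{ days}) = (1-p)^{n} \to 0
\quad\text{as } n \to \infty,
\qquad\text{so}\quad P(\text{eventual ruin}) = 1 .
```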

The probabilities of success from the collection of people do not apply to cousin Theodorus Ibn Warqa. Let us call the first set ensemble probability, and the second one time probability (since one is concerned with a collection of people and the other with a single person through time). Now, when you read material by finance professors, finance gurus, or your local bank making investment recommendations based on the long-term returns of the market, beware. Even if their forecasts were true (they aren’t), no person can get the returns of the market unless he has infinite pockets and no uncle points. They are conflating ensemble probability and time probability. If the investor has to eventually reduce his exposure because of losses, or because of retirement, or because he remarried his neighbor’s wife, or because he changed his mind about life, his returns will be divorced from those of the market, period.
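
The ensemble/time distinction is easy to verify numerically. Here is a minimal Monte Carlo sketch; the stake, bet size, and house edge are invented for illustration and are not Taleb's numbers:

```python
import random

def session(wealth, n_bets=100, bet=10.0, p_win=0.495):
    """One day at the casino: a sequence of even-money bets with a
    slight house edge. Returns end-of-day wealth, or 0.0 if busted."""
    for _ in range(n_bets):
        wealth += bet if random.random() < p_win else -bet
        if wealth <= 0:
            return 0.0  # the absorbing barrier: an 'uncle point'
    return wealth

STAKE = 250.0

# Ensemble probability: 100 different people, one day each.
# Gambler number 28 busting has no effect on gambler number 29.
ensemble = [session(STAKE) for _ in range(100)]
print(sum(w == 0.0 for w in ensemble), "out of 100 gamblers busted")

# Time probability: one person (cousin Theodorus) going 100 days
# in a row; the first bust ends the entire path.
wealth, days = STAKE, 0
for _ in range(100):
    wealth = session(wealth)
    if wealth == 0.0:
        break
    days += 1
print("cousin Theodorus survived", days, "of 100 days")
```

With these made-up parameters, typically only one or two of the 100 independent gamblers bust, while cousin Theodorus, who carries his dwindling stake from day to day, rarely makes it anywhere near day 100.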

We saw with the earlier comment by Warren Buffett that, literally, anyone who has survived in the risk-taking business has a version of “in order to succeed, you must first survive.” My own version has been: “never cross a river if it is on average four feet deep.” I effectively organized all my life around the point that sequence matters and that the presence of ruin does not allow cost-benefit analyses; but it never hit me that the flaw in decision theory was so deep. Until, out of nowhere, came a paper by the physicist Ole Peters, working with the great Murray Gell-Mann. They presented a version of the difference between ensemble and time probabilities with a thought experiment similar to mine above, and showed that just about everything in social science concerning probability is flawed. Deeply flawed. Very deeply flawed. For, in the quarter millennium since the formulation by the mathematician Jacob Bernoulli, one that became standard, almost all people involved in decision theory made a severe mistake. Everyone? Not quite: every economist, but not everyone: the applied mathematicians Claude Shannon and Ed Thorp, and the physicist J. L. Kelly of the Kelly Criterion, got it right. They also got it in a very simple way. The father of insurance mathematics, the Swedish applied mathematician Harald Cramér, also got the point. And, more than two decades ago, practitioners such as Mark Spitznagel and myself built our entire business careers around it. (I personally got it right in words, in my trades and decisions, and in detecting when ergodicity is violated, but I never explicitly got the overall mathematical structure – ergodicity is actually discussed in Fooled by Randomness.) Spitznagel and I even started an entire business to help investors eliminate uncle points so they could get the returns of the market. While I retired to do some flaneuring, Mark continued at his Universa relentlessly (and successfully, while all others have failed). Mark and I have been frustrated by economists who, not getting ergodicity, keep saying that worrying about the tails is “irrational”.
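
For readers who don't know the Kelly criterion Taleb name-checks: it sizes repeated bets as a fraction of current wealth so as to maximize time-average growth, which is exactly the time-probability perspective. A minimal sketch of the standard textbook formula (my example numbers, nothing from the chapter):

```python
def kelly_fraction(p, b):
    """Optimal fraction of current wealth to stake on a repeated bet
    that wins with probability p at net odds b (win b per unit staked,
    lose the stake otherwise). Classic result: f* = p - (1 - p) / b."""
    return p - (1 - p) / b

# A 55% coin at even odds: stake 10% of wealth each round.
# Betting much more than f* makes the time-average growth negative
# even though every single bet has positive expected value.
print(kelly_fraction(0.55, 1.0))  # ~0.10, up to floating-point error
```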

Now there is a skin-in-the-game problem in the blindness to this point. The idea I just presented is very, very simple. But how come nobody for 250 years got it? Skin in the game, skin in the game.

It looks like you need a lot of intelligence to figure probabilistic things out when you don’t have skin in the game. There are things one can only get if one has some risk on the line: what I said above is, in retrospect, obvious. But figuring it out is hard for an overeducated nonpractitioner. Unless one is a genius – that is, has the clarity of mind to see through the mud, or has such a profound command of probability theory as to see through the nonsense. Now, certifiably, Murray Gell-Mann is a genius (and, likely, Peters). Gell-Mann is a famed physicist, with a Nobel, who discovered the subatomic particles he himself named quarks. Peters said that when he presented the idea to him, “he got it instantly”. Claude Shannon, Ed Thorp, Kelly, and Cramér are, no doubt, geniuses – I can vouch for the unmistakable clarity of mind combined with depth of thinking that juts out in conversation with Thorp. These people could get it without skin in the game. But economists, psychologists, and decision theorists have no genius (unless one counts the polymath Herb Simon, who did some psychology on the side), and odds are they never will. Adding people without fundamental insights does not sum up to insight; looking for clarity in these fields is like looking for aesthetics in the attic of a highly disorganized electrician.

Ergodicity
As we saw, a situation is deemed non-ergodic when observed past probabilities do not apply to future processes. There is a “stop” somewhere, an absorbing barrier that prevents people with skin in the game from emerging from it – and toward which the system will invariably tend. Let us call these situations “ruin”, since the entity cannot emerge from the condition. The central problem is that if there is a possibility of ruin, cost-benefit analyses are no longer possible.

Consider a more extreme example than the casino experiment. Assume a collection of people play Russian roulette a single time for a million dollars – this is the central story in Fooled by Randomness. About five out of six will make money. If someone used a standard cost-benefit analysis, he would claim an 83.33% chance of gains, for an “expected” average return per shot of $833,333. But if you play Russian roulette more than once, you are doomed to end up in the cemetery. Your expected return is … not computable.
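
Expanding the arithmetic (the survival figures below are my own, straightforwardly computed): one pull of a six-chamber revolver leaves a 5/6 chance of collecting, but repetition compounds:

```latex
% One-shot "expected" value of the million-dollar wager
E[\text{one round}] = \tfrac{5}{6}\times \$1{,}000{,}000 \approx \$833{,}333
% Survival probability over n repeated rounds
P(\text{alive after } n) = \left(\tfrac{5}{6}\right)^{n},
\qquad \left(\tfrac{5}{6}\right)^{20} \approx 0.026
```

After twenty rounds the player is still alive with under 3% probability, which is why the one-shot "expected return" says nothing about repeated play.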

Repetition of Exposures
Let us see why “statistical testing” and “scientific” statements are highly insufficient in the presence of ruin problems and repetition of exposures. If one claimed that there is “statistical evidence that the plane is safe” at a 98% confidence level (statistics are meaningless without such a confidence band) and acted on it, practically no experienced pilot would be alive today: a 2% error rate, compounded over thousands of flights, approaches certainty of disaster. In my war with the Monsanto machine, the advocates of genetically modified organisms (transgenics) kept countering me with benefit analyses (which were often bogus and doctored up), not tail-risk analyses for repeated exposures.

Psychologists determine our “paranoia” or “risk aversion” by subjecting a person to a single experiment – then declare that humans are rationally challenged, as there is an innate tendency to “overestimate” small probabilities. It is as if the person will never again take any personal tail risk! Recall that academics in social science are … dynamically challenged. Nobody could see the grandmother-obvious inconsistency of such behavior with our ingrained daily-life logic. Smoking a single cigarette is extremely benign, so a cost-benefit analysis would deem it irrational to give up so much pleasure for so little risk! But it is the act of smoking that kills, at a certain number of packs per year – tens of thousands of cigarettes; in other words, repeated serial exposure.

Beyond that, in real life, every single bit of risk you take adds up to reduce your life expectancy. If you climb mountains and ride a motorcycle and hang around the mob and fly your own small plane and drink absinthe, your life expectancy is considerably reduced, even though no single action will have a meaningful effect. This idea of repetition makes paranoia about some low-probability events perfectly rational. But we do not need to be overly paranoid about ourselves; we need to shift some of our worries to bigger things.

Note: The flaw in psychology papers is to believe that the subject doesn’t take any other tail risks anywhere outside the experiment and will never take tail risks again. The idea of “loss aversion” has not been thought through properly – it is not measurable the way it has been measured (if it is measurable at all). Say you ask a subject how much he would pay to insure a 1% probability of losing $100. You are trying to figure out how much he is “overpaying” for “risk aversion” or something even more stupid, “loss aversion”. But you cannot possibly ignore all the other present and future financial risks he will be taking. You need to figure out the other risks in the real world: if he has a car outside that can be scratched, if he has a financial portfolio that can lose money, if he has a bakery that may risk a fine, if he has a child in college who may cost unexpectedly more, if he can be laid off. All these risks add up, and the attitude of the subject reflects them all. Ruin is indivisible and invariant to the source of randomness that may cause it.
I believe that risk aversion does not exist: what we observe is, simply, a residual of ergodicity.
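
That last point, that ruin is indivisible across sources, can be made concrete with a toy calculation. All the probabilities below are invented purely for illustration; Taleb gives no such numbers:

```python
# Hypothetical per-year probabilities of a serious financial hit
# from independent sources (invented numbers, for illustration only).
risks = {
    "car gets scratched or totaled": 0.02,
    "portfolio loses badly": 0.01,
    "bakery gets fined": 0.015,
    "child's college costs overrun": 0.03,
    "layoff": 0.05,
}

p_clear = 1.0
for p in risks.values():
    p_clear *= 1.0 - p  # the subject must dodge every source at once

print(f"P(no hit this year)   = {p_clear:.3f}")       # ~0.881
print(f"P(at least one hit)   = {1 - p_clear:.3f}")   # ~0.119
print(f"P(clear for 30 years) = {p_clear ** 30:.3f}") # ~0.022
```
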
Who is “You”?
Let us return to the notion of “tribe” from Chapter x. The defect people get from studying modern thought is that they develop the illusion that each one of us is a single unit, without seeing the contradiction in their own behavior. In fact I’ve sampled ninety people in seminars and asked them: “What’s the worst thing that could happen to you?” Eighty-eight answered “my death”.

This can only be the worst-case situation for a psychopath. For then I asked those who deemed that the worst case is their own death: “Is your death plus that of your children, nephews, cousins, cat, dogs, parakeet, and hamster (if you have any of the above) worse than just your death?” Invariably, yes. “Is your death plus your children, nephews, cousins (…) plus all of humanity worse than just your death?” Yes, of course. Then how can your death be the worst possible outcome?[1]

Thus we get the point that individual ruin is not as big a deal as the collective one. And of course ecocide, the irreversible destruction of the environment, is the big one to worry about.



[Figure: hierarchy of risks]

Taking personal risks to save the collective is both “courage” and “prudence”, since you are lowering risks for the collective.
To use the ergodic framework: my death at Russian roulette is not ergodic for me, but it is ergodic for the system. The precautionary principle, in the formulation I worked out with a few colleagues, is precisely about the highest layer.

Just about every time I discuss the precautionary principle, some overeducated pundit suggests that “we cross the street by taking risks”, so why worry so much about the system? This sophistry usually causes a bit of anger on my part. Aside from the fact that the risk of being killed as a pedestrian is one per 47,000 years, the point is that my death is never the worst-case scenario unless it correlates with that of others.

I have a finite shelf life; humanity should have an infinite duration.

Or

I am renewable, not humanity or the ecosystem.

Even worse, as I have shown in Antifragile, the fragility of the components is required to ensure the solidity of the system. If humans were immortal, they would go extinct from an accident, or from a gradual buildup of misfitness. But a shorter shelf life for humans allows genetic changes to accompany the variability in the environment.

Courage And Precaution Aren’t Opposites

How can courage and prudence both be classical virtues? Virtue, as presented in Aristotle’s Nicomachean Ethics, includes sophrosyne (σωφροσύνη): prudence, a form of sound judgment he called more broadly phronesis. Aren’t these inconsistent with courage?

In our framework, they are not at all. They are actually, as Fat Tony would say, the same ting. How?

I can exercise courage to save a collection of kids from drowning, and it would also correspond to some form of prudence. I am sacrificing a lower layer in Figure x for the sake of a higher one.

Courage, according to the Greek ideal that Aristotle inherited – say, the Homeric ideal, and the one conveyed through Solon, Pericles, and Thucydides – is never a selfish action:

Courage is when you sacrifice your own wellbeing for the sake of the survival of a layer higher than yours.

As we can see, it fits into our table of preserving the sustainability of the system.

A foolish gambler is not committing an act of courage, especially if he is risking other people’s funds or has a family to feed. And other forms of sterile courage aren’t really courage.[2]

Rationality, again
The last chapter presented rationality in terms of actual decisions, not what are called “beliefs”, as these may be adapted, in the most convincing way, to steer us away from things that threaten systemic survival. If superstition is what it takes, not only is there absolutely no violation of the axioms of rationality there, but it would be technically irrational to stand in its way.

Let us return to Warren Buffett. He did not make his billions by cost-benefit analysis; rather, he did it simply by establishing a high filter, then picking opportunities that pass such a threshold. “The difference between successful people and really successful people is that really successful people say no to almost everything,” he wrote. Likewise, our wiring might be adapted to “say no” to tail risk. For there are a zillion ways to make money without taking tail risk. There are a zillion ways to solve problems (say, feed the world) without complicated technologies that entail fragility and an unknown possibility of tail risks.

Indeed, it doesn’t cost us much to refuse some new shoddy technologies. It doesn’t cost me much to go with my “refined paranoia”, even if wrong. For all it takes is for my paranoia to be right once, and it would have saved my life.

Love Some Risks
Antifragile revolves around the idea that people confuse risk of ruin with variations – a simplification that violates a deeper, more rigorous logic of things. It makes the case for risk loving, for systematic “convex” tinkering, and for taking a lot of risks that don’t have tail risks but offer tail profits. Volatile things are not necessarily risky, and the reverse is also true. Jumping from a bench would be good for you and your bones, while falling from the twenty-second floor will never be. Small injuries can be beneficial, never larger ones. Fearmongering about some classes of events is fearmongering; about others it is not. Risk and ruin are different tings.

Throughout the book you get treated to a lot of ancient allegories from the Near East and Greek mythology, as well as several good thrashings of Saudi Arabia, Salafism, the US State Dept, GMOs, mainstream journalists, and some of the New Atheists. Lots of respect for Assad and Putin therein too, reading between the lines. :-)
 
Yep, this one was a good read. I like Taleb's aphoristic style, too. We read a few excerpts on a recent Truth Perspective here: The Truth Perspective: Postmodern Trump Derangement Syndrome and Skin in the Game -- Sott.net

He's got respect for Caesar too - back when leaders had courage and skin in the game, unlike politicians today who just cater to polls and campaign donors.

His description of inequality was great. People are fine with inequality - what they don't like are rent-seekers who game the system and don't have accountability/skin in the game. That ties in with what Lobaczewski wrote about revolutions, competence, and upward/downward adjustment.

I hadn't read any Taleb before this one, but now I plan on doing so.
 
> I hadn't read any Taleb before this one, but now I plan on doing so.

The first book of his I read was Antifragile. I thought it contained some concepts applicable to The Work as well. NNT would probably define The Work as developing an antifragile personality: one that gains from volatility by exposing oneself voluntarily to stressors and using self-observation and networking to gain insight into the constituent parts of our personality (little i’s or programs) – which of them are fragile and worth discarding, and which are robust and worth keeping. JBP has many similar things to say about working on oneself (but y’all already knew that). The opposite of this work is attaching oneself to the whole of oneself, including the suboptimal parts, and thereby transferring the fragility of these little i’s to the external world. Looking at the inverted triangle in the LOGIC OF RISK TAKING chapter, we can see that the self isn’t actually at the bottom of the ethical hierarchy; rather, the little i’s are. Just as it is unethical for us to take the upside of a risk while externalizing the consequences to acquaintances, our society, etc., it is also unethical for a small part of us (an impulse, a mood, a pathological micromachine) to get dopamine and serotonin hits at the expense of the overall well-being of the organism and its society and ecosystem. That kind of disregard is the essence of the criminal mind.

Another thing I liked about Antifragile was how it pointed out that most technological advancement came about through tinkering and garage experimentation, rather than through big research budgets set up to solve a particular problem. That definitely seems to be corroborated by the forum’s experience with the medical establishment and grassroots healthcare practices.
 