I usually keep interesting Twitter threads for Saturdays, but this one deserves additional commentary and some additional reading links:
Unintended consequences, or externalities, or spillovers (some might say these aren’t really interchangeable terms) are a well-studied phenomenon in economics, as Anup Malani himself notes. Read the classic paper, and listen to the many, many conversations over on EconTalk about the topic to get a better understanding.
Anup Malani says towards the end of that first tweet that he has seen economics give explanations for 1 (but not for 2, which we’ll get to later). So what are the explanations for 1? For us to understand the explanations, let us first take a look at the examples that Anup Malani cites in this thread:
And as he himself says in a subsequent tweet in this thread, these phenomena can be understood using simple price theory. Driving becomes safer as a consequence of the introduction of seat belt laws. What does one do with this increased safety? One can either benefit from it by maintaining the same driving speed as before, or one can “spend” the increased benefit by increasing the driving speed.
If, for example, you are the kind of person who thought you were a safe and competent driver before seat belts, you might now think that wearing one makes you safer still. But you were OK with the level of safety you had before – you are now “extra” safe. That’s “too” safe for you, so you up the speed at which you drive. If you have studied basic micro before, kudos if you were reminded of this; if you haven’t formally studied micro, please do watch that video.
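The trade-off described above can be sketched as a toy optimization. Everything here – the functional forms, the numbers, and the helper name `optimal_speed` – is my own illustrative construction, not anything from the thread; the only point is that lowering the cost of a crash (which is what a seat belt does) raises the speed a utility-maximizing driver chooses.

```python
import numpy as np

def optimal_speed(injury_cost, speeds=np.linspace(40, 120, 801)):
    """Pick the speed that maximizes net utility for a stylized driver.

    Utility = benefit of getting there faster - expected injury cost.
    All functional forms and parameters are illustrative, not estimates.
    """
    benefit = 10 * np.log(speeds)         # diminishing returns to speed
    crash_prob = (speeds / 200) ** 2      # crash risk rises with speed
    utility = benefit - crash_prob * injury_cost
    return speeds[np.argmax(utility)]

# A seat belt doesn't change crash probability here, only how much a
# crash hurts - and that alone pushes the chosen speed up.
no_belt = optimal_speed(injury_cost=100)   # crashes are very costly
with_belt = optimal_speed(injury_cost=50)  # seat belt halves injury cost
```

With these made-up numbers, `with_belt` comes out higher than `no_belt`: the driver “spends” part of the safety gain on extra speed, which is exactly the price-theory story.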
So, price theory helps us “get” how to think about unintended consequences. Although this does raise the rather interesting question about whether one should have anticipated these effects in advance (us economists, we don’t like using simple words. We say ex-ante instead of in advance. It’s the same thing.)
And if we could have anticipated these effects in advance, then were they really unintended consequences in the first place? Something to think about, eh? Maybe that’s why these terms (externalities, unintended consequences and spillovers) aren’t really interchangeable?
But now we get to the second case, part two of Anup Malani’s first tweet in this thread: that which does not kill you makes you stronger.
What are examples of this phrase? (Here’s the Wikipedia article about the phrase itself)
Vaccines, of course! They make you sick sometimes (you are, after all, injecting yourself with a dramatically weakened version of the virus), but they leave you stronger in the sense that your body “learns” how to cope with the virus if it actually does enter your body. Vaccines don’t kill you, and they make you stronger!
But that’s an example from the field of biology. What about economic systems?
Anup Malani gives three excellent examples in his thread; I’ll mention just one over here. Please take a look at the other two as well.
Import of cloth from India, where wages were low, threatened the domestic British cloth-making industry. It didn’t kill this industry, but it did threaten it. And the British responded by increasing mechanization. And the rest is, well, literally history.
Now, this is where Anup Malani’s Twitter thread really takes off for me. Price theory, he says, helps us understand unintended consequences. What theory helps us understand “that which does not kill you makes you stronger”?
Anup Malani says we don’t really know.
Might Nassim Nicholas Taleb’s book, Antifragile, have an answer?
Antifragility is a property of systems in which they increase in capability to thrive as a result of stressors, shocks, volatility, noise, mistakes, faults, attacks, or failures. The concept was developed by Nassim Nicholas Taleb in his book, Antifragile, and in technical papers.
https://en.wikipedia.org/wiki/Antifragility
I might be wrong about this, but I think the answer is yes, it does. Read chapter 4 from the book very carefully, in the context of what we’re speaking about in this blog post. Here’s one excerpt from that chapter to help you get started:
Nietzsche’s famous expression “what does not kill me makes me stronger” can be easily misinterpreted as meaning Mithridatization or hormesis. It may be one of these two phenomena, very possible, but it could as well mean “what did not kill me did not make me stronger, but spared me because I am stronger than others; but it killed others and the average population is now stronger because the weak are gone.” In other words, I passed an exit exam. I’ve discussed the problem in earlier writings of the false illusion of causality, with a newspaper article saying that the new mafia members, former Soviet exiles, had been “hardened by a visit to the Gulag” (the Soviet concentration camps). Since the sojourn in the Gulag killed the weakest, one had the illusion of strengthening. Sometimes we see people having survived trials and imagine, given that the surviving population is sturdier than the original one, that these trials are good for them. In other words, the trial can just be a ruthless exam that kills those who fail. All we may be witnessing is that transfer of fragility (rather, antifragility) from the individual to the system that I discussed earlier. Let me present it in a different way. The surviving cohort, clearly, is stronger than the initial one—but not quite the individuals, since the weaker ones died. Someone paid a price for the system to improve.

Taleb, Nassim Nicholas. Antifragile (p. 76). Penguin Books Ltd. Kindle Edition.
Mithridatization? Here you go. Hormesis? Click here.
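Taleb’s “exit exam” point is easy to see in a toy simulation (my own construction, purely for illustration): subject a population to a shock that removes the weakest, and the surviving cohort’s average strength rises even though no surviving individual became any stronger.

```python
import random

random.seed(0)

def trial(population, shock):
    """Kill everyone whose 'strength' falls below the shock; return survivors.

    Toy model of Taleb's 'exit exam': no individual is strengthened,
    the weak are simply removed from the cohort.
    """
    return [s for s in population if s >= shock]

# Strengths drawn from a normal distribution - illustrative numbers only.
population = [random.gauss(50, 10) for _ in range(10_000)]
before = sum(population) / len(population)

survivors = trial(population, shock=50)
after = sum(survivors) / len(survivors)
# The cohort average rises, yet every survivor's strength is unchanged:
# the "improvement" is pure selection, and someone paid the price for it.
```

This is exactly the causal illusion in the Gulag example: the system looks hardened, but only because the comparison is between the survivors and the original population, not between any individual before and after.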
This blog post isn’t the place to get into the details of chapter four of Antifragile. Please do read the entire chapter (and the entire book!).
It is, of course, all too possible that I’m entirely wrong in saying that this is a good answer to Anup Malani’s question. And if you think so, I would love to learn how I’m wrong.
But for the moment, this is my first pass answer to a fascinating question at the end of a lovely thread.