Ethan Mollick is an Associate Professor at the Wharton School of the University of Pennsylvania, where he studies and teaches innovation and entrepreneurship. He is also the author of The Unicorn’s Shadow: Combating the Dangerous Myths that Hold Back Startups, Founders, and Investors. His papers have been published in top management journals and have won multiple awards. His work on crowdfunding is the most cited article in management published in the last seven years. Prior to his time in academia, Ethan cofounded a startup company, and he currently advises a number of startups and organizations. As the Academic Director and cofounder of Wharton Interactive, he works to transform entrepreneurship education using games and simulations. He has long had an interest in using games for teaching, and he coauthored a book on the intersection between video games and business that was named one of the American Library Association’s top 10 business books of the year. He has built numerous teaching games, which are used by tens of thousands of students around the world.
Here is the Interactive (that’s the name of the site, hence the capitalization) website, and it has a lovely little pun for its title. Here is his Google Scholar page, and here is his academic page. He doesn’t have a Wikipedia page, but here is an interesting Twitter thread about Wikipedia written by him. And if you insist on a Wikipedia page, well, you have to qualify to be able to read it. Can you eat glass? Here is a tweet by him, it’ll allow you to make progress on following Ethan Mollick on all platforms.
In short, Ethan Mollick is that all-too-rare example of a person who is consistently interesting, and from whom you’ll get to learn a lot. And he is now, as I mentioned, on Substack.
His first post on Substack tells you how to be more creative, and if I may be allowed to paraphrase his advice, it boils down to chilling and sleeping. That’s the kind of underrated advice the world really needs right now!
The sleep suggestion is important. It is really clear that sleep is critical to successful idea generation, especially in the context of making entrepreneurs more creative. The effects go beyond just creativity, however. People who are sleep-deprived not only generate lower-quality ideas but become bad at differentiating between good ideas and bad ones. Worse still, research shows that sleep-deprived individuals become more impulsive and are more likely to act on the bad ideas they generate. That means that a chronically sleep-deprived person would be more likely to come up with bad ideas, think they are good, and suddenly quit their job to pursue them! So, creativity starts with a good night’s sleep, and if you can’t manage that, a 75-minute nap has been found to do almost as good a job in putting people in the right frame of mind to be creative.
The world needs more people who are interesting, helpful, creative and interested in making the world progress. Please do follow Ethan Mollick wherever possible, and learn how to help make the world a more interesting, and therefore better, place. What else is there in life, no?
I used to think there was such a thing as development economics. There are still richer and poorer countries, of course, but is there a “development economics,” a special type of economics for poor countries? I don’t think so. Maybe there once was. In the twentieth century, divergence in per-capita GDP increased big time and it was a burning question why poor countries weren’t on the same development path as the developed nations. Starting around 1990-2000, however, we have seen convergence. Most countries are now on the same path. Poorer countries and richer countries are becoming more alike, sometimes for good and sometimes for bad.
Here’s the Wikipedia article on what constitutes development economics:
Development economics is a branch of economics which deals with economic aspects of the development process in low- and middle- income countries. Its focus is not only on methods of promoting economic development, economic growth and structural change but also on improving the potential for the mass of the population, for example, through health, education and workplace conditions, whether through public or private channels.
Both Alex’s definition and the Wikipedia definition focus on how low- and middle-income countries need a different kind of economic theory when it comes to growth in these parts of the world. But why do these countries need a special kind of economics?
I suspect an answer most economists would agree on (talk about courting controversy!) is that these countries are likely to have a poorer quality of institutions and property rights. The legal system may not work as well as intended and the quality of political institutions might be worse along at least some dimensions, and that’s just for starters. So it’s not so much the case that a special kind of growth theory is needed, but that some of the assumptions underpinning the model are fundamentally different.
But as Alex mentions in his post, these assumptions may not be applicable, because convergence has taken place. The post is difficult to excerpt from, and I would recommend that you go ahead and read it in its entirety. But perhaps the most surprising thing in Alex’s latest post is the fact that this convergence has happened because poorer countries have caught up, more or less – but also because richer countries have become worse along some dimensions:
More generally, poorer and richer countries face many of the same problems today: infrastructure, low-skill workers and technological change, climate adaption and so forth. Is the latest paper on cash transfers, pollution, or corruption about a poor country or a rich country? It’s hard to tell. Poor countries still have their own unique problems, of course, but those problems are best analyzed by country rather than by income category. India is not the same as Thailand or Peru. I see little that unites poor countries under the rubric development economics.
And anecdotally, I’m sure we’ve all experienced ways in which poorer countries are not just better off than before, but also have materially better institutions and processes.
The question is, does that then mean that there is no longer such a thing as development economics?
I would disagree with Alex’s stance, and say that development economics very much remains a relevant subject, and that for two reasons.
First, because with convergence, we’re not answering the same question that we were earlier – it is not so much about the fact that we need to focus on how to have poor countries grow faster, but just about how to make the world grow faster. There are still, to be clear, countries that remain poor, and even within countries that have developed faster, there are regions that remain poor – but the focus of development economics should be different today than it was, say, in the 1960s.
And viewed from this framework, it is the answer to that question that has changed from about sixty years ago. Figuring out why it has changed is now a fascinating part of development studies. And the lessons one can learn by thinking about this helps us try and figure out what we can do to make the world a better place. To give you just one of many possible examples, you might want to think about which factors helped South Korea grow so rapidly in the last six to seven decades, and then ask which of these factors are replicable in an Indian context today. Not all factors will be applicable, and India today is not what India was back then, nor is it today what South Korea was back then.
Also, Pakistan cannot learn the same lessons that India might, because the ground reality in both countries is different in terms of resources, climate, geography, population, income levels, political institutions and so much more. And Sri Lanka will have a different set of lessons that are applicable, and the African nations are a whole other story… and well, so on. Alex mentions this in his blogpost, of course.
But while it is true that there is little that unites poor countries today under the rubric of development economics, I don’t take that to mean that there is no such thing as development economics. Rather, I would argue that this simply means that the low hanging fruit in development economics have been picked, and development economics has now become an even more challenging and interesting field than before.
But “what can we do to make the world a better place?” will forever remain a valid and urgent question, so development economics, for me, will remain a fascinating subject to think about.
An interesting pattern recurs across the careers of great scientists: an annus mirabilis (miracle year) in which they make multiple, seemingly independent breakthroughs in the span of a single year or two.
This is how the short, extremely readable essay begins, and the topic of the essay is not just observations about the remarkable number of scientists who have had these miracle years, but also ruminations about how we might get more people to have these years.
Newton, Darwin and Einstein are three people who Dwarkesh argues exemplify best the idea of a miracle year, and well, it is hard to argue against that list. But it’s not just this rather intimidating list – Copernicus, Von Neumann, Gauss, Linus Torvalds and Ken Thompson are also mentioned. (By the way, I’ve decided to not link to the Wikipedia pages in each case deliberately. Please do look up any name that you are unfamiliar with, because they’re all worth a Google search or two).
Dwarkesh argues, and to my mind fairly convincingly, that the phenomenon isn’t just coincidence (read his post to understand why).
But that then raises two important questions: one, what were the causal factors that were common across all of these cases? Two, are these factors replicable – how do we increase the probability of many more folks having these miracle years?
As regards the first, Dwarkesh offers up two main hypotheses. The first:
In Kuhnian terms, you could say that the great scientists found a new paradigm and then spent a year gobbling up all of the important, low hanging discoveries before their competitors could catch up.
If the word “Kuhnian” is new to you, you’re in luck. Read this highlight, and then, if you can spare the time, the rest of the article. Also read the Wikipedia article, while you are at it. So ok, one idea is that these scientists were responsible for a paradigm shift, and as Dwarkesh says, they gobbled up all the important low-hanging fruit in these (apologies for the mixed-up metaphor) uncharted territories.
A related hypothesis, and the two need not be mutually exclusive, is offered later on in the essay:
Perhaps there’s a brief window in a person’s life where he has the intelligence, curiosity, and freedom of youth but also the skills and knowledge of age. These conditions only coincide at some point in a person’s twenties. It wouldn’t be surprising if the combination of fluid intelligence (which declines steeply after your 20s) and crystalized intelligence (which accumulates slowly up till your 50s and 60s) is highest during this time.
As always, read the rest of the essay, but I want to focus on the concluding paragraph of Dwarkesh’s essay:
Given how many of the great scientific discoveries have come about during miracle years, we should do everything we can to help smart Twentysomethings have an annus mirabilis. We should free them from rote menial work, prevent them from being overexposed to the current paradigm, and give them the freedom to explore far-fetched ideas without arbitrary deadlines or time-draining obligations.
I have chosen to not excerpt the very last sentence of the essay, in which he laments how the excerpt above is the very opposite of a modern PhD program. Slight disagreement – I would say this is the very opposite of higher education in general – in particular, the “overexposed to the current paradigm” bit truly resonated with me.
Higher education is far too much about received wisdom and current orthodoxy, and I don’t think we do enough in terms of leaving opportunities for serendipitous conversations, chance encounters, heated debates and the freedom to explore potentially heretical ideas – no matter the field of study.
Sure, classes are important, but there is so much more to education than memorizing definitions from a textbook. The cultivation practices currently in place aren’t going to generate too many anni mirabiles, more’s the pity.
I hope you do, and I think I do – know economics, that is.
But I’ve always thought about economics (how to get the most out of life), here on earth. I haven’t thought about what economics might be like on other planets, on space stations, or on whatever else lies ahead of us in terms of both space and time (pun kind of intended).
Paul Krugman had a fun paper about this written more than forty(!) years ago. The paper is freely available, and you can download it over here, but there is also a Wikipedia article about it, if you would prefer to begin there.
As the Wikipedia article says, the summary of the paper was this:
How should interest rates on goods in transit be computed when the goods travel at close to the speed of light? This is a problem because the time taken in transit will appear less to an observer traveling with the goods than to a stationary observer.
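To get a feel for the puzzle, here is a minimal sketch in Python. All the numbers (the 0.8c cruising speed, the 5% interest rate) are assumptions for illustration, not anything from Krugman’s paper:

```python
import math

def proper_time(coordinate_time_years, v_fraction_of_c):
    """Years elapsed aboard a ship moving at speed v (as a fraction of c),
    for a journey that takes coordinate_time_years in the stationary frame."""
    gamma = 1.0 / math.sqrt(1.0 - v_fraction_of_c ** 2)
    return coordinate_time_years / gamma

# A cargo run that takes 10 Earth-years at 0.8c lasts only 6 years aboard:
t_earth = 10.0
t_ship = proper_time(t_earth, 0.8)

# Compound an assumed 5% annual rate over each observer's elapsed time:
r = 0.05
interest_earth = (1 + r) ** t_earth - 1  # ~63% accrued in the Earth frame
interest_ship = (1 + r) ** t_ship - 1    # ~34% accrued in the ship's frame
print(t_ship, interest_earth, interest_ship)
```

The two observers disagree about how much time has passed, and hence about how much interest has accrued – which is exactly the problem the quote describes.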
But a much more recent post by Robin Hanson invites us to do a serious analysis of a no-longer-ridiculous subject: how should one think about the social analysis of a future that is much more about space travel?
We understand space tech pretty well, and people have been speculating about it for quite a long time. So I’m disappointed to not yet see better social analysis of space futures. In this post I will therefore try to outline the kind of work that I think should be done, and that seems quite feasible.
I teach Principles of Economics for a living, but have only very rarely (well ok, almost never) thought about Principles of Economics as it relates to space travel. As Tyler Cowen might say, most of the basic principles will remain the same, and demand curves will slope downwards, but what will actually change?
This is surprisingly hard to think about, because I tend to just assume that economics is always earth bound. And it takes me time to wrap my head around the fact that I’m thinking about economics in a very different context. Robin Hanson helps us overcome this initial hurdle:
Here is the basic approach: 1. Describe how a space society differs from others using economics-adjacent concepts. E.g., “Space econ is more X-like”. 2. For each X, describe in general how X-like economies differ from others, using both historical patterns and basic econ theory. 3. Merge the implications of X-analysis from the different X into a single composite picture of space.
His first example about X is that of lower density. Or, in plainer English, space is just going to be really far away from everything else. I mean, really far away. What does that mean for an economy, when it is just ridiculously far away from everything else?
Let’s think through this a bit. Can, say, thinking about Neom be similar to thinking about this problem? Or Naypyidaw? Or are we talking about a completely different problem, because of the vast difference in terms of distance? And if you say it is a completely different problem, why do you say so?
Are we talking about travel costs being significantly different? What about the cost of communication (both within that base, and back to Earth)? Which resources become more valuable because this base is so far away, and which resources are valuable “just” because they are scarce on that base? Will, as Robin Hanson points out, lower density mean lower product variety, and what will that imply for this economy? How should one think about Dixit-Stiglitz in this context?
Read the whole thing, of course, but Robin Hanson points out a variety of ways in which space economics is going to be different. I’ll highlight just a few below:
It’s going to be very far away, as we just discussed
It’s going to be wildly different in terms of resource economics
What about population growth?
As I said, I struggle to think about this just because my mental framework thinks about economics in a very Earthian (yes, this is now a word) context. And that is precisely why I enjoyed reading this blogpost so much, because it gives me a very pleasant headache about stuff I thought I knew.
And I hope you’ll spend some time with this very pleasant headache too! 🙂
The Higg Index is an apparel and footwear industry self-assessment standard for assessing environmental and social sustainability throughout the supply chain. Launched in 2012, it was developed by the Sustainable Apparel Coalition, a nonprofit organization founded by a group of fashion companies, the United States government Environmental Protection Agency, and other nonprofit entities.
I had no clue that such a thing existed, but it would seem that a lot of apparel stores use this index as a way to advertise the fact that the products that they’re selling have been produced in a sustainable manner.
The Higg Index is spread across three categories: product tools, facility tools and brand and retail tool.
I came across the Higg Index in a New York Times article that warns us about depending too much on an index of this sort:
An explosion in the use of inexpensive, petroleum-based materials has transformed the fashion industry, aided by the successful rebranding of synthetic materials like plastic leather (once less flatteringly referred to as “pleather”) into hip alternatives like “vegan leather,” a marketing masterstroke meant to suggest environmental virtue. Underlying that effort has been an influential rating system assessing the environmental impact of all sorts of fabrics and materials. Named the Higg Index, the ratings system was introduced in 2011 by some of the world’s largest fashion brands and retailers, led by Walmart and Patagonia, to measure and ultimately help shrink the brands’ environmental footprints by cutting down on the water used to produce the clothes and shoes they sell, for example, or by reining in their use of harmful chemicals. But the Higg Index also strongly favors synthetic materials made from fossil fuels over natural ones like cotton, wool or leather. Now, those ratings are coming under fire from independent experts as well as representatives from natural-fiber industries who say the Higg Index is being used to portray the increasing use of synthetics use as environmentally desirable despite questions over synthetics’ environmental toll.
I don’t know enough about the Higg Index to be able to tell you whether it ‘makes sense’ or not, but this is a good way to start to think about incentives.
When you meet an index such as this one, some simple questions are worth asking:
How long has this index been around?
Who created it?
Who funds it?
Who uses it?
What did it replace, and why?
Are there other indices that do a similar job?
Try and answer these questions for the Higg Index, for example. The NYTimes article carries a slightly sceptical tone about the Higg Index (but is, ultimately, a balanced take) – once you finish answering these questions, try giving it a read, and then reach your own conclusions about its reliability.
And as usual, the most important lesson of them all: apply the same set of questions to all the other indices that you may have come across!
Here’s the excellent Our World In Data page about the topic, and here’s a lovely visualization of how the TFR has changed for the world and for India over time (please make sure to “play” the animation):
(I hope this renders on your screens the way it is supposed to. If not, my apologies, and please click here instead)
But now we have news: India’s TFR has now slipped below the replacement rate. Here’s Vivek Kaul in Livemint explaining what this means:
The recently released National Family Health Survey (NFHS-5) of 2019-2021 shows why. As per the survey, India’s total fertility rate now stands at 2. It was 3.2 at the turn of the century and 2.2 in 2015-2016, when the last such survey was done. This means that, on average, 100 women had 320 children during their child-bearing years (aged 15-49). It fell to 220 and now stands at 200. Hence, India’s fertility rate is already lower than the replacement level of 2.1. If, on average, 100 women have 210 children during their childbearing years and this continues over the decades, the population of a country eventually stabilizes. The additional fraction of 0.1 essentially accounts for females who die before reaching child-bearing age.
Of course, as with all averages, so also with this one: you can weave many different stories based on how you slice the data. You can slice it by urban/rural divides, you can slice it by states, you can slice it by level of education, you can slice it by religion – and each of these throws up a different point of view and a different story.
But there are three important things (to me) that are worth noting:
The TFR for India has not just come down over time, but has slipped below the global TFR in recent years.
This doesn’t (yet) mean that India’s population will start to come down right away, and that for a variety of reasons. As Vivek Kaul puts it: “So, what does this mean? Will the Indian population start stabilizing immediately? The answer is no. This is primarily because the number of women who will keep entering child-bearing age will grow for a while, simply because of higher fertility rates in the past. Also, with access to better medical facilities, people will live longer. Hence, India’s population will start stabilizing in around three decades.”
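Kaul’s point about population momentum is easy to see in a toy cohort model. Everything below is illustrative – the age structure, the 25-year generations, the 50:50 sex ratio – and none of it is actual NFHS data:

```python
# Toy model: even with TFR below replacement, a young population
# keeps growing for a while before it stabilizes (population momentum).
TFR = 2.0        # children per woman, below the ~2.1 replacement level
GENERATION = 25  # assumed years per generation

# A young age structure: cohort sizes for ages 0-24, 25-49, 50-74
# (units are arbitrary; only the shape matters).
cohorts = [100.0, 80.0, 60.0]

for gen in range(4):
    total = sum(cohorts)
    print(f"after {gen * GENERATION:>3} years: population {total:.0f}")
    births = cohorts[0] * 0.5 * TFR             # half the cohort are women
    cohorts = [births, cohorts[0], cohorts[1]]  # oldest cohort dies off
```

In this sketch the population rises from 240 to 300 before levelling off: the large young cohorts must first pass through their child-bearing years, which is why stabilization takes decades rather than happening immediately.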
The next three to four decades are a period of “never again” high growth opportunity for India, because never again (in all probability) will we ever have a young, growing population.
Demography is a subject you need to be more familiar with, and if you haven’t already, please begin with Our World in Data’s page on the topic, and especially spend time over the section titled “What explains the change in the number of children women have?”
I ate a most delicious snack yesterday, and probably more of it than I should have. A student from Vasai had brought along some fried surmai (which was also outstanding), but in a remarkable event where I am concerned, it was the vegetarian item that won the Kulkarni Tastebud contest yesterday.
The item in question was sukeli, and it looks like this:
This is what the website has to say about the product:
Dried bananas are 100% natural Dried Fruit, without any Artificial Flavor, Preservatives or Color. These are Soft, Chewy and full of Sweet Fruity flavor. These are rich in Potassium which regulates the Blood Pressure (BP). It contains Natural Sugar, Soluble Fiber, Minerals, Vitamins and Antioxidants which are essential to maintain good health. Dried Banana is great energy snack favorite of Athletes and Sportsperson. (sic)
I wouldn’t know about the nutritional qualities, but I can attest to the sukelis being soft, chewy and full of sweet fruity flavor. Think of it as an intensely addictive cross between bananas and jackfruits.
Do you know which is the most popular type of banana grown the world over? It is the Cavendish banana, and we produce a lot – a lot – of Cavendish bananas.
Here’s Vikram Doctor in an old piece from the Economic Times:
To get good bananas in Mumbai you must go to CST station. Not to take a train, but because every evening a few hawkers bring baskets of bananas to sell to the evening commuter crowd. The bananas are grown just outside Mumbai and they are small, fat and full of creamy-sweet flavour. They are usually not quite ripe when sold, and tend to ripen unevenly, but nearly all get sold by end of day. It used to be easier to get good bananas. Long Moira bananas from Goa, plantains from Mangalore that had to be kept till their skins turned black, several types of small bananas, thick red bananas, humble green Robustas and Rasthalis from Tamil Nadu so fat they were bursting out of their skin. Today you mostly just find one cheap green banana, one small banana and then the type that now rules the market. It is perfectly shaped, perfectly yellow coloured and a taste that is perfectly boring, but just banana tasting enough to be acceptable. Fruit vendors call it Golden Banana and push it hard, justifying its higher price by telling you it is meant for export. Which is correct since this is a kind of Cavendish, the banana variety that accounts for 99 per cent of the world market.
You might also want to listen to a podcast (now discontinued, alas, but it was truly excellent while it lasted) about the same topic. It used to be hosted by Vikram himself, and all episodes are worth a listen. It no longer seems to be online, more’s the pity, but perhaps the more intrepid readers might be able to surface a source for all of us? My top three episodes were about coffee, butter and bananas.
But back to bananas: as I said, we produce a lot of Cavendish bananas. Why? Well, economies of scale, in the jargon of my tribe. In English, we were optimizing for cost minimization:
Nature has a simple way to adapt to different climates: genetic diversity. Even if some plants react poorly to higher temperatures or less rainfall, other varieties can not only survive – but thrive, giving humans more options on what to grow and eat. But the powerful food industry had other ideas and over the past century, humans have increasingly relied on fewer and fewer crop varieties that can be mass produced and shipped around the world. “The line between abundance and disaster is becoming thinner and thinner and the public is unaware and unconcerned,” writes Dan Saladino in his book Eating to Extinction.
If a pest comes along that is particularly bad for that particular variety, well, we’re in deep doo-doo (and the infographic in the Guardian article is an excellent way to understand how we refuse to learn from history). And well, such a pest is now with us:
…diversity boosts the overall resilience in our food systems against new climate and environmental changes that can ruin crops and drive the emergence of new or more aggressive pathogens. It’s what enabled humans to produce food and thrive at high altitudes and in the desert, but rather than learn from the past, we’ve put all our eggs in a few genetic baskets. This is why a single pathogen, Panama 4, could wipe out the banana industry as we know it. It’s been detected in every continent including most recently Latin America, the world’s top banana export region, where entire communities depend on the Cavendish for their livelihoods. “It’s history repeating itself,” said banana breeder Fernando Garcia-Bastidas.
And when they say “every continent”, they aren’t exaggerating:
Tropical Race 4 (TR4), the virulent strain of fungus Fusarium oxysporum cubense that is threatening banana crop globally with the fusarium wilt disease, has hit the plantations in India, the world’s top producer of the fruit. The devastating disease which surfaced in the Cavendish group of bananas in parts of Bihar is now spreading to Uttar Pradesh, Madhya Pradesh and even Gujarat, and threatening to inflict heavy losses to the country’s ₹50,000-crore banana industry.
Well, OK, you might say, bring on the guys in the white coats and figure out the solution. Here’s Wikipedia on the topic, and the relevant section doesn’t have a reassuring heading: it’s called “Disease Management”. Personally, I would have felt better if it was more along the lines of disease eradication. But the first line of this section gives one a sinking feeling:
“As fungicides are largely ineffective, there are few options for managing Panama disease”
The Wikipedia article on Cavendish bananas is equally gloomy on the topic, save for a single line of some hope towards the end:
Cavendish bananas, accounting for around 99% of banana exports to developed countries, are vulnerable to the fungal disease known as Panama disease. There is a risk of extinction of the variety. Because Cavendish bananas are parthenocarpic (they don’t have seeds and reproduce only through cloning), their resistance to disease is often low. Development of disease resistance depends on mutations occurring in the propagation units, and hence evolves more slowly than in seed-propagated crops. The development of resistant varieties has therefore been the only alternative to protect the fruit trees from tropical and subtropical diseases like bacterial wilt and Fusarium wilt, commonly known as Panama disease. A replacement for the Cavendish would likely depend on genetic engineering, which is banned in some countries. Conventional plant breeding has not yet been able to produce a variety that preserves the flavor and shelf-life of the Cavendish. In 2017 James Dale, a biotechnologist at Queensland University of Technology in Brisbane, Australia produced just such a transgenic banana resistant to Tropical Race 4.
It’s not just bananas, of course. The Guardian article is a lot of fun to read, and I would encourage you to arm yourself with a coffee and spend some time going through it. Corn is another great example!
Cost minimization is a great idea, but as with food, it is the dose that makes the poison. Long time readers will know that I have a bee in my bonnet about this, but I honestly think it is an idea that has been taken too far across multiple dimensions.
Ask what you’re optimizing for, and begin to worry if the answer is a single-minded focus on cost minimization.
Raghuram Rajan got this one right, in a different context: what matters is risk-adjusted returns. And it applies across multiple dimensions, not just finance. Moreover, when you design systems, think of risk across large horizons of time, not just short term optimization.
Be clear about what you’re optimizing for, and realize that cost minimization can only take you so far.
Simple lessons, but oh-so-underrated!
But hey, if you get a chance to try the sukeli out, please do!
Vijay Kelkar and Niranjan Rajadhyaksha had an op-ed out in Livemint recently on the mess in the telecom sector, and their suggestions for (at least partially) resolving it:
It has been about a year since the Supreme Court instructed telecom companies to share not just their core telecom revenues with the government, but also to take into account promotional offers to consumers, income from the sale of assets, bad debts that were written off, and dealer commissions. The apex court has allowed the affected telecom companies to make a small upfront payment and then pay their excess AGR dues to the government in ten annual instalments, from fiscal year 2021-22 to 2030-31, in an attempt to ease their immediate burden, which has raised concerns about the financial stability of Bharti Airtel and Vodafone Idea. Analysts estimate that the extra annual payments by all telecom firms could be around ₹22,000 crore a year.
Their suggestions for the resolution of this problem involve the issuance of zero-coupon bonds by the telecom companies, along with an option for the government to acquire a 10% equity stake. As always, please read the whole thing.
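For readers unfamiliar with the instrument: a zero-coupon bond makes no interim interest payments; it pays a single face value at maturity and trades today at a discount. A minimal sketch, with an assumed face value and discount rate (these are illustrative numbers, not the actual AGR figures):

```python
def zero_coupon_price(face_value, annual_rate, years):
    """Present value today of a single payment of face_value after `years`."""
    return face_value / (1 + annual_rate) ** years

# Suppose a telco owes 22,000 crore payable ten years from now,
# and the government discounts at an assumed 7% a year:
pv = zero_coupon_price(22_000, 0.07, 10)
print(round(pv))  # about 11,184 crore in today's terms
```

Deferring the dues via such a bond eases the telcos’ immediate cash-flow problem, while the government still holds a claim worth the discounted value of what it is owed – which is the spirit of the proposal.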
Now, this may work, this may not work. The more I try to read about this issue, the more pessimistic I get about a workable solution. But we’re not going to get into the issue of finding a “workable” solution today. We’re going to learn about how to think about this issue.
That is, what model/framework should we be using to assess a situation such as this? Kelkar and Rajadhyaksha obviously have a model in mind, and they hint at it in this excerpt:
There are three broad policy concerns that need to be addressed in the context of the telecom sector: consumer welfare, competition and financial stability. Possible tariff hikes to generate extra revenues to meet AGR commitments will hurt consumer access. The inability to charge consumers more could mean that the three-player telecom market becomes a duopoly, through either a firm’s failure or acquisition. The banks that have lent to domestic telecom companies are also worried about their exposure in case AGR dues overwhelm the operating cash flows of these companies.
So a solution is necessary, they say, because we need to have a stable telecom market that doesn’t hurt
a) the consumers,
b) the current players in this sector and
c) the financial sector that has exposure in terms of loans to the telecom sector
To this list I would add the following:
d) the government, which shouldn't get a raw deal (and "raw" is a tricky, contentious and vague word to use here, but we'll go with it for now),
e) potential new entrants, who shouldn't be deterred from entering this space (if and when that happens), and
f) suppliers to the telecom sector, who shouldn't be negatively impacted.
In other words, any solution to the problem must be as fair as possible to all involved parties, shouldn't tilt the status quo too far in any direction, shouldn't hinder the entry of new competition, and should give as fair a deal as possible to consumers.
Students who are familiar with marketing theory are going to roll their eyes at this, but for the blissfully uninitiated, this is the famous Five Forces Analysis.
Porter’s Five Forces Framework is a method for analysing competition of a business. It draws from industrial organization (IO) economics to derive five forces that determine the competitive intensity and, therefore, the attractiveness (or lack thereof) of an industry in terms of its profitability.
Michael Porter’s Five Forces Framework can be traced back to the structure-conduct-performance paradigm, so in a sense, it really is an industrial organization framework:
In economics, industrial organization is a field that builds on the theory of the firm by examining the structure of (and, therefore, the boundaries between) firms and markets. Industrial organization adds real-world complications to the perfectly competitive model, complications such as transaction costs, limited information, and barriers to entry of new firms that may be associated with imperfect competition. It analyzes determinants of firm and market organization and behavior on a continuum between competition and monopoly, including from government actions.
The point is that if you are a student trying to think through this (or any other problem of a similar nature), you should have a model/framework in mind. “If I am going to recommend policy X”, you should be thinking to yourself, “how will that impact Jio? Airtel? Vi? How will that impact government revenues? What signals will I be sending to potential market entrants? Will consumers be better off, and if so, are we saying that they will be better off in the short run, or on a more sustainable basis?”
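That mental checklist can be made literal. Here is a minimal sketch in Python; the stakeholder list and the sample impacts are my own illustrative assumptions, not anyone's actual policy assessment.

```python
# A rough sketch of turning the questions above into an explicit checklist.
STAKEHOLDERS = [
    "incumbents (Jio, Airtel, Vi)",
    "consumers (short run and long run)",
    "potential new entrants",
    "suppliers to the telecom sector",
    "lenders exposed to telecom",
    "the government (revenues and signalling)",
]

def evaluate(policy: str, impacts: dict) -> dict:
    """Map every stakeholder to a stated impact, flagging any left out."""
    return {s: impacts.get(s, "NOT YET CONSIDERED") for s in STAKEHOLDERS}

report = evaluate("allow tariff hikes", {
    "incumbents (Jio, Airtel, Vi)": "better cash flows to meet AGR dues",
    "consumers (short run and long run)": "pay more now; perhaps a stabler market later",
})
for stakeholder, impact in report.items():
    print(f"{stakeholder}: {impact}")
```

The point of the sketch is the default value: any stakeholder you forgot to think about shows up as "NOT YET CONSIDERED", which is exactly the discipline a framework is supposed to impose.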
Now sure, the standard Five Forces diagram doesn't include the government, but the Wikipedia article on the Five Forces does speak about it later, as does the excerpt above from the Wikipedia article on industrial organization. More importantly, the framework gives one the impression that we're dealing with a static problem, with no consideration given to time.
I would urge you to think about time, always, as a student of economics. Whether it be the circular flow of income diagram, or the five forces diagram, remember that your actions will have repercussions on the industry in question not just today, but for some time to come.
So whether you’re the one coming up with a solution, or you’re the one evaluating somebody else’s solution, you should always be evaluating these solutions with some framework in your mind. And tweaking the Five Forces model to suit your requirements is a good place to start!
Alberto Alesina is famous for a variety of reasons, but macroeconomics students of a particular vintage might remember him for advocating austerity in the aftermath of the 2008 crisis (remember when that was the biggest problem our world had seen?). Here is one paper he co-authored during that time.
There are many reasons to be a fan of Alesina's work, as Larry Summers points out in this fine essay written in his honour. I think it is a bit of a stretch to say that he invented the academic field of political economy, or even revived it, but he certainly did more to bring it front and centre than most other economists. In fact, for the last two years, he was my pick for the Nobel Prize, and it would certainly have been a well-deserved honour.
I haven't read all of his books, but I did read (and enjoyed) The Size of Nations, particularly because it helped me think through related aspects of the problem (Geoffrey West and Bob Mundell and their works come to mind – but that is another topic altogether). Here is a short review of that book by David Friedman, if you are interested in learning more.
A Fine Theorem (a blog you should subscribe to anyway) has a post written in his honor (along with O.E. Williamson’s, who also passed away recently) that is worth reading.
I’ll be walking through some of his work with the BSc students at the Institute, in order to familiarize them with it, and will be repeating the exercise in honour of O.E. Williamson on Thursday. This post is to help me get my thoughts in order before the talk – but I figured some of you might also enjoy learning more about Alesina’s work.
My favorite paper written by him is “Distributive Politics and Economic Growth” written with Dani Rodrik. That’ll be the focal point of my talk today – but I will address what little I know of his body of work as well.
Reverse transcription polymerase chain reaction (RT-PCR) is a laboratory technique combining reverse transcription of RNA into DNA (in this context called complementary DNA or cDNA) and amplification of specific DNA targets using polymerase chain reaction (PCR). It is primarily used to measure the amount of a specific RNA. This is achieved by monitoring the amplification reaction using fluorescence, a technique called real-time PCR or quantitative PCR (qPCR). Combined RT-PCR and qPCR are routinely used for analysis of gene expression and quantification of viral RNA in research and clinical settings.
Blah Blooh Bleeh Blah. Right?
Well, this is the test that will tell us whether or not a person has the coronavirus. So listen up!
Coronaviruses, so named because they look like halos (known as coronas) when viewed under the electron microscope, are a large family of RNA viruses. The typical generic coronavirus genome is a single strand of RNA, 32 kilobases long, and is the largest known RNA virus genome. Coronaviruses have the highest known frequency of recombination of any positive-strand RNA virus, promiscuously combining genetic information from different sources when a host is infected with multiple coronaviruses. In other words, these viruses mutate and change at a high rate, which can create havoc for both diagnostic detection as well as therapy (and vaccine) regimens.
But as best as I can tell, detecting the coronavirus is pretty difficult unless its RNA is first turned into DNA, which can be done via a process called reverse transcription.
The first, PCR, or polymerase chain reaction, is a DNA amplification technique that is routinely used in the lab to turn tiny amounts of DNA into large enough quantities that they can be analyzed. Invented in the 1980s by Kary Mullis, the Nobel Prize-winning technique uses cycles of heating and cooling to make millions of copies of a very small amount of DNA. When combined with a fluorescent dye that glows in the presence of DNA, PCR can actually tell scientists how much DNA there is. That’s useful for detecting when a pathogen is present, either circulating in a host’s body or left behind on surfaces.
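The "millions of copies" claim is just repeated doubling: each heat-and-cool cycle roughly doubles the DNA present. A toy calculation makes the point (idealised, assuming perfect doubling every cycle, which real reactions don't quite achieve):

```python
import math

def copies_after(cycles: int, initial_copies: int = 1) -> int:
    """Idealised PCR: the DNA count doubles once per thermal cycle."""
    return initial_copies * 2 ** cycles

def cycles_to_reach(target_copies: int, initial_copies: int = 1) -> int:
    """Cycles needed before the copy count first reaches the target
    (roughly what the 'cycle threshold' in real-time PCR measures)."""
    return math.ceil(math.log2(target_copies / initial_copies))

print(copies_after(30))        # over a billion copies from a single molecule
print(cycles_to_reach(10**6))  # 20 cycles to get past a million copies
```

This exponential growth is why PCR can start from vanishingly small samples: even one viral genome becomes detectable after a few dozen cycles, and the fewer cycles needed to cross the fluorescence threshold, the more virus was in the sample to begin with.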
But if scientists want to detect a virus like SARS-CoV-2, they first have to turn its genome, which is made of single-stranded RNA, into DNA. They do that with a handy enzyme called reverse-transcriptase. Combine the two techniques and you’ve got RT-PCR.
So that, as best as I can tell, is how it works: extract the RNA from the sample, reverse-transcribe it into DNA, and then amplify that DNA with PCR until it is detectable.
That article I linked to from Wired has a more detailed explanation, including more detailed answers about the “how”, if you are interested. Please do read it fully!
Now, which kit to use to extract RNA from a snot sample, which dye to use, which PCR machine to use – all of these and more are variables. Think of it like a recipe – different steps, different ingredients, different cooking methods. Except, because this is so much more important than a recipe, the FDA wags a finger and establishes a protocol.
That protocol doesn’t just tell you the steps, but it also tells you whether you are authorized to run the test at all or not. And that was, uh, problematic.
For consistency’s sake, the FDA opted to limit its initial emergency approval to just the CDC test, to ensure accurate surveillance across state, county, and city health departments. “The testing strategy the government picked was very limited. Even if the tests had worked, they wouldn’t have had that much capacity for a while,” says Joshua Sharfstein, a health policy researcher at Johns Hopkins School of Public Health and the coauthor of a recent journal article on how this testing system has gone awry. “They basically were saying, we’re going to use a test not only developed by CDC, but CDC has to wrap it up and send it to the lab, and it’s just going to be state labs doing it.”
The effect was that the nation’s labs could only run tests using the CDC’s kits. They couldn’t order their own primers and probes, even if they were identical to the ones inside the CDC kits. And when the CDC’s kits turned out to be flawed, there was no plan B.
By the way, if you want a full list of the various protocols that are listed by the WHO, they can be found here.
Another in-demand approach would look for antibodies to the virus in the blood of patients, a so-called serological test. That’d be useful, because in addition to identifying people with Covid-19, it could tell you if someone was once infected but then recovered. “The better your surveillance, the more cases you’re going to catch, but even with perfect surveillance you won’t catch everything,” says Martin Hibberd, an infectious disease researcher at the London School of Hygiene and Tropical Medicine who helped develop one of the first tests for the coronavirus SARS in the early 2000s. “Until we’ve got a full test of this type of assay, we don’t know how many cases we’ve missed.”
A serological test would also probably be cheaper than a PCR-based one, and more suited to automation and high-throughput testing. A researcher in Singapore is testing one now.
Serological assays are of critical importance to determine seroprevalence in a given population, define previous exposure and identify highly reactive human donors for the generation of convalescent serum as a therapeutic. Sensitive and specific identification of SARS-CoV-2 antibody titers will also support screening of health care workers to identify those who are already immune and can be deployed to care for infected patients, minimizing the risk of viral spread to colleagues and other patients.
But whether you use any variant of the RT-PCR or the serological test, given the sheer number of kits required, there is going to be crazy high demand, and a massive supply chain problem.
Along with, what else, politics and bureaucracy.
The Wired article is based on reporting in the US, obviously, but there are important lessons to be learned here for all countries, including India.
Here are some links about where India stands in this regard:
I’ll be updating the blog at a higher frequency for the time being – certainly more than once a day. Also (duh) all posts will be about the coronavirus for the foreseeable future.
If you are receiving these posts by email, and would rather not, please do unsubscribe.