India’s Demographics and the Total Fertility Rate

For many, many years, this was my slide on India’s TFR in lectures I used to give on India’s demographics:

Wikipedia (Old data)

What is TFR? Here’s Wikipedia:

“The total fertility rate (TFR) of a population is the average number of children that would be born to a woman over her lifetime if:

  1. she were to experience the exact current age-specific fertility rates (ASFRs) through her lifetime
  2. she were to live from birth until the end of her reproductive life.”
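
In other words, the TFR is just the age-specific fertility rates summed across a woman’s reproductive years, with each five-year rate counted five times (once per year of age). A minimal sketch with made-up ASFRs, purely for illustration:

```python
# TFR = sum of age-specific fertility rates (ASFRs) across reproductive ages.
# ASFRs here are births per woman per year for 5-year age groups
# (illustrative numbers, not actual survey figures).
asfr = {
    "15-19": 0.015, "20-24": 0.120, "25-29": 0.130,
    "30-34": 0.070, "35-39": 0.040, "40-44": 0.015, "45-49": 0.010,
}
tfr = 5 * sum(asfr.values())  # each group spans 5 years of a woman's life
print(f"TFR = {tfr:.2f} babies per woman")  # 2.00
```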

Hans Rosling had a better, more intuitive term: babies per woman. Here’s an excellent chart from Gapminder, although ever so slightly outdated:

Click here to see the original chart, and please press the play button to see how this changes over time

Here’s the excellent Our World In Data page about the topic, and here’s a lovely visualization of how the TFR has changed for the world and for India over time (please make sure to “play” the animation):

(I hope this renders on your screens the way it is supposed to. If not, my apologies, and please click here instead)

But now we have news: India’s TFR has now slipped below the replacement rate. Here’s Vivek Kaul in Livemint explaining what this means:

The recently released National Family Health Survey (NFHS-5) of 2019-2021 shows why. As per the survey, India’s total fertility rate now stands at 2. It was 3.2 at the turn of the century and 2.2 in 2015-2016, when the last such survey was done. This means that, on average, 100 women had 320 children during their child-bearing years (aged 15-49). It fell to 220 and now stands at 200.
Hence, India’s fertility rate is already lower than the replacement level of 2.1. If, on average, 100 women have 210 children during their childbearing years and this continues over the decades, the population of a country eventually stabilizes. The additional fraction of 0.1 essentially accounts for females who die before reaching child-bearing age.

https://www.livemint.com/opinion/columns/the-women-who-went-missing-in-our-demographic-dividend-11652200177580.html
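
Where does the extra 0.1 come from? Two facts do most of the work: slightly more boys are born than girls (roughly 105 boys per 100 girls), and not every girl survives to childbearing age. A back-of-the-envelope sketch (the survival figure is an assumption for illustration, not a measured one):

```python
# Why replacement-level TFR is roughly 2.1 rather than exactly 2.
boys_per_100_girls = 105                        # typical sex ratio at birth
frac_female = 100 / (100 + boys_per_100_girls)  # ~0.488 of births are girls
survive_to_childbearing = 0.97                  # assumed share of girls reaching 15

# on average, each woman must leave behind one daughter who survives
replacement_tfr = 1 / (frac_female * survive_to_childbearing)
print(f"replacement TFR = {replacement_tfr:.2f}")  # ~2.11
```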

And here’s the breakup by state, updated for the latest results:

By iashris.com – https://indiainpixels.xyz, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=112844699

Of course, as with all averages, so also with this one: you can weave many different stories based on how you slice the data. You can slice it by urban/rural divides, you can slice it by states, you can slice it by level of education, you can slice it by religion – and each of these throws up a different point of view and a different story.

But there are three important things (to me) that are worth noting:

  1. The TFR for India has not just come down over time, but has slipped below the global TFR in recent years.
  2. This doesn’t (yet) mean that India’s population will start to come down right away, for a variety of reasons (the stylized simulation after this list illustrates the mechanism). As Vivek Kaul puts it:
    “So, what does this mean? Will the Indian population start stabilizing immediately? The answer is no. This is primarily because the number of women who will keep entering child-bearing age will grow for a while, simply because of higher fertility rates in the past. Also, with access to better medical facilities, people will live longer. Hence, India’s population will start stabilizing in around three decades.”
  3. The next three to four decades are a period of “never again” high growth opportunity for India, because never again (in all probability) will we have a young, growing population.
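
Point 2 is what demographers call population momentum, and a toy simulation makes the mechanism easy to see: hold babies-per-woman fixed at 2 and a young population still keeps growing for decades, because ever-larger cohorts keep entering the childbearing ages. This is a minimal sketch with made-up numbers (a stylized young age pyramid, no mortality before age 100, half of all births female), not a forecast:

```python
# Population momentum, stylized: even with TFR held at 2, a young
# population keeps growing for decades (illustrative numbers throughout).
import numpy as np

TFR = 2.0                         # babies per woman, held fixed
pop = np.linspace(2.0, 0.1, 100)  # millions in each single-year age cohort:
                                  # a young pyramid, many children, few elderly

for year in range(61):
    women_15_49 = 0.5 * pop[15:50].sum()  # assume half of each cohort is female
    births = women_15_49 * TFR / 35.0     # spread lifetime fertility over ages 15-49
    pop = np.roll(pop, 1)                 # everyone ages one year; the oldest cohort,
    pop[0] = births                       # wrapped to index 0, is replaced by newborns
    if year % 15 == 0:
        print(f"year {year:2d}: population = {pop.sum():.1f} million")
```

Births exceed deaths for a long while simply because the cohorts entering the 15-49 window are larger than the cohorts exiting at the top, which is exactly Kaul’s point.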

Demography is a subject you need to be more familiar with, and if you haven’t already, please begin with Our World in Data’s page on the topic, and especially spend time on the section titled “What explains the change in the number of children women have?”

Maximizing Soul: The Banana Edition

I ate a most delicious snack yesterday, and probably more of it than I should have. A student from Vasai had brought along some fried surmai (which was also outstanding), but in a remarkable turn of events where I am concerned, it was the vegetarian item that won the Kulkarni Tastebud contest yesterday.

The item in question was sukeli, and it looks like this:

https://craftoindia.com/sukeli-sun-dried-banana-of-maharashtra-1kg.html

This is what the website has to say about the product:

Dried bananas are 100% natural Dried Fruit, without any Artificial Flavor, Preservatives or Color. These are Soft, Chewy and full of Sweet Fruity flavor. These are rich in Potassium which regulates the Blood Pressure (BP). It contains Natural Sugar, Soluble Fiber, Minerals, Vitamins and Antioxidants which are essential to maintain good health. Dried Banana is great energy snack favorite of Athletes and Sportsperson. (sic)

https://craftoindia.com/sukeli-sun-dried-banana-of-maharashtra-1kg.html

I wouldn’t know about the nutritional qualities, but I can attest to the sukelis being soft, chewy and full of sweet fruity flavor. Think of it as an intensely addictive cross between banana and jackfruit.

Ah, but which kind of bananas? The Nendran or Rajeli varieties, and no others.


Do you know which is the most popular type of banana grown the world over? It is the Cavendish banana, and we produce a lot – a lot – of Cavendish bananas.

Here’s Vikram Doctor in an old piece from the Economic Times:

To get good bananas in Mumbai you must go to CST station. Not to take a train, but because every evening a few hawkers bring baskets of bananas to sell to the evening commuter crowd. The bananas are grown just outside Mumbai and they are small, fat and full of creamy-sweet flavour. They are usually not quite ripe when sold, and tend to ripen unevenly, but nearly all get sold by end of day. It used to be easier to get good bananas.
Long Moira bananas from Goa, plantains from Mangalore that had to be kept till their skins turned black, several types of small bananas, thick red bananas, humble green Robustas and Rasthalis from Tamil Nadu so fat they were bursting out of their skin.
Today you mostly just find one cheap green banana, one small banana and then the type that now rules the market. It is perfectly shaped, perfectly yellow coloured and a taste that is perfectly boring, but just banana tasting enough to be acceptable.
Fruit vendors call it Golden Banana and push it hard, justifying its higher price by telling you it is meant for export. Which is correct since this is a kind of Cavendish, the banana variety that accounts for 99 per cent of the world market.

https://economictimes.indiatimes.com/blogs/onmyplate/no-getting-away-from-cavendish-in-banana-republic/

You might also want to listen to a podcast (now discontinued, alas, but it was truly excellent while it lasted) about the same topic. It used to be hosted by Vikram himself, and all episodes are worth a listen. It no longer seems to be online, more’s the pity, but perhaps the more intrepid readers might be able to surface a source for all of us? My top three episodes were about coffee, butter and bananas.

But back to bananas: as I said, we produce a lot of Cavendish bananas. Why? Well, economies of scale, in the jargon of my tribe. In English, we were optimizing for cost minimization:

Nature has a simple way to adapt to different climates: genetic diversity.
Even if some plants react poorly to higher temperatures or less rainfall, other varieties can not only survive – but thrive, giving humans more options on what to grow and eat.
But the powerful food industry had other ideas and over the past century, humans have increasingly relied on fewer and fewer crop varieties that can be mass produced and shipped around the world. “The line between abundance and disaster is becoming thinner and thinner and the public is unaware and unconcerned,” writes Dan Saladino in his book Eating to Extinction.

https://www.theguardian.com/food/ng-interactive/2022/apr/14/climate-crisis-food-systems-not-ready-biodiversity

The first two lines of that extract describe exactly what we have moved away from, and that is the problem: mass production of a single type of banana is the agricultural equivalent of putting all your eggs in one basket. And this is a problem for the same reason that putting all your savings into one asset class is a bad idea.
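
If you want the eggs-in-one-basket logic spelled out, here is a toy probability calculation. The failure probability is made up, and real crop failures are not independent, which is rather the point when every plantation grows the same clone:

```python
# Monoculture as an undiversified portfolio (illustrative numbers).
p_fail = 0.10  # chance that a pathogen wipes out any one variety this decade

# with k varieties whose vulnerabilities are independent,
# total wipeout requires all k to fail at once
for k in (1, 2, 5, 10):
    print(f"{k:2d} varieties -> P(total wipeout) = {p_fail ** k:.10f}")
```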

If a pest comes along that is particularly bad for that particular variety, well, we’re in deep doo-doo (and the infographic in the Guardian article is an excellent way to understand how we refuse to learn from history). And well, such a pest is now with us:

…diversity boosts the overall resilience in our food systems against new climate and environmental changes that can ruin crops and drive the emergence of new or more aggressive pathogens. It’s what enabled humans to produce food and thrive at high altitudes and in the desert, but rather than learn from the past, we’ve put all our eggs in a few genetic baskets.
This is why a single pathogen, Panama 4, could wipe out the banana industry as we know it.
It’s been detected in every continent including most recently Latin America, the world’s top banana export region, where entire communities depend on the Cavendish for their livelihoods.
“It’s history repeating itself,” said banana breeder Fernando Garcia-Bastidas.

https://www.theguardian.com/food/ng-interactive/2022/apr/14/climate-crisis-food-systems-not-ready-biodiversity

And when they say “every continent”, they aren’t exaggerating:

Tropical Race 4 (TR4), the virulent strain of fungus Fusarium oxysporum cubense that is threatening banana crop globally with the fusarium wilt disease, has hit the plantations in India, the world’s top producer of the fruit. The devastating disease which surfaced in the Cavendish group of bananas in parts of Bihar is now spreading to Uttar Pradesh, Madhya Pradesh and even Gujarat, and threatening to inflict heavy losses to the country’s ₹50,000-crore banana industry.

https://www.thehindubusinessline.com/economy/agri-business/india-in-a-race-against-wilt-in-cavendish-banana/article23650060.ece

Well, OK, you might say, bring on the guys in the white coats and figure out the solution. Here’s Wikipedia on the topic, and the relevant section doesn’t have a reassuring heading: it’s called “Disease Management”. Personally, I would have felt better if it were more along the lines of disease eradication. But the first line of this section gives one a sinking feeling:

“As fungicides are largely ineffective, there are few options for managing Panama disease”

The Wikipedia article on Cavendish bananas is equally gloomy on the topic, save for a single line of some hope towards the end:

Cavendish bananas, accounting for around 99% of banana exports to developed countries, are vulnerable to the fungal disease known as Panama disease. There is a risk of extinction of the variety. Because Cavendish bananas are parthenocarpic (they don’t have seeds and reproduce only through cloning), their resistance to disease is often low. Development of disease resistance depends on mutations occurring in the propagation units, and hence evolves more slowly than in seed-propagated crops.
The development of resistant varieties has therefore been the only alternative to protect the fruit trees from tropical and subtropical diseases like bacterial wilt and Fusarium wilt, commonly known as Panama disease. A replacement for the Cavendish would likely depend on genetic engineering, which is banned in some countries. Conventional plant breeding has not yet been able to produce a variety that preserves the flavor and shelf-life of the Cavendish. In 2017, James Dale, a biotechnologist at Queensland University of Technology in Brisbane, Australia, produced just such a transgenic banana resistant to Tropical Race 4.

https://en.wikipedia.org/wiki/Cavendish_banana#Diseases

It’s not just bananas, of course. The Guardian article is a lot of fun to read, and I would encourage you to arm yourself with a coffee and spend some time going through it. Corn is another great example!

https://www.theguardian.com/food/ng-interactive/2022/apr/14/climate-crisis-food-systems-not-ready-biodiversity

Cost minimization is a great idea, but as with food, it is the dose that makes the poison. Long-time readers will know that I have a bee in my bonnet about this, but I honestly think it is an idea that has been taken too far across multiple dimensions.


So what is to be done? I have absolutely no expertise in what the agricultural/scientific solutions might be, but as with JIT supply chains that focus on China, office redesign that sucks all the fun out of work and so much else besides, so also with this.

Ask what you’re optimizing for, and begin to worry if the answer is a single-minded focus on cost minimization.

Raghuram Rajan got this one right, in a different context: what matters is risk-adjusted returns. And it applies across multiple dimensions, not just finance. Moreover, when you design systems, think of risk across long horizons of time, not just short-term optimization.
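
One standard way to make “risk-adjusted returns” concrete (a textbook measure, and not necessarily the one Rajan had in mind) is the Sharpe ratio: excess return over the risk-free rate, divided by volatility. A minimal sketch with made-up numbers:

```python
# Sharpe ratio: excess return per unit of volatility (illustrative numbers).
def sharpe(mean_return, volatility, risk_free=0.04):
    return (mean_return - risk_free) / volatility

# a flashy strategy can be worse, risk-adjusted, than a boring one
print(f"high return, high risk : {sharpe(0.15, 0.30):.2f}")  # 0.37
print(f"modest return, low risk: {sharpe(0.08, 0.08):.2f}")  # 0.50
```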

Be clear about what you’re optimizing for, and realize that cost minimization can only take you so far.

Simple lessons, but oh-so-underrated!

But hey, if you get a chance to try the sukeli out, please do!

The Vajpayee Moment in Telecom, IO and Porter’s Five Forces

Vijay Kelkar and Niranjan Rajadhakshya had an op-ed out in Livemint recently on the mess in the telecom sector, and their suggestions for (at least partially) resolving it:

It has been about a year since the Supreme Court instructed telecom companies to share not just their core telecom revenues with the government, but also to take into account promotional offers to consumers, income from the sale of assets, bad debts that were written off, and dealer commissions. The apex court has allowed the affected telecom companies to make a small upfront payment and then pay their excess AGR dues to the government in ten annual instalments, from fiscal year 2021-22 to 2030-31, in an attempt to ease their immediate burden, which has raised concerns about the financial stability of Bharti Airtel and Vodafone Idea. Analysts estimate that the extra annual payments by all telecom firms could be around ₹22,000 crore a year.

https://www.livemint.com/opinion/online-views/a-new-vajpayee-moment-for-the-troubled-indian-telecom-sector-11631123688457.html

Their suggestions for the resolution of this problem involve the issuance of zero-coupon bonds by the telecom companies, along with an option for the government to acquire a 10% equity stake. As always, please read the whole thing.
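
For readers unfamiliar with the instrument: a zero-coupon bond makes no interim payments at all; the borrower raises a discounted price today and repays the full face value only at maturity, which is exactly why it eases a cash-strapped telco’s immediate burden. A toy pricing sketch (the numbers are illustrative, not the actual terms of the proposal):

```python
# A zero-coupon bond is priced by discounting its face value back to today.
def zcb_price(face_value, rate, years):
    return face_value / (1 + rate) ** years

face = 22_000  # say, one year's worth of AGR dues, in Rs. crore
for years in (1, 5, 10):
    price = zcb_price(face, 0.07, years)  # assumed 7% discount rate
    print(f"{years:2d}-year zero: raise Rs. {price:,.0f} crore today, repay {face:,} at maturity")
```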


Now, this may work, this may not work. The more I try to read about this issue, the more pessimistic I get about a workable solution. But we’re not going to get into the issue of finding a “workable” solution today. We’re going to learn about how to think about this issue.

That is, what model/framework should we be using to assess a situation such as this? Kelkar and Rajadhakshya obviously have a model in mind, and they hint at it in this excerpt:

There are three broad policy concerns that need to be addressed in the context of the telecom sector: consumer welfare, competition and financial stability. Possible tariff hikes to generate extra revenues to meet AGR commitments will hurt consumer access. The inability to charge consumers more could mean that the three-player telecom market becomes a duopoly, through either a firm’s failure or acquisition. The banks that have lent to domestic telecom companies are also worried about their exposure in case AGR dues overwhelm the operating cash flows of these companies.

https://www.livemint.com/opinion/online-views/a-new-vajpayee-moment-for-the-troubled-indian-telecom-sector-11631123688457.html

So a solution is necessary, they say, because we need to have a stable telecom market that doesn’t hurt

a) the consumers,

b) the current players in this sector and

c) the financial sector that has exposure in terms of loans to the telecom sector

To this list I would add the following:

d) make sure the government doesn’t get a raw deal (and raw is a tricky, contentious and vague word to use here, but we’ll go with it for now)

e) make sure new entrants aren’t deterred from entering this space (if and when that will happen)

f) suppliers to the telecom sector shouldn’t be negatively impacted

In other words, any solution to the problem must be as fair as possible to all involved parties, shouldn’t shift the status quo too drastically in any direction, shouldn’t hinder the entry of new competition, and should give as fair a deal as possible to consumers.


Take a look at this diagram:

https://en.wikipedia.org/wiki/Porter%27s_five_forces_analysis#/media/File:Elements_of_Industry_Structure.svg (Credit: Denis Fadeev)

Students who are familiar with marketing theory are going to roll their eyes at this, but for the blissfully uninitiated, this is the famous Five Forces Analysis.

Porter’s Five Forces Framework is a method for analysing competition of a business. It draws from industrial organization (IO) economics to derive five forces that determine the competitive intensity and, therefore, the attractiveness (or lack thereof) of an industry in terms of its profitability.

https://en.wikipedia.org/wiki/Porter%27s_five_forces_analysis

Michael Porter’s Five Forces Framework can be traced back to the structure-conduct-performance paradigm, so in a sense, it really is an industrial organization framework:

In economics, industrial organization is a field that builds on the theory of the firm by examining the structure of (and, therefore, the boundaries between) firms and markets. Industrial organization adds real-world complications to the perfectly competitive model, complications such as transaction costs, limited information, and barriers to entry of new firms that may be associated with imperfect competition. It analyzes determinants of firm and market organization and behavior on a continuum between competition and monopoly, including from government actions.

https://en.wikipedia.org/wiki/Industrial_organization

The point is that if you are a student trying to think through this (or any other problem of a similar nature), you should have a model/framework in mind. “If I am going to recommend policy X”, you should be thinking to yourself, “how will that impact Jio? Airtel? Vi? How will that impact government revenues? What signals will I be sending to potential market entrants? Will consumers be better off, and if so, are we saying that they will be better off in the short run, or on a more sustainable basis?”
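
If it helps to make this concrete, here is the same set of questions arranged as an explicit five-forces checklist. The mapping of the telecom questions onto Porter’s buckets is my own, for illustration:

```python
# Porter's five forces as a policy-evaluation checklist
# (the telecom-specific prompts are my own mapping, for illustration).
FIVE_FORCES = {
    "industry rivalry":              "Does policy X push the Jio/Airtel/Vi market toward a duopoly?",
    "threat of new entrants":        "What signal does X send to potential entrants?",
    "bargaining power of buyers":    "Are consumers better off in the short run, and sustainably?",
    "bargaining power of suppliers": "Are vendors to, and lenders of, the telcos made worse off?",
    "threat of substitutes":         "Does X change the appeal of alternatives to the incumbents' services?",
}

def evaluate(policy: str) -> None:
    print(f"Evaluating: {policy}")
    for force, question in FIVE_FORCES.items():
        print(f"  [{force}] {question}")

evaluate("zero-coupon bonds plus a 10% government equity option")
```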

Now sure, the diagram doesn’t include government, but the Wikipedia article on the Five Forces does speak about it later, as does the excerpt above from the Wikipedia article on Industrial Organization. More importantly, this framework gives one the impression that we’re dealing with a static problem, with no consideration given to time.

I would urge you to think about time, always, as a student of economics. Whether it be the circular flow of income diagram, or the five forces diagram, remember that your actions will have repercussions on the industry in question not just today, but for some time to come.


So whether you’re the one coming up with a solution, or you’re the one evaluating somebody else’s solution, you should always be evaluating these solutions with some framework in your mind. And tweaking the Five Forces model to suit your requirements is a good place to start!

Alberto Alesina: In Memoriam

Alberto Alesina passed away a couple of days ago, while on a hike with his wife. This is his Wikipedia page, while here is his Harvard faculty page.

He is famous for a variety of reasons, but macroeconomics students of a particular vintage might remember him for advocating austerity in the aftermath of the 2008 crisis (remember when that was the biggest problem our world had seen?). Here is one paper he co-authored during that time.

There are many reasons to be a fan of Alesina’s work, as Larry Summers points out in this fine essay written in his honour. I think it a bit of a stretch to say that he invented the academic field of political economy, or even revived it, but he certainly did more to bring it front and centre than most other economists. In fact, for the last two years, he was my pick for the Nobel Prize, and it would certainly have been a well-deserved honour.

I haven’t read all the books he wrote, but I did read (and enjoyed) The Size of Nations, particularly because it helped me think through related aspects of the problem (Geoffrey West and Bob Mundell and their works come to mind – but that is another topic altogether). Here is a short review of that book by David Friedman, if you are interested in learning more.

A Fine Theorem (a blog you should subscribe to anyway) has a post written in his honor (along with O.E. Williamson’s, who also passed away recently) that is worth reading.

I’ll be walking through some of his work with the BSc students at the Institute, in order to familiarize them with it, and will be repeating the exercise in honour of O.E. Williamson on Thursday. This post is to help me get my thoughts in order before the talk – but I figured some of you might also enjoy learning more about Alesina’s work.

My favorite paper of his is “Distributive Politics and Economic Growth”, written with Dani Rodrik. That’ll be the focal point of my talk today – but I will address what little I know of his body of work as well.

DNA, RNA, RT-PCR, Testing Methods, Supply Chains… and Politics

What is Reverse Transcription Polymerase Chain Reaction?

Reverse transcription polymerase chain reaction (RT-PCR) is a laboratory technique combining reverse transcription of RNA into DNA (in this context called complementary DNA or cDNA) and amplification of specific DNA targets using polymerase chain reaction (PCR). It is primarily used to measure the amount of a specific RNA. This is achieved by monitoring the amplification reaction using fluorescence, a technique called real-time PCR or quantitative PCR (qPCR). Combined RT-PCR and qPCR are routinely used for analysis of gene expression and quantification of viral RNA in research and clinical settings.

Blah Blooh Bleeh Blah. Right?

Well, this is the test that will tell us if a person has the coronavirus or not. So listen up!

The coronavirus’s genetic material comes in the form of RNA:

Coronaviruses, so named because they look like halos (known as coronas) when viewed under the electron microscope, are a large family of RNA viruses. The typical generic coronavirus genome is a single strand of RNA, 32 kilobases long, and is the largest known RNA virus genome. Coronaviruses have the highest known frequency of recombination of any positive-strand RNA virus, promiscuously combining genetic information from different sources when a host is infected with multiple coronaviruses. In other words, these viruses mutate and change at a high rate, which can create havoc for both diagnostic detection as well as therapy (and vaccine) regimens.

But as best as I can tell, detecting the coronavirus is pretty difficult unless its RNA is first turned into DNA, which can be done through a process called reverse transcription.

The newly formed DNA is then replicated – made to reproduce a lot, basically. That’s where PCR comes in. And with that (and a fluorescent dye that is added to make detection easier), you have a sample that you can check for the presence of the coronavirus.

The first, PCR, or polymerase chain reaction, is a DNA amplification technique that is routinely used in the lab to turn tiny amounts of DNA into large enough quantities that they can be analyzed. Invented in the 1980s by Kary Mullis, the Nobel Prize-winning technique uses cycles of heating and cooling to make millions of copies of a very small amount of DNA. When combined with a fluorescent dye that glows in the presence of DNA, PCR can actually tell scientists how much DNA there is. That’s useful for detecting when a pathogen is present, either circulating in a host’s body or left behind on surfaces.

But if scientists want to detect a virus like SARS-CoV-2, they first have to turn its genome, which is made of single-stranded RNA, into DNA. They do that with a handy enzyme called reverse-transcriptase. Combine the two techniques and you’ve got RT-PCR.

So, here’s how it works, as best as I can tell:

Coronavirus detection steps:

  1. Take a swab sample from the patient and extract the RNA from it.
  2. Reverse-transcribe that RNA into DNA (cDNA), using the reverse-transcriptase enzyme.
  3. Amplify the DNA into millions of copies through PCR’s cycles of heating and cooling.
  4. Add the fluorescent dye and check whether the sample glows, indicating the presence of the virus’s genetic material.

That article I linked to from Wired has a more detailed explanation, including more detailed answers about the “how”, if you are interested. Please do read it fully!
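
The arithmetic behind “millions of copies” is just repeated doubling, and it is also how the quantification works: the fewer copies a sample starts with, the more heating-and-cooling cycles it takes for the fluorescence to cross the detection threshold. A stylized sketch, not a lab protocol:

```python
# Stylized qPCR arithmetic: copies roughly double every thermal cycle, and
# the cycle at which fluorescence becomes detectable (the "Ct value")
# reveals how much genetic material the sample started with.
def cycles_to_detect(starting_copies, threshold=1e9):
    copies, cycle = starting_copies, 0
    while copies < threshold:
        copies *= 2   # one heat-and-cool amplification cycle
        cycle += 1
    return cycle

for start in (10, 1_000, 100_000):
    print(f"{start:>7} starting copies -> detectable at cycle {cycles_to_detect(start)}")
```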

Now, which kit to use to extract RNA from a snot sample, which dye to use, which PCR machine to use – all of these and more are variables. Think of it like a recipe – different steps, different ingredients, different cooking methods. Except, because this is so much more important than a recipe, the FDA wags a finger and establishes protocol.

That protocol doesn’t just tell you the steps, but it also tells you whether you are authorized to run the test at all or not. And that was, uh, problematic.

For consistency’s sake, the FDA opted to limit its initial emergency approval to just the CDC test, to ensure accurate surveillance across state, county, and city health departments. “The testing strategy the government picked was very limited. Even if the tests had worked, they wouldn’t have had that much capacity for a while,” says Joshua Sharfstein, a health policy researcher at Johns Hopkins School of Public Health and the coauthor of a recent journal article on how this testing system has gone awry. “They basically were saying, we’re going to use a test not only developed by CDC, but CDC has to wrap it up and send it to the lab, and it’s just going to be state labs doing it.”

The effect was that the nation’s labs could only run tests using the CDC’s kits. They couldn’t order their own primers and probes, even if they were identical to the ones inside the CDC kits. And when the CDC’s kits turned out to be flawed, there was no plan B.

By the way, if you want a full list of the various protocols that are listed by the WHO, they can be found here.

Back to the Wired article:

Another in-demand approach would look for antibodies to the virus in the blood of patients, a so-called serological test. That’d be useful, because in addition to identifying people with Covid-19, it could tell you if someone was once infected but then recovered. “The better your surveillance, the more cases you’re going to catch, but even with perfect surveillance you won’t catch everything,” says Martin Hibberd, an infectious disease researcher at the London School of Hygiene and Tropical Medicine who helped develop one of the first tests for the coronavirus SARS in the early 2000s. “Until we’ve got a full test of this type of assay, we don’t know how many cases we’ve missed.”

A serological test would also probably be cheaper than a PCR-based one, and more suited to automation and high-throughput testing. A researcher in Singapore is testing one now.

Here’s an early paper on the topic, if you are interested.

Serological assays are of critical importance to determine seroprevalence in a given population, define previous exposure and identify highly reactive human donors for the generation of convalescent serum as therapeutic. Sensitive and specific identification of Coronavirus SARS-Cov-2 antibody titers will also support screening of health care workers to identify those who are already immune and can be deployed to care for infected patients minimizing the risk of viral spread to colleagues and other patients.

As far as I can tell, this method has not been deployed at all thus far, and that applies to India as well. Here’s a Wikipedia article about the different methods of detecting Covid-19 – it covers more than that, but the first section is what applies here. Here’s an article from Science about a potential breakthrough.

But whether you use any variant of the RT-PCR or the serological test, given the sheer number of kits required, there is going to be crazy high demand, and a massive supply chain problem.

Along with, what else, politics, and bureaucracy:

 


The Wired article is based on reporting in the US, obviously, but there are important lessons to be learned here for all countries, including India.

Here are some links about where India stands in this regard:

 

I’ll be updating the blog at a higher frequency for the time being – certainly more than once a day. Also (duh) all posts will be about the coronavirus for the foreseeable future.

If you are receiving these posts by email, and would rather not, please do unsubscribe.

Thanks for reading!

 

Understanding Afghanistan A Little Bit Better

“Here is a game called buzkashi that is played only in Afghanistan and the central Asian steppe. It involves men on horseback competing to snatch a goat carcass off the ground and carry it to each of two designated posts while the other players, riding alongside at full gallop, fight to wrest the goat carcass away. The men play as individuals, each for his own glory. There are no teams. There is no set number of players. The distance between the posts is arbitrary. The field of play has no boundaries or chalk marks. No referee rides alongside to whistle plays dead and none is needed, for there are no fouls. The game is governed and regulated by its own traditions, by the social context and its customs, and by the implicit understandings among the players. If you need the protection of an official rule book, you shouldn’t be playing. Two hundred years ago, buzkashi offered an apt metaphor for Afghan society. The major theme of the country’s history since then has been a contention about whether and how to impose rules on the buzkashi of Afghan society.”

That is an excerpt from an excerpt – the book is called Games Without Rules, and the author, Tamim Ansary, has written a very readable book indeed about the last two centuries or so of Afghanistan’s history.

It has customs, and it has traditions, but it doesn’t have rules, and good luck trying to impose them. The British tried (thrice), as did the Russians and now the Americans, but Afghanistan has gotten the better of all of them.

Let’s begin with the Russians: why did they invade?


 

One day in October 1979, an American diplomat named Archer K. Blood arrived at Afghanistan’s government headquarters, summoned by the new president, whose ousted predecessor had just been smothered to death with a pillow.

While the Kabul government was a client of the Soviet Union, the new president, Hafizullah Amin, had something else in mind. “I think he wants an improvement in U.S.-Afghan relations,” Mr. Blood wrote in a cable back to Washington. It was possible, he added, that Mr. Amin wanted “a long-range hedge against over-dependence on the Soviet Union.”

Peter Baker in the NYT speaks of recently made available archival history, which essentially reconfirms what seems to have been the popular view all along: the USSR could not afford to let Afghanistan slip away from the Communist world, no matter the cost. And as Prisoners of Geography makes clear, and the NYT article mentions, there was always the tantalizing dream of accessing the Indian Ocean.

By the way, somebody should dig deeper into Archer K. Blood, and maybe write a book about him. There’s one already, but that’s a story for another day.


Well, if the USSR invaded, the USA had to be around, and of course it was:

The supplying of billions of dollars in arms to the Afghan mujahideen militants was one of the CIA’s longest and most expensive covert operations. The CIA provided assistance to the fundamentalist insurgents through the Pakistani secret services, Inter-Services Intelligence (ISI), in a program called Operation Cyclone. At least 3 billion in U.S. dollars were funneled into the country to train and equip troops with weapons. Together with similar programs by Saudi Arabia, Britain’s MI6 and SAS, Egypt, Iran, and the People’s Republic of China, the arms included FIM-43 Redeye, shoulder-fired, antiaircraft weapons that they used against Soviet helicopters. Pakistan’s secret service, Inter-Services Intelligence (ISI), was used as an intermediary for most of these activities to disguise the sources of support for the resistance.

But if you are interested in the how, rather than the what – and if you are interested in public choice – then do read this review, and do watch the movie. Charlie Wilson’s War is a great, great yarn.

 


 

AP Photo, sourced from the Atlantic photo essay credited below.

Powerful photographs that hint at what the chaos of those nine years must have been like, from the Atlantic.

 


 

And finally, from the Guardian comes an article that seeks to give a different take on “ten myths” about Afghanistan, including the glorification of Charlie Wilson:

 

This myth of the 1980s was given new life by George Crile’s 2003 book Charlie Wilson’s War and the 2007 film of the same name, starring Tom Hanks as the loud-mouthed congressman from Texas. Both book and movie claim that Wilson turned the tide of the war by persuading Ronald Reagan to supply the mujahideen with shoulder-fired missiles that could shoot down helicopters. The Stingers certainly forced a shift in Soviet tactics. Helicopter crews switched their operations to night raids since the mujahideen had no night-vision equipment. Pilots made bombing runs at greater height, thereby diminishing the accuracy of the attacks, but the rate of Soviet and Afghan aircraft losses did not change significantly from what it was in the first six years of the war.

Afghanistan Today

After Poland and Germany, let’s pick an Asian country to understand better for the month of March. And given the recent deal that has been signed, about which more below, let’s begin with Afghanistan.

As always, begin with the basics. The gift that is Wikipedia, on Afghanistan:

“Afghanistan is a unitary presidential Islamic republic. The country has high levels of terrorism, poverty, child malnutrition, and corruption. It is a member of the United Nations, the Organisation of Islamic Cooperation, the Group of 77, the Economic Cooperation Organization, and the Non-Aligned Movement. Afghanistan’s economy is the world’s 96th largest, with a gross domestic product (GDP) of $72.9 billion by purchasing power parity; the country fares much worse in terms of per-capita GDP (PPP), ranking 169th out of 186 countries as of 2018.”

And from the same article…

The country has three rail links: one, a 75-kilometer (47 mi) line from Mazar-i-Sharif to the Uzbekistan border; a 10-kilometer (6.2 mi) long line from Toraghundi to the Turkmenistan border (where it continues as part of Turkmen Railways); and a short link from Aqina across the Turkmen border to Kerki, which is planned to be extended further across Afghanistan. These lines are used for freight only and there is no passenger service.

Now, as opposed to how I structured the essays on Poland and Germany, I intend to begin with the now and work my way backwards. This is primarily because of what Afghanistan is in the news for:

The joint declaration is a symbolic commitment to the Afghanistan government that the US is not abandoning it. The Taliban have got what they wanted: troops withdrawal, removal of sanctions, release of prisoners. This has also strengthened Pakistan, Taliban’s benefactor, and the Pakistan Army and the ISI’s influence appears to be on the rise. It has made it unambiguous that it wants an Islamic regime.

The Afghan government has been completely sidelined during the talks between the US and Taliban. The future for the people of Afghanistan is uncertain, and will depend on how Taliban honours its commitments and whether it goes back to the mediaeval practices of its 1996-2001 regime.

Doesn’t bode well for India, obviously, but doesn’t bode well for the United States of America either, says Pranay Kotasthane.

And the New York Times says a complete withdrawal of troops, even over the period currently specified, may not be a great idea. Ongoing support is, according to that newspaper, necessary:

More important than troops, potentially, is the willingness of the international community to continue to finance the Afghan government after a peace deal.

“The real key to whether Afghanistan avoids falling into an even longer civil war is the degree to which the United States and NATO are willing to fund and train the Afghan security forces over the long term,” Mr. Stavridis said. “When Vietnam collapsed and the helicopters were lifting off the roof of the U.S. Embassy, it was the result of funding being stopped.”

But it’s not just military funding! Afghanistan needs a lot of the world’s support in the years to come. Water, for example, will be a contentious issue, and that’s putting it mildly.

Afghanistan doesn’t face a water shortage – it’s unable to get water to where it’s needed. The nation loses about two thirds of its water to Iran, Pakistan, Turkmenistan, and other neighbors because it doesn’t harness its rivers. The government estimates that more than $2 billion is needed to rehabilitate the country’s most important irrigation systems.

And water, of course, is just one of many issues. Health, education, reforming agriculture, roads – it’s an endless list, and it will need all kinds of ongoing and sustained help.

So, amid all of this, what should India be doing?

Meanwhile, India’s interests in Afghanistan haven’t changed. India hopes to build up Afghanistan’s state capacity so that Pakistan’s desires of extending control can be thwarted. Given this core interest in a changed political situation, what’s needed in the long-term in the security domain is to build the strength of the Afghan National Defense and Security Forces (ANDSF). Without a strong ANDSF — which comprises the army, police, air force, and special security forces — peace and stability in Afghanistan will remain elusive. India’s aim should be to help the Islamic Republic of Afghanistan and ANDSF claim monopoly over the legitimate use of physical force.

But, the article presciently warns us of the same what/how problem we first encountered in studying the Indian budget:

In short, the budget might itself not be the biggest issue. The US has pumped nearly $3.6bn on average every year for the last 19 years solely on reconstruction of the ANDSF, a support that is likely to continue even if the US withdraws its soldiers. The bigger problems are insufficient processes to plan and execute budgets resulting in unused funds and lack of infrastructure leading to pay shortfalls.

Now, to unpack all of this, we need to study the following: the Soviet invasion and its aftermath, American involvement in the region, and the rise of the Taliban, leading up to Operation Enduring Freedom in 2001. That’s next Wednesday!

How do you interact with your computer?

“Alexa, play Hush, by Deep Purple.”

That’s my daughter, all of six years old. Leave aside for the moment the pride that I feel as a father and a fan of classic rock.

My daughter is coding.


My dad was in Telco for many years, which was what Tata Motors used to call itself back in the day. I do not remember the exact year, but he often regales us with stories about how Tata Motors procured its first computer. Programming it was not child’s play – in fact, interacting with it required the use of punch cards.

I do not know if it was the same type of computer, but watching this video gives us a clue about how computers of this sort worked.


The guy in the video, the computer programmer in Telco and my daughter are all doing the same thing: programming.

What is programming?

Here’s Wikiversity:

Programming is the art and science of translating a set of ideas into a program – a list of instructions a computer can follow. The person writing a program is known as a programmer (also a coder).

Go back to the very first sentence in this essay, and think about what it means. My daughter is instructing a computer called Alexa to play a specific song, by a specific artist. To me, that is a list of instructions a computer can follow.
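
Written out the way a programmer would once have had to spell it, that one sentence might look something like this. The music library and function here are toys, purely for illustration, not the actual Alexa machinery:

```python
# "Alexa, play Hush, by Deep Purple" unpacked into explicit instructions
# (toy library and made-up function, for illustration only).
library = {("Deep Purple", "Hush"): "hush.mp3"}

def play(artist: str, title: str) -> None:
    track = library.get((artist, title))
    if track is None:
        print(f"Sorry, I can't find {title} by {artist}.")
    else:
        print(f"Now playing {track}...")

play("Deep Purple", "Hush")
```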

From using punch cards to using our voice and not even realizing that we’re programming: we’ve come a long, long way.


It’s one thing to be awed at how far we’ve come; it is quite another to think about the path we’ve taken to get there. When we learnt about mainframes, about Apple, about Microsoft and about laptops, we learnt about the evolution of computers, and some of the firms that helped us get there. I have not yet written about Google (we’ll get to it), but there’s another way to think about the evolution of computers: how we interact with them.

Here’s an extensive excerpt from Wikipedia:

In the 1960s, Douglas Engelbart’s Augmentation of Human Intellect project at the Augmentation Research Center at SRI International in Menlo Park, California developed the oN-Line System (NLS). This computer incorporated a mouse-driven cursor and multiple windows used to work on hypertext. Engelbart had been inspired, in part, by the memex desk-based information machine suggested by Vannevar Bush in 1945.

Much of the early research was based on how young children learn. So, the design was based on the childlike primitives of eye-hand coordination, rather than use of command languages, user-defined macro procedures, or automated transformations of data as later used by adult professionals.

Engelbart’s work directly led to the advances at Xerox PARC. Several people went from SRI to Xerox PARC in the early 1970s. In 1973, Xerox PARC developed the Alto personal computer. It had a bitmapped screen, and was the first computer to demonstrate the desktop metaphor and graphical user interface (GUI). It was not a commercial product, but several thousand units were built and were heavily used at PARC, as well as other XEROX offices, and at several universities for many years. The Alto greatly influenced the design of personal computers during the late 1970s and early 1980s, notably the Three Rivers PERQ, the Apple Lisa and Macintosh, and the first Sun workstations.

The GUI was first developed at Xerox PARC by Alan Kay, Larry Tesler, Dan Ingalls, David Smith, Clarence Ellis and a number of other researchers. It used windows, icons, and menus (including the first fixed drop-down menu) to support commands such as opening files, deleting files, moving files, etc. In 1974, work began at PARC on Gypsy, the first bitmap What-You-See-Is-What-You-Get (WYSIWYG) cut & paste editor. In 1975, Xerox engineers demonstrated a Graphical User Interface “including icons and the first use of pop-up menus”.

In 1981 Xerox introduced a pioneering product, Star, a workstation incorporating many of PARC’s innovations. Although not commercially successful, Star greatly influenced future developments, for example at Apple, Microsoft and Sun Microsystems.

If you feel like diving deeper into this topic and learning more about it, Daring Fireball has a lot of material about Alan Kay, briefly mentioned above.

So, as the Wikipedia article mentions, we moved away from punch cards and, using hand-eye coordination, entered the WIMP (windows, icons, menus, pointer) era.

It took a genius to move humanity into the next phase of machine-human interaction.


The main tweet shown above is Steven Sinofsky rhapsodizing about how Steve Jobs and his firm were able to move away from the WIMP mode of thinking to using our fingers.

And from there, it didn’t take long to move to using just our voice as a means of interacting with the computers we now have all around us.

Voice operated computing systems:

That leaves the business model, and this is perhaps Amazon’s biggest advantage of all: Google doesn’t really have one for voice, and Apple is for now paying an iPhone and Apple Watch strategy tax; should it build a Siri-device in the future it will likely include a healthy significant profit margin.

Amazon, meanwhile, doesn’t need to make a dime on Alexa, at least not directly: the vast majority of purchases are initiated at home; today that may mean creating a shopping list, but in the future it will mean ordering things for delivery, and for Prime customers the future is already here. Alexa just makes it that much easier, furthering Amazon’s goal of being the logistics provider — and tax collector — for basically everyone and everything.


Punch cards to WIMP, WIMP to fingers, and fingers to voice. As that last article makes clear, one needs to think not just of the evolution, but also about how business models have changed over time, and have caused input methods to change – but also how input methods have changed, and caused business models to change.

In other words, understanding technology is as much about understanding economics, and strategy, as it is about understanding technology itself.

In the next Tuesday essay, we’ll take a look at Google in greater detail, and then at emergent business models in the tech space.

 

On the history of laptops

How and why did we move away from desktop computers towards laptops? Although this next question isn’t the focus of today’s links, it is worth asking in this context: has the tendency to miniaturize accelerated over time? Mainframes to desktops, desktops to laptops, and then netbooks, phones, tablets to wearables – and maybe, in the near future, implants?

(Note to self: it might be worth thinking through how attention spans have also been miniaturized over the same period, and the cultural causes and effects of this phenomenon.)

But for us to be able to answer these questions, we first need to lay the groundwork in terms of understanding how we moved away from mainframes to laptops.

The first portable computer was the IBM 5100, released in September 1975. It weighed 55 pounds, which was much lighter and more portable than any other computer to date. While not truly a laptop by today’s standards, it paved the way for the development of truly portable computers, i.e. laptops.

The first laptop weighed near enough 25 kilograms. Insert large-eyed emoji here.

Though the Compass wasn’t the first portable computer, it was the first one with the familiar design we see everywhere now. You might call it the first modern laptop.

The Compass looked quite different than the laptops of 2016 though. It was wildly chunky, heavy and expensive at $8,150. Adjusted for inflation, that’s over $20,000 by today’s standards. It also extended far outward behind the display to help with heating issues and to house the computing components.

As with the rest of this Tuesday series, read it as much for the photographs as for the text.

The portable micro computer the “Portal” of the French company R2E Micral CCMC officially appeared in September 1980 at the Sicob show in Paris. The Portal was a portable microcomputer designed and marketed by the studies and developments department of the French firm R2E Micral in 1980 at the request of the company CCMC specializing in payroll and accounting. It was based on an Intel 8085 processor, 8-bit, clocked at 2 MHz. It was equipped with a central 64K byte RAM, a keyboard with 58 alphanumeric keys and 11 numeric keys (in separate blocks), a 32-character screen, a floppy disk (capacity – 140,000 characters), a thermal printer (speed – 28 characters/second), an asynchronous channel, a synchronous channel, and a 220-volt power supply. Designed for an operating temperature of 15–35 °C, it weighed 12 kg and its dimensions were 45 × 45 × 15 cm. It ran the Prologue operating system and provided total mobility.

The Wikipedia article on the history of laptops is full of interesting snippets, including the excerpt above. In fact, interesting enough to open up a related article about the history of the Intel 80386, from which the excerpt below:

Early in production, Intel discovered a marginal circuit that could cause a system to return incorrect results from 32-bit multiply operations. Not all of the processors already manufactured were affected, so Intel tested its inventory. Processors that were found to be bug-free were marked with a double sigma (ΣΣ), and affected processors were marked “16 BIT S/W ONLY”. These latter processors were sold as good parts, since at the time 32-bit capability was not relevant for most users. Such chips are now extremely rare and became collectible.

Every now and then, there are entirely unexpected, but immensely joyful, payoffs to the task of putting together these sets of links. I started off reading about the evolution of laptops, and wanted to post a link about the development of LCD screens, without which laptops simply wouldn’t be laptops. And I ended up reading about, I kid you not, carrots.

Yes, carrots.

Liquid crystals were accidentally discovered in 1888 by Austrian botanist Friedrich Reinitzer while he studied cholesteryl benzoate of carrots. Reinitzer observed that when he heated cholesteryl benzoate it had two melting points. Initially, at 294°F (145°C), it melted and turned into a cloudy fluid. When it reached 353°F (179°C), it changed again, but this time into a clear liquid. He also observed two other characteristics of the substance; it reflected polarized light and could also rotate the polarization direction of light.

Surprised by his findings, Reinitzer sought help from German physicist Otto Lehmann. When Lehmann studied the cloudy fluid under a microscope, he saw crystallites. He noted that the cloudy phase flowed like a liquid, but that there were other characteristics, such as a rod-like molecular structure that was somewhat ordered, that convinced Lehmann that the substance was a solid. Lehmann continued to study cholesteryl benzoate and other related materials. He concluded the cloudy fluid represented a newly discovered phase of matter and called it liquid crystal.

On The State of Higher Education in India (#1 of n)

Quite unexpectedly, I have ended up writing what will be an ongoing series about discovering more about the Indian Constitution. It began because I wanted to answer for myself questions about how the Indian Constitution came to be, and reading more about it has become a rather engaging rabbit hole.

Increasingly, it looks as if Mondays (which is when I write about India here) will now alternate between essays on the Indian Constitution and the topic of today’s essay: the state of (higher) education in India.

The series about the Constitution is serendipity; the series about education is an overwhelming passion.

I’ve been teaching at post-graduate institutions for the past decade now, and higher education in India is problematic on many, many counts. I’ll get into all of them in painstaking detail in the weeks to come; today is just about five articles you might want to read to give yourself an overview of where we are.

In the last 30 years, higher education in India has witnessed rapid and impressive growth. The increase in the number of institutions is, however, disproportionate to the quality of education that is being dispersed.

That is from the “Challenges” section of the Wikipedia article on higher education in India. The section highlights financing, enrollment, accreditation and politics as major challenges. To which I will add (and elaborate upon in the weeks to come): signaling, pedagogy, evaluation, overemphasis on classroom teaching, the return on investment (time and money both), relevance, linkages to the real world, out-of-date syllabi, and finally, under-emphasis on critical thinking and writing.

“Educational attainment in present-day India is also not directly correlated to employment prospects—a fact that raises doubts about the quality and relevance of Indian education. Although estimates vary, there is little doubt that unemployment is high among university graduates—Indian authorities noted in 2017 that 60 percent of engineering graduates remain unemployed, while a 2013 study of 60,000 university graduates in different disciplines found that 47 percent of them were unemployable in any skilled occupation. India’s overall youth unemployment rate, meanwhile, has remained stuck above 10 percent for the past decade.”

That is from an excellent summary of higher education in India. It is a very, very long read, but I have not been able to find a better in-one-place summary of education in India.

A series of charts detailing some statistics about higher education in India, by The Hindu. For reasons I’ll get into in the weeks to come, the statistics are somewhat misleading.

Overall, it seems from this survey, which shows impressive strides on enrollment, college density and pupil-teacher ratio, that we have finally managed to fix the supply problem. Now, we need to focus on the quality.

Swarajyamag reports on the All India Survey on Higher Education (AISHE) in India, 2016-17. As the report mentions, we have come a long way in terms of fixing the supply problem in higher education – we now need to focus on the much more important (and alas, much more difficult) problem of quality.

“Strange as it might look, the quality of statistics available for our higher education institutes has been much poorer than our statistics on school education. Sensing this gap, the central government instituted AISHE in 2011-12. We now have official (self-reported and unverified) statistics on the number and nature of higher education institutions, student enrolment, and pass-out figures along with the numbers for teaching and non-teaching staff. Sadly, this official survey does not tell us much about the quality of teaching, learning or research. There is no equivalent of Pratham’s ASER survey or the NCERT’s All India School Education Survey.”

That is from The Print, and it takes a rather dimmer view than does Swarajyamag. With reference to the last two links especially: read both of them without bias for or against, and beware of mood affiliation!

Education needs to become much, much, much more relevant than it currently is in India, and half of the Mondays to come in 2020 will be about teaching myself more about this topic. I can’t wait!