DNA, RNA, RT-PCR, Testing Methods, Supply Chains… and Politics

What is Reverse Transcription Polymerase Chain Reaction?

Reverse transcription polymerase chain reaction (RT-PCR) is a laboratory technique combining reverse transcription of RNA into DNA (in this context called complementary DNA or cDNA) and amplification of specific DNA targets using polymerase chain reaction (PCR). It is primarily used to measure the amount of a specific RNA. This is achieved by monitoring the amplification reaction using fluorescence, a technique called real-time PCR or quantitative PCR (qPCR). Combined RT-PCR and qPCR are routinely used for analysis of gene expression and quantification of viral RNA in research and clinical settings.

Blah Blooh Bleeh Blah. Right?

Well, this is the test that will tell us whether or not a person has the coronavirus. So listen up!

The coronavirus carries its genetic material in the form of RNA:

Coronaviruses, so named because they look like halos (known as coronas) when viewed under the electron microscope, are a large family of RNA viruses. The typical generic coronavirus genome is a single strand of RNA, 32 kilobases long, and is the largest known RNA virus genome. Coronaviruses have the highest known frequency of recombination of any positive-strand RNA virus, promiscuously combining genetic information from different sources when a host is infected with multiple coronaviruses. In other words, these viruses mutate and change at a high rate, which can create havoc for both diagnostic detection as well as therapy (and vaccine) regimens.

But as best as I can tell, detecting the coronavirus is pretty difficult unless its RNA is first converted into DNA, which is done by a process called Reverse Transcription.

The newly formed DNA is then replicated – copied over and over, basically. That’s where PCR comes in. And with that (and a fluorescent dye that is added to make detection easier) you have a sample that you can check for the presence of the coronavirus.

The first, PCR, or polymerase chain reaction, is a DNA amplification technique that is routinely used in the lab to turn tiny amounts of DNA into large enough quantities that they can be analyzed. Invented in the 1980s by Kary Mullis, the Nobel Prize-winning technique uses cycles of heating and cooling to make millions of copies of a very small amount of DNA. When combined with a fluorescent dye that glows in the presence of DNA, PCR can actually tell scientists how much DNA there is. That’s useful for detecting when a pathogen is present, either circulating in a host’s body or left behind on surfaces.

But if scientists want to detect a virus like SARS-CoV-2, they first have to turn its genome, which is made of single-stranded RNA, into DNA. They do that with a handy enzyme called reverse-transcriptase. Combine the two techniques and you’ve got RT-PCR.
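
To get a feel for why PCR is so sensitive, here is a minimal back-of-the-envelope sketch in Python. It assumes perfect doubling on every cycle, which real reactions only approximate, and the starting numbers are purely illustrative:

```python
# Idealised PCR arithmetic: each heating-and-cooling cycle roughly doubles
# the amount of target DNA. Real reactions are less than 100% efficient,
# so treat this as an upper bound.

starting_copies = 10      # a tiny amount of viral cDNA in the sample
cycles = 30               # a qPCR run is typically around 40 cycles

copies = starting_copies * (2 ** cycles)
print(f"After {cycles} cycles: roughly {copies:,} copies")  # ~10.7 billion

# In real-time (quantitative) PCR, the machine records the cycle at which the
# fluorescent signal crosses a detection threshold (the Ct value). The more
# viral RNA a sample started with, the fewer cycles are needed, i.e. the
# lower the Ct.
```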

So, here’s how it works, best as I can tell:

Coronavirus Detection Steps

 

That article I linked to from Wired has a more detailed explanation, including answers about the “how”, if you are interested. Please do read it fully!

Now, which kit to use to extract RNA from a snot sample, which dye to use, which PCR machine to use – all of these and more are variables. Think of it like a recipe – different steps, different ingredients, different cooking methods. Except, because this is so much more important than a recipe, the FDA wags a finger and establishes protocol.

That protocol doesn’t just tell you the steps; it also tells you whether or not you are authorized to run the test at all. And that was, uh, problematic.

For consistency’s sake, the FDA opted to limit its initial emergency approval to just the CDC test, to ensure accurate surveillance across state, county, and city health departments. “The testing strategy the government picked was very limited. Even if the tests had worked, they wouldn’t have had that much capacity for a while,” says Joshua Sharfstein, a health policy researcher at Johns Hopkins School of Public Health and the coauthor of a recent journal article on how this testing system has gone awry. “They basically were saying, we’re going to use a test not only developed by CDC, but CDC has to wrap it up and send it to the lab, and it’s just going to be state labs doing it.”

The effect was that the nation’s labs could only run tests using the CDC’s kits. They couldn’t order their own primers and probes, even if they were identical to the ones inside the CDC kits. And when the CDC’s kits turned out to be flawed, there was no plan B.

By the way, if you want a full list of the various protocols that are listed by the WHO, they can be found here.

Back to the Wired article:

Another in-demand approach would look for antibodies to the virus in the blood of patients, a so-called serological test. That’d be useful, because in addition to identifying people with Covid-19, it could tell you if someone was once infected but then recovered. “The better your surveillance, the more cases you’re going to catch, but even with perfect surveillance you won’t catch everything,” says Martin Hibberd, an infectious disease researcher at the London School of Hygiene and Tropical Medicine who helped develop one of the first tests for the coronavirus SARS in the early 2000s. “Until we’ve got a full test of this type of assay, we don’t know how many cases we’ve missed.”

A serological test would also probably be cheaper than a PCR-based one, and more suited to automation and high-throughput testing. A researcher in Singapore is testing one now.

Here’s an early paper on the topic, if you are interested.

Serological assays are of critical importance to determine seroprevalence in a given population, define previous exposure and identify highly reactive human donors for the generation of convalescent serum as therapeutic. Sensitive and specific identification of Coronavirus SARS-CoV-2 antibody titers will also support screening of health care workers to identify those who are already immune and can be deployed to care for infected patients minimizing the risk of viral spread to colleagues and other patients.

As far as I can tell, this method has not been deployed at all thus far, and that applies to India as well. Here’s a Wikipedia article about the different methods of detecting Covid-19 – it covers more than just that, but it is the first section that applies here. Here’s an article from Science about a potential breakthrough.

But whether you use any variant of the RT-PCR or the serological test, given the sheer number of kits required, there is going to be crazy high demand, and a massive supply chain problem.

Along with, what else, politics, and bureaucracy:

 


The Wired article is based on reporting in the US, obviously, but there are important lessons to be learned here for all countries, including India.

Here are some links about where India stands in this regard:

 

I’ll be updating the blog at a higher frequency for the time being – certainly more than once a day. Also (duh) all posts will be about the coronavirus for the foreseeable future.

If you are receiving these posts by email, and would rather not, please do unsubscribe.

Thanks for reading!

 

Understanding Afghanistan A Little Bit Better

“Here is a game called buzkashi that is played only in Afghanistan and the central Asian steppe. It involves men on horseback competing to snatch a goat carcass off the ground and carry it to each of two designated posts while the other players, riding alongside at full gallop, fight to wrest the goat carcass away. The men play as individuals, each for his own glory. There are no teams. There is no set number of players. The distance between the posts is arbitrary. The field of play has no boundaries or chalk marks. No referee rides alongside to whistle plays dead and none is needed, for there are no fouls. The game is governed and regulated by its own traditions, by the social context and its customs, and by the implicit understandings among the players. If you need the protection of an official rule book, you shouldn’t be playing. Two hundred years ago, buzkashi offered an apt metaphor for Afghan society. The major theme of the country’s history since then has been a contention about whether and how to impose rules on the buzkashi of Afghan society.”

That is an excerpt from an excerpt – the book is called Games Without Rules, and the author, Tamim Ansary, has written a very readable book indeed about the last two centuries or so of Afghanistan’s history.

It has customs, and it has traditions, but it doesn’t have rules, and good luck trying to impose them. The British tried (thrice), as did the Russians and now the Americans, but Afghanistan has gotten the better of all of them.

Let’s begin with the Russians: why did they invade?


 

One day in October 1979, an American diplomat named Archer K. Blood arrived at Afghanistan’s government headquarters, summoned by the new president, whose ousted predecessor had just been smothered to death with a pillow.

While the Kabul government was a client of the Soviet Union, the new president, Hafizullah Amin, had something else in mind. “I think he wants an improvement in U.S.-Afghan relations,” Mr. Blood wrote in a cable back to Washington. It was possible, he added, that Mr. Amin wanted “a long-range hedge against over-dependence on the Soviet Union.”

Peter Baker in the NYT writes about recently made available archival material, which essentially reconfirms what seems to have been the popular view all along: the USSR could not afford to let Afghanistan slip away from the Communist world, no matter the cost. And as Prisoners of Geography makes clear, and the NYT article mentions, there was always the tantalizing dream of accessing the Indian Ocean.

By the way, somebody should dig deeper into Archer K. Blood, and maybe write a book about him. There’s one already, but that’s a story for another day.


Well, if the USSR invaded, the USA had to be around, and of course it was:

The supplying of billions of dollars in arms to the Afghan mujahideen militants was one of the CIA’s longest and most expensive covert operations. The CIA provided assistance to the fundamentalist insurgents through the Pakistani secret services, Inter-Services Intelligence (ISI), in a program called Operation Cyclone. At least 3 billion in U.S. dollars were funneled into the country to train and equip troops with weapons. Together with similar programs by Saudi Arabia, Britain’s MI6 and SAS, Egypt, Iran, and the People’s Republic of China, the arms included FIM-43 Redeye, shoulder-fired, antiaircraft weapons that they used against Soviet helicopters. Pakistan’s secret service, Inter-Services Intelligence (ISI), was used as an intermediary for most of these activities to disguise the sources of support for the resistance.

But if you are interested in the how, rather than the what – and if you are interested in public choice – then do read this review, and do watch the movie. Charlie Wilson’s War is a great, great yarn.

 


 

AP Photo, sourced from the Atlantic photo essay credited below.

Powerful photographs that hint at what the chaos of those nine years must have been like, from the Atlantic.

 


 

And finally, from the Guardian comes an article that seeks to give a different take on “ten myths” about Afghanistan, including the glorification of Charlie Wilson:

 

This myth of the 1980s was given new life by George Crile’s 2003 book Charlie Wilson’s War and the 2007 film of the same name, starring Tom Hanks as the loud-mouthed congressman from Texas. Both book and movie claim that Wilson turned the tide of the war by persuading Ronald Reagan to supply the mujahideen with shoulder-fired missiles that could shoot down helicopters. The Stingers certainly forced a shift in Soviet tactics. Helicopter crews switched their operations to night raids since the mujahideen had no night-vision equipment. Pilots made bombing runs at greater height, thereby diminishing the accuracy of the attacks, but the rate of Soviet and Afghan aircraft losses did not change significantly from what it was in the first six years of the war.

Afghanistan Today

After Poland and Germany, let’s pick an Asian country to understand better for the month of March. And given the recent deal that has been signed, about which more below, let’s begin with Afghanistan.

As always, begin with the basics. The gift that is Wikipedia, on Afghanistan:

“Afghanistan is a unitary presidential Islamic republic. The country has high levels of terrorism, poverty, child malnutrition, and corruption. It is a member of the United Nations, the Organisation of Islamic Cooperation, the Group of 77, the Economic Cooperation Organization, and the Non-Aligned Movement. Afghanistan’s economy is the world’s 96th largest, with a gross domestic product (GDP) of $72.9 billion by purchasing power parity; the country fares much worse in terms of per-capita GDP (PPP), ranking 169th out of 186 countries as of 2018.”

And from the same article…

The country has three rail links: one, a 75-kilometer (47 mi) line from Mazar-i-Sharif to the Uzbekistan border; a 10-kilometer (6.2 mi) long line from Toraghundi to the Turkmenistan border (where it continues as part of Turkmen Railways); and a short link from Aqina across the Turkmen border to Kerki, which is planned to be extended further across Afghanistan. These lines are used for freight only and there is no passenger service.

Now, as opposed to how I structured the essays on Poland and Germany, I intend to begin with the now and work my way backwards. This is primarily because of what Afghanistan is in the news for:

The joint declaration is a symbolic commitment to the Afghanistan government that the US is not abandoning it. The Taliban have got what they wanted: troops withdrawal, removal of sanctions, release of prisoners. This has also strengthened Pakistan, Taliban’s benefactor, and the Pakistan Army and the ISI’s influence appears to be on the rise. It has made it unambiguous that it wants an Islamic regime.

The Afghan government has been completely sidelined during the talks between the US and Taliban. The future for the people of Afghanistan is uncertain, and will depend on how Taliban honours its commitments and whether it goes back to the mediaeval practices of its 1996-2001 regime.

Doesn’t bode well for India, obviously, but doesn’t bode well for the United States of America either, says Pranay Kotasthane.

And the New York Times says a complete withdrawal of troops, even over the period currently specified, may not be a great idea. Ongoing support is, according to that newspaper, necessary:

More important than troops, potentially, is the willingness of the international community to continue to finance the Afghan government after a peace deal.

“The real key to whether Afghanistan avoids falling into an even longer civil war is the degree to which the United States and NATO are willing to fund and train the Afghan security forces over the long term,” Mr. Stavridis said. “When Vietnam collapsed and the helicopters were lifting off the roof of the U.S. Embassy, it was the result of funding being stopped.”

But it’s not just military funding! Afghanistan needs a lot of the world’s support in the years to come. Water, for example, will be a contentious issue, and that’s putting it mildly.

Afghanistan doesn’t face a water shortage – it’s unable to get water to where it’s needed. The nation loses about two thirds of its water to Iran, Pakistan, Turkmenistan, and other neighbors because it doesn’t harness its rivers. The government estimates that more than $2 billion is needed to rehabilitate the country’s most important irrigation systems.

And water, of course, is just one of many issues. Health, education, reforming agriculture, roads – it’s an endless list, and it will need all kinds of ongoing and sustained help.

So, amid all of this, what should India be doing?

Meanwhile, India’s interests in Afghanistan haven’t changed. India hopes to build up Afghanistan’s state capacity so that Pakistan’s desires of extending control can be thwarted. Given this core interest in a changed political situation, what’s needed in the long-term in the security domain is to build the strength of the Afghan National Defense and Security Forces (ANDSF). Without a strong ANDSF — which comprises the army, police, air force, and special security forces — peace and stability in Afghanistan will remain elusive. India’s aim should be to help the Islamic Republic of Afghanistan and ANDSF claim monopoly over the legitimate use of physical force.

But, the article presciently warns us of the same what/how problem we first encountered in studying the Indian budget:

In short, the budget might itself not be the biggest issue. The US has pumped nearly $3.6bn on average every year for the last 19 years solely on reconstruction of the ANDSF, a support that is likely to continue even if the US withdraws its soldiers. The bigger problems are insufficient processes to plan and execute budgets resulting in unused funds and lack of infrastructure leading to pay shortfalls.

Now, to unpack all of this, we need to study the following: the Soviet invasion and its aftermath, American involvement in the region, and the rise of the Taliban, leading up to Operation Enduring Freedom in 2001. That’s next Wednesday!

How do you interact with your computer?

“Alexa, play Hush, by Deep Purple.”

That’s my daughter, all of six years old. Leave aside for the moment the pride that I feel as a father and a fan of classic rock.

My daughter is coding.


My dad was in Telco for many years, which was what Tata Motors used to call itself back in the day. I do not remember the exact year, but he often regales us with stories about how Tata Motors procured its first computer. Programming it was not child’s play – in fact, interacting with it required the use of punch cards.

I do not know if it was the same type of computer, but watching this video gives us a clue about how computers of this sort worked.


The guy in the video, the computer programmer in Telco and my daughter are all doing the same thing: programming.

What is programming?

Here’s Wikiversity:

Programming is the art and science of translating a set of ideas into a program – a list of instructions a computer can follow. The person writing a program is known as a programmer (also a coder).

Go back to the very first sentence in this essay, and think about what it means. My daughter is instructing a computer called Alexa to play a specific song, by a specific artist. To me, that is a list of instructions a computer can follow.
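
If you wrote that same request out by hand instead of saying it aloud, it might look something like the toy Python sketch below. The play_song function is entirely hypothetical – it simply stands in for whatever Amazon’s systems actually do behind the scenes:

```python
# A toy, purely illustrative version of the voice command that opens this
# essay, written out as an explicit list of instructions. play_song() is
# hypothetical -- a stand-in for whatever Alexa really does under the hood.

def play_song(title: str, artist: str) -> None:
    print(f"Now playing '{title}' by {artist}...")

# The "program" my daughter dictated:
play_song(title="Hush", artist="Deep Purple")
```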

From using punch cards to using our voice and not even realizing that we’re programming: we’ve come a long, long way.


It’s one thing to be awed at how far we’ve come; it is quite another to think about the path we’ve taken to get there. When we learnt about mainframes, about Apple, about Microsoft and about laptops, we learnt about the evolution of computers, and some of the firms that helped us get there. I have not yet written about Google (we’ll get to it), but there’s another way to think about the evolution of computers: we think about how we interact with them.

Here’s an extensive excerpt from Wikipedia:

In the 1960s, Douglas Engelbart’s Augmentation of Human Intellect project at the Augmentation Research Center at SRI International in Menlo Park, California developed the oN-Line System (NLS). This computer incorporated a mouse-driven cursor and multiple windows used to work on hypertext. Engelbart had been inspired, in part, by the memex desk-based information machine suggested by Vannevar Bush in 1945.

Much of the early research was based on how young children learn. So, the design was based on the childlike primitives of eye-hand coordination, rather than use of command languages, user-defined macro procedures, or automated transformations of data as later used by adult professionals.

Engelbart’s work directly led to the advances at Xerox PARC. Several people went from SRI to Xerox PARC in the early 1970s. In 1973, Xerox PARC developed the Alto personal computer. It had a bitmapped screen, and was the first computer to demonstrate the desktop metaphor and graphical user interface (GUI). It was not a commercial product, but several thousand units were built and were heavily used at PARC, as well as other XEROX offices, and at several universities for many years. The Alto greatly influenced the design of personal computers during the late 1970s and early 1980s, notably the Three Rivers PERQ, the Apple Lisa and Macintosh, and the first Sun workstations.

The GUI was first developed at Xerox PARC by Alan Kay, Larry Tesler, Dan Ingalls, David Smith, Clarence Ellis and a number of other researchers. It used windows, icons, and menus (including the first fixed drop-down menu) to support commands such as opening files, deleting files, moving files, etc. In 1974, work began at PARC on Gypsy, the first bitmap What-You-See-Is-What-You-Get (WYSIWYG) cut & paste editor. In 1975, Xerox engineers demonstrated a Graphical User Interface “including icons and the first use of pop-up menus”.[3]

In 1981 Xerox introduced a pioneering product, Star, a workstation incorporating many of PARC’s innovations. Although not commercially successful, Star greatly influenced future developments, for example at Apple, Microsoft and Sun Microsystems.

If you feel like diving deeper into this topic and learning more about it, Daring Fireball has a lot of material about Alan Kay, briefly mentioned above.

So, as the Wikipedia article mentions, we moved away from punch cards to using hand-eye coordination, entering the WIMP (windows, icons, menus, pointer) era.

It took a genius to move humanity into the next phase of machine-human interaction.


The main tweet shown above is Steven Sinofsky rhapsodizing about how Steve Jobs and his firm were able to move away from the WIMP mode of thinking to using our fingers.

And from there, it didn’t take long to move to using just our voice as a means of interacting with the computers we now have all around us.

Voice-operated computing systems:

That leaves the business model, and this is perhaps Amazon’s biggest advantage of all: Google doesn’t really have one for voice, and Apple is for now paying an iPhone and Apple Watch strategy tax; should it build a Siri-device in the future it will likely include a healthy significant profit margin.

Amazon, meanwhile, doesn’t need to make a dime on Alexa, at least not directly: the vast majority of purchases are initiated at home; today that may mean creating a shopping list, but in the future it will mean ordering things for delivery, and for Prime customers the future is already here. Alexa just makes it that much easier, furthering Amazon’s goal of being the logistics provider — and tax collector — for basically everyone and everything.


Punch cards to WIMP, WIMP to fingers, and fingers to voice. As that last article makes clear, one needs to think not just of the evolution, but also about how business models have changed over time, and have caused input methods to change – but also how input methods have changed, and caused business models to change.

In other words, understanding technology is as much about understanding economics, and strategy, as it is about understanding technology itself.

In the next Tuesday essay, we’ll take a look at Google in greater detail, and then at emergent business models in the tech space.

 

On the history of laptops

How and why did we move away from desktop computers towards laptops? Although this next question isn’t the focus of today’s links, it is worth asking in this context: has the tendency to miniaturize accelerated over time? Mainframes to desktops, desktops to laptops, and then netbooks, phones, tablets to wearables – and maybe, in the near future, implants?

(Note to self: it might be worth thinking through how attention spans have also been miniaturized over the same period, and the cultural causes and effects of this phenomenon.)

But for us to be able to answer these questions, we first need to lay the groundwork in terms of understanding how we moved away from mainframes to laptops.

The first portable computer was the IBM 5100, released in September 1975. It weighed 55 pounds, which was much lighter and more portable than any other computer to date. While not truly a laptop by today’s standards, it paved the way for the development of truly portable computers, i.e. laptops.

The first laptop weighed near enough 25 kilograms. Insert large-eyed emoji here.

Though the Compass wasn’t the first portable computer, it was the first one with the familiar design we see everywhere now. You might call it the first modern laptop.

The Compass looked quite different than the laptops of 2016 though. It was wildly chunky, heavy and expensive at $8,150. Adjusted for inflation, that’s over $20,000 by today’s standards. It also extended far outward behind the display to help with heating issues and to house the computing components.

As with the rest of this series that we run on Tuesdays, read this one as much for the photographs as for the text.

The portable micro computer the “Portal” of the French company R2E Micral CCMC officially appeared in September 1980 at the Sicob show in Paris. The Portal was a portable microcomputer designed and marketed by the studies and developments department of the French firm R2E Micral in 1980 at the request of the company CCMC specializing in payroll and accounting. It was based on an Intel 8085 processor, 8-bit, clocked at 2 MHz. It was equipped with a central 64K byte RAM, a keyboard with 58 alphanumeric keys and 11 numeric keys (in separate blocks), a 32-character screen, a floppy disk (capacity – 140,000 characters), a thermal printer (speed – 28 characters/second), an asynchronous channel, a synchronous channel, and a 220-volt power supply. Designed for an operating temperature of 15–35 °C, it weighed 12 kg and its dimensions were 45 × 45 × 15 cm. It ran the Prologue operating system and provided total mobility.

The Wikipedia article on the history of laptops is full of interesting snippets, including the excerpt above. In fact, interesting enough to open up a related article about the history of the Intel 80386, from which the excerpt below:

Early in production, Intel discovered a marginal circuit that could cause a system to return incorrect results from 32-bit multiply operations. Not all of the processors already manufactured were affected, so Intel tested its inventory. Processors that were found to be bug-free were marked with a double sigma (ΣΣ), and affected processors were marked “16 BIT S/W ONLY”. These latter processors were sold as good parts, since at the time 32-bit capability was not relevant for most users. Such chips are now extremely rare and became collectible.

Every now and then, there are entirely unexpected, but immensely joyful payoffs to the task of putting together these sets of links. I started off reading about the evolution of laptops, and wanted to post a link about the development of LCD screens, without which laptops simply wouldn’t be laptops. And I ended up reading about, I kid you not, carrots.

Yes, carrots.

Liquid crystals were accidentally discovered in 1888 by Austrian botanist Friedrich Reinitzer while he studied cholesteryl benzoate of carrots. Reinitzer observed that when he heated cholesteryl benzoate it had two melting points. Initially, at 294°F (145°C), it melted and turned into a cloudy fluid. When it reached 353°F (179°C), it changed again, but this time into a clear liquid. He also observed two other characteristics of the substance; it reflected polarized light and could also rotate the polarization direction of light.

Surprised by his findings, Reinitzer sought help from German physicist Otto Lehmann. When Lehmann studied the cloudy fluid under a microscope, he saw crystallites. He noted that the cloudy phase flowed like a liquid, but that there were other characteristics, such as a rod-like molecular structure that was somewhat ordered, that convinced Lehmann that the substance was a solid. Lehmann continued to study cholesteryl benzoate and other related materials. He concluded the cloudy fluid represented a newly discovered phase of matter and called it liquid crystal.

On The State of Higher Education in India (#1 of n)

Quite unexpectedly, I have ended up writing what will be an ongoing series about discovering more about the Indian Constitution. It began because I wanted to answer for myself questions about how the Indian Constitution came to be, and reading more about it has become a rather engaging rabbit hole.

Increasingly, it looks as if Mondays (which is when I write about India here) will now alternate between essays on the Indian Constitution and the topic of today’s essay: the state of (higher) education in India.

The series about the Constitution is serendipity; the series about education is an overwhelming passion.

I’ve been teaching at post-graduate institutions for the past decade now, and higher education in India is problematic on many, many counts. I’ll get into all of them in painstaking detail in the weeks to come; today is just about five articles you might want to read to give yourself an overview of where we are.

In the last 30 years, higher education in India has witnessed rapid and impressive growth. The increase in the number of institutions is, however, disproportionate to the quality of education that is being dispersed.

That is from the “Challenges” section of the Wikipedia article on higher education in India. The section highlights financing, enrollment, accreditation and politics as major challenges. To which I will add (and elaborate upon in the weeks to come) signaling, pedagogy, evaluation, overemphasis on classroom teaching, the return on investment (time and money both), relevance, linkages to the real world, out-of-date syllabi, and finally under-emphasis on critical thinking and writing.

“Educational attainment in present-day India is also not directly correlated to employment prospects—a fact that raises doubts about the quality and relevance of Indian education. Although estimates vary, there is little doubt that unemployment is high among university graduates—Indian authorities noted in 2017 that 60 percent of engineering graduates remain unemployed, while a 2013 study of 60,000 university graduates in different disciplines found that 47 percent of them were unemployable in any skilled occupation. India’s overall youth unemployment rate, meanwhile, has remained stuck above 10 percent for the past decade.”

That is from an excellent summary of higher education in India. It is a very, very long read, but I have not been able to find a better in-one-place summary of education in India.

A series of charts detailing some statistics about higher education in India, by The Hindu. For reasons I’ll get into in the weeks to come, the statistics are somewhat misleading.

Overall, it seems from this survey, which shows impressive strides on enrollment, college density and pupil-teacher ratio, that we have finally managed to fix the supply problem. Now, we need to focus on the quality.

Swarajyamag reports on the All India Survey on Higher Education (AISHE) in India, 2016-17. As the report mentions, we have come a long way in terms of fixing the supply problem in higher education – we now need to focus on the much more important (and alas, much more difficult) problem of quality.

“Strange as it might look, the quality of statistics available for our higher education institutes has been much poorer than our statistics on school education. Sensing this gap, the central government instituted AISHE in 2011-12. We now have official (self-reported and unverified) statistics on the number and nature of higher education institutions, student enrolment, and pass-out figures along with the numbers for teaching and non-teaching staff. Sadly, this official survey does not tell us much about the quality of teaching, learning or research. There is no equivalent of Pratham’s ASER survey or the NCERT’s All India School Education Survey.”

That is from The Print, and it takes a rather dimmer view than does Swarajyamag. With reference to the last two links especially, read both of them without bias for or against – and beware of mood affiliation!

Education needs to become much, much, much more relevant than it currently is in India, and half of the Mondays to come in 2020 will be about teaching myself more about this topic. I can’t wait!

One on inflation, and four on Germany’s reunification

As a student of economics, I think I’ve read one article too many on Germany’s inflation. In fact, one of the many joys of writing this blog has been discovering how bad inflation was in other parts of the world: the version of economic history that I have studied has underplayed this.

(Name four countries that experienced hyperinflation: Germany! Zimbabwe! Venezuela! Uhhhhhh…..)

But that being said, learning more about Germany this month wouldn’t be complete without at least one article about its hyperinflation. And the reason I enjoyed the one I excerpt from below is that while it is full of interesting anecdotes about the period of hyperinflation, it also speaks about how it all ended – and with what consequences. And a fun fact which you may not have known earlier: the root of the word credit means to believe. That’s modern finance, in a nutshell.

Obviously, though the currency was worthless, Germany was still a rich country — with mines, farms, factories, forests. The backing for the Rentenmark was mortgages on the land and bonds on the factories, but that backing was a fiction; the factories and land couldn’t be turned into cash or used abroad. Nine zeros were struck from the currency; that is, one Rentenmark was equal to one billion old Marks. The Germans wanted desperately to believe in the Rentenmark, and so they did. “I remember,” said one Frau Barten of East Prussia, “the feeling of having just one Rentenmark to spend. I bought a small tin bread bin. Just to buy something that had a price tag for one Mark was so exciting.”

All money is a matter of belief. Credit derives from Latin, credere, “to believe.” Belief was there, the factories functioned, the farmers delivered their produce. The Central Bank kept the belief alive when it would not let even the government borrow further.

The political “give” that was needed to get the political, economic, cultural and civilizational “take”, in an interesting article from DW. The set of links at the bottom of this article is also worth a read. (Note that I have added the Wikipedia link to the 2 Plus 4 Agreement; it is not there in the original.)

The 2 plus 4 Agreement, also called the Treaty on the Final Settlement with Respect to Germany, recognized all European borders established after World War II, resolving this outstanding dispute once and for all. Bonn and Berlin’s signatures to the treaty meant that a newly reunited Germany would recognize national borders as they stood, not as they once were. Coupled with the reduction in military concentrations, the acceptance of current borders was a significant step toward an enduring peace in Europe at large.

An unusually short excerpt by my own standards, but this is the last sentence in the Wikipedia article about German reunification. It deserves to be read in full – the entire article, that is – especially if you were under the impression that reunification in Germany was relatively quick and painless, and that there was much happiness all round.

The absorption of eastern Germany, and the methods by which it had been accomplished, had exacted a high price throughout all of Germany.

But there is an argument to be made that it was worth it, because one way of thinking about it is this: West Germany purchased access to culture by sharing economic prosperity, while East Germany purchased access to economic prosperity by sharing culture. Costs matter, but maybe, just maybe, culture trumps economics?

“On average, people in the East are less successful, less productive and not as wealthy. Materially speaking, they’re less happy,” Seemann said. “But that’s exactly why cultural diversity in the eastern states plays a more important role than in the West. People in eastern Germany are aware that there are things which are more important than making money and paying taxes. They see the arts as a creative process of ‘togetherness.’ We need to strengthen this consciousness, because that’s the only way to ensure culture and society continues to thrive — regardless of where we stand economically in the years to come.”

Note that there are links at the bottom of this article about whether lessons from German reunification can apply to Korea. Alas, the article says no. I am an Indian, so double the alas for me, please.

And finally, a reminder that these things take time! This article is about the reunification of not Germany, but of the German language. Note that the East Germans had to adapt, and not the other way around. Maybe, just maybe, economics trumps culture?

The former East and West Germany have grown closer together in many areas over the past 26 years. At the same time, some differences are still marked precisely by the former border between East and West, such as economic strength, family structure and wealth. Furthermore, stereotypes about Wessis and Ossis have still not been consigned to history. According to a study carried out by the Berlin Institute for Population and Development, it will take another generation before German unity is firmly anchored in people’s minds. It has, however, long been reflected in the way they speak.

Four articles about the creation of Germany, and one about its reunification

I had the pleasure of being in Germany for about a week in January, and it is a country that I would love to visit again. For a variety of reasons, it must be said, not the least of which is that Germany’s reputation when it comes to beer is entirely deserved. What’s more, every single German I met told me that January was probably the worst time of the year to visit if beer was the main thing on the agenda, which only makes my argument stronger.

As a side note: every single German with whom I spoke about beer also said that Oktoberfest is by now vastly overrated. I would have expected that in any case, but it was a useful reaffirmation.

In February, we will learn more about Germany, but I plan to not write about the two world wars at all. Not, of course, because they are not worth writing about, but because I would expect most people reading this to know about them in any case.

Instead, I propose to link to articles about the following topics: the founding of Germany, the reconstruction of Germany after the end of the Second World War, Germany’s (almost horrified) fascination with inflation today, and conclude with a rather longish article about my impressions of Germany from my visit there, and what I hope to learn more about as a consequence. If you feel very strongly about any topic that should be included in addition to these, please let me know!

Germany, as perhaps most of you reading this already know, became Germany the nation – in the sense that we understand it today – only in 1871. Whether this was in response to rising feelings of nationalism in other parts of Europe, or because of Otto von Bismarck, or a combination of the two will forever be a matter of surmise.

In the Gründerzeit period following the unification of Germany, Bismarck’s foreign policy as Chancellor of Germany under Emperor William I secured Germany’s position as a great nation by forging alliances, isolating France by diplomatic means, and avoiding war. Under Wilhelm II, Germany, like other European powers, took an imperialistic course, leading to friction with neighbouring countries. Most alliances in which Germany had previously been involved were not renewed. This resulted in the creation of a dual alliance with the multinational realm of Austria-Hungary, promoting at least benevolent neutrality if not outright military support. Subsequently, the Triple Alliance of 1882 included Italy, completing a Central European geographic alliance that illustrated German, Austrian and Italian fears of incursions against them by France and/or Russia. Similarly, Britain, France and Russia also concluded alliances that would protect them against Habsburg interference with Russian interests in the Balkans or German interference against France.

This is what Germany looked like at the outset:

At its birth Germany occupied an area of 208,825 square miles (540,854 square km) and had a population of more than 41 million, which was to grow to 67 million by 1914. The religious makeup was 63 percent Protestant, 36 percent Roman Catholic, and 1 percent Jewish. The nation was ethnically homogeneous apart from a modest-sized Polish minority and smaller Danish, French, and Sorbian populations. Approximately 67 percent lived in villages and the remainder in towns and cities. Literacy was close to universal because of compulsory education laws dating to the 1820s and ’30s.

And there was a reason this mattered. Germany was about to change in a whole variety of ways.

The person most responsible for this was, of course, Otto von Bismarck. The entire Wikipedia article makes for fascinating reading, not just the excerpt below.

Imperial and provincial government bureaucracies attempted to Germanise the state’s national minorities situated near the borders of the empire: the Danes in the North, the Francophones in the West and Poles in the East. As minister president of Prussia and as imperial chancellor, Bismarck “sorted people into their linguistic [and religious] ‘tribes’”; he pursued a policy of hostility in particular toward the Poles, which was an expedient rooted in Prussian history. “He never had a Pole among his peasants” working the Bismarckian estates; it was the educated Polish bourgeoisie and revolutionaries he denounced from personal experience, and “because of them he disliked intellectuals in politics.” Bismarck’s antagonism is revealed in a private letter to his sister in 1861: “Hammer the Poles until they despair of living […] I have all the sympathy in the world for their situation, but if we want to exist we have no choice but to wipe them out: wolves are only what God made them, but we shoot them all the same when we can get at them.” Later that year, the public Bismarck modified his belligerence and wrote to Prussia’s foreign minister: “Every success of the Polish national movement is a defeat for Prussia, we cannot carry on the fight against this element according to the rules of civil justice, but only in accordance with the rules of war.” With Polish nationalism the ever-present menace, Bismarck preferred expulsion rather than Germanisation.

One of the best books that I have read about the history of Europe in this period is The War That Ended Peace, by Margaret MacMillan. Read the book, but begin with this review:

After Hitler’s war, though, English-speaking historians were more likely to see a pattern of German aggression stretching back before 1914, and in 1961 the Hamburg historian Fritz Fischer made the controversial case (bitterly opposed by most German historians) that Germany had mounted a pre-emptive strike. The “Fischer thesis” became the orthodoxy for a while, but has been plausibly challenged in recent years by historians who have pointed the finger almost everywhere except at Berlin. The current consensus seems to be that there is no consensus. There is, finally, the question of the decisions made by a score or so of men (and they were all men) in half a dozen capitals.

As noted above, we now fast forward to the year 1990 (or thereabouts). There is much more to German reunification than the fall of the Berlin Wall.

The East German government started to falter in May 1989, when the removal of Hungary’s border fence with Austria opened a hole in the Iron Curtain. It caused an exodus of thousands of East Germans fleeing to West Germany and Austria via Hungary. The Peaceful Revolution, a series of protests by East Germans, led to the GDR’s first free elections on 18 March 1990, and to the negotiations between the GDR and FRG that culminated in a Unification Treaty.[1] Other negotiations between the GDR and FRG and the four occupying powers produced the so-called “Two Plus Four Treaty” (Treaty on the Final Settlement with Respect to Germany) granting full sovereignty to a unified German state, whose two parts were previously bound by a number of limitations stemming from their post-World War II status as occupied regions.

Next Wednesday, I’ll link to five articles about the reconstruction of post-war Germany and associated topics.

Understanding Horizons, Understanding Time

The more I think about time, the more confused I get. The more I read about time, the more I cannot help but think about time.

In today’s post, I hope to be able to inspire you to get as confused about time as I am.

Before we get to the five links, here are some questions for you.

Should I have a gulab jamun after lunch today? If you are anything at all like me, your answer is likely to be a resounding “aye!”

Do you know who might want to say no? 70 year old Ashish (assuming I live to be that age) might not be such a big fan of my having that gulab jamun today.

Should 38 year old Ashish (for that is how old I am right now) listen to the entreaties of a 70 year old Ashish who doesn’t exist?

Well, if 38 year old Ashish wants 70 year old Ashish to have a chance of existing, I think it makes sense to ditch that damn dessert.

But, uh, good luck trying to convince 38 year old Ashish at 1.45 pm of the importance of thinking about the hypothetical existence of 70 year old Ashish.

That’s the problem of time discounting.

How important is the future, compared to the present?

Think of it in terms of gulab jamuns or interest rates offered to you by the bank – it’s the same thing. A weekend trip to Goa (38 year old Ashish says yes!), or a fixed deposit in the bank (70 year old Ashish says yes!)?

Now: that was the easy bit. Let’s amp things up a little.

Do you wish your parents had saved a little bit more when they were younger? Hell, imagine if your grandparents hadn’t had that gulab jamun when they were young, and put the money in a fixed deposit instead. Go as far back in time as you wish, and imagine how important a rupee saved a couple of centuries ago would have been today – for you.

But, um, by that measure, shouldn’t you be saving every single rupee you can today for your child’s tomorrow? The argument holds whether you have children or not, by the way. If you wish your great-great-great-grandfather had been more financially responsible at age 27, when he was unmarried and without kids, then that goes for you today as well!

And all that being said, let’s get cracking with today’s set of links!

  1. “Time discounting research investigates differences in the relative valuation placed on rewards (usually money or goods) at different points in time by comparing its valuation at an earlier date with one for a later date”…
    ..
    ..
    says the very simple introduction to time (temporal) discounting on behavioraleconomics.com. While you’re on that page, also look up hyperbolic discounting.
    ..
    ..
  2. “Someone with a high time preference is focused substantially on their well-being in the present and the immediate future relative to the average person, while someone with low time preference places more emphasis than average on their well-being in the further future. Time preferences are captured mathematically in the discount function. The higher the time preference, the higher the discount placed on returns receivable or costs payable in the future.”
    ..
    ..
    That is from Wikipedia, and as homework, ask yourself if you should live life with a zero discount rate attached to most things.*
    ..
    ..
  3. “What has become known as the “Ramsey formula” says that the rate at which one should discount an increase in consumption that occurs in the future depends on three key factors, elaborated upon below: our pure rate of time preference, our expectations about future growth rates, and our judgment about whether and how fast the marginal utility of consumption declines as we grow wealthier”
    ..
    ..
    So here’s a way to understand the point above: I was in Europe on work recently. Should I have splurged on a three-star Michelin meal in Paris? Or banked the money I might have spent over there and gone for three such meals when I was 70 instead? Will such a meal at age 70 hold the same importance for me as it does now?** (There’s a small numerical sketch of this kind of arithmetic at the end of this post.)
    ..
    ..
  4. “When brain science was young, it was thought that the frontal lobe had no particular function. There were famous cases such as that of Phineas Gage, a railway worker who, in an explosion, had a long iron rod driven through the front of his brain. The rod was removed and Gage, miraculously, survived, seemingly with his intelligence, language and memory intact. Before long he was back at work. However, observation of others with frontal lobe damage soon revealed the cost – problems with planning, and also, strangely, a reduction in feelings of anxiety. What was the link between the two? Both planning and anxiety are related to thinking about the future. Frontal lobe damage leaves people living in a permanent present, and as a result they will not be bothering to make plans, so can’t be anxious about them.”
    ..
    ..
    That is from a review of one of the finest books I have read, Stumbling on Happiness, by Daniel Gilbert. Read the book, please. I promise you that it is worth your (excuse the pun) time.
    ..
    ..
  5. “But there’s an alternative path. Generations overlap, and so by doing more to empower younger people today, we give somewhat more weight to the interests of future people compared to the interests of present people. This could be significant. Currently, the median voter is 47.5 years old in the USA; the average age of senators in the USA is 61.8 years. With an aging population, these numbers are very likely to get higher over time: in developed countries, the median age is projected to increase by 3 to 7 years by 2050 (and by as much as 15 years in South Korea). We live in something close to a gerontocracy, and if voters and politicians are acting in their self-interest, we should expect that politics as a whole has a shorter time horizon than if younger people were more empowered.”
    ..
    ..
    Via Marginal Revolution, this lovely, thought-provoking essay by William MacAskill. As both the MR blog post and MacAskill are careful to point out, this necessarily implies that younger people should be more informed, for such a system to have even a shot at succeeding.

 

But hey, that’s as good an argument as any for the existence of this blog!

 

*Yes, you should, far as I can tell. But god, it’s hard!

**If you were wondering, the answer is no. I didn’t go for that meal. I wish I had though!
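
And if you would like to see the arithmetic behind all of this, here is the small numerical sketch promised above, in Python. Every number in it – the discount rates, the growth rate, the price of the meal – is made up purely for illustration:

```python
# Illustrative time-discounting arithmetic. All the numbers below are made up
# for the sake of the example; nothing here is a recommendation.

def exponential_discount(value, rate, years):
    """Standard (exponential) discounting: value / (1 + rate)**years."""
    return value / (1 + rate) ** years

def hyperbolic_discount(value, k, years):
    """Hyperbolic discounting: value / (1 + k*years). It falls off steeply at
    first and then flattens out -- one way of modelling present bias."""
    return value / (1 + k * years)

# A 20,000-rupee splurge enjoyed 32 years from now (age 38 -> age 70),
# valued from today's standpoint:
meal = 20_000
print(round(exponential_discount(meal, rate=0.05, years=32)))  # ~4,197 today
print(round(hyperbolic_discount(meal, k=0.05, years=32)))      # ~7,692 today

# The Ramsey formula from point 3: r = rho + eta * g, where rho is the pure
# rate of time preference, g is expected consumption growth, and eta captures
# how fast the marginal utility of consumption falls as we grow wealthier.
rho, eta, g = 0.01, 1.5, 0.02
print(f"Ramsey discount rate: {rho + eta * g:.1%}")            # 4.0%
```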

 

 

Tech: Understanding Mainframes Better

My daughter, all of six years old, doesn’t really know what a computer is.

Here’s what I mean by that: a friend of hers has a desktop in her bedroom, and to my daughter, that is a computer. My laptop is, well, a laptop – to her, not a computer. And she honestly thinks that the little black disk that sits on a coffee table in our living room is a person/thing called Alexa.

How to reconcile – both for her and for ourselves – the idea of what a computer is? The etymology of the word is very interesting – it actually referred to a person! While it is tempting to write a short essay on how Alexa has made it possible to complete the loop in this case, today’s links are actually about understanding mainframes better.

Over the next four or five weeks, we’ll trace out the evolution of computers from mainframes down to, well, Alexa!

  1. “Several manufacturers and their successors produced mainframe computers from the late 1950s until the early 21st Century, with gradually decreasing numbers and a gradual transition to simulation on Intel chips rather than proprietary hardware. The US group of manufacturers was first known as “IBM and the Seven Dwarfs”: usually Burroughs, UNIVAC, NCR, Control Data, Honeywell, General Electric and RCA, although some lists varied. Later, with the departure of General Electric and RCA, it was referred to as IBM and the BUNCH. IBM’s dominance grew out of their 700/7000 series and, later, the development of the 360 series mainframes.”
    ..
    ..
    Wikipedia’s article on mainframes contains a short history of the machines.
    ..
    ..
  2. “Mainframe is an industry term for a large computer. The name comes from the way the machine is build up: all units (processing, communication etc.) were hung into a frame. Thus the main computer is build into a frame, therefore: Mainframe. And because of the sheer development costs, mainframes are typically manufactured by large companies such as IBM, Amdahl, Hitachi.”
    ..
    ..
    This article was written a very long time ago, but is worth looking at for a simple explanation of what mainframes are. Their chronology is also well laid out – and the photographs alone are worth it!
    ..
    ..
  3. “Although only recognized as such many years later, the ABC (Atanasoff-Berry Computer) was really the first electronic computer. You might think “electronic computer” is redundant, but as we just saw with the Harvard Mark I, there really were computers that had no electronic components, and instead used mechanical switches, variable toothed gears, relays, and hand cranks. The ABC, by contrast, did all of its computing using electronics, and thus represents a very important milestone for computing.”
    ..
    ..
    This is your periodic reminder to please read Cixin Liu. But also, this article goes more into the details of what mainframe computers were than the preceding one. Please be sure to read through all three pages – and again, the photographs alone are worth the price of admission.
    ..
    ..
  4. A short, Philadelphia focussed article that is only somewhat related to mainframes, but still – in my opinion – worth reading, because it gives you a what-if idea of the evolution of the business. Is that really how the name came about?! (see the quote about bugs below)
    ..
    ..
    “So Philly should really be known as “Vacuum Tube Valley,” Scherrer adds: “We want to trademark that.” He acknowledged the tubes were prone to moths — “the original computer bugs.”
    ..
    ..
  5. I’m a sucker for pictures of old technology (see especially the “Death to the Mainframe” picture)