Variants of Concern

I didn’t do so well last year in terms of posting regularly from around this time onwards, and that was because thinking about Covid overwhelmed me. I’ve studiously tried to avoid writing about it since then as much as possible, but this post is an exception to that self-imposed rule.


First, about the “second” wave in India. We’ve been here before, about a century ago. I’d written down notes from Laura Spinney’s excellent book, Pale Rider, in March of last year, and there was this bullet point:

The flu struck in three waves, and the second wave was by far the deadliest.

https://econforeverybody.com/2020/03/18/notes-from-pale-rider-by-laura-spinney/

There are many possible reasons why the second wave is likely to be much worse than the first, and I do not know enough to even speculate which one is the most likely. But both a century ago and now, the second wave was far and away the worst:

Please, read the whole thread.


And here we are, a century down the road. Via the excellent, indefatigable Timothy Taylor, this little book. And from that little book, this not-so-little excerpt:

Manaus, a city on the Amazon River of more than 2 million, illustrates the dangers of complacency. During the first wave of the pandemic, Manaus was one of the worst-hit locations in the world. Tests in spring 2020 showed that over 60 percent of the population carried antibodies to SARS-CoV-2. Some policymakers speculated that “herd immunity”—the theory that infection rates fall after large population shares have been infected—had been attained.

That belief was a mirage. A resurgence flared less than eight months later, flooding hospitals suffering from shortages of oxygen and other medical supplies. The pandemic’s second wave left more dead than the first.

Scientists discovered a novel variant in this second wave that went beyond the mutations identified in the United Kingdom and South Africa. This new variant, denominated P.1, has since turned up in the United States, Japan, and Germany. Scientists speculate that a high prevalence of antibodies in the first wave may have helped a more aggressive variant to propagate. The hopes for widespread herd immunity may be dashed by the emergence of more infectious virus variants.

Since the outbreak in Manaus in January 2021, P.1 has now spread throughout Brazil. The variant is much more transmissible than those that had been circulating previously in the country. High transmissibility and the absence of measures and behaviors to stem the dissemination of the virus have led to the worst health system collapse in Brazilian history. The country has been on the front pages of major news outlets around the world not only due to the dramatic situation that is currently unfolding but also because of the global threat posed by a major country with an uncontrolled epidemic.

https://www.piie.com/sites/default/files/documents/piieb21-2.pdf

The point is not to read more about the P.1 variant. That is a worthy exercise, and you can see this, this and this for starters. But the point I want to make is this – well, the points I want to make are these:

  1. The one other instance we have of a global pandemic tells us that the second wave was deadlier.
  2. That seems to be the case this time around as well, because the same virus has mutated into a variety of different forms over the past year in different parts of the world.
    1. Each of these so-called “variants-of-concern” will have different impacts, both in their countries of “origin” and (inevitably) elsewhere.
    2. How variant x affects individual y in region z is down to a long list of potential factors.
  3. And therefore 2021 already is, and will continue to be, worse in many ways compared to 2020.

And again, not just because of the P.1 variant. That is simply one (worrisome, to be sure) variant – there are many more, and there will be more still to come.


Bottom line: we’re just getting started with the second wave. It isn’t the beginning of the end – it is the end of the beginning.

Reproducibility and Replicability

A colleague and I conducted a small behavioral economics and experimental economics workshop for our students at the Gokhale Institute. It was a very small, very basic workshop, but one of the things that came up was the reproducibility problem or, as Wikipedia puts it, the replication crisis.

The replication crisis (also called the replicability crisis and the reproducibility crisis) is an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate or reproduce. The replication crisis most severely affects the social sciences and medicine. The phrase was coined in the early 2010s as part of a growing awareness of the problem. The replication crisis represents an important body of research in the field of metascience.

https://en.wikipedia.org/wiki/Replication_crisis

And further on in that same article:

A 2016 poll of 1,500 scientists reported that 70% of them had failed to reproduce at least one other scientist’s experiment (50% had failed to reproduce one of their own experiments).[9] In 2009, 2% of scientists admitted to falsifying studies at least once and 14% admitted to personally knowing someone who did. Misconducts were reported more frequently by medical researchers than others.

https://en.wikipedia.org/wiki/Replication_crisis

The basic idea behind replicability is very simple: you should be able to take the data and the code from the paper you are reading/reviewing, and replicate the results obtained. You don’t have to agree with the choice of method, or with the results or with anything – you should be able to replicate the results, that’s all.

One basic standard of economic research is surely that someone else should be able to reproduce what you have done. They don’t have to agree with what you’ve done. They may think your data is terrible and your methodology is worse. But as a minimal standard, they should be able to reproduce your result, so that the follow-up research can then be in a position to think about what might have been done differently or better. This standard may seem obvious, but during the last 30 years or so, the methods for reproducibility have been transformed.

https://conversableeconomist.blogspot.com/2021/01/the-reproducibility-challenge-with.html
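The standard Taylor describes is mechanical enough to demonstrate. Here is a minimal, entirely hypothetical sketch in Python (the data, the seed, and the function name are all invented for illustration): fix the randomness, publish the script, and anyone who re-runs it gets the identical number.

```python
# A hypothetical, minimal reproducible analysis: fixed seed, fixed precision,
# no hidden state. Anyone re-running this script gets the identical estimate.
import numpy as np

def run_analysis(seed=42):
    rng = np.random.default_rng(seed)             # fixed seed: no hidden randomness
    x = rng.normal(size=200)                      # stand-in for the published dataset
    y = 2.0 * x + rng.normal(scale=0.5, size=200)
    slope = float(x @ y / (x @ x))                # OLS slope through the origin
    return round(slope, 6)                        # report at a fixed precision

# Two independent runs must agree exactly -- that is the whole standard.
assert run_analysis() == run_analysis()
```

A reviewer may still think the method is terrible and the data worse; the point is only that the number in the paper can be regenerated on demand.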

Now (to me, at any rate) this is interesting enough in and of itself, but at the risk of becoming a little meta, reading the rest of Tim Taylor’s post is worth it because it raises so many interesting issues.

The first is a link to a lovely overview of the problem by Lars Vilhuber, published in the Harvard Data Science Review. It is relatively simple to read, and is recommended reading. For example, Vilhuber draws a careful distinction between replicability and reproducibility, and is full of interesting nuggets of information. I’ll list out the major ones (major to me) here. Note that I have simply copy-pasted from the link:

  1. Publication of research articles specifically in economics can be traced back at least to the 1844 publication of the Zeitschrift für die Gesamte Staatswissenschaft (Stigler et al., 1995).
  2. As the first editor of Econometrica, Ragnar Frisch noted, “the original data will, as a rule, be published, unless their volume is excessive […] to stimulate criticism, control, and further studies” (Frisch, 1933)
  3. …only 17.4% of articles in Econometrica in 1989–1990 had empirical content (Stigler et al., 1995)
  4. As Dewald et al. (1986) note: “Many authors cited only general sources such as Survey of Current Business, Federal Reserve Bulletin, or International Financial Statistics, but did not identify the specific issues, tables, and pages from which the data had been extracted.”
  5. Among reproducibility supplements posted alongside articles in the AEA’s journals between 2010 and 2019, Stata is the most popular (72.96% of all supplements), followed by Matlab (22.45%; Vilhuber et al., 2020) (Note: Do check figure 2 at the link. Fascinating stuff.)
  6. It was concluded that “there is no tradition of replication in economics” (McCullough et al., 2006).
  7. The extent of the use of replication exercises in economics classes is anecdotally high, but I am not aware of any study or survey demonstrating this.
  8. The most famous example in economics is, of course, the exchange between Reinhart and Rogoff, and graduate student Thomas Herndon, together with professors Pollin and Ash (Herndon et al., 2014; Reinhart & Rogoff, 2010). (Note to students: this is a fascinating tale. Read up about it!)

There is much more at the link of course, but Tim Taylor’s post does a good job of extracting the key points. I’m noting them here in bullet point fashion, but you really should read the entire thing.

  1. Economic data – our understanding of the phrase needs to change, because a lot of it is in fact not publicly available today.
  2. Vilhuber writes: “In 1960, 76% of empirical AER [American Economic Review] articles used public-use data. By 2010, 60% used administrative data, presumably none of which is public use …”
  3. Restricted Access Data Environments are something new that I discovered while writing this blogpost: “…where accredited researchers can get access to detailed data, but in ways that protect individual privacy. For example, there are now 30 Federal Statistical Data Research Centers around the country, mostly located close to big universities.” We could do with something like this in India. Actually, we would be a lot happier with just dbie working the way it was supposed to, but that’s for another day.
  4. Data that is given by creating a sub-sample, data that is ephemeral (try researching Instagram stories, for example) and data that you need to pay for are all challenging, and relatively recent, developments.
  5. I worked for four years in the analytics industry, so believe me when I say this. Data cleaning is a huge issue.
  6. Five paragraphs further on in Tim Taylor’s post comes this glorious para, worth quoting in full:
    “As a final thought, I’ll point out that academic researchers have mixed incentives when it comes to data. They always want access to new data, because new data is often a reliable pathway to published papers that can build a reputation and a paycheck. They often want access to the data used by rival researchers, to understand and to critique their results. But making access available to details of their own data doesn’t necessarily help them much.”

If there are those amongst you who are considering getting into academia, and are wondering what field to specialize in, reproducibility and replicability are fields worth investigating, precisely because they are relatively underrated today, and are only going to get more important tomorrow.

That’s a good investment to make, no?

The Long, Slow, But Inevitable Death of the Classroom

If you read enough about Robert Solow, this quote coming up is but a matter of time:

You can see the computer age everywhere but in the productivity statistics

http://www.standupeconomist.com/pdf/misc/solow-computer-productivity.pdf

Much the same could be said about internet-based learning technologies if you had tried to measure their use in colleges and universities before March 2020. We had lip service being paid to MOOCs and all that, but if we’re being honest, that’s all it was: lip service.

Things have changed around a bit since then, I think.

We’ll get to that later in this post, but let’s go back to the seeing computers everywhere but in the productivity statistics bit for the moment. Paul David, an American economist, wrote a wonderful essay called “The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox”, back in 1990.

I think of this essay as an attempt to respond to the question Robert Solow had posed – why isn’t the data reflecting the ubiquitousness of the computer in the modern workplace? Read the essay: it’s a very short, very easy read.

Paul David draws an analogy with the move away from steam as a source of power at the end of the 19th century.

In 1900, contemporary observers well might have remarked that the electric dynamos were to be seen “everywhere but in the productivity statistics!”

David, P. A. (1990). The dynamo and the computer: An historical perspective on the modern productivity paradox. The American Economic Review, 80(2), 355–361.

Adjusting to a new technology, it turns out, takes time.

Steam-powered manufacturing had linked an entire production line to a single huge steam engine. As a result, factories were stacked on many floors around the central engine, with drive belts all running at the same speed. The flow of work around the factory was governed by the need to put certain machines close to the steam engine, rather than the logic of moving the product from one machine to the next. When electric dynamos were first introduced, the steam engine would be ripped out and the dynamo would replace it. Productivity barely improved.
Eventually, businesses figured out that factories could be completely redesigned on a single floor. Production lines were arranged to enable the smooth flow of materials around the factory. Most importantly, each worker could have his or her own little electric motor, starting it or stopping it at will. The improvements weren’t just architectural but social: Once the technology allowed workers to make more decisions, they needed more training and different contracts to encourage them to take responsibility.

https://slate.com/culture/2007/06/what-the-history-of-the-electric-dynamo-teaches-about-the-future-of-the-computer.html

Again, please read the whole thing, and also read this other article by Tim Harford from the BBC, “Why didn’t electricity immediately change manufacturing?” The article, by the way, is an offshoot of a wonderful podcast called “50 Things That Made The Modern Economy“. Please listen to it!

But here’s the part that stood out for me from the piece excerpted above:

“Eventually, businesses figured out that factories could be completely redesigned on a single floor. Production lines were arranged to enable the smooth flow of materials around the factory. Most importantly, each worker could have his or her own little electric motor, starting it or stopping it at will.”

https://slate.com/culture/2007/06/what-the-history-of-the-electric-dynamo-teaches-about-the-future-of-the-computer.html

Colleges and universities are today designed around the basic organizational unit of a classroom, with each classroom being “powered” by a professor.

Of the many, many things that the pandemic has done to the world, what it has done to learning is this:

each worker (read: learner) could have his or her own little electric motor (read: personal classroom), starting it or stopping it at will.

In fact, I had a student tell me recently that she prefers to listen to classroom recordings later, at 2x, because she prefers listening at a faster pace. So it’s not just starting or stopping at will, it is also slowing down or speeding up at will.

Today, because of the pandemic, we are at an extreme end of the spectrum which describes how learning is delivered. Everybody sits at home, and listens to a lecture being delivered (at least in Indian universities, mostly synchronously).

When the pandemic ends, whenever that may be, do we swing back to the other end of the spectrum? Does everybody sit in a classroom once again, and listen to a lecture being delivered in person (and therefore synchronously)?

Or does society begin to ask if we could retain some parts of virtual classrooms? Should the semester then be, say, 60% asynchronous, with the remainder being doubt-solving sessions in the classroom? Or some other ratio that may work itself out over time? Should the basic organizational unit of the educational institute still be a classroom? Does an educational institute still require the same number of in-person professors, still delivering the same number of lectures?

In other words, in the post-pandemic world…

How long before online learning starts to show up in the learning statistics?

Additional, related reading, for those interested:

  1. Timothy Taylor on why “some of the shift to telecommuting will stick”
  2. An essay from the late, great Herbert Simon that I hadn’t read before, called “The Steam Engine and the Computer”
  3. “The role of computer technology in restructuring schools” by Alan Collins, written in 1990(!)

Five articles from economists about tackling the crisis

  1. Arnold Kling advises us to not worry about “going back to normal”. It’s about winning the war, no matter what it takes. There will be a new normal at the end of it, and no one today knows what that normal will be like. Focus on winning!

    “We are acting as if our biggest worry is how to get back to our “normal,” pre-war economy. Our biggest challenge instead is to win the war, after which we will transition to an economy that looks considerably different, just as the post-WWII economy was quite different from the pre-war economy.”

  2. I found this fascinating: on how the RBI is working to keep our financial system alive.

    As the country goes on a self-imposed lockdown to fight the coronavirus contagion, a crack team of 150 people, in hazmat suits, is keeping India’s financial system up and running since March 19 from an unknown location in a completely quarantined environment. These 150 people, including 37 officials from critical departments of the Reserve Bank of India (RBI), such as debt management, reserve management and monetary operations, and third-party service providers, are now in charge of the business continuity plan of the central bank, designed in a way that could help create a benchmark for such exigencies in the future as well.

  3. Greg Mankiw comes up with a form of social insurance. Here’s a challenge for you – how would you game this system?

    Let’s send every person a check for X dollars every month for the next N months. In addition, levy a surtax in 2020–due in April 2021 or perhaps spread over several years–equal to N*X*(Y2020/Y2019), where Y2020 is a person’s earnings in 2020 and Y2019 is a person’s earnings in 2019. The surtax would be capped at N*X.

  4. Via MR, how about pausing time?

    Sometimes, the best solutions to big problems are very simple. Regarding the current outbreak of COVID-19, I propose a solution that—on the surface—might seem preposterous, but if one manages to stay with it and really think through the potential benefits, then it emerges as a much more credible course of action. I propose temporarily stopping time. This means that today’s date, Tuesday, March 17th, 2020, will remain the current date until further notice. This also means that everything that happens in time (e.g. mortgage due dates, payrolls, travel bookings, stock market trading, contractor gigs, concerts, sporting events) will be paused. It also means that all of these events remain on the books, and will continue as planned once time is resumed.


  5. And on the same point, Tim Taylor:

    “Here, I want to focus a bit on a theme that comes up in a number of the essays: the idea that sensible economic policy can put the economy in the freezer for a few months, and then pull the economy out of the freezer, thaw it out, and restart it. I find myself in the awkward position here of largely being in agreement with this policy as a short-run approach, and at the same time also feeling that the ultimate consequences of the policy are going to be more difficult than a number of authors are envisaging.”

    (An aside: WordPress formatting has already cut years from my life, and will continue to do so. That last bit is, to be clear, an excerpt. That is clear to me, to you – but not to WordPress)
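Mankiw’s proposal in point 3 above is compact enough to put into code. Here is a sketch, with every dollar figure and the function name hypothetical:

```python
# Mankiw's scheme as quoted above: N monthly checks of X dollars, clawed back
# via a 2020 surtax of N*X*(Y2020/Y2019), capped at N*X. Figures are made up.
def net_transfer(n_months, x_dollars, y2019, y2020):
    paid_out = n_months * x_dollars
    surtax = min(paid_out * (y2020 / y2019), paid_out)  # the cap binds if income rose
    return paid_out - surtax

# Someone whose income was unchanged repays every dollar of the checks...
assert net_transfer(6, 1000, 50_000, 50_000) == 0
# ...while someone whose income halved keeps half of them.
assert net_transfer(6, 1000, 50_000, 25_000) == 3000
```

As for gaming it: one obvious answer to the challenge is that anyone who can shift reported income out of 2020 (deferring a bonus into 2021, say) shrinks Y2020, and with it the surtax.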

A paper by Barro et al about mortality and economic performance in 1918

A working paper from Barro et al about the potential effects of the coronavirus on mortality and economic activity. I learnt of the paper via Tim Taylor’s blog. Quick points of note below:

  • Their estimate is 39 million deaths, while Laura Spinney says anywhere between 50 million and 100 million. Basically, we don’t know – but a lot!
  • I learnt of this data source. India is missing, but there is still a lot to learn.
  • Assuming they get their 39 million number right, they say that India saw 16 million deaths. About 43% of all deaths worldwide, as per their estimate.
  • Roughly a third of the world caught the flu. That works out to a mortality rate of about 2% of all people on the planet, and about 6% of those who caught it.
  • Below is the conclusion from their regression analyses:


Further, this death rate corresponds in our regression analysis to declines in the typical country by 6 percent for GDP and 8 percent for consumption. These economic declines are comparable to those last seen during the global Great Recession of 2008-2009. The results also suggest substantial short-term declines in real returns on stocks and short-term government bills. Thus, the possibility exists not only for unprecedented numbers of deaths but also for major global economic dislocation.
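The mortality bullet points above hang together arithmetically. Here is a quick check, assuming (my assumption, not the paper’s) a 1918 world population of roughly 1.9 billion:

```python
# Sanity-checking the Barro et al. figures quoted above.
world_pop = 1.9e9          # assumed 1918 world population
deaths = 39e6              # the paper's estimate
infected = world_pop / 3   # roughly a third of the world caught the flu

mortality_all = deaths / world_pop       # share of everyone on the planet
mortality_infected = deaths / infected   # share of those who caught it

assert round(mortality_all, 2) == 0.02       # ~2 percent of all people
assert round(mortality_infected, 2) == 0.06  # ~6 percent of the infected
```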

This is a working paper, subject to change, and the data is unreliable at best. But the bottom line is that this will be at least as bad as 2008-2009 in economic terms, and maybe worse. And let’s hope and pray that, given our capability to deal with health issues relative to 1918, our mortality rates are nowhere near as bad.

Social distancing matters.

Econ101: Policy Responses to a Pandemic

If you haven’t played it already, go ahead and give this game a try: The Fed Chairman Game. I have a lot of fun playing this game in class, especially with students who have been taught monetary policy. It usually turns out to be the case that they haven’t understood it quite as well as they think they have! (To be clear, that’s the fault of our educational system, not the students.)

But the reason I started with that is because the game always throws up a scenario that mimics a crisis, and asks you what you would do if you were the Chair of the Fed.

In this case, policymakers the world over are now staring at a very real crisis, and they need to be asking themselves: what should we do?



There are two broad answers, of course: monetary policy, and fiscal policy.

The Federal Reserve has cut interest rates to zero, and while it has other tools to stimulate the economy, a crisis like this requires fiscal as well as monetary responses. The legislation passed thus far has been important, but another round of fiscal policy will be required immediately to fully address this crisis.

A robust fiscal response can provide income support to households, ensure broad and continuous access to safety net programs, provide incentives for employers to avoid layoffs, provide loans to small businesses, give liquidity cushions to households and firms, and otherwise stimulate the economy.

That’s a write-up from Brookings. The specifics follow in that article, which makes the point that more of the lifting will need to be done by fiscal, rather than monetary, policy. That is true for a variety of reasons the article does not get into, but long story short: fiscal, more than monetary.

But, ok, fiscal policy of what kind? Should we give money to firms or to workers? Here’s Paul Krugman with his take…

And here’s Alex Tabarrok with his response:

So what’s the correct answer? Well, as we’ve learnt before, and will learn again, macro is hard! In an ideal world, all of the above, but as is manifestly clear, we are not in an ideal world. If we must choose between giving money to firms or to people, to whom should we give it? My opinion? People first, businesses second. That is, of course, a US-centric discussion. What about India?



Here’s, to begin with, a round-up from around the world – you can search within it for India’s response thus far.

Calls are getting louder for governments to support people and businesses until the new coronavirus is contained. The only questions are how much money to shovel into the economy, how to go about doing it, and whether it will be enough.

Already, officials from Paris to Washington DC are pulling out the playbook used in Asia for slowing the spread of Covid-19: they’re restricting travel and cracking down on public gatherings. While those measures have the potential to reduce deaths and infections, they will also damage business prospects for many companies and cause a synchronized worldwide disruption.

Here’s the FT from two weeks ago on the impending slowdown:

Venu Srinivasan, whose company TVS is one of India’s largest makers of motorcycles and scooters, said the business had lost about 10 per cent of production in February owing to a lack of Chinese-made parts for the vehicles’ fuel injection system. He added that TVS has now managed to find a new supplier.

But Mr Srinivasan said he was bracing for India’s recovery to take longer than anticipated. “One would have expected a V-shaped recovery, but instead you have an L shaped recovery,” he said. “It’s been the long haul.”

R Jagannathan in the LiveMint suggests this:

This is how it could be designed. Any unemployed urban youth in the 20-30 age group could be promised 100 days of employment and/or skilling options paid for by the government at a fixed daily rate of ₹300 (or thereabouts, depending on city). At an outlay of ₹30,000 per person annually, the unemployed can be put to work in municipal conservancy services, healthcare support, traffic management, and other duties, with the money also being made available for any skill-acquiring activity chosen by the beneficiary (driver training for Ola-Uber, logistics operations, etc). All companies could be given an opportunity to use the provisions of the Apprentices Act to take on more trainees, with the apprenticeship period subsidized to the limit of ₹30,000 per person in 2020-21. If the pilot works, it could be rolled out as a regular annual scheme for jobs and skills. Skilling works best in an actual jobs environment.


He also mentions making the GST simpler, which the Business Standard agrees with:

Certainly, the rationalisation of GST will also affect government revenues. However, a simpler and more transparent system would allow greater collection and reduce evasion. The government will receive a windfall this year from lower crude oil prices. The moment to move on the structural reform agenda is now. The GST Council has done well to address the inverted duty structure in mobile phones. Further rationalisation will give confidence to the market that the government is serious about reforms. It was promised that GST would remain a work in progress, and that the GST Council would act often to improve it. So far, however, the changes have been marginal and haphazard. A more structured and rational approach, which outlines a quick path to a single rate, would pay dividends for the economy in the longer run. It would also be an effective way to manage the immediate effects of a supply shock such as is being caused by the pandemic.

Also from the Business Standard, a report that the government is considering (it hasn’t happened yet) relaxing bad-loan classification rules for sectors hit by the coronavirus. That’s pretty soon going to be every sector!



Assorted Links about the topic – there’s more to read than usual, please note.

Here is Tyler Cowen on mitigating the economic impacts from the coronavirus crisis.

Here’s Bill Dupor, via MR, about the topic:

First, incentivize behavior to align with recognized public health objectives during the outbreak.

Second, avoid concentrating the individual financial burden of the outbreak or the policy response to the outbreak.

Third, implement these fiscal policies as quickly as possible, subject to some efficiency considerations.

Again, via MR, New Zealand’s macro response.

Arnold Kling is running a series on the macro response to the crisis.

Claudia Sahm proposes direct payment to individuals:

This chapter proposes a direct payment to individuals that would automatically be paid out early in a recession and then continue annually when the recession is severe. Research shows that stimulus payments that were broadly disbursed on an ad hoc (or discretionary) basis in the 2001 and 2008–9 recessions raised consumer spending and helped counteract weak demand. Making the payments automatic by tying their disbursement to recent changes in the unemployment rate would ensure that the stimulus reaches the economy as quickly as possible. A rapid, vigorous response to the next recession in the form of direct payments to individuals would help limit employment losses and the economic damage from the recession.

Here are the concrete proposals; the entire paper is worth a read:

• Automatic lump-sum stimulus payments would be made to individuals when the three-month average national unemployment rate rises by at least 0.50 percentage points relative to its low in the previous 12 months.
• The total amount of stimulus payments in the first year is set to 0.7 percent of GDP.
• After the first year, any second (or subsequent) year payments would depend on the path of the unemployment rate.
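The trigger condition can be sketched in a few lines. This is my own rough reading of the rule quoted above, with hypothetical unemployment numbers and a hypothetical function name:

```python
# A sketch of the trigger quoted above: pay out when the three-month average
# unemployment rate rises at least 0.50 percentage points above its low over
# the previous 12 months. All rates below are made up.
def stimulus_triggered(rates):
    """rates: at least 15 monthly unemployment rates (percent), oldest first."""
    current = sum(rates[-3:]) / 3
    # twelve three-month averages over the preceding year
    prior = [sum(rates[i:i + 3]) / 3
             for i in range(len(rates) - 15, len(rates) - 3)]
    return current - min(prior) >= 0.50

assert stimulus_triggered([3.6] * 15) is False                   # flat labor market
assert stimulus_triggered([3.6] * 12 + [3.9, 4.3, 4.8]) is True  # sharp rise
```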


Macroeconomics IS HARD!

There is already a book: Economics in the Time of COVID-19. I learnt about it from Tim Taylor’s blogpost. I have not read the book, but will soon.

The NYT, two weeks ago, on the scale of the problem facing policymakers.


EC101: Links for 28th November, 2019

  1. “The zeroth step, of course, is being open to the process of unlearning. We come with our own biases, shaped by our varied experiences and perceptions. But our experience or knowledge is not always indicative of the macroreality. An unrelenting hold on what we have already learnt is the equivalent of the sunk cost fallacy in economics.”
    ..
    ..
    Pranay Kotasthane has a new newsletter out, and it is worth subscribing to. Stay humble and curious is the gist of his zeroth lesson, and the other points are equally important. Go read, and in my opinion, subscribe.
    ..
    ..
  2. “China is still a one-party state, but it owes much of its current prosperity to an increase in liberty. Since Mao died, his former subjects have won greater freedom to grow the crops they choose, to set up businesses and keep the profits, to own property, and to move around the country. The freedom to move, though far from absolute, has been transformational. Under Mao, peasants were banned from leaving their home area and, if they somehow made it to a city, they were barred from buying food, notes Bradley Gardner in “China’s Great Migration”. Now, there are more rural migrants in China than there are cross-border migrants in the world.”
    ..
    ..
    The rest of this article from the Economist is about migration to the cities – and I find myself in complete agreement – many, many more people in India need to live in her cities. But also see this!
    ..
    ..
  3. “Mazzucato traced the provenance of every technology that made the iPhone. The HTTP protocol, of course, had been developed by British scientist Tim Berners-Lee and implemented on the computers at CERN, in Geneva. The internet began as a network of computers called Arpanet, funded by the US Department of Defense (DoD) in the 60s to solve the problem of satellite communication. The DoD was also behind the development of GPS during the 70s, initially to determine the location of military equipment. The hard disk drive, microprocessors, memory chips and LCD display had also been funded by the DoD. Siri was the outcome of a Stanford Research Institute project to develop a virtual assistant for military staff, commissioned by the Defense Advanced Research Projects Agency (DARPA). The touchscreen was the result of graduate research at the University of Delaware, funded by the National Science Foundation and the CIA.”
    ..
    ..
    Mariana Mazzucato, about whom more people should know, on the role of the government in today’s economy.
    ..
    ..
  4. “Back in the early 1970s, Xerox had figured out a strategy to block competitors in the photocopying business. It took out lots of patents, more than 1,000 of them, on every aspect of the photocopy machine. As old patents expired, new ones kicked in at a rate of several hundred new patents each year. Some of the patents were actually used by Xerox in producing the photocopy machine; some were not. There was no serious complaint about the validity of any individual patent. But taken as a whole, Xerox seemed to be using the patent system to lock up its monopoly position in perpetuity. Under antitrust pressure from the Federal Trade Commission, Xerox in 1975 signed a consent decree which, along with a number of other steps, required licensing its 1,700 photocopier patents to other firms.”
    ..
    ..
    Timothy Taylor adds grist to the anti-patent mill.
    ..
    ..
    “Thinking about how to facilitate a faster and broader dispersion of knowledge and productivity gains seems like a potentially important part of explaining the current economic picture and suggesting a policy agenda.”
    ..
    ..
    That’s the concluding part of the blog post. Just sayin’!
    ..
    ..
  5. Every time I begin to think I kind of understand macroeconomics

EC101: Links for 3rd October, 2019

  1. Everything is correlated.
    ..
    ..
  2. For students at Gokhale Institute for sure, but elsewhere too: the Stiglitz essay prize.
    ..
    ..
  3. Capitalism vs Socialism.
    ..
    ..
  4. On reforming the PhD.
    ..
    ..
  5. On complements, substitutes, YouTube and reading.

EC101: Links for 4th July, 2019

  1. “I’m more worried about the part where the cost of basic human needs goes up faster than wages do. Even if you’re making twice as much money, if your health care and education and so on cost ten times as much, you’re going to start falling behind. Right now the standard of living isn’t just stagnant, it’s at risk of declining, and a lot of that is student loans and health insurance costs and so on. What’s happening? I don’t know and I find it really scary.”
    ..
    ..
    An article that spawned an entire book (about which more below). But do read this article very, very carefully, especially if you think you really understand microeconomics.
    ..
    ..
  2. “Here, for example, are two figures which did not make the book. The first shows car prices versus car repair prices. The second shows shoe and clothing prices versus shoe repair, tailors, dry cleaners and hair styling. In both cases, the goods price is way down and the service price is up. The Baumol effect offers a unifying account of trends such as this across many different industries. Other theories tend to be ad hoc, false, or unfalsifiable.”
    ..
    ..
    A short excerpt from an article on the book that materialized from the article on Slate Star Codex above (and by the way, you might want to start following Slate Star Codex). I have linked to some of them already, but do scroll through to click on “Other posts in this series” to read them all.
    ..
    ..
  3. “The 23 times increase in the relative price of the string quartet is the driving force of Baumol’s cost disease. The focus on relative prices tells us that the cost disease is misnamed. The cost disease is not a disease but a blessing. To be sure, it would be better if productivity increased in all industries, but that is just to say that more is better. There is nothing negative about productivity growth, even if it is unbalanced.”
    ..
    ..
    An excerpt from an excerpt, admittedly, but still well worth your time, to help you understand why the cost disease isn’t really a disease. It’s all about productivity, and how it grows unevenly (and hey, that’s a good thing!)
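    The arithmetic behind the cost disease is simple enough to sketch. Here is a toy two-sector illustration (my own numbers, not from the excerpt): productivity in one sector grows while the string quartet’s does not, wages rise economy-wide in line with productivity, and so the quartet’s relative price climbs even though nothing about the quartet got worse.

    ```python
    # A minimal, hypothetical sketch of the Baumol effect.
    # Assumptions (mine, for illustration only): 2% annual productivity
    # growth in manufacturing, 0% in live performance, over 50 years,
    # with wages in both sectors tracking economy-wide productivity.

    years = 50
    mfg_productivity_growth = 0.02   # assumed: machines get better
    quartet_productivity_growth = 0.0  # a string quartet still needs 4 players

    # Wages rise with economy-wide (here, manufacturing) productivity.
    wage = (1 + mfg_productivity_growth) ** years

    # Unit cost in each sector = wage / sector productivity.
    cost_mfg = wage / (1 + mfg_productivity_growth) ** years
    cost_quartet = wage / (1 + quartet_productivity_growth) ** years

    relative_price = cost_quartet / cost_mfg
    print(f"Relative price of a quartet after {years} years: {relative_price:.2f}x")
    ```

    The point of the sketch is the one the excerpt makes: the quartet’s relative price rises purely because the *other* sector got more productive, which is why the “disease” is really a blessing.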
    ..
    ..
  4. “State intervention to fix market failures that preclude the emergence of domestic producers in sophisticated industries early on, beyond the initial comparative advantage.
    Export orientation, in contrast to the typical failed industrial policy of the 1960s–1970s, which was mostly import substitution industrialisation (ISI).
    The pursuit of fierce competition both abroad and domestically with strict accountability.”
    ..
    ..
    You really should be reading How Asia Works by Joe Studwell – everybody should read that book, and multiple times. But that being said, here is the TL;DR version.
    ..
    ..
  5. “There doesn’t seem to be evidence that hiring from outside is better. What evidence does exist seems to be that internal hires get up the learning curve faster, and often don’t need as much of an immediate pay bump. If you persuade someone to leave their current employer by offering more money, what you get is a worker whose top priority is “more money,” rather than on work challenges and career opportunities. (“As the economist Harold Demsetz said when asked by a competing university if he was happy working where he was: `Make me unhappy.’”)”
    ..
    ..
    Tim Taylor on the difficulty of hiring (and retaining) right.

Links for 31st May, 2019

  1. “For economists, the idea of “spending” time isn’t a metaphor. You can spend any resource, not just money. Among all the inequalities in our world, it remains true that every person is allocated precisely the same 24 hours in each day. In “Escaping the Rat Race: Why We Are Always Running Out of Time,” the Knowledge@Wharton website interviews Daniel Hamermesh, focusing on themes from his just-published book Spending Time: The Most Valuable Resource.”
    ..
    ..
    Almost a cliche, but oh-so-true. The one non-renewable resource is time. A nice read, the entire set of excerpts within this link.
    ..
    ..
  2. ““Bad writing makes slow reading,” McCloskey writes. Your reader has to stop and puzzle over what on earth you mean. She quotes Quintilian: “One ought to take care to write not merely so that the reader can understand, but so that he cannot possibly misunderstand.” This is harder than it sounds. As the author of several books, I’ve learned that many readers take out of a book whatever thoughts they took into it. Still, what else is worth aiming for if you want to communicate your ideas?”
    ..
    ..
    As the first comment below the fold says, she herself doesn’t follow her own advice all the time (and yes, that is putting it mildly), but the book that Diane Coyle reviews in this article is always worth your time. Multiple re-readings, in fact. Also, I am pretty good at writing bad prose myself, which is why I like reading this book so much.
    ..
    ..
  3. “Popper acknowledged that one can never know if a prediction fails because the underlying theory is false or because one of the auxiliary assumptions required to make the prediction is false, or even because of an error in measurement. But that acknowledgment, Popper insisted, does not refute falsificationism, because falsificationism is not a scientific theory about how scientists do science; it is a normative theory about how scientists ought to do science. The normative implication of falsificationism is that scientists should not try to shield their theories by making just-so adjustments in their theories through ad hoc auxiliary assumptions, e.g., ceteris paribus assumptions, to shield their theories from empirical disproof. Rather they should accept the falsification of their theories when confronted by observations that conflict with the implications of their theories and then formulate new and better theories to replace the old ones.”
    ..
    ..
    I wouldn’t blame you for thinking that the author of this essay should read the book reviewed above first – but if you aren’t familiar with falsification, you might want to begin by reading this essay.
    ..
    ..
  4. “Upheaval, by Jared Diamond. I’m a big fan of everything Jared has written, and his latest is no exception. The book explores how societies react during moments of crisis. He uses a series of fascinating case studies to show how nations managed existential challenges like civil war, foreign threats, and general malaise. It sounds a bit depressing, but I finished the book even more optimistic about our ability to solve problems than I started.”
    ..
    ..
    Bill Gates has this annual tradition of recommending five books for the summer – and I haven’t read a single one of the five he has recommended this year. All of them seem interesting – Diamond’s book perhaps more so than others.
    ..
    ..
  5. “Books don’t work for the same reason that lectures don’t work: neither medium has any explicit theory of how people actually learn things, and as a result, both mediums accidentally (and mostly invisibly) evolved around a theory that’s plainly false.”
    ..
    ..
    To say that I am fascinated by this topic is an understatement – and I have a very real, very powerful personal incentive to read this especially attentively. That being said, I can’t imagine anybody not wanting to learn about how we learn, and why we learn so poorly.