Futurology from 1967

Did no work of science fiction/futurology anticipate miniaturization? Genuine question.

AI/ML: Some Thoughts

This is a true story, but I’ll (of course) anonymize the name of the educational institute and the student concerned:

One of the semester end examinations conducted during the pandemic at an educational institute had an error. Students asked about the error, and since the professor who had designed the paper was not available, another professor was asked what could be done. Said professor copied the text of the question and searched for it online, in the hope that the question (or a variant thereof) had been sourced online.

Alas, that didn’t work, but a related discovery was made. A student writing that same question paper had copied the question, and put it up for folks online to solve. It hadn’t been solved yet, but the fact that all of this could happen so quickly was mind-boggling.

The kicker? The student in question had not bothered to remain anonymous. Their name had been appended with the question.

Welcome to learning and examinations in the time of Covid-19.


I have often joked in my classes in this past decade that it is only a matter of time before professors outsource the design of the question paper to freelance websites online – and students outsource the writing of the submission online. And who knows, it may end up being the same freelancer doing both of these “projects”.

All of which is a very roundabout way to get to thinking about Elicit, about which I had put up videos yesterday.

But let’s begin at the beginning: what is Elicit?

Elicit is a GPT-3 powered research assistant. Elicit helps you classify datasets, brainstorm research questions, and search through publications.

https://www.google.com/search?q=what+is+elicit.org

Which of course raises a follow-up question: what is GPT-3? And if you haven’t discovered GPT-3 yet, well, buckle up for the ride:

GPT-3 belongs to a category of deep learning known as a large language model, a complex neural net that has been trained on a titanic data set of text: in GPT-3’s case, roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books. GPT-3 is the most celebrated of the large language models, and the most publicly available, but Google, Meta (formerly known as Facebook) and DeepMind have all developed their own L.L.M.s in recent years. Advances in computational power — and new mathematical techniques — have enabled L.L.M.s of GPT-3’s vintage to ingest far larger data sets than their predecessors, and employ much deeper layers of artificial neurons for their training.
Chances are you have already interacted with a large language model if you’ve ever used an application — like Gmail — that includes an autocomplete feature, gently prompting you with the word ‘‘attend’’ after you type the sentence ‘‘Sadly I won’t be able to….’’ But autocomplete is only the most rudimentary expression of what software like GPT-3 is capable of. It turns out that with enough training data and sufficiently deep neural nets, large language models can display remarkable skill if you ask them not just to fill in the missing word, but also to continue on writing whole paragraphs in the style of the initial prompt.

https://www.nytimes.com/2022/04/15/magazine/ai-language.html
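If you want a feel for what “predict the next word” means in practice, here is a toy sketch of my own – nothing to do with GPT-3’s actual internals, which are a neural network, not a lookup table:

```python
# A toy autocomplete: predict the next word from counts of word pairs
# (bigrams) seen in some training text. GPT-3 does something conceptually
# similar, but with a deep neural net trained on hundreds of gigabytes.
from collections import Counter, defaultdict

corpus = (
    "sadly i won't be able to attend . "
    "sadly i won't be able to make it . "
    "i hope you will be able to attend ."
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def autocomplete(word: str) -> str:
    """Return the most frequent next word after `word` in the corpus."""
    candidates = next_word_counts[word]
    return candidates.most_common(1)[0][0] if candidates else ""

print(autocomplete("to"))  # prints 'attend'
```

The gulf between this lookup table and GPT-3 is enormous, of course, but the task – guess what comes next, given what came before – is exactly the same.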

It’s wild, there’s no other way to put it:


So, OK, cool tech. But cool tech without the ability to apply it is less than half of the story. So what might be some applications of GPT-3?

A few months after GPT-3 went online, the OpenAI team discovered that the neural net had developed surprisingly effective skills at writing computer software, even though the training data had not deliberately included examples of code. It turned out that the web is filled with countless pages that include examples of computer programming, accompanied by descriptions of what the code is designed to do; from those elemental clues, GPT-3 effectively taught itself how to program. (OpenAI refined those embryonic coding skills with more targeted training, and now offers an interface called Codex that generates structured code in a dozen programming languages in response to natural-language instructions.)

https://www.nytimes.com/2022/04/15/magazine/ai-language.html

For example:
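Something like this, perhaps – a purely hypothetical illustration of the Codex workflow, where the plain-English comment is the prompt and the function below it is the kind of completion such a system might produce (both the prompt and the function are my own invention, not actual Codex output):

```python
# Prompt, written by the user in plain English:
# "Given a list of prices, return the average price rounded to 2 decimal places."

# A plausible Codex-style completion (invented here for illustration):
def average_price(prices: list[float]) -> float:
    """Return the mean of `prices`, rounded to two decimal places."""
    return round(sum(prices) / len(prices), 2)

print(average_price([10.0, 12.5, 11.25]))  # 11.25
```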

(Before we proceed, assuming it is not behind a paywall, please read the entire article from the NYT.)


But about a week ago, I first heard about Elicit.org:

Watch the video, play around with the tool once you register (it’s free) and if you are at all involved with academia, reflect on how much has changed, and how much more is likely to change in the time to come.

But there are things to worry about, of course. An excellent place to begin is with this essay by Emily M. Bender, on Medium. It’s a great essay, and deserves to be read in full. Here’s one relevant extract:

There is a talk I’ve given a couple of times now (first at the University of Edinburgh in August 2021) titled “Meaning making with artificial interlocutors and risks of language technology”. I end that talk by reminding the audience to not be too impressed, and to remember:
Just because that text seems coherent doesn’t mean the model behind it has understood anything or is trustworthy
Just because that answer was correct doesn’t mean the next one will be
When a computer seems to “speak our language”, we’re actually the ones doing all of the work

https://medium.com/@emilymenonbender/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd

I haven’t seen the talk at the University of Edinburgh referred to in the extract, but it’s on my to-watch list. Here is the link, if you’re interested.

And here’s a Twitter thread by Emily M. Bender about Elicit.org specifically:


In response to this critique and other feedback, the team behind Elicit.org has come up with an explainer of sorts about how to use Elicit.org responsibly:

https://ought.org/updates/2022-04-25-responsibility

Before we proceed, I hope aficionados of statistics have noted the null hypothesis problem in the last sentence of pt. 1 in that clipping above: which error would you rather avoid, rejecting a null hypothesis that is true, or failing to reject one that is false?
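If you’d like to see that trade-off concretely, here is a minimal simulation sketch (my own construction, assuming a simple one-sample t-test; neither the setup nor the numbers come from Elicit’s explainer):

```python
# Simulating the two errors a hypothesis test can make.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n, trials = 0.05, 30, 10_000

# Type I error: the null is TRUE (the mean really is 0), but we reject it.
type_1 = sum(
    stats.ttest_1samp(rng.normal(0.0, 1.0, n), 0.0).pvalue < alpha
    for _ in range(trials)
)

# Type II error: the null is FALSE (the true mean is 0.3), but we fail to reject it.
type_2 = sum(
    stats.ttest_1samp(rng.normal(0.3, 1.0, n), 0.0).pvalue >= alpha
    for _ in range(trials)
)

print(f"Type I rate:  {type_1 / trials:.3f}")   # close to alpha, by construction
print(f"Type II rate: {type_2 / trials:.3f}")   # depends on the true effect size
```

Lower alpha and the Type I rate falls while the Type II rate rises: you cannot avoid both errors at once, you can only choose which one you’d rather guard against.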


So all that being said, what do I think about GPT-3 in general and Elicit.org in particular?

I’m a sucker for trying out new things, especially from the world of tech. Innocent until proven guilty is a good maxim for approaching many things in life, and to me, that applies to new tech as well. I’m gobsmacked by tools like GPT-3 and DALL-E 2, and their application to new tasks is amazing to watch.

But that being said, there is a lot to think about, be wary of and guard against. I’m happy to keep an open mind and try these amazing technologies out, while keeping a close eye on what thoughtful critics have to say.

Which is exactly what I plan to do!

And for a person with a plan such as mine, what a time to be alive, no?

Have you tried Elicit.org yet?

Video 1:

And Video 2:

The Case For Doubling Spending on R&D

Timothy Taylor, author of the blog The Conversable Economist, has a nice post out on the case for doubling R&D spending. He speaks of doubling spending on R&D by the US government, but the point is equally applicable to all governments, including India’s.

The post is a reflection on a chapter in an e-book published by the Aspen Economic Strategy Group. The chapter has been written by Benjamin F. Jones, and is titled “Science and Innovation: The Under-Fueled Engine of Prosperity” (p. 272 in the PDF that has been linked to above). Timothy Taylor shares an extract that ought to be familiar to us in terms of the direction in which scientific progress has been headed, and perhaps even the magnitude – but every now and then, it helps to remind ourselves how far we’ve come:

Real income per-capita in the United States is 18 times larger today than it was in 1870 (Jones 2016). These gains follow from massive increases in productivity. For example, U.S. corn farmers produce 12 times the farm output per hour since just 1950 (Fuglie et al. 2007; USDA 2020). Better biology (seeds, genetic engineering), chemistry (fertilizers, pesticides), and machinery (tractors, combine harvesters) have revolutionized agricultural productivity (Alston and Pardey 2021), to the point that in 2018 a single combine harvester, operating on a farm in Illinois, harvested 3.5 million pounds of corn in just 12 hours (CLASS, n.d.). In 1850, it took five months in a covered wagon to travel west from Missouri to Oregon and California, but today it can be done in five hours—traveling seven miles up in the sky. Today, people carry smartphones that are computationally more powerful than a 1980s-era Cray II supercomputer, allowing an array of previously hard-to-imagine things—such as conducting a video call with distant family members while riding in the back of a car that was hailed using GPS satellites overhead.

https://conversableeconomist.com/2022/04/19/the-case-for-doubling-us-rd-spending/

The latter part of the extract, which I’ve not quoted here, is about the increase in life expectancy, and is also worth reading. Post the extract, Timothy Taylor goes on to speak about how important it is to celebrate the fact that we were able to push out vaccines in a little less than a year, which is a stellar achievement. And indeed it is! You might have differing opinions about the efficacy of these vaccines, and you might even be of the opinion that the firms doth profit too much from their creation, but I hope you agree that the fact that we were able to do this at all, and as rapidly as we did, is testimony to how far we have come as a civilization.

As an aside, read also this Washington Post editorial about the discovery of the virus, and how the message didn’t get out nearly quickly enough (duh.)

Both points are important to understand as students. Which two points, you ask? That progress as a civilization depends on two things: the rate of technological progress, and the underlying culture that enables it, embraces it and uses it properly. From reading the editorial, I came away with the opinion that China had the technology, but lacked the culture.

I would urge you to think about how this might resonate with each of us as individuals: we have the technology to be ever more productive, and the technology improves every year. But have we built for ourselves a culture of allowing ourselves to use this technology as efficiently as we should? What about the institutions that each of us work for or study in? What about the countries we stay in? Technological progress without an enabling culture doesn’t work, and as students of productivity (that’s one way to think about studying economics), you need to be students of both aspects.


Anyway, back to scientific progress. One of the points that Jones makes in his chapter is that the US has been lagging behind the current leaders on two different metrics: total R&D expenditure as a percentage of GDP, and public R&D expenditure as a share of GDP. China’s R&D expenditure has seen an annual increase of 16% since the year 2000, while the US is at 3% annual growth.
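Sixteen per cent versus three per cent may not sound dramatic, so here is the compounding arithmetic – a back-of-the-envelope sketch of my own, using just those two growth rates and ignoring the fact that the two countries start from very different bases:

```python
# How quickly do 16% and 3% annual growth rates diverge?
import math

def years_to_double(annual_growth: float) -> float:
    """Years for spending to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

print(f"At 16% a year, spending doubles every {years_to_double(0.16):.1f} years")
print(f"At  3% a year, spending doubles every {years_to_double(0.03):.1f} years")

# Starting from the same base, the relative size after two decades:
print(f"After 20 years, the 16% grower is {(1.16 / 1.03) ** 20:.0f}x larger")
```

Doubling roughly every five years versus roughly every twenty-three: that is the difference those two numbers hide.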

What about India, you ask? Here’s a chart from an Indian Express article about the topic:

https://indianexpress.com/article/opinion/columns/unesco-stats-on-global-expenditure-on-r-d-7775626/

As the article points out, let alone trying to compute the rate of increase, we actually seem to be on a downward trajectory for a metric called GERD, which stands for Gross Domestic Expenditure on Research and Development. Here’s the link to the data from the World Bank.

https://data.worldbank.org/indicator/GB.XPD.RSDV.GD.ZS?end=2018&locations=IN&start=2000&view=chart
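If you’d rather pull the series yourself than eyeball a chart, here is a sketch of doing so in Python – assuming the pandas-datareader package, with the indicator code taken from the World Bank URL above:

```python
# Fetch GERD (R&D expenditure as % of GDP) from the World Bank API.
from pandas_datareader import wb

gerd = wb.download(
    indicator="GB.XPD.RSDV.GD.ZS",  # the indicator in the URL above
    country=["IN", "CN", "US"],
    start=2000,
    end=2018,
)
print(gerd.unstack(level=0).tail())  # recent years, one column per country
```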

We clearly need to do better. That article in the Indian Express ends with this paragraph:

A commitment from the Centre to raise GERD to 1 per cent of the GDP in the next three years could be one of the most consequential decisions taken in the 75th year of India’s independence.

https://indianexpress.com/article/opinion/columns/unesco-stats-on-global-expenditure-on-r-d-7775626/

And that is a nice segue back to the blog post that we started today’s post with. If you’re asking (and I hope you are!) questions along the lines of why it should be the government and not the private sector, I have two answers for you. One, the truth always lies somewhere in the middle, and so you need both private and government spending. And two, there is an economic argument for your consideration:

Jones’s essay reviews the argument, fairly standard among economists, that a pure free market will tend to underinvest in new technologies, because in a pure free market the innovator will not capture the full value of an innovation. Indeed, if firms face a situation where unsuccessful attempts at innovation just lose money, while successful innovations are readily copied by others, or the underlying ideas of the innovation just lead to related breakthroughs for others, then the incentives to innovate can become rather thin, indeed. This is the economic rationale for government policies to support research and development: direct support of basic research (where the commercial applications can be quite unclear), protection of intellectual property like patents and trade secrets, tax breaks for companies that spend money on R&D, and so on.

https://conversableeconomist.com/2022/04/19/the-case-for-doubling-us-rd-spending/

Now, how much of the lifting should be done by government, and how much should be done by the private sector is a debate that will never end, but here is an EFE post that might help you start to think through the process.


Timothy Taylor and Benjamin F. Jones argue that the US needs to spend more on R&D, and that the US government should do more in this regard.

My contention is two-fold: that this point applies with even more urgency in the Indian context, and that an enabling culture is an equally important concept, but an underrated one the world over.

Supply and Demand, Complements and Substitutes and Dall-E 2

Before we begin, and in case some of you were wondering:

Early last year, San Francisco-based artificial intelligence company OpenAI launched an AI system that could generate a realistic image from the description of the scene or object and called it DALL.E. The text-to-image generator’s name was a portmanteau coined after combining the artist Salvador Dali and the robot WALL.E from the Pixar film of the same name.

https://analyticsindiamag.com/whats-the-big-deal-about-dall-e-2/

Dall-E 2 is amazing. There are ethical issues and considerations, sure, but the output from this AI system is stunning:

A rabbit detective sitting on a park bench and reading a newspaper in a Victorian setting (Source)

And just in case it isn’t clear yet, no such painting/drawing/art existed until this very sentence, the one that is the caption, was fed to the AI. And it is the AI that “created” this image. Go through the entire thread.


This has led, as might be expected, to a lot of wondering about whether artists are going to be out of a job, and about the threat of AI to humanity at large. I do not know enough to be able to offer an opinion one way or the other where the latter is concerned, but I do, as an economist, have some points to make about the former.

These thoughts were inspired by reading Ben Thompson’s latest (freely available) essay on Dall-E 2, titled “DALL-E, the Metaverse, and Zero Marginal Content”. He excerpts from the OpenAI website in his essay, and this sentence stood out:

DALL-E is an example of how imaginative humans and clever systems can work together to make new things, amplifying our creative potential.

https://openai.com/dall-e-2/

And that raises an age-old question where economists are concerned: is technology a complement to human effort, or a substitute for it? The creators of Dall-E 2 seem to agree with Steve Jobs, and think that the AI is very much a complement to human ingenuity, and not a substitute for it.

I’m not so sure myself. For example: is Coursera for Campus a complement to my teaching or a substitute for it? There are many factors that will decide the answer to this question, including quality, price and convenience among others, and complementarity today may well end up being substitutability tomorrow. If this isn’t clear, think about it this way: cars and drivers were complementary goods for decades, but today, is a self-driving car a complement or a substitute where a driver is concerned?

But for the moment, I agree: this is an exciting new way to generate content, and is likely to work best when used as a complement by artists. Note that this is based on what I’ve seen and read – I have not myself had a chance to use or play around with Dall-E 2.


The title of today’s blog post is about substitutes and complements, which we just finished talking about in the previous section, but it also includes references to demand and supply. What about demand and supply?

Well, Ben Thompson talks about ways to think about social media firms today. He asks us to think about Facebook, for example, and to reflect upon where the demand and the supply for Facebook as a service come from.

Here’s my understanding, from having read Ben Thompson’s essay: Facebook’s demand comes from folks like you and I wanting to find out what, well, folks like you and I are up to. What are our friends, our neighbors, our colleagues and our acquaintances up to? What are their friends, neighbors, colleagues and acquaintances up to? That’s the demand.

What about the supply? Well, that’s what makes Facebook such a revolutionary company – or at least, made it revolutionary back then. The supply, as it turns out, also came from folks like you and I. We were (and are) each other’s friends, neighbors, colleagues and acquaintances. Our News Feed was mostly driven by us in terms of demand, and driven by us in terms of supply. Augmented by related stuff, and by our likes and dislikes, and news sources we follow and all that, but demand and supply come from our own networks.

TikTok, Thompson says, is also a social network, and its supply and demand are also user-driven, but it’s not people like us that create the supply. It is just, well, people. TikTok “learns” what kind of videos we like to see, and the algorithm is optimized for what we like to see, regardless of who has created it.

But neither Facebook nor TikTok are in the business of generating content for us to see. The former, to reiterate, shows us stuff that our network has created or liked, while the latter shows us stuff that it thinks we will like, regardless of who has created it.

But how long, Ben Thompson’s essay asks, before AI figures out how to create not just pictures, but entire videos? And when I say videos, I mean not just deep fakes, which already exist, but eerily accurate videos with depth, walkthroughs, nuance, shifting timelines and all the rest of it.

Sounds far-fetched?

Well, I remember taking an hour to download just one song twenty years ago, and I can now stream any song in the world on demand. And soon (already?) I will be able to “create” any song that I like, by specifying mood, genre, and the kind of lyrics I want.

How long before I can ask AI to create a movie just for me? Or just me and my wife? Or a cartoon flick involving me and my daughter? How long, in other words, before my family’s demand for entertainment is created by an AI, and the supply comes from that AI being able to tap into our personal photo/video collection and make up a movie involving us as cartoon characters?

Millions of households, cosily ensconced in our homes on Saturday night, watching movies involving us in whatever scenario we like. For homework, read The Secret Life of Walter Mitty by Thurber (the short story, please, not the movie!), Snow Crash by Neal Stephenson, and The Seven Basic Plots by Christopher Booker.


There are many tantalizing questions that arise from thinking about this, and I’m sure some have struck you too. But I don’t want to get into any of them right now.

Today’s blog post has a very specific point: it doesn’t matter how complicated the issue at hand is. Simple concepts and principles can go a very long way in helping you frame the relevant questions required for analysis. Answering them won’t be easy, as in this case, but hey, asking (some of) the right questions is a great place to start.

Dall.E 2

It’s been a week or so since I first saw this, and I remain gobsmacked.

Meet the MIT Banana Lounge

Yup, really. There’s a place at MIT where you can lounge around and eat bananas. A lot of bananas.

https://twitter.com/iaincheeseman/status/1513467068351451137

How many is a lot, you ask? 280,000 bananas in this academic year alone. This is a project run by the Undergraduate Association at MIT, and they also place pianos around campus for folks to give them a try. For those of us who prefer a more sedate outlook towards life, there is also a hammocks team, who are doing exactly what you hope they would.

The bananas are free, by the way. If you happen to be on the MIT campus, you can drop in and chomp away to your heart’s content, courtesy of an MIT alum who’s also been known to, um, do other stuff besides.

Cool stuff, right?


The reason I bring this up is because a student at GIPE and I were chatting the other day about questions that her juniors were asking her. One of the questions was about how they didn’t have “enough R projects” to do. (R, for the uninitiated, is a programming language that econ nerds like to freak out over.)

I’m always a little befuddled when students say they don’t have projects to work on, or are looking for datasets to work on. The lazy answer to queries such as this is something along the lines of Kaggle, or Google’s Dataset Search. There are hundreds of such data sources available online for free, and they’re one simple Google search away, so that’s one reason for my befuddlement.

But the primary source of my befuddlement is the fact that a student in possession of a software tool, looking for a dataset, is very much a case of the cart being put before the horse! Software is a tool that helps you in the work you’re doing. But the approach that most students take is the reverse: they have the chops to use the software, but they don’t know what work to do with it.


You could always try and see if you can get an alum to buy bananas, and then forecast demand for bananas!

Trend! Seasonality! Forecasting! For bananas consumed on campus.
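Here, for instance, is a sketch of what that forecasting exercise might look like – on entirely made-up weekly numbers, since the real series would come from the Banana Lounge’s own procurement logs, and assuming the statsmodels package:

```python
# Forecasting banana demand: invented weekly data with a trend and a
# yearly (semester) cycle, fit with Holt-Winters exponential smoothing.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
weeks = pd.date_range("2019-09-01", periods=156, freq="W")      # three years
trend = np.linspace(3000, 6000, len(weeks))                     # growing demand
season = 800 * np.sin(2 * np.pi * np.arange(len(weeks)) / 52)   # semester cycle
bananas = pd.Series(trend + season + rng.normal(0, 200, len(weeks)), index=weeks)

model = ExponentialSmoothing(
    bananas, trend="add", seasonal="add", seasonal_periods=52
).fit()
print(model.forecast(8).round())  # forecast demand for the next eight weeks
```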


I ask you: which is a cooler story to tell? A story in which you say that you downloaded a dataset from the internet and did some modeling with it…

OR

A story in which you say that you and a bunch of your friends got together and convinced your college to give you a room to stock bananas, convinced an alum to sponsor these bananas, figured out the logistics to procure, transport and store these bananas, and used a tool called R (or Python, or SAS or SPSS or whatever) to forecast demand?

The second option teaches you project management, the art of pitching a proposal, teamwork, logistics and coding. And so much more besides! It builds a story that works for the team, the institute and the community, and you use statistical software the way it was meant to be used: as a tool that makes your life easier.

I know which story gets my vote.


You could build shared calendars, YouTube playlists or sampling demos using Google Sheets, or anything else that takes your fancy. Use Statsguru to analyze cricket stats using Python, automate the creation of book recommendation websites, or, well, give bananas away for free.

But datasets for projects?

You’re limited by your imagination alone.

Why Is Reading the News Online Such a Pain?

Livemint, Hindu Business Line, Business Standard, Times of India, The New York Times, The Hindu, The Washington Post, The Economist, Bloomberg Quint and Noah Smith’s Substack.

These are, as of now, my sources of news online that I pay for.

There are other newsletters that I subscribe to and pay for (The Browser is an excellent example), and I read stuff published in other newspapers too, but I’m restricting myself to only the current news sources that I pay for. I would like to subscribe to the Financial Times and to Stratechery too, but my budget line begins to cough firmly and insistently at this point, more’s the pity.

But here’s the thing: reading news online sucks.


Some are worse than others, and I’m very much looking at you, Business Standard. Their app is a joke, and the number of times one has to sign in while reading the paper on a browser isn’t funny. Some are, relatively speaking, better. The NYT website and app are both pretty good, as is the Economist. But still, it isn’t friction free, and there really should be a way to get the user experience to be better than it is right now.

And more than better, the more urgent word is uniform. Here’s a simple use case: let’s say I want to read articles on the current lockdown in Shanghai. I have to go to each website, and either run a search, or navigate to the appropriate section. But on each website, the search button will be located in a slightly different place, with a slightly different user experience. Each website will have its own navigation system. Each website will have different ways to filter search results.

Some will allow you to copy excerpts, some won’t. Some will allow clips and force an appendage at the end (“Read More At XYZ” – I’m looking at you, ToI). But by the time I get to the third website covering the topic I wanted to read about – the current lockdowns in Shanghai – I’m pretty much done, out of sheer exasperation.


It shouldn’t be this hard!

Workarounds kind of exist. For example, I can add the RSS feeds to Feedly, or any other feed reader of my choice. If you’re not familiar with Feedly, or RSS readers in general, here is an old post about it. But the reason I say kind of is because most (if not all) newspapers will not provide the full article in the RSS feed. You have to click through to read the full thing.

Not much use, is it?

Which, to be clear, is entirely understandable. User tracking, ads, and all the rest of it, I get it. But it does mean that Feedly isn’t a great way to keep track of all these articles in one place.
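For headlines, at least, a do-it-yourself aggregator is not hard to sketch – the feed URLs below are placeholders, and this assumes the feedparser package. But note what it can fetch: only whatever the feed exposes, which is usually just a title and a snippet:

```python
# A bare-bones news aggregator over RSS feeds.
import feedparser

FEEDS = {
    "Example Paper A": "https://example.com/feed-a.xml",  # placeholder URL
    "Example Paper B": "https://example.com/feed-b.xml",  # placeholder URL
}

def latest_headlines(query: str) -> list[tuple[str, str, str]]:
    """Return (source, title, link) for entries whose title matches `query`."""
    results = []
    for source, url in FEEDS.items():
        for entry in feedparser.parse(url).entries:
            if query.lower() in entry.title.lower():
                results.append((source, entry.title, entry.link))
    return results

for source, title, link in latest_headlines("Shanghai"):
    print(f"[{source}] {title}\n    {link}")
```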

What I would really like is an app/service that aggregates all news sources in full in one place, and allows me to sign in to premium news sources via that app/service.

Does such a service exist? Or are there workflows that solve this problem?

Please, do let me know!

On Containerization

It is such a horrible-sounding word, containerization. But you’d be amazed at the change it has brought about in the world:

The number of goods carried by containers skyrocketed from 102 million metric tons in 1980 to about 1.83 billion metric tons as of 2017.

https://prospect.org/economy/hidden-costs-of-containerization/

What is containerization? Well, simply put, it is what has made international trade in goods so much cheaper than before.

Prior to the standardization of shipping containers between the 1960s and 1970s, most goods were stowed aboard cargo ships in individually counted units known as “break-bulk cargo.” Longshoremen, in crews of up to 25 men at a time, would manually load and unload cargo by hand in a time-consuming and laborious process that would take days. Ships would sit idle at port for far longer than they would be sailing at sea, making ocean shipping impractical, costly, and unreliable. Thus, most consumer goods were manufactured regionally and shipped by truck or rail; imports were rather limited and expensive.
It was not until 1956 that a trucking company owner named Malcolm McLean converted two old World War II oil tankers into the world’s first container ships. McLean, with the assistance of engineer Keith Tantlinger, designed a 33-foot steel intermodal container that could be easily lifted by cranes, placed snugly on the back of trucks and train cars, and locked to reduce theft. It would take only a few hours to unload a ship as opposed to days. Typically, the cost of hand-loading a ship would be about $5.86 per ton. With McLean’s new system, the price dropped to only 16 cents per ton.

https://prospect.org/economy/hidden-costs-of-containerization/ (Emphasis added)

That works out to a roughly 97 per cent drop in loading costs. By the way, the Wikipedia article on McLean is fascinating. If you want to understand what “backward integration” means in practice, it is highly recommended.


A good place to begin learning more about containerization is this excellent podcast episode by Tim Harford. That page also points us to a book which serves as a fine introduction to the subject – the book is simply called “The Box”.

The last chapter in that book helps us understand the inevitable march towards increased efficiency at all costs, culminating in what happened in the Suez Canal last year:

Today, the average ship is capable of carrying over 20,000 containers at any given time. Many ships are absurdly gargantuan, with some as long as the length of the Empire State Building. Between 1980 and 2020, the deadweight tonnage of container ships has grown from about 11 million metric tons to around 275 million metric tons.

https://prospect.org/economy/hidden-costs-of-containerization/

The article I’ve been quoting from (I came across it via The Browser) is a good exploration of some of the more problematic aspects of how the shipping industry has evolved over time, and it speaks empathetically about the plight of those who work on board these ships. Cost cutting, long wait times, flags of convenience, changes to seaside towns and cities, the impact on local ecology and much else is described therein, and it is worth a read.

The part that fascinated me the most was this:

Prior to the 1980s, the Shipping Act of 1916 regulated the relatively modest ocean carrier industry like a public utility. Prices were transparent and there were no exclusive agreements for volume shippers; anyone wanting to ship cargo could access the same rates. The United States Shipping Board, later the Federal Maritime Commission (FMC), regulated prices and practices, and subsidies assisted domestic shipbuilding. The act enabled smaller companies to enter ocean shipping with stable prices to weather downturns.
But the Shipping Act of 1984, and later the Ocean Shipping Reform Act of 1998, took down this architecture. It allowed shipping companies to consolidate, and eliminated price transparency, facilitating secret deals with importers and exporters. The FMC was defanged as a regulator. Almost immediately, containerization took off. The number of goods carried by containers skyrocketed from 102 million metric tons in 1980 to about 1.83 billion metric tons as of 2017.

https://prospect.org/economy/hidden-costs-of-containerization/

Web 3.0: But What Is It Anyway?

The bad news first: I don’t really know. In fact, the word “really” is redundant in that sentence: I have no clue what Web 3.0 is about.

But writing about something, and that in the public domain, is a great way to learn, and what I’m going to try and do is build up a series of occasional posts about Web 3.0. Bear in mind that I am the exact opposite of an expert when it comes to this topic, and these posts are being written as much to help myself learn about this topic as anything else. But that being said, I hope you get to learn something from this exercise as well!


What makes for a “good” economic system?

A system, to me, is a set of things that work together to generate some output. That output may be planned, unplanned or both. The things that are working together may be living, non-living or both. They may be working together on the basis of a conscious plan, or otherwise.

An economic system is one (to me) in which these things that are working together have at least an implicit knowledge of the fact that there is a cost attached to whatever it is that they’re doing when they’re a part of the system. If they were not to be doing x, they could have done y instead. And so choosing to do x has at least the cost of not being able to do something else. There could be other costs as well. These costs could be measured in terms of money, or in terms of time, or perhaps something else. But those costs exist, they matter, and they can be (at least implicitly) priced. That makes it an economic system.

What is a “good” economic system? A good economic system is one in which some (and preferably all) of the following things happen:

  • As much output is generated as is possible…
  • using as few inputs as possible.
  • This output is generated in as sustainable a fashion as possible, that is, without destroying the ability to produce more output in the long run
  • All potential and actual sources of inputs are given a level playing field as far as possible. One input isn’t discriminated against relative to another.
  • The system works with as little friction as possible
  • The system has an appropriate level of risk-management built into it.

But what does this mean in practice, using real world examples? Consider the system that I am a part of, the education system at the Gokhale Institute of Politics and Economics, which is where I work.

  • We should be able to produce as much learning as possible
  • using as few of our teaching resources (classrooms, faculty members, software, non-teaching staff, electricity etc.) as possible.
  • We should not work our resources into the ground over the long run – it shouldn’t, paradoxically, become harder to recruit people into academia because of how hard we work them.
  • We ought to be indifferent to whether learning happens because of books in the library, faculty members in the classroom, or videos from Coursera or YouTube.
  • Requests such as letters of recommendation, transcripts to apply to foreign universities – all kinds of administrative tasks, really – should be handled as quickly and painlessly as possible
  • The system should be able to handle shocks (big and small). A faculty member not turning up on a particular day shouldn’t bring the system to a halt, and a pandemic shouldn’t bring operations to a halt either.

There’s much, much more to a “good” economic system, of course, but hopefully you see what I mean. Try and build up an idea of what is a good economic system for whatever system you happen to be a part of. It can be your household, your corporate job, this country or any other system, small or large.


Now, Web 3.0: rather than try to understand what it does in terms of the technology, or in terms of the jargon that it seems so very riddled with, let us try and understand it in terms of what makes for a good economic system.

That is, unless Web 3.0 improves the things that make a “good” economic system better, or mitigates the things that make it worse (or both), it doesn’t really change the world in any meaningful way.

So, Web 3.0…

  • Does it help increase the output of a system?
  • Or maintain the same level of output while reducing the inputs required?
  • Does it increase the sustainability of the system?
  • Does it level the playing field for all inputs?
  • Does it reduce friction?
  • Does it help build in better risk management?

If it does all of these things, it really is a magic wand. If it does at least one of these things better, but not at the cost of making any of the others much worse, then it is a useful thing. If it does none of these, it is plain hype.

This is my framework for trying to wrap my head around, well, anything. And hopefully it helps us understand Web 3.0!


I will say this much: writing this out helped me understand this write-up much better:

Send USDC from a wallet with your ENS to the entity’s ENS and get digital mirror assets back into your wallet. These assets are held in a mirrortable, which is an on-chain replica of a primary cap table maintained in an off-chain system like Carta for compliance purposes. The terms of these assets are kept current via periodic updates of the mirrortable’s smart contract.

https://balajis.com/mirrortable/

Read the rest of this write-up, and try and see if you can fit this use-case into my framework. In my next article in this series, that is exactly what I will try to do – and let’s compare notes! 🙂