Before we begin, and in case some of you were wondering:
Early last year, San Francisco-based artificial intelligence company OpenAI launched an AI system that could generate a realistic image from a text description of a scene or object, and called it DALL·E. The text-to-image generator’s name is a portmanteau of the artist Salvador Dalí and WALL·E, the robot from the Pixar film of the same name.
Dall-E 2 is amazing. There are ethical issues and considerations, sure, but the output from this AI system is stunning:
And just in case it isn’t clear yet: no such painting/drawing/art existed until this very sentence, the one that is the caption, was fed to the AI. And it is the AI that “created” this image. Go through the entire thread.
This has led, as might be expected, to a lot of wondering about whether artists are going to be out of a job, and the threats of AI to humanity at large. I do not know enough to be able to offer an opinion one way or the other where the latter is concerned, but I do, as an economist, have some points to make about the former.
And that raises an age-old question where economists are concerned: is technology a complement to human effort, or a substitute for it? The creators of Dall-E 2 seem to agree with Steve Jobs, and think that the AI is very much a complement to human ingenuity, and not a substitute for it.
I’m not so sure myself. For example: is Coursera for Campus a complement to my teaching or a substitute for it? There are many factors that will decide the answer to this question, including quality, price and convenience among others, and complementarity today may well end up being substitutability tomorrow. If this isn’t clear, think about it this way: cars and drivers were complementary goods for decades, but today, is a self-driving car a complement or a substitute where a driver is concerned?
But for the moment, I agree: this is an exciting new way to generate content, and is likely to work best when used as a complement by artists. Note that this is based on what I’ve seen and read – I have not myself had a chance to use or play around with Dall-E 2.
The title of today’s blog post is about substitutes and complements, which we just finished talking about in the previous section, but it also includes references to demand and supply. What about demand and supply?
Well, Ben Thompson talks about ways to think about social media firms today. He asks us to think about Facebook for example, and asks us to reflect upon where the demand and the supply for Facebook as a service comes from.
Here’s my understanding, from having read Ben Thompson’s essay: Facebook’s demand comes from folks like you and me wanting to find out what, well, folks like you and me are up to. What are our friends, our neighbors, our colleagues and our acquaintances up to? What are their friends, neighbors, colleagues and acquaintances up to? That’s the demand.
What about the supply? Well, that’s what makes Facebook such a revolutionary company – or at least, made it revolutionary back then. The supply, as it turns out, also came from folks like you and me. We were (and are) each other’s friends, neighbors, colleagues and acquaintances. Our News Feed was mostly driven by us in terms of demand, and driven by us in terms of supply. Augmented by related stuff, by our likes and dislikes, and by the news sources we follow and all that, but demand and supply both come from our own networks.
TikTok, Thompson says, is also a social network, and its supply and demand are also user-driven, but it’s not people like us who create supply. It is just, well, people. TikTok “learns” what kind of videos we like to see, and the algorithm is optimized for what we like to see, regardless of who has created it.
But neither Facebook nor TikTok is in the business of generating content for us to see. The former, to reiterate, shows us stuff that our network has created or liked, while the latter shows us stuff that it thinks we will like, regardless of who has created it.
But how long, Ben Thompson’s essay asks, before AI figures out how to create not just pictures, but entire videos? And when I say videos, I mean not just deep fakes, which already exist, but eerily accurate videos with depth, walkthroughs, nuance, shifting timelines and all the rest of it.
Well, I remember taking an hour to download just one song twenty years ago, and I can now stream any song in the world on demand. And soon (already?) I will be able to “create” any song that I like, by specifying mood, genre, and the kind of lyrics I want.
How long before I can ask AI to create a movie just for me? Or just me and my wife? Or a cartoon flick involving me and my daughter? How long, in other words, before my family’s demand for entertainment is created by an AI, and the supply comes from that AI being able to tap into our personal photo/video collection and make up a movie involving us as cartoon characters?
Millions of households, cosily ensconced in our homes on Saturday night, watching movies involving us in whatever scenario we like. For homework, read The Secret Life of Walter Mitty by Thurber (the short story, please, not the movie!), Snow Crash by Neal Stephenson, and The Seven Basic Plots by Christopher Booker.
There are many tantalizing questions that arise from thinking about this, and I’m sure some have struck you too. But I don’t want to get into any of them right now.
Today’s blog post has a very specific point: it doesn’t matter how complicated the issue at hand is. Simple concepts and principles can go a very long way in helping you frame the relevant questions required for analysis. Answering them won’t be easy, as in this case, but hey, asking (some of) the right questions is a great place to start.
The bad news first: I don’t really know. The word “really” is redundant in that sentence: I have no clue what Web 3.0 is about.
But writing about something, and that in the public domain, is a great way to learn, and what I’m going to try and do is build up a series of occasional posts about Web 3.0. Bear in mind that I am the exact opposite of an expert when it comes to this topic, and these posts are being written as much to help myself learn about this topic as anything else. But that being said, I hope you get to learn something from this exercise as well!
What makes for a “good” economic system?
A system, to me, is a set of things that work together to generate some output. That output may be planned, unplanned or both. The things that are working together may be living, non-living or both. They may be working together on the basis of a conscious plan, or otherwise.
An economic system is one (to me) in which these things that are working together have at least an implicit knowledge of the fact that there is a cost attached to whatever it is that they’re doing when they’re a part of the system. If they were not to be doing x, they could have done y instead. And so choosing to do x has at least the cost of not being able to do something else. There could be other costs as well. These costs could be measured in terms of money, or in terms of time, or perhaps something else. But those costs exist, they matter, and they can be (at least implicitly) priced. That makes it an economic system.
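The notion of opportunity cost above can be made concrete with a toy calculation. A minimal sketch, where the activities and payoff numbers are entirely invented for illustration:

```python
# Toy illustration of opportunity cost: choosing one activity means
# forgoing the value of the best alternative you did not choose.

def opportunity_cost(chosen, payoffs):
    """Value of the best forgone alternative, given a dict of payoffs."""
    forgone = {k: v for k, v in payoffs.items() if k != chosen}
    return max(forgone.values())

# Hypothetical payoffs (in rupees) for one hour of a student's time
payoffs = {"attend_lecture": 500, "part_time_job": 300, "watch_tv": 50}

# Attending the lecture "costs" the 300 rupees the job would have paid,
# even though no money changes hands
print(opportunity_cost("attend_lecture", payoffs))  # 300
```

The point is only that the cost of doing x is the value of the best y you gave up, and that this cost exists and can be priced even when it never shows up in an invoice.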
What is a “good” economic system? A good economic system is one in which some (and preferably all) of the following things happen:
As much output is generated as is possible…
using as few inputs as possible.
This output is generated in as sustainable a fashion as possible, that is, without destroying the ability to produce more output in the long run.
All potential and actual sources of inputs are given a level-playing field as far as possible. One input isn’t discriminated against relative to the other.
The system works with as little friction as possible.
The system has an appropriate level of risk-management built into it.
But what does this mean in practice, using real world examples? Consider the system that I am a part of, the education system at the Gokhale Institute of Politics and Economics, which is where I work.
We should be able to produce as much learning as possible
using as few of our teaching resources (classrooms, faculty members, software, non-teaching staff, electricity etc.) as possible.
We should not work our resources into the ground over the long run – we should not work our resources too hard. It shouldn’t, paradoxically, become harder to recruit people into academia.
We ought to be indifferent to whether learning happens because of books in the library, faculty members in the classroom, or videos from Coursera or YouTube.
Requests such as letters of recommendation, transcripts to apply to foreign universities – all kinds of administrative tasks, really – should be handled as quickly and painlessly as possible.
The system should be able to handle shocks (big and small). A faculty member not turning up on a particular day shouldn’t bring the system to a halt, and a pandemic shouldn’t bring operations to a halt either.
There’s much, much more to a “good” economic system, of course, but hopefully you see what I mean. Try and build up an idea of what is a good economic system for whatever system you happen to be a part of. It can be your household, your corporate job, this country or any other system, small or large.
Now, Web 3.0: rather than try to understand what it does in terms of the technology, or in terms of the jargon that it seems so very riddled with, let us try and understand it in terms of what makes for a good economic system.
That is, unless Web 3.0 strengthens the things that make an economic system “good”, or weakens the things that make it bad (or both), it doesn’t really change the world in any meaningful way.
So, Web 3.0…
Does it help increase the output of a system?
Or maintain the same level of output, while reducing the commensurate levels of inputs?
Does it increase the sustainability of the system?
Does it level the playing field for all inputs?
Does it reduce friction?
Does it help build in better risk management?
If it does all of these things, it really is a magic wand. If it does at least one of these things better, but not at the cost of making any of the others much worse, then it is a useful thing. If it does none of these, it is plain hype.
This is my framework for trying to wrap my head around, well, anything. And hopefully it helps us understand Web 3.0!
I will say this much: writing this out helped me understand this write-up much better:
Send USDC from a wallet with your ENS to the entity’s ENS and get digital mirror assets back into your wallet. These assets are held in a mirrortable, which is an on-chain replica of a primary cap table maintained in an off-chain system like Carta for compliance purposes. The terms of these assets are kept current via periodic updates of the mirrortable’s smart contract.
At the moment, and that as a consequence of having written all of this out, this is where I find myself: China is optimizing for power, and is willing to give up on innovation in the consumer internet space. America is optimizing for innovation in the consumer internet space, and is willing to cede power to big tech in terms of shaping up what society looks like in the near future. Have I framed this correctly? If yes, what are the potential ramifications in China, the US and the rest of the world? What ought to be the follow-up questions? Why? Who else should I be following and reading to learn more about these issues?
How might I have been wrong? V Ananta Nageswaran and Nitin Pai wrote posts recently that helped me learn about some answers to at least the first of my questions above.
Let’s find out how I might have been wrong!
Noah Smith had hypothesized that the tech crackdown is because China’s goals are about asserting its power internationally. And not soft power, but the tanks and boots on the ground type power.
China may simply see things differently. It’s possible that the Chinese government has decided that the profits of companies like Alibaba and Tencent come more from rents than from actual value added — that they’re simply squatting on unproductive digital land, by exploiting first-mover advantage to capture strong network effects, or that the IP system is biased to favor these companies, or something like that. There are certainly those in America who believe that Facebook and Google produce little of value relative to the profit they rake in; maybe China’s leaders, for reasons that will remain forever opaque to us, have simply reached the same conclusion.
Now, it’s unclear if the opportunity costs of talent are so stark in China that the government must crack down on consumer internet companies in order to incentivise people to get into hardware. But Smith’s explanation is consistent with the popular view that China’s leaders are astute and inscrutable strategists who think really long term. … My answer is simple: it’s about political power. In fact, if we frame the question differently, the answer becomes readily apparent: “Why is the autocratic leader of the Chinese Communist Party attacking media companies that directly reach almost everyone in the country?” Because size, reach and control of consumer data gives them narrative power comparable to what the Party has. Further, the ability to tap foreign capital gives them more freedom, albeit of the kind with Chinese characteristics. The Party doesn’t like that. And Xi likes it even less. That is why he moved aggressively to pre-empt a challenge to the Party’s narrative dominance and preserve its monopoly on power.
Another way to think about it: it is about soft power, but the soft power that the CCP would like to project to its own people. There is only one storyteller that shapes the societal narrative in China, and anybody else who wants to play is going to be cut down to size. Ruthlessly.
(Of course, it is not just about soft power being projected to its own people. But nobody in China is crazy enough to want to play the hard power game with the CCP. That’s a well established monopoly. But Nitin is saying that the CCP wants all aspects of power to be within its complete control, soft and hard.)
As he puts it towards the end of his post:
It’s consistent with what it has been doing since Mao Zedong’s time: ruthlessly cutting down challenges to its hold on Chinese minds. That’s it, folks. Nothing more to see here.
Ananta Nageswaran also blogged about this yesterday:
In the meantime, a blog post by Noah Smith, an economics teacher and a (former?) columnist for Bloomberg wrote that China’s crackdown on consumer-internet companies was to ensure that China’s financial and intellectual resources were not diverted for creating low value addition. It did not strike him that such an explanation – if it were true – did not do any credit to China. It reeks of central planning and omniscience. Two, even if it were true and even if it was meant to be a benign explanation, malign explanations cannot be ruled out and need not be ruled out. Mutually exclusive explanations help keep the narrative simple and, two, it helps make the narrator appear smart because he/she has figured out the ultimate explanation. More often than not, reality is grey. Or, it has many shades.
In other words, he’s saying that even if what Noah is saying makes sense, there is more to it than that. It’s not just the opportunity cost of having some of the best minds in China work on consumer tech. What else might it be? Ananta Nageswaran finds himself in agreement with Nitin Pai:
I agree. It is political power and the interpretation (of Xi and correctly so) that information (Nitin calls it mindshare) about people’s behaviour that these companies have give them the ability (and the chance) to set the narrative later, in Xi’s thinking, seizing it from the CCP.
A minor point I would like to make here: I don’t think information and mindshare are the same thing, though they certainly are related. The information that tech firms have allows them to shape (sometimes in entirely unexpected ways!) the narrative, and therefore influence mindshare. Information is the tool and mindshare is the outcome – or at least, that is how I see it.
Please read Sanjay Anandram’s quotes from that blogpost too. I learnt about (and am going to shamelessly borrow) the RFRE principle.
So is it Noah’s story, or Nitin and Ananta Nageswaran’s? Regular readers know what’s coming next: the truth lies somewhere in the middle! Or at least, that’s my take, and it seems to be Ananta Nageswaran’s as well:
Of the three explanations that have been on offer, Noah Smith’s is the least persuasive. In some respects, Nitin and Sanjay are aligned and they diverge in some other aspects. As always, the real motivation behind some of the recent decisions of the government in China will have elements of all three and more.
To a student reading this: spectrum based thinking is a gift. Reasonable people can and should argue about where the truth lies, but always think intervals, never point estimates.
And having read all of the pieces that I have linked to across these two posts, I find myself in the same space on the spectrum as Ananta Nageswaran. That is, it’s not just the Noah Smith/Dan Wang argument at play (regarding which, Noah has updates. Scroll to the bottom of the post where he links to pieces that bolster his argument). But it is more about the CCP asserting its power.
Ananta Nageswaran ends with a Bruno Maçães quote: “the main players compete not under a common set of rules but in order to define what the rules are”.
It is a weird coincidence, but I just introduced some students to Friedrich List yesterday. The more things change…
I’ve been mulling over three separate columns/posts/interviews over the past few days. Today’s post was supposed to be me reflecting on my thoughts about all of them together, but as it turns out, I have more questions than I do thoughts.
Worse (or if you think like I do, better) I don’t even have a framework to go through these questions in my own head. That is to say, I do not have a mental model that helps me think about which questions to ask first, and which later, and why.
So this is not me copping out from writing today’s post. This is me asking all of you for help. What framework should I be using to think about these three pieces of content together?
All three posts revolve around technology, and two are about the Chinese tech crackdown. Two are about innovation in tech and America. And one of the three is, obviously, the intersection set.
The first is a write-up from Noah Smith’s Substack (which you should read, and if you can afford it, pay for. Note that I am well over my budget for subscribing to content for this year, so I don’t. But based on what I have read of his free posts, I have no hesitation in recommending it to you.)
In other words, the crackdown on China’s internet industry seems to be part of the country’s emerging national industrial policy. Instead of simply letting local governments throw resources at whatever they think will produce rapid growth (the strategy in the 90s and early 00s), China’s top leaders are now trying to direct the country’s industrial mix toward what they think will serve the nation as a whole. And what do they think will serve the nation as a whole? My guess is: Power. Geopolitical and military power for the People’s Republic of China, relative to its rival nations. If you’re going to fight a cold war or a hot war against the U.S. or Japan or India or whoever, you need a bunch of military hardware. That means you need materials, engines, fuel, engineering and design, and so on. You also need chips to run that hardware, because military tech is increasingly software-driven. And of course you need firmware as well. You’ll also need surveillance capability, for keeping an eye on your opponents, for any attempts you make to destabilize them, and for maintaining social control in case they try to destabilize you.
As always, read the whole thing. But in particular, read his excerpts from Dan Wang’s letters from 2019 and 2020. It goes without saying that you should subscribe to Dan Wang’s annual letters (here are past EFE posts that mention Dan Wang). As Noah Smith says, China is optimizing for power, and is willing to pay for it by sacrificing, at least in part, the “consumer internet”.
That makes sense, in the sense that I understand the argument.
The second is an excellent column in the Economist, from its business section. Schumpeter is a column worth reading almost always, but this edition in particular was really thought-provoking. The column starts off by comparing how China and the United States of America are dealing with the influence of “big” technology firms.
As the column says, when it comes to the following:
The speed with which China has dealt with the problem
The scope of its tech crackdown
The harshness of the punishments (fines is just one part of the Chinese government’s arsenal)
… China has America beat hollow. As Noah Smith argues, China is optimizing for power, and has done so for ages. As he mentions elsewhere in his essay, “in classic CCP fashion, it was time to smash”. Well, they have.
But the concluding paragraph of the Schumpeter column is worth savoring in full, and over multiple mugs of coffee:
But autarky carries its own risks. Already, Chinese tech darlings are cancelling plans to issue shares in America, derailing a gravy train that allowed Chinese firms listed there to reach a market value of nearly $2trn. The techlash also risks stifling the animal spirits that make China a hotbed of innovation. Ironically, at just the moment China is applying water torture to its tech giants, both it and America are seeing a flurry of digital competition, as incumbents invade each other’s turf and are taken on by new challengers. It is a time for encouragement, not crackdowns. Instead of tearing down the tech giants, American trustbusters should strengthen what has always served the country best: free markets, rule of law and due process. That is the one lesson America can teach China. It is the most important lesson of all.
This makes sense, in the sense that I understand the argument being made. Given what little I understand of economics and how the world works, I am in complete agreement with the idea being espoused.
The third is an interview of Mark Zuckerberg by Casey Newton of the Verge.
It is a difficult interview to read, and it is also a great argument for why we should all read more science fiction (note that the title of today’s post is a little bit meta, and that in more ways than one). Read books by Neal Stephenson. Listen to his conversation with Tyler Cowen. Read these essays by Matthew Ball.
Towards the end of the interview, Casey Newton asks Mark Zuckerberg about the role of the government, and the importance of public spaces, in the metaverse. Don’t worry right now if the concept of the metaverse seems a little abstract. Twenty years ago, driverless cars and small devices that could stream for you all of the world’s content (ever produced) also seemed a little abstract. Techno-optimism is great, I heavily recommend it to you.
Here is Mark Zuckerberg’s answer:
I certainly think that there should be public spaces. I think that’s important for having healthy communities and a healthy sphere. And I think that those spaces range from things that are government-built or administered, to nonprofits, which I guess are technically private, but are operating in the public interest without a profit goal. So you think about things like Wikipedia, which I think is really like a public good, even though it’s run by a nonprofit, not a government. One of the things that I’ve been thinking about a lot is: there are a set of big technology problems today that, it’s almost like 50 years ago the government, I guess I’m talking about the US government here specifically, would have invested a ton in building out these things. But now in this country, that’s not quite how it’s working. Instead, you have a number of Big Tech companies or big companies that are investing in building out this infrastructure. And I don’t know, maybe that’s the right way for it to work. When 5G is rolled out, it’s tough for a startup to really go fund the tens of billions of dollars of infrastructure to go do that. So, you have Verizon and AT&T and T-Mobile do it, and that’s pretty good, I guess. But there are a bunch of big technology problems, [like] defining augmented and virtual reality in this overall metaverse vision. I think that that’s going to be a problem that is going to require tens of billions of dollars of research, but should unlock hundreds of billions of dollars of value or more. I think that there are things like self-driving cars, which seems like it’s turning out to be pretty close to AI-complete; needing to almost solve a lot of different aspects of AI to really fully solve that. So that’s just a massive problem in terms of investment. And some of the aspects around space exploration. Disease research is still one that our government does a lot in. 
But I do wonder, especially when we look at China, for example, which does invest a lot directly in these spaces, how that is kind of setting this up to go over time. But look, in the absence of that, yeah, I do think having public spaces is a healthy part of communities. And you’re going to have creators and developers with all different motivations, even on the mobile internet and internet today, you have a lot of people who are interested in doing public-good work. Even if they’re not directly funded by the government to do that. And I think that certainly, you’re going to have a lot of that here as well. But yeah, I do think that there is this long-term question where, as a society, we should want a very large amount of capital and our most talented technical people working on these futuristic problems, to lead and innovate in these spaces. And I think that there probably is a little bit more of a balance of space, where some of this could come from government, but I think startups and the open-source community and the creator economy is going to fill in a huge amount of this as well.
I think he’s saying that the truth lies somewhere in the middle, and god knows I’m sympathetic to that argument. But who decides where in the middle? Who determines the breadth of this spectrum, governments or businesses? With what objective, over what time horizon, and with what opportunity costs?
At the moment, and that as a consequence of having written all of this out, this is where I find myself:
China is optimizing for power, and is willing to give up on innovation in the consumer internet space. America is optimizing for innovation in the consumer internet space, and is willing to cede power to big tech in terms of shaping up what society looks like in the near future.
Have I framed this correctly? If yes, what are the potential ramifications in China, the US and the rest of the world? What ought to be the follow-up questions? Why? Who else should I be following and reading to learn more about these issues?
I don’t have the answers to these questions, and would appreciate the help.
Every now and then (and I wish it was more often), I like reading something so much that I don’t just take notes, I put them down here rather than in Roam. It forces me to take more careful, structured notes, and the act of writing it all down allows for more thoughts to bubble up – which is the whole point, no?
Before we begin, a quick aside: most of my thoughts and reactions to the interview are because of what I do, and where I’m located. I am in charge of one course at my University, and am also in charge of placements. This University is located in India. So my excerpts, and my reaction to those excerpts are contingent on these two things.
We’ll follow the usual format: excerpts, and then my thoughts.
Consider the three primary markers of the American Dream, or more generally middle class success — housing, education, and health care. You have written at length on how all three of these success markers seem further and further out of reach for many regular people. I think — and you would agree? — that these three deficits are not only causing problems for how people live and how the economy functions, but are fouling our politics quite dramatically.
Education, in India at any rate, can either scale, or it can maintain quality. It has never been able to do both. How to increase scale without losing quality, and how to maintain affordable quality without gaining scale – both of these are really, really difficult questions to answer. The impossible trilemma of higher education in India, as it were. The BSc programme at the Gokhale Institute is (in my opinion, and it is of course a biased one) affordable quality. Far from perfect, I’ll be the first one to admit, and could always be a whole lot better, but I genuinely do think we’re doing good work. But scaling is impossible. And we all know of educational institutes that have managed to scale really well, but don’t do so well when it comes to quality.
And when I say quality, it is very much a “you know it when you see it” definition I am going with. Not NAAC reports or percentage of students placed.
Housing, education, and health care are each ferociously complex, but what they have in common is skyrocketing prices in a world where technology is driving down prices of most other products and services.
The Archimedes reference is obvious, but that’s not the reason I want to focus so much on just this one sentence. How exactly is software a lever on the entire world? Marc gives the examples of Lyft and Airbnb in the interview, but they’re the outcomes for having deployed software. The inner mechanism (I think) is that software goes a very long way towards reducing transaction costs, search costs and therefore overall friction in economic transactions.
The guy driving the rickshaw, and waiting for a customer at a traffic intersection isn’t aware of the person two blocks away who is outside their apartment building, waiting for a ride. Search costs. These are minimized because of the app.
The whole “bhaiya, xyz jaana hai” – “Itna duur, itna late, double bhada” – “kya bhaiya, itna thodi lagta hai” song and dance (“I need to get to xyz” – “That far, this late? Double fare” – “Come on, it doesn’t cost that much”) is avoided (although not always in a way that is fair to the rickshaw driver). Transaction costs. These are minimized because of the app.
And so more transactions take place than they would have if Uber/Lyft/Ola had not been around. And the same is true for Zomato, or Swiggy, or Airbnb or… you get the picture. This (I think) is the lever at play. More gets done because software is involved.
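One way to see the lever at work: with an app, matching a rider to a nearby driver becomes a cheap computation instead of a physical search at street corners. A minimal sketch, with invented coordinates and a naive nearest-driver rule (real dispatch systems are, of course, far more sophisticated):

```python
# Naive illustration of how an app collapses search costs: match a rider
# to the nearest available driver, instead of both parties waiting at
# intersections hoping to stumble into each other.

def nearest_driver(rider, drivers):
    """Return (name, position) of the driver closest to the rider."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return min(drivers.items(), key=lambda kv: dist(rider, kv[1]))

# Hypothetical positions on a city grid (x, y in km)
drivers = {"A": (0.0, 0.0), "B": (2.0, 1.0), "C": (5.0, 5.0)}
rider = (1.8, 1.1)

name, pos = nearest_driver(rider, drivers)
print(name)  # B -- the driver two blocks away whom the rider could never see
```

The transaction that happens here might simply never have happened without the software; multiply that across millions of rides, meals and stays, and that is the lever.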
There are legitimate worries about whether the system is always fair, always perfect – and the short answer is always “no”. A better question to ask is if the world is better for these services being around – and the short answer (I think) is “yes”. The best question to ask is how these services could be made better – and Andreessen has suggestions later on in the interview about this.
Software is alchemy that turns bytes into actions by and on atoms.
A lovely way to think about what software does, when used well.
Everywhere software touches the real world, the real world gets better, and less expensive, and more efficient, and more adaptable, and better for people. And this is especially true for the real world domains that have been least touched by software until now — such as housing, education, and health care.
The Baumol effect has some potentially disturbing implications:
When we recognize that all prices are relative prices the following simple yet deep facts follow: If productivity increases in some industries more than others then, ceteris paribus, some prices must increase. Over time, all real prices cannot fall.
As a society it appears that with greater wealth we have wanted to consume more of the goods like education and health care that have relatively slow productivity growth. Thus, preferences have magnified the Baumol effect.
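The quoted claim can be checked with two-sector arithmetic. If the price of each good is its unit labour cost (wage divided by productivity), and a common wage tracks productivity in the progressive sector, then the stagnant sector's relative price must rise. The numbers below are invented purely for illustration:

```python
# Two-sector Baumol illustration. Price of each good = wage / productivity
# (unit labour cost). A common wage rises with productivity in the
# progressive sector, so the stagnant sector's relative price must rise.

def relative_price(wage, prod_stagnant, prod_progressive):
    """Price of the stagnant good relative to the progressive good."""
    p_stagnant = wage / prod_stagnant
    p_progressive = wage / prod_progressive
    return p_stagnant / p_progressive

# Period 0: equal productivity in both sectors
before = relative_price(wage=100, prod_stagnant=1.0, prod_progressive=1.0)

# Period 1: manufacturing productivity doubles and wages double to match,
# but teaching (say) productivity is unchanged
after = relative_price(wage=200, prod_stagnant=1.0, prod_progressive=2.0)

print(before, after)  # 1.0 2.0 -- the stagnant sector got relatively pricier
```

No one in the stagnant sector did anything wrong: its price rose precisely because everyone else got more productive, which is the "deep fact" the quote is pointing at.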
But I think what Marc Andreessen is (in effect) saying is this: sure, even accounting for the Baumol effect, are there ways to reduce search and transaction costs in education, healthcare and housing? And if yes, can we drive down prices in these sectors while maintaining (or even increasing!) quality? That’s the power and potential of software.
And yes, each one of us will react with differing levels of skepticism to the proposition. That’s fine, and I’d say desirable. But the idea is worth thinking about, no? (And if you say no, it’s not, I’d love to hear why you think so.)
It’s more the importance of communication as the foundation of everything that people do, and how we open up new ways for people to communicate, collaborate, and coordinate. Like software, communication technology is something that people tend to pooh-pooh, or even scorn — but, when you compare what any one of us can do alone, to what we can do when we are part of a group or a community or a company or a nation, there’s no question that communication forms the backbone of virtually all progress in the world. And so improving our ability to communicate is fundamental.
Remember the “If you’re the smartest person in the room, you’re in the wrong room” quote2? The potential advantage of Clubhouse, Spaces (or whatever it will be called on all the apps that copy the concept) is that it solves for the geographic constraint when it comes to your menu of rooms to choose from. That’s what makes Twitter so great too – you never have to worry about being the smartest person on Twitter (and I mean that in the nicest way possible!).
It is a great time to be young and angry about the quality of the education system, because the internet can solve some of your problems better than was ever possible in the past.
It’s so striking that in our primarily textual technological world, people are instantly enthusiastic about the opportunity to participate in oral culture online — there is something timeless about talking in groups, whether it’s around a campfire 5,000 years ago or on an app today
At the Gokhale Institute, we were lucky enough to listen to a talk by Visvak where he spoke about some of the positive aspects of Clubhouse during the recent elections in Tamil Nadu. The ability to listen to, and possibly chat with, people with skin in the game who are actually Doing The Work, is truly remarkable. And again, what we have isn’t perfect, and it could be better, and there will be problems. The question to ask is if the world is better with Clubhouse (and its imitations) or without? And the better question to ask is how to improve upon it. But when I have the opportunity to listen to Krish Ashok talk about food on Twitter Spaces, and I see people just straight up ask Krish Ashok to host a Spaces about chai – well, what a time to be alive. No? (And if you say no, it’s not, I’d love to hear why you think so.)
Substack is causing enormous amounts of new quality writing to come into existence that would never have existed otherwise — raising the level of idea formation and discourse in a world that badly needs it. So much of legacy media, due to the technological limitations of distribution technologies like newspapers and television, makes you stupid. Substack is the profit engine for the stuff that makes you smart.
I don’t exactly disagree with Marc Andreessen over here; but I have a lot of questions. I’ll list them here:
Substack is a substitute for blogs or newsletters, but with the additional ability to charge subscribers for some (or all) of your posts. Is that a good definition of what Substack is?
Not all Substack writers will initially get enough paying subscribers. In fact, I think it is safe to say that most will never get (enough) paying subscribers. If the first sentence in the excerpt above is to be agreed with, what other incentive is at play for “enormous amounts of new quality writing” to come through? This is not intended as sarcasm or implied criticism – I really would like to know.
Especially in India, I completely agree that legacy media makes you stupid. It is the middle part of that sentence that I am not so sure about: I do not think it is just the technological limitations of distribution technologies that are at play. It’s a much broader question, but what other factors would you think are at play, and how does Substack help mitigate those other problems?
Is bundling inevitable on Substack? Shouldn’t it be? How will this play out? Will Revue stand a better chance as a bundle because it can be combined with so many other offerings?
This isn’t a complete list of questions, and I am not sure of the answers. But this is the part of the interview that I understood the least, for sure.
A longish excerpt in a longish post, but a very important one:
M.A.: My “software eats the world” thesis plays out in business in three stages: 1. A product is transformed from non-software to (entirely or mainly) software. Music compact discs become MP3’s and then streams. An alarm clock goes from a physical device on your bedside table to an app on your phone. A car goes from bent metal and glass, to software wrapped in bent metal and glass. 2. The producers of these products are transformed from manufacturing or media or financial services companies to (entirely or mainly) software companies. Their core capability becomes creating and running software. This is, of course, a very different discipline and culture from what they used to do. 3. As software redefines the product, and assuming a competitive market not protected by a monopoly position or regulatory capture, the nature of competition in the industry changes until the best software wins, which means the best software company wins. The best software company may be an incumbent or a startup, whoever makes the best software.
So this is the part of the Domino’s story that struck me more than anything, when he simply declared for all to hear, we no longer think of ourselves as a pizza company. We think of ourselves as a technology company. I said, excuse me? Well, turns out, they’re headquartered in Ann Arbor, Michigan. They’ve got 800 people working in headquarters. Fully 400 of those, half of their headquarters employees, are engaged in software analytics and big data. They really– once they finally got the product right, they really are, from this point going forward, as much a technology company as they are a food company. And many of the initiatives have to do with making it as easy, as convenient, as kind of natural and impulsive almost to order Domino’s, much more so than any other pizza company.
But, but, but – and this is where the “what I do and where I’m from part” really comes into play – has higher education in India successfully (or even partially) gone through Marc Andreessen’s three stage transformation?
Short answer, no.
Long answer: because “assuming a competitive market not protected by a monopoly position or regulatory capture” doesn’t apply in the case of higher education in India (yet). See this, this, this, and this from earlier on in EFE.
But especially see this! College, as I’ve written in this post, is a bundle. It sells you the learning (Coursera), the signaling (LinkedIn) and the peer network (Starbucks):
If you want to go up against college as a business, you need to sell the same thing that college is selling. And the college sells you a bundle. A business that seeks to do better than college must do better on all three counts, not just on learning. All of the online learning businesses – Coursera is just one very good example – aren’t able to fill all of the three vertices just yet. And that’s why education hasn’t been truly shaken down by the internet just yet: Because college today is more about signaling than it is about learning, and because when you pay money to a college, you are getting a bundle.
And partly by regulatory capture (UGC approved degree, yay!) and partly by cultural conformity (Sharmaji ka beta went to IIT. Whaddya mean, you will learn from YouTube? Kuch bhi!) we still celebrate getting into a “top” college.
Since “top” colleges know this, there is no incentive for them to change. And since ed-tech firms in India also know this, they build excellent software aimed simply at getting students into these colleges3.
And so we in the education sector in India continue to wait for the revolution.
Phew! That’s enough for today. I’ll be back tomorrow with Part II of my reflections on this interview.
that is a sweeping generalization, yes. I’m more than happy to be corrected on this. Please tell me more about ed-tech firms that are about learning for its own sake, not about entrance examinations[↩]
The background to this is that Tyler Cowen had written a book some years ago called The Great Stagnation. The basic thesis in that book is that innovation was slowing down, since the low hanging fruit in terms of technical innovation had already been picked. But the book also spoke about how this was not to say that innovation was forever going to be slow – it’s just that it had slowed down around then.
He wasn’t the only one, by the way. There were quite a few folks who were less than impressed with technological progress about a decade ago. Everybody has heard of the comparison between Twitter and flying cars, but there’s much more where that came from:
In the 2010s, we largely decided that we were in the middle of a technological stagnation. Tyler Cowen’s The Great Stagnation came out in 2011, Robert Gordon’s The Rise and Fall of American Growth came out in 2016. Peter Thiel declared that “we wanted flying cars, instead we got 140 characters”. David Graeber agreed. Paul Krugman lamented the lack of new kitchen appliances. Some economists asked whether ideas were simply getting harder to find. When the startup Juicero came out with a fancy new kitchen appliance, it was widely mocked as a symbol of what was wrong with the tech industry. “Tech” became largely synonymous with software companies, particularly social media, gig economy companies, and venture capital firms. Many questioned whether those sorts of innovations were making society better at all. So it’s fair to say that the 2010s were a decade of deep techno-pessimism.
By the way, on a related note (although this deserves its own post, which will be out tomorrow) you may want to read this post by Morgan Housel in this regard.
In any case, Covid-19 has in some ways accelerated innovation, and that’s the point that Bruno Macaes1 is making in the article above.
Take transportation and energy: the demand for driverless cars and delivery vans boomed last year because people were fearful of getting infected. In response companies quickly scaled up their plans. Last October, for example, Waymo announced the launch of a taxi service that is fully driverless. Walmart announced in December its plans to use fully autonomous box trucks to make deliveries in Arkansas later this year. As retail goes online as a result of the pandemic, massive delivery volumes are now placing greater pressure on others to follow suit.
Note that without Covid-19, we would be having a debate about automation, jobs and how technology is promoting inequality. That may well be true. But this is precisely why we study opportunity costs in college!
Perhaps the most interesting (to me) advance this past year has been in our understanding of how protein folding happens. Understanding is perhaps the wrong word to use (and note that I know as much biology as forecasters know about the future), but we have trained machines to understand it.
As I understand it (and please note once again that I am no expert) this has the potential to change by orders of magnitude how we approach the treatment of a variety of diseases in this century.
But if you are anything like me, you are also curious to know about what else has been going on this past year. Again, before we proceed: this post is about the “what” in terms of scientific advancement. Tomorrow is a rumination about the “why”.
First, I’d referred to this interview in an earlier post, an interview of Patrick Collison by Noah Smith. It refers to some of what we have been speaking about, but much more as well:
I think the 2020s are when we’ll finally start to understand what’s going on with RNA and neurons. Basically, the prevailing idea has been that connections between neurons are how cognition works. (And that’s what neural networks and deep learning are modeled after.) But it looks increasingly likely that stuff that happens inside the neurons — and inside the connections — is an important part of the story. One suggestion is that RNA is actually part of how neurons think and not just an incidental intermediate thing between the genome and proteins. Elsewhere, we’re starting to spend more time investigating how the microbiome and the immune system interact with things like cancer and neurodegenerative conditions, and I’m optimistic about how that might yield significantly improved treatments. With Alzheimer’s, say, we were stuck for a long time on variants of plaque hypotheses (“this bad stuff accumulates and we have to stop it accumulating”)… it’s now getting hard to ignore the fact that the immune system clearly plays a major — and maybe dominant — role. Elsewhere, we’re plausibly on the cusp of effective dengue, AIDS, and malaria vaccines. That’s pretty huge.
The tiny red vertical line tells you when the cause of the disease was identified, and the tiny green vertical line tells you when the cure was licensed in the United States of America. And now think of what happened with Covid-19!2
It is easy to get caught up in the short term pessimistic narrative, and be overwhelmed by it. It happened to me last year, as I am sure it did to many, many other people on this planet. I gave up on what until then had been my proudest achievement in terms of my work: posting here every single day.
So when things are really bad and grim (and again, this is not over yet), look to the bright side. And not just because it’s a good thing to do! But also because the bright side is likely to be brighter precisely because of everything else being so goddamn dark.
Tomorrow, I’ll attempt to answer a question I have, and I am sure you do as well: why?
I don’t know how to type out a c with a cedilla in WordPress, my apologies[↩]
Please note, covid-19 ain’t over yet, especially here in India. That’s not the point though. The point is to ask if the kind of progress we have made this past year would even have been possible in the past.[↩]
The reverse is also probably true, more’s the pity[↩]
First, what is Palantir Technologies? Here’s Wikipedia – note that I have combined sentences across different paragraphs in this excerpt:
Palantir Technologies is a public American software company that specializes in big data analytics. Headquartered in Denver, Colorado, it was founded by Peter Thiel, Nathan Gettings, Joe Lonsdale, Stephen Cohen, and Alex Karp.
The company is known for three projects in particular: Palantir Gotham, Palantir Metropolis and Palantir Foundry. Palantir Gotham is used by counter-terrorism analysts at offices in the United States Intelligence Community (USIC) and United States Department of Defense…
…Palantir Metropolis is used by hedge funds, banks, and financial services firms…
…Palantir Foundry is used by corporate clients such as Morgan Stanley, Merck KGaA, Airbus, and Fiat Chrysler Automobiles NV
Its two primary software programs, Gotham and Foundry, gather and process vast quantities of data in order to identify connections, patterns and trends that might elude human analysts. The stated goal of all this “data integration” is to help organizations make better decisions, and many of Palantir’s customers consider its technology to be transformative.
But the story gets more interesting in the very next line in the article…
Karp claims a loftier ambition, however. “We built our company to support the West,” he says. To that end, Palantir says it does not do business in countries that it considers adversarial to the U.S. and its allies, namely China and Russia. In the company’s early days, Palantir employees, invoking Tolkien, described their mission as “saving the shire.”
There are two questions at play here, really. First, what does Palantir Technologies do (that’s the first excerpt from the NYT story)? And second, why does it do what it does (and that’s the excerpt right above)?
Now, the reason I find this so interesting is that the instinctive argument that you might want to make against Palantir Technologies is “but privacy!”. And the second excerpt above is, in a sense, Palantir’s response.
Although Palantir claims it does not store or sell client data and has incorporated into its software what it insists are robust privacy controls, those who worry about the sanctity of personal information see Palantir as a particularly malignant avatar of the Big Data revolution. Karp himself doesn’t deny the risk. “Every technology is dangerous,” he says, “including ours.”
“Technology is technology – what you do with it is what matters” is a rather old argument, but that’s the argument being used here. There’s more though: if we don’t, somebody else will. Better the known devil, etc.
Once the data has been integrated, it can be presented in the form of tables, graphs, timelines, heat maps, artificial-intelligence models, histograms, spider diagrams and geospatial analysis. It is a digital panopticon, and having sat through several Palantir demos, I can report that the interface is impressive — the search results are strikingly elegant and easy to understand.
Elsewhere in the article, the author speaks about how the work isn’t glamorous, and is really just glorified plumbing. Well, maybe – but as anybody who has lived in a house will tell you, it is plenty important. Good plumbing is plumbing you don’t notice, but reap the benefits of – and that seems to be Palantir’s USP.
While Thiel provided most of the early money, the start-up secured an estimated $2 million from In-Q-Tel, a venture-capital firm that finances the development of technologies that can help the C.I.A. Karp says the real value of the In-Q-Tel investment was that it gave Palantir access to the C.I.A. analysts who were its intended clients.
Did In-Q-Tel pay to help start Palantir, or did it hire consultants for 2 million dollars? Did Palantir agree to work for only 2 million dollars to get access to the CIA?
Bottom-line: the world is a non-zero sum game.
According to Thiel, their conversations generally took place late at night in the law-school dorm. “It sounds too self-aggrandizing, but I think we were both genuinely interested in ideas,” he says. “He was more the socialist, I was more the capitalist. He was always talking about Marxist theories of alienated labor and how this was true of all the people around us.”
This excerpt is from a section which is about Karp figuring out his education and career, and we learn about his Jewish, rebellious background as well. I found this excerpt interesting because from Peter Thiel’s viewpoint, succeeding in selling the idea behind Palantir to Karp is one of the biggest validations there could possibly be. If he bought into the story, well, there must be something to it. Second, what better way to maintain checks and balances than to have somebody like Karp running the show?
In fact, Thiel hiring Karp for this job becomes more and more interesting the more you learn about Karp. Thiel has a quote in the article about needing someone who was smart and scrappy, but left unsaid, perhaps, is someone who was very unlike Thiel. And not just unlike Thiel, also unlike the typical CEO. A person who worries about the alienation of labor, likes solitary pursuits, and dreams of being an intellectual in Europe isn’t the person you would have in mind as the typical CEO of a firm like Palantir. But that, it would seem, was the whole point. Well, that, and being a bachelor by choice wouldn’t hurt, given the traveling nature of the job.
(Although there is a section in the article in which Karp insists that his being who he is hasn’t helped him or Palantir.)
Karp and Thiel say they had two overarching ambitions for Palantir early on. The first was to make software that could help keep the country safe from terrorism. The second was to prove that there was a technological solution to the challenge of balancing public safety and civil liberties — a “Hegelian” aspiration, as Karp puts it.
Karp and Thiel make for a Hegelian pair themselves!
When I asked Thiel about the risk of abuse with Palantir, he answered by referring to the company’s literary roots. “The Palantir device in the Tolkien books was a very ambiguous device in some ways,” he said. “There were a lot of people who looked into it and saw more than they should see, and things went badly wrong when they did.” But that didn’t mean the Palantir itself was flawed.
He continued: “The plot action was driven by the Palantir being used for good, not for evil. This reflected Tolkien’s cosmology that something that was made by the good elves would ultimately be used for good.”
A moment later, he added: “That’s roughly how I see it, that it is ultimately good and still very dangerous. In some ways, I think that was reflected in the choice of the name.”
I found this fascinating, and I also found it useful to think about this from the Wikipedia article about the original Palantir:
A major theme of palantír usage is that while the stones show real objects or events, they are an unreliable guide to action, and it is often unclear whether events are past or future: what is not shown may be more important than what is selectively presented. Further, users with sufficient power can choose what to show and what to conceal.
The technology is what it is – and as Karp himself points out, it is susceptible to misuse. More importantly, the technology in combination with the person(s) who are using it is, at least potentially, an even more dangerous tool.
Karp made clear that he was opposed to Trump’s immigration policies: “There are lots of reasons I don’t support the president; this is actually also one of them.” He told me that he was “personally very OK with changing the demographics of our country” but that a secure border was something that progressives should embrace. “I’ve been a progressive my whole life,” he said. “My family’s progressive, and we were never in favor of open borders.” He said borders “ensure that wages increase. It’s a progressive position.” When the left refuses to seriously address border security and immigration, he said, the right inevitably wins. To the extent that Palantir was helping to preserve public order, it was “empirically keeping the West more center-left.”
To understand a big data firm started by one of the world’s most successful VCs, one should end up reading about a German philosopher born in the 18th century – for what could possibly be more Hegelian than that excerpt?
And finally, the last sentence in the article:
“Palantir,” he said, “is the convergence of software and difficult positions.”
I’ve said it before and I’ll say it again: if you are really and truly into the Apple ecosystem, you could do a lot worse than just follow the blog Daring Fireball. I mean that in multiple ways. It is one of the most popular (if not the most popular) blogs on all things Apple. Popularity isn’t a great metric for deciding if reading something is worth your time, but in this case, the popularity is spot on.
But it’s more than that: another reason for following John Gruber’s blog is you learn about a trait that is common to this blog and to Apple: painstaking attention to detail. Read this article, but read especially footnote 2, to get a sense of what I mean. There are many, many examples of Apple’s painstaking attention to detail, of course, but this story is one of my favorites.
Prior to the patent filing, Apple carried out research into breathing rates during sleep and found that the average respiratory rate for adults is 12–20 breaths per minute. They used a rate of 12 cycles per minute (the low end of the scale) to derive a model for how the light should behave to create a feeling of calm and make the product seem more human.
But finding the right rate wasn’t enough, they needed the light to not just blink, but “breathe.” Most previous sleep LEDs were just driven directly from the system chipset and could only switch on or off and not have the gradual glow that Apple integrated into their devices. This meant going to the expense of creating a new controller chip which could drive the LED light and change its brightness when the main CPU was shut down, all without harming battery life.
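The arithmetic in the excerpt (12 breaths per minute, i.e. one cycle every 5 seconds) can be sketched as a brightness curve. The raised-cosine waveform below is my guess at what a gradual “breathing” glow looks like; Apple’s actual controller logic isn’t public:

```python
import math

def breathing_brightness(t, period=5.0):
    """Brightness (0..1) at time t seconds for a 'breathing' LED.

    12 breaths per minute means one cycle every 5 seconds, the rate
    the article says Apple chose. A raised cosine gives a gradual
    swell and fade instead of an on/off blink. Illustrative only.
    """
    return 0.5 * (1 - math.cos(2 * math.pi * t / period))

# Sample one full cycle at half-second steps: dark -> bright -> dark, smoothly.
samples = [round(breathing_brightness(t * 0.5), 2) for t in range(11)]
print(samples)  # [0.0, 0.1, 0.35, 0.65, 0.9, 1.0, 0.9, 0.65, 0.35, 0.1, 0.0]
```

The point of the dedicated controller chip in the excerpt is precisely that a curve like this has to keep updating many times a second while the main CPU sleeps.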
Anybody who is an Android/PC user instead (such as I am) can’t help but be envious of this trait in Apple products.
There are many, many books/videos/podcasts you can refer to to understand Apple’s growth in the early years (no one reference necessarily being better than the others), but let’s begin with this simple one. Here’s Wikipedia on the same topic, with much more detail, of course.
Before it became one of the wealthiest companies in the world, Apple Inc. was a tiny start-up in Los Altos, California. Co-founders Steve Jobs and Steve Wozniak, both college dropouts, wanted to develop the world’s first user-friendly personal computer. Their work ended up revolutionizing the computer industry and changing the face of consumer technology. Along with tech giants like Microsoft and IBM, Apple helped make computers part of everyday life, ushering in the Digital Revolution and the Information Age.
The production function and complementary goods are two topics that every student of economics is taught again and again. Here’s how Steve Jobs explains it:
According to Sculley’s wishes, Steve Jobs was to represent the company externally as a new Apple chairman without influencing the core business. As Jobs got wind of these plans to deprive him of his power, he tried to arrange a coup against Sculley on the Apple board. Sculley told the board: “I’m asking Steve to step down and you can back me on it and then I take responsibility for running the company, or we can do nothing and you’re going to find yourselves a new CEO.” The majority of the board backed the ex-Pepsi man and turned away from Steve Jobs.
And the one guy who helped Steve Jobs achieve his vision for Apple once Jobs came back was, of course, Jony Ive. This is a very, very long article but a fun read, not just about the relationship between Ive and Jobs, but also about Ive and Apple. Jony Ive no longer works at Apple of course (well, kinda sorta), but you can’t understand Apple without knowing more about Ive.
Jobs’s taste for merciless criticism was notorious; Ive recalled that, years ago, after seeing colleagues crushed, he protested. Jobs replied, “Why would you be vague?,” arguing that ambiguity was a form of selfishness: “You don’t care about how they feel! You’re being vain, you want them to like you.” Ive was furious, but came to agree. “It’s really demeaning to think that, in this deep desire to be liked, you’ve compromised giving clear, unambiguous feedback,” he said. He lamented that there were “so many anecdotes” about Jobs’s acerbity: “His intention, and motivation, wasn’t to be hurtful.”
Apple has been, for most of its history, defined by its hardware. That still remains true, for the most part. But where do Apple News, Apple Music and Apple TV fit in?
Apple, in the near future will be as much about services as it is about hardware, and maybe more so. That’s, according to Ben Thompson, the most likely (and correct, in his view) trajectory for Apple.
CEO Tim Cook and CFO Luca Maestri have been pushing the narrative that Apple is a services company for two years now, starting with the 1Q 2016 earnings call in January, 2016. At that time iPhone growth had barely budged year-over-year (it would fall the following three quarters), and it came across a bit as a diversion; after all, it’s not like the company was changing its business model
“And 5G will be going to work behind the scenes, in ways that will emerge over time. One important benefit of the technology is its ability to greatly reduce latency, or the time it takes for devices to communicate with one another. That will be important for the compatibility of next-generation devices like robots, self-driving cars and drones. For example, if your car has 5G and another car has 5G, the two cars can talk to each other, signaling to each other when they are braking and changing lanes. The elimination of the communications delay is crucial for cars to become autonomous.”
.. Brian X. Chen with a rather more prosaic list for 2020.
“In life sciences, we’ll have greater understanding of the dynamics of how our microbiome – the tiny organisms, including bacteria, that live in the human body – influences multiple systems in our body, including our immune systems, metabolic processes and other areas. This will result in seminal discoveries related to a variety of conditions, including autoimmune diseases, pre-term birth and how our metabolism is regulated. Regenerative medicine approaches to creating new tissues and organs from progenitor cells will expand significantly. ”
.. The World Economic Forum weighs in on the issue.
“Even if the gene drive works as planned in one population of an organism, the same inherited trait could be harmful if it’s somehow introduced into another population of the same species, according to a paper published in Nature Reviews by University of California Riverside researchers Jackson Champer, Anna Buchman, and Omar Akbari. According to Akbari, the danger is scientists creating gene drives behind closed doors and without peer review. If someone intentionally or unintentionally introduced a harmful gene drive into humans, perhaps one that destroyed our resistance to the flu, it could mean the end of the species.”
.. Fast Company ponders a world in which Black Mirror is non-fiction.
“Ten years from now is “the end of the classroom as we know it,” George Kembel of the Stanford d.school writes. Professors will be a “team of coaches,” and class projects will be like Choose Your Own Adventure — open-ended and actually pretty fun.”
Fast Company again, this time in 2010, trying to figure out what the world looks like by the time it is 2020. I can assure you that they got education completely wrong.