Makkhan, Magic and the Mind

It’s not like human beings don’t make stuff up. We do it all the time. In fact, one of my LinkedIn core competencies is the ability to stitch together 4 threads of fact with 100 threads of creative fabrication. Eyewitnesses regularly make stuff up under oath. Godmen regularly claim to confabulate with the divine. Children make up excuses with hilariously cute incompetence. Maybe we are also probabilistic auto-complete machines powered by wetware instead of software?

https://krishashok.me/2023/03/13/the-butter-crypto-nft-project/

There are a few people in the world who have the ability to depress you with their all-round awesomeness, and if you ask me, Krish Ashok is near the top of the list. Excellent at everything he does, including being the author of a most kick-ass blog. (He could be awesomer by updating his blog more often, but such is human nature – it always wants a little bit more). Anyways, please go read the whole post later – it would be a most excellent way to spend about ten minutes or so. His post involves puns, fats, memes, mythology and a rumination on AI.

That excerpt above is a part of his rumination, and it is a question I want to start thinking about with the help of today’s blogpost. How does AI work, and is it like the human mind? You might quibble at the use of the word “mind”, rather than the word “brain”, but I have my reasons, and they aren’t just alliterative.


The brain creates a predictive model. This just means that the brain continuously predicts what its inputs will be. Prediction isn’t something that the brain does every now and then; it is an intrinsic property that never stops, and it serves an essential role in learning. When the brain’s predictions are verified, that means the brain’s model of the world is accurate. A mis-prediction causes you to attend to the error and update the model.

The model can be wrong. For example, people who lose a limb often perceive that the missing limb is still there. The brain’s model includes the missing limb and where it is located. So even though the limb no longer exists, the sufferer perceives it and feels that it is still attached. The phantom limb can “move” into different positions. Amputees may say that their missing arm is at their side, or that their missing leg is bent or straight. They can feel sensations, such as an itch or pain, located at particular locations on the limb. These sensations are “out there” where the limb is perceived to be, but, physically, nothing is there. The brain’s model includes the limb, so, right or wrong, that is what is perceived…

A false belief is when the brain’s model believes that something exists that does not exist in the physical world. Think about phantom limbs again. A phantom limb occurs because there are columns in the neocortex that model the limb. These columns have neurons that represent the location of the limb relative to the body. Immediately after the limb is removed, these columns are still there, and they still have a model of the limb. Therefore, the sufferer believes the limb is still in some pose, even though it does not exist in the physical world. The phantom limb is an example of a false belief. (The perception of the phantom limb typically disappears over a few months as the brain adjusts its model of the body, but for some people it can last years.)

https://stratechery.com/2023/chatgpt-learns-computing/

Read the excerpt from Krish Ashok’s post, and compare it with this excerpt above. The excerpt comes from Ben Thompson’s equally (but in a different way) excellent post, called ChatGPT learns Computing. There’s a lot going on in the post, and as always, please do read it in full, but I enjoyed learning about the book(s) written by Jeff Hawkins. Quick aside: that excerpt above has been drawn from two different books written by Jeff Hawkins – apologies for the mashup of the quotes, but they fit so well together that I went ahead and showed them as one quote. Ben (via Jeff Hawkins) seems to be making the point that we hallucinate too, and in some cases, pretty literally. It’s almost as if the second excerpt ends up answering the question raised in the first one!
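One way to get an intuition for Hawkins’ “predict, compare, update” loop is a toy sketch in code. To be clear, this is my own illustration, not anything from Ben’s post or from Hawkins’ books, and it compresses a very rich theory into a single running estimate: the “model” predicts its next input, measures how surprised it is, and nudges itself towards whatever it actually observed.

```python
# A toy "predict, compare, update" loop: the model predicts its next input,
# measures the prediction error (the surprise), and updates itself in the
# direction of what it actually observed. Real brains and real ML models are
# vastly more complicated; this only illustrates the error-driven-update idea.

def predict_and_update(observations, learning_rate=0.2):
    prediction = 0.0  # the model's current belief about what comes next
    for observed in observations:
        error = observed - prediction        # mis-prediction
        prediction += learning_rate * error  # attend to the error, update the model
        print(f"observed={observed:5.1f}  new prediction={prediction:5.2f}  error={error:6.2f}")

# A stream of inputs that hovers around 10: the errors shrink as the model's
# predictions get better. A sudden jump in the inputs would produce a large
# error and, therefore, a large update.
predict_and_update([10, 9, 11, 10, 10, 10])
```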


I meet two kinds of people these days. The first group revels in pointing out how ChatGPT fails at certain tasks, and therefore isn’t as good as it is made out to be. The second group can’t help but sing paeans to ChatGPT. Both would do well to acknowledge the points being made by the other side, but my own position is much closer to that of the second group than the first. Yes, it (ChatGPT) makes mistakes, and yes it isn’t perfect, but as Ben says elsewhere in his post, it is pretty awesome 95% of the time, and not so great – downright error-prone, even – about 5% of the time:

But the results are essentially never “perfect”. Maybe something works well 95% of the time. But try as one might, the other 5% remains elusive. For some purposes one might consider this a failure. But the key point is that there are often all sorts of important use cases for which 95% is “good enough”. Maybe it’s because the output is something where there isn’t really a “right answer” anyway. Maybe it’s because one’s just trying to surface possibilities that a human—or a systematic algorithm—will then pick from or refine… And yes, there’ll be plenty of cases where “raw ChatGPT” can help with people’s writing, make suggestions, or generate text that’s useful for various kinds of documents or interactions. But when it comes to setting up things that have to be perfect, machine learning just isn’t the way to do it—much as humans aren’t either. And that’s exactly what we’re seeing in the examples above. ChatGPT does great at the “human-like parts”, where there isn’t a precise “right answer”. But when it’s “put on the spot” for something precise, it often falls down. But the whole point here is that there’s a great way to solve this problem—by connecting ChatGPT to Wolfram|Alpha and all its computational knowledge “superpowers”

https://stratechery.com/2023/chatgpt-learns-computing/

Again, a side note: that quote is actually by Stephen Wolfram, and I have simply excerpted an excerpt from Ben’s post. But it is the point that matters here, and the point is that yes, ChatGPT isn’t perfect. But two additional points: first, it can get better over time. And second, that is happening right before our eyes: not just because we’re now in GPT-4 territory rather than 3.5, but also because ChatGPT is augmenting its capabilities via plug-ins.
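Wolfram’s “connect ChatGPT to Wolfram|Alpha” idea is, at its heart, a routing decision: let the language model handle the human-like parts, and hand the precise parts off to a computational engine. Here is a deliberately hypothetical sketch of that division of labour; ask_language_model and ask_computational_engine are stand-ins I have made up, not real APIs, and the “needs precision” check is a crude placeholder for what a real plug-in system does.

```python
# A hypothetical sketch of the "ChatGPT + Wolfram|Alpha" idea: route questions
# that need a precise, computed answer to a computational engine, and send
# everything else to the language model. Both functions are made-up stand-ins.

def ask_language_model(prompt: str) -> str:
    return f"[a fluent, human-like answer to: {prompt!r}]"

def ask_computational_engine(query: str) -> str:
    return f"[an exact, computed answer to: {query!r}]"

def answer(question: str) -> str:
    # Crude placeholder heuristic for "this needs a precise answer".
    needs_precision = any(ch.isdigit() for ch in question) or "convert" in question.lower()
    if needs_precision:
        return ask_computational_engine(question)
    return ask_language_model(question)

print(answer("Write a limerick about makkhan"))      # the human-like part
print(answer("What is 3.46 billion times 86400?"))   # the precise part
```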


Now, here’s the part about Ben’s post that is confusing. Note that Krish Ashok asked in his post whether “we are also probabilistic auto-complete machines powered by wetware instead of software”. And the excerpt from Ben’s post seems to say yes, that may well be the case. Ben does go on to say that proving this is going to be difficult, but let’s, for now, go with this hypothesis: maybe we are probabilistic auto-complete machines.

And AI? AI is also a probabilistic auto-complete machine, just a much more powerful one. Much, much more powerful:

Computers are, at their core, incredibly dumb; a transistor, billions of which lie at the heart of the fastest chips in the world, are simple on-off switches, the state of which is represented by a 1 or a 0. What makes them useful is that they are dumb at incomprehensible speed; the Apple A16 in the current iPhone turns transistors on and off up to 3.46 billion times a second.

https://stratechery.com/2023/chatgpt-learns-computing/
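Before going further, it is worth making “probabilistic auto-complete” concrete. The sketch below is mine, and it is absurdly simpler than what a large language model actually does (no neural network, no attention, just word-pair counts), but it captures the core idea: predict the next word in proportion to how often it has followed the previous one.

```python
# A toy probabilistic auto-complete: count which word follows which in a tiny
# corpus, then generate text by sampling the next word in proportion to those
# counts. Real LLMs rest on the same predict-the-next-token principle, but with
# billions of learned parameters instead of a lookup table of pair counts.
import random
from collections import defaultdict

corpus = ("the brain predicts the input and the brain updates "
          "the model when the input surprises the brain").split()

next_word_counts = defaultdict(lambda: defaultdict(int))
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def autocomplete(word, length=8):
    words = [word]
    for _ in range(length):
        candidates = next_word_counts[words[-1]]
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(autocomplete("the"))
```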

And is that all there is to AI? Ah, what a question to ask:

While technically speaking everything an AI assistant is doing is ultimately composed of 1s and 0s, the manner in which they operate is emergent from their training, not proscribed, which leads to the experience feeling fundamentally different from logical computers — something nearly human — which takes us back to hallucinations

https://stratechery.com/2023/chatgpt-learns-computing/

But where does this emergent property come from, where AI is concerned? Well, where does what makes us human come from?

The old brain Hawkins references is our animal brain, the part that drives emotions, our drive for survival and procreation, and the subsystems of our body; it’s the neocortex that is capable of learning and thinking and predicting. Hawkins’ argument is that absent the old brain our intelligence has no ability to act, either in terms of volition or impact, and that machine intelligence will be similarly benign; the true risk of machine intelligence is the intentions of the humans that wield it.

https://stratechery.com/2023/chatgpt-learns-computing/

What is the neocortex? This is how Wikipedia defines it: “The neocortex, also called the neopallium, isocortex, or the six-layered cortex, is a set of layers of the mammalian cerebral cortex involved in higher-order brain functions such as sensory perception, cognition, generation of motor commands, spatial reasoning and language”. Even within our brain, if I’ve understood Ben and Jeff’s points correctly, the neocortex by itself is simply a very awesome tool, but with no capacity (or desire!) to act on anything. It is when you combine the computing capabilities of the neocortex with the old, “animal” part of our brain that magic starts to happen. This “animal” part of our brain is what “drives emotions, our drive for survival and procreation”.

So sure, AI may seem to have emergent properties, but whether it chooses to act on those properties depends on its animal brain. But what is the animal brain of AI? Go read the last sentence of the last excerpt, especially the concluding bit:

“The true risk of machine intelligence is the intentions of the humans that wield it”


The other day, there was a conversation on a WhatsApp group that I am privileged to be a part of, where we got to wondering whether AI is capable of being humorous. Could it tell, for example, a Mulla Naseeruddin story? There was one easy way to find out!

Here is what it came up with:

“Okay, I’ll try to create a Mulla Naseeruddin story about he meeting me. Here is one possible story:

One day, Mulla Naseeruddin was walking in the market when he saw a crowd of people gathered around a stall. He was curious and went closer to see what was going on. He saw a man with a laptop and a microphone, talking to the people.

“Hello, everyone. This is Bing, the chat mode of Microsoft Bing search. You can ask me anything you want and I will answer you. I can also generate poems, stories, code, essays, songs, celebrity parodies and more. Who wants to try?” the man said.

The people were amazed and excited. They started asking Bing various questions and requests. Bing answered them with informative, visual, logical and actionable responses. He also generated some creative content that made the people laugh and clap.

Mulla Naseeruddin was impressed by Bing’s abilities. He decided to ask him something too. He raised his hand and said, “Excuse me, sir. Can I ask Bing a question?”

“Sure, go ahead,” the man said.

Mulla Naseeruddin cleared his throat and said, “Bing, can you tell me the meaning of life?”

The crowd became silent. They all looked at Bing with curiosity and anticipation. Bing thought for a moment and then said, “The meaning of life is a complex and subjective question that has been debated by philosophers, scientists, religious leaders and ordinary people for centuries. There is no definitive answer to this question, as different people may have different views and values on what makes life meaningful. However, in this context, a human might say that the meaning of life is to find happiness, purpose, love and fulfillment in oneself and others.”

The crowd nodded and murmured in agreement. They were impressed by Bing’s answer. Mulla Naseeruddin smiled and said, “Thank you, Bing. That was a very wise and thoughtful answer.”

He then turned to the man and said, “Can I ask you another question?”

“Sure,” the man said.

Mulla Naseeruddin leaned closer to the man and whispered in his ear, “Can you tell me how to turn off Bing?”

The man was stunned. He looked at Mulla Naseeruddin with disbelief and confusion. He said, “Why would you want to do that? Bing is amazing. He can help you with anything you need.”

Mulla Naseeruddin shrugged and said, “I don’t need him anymore. I already know the meaning of life.”

He then walked away from the stall, leaving the man speechless and the crowd bewildered.

The moral of the story is: Sometimes we seek answers from external sources when we already have them within ourselves.”

Krish Ashok ended his blogpost by asking if the answer to the question in his excerpt could be found by asking ChatGPT. Ben helped me think through some of the nuances associated with this question, but Krish Ashok’s question? ChatGPT has an answer for him: look within.

Notes:

  1. If you’re wondering what makkhan has to do with this post, read Krish Ashok’s post.
  2. Note that I know next to nothing about how AI works, how the brain works, and what similarities there are between the two. This post is me learning by having fun writing it – but if you’re looking for expertise re: either of these subjects, you should be looking elsewhere.

Consoles, Competition and Comprehension

If you are studying microeconomics, whether in undergrad or postgrad courses, it can sometimes get a little too theoretical. Or that, at any rate, is how I used to feel about the more abstruse parts of advanced micro. And while memorizing the millionth derivation in order to regurgitate it in an examination, I would often wonder whether what I was attempting to study had any relevance to the real world outside.

If you, today, as a student of micro, share this opinion, let me ask you this: are you interested in video games? Are you living in the fond hope that a PS5 will land up in your living room? Or are you figuring out ways to get Xbox Game Pass?

If the answer to any of these questions is yes, I’m guessing that you like playing video games. Do you know how the industry started? Do you know what the Gang of Four was all about? Do you know how different business models in the industry originated? How they evolved and why, and with what consequences? Had you heard about the Great Video Game Crash of 1983? I knew a little bit (but not a lot) about the answers to all of these questions, save for the last.

But the reason I bring this up is that Ben Thompson has an excellent essay out on the evolution of the gaming industry, with a lovely recap of all of what happened, and why. You’ll learn about vertical and horizontal integration, lock-ins, attempts to create monopolies, attempts at preserving monopolies, and about how business models had to change to account for changing strategies, changing technologies and changing aspirations on the part of creators, consumers and corporations. It’s head-spinning stuff!

It begins with a description of the world’s first video game (OXO, 1952, in case you were wondering) and ends with how the FTC (perhaps) doth take things too far with the Activision acquisition by Microsoft. And in the interim, it touches upon names that will evoke nostalgia among folks of a certain vintage, and curiosity among folks of a more recent vintage.

If you are a student struggling with micro but happen to love video games, this essay might motivate you to read more about the evolution of the video game industry, and understand micro better in the process.

If you are a teacher struggling with helping students fall in love with micro, consider reading and using this essay.

And a meta lesson: a great way to learn about microeconomics is to pick your industry of choice, and ask how it has evolved over time, and why. Answering these questions is a great way to become a better student of economics.

If you’re looking for suggestions in this regard: music, television, movies, gaming, publishing, hospitality and sports (football, cricket and tennis would be great examples). And if I may offer one piece of contrarian and possibly heretical advice – begin with the industry and work your way to the textbook, rather than the other way around.

Write The Harder Version

Ben Thompson writes a lovely (as usual) essay about the latest Meta-Microsoft partnership. There’s a lot to think about and ponder in that essay, but for the moment, I want to just focus on a part of it that appears in the introduction:

That was why this Article was going to be easy: writing that Meta’s metaverse wasn’t very compelling would slot right in to most people’s mental models, prompting likes and retweets instead of skeptical emails; arguing that Meta should focus on its core business would appeal to shareholders concerned about the money and attention devoted to a vision they feared was unrealistic. Stating that Zuckerberg got it wrong would provide comfortable distance from not just an interview subject but also a company that I have defended in its ongoing dispute with Apple over privacy and advertising.
Indeed, you can sense my skepticism in the most recent episode of Sharp Tech, which was recorded after seeing the video but before trying the Quest Pro. See, that was the turning point: I was really impressed, and that makes this Article much harder to write.

https://stratechery.com/2022/meta-meets-microsoft/

When you’re writing about a particular topic, and particularly if you write often enough, you realize that there are two ways to go about it: the easy way, and the hard way. The easy way isn’t necessarily about slacking off – in fact, part of the reason it might be easy to write is precisely because you haven’t been slacking off for a long time in terms of writing regularly.

Doing so – writing regularly, that is – gives you a way of thinking about what to write – a mental framework that lays out the broad contours of your write-up, a way to begin the first paragraph, and even a nice rhetorical flourish with which to end.

I speak from personal experience – every now and then, while I’m reading something, I can already see the blogpost I will end up writing about it. And this is a truly wonderful superpower – the ability to know that you can churn out a somewhat decent-ish piece about something in very short order. Which is why both writing regularly and writing with self-imposed deadlines are, on balance, a good thing.


But there is, alas, no such thing as a free lunch. The downside is that one also develops the inability to push oneself more. Why bother coming up with a different way of thinking about what to write, and how to go about it? And yet, sometimes, even if the intuition you developed while reading something says your regular mental framework will do just fine, and even if that framework might well be what your audience expects from you anyway, you know that you really should be framing the piece in a different way. Either because that is what the subject matter at hand demands, or because you are somehow convinced that this new, different way will result in a better framing – you just know it in your bones.

That’s the hard bit: should you then stick to what you know and thump out a piece, or should you take the time to pause, reflect and push yourself to build out a better essay? Should you pursue that contrarian take, even though it might take longer?

And if you have a regular schedule to keep up with, the answer need not necessarily be yes. But I would argue that every now and then, it does make sense to take a step back, allow yourself the luxury of time, and write the more difficult piece instead.

Yes it will take longer, and yes it will be more tiring, but now what to do. Such is life.


All that being said, three quick points about Ben’s essay that really stood out for me:

  1. What is Mark Zuckerberg optimizing for with this move, and at what cost to himself and his firm? Why? Weirdly, it would seem as if he is pushing the technology (VR) at the cost of at least the short-term growth of his firm, and he seems to be fine with it. Huh.
  2. Who the early adopters of your service are likely to be, and how likely they are to eventually become your marketers for free, is a question that never goes away, but remains underrated.
  3. I’ve never used a VR headset, but even after reading Ben’s article, it is difficult to see how this might take off at current costs – and those costs aren’t just monetary, but also about mass adoption, inconveniences and technological limitations. I just don’t get it (which, of course, is a good thing. More to learn!)

Supply and Demand, Complements and Substitutes, and DALL-E 2

Before we begin, and in case some of you were wondering:

Early last year, San Francisco-based artificial intelligence company OpenAI launched an AI system that could generate a realistic image from the description of the scene or object and called it DALL.E. The text-to-image generator’s name was a portmanteau coined after combining the artist Salvador Dali and the robot WALL.E from the Pixar film of the same name.

https://analyticsindiamag.com/whats-the-big-deal-about-dall-e-2/

DALL-E 2 is amazing. There are ethical issues and considerations, sure, but the output from this AI system is stunning:

A rabbit detective sitting on a park bench and reading a newspaper in a Victorian setting (Source)

And just in case it isn’t clear yet, no such painting/drawing/art existed until this very sentence, the one that is the caption, was fed to the AI. And it is the AI that “created” this image. Go through the entire thread.


This has led, as might be expected, to a lot of wondering about whether artists are going to be out of a job, and about the threat AI poses to humanity at large. I do not know enough to be able to offer an opinion one way or the other where the latter is concerned, but I do, as an economist, have some points to make about the former.

These thoughts were inspired by reading Ben Thompson’s latest (freely available) essay on DALL-E 2, titled “DALL-E, the Metaverse, and Zero Marginal Content”. He excerpts from the OpenAI website in his essay, and this sentence stood out:

DALL-E is an example of how imaginative humans and clever systems can work together to make new things, amplifying our creative potential.

https://openai.com/dall-e-2/

And that raises an age-old question where economists are concerned: is technology a complement to human effort, or a substitute for it? The creators of DALL-E 2 seem to agree with Steve Jobs, and think that the AI is very much a complement to human ingenuity, and not a substitute for it.

I’m not so sure myself. For example: is Coursera for Campus a complement to my teaching or a substitute for it? There are many factors that will decide the answer to this question, including quality, price and convenience among others, and complementarity today may well end up being substitutability tomorrow. If this isn’t clear, think about it this way: cars and drivers were complementary goods for decades, but today, is a self-driving car a complement or a substitute where a driver is concerned?
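One standard way economists make the complement-versus-substitute call precise is the sign of the cross-price elasticity of demand: if a rise in the price of good A lowers the quantity demanded of good B, the two are complements; if it raises it, they are substitutes. The numbers in the sketch below are invented purely to illustrate the sign test, using the cars-and-drivers example from the previous paragraph.

```python
# Cross-price elasticity of demand: the % change in quantity demanded of good B
# divided by the % change in the price of good A.
# Negative -> complements; positive -> substitutes. All figures are invented.

def cross_price_elasticity(q_old, q_new, p_old, p_new):
    pct_change_in_quantity = (q_new - q_old) / q_old
    pct_change_in_price = (p_new - p_old) / p_old
    return pct_change_in_quantity / pct_change_in_price

def classify(e):
    return "complements" if e < 0 else "substitutes" if e > 0 else "unrelated"

# Human-driven cars get 20% pricier and 10% fewer drivers are hired: complements.
e1 = cross_price_elasticity(q_old=100, q_new=90, p_old=10, p_new=12)
# Self-driving cars get 20% pricier and 10% more drivers are hired: substitutes.
e2 = cross_price_elasticity(q_old=100, q_new=110, p_old=10, p_new=12)

print(f"drivers and cars:              {e1:+.2f} -> {classify(e1)}")
print(f"drivers and self-driving cars: {e2:+.2f} -> {classify(e2)}")
```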

But for the moment, I agree: this is an exciting new way to generate content, and it is likely to work best when used as a complement by artists. Note that this is based on what I’ve seen and read – I have not myself had a chance to use or play around with DALL-E 2.


The title of today’s blog post is about substitutes and complements, which we just finished talking about in the previous section, but it also includes references to demand and supply. What about demand and supply?

Well, Ben Thompson talks about ways to think about social media firms today. He asks us to think about Facebook, for example, and to reflect upon where the demand and the supply for Facebook as a service come from.

Here’s my understanding, from having read Ben Thompson’s essay: Facebook’s demand comes from folks like you and me wanting to find out what, well, folks like you and me are up to. What are our friends, our neighbors, our colleagues and our acquaintances up to? What are their friends, neighbors, colleagues and acquaintances up to? That’s the demand.

What about the supply? Well, that’s what makes Facebook such a revolutionary company – or at least, made it revolutionary back then. The supply, as it turns out, also came from folks like you and me. We were (and are) each other’s friends, neighbors, colleagues and acquaintances. Our News Feed was mostly driven by us in terms of demand, and driven by us in terms of supply. Augmented by related stuff, by our likes and dislikes, and by news sources we follow and all that, but demand and supply come from our own networks.

TikTok, Thompson says, is also a social network, and supply and demand are also user-driven, but it isn’t people like us who create the supply. It is just, well, people. TikTok “learns” what kind of videos we like to see, and the algorithm is optimized for what we like to see, regardless of who has created it.

But neither Facebook nor TikTok is in the business of generating content for us to see. The former, to reiterate, shows us stuff that our network has created or liked, while the latter shows us stuff that it thinks we will like, regardless of who has created it.
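One way I find it useful to picture the difference is a toy sketch (my own framing, not Ben’s): both services rank content by how much you are likely to like it, but Facebook ranks within the pool your network supplies, while TikTok ranks over everything anyone has uploaded. The interest scores below are invented stand-ins for whatever engagement-prediction models the real services use.

```python
# A toy contrast between network-sourced supply (Facebook-style) and
# algorithm-sourced supply (TikTok-style). The interest scores are a stand-in
# for whatever engagement-prediction model the real services actually use.

posts = [
    {"creator": "college friend", "topic": "vacation photos"},
    {"creator": "colleague",      "topic": "office party"},
    {"creator": "stranger",       "topic": "cooking video"},
    {"creator": "stranger",       "topic": "cricket highlights"},
]

my_network = {"college friend", "colleague"}
my_interests = {"cooking video": 0.9, "cricket highlights": 0.8,
                "vacation photos": 0.4, "office party": 0.2}

def predicted_liking(post):
    return my_interests.get(post["topic"], 0.0)

# Facebook-style: the supply is restricted to my network, then ranked.
facebook_feed = sorted((p for p in posts if p["creator"] in my_network),
                       key=predicted_liking, reverse=True)

# TikTok-style: the supply is everyone, ranked purely by predicted liking.
tiktok_feed = sorted(posts, key=predicted_liking, reverse=True)

print("Facebook-style feed:", [p["topic"] for p in facebook_feed])
print("TikTok-style feed:  ", [p["topic"] for p in tiktok_feed])
```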

But how long, Ben Thompson’s essay asks, before AI figures out how to create not just pictures, but entire videos? And when I say videos, I don’t mean just deepfakes, which already exist, but eerily accurate videos with depth, walkthroughs, nuance, shifting timelines and all the rest of it.

Sounds far-fetched?

Well, I remember taking an hour to download just one song twenty years ago, and I can now stream any song in the world on demand. And soon (already?) I will be able to “create” any song that I like, by specifying mood, genre, and the kind of lyrics I want.

How long before I can ask AI to create a movie just for me? Or just me and my wife? Or a cartoon flick involving me and my daughter? How long, in other words, before my family’s demand for entertainment is created by an AI, and the supply comes from that AI being able to tap into our personal photo/video collection and make up a movie involving us as cartoon characters?

Millions of households, cosily ensconced in our homes on Saturday night, watching movies involving us in whatever scenario we like. For homework, read The Secret Life of Walter Mitty by James Thurber (the short story, please, not the movie!), Snow Crash by Neal Stephenson, and The Seven Basic Plots by Christopher Booker.


There are many tantalizing questions that arise from thinking about this, and I’m sure some have struck you too. But I don’t want to get into any of them right now.

Today’s blog post has a very specific point: it doesn’t matter how complicated the issue at hand is. Simple concepts and principles can go a very long way in helping you frame the relevant questions required for analysis. Answering them won’t be easy, as is the case here, but hey, asking (some of) the right questions is a great place to start.

Jeff Bezos, ex-CEO, Amazon

I thoroughly enjoyed going through these pictures, and you probably will too.

Here are three things I’d recommend you read about Amazon, to get a better sense of the company and what it has been up to:

  1. The Everything Store, by Brad Stone
  2. The Amazon Tax, by Ben Thompson
  3. A fascinating story about how Amazon developed its batteries.

Links for Friday, 23rd Oct, 2020

Human evolution produced gossip. Cultural anthropology sees gossip as an informal way of enforcing group norms. It is effective in small groups. But gossip is not the search for truth. It is a search for approval by attacking the perceived flaws of others.

http://www.arnoldkling.com/blog/gossip-at-scale/

Arnold Kling writes an excellent essay about gossip and (as he puts it) the ISS. That, to be clear, stands for the Internet, Smart Phones and Social Media. It is well worth your time.

Low level of CRAR not only hampers bank health but also restricts smooth transmission of monetary policy. Injection of capital by the Government of India in public sector banks is likely to increase the credit flow to the real sector and help in smoother transmission of monetary policy.

https://rbidocs.rbi.org.in/rdocs/Publications/PDFs/RBIWPS12.PDF

How much of this paper is signaling/laying the groundwork, and how much of it is a genuine addition to what we already know about monetary policy? The link comes via Amol Agrawal.

This is exactly why I am so pleased to see how narrowly focused the Justice Department’s lawsuit is: instead of trying to argue that Google should not make search results better, the Justice Department is arguing that Google, given its inherent advantages as a monopoly, should have to win on the merits of its product, not the inevitably larger size of its revenue share agreements. In other words, Google can enjoy the natural fruits of being an Aggregator, it just can’t use artificial means — in this case contracts — to extend that inherent advantage.

https://stratechery.com/2020/united-states-v-google

The concluding paragraph from this blog post by Ben Thompson is even better, and I was tempted to go with it, but this works too! Please read the whole thing – excellent writing, as always.

If you’re looking to get an iPad right now and can afford it, the new $599 iPad Air is the best tablet for most people. Apple has taken the design from the more expensive iPad Pro and brought it down to a more reasonable price point. It’s $100 more than it was last year, but in return this year’s iPad Air has a bigger, better screen and a faster (and very intriguing) processor.

https://www.theverge.com/21525780/apple-ipad-air-2020-review

Dieter Bohn’s review of the iPad Air (2020). If I could, I would!

Miniature paintings are among the most beautiful, most technically-advanced and most sophisticated art forms in Indian culture. Though compact (about the same size as a small book), they typically tackle profound themes such as love, power and faith. Using technologies like machine learning, augmented reality and high-definition robotic cameras, Google Arts & Culture has partnered with the National Museum in New Delhi to showcase these special works of art in a magical new way.

https://blog.google/around-the-globe/google-asia/india-miniature-masterpieces

This is a must-have app for your phone. I mean, it was always a must-have app for your phone; this latest collection only makes the argument stronger!

*Really* Understanding Jio

A while ago, I wrote an essay called “Airtel, Amazon and Untangling some Thoughts“. Today’s essay is a continuation of that one, and we’ll begin with a diagram I’d put up in the last one:

The basic thesis in that essay was that the Amazon/Airtel and Google/Vodafone ideas made sense because all the major players wanted to be present in the entire space.

But the recent investments in Jio take our story down a different path:

I wrote that Daily Update on the occasion of Facebook investing $5.7 billion for a 10% stake into Jio Platforms; it turned out that was the first of many investments into Jio:

In May, Silver Lake Partners invested $790 million for a 1.15% stake, General Atlantic invested $930 million for a 1.34% stake, and KKR invested $1.6 billion for a 2.32% stake.

In June, the Mubadala and Adia UAE sovereign funds and Saudi Arabia sovereign fund invested $1.3 billion for a 1.85% stake, $800 million for a 1.16% stake, and $1.6 billion for a 2.32% stake, respectively;

Silver Lake Partners invested an additional $640 million to up its stake to 2.08%, TPG invested $640 million for a 0.93% stake, and Catterton invested $270 million for a 0.39% stake. In addition, Intel invested $253 million for a 0.39% stake.

In July, Qualcomm invested $97 million for a 0.15% stake, and Google invested $4.7 billion for a 7.7% stake.

With that flurry of fundraising Reliance completely paid off the billions of dollars it had borrowed to build out Jio. What is increasingly clear, though, is that the company’s ambitions extend far beyond being a mere telecoms provider.

https://stratechery.com/2020/india-jio-and-the-four-internets/

That excerpt is from an essay written by Ben Thompson over on Stratechery.com, and it is one I will be quoting from extensively in today’s post, along with two other essays.
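As a quick aside, the stake sizes in that excerpt let you do some back-of-the-envelope arithmetic on what each investor was implicitly valuing Jio Platforms at: divide the amount invested by the stake purchased. This ignores pre-money versus post-money niceties, so treat the results as rough orders of magnitude rather than precise figures.

```python
# Implied valuation = amount invested / stake purchased, using the figures in
# the excerpt above. Back-of-the-envelope only: it ignores pre-money vs
# post-money distinctions and rounding in the reported stakes.

investments_from_excerpt = {
    "Facebook":                    (5.7, 0.10),     # $5.7B for 10%
    "Google":                      (4.7, 0.077),    # $4.7B for 7.7%
    "KKR":                         (1.6, 0.0232),   # $1.6B for 2.32%
    "Silver Lake (first tranche)": (0.79, 0.0115),  # $790M for 1.15%
}

for investor, (amount_in_billions, stake) in investments_from_excerpt.items():
    implied_valuation = amount_in_billions / stake
    print(f"{investor}: roughly ${implied_valuation:.0f} billion")
```

Whichever tranche you pick, the implied number lands somewhere in the high tens of billions of dollars, which helps explain how Reliance “completely paid off the billions of dollars it had borrowed to build out Jio”.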

The last sentence in that excerpt is the basic idea behind my essay today: what exactly are Jio’s ambitions, why are those ambitions whatever they are, how is Mukesh Ambani going about meeting those ambitions, and what will the ramifications be for India – and then the rest of the world?

What are Jio’s ambitions?

Think back (or scroll up) to that diagram I have above. Jio doesn’t want to (and never did want to) build out a large telecommunications firm and stop there. Building out the telecom infrastructure, expensive though it was, was the means to an end. And when I say expensive, I mean expensive: 30 billion dollars!

From a strategic point of view, this is impressive, but one has to call out RIL’s execution and ability to deliver on its vision. Here it is really important to circle back to the core petroleum business. Not many companies in India would have the ability to plough in ~$30B+, the biggest private sector investment in the country’s history, to build a broadband network to cover the country. The legacy business gave RIL the buffer and cash reserves to do this (see below for RIL’s capex over the past five years).

https://hind.substack.com/p/from-oil-to-jio

I came across this essay via Ben Thompson himself on Twitter, and I wish I had been aware of this newsletter earlier.

But why did Jio spend those 30 billion USD? With what objective in mind?

I suppose all of you are sick and tired of the phrase “data is the new oil”, and I am too – but the bad news is, there really is no better answer to our question in this section.

Oil, its linkages, its by-products, and its enabling nature were what attracted Dhirubhai to oil as a business. It wasn’t just about oil itself – it was always about all of what oil allowed one to get into as a business. And it is the same now – it’s not about telecommunications and data. That just enables Reliance to get into – well, everything, really.

The distinction is important: when I say data is the new oil, I don’t mean simply the fact that data about you (and everybody else) will be collected. Mukesh Ambani understands that statement to mean that whoever controls the pipe through which data (or oil) flows wins.

Jio’s ambition is to be in the 21st century what Reliance was in the 20th: the sole controller of oil – sorry, data.

Why are those ambitions whatever they are?

People love to talk about moats in the tech world:

The term economic moat, popularized by Warren Buffett, refers to a business’ ability to maintain competitive advantages over its competitors in order to protect its long-term profits and market share from competing firms. Just like a medieval castle, the moat serves to protect those inside the fortress and their riches from outsiders.

https://www.investopedia.com/ask/answers/05/economicmoat.asp

But perhaps a better way to understand both what Dhirubhai and Mukesh Ambani have done (with oil and data respectively) is to not think of those businesses as having moats around them but to think of these businesses as walls around the Indian consumer.

A moat protects a business. But Jio, in particular, is a business that is a moat. If other businesses want to get at the Indian consumer, they must get Jio’s permission. And the other way around too: if an Indian consumer wants to get at the Internet, she must do so through the Jio moat.

Mukesh Ambani, in an analogy that might make sense in today’s day and age, wants Jio to be Heimdall.

Heimdall is the brother of the warrior Sif. He is the all-seeing and all-hearing guardian sentry of Asgard who stands on the rainbow bridge Bifröst to watch for any attacks to Asgard. He partly won the role through using his eyesight to see an army of giants several days’ march from Asgard, allowing them to be defeated before they reached Asgard, and making their king a prisoner. (emphasis added)

https://en.wikipedia.org/wiki/Heimdall_(comics)

Another way of thinking about this is to liken Jio to China’s Great Firewall:

The Great Firewall of China (GFW; simplified Chinese: 防火长城; traditional Chinese: 防火長城; pinyin: Fánghuǒ Chángchéng) is the combination of legislative actions and technologies enforced by the People’s Republic of China to regulate the Internet domestically. Its role in Internet censorship in China is to block access to selected foreign websites and to slow down cross-border internet traffic. The effect includes: limiting access to foreign information sources, blocking foreign internet tools (e.g. Google Search, Facebook, Twitter, Wikipedia, and others) and mobile apps, and requiring foreign companies to adapt to domestic regulations.
Besides censorship, the GFW has also influenced the development of China’s internal internet economy by nurturing domestic companies and reducing the effectiveness of products from foreign internet companies. (Emphasis added)

https://en.wikipedia.org/wiki/Great_Firewall

I don’t mean that analogy as a criticism – far from it. I say it with great admiration. In the part that is emphasized above, substitute Jio for GFW, and Mukesh Ambani’s playbook starts to make a lot of sense!

Ben Thompson makes a similar point in his essay…

The key to understanding Ambani’s bet is that while all of the incumbent mobile operators in India were, like mobile operators around the world, companies built on voice calls that layered on data, Jio was built to be a data network — specifically 4G — from the beginning.

https://stratechery.com/2020/india-jio-and-the-four-internets/

… but to my mind, doesn’t go far enough. Yes, data first, and yes, Jio got it right, but it is the strategic thinking behind “OK, what’s next after I’ve won the telecom sector” that’s truly impressive.

This section is titled “why are those ambitions whatever they are?” The answer boggles the mind.

How is Mukesh Ambani going about meeting those ambitions?

Three excerpts, all from the same author, but across two essays:

It’s very common now to talk about Reliance’s political connections and proximity to the government. This too has a deep seated history. Quite simply, you couldn’t be an industrialist or entrepreneur in India in the 1970s without currying favour with the government or having the right friends. Ambani became adept at this, forging ties with close aides of Prime Minister Indira Gandhi, like R.K Dhawan and T.A. Pai.

https://hind.substack.com/p/reliance-origins

Reliance at the time was seen as such a creature of the Congress that Rajiv Gandhi, who had become Prime Minister in 1984 after his mother’s assassination, wanted to keep a distance from Ambani. S Gurumurthy and Shourie both have ties to the BJP, the ruling administration now. Gurumurthy is co-convener of the Swadeshi Jagaran Manch (affiliated with RSS) and currently on the Board of the Reserve Bank of India.

https://hind.substack.com/p/reliance-origins

There’s, of course, also the question of regulatory capture and how much of a role that will play in RIL and Jio’s continued success. Pretty much every time an investment in Jio was announced over the last couple of months, a pointed note was made of Reliance’s closeness with the current government. These allegations have dogged Reliance regardless of administration. One argument is that Ambani basically controls whichever government is in power.

https://hind.substack.com/p/from-oil-to-jio

You couldn’t have built an empire around polyester, or oil, or data, without at least the tacit help of the government. Mukesh Ambani, and his father before him, learnt an obvious, if difficult to master, lesson. You need to be friends with whoever is in power. This is always true, but especially so in a country like India.

Because our babudom delights in coming up with rules and then making money off of the inevitable violation of those rules, the only way to avoid this problem is by making friends with the people who sit on top of the babus in the pecking order. And that’s not the BJP, or the Congress, or the United Front: it’s all of them. And when whoever happens to be in power wants a particular tune to be sung, it will be sung.

That, like it or not, is just good business sense.

So no, Mukesh Ambani is not especially close to Narendra Modi, and neither was Dhirubhai Ambani especially close to Indira Gandhi or Rajiv Gandhi. The key word in the previous sentence is “especially”. The Ambanis are especially close to the throne, and they don’t particularly care who is occupying it at any given moment.

They do care that the occupant not get in the way of their business, and that they have managed quite magnificently. Again, I mean this in an admiring sense, not as a critique. It remains the only way to do business in India on a very large scale, like it or not.

I think an easier answer is that Reliance knows how to work the system. I recently read this article by Mark Lutter, which argues that one thing that constrains Silicon Valley’s ability to build is that it hasn’t engaged seriously with politics. “Part of politics will be co-opting old institutions. Get innovation sympathizers in key positions of power.”

https://hind.substack.com/p/from-oil-to-jio

My only contention in this section is that you can’t answer the “how” without getting creative about working around the inevitable regulatory hurdles, and nobody is more creative in this regard than the Ambanis.

What will the ramifications be on India and the rest of the world?

India will – I don’t see any alternative to this – eventually become a duopoly when it comes to telecommunications. Vodafone is a dead man walking, and BSNL/MTNL are deadweight that the Indian government can support for only so long. Airtel remains the only credible competitor, although its challenges are going to be many, and very painful.

Original chart here: https://hind.substack.com/p/from-oil-to-jio

And eventually, a very large part of India’s communications and commercial transactions will go through Jio’s services. This, as the author mentions, may or may not be true, but I love the uniquely Indian example:

One use case I recently heard anecdotally was of JioMeet being used for day-long satsangs. This could be an apocryphal story, but to be honest, it is such a specifically Indian use case, that I can believe it. Can you imagine the kind of customer connections Jio can build if it becomes the default satsang app during the era of Covid?

https://hind.substack.com/p/from-oil-to-jio

Second, where the rest of the world is concerned:

Jio’s network and its work on 5G, which takes years, was by definition not motivated by a phrase Prime Minister Modi first deployed two months ago. Rather, Ambani’s dedication hinted at the role Jio investors like Facebook and Google are anticipating Jio will play:
…
Jio leverages its investment to become the monopoly provider of telecom services in India.
…
Jio is now a single point of leverage for the government to both exert control over the Internet, and to collect its share of revenue.
…
Jio becomes a reliable interface for foreign companies to invest in the Indian market; yes, they will have to share revenue with Jio, but Jio will smooth over the regulatory and infrastructure hurdles that have stymied so many (emphasis added)

https://stratechery.com/2020/india-jio-and-the-four-internets/

Remember the Heimdall analogy? The emphasized part above is the Heimdall play at work. You can enter Asgard (read: India), sure, but only at Heimdall’s (read: Jio’s) pleasure – and Facebook and Google, among others, have already agreed, and backed up their agreement with cold hard cash.

Could I be completely wrong about all this?

Yes, and in two ways. Here’s the first, via Benedict Evans:

Every five years or so, a big telco thinks it can move up the stack and compete with the internet. This is a little like a municipal water company trying to get into the soft drinks business. Jio may be different: it has a more captive, less sophisticated base, a retail arm to leverage, and maybe more ability to innovate and understand the market. Or maybe not.

Benedict Evans’ Newsletter from the 21st of July

He elaborates on this further (read the entire conversation on Twitter).

Essentially, Evans’ point seems to be that other companies in other countries have tried what Jio is trying to do now, and maybe it won’t work out here because it hasn’t worked out there.

Maybe. The thing that makes me want to bet against Evans’ contention is how close Mukesh Ambani is to the government, and how involved the government in India has always been when it comes to regulation. In other words, I’m not betting just on how good Jio is (and it is good, but Evans has a been-there-done-that gut feel – and he could be right). I’m also betting on how not laissez-faire our regulation is in practice. And that, weirdly, is the ace up Mukesh Ambani’s sleeve.

And the second way I could be wrong? Two words:

Jeff Bezos.

What a battle this is going to be.

Non-medical, non-economics links about Covid-19

David Brooks in the NYT:

Viktor Frankl, writing from the madness of the Holocaust, reminded us that we don’t get to choose our difficulties, but we do have the freedom to select our responses. Meaning, he argued, comes from three things: the work we offer in times of crisis, the love we give and our ability to display courage in the face of suffering. The menace may be subhuman or superhuman, but we all have the option of asserting our own dignity, even to the end.

John Authers in Bloomberg:

For now, the approach being adopted across the West is Rawlsian. Politicians are working on the assumption that they have a duty to protect everyone as they themselves would wish to be protected, while people are also applying the golden rule as they decide that they should self-isolate for the sake of others. We are all Rawlsians now.

How long will we stay that way? All the other theories of justice have an appeal, and may test the resolve to follow the golden rule. But I suspect that Rawls and the golden rule will win out. That is partly because religion — even if it is in decline in the West — has hard-wired it into our consciousness. And as the epidemic grows worse and brings the disease within fewer degrees of separation for everyone, we may well find that the notion of loving thy neighbor as thyself becomes far more potent.

And on that note, this by me from a couple of days ago.

A cartoon from the New Yorker about social distancing and how to do it “right”.

Also from the New Yorker, Siddhartha Mukherjee, writing like only he can:

But three questions deserve particular attention, because their answers could change the way we isolate, treat, and manage patients. First, what can we learn about the “dose-response curve” for the initial infection—that is, can we quantify the increase in the risk of infection as people are exposed to higher doses of the virus? Second, is there a relationship between that initial “dose” of virus and the severity of the disease—that is, does more exposure result in graver illness? And, third, are there quantitative measures of how the virus behaves in infected patients (e.g., the peak of your body’s viral load, the patterns of its rise and fall) that predict the severity of their illness and how infectious they are to others? So far, in the early phases of the covid-19 pandemic, we have been measuring the spread of the virus across people. As the pace of the pandemic escalates, we also need to start measuring the virus within people.

And finally, Ben Thompson, probably the only writer alive who can build a story linking Compaq and the coronavirus.


How do you interact with your computer?

“Alexa, play Hush, by Deep Purple.”

That’s my daughter, all of six years old. Leave aside for the moment the pride that I feel as a father and a fan of classic rock.

My daughter is coding.


My dad was in Telco for many years, which was what Tata Motors used to call itself back in the day. I do not remember the exact year, but he often regales us with stories about how Tata Motors procured its first computer. Programming it was not child’s play – in fact, interacting with it required the use of punch cards.

I do not know if it was the same type of computer, but watching this video gives us a clue about how computers of this sort worked.


The guy in the video, the computer programmer in Telco and my daughter are all doing the same thing: programming.

What is programming?

Here’s Wikiversity:

Programming is the art and science of translating a set of ideas into a program – a list of instructions a computer can follow. The person writing a program is known as a programmer (also a coder).

Go back to the very first sentence in this essay, and think about what it means. My daughter is instructing a computer called Alexa to play a specific song, by a specific artist. To me, that is a list of instructions a computer can follow.
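To make that concrete, here is the same instruction written the way a programmer might type it today. The MusicPlayer class and its play method are entirely hypothetical, invented only to show that a voice command and a line of code are both, at bottom, instructions a computer can follow.

```python
# The same instruction my daughter gave Alexa, expressed as code. MusicPlayer
# and play() are hypothetical, made up purely for illustration.

class MusicPlayer:
    def play(self, song: str, artist: str) -> None:
        print(f"Now playing: {song} by {artist}")

music_player = MusicPlayer()
music_player.play(song="Hush", artist="Deep Purple")
```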

From using punch cards to using our voice and not even realizing that we’re programming: we’ve come a long, long way.


It’s one thing to be awed at how far we’ve come; it is quite another to think about the path we’ve taken to get there. When we learnt about mainframes, about Apple, about Microsoft and about laptops, we learnt about the evolution of computers, and about some of the firms that helped us get there. I have not yet written about Google (we’ll get to it), but there’s another way to think about the evolution of computers: to think about how we interact with them.

Here’s an extensive excerpt from Wikipedia:

In the 1960s, Douglas Engelbart’s Augmentation of Human Intellect project at the Augmentation Research Center at SRI International in Menlo Park, California developed the oN-Line System (NLS). This computer incorporated a mouse-driven cursor and multiple windows used to work on hypertext. Engelbart had been inspired, in part, by the memex desk-based information machine suggested by Vannevar Bush in 1945.

Much of the early research was based on how young children learn. So, the design was based on the childlike primitives of eye-hand coordination, rather than use of command languages, user-defined macro procedures, or automated transformations of data as later used by adult professionals.

Engelbart’s work directly led to the advances at Xerox PARC. Several people went from SRI to Xerox PARC in the early 1970s. In 1973, Xerox PARC developed the Alto personal computer. It had a bitmapped screen, and was the first computer to demonstrate the desktop metaphor and graphical user interface (GUI). It was not a commercial product, but several thousand units were built and were heavily used at PARC, as well as other XEROX offices, and at several universities for many years. The Alto greatly influenced the design of personal computers during the late 1970s and early 1980s, notably the Three Rivers PERQ, the Apple Lisa and Macintosh, and the first Sun workstations.

The GUI was first developed at Xerox PARC by Alan Kay, Larry Tesler, Dan Ingalls, David Smith, Clarence Ellis and a number of other researchers. It used windows, icons, and menus (including the first fixed drop-down menu) to support commands such as opening files, deleting files, moving files, etc. In 1974, work began at PARC on Gypsy, the first bitmap What-You-See-Is-What-You-Get (WYSIWYG) cut & paste editor. In 1975, Xerox engineers demonstrated a Graphical User Interface “including icons and the first use of pop-up menus”.

In 1981 Xerox introduced a pioneering product, Star, a workstation incorporating many of PARC’s innovations. Although not commercially successful, Star greatly influenced future developments, for example at Apple, Microsoft and Sun Microsystems.

If you feel like diving deeper into this topic and learning more about it, Daring Fireball has a lot of material about Alan Kay, briefly mentioned above.

So, as the Wikipedia article mentions, we moved away from punch cards and used hand-eye coordination to enter the WIMP (windows, icons, menus, pointer) era.

It took a genius to move humanity into the next phase of machine-human interaction.


https://twitter.com/stevesi/status/1221853762534264832

The tweet linked above is Steven Sinofsky rhapsodizing about how Steve Jobs and his firm were able to move away from the WIMP mode of thinking to using our fingers.

And from there, it didn’t take long to move to using just our voice as a means of interacting with the computers we now have all around us.

Voice-operated computing systems:

That leaves the business model, and this is perhaps Amazon’s biggest advantage of all: Google doesn’t really have one for voice, and Apple is for now paying an iPhone and Apple Watch strategy tax; should it build a Siri-device in the future it will likely include a healthy significant profit margin.

Amazon, meanwhile, doesn’t need to make a dime on Alexa, at least not directly: the vast majority of purchases are initiated at home; today that may mean creating a shopping list, but in the future it will mean ordering things for delivery, and for Prime customers the future is already here. Alexa just makes it that much easier, furthering Amazon’s goal of being the logistics provider — and tax collector — for basically everyone and everything.


Punch cards to WIMP, WIMP to fingers, and fingers to voice. As that last article makes clear, one needs to think not just about the evolution itself, but also about how business models have changed over time and caused input methods to change – and, equally, about how input methods have changed and caused business models to change.

In other words, understanding technology is as much about understanding economics, and strategy, as it is about understanding technology itself.

In the next Tuesday essay, we’ll take a look at Google in greater detail, and then at emergent business models in the tech space.


Five articles about Clayton Christensen

As I mentioned in yesterday’s post, Clayton Christensen passed away recently. Five articles about him in today’s write-up, to honour the man, and his most popular and lasting contribution to theory.

The Innovator’s Dilemma is what most people know Clayton Christensen for, and the book is a great read. It is slow going, be warned, but the idea is remarkable. And that idea is the theory of disruption.

First, a quick recap of the idea: “Disruption” describes a process whereby a smaller company with fewer resources is able to successfully challenge established incumbent businesses. Specifically, as incumbents focus on improving their products and services for their most demanding (and usually most profitable) customers, they exceed the needs of some segments and ignore the needs of others. Entrants that prove disruptive begin by successfully targeting those overlooked segments, gaining a foothold by delivering more-suitable functionality—frequently at a lower price. Incumbents, chasing higher profitability in more-demanding segments, tend not to respond vigorously. Entrants then move upmarket, delivering the performance that incumbents’ mainstream customers require, while preserving the advantages that drove their early success. When mainstream customers start adopting the entrants’ offerings in volume, disruption has occurred.

As I said, most people know of The Innovator’s Dilemma, but there was another book – and theory – called The Innovator’s Solution. But where the second theory was concerned, Ben Thompson wasn’t so convinced.

Read the whole thing, but if I had to summarize the argument (always a dangerous thing to attempt), it’s this: there’s a world of difference between B2B and B2C companies.

The excerpt below is from a fine profile of Clayton Christensen by Larissa MacFarquhar, and reading it (the entire thing) is recommended. You might also want to pair the excerpt with Thiel’s Christianity. At any rate, I was reminded of it.

Mormons believe that family is for eternity, and that in Heaven they will be together with their relatives as they were on earth. They believe that after death they will grow to resemble their heavenly parents as children grow to resemble earthly parents, until eventually they become gods.

Also from the New Yorker, a rather less complimentary piece about the efficacy of the theory of disruption:

Christensen has compared the theory of disruptive innovation to a theory of nature: the theory of evolution. But among the many differences between disruption and evolution is that the advocates of disruption have an affinity for circular arguments. If an established company doesn’t disrupt, it will fail, and if it fails it must be because it didn’t disrupt. When a startup fails, that’s a success, since epidemic failure is a hallmark of disruptive innovation.

Joshua Gans writes in his honour, upon his passing, and the link is here.

And finally, I found this advice from an essay written by Clayton Christensen very useful indeed – and of course, the rest of the essay is also very well written!


In using this model to address the question, How can I be sure that my family becomes an enduring source of happiness?, my students quickly see that the simplest tools that parents can wield to elicit cooperation from children are power tools. But there comes a point during the teen years when power tools no longer work. At that point parents start wishing that they had begun working with their children at a very young age to build a culture at home in which children instinctively behave respectfully toward one another, obey their parents, and choose the right thing to do. Families have cultures, just as companies do. Those cultures can be built consciously or evolve inadvertently.