Airtel, Amazon, and Untangling Some Thoughts

Capital Mind, one of my favorite blogs to read, recently posted an excellent write-up on how Bharti Airtel has fared over the last three to four years. You might have to sign up in order to read it, but happily, Capital Mind allows you a free trial, so you should still be able to access it.

Why do I think you should read it, and why am I talking about this today? Because we need to think about telecommunications, technology, monopolies, scale, regulations, and FDI in order to understand why Amazon may well be interested in buying Airtel, or at least in owning a stake.

Airtel has since issued a boilerplate disclaimer, but, well. Come on.

But hang on a second. We first need to get a basic framework in place, before we start thinking about everything else.

Our framework will consist of three things (or actions) that we tend to do on the internet, three international behemoths that are very, very interested in India, and three telecommunications firms that are very heavily invested in India.

First, the three things that all of us tend to do on the Internet. We create content, we consume content and we engage in commerce.

Let’s begin with the one in the middle. When you’re lying on your sofa at three in the morning, flicking through Netflix’s endless library of content, you are very much a consumer. When you roll your eyes at the latest dripping-with-insanity forward you receive on your family WhatsApp group, you are a consumer of content online. When you run a search for a PDF that will help you finish an assignment in college: consumer. For most of us, the internet enables us to consume stuff at levels that have never before been possible. Music, videos, podcasts, the written word: all consumption.

Now let’s move on to the one on the left: creation. You weren’t around when I wrote these words that you are reading, but I was creating stuff. Your latest Instagram story? Creation of content. The latest GIF that you created before sharing it? Creation of content. Do you upload videos on Insta/YouTube/Vimeo? Do you blog? Do you create podcasts? All content creation.

And finally, the reason Jeff and Mukesh are as rich as they are: commerce. Online shopping has exploded in value before our eyes. Amazon, Flipkart, Nykaa, all of Jio’s online shopping, MakeMyTrip, Oyo, Airbnb, Uber, Zomato… the list is endless, and exhausting. That’s the third thing you do online. Commerce.

Ah, you might say. What about a WhatsApp call with friends, or a Skype call with family, or a Zoom online seminar (god help us all)? That’s arguably consumption and creation at the same time, no?

Well yes. Or call it communication.

Creation and consumption of content are really two sides of the same coin, and when they happen simultaneously, they become the communication we spoke about in the last paragraph.

There is a world of thinking to be done about the blue rectangle on the left. About how Google cornered the consumption of stuff online using Gmail, Google Maps etc., by piggybacking on its search monopoly, and about how Facebook took away the search monopoly by creating its own walled garden and making Google search irrelevant within it, and how Google tried to respond with Google Buzz-no-Wave-no-Plus-no-WhateverNext and failed… and this can go on. But here’s the quick takeaway:

When it comes to content creation, or content consumption, or communication, Google and Facebook have the market pretty much tied up between them. The battle over which of them wins will continue for a while, and it will be a fascinating story, but for our purposes today, it is enough to realize that Google and Facebook are mostly on the left. That’s not entirely true (the Google Play Store, Froogle and Facebook Marketplace being just some examples), but it’s good enough for now.

Google and Facebook are mostly communication-based firms that dabble in commerce.

And Amazon, of course, is the easiest example to think of when it comes to online commerce. The Amazon app, sure, but also its delivery and logistics arm, and, of course, AWS. If I want to buy stuff online, Amazon is the first – and more often than not, the only – thing that comes to mind. Zomato and Swiggy for food, Uber and Ola for travel, OYO and Airbnb for hotels/lodging, MakeMyTrip/Cleartrip/Yatra for travel are also very valid examples. But in this space we are not, as consumers, passively consuming content: our approach is specifically transactional.

But then things began to get complicated.

Consider Amazon. Commerce company, very much so. But what about Amazon Prime Video? What about Amazon Prime Music? What about Amazon Photos? What about Alexa and the Echo family?

Or consider Google. What about, as we have already mentioned, the Google Play Store? What about Froogle? What about Play Movies, Play Books?

Our neat little framework now has overlaps, and there are insurgencies along this virtual boundary. But we can add to our framework to help us keep it relatively simple:

Google started life as a software firm, then began to make hardware as well (Nexus, Pixel, Chromebooks, Pixel tablets, Google Glass, etc.). Of course people could create and consume content on these devices. Of course these devices would help Google learn more about the people who owned them. But wouldn’t it be great (Google thought) if we could make moaarrrrr money by using this gleaned information ourselves? Hey, let’s get into commerce.

Facebook tried to say the same thing, but with rather more limited success.

And Amazon, a firm that started life as an online seller, started to make hardware as well, precisely so as to learn more about people’s consumption habits online and offline. That’s the Echo devices, the Kindle, the Firestick and so on.

And don’t forget Apple! They can no longer rely on selling hardware alone for growth, mostly because they have already sold all the devices they possibly can to as many people as they possibly can (at least in the USA, but they’re coming for you too). And so, services! Apple Music, Apple TV, iCloud – none of these are hardware; they’re all about the consumption of content.

So even our latest attempt at simplifying the framework fails, because none of these blue rectangles are neatly delineated: firms from every blue rectangle want to be present in the creation, consumption and commerce space.

They want to do this for a variety of reasons, but the most important reason is simply the following: they’d much rather get a “360 degree” view of their consumer, without having to rely on some other firm to share information.

If, for example, Jio manufactures the device I use to go online (JioPhone), and I log on to that device to watch JioTV, and visit Ajio to buy clothes using that device, and post about the sneakers I just bought on a social media platform owned by Jio (or well, something like that), then I’ve obviated the need for Google! Neither the device, nor the streaming platform, nor the shopping platform, nor the social media platform has anything to do with Google. How, then, does Google know me well enough to advertise effectively to me?

But, if I use a Pixel phone to stream content on my Chromecast device, and buy a pair of sneakers on Flipkart (well, in a parallel universe…) and post about it on whatever is Google’s next attempt to build a social media platform, I’m living entirely in the Google Universe.

It’s no longer about companies living in one blue rectangle, you see. It is about one company dominating all blue rectangles, and so knowing everything there is to know about the consumer. That’s the end game here.

And speaking of all blue rectangles…

And that, my friends, is why Amazon wants to be friends with Airtel, and Google wants to be friends with Vodafone.

Because Mukesh has his finger in each of these pies, and Mark has acknowledged as much.

Homework: as a consumer, and as an investor, which of the three are you betting on? Amazon, Google or Jio? Why?

Complements, Substitutes and Examinations

Writing all of what I wrote in February 2020 was a lot of fun, and gave rise to a series of interesting, interlinked ideas.

In today’s essay, I want to explore one of these interlinked ideas: I want to riff on the concept made famous by Steve Jobs: the computer as a bicycle for the mind. But with an Econ 101 twist to the topic!

I’ve already linked to the video where Steve Jobs speaks about this, but just in case you haven’t seen it, here’s the video:

As I mentioned in the post “Apple Through Five Articles”, Steve Jobs was essentially saying that the computer is a complementary good for the mind: that the mind becomes far more powerful, far more useful as a tool when used in conjunction with a computer.

A complement refers to a complementary good or service used in conjunction with another good or service. Usually, the complementary good has little to no value when consumed alone, but when combined with another good or service, it adds to the overall value of the offering. A product can be considered a compliment (sic) when it shares a beneficial relationship with another product offering, for example, an iPhone complements an app.
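An Econ 101 aside, and my own formalization rather than anything from the excerpt above: economists typically identify complements by the sign of the cross-price elasticity of demand. In LaTeX notation:

    E_{xy} = \frac{\%\,\Delta Q_x}{\%\,\Delta P_y}, \qquad E_{xy} < 0 \Rightarrow \text{complements}, \qquad E_{xy} > 0 \Rightarrow \text{substitutes}

If the price of apps (P_y) were to rise and the quantity of iPhones demanded (Q_x) were to fall as a result, that negative cross-price elasticity would confirm the two are complements. Keep the sign convention in mind; it will do some work for us when we get to examinations below.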

One way to understand Apple is to understand that Jobs effectively ensured that Apple built better and better computers. Apple has continued to do that even after Jobs has passed on, but they’ve been building computers all along. You can call them Macs and iPhones and iPads and Apple Watches, but they’re really computers.

But that’s not the focus of this piece. The focus of this piece is to think about this as an economist. If the mind is made more useful when it is able to complement the processing power of the computer, then the world is obviously more productive now that many more minds are being complemented with many more computers. My writing this piece on my laptop, and your reading it on your device, is the most appropriate example – or so we shall assume.

But viewed this way, I would argue that we get the design of most of our examinations wrong. Rote memorization, or “mugging up”, is still the default method for evaluating whether a student has learnt a particular subject. Mugging up is just another way of saying that we ask the student to substitute for the computer, not complement it!

When we reject open book examinations, when we reject the ability to write a paper using laptops/tablets that are connected to the internet, when we force students to substitute for computers, rather than use them to write better, richer, more informed answers, we’re actively rejecting the analogy of the bicycle for the mind.

To say nothing, of course, of the irrelevance of forcing people to write examinations for three hours using pen and paper. But that’s a topic for another day.

Right now, suffice it to say that when it comes to examinations in India, Steve Jobs would almost certainly have not approved.

Bottom line: If computers are a complement, our examinations are incorrectly designed, and we end up testing skills that are no longer relevant.

And the meta-skill you might take away from this essay is that a lot of ideas in economics are applicable in entirely surprising and unexpected areas!

I hope some of you disagree, and we can argue a bit about this. I look forward to it! 🙂

How do you interact with your computer?

“Alexa, play Hush, by Deep Purple.”

That’s my daughter, all of six years old. Leave aside for the moment the pride that I feel as a father and a fan of classic rock.

My daughter is coding.


My dad was in Telco for many years, which was what Tata Motors used to call itself back in the day. I do not remember the exact year, but he often regales us with stories about how Tata Motors procured its first computer. Programming it was not child’s play – in fact, interacting with it required the use of punch cards.

I do not know if it was the same type of computer, but watching this video gives us a clue about how computers of this sort worked.


The guy in the video, the computer programmer in Telco and my daughter are all doing the same thing: programming.

What is programming?

Here’s Wikiversity:

Programming is the art and science of translating a set of ideas into a program – a list of instructions a computer can follow. The person writing a program is known as a programmer (also a coder).

Go back to the very first sentence in this essay, and think about what it means. My daughter is instructing a computer called Alexa to play a specific song, by a specific artist. To me, that is a list of instructions a computer can follow.
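To make that concrete, here is a minimal, entirely hypothetical sketch of the same instruction written out as explicit code. The catalogue and the play_song function below are invented for illustration; this is not Alexa’s actual API, just the shape of the idea:

    # "Alexa, play Hush, by Deep Purple", spelled out as a list of
    # instructions a computer can follow. Everything here is hypothetical.
    catalogue = {
        ("Hush", "Deep Purple"): "hush.mp3",
    }

    def play_song(title, artist):
        """Look up a (title, artist) pair and 'play' the matching track."""
        track = catalogue.get((title, artist))
        if track is None:
            print(f'Sorry, I could not find "{title}" by {artist}.')
        else:
            print(f'Now playing: "{title}", by {artist} [{track}]')

    play_song("Hush", "Deep Purple")

The voice assistant’s job is to get from the spoken sentence to something like that last function call. My daughter supplies the program; the machine does the translating.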

From using punch cards to using our voice and not even realizing that we’re programming: we’ve come a long, long way.


It’s one thing to be awed at how far we’ve come; it is quite another to think about the path we’ve taken to get there. When we learnt about mainframes, about Apple, about Microsoft and about laptops, we learnt about the evolution of computers, and some of the firms that helped us get there. I have not yet written about Google (we’ll get to it), but there’s another way to think about the evolution of computers: we think about how we interact with them.

Here’s an extensive excerpt from Wikipedia:

In the 1960s, Douglas Engelbart’s Augmentation of Human Intellect project at the Augmentation Research Center at SRI International in Menlo Park, California developed the oN-Line System (NLS). This computer incorporated a mouse-driven cursor and multiple windows used to work on hypertext. Engelbart had been inspired, in part, by the memex desk-based information machine suggested by Vannevar Bush in 1945.

Much of the early research was based on how young children learn. So, the design was based on the childlike primitives of eye-hand coordination, rather than use of command languages, user-defined macro procedures, or automated transformations of data as later used by adult professionals.

Engelbart’s work directly led to the advances at Xerox PARC. Several people went from SRI to Xerox PARC in the early 1970s. In 1973, Xerox PARC developed the Alto personal computer. It had a bitmapped screen, and was the first computer to demonstrate the desktop metaphor and graphical user interface (GUI). It was not a commercial product, but several thousand units were built and were heavily used at PARC, as well as other XEROX offices, and at several universities for many years. The Alto greatly influenced the design of personal computers during the late 1970s and early 1980s, notably the Three Rivers PERQ, the Apple Lisa and Macintosh, and the first Sun workstations.

The GUI was first developed at Xerox PARC by Alan Kay, Larry Tesler, Dan Ingalls, David Smith, Clarence Ellis and a number of other researchers. It used windows, icons, and menus (including the first fixed drop-down menu) to support commands such as opening files, deleting files, moving files, etc. In 1974, work began at PARC on Gypsy, the first bitmap What-You-See-Is-What-You-Get (WYSIWYG) cut & paste editor. In 1975, Xerox engineers demonstrated a Graphical User Interface “including icons and the first use of pop-up menus”.

In 1981 Xerox introduced a pioneering product, Star, a workstation incorporating many of PARC’s innovations. Although not commercially successful, Star greatly influenced future developments, for example at Apple, Microsoft and Sun Microsystems.

If you feel like diving into this topic and learning more about it, Daring Fireball has a lot of material about Alan Kay, briefly mentioned above.

So, as the Wikipedia article mentions, we moved away from punch cards and, via hand-eye coordination, entered the WIMP (windows, icons, menus, pointer) era.

It took a genius to move humanity into the next phase of machine-human interaction.


The main tweet shown above is Steven Sinofsky rhapsodizing about how Steve Jobs and his firm were able to move away from the WIMP mode of thinking to using our fingers.

And from there, it didn’t take long to move to using just our voice as a means of interacting with the computers we now have all around us.

Voice-operated computing systems:

That leaves the business model, and this is perhaps Amazon’s biggest advantage of all: Google doesn’t really have one for voice, and Apple is for now paying an iPhone and Apple Watch strategy tax; should it build a Siri-device in the future it will likely include a healthy profit margin.

Amazon, meanwhile, doesn’t need to make a dime on Alexa, at least not directly: the vast majority of purchases are initiated at home; today that may mean creating a shopping list, but in the future it will mean ordering things for delivery, and for Prime customers the future is already here. Alexa just makes it that much easier, furthering Amazon’s goal of being the logistics provider — and tax collector — for basically everyone and everything.


Punch cards to WIMP, WIMP to fingers, and fingers to voice. As that last article makes clear, one needs to think not just of the evolution, but also about how business models have changed over time, and have caused input methods to change – but also how input methods have changed, and caused business models to change.

In other words, understanding technology is as much about understanding economics, and strategy, as it is about understanding technology itself.

In the next Tuesday essay, we’ll take a look at Google in greater detail, and then at emergent business models in the tech space.


Apple through five articles

I’ve said it before and I’ll say it again. If you are really and truly into the Apple ecosystem, you could do a lot worse than just follow the blog Daring Fireball. I mean that in multiple ways. It is one of the most popular (if not the most popular) blogs on all things Apple. Popularity isn’t a great metric for deciding if reading something is worth your time, but in this case, the popularity is spot on.

But it’s more than that: another reason for following John Gruber’s blog is you learn about a trait that is common to this blog and to Apple: painstaking attention to detail. Read this article, but read especially footnote 2, to get a sense of what I mean. There are many, many examples of Apple’s painstaking attention to detail, of course, but this story is one of my favorites.

There is no end to these kind of stories, by the way. Here’s another:

Prior to the patent filing, Apple carried out research into breathing rates during sleep and found that the average respiratory rate for adults is 12–20 breaths per minute. They used a rate of 12 cycles per minute (the low end of the scale) to derive a model for how the light should behave to create a feeling of calm and make the product seem more human.

But finding the right rate wasn’t enough, they needed the light to not just blink, but “breathe.” Most previous sleep LEDs were just driven directly from the system chipset and could only switch on or off and not have the gradual glow that Apple integrated into their devices. This meant going to the expense of creating a new controller chip which could drive the LED light and change its brightness when the main CPU was shut down, all without harming battery life.
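As a purely illustrative aside: 12 cycles per minute means one full “breath” every five seconds, and the point of the custom controller was a smooth glow rather than an on/off blink. Here is a minimal sketch of what such a brightness curve might look like; the cosine shape and the function below are my own assumptions, not Apple’s actual implementation:

    import math

    # A "breathing" LED brightness curve: 12 cycles per minute
    # means one full breath every 5 seconds.
    CYCLE_SECONDS = 60 / 12  # 5 seconds per breath

    def brightness(t):
        """Brightness in [0, 1] at time t (seconds), rising and falling smoothly."""
        phase = 2 * math.pi * t / CYCLE_SECONDS
        return 0.5 * (1 - math.cos(phase))  # smooth glow, not an on/off blink

    for t in [0, 1.25, 2.5, 3.75, 5]:
        print(f"t={t:5.2f}s brightness={brightness(t):.2f}")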

Anybody who is an Android/PC user instead (as I am) can’t help but be envious of this trait in Apple products.

There are many, many books/videos/podcasts you can refer to in order to understand Apple’s growth in the early years, and choosing any one reference doesn’t mean the others are worse, but let’s begin with this simple one. Here’s Wikipedia on the same topic, with much more detail, of course.

Before it became one of the wealthiest companies in the world, Apple Inc. was a tiny start-up in Los Altos, California. Co-founders Steve Jobs and Steve Wozniak, both college dropouts, wanted to develop the world’s first user-friendly personal computer. Their work ended up revolutionizing the computer industry and changing the face of consumer technology. Along with tech giants like Microsoft and IBM, Apple helped make computers part of everyday life, ushering in the Digital Revolution and the Information Age.

The production function and complementary goods are two topics that every student of economics is taught again and again. Here’s how Steve Jobs explains it:


You can’t really learn the history of Apple without learning about John Sculley firing Steve Jobs.

According to Sculley’s wishes, Steve Jobs was to represent the company externally as a new Apple chairman without influencing the core business. As Jobs got wind of these plans to deprive him of his power, he tried to arrange a coup against Sculley on the Apple board. Sculley told the board: “I’m asking Steve to step down and you can back me on it and then I take responsibility for running the company, or we can do nothing and you’re going to find yourselves a new CEO.” The majority of the board backed the ex-Pepsi man and turned away from Steve Jobs.

And the one guy who helped Steve Jobs achieve his vision for Apple once Jobs came back was, of course, Jony Ive. This is a very, very long article but a fun read, not just about the relationship between Ive and Jobs, but also about Ive and Apple. Jony Ive no longer works at Apple of course (well, kinda sorta), but you can’t understand Apple without knowing more about Ive.

Jobs’s taste for merciless criticism was notorious; Ive recalled that, years ago, after seeing colleagues crushed, he protested. Jobs replied, “Why would you be vague?,” arguing that ambiguity was a form of selfishness: “You don’t care about how they feel! You’re being vain, you want them to like you.” Ive was furious, but came to agree. “It’s really demeaning to think that, in this deep desire to be liked, you’ve compromised giving clear, unambiguous feedback,” he said. He lamented that there were “so many anecdotes” about Jobs’s acerbity: “His intention, and motivation, wasn’t to be hurtful.”

Apple has been, for most of its history, defined by its hardware. That still remains true, for the most part. But where do Apple News, Apple Music, and Apple TV fit in?

Apple, in the near future, will be as much about services as it is about hardware, and maybe more so. That, according to Ben Thompson, is the most likely (and correct, in his view) trajectory for Apple.

CEO Tim Cook and CFO Luca Maestri have been pushing the narrative that Apple is a services company for two years now, starting with the 1Q 2016 earnings call in January, 2016. At that time iPhone growth had barely budged year-over-year (it would fall the following three quarters), and it came across a bit as a diversion; after all, it’s not like the company was changing its business model

From Mainframes to Personal Computing: The Journey

From mainframes to desktops, from desktops to laptops, from laptops to phones, and from phones to watches. So far. As I said in the previous edition of Tech Tuesdays, my daughter doesn’t think of Alexa as a computer, but what is Alexa if not one?

We are an empowered species today, for most of us – not all, to be sure, but most of us – carry around with us more computing power than was used to send people to the moon. Not only do we take it for granted, but the programming itself is done at such a high level that we aren’t even aware that we’re programming a machine.

For example: when my daughter says to Alexa, “Set a timer for five minutes”, she’s really programming a computer to emit a series of beeps in three hundred seconds, starting now. But this wasn’t always the case. There was a time when people were excited about the fact that they could get a machine home into which they would have to laboriously (by our current standards) input a series of instructions for it to do certain things.
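Written out as explicit instructions – again a minimal, hypothetical sketch, and emphatically not how Alexa actually implements timers – her request might look something like this:

    import time

    # "Set a timer for five minutes", spelled out: convert five minutes
    # into three hundred seconds, wait that long (starting now), and then
    # emit a series of beeps.
    def set_timer(minutes):
        seconds = minutes * 60   # 5 minutes -> 300 seconds
        time.sleep(seconds)      # wait, starting now
        for _ in range(3):
            print("\aBEEP")      # "\a" is the terminal bell character

    set_timer(5)

The difference between her and the Telco programmer is not what they are doing, but how high up the ladder of abstraction they get to stand while doing it.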

What kind of machines were these? Who made them, for what reason? What changed in terms of ease of use, design, and available accessories – and with what results for us, as society?

In today’s set of five links, we take a look at the answers to some of these questions.

  1. “At the time that IBM had decided to enter the personal computer market in response to Apple’s early success, IBM was the giant of the computer industry and was expected to crush Apple’s market share. But because of these shortcuts that IBM took to enter the market quickly, they ended up releasing a product that was easily copied by other manufacturers using off the shelf, non-proprietary parts. So in the long run, IBM’s biggest role in the evolution of the personal computer was to establish the de facto standard for hardware architecture amongst a wide range of manufacturers. IBM’s pricing was undercut to the point where IBM was no longer the significant force in development, leaving only the PC standard they had established. Emerging as the dominant force from this battle amongst hardware manufacturers who were vying for market share was the software company Microsoft that provided the operating system and utilities to all PCs across the board, whether authentic IBM machines or the PC clones.”
    ..
    ..
    The excerpt above comes a long way into the Wikipedia article, and the correct way to read the article, if you ask me, is to scan through it rather than read every single word. But the excerpt, for an economist, is the most interesting part, for it explains how Microsoft became Microsoft – because of an ill-thought-out strategy by IBM!
    ..
    ..
  2. “Although the company knew that it could not avoid competition from third-party software on proprietary hardware—Digital Research released CP/M-86 for the IBM Displaywriter, for example—it considered using the IBM 801 RISC processor and its operating system, developed at the Thomas J. Watson Research Center in Yorktown Heights, New York. The 801 processor was more than an order of magnitude more powerful than the Intel 8088, and the operating system more advanced than the PC DOS 1.0 operating system from Microsoft. Ruling out an in-house solution made the team’s job much easier and may have avoided a delay in the schedule, but the ultimate consequences of this decision for IBM were far-reaching.”
    ..
    ..
    As economists, we’re interested in understanding the fact that we have the power to be vastly more productive now that we all have our own personal computers, sure, but we’re also interested in finding out why firms who were the giants of their time (lookin’ at you, IBM) didn’t make the transition over to being the giants of the personal computing era. We’re interested in this in and of itself, of course, but also so that we can apply these lessons to the giants of our time.
    ..
    ..
  3. “The 90-minute presentation essentially demonstrated almost all the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a collaborative real-time editor (collaborative work). Engelbart’s presentation was the first to publicly demonstrate all of these elements in a single system. The demonstration was highly influential and spawned similar projects at Xerox PARC in the early 1970s. The underlying technologies influenced both the Apple Macintosh and Microsoft Windows graphical user interface operating systems in the 1980s and 1990s.”
    ..
    ..
    I learnt about this only while researching links for this series: the mother of all demos that inspired, essentially, what we know as personal computing today. Fascinating stuff!
    ..
    ..
  4. “Now that computers were up and running with Microsoft’s operating system, the next step was to build tools to streamline the user experience. Today, it’s hard to imagine a world where computers didn’t run programs such as Microsoft Word, PowerPoint and Excel. These “productivity applications,” as Microsoft calls them, were revolutionary tools for getting work done. They automated many of the aspects of word processing, accounting, creating presentations and more. Plus, Microsoft’s deal with Apple allowed it to develop versions of these programs for Macintosh computers. Over the years, Microsoft has provided updates to the Office Suite, from additional programs (such as Outlook and Access) to additional features.”
    ..
    ..
    Yes, this is a listicle, but a useful, mostly informative one. The excerpt above comes midway through the article, and the rest of the article speaks about Microsoft’s attempted move towards becoming a hardware-focused firm, and the subsequent move towards being, well, a software-focused firm under Nadella. We’ll be focusing on Microsoft next Tuesday, so consider this an appetizer.
    ..
    ..
  5. “The iPhone’s potential was obviously deep, but it was so deep as to be unfathomable at the time. The original iPhone didn’t even shoot video; today the iPhone and iPhone-like Android phones have largely killed the point-and-shoot camera industry. It has obviated portable music players, audio recorders, paper maps, GPS devices, flashlights, walkie-talkies, music radio (with streaming music), talk radio (with podcasts), and more. Ride-sharing services like Uber and Lyft wouldn’t even make sense pre-iPhone. Social media is mobile-first, and in some cases mobile-only. More people read Daring Fireball on iPhones than on desktop computers.”
    ..
    ..
    John Gruber rhapsodizing about a whole variety of things, but mostly about how the iPhone was the culmination of the long journey that began with the move away from mainframes. It is exhilarating to realize how far we’ve come! Two weeks from now, we’ll also take a look at Apple’s long journey.

Tech: Links for 17th September, 2019

  1. “Never attribute to malice that which is adequately explained by stupidity”.
    ..
    ..
  2. Jason Snell reviews the iPhone launch event.
    ..
    ..
  3. … as does Ben Thompson.
    ..
    ..
  4. The importance of the U1 chip.
    ..
    ..
  5. I don’t quite remember how I landed up here, but this was interesting for a variety of reasons. On a company called OKCredit.

Tech: Links for 2nd July, 2019

Five articles from tech, but about something that took place some twelve years ago.

  1. “One of the most important trends in personal technology over the past few years has been the evolution of the humble cellphone into a true handheld computer, a device able to replicate many of the key functions of a laptop. But most of these “smart phones” have had lousy software, confusing user interfaces and clumsy music, video and photo playback. And their designers have struggled to balance screen size, keyboard usability and battery life.”
    ..
    ..
    Thus began Walt Mossberg’s review of the first ever iPhone. That review is fun to read in order to understand how far smartphones have come since then, and what we took for granted then, and what we take for granted now.
    ..
    ..
  2. “With the iPhone XS and Apple Neural Engine, the input isn’t an image, it’s the data right off the sensors. It’s really kind of nuts how fast the iPhone XS camera is doing things in the midst of capturing a single image or frame of video. One method is to create an image and then apply machine learning to it. The other is to apply machine learning to create the image. One way Apple is doing this with video is by capturing additional frames between frames while shooting 30 FPS video, even shooting 4K. The whole I/O path between the sensor and the Neural Engine is so fast the iPhone XS camera system can manipulate 4K video frames like Neo dodging bullets in The Matrix.”
    ..
    ..
    That was then, this is now – well, this is also last year. John Gruber reviews the iPhone XS, and reading both reviews one after the other shows just how far we’ve come.
    ..
    ..
  3. “…I’m not convinced that anyone at Google fully thought through the implication of favoring Android with their services. Rather, the Android team was fully committed to competing with iOS — as they should have been! — and human nature ensured that the rest of Google came along for the ride. Remember, given Google’s business model, winning marketshare was perfectly correlated with reaping outsized profits; it is easy to see how the thinking and culture that developed around Google’s core business failed to adjust to the zero-sum world of physical devices. And so, as that Gundotra speech exemplified, Android winning became synonymous with Google winning, when in fact Android was as much ouroboros as asset.”
    ..
    ..
    It’s not just technology that changed then – entire ecosystems and business models had to be changed, updated, pilfered. Microsoft, obviously, but most significantly, Google.
    ..
    ..
  4. “There’s that word I opened with: “future”. As awesome as our smartphones are, it seems unlikely that this is the end of computing. Keep in mind that one of the reasons all those pre-iPhone smartphone initiatives failed, particularly Microsoft’s, is that their creators could not imagine that there might be a device more central to our lives than the PC. Yet here we are in a world where PCs are best understood as optional smartphone accessories. I suspect we will one day view our phones the same way: incredibly useful devices that can do many tasks better than anything else, but not ones that are central for the simple reason that they will not need to be with us all of the time. After all, we will have our wearables.”
    ..
    ..
    One risk that all of us run is to think of the future in terms of what exists now – which is one reason why 2007 was such big news for tech and then for all of us. What might a similar moment be in the near future? Earlier, you had to have a computer, and it was nice to have a smartphone. Now, you have to have a smartphone, and it is nice to have a computer. When might it be nice to have a smartphone, while you have to have a ‘wearable’?
    ..
    ..
  5. “The Defense Advanced Research Projects Agency (DARPA) is developing a new “Molecular Informatics” program that uses molecules as computers. “Chemistry offers a rich set of properties that we may be able to harness for rapid, scalable information storage and processing,” Anne Fischer, program manager in DARPA’s Defense Sciences Office, said in a statement. “Millions of molecules exist, and each molecule has a unique three-dimensional atomic structure as well as variables such as shape, size, or even color. This richness provides a vast design space for exploring novel and multi-value ways to encode and process data beyond the 0s and 1s of current logic-based, digital architectures.” ”
    ..
    ..
    Not just the rather unimaginable (to me, at any rate) thought of molecules as computers (did I get that right?!), but also a useful timeline of how computers have evolved. Also note how the rate of “getting better” has gotten faster over time!

Tech: Links for 25th June, 2019

I have linked to some of these pieces in the past, but this set of posts is still useful in terms of creating a common set of links in one place for you to understand how to think about Aggregation Theory. If you can afford it, I highly recommend Stratechery!

  1. “What is the critical differentiator for incumbents, and can some aspect of that differentiator be digitized?
    ..
    ..
    If that differentiator is digitized, competition shifts to the user experience, which gives a significant advantage to new entrants built around the proper incentives
    ..
    ..
    Companies that win the user experience can generate a virtuous cycle where their ownership of consumers/users attracts suppliers which improves the user experience”
    ..
    ..
    Begin here: this piece explains what aggregation theory is all about, and why it matters.
    ..
    ..
  2. “Super-Aggregators operate multi-sided markets with at least three sides — users, suppliers, and advertisers — and have zero marginal costs on all of them. The only two examples are Facebook and Google, which in addition to attracting users and suppliers for free, also have self-serve advertising models that generate revenue without corresponding variable costs (other social networks like Twitter and Snapchat rely to a much greater degree on sales-force driven ad sales).”
    ..
    ..
    Aggregators on steroids: what exactly makes Google and Facebook what they are? This article helps you understand this clearly. Also read the article on super aggregators itself.
    ..
    ..
  3. “There is a clear pattern for all four companies: each controls, to varying degrees, the entry point for customers to the category in which they compete. This control of the customer entry point, by extension, gives each company power over the companies actually supplying what each company “sells”, whether that be content, goods, video, or life insurance.”
    ..
    ..
    This article explains the FANG playbook, and how they became what they are today: Facebook, Amazon, Netflix, Google.
    ..
    ..
  4. “To explain why, it is worth examining all four companies with regards to:
    Whether or not they have a durable monopoly
    What anticompetitive behavior they are engaging in
    What remedies are available
    What will happen in the future with and without regulator intervention”
    ..
    ..
    Ben Thompson states just above this paragraph that he is neither a lawyer nor an economist. But the last two questions in the list above show that he’d make a pretty good economist. He is, in essence, asking what the opportunity cost of breaking up these firms would be. As the song goes: with the bad comes the good, and with the good comes the bad.
    ..
    ..
  5. “All those apps are doing is providing an algorithm that lowers search costs and makes booking easy. Expedia didn’t design, build and maintain the airplane that flew him to Sydney; build or operate the airport; train pilots; or find, produce, refine and transport the necessary jet fuel to power the plane over its continental voyage. Uber didn’t design and manufacture the car used to transport him to his hotel; find, produce, and process the raw materials that go into it (such as steel and aluminium); or actually drive him from the airport to his hotel. AirBnB didn’t design, build, maintain, or clean the house he stayed in, nor supply it with electricity. UberEats and OpenTable didn’t grow and process any raw foodstuffs, or use them to cook a meal, and TripAdvisor didn’t design, manufacture or operate any of the tourist attractions he visited. In fact, all these companies did was write some pretty simple code that made matching buyers with sellers easier and more efficient, and the real question that should be being asked is whether these platform companies are extracting too much value from the supply chain relative to their value-add, and whether that is likely to be a sustainable situation in the long term, or will invite potential disruption and/or an eventual supply-side/regulatory response.”
    ..
    ..
    BUT, on the other hand, perhaps this is just old wine in a new bottle?

Etc: Links for 14th June, 2019

  1. “But here is a simple truth that many of us seem to resist: living too long is also a loss. It renders many of us, if not disabled, then faltering and declining, a state that may not be worse than death but is nonetheless deprived. It robs us of our creativity and ability to contribute to work, society, the world. It transforms how people experience us, relate to us, and, most important, remember us. We are no longer remembered as vibrant and engaged but as feeble, ineffectual, even pathetic.”
    Ezekiel J. Emanuel on how long he wants to live. Worth reading to ponder questions of mortality and what it means to each of us. Also worth reading up on: memento mori.
    ..
    ..
  2. “Indeed, the German hyperinflation was not even the worst of the twentieth century; its Hungarian equivalent, dating to 1945-46, was so much more severe that prices in Budapest began to double every 15 hours. (At the peak of this crisis, the Hungarian government was forced to announce the latest inflation rate via radio each morning, so workers could negotiate a new pay scale with their bosses, and issue the largest denomination banknote ever to be legal tender: the 100 quintillion (10^20) pengo note. When the debased currency was finally withdrawn, the total value of all the cash then in circulation in the country was reckoned at 1/10th of a cent. [Bomberger & Makinen pp.801-24; Judt p.87])”
    ..
    ..
    I wasn’t aware of the topic of this essay, which is not revealed in the excerpt above. Somewhat shamefully, I wasn’t even aware of the Hungarian episode quoted above! Read more, sir, read more!
    ..
    ..
  3. “Consider the first time a right-handed player tries to dribble with the left hand. It’s awkward, clumsy. Initially, the nerves that fire off signals to complete that task are controlled in the front cortex of the brain. Over time, with countless repetitions, those nerve firings become more insulated. The myelin sheath builds up. Eventually, less effort is required to use that left hand, and the brain processes it as second nature. The same is possible with pressure, according to neurologists. With repetition, stress can be transformed into fortitude.”
    ..
    ..
    Put yourself in pressure situations, and repeatedly. That’s the only way, this article says, to handle pressure. Lovely read!
    ..
    ..
  4. “The project in Colombia, a partnership with the nonprofit Conservation International, involves protecting mangrove forests, which can store 10 times as much carbon as terrestrial forests. In its first two years, the program is expected to reduce carbon emissions by 17,000 metric tons, roughly equal to the next decade of emissions from the lidar-equipped survey vehicles that update Apple Maps. “This is rare for Apple to say, but we are telling other companies to copy us on this,” Jackson says.”
    ..
    ..
    I have only glanced through this article, and haven’t come close to reading all the entries (a true rabbit hole), but there are lots of small, interesting snippets here about creativity. Not so much “how to be creative” advice, based on what I’ve seen, but rather descriptions of folks who are creative.
    ..
    ..
  5. “The (c)rapture I felt was likely a case of “poophoria,” explains Anish Sheth, the gastroenterologist and coauthor of toilet-side staple What’s Your Poo Telling You? “Some have compared it to a religious experience, others an orgasm,” he says. The exact science is unknown, but Sheth thinks the sensation may result from “a slightly prolonged buildup, an overdistension of the rectum, and immediate collapse by passing a sizable stool, which fires the vagus nerve and releases endorphins.” Lights-out pooping, Sheth adds, may “help with a proper rate of exit.””
    ..
    ..
    Truly etc., this. Wired magazine on, well, pooping in the dark.

Links for 10th May, 2019

  1. “Thus in his famous 1969 paper “Information and Efficiency-Another viewpoint” (reprinted in his The Organization of Economic Activity, vol.2, Blackwell, 1988), Harold coined the notion of the “nirvana fallacy” in criticising Kenneth Arrow’s claim using the Arrow-Debreu framework (in his Economic Welfare and the Allocation of Resources for Invention) that with ‘market failure’, government intervention could make markets more efficient. Demsetz argued that this assumed a perfect government whilst failing to consider if the actual intervention could be perfect. “Those who adopt the nirvana viewpoint seek to discover discrepancies between the ideal and the real and if discrepancies are found, they deduce the real is inefficient.”
    ..
    ..
    Deepak Lal on the passing of two economists, one of whom I am enjoying reading more of these days.
    ..
    ..
  2. “So while Spotify might get the better of Apple in this particular fight, it’s no angel. Its excessive collection of user data forced its CEO to apologize after a consumer backlash in 2015. It is one of several targets of a complaint under Europe’s stringent new GDPR data privacy rules. And let’s not forget that its own business model tends toward market dominance. An antitrust victory in the battle with Apple would be welcome for Spotify shareholders; for suffering musicians it would mean far less.”
    ..
    ..
    If you have been following the Spotify–Apple debate/drama about Apple taking a 30% cut, this is an article that does a good job of arguing both sides of the story.
    ..
    ..
  3. “What is the best way to protect and restore this public commons? Most of the proposals to change platform companies rely on either antitrust law or regulatory action. I propose a different solution. Instead of banning the current business model — in which platform companies harvest user information to sell targeted digital ads — new legislation could establish a tax that would encourage platform companies to shift toward a healthier, more traditional model.”
    ..
    ..
    A Pigouvian solution to a modern-day problem?
    ..
    ..
  4. “But Mr. Munger was there to talk about anything on his mind, which is just about everything. His favorite activity, he says, is figuring out “what works and what doesn’t and why.””
    ..
    ..
    Anything that helps you understand how Charlie Munger thinks is worth a read. Clicking through to that link will give you a full transcript of the interview, and I’d recommend you do so.
    ..
    ..
  5. “Preventing an impact is possible — theoretically. Humans need only change the asteroid’s velocity by a few centimeters per second; over the course of several orbits around the sun, that change adds up to push the rock fully in front of or behind the Earth. But the proposed methods for deflection are expensive and untested.”
    ..
    ..
    Not that I mean to get your weekend off to a bad start…