Makkhan, Magic and the Mind

It’s not like human beings don’t make stuff up. We do it all the time. In fact, one of my LinkedIn core competencies is the ability to stitch together 4 threads of fact with 100 threads of creative fabrication. Eyewitnesses regularly make stuff up under oath. Godmen regularly claim to confabulate with the divine. Children make up excuses with hilariously cute incompetence. Maybe we are also probabilistic auto-complete machines powered by wetware instead of software?

https://krishashok.me/2023/03/13/the-butter-crypto-nft-project/

There are a few people in the world who have the ability to depress you with their all-round awesomeness, and if you ask me, Krish Ashok is near the top of the list. Excellent at everything he does, including being the author of a most kick-ass blog. (He could be awesomer by updating his blog more often, but such is human nature – it always wants a little bit more). Anyway, please go read the whole post later – it would be a most excellent way to spend ten minutes or so. His post involves puns, fats, memes, mythology and a rumination on AI.

That excerpt above is a part of his rumination, and it is a question I want to start thinking about with the help of today’s blogpost. How does AI work, and is it like the human mind? You might quibble at the use of the word “mind”, rather than the word “brain”, but I have my reasons, and they aren’t just alliterative.


The brain creates a predictive model. This just means that the brain continuously predicts what its inputs will be. Prediction isn’t something that the brain does every now and then; it is an intrinsic property that never stops, and it serves an essential role in learning. When the brain’s predictions are verified, that means the brain’s model of the world is accurate. A mis-prediction causes you to attend to the error and update the model.

The model can be wrong. For example, people who lose a limb often perceive that the missing limb is still there. The brain’s model includes the missing limb and where it is located. So even though the limb no longer exists, the sufferer perceives it and feels that it is still attached. The phantom limb can “move” into different positions. Amputees may say that their missing arm is at their side, or that their missing leg is bent or straight. They can feel sensations, such as an itch or pain, located at particular locations on the limb. These sensations are “out there” where the limb is perceived to be, but, physically, nothing is there. The brain’s model includes the limb, so, right or wrong, that is what is perceived…

A false belief is when the brain’s model believes that something exists that does not exist in the physical world. Think about phantom limbs again. A phantom limb occurs because there are columns in the neocortex that model the limb. These columns have neurons that represent the location of the limb relative to the body. Immediately after the limb is removed, these columns are still there, and they still have a model of the limb. Therefore, the sufferer believes the limb is still in some pose, even though it does not exist in the physical world. The phantom limb is an example of a false belief. (The perception of the phantom limb typically disappears over a few months as the brain adjusts its model of the body, but for some people it can last years.)

https://stratechery.com/2023/chatgpt-learns-computing/

Read the excerpt from Krish Ashok’s post, and compare it with the excerpt above, which comes from Ben Thompson’s equally excellent (though in a different way) post, called ChatGPT Learns Computing. There’s a lot going on in that post, and as always, please do read it in full, but I particularly enjoyed learning about the books written by Jeff Hawkins. Quick aside: the excerpt above is actually drawn from two different books by Hawkins – apologies for the mashup, but the quotes fit so well together that I went ahead and showed them as one. Ben (via Jeff Hawkins) seems to be making the point that we hallucinate too, and in some cases, pretty literally. It’s almost as if the second excerpt ends up answering the question raised in the first one!
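
If it helps to make Hawkins’ idea concrete in code, here is a toy sketch – entirely my own construction, and emphatically not how the neocortex actually works – of a model that predicts its next input on every step, and updates itself only when the prediction turns out to be wrong:

```python
from collections import defaultdict

# Toy "predictive model": always guess the most frequently seen follower
# of the current input; on a mis-prediction, update the counts (learn).
class TinyPredictor:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def predict(self, current):
        followers = self.counts[current]
        # Best guess given the model so far; None if nothing seen yet.
        return max(followers, key=followers.get) if followers else None

    def observe(self, current, actual):
        if self.predict(current) != actual:
            # Mis-prediction: attend to the error and update the model.
            self.counts[current][actual] += 1

p = TinyPredictor()
day = ["wake", "coffee", "work", "wake", "coffee", "work", "wake", "tea"]
for cur, nxt in zip(day, day[1:]):
    print(f"after {cur!r}: predicted {p.predict(cur)!r}, got {nxt!r}")
    p.observe(cur, nxt)
```

Run it and you’ll see the predictions start out empty, become correct once the routine repeats, and break when the routine changes – a cartoon version of “a mis-prediction causes you to attend to the error and update the model”.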


I meet two kinds of people these days. The first group revels in pointing out how ChatGPT fails at certain tasks, and therefore isn’t as good as it is made out to be. The second group can’t help but sing paeans to ChatGPT. Both would do well to acknowledge the points being made by the other side, but my own position is much closer to that of the second group than the first. Yes, it (ChatGPT) makes mistakes, and yes it isn’t perfect, but as Ben says elsewhere in his post, it is pretty awesome 95% of the time, and not so great – downright error-prone, even – about 5% of the time:

But the results are essentially never “perfect”. Maybe something works well 95% of the time. But try as one might, the other 5% remains elusive. For some purposes one might consider this a failure. But the key point is that there are often all sorts of important use cases for which 95% is “good enough”. Maybe it’s because the output is something where there isn’t really a “right answer” anyway. Maybe it’s because one’s just trying to surface possibilities that a human—or a systematic algorithm—will then pick from or refine… And yes, there’ll be plenty of cases where “raw ChatGPT” can help with people’s writing, make suggestions, or generate text that’s useful for various kinds of documents or interactions. But when it comes to setting up things that have to be perfect, machine learning just isn’t the way to do it—much as humans aren’t either. And that’s exactly what we’re seeing in the examples above. ChatGPT does great at the “human-like parts”, where there isn’t a precise “right answer”. But when it’s “put on the spot” for something precise, it often falls down. But the whole point here is that there’s a great way to solve this problem—by connecting ChatGPT to Wolfram|Alpha and all its computational knowledge “superpowers”

https://stratechery.com/2023/chatgpt-learns-computing/

Again, side note: that quote is actually by Stephen Wolfram, and I have simply excerpted an excerpt from Ben’s post. But it is the point that matters here, and the point is that yes, ChatGPT isn’t perfect. Two additional points, though: first, it can get better over time. And second, that is happening right before our eyes – not just because we’re now in GPT-4 territory rather than 3.5, but also because ChatGPT is augmenting its capabilities via plug-ins.


Now, here’s the part of Ben’s post that is confusing. Note that Krish Ashok asked in his post whether “we are also probabilistic auto-complete machines powered by wetware instead of software”. And the excerpt from Ben’s post seems to say yes, that may well be the case. Ben does go on to say that proving this is going to be difficult, but let’s, for now, go with this hypothesis – maybe we are probabilistic auto-complete machines.
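
Before getting to AI, it is worth seeing what “probabilistic auto-complete” even means. Here is a deliberately tiny sketch of my own – a bigram model, nowhere near what an actual LLM does at scale, but the same idea in miniature: count which word follows which, then sample continuations in proportion to those counts:

```python
import random
from collections import defaultdict

# A tiny bigram auto-complete: learn how often each word follows each
# other word, then sample next words in proportion to those counts.
def train(text):
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        counts[cur][nxt] += 1
    return counts

def autocomplete(counts, word, length=6):
    out = [word]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        # Probabilistic, not deterministic: likelier followers win more often.
        nxt, = random.choices(list(followers), weights=list(followers.values()))
        out.append(nxt)
    return " ".join(out)

corpus = "the brain predicts the world and the brain updates the model"
model = train(corpus)
print(autocomplete(model, "the"))  # e.g. "the brain updates the model"
```

Scale that idea up from counting word pairs to a neural network trained on a large chunk of the internet, and you have the (very rough) shape of the thing.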

And AI? AI is also a probabilistic auto-complete machine, just a much more powerful one. Much, much more powerful:

Computers are, at their core, incredibly dumb; a transistor, billions of which lie at the heart of the fastest chips in the world, are simple on-off switches, the state of which is represented by a 1 or a 0. What makes them useful is that they are dumb at incomprehensible speed; the Apple A16 in the current iPhone turns transistors on and off up to 3.46 billion times a second.

https://stratechery.com/2023/chatgpt-learns-computing/

And is that all there is to AI? Ah, what a question to ask:

While technically speaking everything an AI assistant is doing is ultimately composed of 1s and 0s, the manner in which they operate is emergent from their training, not proscribed, which leads to the experience feeling fundamentally different from logical computers — something nearly human — which takes us back to hallucinations

https://stratechery.com/2023/chatgpt-learns-computing/

But where does this emergent property come from, where AI is concerned? Well, where does what makes us human come from?

The old brain Hawkins references is our animal brain, the part that drives emotions, our drive for survival and procreation, and the subsystems of our body; it’s the neocortex that is capable of learning and thinking and predicting. Hawkins’ argument is that absent the old brain our intelligence has no ability to act, either in terms of volition or impact, and that machine intelligence will be similarly benign; the true risk of machine intelligence is the intentions of the humans that wield it.

https://stratechery.com/2023/chatgpt-learns-computing/

What is the neocortex? This is how Wikipedia defines it: “The neocortex, also called the neopallium, isocortex, or the six-layered cortex, is a set of layers of the mammalian cerebral cortex involved in higher-order brain functions such as sensory perception, cognition, generation of motor commands, spatial reasoning and language”. Even within our brain, if I’ve understood Ben and Jeff’s points correctly, the neocortex by itself is simply a very awesome tool, but with no capacity (or desire!) to act on anything. It is when you combine the computing capabilities of the neocortex with the old, “animal” part of our brain that magic starts to happen. This “animal” part of our brain is what “drives emotions, our drive for survival and procreation”.

So sure, AI may seem to have emergent properties, but its choosing to act on those properties depends on its animal brain. But what is the animal brain of AI? Go read the last sentence of the last excerpt, especially the concluding bit:

“The true risk of machine intelligence is the intentions of the humans that wield it”


The other day, there was a conversation on a WhatsApp group that I am privileged to be a part of, where we got to wondering if AI is capable of being humorous. Could it tell, for example, a Mulla Naseeruddin story? There was one easy way to find out!

Here is what it came up with:

“Okay, I’ll try to create a Mulla Naseeruddin story about he meeting me. Here is one possible story:

One day, Mulla Naseeruddin was walking in the market when he saw a crowd of people gathered around a stall. He was curious and went closer to see what was going on. He saw a man with a laptop and a microphone, talking to the people.

“Hello, everyone. This is Bing, the chat mode of Microsoft Bing search. You can ask me anything you want and I will answer you. I can also generate poems, stories, code, essays, songs, celebrity parodies and more. Who wants to try?” the man said.

The people were amazed and excited. They started asking Bing various questions and requests. Bing answered them with informative, visual, logical and actionable responses. He also generated some creative content that made the people laugh and clap.

Mulla Naseeruddin was impressed by Bing’s abilities. He decided to ask him something too. He raised his hand and said, “Excuse me, sir. Can I ask Bing a question?”

“Sure, go ahead,” the man said.

Mulla Naseeruddin cleared his throat and said, “Bing, can you tell me the meaning of life?”

The crowd became silent. They all looked at Bing with curiosity and anticipation. Bing thought for a moment and then said, “The meaning of life is a complex and subjective question that has been debated by philosophers, scientists, religious leaders and ordinary people for centuries. There is no definitive answer to this question, as different people may have different views and values on what makes life meaningful. However, in this context, a human might say that the meaning of life is to find happiness, purpose, love and fulfillment in oneself and others.”

The crowd nodded and murmured in agreement. They were impressed by Bing’s answer. Mulla Naseeruddin smiled and said, “Thank you, Bing. That was a very wise and thoughtful answer.”

He then turned to the man and said, “Can I ask you another question?”

“Sure,” the man said.

Mulla Naseeruddin leaned closer to the man and whispered in his ear, “Can you tell me how to turn off Bing?”

The man was stunned. He looked at Mulla Naseeruddin with disbelief and confusion. He said, “Why would you want to do that? Bing is amazing. He can help you with anything you need.”

Mulla Naseeruddin shrugged and said, “I don’t need him anymore. I already know the meaning of life.”

He then walked away from the stall, leaving the man speechless and the crowd bewildered.

The moral of the story is: Sometimes we seek answers from external sources when we already have them within ourselves.”

Krish Ashok ended his blogpost by asking if the answer to his excerpt could be found by asking ChatGPT. Ben helped me think through some of the nuances associated with this question, but Krish Ashok’s question? ChatGPT has an answer for him: look within.

Notes:

  1. If you’re wondering what makkhan has to do with this post, read Krish Ashok’s post.
  2. Note that I know next to nothing about how AI works, how the brain works, and what similarities there are between the two. This post is me learning by having fun writing it – but if you’re looking for expertise re: either of these subjects, you should be looking elsewhere.

Consoles, Competition and Comprehension

If you are studying microeconomics, whether in undergrad or postgrad courses, it can sometimes get a little too theoretical. Or that, at any rate, is how I used to feel about the more abstruse parts of advanced micro. And while memorizing the millionth derivation in order to regurgitate it in an examination, I would often wonder if there was any relevance of what I was attempting to study to the real world outside.

If you, today, as a student of micro share this opinion, let me ask you this: are you interested in video games? Are you living in fond hope that a PS5 will land up in your living room? Or are you figuring out ways to get Xbox Game Pass?

If the answer to any of these questions is yes, I’m guessing that you like playing video games. Do you know how the industry started? Do you know what the Gang of Four was all about? Do you know how different business models in the industry originated? How they evolved and why, and with what consequences? Had you heard about the Great Video Game Crash of 1983? I knew a little bit (but not a lot) about the answers to all of these questions, save for the last.

But the reason I bring this up is because Ben Thompson has an excellent essay out on the evolution of the gaming industry, with a lovely recap of all of what happened, and why. You’ll learn about vertical and horizontal integration, lock-ins, attempts to create monopolies, attempts at preserving monopolies, about how business models had to change to account for changing strategies, changing technologies and changing aspirations on the part of creators, consumers and corporations. It’s head-spinning stuff!

It begins with a description of the world’s first video game (OXO, 1952, in case you were wondering) and ends with how the FTC (perhaps) doth take things too far with the Activision acquisition by Microsoft. And in the interim, it touches upon names that will evoke nostalgia among folks of a certain vintage, and curiosity among folks of a more recent vintage.

If you are a student struggling with micro but happen to love video games, this essay might motivate you to read more about the evolution of the video game industry, and understand micro better in the process.

If you are a teacher struggling with helping students fall in love with micro, consider reading and using this essay.

And a meta lesson: a great way to learn about microeconomics is to pick your industry of choice, and ask how it has evolved over time, and why. Answering those questions will make you a better student of economics.

If you’re looking for suggestions in this regard: music, television, movies, gaming, publishing, hospitality and sports (football, cricket and tennis would be great examples). And if I may offer one piece of contrarian and possibly heretical advice – begin with the industry and work your way to the textbook, rather than the other way around.

Write The Harder Version

Ben Thompson writes a lovely (as usual) essay about the latest Meta-Microsoft partnership. There’s a lot to think about and ponder in that essay, but for the moment, I want to just focus on a part of it that appears in the introduction:

That was why this Article was going to be easy: writing that Meta’s metaverse wasn’t very compelling would slot right in to most people’s mental models, prompting likes and retweets instead of skeptical emails; arguing that Meta should focus on its core business would appeal to shareholders concerned about the money and attention devoted to a vision they feared was unrealistic. Stating that Zuckerberg got it wrong would provide comfortable distance from not just an interview subject but also a company that I have defended in its ongoing dispute with Apple over privacy and advertising.
Indeed, you can sense my skepticism in the most recent episode of Sharp Tech, which was recorded after seeing the video but before trying the Quest Pro. See, that was the turning point: I was really impressed, and that makes this Article much harder to write.

https://stratechery.com/2022/meta-meets-microsoft/

When you’re writing about a particular topic, and particularly if you write often enough, you realize that there are two ways to go about it: the easy way, and the hard way. The easy way isn’t necessarily about slacking off – in fact, part of the reason it is easy is precisely because you haven’t been slacking off: you’ve been writing regularly for a long time.

Doing so – writing regularly, that is – gives you a way of thinking about what to write: a mental framework that lays out the broad contours of your write-up, a way to begin the first paragraph, and even a nice rhetorical flourish with which to end.

I speak from personal experience – every now and then, while I’m reading something, I can already see the blogpost I will end up writing about it. And this is a truly wonderful superpower – the ability to know that you can churn out a somewhat decent-ish piece about something in very short order. Which is why writing regularly, and writing with self-imposed deadlines, is on balance a good thing.


But there is, alas, no such thing as a free lunch. The downside is that one then loses the ability to push oneself. Why bother coming up with a different way of thinking about what to write, and how to go about it? Even when you’ve developed the intuition, while reading something, that your regular mental framework will do just fine – and it might well be what your audience expects from you anyway – you sometimes know that you really should be framing it in a different way. Either because that’s what the subject matter at hand demands, or because you’re convinced, in your bones, that this new, different way will result in a better essay.

That’s the hard bit: should you then stick to what you know and thump out a piece, or should you take the time to pause, reflect and push yourself to build out a better essay? Should you pursue that contrarian take, even though it might take longer?

And if you have a regular schedule to keep up with, the answer need not necessarily be yes. But I would argue that every now and then, it does make sense to take a step back, allow yourself the luxury of time, and write the more difficult piece instead.

Yes it will take longer, and yes it will be more tiring, but now what to do. Such is life.


All that being said, three quick points about Ben’s essay that really stood out for me:

  1. What is Mark Zuckerberg optimizing for with this move, and at what cost to himself and his firm? Why? Weirdly, it would seem as if he is pushing the technology (VR) at the cost of at least the short-term growth of his firm, and he seems to be fine with it. Huh.
  2. Who the early adopters of your service are likely to be, and how likely they are to eventually become your marketers for free, is a question that never goes away, but remains underrated.
  3. I’ve never used a VR headset, but even after reading Ben’s article, I find it difficult to see why this might take off at current costs – and those costs aren’t just monetary, but also about mass adoption, inconveniences and technological limitations. I just don’t get it (which, of course, is a good thing. More to learn!)

Supply and Demand, Complements and Substitutes and Dall-E 2

Before we begin, and in case some of you were wondering:

Early last year, San Francisco-based artificial intelligence company OpenAI launched an AI system that could generate a realistic image from the description of the scene or object and called it DALL.E. The text-to-image generator’s name was a portmanteau coined after combining the artist Salvador Dali and the robot WALL.E from the Pixar film of the same name.

https://analyticsindiamag.com/whats-the-big-deal-about-dall-e-2/

Dall-E 2 is amazing. There are ethical issues and considerations, sure, but the output from this AI system is stunning:

A rabbit detective sitting on a park bench and reading a newspaper in a Victorian setting (Source)

And just in case it isn’t clear yet, no such painting/drawing/art existed until this very sentence – the one that is the caption – was fed to the AI. And it is the AI that “created” this image. Go through the entire thread.


This has led, as might be expected, to a lot of wondering about whether artists are going to be out of a job, and the threats of AI to humanity at large. I do not know enough to be able to offer an opinion one way or the other where the latter is concerned, but I do, as an economist, have some points to make about the former.

These thoughts were inspired by reading Ben Thompson’s latest (freely available) essay on Dall-E 2, titled “DALL-E, the Metaverse, and Zero Marginal Content”. He excerpts from the OpenAI website in his essay, and this sentence stood out:

DALL-E is an example of how imaginative humans and clever systems can work together to make new things, amplifying our creative potential.

https://openai.com/dall-e-2/

And that raises an age-old question where economists are concerned: is technology a complement to human effort, or a substitute for it? The creators of Dall-E 2 seem to agree with Steve Jobs, and think that the AI is very much a complement to human ingenuity, and not a substitute for it.

I’m not so sure myself. For example: is Coursera for Campus a complement to my teaching or a substitute for it? There are many factors that will decide the answer to this question, including quality, price and convenience among others, and complementarity today may well end up being substitutability tomorrow. If this isn’t clear, think about it this way: cars and drivers were complementary goods for decades, but today, is a self-driving car a complement or a substitute where a driver is concerned?
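
There is, for what it’s worth, a standard way in micro to pin this distinction down – this formalization is my addition, not something from Ben’s essay – and it is the sign of the cross-price elasticity of demand:

```latex
e_{xy} = \frac{\partial Q_x}{\partial P_y} \cdot \frac{P_y}{Q_x}
\qquad
\begin{cases}
e_{xy} > 0 & \text{substitutes: when } P_y \text{ rises, demand shifts to } x \\
e_{xy} < 0 & \text{complements: when } P_y \text{ rises, demand for } x \text{ falls too}
\end{cases}
```

If cheaper access to Dall-E 2 ends up increasing the demand for human illustrators, the two are complements; if it reduces that demand, they are substitutes. The philosophical question has an empirical test hiding inside it – we just have to wait for the data.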

But for the moment, I agree: this is an exciting new way to generate content, and is likely to work best when used as a complement by artists. Note that this is based on what I’ve seen and read – I have not myself had a chance to use or play around with Dall-E 2.


The title of today’s blog post is about substitutes and complements, which we just finished talking about in the previous section, but it also includes references to demand and supply. What about demand and supply?

Well, Ben Thompson talks about ways to think about social media firms today. He asks us to think about Facebook, for example, and to reflect upon where the demand and the supply for Facebook as a service come from.

Here’s my understanding, from having read Ben Thompson’s essay: Facebook’s demand comes from folks like you and me wanting to find out what, well, folks like you and me are up to. What are our friends, our neighbors, our colleagues and our acquaintances up to? What are their friends, neighbors, colleagues and acquaintances up to? That’s the demand.

What about the supply? Well, that’s what makes Facebook such a revolutionary company – or at least, what made it revolutionary back then. The supply, as it turns out, also came from folks like you and me. We were (and are) each other’s friends, neighbors, colleagues and acquaintances. Our News Feed was mostly driven by us in terms of demand, and driven by us in terms of supply. Augmented by related content, by our likes and dislikes, and by the news sources we follow and all that – but both demand and supply come from our own networks.

TikTok, Thompson says, is also a social network, and its supply and demand are also user-driven, but it’s not people like us who create the supply. It is just, well, people. TikTok “learns” what kind of videos we like to see, and its algorithm optimizes for exactly that, regardless of who has created them.

But neither Facebook nor TikTok are in the business of generating content for us to see. The former, to reiterate, shows us stuff that our network has created or liked, while the latter shows us stuff that it thinks we will like, regardless of who has created it.
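
One way to see the difference is as a filtering question – and here is a deliberately crude sketch of my own, not either company’s actual algorithm: the classic News Feed restricts the candidate pool to your network before ranking, while the TikTok-style feed ranks across everybody’s videos using predicted interest alone:

```python
# Toy contrast between the two feeds. predicted_interest stands in for
# whatever engagement model the platform actually uses (hypothetical here).
def facebook_style_feed(posts, my_network, predicted_interest):
    # Supply restricted to people I know, then ranked by interest.
    candidates = [p for p in posts if p["author"] in my_network]
    return sorted(candidates, key=predicted_interest, reverse=True)

def tiktok_style_feed(posts, my_network, predicted_interest):
    # Supply is everyone's videos; who made them doesn't matter at all.
    return sorted(posts, key=predicted_interest, reverse=True)

posts = [
    {"author": "college friend", "topic": "wedding photos"},
    {"author": "total stranger", "topic": "cooking hack"},
    {"author": "colleague", "topic": "office party"},
]
network = {"college friend", "colleague"}
scores = {"cooking hack": 0.9, "wedding photos": 0.6, "office party": 0.3}
interest = lambda p: scores[p["topic"]]

print([p["topic"] for p in facebook_style_feed(posts, network, interest)])
print([p["topic"] for p in tiktok_style_feed(posts, network, interest)])
# The stranger's video can top the TikTok-style feed, never the Facebook one.
```

Same demand (what will this user watch?), very different supply pools – which is the point Thompson is making.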

But how long, Ben Thompson’s essay asks, before AI figures out how to create not just pictures, but entire videos? And when I say videos, I don’t mean just deepfakes, which already exist, but eerily accurate videos with depth, walkthroughs, nuance, shifting timelines and all the rest of it.

Sounds far-fetched?

Well, I remember taking an hour to download just one song twenty years ago, and I can now stream any song in the world on demand. And soon (already?) I will be able to “create” any song that I like, by specifying mood, genre, and the kind of lyrics I want.

How long before I can ask AI to create a movie just for me? Or just me and my wife? Or a cartoon flick involving me and my daughter? How long, in other words, before my family’s demand for entertainment is created by an AI, and the supply comes from that AI being able to tap into our personal photo/video collection and make up a movie involving us as cartoon characters?

Millions of households, cosily ensconced in our homes on Saturday night, watching movies involving us in whatever scenario we like. For homework, read The Secret Life of Walter Mitty by Thurber (the short story, please, not the movie!), Snow Crash by Neal Stephenson, and The Seven Basic Plots by Christopher Booker.


There are many tantalizing questions that arise from thinking about this, and I’m sure some have struck you too. But I don’t want to get into any of them right now.

Today’s blog post has a very specific point: it doesn’t matter how complicated the issue at hand is. Simple concepts and principles can go a very long way in helping you frame the relevant questions required for analysis. Answering them won’t be easy, as in this case, but hey, asking (some of) the right questions is a great place to start.

Tech: Links for 10th December, 2019

  1. “To be clear, both roles can be beneficial — platforms make the relationship between users and 3rd-parties possible, and Aggregators helps users find 3rd-parties in the first place — and both roles can also be abused.”
    ..
    ..
    The always excellent Ben Thompson on regulating monopolies online, drawing a distinction between platforms and aggregators. His articles, as I have mentioned before, are always a delight to read, and this one in particular is a great collection of links to articles he has written before. Plus, this article is inspiration, if you will, for the links that follow.
    ..
    ..
  2. “Columbia University law professor Tim Wu coined the term “network neutrality” in a 2003 paper about online discrimination. At the time, some broadband providers, including Comcast, banned home internet users from accessing virtual private networks (VPNs), while others, like AT&T, banned users from using Wi-Fi routers. Wu worried that broadband providers’ tendency to restrict new technologies would hurt innovation in the long term, and called for anti-discrimination rules.”
    ..
    ..
    An excellent explainer from Wired about Net Neutrality.
    ..
    ..
  3. “For years, I winced at how Big Tech approached regulatory matters. When they wade into policy matters, they fail to see the bigger picture — and the younger the company, the worse they are at this. The hole that Facebook has dug for itself is entirely because its leadership seemed to believe that if they stayed within the letter of the current law they wouldn’t be regulated. This is a completely naive and ahistorical view. And this view has prevented Facebook from innovating in their own policy space. Without that policy innovation, we are left with essentially nonsensical suggestions to break up Facebook — which wouldn’t actually solve any of the issues anyone has with Facebook.”
    ..
    ..
    If you’re looking to do research in this field, you can’t not read Joshua Gans. This is just one of many excellently argued articles. Do read the whole thing!
    ..
    ..
  4. The internet activist Nikhil Pahwa lists out his expectations about the future of internet regulation in India. Agree or disagree (as usual, I fall in the middle), it is worth reading.
    ..
    ..
  5. “More generally, however, the bigger Google gets the more countries it has a physical presence in (servers, sales staff and support etc.) and thus the more leverage individual countries, especially large countries, will have to degrade the services that Google offers not just within-country but to the world.”
    ..
    ..
    Alex Tabarrok gives a fun example and a chilling analysis in the same short blog post.