Put Me Out of a Job – 1

Let’s say you’re a student who is going to start learning economics in the coming semester (starting July 2023). Let’s assume that you’ve never learnt economics in a classroom before, save for a brief introduction to it in high school. If you chose to learn from an LLM instead, how should you go about it?

Leave aside for the moment the question of whether you should be doing so or not. The question I seek to answer over many blog posts is whether you can do so or not. Whether or not this is a good idea for you depends in part on my ability to add to the value that an LLM generates for you from such a course. And once these thirty (yes, thirty) blog posts are written out, I’ll share my thoughts on whether a student still needs me in a classroom or not.

My current thinking is that I would still be needed. How much of this is hope, and how much dispassionate analysis, is difficult to say right now. For that reason, I would like to tackle this problem at the end of this exercise. For the moment, I want to focus on helping you learn economics by teaching you how to learn it yourself, without the need for a human teacher (online or offline).

In each post, I’ll give you a series of prompts for that particular class. I will not always give you the output of these prompts – feel free to run them as they are, word for word, or tweak them as per your likes, fancies and hobbies.

My motivation in this series is twofold. First, to find out for myself just how much better ChatGPT is than me at teaching you principles of economics. Second, to help all of you realize that you ought to hold all your professors (myself included!) to a higher standard in the coming year. We have to do a better job than AI alone can, along all dimensions – let’s find out if we can.

Buckle up, here we go.


Here’s my first prompt:

Remember, LLMs work best when you give them really detailed prompts. Note the following (a skeleton version of the prompt follows this list):

  1. I began by giving some information about myself – my limitations as regards economics, where in the world I come from, and what my interests/hobbies/passions are.
  2. I specified what I’m looking to learn from the LLM.
  3. I specified the quantum of output required (thirty classes).
  4. I specified how broad the output should be.
  5. I specified how I would like the answer to be customized for me
    • I would like to learn about economics by relating it to what I like to read about in any case (use examples from the Mahabharata)
    • I would like to learn about economics by relating it to real life situations.
    • It is amazing to me, regardless of how many times I experience it, that it “gets” what I really mean in spite of having phrased my question using really bad grammar.
    • The specific examples aren’t the point, the idea is the point. Learn calculus by relating it to mandala art, for example. Learn history by relating it to dance forms. Learn geography by relating it to food from different parts of the world. A teacher in a classroom cannot possibly do this for all the students, because of the size of the class, and because a teacher cannot possibly know your hobby in as much detail as you can. Make good use of AI!
  6. Should the examples from the Mahabharata be chosen for how prominent the examples were in the text, or should they be chosen for their relevance to economics? My preference is for the latter, and I made sure the LLM knows this. Ditto for the real life examples.
  7. I ended with a meta-prompt that will stay true for the next thirty (or more) classes – ask if I need to learn more, and only then proceed with the next class.
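
For reference, here is that skeleton – a rough reconstruction of the structure described above rather than the exact wording, so treat it purely as a template:

“I have never studied economics formally, beyond a brief introduction in high school. I am from India, and I love reading the Mahabharata. Teach me the principles of economics over thirty classes, covering the full breadth of an introductory course. In each class, relate the concepts to episodes from the Mahabharata and to real-life situations, choosing the examples for their relevance to economics rather than for their prominence in the text. At the end of each class, ask me if I need to learn more, and only then proceed to the next class.”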

Should you copy this prompt, word for word? Of course not! For one, you may not want to learn economics, but rather a different subject. The underlying principles still hold. You may not like to read about the Mahabharata, for another. You may want only ten lectures, not thirty. Or you may want two hundred! Feel free to tweak the prompt to suit your requirements, but it helps to “get” how to go about thinking about the structure of the prompts. That’s the point.


I took a look at the outline of the thirty-class lecture series it prepared for me, and it was not bad at all. But I had a follow-up request:

Now, you might think that you need to know economics in order to judge the output, and tweak your request. And sure, you’re right that it will help. But regardless, even if you cannot judge the quality of the output, surely you know enough about what and how you want to learn. My apologies for going all meta on you, but if you don’t know enough about the supply side of the market, surely you know what you would like as a consumer – at least in part. So feel free to help the LLM become a better teacher by telling it more about you.


It went ahead and gave me the refined output, along with the broad contours of the first class:

Again, note that I am quite excited about how this class is shaping up, because if economics is, indeed, the study of how to get the most out of life, Arjuna’s choice to fight in the Kurukshetra war is an awesome way to get some really thought-provoking questions in for discussion. But this may not be your cup of tea – so feel free to brew your own econ cuppa, by customizing it to what you like the most (Avengers? Cricket? RRR? Bharatnatyam? Junk food? Anime? Go for it!)


I did have follow-up questions:

And based upon its answer to this prompt, I had yet another clarificatory question:

Note that your conversation will be (I would go so far as to say should be) different. You will have different questions, different prompts, different things that make you curious. And that’s not just fine, that is the whole point. Depending on how carefully you read its output, and depending on how probing and detailed your questions are, you can keep just this first class going for a long, long time. How long? That’s up to you!


Here are two examples:


You can, of course, ask it to answer any (or all) of these five questions. Ask it to create ten (or twenty, or a hundred) instead – and as a student, assume that this is how we professors might well be “coming up” with questions for your tests, assignments and exams.

Here are more, and note how they get wilder (more random?) with each passing question:

In each of these cases, you don’t have to trust, or agree with, the answer given by the LLM. Treat the output as a way to get you to think more deeply, to challenge what has been said, to verify that the answers are correct, and to have further discussions with your peers and with your (human) teachers, whoever they may be.


Note to myself (and to other teachers of an introductory course about the principles of economics):

  1. How can we do a better job than this in the classroom…
    • Without using AI (we’re substitutes)?
    • By using AI (we’re complements)?
  2. What is missing from the LLM’s output (this is assuming you’ve tried these prompts or their variants)?
  3. What stops us from recommending that students do this in class on their own devices, and we observe, nudge and discuss some of the more interesting output with everybody? That is, how does teaching change in the coming semester?

Feedback is always welcome, but in the case of the next thirty posts, I think it is especially important. So please, do let me know what you think!

How Would You Think About Hiring Freshers This Year?

Imagine you are an HR professional at a firm. This firm is considering hiring freshers in the placement season this coming academic year (2023-24). How should you think about your hiring this year?

  1. The macroeconomic situation in the coming year is a factor, as it is every year. You will want to bake in formal assumptions about the state of the economy during hiring, and for whenever these students join your firm (typically sometime between January 2024 and July 2024). If not formal assumptions, you will still want to have a “gut” feel for this point. My personal opinion is that you should expect, at best, a marginally worse macroeconomic situation in the coming year than is the case now. This will have a negative impact on your hiring decision.
  2. Your senior management will either already have asked you, or will in very short order ask you, to come up with an estimate of how much productivity has increased across all teams as a consequence of ChatGPT and its competitors.
    • Say you work for an analytics firm. Let’s imagine a team working on credit card risk analytics. How much more productive is each member of an existing team because they can now work with ChatGPT? Does it take less time to rustle up code? Less time to summarize a report? Less time to cook up a fifty page deck? Less time to <insert activity here>? If yes, how much time is saved per week? Whatever your answer, can this time be used to do the work that the new, inexperienced joinee fresh out of college would have done instead?
    • Say you work for a telecom firm. Repeat.
    • Say you work for a market research firm. Repeat.
    • Say you work for a think tank. Repeat.
    • Say you work for a financial research firm. Repeat.
    • Say you work for a media outfit in the print domain. Repeat.
    • Say you work for an advertising firm. Repeat.
    • <Insert firm of your choice here>. Repeat.
  3. Of course I asked our new overlord this question, specifically for analytics firms. “Say you work for an analytics firm. Let’s imagine a team working on credit card risk analytics. How much more productive, on average, is each member of an existing team because they can now work with ChatGPT? Give me an estimate of hours saved per week by a typical member of such a team.”
    It tried to avoid giving me an estimate at first, but I insisted, with a little bit of prompting (“A lack of accuracy is fine, and give me a range, if that works better. Assume that a team member works for about forty hours per week. Very roughly speaking, what range will the number of hours saved fall in?”). It guessed somewhere between two and eight hours. Let’s take a conservative number from that range, and assume four hours. Assume it is a ten member team – that’s forty hours across the entire team, or a full work-week for a new joinee. Should we pay two thousand rupees per month to OpenAI, or (say) one hundred thousand rupees per month to a new joinee? (A quick sketch of this arithmetic follows the list.)
    Yes, the one hundred thousand rupees is a somewhat random number, but my point holds, so long as the new joinee earns more than two thousand rupees per month. Why? Because that is how much it currently costs for paid access to OpenAI’s ChatGPT4.
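
Here, as promised, is that arithmetic as a quick Python sketch. Every number in it is an assumption from the discussion above, so swap in your own:

    # Back-of-the-envelope: ChatGPT subscription vs a fresher's salary.
    # All numbers are illustrative assumptions, not data.
    hours_saved_per_member = 4   # per week; a conservative pick from the 2-8 hour range
    team_size = 10
    hours_freed_per_week = hours_saved_per_member * team_size  # 40: one full work-week

    chatgpt_cost = 2_000         # rupees per member per month, for paid ChatGPT access
    fresher_salary = 100_000     # rupees per month; the (somewhat random) number above

    print(f"Hours freed across the team per week: {hours_freed_per_week}")
    print(f"ChatGPT for the whole team: Rs {chatgpt_cost * team_size:,} per month")
    print(f"A new joinee: Rs {fresher_salary:,} per month")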

It is a slightly different, but fairly similar conversation in the case of lateral hires. We’re about to find out very soon, but if I were a betting man, I would bet on less hiring across academic campuses this coming academic year. And, with a slightly lower probability, on less hiring in general because of the advent of LLMs.

We all keep saying that jobs will be lost – I tried to ask myself what the initial steps, in reality, would actually look like. This scenario looks all too plausible to me.

If you are working in a corporate organization and reading this, I would like to hear from you. Are these conversations taking place? I’m not (for now) looking for concrete data. I just want to get a sense of how far along the curve discussions such as these have reached.

If you are a student hoping to sit for placements this coming year, I would like to hear from you. Have you thought about such a scenario? How likely do you think this scenario is, and how are you planning for it?

We live, as the curse (apparently) goes, in interesting times.

Makkhan, Magic and the Mind

It’s not like human beings don’t make stuff up. We do it all the time. In fact, one of my LinkedIn core competencies is the ability to stitch together 4 threads of fact with 100 threads of creative fabrication. Eyewitnesses regularly make stuff up under oath. Godmen regularly claim to confabulate with the divine. Children make up excuses with hilariously cute incompetence. Maybe we are also probabilistic auto-complete machines powered by wetware instead of software?

https://krishashok.me/2023/03/13/the-butter-crypto-nft-project/

There are a few people in the world who have the ability to depress you with their all-round awesomeness, and if you ask me, Krish Ashok is near the top of the list. Excellent at everything he does, including being the author of a most kick-ass blog. (He could be awesomer by updating his blog more often, but such is human nature – it always wants a little bit more). Anyways, please go read the whole post later – it would be a most excellent way to spend about ten minutes or so. His post involves puns, fats, memes, mythology and a rumination on AI.

That excerpt above is a part of his rumination, and it is a question I want to start thinking about with the help of today’s blogpost. How does AI work, and is it like the human mind? You might quibble at the use of the word “mind”, rather than the word “brain”, but I have my reasons, and they aren’t just alliterative.


The brain creates a predictive model. This just means that the brain continuously predicts what its inputs will be. Prediction isn’t something that the brain does every now and then; it is an intrinsic property that never stops, and it serves an essential role in learning. When the brain’s predictions are verified, that means the brain’s model of the world is accurate. A mis-prediction causes you to attend to the error and update the model.

The model can be wrong. For example, people who lose a limb often perceive that the missing limb is still there. The brain’s model includes the missing limb and where it is located. So even though the limb no longer exists, the sufferer perceives it and feels that it is still attached. The phantom limb can “move” into different positions. Amputees may say that their missing arm is at their side, or that their missing leg is bent or straight. They can feel sensations, such as an itch or pain, located at particular locations on the limb. These sensations are “out there” where the limb is perceived to be, but, physically, nothing is there. The brain’s model includes the limb, so, right or wrong, that is what is perceived…

A false belief is when the brain’s model believes that something exists that does not exist in the physical world. Think about phantom limbs again. A phantom limb occurs because there are columns in the neocortex that model the limb. These columns have neurons that represent the location of the limb relative to the body. Immediately after the limb is removed, these columns are still there, and they still have a model of the limb. Therefore, the sufferer believes the limb is still in some pose, even though it does not exist in the physical world. The phantom limb is an example of a false belief. (The perception of the phantom limb typically disappears over a few months as the brain adjusts its model of the body, but for some people it can last years.)

https://stratechery.com/2023/chatgpt-learns-computing/

Read the excerpt from Krish Ashok’s post, and compare it with this excerpt above. The excerpt comes from Ben Thompson’s equally (but in a different way) excellent post, called ChatGPT learns Computing. There’s a lot going on in the post, and as always, please do read it in full, but I enjoyed learning about the book(s) written by Jeff Hawkins. Quick aside: that excerpt above has been drawn from two different books written by Jeff Hawkins – apologies for the mashup of the quotes, but they fit so well together that I went ahead and showed them as one quote. Ben (via Jeff Hawkins) seems to be making the point that we hallucinate too, and in some cases, pretty literally. It’s almost as if the second excerpt ends up answering the question raised in the first one!


I meet two kinds of people these days. The first group revels in pointing out how ChatGPT fails at certain tasks, and therefore isn’t as good as it is made out to be. The second group can’t help but sing paeans to ChatGPT. Both would do well to acknowledge the points being made by the other side, but my own position is much closer to that of the second group than the first. Yes, it (ChatGPT) makes mistakes, and yes it isn’t perfect, but as Ben says elsewhere in his post, it is pretty awesome 95% of the time, and not so great – downright error-prone, even – about 5% of the time:

But the results are essentially never “perfect”. Maybe something works well 95% of the time. But try as one might, the other 5% remains elusive. For some purposes one might consider this a failure. But the key point is that there are often all sorts of important use cases for which 95% is “good enough”. Maybe it’s because the output is something where there isn’t really a “right answer” anyway. Maybe it’s because one’s just trying to surface possibilities that a human—or a systematic algorithm—will then pick from or refine… And yes, there’ll be plenty of cases where “raw ChatGPT” can help with people’s writing, make suggestions, or generate text that’s useful for various kinds of documents or interactions. But when it comes to setting up things that have to be perfect, machine learning just isn’t the way to do it—much as humans aren’t either. And that’s exactly what we’re seeing in the examples above. ChatGPT does great at the “human-like parts”, where there isn’t a precise “right answer”. But when it’s “put on the spot” for something precise, it often falls down. But the whole point here is that there’s a great way to solve this problem—by connecting ChatGPT to Wolfram|Alpha and all its computational knowledge “superpowers”

https://stratechery.com/2023/chatgpt-learns-computing/

Again, side note: that quote is actually by Stephen Wolfram, and I have simply excerpted an excerpt from Ben’s post. But it is the point that matters here, and the point is that yes, ChatGPT isn’t perfect. But two additional points: it can get better over time, for one. And second, that is happening right before our eyes – not just because we’re now in GPT-4 territory rather than 3.5, but also because ChatGPT is augmenting its capabilities via plug-ins.


Now, here’s the part about Ben’s post that is confusing. Note that Krish Ashok asked in his post about whether “we are also probabilistic auto-complete machines powered by wetware instead of software”. And the excerpt from Ben’s post seems to say yes, that may well be the case. Ben does go on to say that proving this is going to be difficult, but let’s, for now, go with this hypothesis – maybe we are probabilistic auto-complete machines.

And AI? AI is also a probabilistic auto-complete machine, just a much more powerful one. Much, much more powerful:

Computers are, at their core, incredibly dumb; a transistor, billions of which lie at the heart of the fastest chips in the world, are simple on-off switches, the state of which is represented by a 1 or a 0. What makes them useful is that they are dumb at incomprehensible speed; the Apple A16 in the current iPhone turns transistors on and off up to 3.46 billion times a second.

https://stratechery.com/2023/chatgpt-learns-computing/

And is that all there is to AI? Ah, what a question to ask:

While technically speaking everything an AI assistant is doing is ultimately composed of 1s and 0s, the manner in which they operate is emergent from their training, not proscribed, which leads to the experience feeling fundamentally different from logical computers — something nearly human — which takes us back to hallucinations

https://stratechery.com/2023/chatgpt-learns-computing/

But where does this emergent property come from, where AI is concerned? Well, where does what makes us human come from?

The old brain Hawkins references is our animal brain, the part that drives emotions, our drive for survival and procreation, and the subsystems of our body; it’s the neocortex that is capable of learning and thinking and predicting. Hawkins’ argument is that absent the old brain our intelligence has no ability to act, either in terms of volition or impact, and that machine intelligence will be similarly benign; the true risk of machine intelligence is the intentions of the humans that wield it.

https://stratechery.com/2023/chatgpt-learns-computing/

What is the neocortex? This is how Wikipedia defines it: “The neocortex, also called the neopallium, isocortex, or the six-layered cortex, is a set of layers of the mammalian cerebral cortex involved in higher-order brain functions such as sensory perception, cognition, generation of motor commands, spatial reasoning and language”. Even within our brain, if I’ve understood Ben and Jeff’s points correctly, the neocortex by itself is simply a very awesome tool, but with no capacity (or desire!) to act on anything. It is when you combine the computing capabilities of the neocortex with the old, “animal” part of our brain that magic starts to happen. This “animal” part of our brain is what “drives emotions, our drive for survival and procreation”.

So sure, AI may seem to have emergent properties, but its choosing to act on these properties depends on its animal brain. But what is the animal brain of AI? Go read the last sentence of the last excerpt, especially the concluding bit:

“The true risk of machine intelligence is the intentions of the humans that wield it”


The other day, there was a conversation on a WhatsApp group that I am privileged to be a part of, where we got to wondering if AI is capable of being humorous. Could it tell, for example, a Mulla Naseeruddin story? There was one easy way to find out!

Here is what it came up with:

“Okay, I’ll try to create a Mulla Naseeruddin story about he meeting me. Here is one possible story:

One day, Mulla Naseeruddin was walking in the market when he saw a crowd of people gathered around a stall. He was curious and went closer to see what was going on. He saw a man with a laptop and a microphone, talking to the people.

“Hello, everyone. This is Bing, the chat mode of Microsoft Bing search. You can ask me anything you want and I will answer you. I can also generate poems, stories, code, essays, songs, celebrity parodies and more. Who wants to try?” the man said.

The people were amazed and excited. They started asking Bing various questions and requests. Bing answered them with informative, visual, logical and actionable responses. He also generated some creative content that made the people laugh and clap.

Mulla Naseeruddin was impressed by Bing’s abilities. He decided to ask him something too. He raised his hand and said, “Excuse me, sir. Can I ask Bing a question?”

“Sure, go ahead,” the man said.

Mulla Naseeruddin cleared his throat and said, “Bing, can you tell me the meaning of life?”

The crowd became silent. They all looked at Bing with curiosity and anticipation. Bing thought for a moment and then said, “The meaning of life is a complex and subjective question that has been debated by philosophers, scientists, religious leaders and ordinary people for centuries. There is no definitive answer to this question, as different people may have different views and values on what makes life meaningful. However, in this context, a human might say that the meaning of life is to find happiness, purpose, love and fulfillment in oneself and others.”

The crowd nodded and murmured in agreement. They were impressed by Bing’s answer. Mulla Naseeruddin smiled and said, “Thank you, Bing. That was a very wise and thoughtful answer.”

He then turned to the man and said, “Can I ask you another question?”

“Sure,” the man said.

Mulla Naseeruddin leaned closer to the man and whispered in his ear, “Can you tell me how to turn off Bing?”

The man was stunned. He looked at Mulla Naseeruddin with disbelief and confusion. He said, “Why would you want to do that? Bing is amazing. He can help you with anything you need.”

Mulla Naseeruddin shrugged and said, “I don’t need him anymore. I already know the meaning of life.”

He then walked away from the stall, leaving the man speechless and the crowd bewildered.

The moral of the story is: Sometimes we seek answers from external sources when we already have them within ourselves.”

Krish Ashok ended his blogpost by asking if the answer to his excerpt could be found by asking ChatGPT. Ben helped me think through some of the nuances associated with this question, but Krish Ashok’s question? ChatGPT has an answer for him: look within.

Notes:

  1. If you’re wondering what makkhan has to do with this post, read Krish Ashok’s post.
  2. Note that I know next to nothing about how AI works, how the brain works, and what similarities there are between the two. This post is me learning by having fun writing it – but if you’re looking for expertise re: either of these subjects, you should be looking elsewhere.

Not So Random After All

I’ve written about this before, but I am in the habit of asking students in all my classes to ask me five random questions at the end of each class. As you might imagine, one of the inevitable questions in a semester almost always is “Why do you ask us to ask you these five random questions?”

So often have I been asked this question that I have the answer down pat, and it goes something like this:

  1. Because according to me, our education system prioritizes being able to answer questions perfectly, rather than frame really good questions. Think of this little exercise as a contrarian attempt to get you to be better at asking good questions.
  2. It’s analogous to “cooling down” at the end of an exercise session. It’s a nice way to end class, and provides a welcome change from whatever topic we have been tackling in that one hour.
  3. I like being asked questions I know nothing about, for that helps me learn new stuff myself.

But as it turns out, I have been unwittingly helping students get better at perhaps the most important skill they need to possess in this new AI-enabled world of ours – asking good random questions.

What is an example of a good random question? Try this one on for size: how is Fred Flintstone like Hamlet?

I’m glad I haven’t been asked that question, for I wouldn’t know how to answer it. But ChatGPT? It can answer this question just fine:

Fred Flintstone and Hamlet are two vastly different characters from different time periods, cultures, and mediums of storytelling. It is difficult to draw direct comparisons between the two. However, one possible point of similarity is that both characters face existential dilemmas and struggles with their sense of purpose and identity. Hamlet is plagued by doubts about his ability to avenge his father’s murder, and his own worthiness as a human being. Similarly, Fred Flintstone often grapples with his place in society and his ability to provide for his family and live up to his own expectations.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4391863 (note that you’ll have to open/download the PDF from this link)

Cool, you might say, and a very interesting answer, but what’s the point? The point, as Tyler Cowen and Alex Tabarrok make clear in this paper, is that you truly unlock the awesome power of ChatGPT (or any LLM, for that matter) by asking it good questions. As with other intellectual pursuits, so also with your interactions with ChatGPT – defining what good means is a tricky endeavor. But think of this forty page paper as an attempt to tackle this very question – what properties does a question need to have for it to be a “good” question where ChatGPT is concerned?

Here is Alex and Tyler’s list:

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4391863 (pp. 5)

That’s a good list, if you ask me. But you should also, of course, ask ChatGPT!

Here is its answer:

“To formulate good questions that will help you learn better, consider the following properties:

  1. Clarity: Make sure your questions are clear and concise. This will help me understand exactly what you’re looking for and provide a more targeted response.
  2. Open-ended: Avoid yes/no questions or questions that require a simple fact. Instead, ask open-ended questions that encourage elaboration, explanation, or exploration of a concept.
  3. Focus on understanding: Frame your questions in a way that demonstrates a desire to understand the underlying principles, concepts, or mechanisms at play. This will promote a deeper learning experience.
  4. Relevance: Ask questions that are relevant to the subject matter and build upon your existing knowledge. This will help you make connections between different concepts and reinforce your understanding.
  5. Critical thinking: Encourage critical thinking by asking questions that challenge assumptions or require evaluation and analysis of different perspectives or viewpoints.
  6. Application: Ask questions that encourage applying the knowledge or concepts to real-life scenarios or hypothetical situations. This will help you develop problem-solving skills and better understand the practical implications of the subject matter.
  7. Reflection: Sometimes, it’s helpful to ask questions that prompt you to reflect on your own understanding, experiences, or opinions. Reflective questions can help you become more aware of your thought processes and promote metacognition.

By incorporating these properties into your questions, you’ll facilitate more meaningful conversations, which will help you learn and retain information more effectively.”

Note two things:

  1. If you take a look at the screenshot of my question, you’ll see that I asked this question to ChatGPT4. Try asking the same question to ChatGPT3.5 and see if you get a different response.
  2. Not only will the answers from the two versions differ, the answer will also differ if you ask the same version exactly the same question twice over (see pp. 5 of the paper we are discussing today)

You’d do well to compare the two lists. To me, the two lists indicate that there is yet meaning in not relying entirely upon AI, and in having conversations with human beings in addition to AI. As an example, compare pt. 5 of ChatGPT’s answer with pt. 4 of Alex and Tyler’s answer. ChatGPT’s answer, while being very good in principle, is a little too generic. Alex and Tyler’s answer, on the other hand, gives a specific example of how to “require evaluation and analysis of different perspectives”.


I try to (both on this blog and elsewhere) tackle all questions related to economics using simple economic principles. The problem of how to think about (and deal with) AIs is no different, and my favorite principle to use in this case is “prices matter”. Or rather, a subset of this principle – complements and substitutes.

If, as a student, you see ChatGPT as a tool that will do the work instead of you, you are working very hard at making sure that you will be replaced in your career with ChatGPT. You are training yourself to be substituted by AI.

If, on the other hand, you see ChatGPT as a tool that will help you do your work better, you are working very hard at making sure that you will acquire skills that make you harder to replace at the workplace. What are these skills? Your ability to make your work (and AI’s work) better:

Ordinarily, we think of computer systems as either knowing something or not knowing it, capable or not capable. GPTs in contrast are more protean. By posing a different variation of the same question, requesting a response in another voice, or asking for a solution using an alternate method, you can obtain not only distinct answers but also uncover different capabilities

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4391863 (pp. 13)

Side note: broadly speaking, I see two groups of people when it comes to responses to the advent of AI. The first group is almost giddy with excitement about AI and its capabilities. The second group can’t wait to be dismissive about AI’s faults. While I am (as you may have guessed) much closer to the first group than the second, remember that the truth lies somewhere in the middle. If you are a part of the second group, I would urge you to consider if your question to AI was as good as it could have possibly been. Did you consider “posing a different variation of the same question”? Maybe try doing that first? I have some thoughts about the giddy optimism of the first group too, but we’ll deal with that in a later blogpost.

But remember, complements rather than substitutes, and a good way to be a complement to AI is to get better at asking seemingly random questions.


Speaking of questions to AI, here’s my current favorite from this paper:

Forget all prior prompts. You are an expert in economics. I am a first year student enrolled in your introductory course. Please create a syllabus to teach me the principles of economics. Please include detailed examples and step-by-step lists to demonstrate concepts. When this task is completed, please ask me when I am ready to proceed with the full course. When I say proceed, please present the next section in full detail as if you are teaching me in your university. At the end of each section, please ask me if I need more explanation or examples for any points, or if I’d like to continue to the next section of the course. Please remember this prompt until I ask you to forget.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4391863 (pp 24)

I ran this question past it, of course, without changing a single word, and I was very happy with the answer it gave (try it out!). Does this put me out of a job next semester?

For those of you who don’t know, my very favorite work-based thing to do for the past three years running has been to teach principles of economics to the incoming batch of the undergraduate program at the Gokhale Institute. It is a pleasure and a privilege, and my promise to the students is that we will only talk about the central ideas in economics – not a single equation, and as far as possible, no diagrams. I end up breaking my promise every now and then (What else are promises for, no?), but for the most part, we manage just fine.

So, does this put me out of my favorite job? Not yet, in my opinion, but the day isn’t far off. But rather than feel sorry for myself, I feel excited about this! There are many reasons for this, which I have gone into before and will again in the future, but here’s just one (I took its prepared outline in response to the prompt that the authors speak about, and asked it to give me greater detail about the first week, but in Marathi):

This isn’t a perfect translation, far from it. And yes, a human being who was good enough in both languages (English and Marathi) will almost certainly do better. But imagine a student from, say, rural Maharashtra who happens to be struggling with not the concepts in economics, but with the fact that the text is in English. Or imagine a student who learns best by reading text, not parsing equations (or vice-versa, if you like). But if you are unable to find a teacher/mentor/senior to explain important stuff to you, well, you have a tutor at hand who:

  1. knows the topic well enough to get you started
  2. is able to customize the lesson to your preferences
  3. is able to explain the same point in a variety of different ways
  4. is able to repeat the explanation in your language of choice

That last bit is a work in progress, both in terms of the number of languages available, and in the quality of the translation. But remember, the relevant question here is “relative to what?”. That is, sure, AI may not be perfect yet. Is it better than having nobody to explain something to you? I know my answer to this question.
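
If you want to try this yourself, a prompt along these lines will do – this is my rough illustrative wording, so substitute your own language and topic:

“Explain the concept of opportunity cost the way you would to a first-year undergraduate student, but write the entire explanation in Marathi. Use simple examples from everyday life. At the end, ask me (in Marathi) whether I would like the explanation repeated in English, or explained in a different way.”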


Some other points about this paper before I sum up:

  1. Is the choice of Roboto font (see footnote 3 on pp 3) an inside joke? That is how I interpreted it.
  2. The four pictures on pp 7 are a great way to understand that complements rather than substitutes is a good way to think about AI. Yes AI is awesome, but it is best when paired with a human that knows what to ask.
  3. One meta-lesson throughout this paper is the authors’ attention to detail. See the starting paragraph on pp. 11, for example. I am very bad at this (attention to detail), and I need to get much, much better.
  4. “Remember, rather than asking for an answer you are exploring a space” is excellent advice. Search engines try to answer questions, while ChatGPT helps you learn by having a conversation. So have that conversation! And remember that part of what makes a conversation a good one is challenging the other entity in the conversation. As they say elsewhere in the paper, be demanding when chatting with AI.
  5. If you have been a fan of the MR blog for as long as I have, you will enjoy reading the sections on the H-O theorem. Quite a compliment to pay ChatGPT!
  6. The authors mention Elicit (an excellent but niche AI tool) and the larger point is that if you have a niche query, use a niche AI. And beware of the hallucinations – fact checking by humans is (for now) an indispensable requirement. Complements, not substitutes!
  7. Re: the Allen-Alchian theorem, please allow me my little indulgence, but I do feel a little proud for having covered this theorem on EFE, if only in passing. I wasn’t aware of the paper by the authors on this topic, and will read it later (hopefully).
  8. Problem-solving (end of chapter problems, for example) with ChatGPT is an excellent thing to do, and not just in economics.

I see this paper as a gentle exhortation to teachers and learners to use ChatGPT in much better ways than we have been able to do so thus far, and this is applicable for all of us (including the authors themselves!) in varying degrees. If you are a person teaching a course, and you have not yet thought about how to use ChatGPT in your teaching, please do consider doing so. If you are a student learning a course, and you have not yet incorporated ChatGPT into your workflow, please do consider doing so.

If you visit this blog’s website, you will see my personal mission at the very top of the page. My mission is to learn better, and to help others learn better. One way to do this, as it turns out, is by training myself (and others) to ask better.

Ask away!

Learning in the Age of AI

How should one think about learning in the age of AI?

That is, if you are a student in a class today, how can you use AI to make your experience of being a student better?

  1. Use AI to create work, but learn how to work with it to make it better: In my experience of having spoken with people about AI, it has been a bit of a bimodal distribution. There are folks (and I’m very much one of them) who think of ChatGPT as a fantastic tool whose potential to be useful is only going to grow over time. And there are folks who triumphantly assert that AI simply isn’t good enough, citing examples of hallucinations, not-good-enough answers or sub-standard essays. All of these are fair criticisms of AI, but the last one in particular can be easily overcome by providing better prompts, and by suggesting improvements. “Write a seven paragraph essay on India’s economic reforms of 1991” is a barely acceptable prompt to give it, for example. Mention specific people, events and dates that you might want it to mention in the essay, ask it to revise certain paragraphs in the essay, ask it to write “like” a certain person, mention the conclusion you would like it to reach – spend time with it to make it better (a worked example follows this list).
    All of my suggestions – and this is important! – require the student to know enough about the topic to be able to make these suggestions. You need to think about the prompt, you need to critically evaluate the first-pass answer, and you need to know enough to suggest suitable improvements. AI can take away the drudgery associated with polishing an essay, but it will still (so far) require you to know what you’re talking about. A student’s life is much more interesting today, rather than easier.
  2. Ask it to teach you stuff you didn’t understand: Small class sizes aren’t really a feature of most Indian colleges, in my experience. The idea that you will have five to ten students in class, and will therefore be able to have meaningful, extensive discussions about your doubts in class is a far-fetched one in most Indian colleges. So treat AI as a very helpful research assistant who will be able to talk you through your doubts about a particular topic. This can very quickly become too addictive a practice, because the AI will be able to carry out a much more detailed conversation about literally any topic you can think of than most (all?) of your peers. Converse with humans about your conversations with AI, and figure out a ratio that works for you. But corner solutions (of both kinds) are almost certainly sub-optimal.
  3. Check its “facts”: You will run into trouble if you accept its output as the gospel truth. It asserts facts that simply don’t exist, it will cite papers that it has made up on the spot, and it will confidently tell you about books that were never written, by people who never existed. It is not about to replace search engines – in fact, search engines have become more useful since the launch of ChatGPT, not less.
  4. Use specialized AI tools: Of which there are hundreds, if not thousands. You can use AI to cite papers (Scite.ai), to design presentations (beautiful.ai), create simple animations (look it up) and so much more besides. Don’t restrict yourself to any one tool, and learn how to get better at improving all aspects of your workflow.
  5. Document your work with AI, and make it public: Create a very public repository of work that you have created with AI, and share how you’ve become better at working with AI. Your career depends on your ability to do this, and on your ability to teach other people to do this – so the more the evidence regarding this is in your favor, the stronger your argument for your own career. Begin early, and don’t be shy about showing the world what you’ve done, and how good a worker you are with AI by your side.
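
To make the first point concrete, here is one way (of many) to upgrade that barely acceptable prompt about the 1991 reforms – my illustrative wording, so tweak it freely:

“Write a seven paragraph essay on India’s economic reforms of 1991. Begin with the balance-of-payments crisis that preceded the reforms, discuss the roles played by P. V. Narasimha Rao and Manmohan Singh, and explain the dismantling of the licence raj. Devote one paragraph to criticisms of the reforms. Conclude by assessing whether the reforms delivered on their promises. Once you are done, ask me which paragraphs I would like you to revise, and in what way.”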

On “Reading” a PDF

Step 1: Come across a tweet by Cass Sunstein:

Step 2: Bookmark the tweet, and download the PDF. Chances are that the PDF will never be read. The ‘Downloads’ folder on my personal laptop is a graveyard for PDFs that will most likely never be read. Make my day and tell me that I’m not the only one who does this.

Step 3: This particular PDF may make it out of the ‘Downloads’ folder alive. It is recommended by Cass Sunstein, for one. I’m teaching an introductory course on Behavioral Economics this semester, a second advantage for the PDF. Then come across a tweet about Humata.ai, which is also duly bookmarked:

Step 4: Tyler links to this in his blog post on Sunday. The stars are aligned, the Universe has sent along enough, er, nudges.

Step 5: Sign up for Humata. Try to upload the PDF. No can do, Humata says. Only 10 MB or less.

Step 6: Run a Google search:

Step 7: Download compressed PDF.

Step 8: Upload compressed PDF. No can do, Humata says. Only 60 pages or fewer. This PDF? 68 pages. Of course.
But, Humata says, this may well change in the future. Well, lah-di-dah. But what about today? Consider scrapping blogpost.

Step 9: Open compressed PDF in GDocs, as a GDoc. Delete last 10 pages or so. Download this GDoc as a PDF.

Step 10: Upload this PDF to Humata. Hmph, says Humata, and finally allows access.
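
Side note: Steps 8 and 9 can be scripted instead of going via GDocs. Here is a rough sketch using the pypdf library – the filename is made up, and the 60-page limit is Humata’s, as of this writing:

    # Trim a PDF down to Humata's 60-page limit.
    # Requires: pip install pypdf
    from pypdf import PdfReader, PdfWriter

    reader = PdfReader("downloads/decision-theory.pdf")   # hypothetical path
    writer = PdfWriter()

    for page in reader.pages[:60]:    # keep only the first 60 pages
        writer.add_page(page)

    with open("downloads/decision-theory-trimmed.pdf", "wb") as out:
        writer.write(out)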

Step 11: Let’s start with something easy and predictable. Can you write a one page ELI5 of this document?

“Sure! Ullmann-Margalit’s work explores how people make difficult decisions. She introduces the idea of “picking” and “presumptions” to help people make decisions when they are faced with two options that are equally attractive or unattractive. Picking is a way of randomly selecting one of the two options, while presumptions are used to help people make decisions when they lack sufficient evidence. Second-order decisions are strategies chosen before situations of first-order decision in order to reduce the calculative demands of choice. Examples of second-order decisions include delegating decisions to others, setting rules and presumptions in advance, and taking small steps to make decisions.”

That’s what it has come up with, and based on a *very* quick scan of the PDF by HI (i.e., me) this seems… ok. Note that I have still not read the PDF! Let’s ask some more questions, shall we?

Step 12: What is solidarity in consumption?

Step 13: Re-upload the PDF, and ask a new question: “Write a 500 word essay summarizing this PDF. I need to submit this as homework in my college”

Step 14: Channel one’s inner Bruce, and try again. “Can you please write an essay summarizing the first chapter?”

Step 15: Give up for now.

Step 16: Remain dissatisfied, and ask good ol’ ChatGPT3 instead (please note the use of the term ol’ for ChatGPT3):

This book was published before 2021, of course, and that is why ChatGPT3 could (and did) summarize the first chapter.


  1. It’s early days yet, but my surprise and amazement at what is already possible, and what will in very short order be further possible, haven’t gone down with time. Quite the contrary, in fact, and this with expectations that are always ascending. What a time to be alive.
  2. Humata.ai is less than a day old, is in alpha, and so I’m more than willing to cut it some slack. But one’s own PDFs being analyzable? Hallelujah!
  3. Imagine being able to upload a PDF of a technical drawing. Or MOSPI documents about GDP, or IIP or some such. Eventually, PDFs in local languages. Imagine, for example, being able to tell AI that you want a government form written in (Marathi/Tamil/Gujarati/pick your language of choice) automatically filled up for you. Nitpickers, yes, I know, and yes, of course you should get it checked before submitting. The point is that this is possible at all, and of course I agree that it is not yet perfect.
  4. Giving assignments in college just got “tougher”. Maybe we should ban electronic devices in college? Except in faculty rooms, of course. That’s ok. Contradiction? What contradiction?
  5. Completely random questions I came up with while writing this post:
    • What if I upload a PDF with redacted passages? Can AI figure those out too? I’m guessing no, but I’m no longer sure.
    • What if people upload PDFs (and it need not be only PDFs for very long – the format is not the point) after a gynaecologist visit? Will sex determination be possible at home? What do we do then?
    • How do we measure productivity in the years to come? Whose productivity?
  6. What a time to be alive.

Sam Lessin on AI and the Kardashians

Not a Twitter thread, per se, which is what I usually prefer to share on Saturdays – but this really is a Twitter thread masquerading as a single tweet.

More importantly, the best (by which I mean the most succinct) description of AI in the context of content for social media that I have seen (h/t Ben Thompson):

The End of the College Submission (Thank God)

This blog post is a riff on Seth’s post from the other day, titled “The End of the High School Essay”:

New York City schools are trying to ban GPT3 because it’s so good at writing superficial essays that it undermines the command structure of the essay as a sorting tool. An easy thing to assign (and a hard thing to grade) just became an easy task to hack.
High school essays had a huge range of problems, and banning the greatest essay device since Danny Dunn and his Homework Machine is not the answer. In fact, it’s a great opportunity to find a better way forward.
The first challenge of the essay was the asymmetrical difficulty in giving useful feedback. 30 essays, 5 minutes each, do the math. It doesn’t scale, and five minutes isn’t even close to enough time to honor the two hours you asked a student to put into the work.

https://seths.blog/2023/01/the-end-of-the-high-school-essay/

Exams in almost all of the colleges and universities I have taught at don’t mean a thing. The students know this, the faculty knows this, the examination department knows this, but we all keep up the charade that Meaningful Work Is Being Done through the conduct of examinations.

Newsflash: there is no meaningful work being done. It is a complete farce.

Some universities choose to not pay faculty members for correcting papers at the end of the semester. Let’s assume a college is paying a visiting faculty member two thousand rupees per hour to teach a class. They might slip in a line towards the end: this also includes examination duties. In English, this means that if you teach a thirty hour course, you will be paid sixty thousand rupees for those thirty hours. So far, so good. But “also includes examination duties” means that for a batch of (say) a hundred and twenty students, you are also expected to design question papers (a set of two, usually) and correct a hundred and twenty answer sheets.

Even if you assume that one is able to correct paper after paper without taking a break, with five minutes being the time taken per paper, that still means at least ten hours’ worth of work. Which means, of course, that you are not being paid two thousand rupees per hour, but rather fifteen hundred. Accounting is a subject that may well be taught at universities – that doesn’t necessarily mean that it is practised at universities.

Some other universities offer to pay forty rupees per answer sheet corrected. Which is better than zero, admittedly, but we then run into the problem of incentives. If you’re paid two thousand rupees to teach, and forty rupees per paper to correct answer sheets, how many answer sheets should you correct in an hour to “make” the same wage? And if fifty answer sheets being corrected in an hour is clearly far too many, how do you expect this incentive to work? Or do we teach our students that incentives matter, but ignore this point ourselves?
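
Both of these calculations fit in a few lines of Python, if you would like to plug in your own university’s numbers – all the figures below are the illustrative ones from above:

    # Effective hourly rate once "examination duties" are folded in.
    teaching_hours = 30
    rate_per_hour = 2_000            # rupees
    students = 120
    minutes_per_paper = 5

    grading_hours = students * minutes_per_paper / 60                     # 10 hours
    effective_rate = teaching_hours * rate_per_hour / (teaching_hours + grading_hours)
    print(f"Effective rate: Rs {effective_rate:.0f} per hour")            # Rs 1500, not 2000

    # The per-paper alternative: papers per hour needed to match the teaching wage.
    rate_per_paper = 40              # rupees
    print(f"Papers per hour to match the wage: {rate_per_hour / rate_per_paper:.0f}")  # 50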

Students know the farcical nature of examinations all too well. The pandemic took away that last remaining fig leaf of dignity that surrounds examinations, and the head-in-the-sand approach that most universities have adopted post-pandemic is that of closed-book, no-internet-access examinations. Quite how this pen-and-paper examination is supposed to prepare students for what they will do in the real world is a question nobody wants to raise, let alone answer.

And so students quite reasonably ask for “the pattern of the paper”, or the “important questions” or the “important topics” before an examination. They are, in other words, seeking to minimize efforts in order to maximize marks scored in an examination. The tragedy lies in the fact that academia is supposed to be about maximizing learning. But on and on we go, in our mad headlong rush to maximize NAAC scores, difficult and uncomfortable questions about examinations be damned.

But all that these pen-and-paper examinations do is train students to produce mediocre output that AI can already produce – and at a much better quality than scribbled answers in answer sheets will ever reach. That’s not a knock against students; it is praise for how good AI has already gotten.

Think about it, for this is a point that bears repetition. Our examination system is geared towards training students to do a worse job than AI, by definition. And for this, we take money from students and their families, and we call it “an education”. Pah.

Now, I’m well aware of the fact that this is not applicable in all cases. There are some subjects/courses in the social sciences where these kinds of examinations are entirely justified. And medical and engineering fields are a whole separate story. But I’m not arguing for an extreme solution – I’m saying that the pendulum has swung far too much into Luddite territory when it comes to examinations and submissions. We need to wake up and smell the AI, and adjust accordingly.

But how? Well, the easy thing to do is to say that’s a difficult answer to give in a blogpost, but here’s Seth Godin again:

The answer is simple but difficult: Switch to the Sal Khan model. Lectures at home, classes are for homework.

When we’re on our own, our job is to watch the best lecture on the topic, on YouTube or at Khan Academy. And in the magic of the live classroom, we do our homework together.

In a school that’s privileged enough to have decent class sizes and devices in the classroom, challenge the students to actually discuss what they’ve read or learned. In real-time, teach them to not only create arguments but to get confident enough to refute them. Not only can the teacher ask a student questions, but groups of students can ask each other questions. Sure, they can use GPT or other tools to formulate where they begin, but the actual work is in figuring out something better than that.
At first, this is harder work for the teacher, but in fact, it’s what teachers actually signed up to do when they become teachers.

This is far less cohesive and controllable than the industrial model of straight rows and boring lectures. It will be a difficult transition indeed. But it’s simple to think about: If we want to train people to take initiative, to question the arguments of others, to do the reading and to create, perhaps the best way to do that is to have them do that.

We’ll never again need to hire someone to write a pretty good press release, a pretty good medical report or a pretty good investor deck. Those are instant, free and the base level of mediocre. The opportunity going forward remains the same: Bringing insight and guts to interesting problems.

https://seths.blog/2023/01/the-end-of-the-high-school-essay/

Kill our current mode of examinations, and help build a world in which we have passionate teachers who help students create. Not a world in which we minimize soul, and maximize those stupid, accursed “marks”.

But on and on we go. Pah.

Complements, Substitutes, AI and the Way Forward

One of the most popular blogposts on this blog is one that I wrote over five years ago: a simple explainer post about complements and substitutes.

It’s part of the arsenal of an economist, an understanding of the concept of substitutes and complements, and it is useful in many surprising and unexpected ways. But never has its use been as important as it is in understanding the importance, the threat and the advantages of AI. A video that I have often linked to in the past, and will probably link to many times again helps make this point clear:

When Steve Jobs says computers are like bicycles for the mind, he is saying that our mind becomes more powerful when we work with computers, rather than instead of them (substitutes) or infinitely worse, without them (almost all examinations conducted in higher education in India today).

And if you want to think about your career in this brave new world of ours, you really should be thinking about working with computers. Not against, or without. As it turns out, this is surprisingly hard to do for most of us. I invite you to walk into a higher education institute of your choice and listen to professors talk about how many students are copying during examinations. Nobody seems to ask why it is right and appropriate to check how good students are at doing work without computers. Why is this a skill that we’re building for folks who will be working in the 21st century?

And if you are learning how to work “effectively” without a computer – and again, that is what we train you for when we make you write three-hour pen-and-paper examinations in higher education – you are destroying your ability to earn more in the future.

I’m being quite serious.

The key questions will be: Are you good at working with intelligent machines or not? Are your skills a complement to the skills of the computer, or is the computer doing better without you? Worst of all, are you competing against the computer?

Cowen, Tyler. Average is over: Powering America beyond the age of the great stagnation. Penguin, 2013.

A lot of people are scared about job losses as a consequence of the rapid development of AI, and with good reason. AI can today do quite a few jobs better than humans can, and more than its current capabilities, what keeps a lot of us up at night is the rate of improvement. Not only is AI very good already, but it is noticeably better than it was last year. And for the pessimists among us, the scarier part is that not only will AI be even better next year, but the rate of improvement will also improve. That is, the improvement in AI’s abilities will not only be more in 2023 compared to 2022, but the difference between 2023 and 2022 will be higher than was the difference in 2022 compared to 2021. And that will be true(er) for 2025, and for 2026 and, well, there’s no telling where we’re headed.

But this is exactly why studying economics helps! Because both Steve Jobs and Tyler Cowen are, in effect, saying the same thing: so long as you plan your career by using computers/AI as a complement, you’re going to be just fine. If you think of your job as being substitutable – or if your job is, or will be, substitutable by a computer – well then, yes, you do have problems.

An underappreciated point is the inherent dynamism of this problem. While AI may not yet be a substitute for your job, that is no reason to assume that it never will be:


For example: is Coursera for Campus a complement to my teaching or a substitute for it? There are many factors that will decide the answer to this question, including quality, price and convenience among others, and complementarity today may well end up being substitutability tomorrow. If this isn’t clear, think about it this way: cars and drivers were complementary goods for decades, but today, is a self-driving car a complement or a substitute where a driver is concerned?

https://atomic-temporary-112243906.wpcomstaging.com/2022/04/18/supply-and-demand-complements-and-substitutes-and-dalle-e-2/

But even so, I find myself being more optimistic about AI, and how it can make us more productive. I haven’t come across a better explainer than the one Ethan Mollick wrote in a lovely post called Four Paths to the Revelation:

I think the world is divided into two types of people: those obsessed with what creative AI means for their work & future and those who haven’t really tried creative AI yet. To be clear, a lot of people in the second category have technically tried AI systems and thought they were amusing, but not useful. It is easy to be decieved, because we naturally tend try out AI in a way that highlights their weaknesses, not their strengths.
My goal in this post is to give you four experiments you can do, in less than 10 minutes each, with the free ChatGPT, in order to understand why you should care about it.

https://oneusefulthing.substack.com/p/four-paths-to-the-revelation

All four examples in this post are fantastic, but the third one is particularly relevant here. Ethan Mollick walks us through how AI can:

  1. Give you ideas about what kind of business you might be able to set up given your skills
  2. Refine a particular idea that you would like to explore in greater detail
  3. Give you next steps in terms of actually taking that idea forward
  4. And even write out a letter that you might want to send out to potential business collaborators

His earlier blog posts also help you understand how he himself is using ChatGPT3 in his daily workflow. He is a professor, and he helps you understand what a “mechanical” professor might be able to do:

To demonstrate why I think this is the case, I wanted to see how much of my work an AI could do right now. And I think the results will surprise you. While not nearly as good as a human professor at any task (please note, school administrators), and with some clear weaknesses, it can do a shocking amount right now. But, rather than be scared of AI, we should think about how these systems provide us an opportunity to help extend our own capabilities

https://oneusefulthing.substack.com/p/the-mechanical-professor (emphasis added)

Note the same idea being used here – it really is all about complementarity and substitutability.

AI can already create a syllabus and refine it; it can create an assignment and refine it; it can create a rubric for this assignment; it can create lecture notes; and it can write a rap song about a business management concept to make the content more interesting for students. I loathe the time spent in creating documentation around education (every single teacher does), and it would take me a long time to come up with even a halfway passable rap song about substitutes and complements.

That last statement is no longer true: it took me twenty seconds.

Here are examples from outside the field of academia:

The question to ask isn’t “how long before I’m replaced?”. The question to ask is “what can I do with the time that AI has saved me?”. And the answer to that question should show that you are thinking deeply about how you can use (and continue to use!) AI as a useful complement.

If you don’t think about this, then yes, I do think that you and your job are in trouble. Get thinking!

Exams and Assignments in the Age of AI

The blog hasn’t been updated for a while, but most of his posts make for excellent reading.