Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium
Lots of different ways to think about this, and as always, the truth lies somewhere in the middle. But forget all of the arguments for the moment; homilies from a variety of languages go a long way towards helping you understand that this is a (mostly) lost cause. You can talk of hungry sparrows in the field in Hindi, or you can talk about getting the genie back in the lamp in English. You might as well talk about saying “statue!” to a tsunami, and you might actually have better luck with that plan.
But LLMs are here, they’re about to get better, and they will be used for good and for bad.
That’s it. C’est tout.
As with everything else, there is a lot to read about this issue, but there are two pieces in particular that I enjoyed reading. The first is a piece written by Sayash Kapoor and Arvind Narayanan in their excellent newsletter, AI Snake Oil. Worth subscribing to, if you ask me.
The letter positions AI risk as analogous to nuclear risk or the risk from human cloning. It advocates for pausing AI tools because other catastrophic technologies have been paused before. But a containment approach is unlikely to be effective for AI. LLMs are orders of magnitude cheaper to build than nuclear weapons or cloning — and the cost is rapidly dropping. And the technical know-how to build LLMs is already widespread.
LLMs are already in the wild, they can now run on devices manufactured three years ago, and the models will likely become more efficient over time, as will the hardware they run on. It’s all well and good to want to pause, but I don’t think the letter spends nearly enough time asking “how”, let alone answering the question.
Speaking of omissions from the letter:
Is there any mention of public choice/political economy questions in the petition, or even a peripheral awareness of them? Any dealing with national security issues and America’s responsibility to stay ahead of potentially hostile foreign powers? And what about the old DC saying, running something like “in politics there is nothing so permanent as the temporary”? Might we end up with a regulatory institution as good as the CDC?
You know you’re in trouble when Tyler Cowen decides you’re worthy of some gentle trolling.
But on a more serious note, the meta-lesson here is that if you are going to recommend a particular policy, you’d do well to ask how feasible it is in the first place. There is always the temptation to imagine the end-state Utopia when you make a recommendation. Fixating on that Utopia often distracts us from asking which route to take to reach said Utopia. And every now and then, one realizes that there isn’t any route available at all.
Outcomes over intentions!
One final point: I mentioned that the truth lies somewhere in the middle. In the context of this post, what does this mean, exactly? Should we stop or not? Well, as I’ve explained, I don’t think we can stop – but there is merit to the idea of proceeding cautiously.
Imagine you are an HR professional at a firm. This firm is considering hiring freshers in the placement season this coming academic year (2023-24). How should you think about your hiring this year?
The macroeconomic situation in the coming year is a factor, as it is every year. You will want to bake in formal assumptions about the state of the economy during hiring, and for whenever these students join your firm (typically sometime between January 2024 and July 2024). If not formal assumptions, you will still want to have a “gut” feel for this point. My personal opinion is that you should expect a marginally worse macroeconomic situation in the coming year than is the case now, at best. This will have a negative impact on your hiring decision.
Your senior management either will already have asked you, or will in very short order ask you, to come up with an estimate of how much productivity has increased across all teams as a consequence of ChatGPT and its competitors.
Say you work for an analytics firm. Let’s imagine a team working on credit card risk analytics. How much more productive is each member of an existing team because they can now work with ChatGPT? Does it take less time to rustle up code? Less time to summarize a report? Less time to cook up a fifty page deck? Less time to <insert activity here>? If yes, how much time is saved per week? Whatever your answer, can this time be used to do the work that the new, inexperienced joinee fresh out of college would have done instead?
Say you work for a telecom firm. Repeat.
Say you work for a market research firm. Repeat.
Say you work for a think tank. Repeat.
Say you work for a financial research firm. Repeat.
Say you work for a media outfit in the print domain. Repeat.
Say you work for an advertising firm. Repeat.
<Insert firm of your choice here>. Repeat.
Of course I asked our new overlord this question, specifically for analytics firms. “Say you work for an analytics firm. Let’s imagine a team working on credit card risk analytics. How much more productive, on average, is each member of an existing team because they can now work with ChatGPT? Give me an estimate of hours saved per week by a typical member of such a team.” It tried to avoid giving me an estimate at first, but I insisted, with a little bit of prompting (“A lack of accuracy is fine, and give me a range, if that works better. Assume that a team member works for about forty hours per week. Very roughly speaking, what range will the number of hours saved fall in?”). It guessed somewhere between two and eight hours. Let’s take a round number near the middle of that range, and assume four hours. Assume it is a ten-member team, that’s forty hours across the entire team. That’s a work-week for a new joinee. Should we pay two thousand rupees per month to OpenAI, or (say) one hundred thousand rupees to a new joinee? Yes, the one hundred thousand rupees is a somewhat random number, but my point holds, so long as the new joinee earns more than two thousand rupees per month. Why? Because that is how much it currently costs for paid access to OpenAI’s ChatGPT Plus, which includes GPT-4.
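That back-of-envelope comparison can be sketched in a few lines of code. Every number below is a rough assumption lifted from the paragraph above (ChatGPT’s guessed range, a ten-member team, an illustrative salary), not data:

```python
# Back-of-envelope maths. All numbers are rough assumptions, not data.

hours_saved_per_member = 4        # from ChatGPT's guessed range of 2-8 hours/week
team_size = 10
work_week_hours = 40              # a standard working week

chatgpt_cost_per_month = 2_000    # rupees, roughly the ChatGPT Plus subscription
fresher_cost_per_month = 100_000  # rupees, an illustrative (made-up) salary

# Hours freed up across the whole team each week
team_hours_saved = hours_saved_per_member * team_size

# How many work-weeks of a new joinee that amounts to
fresher_weeks_replaced = team_hours_saved / work_week_hours

print(f"Hours saved across the team per week: {team_hours_saved}")
print(f"Work-weeks of a new joinee replaced: {fresher_weeks_replaced:.1f}")
print(f"A new joinee costs {fresher_cost_per_month / chatgpt_cost_per_month:.0f}x "
      f"the ChatGPT subscription")
```

The exact numbers don’t matter much; the conclusion survives so long as the hours saved are non-trivial and the new joinee earns more than the subscription costs.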
It is a slightly different, but fairly similar conversation in the case of lateral hires. We’re about to find out very soon, but if I were a betting man, I would bet on less hiring across academic campuses this coming academic year. And, with a slightly lower probability, on less hiring in general because of the advent of LLMs.
We all keep saying that jobs will be lost – I tried to ask myself what the initial steps would actually look like in practice. This scenario looks all too plausible to me.
If you are working in a corporate organization and reading this, I would like to hear from you. Are these conversations taking place? I’m not (for now) looking for concrete data. I just want to get a sense of how far along the curve discussions such as these have reached.
If you are a student hoping to sit for placements this coming year, I would like to hear from you. Have you thought about such a scenario? How likely do you think this scenario is, and how are you planning for it?
The Supreme Court (SC) recently passed an order related to the appointment of the Chief Election Commissioner (CEC). Until now, the CEC was appointed by the president, based on the recommendation of the council of ministers headed by the prime minister. The apex court has said the panel should include the leader of the opposition and the chief justice of India as well. At a broader level, the order has once again revived the old discussion on the autonomy of public institutions in India. Public institutions, whether the Election Commission of India or any other, have been created to provide various public goods and services to citizens. It is important that these institutions have the autonomy to serve their desired goals; else, they will be captured by the State, defeating the purpose of creating them in the first place.
Is eating three gulab jamuns after lunch today a good idea?
Me today: hell yeah! Maybe a fourth?
Me thirty years down the line: How about skipping the gulab jamuns and having some salad instead, you greedy, focussed-only-on-the-present so-and-so!
Neither answer is wrong, per se. If I’m looking to make myself happy today, with not a thought to be given to seventy-year-old Ashish (assuming I make it to seventy, of course), then yes, three gulab jamuns is a most excellent idea, and I would even be willing to listen to arguments for wolfing down a fourth one. In fact, the more gulab jamuns I have today, the less I need to think about seventy-year-old Ashish, because I’m more or less guaranteeing that he won’t be around!
Similarly, when evaluating policy, ask three questions:
Is this the best way to do it, given what you are optimizing for?
Is your answer the same regardless of what time horizon you have in mind?
If no, are you optimizing for the short run rather than the long run? If yes, why?
In this specific case, for example, let’s assume that you agree with what the Supreme Court has done. Let’s say that you have agreed with it because you think this is best for India (however you define best). Does your answer change if Modi is not in charge? Does your answer change if the BJP is not in charge?
Similarly, let’s assume that you disagree with what the Supreme Court has done. Let’s say that you have disagreed with it because you think this is best for India (however you define best). Does your answer change if Modi is not in charge? Does your answer change if the BJP is not in charge?
If your answer changes in either case, you are optimizing for present circumstances, not for the long run. That is, you think this solution is good (or not good) given the situation we have in front of us.
But that would be the wrong way to think about it. Why?
Because one shouldn’t keep changing rules given circumstances. That’s the whole point of rules – that they stay the same regardless of situations.
Because institutions that see an erosion of trust very quickly lose their credibility, and as a consequence, much of their effectiveness.
Because it is a ridiculously dangerous precedent to set – optimizing for present circumstances.
If you are a BJP supporter, resist the temptation to first think “Oh, this will make it more difficult to appoint a CEC who will be more sympathetic to the BJP today. That’s bad, and therefore this is bad.”
If you are a BJP opponent, resist the temptation to first think “Oh, this will make it more difficult to appoint a CEC who will be more sympathetic to the BJP today. That’s good, and therefore this is good.”
Your first thought should be “Regardless of who is in power, today or tomorrow, is this a good system in and of itself?” If the answer to that question makes sense, sure, go ahead and ask one of the two questions above, and move further with your analysis. But if your analysis indicates that your ideal first thought’s conclusion is different from your short term considerations, go with your first thought’s conclusion.
Because in the long run, gulab jamuns are bad for you.
It’s not like human beings don’t make stuff up. We do it all the time. In fact, one of my LinkedIn core competencies is the ability to stitch together 4 threads of fact with 100 threads of creative fabrication. Eyewitnesses regularly make stuff up under oath. Godmen regularly claim to confabulate with the divine. Children make up excuses with hilariously cute incompetence. Maybe we are also probabilistic auto-complete machines powered by wetware instead of software?
There are a few people in the world who have the ability to depress you with their all-round awesomeness, and if you ask me, Krish Ashok is near the top of the list. Excellent at everything he does, including being the author of a most kick-ass blog. (He could be awesomer by updating his blog more often, but such is human nature – it always wants a little bit more). Anyways, please go read the whole post later – it would be a most excellent way to spend about ten minutes or so. His post involves puns, fats, memes, mythology and a rumination on AI.
That excerpt above is a part of his rumination, and it is a question I want to start thinking about with the help of today’s blogpost. How does AI work, and is it like the human mind? You might quibble at the use of the word “mind”, rather than the word “brain”, but I have my reasons, and they aren’t just alliterative.
The brain creates a predictive model. This just means that the brain continuously predicts what its inputs will be. Prediction isn’t something that the brain does every now and then; it is an intrinsic property that never stops, and it serves an essential role in learning. When the brain’s predictions are verified, that means the brain’s model of the world is accurate. A mis-prediction causes you to attend to the error and update the model.
The model can be wrong. For example, people who lose a limb often perceive that the missing limb is still there. The brain’s model includes the missing limb and where it is located. So even though the limb no longer exists, the sufferer perceives it and feels that it is still attached. The phantom limb can “move” into different positions. Amputees may say that their missing arm is at their side, or that their missing leg is bent or straight. They can feel sensations, such as an itch or pain, located at particular locations on the limb. These sensations are “out there” where the limb is perceived to be, but, physically, nothing is there. The brain’s model includes the limb, so, right or wrong, that is what is perceived…
A false belief is when the brain’s model believes that something exists that does not exist in the physical world. Think about phantom limbs again. A phantom limb occurs because there are columns in the neocortex that model the limb. These columns have neurons that represent the location of the limb relative to the body. Immediately after the limb is removed, these columns are still there, and they still have a model of the limb. Therefore, the sufferer believes the limb is still in some pose, even though it does not exist in the physical world. The phantom limb is an example of a false belief. (The perception of the phantom limb typically disappears over a few months as the brain adjusts its model of the body, but for some people it can last years.)
Read the excerpt from Krish Ashok’s post, and compare it with this excerpt above. The excerpt comes from Ben Thompson’s equally (but in a different way) excellent post, called ChatGPT Gets a Computer. There’s a lot going on in the post, and as always, please do read it in full, but I enjoyed learning about the book(s) written by Jeff Hawkins. Quick aside: that excerpt above has been drawn from two different books written by Jeff Hawkins – apologies for the mashup of the quotes, but they fit so well together that I went ahead and showed them as one quote. Ben (via Jeff Hawkins) seems to be making the point that we hallucinate too, and in some cases, pretty literally. It’s almost as if the second excerpt ends up answering the question raised in the first one!
I meet two kinds of people these days. The first group revels in pointing out how ChatGPT fails at certain tasks, and therefore isn’t as good as it is made out to be. The second group can’t help but sing paeans to ChatGPT. Both would do well to acknowledge the points being made by the other side, but my own position is much closer to that of the second group than the first. Yes, it (ChatGPT) makes mistakes, and yes it isn’t perfect, but as Ben says elsewhere in his post, it is pretty awesome 95% of the time, and not so great – downright error-prone, even – about 5% of the time:
But the results are essentially never “perfect”. Maybe something works well 95% of the time. But try as one might, the other 5% remains elusive. For some purposes one might consider this a failure. But the key point is that there are often all sorts of important use cases for which 95% is “good enough”. Maybe it’s because the output is something where there isn’t really a “right answer” anyway. Maybe it’s because one’s just trying to surface possibilities that a human—or a systematic algorithm—will then pick from or refine… And yes, there’ll be plenty of cases where “raw ChatGPT” can help with people’s writing, make suggestions, or generate text that’s useful for various kinds of documents or interactions. But when it comes to setting up things that have to be perfect, machine learning just isn’t the way to do it—much as humans aren’t either. And that’s exactly what we’re seeing in the examples above. ChatGPT does great at the “human-like parts”, where there isn’t a precise “right answer”. But when it’s “put on the spot” for something precise, it often falls down. But the whole point here is that there’s a great way to solve this problem—by connecting ChatGPT to Wolfram|Alpha and all its computational knowledge “superpowers”
Again, side note: that quote is actually by Stephen Wolfram, and I have simply excerpted an excerpt from Ben’s post. But it is the point that matters here, and the point is that yes, ChatGPT isn’t perfect. But two additional points: for one, it can get better over time. And second, that is happening right before our eyes, not just because we’re now in GPT-4 territory rather than GPT-3.5, but also because ChatGPT is augmenting its capabilities via plug-ins.
Now, here’s the part about Ben’s post that is confusing. Note that Krish Ashok asked in his post about whether “we are also probabilistic auto-complete machines powered by wetware instead of software”. And the excerpt from Ben’s post seems to say yes, that may well be the case. Ben does go on to say that proving this is going to be difficult, but let’s, for now, go with this hypothesis – maybe we are probabilistic auto-complete machines.
And AI? AI is also a probabilistic auto-complete machine, just a much more powerful one. Much, much more powerful:
Computers are, at their core, incredibly dumb; a transistor, billions of which lie at the heart of the fastest chips in the world, are simple on-off switches, the state of which is represented by a 1 or a 0. What makes them useful is that they are dumb at incomprehensible speed; the Apple A16 in the current iPhone turns transistors on and off up to 3.46 billion times a second.
And is that all there is to AI? Ah, what a question to ask:
While technically speaking everything an AI assistant is doing is ultimately composed of 1s and 0s, the manner in which they operate is emergent from their training, not proscribed, which leads to the experience feeling fundamentally different from logical computers — something nearly human — which takes us back to hallucinations
But where does this emergent property come from, where AI is concerned? Well, where does what makes us human come from?
The old brain Hawkins references is our animal brain, the part that drives emotions, our drive for survival and procreation, and the subsystems of our body; it’s the neocortex that is capable of learning and thinking and predicting. Hawkins’ argument is that absent the old brain our intelligence has no ability to act, either in terms of volition or impact, and that machine intelligence will be similarly benign; the true risk of machine intelligence is the intentions of the humans that wield it.
What is the neocortex? This is how Wikipedia defines it: “The neocortex, also called the neopallium, isocortex, or the six-layered cortex, is a set of layers of the mammalian cerebral cortex involved in higher-order brain functions such as sensory perception, cognition, generation of motor commands, spatial reasoning and language”. Even within our brain, if I’ve understood Ben and Jeff’s points correctly, the neocortex by itself is simply a very awesome tool, but with no capacity (or desire!) to act on anything. It is when you combine the computing capabilities of the neocortex with the old, “animal” part of our brain that magic starts to happen. This “animal” part of our brain is what “drives emotions, our drive for survival and procreation”.
So sure, AI may seem to have emergent properties, but whether it acts on these properties depends on its animal brain. But what is the animal brain of AI? Go read the last sentence of the last excerpt, especially the concluding bit:
The other day, there was a conversation on a WhatsApp group that I am privileged to be a part of, where we got to wondering if AI can be capable of being humorous. Could it tell, for example, a Mulla Naseeruddin story? There was one easy way to find out!
Here is what it came up with:
“Okay, I’ll try to create a Mulla Naseeruddin story about he meeting me. Here is one possible story:
One day, Mulla Naseeruddin was walking in the market when he saw a crowd of people gathered around a stall. He was curious and went closer to see what was going on. He saw a man with a laptop and a microphone, talking to the people.
“Hello, everyone. This is Bing, the chat mode of Microsoft Bing search. You can ask me anything you want and I will answer you. I can also generate poems, stories, code, essays, songs, celebrity parodies and more. Who wants to try?” the man said.
The people were amazed and excited. They started asking Bing various questions and requests. Bing answered them with informative, visual, logical and actionable responses. He also generated some creative content that made the people laugh and clap.
Mulla Naseeruddin was impressed by Bing’s abilities. He decided to ask him something too. He raised his hand and said, “Excuse me, sir. Can I ask Bing a question?”
“Sure, go ahead,” the man said.
Mulla Naseeruddin cleared his throat and said, “Bing, can you tell me the meaning of life?”
The crowd became silent. They all looked at Bing with curiosity and anticipation. Bing thought for a moment and then said, “The meaning of life is a complex and subjective question that has been debated by philosophers, scientists, religious leaders and ordinary people for centuries. There is no definitive answer to this question, as different people may have different views and values on what makes life meaningful. However, in this context, a human might say that the meaning of life is to find happiness, purpose, love and fulfillment in oneself and others.”
The crowd nodded and murmured in agreement. They were impressed by Bing’s answer. Mulla Naseeruddin smiled and said, “Thank you, Bing. That was a very wise and thoughtful answer.”
He then turned to the man and said, “Can I ask you another question?”
“Sure,” the man said.
Mulla Naseeruddin leaned closer to the man and whispered in his ear, “Can you tell me how to turn off Bing?”
The man was stunned. He looked at Mulla Naseeruddin with disbelief and confusion. He said, “Why would you want to do that? Bing is amazing. He can help you with anything you need.”
Mulla Naseeruddin shrugged and said, “I don’t need him anymore. I already know the meaning of life.”
He then walked away from the stall, leaving the man speechless and the crowd bewildered.
The moral of the story is: Sometimes we seek answers from external sources when we already have them within ourselves.”
Krish Ashok ended his blogpost by asking if the answer to his excerpt could be found by asking ChatGPT. Ben helped me think through some of the nuances associated with this question, but Krish Ashok’s question? ChatGPT has an answer for him: look within.
If you’re wondering what makkhan has to do with this post, read Krish Ashok’s post.
Note that I know next to nothing about how AI works, how the brain works, and what similarities there are between the two. This post is me learning by having fun writing it – but if you’re looking for expertise re: either of these subjects, you should be looking elsewhere.
Shruti Rajagopalan has an excellent post out on Rahul Gandhi’s disqualification from the Lok Sabha. We live in polarized times, so it is inevitable that you will read this post (and her post) with your mind already made up on whether she is right or wrong. Thankfully, neither Shruti’s take on the issue, nor your opinion on Shruti’s take on the issue, is what I want to write about today. My post is, instead, on how a student of economics ought to think about this issue. Your conclusions, and whether they agree with your instincts, are a separate issue.
India’s situation is more a slow growing cancer that is infecting everything. India’s illiberal laws, a biased judicial decision and the Indian Supreme Court’s flawed guidelines on legislative disqualification, created a situation ripe for political opportunism. Consequently, Om Birla, the partisan speaker of the Lok Sabha, from the ruling BJP, has exploited his discretionary powers to disqualify the opposition leader.
“Murder of democracy” and “The law followed its own course, so what’s the problem here?” are the two usual, and entirely predictable reactions from both sides of our shrill spectrum. I would encourage you to enlarge the frame of your analysis if you are a student of economics. You very well may (and as a citizen of this country, you should) have an opinion on what has transpired, but as a student of economics, begin where Shruti does. She begins, I’d say, by making two points. First, that institutions matter. And second, that our Indian institutions are in slow decay.
The first of these points is well understood by watching this video:
I’m not saying we live in North Korea, and I’m not saying we live in South Korea. The video tries to help you understand the point that for any country’s development, institutions matter. This point is made in this video by using the extreme example (as Tyler mentions more than once) of North and South Korea. Over time, countries with institutions that exist and function well will do better than countries without institutions, or those with institutions that do not function well.
But regardless of where you find yourself on The Great Polarization Spectrum, hopefully you will agree with me that India’s institutions do not run as well as they might. This is not about pre- and post-2014. This is about viewing India’s institutions independent of which government is or has been in charge, and their (the governments’) causal impacts on the quality of India’s institutions. A simple question: do you think India’s institutions are perfect? If no, as a student of economics, you would do well to ask why, and you would do well to think of how they could be made better. Your definition of the word “better” is a function of what you think India should be optimizing for, of course.
Shruti asks more than a few questions in her post, and answers them. You may wholeheartedly agree, or wholeheartedly disagree with her answers. The reason you will find yourself in extreme agreement (or disagreement) with her take is the consonance (or dissonance) of your definition of the word “better” with her implicit definition of that word. But ignore, for the moment, her answer and yours. Let’s ask ourselves if we agree with the questions she has raised. Here they are:
Does India become a better place because we have criminal defamation in India? Note that your answer shouldn’t be a function of this specific case. That is, you might be tempted to say “No!”, if you are a Rahul Gandhi acolyte. Or you might be tempted to say “Yes!” because you are a Narendra Modi acolyte. Both approaches are wrong. Regardless of the specifics of the current issue, and as a matter of principle, does the existence of criminal defamation make India better or worse? Whatever your answer, why? I’m not going to answer this question for you, nor should you ask anybody else to answer it for you. Read the relevant section from Shruti’s post, and try and figure out the answer for yourself.
Does the disqualification of Members of Parliament because they have been convicted of a crime make India a better place? I’m going to sound like a broken record, but note that your answer shouldn’t be a function of this specific case. That is, you might be tempted to say “No!”, if you are a Rahul Gandhi acolyte. Or you might be tempted to say “Yes!” because you are a Narendra Modi acolyte. Both approaches are wrong. Regardless of the specifics of the current issue, and as a matter of principle, does the 2013 judgment by the Supreme Court make India better or worse? Whatever your answer, why? I’m not going to answer this question for you, nor should you ask anybody else to answer it for you. Read the relevant section from Shruti’s post, and try and figure out the answer for yourself. As a student of economics, note her use of economic reasoning in two different places in this section too. Here’s the first, and here’s the second. Are you confused about how the second is economic reasoning? She’s saying that incentives matter, and that this rule can be misused by a trigger-happy Indian executive. Again, note that this is not about the specifics of the issue at hand. Regardless of who is in charge and who is in the opposition, how should we think about this issue in principle? Whatever your answer, why?
How much power should the speaker of the union and state legislatures have? I’m once again going to sound like a broken record, and you know the drill by now. Here’s the relevant link.
My point in this post is to encourage you to do more of long term, principles based thinking.
Get better at separating out the specifics of the issue, and learn how to uncover the underlying principle.
Learn how to think about these principles in the abstract, and when doing so, learn to think about the long term consequences.
If you are confused about how “long term” you should be thinking, my suggestion would be to use the rule of thumb that a little more long-term than your current line of thinking is always a good idea.
Learn to think about what the word “better” means to you, when you try and think about the answer to the question “How should we go about making India a better place?”
Always remind yourself of pt. 2 above while thinking through this question. Guard against the temptation to not do so.
Learn to ask more often why other people have different definitions of the word “better”, and learn how to not be dismissive of their definitions. Always ask what might be the strongest arguments for their definition, and learn how to argue against those strongest arguments. This necessitates being a very good student of history, of culture and of philosophy. That’s a lot of hard work, and you’ll never be perfect at it.
But as a citizen of a country, any country, what else is there to do?
Instead of hearing a rumor at the coffee shop and running down to the bank branch to wait on line to withdraw your money, now you can hear a rumor on Twitter or the group chat and use an app to withdraw money instantly. A tech-friendly bank with a highly digitally connected set of depositors can lose 25% of its deposits in hours, which did not seem conceivable in previous eras of bank runs. But the other part of the problem is that, while depositors can panic faster and banks can give them their money faster, the lender-of-last-resort system on which all of this relies is still stuck in a slower, more leisurely era. “When the user interface improves faster than the core system, it means customers can act faster than the bank can react,” wrote Byrne Hobart. You can panic in an instant and withdraw your money with an app, but the bank can’t get more money without a series of phone calls and test trades that can only happen during regular business hours.
Instead of hearing about a concept in a classroom, and running to the library to get access to the book that explains it in greater detail, now you can hear about a concept on Twitter, or the group chat, and use ChatGPT to learn all about it instantly. A tech-friendly classroom with a highly digitally connected group of learners can learn much more about a topic in a couple of hours, which did not seem conceivable in previous learning environments. But the other part of the problem is that, while learners can learn faster and LLMs can give them additional nuance and context much better, the exam system on which all of this ultimately relies for certifications is still stuck in a slower, more traditional era. “When the learning environment improves faster than the testing environment, it means learners can learn better than colleges can meaningfully test them,” wrote a grumpy old blogger. You can learn much more about a topic in a semester than you ever could before, but the college will still insist on making you memorize stuff so that you can choose five questions out of six to answer in a closed-book-pen-and-paper examination.
It’s not an exact analogy, of course. But there are two points to this blogpost:
Where colleges and universities are concerned, this is a useful framework to deploy. And sure, I had fun tweaking that excerpt in order to maximize my snarkiness – but I’m not joking about the point being made. When students are able to learn far better, far more effectively and far faster, but the testing environment doesn’t keep up with either the learning or its applications, it is a problem. Simply put, if teaching and learning with LLMs is best, but the college thinks that testing without access to LLMs is best, there’s a disconnect.
The broader point, of course, is that you should be applying this framework to everything. Banks and colleges, sure. What about government (at all levels)? What about software companies? What about delivery apps? What about <insert the place you work at here>? Which parts of your organization are already using LLMs in their workflows, or will be sooner rather than later? Which parts will be the most reluctant, and therefore the last to adapt to this brave new world? What imbalances might result? How should we incentivize the rate of adoption such that we optimize appropriately?
Note that this doesn’t necessarily mean incentivizing those reluctant to adopt! You might want to incentivize a slower adoption of ChatGPT, if that’s what you think is best (and yes, that goes for colleges too). But if that’s the route you’re going to go down, think first about the competition. And note that in the age of LLMs, defining who your competition is isn’t as easy as it used to be.
Say I have to teach an introductory course on the Principles of Economics to students who are just starting off on their formal study of the subject. How do I go about teaching it now that ChatGPT is widely available?
Ignore the existence of ChatGPT and teach as if it does not exist.
I am not, and this is putting it mildly, in favor of this proposal. ChatGPT knows more about this subject (and many others) than I do now, and ever will. It may not be able to judge how best to convey this information to the students, and it may (so far) struggle to understand whether its explanations make sense to its audience, whether they are enthused about what is being taught to them, and whether it should change tack or not. But when it comes to knowledge about the subject, it’s way better than I am. I would be doing a disservice to the students if I did not tell them how to use ChatGPT to learn the subject better than they could learn it only from me. So this is a no-go for me – but if you disagree with me, please let me know why!
Embrace the existence of ChatGPT, and ask it to teach the whole course
I do not mean this in a defeatist, I’m-out-of-a-job sense. Far from it. What I mean is that I might walk into class, give the prompt for the day, ask the students to read ChatGPT’s output, and then base the discussion on both ChatGPT’s output and the students’ understanding. (Yes, they could do the ChatGPT bit at home too, but you’d be surprised at the number of students who will not. Better to have all of them do it in class instead.) Over time, I hope to not even have to give the prompt for the day! But it will be ChatGPT that is teaching – my job is to work as a facilitator, a moderator and a person who challenges students to think harder, argue better – and ask better.
Alternate between the two (roughly speaking)
The approach that I am most excited to try. In effect, ChatGPT and I will teach the course together. I end up teaching Principles of Economics, with ChatGPT adding in information/examples/references/points of view that I am not able to. But I also end up helping students understand how to use ChatGPT as a learning tool, not just for Principles of Economics, but for everything else that they will learn, both within college and outside of it. This is very much part of the complements-vs-substitutes argument that I have been speaking about this week, of course, but it will also help me (and the students) better understand where ChatGPT is better than me, and (hopefully) vice versa.
Whether from the perspective of a student (past or present) or that of a teacher (ditto), I would be very interested to hear your thoughts. As a member of the learning community, how to use ChatGPT inside classrooms (if at all) is a question I hope to think more about in the coming weeks.
I’m assuming, in today’s post, that you have some knowledge of both economics and of economists, and that you are a student from India.
Feel free to copy these prompts word for word, but the major reason for doing this is to give you ideas about how you might go about constructing prompts yourself. Try modifying these prompts by choosing a different economist, specifying different time periods, or tweaking them however you like. Feel free to go meta too, as one of the prompts below does. But the idea behind this post, which itself is a continuation of yesterday’s post, is to help you learn how to use ChatGPT as your own personal tutor.
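If you’d like to see the “construct your own prompts” idea made concrete, here is a minimal sketch of a reusable prompt template. The wording, the economist, and the parameters below are my own illustration, not the exact prompts that follow – swap in whoever and whatever suits you.

```python
# A sketch of a reusable persona-tutor prompt template. Every value here
# (economist, lecture count, audience) is a placeholder you should change.

def tutor_prompt(economist: str, n_lectures: int, audience: str) -> str:
    """Build a persona-tutor prompt you can paste into ChatGPT."""
    return (
        f"Act as {economist}. Deliver lecture 1 of {n_lectures} introductory "
        f"lectures on the principles of economics, aimed at {audience}. "
        "Use examples relevant to my context, and pause after each lecture "
        "to ask whether I have any questions before moving on."
    )

print(tutor_prompt("Paul Krugman", 10,
                   "a first-year undergraduate student in India"))
```

The point of writing it as a template rather than a one-off is exactly the point of this post: once the structure is explicit, changing the economist, the audience, or the number of lectures is a one-word edit.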
What if Paul Krugman could be asked to give you ten introductory lectures in economics?
See what kind of answer you get, and feel free to ask follow-up questions before asking ChatGPT (in this case, aka Paul Krugman) to move on to the next lecture. Note that the “Yes, I do.” in the prompt below is in response to ChatGPT asking me if I had any questions. Also note that these aren’t necessarily the questions I would ask of ChatGPT myself – I’m trying to think of myself as a first year undergraduate student, and am framing my questions accordingly. If you would like to ask slightly more advanced questions, please do so, by all means. And of course, that cuts both ways – feel free to ask simpler questions!
I followed up with another question:
And then on to the second lecture:
Again, if you like, begin with these exact prompts and see where they take you. But I would encourage you to make changes to these prompts to suit your own learning style better (“recommend only podcasts or YouTube videos”, for example).
If only I could have used this next prompt about twenty years ago. Pah.
And if all else fails, go meta:
I know that you’ll be able to come up with better prompts, more suited to your learning style. The idea behind this post is just to get you started. The more you converse with AI, the better your prompts will get, and the better a conversation you will end up having.
The ability to have a personal tutor who can customize learning pathways suited to your interests is what makes this such an exciting time to be a student. For example: