Games and Microsoft Excel

Via Navin Kabra on Twitter:

Make Examinations Relevant Again

Alice Evans (and if you are unfamiliar with her work, here’s a great way to begin learning more about it) recently tweeted about a topic that is close to my heart:

And one of the replies was fascinating:


I’ve asked students to create podcasts in the past for assignments, but not yet for final or semester end examinations, because I am constrained by the rules of whichever university I’m teaching in. There are some that allow for experimentation and off-the-beaten-path formats, but the vast majority are still in “Answer the following” mode.

But ever since I came across that tweet, I’ve been thinking about how we could make examinations in this country better and more relevant, and design them in such a way that we test skills applicable to the world we live in today, rather than the world of 100 years ago.

To me, the ideal examination would include the following:

  • The ability to do fast-paced research on a collaborative basis
  • The ability to work as a team to produce output on the basis of this research
  • The ability to write (cogently and concisely) about how you as an individual think about the work that your team came up with

What might such an examination look like? Well, it could take many forms, but here’s one particular form that I have been thinking about.

Imagine an examination for a subject like, say, macroeconomics. Here’s a question I would love to ask students to think about for such an examination today. “Do you and your team find yourself on Team Transitory or Team Persistent when it comes to inflation today? The answer, in whatever format, should make sense to a person almost entirely unacquainted with economics.”

This would be a three hour long examination. Say the exam is for a cohort of 120 students. I’d split the class up into 10 groups of 12 each, and ask each group to spend one hour thinking about this question, and doing the research necessary to come up with an answer. They can discuss the question, split the work up (refer to textbooks, refer to material online, watch YouTube videos, discuss with each other, appoint a leader – whatever it is that they need to do) and come up with an outline of what their answer is.

The next hour would be spent coming up with the answer itself: write a blogpost about it, or record audio, or record video. The format is up to them, as is the length. The only requirement is that the output must answer the question, and must include reasons for their choice. Whether the background information required to make sense of the answer is to be given (or referenced, or skipped altogether) is entirely up to the students.

And the final hour must be spent on a short write-up where each individual student submits their view about their team’s submission. Given that the second hour’s output was collaborative, does the student as an individual agree with the work done? Why? Or why not? What would the student have liked to have done differently? This part must be written, for the ability to write well is (to me) non-negotiable.

To me, this examination would encompass research (which can only be done in an hour if the students are familiar enough with the subject at hand, so they need to have done their homework), collaboration, and the ability to think critically about the work that they were a part of. Grading could be split fifty-fifty: half for the work done collaboratively, and half for the individual essay submission.


Sure, there would be some problems. Students might object to the groups that have been formed or students might end up quarreling so much in the first two hours that they’re not left with much time. Or something else altogether, which is impossible to foresee right now.

But I would argue that such examinations are more reflective of the work that the students will actually do in the world outside. More reflective than “Answer the following” type questions, that is.

The point isn’t to defend this particular format. The point is to ask if the current format needs to change (yes!) and if so how (this being only one suggestion).

Right now, examinations provide a 19th century solution to very real 21st century problems, and their irrelevance becomes ever more glaring by the day.


We need to talk about examinations, and we aren’t.

A video and a book recommendation

For your Sunday morning:

Fans of science fiction will know which book is coming next, of course. But for the uninitiated, do give Seveneves a whirl if you haven’t read it yet. And if this happens to be your first Neal Stephenson book, well, your timing couldn’t be better. Go read this one next. The OG metaverse!

Tiago Forte’s Best Reads of 2021

This should keep all of us busy for a while 🙂

If you are unfamiliar with Tiago’s work, this essay is a good place to start.

About Teaching Python to Students of Economics

This is a bit of a rushed post, my apologies. I hope to come back to this post and do a better job, but for the moment a placeholder post and a request:

Read the whole thread (including the responses). We (and by we I mean not just all of us at the Gokhale Institute, but higher education in economics in India) should be building out more courses of this nature.

If anybody is already doing this, please do get in touch. I would love to learn more about how to try and start something like this for my university.

Are offline exams better? No.

This is a continuation of a series. The first post, this Monday, asked how we might transition from online to offline education when (if?) the pandemic ends. The second post was about me trying to figure out in which ways offline classes are better. This post is about me trying to figure out ways in which offline examinations are better.

Offline examinations, in the context of this post, are defined as examinations in which students sit in a classroom for three hours, and write detailed answers using pen and paper, without having access to their textbooks or to internet enabled devices.

They aren’t better.

That’s it.


I cannot tell you how strongly I feel about this. Note that this post is about higher education, not about school level exams. But that being said, the idea that an offline examination replicates real life conditions is patently ridiculous.

When was the last time, in the course of your normal workday, that you sat in a room in which you couldn’t access the help of your colleagues or the internet, with only pen and paper, and did work? And even if you were to say to me that such a thing has happened, did that work involve regurgitating what you already know? Or was that work about generating new ideas without being distracted by the internet?

Offline examinations are not about generating new ideas. They aren’t about testing how well you would do in a realistic work setting. I honestly do not know what they are about, and I cannot for the life of me understand why they existed up until covid-19 came knocking.

Offline examinations need to go, and I would love to learn why I am wrong about this. Please help me understand.

Should students of law be taught statistics?

I teach statistics (and economics) for a living, so I suppose asking me this question is akin to asking a barber if you need a haircut.

But my personal incentives in this matter aside, I would argue that everybody alive today needs to learn statistics. Data about us is collected, stored, retrieved, combined with other data sources and then analyzed to reach conclusions about us, and at a pace that is now incomprehensible to most of us.

This is done by governments, and private businesses, and it is unlikely that we’re going to revert to a world where this is no longer the case. You and I may have different opinions about whether this is intrusive or not, desirable or not, good or not – but I would argue that this ship has sailed for the foreseeable future. We (and that’s all of us) are going to be analyzed, like it or not.

And conclusions are going to be made about us on the basis of that analysis, like it or not. This could be, for example, a computer in a company analyzing us as a high value customer and according us better service treatment when we call their call center. Or it could be a computer owned by a government that decides that we were at a particular place at a particular time on the basis of the footage from a security camera.

In both of these cases (and there are millions of other examples besides), there is no human being who makes these decisions about us. Machines do. This much is obvious, because it is now beyond the capacity of our species to deal manually with the amount of data that we generate on a daily basis. And so the machines have taken over. Again, you and I may differ on whether this is a good thing or a bad thing, but the fact is that it is a trend that is unlikely to be reversed in the foreseeable future.

Are the conclusions that these machines reach infallible in nature? Much like the humans that these machines have replaced, no. They are not infallible. They process information much faster than we humans can, so they are definitively better at handling much more data, but machines can make errors in classification, just like we can. Here, have fun understanding what this means in practice.

Say this website asks you to draw a sea turtle. And so you start to draw one. The machine “looks” at what you’ve drawn, and starts to “compare” it with its rather massive data bank of objects. It identifies, very quickly, those objects that seem somewhat similar in shape to those that you are drawing, and builds a probabilistic model in the process. And when it is “confident” enough that it is giving the right answer, it throws up a result. And as you will have discovered for yourself, it really is rather good at this game.
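The “confident enough” mechanism described above can be sketched in a few lines of Python. Everything here is invented for illustration – the labels, the raw scores, and the 0.8 threshold – and a real system like the drawing game uses a trained neural network rather than a hand-written score table:

```python
def classify(scores, threshold=0.8):
    """Return a label once one candidate's probability clears the threshold.

    `scores` maps candidate labels to raw similarity scores; these are
    normalized into probabilities, and the machine only "answers" when
    its best guess is confident enough.
    """
    total = sum(scores.values())
    probabilities = {label: s / total for label, s in scores.items()}
    best = max(probabilities, key=probabilities.get)
    if probabilities[best] >= threshold:
        return best   # "confident" enough to give an answer
    return None       # not yet: keep watching the drawing

# Early in the drawing, the shapes are ambiguous...
print(classify({"sea turtle": 5, "tortoise": 4, "frisbee": 3}))   # None
# ...but a few more strokes shift the probabilities decisively.
print(classify({"sea turtle": 17, "tortoise": 2, "frisbee": 1}))  # sea turtle
```

The threshold is the interesting dial: raise it and the machine answers less often but errs less; lower it and you get faster, wronger guesses. That trade-off is exactly what the next few paragraphs are about.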

But is it infallible? That is, is it perfect every single time? Much like you (the artist) are not, so also with the machine. It is also not perfect. Errors will be made, but so long as they are not made very often, and so long as they aren’t major bloopers, we can live with the trade-off. That is, we give up control over decision making, and we gain the ability to analyze and reach conclusions about volumes of data that we cannot handle.

But what, exactly, does “very often” mean in the previous paragraph? One error in ten? One in a million? One in an impossibly-long-word-that-ends-in-illion? Who gets to decide, and on what basis?

What does the phrase “major blooper” mean in that same paragraph? What if a machine places you on the scene of a crime on the basis of security camera footage when you were in fact not there? What if that fact is used to convict you of a crime? If this major blooper occurs once in every impossibly-long-word-that-ends-in-illion times, is that ok? Is that an acceptable trade-off? Who gets to decide, and on what basis?
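The stakes in these questions become concrete with a little arithmetic. Here is a back-of-the-envelope Bayes’ rule sketch – all the numbers are invented for illustration – showing that even a camera that errs only once in a million can produce mostly-wrong matches once it scans enough innocent people:

```python
def posterior_guilt(false_positive_rate, true_positive_rate, population, true_matches=1):
    """P(the person really was there | the camera says they were), via Bayes' rule."""
    prior = true_matches / population            # chance a random scanned person was there
    p_match = (true_positive_rate * prior
               + false_positive_rate * (1 - prior))  # total probability of a "match"
    return true_positive_rate * prior / p_match

# One genuinely present person among ten million scanned faces, a camera
# that never misses (true positive rate = 1.0) and that errs only once
# in a million scans:
p = posterior_guilt(false_positive_rate=1e-6, true_positive_rate=1.0,
                    population=10_000_000)
print(round(p, 3))  # roughly 0.091
```

With those (hypothetical) numbers, a “match” implies only about a 9% chance that the person was actually there – which is exactly why the “who gets to decide, and on what basis” question matters so much.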


If you are a lawyer with a client who finds themselves in such a situation, how do you argue this case? If you are a judge listening to the arguments being made by this lawyer, how do you judge the merits of this case? If you are a legislator framing the laws that will help the judge arrive at a decision, how do you decide on the acceptable level of probabilities?

It needn’t be something as dramatic as a crime, of course. It could be a company deciding to downgrade your credit score, or a company that decides to shut off access to your own email, or a bank that decides that you are not qualified to get a loan, or any other situation that you could come up with yourself. Each of these decisions, and so many more besides, are being made by machines today, on the basis of probabilities.

Should members of the legal fraternity know the nuts and bolts of these models, and should we expect them to be experts in neural networks and the like? No, obviously not.

But should members of the legal fraternity know the principles of statistics, and have an understanding of the processes by which a probabilistic assessment is being made? I would argue that this should very much be the case.

But at the moment, to the best of my knowledge, this is not happening. Lawyers are not trained in statistics. I do not mean to pick on any one college or university in particular, and I am not reaching a conclusion on the basis of just one data point. A look at other universities’ websites, and conversations with friends and family who are practicing lawyers or are currently studying law, yield the same result. (If you know of a law school that does teach statistics, please do let me know. I would be very grateful.)


But because of whatever little I know about the field of statistics, and for the reasons I have outlined above, I argue that statistics should be taught to the students of law. It should be a part of the syllabus of law schools in this country, and the sooner this happens, the better it will be for us as a society.

Why, exactly, might mandatory offline attendance be better?

I’d ended yesterday’s post by asking two questions: why is mandatory offline attendance in classrooms a good thing, and why are offline examinations better than online ones. I’ll try and list out arguments for mandatory offline attendance in today’s post.

A quick note before we begin: I don’t think mandatory offline attendance is better. I think a hybrid system is here to stay, no matter how reluctant universities and colleges are about it. But it is precisely for this reason that I want to write out this post – I want to force myself to “write for the other side”. Doing so helps me understand that point of view better, and two things are likely to happen as a consequence. First, I can sharpen my own arguments by understanding theirs better. Second, maybe I’ll end up modifying my views.

  • Conversations are much more likely to take place in a classroom than in an online setting. Being physically present in a classroom along with others and with the professor dramatically increases the chance that a conversation is initiated and sustained. I can personally attest to this, and I am fairly confident that most people involved in academia (students and teachers) will do so as well. To the extent that you think conversations about whatever is being taught are a good thing (and I most certainly do), offline classes are definitively better.
  • Peer pressure to attend a class, and to listen once you are in class, is much higher in an offline setting.
  • A classroom is conducive to learning. Your bedroom or living room, no matter how comfortable, is not. To the extent that you think priming is a real phenomenon with tangible, measurable outcomes, offline classes are likely to be better.
  • There are positive externalities (spillovers) to attending offline classes. Serendipitous conversations in corridors with people from other classes or professors, being able to walk into a professor’s office for a chat after class, the continuation of discussions of what happened in class over a cup of chai at the canteen are all much much more likely after having attended an offline class.
  • The over-the-shoulder effect tends to be underrated by folks in favor of online classes. A student peering over your shoulder at your work can in a glance offer a quick correction or tip, and it is still much easier for a professor to walk through a physical classroom to take in the level of understanding of the students. VR, AR and metaverses may well be on their way, but we aren’t quite there just yet.
  • There is a performative aspect to offline classes that is all but impossible to recreate online. Watching a physics professor teach about pendulums by climbing onto one requires a physically present, and obviously involved audience. It will not have the same impact if conducted online. And my hunch is that the class is likely to be recalled much more effectively if you were physically present in class.
  • Retention based on visual cues works better than most other memory techniques, and visual cues are much more likely in a social setting than the cozy comfort of your home. See this as an example of what I am trying to get at (and please don’t hesitate to correct me if I’m wrong!)
  • What else?

Is Online Education Transitory?

Students are finally making their way back into colleges across the country. Omicron, and whatever variant follows next, will make the road bumpy, and there remains a significant chance that there will be some U-turns along the way. But we’re finally limping back towards something approaching normalcy. Or so one hopes.

But the transition isn’t smooth, and cultural adjustments are going to be tricky. What sort of cultural adjustments? Here goes:

  • Lockdowns and restrictions have been in place long enough for a culture of online learning to have emerged. In the context of this blog post, I define the word culture to mean social behaviors and norms that have emerged among students during the past eighteen (or so) months. There is more to culture than that, I am well aware, but it is this specific aspect of the word that I am focusing on.
  • Students across India have gotten used to the following aspects of this culture:
    • Listening to a lecture that is being delivered need not be a community based event. You can listen to a lecture alone, anywhere, as opposed to along with your classmates in a classroom.
    • Listening to a lecture need not be a synchronous event. That is, you don’t need to listen when the professor is speaking. One can listen later, as per one’s own convenience.
    • Listening to a lecture need not be a 1x event. Amit Varma’s point about being able to listen to somebody else speaking at even 3x applies to lectures as much as it does to podcasts. Students who find a particular professor boring may argue that the point applies with greater force to lectures than it does to podcasts!
    • Students feel much more comfortable calling out online examinations for the farce that they are. And let me be clear about this: online examinations are a farce. If you are a part of any university’s administration in this country, I urge you to speak to students, their parents, and recruiters about this issue. I repeat, online examinations are a farce. This is important, and it needs to be called out. We’re very much in Emperor’s New Clothes territory in this regard, and that is where the cultural aspect comes in.
  • At the moment, most colleges (if not all) are not making classroom attendance mandatory, at least for the students. Students may be on campus, but not necessarily in the classroom. Most students I have spoken to (in a completely unscientific fashion, I should add, so this is strictly anecdotal) think this to be the best of all worlds. They are not at home, they are with friends, and they are not in a classroom. It doesn’t get better than this, as far as they are concerned.

So now, assuming you find yourself in even limited agreement with what I have written above, think about the scenario I am about to outline. Imagine that you are a university administrator with the power to mandate offline attendance in classrooms and offline examinations for your students. And at some date in the foreseeable future, you decree that this must happen.

And some students come along and ask an entirely reasonable (to them, at any rate) question: why?

Why are offline attendance and offline examinations better than what we have right now?

What would your answers be?

On Decentralization

Andrew Batson has a nice post out about an essay in the Palladium magazine. The theme of both the essay and the blog post is decentralization in China.

Dylan Levi King has a nice essay out in Palladium on the history of decentralization in China, opening with the assertion that “the most significant reform carried out in China after 1978 was one of systematic decentralization.” It is difficult to disagree with this. As the best China scholarship of the last few decades has made clear, local initiative played a central role in the country’s growth miracle–see for instance Jean Oi’s book on local state corporatism, or Xu Chenggang’s classic article on “regionally decentralized authoritarianism”.

https://andrewbatson.com/2021/11/29/the-consensus-on-centralization/

The essay is a reflection on how decentralization has evolved (and retreated) under the various leaders who have been in charge of the central Chinese government, beginning with Mao, and ending with Xi Jinping. As always, please read the whole thing.


The essay makes the rather unsurprising point that under Xi’s leadership, China is becoming ever more centralized. But the interesting (if not entirely surprising) nugget is that the attempt to increase the degree of centralization began about thirty years ago – Xi is the first leader since then who’s been very successful at it.

Well, so far, at any rate. See this thread, for example:

But the essay helps us think about a question which should be of interest to a student of economics: what is the appropriate level of decentralization? I mean this to be a one-size-fits-all question: for any organization, institution or level of governance, how should we think about the appropriate level of decentralization?

Think about the answer to this question in regard to your own college/school, for example. Who do you need to approach for permission in order to hold an event in your college? Does any prof have the ability to give permission, or are they likely to pass your question up to the head of the department? What about the head of the department? Are they likely to take the decision, or will they pass the question up to the principal or the director? In other words, how much decision-making authority is vested in the lower levels of hierarchy? And how much decision-making authority should be vested in the lower levels of hierarchy?


It is a question with far reaching implications: a centrally driven decision making system retains all the power at the centre, and everybody knows who to go to for getting approval. On the other hand, this is likely to make the system rather inflexible, with very little decision-making authority at lower levels.

Here’s a very simple example: let’s say you’re fifteen minutes late while checking out of a hotel. Should you be charged a fine or not? Should this be up to the clerk who is helping you check out, or should the clerk just blindly follow the “rule” with zero decision-making authority? If you (the guest) then kick up a ruckus, should the clerk call their superior? Should the superior call their superior? And on and on…

Management consultants agonize over this, as do politicians and bureaucrats. But so do government officials, professors in universities and even parents! The appropriate level of decentralization is an important question in literally any organization!


So how do we go about building a model in our heads to think about this issue?

Here’s one way to think about it:

Let’s assume that we’re seeking to optimize for the long term growth and stability of the organization in question. That is, to me, an entirely reasonable assumption. Concretely, the management consultant in charge of instituting check-out processes in the hotel is charged with creating a process that will optimize for the long term growth and stability of the hotel chain.

Should the management consultant vest, then, the clerk with the power to waive off the late fee? Under what circumstances? To what extent? With what amount of leeway given for mitigating circumstances? Maybe the clerk can waive off the late fees only for a certain number of times per month? Can HR track which clerks waive off fees the least across the year, and decide bonuses accordingly? Or should clerks be rewarded for building out customer loyalty by waiving off late fees by default for a period of up to an hour beyond the checkout time?

What about re-evaluation requests for semester-end examinations? What about disciplinary committees for deciding upon the punishment for low attendance? The decision to sell land in order to meet revenue requirements by local governments? As you can see, once you start to think of hierarchies and organizations, this can get very complicated very quickly.


And within the field of economics (at least for a specific context), the Oates Theorem is a good starting point to think about this analytically:

Many years ago in Fiscal Federalism (1972), I formalized this idea in a proposition I referred to as “The Decentralization Theorem.” The basic point is that if there are no cost advantages (economies of scale) associated with centralized provision, then a decentralized pattern of public outputs reflecting differences in tastes across jurisdictions will be welfare enhancing as compared to a centralized outcome characterized by a uniform level of output across all jurisdictions.

Oates, Wallace E. “On the evolution of fiscal federalism: Theory and institutions.” National Tax Journal 61.2 (2008): 313-334.

In English, what this means is that so long as centralized provisioning doesn’t have any “bulk” benefits, lower levels of hierarchy will always know more about “local” tastes and preferences, and therefore decision making ought to be as decentralized as possible.

Put another way, a one-size fits all rule won’t be as optimal for the hotel chain as letting the clerk in question decide on a case-by-case basis.
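The hotel example can be turned into a toy numerical version of the theorem. All the numbers below are invented for illustration: two jurisdictions with different preferred levels of a public service, a quadratic welfare loss for deviating from the preferred level, and – as the theorem requires – no economies of scale:

```python
# Hypothetical preferred output levels of a public service in two jurisdictions.
preferred = {"district_a": 30, "district_b": 70}

def welfare_loss(provided, preferred_level):
    """Quadratic loss from deviating from the locally preferred level."""
    return (provided - preferred_level) ** 2

# Centralized provision: one uniform level for everybody (here, the average: 50).
uniform = sum(preferred.values()) / len(preferred)
central_loss = sum(welfare_loss(uniform, p) for p in preferred.values())

# Decentralized provision: each jurisdiction picks its own preferred level.
decentral_loss = sum(welfare_loss(p, p) for p in preferred.values())

print(central_loss, decentral_loss)  # 800.0 0
```

Decentralized provision incurs zero loss because each jurisdiction gets exactly what it prefers, while the uniform centralized level splits the difference and makes both worse off. Add a large enough economy of scale to centralized provision, and the comparison can flip – which is precisely the condition in Oates’ statement.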


So as a thumb rule, the more one decentralizes, the better. Alas, decentralizing decision-making also has the knock-on effect of decentralizing power, and that tends not to sit well with those who, well, have power.

And so while effective decentralization has economic benefits, it also has political consequences. Which is why it makes sense to ask what one is optimizing for. And occasionally, it behooves all of us to ask what one should be optimizing for.

The answers are often wildly different, and more’s the pity.