If you’ve been reading the posts of the past few days, I invite you to watch this video carefully, and ask yourselves which parts of it you agree with, which parts you disagree with, and why.
Month: June 2023
A Twitter Thread on the background to the T-Distribution
Tusks, Slavery and Economics
MRU.org is just a magical website if you are a student of economics, and one of my favorite videos on it is the one below. It’s only four minutes long, please do watch it if you haven’t seen it already:
The noblest of intentions, you’ll agree – but one of the most important lessons of economics is that the principles of economics really and truly matter. And in this case, the noblest of intentions had one of the most tragic outcomes possible.
That’s slavery. Now let’s talk about elephant tusks. Specifically, burnt and powdered elephant tusks:
If you visit Nairobi National Park, you will see rhinos, hippos, and giraffes, all within sight of the city skyline. You also will see an organized site showing several large mounds of burnt and powdered elephant tusks. They are a tribute to the elephant, and along with the accompanying signs, a condemnation of elephant poaching.
https://marginalrevolution.com/marginalrevolution/2023/06/elephant-tusks-incentives-and-the-sacred.html
Starting in 1989, the government had confiscated a large number of tusks from the poachers, and as part of their anti-poaching campaign they burnt those tusks and placed the burnt ashes on display in the form of mounds. There are also several signs telling visitors that it is forbidden to take the ashes from the site. There have since been subsequent organized tusk burns.
In essence, the government is trying to communicate the notion that the elephant tusks are sacred, and should not be regarded as material for either commerce or poaching or for that matter souvenir collecting. “We will even destroy this, rather than let you trade it.”
If you have seen (or are already familiar with) the video, how might you make use of your knowledge to think about this problem? Will burning these tusks make the situation better, or worse? Tyler answers the obvious question in his post:
“The economist of course is tempted to look beneath the surface of such a policy. If the government destroys a large number of elephant tusks, the price of tusks on the black market might go up. The higher tusk price could in turn motivate yet more poaching and tusk trading, thus countermanding the original intent of the policy.”
Why does he say that the “higher tusk price could in turn motivate yet more poaching”? Why does he not say it will motivate more poaching? Well, he’d have to know the elasticity of the supply curve – how strongly poachers respond to a higher price – to make a definitive statement one way or the other.
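If “elasticity of the supply curve” feels abstract, here is a toy calculation in Python. Every number in it is made up for illustration – neither the price increase nor the elasticities come from any study of ivory markets:

```python
# Elasticity of supply ~= % change in quantity supplied / % change in price.
# So, for a given price rise, extra poaching ~= elasticity * price rise.
price_increase_pct = 20  # suppose burning stockpiles pushes the black-market price up 20%

for elasticity in [0.1, 0.5, 1.5]:  # hypothetical elasticities of poaching "supply"
    extra_poaching_pct = elasticity * price_increase_pct
    print(f"If elasticity is {elasticity}: roughly {extra_poaching_pct:.0f}% more poaching")
```

If poachers barely respond to price (an elasticity close to zero), the bonfire does little harm; if they respond strongly, the noblest of intentions backfires. Which is why Tyler says “could”, and not “will”.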
Read the rest of his post for some typically delightful Tylerian takeaways, including an academic paper called “Elephants“.
But whether it is the poaching of elephants or the slave trade, there is a deeper question at play here which Tyler alludes to towards the end of his post. But before we get there, a little anecdote which I once read in a cookbook. I’ve forgotten which cookbook (of course!), but the idea was that while in the process of cooking dinner for everyone, the author would simply fry some onions and garlic to start with. The aroma of these ingredients being fried would let everybody know that dinner was Being Prepared.
That is, Something Was Happening, And That’s Good Enough For Now.
Why do I bring this up now? Because in some cases, under some time horizons, and for some areas of optimization, a non-optimal response from an economic theory perspective may actually be… optimal.
Should vaccines be free or not? Should healthcare be free or not? Should education be free or not? If this raises your hackles, go with these: should tusks be burnt or not? Should slaves in a slave market be purchased and then set free or not?
Well, for the first set of questions, ask these additional questions: should it be free or not, and for whom? For how long? Are you optimizing for more people staying alive and healthy, or are you optimizing for the fiscal health of the government? What if the long-term fiscal health of the government allows you to save more lives in the future? What if giving vaccines away for free allows you to save lives that are here and present now? Will the government be able to run such programmes efficiently? Is it worth running these programmes even after knowing that governments can’t run them efficiently? Why? Why not?
Or as Tyler puts it:
Many non-economists think only in terms of the sacred and the symbolic goods in human society. They ignore incentives. Furthermore, our politics and religious sects encourage such modes of evaluation.
https://marginalrevolution.com/marginalrevolution/2023/06/elephant-tusks-incentives-and-the-sacred.html
Many economists think only in terms of incentives, and they do not have a good sense of how to integrate symbolic goods into their analysis. They often come up with policy proposals that either offend people or simply fall flat.
Wisdom in balancing these two perspectives, he ends his post, is often at the heart of good social science (and, I would add, therefore at the heart of good policymaking). Or, as I like to put it, the truth always lies somewhere in the middle.
Economists would be better off if they didn’t use only economic analysis all the time. Non-economists would be better off if they used economic analysis some of the time. The trick lies in knowing when to stop in the first instance, and when to start in the second instance.
If only we had definitive answers to both, life would be so much easier.
But then again, I would then have had no reason to write here on EFE either.
So it goes!
Let’s Brew Some Beer
Back when I used to work at the Gokhale Institute, I would get a recurring request every year without fail. What request, you ask? To get AB InBev to come on campus. To the guys at AB InBev – if you’re reading this, please do consider going to GIPE for placements. The students are thirsty to, er, learn.
But what might their work at AB InBev look like?
I don’t know for sure, but it probably will not involve working with barley and hops. It should, though, if you ask me. Today, building statistical models about other aspects of selling beer might rake in the moolah, but there would be a pleasing historical symmetry in using stats to actually brew the stuff.
You see, you can’t – just can’t – make beer without barley and hops. And to make beer, these two things should have a number of desirable characteristics. Barley should have optimum moisture content, for example. It should have high germination quality. It needs to have an optimum level of proteins. And so on. Hops, on the other hand, should have beefed up on their alpha acids. They should be brimming with aroma and flavor compounds. There’s a world waiting to be discovered if you want to be a home-brewer, and feel free to call me over for extended testing once you have a batch ready. I’ll work for free!
But in a beer brewing company, it’s a different story. There, given the scale of production, one has to check for these characteristics. And many, many years ago – a little more than a century ago, in fact – there was a guy who was working at a beer manufacturing enterprise. And this particular gentleman wanted to test these characteristics of barley and hops.
So what would this gentleman do? He would walk along the shop-floor of the firm he worked in, and take some samples from the barley and hops that was going to be used in the production of beer. Beer aficionados who happen to be reading this blog might be interested to know the name of the firm bhaisaab worked at. Guinness – maybe you’ve heard of it?
So, anyway, off he’d go and test his samples. And if the results of the testing were encouraging, bhaisaab would give the go-ahead, and many a pint of Guinness would be produced. Truly noble and critical work, as you may well agree.
But Gosset – for that was his name, this hero of our tale – had a problem. You see, he could never be sure if the tests he was running were giving trustworthy results. And why not? Well, because he had to make accurate statements about the entire batch of barley (and hops) on the basis of rather small samples. To make a truly accurate statement about the entire batch, he would have liked to take much larger samples.
Imagine you’re at Lasalgaon, for example, and you’ve been tasked with making sure that an entire consignment of onions is good enough to be sold. How many sacks should you open? The more sacks you open, the surer you are. But on the other hand, the more sacks you open, the lesser the amount left to be sold (assume that once a sack is open, it can’t be sold. No, I know that’s not how it works. Play along for the moment.)

So how many sacks should you open? Well, unless your boss happens to be a lover of statistics and statistical theory for its own sake, the answer is as few as possible.
The problem is that you’re trying to then reach a conclusion about a large population by studying a small sample. And you don’t need high falutin’ statistical theory to realize that this will not work out well.
Your sample might be the Best Thing Ever, but does that mean that you should conclude that the entire population of barley and hops is also The Best Thing Ever? Or, on the other hand, imagine that you have a strictly so-so sample. Does that mean that the entire batch should be thought of as so-so? How to tell for sure?
Worse, the statistical tools available to Gosset back then weren’t good enough to help him with this problem. The tools back then would give you a fairly precise estimate for the population, sure – but only if you took a large enough sample in the first place. And every time Gosset went to obtain a large enough sample, he met an increasingly irate superior who told him to make do with what he already had.
Or that is how I like to imagine it, at any rate.
So what to do?
Well, what our friend Gosset did is that he came up with a whole new way to solve the problem. I need, he reasoned, reasonably accurate estimates for the population. Plus, khadoos manager says no large samples.
Ergo, he said, we need a new method. Let’s come up with a whole new distribution that lets us talk usefully about population estimates while studying small samples. Let’s have this distribution be a little flatter around the centre, and a little fatter at the tails. That way, I can account for the greater uncertainty that comes with a smaller sample.
And if my manager wants to be a little less khadoos, and he’s ok with me taking a larger sample, well, I’ll make my distribution a little taller around the center, and a little thinner around the tails. A large enough sample, and hell, I don’t even need my new method.

And that, my friends, is how the t-distribution came to be.
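If you want to see Gosset’s idea for yourself, here is a minimal sketch in Python, using scipy. The sample sizes are my own made-up examples; the point is only that the t-distribution has fatter tails than the normal for small samples, and melts into the normal as the sample grows:

```python
from scipy import stats

# How much probability density sits far out in the tail (at x = 3)?
# Small samples => few degrees of freedom => fatter tails.
for df in [4, 29, 299]:  # roughly: samples of size 5, 30 and 300
    print(f"t with df={df}: density at x=3 is {stats.t.pdf(3, df):.4f}")

print(f"standard normal: density at x=3 is {stats.norm.pdf(3):.4f}")
```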
You need to know who Gosset was, and why he did what he did, for us to work towards understanding how to resolve Precision and Oomph. But it’s going to be a grand ol’ detour, and we must meet a gentleman, a lady, and many cups of tea before we proceed.
Oomph Vs Precision
Let’s assume that you get a call from a relative. Said relative has enjoyed packing away the carbs for years on end, and is built along, shall we say, fairly generous proportions. But also – and this is the good news – said relative would now like to shed some kilos.
You, trained in statistics and fitness, have been enlisted as an important team member of Team Let’s Lose Some Weight.
And so off you go to do your research online, and after barely half an hour of ferreting around on Google, you come back and announce that there are two pills that will get the job done. While both have some unfortunate and unavoidable side-effects, both are also guaranteed to help you lose weight. You don’t need to do anything else, you tell your relative. No exercise, no diet, no lemon water in the morning, none of that jazz. Pop the pills, and you’re guaranteed to lose weight.
But, you go on to say, there’s also bad news.
What’s the bad news, asks the relative.
Well, you say. The first pill – Oomph is its name – it will help you lose nine kgs on average in a month.
Nine kgs, goes the old relative. And you call that bad news?!
Hang on, you say. Not so fast. Yes, nine kgs, but focus on the next two words, no? “On average”.
And what does that mean, asks the old relative testily.
Well, you say, always glad to show off your knowledge of statistics, it means you could lose anywhere between 4.5 kgs and 13.5 kgs. On average you will lose about nine kgs. But could be more, could be less.
Almost never, you say, will it be less than 4.5 kgs. And almost never, you say, will it be more than 13.5 kgs. But somewhere within that range, you go on reassuringly, you will lose the kgs.
Ah, goes the old relative. Like that. OK, then. And what about the second pill?
Ah, the second pill, you say. Well, in the case of the second pill, you will lose only 2.25 kgs.
Pshaw, snorts the old relative. Peanuts compared to the first one, no?
Well, yes, true, you concede. But on the other hand, on average it will be somewhere between 2.05 kgs and 2.45 kgs.
Ah, says the old relative. So what you mean to say is that the first pill gives great results, but more uncertainty. And the second pill gives not so great results, but less uncertainty.
Couldn’t have put it better myself, you say. Well done.
Well, that leaves only one question then, doesn’t it?
As an expert in statistics, which one do you recommend?
How I wish I could take credit for this example, since it is such a great question to think about. It is, alas, not my own. It is taken from a lovely book, which happens to be the subject of both this post, and quite a few others in this week. The name of the book is “The Cult of Statistical Significance“, by Stephen T. Ziliak and Deirdre N. McCloskey.
And let’s leave aside for the moment the question of what the book is about – although we’ll get to it, don’t you worry. But for the moment, please do think about the answer to the question: which one do you recommend?
Remember, one helps you lose somewhere between 4.5 to 13.5 kgs, while the other helps you lose somewhere between 2.05 to 2.45 kgs. Do you pick the pill with the greater uncertainty (but more weight loss), or do you pick the pill with the lesser uncertainty (but lesser weight loss)?
This is, of course, a topic that has been discussed before on this blog. We’re talking, all statisticians will tell you, about the signal to noise ratio:
A clear signal with insane amounts of noise ain’t a good thing, and an unclear signal with next to no noise is also not a good thing.
https://econforeverybody.com/2021/05/18/on-signals-and-noise/
So the first pill – Oomph is its name – has a clear signal (9 kgs!), but also insane amounts of noise (whaddya mean, somewhere between 4.5 kgs to 13.5 kgs?!).
And the second pill – Precision is its name – has an unclear signal (only 2.25 kgs?!), but also next to no noise (wow, plus or minus 200 grams only!).
So we have a situation where we must choose between a not-so-good thing and another not-so-good thing. So what do we choose? What should we choose?
So here’s the thing.
If you’ve been trained in statistics (as I have), you should be saying, choose the pill with the higher signal to noise ratio.
Stop right here, if you’ve been trained in statistics, and tell me if you agree or disagree with me. If you disagree with me, please tell me why. Has to be the signal-to-noise ratio, no?
Right, so let’s go ahead and calculate the signal to noise ratio. Here’s the formula we will use:
Signal to noise ratio = (observed effect − hypothesized null effect) ÷ variation
The hypothesized null effect is, in both cases, zero. In the first case, the observed effect (on average) is 9, and the variation is 4.5. In the second, the observed effect (on average) is 2.25, and the variation is 0.2.
So: (9 − 0)/4.5 = 2, and (2.25 − 0)/0.2 = 11.25.
Giving us, effectively, a signal to noise ratio of 2 and roughly 10, respectively.
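Here is the same arithmetic written out as a tiny Python sketch, just to make the formula concrete. The numbers are the ones from the pill example above, and “variation” is simply the half-width of each pill’s range:

```python
def signal_to_noise(observed_effect, hypothesized_effect, variation):
    # (observed effect - hypothesized null effect) / variation
    return (observed_effect - hypothesized_effect) / variation

oomph = signal_to_noise(9.0, 0.0, 4.5)       # pill 1: 9 kg on average, +/- 4.5 kg
precision = signal_to_noise(2.25, 0.0, 0.2)  # pill 2: 2.25 kg on average, +/- 0.2 kg

print(oomph, precision)  # 2.0 and 11.25 -- call it roughly 10
```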
Well, you confidently tell your old relative, I’ve run the tests and done the analysis. And my conclusion is that you should take the second pill.
Precision, you mean?
And why is that?
You sigh. Deeply. Explaining statistics to laypeople is such a chore, but someone has got to do it.
Because, you say in your best professorial tone, the signal to noise ratio is the highest in the case of the second pill. Not just higher, you say patiently – five times higher. 10 compared to 2! It’s not even close.
But this old relative of yours is nothing if not curmudgeonly and commonsensical.
So you mean to tell me, goes the o.r., that with Precision, the one that you’re recommending…
… my best case scenario is that I will lose 2.45 kgs.
But, in the case of Oomph, the one that you’re not recommending…
… my worst case scenario is that I will lose 4.5 kgs.
Have I got that right?
For an uncomfortably long period of time, there is a strained silence. And then in a small voice, you say that you will get back to the old relative, and off you go to learn more about where you went wrong while learning statistics for all these years.
So where did you go wrong?
There’s good news, and there’s bad news.
The good news is that you didn’t go wrong. You learnt correctly.
The bad news? Statistics itself took a wrong turn, and hasn’t corrected itself since.
How? We’ll find out soon enough, stay tuned.
Halfer. I mean, c’mon! Obviously a halfer.
Shrug 2
The Gift That Keeps on Giving: The p-value
Naman Mishra, a friend and a junior from the Gokhale Institute, was kind enough to read and comment on my post about Abhinav Bindra and the p-value. Even better, he had a little “gift” for me – a post written by somebody else about the p-value:
P values are the probability of observing a sample statistic that is at least as different from the null hypothesis as your sample statistic when you assume that the null hypothesis is true. That’s a pretty convoluted but technically correct definition—and I’ll come back to it later on!
https://statisticsbyjim.com/hypothesis-testing/p-values-misinterpreted/
It is convoluted, of course, but that’s not a criticism of the author. It is, instead, an acknowledgement of how difficult this concept is.
So difficult, in fact, that even statisticians have trouble explaining the concept. (Not, I should be clear, understanding it. Explaining it, and there’s a world of difference).
Well, you have my explanation up there in the Abhinav Bindra post, and hopefully it works for you. But here is the problem with the p-value – not in terms of how difficult the concept is, but rather in terms of its limitations:
We want to know if results are right, but a p-value doesn’t measure that. It can’t tell you the magnitude of an effect, the strength of the evidence or the probability that the finding was the result of chance.
https://fivethirtyeight.com/features/not-even-scientists-can-easily-explain-p-values/
In other words, the p-value is not the probability of rejecting the null when it is true. And here’s where it gets really complicated. I myself have told people in classes that the lower the p-value, the safer you should feel in rejecting the null hypothesis! And that’s not incorrect, and it’s not wrong… but well, it ain’t right either.
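Before we wade further in, here is a small simulation sketch of the definition quoted above. It is my own toy example, not Jim Frost’s: the null hypothesis is that a coin is fair, and we happen to have observed 60 heads in 100 tosses.

```python
import numpy as np

rng = np.random.default_rng(42)
n_tosses, observed_heads = 100, 60

# Simulate 100,000 worlds in which the null ("the coin is fair") is true,
# and ask: how often is the result at least as extreme as what we observed?
heads = rng.binomial(n=n_tosses, p=0.5, size=100_000)
p_value = (heads >= observed_heads).mean()  # one-sided version, for simplicity

print(f"Simulated one-sided p-value: {p_value:.3f}")
# Note what this number is NOT: it is not the probability that the null is true.
```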
Consider these two paragraphs, each from the same blogpost:

But also, there’s this, from earlier on in the same blogpost:

“This.”, you can practically hear generation after generation of statistics students say with righteous anger. “This is why statistics makes no sense.”
“Boss, which is it? Can p-values help you reject the null hypothesis, or not?”
Fair question.
Here’s the answer: no.
P-values cannot help you reject the null hypothesis.
…
…
…
You knew there was a “but”, didn’t you? You knew it was coming, didn’t you? Well, congratulations, you’re right. Here goes.
But they’re used to reject the null anyway.
Why, you ask?
Well, because of four people. And because of beer and tea. And other odds and ends, and what a story it is.
And so we’ll talk about beer, and tea and other odds and ends over the days to come.
But as with all good things, let’s begin with the beer. And with the t*!
*I’ve wanted to crack a stats based dad joke forever. Yay.
Understanding Statistical Inference
If you were to watch a cooking show, you would likely be impressed with the amount of jargon that is thrown about. Participants in the show and the hosts will talk about their mise-en-place, they’ll talk about julienned vegetables, they’ll talk about a creme anglaise and a hajjar other things.
But at the end of the day, it’s take ingredients, chop ’em up, cook ’em, and eat ’em. That’s cooking demystified. Don’t get me wrong, I’m not for a moment suggesting that cooking high-falutin’ dishes isn’t complicated. Nor am I suggesting that you can become a world class chef by simplifying all of what is going on.
But I am suggesting that you can understand what is going on by simplifying the language. Sure, you can’t become a world class cook, and sure you can’t acquire the skills overnight. But you can – and if you ask me, you should – understand the what, the why and the how of the processes involved. Up to you then to master ’em, adopt ’em after adapting ’em, or discard ’em. Maybe your paneer makhani won’t be quite as good as the professional’s, but at least you’ll know why not, and why it made sense to give up on some of the more fancy shmancy steps.
Can we deconstruct the process of statistical inference?
Let’s find out.
Let’s assume, for the sake of discussion, that you and your team of intrepid stats students have been hired to find out the weight of the average Bangalorean. Maybe some higher-up somewhere has heard about how 11% of India is diabetic, and they’ve decided to find out how much the average Bangalorean weighs. You might wonder if that is the best fact-finding mission to be sent on, but as Alfred pointed out all those years ago, ours not to reason why.
And so off we go to tilt at some windmills.
Does it make sense to go around with a weighing scale, measuring the weight of every single person we can find in Bangalore?
Nope, it doesn’t. Apart from the obvious fact that it would take forever, you very quickly realize that it will take literally forever. Confused? What I mean is, even if you imagine a Bangalore where the population didn’t change at all from the time you started measuring the weight to the time you finished – even in such a Bangalore, measuring everybody’s weight would take forever.
But it won’t remain static, will it – Bangalore’s population? It likely will change. Some people will pass away, and some babies will be born. Some people will leave Bangalore, and some others will shift into Bangalore.
Not only will it take forever, it will take literally forever.
And so you decide that you will not bother trying to measure everybody’s weight. You will, instead, measure the weight of only a few people. Who are these few people? Where in Bangalore do they stay, and why only these and none other? How do we choose them? Do we pick only men from South Bangalore? Or women from East Bangalore? Only rich people near MG Road? Or only basketball players near National Games Village? Only people who speak Tamil near Whitefield? Only people who have been stuck for the last thirteen months at Silk Board? The ability to answer these questions is acquired when we learn how to do sampling.
What is sampling? Here’s our good friend, ChatGPT:
“Sampling refers to the process of selecting a subset of individuals from a larger population to gather data. In the case of a survey of the sort you’re talking about, you would need to define the target population, which could be the residents of Bangalore city. From this population, you would need to employ a sampling method (e.g., random sampling, stratified sampling) to select a representative sample of individuals whose weights will be measured.”
One sample or many samples? That is, should you choose the most appropriate sampling method and collect only one humongous sample across all of Bangalore city, or many different (and somewhat smaller) samples? This is repeated sampling. Monsieur ChatGPT again:
“You might collect data from multiple samples, each consisting of a different group of individuals. These multiple samples allow you to capture the heterogeneity within the population and account for potential variations across different groups or locations within Bangalore city. By collecting data from multiple samples, you aim to obtain a more comprehensive understanding of the weight situation.”
So all right – from the comfort of your air-conditioned office, you direct your minions to sally forth into Namma Bengaluru, and collect the samples you wish to analyze. And verily do they sally forth, and quickly do they return with well organized sheets of Excel. Each sheet containing data pertaining to a different, well-chosen sample, naturally.
What do you do with all these pretty little sheets of samples? Well, you reach a conclusion. About what? About the average weight of folks in Bangalore. How? By studying the weights of the people mentioned in those sheets.
So, what you’re really doing is you’re reaching a conclusion about the population by studying the samples. This is statistical inference. Our friend again:
“Statistical inference involves drawing conclusions about the population based on the information collected from the sample. Statistical inference helps you make generalizations and draw meaningful conclusions about the larger population using sample data.”
Remember those pretty little Excel sheets that your minions brought back as gifts for you? If you so desire, you can have Excel give you the average weight for each of those samples. Turns out you do so desire, and now you have many different averages in a whole new sheet. Each average is the average of one of those samples, and let’s say you have thirty such samples. Therefore, of course, thirty such averages.
What to do with these thirty averages, you idly wonder, as you lazily swing back and forth in your comfortable chair. And you decide that you will try and see if these averages can’t be made to look like a distribution. Let’s say the first sample has an average weight of sixty five kilograms. And the second one sixty-three kilograms. And the seventh one is sixty-seven kilograms. And the twenty-first is seventy kilograms. Can we arrange these averages in little groups, lightest to the left and heaviest to the right, and draw little sticks to indicate how frequently a particular value occurs?
That’s a sampling distribution.
“A sampling distribution is the distribution of a statistic, such as the mean, calculated from multiple samples. In the context of estimating the average weight of citizens in Bangalore, you would collect weight data from multiple samples of individuals residing in the city. By calculating the mean weight in each sample, you can examine the distribution of these sample means across the multiple samples. The sampling distribution provides insights into the variability of the estimate and helps you assess the precision of your findings regarding the average weight in the population.”
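If you’d like to watch this happen in code, here is a minimal sketch in Python. The population mean, the standard deviation and the sample sizes are all numbers I have made up for illustration – nobody has actually weighed Bangalore for this post:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, true_sd = 65, 12       # pretend population: mean 65 kg, sd 12 kg
n_samples, sample_size = 30, 100  # thirty samples, a hundred people in each

# Each sample is one trip by the minions; we keep only its average weight.
sample_means = [rng.normal(true_mean, true_sd, sample_size).mean()
                for _ in range(n_samples)]

# Lightest to the left, heaviest to the right -- the sampling distribution.
print(np.round(np.sort(sample_means), 1))
```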
What’s that mean – “the sampling distribution provides insights into the variability of the estimate and helps you assess the precision of your findings regarding the average weight in the population”?
Well, think of it this way. Let’s say you have to report back what the average weight is. That is, what is the average weight, in your opinion, of all of Bangalore. Should you pick the first sample and report its value? Or the eighth sample? Or the twenty-third?
Why not, you decide in a fit of inspiration, take the average of all these averages? Whatay idea, sirjee! Because even if you assume that one sample might be a little bit off from the population mean, our father what goes? Maybe the second sample will be a little bit off, but on the other side! That is, the first sample is a little lighter than the actual value. But the second sample might well be a little heavier than the actual value! The average of the two might result in the two errors canceling each other out! And if taking the average of two averages is a good idea, why, taking the average of the thirty averages is positively brilliant.
But hang on a second, you say. Just you wait, you say, for you’re on a roll now. If we can take – and follow me closely here – the average of these averages, well then. I mean to say, why not…
… why not calculate the standard deviation of these averages as well! Not only do we get the average value, but we also get the dispersion, on average, around the average value. Ooh, you genius, you.
This latest invention of yours, it has a name. It’s called the standard error:
“The standard error is a measure of the variability or uncertainty associated with an estimate. In the case of estimating the average weight of citizens in Bangalore, the standard error would quantify the uncertainty surrounding the estimated mean weight. It is typically calculated based on the observed variability in weight within the sample(s) and the information contained in the sampling distribution of the sample means. By considering the spread of the sample means around the population mean, the standard error provides an indication of how much the estimated average weight may deviate from the true population average. A smaller standard error suggests higher precision, indicating that the estimated average weight is expected to be closer to the true population average. Conversely, a larger standard error indicates greater uncertainty and variability, implying that the estimated average weight may have a wider range of potential values and may be less precise.”
Well, not quite. The standard error is actually the standard deviation within your sample divided by the square root of the sample size – which, happily, is just about what the standard deviation of all those sample averages works out to. Ask, ask. Go ahead, ask. Here’s why:
“Imagine you have a larger sample size. With more observations, you have more information about the population, and the estimates tend to be more precise. Dividing the standard deviation by the square root of the sample size reflects this concept. It adjusts the measure of variability to match the precision associated with the sample size.”
Standard error = s ÷ √n, where s is the sample standard deviation and n is the sample size.
Larger the sample size, lower the standard error. Also known as “more data is always better”. Which is why, since time immemorial, every stats prof in the world has always responded to questions about how large your sample size should be with the following statement:
“As large as possible”.
They’re your friends, you see. They wish you to have smaller standard errors.
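Here is a small follow-on sketch, with the same made-up Bangalore numbers as before, showing why the profs say that. The spread of the sample averages – the standard error – shrinks roughly like the standard deviation divided by the square root of the sample size:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, true_sd = 65, 12  # same pretend population as before

for sample_size in [25, 100, 400]:
    means = [rng.normal(true_mean, true_sd, sample_size).mean()
             for _ in range(1_000)]
    print(f"sample size {sample_size:>3}: "
          f"spread of sample averages = {np.std(means):.2f}, "
          f"sd/sqrt(n) = {true_sd / np.sqrt(sample_size):.2f}")
```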
And so, the sampling distribution gives you the following gifts:
- An estimate of what the average weight is for the city of Bangalore
- This estimate is obtained by studying many samples, and taking the average of all of these samples. We can hope that errors in each sample are cancelled out by errors in other samples
- The calculation of the standard error of the sampling distribution tells you how much the estimated average weight varies around the population mean.
- Not only do you have a very good guess about the value, but you also have a very good guess about the error that is implicit in your guess. Buy one get one free, you might smugly tell your superior.
And that, my friends, is the process of statistical inference.
But kahaani, as you might have guessed, abhi baaki hai. We’ll get back to this in a future post.
With n-1 degrees of freedom
“Yaar matlab kyon?!” (“But dude, why?!”)
… is a sentiment expressed by every student who has slogged through an introductory course on statistics. You plow your way through mean, median and mode for the five thousandth time, you nod your head throughout the tedium that is the discussion on measures of dispersion, and you get the fact that the sample and population are different things. So far so good.
But then the professor plonks down the formula for standard deviation, and for the first time in your life (but not the last! Oh dear me no, anything but last) you see n-1 in the denominator.
And if it isn’t the class immediately after lunch, and if you are attending instead of bunking, there is a non-zero chance that you will, at worst, idly wonder about the n-1. At best, you might raise a timid hand and ask why it is n-1 rather than simply n.
Historically, students in colleges are likely to be met with one of three explanations.
One:
“That’s the formula”. This is why you’re better off bunking rather than attending these kind of classes.
Two:
“You do divide by n, but in the case of the population standard deviation. This is the formula for the sample standard deviation”. You hear this explanation, and warily look around the class for support, for the battle clearly isn’t over. You know that you should be asking a follow-up question. But that much needed support is not forthcoming. Everybody else is studiously noting something of critical importance in their notebooks. “OK, thank you, Sir”, you mumble, and decide that you’re better off bunking more often.
Three:
“Because you lose one degree of freedom, no?”, the professor says, in a manner which clearly suggests that this ought to be bleedin’ obvious, and can we please get on with it. “Ah, yes, of course”, you respond, not even bothering to check for support. And you decide the obvious, but we knew where this was going already, didn’t we?
So what is this “degrees of freedom” business?
Here’s an explanation in three parts.
First, a simple thought experiment. Pick any three numbers. Done? Cool.
Now, pick any three numbers such that they add up to ten. Done? Cool.
In the second case, how many numbers were you free to pick? If you say “three”, imagine me standing in front of you, raised eyebrow and all. “Really, three?” I would have said if I was actually there. “Let’s say the first number is 5, and the second number is 3. Are you free to pick the third number?”
And you aren’t, of course. If the first number is 5 and the second number is 3 and the three numbers you pick must add up to 10, then the third number has to be…
2. Of course.
But here’s the point. The imposition of a constraint in this little exercise means you’ve lost a degree of freedom.
Here are some examples from your day to day life:
“Leave any time you like, but make sure you get home by ten pm”. Not much of a “leave any time you like” then, is it? Congratulations, you’ve lost a degree of freedom.
“You can buy anything you want, so long as it is less than a thousand rupees”.
“You can do what you like for the rest of the evening, but only after you finish all your homework”.
The imposition of a constraint implies the loss of a degree of freedom.
Got that? That was the first part of the explanation. Now on to the second part.
So ok, you now know what a degree of freedom is. It is the answer to the question “In an n-step process, how many steps am I free to choose?”. Note that this is not a technically correct explanation, and those howls of outrage you hear are statistics professors reading this and going “Dude, wtf!”. But ignore the background noise, and let’s move on.
But why does the sample standard deviation lose a degree of freedom? Why doesn’t the population standard deviation lose a degree of freedom?
Sample standard deviation: s = √[ Σ(xᵢ − x̄)² ÷ (n − 1) ]
Population standard deviation: σ = √[ Σ(xᵢ − μ)² ÷ N ]
Today’s a good day for thought experiments, so let’s indulge in one more. Imagine that you stay in Bangalore, and that you have to take a rickshaw and then the metro to reach your workplace. It takes you about twenty minutes to find a rickshaw, sit in it, swear at Bangalore’s traffic, and reach the metro. If you’re lucky it takes only fifteen minutes, and if you’re unlucky, it takes thirty. But usually, twenty.
It takes you about forty minutes to get into the metro, get off from the metro and walk to your office. If you’re lucky, thirty five minutes, and if you’re unlucky, forty five minutes. But usually, forty.
If reaching on time is of the utmost importance, and you have to reach by ten am, when should you leave home?
Nine am would be risky, right? You’ve left yourself with zero degrees of freedom in terms of potential downsides. If either the rickshaw ride or the metro ride end up going a little bit over the usual, you’ll have a Very Angry Boss waiting for you.
But late night parties are late night parties, snooze buttons are snooze buttons, and here you are at nine am, hoping against hope that things work out fine. But alas, the rickshaw ride ends up taking twenty five minutes.
Now that the rickshaw ride has ended up taking twenty five, you have only thirty five minutes for the metro part of your journey. You used up some degrees of freedom on the first leg of the journey, and you have none left for the second.
Go look at that formula shown up above. Well, both formulas. What does x-bar stand for? Average, of course. Ah, but of the sample or the population? Any student who has ratta maaroed the formulas will tell you that this is the sample average. The population wala thingummy is called “mu”.
Ah, but now hang on. You’re saying that you want to understand what the population standard deviation looks like, and you’re going to form an idea for what it looks like by calculating the sample standard deviation. But the sample standard deviation itself depends upon your idea of what the population mean looks like. And where did you get an idea for what the population mean looks like? From the sample mean, of course!
But what if the sample mean is a little off? That is, what if the sample mean isn’t exactly like the population mean? Well, let’s keep one data point with us. If it turns out to be off, we’ll have that last data point be such that when you add it to the calculation of the sample mean, we will guarantee that the answer is eggjhactlee equal to the population mean. Hah!
Well ok, but that does then mean that you have… drumroll please… lost one degree of freedom.
In much the same way that taking more time on the rickshaw leg means you can’t take time on the metro leg…
Keeping one datapoint in hand when it comes to the mean implies that you lose that one degree of freedom when it comes to the sample standard deviation.
And that is why you have n-1 degrees of freedom.
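And if you’d like to see the n−1 earn its keep, here is a small simulation sketch in Python, with made-up population numbers once again. Divide by n and you systematically underestimate the population variance; divide by n−1 and, on average, you get it about right:

```python
import numpy as np

rng = np.random.default_rng(0)
true_sd = 12                        # so the true variance is 144
sample_size, n_trials = 5, 100_000  # lots of tiny samples

var_n, var_n_minus_1 = [], []
for _ in range(n_trials):
    sample = rng.normal(65, true_sd, sample_size)
    sq_dev = (sample - sample.mean()) ** 2  # deviations from the *sample* mean
    var_n.append(sq_dev.sum() / sample_size)
    var_n_minus_1.append(sq_dev.sum() / (sample_size - 1))

print(f"true variance:              {true_sd ** 2}")
print(f"average of divide-by-n:     {np.mean(var_n):.1f}")          # too small
print(f"average of divide-by-(n-1): {np.mean(var_n_minus_1):.1f}")  # about right
```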
I said three parts, remember? So what’s the third bit? If you think you’ve understood what I’ve said, go find someone to explain it to, and check if they get it. Only then, as <insert famous meme of your choice here> says, have you really understood it.