Understanding Statistical Inference

If you were to watch a cooking show, you would likely be impressed with the amount of jargon that is thrown about. Participants in the show and the hosts will talk about their mise en place, they’ll talk about julienned vegetables, they’ll talk about a crème anglaise and a hajjar other things.

But at the end of the day, it’s take ingredients, chop ’em up, cook ’em, and eat ’em. That’s cooking demystified. Don’t get me wrong, I’m not for a moment suggesting that cooking high-falutin’ dishes isn’t complicated. Nor am I suggesting that you can become a world class chef by simplifying all of what is going on.

But I am suggesting that you can understand what is going on by simplifying the language. Sure, you can’t become a world class cook, and sure you can’t acquire the skills overnight. But you can – and if you ask me, you should – understand the what, the why and the how of the processes involved. Up to you then to master ’em, adopt ’em after adapting ’em, or discard ’em. Maybe your paneer makhani won’t be quite as good as the professional’s, but at least you’ll know why not, and why it made sense to give up on some of the more fancy shmancy steps.


Can we deconstruct the process of statistical inference?

Let’s find out.

Let’s assume, for the sake of discussion, that you and your team of intrepid stats students have been hired to find out the weight of the average Bangalorean. Maybe some higher-up somewhere has heard about how 11% of India is diabetic, and they’ve decided to find out how much the average Bangalorean weighs. You might wonder if that is the best fact-finding mission to be sent on, but as Alfred pointed out all those years ago, ours not to reason why.

And so off we go to tilt at some windmills.

Does it make sense to go around with a weighing scale, measuring the weight of every single person we can find in Bangalore?

Nope, it doesn’t. Apart from the obvious fact that it would take forever, you very quickly realize that it will take literally forever. Confused? What I mean is this: even if you imagine a Bangalore where the population didn’t change at all from the time you started measuring to the time you finished – even in such a Bangalore, measuring everybody’s weight would take forever.

But it won’t remain static, will it – Bangalore’s population? It likely will change. Some people will pass away, and some babies will be born. Some people will leave Bangalore, and some others will shift into Bangalore.

Not only will it take forever, it will take literally forever.

And so you decide that you will not bother trying to measure everybody’s weight. You will, instead, measure the weight of only a few people. Who are these few people? Where in Bangalore do they stay, and why only these and none other? How do we choose them? Do we pick only men from South Bangalore? Or women from East Bangalore? Only rich people near MG Road? Or only basketball players near National Games Village? Only people who speak Tamil near Whitefield? Only people who have been stuck for the last thirteen months at Silk Board? The ability to answer these questions is acquired when we learn how to do sampling.

What is sampling? Here’s our good friend, ChatGPT:

“Sampling refers to the process of selecting a subset of individuals from a larger population to gather data. In the case of a survey of the sort you’re talking about, you would need to define the target population, which could be the residents of Bangalore city. From this population, you would need to employ a sampling method (e.g., random sampling, stratified sampling) to select a representative sample of individuals whose weights will be measured.”

One sample or many samples? That is, should you choose the most appropriate sampling method and collect only one humongous sample across all of Bangalore city, or many different (and somewhat smaller) samples? This is repeated sampling. Monsieur ChatGPT again:

“You might collect data from multiple samples, each consisting of a different group of individuals. These multiple samples allow you to capture the heterogeneity within the population and account for potential variations across different groups or locations within Bangalore city. By collecting data from multiple samples, you aim to obtain a more comprehensive understanding of the weight situation.”
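If you’d like to see all of this in code, here’s a minimal sketch in Python. Everything about it is made up for illustration: we invent a pretend population of a million Bangaloreans with a true average weight of 68 kg, and then draw thirty simple random samples of 200 people each.

```python
import numpy as np

# A pretend "population" of Bangalore weights, in kg. The true mean
# (68) and spread (12) are made-up numbers, purely for illustration.
rng = np.random.default_rng(seed=42)
population = rng.normal(loc=68, scale=12, size=1_000_000)

# Simple random sampling: thirty separate samples, 200 people each.
n_samples, sample_size = 30, 200
samples = [rng.choice(population, size=sample_size, replace=False)
           for _ in range(n_samples)]

print(f"True population mean: {population.mean():.2f} kg")
print(f"First sample's mean:  {samples[0].mean():.2f} kg")
```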

So all right – from the comfort of your air-conditioned office, you direct your minions to sally forth into Namma Bengaluru, and collect the samples you wish to analyze. And verily do they sally forth, and quickly do they return with well-organized sheets of Excel. Each sheet contains data pertaining to a different, well-chosen sample, naturally.

What do you do with all these pretty little sheets of samples? Well, you reach a conclusion. About what? About the average weight of folks in Bangalore. How? By studying the weights of the people mentioned in those sheets.

So, what you’re really doing is you’re reaching a conclusion about the population by studying the samples. This is statistical inference. Our friend again:

“Statistical inference involves drawing conclusions about the population based on the information collected from the sample. Statistical inference helps you make generalizations and draw meaningful conclusions about the larger population using sample data.”

Remember those pretty little Excel sheets that your minions brought back as gifts for you? If you so desire, you can have Excel give you the average weight for each of those samples. Turns out you do so desire, and now you have many different averages in a whole new sheet. Each average is the average of one of those samples, and let’s say you have thirty such samples. Therefore, of course, thirty such averages.

What to do with these thirty averages, you idly wonder, as you lazily swing back and forth in your comfortable chair. And you decide that you will try and see if these averages can’t be made to look like a distribution. Let’s say the first sample has an average weight of sixty-five kilograms. And the second, sixty-three kilograms. And the seventh, sixty-seven. And the twenty-first, seventy. Can we arrange these averages in little groups, lightest to the left and heaviest to the right, and draw little sticks to indicate how frequently a particular value occurs?

That’s a sampling distribution.

“A sampling distribution is the distribution of a statistic, such as the mean, calculated from multiple samples. In the context of estimating the average weight of citizens in Bangalore, you would collect weight data from multiple samples of individuals residing in the city. By calculating the mean weight in each sample, you can examine the distribution of these sample means across the multiple samples. The sampling distribution provides insights into the variability of the estimate and helps you assess the precision of your findings regarding the average weight in the population.”
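Continuing the simulated sketch from earlier, here is what building that sampling distribution might look like: one average per sample, then the averages grouped into bins – lightest to the left, heaviest to the right – with little sticks to show frequencies.

```python
# One average per sample: these thirty numbers are the raw material
# of the sampling distribution of the mean.
sample_means = np.array([s.mean() for s in samples])

# Group the averages into bins, lightest to heaviest, and count how
# often each range of values occurs -- a little text histogram.
counts, bin_edges = np.histogram(sample_means, bins=8)
for count, left, right in zip(counts, bin_edges, bin_edges[1:]):
    print(f"{left:6.2f} to {right:6.2f} kg | {'#' * int(count)}")
```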

What’s that mean – “the sampling distribution provides insights into the variability of the estimate and helps you assess the precision of your findings regarding the average weight in the population”?

Well, think of it this way. Let’s say you have to report back what the average weight is. That is, what, in your opinion, is the average weight of all of Bangalore? Should you pick the first sample and report its value? Or the eighth sample? Or the twenty-third?

Why not, you decide in a fit of inspiration, take the average of all these averages? Whatay idea, sirjee! Because even if you assume that one sample might be a little bit off from the population mean, our father what goes? Maybe the second sample will be a little bit off, but on the other side! That is, the first sample is a little lighter than the actual value. But the second sample might well be a little heavier than the actual value! The average of the two might result in the two errors canceling each other out! And if taking the average of two averages is a good idea, why, taking the average of the thirty averages is positively brilliant.
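In the toy simulation from earlier, that positively brilliant idea is a one-liner:

```python
# The average of the averages: errors in individual samples pull in
# different directions, and tend to cancel out in the grand mean.
print(f"Mean of the 30 sample means: {sample_means.mean():.2f} kg")
print(f"True population mean:        {population.mean():.2f} kg")
```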

But hang on a second, you say. Just you wait, you say, for you’re on a roll now. If we can take – and follow me closely here – the average of these averages, well then. I mean to say, why not…

… why not calculate the standard deviation of these averages as well! Not only do we get the average value, but we also get the dispersion, on average, around the average value. Ooh, you genius, you.

This latest invention of yours, it has a name. It’s called the standard error:

“The standard error is a measure of the variability or uncertainty associated with an estimate. In the case of estimating the average weight of citizens in Bangalore, the standard error would quantify the uncertainty surrounding the estimated mean weight. It is typically calculated based on the observed variability in weight within the sample(s) and the information contained in the sampling distribution of the sample means. By considering the spread of the sample means around the population mean, the standard error provides an indication of how much the estimated average weight may deviate from the true population average. A smaller standard error suggests higher precision, indicating that the estimated average weight is expected to be closer to the true population average. Conversely, a larger standard error indicates greater uncertainty and variability, implying that the estimated average weight may have a wider range of potential values and may be less precise.”

Well, not quite. What you’ve just calculated – the standard deviation of the sampling distribution – is the standard error. And it turns out to be equal to the standard deviation of the population divided by the square root of the sample size: SE = σ/√n, where n is the number of people in each sample, and not the number of samples. Why does the sample size show up in there? Ask, ask. Go ahead, ask. Here’s why:

“Imagine you have a larger sample size. With more observations, you have more information about the population, and the estimates tend to be more precise. Dividing the standard deviation by the square root of the sample size reflects this concept. It adjusts the measure of variability to match the precision associated with the sample size.”

https://en.wikipedia.org/wiki/Standard_error
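Back in our toy simulation, you can check that the two routes to the standard error agree – the standard deviation of the thirty sample means on one hand, and σ/√n on the other:

```python
# Two routes to the standard error of the mean, which should agree:
# 1. empirically: the standard deviation of the thirty sample means;
# 2. by formula: sigma / sqrt(n), with n = people per sample.
# (Here we know sigma because we invented the population; in real
# life you would estimate it from your sample data.)
empirical_se = sample_means.std(ddof=1)
formula_se = population.std() / np.sqrt(sample_size)

print(f"SD of the sample means (empirical SE): {empirical_se:.3f} kg")
print(f"sigma / sqrt(n)       (formula SE):    {formula_se:.3f} kg")
```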

The larger the sample size, the lower the standard error. Also known as “more data is always better”. Which is why, since time immemorial, every stats prof in the world has always responded to questions about how large your sample size should be with the following statement:

“As large as possible”.

They’re your friends, you see. They wish you to have smaller standard errors.
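One last detour into the toy simulation, to see why your stats prof is your friend – watch the standard error shrink as each sample grows:

```python
# "As large as possible": the standard error shrinks as each sample
# grows, because sigma / sqrt(n) falls as n rises.
for n in (25, 100, 400, 1600):
    means = [rng.choice(population, size=n, replace=False).mean()
             for _ in range(200)]
    print(f"n = {n:5d} -> SD of the sample means = "
          f"{np.std(means, ddof=1):.3f} kg")
```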

And so, the sampling distribution gives you the following gifts:

  1. An estimate of what the average weight is for the city of Bangalore
  2. This estimate is obtained by studying many samples, and taking the average of all of their averages. We can hope that the errors in each sample are cancelled out by errors in the other samples
  3. The standard error of the sampling distribution tells you how much the estimated average weight varies around the population mean.
  4. Not only do you have a very good guess about the value, but you also have a very good guess about the error that is implicit in your guess. Buy one get one free, you might smugly tell your superior.

And that, my friends, is the process of statistical inference.

But kahaani, as you might have guessed, abhi baaki hai – the story isn’t over yet. We’ll get back to this in a future post.