The Data and The Narrative

This week is Back to College at the Gokhale Institute. A podcast that I started a couple of years ago has become a tradition of sorts at the start of each semester at the BSc programme.

For about a week, we have people come and speak to us. All of them answer a simple question in a variety of ways. And that question is this: what would you do differently if you got the chance to go back to college? It’s a simple question, and can be answered in myriad ways. Here are some of the past talks, if you’re interested.

There’s one theme that has come up in all of the talks so far, and often enough for me to want to emphasize it further. All of the speakers have spoken about the importance of doing the analysis, but also about having the ability to build a story around it. Most folks are perhaps good at one, but not the other, and rarely at both.

Almost all of the speakers have said that, as economists, we now have the ability to build models and run regressions. Building out a more sophisticated model, tweaking it, refining it, is either already possible, or can be learnt relatively easily. But where we lose out, as young economists entering the workforce, is in our ability to explain what we’ve done.

I often say in my classes on statistics that the most underrated skill that a statistician possesses is the English language. I usually get confused laughter by way of response, but I am, of course, getting at much the same point. Unless you have the ability to explain what your model implies for the business problem at hand, you haven’t really done your work. And when I say explain, I mean using the English language.

Each of our speakers for the week so far has made the same point in their own way. Technical ability is table stakes. The differentiator is the ability to expand on what you’ve done, in a way that resonates with the listener. And resonance means the ability to tell a story about how what you’ve done is A Good Thing For The Business.

There are many other lessons to have come out of this week’s talks, and more, I’m sure, to come. But this one is worth internalizing and working on for all of us (myself included): it’s about the analysis and the narrative.

JEP, p-values and tests of statistical significance

The Summer 2021 issue of the Journal of Economic Perspectives came out recently:

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download individual articles or the entire issue, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Summer 2021 issue, which in the Taylor household is known as issue #137.

https://conversableeconomist.wpcomstaging.com/2021/07/29/summer-2021-journal-of-economic-perspectives-available-online/

(JEP is a great journal to read as a student. If you’re looking for a good place to start, may I recommend the Anomalies column?)

Of particular interest this time around is the section on statistical significance. This paper, in particular, was an enjoyable read.


And reading that paper reminded me of a really old blogpost written by an ex-colleague of mine:

The author starts off by emphasizing the importance of developing a statistical toolbox. Indeed statistics is a rich subject that can be enjoyed by thinking through a given problem and applying the right kind of tools to get a deeper understanding of the problem. One should approach statistics with a bike mechanic mindset. A bike mechanic is not addicted to one tool. He constantly keeps shuffling his tool box by adding new tools or cleaning up old tools or throwing away useless tools etc. Far from this mindset, the statistics education system imparts a formula oriented thinking amongst many students. Instead of developing a statistical or probabilistic thinking in a student, most of the courses focus on a few formulae and teach them null hypothesis testing.

https://radhakrishna.typepad.com/rks_musings/2015/09/mindless-statistics.html

If you are a student of statistics, and think that you “get” statistics, please read the post in its entirety. Don’t worry if you get confused – that is, in a way, the point of that post. It challenges you by asking a very simple question: do you really “get” statistics? And the answer is almost always in the negative (and that goes for me too!)


And my final recommendation du jour is this (extremely passionately written) tirade:

We want to persuade you of one claim: that William Sealy Gosset (1876-1937)—aka “Student” of “Student’s” t-test—was right, and that his difficult friend, Ronald A. Fisher (1890-1962), though a genius, was wrong. Fit is not the same thing as importance. Statistical significance is not the same thing as scientific importance or economic sense. But the mistaken equation is made, we find, in 8 or 9 of every 10 articles appearing in the leading journals of science, economics to medicine. The history of this “standard error” of science involves varied characters and plot twists, but especially R. A. Fisher’s canonical translation of “Student’s” t. William S. Gosset aka “Student,” who was for most of his life Head Experimental Brewer at Guinness, took an economic approach to the logic of uncertainty. Against Gosset’s wishes his friend Fisher erased the consciously economic element, Gosset’s “real error.” We want to bring it back.

https://www.deirdremccloskey.com/docs/jsm.pdf

Although it might help to read this review first:

However, thanks to an arbitrary threshold set by statistics pioneer R.A. Fisher, the term ‘significance’ is typically reserved for P values smaller than 0.05. Ziliak and McCloskey, both economists, promote a cost-benefit approach instead, arguing that decision thresholds should be set by considering the consequences of wrong decisions. A finding with a large P value might be worth acting upon if the effect would be genuinely clinically important and if the consequences of failing to act could be serious.

https://www.nature.com/articles/nm0209-135

Statistics is a surprisingly, delightfully conceptual subject, and I’m still peeling away at the layers. Every year I think I understand it a little bit more, and every year I discover that there is much more to learn. The symposium on statistical significance in this summer’s issue of the JEP, RK’s blogpost and Deirdre McCloskey’s paper are good places to get started on unlearning what you’ve been taught in stats.

On Confidence Intervals

As with practically every other Indian household, so with mine. Trudging back home after having written the math exam was never much fun.

It wasn’t fun because most of your answers wouldn’t tally with those of your friends. But it wasn’t fun most of all because you knew the conversation that waited for you at home. Damocles had it easy in comparison.

“How was the exam?”, would be the opening gambit from the other side.

And because Indian kids had very little choice but to become experts at this version of chess very early on in life, we all know what the safest response was.

“Not bad”.

Safe, you see. Non-committal, and just the right balance of being responsive without encouraging further questioning.

It never worked, of course, because there always were follow-up questions.

“So how much do you think you’ll get?”

There are, as any kid will tell you, two possible responses to this. One brings with it temporary relief, but payback can be hellish come the day of the results. This is the Blithely Confident™ method.

“Oh, it was awesome! I’ll easily get over 90!”

The other response involves a more difficult conversation at the present juncture, but as any experienced negotiator will tell you, setting expectations is key in the long run.

“Not sure, really.”

Inwardly, you’re praying for a phone call, a doorbell ring, the appearance of a lizard in the kitchen – anything, really, that will serve as a distraction. Alas, miracles occur all too rarely in real life.

“Well, ok”, the pater would say, “Give me a range, at least.”


We’ve all heard the joke where the kid goes “I’ll definitely get somewhere between 0 and 100!”.

Young readers, a word of advice: this never works in real life. Don’t try it, trust me.

But jokes apart, there was a grain of truth in that statement. That was the range that I (and every other student) was most comfortable with.

Or, in the language of the statistician, the wider the confidence interval, the more confident you ought to be that the parameter will lie within it.1


What range should one go with? 0-100 is out unless you happen to like a stinging sensation on your cheek.

You’re reasonably confident that you’ll pass – it wasn’t that bad a paper. And if you’re lucky, and if your teacher is feeling benevolent, you might even inch up to 80. So, maybe 40-80?

“I’ll definitely pass, and if I’m lucky, could get around 60 or so”, you venture.

“Hmmm,” the pater goes, ever the contemplative thinker. “So around 60, you’re saying?”

“Well yeah, around that”, you say, hoping against hope that this conversation is approaching the home stretch now.


“Around could mean anything!”, is the response. “Between 50 and 70, or between 40 and 80?! Which is it?!”

And that, my friends, is the intuition behind confidence intervals. Your parents are optimizing for a more precise estimate (a narrower range), and you want to tell them that sure, you can have a narrower range – but the price they must pay is lesser confidence on your part.

And if they say, well, no, we want you to be more confident about your answer, you want to tell them that sure, I can be more confident – but the price they must pay is lower precision (a broader range).

And sorry, you can’t have both.

(Weird how parents get to say that all the time, but children, never!)

But be careful! This little story helps you get the intuition only. The truth is a little more subtle, alas:

The confidence interval can be expressed in terms of samples (or repeated samples): “Were this procedure to be repeated on numerous samples, the fraction of calculated confidence intervals (which would differ for each sample) that encompass the true population parameter would tend toward 90%.”

https://en.wikipedia.org/wiki/Confidence_interval#Meaning_and_interpretation

Or, in the case of our little story, this is what an Indian kid could tell their parents:

Were I to give the math exam a hundred times over, I would score somewhere between 50 and 70 about ninety times. And I would score between 40 and 80 about 95 times.


Now, if you ask where we get those specific sets of numbers from (50-70 with 90%, and 40-80 with 95%), that takes us into the world of computation and calculation. Time to whip out the textbook and the calculator.
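If you would rather hand that calculation over to a computer, here is a minimal sketch in Python. The exam scores are entirely made up, and the second half simulates the “repeated samples” interpretation from the Wikipedia excerpt above:

```python
import numpy as np
from scipy import stats

# Entirely made-up scores from ten past exams
scores = np.array([55, 62, 48, 70, 58, 65, 52, 60, 66, 57])

n = len(scores)
mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean: s / sqrt(n)

# t-based confidence intervals for the mean score
ci_90 = stats.t.interval(0.90, n - 1, loc=mean, scale=sem)
ci_95 = stats.t.interval(0.95, n - 1, loc=mean, scale=sem)

print(f"mean = {mean:.1f}")
print(f"90% interval: {ci_90[0]:.1f} to {ci_90[1]:.1f}")
print(f"95% interval: {ci_95[0]:.1f} to {ci_95[1]:.1f}")
# The 95% interval is always wider: more confidence costs you a broader range.

# The "repeated samples" interpretation: draw many samples from a known
# population and count how often the 90% interval captures the true mean.
rng = np.random.default_rng(42)
true_mean, true_sd = 60, 8
hits = 0
for _ in range(10_000):
    sample = rng.normal(true_mean, true_sd, size=n)
    lo, hi = stats.t.interval(0.90, n - 1, loc=sample.mean(), scale=stats.sem(sample))
    hits += lo <= true_mean <= hi
print(f"coverage of the 90% interval: {hits / 10_000:.1%}")  # roughly 90%
```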

But if you are clear about why broader intervals imply higher confidence, and narrower intervals imply lower confidence, then you are now comfortable with the intuition.

And I hope you are clear, because that was the whole point of this blogpost.


Kids, trust me. Never try this at home.

But please, do read the Wikipedia article.

  1. Statisticians reading this, I know, I know. Let it slide for the moment. Please.

Probability, Expected Value… in No Country For Old Men

No Such Thing As Too Much Stats in One Week

I wrote this earlier this week:

Us teaching type folks love to say that correlation isn’t causation. As with most things in life, the trouble starts when you try to decipher what this means, exactly. Wikipedia has an entire article devoted to the phrase, and it has occupied space in some of the most brilliant minds that have ever been around.
Simply put, here’s a way to think about it: not everything that is correlated is necessarily going to imply causation.


But if there is causation involved, there will definitely be correlation. In academic speak, if x and y are correlated, we cannot necessarily say that x causes y. But if x does indeed cause y, x and y will definitely be correlated.

https://econforeverybody.com/2021/05/19/correlation-causation-and-thinking-things-through/

And just this morning, I chanced upon this:

And so let’s try and take a walk down this rabbit hole!

Here are three statements:

  1. If there is correlation, there must be causation.

    I think we can all agree that this is not true.
  2. If there is causation, there must be correlation.

    That is what the highlighted excerpt is saying in the tweet above. I said much the same thing in my own blogpost the other day. The bad news (for me) is that I was wrong – and I’ll expand upon why I was wrong below.
  3. If there is no correlation, there can be no causation

    That is what Rachael Meager is saying the book is saying. I spent a fair bit of time trying to understand if this is the same as 2. above. I’ve never studied logic formally (or informally, for that matter), but I suppose I am asking the following:
    If B exists, A must exist. (B is causation, A is correlation – this is just 2. above)

    If we can show that A doesn’t exist, are we guaranteed the non-existence of B?

    And having thought about it, I think it to be true: 3. is simply the contrapositive of 2., and a statement and its contrapositive are logically equivalent. So 3. is the same as 2.1

Rachael Meager then provides this example as support for her argument:

This is not me trying to get all “gotcha” – and I need to say this because this is the internet, after all – but could somebody please tell me where I’m wrong when I reason through the following:

Ceteris paribus, there is a causal link between pressing on the gas and the speed of the car. (Ceteris paribus is just fancy pants speak – it means holding all other things constant.)

But when you bring in the going up a hill argument, ceteris isn’t paribus anymore, no? The correlation is very much still there. But it is between pressing on the gas and the speed of the car up the slope.

Forget the physics and acceleration and slope and velocity and all that. Think of it this way: the steeper the incline, the more you’ll have to press the accelerator to keep the speed constant. The causal link is between the degree to which you press on the gas and the steepness of the slope. That is causally linked, and therefore there is (must be!) correlation.2

Put another way:

If y is caused by x, then y and x must be correlated. But this is only true keeping all other things constant. And going from flat territory into hilly terrain is not keeping all other things constant.

No?
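For whatever it is worth, here is a small simulation of the gas-and-hill story, with entirely made-up numbers, in which the driver presses harder precisely when the road gets steeper. The causal link from the gas pedal to speed is baked into the code, and yet the raw correlation all but vanishes until you hold the terrain constant:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

slope = rng.uniform(0, 10, n)                        # steepness of the road
gas = 1 + 0.5 * slope + rng.normal(0, 0.05, n)       # driver compensates for the hill
speed = 20 * gas - 10 * slope + rng.normal(0, 1, n)  # gas genuinely causes speed

# Across mixed terrain, gas and speed look (almost) uncorrelated...
print(np.corrcoef(gas, speed)[0, 1])                 # close to zero

# ...but hold the terrain constant (ceteris paribus) and the link reappears.
# A crude way to "control for" slope: correlate the residuals left over
# after regressing each variable on slope.
gas_resid = gas - np.polyval(np.polyfit(slope, gas, 1), slope)
speed_resid = speed - np.polyval(np.polyfit(slope, speed, 1), slope)
print(np.corrcoef(gas_resid, speed_resid)[0, 1])     # strongly positive
```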


But even if my argument above turns out to be correct, I still was wrong when I said that causation implies correlation. I should have been more careful about distinguishing between association and correlation.

Ben Golub made the same argument (I think) that I did:

… and Enrique Otero pointed out the error in his tweet, and therefore the error in my own statement:


Phew, ok. So: what have we learnt, and what do we know?

Here is where I stand right now:

  1. Correlation doesn’t imply causation
  2. I still think that if there is causation, there must be association (not correlation, as I had originally written). But that being said, I should be pushing The Mixtape to the top of the list.
  3. Words matter, and I should be more careful!

All in all, not a bad way to spend a Saturday morning.

  1. Anybody who has studied logic, please let me know if I am correct!
  2. Association, really. See below.

Correlation, Causation and Thinking Things Through

Us teaching type folks love to say that correlation isn’t causation. As with most things in life, the trouble starts when you try to decipher what this means, exactly. Wikipedia has an entire article devoted to the phrase, and it has occupied space in some of the most brilliant minds that have ever been around.

Simply put, here’s a way to think about it: not everything that is correlated is necessarily going to imply causation.

For example, this one chart from this magnificent website (and please, do take a look at all the charts):

https://www.tylervigen.com/spurious-correlations
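If you would like to manufacture a spurious correlation of your very own, here is a toy sketch. The two series are invented and have nothing to do with each other; they merely happen to drift upwards over the years, and that is usually all it takes:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2021)

# Two invented series with no causal link whatsoever:
# each one simply trends upwards over time.
cheese_consumption = 30 + 0.8 * (years - 2000) + rng.normal(0, 0.5, len(years))
engineering_phds = 500 + 15 * (years - 2000) + rng.normal(0, 10, len(years))

print(np.corrcoef(cheese_consumption, engineering_phds)[0, 1])  # very close to 1
```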

But if there is causation involved, there will definitely be correlation. In academic speak, if x and y are correlated, we cannot necessarily say that x causes y. But if x does indeed cause y, x and y will definitely be correlated.

OK, you might be saying right now. So what?

Well, how about using this to figure out what ingredients were being used to make nuclear bombs? Say the government would like to keep the recipe (and the ingredients) for the nuclear bomb a secret. But what if you decide to take a look at the stock market data? What if you try to see if there is an increase in the stock price of firms that make the ingredients likely to be used in a nuclear bomb?

If the stuff that your firm produces (call this x) is in high demand, your firm’s stock price will go up (call this y). If y has gone up, it (almost certainly) will be because of x going up. So if I can check if y has gone up, I can assume that x will be up, and hey, I can figure out the ingredients for a nuclear bomb.

Sounds outlandish? Try this on for size:

Realizing that positive developments in the testing and mass production of the two-stage thermonuclear (hydrogen) bomb would boost future cash flows and thus market capitalizations of the relevant companies, Alchian used stock prices of publicly traded industrial corporations to infer the secret fuel component in the device in a paper titled “The Stock Market Speaks.” Alchian (2000) relates the story in an interview:
We knew they were developing this H-bomb, but we wanted to know, what’s in it? What’s the fissile material? Well there’s thorium, thallium, beryllium, and something else, and we asked Herman Kahn and he said, ‘Can’t tell you’… I said, ‘I’ll find out’, so I went down to the RAND library and had them get for me the US Government’s Dept. of Commerce Yearbook which has items on every industry by product, so I went through and looked up thorium, who makes it, looked up beryllium, who makes it, looked them all up, took me about 10 minutes to do it, and got them. There were about five companies, five of these things, and then I called Dean Witter… they had the names of the companies also making these things, ‘Look up for me the price of these companies…’ and here were these four or five stocks going like this, and then about, I think it was September, this was now around October, one of them started to go like that, from $2 to around $10, the rest were going like this, so I thought ‘Well, that’s interesting’… I wrote it up and distributed it around the social science group the next day. I got a phone call from the head of RAND calling me in, nice guy, knew him well, he said ‘Armen, we’ve got to suppress this’… I said ‘Yes, sir’, and I took it and put it away, and that was the first event study. Anyway, it made my reputation among a lot of the engineers at RAND.

https://www.sciencedirect.com/science/article/abs/pii/S0929119914000546

I learnt about this while reading Navin Kabra’s Twitter round-up from yesterday. Navin also mentions the discovery of Neptune using the same underlying principle, and then asks this question:

Do you know other, more recent examples of people deducing important information by guessing from correlated data?

https://futureiq.substack.com/p/best-of-twitter-antifragility-via

… and I was reminded of this tweet:


Whether it is Neptune, the nuclear bomb or the under-reporting of Covid deaths, the lesson for you as a student of economics is this: when you marry the ability to connect the dots with the ability to understand and apply statistics, truly remarkable things can happen.

Of course, the reverse is equally true, and perhaps even more important. When you marry the ability to connect the dots with a misplaced ability to understand and apply statistics, truly horrific things can happen.

Tread carefully when it comes to statistics!

On Signals and Noise

Have you ever walked out of a classroom as a student wondering what the hell went on there for the past hour? Or, if you are a working professional, have you ever walked out of a meeting wondering exactly the same thing?

No matter who you are, one of the two has happened to you at some point in your life. We’ve all had our share of monumentally useless meetings/classes. Somebody has droned on endlessly about something, and after an eternity of that droning, we’re still not sure what that person was on about. To the extent that we still don’t know what the precise point of the meeting/class was.

One of the great joys in my life as a person who tries to teach statistics to students comes when I say that if you have experienced this emotion, you know what statistics is about. Well, that’s a stretch, but allow me to explain where I’m coming from.


Image taken from here: https://en.wikipedia.org/wiki/Z-test

Don’t be scared by looking at that formula. We’ll get to it in a bit.


Take your mind back to the meeting/class. When you walked out of it, did you find yourself plaintively asking a fellow victim, “But what was the point?”

And if you are especially aggrieved, you might add that the fellow went on for an hour, but you’re still not sure what that was all about. What you’re really saying is that there was a lot of noise in that meeting/class, but not nearly enough signal.

You’re left unsure about the point of the whole thing, but you and your ringing ears can attest to the fact that a lot was said.


Or think about a phone call, or a Whatsapp call. If there is a lot of disturbance on the call, it is likely that the call won’t last for very long, and you may well be unclear about what the other person on the call was trying to say.

What you’re really saying is that there was a lot of noise on the call, but not nearly enough signal.


That is what the signal-to-noise ratio is all about. The clearer the signal, the better it is. The lower the noise, the better it is. And the ratio is simply both things put together.

A class that ends with you being very clear about what the professor said is a good class. A good class is “high” on the signal that the professor wanted to leave you with. And if it is a class in which the professor didn’t deviate from the topic, didn’t wander down side-alleys and didn’t spend too much time cracking unnecessary jokes, it is an even better class, because it was “low” on disturbance (or to use another word that means the same thing as disturbance: noise).


That, you see, is all that the formula up there is saying. How high is the signal (x less mu), relative to the noise (sigma, or s). The higher the signal, and the lower the noise, the clearer the message from the data you are working with.

And it has to be both! A clear signal with insane amounts of noise ain’t a good thing, and an unclear signal with next to no noise is also not a good thing.

And all of statistics can be thought of this way: what is the signal from the data that I am examining, relative to the noise that is there in this dataset. That is one way to understand the fact that the formula can look plenty scary, but this is all it is really saying.
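To put some actual numbers on that idea, here is a minimal sketch. The data are made up, and I am using the sample version of the calculation, with the standard error of the mean playing the role of the noise:

```python
import numpy as np

# Forty made-up measurements, and a claimed (hypothesised) mean of 100
rng = np.random.default_rng(7)
x = rng.normal(103, 15, size=40)
mu = 100

signal = x.mean() - mu                    # how far the data sit from mu
noise = x.std(ddof=1) / np.sqrt(len(x))   # how wobbly our estimate of the mean is
z = signal / noise

print(f"signal = {signal:.2f}, noise = {noise:.2f}, z = {z:.2f}")
# The bigger |z| is, the more clearly the signal stands out above the noise.
```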

Even this monster, for example:

https://www.statsdirect.co.uk/help/parametric_methods/utt.htm

Looks scary, but in English, it is asking the same question: how high is the signal, relative to the noise. It’s just that the formula for calculating the noise is exuberantly, ebulliently expansive. Leave all that to us, the folks who think this is fun. All you need to understand is the fact that this is what we’re asking:


What is the signal, relative to the noise?
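In practice, nobody computes that exuberantly expansive denominator by hand; you hand the job over to software. Here is a hedged sketch, with two made-up groups of scores and scipy’s unpaired t-test (the commented-out line is Welch’s version, for when you would rather not assume equal variances):

```python
import numpy as np
from scipy import stats

# Two made-up groups of exam scores
rng = np.random.default_rng(3)
group_a = rng.normal(62, 10, size=30)
group_b = rng.normal(55, 12, size=35)

# The unpaired two-sample t-test: the signal is the difference in means,
# the noise is that expansive standard-error term in the denominator.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
# t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's version

print(f"t = {t_stat:.2f}, p-value = {p_value:.4f}")
```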


And finally speaking of noise, that happens to be the title of Daniel Kahneman’s latest book. I have just downloaded it, and will get to it soon (hopefully). But before recommending to you that you should read it, I wanted to explain to you what the title meant.

And if you’re wondering why I would recommend something that I haven’t read yet, well, let me put it this way: it’s Daniel Kahneman.

High signal, no noise.

Think range, not point

I attended a talk recently, in which the topic of pure public goods was covered, and the 2×2 matrix came up for discussion:

Source: https://medium.com/@RhysLindmark/club-goods-digital-infrastructure-and-blockchains-c1e911ebb697

Quick background: this is about the concept of public goods. A good that is rivalrous is a good that only one person can use at a time. The laptop on which I am typing this out is a rivalrous good. Only I can use it, and when I am using it, nobody else can.

A good that is non-excludable is one which I cannot prevent people from using. This blog, for example, is one which anybody, anywhere can see at any point of time. It is, and will always be free.

Have fun playing around with the matrix, and asking yourself where you would place which good. If you would like some examples to play around with, here’s a short list:

  1. Classes in a university
  2. Water in the water tank in your housing society
  3. A course on Coursera
  4. Seats on a bus
  5. The Mumbai-Pune expressway

But things can quickly get complicated! I gave the example of a laptop earlier in this post. What if five students are watching a movie on a laptop? A good that was rivalrous suddenly becomes non-rivalrous.

I also gave the example of this blog. What if I move over to Substack and turn this into a paid blog? A good that was non-excludable can suddenly be made excludable.

There are two points to make over here – the first is that context really matters.

But the second point, and the one that I want to talk about today, is the idea that those four boxes up top shouldn’t be thought of as discrete boxes, but rather as a continuum. A good can lie definitively within one box, or closer towards its edge, or indeed jump across the boundary, depending upon the context.

Statisticians would call this range estimates, rather than point estimates. Amit Varma would say that we all contain multitudes. Both are referring to the same underlying idea. That idea being this one:

When passing judgment upon a person, a concept or an institution, realize that your judgment doesn’t necessarily hold true for all possible scenarios. The same person can be good in one context, and bad in another. I’m good (I hope) at explaining concepts, but horrendous at meeting deadlines.1

The United States of America can be wonderful in certain contexts, and less than wonderful in others. India too, of course.

The point is, when you think about nebulous, hard-to-pin-down concepts, don’t think in definitive terms of a narrow point estimate. Think, rather, in terms of a range. Always a better idea, and one that I need to internalize better myself.

My thanks to Anupam Mannur for helping me crystallize this idea, and to a friend who shall remain unnamed for helping me realize that I need to apply it in more areas than I do at present.

  1. Or very, very good at missing them.

Team “Kam Nahi Padna Chahiye”

Every time we host a party at our home, we engage in a brief and spirited… let’s go with the word “discussion”.

Said discussion is not about what is going to be on the menu – we usually find ourselves in agreement about this aspect. It is, instead, about the quantity.

In every household around the world, I suppose, this discussion plays out every time there’s a party. One side of the debate will worry about how to fit the leftovers in the refrigerator the next day, while the other will fret about – the horror! – there not being enough food on the table midway through a meal.

There is, I should mention, no “right” answer over here. Each side makes valid arguments, and each side has logic going for it. Now, me, personally, I quite like the idea of leftovers, because what can possibly be better than waking up at 3 in the morning for no good reason, waddling over to the fridge, and getting a big fat meaty slice of whatever one may find in there? But having been a part of running a household for a decade and change, I know the challenges that leftovers can pose in terms of storage.

You might by now be wondering about where I am going with this, but asking yourself which side of the debate you fall upon when it comes to this specific issue is also a good way to understand why formulating the null hypothesis can be so very challenging.


Let’s assume that there’s going to be four adults and two kids at a party.

How many chapatis should be made?

Should the null hypothesis be: We will eat exactly 16 chapatis tonight

With the alternate then being: 16 chapatis will either be too many or too few


Or should the null hypothesis be: We will eat 20 chapatis or more

With the alternate being: We will definitely eat less than 20 chapatis tonight.


The reason we end up having a “discussion” is because we can’t agree on which outcome we would rather avoid: that of potentially being embarrassed as hosts, or the one of standing, arms exasperatedly akimbo, in front of the refrigerator post-party.

It is the outcome we would rather avoid that guides us in our formation of the null hypothesis, in other words. We give it every chance to be true, and if we reject it, it is because we are almost entirely confident that we are right in rejecting it.

What is “almost entirely“?

That is the point of the “significant at 1%” or “5%” or “10%” sentence in academic papers.


Which, of course, is another way to think about it. This set of the null and the alternate…

H0: We will eat 20 chapatis or more

Ha: We will eat less than 20 chapatis

… I am not ok rejecting the null at even 1%. Or, in the language of statistics, I am not ok with committing a Type I error, even at a significance level of 1%.

A Type I error is rejecting the null when it is true. So even a 1% chance that we and our guests would have wanted to eat more than 20 chapatis* to me means that we should get more than 20 chapatis made.
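If you want to see the machinery at work, here is a toy sketch, with entirely made-up counts of how many chapatis past parties of this size have gone through, testing H0: we will eat 20 or more, against Ha: we will eat fewer than 20. (The alternative argument in scipy’s one-sample t-test needs a reasonably recent version of the library.)

```python
import numpy as np
from scipy import stats

# Made-up counts of chapatis eaten at past parties of this size
consumed = np.array([18, 22, 17, 19, 21, 16, 20, 18])

# H0: mean consumption >= 20   (kam nahi padna chahiye!)
# Ha: mean consumption < 20
t_stat, p_value = stats.ttest_1samp(consumed, popmean=20, alternative="less")

print(f"t = {t_stat:.2f}, p-value = {p_value:.3f}")
# With these numbers the p-value comes out well above 0.01, so at a 1%
# significance level we fail to reject H0, and the extra chapatis get made.
```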

At this point in our discussions (we’re both economists, so these discussions really do take place at our home), my wife exasperatedly points out that not once has the food actually fallen short.

Ah, I say, triumphantly. Can you guarantee that it won’t this time around? 100% guarantee?

No? So you’re saying there’s a teeny-tiny 1% chance that we’ll have too few chapatis?

Well, then.

Boss.

Kam nahi padna chahiye!

*Don’t judge us, ok. Sometimes the curry just is that good.

Calling Bullshit: An Appreciation

This past Tuesday, I went on a long rant about exams in general, and exams especially in the year 2020. That rant was inspired by a Twitter thread put out by Prof. Carl Bergstrom.

Now, if you happen to share my views on examinations, I’m guessing you were already likely to be a fan of Prof. Bergstrom. Today, your fandom might just go up a couple of notches. Check out the first paragraph on my favorite discovery of 2020 so far – Calling Bullshit:

The world is awash in bullshit. Politicians are unconstrained by facts. Science is conducted by press release. Higher education rewards bullshit over analytic thought. Startup culture elevates bullshit to high art. Advertisers wink conspiratorially and invite us to join them in seeing through all the bullshit — and take advantage of our lowered guard to bombard us with bullshit of the second order. The majority of administrative activity, whether in private business or the public sphere, seems to be little more than a sophisticated exercise in the combinatorial reassembly of bullshit.

https://www.callingbullshit.org/

He and his collaborator on the project, Prof. Jevin West, are nothing if not thorough:

What do we mean, exactly, by bullshit and calling bullshit? As a first approximation:

Bullshit involves language, statistical figures, data graphics, and other forms of presentation intended to persuade by impressing and overwhelming a reader or listener, with a blatant disregard for truth and logical coherence.

Calling bullshit is a performative utterance, a speech act in which one publicly repudiates something objectionable. The scope of targets is broader than bullshit alone. You can call bullshit on bullshit, but you can also call bullshit on lies, treachery, trickery, or injustice.

In this course we will teach you how to spot the former and effectively perform the latter.

https://www.callingbullshit.org/

There’s a book, there’s videos of the course lectures (yes, you can earn credits for learning about bullshit), there’s a list of heuristics about detecting bullshit when it comes to interpreting visualizations, reading academic papers, and facial detection algorithms. There are case studies too!

And hey, if you insist on being politically correct (there’s merit in the argument that you shouldn’t, but hey, entirely your call) – well, they got you covered:

If you feel that the term bullsh!t is an impediment to your use of the website, we have developed a “sanitized” version of the site at callingbull.org. There we use the term “bull” instead of “bullsh!t” and avoid other profanity. Be aware, however, that some of the links go to papers that use the word bullsh*t or worse.

https://www.callingbullshit.org/FAQ.html

Some weeks ago, I promised somebody that I would come up with a lecture on demystifying statistics – and set myself the challenge of trying to come up with lecture notes without using a single equation.

As is the case with 95% of the things I really want to do, I promptly forgot all about it.

I haven’t seen all the videos yet on Calling Bullshit, but it does seem as if outsourcing this exercise – at least in part – to this fantastic website would be a really good idea.

Check out the syllabus here. A part of me is tempted to say that I would like to run this module as a summer school at GIPE, but you will remember what I said about things I really want to do.

But hey, there’s always hope, right?

Or should I be calling bullshit on myself?