Would your choice (or answer) be impacted by the way a particular question was framed?
Econs, to use a word coined by Richard Thaler, would scoff at such a notion. Reality, of course, indicates otherwise. Should a price be framed as a discount during a sale, or as a premium for the rest of the year? Should an insurance firm advertise how dangerous a disease is, or how effective the treatment is? And should a hospital take the opposite route?
Again, in the world of the Econ, the framing of each of these questions should not matter: one's answer should not change. Years of research, and the intuition of all of us Humans, indicate that it does matter. That, in a nutshell, is the framing effect.
And where the second case study I spoke of yesterday is concerned: I’ll just outsource the job to Dan Ariely.
Yesterday was all about fallacies: the gambler’s fallacy, the conjunctive and disjunctive fallacies, the base rate fallacy, the framing effect (which, if you think about it, is a kind of fallacy) and one of my favorite stories from behavioral economics: the organ donation case from Europe. We’ll get to each one in turn.
The word fallacy comes from the Latin word fallere, which means to be deceived – also making me wonder if the phrase “to fall for” comes from the same root. The story with the three “fallacies” that we’re talking about in this post is the same: our brain falls prey to some errors that are, in retrospect, irritatingly obvious.
The Gambler’s Fallacy
Have you ever felt you were on a lucky streak while playing a game of cards – felt as if you were invincible while rolling a pair of dice? That’s the hot hand fallacy. On the other hand, have you ever felt tempted to continue playing a game because you have been losing for a while, and feel like your luck is due to turn for the better? That’s the gambler’s fallacy proper – the reverse of the hot hand – but both are really (excuse the pun) two sides of the same coin: you think the next outcome is a function of the previous ones, whereas basic probability teaches you that they are independent events.
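You can check that independence claim with a quick simulation – a minimal sketch, where `next_flip_after_streak` is just an illustrative helper I’ve made up:

```python
import random

random.seed(42)

def next_flip_after_streak(streak_len, trials=100_000):
    """Estimate P(heads) on the flip right after a run of all tails."""
    heads_after = runs_seen = 0
    for _ in range(trials):
        flips = [random.random() < 0.5 for _ in range(streak_len + 1)]
        if not any(flips[:streak_len]):       # first `streak_len` flips all tails
            runs_seen += 1
            heads_after += flips[streak_len]  # True counts as 1
    return heads_after / runs_seen

# After five tails in a row, the next flip is still roughly 50:50.
# The coin has no memory.
print(round(next_flip_after_streak(5), 2))
```

However long the losing streak, the estimate stays near 0.5: conditioning on the past tells you nothing about the next independent flip.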
The Conjunctive (and Disjunctive) Fallacy
Say a person is introduced to me at a party, and I learn that she was educated in finance and investing, and is very enthusiastic about Picasso’s paintings. Which of the two options I describe below is likelier?
One, that she is in banking. Two, that she is in banking and has some Picasso prints hanging on her wall at home.
Most people (myself included) would be tempted to go with the second option. But again, basic probability teaches us that this is wrong, because we are talking about the intersection of two things (the probability that she is a banker AND the probability that she has Picasso prints hanging at home) – and two things happening at the same time can never be likelier than either of those things happening individually.
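To make the arithmetic concrete, here is a minimal sketch in Python. The probabilities are pure invention, chosen only to illustrate the inequality:

```python
# Invented numbers; only the inequality matters.
p_banker = 0.4                 # P(she is in banking)
p_picasso_given_banker = 0.3   # P(Picasso prints at home, given banking)

# P(banking AND Picasso prints) = P(banking) * P(prints | banking)
p_both = p_banker * p_picasso_given_banker

# The conjunction can never be likelier than either part on its own.
assert p_both <= p_banker
print(round(p_both, 2))  # 0.12
```

Whatever numbers you plug in, multiplying two probabilities (each at most 1) can only shrink the result – which is exactly why the “banker AND Picasso” option must lose.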
The disjunctive fallacy is the same idea in reverse. Meet my friend Rahul. Rahul is six and a half feet tall, is built like a bull, and can demolish multiple plates of food at the same sitting. He spends a couple of hours a day (at least!) at the gym. What is likelier:
- Rahul is a wrestler
- Rahul is a wrestler or a classical singer
The point is that one should always choose the second option, because it includes the first in any case.
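The mirror-image arithmetic, again with invented numbers, where only the direction of the inequality matters:

```python
# Invented numbers; only the direction of the inequality matters.
p_wrestler = 0.20
p_singer = 0.05
p_both = 0.01    # he could, conceivably, be both

# Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
p_either = p_wrestler + p_singer - p_both

# The disjunction can never be unlikelier than either part on its own.
assert p_either >= p_wrestler
print(round(p_either, 2))  # 0.24
```

Adding a second possibility can only grow (or at worst leave unchanged) the total probability – the exact opposite of the conjunction case above.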
The Base Rate Fallacy
I’ll not get into the math in this case, but just give you the short description of the idea: false positives are going to be overwhelmingly likely if the base rate is very low, and the human brain has trouble understanding this idea.
Here’s what this means:
Of the millions of people who transit through airports the world over, a vanishingly small minority will actually be terrorists. If the algorithms used to detect them are even slightly inaccurate, the overwhelming majority of the people they flag will be innocent: the rare true positives are swamped by false positives drawn from the vast non-terrorist majority. We would therefore be wrong to assume that every person who is flagged as a potential terrorist actually is one.
(Note that it is way more complicated than this, but that’s the shortest version I could manage.)
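The arithmetic behind that claim is just Bayes’ rule. Here is a back-of-the-envelope sketch in Python, where every number is invented purely for illustration:

```python
base_rate = 1e-6       # assume 1 in a million travellers is a terrorist
sensitivity = 0.99     # P(flagged | terrorist)
false_positive = 0.01  # P(flagged | innocent): a "slightly inaccurate" algorithm

# Bayes' rule: P(terrorist | flagged)
p_flagged = sensitivity * base_rate + false_positive * (1 - base_rate)
p_hit = sensitivity * base_rate / p_flagged

# Roughly 0.01%: for every real terrorist flagged, about ten
# thousand innocent travellers are flagged too.
print(f"{p_hit:.4%}")
```

Even a 99%-accurate test, applied to a population where the condition is one-in-a-million rare, produces flags that are almost entirely false positives. That is the base rate fallacy in one calculation.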
This post is already a fairly dense one in terms of ideas, so I’ll split this one into two, and cover the rest tomorrow: the framing effect, and the European organ donation saga.
- India’s cheap airline tickets: not necessarily a good thing. Maybe somebody should write “India Airborne”?
- The cleanest village in India
- On Maharashtra’s plastic ban.
- On the (international) economics of empires, past and present.
- Expect many more of these over the coming month or so: on the tenth anniversary of the crash.
Teaching undergraduates is a whole lot of fun, because generally speaking, their curiosity hasn’t been completely killed just yet. And this seems to be true with the people who have chosen to attend the behavioral economics workshop as well – they’ve (hopefully voluntarily) chosen to spend their afternoons attending a workshop over the course of every workday this week. Catch ’em young!
We kickstarted things yesterday by speaking about ways and means to think about microeconomics in a rather conventional sense. I chose to not bore them to death by talking about utility functions and all of that, firstly because it is the worst way on the planet to get people thinking about economics, and secondly because they are all economics students to begin with – the indoctrination has been done by their colleges already.
I spoke instead about the Choices, Costs, Incentives and Horizons framework, which I have spoken about earlier on the blog. Within each of these concepts, however, I added a sprinkling of behavioral economics. What, for example, are your choices when confronted with a buffet spread?
And how long before you realize, if at all, that not eating it is also a choice? Sometimes, being presented with a choice to consume blinds us to the option of not doing so – which explains why checkout counters at supermarkets tend to have chewing gum on sale.
When it comes to costs, we spoke about opportunity costs and how they are often misunderstood – the people attending the workshop are paying the program fees, plus fifteen hours of their time. Fifteen hours that they could have spent doing something else.
In addition, we spoke about sunk costs. My favorite example is of how my wife and I were finally able to go out for a movie together after the birth of our daughter – and we ended up watching Happy New Year. And yet, even though the movie was tripe of exceptional quality, we sat through the whole thing. Neither of us enjoyed it, and by the end, every second was exquisite torture, but neither of us was willing to leave. This after I’ve been teaching the concept of the sunk cost fallacy for over a decade.
Incentives are both fairly well understood and applied in conventional economics – but how about negative incentives? Rather than reward yourself with a nice shirt if you lose weight, how about allowing a friend of yours to post a picture of you on Facebook where the paunch is especially noticeable? Which is likely to be more effective?
Finally, horizons: exercise this evening, or finish an episode of your favorite series on Netflix? We tend to go for short-term pleasure over long-term gains – and that is to our detriment in the long run. But our brain, unfortunately, is not trained to think about long-term consequences.
Finally, we spoke a little bit about signaling and its importance to us. That’s a topic deserving of a separate blog post entirely, but I will ask you guys a question I asked everybody in class:
Imagine you are able to attend the best college in the world, and are able to handpick the people who will teach you whatever courses you want. The ideal education, structured just the way you want it. The only problem is, you won’t get a degree at the end of it. Or, you could get, right here and now, a degree of your choice from whichever college you like – but you will not be able to attend a single class. Which of these options would you pick?
The question is based, of course, on a question that Bryan Caplan asks in his excellent book: The Case Against Education. Let me know your answer, I am genuinely interested.
Finally, we spoke about Kahneman’s “fast and slow” thinking: how and why it evolved the way it did, why it may have been of help in the past, but isn’t of much use in the world we live in today.
It was an exceptionally fun session, and hopefully it will continue in a similar vein for the rest of the week.
This week’s posts were going to be about podcasts that I listen to, but I’ll push that out to next week.
A colleague of mine at the Gokhale Institute (which is where I work) and I are running a five-day seminar at the Institute on behavioral economics. This one is for undergraduate students only, but based on how it turns out, we might do a couple more through the year. For that reason, I figured we might take a look at what behavioral economics is, explore the work being done in this area, and see why it matters.
In this post, I’ll give you an overview of behavioral economics, and in the five posts that follow, I’ll detail what we spoke about in each session.
First things first: behavioral economics really is a tautology, because economics is the study of choice, and we make our choices given what we know and given what we feel.
The trouble is, modern economic theory (most, but not all of it) would tend to say that what we feel ought not to matter, and in fact doesn’t actually matter in the real world. Except we’ve all demolished a big fat bowl of ice-cream because we’re feeling blue, the diet be damned. We’ve all bought items on sale on Amazon, when we clearly had no need for them. And we’ve all chosen to play a game on the phone over completing a task at hand, and hang the consequences. I could go on (and not just where individuals are concerned, but firms and governments too!), but you get the picture.
We’re all predictably irrational.
In a sense, behavioral economics is about the first word in that link. As a social scientist, it’s not much use to say that we’re irrational. That’s akin to saying that there’s nothing that we can say, do or predict about the choices that all of us make.
But predictably irrational? Ah, how exactly? If our irrationality can actually be modeled, then perhaps we could understand how and why we make the choices we do. Even better, maybe we could push people towards eating more salads and less ice-cream. Although you should note that there are some people in my tribe who don’t necessarily think this to be a good idea.
Still, the study of
a) whether we think “rationally” or not, and…
b) if not, then can we think systematically about how we are “irrational” and why…
c) and can we use our findings from this exercise to make people, institutions and therefore societies behave differently (and hopefully better)…
…is the study of behavioral economics.
And the five day version (duly expanded) of this is what Savita Kulkarni and I will be talking about at Gokhale Institute over the course of the next five days. And I’ll keep you guys updated as we go along.
- It’s not just India’s statisticians who need to revise their data, by the way.
- This story is from a while back, but still worth reading. Especially as an Indian.
- On thinking about money, and what it really means. (Who knows?)
- Via MR. As if eating there wasn’t bad enough.
- On contagion in the currency wars.