A Canadian's random thoughts on personal finance

Sep 22, 2008

The fallacy of large numbers

Today's post from Michael James just reminded me of a letter I sent to the editor of Money Sense magazine last summer:

In your recent article "10 Laws of Building Wealth", you scoffed at the investor who would choose a guaranteed payment of $3000 instead of an 80% chance of $4000. Your rationale was that the expected payoff for the latter is $3200, which is more attractive than $3000. However, the justification for using the "expected value" to compare alternatives is based on the Law of Large Numbers, which does not apply if we're given the choice just once. In that case, there's no way to combine risk and reward into a single neat metric, and the choice of $3000 versus an 80% shot at $4000 depends, quite rationally, on the risk aversion of the individual.

15 comments:

Michael James said...

Thanks for the link. I would add that the correct choice depends on a rational level of risk aversion. People can be very irrational, and their level of risk aversion can vary dramatically based on irrelevant factors. A man sitting beside his mother would likely take the $3000. The same man might take the chance to make $4000 if he is sitting with his friends. Personally, I would take the chance on $4000 if I was convinced that the bet was fair.

Patrick said...

Hi Michael. Do you buy lottery tickets when the jackpot is high enough to make the expected return higher than the cost of the ticket? I don't. The reason? No matter the jackpot, I'm still certain that I will be wasting my money on the ticket. I am more certain of this than of almost anything else in my life. What difference does the jackpot make if I won't win it?

Michael James said...

Hi Patrick. If such a lottery existed, then I'd form a syndicate to buy every ticket. Unfortunately, no such lotteries exist (to my knowledge). Even when the top prize exceeds the cost of buying one of every possible ticket, it is still a losing bet, because multiple copies of most number combinations have been sold and the winners will very likely have to share the top prize. Most lotteries pay out half or less of the money they take in. This makes each ticket worth less than half of what you pay for it. Of course, if you could choose a combination of numbers that is unpopular with other players you could improve your odds, but the large number of randomly-generated tickets that get sold seems to eliminate the possibility of actually making the bet profitable.
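
To put some made-up numbers on that, here is a rough sketch of how prize-sharing eats the expected value. Everything here (the odds, the sales figures, the jackpot, and the Poisson model for co-winners) is an illustrative assumption, not real 6/49 data:

    import math

    # Rough expected value of one lottery ticket when the jackpot can be shared.
    # Every number below is made up for illustration.
    combinations = 14_000_000   # 1-in-14-million odds of hitting the jackpot
    tickets_sold = 20_000_000   # other tickets in play for the same draw
    jackpot = 30_000_000        # more than the $28M cost of covering every combination
    ticket_price = 2.00
    minor_prizes = 0.30         # rough value of the smaller prize tiers per ticket

    # Model the number of *other* tickets matching your numbers as Poisson.
    lam = tickets_sold / combinations

    # If you win, you expect to split the jackpot with N other winners,
    # and for N ~ Poisson(lam), E[1/(1+N)] = (1 - exp(-lam)) / lam.
    expected_share = jackpot * (1 - math.exp(-lam)) / lam

    expected_value = expected_share / combinations + minor_prizes
    print("ticket price:", ticket_price, "expected value:", round(expected_value, 2))
    # about $1.44 per $2 ticket: still a losing bet, even though the jackpot
    # exceeds the cost of buying one of every possible combination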

Patrick said...

Well, that has happened, but the point I'm trying to make is just that "expected value" has no significance if you get only one kick at the can.

Also, a lottery ticket is only worth half what you paid for it if the prize was claimed the previous week; otherwise, the jackpot builds, and the expected value of a ticket grows.

Michael James said...

Hi Patrick,

It wasn't clear from the Wikipedia write-up whether the tickets had a positive expected outcome. If the syndicate had shared the top prize with a few other players they would have lost money. In any case, I don't doubt that some lotteries have offered tickets with a positive expectation. I'm not aware of any that continue to do so.

The last account I read of how the 6/49 is run made it clear that they would never allow a ticket to have a positive expected outcome. I don't remember all the details, though.

Back to the more central question. In my previous messages I wasn't advocating being driven by the expected outcome. I think that a logarithmic utility curve is a good approximation to reality. So, if your net worth is x, then you compare ln(x+3000) to 0.2*ln(x)+0.8*ln(x+4000). My x is large enough that I would take the bet. I would do this even though my intuition tells me to take the sure $3000.
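
A quick sketch of that comparison, just plugging numbers into the formula above (the two example net worths are arbitrary):

    import math

    def take_the_bet(net_worth):
        """Compare the log utility of a sure $3000 with an 80% shot at $4000."""
        sure_thing = math.log(net_worth + 3000)
        gamble = 0.2 * math.log(net_worth) + 0.8 * math.log(net_worth + 4000)
        return gamble > sure_thing

    print(take_the_bet(100000))   # True: with $100,000, log utility favours the gamble
    print(take_the_bet(2000))     # False: with $2,000, take the sure $3000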

Patrick said...

That logarithmic utility curve is interesting. It could be a way to defeat my assertion that "there's no way to combine risk and reward into a single neat metric". You may have guessed that I'm a fan of logarithms, so it is particularly appealing to me.

It does arrive at a surprisingly small answer, though: you should take the bet if your net worth is over $4943. (Thank you, online Newton's Method calculator!) Intuitively, it seems very unwise to gamble away a sure $3000 when that's 60% of your net worth.
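
For anyone who wants to check the arithmetic, here is roughly what that calculator was doing: a few rounds of Newton's method on the break-even equation (my own sketch, not the calculator's actual code):

    import math

    # Find the net worth x where ln(x+3000) = 0.2*ln(x) + 0.8*ln(x+4000),
    # i.e. where the sure $3000 and the gamble have equal log utility.
    def f(x):
        return 0.2 * math.log(x) + 0.8 * math.log(x + 4000) - math.log(x + 3000)

    def f_prime(x):
        return 0.2 / x + 0.8 / (x + 4000) - 1.0 / (x + 3000)

    x = 5000.0            # starting guess
    for _ in range(20):   # Newton's method: x <- x - f(x)/f'(x)
        x -= f(x) / f_prime(x)

    print(round(x))       # about 4943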

Patrick said...

Hi Michael,

On reflection, the right-hand side of your logarithmic utility equation is an expected-value calculation, so I remain skeptical as to its applicability in a situation where you get only one kick at the can.

Also, I doubt that net worth is the proper baseline to use in that formula. Suppose I have a large net worth tied up in a pension fund, and a cash-flow problem. In that case, my net worth is not relevant, and a guaranteed $3000 could help substantially.

Perhaps a person's liquid assets are more relevant. Take each asset and discount it by some "liquidity factor" to compute your "liquid net worth". Or perhaps a person's short-term liabilities are more relevant.

Michael James said...

Hi Patrick,

Whether you get just one kick at the can or several, an expected value calculation will give the right answer as long as you have the right utility function.

A logarithmic utility function based on net worth isn't perfect, but it seems to work very well in most situations. However, it's not hard to concoct scenarios where it doesn't work. If someone has a gun to your head demanding repayment of a $3000 loan immediately, you should take the sure $3000. If you already have 99% of the world's wealth, it's hard to see how there could be any utility in having more.

I don't find your short-term liquidity cases very compelling as an argument against using net worth for the utility calculations. There are rare cases where people have genuine cash-flow problems. However, most cases are simply based on habits. If my after-tax pay is, say, $5000 each month, is it likely that I really have my life set up so that I must spend it all every month? Much more likely is that I have a bunch of stupid habits that waste money. I could stop ordering pizza, or stop buying clothes I don't need, or stop buying coffee at Starbucks. People tend to develop spending habits that exactly match their thousands of dollars of pay each month to the point where a found $100 seems like a big deal. This irrational human tendency makes it seem like we should take a sure $3000 over an 80% chance at $4000. Very few middle class adults are genuinely better off taking the sure $3000.

Having said all this, I fully understand why so many people would take the sure $3000. My instincts go in this direction as well. But any mathematical justification for this would be hopelessly contrived. Apparently, Jason Zweig chose an excellent example, because it seems to push our irrational buttons, making us want to take the safe route even though the math tells us that taking the chance is the right thing to do.

Patrick said...

Hi Michael,

Some interesting points there. Something to think about, certainly. However, I still think expected value calculations only apply when the Law of Large Numbers applies. Without the latter, what is the justification for using expected value to guide decisions?

Michael James said...

Hi Patrick,

Suppose that you are offered a chance to toss a fair coin to win or lose $100. I wouldn't do it, and I'm guessing that you wouldn't either. What if we changed it so that you either win $110 or lose $100? At this point, I would go for it as long as I was convinced that the coin toss would be fair. Others might not take this $110/$100 bet. However, as we increase the $110 to $120, $200, $1000, or $1,000,000, eventually any sane person would take the bet unless the situation was contrived so that losing $100 meant death or some other horrible outcome.

So, even though the law of large numbers does not apply because the bet will happen only once, we would all take a chance on losing $100 if the upside was big enough. The point at which the upside is just barely large enough to get you to take the bet defines a couple of points on your utility-of-money curve. The utility function is built up from your willingness to take chances, and so, by construction, comparing the expected utilities of the outcomes gives the right answer even for a one-shot bet.
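
As an illustration, if I assume a log utility curve, the break-even upside for that 50/50 lose-$100 bet comes out in closed form (the net worth figures below are just examples):

    import math

    # Under log utility with net worth x, the smallest win W that makes a fair
    # 50/50 bet (lose $100 / win W) acceptable satisfies
    #     0.5*ln(x - 100) + 0.5*ln(x + W) = ln(x),
    # which rearranges to W = 100*x / (x - 100).
    def break_even_win(x, loss=100.0):
        return loss * x / (x - loss)

    for net_worth in (1000, 10000, 100000):
        w = break_even_win(net_worth)
        # sanity check: at the break-even W, the bet and the status quo tie
        assert abs(0.5 * math.log(net_worth - 100) + 0.5 * math.log(net_worth + w)
                   - math.log(net_worth)) < 1e-12
        print(net_worth, round(w, 2))
    # 1000 -> 111.11, 10000 -> 101.01, 100000 -> 100.1

(Working backwards, the $110-for-$100 bet is worth taking, on this curve, for anyone with a net worth above $1,100.)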

Unfortunately, you can't just ask a person a series of betting questions to work out his utility function because people give contradictory answers. You can ask the same question using different words and get two different answers. There is a wide gap between the choices that people make and what is actually best for them. My assertion is that the log-utility function is reasonably close to what is actually best for people even though it is not close to the choices that they would actually make.

Perhaps we are too conservative in certain types of situations to protect ourselves from being tricked, as in the case where the coin being tossed is biased. There are other types of situations where people (particularly young men) are unreasonably reckless.

Patrick said...

Hi Michael,

We're venturing into St. Petersburg Paradox territory here. That very paradox, I think, demonstrates the weakness of expected utility when it comes to decision making, particularly when a game is played only once.

Michael James said...

Hi Patrick,

The St. Petersburg Paradox is resolved by recognizing that utility is bounded. As I mentioned earlier, there is little point in owning more than 99% of all wealth on earth. A log utility curve works nicely over "normal" ranges of wealth, but breaks down as you get into extreme wealth. Based on log utility, each wealth doubling is equally valuable, but I would say that over about $10 million, each successive doubling in wealth is worth less than the last.

This bound on utility resolves all versions of the paradox regardless of whether the bet is performed once or many times.

Patrick said...

Hi Michael,

Sorry, I don't buy that explanation either. I know it works mathematically, as does Bernoulli's notion that people neglect unlikely events, but neither one describes reality. Honestly, what is the reason you wouldn't pay $10,000 to play the St. Petersburg game? Is it really because you judge all winnings beyond $10M to have equal utility? I find that hard to believe; I submit it's because you know you're unlikely to win back your investment, plain and simple.

Perhaps a better example is the Allais paradox. That one is harder to explain with expected utility theory.

Michael James said...

Hi Patrick,

We need to make a distinction between the choices people make and what is actually best for them. The fact that people may consistently make a particular choice does not make the choice correct. I assert that log-utility works reasonably well to determine what people should do. What actually drives them to make crazy choices is another matter entirely.

As for the St. Petersburg Paradox, if we cap log-utility at $10 million, a person with a net worth of $100,000 should not be willing to pay more than $9.27 to play the game. This is a far cry from your $10,000 figure.
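
Here is roughly how that number comes out. I'm assuming the classic form of the game (the prize starts at $1 and doubles with each additional toss) and a utility curve that is simply ln(wealth), held flat above $10 million:

    import math

    CAP = 10000000   # utility is flat above $10 million

    def u(wealth):
        return math.log(min(wealth, CAP))

    def expected_utility(net_worth, price, max_tosses=60):
        """St. Petersburg: if heads first shows on toss k, the prize is $2**(k-1)."""
        total = 0.0
        for k in range(1, max_tosses + 1):
            prize = 2 ** (k - 1)            # probability of this outcome is 1/2**k
            total += u(net_worth - price + prize) / 2 ** k
        # Any longer run of tosses pays far past the cap, so it just adds u(CAP).
        total += u(CAP) / 2 ** max_tosses
        return total

    def max_price(net_worth):
        lo, hi = 0.0, 1000.0                # bisect for the break-even entry price
        for _ in range(60):
            mid = (lo + hi) / 2
            if expected_utility(net_worth, mid) >= u(net_worth):
                lo = mid
            else:
                hi = mid
        return lo

    print(round(max_price(100000), 2))      # about 9.27 for a $100,000 net worth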

The Allais paradox is based on people's choices, not what is actually best for them. It is clear that people should prefer either both 1A and 2A or both 1B and 2B, depending on their utility function. The fact that many people make neither of these consistent pairs of choices makes them irrational.
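
To see why, here is a quick check using what I believe is the usual form of the Allais gambles (the exact payoffs are my assumption; the argument doesn't depend on them, because the two differences below are algebraically identical for any utility function):

    import math

    # Assumed Allais gambles:
    #   1A: $1M for sure              1B: 89% $1M, 10% $5M, 1% nothing
    #   2A: 11% $1M, 89% nothing      2B: 10% $5M, 90% nothing
    one_a = [(1.00, 1000000)]
    one_b = [(0.89, 1000000), (0.10, 5000000), (0.01, 0)]
    two_a = [(0.11, 1000000), (0.89, 0)]
    two_b = [(0.10, 5000000), (0.90, 0)]

    def eu(lottery, u):
        return sum(p * u(prize) for p, prize in lottery)

    net_worth = 100000                        # arbitrary example

    def u(prize):
        return math.log(net_worth + prize)    # log utility, as one example

    # Both differences reduce to 0.11*u($1M) - 0.10*u($5M) - 0.01*u($0),
    # so (up to floating-point rounding) they are always equal:
    print(eu(one_a, u) - eu(one_b, u))
    print(eu(two_a, u) - eu(two_b, u))

With this particular utility curve and net worth, both differences come out negative, so an expected-utility maximizer would pick 1B and 2B; the point is that no utility function can justify preferring 1A together with 2B.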

Patrick said...

Hi Michael,

You're right: sometimes in economics, the line is blurred between theories for making rational choices and theories for explaining what people actually do. Allais himself claimed that choosing 1A and 2B was rational, and I take it you would dispute that.

Perhaps I can conclude this way: given that even experts in economics can't agree that expected utility is a good metric for making rational choices, it's unfair to ridicule someone who picks the sure $3000 over the 80% chance of $4000.