The question that interests me is: what is the effect of spreading out one large investment into many smaller investments? I specifically want to exclude the effect of investing earlier or later, as that will naturally have predictable results.

To answer this question, I started with the daily closing values of the S&P 500 over the last 50 years. I simulated a series of investment strategies, each of which involves investing N dollars every N trading days, then looking at the total investment value after waiting for N/2 trading days. A year has about 250 trading days, so the time horizon I used was 256 days, and the N values were all powers of 2 ranging from 1 to 256. Every dollar in any of these schemes is invested for an average of 128 days, which means no scheme has the money invested any longer than any other scheme. Each day for the last 50 years, I computed the value that each scheme would produce over the prior "year" (where a year is 256+N/2 trading days).
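The procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the original code: the function name `scheme_value`, the purchase schedule (buying at the end of each N-day interval, so every dollar is held for an average of about 128 days), and the synthetic random-walk prices standing in for the real 50 years of S&P 500 closes are all my assumptions.

```python
import numpy as np

def scheme_value(prices, n, horizon=256):
    """Value of the 'invest $n every n trading days' scheme.

    Buys $n of the index at the end of each n-day interval (days
    n, 2n, ..., horizon), then values the holdings n//2 days after
    day `horizon`, so every dollar is held ~horizon/2 days on average.
    """
    shares = sum(n / prices[day] for day in range(n, horizon + 1, n))
    return shares * prices[horizon + n // 2]

# Synthetic random-walk closes stand in for the real S&P 500 data.
rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0.0003, 0.01, size=400))

for n in [1, 2, 4, 8, 16, 32, 64, 128, 256]:
    print(f"N={n:3d}  value of $256 invested: ${scheme_value(prices, n):.2f}")
```

The full study repeats this for every possible starting day in the data set and averages the results; that outer loop is omitted here.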

So, with a total $256 investment over a year, each scheme produces the following results:

| N (days) | Average | Std deviation |
|---------:|--------:|--------------:|
| 1 | $266.73 | 8.4% |
| 2 | $266.77 | 8.4% |
| 4 | $266.77 | 8.4% |
| 8 | $266.77 | 8.4% |
| 16 | $266.77 | 8.4% |
| 32 | $266.77 | 8.5% |
| 64 | $266.77 | 8.6% |
| 128 | $266.77 | 8.9% |
| 256 | $266.65 | 10.3% |

So on average, it seems spreading out your investments over the course of the year has very little effect. I wasn't expecting that. I guess you learn something every day.

According to these results, there's no point at all in investing more often than once per month. Even with a single lump sum invested for a year, your standard deviation is only about 23% higher than it would be with daily investments.

Is dollar-cost averaging a myth? What other conclusion could I draw?

## 6 comments:

Patrick: I want to make sure I understand what you have done. You have D days of market data, where D is about 12500. For N=1, you invested a dollar a day for the first 256 days of market data, and checked the return on the 257th day (or 256.5-th day?). This was repeated starting from the second day of market data, then the third, etc., until running out of data. Then you averaged the results of the D-256 runs. For N=2, you invested $2 every other day and checked the returns after 257 days (so that the average dollar would have been invested for 128 days). These results were averaged over D-257 runs. This continued until the N=256 case where the results were averaged over D-384 cases.

Perhaps you did things a little differently from this to begin each case. For N=256, for example, you may have used the first day of market data as the end of a year, and checked the market on the 129th day. In this way, all of the cases could have had the same number of runs.

Let me know if I'm close.

Hi Michael. That sounds right. I dealt with the start conditions simply by using about 15000 days of data and then ignoring the results from the first few thousand, so at the start of my averaging period (which was about 12500 days, as you said), the simulations were already in a steady state.

Thanks very much for taking a look at this.

Your analysis for N=256 is essentially finding the average 128-day return. For N=128, you are finding the average 64-day and 192-day returns and averaging these values. This continues to N=2 where you are averaging the average 1-day, 3-day, ..., 255-day returns. N=1 is a little different because you get into half days to keep the same pattern.

Taking your N=256 result, we get a 128-day average return of r=266.65/256-1=4.160156%. For N=128, we expect to get a result equal to ($128)*((r+1)^(1/2)+(r+1)^(3/2))=$266.7054. This difference of 6 cents is only half of the 12 cents that you found, but it's close enough to show that the difference is small. Moving to the N=64 case, the difference I calculated was a little over a penny. Subsequent differences were much less than a penny.
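This back-of-envelope check is easy to reproduce numerically. The sketch below uses the empirical N=256 figure from the table in the post; the variable names are mine.

```python
# Back out the average 128-day return r from the N=256 result
# ($256 invested once, worth $266.65 after 128 days).
r = 266.65 / 256 - 1                         # ~4.160% per 128 days

# N=128: two $128 purchases, held for half and one-and-a-half
# 128-day periods respectively.
predicted_n128 = 128 * ((1 + r) ** 0.5 + (1 + r) ** 1.5)
print(f"predicted N=128 value: ${predicted_n128:.4f}")
```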

So, I'd say that your experimental results are consistent with theory. The reasoning behind dollar-cost averaging is fundamentally flawed. Making investments every 3 months or less gives lower volatility and slightly higher returns than a single annual lump sum, but within that time frame there is little advantage to investing more often.

Maybe I'll write a post to discuss this.

Thanks, Michael. That's very interesting. Until I did this study, I thought dollar-cost averaging actually worked. I'd be very interested to see your article if you decide to write it.

If I understand correctly, it sounds like you've added up the same numbers multiple ways, which couldn't possibly give anything but the same value.

For example, the value of investing $1 every day for 256 days will always equal the average of investing $2 every odd day and $2 every even day, since the sum of those two schemes is investing $2 every day, and the average is therefore $1 every day.

I don't know the best way to deal with it, though a plot of the distributions would be interesting to see.

FF: I think you're pretty much right. The bit that surprises me most is that the standard deviation didn't change much either.
