You know the saying “all’s well that ends well”? Well, I disagree. Ending well doesn’t mean that all is well. Judging a scenario solely by its ending is an example of results-oriented thinking, a logical fallacy that has plagued multicellular organisms since the days of primordial soup.
In brief, results-oriented thinking is when you forgo logic and instead use the individual outcome of a decision to judge whether your thought process was “right” or “wrong.” Winning doesn’t always mean your strategy was right. And losing doesn’t mean your strategy was wrong. Let’s look at an example.
Easy Example: Coin Flip
Let’s bet; I’ll flip a coin. If it lands heads, then I pay you $200. If it lands tails, then you pay me $100. This clearly favors you.
We start. I flip the coin once. It lands tails, and you pay me $100. In theory, you were supposed to have the advantage—at least, over the long run. But in this one example, you lost. And it doesn’t feel too good, does it? Your hard-earned money is gone in an instant. All the “theoretical outcomes” in the world won’t change that.
But the real test occurs when I offer the same bet again. Your brain might be a bit clouded with negative emotion. The results of the previous loss are still weighing on your mind. Will you let prior outcomes affect your future decision making? The math has not changed. The rational choice has not changed. The odds are still in your favor.
The choice should be easy. You have a 50% chance of winning $200, which contributes 0.5 * $200 = $100 to your expected value. You also have a 50% chance of losing $100, which contributes 0.5 * -$100 = -$50. So your total expected value is $100 – $50 = $50 per flip. If we flipped the coin 100 times, you’d expect to profit about 100 * $50, or $5,000.
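The arithmetic above can be checked with a quick simulation. Here's a minimal Python sketch (the function names and the seed are mine, not from the article) that computes the per-flip expected value and simulates a run of flips:

```python
import random

def coin_flip_ev(win=200, lose=100, p_win=0.5):
    """Expected value of one flip: p_win * win - (1 - p_win) * lose."""
    return p_win * win - (1 - p_win) * lose

def simulate(flips, win=200, lose=100, seed=42):
    """Simulate repeated flips of the bet and return total profit."""
    rng = random.Random(seed)
    return sum(win if rng.random() < 0.5 else -lose for _ in range(flips))

print(coin_flip_ev())   # 50.0 per flip
print(simulate(100))    # varies run to run, but averages near 5000
```

Run the simulation a few times with different seeds and you'll see individual runs of 100 flips scatter widely around $5,000; any single run, like any single flip, tells you little about whether the bet was worth taking.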
But after losing $100 right off the bat, your results-oriented brain might suggest you not take the bet—even though that’s not the logical choice.
And that’s why results-oriented thinking is not logical.
Bad beats hurt
Behavioral economics is the pop science du jour, and the subtopic of loss aversion is one of its most cited ideas. Loss aversion refers to the idea that people would rather avoid a loss than acquire an equivalent gain. For example, the pain of a $100 speeding ticket is greater than the joy of winning a $100 lotto ticket.
Loss aversion helps explain the prevalence of results-oriented thinking. Animals are wired to avoid pain. Anytime there is a painful outcome at play, we have a tendency to over-weigh that outcome.
Over time, the idea that risk leads to loss can become so ingrained that any risk starts to seem scary. A common comment I receive: someone took a big loss in the stock market years ago, and therefore refuses to put any more money into stocks. We can all empathize with that feeling. But mathematically, it’s not rational.
Both gamblers and investors (ironic?) have to constantly fight results-oriented thinking. For example, a poker player might lose three hands in a row after she starts with a pair of aces—the best hand. Should she re-consider whether aces are actually good? We know the answer is no—a pair of aces will always be best, whether they’ve been recently winning or losing.
Investing is harder, since the odds aren’t as clear. If Aunt Ethel starts saving money in 1998 and then gets hit by the dot-com crash, should she conclude that the stock market is a bad bet? By 2002, her investment is down 20%. But by 2020, her investment would be up 200%. So do we care about either result?
We consider the results, but don’t depend solely on them. Stock market results make no promises, only suggestions. And they suggest that you’ll typically make money over long periods of time. The longer you wait, the higher your odds of success.
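The "longer you wait, the higher your odds" claim can be illustrated with a simulation. The sketch below is mine, and its numbers (7% average annual return, 15% standard deviation, normally distributed and independent year to year) are illustrative assumptions, not market data:

```python
import random

def prob_of_gain(years, trials=20000, mean=0.07, stdev=0.15, seed=1):
    """Estimate the probability a portfolio ends above its starting value
    after `years` of annual returns drawn from an assumed normal
    distribution. The 7% / 15% figures are hypothetical, for illustration."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        value = 1.0
        for _ in range(years):
            value *= 1 + rng.gauss(mean, stdev)
        if value > 1.0:
            wins += 1
    return wins / trials

for years in (1, 5, 20):
    print(years, prob_of_gain(years))
```

Under these assumptions the estimated probability of ending ahead climbs steadily with the holding period, which is the article's point: a single bad year (or a single bad result) says little about the long-run odds.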
Results-oriented vs. learning from the past
“I jumped off the barn roof and broke my ankle. I learned my lesson. In the future, I won’t jump off the barn roof anymore.”
BUT, that’s results-oriented thinking! He only decided that roof jumping was bad because he had a bad result. I just learned about this idea on some two-bit blog.
Well…kinda. Maybe. Not really.
The real world is not a coin-flip nor a deck of cards. We don’t typically have the luxury of knowing probabilities before an event occurs. Instead, we have to rely on some combination of past results, common sense, analysis, prediction, estimated probabilities, etc.
You should combine information from many different sources. For example, you might do well to understand how past attempts have gone. Common sense tells you roof-jumping is a high-risk choice. You could look up data online about health outcomes after falling from certain heights. All this data would probably come together to tell you that your first jump off the barn was stupid, and to do it again would be just as stupid.
Using past results data to inform a future decision is not inherently negative. In fact, it can be very important. But relying solely on individual outcomes can be a problem, especially when not understanding the bigger picture.
Statisticians use Bayesian analysis to update estimated probabilities as new evidence arrives. For a crazy example, how might you find a submarine that tragically sank, or a plane that vanished off the radar? In the case of the USS Scorpion or Air France 447, the answer is Bayesian analysis.
Search crews start with all possible causes and all possible locations, and apply estimated probabilities to those causes and locations. The result of this task is a map (literally) of probabilities that point to the most likely locations to find the wreckage. Then, a search path is constructed that starts with the most likely locations and works its way downward. And that path is constantly readjusted and recalculated as evidence is (or is not) found.
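The readjustment step described above is a Bayesian update. Here's a toy Python sketch of it (the three-cell map, the cell probabilities, and the 90% detection rate are all hypothetical numbers of mine): after an unsuccessful search of the most likely cell, probability shifts toward the unsearched cells.

```python
def bayes_update_after_failed_search(priors, searched, p_detect):
    """Update location probabilities after an unsuccessful search of one cell.

    priors:   dict mapping cell -> prior probability the wreck is there
    searched: the cell that was searched and came up empty
    p_detect: probability the search would spot the wreck if it were there
    """
    # P(empty search | wreck in searched cell) = 1 - p_detect
    # P(empty search | wreck in any other cell) = 1
    posterior = dict(priors)
    posterior[searched] = priors[searched] * (1 - p_detect)
    total = sum(posterior.values())          # renormalize so it sums to 1
    return {cell: p / total for cell, p in posterior.items()}

# Hypothetical three-cell map: cell A looks most promising, so search it first.
priors = {"A": 0.6, "B": 0.3, "C": 0.1}
posterior = bayes_update_after_failed_search(priors, "A", p_detect=0.9)
print(posterior)  # cell B is now the most likely location
```

Note that the failed search doesn't drop cell A's probability to zero; the search itself might have missed the wreck. That is the sense in which Bayesian analysis uses results without being ruled by them.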
Bayesian analysis uses results as a vital component of determining probabilities. But results aren’t the only factor it considers. Also important: it considers all results, not just a cherry-picked (or particularly painful) subset. I found The Signal and the Noise to be an excellent book that explains Bayesian analysis with vivid examples and clear language.
Derek Jeter’s defense
Derek Jeter, if you don’t know him, was the shortstop for the New York Yankees from 1995 to 2014. Just this week, he was voted into the Baseball Hall of Fame. Congrats Derek! It’s well-deserved. Jeter will go down as one of the most successful hitters in the history of the game.
But for the sake of this article, I want to focus on Jeter’s defense. If you ask a baseball fan about Jeter at shortstop, they’d conjure up an image of Jeter sprinting toward a ground ball, scooping it up on the run, then jumping in the air to throw the ball to first base. Like this…
Jeter made that his “signature” play, successfully completing it in the most high-pressure situations.
But in the modern age of baseball statistics, there are methods of evaluating defensive talent that go beyond the highlight reel.
For example, the statistic defensive runs saved (DRS) estimates how many runs were prevented by Jeter’s presence on the field. A great fielder, presumably, would have a strong DRS impact. But Jeter’s DRS was actually one of the worst in the league during his tenure.
The same holds true for other detailed defensive metrics—such as ultimate zone rating and range runs—in both of which Jeter placed dead last among shortstops since 2003.
These are the kind of statistics one could discover after lots of observation and data mining. They aren’t as obvious as a running, jumping, bang-bang play at first base.
Jeter, it turns out, often had to rely on his athletic theatrics only because he reacted slowly to batted balls and had such a limited range. Observing Jeter’s highlights led many fans to a false results-oriented conclusion.
Results in your life
If your neighbor won the lottery, would you go out and start buying tickets? While I understand the desire, I hope you’d agree with me that the rational choice is no. The lottery is still probabilistically rigged against you.
If a meteor struck the mayor in the forehead, would you add meteor strikes to your insurance policy? Again, while space rocks are intimidating, the rational answer is no. Your forehead is safe (or is it?!)
There are scenarios we encounter every day—missed deadlines, traffic jams, an overflowing Wegmans parking lot—where results-oriented thinking leads to incorrect conclusions about our choices.
So the next time you swear you’ll start showing up to the airport four hours early, consider whether that’s rationally best, or simply results-oriented rashness meant to prevent you from ever missing a flight again.
Now, about that coin flip…don’t you owe me some money???