Overconfidence in FP&A and Its More Predictable Future with AI
I was getting dinner with a good friend when the topic of year-end bonuses came up. I had long expected a small check this year, as my firm missed its revenue forecast by nearly 20%.
It is not uncommon for companies to have a rosy outlook on the future. A recent study found that public companies' earnings guidance is wrong about 70 percent of the time. To attract and retain investors, company leadership is pressured to launch new service lines, create new products, and capture more customers to meet stakeholders’ expectations. But the reality is, perpetual growth is rarely realistic.
So why do companies consistently make the same mistake? The answer may not be as malicious or deceitful as you might think. It is simply human nature to fall into the trap of “overconfidence.”
In the book "Thinking, Fast and Slow," Daniel Kahneman discusses the “planning fallacy.” As human beings, we undervalue the unknowns, and the “hubris hypothesis” prompts us to make overly optimistic predictions. Ironically, optimism is socially praised. Firms hire decision-makers who project a high level of confidence and sanguineness but are prone to neglecting statistical facts and rational analysis. Kahneman argues that the future carries a high degree of randomness and, believe it or not, often cannot be predicted.
Yet we always feel we can control the future. Restaurant owners believe they will succeed, forgetting the many businesses that failed in the same suite before them. CEOs put faith in their new business strategies and project a 20 percent increase in revenue before launch, only to face the harsh reality of product-market fit. So what do we do?
In my opinion, the answer lies in “reserved optimism”: a positive mindset tempered by critical, objective judgment. It is a blend of faith and statistical fact, an expectation adjusted for bias.
If you have a background in mathematics, you are probably no stranger to the formula for the bias of an estimator: bias(θ̂) = E[θ̂] − θ, where θ̂ is our estimate and θ (the Greek letter theta) is the true, unknown parameter of interest. Essentially, this formula says that what we expect the result to be differs from the truth by the size of our bias, and our expectations should be adjusted accordingly. In practice, there is rarely just one bias at play; we tend to neglect many of them.
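To make the formula concrete, here is a toy simulation (all numbers are illustrative, not from any real forecast): the "naive" sample variance, which divides by n, systematically underestimates the true variance, much like an optimistic forecast can be systematically off even when every input is honest. Dividing by n − 1 removes the bias.

```python
import random

random.seed(42)

def sample_variance(xs, ddof):
    """Sample variance; ddof=0 is the biased estimator, ddof=1 the corrected one."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - ddof)

TRUE_VARIANCE = 1.0   # population variance of random.gauss(0, 1)
trials = 20000
n = 5                 # small samples exaggerate the bias

biased, unbiased = 0.0, 0.0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    biased += sample_variance(xs, ddof=0)
    unbiased += sample_variance(xs, ddof=1)

biased /= trials      # E[theta_hat] for the naive estimator
unbiased /= trials    # E[theta_hat] for the corrected estimator

print(f"E[biased estimator]   ~ {biased:.3f}  (bias ~ {biased - TRUE_VARIANCE:+.3f})")
print(f"E[unbiased estimator] ~ {unbiased:.3f}  (bias ~ {unbiased - TRUE_VARIANCE:+.3f})")
```

The biased estimator averages close to (n − 1)/n = 0.8 rather than 1.0: a consistent, structural miss, exactly what bias(θ̂) = E[θ̂] − θ captures.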
What kinds of biases should we pay attention to when forming an expectation? First, Anchoring Bias: our System 1, the fast, reactive brain, latches onto the first piece of information we receive, blinding us to further evidence. Kahneman pairs this with WYSIATI ("what you see is all there is"), our habit of building conclusions only from the information in front of us. Second, Confirmation Bias is also commonly spotted in decision-making errors: the tendency to give supporting arguments more weight than contradicting ones. Third, Overconfidence Bias drives decisions based solely on a positive outlook on the situation while overlooking the statistical facts. And these are just a few.
Looking back at my experience in financial valuation and forecasting, I rarely see robust processes that catch these biases. I would not attribute this to a lack of professionalism, nor would I challenge my peers’ intellectual rigor, but the truth is, unless it is a high-stakes scenario, many FP&A professionals are victims of their own blind spots and fallacies.
So what should we consider for financial budgeting and forecasting? Scenario analysis is a good starting point. By modeling out worst-, base-, and best-case scenarios, you get a reasonable picture of where your future may lie. But to go a step further, statistical analysis should be woven into these scenario models. It takes extra meticulousness to estimate the probability of each scenario rather than giving them all equal weight.
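A minimal sketch of what probability-weighted scenario analysis looks like in practice; the revenue figures and probabilities below are hypothetical placeholders, not data from any real company:

```python
# Hypothetical three-scenario revenue forecast (figures in $ millions).
scenarios = {
    "worst": {"revenue_m": 80.0,  "probability": 0.25},
    "base":  {"revenue_m": 100.0, "probability": 0.55},
    "best":  {"revenue_m": 130.0, "probability": 0.20},
}

# Probabilities must sum to 1 -- the step that is silently skipped when
# every scenario is implicitly treated as equally likely.
assert abs(sum(s["probability"] for s in scenarios.values()) - 1.0) < 1e-9

# Expected revenue, weighted by how likely each scenario actually is.
expected_revenue = sum(
    s["revenue_m"] * s["probability"] for s in scenarios.values()
)

# Compare with the naive equal-weight average of the three cases.
naive_average = sum(s["revenue_m"] for s in scenarios.values()) / len(scenarios)

print(f"Probability-weighted revenue: ${expected_revenue:.1f}M")
print(f"Equal-weight average:         ${naive_average:.1f}M")
```

Here the equal-weight average comes out higher than the probability-weighted figure because the optimistic tail is over-counted, which is precisely the overconfidence trap in miniature.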
Last but not least, and the most challenging of all: adopting a “reserved optimism” mindset. Companies should be open to diverse opinions and, where possible, model them into their forecasts and projections. Employees should practice critical thinking and independent judgment instead of mindlessly rallying behind every exciting idea.
The dilemma is that building these robust models for more accurate projections is costly and time-consuming, as if we were not already working overtime and eating lunch at our desks. But here is where I think AI will become the bridge between academia and the professional world. AI infrastructure and ML models can be pivotal, processing large amounts of statistical data and combining them with rigorous formulas. As the field matures and processing power grows, this burdensome task may soon be automated. At least, I hope so.
But at the end of the day, AI is not the final solution. In an industry like real estate, where value is partly sentimental, AI can fall short of making accurate predictions. Although sentimental value seems too nuanced to model right now, LLMs may one day be able to capture it. There is still a long way to go, but it is certainly not unachievable.