The Outliers: Strategize Through First-Principles Thinking in a World of Sheep
Four months after being laid off from my job at EY, I found myself hopelessly dragging yet another rejection email to the “dead interviews” folder. Maybe my idea of making a lateral move from real estate finance to business operations and strategy in tech was too quixotic. Maybe I didn't have what it took. With every additional rejection, my insecurity deepened, my doubts intensified, and my anxiety piled on. I was stuck behind the very hurdle many had warned me about: it isn't easy to break into a new industry.
It wasn't so much about not being able to get interviews. Over the course of four months, I interviewed at some of the largest tech companies in Silicon Valley—Meta, Uber, Apple, just to name a few. The interview processes for business operations positions were more or less the same: recruiter screen, hiring manager interview, case study, and a final round consisting of a case presentation and behavioral interviews. Yet, the pattern was consistent—I kept getting stuck at the final round, unable to secure an offer. What knowledge gap was holding me back from qualifying for these positions?
As many consulting-trained professionals would do, I went on an Internet spree to find the best “bizops frameworks” and “case study structures” I could deploy. I wholeheartedly believed the issue lay in my lack of systematic training, and rigidly applying these industry frameworks seemed like the safest approach.
I soon realized this couldn’t have been further from the truth.
The Inspiration
While I was waiting to hear back from Uber’s recruiting team, I met with my friend Kyle for a work session at a café in the Arts District. Kyle formerly worked in business operations at one of the top tech companies in the Valley, and I valued his opinion deeply. While anxiously waiting for Uber’s decision, I wanted to get his impression of my case. The case involved analyzing a dataset of UberEATS' historical operations, identifying three operational issues, and providing strategic recommendations.
As Kyle went through my slides, I watched his face closely. By the fourth slide, his brows furrowed and scrunched toward the center of his face. I already knew it was an “uh-oh” situation. Kyle painstakingly read through all 12 slides of my “strategic recommendations,” and, with nothing great to say, remarked: “If you think bizops is about coming up with a list of strategies, then you are absolutely wrong.”
What is it about then?
“It is about understanding the relationship of things, and that’s where you should start,” Kyle said as he put his headset back on and delved back into his work.
The Feedback
To figure out how to clear this hurdle, I first needed to see it clearly. During one of my interviews, I connected well with the hiring manager at ABC Company. Let’s call him Allen. After being rejected by ABC Company, I felt more upset about missing the chance to work with Allen than about the decision itself. Allen had a rebelliousness about him, unconventional and maybe even unpopular, but I felt an immense trust in him and gravitated toward him. His sarcastic manner of speaking was raw, but it was honest.
I reached out to Allen on LinkedIn with a brief message along the lines of: “Thank you for considering me for the position. I’ve been stuck at the same stage in multiple interview processes, so I’d really appreciate any pointers you could provide.” Within hours, Allen emailed me an encouraging message and scheduled time to chat.
Before connecting with Allen, I had a basic understanding of where my problems lay, as Kyle had implied: a mistaken focus on breadth over depth and a superficial attempt to derive strategies without understanding underlying relationships. But these issues seemed vague to me. What does it mean to go meticulously in-depth on a case with such a broad prompt? And, more importantly, how?
During our call, Allen put the pieces together for me.
Dare to Question
The case I worked on for ABC Company involved analyzing product defect data from the past year to identify operational bottlenecks. Let’s imagine this case in the context of an ecommerce business, such as Amazon. The defects included:
Broken item (refund without return)
Order cancellation (full refund)
Missing order (full/partial refund)
Others (full/partial refund)
Although I had many questions upon first reviewing the dataset, I took it all at face value. This is where I, like many former consultants, fell short. We’re trained to work with the data provided by the client, but we often fail to question its legitimacy. For example, what exactly constitutes a “broken” item? Is it merely a cosmetic flaw, or is it completely non-functional? In other words, how do we define “broken”?
To go a step further, it’s crucial to question the underlying causes of such defects. Are they the merchant’s fault? Are they caused during delivery? Or could they be fraudulent claims from customers trying to exploit the system?
You might think this second-order reasoning would suffice, but Allen pushed even further.
“If customers are making fraudulent refund requests, what’s their rationale?” Allen asked. “Are they price-sensitive? Do we see repeat offenders, or is this behavior isolated? For each group, what strategies can we implement to address their unique behaviors?”
“It is usually a good sign when you have more questions than answers,” Allen added. “Being skeptical is just the start.”
Build Your Assumptions
In order for your strategic recommendations to work, you need underlying conditions to support them. These are the assumptions you base your hypotheses on. If these assumptions don’t hold true, your recommendations will fail. That’s why clearly stating assumptions is critical—you’re constructing a hypothetical world around them.
Assumptions can be qualitative or quantitative.
For example, in the ecommerce case, one strategic recommendation might be to mandate photo evidence for product defect claims before issuing refunds. The qualitative assumption here is that customers have less incentive to act dishonestly when required to provide evidence. Quantitatively, you might assume that 5% of customers file fraudulent claims. If this policy is enforced, how much revenue could the company save?
Other Data Needed
In continuation of the previous thought process, we now enter another territory—requesting additional data. If 5% of customers are constantly asking for refunds for “defective products,” how do we quantify the amount of revenue saved by enforcing a photo-evidence policy? Consider the underlying equation for this calculation:
Total Revenue Saved =
Total Number of Customers × 5% × Average Refund Value per Defect × Average Number of Refunded Defects per Customer
The obvious missing inputs are the total number of customers, the average refund value per defect, and the average number of refunded defects per customer.
From here, you can go even deeper. The goal is to structure your analysis meticulously to ensure it is as accurate as possible. For instance, can we segment the data further by attributes of the item? Maybe based on the price of the item? Can we identify any patterns or seasonality in the data?
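To make the arithmetic concrete, here is a minimal Python sketch of the estimate above. Every number in it (the customer count, the 5% fraud rate, the refund values, and the price-tier mix) is an assumption chosen for illustration, not real data from the case.

```python
# Hypothetical back-of-the-envelope model of the "photo evidence" policy.
# Every figure below is an assumption for illustration, not real data.

total_customers = 1_000_000       # assumed size of the customer base
fraud_rate = 0.05                 # assumed share of customers filing fraudulent claims
avg_refund_value = 20.0           # assumed average refund per defect, in dollars
avg_refunds_per_customer = 2      # assumed refunded defects per fraudulent customer

# Revenue saved if the policy deters all fraudulent claims
revenue_saved = (total_customers * fraud_rate
                 * avg_refund_value * avg_refunds_per_customer)
print(f"Estimated revenue saved: ${revenue_saved:,.0f}")

# Going a level deeper: segment the same estimate by (assumed) item price tier.
# Each tier maps to (share of customers, average refund value in that tier).
tiers = {"low": (0.60, 10.0), "mid": (0.30, 25.0), "high": (0.10, 80.0)}
saved_by_tier = {
    name: total_customers * share * fraud_rate * value * avg_refunds_per_customer
    for name, (share, value) in tiers.items()
}
for name, saved in saved_by_tier.items():
    print(f"  {name}: ${saved:,.0f}")
```

Notice what the segmentation surfaces: under these made-up numbers, high-priced items account for only 10% of customers yet contribute the largest share of recoverable revenue, which is exactly the kind of pattern a flat average would hide.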
Take Risks
I realized there are truly no right answers to these questions. It’s not about coming up with the most actionable strategies or even the most unique ones. Instead, it’s about being skeptical, staying curious, and clearly communicating how you would structure your testing.
First-principles thinking begins with questioning the prompt you’re presented with: What does it mean for a product to be “defective”? This is where 90% of candidates fail—they overlook the most fundamental condition on which their entire analysis is built.
From there, let your curiosity guide you to second-, third-, or even fourth-order reasoning. What caused these defects? Who is responsible for them? How would you structure your testing? What other sub-hypotheses are necessary for your solution to work? What additional data might you need to refine your analysis further?
The Realization
Saturday morning, Sarah and I decided to go on a short hike up to the Griffith Observatory. My mind was clouded as I waited to hear back from yet another interview I felt I had just bombed.
“How would you answer the question, ‘How do you go about testing if a company is paying fair wages?’” I asked Sarah. This was a real interview question that had completely caught me off guard. In the moment, I scrambled to list all the thoughts that popped into my head: maybe look at the team’s revenue return compared to their wages, or perhaps compare it to similar companies’ pay ranges.
“I think the first question is: how do you define ‘fair’?” Sarah said. “And if they let you define it however you want, then you need to figure out what ‘fair’ means to you.”
Sarah’s words were a lightbulb moment. I realized the first step is to define what success looks like. With a clearer, narrower definition of success, your recommendations become more targeted and your progress becomes measurable.
Later that afternoon, we went for a scalp treatment in Alhambra. The treatment included a scalp analysis and a head spa, which was genuinely enjoyable. Sarah had been frustrated by her hair loss, and we were excited to see if the analysis could uncover issues and provide explanations.
The technician sat us down in front of a monitor and magnifying detector. She pointed the detector at Sarah’s scalp and said, “Your scalp looks healthy, just a little oily.”
“But I’ve been experiencing hair loss,” Sarah said, visibly frustrated.
“Maybe you use your phone too much,” the technician replied.
I gasped and let out an audible laugh. It didn’t take a magnifying detector to tell that Sarah’s scalp was oily after our sweaty hike, and it definitely didn’t require a specialist to tell us that phone usage might contribute to hair loss.
“This is me during interviews,” I joked to Sarah as she sat down next to me after the "insightful" analysis.
Jokes aside, this was the moment I realized why I had been failing my interviews. I had hoped the technician would offer industry knowledge or meaningful insights—something like, “Your hair is falling out because of X, Y, Z” or “You should try A, B, C to help with that.” Instead, all we got were basic, superficial observations.
Interviews are the same. To stand out, you need to go below the surface. Stating the obvious won’t cut it. Allen had told me he’d given the same interview prompt to dozens of candidates, and they all came up with similar strategies and answers. What sets you apart isn’t the uniqueness of your answers—because often they aren’t that different—it’s your ability to challenge assumptions, dig deeper, and communicate your structured thought process.
The Why and The How
One-dimensional thinking is ubiquitous in today’s world. I’m not making excuses for myself, but rather diagnosing possible causes—and perhaps some of them will resonate with you.
In the professional world, especially in third-party service firms, there are limited opportunities to challenge the status quo. Often, there’s friction when requesting additional data from clients, so we learn to work with what’s given—unless something is absolutely indispensable to our analysis.
In school, we master following prompts. Clear rubrics and guidelines dictate how we work, and playing it safe becomes second nature. I imagine there’s more creative freedom in liberal arts, but in STEM fields, answers are typically black and white.
In social life, we’re discouraged from questioning. Agreeability correlates directly with likability. We conform, accept, and hold back when things don’t make sense, all to better fit in.
Or maybe, like me, you’ve had authority figures in your life who haven’t been kind to your ideas. Maybe you were often told you were wrong, too immature, or naive—and your ideas were replaced with “better” ones. Over time, you stopped being creative and stopped speaking up for yourself.
No matter your “why,” the world can benefit from a little more pushback. Every question, every challenge, gets you closer to the truth.
This way of thinking has also changed how I view friendships and relationships. While it’s comforting to always be agreed with, isn’t it better to have someone who challenges you and pushes you to grow? At the same time, it also takes a certain mindset to welcome pushback and criticism. Is your goal to always be right, or to improve and get closer to universal truths?