I built a GPT that doesn’t give me answers

Most founders I know use AI to code. I use it to run consulting interviews with myself.
Startups rarely fail from lack of information. They fail because they chase the wrong path. That’s why I built a custom GPT called Startup Strategist. It doesn’t try to be smart. It tries to think clearly. Its job is to help me break down ambiguous problems, the way a McKinsey consultant would.
This came out of a simple frustration: most language models answer too quickly. But most of my problems don’t have clear answers. And if an answer shows up too fast, I usually don’t trust it. What I need is a tool that helps me navigate uncertainty, not resolve it.
To build it, I borrowed two frameworks I’ve found helpful over the years: the idea maze and the issue tree. The first, popularized by Chris Dixon, maps the branching paths an idea can take and where each one leads. The second, common in consulting, breaks a problem into a structured tree of smaller, testable questions. Together, they form a kind of mental scaffolding for exploring hard ideas.
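Custom GPTs are configured with plain-language instructions rather than code, but to make the scaffolding concrete, here is a rough sketch of how the same two frameworks could be expressed as a system prompt through the OpenAI Python SDK. The instruction text is a loose paraphrase of the idea, not the actual Startup Strategist prompt, and the model name is just a placeholder.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative paraphrase of the scaffolding, not the real Startup Strategist prompt.
SYSTEM_INSTRUCTIONS = """\
You are a strategy consultant, not an answer engine.
For every question:
1. Restate the question and challenge its framing before answering.
2. Build an issue tree: decompose the problem into distinct sub-questions.
3. List competing hypotheses under each branch (the idea maze), including ones
   that would make the original question moot.
4. Do not conclude. End by asking which branch to test and what data would
   falsify it.
"""

def ask(question: str) -> str:
    """Send one question through the scaffolding and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Why do fifth-freedom LCCs underperform local budget airlines "
              "on Taiwan-Japan load factor?"))
```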
To see how well this worked, I ran a head-to-head test. I gave both Startup Strategist and Gemini Pro the same challenge:
“Why do Southeast Asian low-cost carriers operating fifth-freedom flights from Taiwan to Japan often underperform local budget airlines on load factor?”
It’s a messy, real-world question with no clean answer. But it’s exactly the kind of ambiguity startups face.
Gemini Pro delivered a fluent, well-researched synthesis. It quoted sources, cited statistics, and summarized key trends. But it skipped a crucial step: it accepted the question at face value and moved straight to conclusions. It never reframed or challenged the initial observation, and some of its assumptions were off. In the end, it cited incorrect data, misread the load-factor dynamics, and produced a convincing but wrong narrative.
Startup Strategist took a different approach. First, it clarified what I was really asking: is this a pricing problem, a branding issue, or a structural mismatch in route economics? Then it built out a full issue tree with competing hypotheses:
- Maybe the timing of the flights is bad.
- Maybe the cabin configuration doesn’t fit short-haul needs.
- Maybe the brand is unknown in Taiwan, so travelers don’t trust it.
- Maybe the airline optimizes for Bangkok–Tokyo, and the Taiwan–Japan segment is just filler.
- Maybe pricing is designed to avoid cannibalizing the long-haul route.
- Maybe local players like Tigerair dominate distribution, local promotions, and OTA visibility.
Then it asked: Do you want me to pull data? Compare this with Tigerair? Run a hypothesis check on each factor?
We did. And the data backed it up: Tigerair’s load factor on Japan routes regularly exceeds 90%, while carriers like Thai Lion Air have struggled below 60% on some segments. Startup Strategist had already identified this as a likely root cause before seeing the numbers.
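The GPT doesn’t store any of this as a data structure; it just talks in prose. But if I had to sketch the shape of that issue tree in code, it would look roughly like this. The field names are illustrative, and the only branch marked as supported is the distribution one, based on the load-factor comparison above.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One branch of the issue tree: a claim plus the evidence that would test it."""
    claim: str
    evidence_needed: str
    supported: bool | None = None  # None means not yet tested

@dataclass
class IssueTree:
    question: str
    hypotheses: list[Hypothesis] = field(default_factory=list)

    def open_branches(self) -> list[Hypothesis]:
        """Branches still waiting on data."""
        return [h for h in self.hypotheses if h.supported is None]

tree = IssueTree(
    question=("Why do fifth-freedom LCCs underperform local budget airlines "
              "on Taiwan-Japan load factor?"),
    hypotheses=[
        Hypothesis("Flight timing is bad", "departure/arrival slots vs. local competitors"),
        Hypothesis("Cabin configuration doesn't fit short-haul", "seat maps and fare bundles"),
        Hypothesis("Brand is unknown in Taiwan", "brand awareness among local travelers"),
        Hypothesis("Taiwan-Japan is filler on a Bangkok-Tokyo route", "segment-level revenue priorities"),
        Hypothesis("Pricing protects the long-haul route", "fare ladders across segments"),
        Hypothesis("Local players dominate distribution and OTA visibility",
                   "load factors and OTA placement vs. Tigerair"),
    ],
)

# Mark the distribution branch after pulling the load-factor comparison above.
tree.hypotheses[-1].supported = True
print([h.claim for h in tree.open_branches()])
```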
This was exactly the behavior I wanted: a tool that doesn’t just answer, but thinks with me. It questioned, broke the problem down, synthesized, and asked again.
AI won’t replace founders. But the right kind of AI can make founders more reflective, more structured, and harder to fool—including by themselves. That’s what I’m trying to build.