I rarely use Deep Research when the question itself isn’t clear yet.

Deep Research works great when the structure of the answer is more or less obvious: review a conference, map a market, fact-check something quickly.

Recently, for example, Deep Research gave me a solid overview of the RSAC cybersecurity conference – following a plan that barely changes year to year: announcements, themes, releases, patterns, trends vs previous years, and a quality synthesis.

In other words, when the job is to collect and compress, it’s genuinely useful.

But I often deal with a different kind of task.

I recently finished another piece of research: 10 stages, two models on almost every one. And there Deep Research would have been less useful – because the question itself changed 4 times over the course of the work. By stage five, I was investigating something quite different from what I’d asked at stage one.

I think of this as guided research.

The point is that the value comes not just from the models’ answers, but from the ability to reshape the question between stages.

Here’s what it looked like.

I was exploring a domain that was new to me, along with several product hypotheses within it.

After the first stage, both models produced a long overview and ranked the priorities. I chose a direction that both had left as secondary. Not that they were wrong – they were looking at the problem from the outside, while I was drawing on my own resources, experience, and constraints. With Deep Research, we’d most likely have gone down a branch that would have been less relevant to me.

After the third stage, the disagreement between the models gave rise to a different question entirely. Not “what are the options?” but “what specifically in this workflow can you own?” That was another turn in the research.

At stage seven, I carved out a separate round – not about growth, but about failure: not “what can you build?” but “at what signals should you stop?”

Finally, by stage nine, the result was a decision tree: if one thing is confirmed, move forward; if not, narrow down or stop.
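That decision tree can be sketched as a small function. The input names here are my own placeholders, not the actual signals from the research; only the branching structure mirrors the rule above:

```python
def next_step(hypothesis_confirmed: bool, can_narrow: bool) -> str:
    """Stage-nine decision rule: advance, narrow, or stop.

    The two booleans stand in for whatever concrete signals the
    research surfaced (hypothetical; the original is domain-specific).
    """
    if hypothesis_confirmed:
        return "move forward"
    if can_narrow:
        return "narrow the question and run another stage"
    return "stop"
```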

What makes this approach different:

  • At each stage, the same task goes to two different models. Often one builds a bold construct while the other critiques it. Sometimes both get creative, then argue. The real value often lies in their disagreements.
  • I capture stage results in markdown files rather than leaving them in chat memory. Each next stage can draw on previous ones, and the context doesn’t dilute.
  • Between stages, I’m not just talking to AI advisors – I’m shifting priorities, reshaping the question, and adding what the models don’t know.
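The loop described above can be sketched in a few lines of Python. This is a minimal sketch, not a prescribed tool: `ask_model` is a placeholder for whatever chat API you use, and the file layout, model names, and function names are my own assumptions.

```python
from pathlib import Path

def ask_model(model: str, prompt: str) -> str:
    """Placeholder for a real chat-completion call (e.g. an API client)."""
    return f"[{model}] answer to: {prompt[:40]}..."

def run_stage(stage: int, question: str, notes_dir: Path = Path("research")) -> Path:
    """Send the same question to two models and keep both answers in markdown.

    Earlier stage files are prepended as context, so each stage can draw
    on previous ones without relying on chat memory.
    """
    notes_dir.mkdir(exist_ok=True)
    context = "\n\n".join(
        p.read_text() for p in sorted(notes_dir.glob("stage_*.md"))
    )
    prompt = f"{context}\n\n## Stage {stage} question\n{question}"
    answers = {m: ask_model(m, prompt) for m in ("model_a", "model_b")}
    out = notes_dir / f"stage_{stage:02d}.md"
    out.write_text(
        f"# Stage {stage}\n\n**Question:** {question}\n\n"
        + "\n\n".join(f"## {m}\n{a}" for m, a in answers.items())
    )
    return out

# Between stages, the question string is edited by hand: that reshaping
# of the question is the point of the whole process.
```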

Deep Research would have given me a good answer in 30 minutes. But it would have answered a question that, as it turned out, was asked too early.

When the job is to collect and compress, I use Deep Research. When the job is to figure out what to actually do, I use a staged process where you can change the question itself after every step.

Find me on: LinkedIn · GitHub · Telegram · Max