LLM Interview Method Transforms Complex Task Design: Experts Say It's a Game-Changer
Breaking: New AI Technique Lets LLMs Interview Humans to Generate Context
A novel approach called the 'Interrogatory LLM' is changing how developers and knowledge workers prepare complex tasks for large language models. Instead of writing pages of context manually, the AI interviews the human to extract all necessary information.

Martin Fowler, software engineering thought leader and author of the long-running Bliki, describes the method: "The LLM asks me all the questions it needs to create appropriate context. Once done, it produces a report for another session to carry out the next step."
How It Works
The technique was first detailed by entrepreneur Harper Reed on his blog. A crucial rule: the LLM must ask only one question at a time. Fowler notes that keeping the AI on track often requires repeated reminders.
In practice, a developer or designer can feed the system key data points and specify external sources to consult. The LLM then interrogates the human until it has enough material to build a comprehensive context document.
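That loop can be sketched in code. The following is a minimal, hypothetical illustration, not an implementation from Fowler or Reed: `call_llm` stands in for any chat-completion style API, `answer_fn` supplies the human's answers, and the `DONE` convention and reminder logic are assumptions made for the sketch.

```python
# Sketch of a one-question-at-a-time interview loop (all names hypothetical).
INTERVIEWER_PROMPT = (
    "You are gathering context for a complex task. "
    "Ask exactly ONE question at a time, then wait for the answer. "
    "When you have enough information, reply with DONE followed by "
    "a context document summarizing everything you learned."
)

def run_interview(call_llm, answer_fn, max_turns=20):
    """Drive the interview until the model signals DONE; return the context doc."""
    messages = [{"role": "system", "content": INTERVIEWER_PROMPT}]
    for _ in range(max_turns):
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("DONE"):
            return reply[len("DONE"):].strip()  # the finished context document
        answer = answer_fn(reply)
        if reply.count("?") > 1:
            # The "repeated reminders" Fowler mentions: nudge the model back
            # to the single-question rule before answering.
            answer = "One question at a time, please. " + answer
        messages.append({"role": "user", "content": answer})
    raise RuntimeError("Interview did not converge")
```

In use, `call_llm` would wrap a real model call; the returned string is the context document handed to the next session.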
Two Key Use Cases Emerge
First, using the Interrogatory LLM to create context for a subsequent AI task. Second, using it to validate existing documents. Fowler explains: "Give the LLM a software specification, then ask it to interview a human expert to check if the document is accurate. People often find reviewing hard, so a conversation with an LLM can be more fruitful."
Both approaches can be chained: one LLM builds a document, another interviews different experts to review it.
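The review variant follows the same pattern, with the document under test embedded in the prompt. Again a hedged sketch, not an official recipe: the prompt wording, the `DONE` signal, and the function names are assumptions.

```python
# Sketch of document validation by interview (all names hypothetical).
REVIEW_PROMPT_TEMPLATE = (
    "Below is a document to validate.\n\n{document}\n\n"
    "Interview the human expert, asking exactly one question per turn, "
    "to check whether the document is accurate. When you are done, "
    "reply with DONE followed by a list of corrections."
)

def review_by_interview(call_llm, answer_fn, document, max_turns=20):
    """Have the LLM interrogate an expert about `document`; return corrections."""
    messages = [{"role": "system",
                 "content": REVIEW_PROMPT_TEMPLATE.format(document=document)}]
    for _ in range(max_turns):
        question = call_llm(messages)
        messages.append({"role": "assistant", "content": question})
        if question.startswith("DONE"):
            return question[len("DONE"):].strip()  # the list of corrections
        messages.append({"role": "user", "content": answer_fn(question)})
    raise RuntimeError("Review did not finish")
```

Chaining then amounts to feeding the document produced by one interview loop into this review loop, possibly with a different expert answering.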
Background
The concept originated from Harper Reed's work on improving how humans interact with AI for complex software design. Traditional methods required experts to manually write detailed specifications—a time-consuming process prone to omission.
By offloading the interrogation to the LLM, the human can focus on answering questions rather than structuring information. This mirrors techniques used in user experience research, but adapted for machine consumption.
What This Means
For many professionals, especially those who struggle with writing, this technique offers a lifeline. Fowler, a natural writer himself, acknowledges the challenge: "Many folks find writing hard, often very hard. This can be a real problem when we need to get information out of someone's head."
Even if the AI-generated output has 'that tang of AI-writing', Fowler argues that having the information is better than not having it at all—especially when deadlines or lack of writing skill prevent proper documentation.
The approach could democratize knowledge capture, enabling non-writers to contribute critical context without suffering through the writing process. It also promises to reduce errors in software specifications and accelerate feature design.
Expert Reactions
Harper Reed, who first publicized the method, emphasized the single-question rule: "It forces the LLM to focus and prevents overwhelming the user." The result is a structured conversation that builds context piece by piece.
Fowler predicts broader adoption: "This technique is more broadly applicable than just AI context generation. It can change how any organization extracts expertise from its people."
Next Steps
Early adopters are experimenting with the Interrogatory LLM in software design sprints and knowledge base creation. Researchers are studying how to optimize the interrogation process, including question ordering and the handling of ambiguous answers.
As LLMs become more conversational, the line between human-led and AI-led interviews will blur. For now, the Interrogatory LLM provides a structured, repeatable method to bridge human expertise and machine performance.
This is a developing story. Experts recommend trying the technique with a clear list of sources and strict one-question-at-a-time prompting.