2WHAV: The Executable Prompting Framework Built on LLM-First Principles
In a previous article, we demonstrated how LLM-First documentation improves an LLM’s ability to retrieve information. By optimizing the structure of knowledge, we achieved measurable gains in accuracy (+5.8%) and efficiency (+40%).
Now, let’s apply that same principle of structural rigor to code generation.
Consider a common request:
“Write a JavaScript function `getUserData(userId)` that fetches user data from `/api/users/{userId}`.”
An LLM will typically generate a function like this:
```javascript
// ❌ Syntactically correct, but operationally naive.
async function getUserData(userId) {
  const response = await fetch(`/api/users/${userId}`);
  if (!response.ok) {
    throw new Error(`HTTP Error! Status: ${response.status}`);
  }
  return response.json();
}
```
This code is valid, but it operates on the optimistic assumption of a perfect environment. In a real-world application, it’s brittle. It lacks handling for transient network errors, server-side failures, or latency, making it unsuitable for production.
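To make the fragility concrete, here is a minimal sketch in which `fetch` is stubbed to return a transient 503 (the stub is purely illustrative; in a real application the equivalent failure would come from a flaky network or an overloaded server):

```javascript
// Illustrative stub: simulate a transient server error (assumes a Node.js
// environment where globalThis.fetch can be replaced for demonstration).
globalThis.fetch = async () => ({ ok: false, status: 503 });

async function getUserData(userId) {
  const response = await fetch(`/api/users/${userId}`);
  if (!response.ok) {
    throw new Error(`HTTP Error! Status: ${response.status}`);
  }
  return response.json();
}

// A single 503 — which one retry would likely have absorbed — fails the whole call.
getUserData("42").catch((err) => console.log(err.message));
// → HTTP Error! Status: 503
```

One transient failure is enough to reject the promise; nothing in the function distinguishes a momentary hiccup from a permanent error.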
From Vague Request to Engineering Spec
The root cause of this fragility is ambiguity in the prompt. To address this, we can use 2WHAV (What → Where → How → Augment → Verify), a framework that translates a simple request into a detailed engineering specification.
2WHAV is a direct application of LLM-First principles: it replaces prose with hierarchy, structured formats (like tables and checklists), and explicit constraints, guiding the LLM through a formal engineering workflow instead of a creative writing exercise.
A Practical Workflow: Using the LLM to Build the Spec
You don’t need to write these detailed prompts from scratch. A more efficient workflow involves using the LLM as a partner in the specification process.
1. Provide Context:
First, instruct the LLM on the framework’s structure.
“Analyze the methodology described in the GitHub repository
https://github.com/fra00/2WHAV. We will use it to structure a code generation request.”
(If your LLM lacks web access, you can paste the README content directly).
2. Request Elaboration:
Next, provide your initial, simple prompt and ask the LLM to expand it using the framework.
“Now, using the 2WHAV framework, expand the following prompt into a detailed specification for a production-ready API client. Fill in the necessary details using software engineering best practices. My prompt is: ‘Write a JavaScript function `getUserData(userId)`…’”
This two-step process leverages the LLM’s knowledge of design patterns to flesh out your high-level goal into a complete technical contract.
The Resulting Blueprint
The simple fetch request is transformed. The most significant additions appear in the AUGMENT section, where non-functional requirements are specified.
<details>
<summary>Click to see the expanded 2WHAV prompt</summary>
## WHAT: Objective
- **Task:** Create a resilient `fetchUserData(userId)` function.
- **Output:** A structured object: `{ status: 'success', data }` or `{ status: 'error', error }`.
## HOW: Interface
- **API Contract:** The function must only use the global `fetch` API.
## AUGMENT: Strategic Directives
- **1. Automatic Retry:** Implement up to 3 retry attempts on server errors (5xx).
- **2. Exponential Backoff:** Use an increasing delay between retries (e.g., 100ms, 200ms, 400ms).
- **3. Request Timeout:** The operation must time out after 5 seconds of inactivity.
- **4. In-Memory Caching:** Implement a 5-minute cache (TTL) to prevent redundant requests for the same `userId`.
## VERIFY: Validation Checklist
- [ ] Are retry, backoff, timeout, and caching mechanisms implemented?
- [ ] Does the function's return value match the specified object structure?
</details>
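The backoff directive in the AUGMENT section can be sketched as a one-line helper (the constant name is illustrative, not part of the framework): the delay simply doubles with each attempt.

```javascript
// Sketch of the exponential backoff schedule: 100ms, 200ms, 400ms, ...
// INITIAL_BACKOFF_MS is an assumed constant for illustration.
const INITIAL_BACKOFF_MS = 100;

const backoffDelay = (attempt) => INITIAL_BACKOFF_MS * 2 ** attempt;

console.log([0, 1, 2].map(backoffDelay)); // → [ 100, 200, 400 ]
```

Doubling keeps early retries fast while spacing out later ones, giving a struggling server progressively more room to recover.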
From Naive Code to Resilient Component
Given this detailed specification, the LLM generates a far more robust implementation.
```javascript
// ✅ A resilient, production-ready component.
const MAX_RETRIES = 3;
const INITIAL_BACKOFF_MS = 100;
const CACHE_TTL_MS = 5 * 60 * 1000;
const userCache = new Map();

const delay = (ms) => new Promise(resolve => setTimeout(resolve, ms));

async function fetchUserData(userId) {
  // 1. Caching
  const cached = userCache.get(userId);
  if (cached && (Date.now() - cached.timestamp < CACHE_TTL_MS)) {
    return { status: 'success', data: cached.data };
  }

  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    try {
      // 2. Timeout
      const controller = new AbortController();
      const timeoutId = setTimeout(() => controller.abort(), 5000);
      const response = await fetch(`/api/users/${userId}`, { signal: controller.signal });
      clearTimeout(timeoutId);

      if (response.ok) {
        const data = await response.json();
        userCache.set(userId, { data, timestamp: Date.now() });
        return { status: 'success', data };
      }

      // 3. Retry with backoff on server errors
      if (response.status >= 500 && attempt < MAX_RETRIES - 1) {
        await delay(INITIAL_BACKOFF_MS * 2 ** attempt);
        continue;
      }
      return { status: 'error', error: `Client Error: ${response.status}` };
    } catch (error) {
      // Network failure or timeout: back off and retry if attempts remain.
      if (attempt < MAX_RETRIES - 1) {
        await delay(INITIAL_BACKOFF_MS * 2 ** attempt);
      }
    }
  }
  return { status: 'error', error: "Request failed after all retries." };
}
```
(Note: Code is functionally complete but condensed for readability).
This resulting code is fundamentally different. It’s a component designed for resilience, addressing real-world operational challenges like transient failures and latency. Each feature—caching, retries, timeouts—is a direct implementation of a rule from our specification.
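Because the function returns a structured object instead of throwing, call sites can branch on `status` explicitly, with no `try/catch` around the call. A hypothetical usage sketch (`fetchUserData` is stubbed here so the snippet runs standalone; in practice it would be the implementation above):

```javascript
// Stand-in for the article's fetchUserData, stubbed so this sketch is self-contained.
async function fetchUserData(userId) {
  return { status: 'success', data: { id: userId, name: 'Ada' } };
}

// Hypothetical call site: both outcomes are handled explicitly.
async function renderUserProfile(userId) {
  const result = await fetchUserData(userId);
  if (result.status === 'success') {
    console.log(`Loaded user: ${result.data.name}`);
  } else {
    console.log(`Could not load user: ${result.error}`);
  }
}

renderUserProfile('42'); // → Loaded user: Ada
```

The result-object contract pushes error handling to where context exists, rather than letting exceptions propagate unpredictably.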
Why It Works: Forcing a Structured Reasoning Process
This is more than just providing clearer instructions. The rigid structure of 2WHAV fundamentally changes how the LLM “thinks” about the problem. A narrative prompt invites a broad, associative brainstorming session. A 2WHAV prompt enforces a sequential, logical engineering workflow.
The framework acts as a cognitive guide rail:
- **WHAT & WHERE** (Architecture First): Before writing any code, the model is forced to consider the high-level goal, the output contract, and the system’s architecture. This crucial step, often skipped in narrative prompts, compels the LLM to think like an architect.
- **HOW** (Implementation with Constraints): The LLM’s focus is then narrowed to implementation, but within a sandbox of strict rules—a defined API, mandatory syntax, and specific patterns. Its “creativity” is channeled into solving the problem correctly within the given constraints.
- **AUGMENT & VERIFY** (Built-in QA): Finally, the model is directed to consider non-functional requirements (like resilience) and then to self-validate its output against a checklist. This introduces a self-correction loop into its generation process.
This structured process mitigates randomness and guides the LLM to produce a solution that is not just plausible, but architecturally sound.
Addressing the Overhead: Is It Always Worth It?
A valid concern is the potential overhead of writing such a detailed prompt. Does every simple function require this level of specification?
The answer is no. The 2WHAV framework is modular by design to manage this trade-off.
- **For simple, non-critical tasks:** A minimal prompt using just `WHAT` (goal), `HOW` (rules), and `VERIFY` (checklist) is often sufficient. This provides clarity without excessive length.
- **For complex components with decision logic:** Adding the `WHERE` (architecture) section becomes necessary to define states and priorities.
- **For production-critical systems:** The full framework, including `AUGMENT` (resilience, performance), is justified because the cost of failure is high.
The goal is not to maximize prompt length, but to match the level of rigor to the complexity and criticality of the task.
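For a simple, non-critical task, the minimal form can be very short. An illustrative sketch (not an official template from the framework):

```
## WHAT: Objective
- Write a pure function `formatDate(date)` that returns a `YYYY-MM-DD` string.

## HOW: Interface
- No external libraries; accept a JavaScript `Date` object.

## VERIFY: Validation Checklist
- [ ] Are month and day zero-padded?
- [ ] Is the output a string in `YYYY-MM-DD` format?
```

Even at this size, the checklist gives the model something concrete to validate against before responding.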
Conclusion: A Shift in Collaboration
This structured approach changes our role in the development process. We shift from simply prompting for code to architecting a solution.
- The developer acts as the architect, setting the high-level requirements and reviewing the detailed specification.
- The LLM acts as a senior engineer, translating the specification into a robust implementation and even suggesting best practices to include in the spec itself.
By moving from ambiguous conversations to formal specifications, we can guide LLMs to produce code that is not just syntactically correct, but truly production-ready.
Links
- Explore the 2WHAV framework on GitHub: https://github.com/fra00/2WHAV
- Read the principles behind it in the companion article on LLM-First documentation, “It’s Time to Write Docs for Machines First, Then for Humans”