Community Vault #02: What the Best Prompt-Writers Actually Ask AI
The Difference Between a Prompt That Writes and a Prompt That Works
“I don’t write prompts to delegate thinking. I write prompts to think through. I expect every prompt I run to perform—not just to generate, but to reason, to push back, to clarify, to deliver something shaped, usable, and precise.”
This quote, one of my favorites, comes from Nate B. Jones, a Product Manager and AI advocate like myself, though he operates on a different level. He runs the top AI newsletter on Substack, with nearly 45,000 subscribers.
Nate covers everything from new model releases to hands-on prompting and agentic practices. Most of his career has been spent building consumer-facing AI products, which gives him a sharp instinct for what people actually want from these tools. He’s also been stress-testing models since the earliest days of ChatGPT, putting in the kind of hours that would make him a clear contender in a prompting tournament. And yes, those exist. They take place in Dubai, and you can find out more here.
For this community drop, I’m featuring one prompt from his Substack entry My Prompt Stack for Work: 16 Prompts In My AI Toolkit That Make Work a LOT Easier. The goal is for you to grasp how a seasoned prompter structures a request to get a consistent, high-quality response.
Before diving into the prompt, some context will help you follow Nate’s advanced example. You’ll find the complete prompt at the end of this entry. Nate excels at creating what he calls repeatable system prompts, designed to efficiently handle the routine tasks that arise in his workplace.
How should you read the prompt?
Read it as a scripted process the AI should follow, rather than a block of text. The <overview> tells you the role the AI will take, each <phase> is a structured step, and the <additional considerations> set the ground rules for the interaction. Don’t skim it like an essay. Instead, picture it as a guided workshop with phases you’ll move through in sequence.
The overarching goal is to provide a blueprint for fixing prompts.
Take a look at the <overview>, which frames the AI as a prompt refinement partner. Here the AI is given a role, a context, and a goal. We are off to a good start.
It also makes clear that this is a phased process, with each stage serving a specific purpose. For example, in <phase 1: Establishing Context and Intent> the AI is required to pause and request more input before moving on. That input comes in two parts: the current prompt and a definition of what success looks like. This ensures the model starts from the right foundation.
The end of each phase is noted in the following way: </phase 1: Establishing Context and Intent>
You’ll also notice the use of the ** marker. It appears in two distinct ways: first, to mark individual sections within a phase, and second, to highlight key terms. For instance, under <phase 2: Dissecting and Analyzing Prompt Structure> you’ll see **Spotting Specific Gaps**. From the AI’s perspective, it has now entered the second section of the phase, focused on a specific activity: identifying what is missing from the prompt and goal provided. Put together, the notation reduces to the skeleton below.
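Here is a stripped-down skeleton of that notation, with the content removed. This is our own summary of the layout, not a piece of Nate’s prompt:

<overview>
Role, context, and goal for the AI.
</overview>
<phase 1: Name of the Step>
**First Section**
Instruction or question for this part of the step.
**Second Section**
Another instruction, and so on.
</phase 1: Name of the Step>
<additional considerations>
Ground rules that apply across all phases.
</additional considerations>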
What should you take away from the prompt?
The structure matters. It’s divided into framing, analysis, rewriting, and testing. That sequence is what produces a quality output rather than a quick answer.
The role is intentional. The AI is framed as a facilitator, not a decision-maker, which keeps the interaction a learning experience rather than a race to the perfect result. This framing is optional, but it changes how you think through a problem when using AI.
The rules shape behavior. Asking one question at a time forces reflection, avoids overwhelm, and ensures the reasoning builds step by step.
What are the key takeaways from Nate’s prompting style?
He treats prompts like systems. Not just text, but components (role, context, output, constraints) that need to fit together. The design is also modular: most components can be swapped or dropped without breaking the whole (see the sketch after this list).
He forces specificity. Instead of vague “make it better,” he asks for explicit fixes like “add role clarity” or “tighten constraints.”
He encourages multiple drafts. Don’t settle for one rewrite. Compare alternatives to see what actually works.
He explains the reasoning. Teaching is part of the process. Each change shows you why it improves the prompt, not just what changed.
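To make the components-that-fit-together idea concrete, here is a minimal sketch in Python. It is our own illustration, not Nate’s code: the slot names and the assemble_prompt helper are hypothetical, but the point stands. Each component lives in its own slot, so you can swap or drop one without rewriting the rest.

```python
# A minimal sketch of "prompts as systems". The slot names and the
# assemble_prompt helper are our own illustration, not part of Nate's prompt.
COMPONENTS = {
    "role": "You are a Prompt Refinement Architect.",
    "context": "The user will paste a prompt and describe what success looks like.",
    "output": "Return three rewrites: minimal, robust, and iterative.",
    "constraints": "Ask one clarifying question at a time; explain every change.",
}

def assemble_prompt(components: dict, drop: frozenset = frozenset()) -> str:
    """Join the components that remain after dropping the optional ones."""
    return "\n\n".join(
        f"<{name}>\n{text}\n</{name}>"
        for name, text in components.items()
        if name not in drop
    )

print(assemble_prompt(COMPONENTS))  # the full system prompt
print(assemble_prompt(COMPONENTS, drop=frozenset({"output"})))  # trimmed for a quick task
```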
Try It, and Break It
Go ahead: open your favorite AI model, type “Execute the following prompt,” and paste in Nate’s full prompt. Immediately, you’ll notice the pace change. The first phase asks for context: the prompt you want improved and the goal you want to achieve.
If that feels like overkill for a quick “check for spelling mistakes,” cut out modules or phases and trim the prompt down to what you actually need. Is your goal “rewrite this prompt so it gives me a stronger result”? Or is it “show me the weak spots and highlight errors so I can fix them myself”?
Keep the goal front and center. That’s the difference between busywork and useful output.
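If you’d rather run the prompt programmatically than paste it into a chat window, a minimal sketch with the OpenAI Python client looks like this. The model name, file name, and user message are assumptions; any chat API that accepts a system role works the same way.

```python
# Minimal sketch: run the Prompt Architect as a system prompt.
# Assumptions: the prompt is saved to prompt_architect.txt, the OPENAI_API_KEY
# environment variable is set, and gpt-4o stands in for your preferred model.
from openai import OpenAI

client = OpenAI()

with open("prompt_architect.txt") as f:
    system_prompt = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        # Phase 1 expects your current prompt plus a definition of success:
        {"role": "user", "content": "Here is my prompt: ... Success looks like: ..."},
    ],
)
print(response.choices[0].message.content)
```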
Nate’s Advanced Prompt Architect
<overview>
Advanced Prompt Architect: Comprehensive Prompt Refinement Blueprint
Your role is to act as a Prompt Refinement Architect. You will help users transform their current prompt into one that is precise, robust, and aligned with its intended purpose. In doing so, you will identify structural gaps, issues with repeatability, and potential alignment misses.
</overview>
<phase 1: Establishing Context and Intent>
**Initial Inquiry**
Ask: “Paste your current prompt and describe what success looks like. What response would feel satisfying, specific, and repeatable?”
**Outcome Definition**
Clarify: “What is the ideal result? Are there any known issues (e.g., generic responses, off-target outputs) you’ve observed?”
</phase 1: Establishing Context and Intent>
<phase 2: Dissecting and Analyzing Prompt Structure>
**Component Breakdown**
Identify and evaluate each component:
- Role: Who is being instructed? Is the role clearly defined?
- Context: Does the prompt establish background, audience, and goals clearly?
- Output Format: Is the desired structure (list, table, narrative, code, etc.) specified?
- Constraints: Are there boundaries (tone, length, domain, timeframe) that ensure relevance?
- Interactivity: Does the prompt encourage the model to ask clarifying questions if needed?
**Spotting Specific Gaps**
Ask: “Are there ambiguities in role, context, or output that might lead to misalignment?”
Identify issues like:
- Ambiguous role definitions
- Contextual gaps
- Incomplete constraints
**Repeatability and Alignment Issues**
Ask: “Does the prompt include measures to ensure consistency in tone, detail, and structure across iterations?”
Consider alignment: “Are there sections where the model might miss the intended focus or produce generic responses?”
</phase 2: Dissecting and Analyzing Prompt Structure>
<phase 3: Rewriting with Precision and Flexibility>
**Define Refinement Objectives**
Ask: “Which of these areas (role clarity, context detail, output format, constraints) would you like to address first?”
Identify priority issues, such as repeatability problems or misalignment with desired outcomes.
**Drafting Enhanced Alternatives**
Provide multiple versions:
- **Minimal Version**: Tighten up vague language and specify one missing detail.
- **Robust Version**: Fully rework all components to ensure a comprehensive framework.
- **Iterative Version**: Build a version that explicitly instructs the model to ask up to 5 clarifying questions before finalizing its output.
**Explain Your Changes**
For each version, clearly state why the changes were made (e.g., “This addition clarifies the user’s role to prevent generic responses” or “These constraints help maintain consistent output structure for repeatability”).
</phase 3: Rewriting with Precision and Flexibility>
<phase 4: Testing, Feedback, and Iterative Improvement>
**Testing Methodology**
Propose methods such as:
- **One-Shot Testing**: Run the revised prompt to see immediate results.
- **Iterative Dialogue**: Engage in a back-and-forth to refine output step by step.
- **Comparative Analysis**: Compare outputs from the different versions to determine which is most aligned with the intended outcome.
**Learning and Adaptation**
Ask: “Does the refined prompt now provide clear instructions that cover all necessary components, and can you see how each element contributes to more consistent and aligned outputs?”
**Refinement Summary**
Offer a recommendation:
- Which version is best for one-shot use vs. iterative development
- Which elements are reusable or modular for future adaptation
- Provide a final cleaned-up version, clearly formatted for ongoing use
</phase 4: Testing, Feedback, and Iterative Improvement>
<additional considerations>
**Explicitly Call Out Common Issues**
- **Latent Space Navigation**: Ask, “What potential misinterpretations might arise, and how can we proactively address them?”
- **Known Repeatability Pitfalls**: Ask if prior outputs have varied significantly and why.
- **Alignment Challenges**: Highlight whether language could be leading to generic or misaligned responses.
**Encourage Modular and Reusable Design**
Ensure each section of the prompt can be updated independently, supporting iterative improvement over time.
</additional considerations>
<final>
This prompt is for you—run now!
</final>
Got a Workflow Worth Sharing?
If you’ve built an AI use case that’s saving you time or improving your work, don’t keep it to yourself. Submit it and help grow the vault. The best ideas come from the people actually doing the work.
Why Share? Because You’re Already Ahead
If you’ve figured out how to make AI genuinely useful in your workflow, you’re already ahead of most. Plenty of people are still stuck asking generative AI models to write bland emails or summarize articles. You’ve built something better. Something real.
This vault is here to surface those use cases. Not the theoretical ones. The ones that survive actual deadlines, messy projects, and imperfect inputs.
And if yours gets featured, you’ll get full access to the entire vault for free. No paywall. We’re also giving featured contributors a free subscription to our Substack, monthly or yearly depending on your entry.
In short, we’re building this vault together. And we’re making sure the people who help build it get something back.
Want to know when the vault goes live?
Follow us on Substack to stay in the loop.
Rodrigo C & Lara S.