The RGCF prompt template, and why it works
RGCF — Role, Goal, Context, Format — is the prompt structure that gets 80% of the benefit of "prompt engineering" in under 60 seconds of work. It forces you to answer four questions that people skip when they just type a request into a chat box, and the answers alone usually lift pass rate by 15-40%.
Role
Tell the model who it is. "A senior financial analyst with 10 years of B2B SaaS experience" activates a different set of internal representations than "someone who knows about finance." Be specific about seniority (senior/expert), domain (B2B SaaS, not just "business"), and any constraint ("writing for a skeptical buy-side audience").
Goal
State the outcome in one sentence. "Draft a 3-email nurture sequence for lapsed trial users." Not "I need help with emails." Goal precision is a quality multiplier.
Context
Give the model what it actually needs: audience, constraints, prior attempts, tone, relevant facts. This is the section where most people under-invest. A well-written context section is 100-300 words and eliminates 80% of "that's not what I meant" re-prompts.
Format
Specify the output structure. "Markdown bullets, each ≤ 25 words, cite section number." "JSON matching the attached schema." "3 emails, each with subject + body + CTA." This is the cheapest quality lever — it's free at token level and makes the output usable without cleanup.
Guardrails are the fifth, optional section
For any production-ish use, add a guardrails block: "Don't speculate. Cite only from the source. If information is missing, say 'not disclosed'." Models follow explicit refusal instructions far more reliably than ones left implicit.
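The five sections above compose mechanically. Here is a minimal sketch of assembling them into one prompt string; the function name and the section contents are illustrative placeholders, not a canonical template.

```python
# Assemble the RGCF sections (plus optional guardrails) into one prompt.
# Labels and example values are illustrative, not prescriptive.

def build_prompt(role: str, goal: str, context: str, fmt: str,
                 guardrails: str = "") -> str:
    """Join the RGCF sections, in order, separated by blank lines."""
    sections = [
        f"Role: {role}",
        f"Goal: {goal}",
        f"Context: {context}",
        f"Format: {fmt}",
    ]
    if guardrails:
        sections.append(f"Guardrails: {guardrails}")
    return "\n\n".join(sections)

prompt = build_prompt(
    role="A senior financial analyst with 10 years of B2B SaaS experience",
    goal="Draft a 3-email nurture sequence for lapsed trial users.",
    context="Audience: skeptical buy-side readers. Tone: direct, no hype.",
    fmt="3 emails, each with subject + body + CTA.",
    guardrails="Don't speculate. If information is missing, say 'not disclosed'.",
)
```

Keeping the sections as named parameters makes it obvious when one is missing, which is exactly the failure mode the template exists to prevent.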
When to add a one-shot example
A single input→output example markedly improves quality on formatting-sensitive tasks (extraction, classification, structured writing). It also roughly doubles your input tokens, so use one when the quality lift justifies the cost. Rule of thumb: add an example when the output format is specific enough that you'd struggle to describe it in words.
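In practice the example is just appended after the static sections. A minimal sketch for a hypothetical invoice-extraction task (the task, field names, and example values are invented for illustration):

```python
# Appending a one-shot input/output pair to a formatting-sensitive
# extraction prompt. All content here is an illustrative placeholder.

base_prompt = (
    "Role: A data-entry specialist.\n"
    "Goal: Extract vendor and amount from an invoice line.\n"
    "Format: JSON with keys 'vendor' and 'amount' (number, no currency symbol).\n"
)

one_shot = (
    "Example input: 'Acme Corp - $1,200.00 due Oct 1'\n"
    'Example output: {"vendor": "Acme Corp", "amount": 1200.00}\n'
)

prompt = base_prompt + "\n" + one_shot + "\nInput: 'Globex LLC - $87.50 due Nov 3'"
```

One concrete pair like this pins down details the Format sentence alone leaves ambiguous: key casing, number formatting, and whether the currency symbol survives.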
Anti-patterns this template avoids
- Vague roles. "You are an assistant." Be specific.
- Vague goals. "Help me with this." State the outcome.
- Missing format. Every missing format rule is a downstream cleanup task.
- Negative-only framing. "Don't be verbose." Tell the model what TO do.
Caching your template
Once you have a template that works, cache it. On Claude, wrap the static portion (role, goal template, format, guardrails) in a cache block and vary only the user-specific context. A 600-token template at a 75% cache-hit rate drops effective input cost by ~67%.
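The ~67% figure falls out of simple arithmetic. A sketch of the calculation, under two assumptions that are mine rather than the article's: cached reads bill at roughly 10% of the base input price, and any cache-write premium is ignored.

```python
# Effective input cost with prompt caching, as a fraction of the base price.
# Assumptions (not from the article): cached reads bill at 10% of base,
# uncached requests at 100%, and the cache-write premium is ignored.

hit_rate = 0.75        # fraction of requests served from cache
cached_price = 0.10    # cached-read price relative to base
uncached_price = 1.00  # normal input price relative to base

effective = hit_rate * cached_price + (1 - hit_rate) * uncached_price
savings = 1 - effective
# 0.75 * 0.10 + 0.25 * 1.00 = 0.325x base, i.e. a ~67% cost cut
```

Note the savings depend on the hit rate, not the template length; a longer template saves more absolute dollars at the same percentage.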
Measuring a prompt version
Use the Prompt Performance Tracker to score pass rate, cost, and latency across versions. Keep the winner. Ship with data, not vibes.
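"Ship with data" can be as simple as a table of measurements and a selection rule. A generic sketch of picking a winner; the numbers and the tiebreak rule are illustrative, and the Prompt Performance Tracker itself is a separate tool with its own interface.

```python
# Compare prompt versions on measured pass rate, cost, and latency.
# The data below is fabricated for illustration.

versions = [
    {"name": "v1", "pass_rate": 0.62, "cost_usd": 0.004, "latency_s": 1.9},
    {"name": "v2", "pass_rate": 0.81, "cost_usd": 0.005, "latency_s": 2.1},
    {"name": "v3", "pass_rate": 0.79, "cost_usd": 0.003, "latency_s": 1.4},
]

# Highest pass rate wins; lower cost breaks ties.
winner = max(versions, key=lambda v: (v["pass_rate"], -v["cost_usd"]))
print(winner["name"])  # v2
```

Here v3 is cheaper and faster, but v2's higher pass rate wins under this rule; whether that trade is right depends on how expensive a failed output is for your task.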
Related tools
- System Prompt Builder — Build the sibling artifact — the system prompt for an agent or chatbot.
- Chain-of-Thought Builder — Stack a decomposition and self-check for reasoning tasks.
- Prompt Performance Tracker — A/B your prompt versions on pass rate, cost, latency.
- Prompt Cache Savings — Quantify the 70-85% cost cut from caching your template.