AI Economy Hub

Prompt template generator

Build a reusable RGCF prompt in 60 seconds — role, goal, context, output format, and guardrails.


Frequently asked questions

1. Why RGCF over just writing a paragraph?

Structure forces you to answer the four questions users skip — role, goal, context, format. Structure alone typically lifts pass rate on user prompts by 15-40%, with no engineering effort.

2. Does this work for agents too?

RGCF is for per-turn user prompts. For agent system prompts, use the System Prompt Builder — it's a different artifact with persona, tools, refusals, and safety.

3. Should I add a one-shot example?

Yes if the output format is specific enough that you'd struggle to describe it in words. It roughly doubles input tokens, so gate it behind a quality-lift test.

4. How long should a prompt be?

For RGCF, 400-800 tokens is typical. Above 2,000, diminishing returns. Below 200, you're probably under-specifying.

5. Can I cache the template?

Yes — put the static role + format + guardrails in a cache block and vary only the per-request context. A ~67% cut in effective input cost is typical on Claude.

The RGCF prompt template, and why it works

RGCF — Role, Goal, Context, Format — is the prompt structure that gets 80% of the benefit of "prompt engineering" in under 60 seconds of work. It forces you to answer four questions that people skip when they just type a request into a chat box, and the answers alone usually lift pass rate by 15-40%.

Role

Tell the model who it is. "A senior financial analyst with 10 years of B2B SaaS experience" activates a different set of internal representations than "someone who knows about finance." Be specific about seniority (senior/expert), domain (B2B SaaS, not just "business"), and any constraint ("writing for a skeptical buy-side audience").

Goal

State the outcome in one sentence. "Draft a 3-email nurture sequence for lapsed trial users." Not "I need help with emails." Goal precision is a quality multiplier.

Context

Give the model what it actually needs: audience, constraints, prior attempts, tone, relevant facts. This is the section where most people under-invest. A well-written context section is 100-300 words and eliminates 80% of "that's not what I meant" re-prompts.

Format

Specify the output structure. "Markdown bullets, each ≤ 25 words, cite section number." "JSON matching the attached schema." "3 emails, each with subject + body + CTA." This is the cheapest quality lever — it costs almost nothing in tokens and makes the output usable without cleanup.

Guardrails are the fifth, optional section

For any production-ish use, add a guardrails block: "Don't speculate. Cite only from the source. If information is missing, say 'not disclosed'." Models handle explicit refusals far better than implicit ones.

When to add a one-shot example

A single input→output example can roughly double output quality on formatting-sensitive tasks (extraction, classification, structured writing). It also roughly doubles your input tokens, so use it only when the cost is justified by the quality lift. Rule of thumb: add an example when the output format is specific enough that you'd struggle to describe it in words.
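Because the one-shot roughly doubles input tokens, it helps to measure the added cost before committing. A sketch, assuming a crude 4-characters-per-token heuristic in place of a real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token. Use a real tokenizer
    # (e.g. the provider's token-counting endpoint) for billing decisions.
    return max(1, len(text) // 4)

def with_one_shot(prompt: str, example_input: str, example_output: str) -> str:
    """Append a single input→output example to an existing prompt."""
    shot = f"Example input:\n{example_input}\n\nExample output:\n{example_output}"
    return f"{prompt}\n\n{shot}"

base = "Role: ...\nGoal: extract invoice fields as JSON."
augmented = with_one_shot(
    base,
    example_input="Invoice #1042, due 2024-03-01, total $1,200",
    example_output='{"invoice": "1042", "due": "2024-03-01", "total": 1200}',
)
added_tokens = estimate_tokens(augmented) - estimate_tokens(base)
```

Gate the example behind a quality-lift test: ship it only if pass rate improves enough to cover `added_tokens` at your request volume.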

Anti-patterns this template avoids

  • Vague roles. "You are an assistant." Be specific.
  • Vague goals. "Help me with this." State the outcome.
  • Missing format. Every missing format rule is a downstream cleanup task.
  • Negative-only framing. "Don't be verbose." Tell the model what TO do.

Caching your template

Once you have a template that works, cache it. On Claude, wrap the static portion (role, goal template, format, guardrails) in a cache block and vary only the user-specific context. A 600-token template hitting 75% cache-hit drops effective input cost by ~67%.

Measuring a prompt version

Use the Prompt Performance Tracker to score pass rate, cost, and latency across versions. Keep the winner. Ship with data, not vibes.
