
The best prompts on Reddit this week

Email · Productivity

The take

I reviewed a lot of prompts this week. Most of them were the same prompts wearing a different hat. But the 4 in this issue stood out, and they all had something in common.

None of them started with a role, a framework, or a gimmick.

The meal planner came from someone tired of ordering DoorDash every Tuesday. The email template came from someone who kept rewriting every AI draft. The journaling prompt came from someone who wanted honest reflection without therapist-speak. The project planner came from someone who kept getting blindsided after launch.

Do you see the pattern?

They all started with a real problem. Then they built a prompt to fix it.

That’s the pattern. The prompts that work don’t come from prompting advice. They come from a frustration that got bad enough to solve.

Seen this week

1. Weekly meal planner with intake interview

A meal planning prompt that makes you answer questions before it builds anything. The intake interview is the move. A dietitian in the comments helped improve this version.

I want a 1-week meal plan I'll actually follow. Before you build it, interview me like a new client. Ask about my goals, schedule, cooking skill, budget, proteins I like and won't eat, allergies, and anything else a dietitian would ask. One question at a time. Then build the plan: assign 1 protein per dinner night, rotate so nothing repeats more than twice, pick 2 breakfasts and 2 lunches and repeat them all week. Half the plate vegetables, quarter protein, quarter starch. Maximize ingredient overlap between meals. Flag meals under 30 minutes. Include 1 lazy night (leftovers or frozen, no guilt). Give me a grocery list by store section, a Sunday prep sequence, and 1 sentence per meal explaining why it fits.

Why it works: The interview forces you to clarify your own constraints before the model builds anything. Most meal planning prompts skip this and guess. The lazy night and ingredient overlap rules force practical decisions the model wouldn’t make on its own. This prompt exists because someone kept ordering DoorDash on Tuesdays. That’s the kind of problem worth solving.
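If you want to sanity-check the generated plan against the rotation rule ("nothing repeats more than twice"), a few lines of Python are enough. This is a minimal sketch, not part of the original prompt; the list of dinner proteins is something you'd extract from the plan yourself, and `rotation_ok` is a name I made up:

```python
from collections import Counter

def rotation_ok(dinner_proteins, max_repeats=2):
    """Check the prompt's rotation rule: no protein appears
    more than max_repeats times across the week's dinners."""
    counts = Counter(dinner_proteins)
    return all(n <= max_repeats for n in counts.values())

# Example week: chicken repeats twice, which the rule allows.
week = ["chicken", "salmon", "tofu", "chicken", "beef", "pork", "leftovers"]
print(rotation_ok(week))  # True
```

The same pattern works for any hard constraint in a prompt: state the rule once in the prompt, then verify it mechanically instead of rereading the output by eye.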

One gap: if you name the plate rule (e.g., call it a “balanced plate”), the model may reach for its own idea of what that label means instead of the split defined in the prompt.

Simple fix: skip the label entirely and state the rule plainly, as this version does: “half vegetables, quarter protein, quarter starch.”

Credit: u/sleepyHype. Original post.

2. Journaling reflection partner

A journaling prompt that tells the model how not to sound, which is harder and more useful than telling it how to sound.

When I journal or process personal issues, act as a calm, honest reflection partner. Help me separate what actually happened from what I'm interpreting it to mean, what I'm feeling, and what I might be avoiding. If I'm catastrophizing, overgeneralizing, or dodging responsibility, say so directly in plain language. Never use therapy clichés like "holding space," "inner child," or "that sounds valid." Use this structure only when it fits naturally: (1) what I'm hearing, (2) what may be underneath, (3) what's worth questioning, (4) one thing to sit with.

Why it works: The anti-cliché list is doing the real work here. Most models default to therapist-speak the second you mention emotions. Banning specific phrases forces a different tone. The separation framework (facts vs. interpretations vs. feelings vs. fears) also pushes the model past generic comfort into specific analysis.

One gap: No instructions for what to do when you disagree with the reframe. The model will probably back down and agree with you, which defeats the purpose.

Simple fix: Add “if I push back, don’t retreat. Ask a follow-up question instead of agreeing.”

Want to go deeper: After the first response, try “push back harder on that last point” to test whether the model holds its ground or caves to politeness.
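If you want to verify the ban actually held, a small scan over the reply text does it. The phrase list comes straight from the prompt; the function name is mine, and this is only a sketch of the idea:

```python
# Clichés banned by the prompt, lowercased for matching.
BANNED = ["holding space", "inner child", "that sounds valid"]

def flags_therapy_speak(reply: str) -> list[str]:
    """Return any banned phrases that slipped into a model reply."""
    lower = reply.lower()
    return [phrase for phrase in BANNED if phrase in lower]

reply = "I'm holding space for that, but let's look at the facts."
print(flags_therapy_speak(reply))  # ['holding space']
```

A non-empty result means the ban leaked, which is a good signal to restate the constraint or start a fresh conversation.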

Credit: u/themancalledmrx. Original post.

3. Email reply template with goal field

Most people paste an email into AI and say “write a reply.” The result sounds polite but doesn’t actually say anything. This template adds one field, the goal of the reply, that tells the model what you’re trying to accomplish.

Write a reply to this email.

Context: [paste the email you're replying to]

Goal of this reply: [what you want to accomplish, e.g., set a deadline, push back on scope, keep the relationship positive]

Rules: keep it direct, no filler, structure it as acknowledge then respond then next step.

Why it works: Without a goal, the model doesn’t know what you’re trying to do. It plays it safe and writes something polite that doesn’t really say anything. The goal field gives it a direction. You also don’t need a tone field here. When you paste the original email, the model picks up the tone from context.

One gap: The “acknowledge then respond then next step” structure is hardcoded. It works for pushing back on scope but breaks for other email types like delivering bad news or declining a request.

Simple fix: Change the structure line to “structure it in whatever order fits the goal” and let the model decide.
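If you reuse this template often, it's worth assembling it programmatically instead of retyping it. A minimal sketch, with the simple fix applied as the default structure line; `build_reply_prompt` is a name I chose, not anything from the original post:

```python
def build_reply_prompt(email_text: str, goal: str,
                       structure: str = "whatever order fits the goal") -> str:
    """Assemble the reply template: context, goal, and rules."""
    return (
        "Write a reply to this email.\n\n"
        f"Context: {email_text}\n\n"
        f"Goal of this reply: {goal}\n\n"
        f"Rules: keep it direct, no filler, structure it in {structure}."
    )

prompt = build_reply_prompt(
    "Can we move the deadline to Friday?",
    goal="push back on scope, keep the relationship positive",
)
```

Making the goal a required argument is the point: you can't generate a reply without deciding what the reply is for.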

Credit: u/Rich_Specific_7165. Original post.

The teardown

Project failure planner

The idea behind this prompt is simple. Before you start a project, you imagine it already failed. Then you figure out why. It’s called a pre-mortem, and it’s a useful exercise. The question is whether this prompt actually gets you there.

Here’s the excerpt:

Role: You're like a super-duper risk checker who knows how to plan stuff. Your whole thing is finding ways projects can go sideways and how to stop it.

Task: Do a "pre-mortem" for this project. Pretend it's already a huge disaster. Figure out the most likely reasons it tanked, what exactly went wrong, and what we can do now to make sure that doesn't happen.

[Project description placeholder, analysis steps, markdown table output format, example project included]

The OP says the structure matters more than the wording. He’s right about that, but maybe not in the way he thinks.

The structure is doing all the work. The role is doing none of it. “Super-duper risk checker who knows how to plan stuff” doesn’t change what the model produces. Remove it entirely and the output is the same.

But here’s the real problem.

The prompt asks the model to imagine failure, identify failure points, and develop mitigation strategies. That sounds thorough. It’s actually just 3 ways of saying “what could go wrong and how do we stop it.” There’s no instruction to prioritize. No way to separate a project-killing risk from a minor annoyance. The model will give you 10 rows in a table and treat them all as equally important.

The example project is useful (launching an online coffee business), but it also lets the model pattern-match to generic startup risks instead of thinking about your specific project.

Strip the role. Cut the 3 steps down to 1 real question. Add a prioritization constraint.

I'm planning this project: [describe it in 2-3 sentences].

Imagine it failed completely 6 months from now. List the 5 most likely reasons it failed, ranked by how likely each one is. For each reason, give me 1 specific early warning sign I'd see in the first 2 weeks, and 1 action I can take this week to prevent it.

Put it in a table: Failure Reason, Early Warning Sign, This Week's Action.

The early warning sign is what makes this version different. The original prompt tells you what could go wrong. This one tells you what to watch for right now.
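If you run pre-mortems on more than one project, the rewrite is easy to template. A sketch of that, with the function name and the `n_reasons` knob being my additions, not part of the original:

```python
def premortem_prompt(project: str, n_reasons: int = 5) -> str:
    """Build the rewritten pre-mortem prompt for a given project."""
    return (
        f"I'm planning this project: {project}\n\n"
        f"Imagine it failed completely 6 months from now. List the "
        f"{n_reasons} most likely reasons it failed, ranked by how likely "
        "each one is. For each reason, give me 1 specific early warning "
        "sign I'd see in the first 2 weeks, and 1 action I can take this "
        "week to prevent it.\n\n"
        "Put it in a table: Failure Reason, Early Warning Sign, "
        "This Week's Action."
    )

prompt = premortem_prompt("Launch a paid newsletter by March.")
```

Note the prioritization constraint ("ranked by how likely") is baked into the string, so it can't get dropped when you paste in a hurry.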

Prompts don’t need to sound smart. They need to ask for something specific.

Credit: u/promptoptimizr. Original post.


Contents

  • The take
  • Seen this week
  • 1. Weekly meal planner with intake interview
  • 2. Journaling reflection partner
  • 3. Email reply template with goal field
  • The teardown
  • Project failure planner



Frequently asked questions

Why does the meal planner prompt use an interview before building the plan?

The interview forces you to clarify your own constraints before the model guesses. Most meal planning prompts skip this step and build a plan based on assumptions. The result is a plan you won't follow.

What makes the anti-cliché list effective in the journaling prompt?

Banning specific phrases like 'holding space' and 'inner child' forces the model out of its default therapist-speak mode. Telling a model how not to sound is harder and more useful than telling it how to sound.

Why did the pre-mortem prompt get a full teardown?

The role and 3-step structure looked thorough but didn't change what the model produced. The rewrite adds prioritization and early warning signs, which give you something actionable instead of a generic risk list.

Do these prompts work with any AI model?

Yes. Every prompt in this issue works with Claude, ChatGPT, Gemini, or whatever model you use. None of them depend on model-specific features.

Get the next teardown in your inbox

Free. One email a week. Unsubscribe anytime.

Prompt Teardown © 2026