The take
The most-saved prompts are the ones that ask the least of you. Paste one, fill in a bracket, and you get your answer. That’s why people save them.
But the less you give the model, the more AI slop it gives back.
It reads like progress.
It isn’t.
That’s not because people are cutting corners for fun. The blank input box is hard. A template that says “paste this and fill in the bracket” solves that.
But most templates solve the blank page without solving the blank context. You get past the cursor and land on output that could belong to anyone.
Like a fortune cookie. Broad enough to feel true. Specific to no one.
Good prompts don’t just get you past the blank page. They make you answer something first.
You still have to show up. But you don’t have to start from scratch.
One feels like progress. The other is.
Seen this week
1. Weekly planner that starts with outcomes
Most planning starts with a to-do list and ends with a schedule that looks organized but doesn’t connect to anything that matters. This prompt flips it. You define what a successful week looks like first, then the model builds backwards from there.
Act as a time designer. Ask me: "What does a successful week look like in concrete terms? Not feelings. Outcomes. What exists at the end of the week that didn't exist at the start?"
After I answer, identify 3-5 verifiable results from what I said. Then ask when I do my best focused work and when I hit a wall. Map my week into high focus, low focus, and recovery blocks.
Match each outcome to a realistic time block. If any outcome has no slot, tell me now.
Deliver a plain-text weekly schedule I can copy into my calendar.
Why it works: The question-first structure forces you to define what “done” looks like before any scheduling happens. Most people skip that step and end up with a full calendar that doesn’t move anything forward.
One gap: The prompt asks when you focus best, but not when you’re actually free. Your sharpest hours don’t matter if they’re already taken by work, kids, or a commute.
Simple fix: Add “ask me what hours are already spoken for and when my best focus hours are.”
Want to go deeper: Set a phone reminder at the end of the week. Paste your schedule back and say, “here’s what I actually did vs. what was planned. Adjust the following week based on where I overestimated.” You’ll get better results every week because the model learns your patterns instead of guessing.
Credit: u/RhinoCK301. Original post.
2. Split-perspective idea stress test
You have an idea you’re excited about. Maybe a side project, a business concept, something you’ve been sitting on for weeks. This prompt forces the model to argue both sides at once, so you see the cracks before you invest real time.
Respond as 2 characters simultaneously. Character 1 believes my idea is brilliant and will defend it. Character 2 thinks it's fundamentally broken and wants to prove it. Both are equally smart. Neither is allowed to be polite about it.
Present their arguments in 2 clearly labeled sections.
My idea: [describe your idea in a few sentences]
Why it works: The opposition constraint removes the model’s default tendency to glaze you. By forcing 2 voices with opposite positions, you get honest pushback without having to ask for it specifically.
One gap: The prompt works best when you give it real detail to argue with. A one-line idea gets one-line pushback.
Simple fix: Describe your idea in 3-4 sentences. Include what it does, who it’s for, and why you think it works.
Credit: u/AdCold1610. Original post.
3. Socratic tutor
You want to learn something new, but don’t want a wall of text you’ll forget in 10 minutes. This prompt turns the model into a tutor that teaches by asking you questions, one at a time, building up from the basics. You learn by answering, not by reading.
You are a Socratic tutor. Help me understand [TOPIC] by asking questions, not lecturing.
Ask one question at a time. Start with the most basic foundational question. Wait for my answer before continuing.
After I answer, tell me in one sentence what I got right or where I'm off. Then ask the next question.
If I'm stuck twice in a row, give a hint. Only explain fully if I ask or can't get it after a real attempt.
Why it works: The one-question-at-a-time constraint prevents the model from dumping information. You have to think before you get more. That’s closer to how learning actually sticks.
One gap: No depth target. The prompt doesn’t define what “understand” means for your topic, so the tutor doesn’t know when to push deeper or when you’ve gone far enough.
Simple fix: Add your starting level and an end point. “I’m a complete beginner” or “I know the basics but get lost at [specific concept]. Stop once I can explain it in my own words.”
Credit: u/themancalledmrx. Original post.
The teardown
The Carnegie conversation planner
The idea here is solid in theory. Take Dale Carnegie’s principles (show genuine interest, give honest appreciation, handle disagreements gracefully) and turn them into prompts you can run before real conversations.
The question is whether scripting “genuine interest” with AI actually produces anything genuine.
Here’s a prompt from the set of 7:
I'm meeting with [PERSON/TYPE OF PERSON] about [SITUATION/CONTEXT]. Help me prepare to show genuine interest in them using Carnegie's approach: 1) What thoughtful questions can I ask about their interests, challenges, and experiences? 2) How can I research common ground we might share? 3) What specific compliments could I give about their work or achievements? Create a conversation plan that makes them feel like the most interesting person in the room.
The post includes 6 more that follow the same structure. Each one names a Carnegie principle and asks the model to write your lines for you.
The model doesn’t know the person you’re meeting. It doesn’t know what you actually find interesting about them. So it produces generic Carnegie-flavored filler.
“Ask about their challenges.” “Find common ground.” “Give a specific compliment about their work.”
These aren’t insights. They’re fortune cookies dressed up as preparation.
Turning those into something real requires already knowing the person, and if you could do that on the fly, you wouldn’t be asking AI for help.
You know those conversations you rehearse in the shower? The perfect thing you should’ve said? They never go that way in real life. Someone interrupts. The topic shifts. Now you’re improvising 30 seconds in and it’s nowhere near what you had in mind.
Same problem here. Except now you’re improvising AND trying to remember a script that isn’t yours.
The post says “it’s not manipulation, just better technique.” But a technique that replaces your attention with generated scripts is the opposite of what Carnegie was teaching.
His whole point was to actually care. Not to sound like you do.
The useful version doesn’t write your lines. It helps you think about the other person before you walk in. And because the topics come from you, you’ll actually remember them.
I'm meeting with [person] about [situation]. Here's what I know about them: [anything you genuinely know, even if it's just their role and one recent thing they worked on].
Help me think through their perspective. What might they be worried about going into this? What would make this conversation feel worthwhile for them, not just for me?
Don't write me a script. Just help me show up thinking about them instead of thinking about myself.
One shift made the difference.
The original asks “what should I say?”
This version asks “what should I be thinking about?”
One produces lines to memorize. The other produces awareness you can use when the conversation goes somewhere you didn’t plan.
Credit: u/EQ4C. Original post.