How I write prompts that make millions

I've written over 10,000 prompts for commercial apps. Steal my process.

Day 198/100

 

Firstly. You lot are crazy. All the Founders Seats sold out.
I now have 30 content strategies to make in the next week…

Honestly though. Thank you. It means the world to me to see people excited for what we’re building and the future of content on the internet.
Truly.
Thank you.

If you missed out, or you're just waiting for the v3 drop and the new pricing:
It's today. +50% extra credits on any plan for the rest of the month.

Now. Back to the show.

Hey—It's Tim. 

I ran a prompt workshop last night at midnight for a team in Geneva, then rolled straight into finalizing Penny.v3’s content prompts before today’s wave one push.

At this point sleep is a suggestion and I’m 63% coffee.

The easiest newsletter for me to write today is how I write prompts.
I've written over 10k of them in the last 3 years. And more than half sit inside million-dollar-ARR software.

I’ll show you the spine I use, how I debug when things go sideways, and a few reusable “workers” you can steal today.

This is sleep deprivation giving you the secrets. Enjoy :)

The uncomfortable truth no one wants to hear

Most “bad prompts” are your fault. Which is good because it means we can fix them.

If you can’t sketch the method you’d use to do the task, the model has to invent it.
If you don’t define exactly what you want, you’ll never get it.

Prompts are how you externalize judgment: what to do, in what order, with what constraints, and what “done” looks like.

Six Parts. You need at least 4 of them.

Goal

The outcome in one line.

“Draft a 700–900 word Tech Thursday on prompt design with three copy-and-paste templates.”

If you can’t write this, uh, good luck I guess…

Role

Who’s talking, to whom, from what vantage point?

“You are an in-house content lead who sounds like a friendly cynic. Audience: B2B content marketers who ship weekly.”

You don't need to be as crazy specific as we were here. A base overview is good enough if you can define the next parts well. The ROI is better spent there.

Steps

The method you’d use if the model didn’t exist. This is where most prompts fall apart.

Research → structure → draft → tighten → QA.

To me, this is the second most important part of the prompt.

The best part of this is it can be refined. I'll explain that process further down.
If you can’t list steps, do a 90-second “paper plan” first.

Constraints

Draw the box.

No listicles-in-disguise. 
No fake sources. 
Cite only from the last 24 months. Avoid ‘leverage’ and ‘unlock’. 
Keep headers under 60 chars.

Constraints prevent chaos more than “be creative” provokes brilliance.

Inputs

What fuel do you provide? Notes, links, quotes, data.
Swap these and the same prompt becomes a reusable worker.

Outputs

The MOST IMPORTANT part by far. And the reason you think LLMs are bad.

You never defined how it should give you the information.

If you want a table. Ask for a table.
If you want a CSV, ask for a CSV.
If you want it in markdown, or JSON, or with certain parameters.
TELL IT TO DO THAT.

The more specific you are here, the more likely you're gonna get an output that's what you're looking for.
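If it helps to see the spine as code: here's a minimal sketch (plain Python, hypothetical section text) that joins the six parts into one prompt string in a fixed order. Swap the Inputs and the same skeleton becomes a reusable worker.

```python
# Minimal sketch: assemble the six-part spine into one prompt string.
# Section contents below are illustrative placeholders, not a canonical template.

SECTION_ORDER = ["Goal", "Role", "Steps", "Constraints", "Inputs", "Outputs"]

def build_prompt(parts: dict) -> str:
    """Join the provided sections in spine order, skipping any that are missing."""
    chunks = []
    for name in SECTION_ORDER:
        if parts.get(name):
            chunks.append(f"## {name}\n{parts[name].strip()}")
    return "\n\n".join(chunks)

worker = {
    "Goal": "Draft a 700-900 word Tech Thursday on prompt design with three templates.",
    "Role": "In-house content lead, friendly cynic. Audience: B2B content marketers.",
    "Steps": "Research → structure → draft → tighten → QA.",
    "Constraints": "No listicles-in-disguise. No fake sources. Headers under 60 chars.",
    "Inputs": "<paste notes, links, quotes here>",
    "Outputs": "Markdown. H2 headers. End with a 5-item checklist.",
}

print(build_prompt(worker))
```

Because missing sections are skipped rather than errored, the same function covers the "at least 4 of 6" case too.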

Bonus Sections.

Planning (GPT-5 shines here)

A tiny plan before writing.

“Skim inputs for tone, extract 5 takeaways, sequence from problem → method → template → checklist.”

This is you pre-wiring the reasoning path.

Just having a line of the “approach angle” can save half the time on getting it to do what you want.

Stop conditions

When to stop “thinking” and just ship.

You know when it starts repeating itself? Yeah. Your fault. You never asked it to check its work before it returned the output.

“Stop before searching LinkedIn. Return once you have ≥5 sound examples or you’re repeating yourself.”

How to refine your prompts

This is the part where pretty much everyone goes wrong.
Especially if the goal is to make a prompt that works for you.

The game isn’t “ask again nicely.” It’s diagnose → rewrite → re-run. Treat the model like a junior who can explain what went wrong and how to fix it.

Start with a diagnosis, not a redo

Paste this verbatim after a miss:

I expected: <describe your expected outcome in one sentence>.
What I got: <paste or summarize the result>.

Where did my prompt fail? Identify exactly which section(s) caused this:
- Goal
- Role
- Steps
- Constraints
- Inputs
- Outputs
- Planning
- Stop conditions

For each problem, explain the failure in 1–2 lines and propose a precise fix.
Then, rewrite my entire prompt with those fixes applied. Keep my original intent.
Return ONLY:
1) A bullet list of issues + fixes
2) The fully rewritten prompt
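If you run this loop often, it's worth templating so you never retype it. A tiny sketch (hypothetical helper; you paste the result as your next message, or wire it into whatever model call you use):

```python
# Diagnose-first refinement: fill the template from what you expected vs. what you got.

DIAGNOSIS_TEMPLATE = """I expected: {expected}.
What I got: {actual}.

Where did my prompt fail? Identify exactly which section(s) caused this:
- Goal
- Role
- Steps
- Constraints
- Inputs
- Outputs
- Planning
- Stop conditions

For each problem, explain the failure in 1-2 lines and propose a precise fix.
Then, rewrite my entire prompt with those fixes applied. Keep my original intent.
Return ONLY:
1) A bullet list of issues + fixes
2) The fully rewritten prompt"""

def diagnosis_prompt(expected: str, actual: str) -> str:
    """Build the diagnose → rewrite → re-run message after a miss."""
    return DIAGNOSIS_TEMPLATE.format(expected=expected, actual=actual)

print(diagnosis_prompt("a 5-column CSV", "a bulleted list with no data"))
```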

That’s it. That’s how I write prompts that do pretty much whatever I want them to do.
Specificity + Constant Refinement.

New Penfriend drops later today.

Expect the full update tomorrow on what we did and why it's the best content writer on the internet.

✌️ Tim "I sleep when v3 ships" Hanson
CMO @Penfriend.ai

Same brain, different platforms: X, Threads, LinkedIn.

P.S. Keep this between me and you.

If you preface your prompt with "If you're uncertain about any input, ask three clarifying questions before proceeding," completion quality jumps without derailing into 20 questions.

Also, duplicating the Outputs section at top and bottom is the laziest reliability hack I know.

Don’t tell Legal I said “lazy.”

 

Penfriend.ai
Made by content marketers. Used by better ones.
 

What to do next

  • Share This Update: Know someone who’d benefit? Forward this newsletter to your content team.

  • Get your First 3 Articles FREE EVERY MONTH! We just dropped the biggest update we’ve ever done to Penfriend a few weeks ago. Tone matching with Echo, Hub and Spoke models with Clusters, and BoFu posts.

  • Let Us Do It For You: We have a DFY service where we build out your next 150 articles. Let us handle your 2025 content strategy for you.