Start with a strong primary goal for your SEO content
Before you throw a prompt at ChatGPT, stop and ask yourself: what's the organic-traffic job this post needs to accomplish? Don't just say, "rank for this keyword." That's a direction, not a job. The real goal might be to:
- Capture commercial traffic for a mid-funnel comparison topic
- Win snippet placements for longtail question traffic
- Feed internal links into a critical money page
- Test a new keyword cluster for audience fit
I once tried automating an ecommerce FAQ block using ChatGPT without understanding the intent — users were searching for “how long does X take to ship” but the model kept producing “We ship fast!” no matter how much I tweaked it. It wasn’t until I rephrased the prompt to include “Answer this exact user question based on warehouse timelines and return policy location” that the answers finally made sense.
Prompt structure tip: Embed your goal in the actual instruction. Instead of saying “Write an SEO article about X,” say:
This article should rank for [intent keyword] by answering [real user search intent], targeting [user type] who are trying to [solve problem/do task].
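That bracketed template can be turned into a small reusable builder. A minimal sketch (the function and parameter names are my own, not from any library):

```python
# Hypothetical helper: fills the goal-embedded prompt template above.
def seo_prompt(intent_keyword, search_intent, user_type, task):
    return (
        f"This article should rank for '{intent_keyword}' "
        f"by answering '{search_intent}', "
        f"targeting {user_type} who are trying to {task}."
    )

prompt = seo_prompt(
    intent_keyword="notion ai vs obsidian",
    search_intent="which note app fits a small software team",
    user_type="engineering leads",
    task="pick one tool before onboarding the team",
)
print(prompt)
```

Filling the same four slots for every post keeps the strategic goal in the instruction itself instead of in your head.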
To conclude, prompts without strategic direction usually lead to surface-level content that users bounce from almost immediately.
Prime your prompts with content type expectations
ChatGPT does not inherently understand the purpose of complex content structures like a versus post, a walkthrough, or a topical deep-dive — unless you state it outright.
There’s a huge difference in output quality between a basic prompt like:
Write about Notion AI vs Obsidian
…and something more primed like:
You are writing a comparison post to help software teams choose between Notion AI and Obsidian. Each tool should be independently described, followed by side-by-side comparisons based on integration flexibility, privacy, learning curve, and pricing. Include personal quirks that affect real usage experience.
Tables also guide structure. For comparison posts, feed in something like:
| Feature | Notion AI | Obsidian |
| --- | --- | --- |
| Offline usage | Mostly online, some caching | Fully offline-first |
| Mobile editing | Fine but laggy with AI | Snappy but AI plug-ins unreliable |
| Export formats | PDF, markdown | Markdown, HTML, Vaults |
This is all promptable — don’t wait for the AI to invent your content strategy. If you don’t pipe in a table, you’ll probably get generic paragraphs with no hooks.
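If you keep specs in structured form (a spreadsheet, a Notion database), serializing them into a prompt-ready markdown table is a few lines of code. A sketch, with illustrative rows taken from the table above:

```python
# Sketch: serialize a spec table into markdown so it can be pasted
# straight into a comparison-post prompt.
def to_markdown_table(header, rows):
    lines = [" | ".join(header), " | ".join(["---"] * len(header))]
    lines += [" | ".join(row) for row in rows]
    return "\n".join(lines)

table = to_markdown_table(
    ["Feature", "Notion AI", "Obsidian"],
    [
        ["Offline usage", "Mostly online, some caching", "Fully offline-first"],
        ["Export formats", "PDF, markdown", "Markdown, HTML, Vaults"],
    ],
)
print(table)
```

Prepend the result to your comparison prompt and the model has concrete hooks instead of having to invent the spec sheet.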
In summary, smart post templates and specs are half the work of prompting good SEO content.
Control tone and depth by modeling one block
ChatGPT can mimic tone surprisingly well—but it mimics better than it invents. If you want detailed, human-feeling content, your best move is to give it one perfect paragraph and ask it to match it.
I usually write one paragraph myself like this:
“When I tested the automatic retargeting option in Facebook Ads, I noticed something weird — it fired to cold audiences too. It turned out that the custom event tracking I set in Segment was firing twice. So instead of optimizing, it just spammed broadly. Don’t let that happen — always inspect FB events in Events Manager and make sure last-touch attribution is off when setting ‘ViewContent’ and ‘AddToCart’ triggers.”
Then I prompt: "Use this tone, paragraph structure, and technical specificity in every section that follows." This minimizes the classic AI fluff you otherwise get, like:
Facebook Ads provides great tools for powerful automation. Many users have found success. Here are some benefits you might want to consider.
You know the one. 😬
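The tone-anchoring move is mechanical enough to wrap in a helper: prepend the same hand-written sample to every section prompt. A minimal sketch (the sample here is abridged from the paragraph above; the function name is mine):

```python
# Sketch: anchor every section prompt to one hand-written sample paragraph
# so the model mirrors its tone instead of inventing one.
SAMPLE = (
    "When I tested the automatic retargeting option in Facebook Ads, "
    "I noticed something weird: it fired to cold audiences too."
)

def tone_anchored(section_instruction, sample=SAMPLE):
    return (
        f"Sample paragraph:\n{sample}\n\n"
        "Use this tone, paragraph structure, and technical specificity "
        "in every section that follows.\n\n"
        f"Task: {section_instruction}"
    )

print(tone_anchored("Write the section on event deduplication."))
```

One sample, reused verbatim across a whole post, keeps the voice consistent from section to section.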
The bottom line is — sample tone paragraphs beat style descriptions every time.
Use specific prompting formats for different post types
I have a running Notion template called Prompt Frameworks by Content Type — here’s what it includes, and why patterning matters:
| Content Type | Prompt Notes |
| --- | --- |
| Troubleshooting | Specify the error, symptom, version, and tools used; include real example cases |
| Comparison Post | Include technical specs + anecdotal differences + pros/cons in daily use |
| Feature Breakdown | Each feature should have: what it does, what problem it solves, example context |
| Workaround Post | Use steps with screenshots, confirmations, version notes, common failure cases |
I once tried generating an Airtable automation tutorial using general prompts like “Explain how to use Airtable automations” — the results were circular and vague. Switching to this structure worked better:
“Write an actionable solution for someone trying to automatically email a record summary when a checkbox is marked. Include: field setup, formula quirks, step-by-step automation configuration, and screenshots where relevant.”
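The framework table lends itself to a simple lookup: map content type to prompt notes, then compose the final instruction. A sketch (keys and notes mirror the table above; the helper is my own invention):

```python
# Sketch of "Prompt Frameworks by Content Type" as a lookup table.
FRAMEWORKS = {
    "troubleshooting": "Specify the error, symptom, version, and tools used; include real example cases.",
    "comparison": "Include technical specs, anecdotal differences, and pros/cons in daily use.",
    "feature_breakdown": "For each feature: what it does, what problem it solves, example context.",
    "workaround": "Use steps with screenshots, confirmations, version notes, common failure cases.",
}

def framed_prompt(content_type, topic):
    notes = FRAMEWORKS[content_type]
    return f"Write a {content_type.replace('_', ' ')} post about {topic}. {notes}"

print(framed_prompt(
    "workaround",
    "emailing a record summary when a checkbox is marked in Airtable",
))
```

The point isn't the code, it's that each content type gets its framework applied every time instead of being re-improvised per post.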
Ultimately, prompt architecture has as much impact as keyword research.
Prompt for originality using edge-case digging
Original content comes from original situations — not keyword stuffing. ChatGPT doesn’t always know how to dig for that unless you pull it out by force.
Most good prompts for originality follow this structure:
“Walk through a situation where [X seems to work normally], but when [unexpected condition Y] happens, the output breaks. Describe how a user might discover this in practice, and how to fix it.”
Examples of this:
- “Build a Notion gallery that works until someone filters by a formula name”
- “Set up a Google Tag Manager event that double-fires if nested iFrames exist”
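Since the originality template has exactly two slots, it's easy to make repeatable. A fill-in sketch (parameter names are mine):

```python
# Sketch: the edge-case originality template as a fill-in function.
def edge_case_prompt(normal_behavior, breaking_condition):
    return (
        f"Walk through a situation where {normal_behavior}, "
        f"but when {breaking_condition} happens, the output breaks. "
        "Describe how a user might discover this in practice, "
        "and how to fix it."
    )

print(edge_case_prompt(
    "a Notion gallery works normally",
    "someone filters by a formula name",
))
```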
In each case, the edge case isn't an extra — it's the content. That's what makes it stand out and earn links over time. Most of my long-term performing posts are like this: not winners on day one, but steady performers, because they were the only site that addressed that obscure tagging bug or that weird Zapier pagination issue.
At the end of the day, feeding originality constraints into your prompts is what separates topically correct AI content from genuinely useful material.
Optimize prompt cycles with tooling integrations
Typing prompts into ChatGPT manually is fine in a pinch, but serious SEO content workflows benefit from integration tools. I use a mix of:
- Notion AI for rough topic outlining and content block generation in context
- Raycast Quick AI for fast internal linking suggestions across posts using natural queries
- A custom Superhuman + Make.com automation that parses my email newsletters into prompt-friendly summaries for future post inputs (that one was a pain to get working 🤯)
These aren’t must-haves, but they cut down hours of fiddly prompting. Especially when you’re reusing post outlines or maintaining tone across multiple pieces.
Finally, use ChatGPT’s Custom Instructions to permanently set things like:
- Audience you’re writing for (e.g. SEO leads in SaaS)
- Preferred content structures (comparison table → features → user quotes)
- Default tone markers (“Use technical but relatable tone, with real platform references and occasional annoyance.”)
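ChatGPT's Custom Instructions fields take free text, but I find it easier to keep the settings in one structured place and render them on demand. A sketch of that idea (the dict shape and helper are my own, not a ChatGPT API):

```python
# Sketch: the Custom Instructions above kept as a reusable config,
# rendered to the free text ChatGPT's settings UI actually accepts.
CUSTOM_INSTRUCTIONS = {
    "audience": "SEO leads in SaaS",
    "structure": ["comparison table", "features", "user quotes"],
    "tone": "Technical but relatable, with real platform references and occasional annoyance.",
}

def instructions_text(cfg):
    return (
        f"Audience: {cfg['audience']}\n"
        f"Structure: {' -> '.join(cfg['structure'])}\n"
        f"Tone: {cfg['tone']}"
    )

print(instructions_text(CUSTOM_INSTRUCTIONS))
```

Version this file alongside your content templates and your "developer environment" calibration survives account resets and tool switches.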
Most people treat ChatGPT prompting like a one-off task. I treat it like calibrating a developer environment — once it’s set up well, everything gets easier.
To wrap up, AI prompts aren’t magic spells — they’re more like configuration files for a system that can only reflect what it’s given.
Use post-submission prompts to support optimization
Even after a blog post is live, prompts stay useful. I often refeed old posts with updates like these:
“Here is an existing post. Check for outdated product references or tool integrations no longer accurate.”
Or:
“Read this article. Suggest up to 3 relevant FAQ entries that match pain points or uncertainties in the content.”
This helps keep the piece fresh without rewriting it. I also use prompts like:
“Based on this article and its subheaders, suggest 3 new related topics I could rank for and why.”
This is a sneaky way to let ChatGPT surface longtail ideas from your own content footprint — way better than brainstorming cold.
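All three maintenance prompts can run as one refresh pass per article. A sketch (the prompt wording is taken from above; the helper name is mine):

```python
# Sketch: bundle the post-publication prompts as a refresh routine.
# Each job is the prompt plus the article body, ready to paste or send.
MAINTENANCE_PROMPTS = [
    "Here is an existing post. Check for outdated product references "
    "or tool integrations no longer accurate.",
    "Read this article. Suggest up to 3 relevant FAQ entries that match "
    "pain points or uncertainties in the content.",
    "Based on this article and its subheaders, suggest 3 new related "
    "topics I could rank for and why.",
]

def refresh_jobs(article_text):
    return [f"{p}\n\n---\n{article_text}" for p in MAINTENANCE_PROMPTS]

jobs = refresh_jobs("...full post body here...")
print(len(jobs))
```

Run it quarterly over your top posts and the freshness, FAQ, and expansion checks happen together instead of ad hoc.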
In a nutshell, prompting doesn’t stop once the content is published. It keeps looping as maintenance, discovery, and expansion.