Why inclusive content needs AI prompts
Creating content that works equally well for everyone isn’t just about adding alt text or picking the right font size anymore. Accessibility today moves deeper — into tone of voice, language complexity, reading level, imagery choice, even cultural nuance. This is where generative AI prompts can either unlock reach or reinforce barriers.
I began testing inclusive prompt patterns after spotting some odd behavior in large language models (LLMs) like ChatGPT and Claude. For example, when I asked, “Write a product description a blind user would understand better than the default Amazon version,” only one out of five completions added markdown-friendly formatting or sensory-based descriptions like temperature, weight, or texture references.
The truth: Most AI models won’t consider accessibility without being nudged.
So, if you’re writing (or using AI to write) for a diverse audience — inclusive of visual impairments, cognitive disabilities, anxiety-prone readers, or even non-native English speakers — your prompts have to spell that out.
Leading prompt structure examples:
| Goal | Prompt Pattern (AI-friendly) |
|---|---|
| Alt text written for someone who’s never seen the image | “Describe this image as if to someone who is blind and has never seen snow (or a tree, or the object) before. Focus on textures, smells, emotional context.” |
| Simplify language without being robotic | “Rewrite this in plain English so a 6th grader can understand, but keep the tone engaging like a conversation between co-workers.” |
| Avoid anxiety-triggering phrases | “Rewrite this customer support reply so it minimizes stress for a neurodivergent person. Use calm, linear, reassuring language only.” |
Skip general requests like “make this more accessible” — I tested that phrase in over twenty AI systems, and almost all gave vague changes. You need scenario-specific framing.
Overall, AI’s not magic — it just needs better instructions that reflect real human differences.
Prompt errors that reduce accessibility
Even when users mean well, some of the most common AI prompt formats lead to the opposite of inclusivity. I ran dozens of prompts through ChatGPT, Claude, and Google Gemini — and logged every awkward output.
Here are the most common prompting mistakes I encountered:
- “Write for everyone.” The model spits out bland, corporate-style content with no cultural grounding, often leaning on visual metaphors or physical idioms (“see the point,” “keep an eye on”) that can confuse or exclude blind and neurodivergent users.
- “Make it friendly for people with disabilities.” Sounds inclusive, but unless you name the actual user condition, most models default to adding nothing beyond a shorter sentence length or dumbing down vocabulary.
- “Add alt text or screen reader compatibility.” This too often produces generic alt text like “a person standing in a field”: factual but useless. When I prompted for image descriptions focused on emotional tone or accessibility-first design (“alt text optimized for a non-sighted museum visitor”), the outputs finally started to shift.
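The difference shows up directly in the markup. A minimal sketch of the contrast (the filename and the second description are illustrative, not actual model output):

```html
<!-- Generic prompt output: factual but tells a non-sighted visitor nothing useful -->
<img src="exhibit.jpg" alt="a person standing in a field">

<!-- Scenario-specific prompt output: emotional tone and sensory detail -->
<img src="exhibit.jpg"
     alt="A lone hiker stands waist-deep in wind-bent golden grass at dusk,
          the air heavy and still before a storm">
```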
Real output difference example:
| Prompt | Result Summary |
|---|---|
| “Write an inclusive job ad for a diverse workforce.” | Standard DEI phrases, but still required “must be able to lift 10 pounds and commute daily” (not remote-friendly or mobility-inclusive). |
| “Write a remote-first, screen-reader compatible job ad for a neurodivergent applicant who might dislike phone calls.” | Added async communication notes, removed loud-signal language, replaced hazy buzzwords with literal CTA instructions (e.g., “Click here to apply via text form”). |
At the end of the day, AI improves accessibility when you prompt like you’re designing for one person, not a crowd.
Using persona-driven accessibility prompts
Instead of asking AI to be vague and “accessible,” you get better results if you define specific personas. I tested over fifty prompts across conditions — blind readers, low vision, deaf, autistic adults, English learners, and even people with PTSD or ADHD — and found that specifying one persona at a time makes the content far richer.
Example use: I prompted: “Write this call-to-action for a reader with ADHD who might find too many links overwhelming.” The result shortened the text, removed nested links, avoided pressure-heavy verbs (like “now!” or “must!”), and added bullet-point clarity.
Compare that to a generic “make this accessible” prompt — the bullet list never appeared.
Top personas to include in AI prompt development:
- Blind or low-vision users (focus on tactile, smell, shape language)
- Deaf or hard-of-hearing readers (avoid reliance on sound metaphors)
- Neurodivergent users — especially ADHD (clear structure, brief CTAs, avoid high-pressure cues)
- Users unfamiliar with the language, culture, or interface (plain English, avoid idioms)
- Elderly users unfamiliar with fast tech flows (linear layout, generous white space)
You can use tools like Lex or integrated GPTs inside Notion or Canva Docs to develop, test, and tag different personas per section.
Advanced Prompt Stack Example:
Rewrite the following paragraph so it’s easier to process for an ADHD adult who prefers task lists and minimal back-and-forth steps. Avoid emotionally loaded language or vague requests. Include white space formatting recommendations.
In summary, describing imaginary users like real individuals unlocks better AI content than trying to “include everybody.”
Real testing with screen readers and voice tools
All the best wording in the world means nothing if the final result doesn’t work in an actual assistive tool. I learned this the hard way.
I sent a so-called accessible landing page to a friend who used JAWS (a popular screen reader for Windows). She couldn’t find the Buy Now button — because the actual button label was a custom CSS hover-layer with no ARIA label attached. When I asked ChatGPT to generate the markup, I hadn’t specified “screen-reader discoverable buttons” — so it gave me a visually functional, but inaccessible layout.
Here’s how I fixed it:
- Prompted the AI: “Generate a hero section with a CTA button that’s accessible by screen reader and keyboard navigation.”
- Double-checked with VoiceOver (macOS), NVDA (Windows), and TalkBack (Android) on real devices, rather than relying on AI output alone.
- Adjusted the markup: added `aria-label="Buy Product"` and made sure the tab order was logical for both assisted and unassisted navigation.
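Putting those fixes together, here’s a minimal sketch of the corrected hero section (class names and copy are illustrative, not the actual page):

```html
<section class="hero">
  <h1>Product Name</h1>
  <!-- A real <button> is focusable and announced by screen readers by
       default, unlike a styled div or CSS hover-layer. With a logical DOM
       order, no explicit tabindex is needed. -->
  <button type="button" aria-label="Buy Product">
    Buy Now
  </button>
</section>
```

The key design choice is the semantic `<button>` element: it gives keyboard focus, Enter/Space activation, and a screen-reader role for free, instead of bolting those behaviors onto a decorative element afterward.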
Testing suggestions if you’re using AI:
| Tool | Why Test With This |
|---|---|
| NVDA (Windows) | Appears to interpret headings and skip links differently from VoiceOver. Good for finding hidden hierarchy bugs. |
| TalkBack (Android) | Exposes issues with button tap areas and icon-only links that seem fine visually but fail an audit. |
To conclude, AI can create code that looks great — but only real assistive tech exposes what feels broken.
When AI simplifies too aggressively
One of the risks of prompting AI for accessibility is going too far. I’ve seen countless cases where asking a model to “rewrite this so anyone can understand it” led to content that sounded like it was aimed at toddlers — overusing phrases like “now click the big blue button!” or removing meaningful cultural nuance.
This especially shows up in translation scenarios: When I asked for Spanish-language content for people with dyslexia, the language got overly basic — bordering on infantilizing. Instead, what worked better was asking: “Translate this content to Latin American Spanish suitable for adult readers with dyslexia. Prioritize shorter sentences and left-aligned layout notes, but keep the cultural references intact.”
That prompt produced natural, respectful language and adjusted formatting suggestions like generous padding and no justified text (justified alignment can create rivers of white space, which are harder for some readers to scan).
Finally, this also happens when summarizing long content. AI may remove necessary nuance. I discovered this when testing news briefs — the phrase “Summarize this for low-literacy readers” often led to removing all context. Better phrasing: “Keep vital facts intact. Make each sentence stand alone without jargon or references back to earlier paragraphs.”
Ultimately, the best prompts respect intelligence — they just reduce barriers in structure and delivery.
AI models that respond better to inclusion-focused prompts
Not all generative models take accessibility prompts seriously. I tested GPT-4 (via ChatGPT), Claude, Gemini Advanced, and open-source models. The differences became glaring once I began threading multi-turn prompts focused on specific disabilities.
Results by model:
| Model | Accessibility Prompt Responsiveness |
|---|---|
| ChatGPT (GPT-4) | Highly responsive, with detailed ARIA schema suggestions and rhetorical rewriting. Improvements appeared after prompt two or three; initial output was basic. |
| Claude | Emotionally tuned; better at inclusive tone shifts. Weaker with technical accessibility markup or structural compliance. |
| Gemini Advanced | Middling results. Often repeats instructions instead of applying them. Missed key layout logic for screen readers. |
As a final point, model choice really impacts how deeply your inclusion prompts are honored — so test outputs across more than one engine before you trust it.