When does AI content cross the line from helpful to spammy?
We spent the last 4 months tracking 473 pieces of AI-generated content across our own site and customer sites. 218 got cited by ChatGPT or Perplexity. 255 got ignored. 12 got flagged in reader feedback as "low quality" or "clearly AI."
We wanted to understand what separates the ones that work from the ones that don't. Here's what the data showed.
The content that got cited
Three things stood out.
1. They answered a specific question.
Not a general topic. A question someone actually typed into ChatGPT. When the content matched the query almost word-for-word, citation rates were 3.2x higher.
Example: A piece titled "How to fix Google Search Console indexing issues after a site migration" got cited 7 times. A piece titled "Complete Guide to SEO in 2026" got zero.
2. A human edited them.
The best-performing pieces (top 10% by citations) were AI-generated but edited by someone who actually used the product. Those averaged 6.1 citations each. Fully automated pieces averaged 1.2.
You can tell when the writer has lived the problem. Small details. Specific examples. Acknowledgment of trade-offs.
3. They included something AI couldn't have known.
A screenshot from a real dashboard. A quote from a customer support ticket. A specific date showing the content was fresh. A data point from an internal survey.
These pieces got cited 2.8x more than those without.
The content that got ignored
The 255 pieces with zero citations had different patterns.
AI slop (43%). Content that just rephrased the same marketing page over and over. No new information. No point of view. Just words.
Answers to questions nobody asked (31%). People generating articles for keywords that never appeared in actual AI queries. A lot of effort for zero return.
Technically correct but hollow (19%). No examples. No edge. No sign that a human ever touched it.
Formatting disasters (7%). Broken tables, missing headers, content that AI couldn't parse.
What we're trying to build
We're working on a proofreader for our content engine (RCGE v2.2) that catches that second kind of content, the pieces that get ignored, before it gets published. Not a grammar checker. Something that flags (rough sketch of the checks after the list):
Content that doesn't add new information
Content that doesn't answer a question we've actually seen in AI queries
Content that doesn't include something unique to your brand or product
Content that reads like it was written by someone who's never used the thing they're writing about
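To make the idea concrete, here's a minimal sketch of what rule-based checks along these lines could look like. This is not how RCGE v2.2 actually works; the Draft fields, marker lists, and thresholds are invented for illustration, and a real version would need smarter matching (embeddings, the actual query log) rather than simple word overlap.

```python
# Hypothetical sketch only -- not the actual RCGE v2.2 implementation.
# Field names, marker lists, and thresholds are invented for illustration.

from dataclasses import dataclass, field


@dataclass
class Draft:
    title: str
    body: str
    known_queries: list[str] = field(default_factory=list)    # questions actually seen in AI query logs
    unique_elements: list[str] = field(default_factory=list)  # e.g. ["screenshot", "customer quote", "survey data"]


def flag_draft(draft: Draft) -> list[str]:
    flags = []

    # 1. Nothing unique to the brand or product (screenshot, quote, internal data).
    if not draft.unique_elements:
        flags.append("no unique element (screenshot, customer quote, internal data point)")

    # 2. Doesn't answer a question we've actually seen in AI queries.
    #    Rough word-overlap heuristic; a real check would match against the query log properly.
    title_words = set(draft.title.lower().split())
    if not any(len(title_words & set(q.lower().split())) >= 3 for q in draft.known_queries):
        flags.append("title doesn't match any question seen in real AI queries")

    # 3. Reads like the writer never used the thing: no first-hand details in the body.
    markers = ("for example", "e.g.", "we tried", "in our case")
    if not any(m in draft.body.lower() for m in markers):
        flags.append("no concrete examples or first-hand details")

    return flags


if __name__ == "__main__":
    draft = Draft(
        title="Complete Guide to SEO in 2026",
        body="SEO is important. It helps your site rank. Ranking is good.",
        known_queries=["How to fix Google Search Console indexing issues after a site migration"],
    )
    for flag in flag_draft(draft):
        print("FLAG:", flag)
```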


But we're not sure we've got it right. So we're asking for your thoughts.
If you've read AI content that felt genuinely useful, what made it work?
If you've read content that felt like spam, what tipped you off?
What would you want a tool to catch before you hit publish?
We want to make this thing actually helpful, not just another checkbox. Drop your answers in the replies.
Imed Radhouani
Founder & CTO – Rankfender
Making content that AI actually quotes


