Imed Radhouani

I Spent 6 Months Building a Product AI Would Never Mention. Here's What I Learned.


Six months ago, I launched a product.

Beautiful landing page. Great onboarding. Real customers. Solid retention.

One problem: AI never mentioned it.

Not in ChatGPT. Not in Perplexity. Not in Gemini.

We were invisible. And I didn't know why.

So I spent the next 6 months reverse-engineering the answer. Here's what I learned.

πŸ—οΈ What I Built

The product was solid. Nothing revolutionary, but genuinely useful:

  • Solved a real problem

  • 4.8 stars on reviews

  • 20% MoM growth

  • Customers who found us, loved us

But "found us" was the problem. Organic discovery had flatlined. Referrals carried us. New users from search? Zero.

Then someone asked ChatGPT about our category. Screenshotted it. Sent it to me.

We weren't there.

Not #1. Not #2. Not #10. Nowhere.

πŸ” What I Thought Mattered (It Didn't)

What I Prioritized → What AI Actually Cares About

  • Beautiful design → Structured content

  • Clever copy → Direct answers

  • SEO keywords → Complete questions

  • Backlinks → Original data

  • Social proof → Comparison tables

I was optimizing for humans. AI is not human.

📚 Lesson #1: AI Doesn't Read — It Extracts

Humans read your About page. They feel your brand voice. They appreciate the design.

AI does none of that.

AI scans your page looking for one thing: answers to specific questions.

If your page doesn't directly answer "what is the best tool for X" in a way AI can extract, you don't exist.

What I fixed:

I stopped writing clever marketing copy. Started writing direct answers.

Before → After

  • "We help teams collaborate better" → "We're a project management tool for remote design teams. Here's exactly how we work."

  • Feature lists → Problem-solution paragraphs

  • Vague benefits → Specific outcomes
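One way to make direct answers machine-readable is schema.org FAQPage markup embedded as JSON-LD. Here's a minimal Python sketch of generating that markup; the question and answer text are placeholders, not the actual site copy:

```python
import json

# Minimal schema.org FAQPage snippet: each customer question gets a direct,
# extractable answer. Question/answer text below is illustrative only.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the best project management tool for remote design teams?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "We're a project management tool for remote design teams. "
                        "Here's exactly how we work: ...",
            },
        }
    ],
}

# Embed the output inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq, indent=2))
```

The point is the shape, not the tooling: one concrete question, one self-contained answer, no clever copy in between.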

📊 Lesson #2: AI Loves Comparisons — Even If You Lose

This one hurt.

I avoided mentioning competitors. Why give them free airtime?

Wrong move.

Every AI answer about your category includes comparisons. Always. If you don't provide the comparison, AI finds someone who does — and that someone might be your competitor.

What I fixed:

I created a comparison page against the market leader. Honest. Balanced. Including where they won.

Before → After

  • Never mentioned competitors → Dedicated "Us vs. Them" page

  • Hoped to win by default → Acknowledged trade-offs

  • Invisible in comparisons → Cited in 40% of AI answers
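An honest comparison page can be generated from a simple data structure, including the rows where the competitor wins. A minimal Python sketch; the product names, rows, and figures are invented for illustration, not real benchmarks:

```python
# Each row: (feature, our value, their value, who wins). Acknowledging the
# rows we lose is the point — that's what makes the page citable.
rows = [
    ("Price", "$12/user", "$25/user", "us"),
    ("Integrations", "40+", "200+", "them"),  # trade-off we admit
    ("Setup time", "5 minutes", "30 minutes", "us"),
]

def comparison_table(us: str, them: str, rows) -> str:
    """Render a markdown comparison table, bolding the winning cell per row."""
    lines = [f"| Feature | {us} | {them} |", "| --- | --- | --- |"]
    for feature, ours, theirs, winner in rows:
        ours_cell = f"**{ours}**" if winner == "us" else ours
        theirs_cell = f"**{theirs}**" if winner == "them" else theirs
        lines.append(f"| {feature} | {ours_cell} | {theirs_cell} |")
    return "\n".join(lines)

print(comparison_table("Acme", "MarketLeader", rows))
```

Swap in your real feature rows; the balance (some bold cells on their side) is what signals honesty to both readers and models.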

One founder told me: "I was scared to compare. Turned out it was the only way to get cited."

🧠 Lesson #3: AI Can't Invent — You Must Give It Data

This was the biggest blind spot.

I assumed AI would know we were good. Our product spoke for itself.

AI doesn't know anything. It only repeats patterns it's seen.

If the pattern doesn't include your unique data point, it doesn't exist.

What I fixed:

I started publishing one original data point per month.

  • Month 1: "We surveyed 100 users about their biggest frustration" → First citation

  • Month 2: "Average time to first value: 4.2 minutes" → Cited by 3 platforms

  • Month 3: "43% of our users come from referrals" → Quoted in Perplexity

AI can't invent your data. If you don't publish it, it doesn't exist.
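A data point like "average time to first value" is just arithmetic over logs you already have. A minimal sketch with invented timestamps (not the author's real numbers):

```python
from datetime import datetime

# Hypothetical per-user signup and first-meaningful-action timestamps.
events = [
    {"signup": "2024-01-10T09:00:00", "first_value": "2024-01-10T09:03:30"},
    {"signup": "2024-01-11T14:00:00", "first_value": "2024-01-11T14:05:12"},
    {"signup": "2024-01-12T08:30:00", "first_value": "2024-01-12T08:34:00"},
]

FMT = "%Y-%m-%dT%H:%M:%S"

def minutes_between(start: str, end: str) -> float:
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    return delta.total_seconds() / 60

avg = sum(minutes_between(e["signup"], e["first_value"]) for e in events) / len(events)
print(f"Average time to first value: {avg:.1f} minutes")
```

The hard part isn't the computation; it's committing to publish the number every month.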

📈 The 6-Month Turnaround

  • AI mentions (monthly): 0 → 47

  • Keywords with presence: 0 → 23

  • Share of voice: 0% → 18%

  • Organic traffic from AI-influenced searches: 0 → +340%

  • Support tickets about comparisons: 12/month → 3/month

The product didn't change. The content did.

🧪 The 30-Day Experiment (What I'd Do Differently)

If I could go back, here's exactly what I'd do in the first 30 days:

Week 1:

  • List 10 questions customers actually ask

  • Write direct answers (300 words each)

  • Add them as FAQ sections

Week 2:

  • Create one comparison page vs. biggest competitor

  • Be honest. Include a table.

Week 3:

  • Survey 50 customers (one question: "what almost stopped you?")

  • Publish one data point from the answers

Week 4:

  • Check AI mentions (or let Rankfender do it)

  • Repeat what worked
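Week 4's mention check can start as something very simple: collect AI answers to your category prompts, then count how often each brand appears. A sketch with invented brand names and answer texts:

```python
# Paste in the AI answers you gathered for your category prompts.
# These three answers and all brand names are made up for illustration.
answers = [
    "For remote design teams, popular options include Acme and DesignHub.",
    "Top tools in this space: DesignHub, Acme, and Flowboard.",
    "Many teams use Flowboard for this workflow.",
]

def share_of_voice(brand: str, answers: list[str]) -> float:
    """Fraction of answers that mention the brand (case-insensitive substring match)."""
    hits = sum(brand.lower() in answer.lower() for answer in answers)
    return hits / len(answers)

for brand in ("Acme", "DesignHub", "Flowboard"):
    print(f"{brand}: {share_of_voice(brand, answers):.0%}")
```

A spreadsheet works just as well; what matters is re-running the same prompts each month so the trend is comparable.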

🚀 How Rankfender Would Have Saved Me 6 Months

I built this playbook the hard way — by failing for half a year.

Rankfender now does this automatically:

RAIVE tells you exactly what questions people are asking AI about your category — so you know what to answer.

RCGE v2.2 generates comparison pages and FAQ content based on your competitors and customer questions.

ROSE v1.0 scans your site and ensures every page is structured for AI extraction.

I wasted 6 months so you don't have to.

🎁 The Offer

Want to check if AI mentions your product?

DM me:

  • Your domain

  • Your top 2 competitors

I'll run a free AI visibility audit and send you:

  • Every mention across ChatGPT, Perplexity, Gemini

  • What questions you're answering (and missing)

  • Your biggest content gap

First 20 DMs get it. No card. No catch.

👇 Your Turn

Three questions:

  1. Have you checked if AI mentions your product?

  2. What's one question you wish AI would answer about you?

  3. Which of these 3 lessons hit closest to home?

Drop a comment. I read every one.

Imed Radhouani
Founder & CTO – Rankfender
Helping products get the AI visibility they deserve

Replies
José marin
That's gold advice. I have the same problem right now, so I'll implement this. Thanks for sharing!
Imed Radhouani

@josemarin That means a lot, thank you!

Honestly, hearing that you're going to implement it is what makes these threads worth writing.

Quick heads-up: When you do implement, check back in 2-3 weeks and run another audit. The first citation often comes faster than you expect β€” sometimes within days if you hit the right question.

And if you hit any snags or want a second pair of eyes on what you're building, just DM me. Happy to take a look.

What's the first question you're planning to answer? Curious what your audience is asking.

José marin
@imed_radhouani Thanks, man. Since I'm building an agent for AWS costs, I'd like Cirrondly to appear as a reference when someone searches for FinOps.
Imed Radhouani

@josemarin That's exactly the right move — and FinOps is a space where original data will win hard.

AWS cost optimization is full of generic advice. If you publish something unique — "we analyzed 100 AWS bills and found the top 3 hidden cost drivers" — AI will latch onto it fast.

I'll get Cirrondly set up with a 1-month full access trial. No limits, no card needed. Just the full platform so you can:

  • Track every mention across ChatGPT, Perplexity, Gemini

  • See what FinOps questions AI is actually answering

  • Identify content gaps competitors are filling

  • Monitor progress as you publish

Just DM me the account email and I'll flip the switch.

Really excited to see what you build. FinOps needs more real data. Go make some noise.

swati paliwal

What's one original data point you published that surprised even you; and got AI to cite it fastest?

Imed Radhouani

@swati_paliwal Great question! The one that surprised me most was a benchmark study we published internally.

The data point: We analyzed 500 SaaS pricing pages and found that 63% had outdated pricing information on third-party sites — not their own pages. Reddit, G2, old press releases, competitor comparison pages.

I expected maybe 20-30%. 63% shocked me.

What happened next: Within 2 weeks, that stat was cited in 3 different AI answers about "pricing accuracy" and "SaaS trust signals." It's been quoted in Perplexity multiple times since.

Why it worked: It was a problem every founder experiences but no one had quantified. AI had nothing else to cite on that topic.

Honestly? This data point is what led me to build Rankfender.

I realized that if 63% of brands have outdated info floating around, and AI is citing it, founders are losing deals without ever knowing why. The problem wasn't visibility — it was accuracy. So we built RAIVE to track every mention across AI platforms, flag errors, and help founders fix them before they cost deals.

The lesson: Pick a pain point your audience feels but can't measure. Put a number on it. Then build something to solve it.

What's one metric from your users that might surprise even you?