I Spent 6 Months Building a Product AI Would Never Mention. Here's What I Learned.
Six months ago, I launched a product.
Beautiful landing page. Great onboarding. Real customers. Solid retention.
One problem: AI never mentioned it.
Not in ChatGPT. Not in Perplexity. Not in Gemini.
We were invisible. And I didn't know why.
So I spent the next 6 months reverse-engineering the answer. Here's what I learned.
What I Built
The product was solid. Nothing revolutionary, but genuinely useful:
Solved a real problem
4.8 stars on reviews
20% MoM growth
Customers who found us, loved us
But "found us" was the problem. Organic discovery had flatlined. Referrals carried us. New users from search? Zero.
Then someone asked ChatGPT about our category. Screenshotted it. Sent it to me.
We weren't there.
Not #1. Not #2. Not #10. Nowhere.
What I Thought Mattered (It Didn't)
| What I Prioritized | What AI Actually Cares About |
|---|---|
| Beautiful design | Structured content |
| Clever copy | Direct answers |
| SEO keywords | Complete questions |
| Backlinks | Original data |
| Social proof | Comparison tables |
I was optimizing for humans. AI is not human.
Lesson #1: AI Doesn't Read, It Extracts
Humans read your About page. They feel your brand voice. They appreciate the design.
AI does none of that.
AI scans your page looking for one thing: answers to specific questions.
If your page doesn't directly answer "what is the best tool for X" in a way AI can extract, you don't exist.
What I fixed:
I stopped writing clever marketing copy. Started writing direct answers.
| Before | After |
|---|---|
| "We help teams collaborate better" | "We're a project management tool for remote design teams. Here's exactly how we work." |
| Feature lists | Problem-solution paragraphs |
| Vague benefits | Specific outcomes |
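One common way to make those direct answers machine-extractable is schema.org's FAQPage markup, which several replies below also mention. Here is a minimal sketch in Python that builds the JSON-LD block; the product description in the example is the one from the table above, and the helper name `faq_jsonld` is my own, not anything from the post:

```python
import json

def faq_jsonld(pairs):
    """Build a schema.org FAQPage JSON-LD object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

# One question-answer pair, written as a direct answer rather than a slogan
block = faq_jsonld([
    ("What is this product?",
     "A project management tool for remote design teams."),
])

# Embed the printed JSON in a <script type="application/ld+json"> tag on the page
print(json.dumps(block, indent=2))
```

The point is less the markup itself than the discipline it forces: every FAQ entry is one complete question with one self-contained answer.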
Lesson #2: AI Loves Comparisons, Even If You Lose
This one hurt.
I avoided mentioning competitors. Why give them free airtime?
Wrong move.
Every AI answer about your category includes comparisons. Always. If you don't provide the comparison, AI finds someone who does, and that someone might be your competitor.
What I fixed:
I created a comparison page against the market leader. Honest. Balanced. Including where they won.
| Before | After |
|---|---|
| Never mentioned competitors | Dedicated "Us vs. Them" page |
| Hoped to win by default | Acknowledged trade-offs |
| Invisible in comparisons | Cited in 40% of AI answers |
One founder told me: "I was scared to compare. Turned out it was the only way to get cited."
Lesson #3: AI Can't Invent, You Must Give It Data
This was the biggest blind spot.
I assumed AI would know we were good. Our product spoke for itself.
AI doesn't know anything. It only repeats patterns it's seen.
If the pattern doesn't include your unique data point, it doesn't exist.
What I fixed:
I started publishing one original data point per month.
| Month | Data Point | Result |
|---|---|---|
| 1 | "We surveyed 100 users about their biggest frustration" | First citation |
| 2 | "Average time to first value: 4.2 minutes" | Cited by 3 platforms |
| 3 | "43% of our users come from referrals" | Quoted in Perplexity |
AI can't invent your data. If you don't publish it, it doesn't exist.
The 6-Month Turnaround
| Metric | Month 0 | Month 6 |
|---|---|---|
| AI mentions (monthly) | 0 | 47 |
| Keywords with presence | 0 | 23 |
| Share of voice | 0% | 18% |
| Organic traffic from AI-influenced searches | 0 | +340% |
| Support tickets about comparisons | 12/month | 3/month |
The product didn't change. The content did.
The 30-Day Experiment (What I'd Do Differently)
If I could go back, here's exactly what I'd do in the first 30 days:
Week 1:
List 10 questions customers actually ask
Write direct answers (300 words each)
Add them as FAQ sections
Week 2:
Create one comparison page vs. biggest competitor
Be honest. Include a table.
Week 3:
Survey 50 customers (one question: "what almost stopped you?")
Publish one data point from the answers
Week 4:
Check AI mentions (or let Rankfender do it)
Repeat what worked
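The Week 4 check can be done by hand before reaching for any tooling: collect AI answers to your category questions and count the fraction that mention you, which is the "share of voice" metric from the turnaround table. A minimal sketch; the sample answers and the brand name "Acme" are made up for illustration:

```python
def share_of_voice(answers, brand):
    """Fraction of collected AI answers that mention the brand (case-insensitive)."""
    if not answers:
        return 0.0
    hits = sum(brand.lower() in answer.lower() for answer in answers)
    return hits / len(answers)

# Hypothetical answers pasted from ChatGPT / Perplexity / Gemini sessions
answers = [
    "The top project management tools are Asana, Trello, and Acme.",
    "For small remote teams, consider Trello or Notion.",
]

print(share_of_voice(answers, "Acme"))  # 0.5
```

Run the same fixed question list each month so the number is comparable over time; substring matching is crude, but it is enough to see whether the needle moves at all.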
How Rankfender Would Have Saved Me 6 Months
I built this playbook the hard way: by failing for half a year.
Rankfender now does this automatically:
RAIVE tells you exactly what questions AI is asking about your category β so you know what to answer.
RCGE v2.2 generates comparison pages and FAQ content based on your competitors and customer questions.
ROSE v1.0 scans your site and ensures every page is structured for AI extraction.
I wasted 6 months so you don't have to.
The Offer
Want to check if AI mentions your product?
DM me:
Your domain
Your top 2 competitors
I'll run a free AI visibility audit and send you:
Every mention across ChatGPT, Perplexity, Gemini
What questions you're answering (and missing)
Your biggest content gap
First 20 DMs get it. No card. No catch.
Your Turn
Three questions:
Have you checked if AI mentions your product?
What's one question you wish AI would answer about you?
Which of these 3 lessons hit closest to home?
Drop a comment. I read every one.
Imed Radhouani
Founder & CTO β Rankfender
Helping products get the AI visibility they deserve



Replies
Great ideas, thank you. I'm about to release an app (helps control doomscrolling, called PIM - Please Inconvenience Me).
I may make comparison tables in blog posts a big part of the marketing. And also think about marketing from the perspective of "what does AI think about my app?"
Rankfender
@sylvia_moestl_vasilik Love the name PIM (Please Inconvenience Me). That's the kind of memorable, contrarian branding that sticks.
You're thinking exactly right on both fronts:
1. Comparison tables are your best friend. In a category like doomscrolling blockers, AI is constantly comparing tools: Freedom vs. Opal vs. ScreenZen. If you're not in those tables, you don't exist. Create "PIM vs. Freedom" and "PIM vs. Opal" pages before you launch. Be honest about where you win (the "inconvenience" angle is unique) and where they win. AI loves that.
2. "What does AI think about my app?" is the question every founder should ask. For PIM, the query isn't just "doomscrolling apps." It's:
"how to stop doomscrolling reddit"
"app that makes it annoying to open instagram"
"inconvenience-based screen time tools"
If you're not in the answers to those questions, you're invisible.
Quick audit offer: When you're ready, DM me your domain. I'll run PIM through Rankfender and send you:
Every mention across ChatGPT, Perplexity, Gemini
What questions AI is answering about your category
Which comparison tables you're missing
No card. Just data. Good luck with the launch; genuinely excited to see what you build.
Rankfender
@josemarin That means a lot, thank you!
Honestly, hearing that you're going to implement it is what makes these threads worth writing.
Quick heads-up: When you do implement, check back in 2-3 weeks and run another audit. The first citation often comes faster than you expect, sometimes within days if you hit the right question.
And if you hit any snags or want a second pair of eyes on what you're building, just DM me. Happy to take a look.
What's the first question you're planning to answer? Curious what your audience is asking.
Rankfender
@josemarin That's exactly the right move, and FinOps is a space where original data will win hard.
AWS cost optimization is full of generic advice. If you publish something unique, like "we analyzed 100 AWS bills and found the top 3 hidden cost drivers," AI will latch onto it fast.
I'll get Cirrondly set up with a 1-month full access trial. No limits, no card needed. Just the full platform so you can:
Track every mention across ChatGPT, Perplexity, Gemini
See what FinOps questions AI is actually answering
Identify content gaps competitors are filling
Monitor progress as you publish
Just DM me the account email and I'll flip the switch.
Really excited to see what you build. FinOps needs more real data. Go make some noise.
What's one original data point you published that surprised even you, and got AI to cite it fastest?
Rankfender
@swati_paliwal Great question! The one that surprised me most was a benchmark study we published internally.
The data point: We analyzed 500 SaaS pricing pages and found that 63% had outdated pricing information on third-party sites, not their own pages: Reddit, G2, old press releases, competitor comparison pages.
I expected maybe 20-30%. 63% shocked me.
What happened next: Within 2 weeks, that stat was cited in 3 different AI answers about "pricing accuracy" and "SaaS trust signals." It's been quoted in Perplexity multiple times since.
Why it worked: It was a problem every founder experiences but no one had quantified. AI had nothing else to cite on that topic.
Honestly? This data point is what led me to build Rankfender.
I realized that if 63% of brands have outdated info floating around, and AI is citing it, founders are losing deals without ever knowing why. The problem wasn't visibility; it was accuracy. So we built RAIVE to track every mention across AI platforms, flag errors, and help founders fix them before they cost deals.
The lesson: Pick a pain point your audience feels but can't measure. Put a number on it. Then build something to solve it.
What's one metric from your users that might surprise even you?
Great advice! Been doing exactly this with my pet identity app (animalid.app). Added FAQ schema, breed guides, comparison content. The 'AI doesn't read, it extracts' lesson hit hardest; we were writing beautiful pet-owner copy that AI completely ignored. Restructured everything around direct answers to questions like 'how do I keep my pet's vaccination records.' Night and day difference.
Rankfender
@jeffla That's such a great example, and exactly what I mean when I say "AI doesn't read, it extracts."
Your before/after is perfect:
| Before | After |
|---|---|
| Beautiful pet-owner copy | Direct answers to questions |
| "We help you care for your pet" | "How do I keep my pet's vaccination records?" |
| AI ignored it | AI cites it |
The vaccination records example is the sweet spot. That's exactly the kind of specific, question-based content AI grabs. Not "we simplify pet care," but a direct, structured answer to a real question people ask.
For Animalid you're also sitting on a data goldmine: pet ownership patterns, breed popularity by region, vaccination compliance rates. Any of those as original data points would get cited fast.
Quick question: after restructuring, how long before you saw AI start picking you up? Days, weeks? Always curious about the timing.
@imed_radhouani Thanks, Imed
On timing: we restructured about 3-4 weeks ago, adding FAQ schema, question-based guides, and direct-answer content blocks. We just ran our first formal benchmark today: 20 queries across GPT-4o, Claude, and Gemini. Honest results: we're only getting picked up on branded queries so far. Generic queries like "best pet health tracking app" still return the incumbents (PetDesk, 11pets, etc.).
So the answer is: not yet for generic discovery, but the branded signal is there. I think the original data angle you mentioned is our best lever β we're sitting on vaccination compliance patterns and breed-specific health data that nobody else is publishing. That's next.
Curious if you've seen a typical lag between restructuring and generic query pickup, or if it's more about hitting a citation threshold?
1. I have checked on occasion to see if we are mentioned. 2. The one question I wish AI would answer about my company: mention us when people are looking for a quantum networking and communications resource. 3. Lesson 3 I assume is met by my blogging. Is there another way?
Rankfender
@bill_x Quick answers to your three:
1. Checking occasionally is where most founders get stuck. AI answers change constantly; 15-20% of results shift daily. If you check once a month, you're missing 80% of what's being said. This is exactly why we built RAIVE to track daily.
2. That's a great question to target. "Quantum networking and communications resources" is your category. Now ask: what are the specific questions people ask? "Best quantum networking tools for researchers," "How to get started with quantum communications," "Quantum networking vs classical." Those are the queries where AI will cite you if you answer them directly.
3. Blogging helps, but there's a better way. Blog posts get lost. What works faster: a dedicated "Quantum Networking Resources" page with structured content, FAQs, and a comparison table. One page that directly answers the question you wish AI would answer. Then link to it from your blog. That page will get cited long before any individual blog post.
Quick offer: When you're ready, DM me your domain. I'll run a free audit and show you exactly what questions AI is answering about quantum networking, and who's winning the citations. Could help you target the right keywords from day one.
@imed_radhouani trying to DM you my domain seatplan.com but can only seem to comment...
Rankfender
@benjackson74 Hey, just saw this! The DM system on Product Hunt can be tricky.
Best way to get started: head to Rankfender and create a free account with your domain. Once you're in, I can personally grant you an extended trial so you can run full audits on seatplan.com and see everything.
No card needed. Just sign up, and I'll flip the switch.
Looking forward to seeing what you find!