We gave AI our entire competitor tracking data and asked it to predict who would beat us.

by Imed Radhouani

Six months ago, we ran an experiment with our own data.

At Rankfender, we tracked 5 of our competitors across 8 AI systems, logging their share of voice, citation velocity, content gaps, and platform variance. Months of raw numbers sitting in a dashboard.

I pulled 6 months of data and fed it into Claude. One question: "Based on this, who is most likely to overtake us in the next 6 months? Show your work. Use the data. Don't summarize. Give me the numbers."
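
For anyone who wants to reproduce this kind of run, here is a minimal sketch using the Anthropic Python SDK. The CSV export, file name, and model ID are placeholder assumptions, not our actual pipeline; the prompt is the real one.

```python
# Minimal sketch: hand six months of exported tracking data to Claude
# with one question. File name and model ID are placeholders.
import anthropic

with open("competitor_tracking_6mo.csv") as f:
    tracking_csv = f.read()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=4000,
    messages=[{
        "role": "user",
        "content": "Based on this, who is most likely to overtake us in the "
                   "next 6 months? Show your work. Use the data. Don't "
                   "summarize. Give me the numbers.\n\n" + tracking_csv,
    }],
)
print(response.content[0].text)
```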

The answer changed how I think about competition.

The dataset:

  • 5 competitors tracked daily across 8 AI systems (ChatGPT, Gemini, Perplexity, Claude, DeepSeek, Grok, Llama, Mistral)

  • 180 days of citation data

  • 47 tracked keywords across 3 core categories

  • 12,000+ individual citations logged

  • Share of voice calculated per platform per week
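
For reference, share of voice here is just one brand's citations divided by all citations on a platform in a given week. A minimal sketch of that calculation, assuming a flat citation log with one row per citation; the column names are illustrative, not our actual schema.

```python
# Share of voice per platform per week from a flat citation log.
# Assumed columns (date, platform, brand) are illustrative placeholders.
import pandas as pd

citations = pd.read_csv("citations.csv", parse_dates=["date"])
citations["week"] = citations["date"].dt.to_period("W")

# Citations per brand on each platform in each week...
counts = citations.groupby(["week", "platform", "brand"]).size()

# ...divided by all citations on that platform that week.
totals = counts.groupby(level=["week", "platform"]).transform("sum")
share_of_voice = counts / totals * 100
print(share_of_voice.round(1))
```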

What the AI found:

Competitor X's velocity.

Total citations:

  • Month 1: 47 citations

  • Month 3: 89 citations (+89%)

  • Month 6: 178 citations (+100% from month 3)

Our citations over the same period:

  • Month 1: 312

  • Month 3: 341 (+9%)

  • Month 6: 378 (+11% from month 3)

At month 1, we had 6.6x their citations. At month 6, we had 2.1x their citations. The velocity gap was clear: their citations grew 279% over the period against our 21%, more than ten times our rate. At that slope, they would match our total citations in about 14 months.
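
The 14-month figure is a straight linear extrapolation from the monthly totals. A worked check using only the numbers above; the exact answer shifts by a month or two depending on which window you fit the slope to.

```python
# Linear catch-up estimate using only the month totals above.
ours_m1, ours_m6 = 312, 378
theirs_m1, theirs_m6 = 47, 178

our_slope = (ours_m6 - ours_m1) / 5        # ~13 new citations per month
their_slope = (theirs_m6 - theirs_m1) / 5  # ~26 new citations per month

gap = ours_m6 - theirs_m6                  # 200 citations at month 6
months_to_parity = gap / (their_slope - our_slope)
print(round(months_to_parity, 1))          # ~15 with full-period slopes;
                                           # steeper recent slopes pull it under 14
```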

The comparison page gap.

Competitor X published 12 comparison pages targeting our brand in the last 3 months. We published 0 targeting them.

Their comparison pages generated 47 citations in month 6 alone. Our pages targeting other competitors generated 31 citations total in the same month.

The AI flagged this as the highest ROI opportunity we were missing. "You are letting them define the comparison narrative without response."

The platform blind spot.

On ChatGPT, our share of voice was 34%. Competitor X had 12%. On Perplexity, our share of voice was 7%. Competitor X had 11%.

Perplexity accounted for 23% of their total citations. It accounted for 8% of ours.

We had deprioritized Perplexity because our volume there was low. Meanwhile they had built a 4-point share of voice lead in a channel we were ignoring.

What I thought before the analysis:

I was worried about Competitor Y. Big brand. Big budget. They had 42% share of voice on ChatGPT. They were the obvious threat.

The data told a different story. Competitor Y's share of voice had dropped 6 points over 6 months. Their citation growth was flat (+2% total). They were spending money but not gaining ground.

The real threat was Competitor X. Citations up 3.8x in six months. A 4-point lead on Perplexity. 12 comparison pages against us in 90 days. They were building a moat in the channels we had ignored.

What we did with the analysis:

We shifted our content strategy.

  • Months 4-6: published 8 comparison pages targeting Competitor X

  • Increased Perplexity tracking to daily (was weekly)

  • Added Perplexity-specific optimization to our content briefs

Results after 6 months:

  • Competitor X's citation growth slowed from 100% to 34% (last 90 days)

  • Our share of voice on Perplexity went from 7% to 14%

  • The gap in total citations stabilized at 2.1x (down from 2.4x at the peak)

We didn't stop them. But we slowed them down. And we held the gap.

What the data still says today:

Competitor X still has a steeper velocity curve than ours. If we both keep growing at current rates, they'll match our total citations in 14 months.

But we're watching now. And we're not ignoring Perplexity anymore.

What I learned:

The data was there the whole time. I was looking at who was winning today, not who was winning tomorrow. The AI didn't tell me anything I couldn't have seen myself. But it forced me to look at the numbers I was avoiding.

Now every quarter I run this exercise. I feed our tracking data into an AI and ask it to tell me where we're vulnerable. Sometimes it tells me what I already know. Sometimes it tells me what I was hoping wasn't true. Either way, it's better than waiting for the competitor to pass me and then figuring out why.

What I'm curious about:

Have you ever run your own data through an AI and asked it to tell you where you're vulnerable? What did it say that you didn't want to hear?

Imed Radhouani
Founder & CTO – Rankfender
Built on data, not assumptions.

Replies

Joe Red

@imed_radhouani This is exactly why I signed up for Rankfender. I'm tracking 3 competitors in my space. Zero visibility for my own brand so far. Been at it for 3 weeks. The data is interesting but I'm getting impatient.

Quick question: how long did it take you to see your first real improvement after you started acting on the data? Not the full turnaround. Just the first sign that something was working.

Imed Radhouani

@joe_reda11 Joe, fair question, and honestly not the answer most people want to hear.

First signal that something was working: ~4–6 weeks.

Not growth yet, just movement in the right direction.

In our case, it looked like:

  • A few new citations appearing on platforms where we had zero before

  • Small share of voice shifts (like +1–2%)

  • Early pickup on newly published pages (especially comparison content)

The real inflection took closer to 8–12 weeks. That’s when:

  • Citation velocity started compounding

  • Some pages began to consistently get picked up across multiple AI systems

  • Gaps we targeted actually started closing

Important thing: at 3 weeks, you’re still in what I call the “invisible phase”.

You’re producing signals, but AI systems haven’t fully re-indexed / re-weighted you yet.

What I’d watch right now if I were you:

  • Are you getting any new citations at all, even small ones?

  • Are they appearing on more than one platform, or just one?

  • Are your pages getting picked up for specific queries you targeted?

If yes >> you’re on track. It’s just lag.

If no >> it’s usually a positioning or content structure issue, not time.

Most people quit right before the first compounding effect kicks in.

Joe Red

@imed_radhouani Yeah 4-6 weeks makes sense. I'm at 3 weeks and starting to see tiny movement. One new citation on Perplexity last week. Nothing on ChatGPT yet. The comparison pages are starting to get indexed but no citations so far.

The "invisible phase" is exactly how it feels. I'm doing the work but nothing's happening. Good to know I'm not off track. I'll keep watching the small signals and try not to get impatient.

What was the first thing you saw move? A citation on a new platform? A specific keyword? Just trying to figure out what to look for.

Stoyan Minchev

Yes, I did it. I told the AI: this is my project and my documentation, these are the marketing strategies and documents generated by you. What do you think about the application and its chances of success? What can we do to make it better? It answered that it is a solid application that solves serious problems and has a good chance of success, because it has a unique approach.

But it also criticized me a lot. Clear messages came out of that analysis, and I now run it regularly. I reworked most of the messaging, descriptions, and the site so they are optimized for search engines and AI discovery. I changed my pricing as well. The B2B2C strategy came from one of those sessions too.

Imed Radhouani

@stoyan_minchev That's the good kind of criticism. The kind that actually changes how you think about what you're building.

The "clear messages" part is interesting. Most founders assume their messaging is clear because they know what they mean. AI doesn't know what you mean. It only knows what you wrote. So when it says "this is confusing," it's not being mean. It's just telling you what a fresh pair of eyes sees.

The pricing thing is the hardest to get right. We changed ours three times based on AI feedback. Each time it hurt to admit we were wrong. But the version we have now is the one that actually matches what people ask for.

The B2B2C strategy coming from a session like that is the best outcome. Not just fixing what's broken, but finding a new way to think about the market. AI doesn't have ego. It'll tell you the uncomfortable truth you've been avoiding.

What was the most painful change you made based on what it told you?

Stoyan Minchev

@imed_radhouani The pricing, and everything related to the functionality. Because, as you know, every change at a late phase creates a lot of turbulence: implement it, test it, document it, announce it (if it needs to be announced), collect some feedback. It is a lot of work, and a lot of expense.

Being wrong, and having someone tell me that I am wrong, is not a problem. We are wrong all the time, usually, about a lot of things. There are whole books written on that topic :D

Imed Radhouani

@stoyan_minchev Hahaha true. There are whole libraries on how wrong we are. I think my bookshelf alone could fund a small country.
And yeah, the late-phase changes are brutal. The work itself is one thing. The mental tax of admitting you built something in the wrong direction for months is another. You don't just change the code. You change the story you told yourself about why it was right.

The pricing one hurt because we had already announced it. New pricing page. Email to existing users. Everything. Then the data came back and said "this doesn't match what people actually need." We had to roll it back two weeks after launch. That was a fun email to write.

The "we are wrong all the time" line is the only way to stay sane. The problem isn't being wrong. It's staying wrong too long because you don't want to admit it.