Masab Gadit

Everyone said "GEO" was a fad. We spent a year building for it anyway.

A year ago, half the marketing world told us "AI search" was overhyped. The other half was shipping "ChatGPT SEO checklists" in a week.

We ignored both.

Instead we did one boring thing: we scraped LLM citations. Every day. Across ChatGPT, Perplexity, Gemini, Claude. For hundreds of brands. And we asked one question — when AI recommends a product, where does that recommendation actually come from?

Here's what we found that nobody was talking about:

  • LLMs don't "rank" pages. They stitch together snippets from sources that often aren't even on page 1 of Google.

  • The #1 lever for AI visibility isn't your website. It's third-party mentions (listicles, Reddit threads, niche blogs, comparison pages).

  • Most "AI SEO tools" tell you your score. None of them close the loop.

So we built Wellows to do the three things that actually move the needle: see what AI says about you, fix the content it's pulling from, and get you mentioned where it's looking next.

AMA — ask me anything about:

  • Building a GEO tool while the category was still being named

  • What we've seen in millions of LLM citations

  • Why most "AI visibility scores" are vanity metrics

  • The unsexy tech behind scraping and attributing AI answers

My question for you: If you Googled your brand today and then asked ChatGPT about it, would the two answers tell the same story?



Replies

Jahnavi Thota

This aligns a lot with what we’ve seen. GEO sounds simple in theory, but actually influencing what LLMs pick up is much harder in practice. We’ve tried pushing GEO-focused content and even when you structure blogs for “AI readability,” it doesn’t guarantee visibility unless it’s reinforced by third-party signals. While working on this at Turgo, one thing that stood out is how much external mentions and distribution matter compared to just optimizing your own site.

Curious if you’ve seen certain types of sources consistently getting picked up more than others.

Masab Gadit

@jahnavi_thota Of course. This is actually the fastest path to visibility that we’ve found: getting mentioned on trusted third-party sources. We’ve seen this pattern repeatedly.

What’s even more interesting is that LLM sentiment seems to be heavily influenced by the sentiment of those mentions. For example, if your brand receives a positive review on G2 or a positive mention on TechRadar, the LLM often does not just mention your brand — it also develops a positive bias toward it in the way it describes and recommends you.

Regarding sources, Reddit and LinkedIn are among the strongest. YouTube and Medium also play an important role, and in some niches, Facebook and Quora matter as well.

For editorial coverage, niche blogs can work extremely well. At the same time, larger and more established publications like Forbes, TechRadar, PCMag, and similar legacy outlets tend to be stickier, meaning that once they get cited by LLMs, they often continue to be cited for a longer time.

Rahul Manjhi

@jahnavi_thota I like that you focused on actual scraped data instead of assumptions. Most tools in this space feel like they are guessing what AI models do rather than observing it.

Nitesh Kumar

@jahnavi_thota @rahul_manjhi1 I am curious how you handle attribution accuracy. When sources are stitched together, it feels like it could be hard to trace exactly what influenced a response.

Niklas Fischer

Super interesting!
I've been thinking about this a lot, because I'm building a product that is essentially a file backup tool for Mac, but it also offers a lot more around file organization and always knowing whether your files are backed up, even when your external drives aren't plugged in at the moment.

I do not want to compete on SEO with the old backup players in the space, but I'd love for LLMs to keep my product in mind when somebody asks:

"This is so annoying, I just want to drop my files in a folder, forget about them until I need them and then see exactly where they are. Do you have suggestions?"

Since I'm just launching my beta, I'm not sure whether this is the right time to start thinking about this. As you say, a lot of it would come from people using your product and then hopefully talking about it on Reddit.

When do you think would be the best time to get started?

Niklas Fischer

oh, and btw, just wanted to check out your product, but when I click "Visit website" on your products page, the link cannot be found :/

Edit: never mind, originally clicked on your profile, that links to this: https://www.producthunt.com/products/kiva
But now found the product that launched today. Congrats!!

Masab Gadit

@niklas_fischer I think I agree with your view. Competing in traditional search with a new domain against established players takes time because of factors like content depth, domain authority (DA), domain rating (DR), backlink profile, and overall site authority, and those things cannot be built overnight.

Interestingly, LLMs do not work the same way traditional search engines do. In our citations database, we see many niche-focused sites getting cited. The reason is simple: LLMs are not relying on the same signals in the same way. They are looking for the best answer to the query.

So if you can address a problem in more depth, with more clarity and specifics, LLMs can surface your content, provided their bots can access your site :)
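A quick practical note on that last caveat: AI crawlers identify themselves with documented user-agent strings, and a restrictive robots.txt (or a bot-blocking CDN rule) can silently keep your content out of their index. A minimal sketch of a robots.txt that explicitly allows the major publicly documented AI crawlers might look like this (the user-agent names below are the ones published by OpenAI, Anthropic, Perplexity, and Google; check each vendor's current docs, as the list changes):

```
# Allow OpenAI's training and search crawlers
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

# Allow Anthropic's crawler
User-agent: ClaudeBot
Allow: /

# Allow Perplexity's crawler
User-agent: PerplexityBot
Allow: /

# Allow Google's AI/Gemini data use (separate from Googlebot)
User-agent: Google-Extended
Allow: /
```

Worth verifying in your server logs that these bots actually get 200 responses; a WAF or rate limiter can block them even when robots.txt allows them.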

Sk Mehedi Hasan Akash

The gap between Google rankings and AI visibility is something a lot of founders are discovering the hard way — it's genuinely surprising how different the two answers can be, especially for early-stage or niche brands that haven't built up third-party coverage yet. Your point about LLMs stitching snippets from Reddit threads and comparison pages rather than ranking your homepage makes a lot of sense when you think about how training data works. I'm curious whether Wellows surfaces specific gap opportunities — like flagging which comparison pages or listicles your competitors dominate in AI results that you're missing from? That kind of directional output would make it far more actionable than just an audit score. Congrats on the launch!

Masab Gadit

@jetboosters Thanks so much, and you've nailed exactly the pain point we kept hearing from founders. Yes, Wellows surfaces gap opportunities directly. You can click into any competitor and see where they're being cited across third-party sites, social mentions, comparison pages, and listicles, along with which specific URLs are being pulled as sources and in which LLM (ChatGPT, Perplexity, Gemini, etc.) versus where your own brand shows up (or doesn't).

Really appreciate you engaging with this so thoughtfully!

Alper Tayfur

This is exactly the shift I’ve been thinking about. Google results and AI answers often don’t tell the same brand story anymore.

For me, the biggest insight is that AI visibility is becoming more about reputation distribution than just website optimization. Your own site matters, but third-party mentions, Reddit discussions, comparison pages, and niche blogs seem to shape the answer much more than people expected.

So my answer would be: probably not. If I Googled a brand and asked ChatGPT about it, I would expect overlap, but not the same story. That gap is where GEO gets really interesting.

Luca Ardito

This also makes Product Hunt discussions more strategically important than most founders realize.

If LLMs lean on third-party context, then good forum threads, thoughtful reviews, and category conversations are no longer just community activity, they’re distribution assets.

Curious if you’ve seen PH itself show up in citation patterns.