Case Study: how Product Hunt can improve AI visibility in 2026

by Andrew Stewart

Product Hunt is best known for its homepage, a daily leaderboard of the most creative and innovative products on the internet. Makers go all out to win launch day, because that visibility matters. Product Hunt also plays a significant role in how products appear in Google search results.

What surprised us was that AI assistants like ChatGPT were rarely citing Product Hunt in product recommendations.

AI assistants such as ChatGPT and Gemini rely on reviews, alternative lists, and structured product information gathered across the web. Product Hunt is strong across all three. In theory, this should make Product Hunt a natural source for AI-driven product recommendations and comparisons. In practice, it was not happening.

We set out to understand why LLMs were not citing Product Hunt and whether we could change that. The most recent Orbit Awards provided a clean test case, and a new tool called @Gauge made the impact measurable. Gauge tracks LLM visibility across major AI models using a large, search-informed set of prompts, giving us a statistically meaningful way to measure citation rate.

We focused on AI dictation apps, the first Orbit Awards category, as a controlled test, and aimed to promote AI visibility through a new style of category page. After several targeted iterations, Product Hunt shifted from near zero AI citations to consistent inclusion across multiple models. We are now rolling these changes out across Product Hunt. Product Hunt is becoming part of the AI retrieval layer.

Key Lessons

We view AI visibility as a new distribution layer. Our goal is to ensure that authentic community signal on Product Hunt is systematically surfaced in AI product research workflows.

1. AI visibility is measurable

Track citation rate like SEO. Instrument it, monitor it, iterate.

2. Terminology drives retrieval

If your language does not match dominant queries, you will not be cited. Naming alone can materially change visibility.

3. Authority beats volume

One high-signal, well-structured page can outperform dozens of lower-quality URLs.

4. Model behavior is volatile

Citation patterns shift after model updates. Continuous monitoring is required.

How we're tracking AI Visibility

There are many tools that systematically run prompts against several AI models each day and measure a product's visibility in the responses. We're using a new tool called @Gauge. We chose it because a) their citation tracking is more useful than the alternatives we've considered (in part after they quickly shipped some of our feature requests!), and b) their prompt generation seems to be quite good, allowing us to create representative prompts to track without introducing bias.

For the purpose of this post, we did not alter the prompts generated by Gauge, and monitored visibility with respect to those prompts over time. This is directionally valid, even if it is not a good absolute measurement.
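To make "citation rate" concrete, the core measurement can be sketched as follows. This is our own hypothetical reconstruction, not Gauge's actual implementation: run a fixed prompt set against a model each day, record the URLs each answer cites, and count the share of answers citing your domain.

```python
# Hypothetical sketch of citation-rate tracking (not Gauge's real code).
# Each entry in `answers` is the list of URLs one model answer cited.
from urllib.parse import urlparse

def citation_rate(answers, domain="producthunt.com"):
    """Share of answers that cite at least one URL on `domain`."""
    if not answers:
        return 0.0
    hits = sum(
        1 for cited_urls in answers
        if any(urlparse(u).netloc.endswith(domain) for u in cited_urls)
    )
    return hits / len(answers)

# One illustrative daily run of three prompts:
daily_run = [
    ["https://www.producthunt.com/categories/ai-dictation",
     "https://www.reddit.com/r/macapps/example"],
    ["https://zapier.com/blog/dictation-apps"],
    ["https://www.youtube.com/watch?v=example"],
]
print(citation_rate(daily_run))  # 1 of 3 answers cites Product Hunt
```

Tracked daily per model, this one number is what the charts in this post plot over time.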

We care about cross-model performance, but we have focused on ChatGPT and Google AI Overview as we believe they are the highest-impact channels.

Wispr Flow and SuperWhisper AI visibility

We'll showcase how Product Hunt now contributes to significant LLM visibility for a well-known product and a promising underdog.

Wispr Flow

@Wispr Flow has very strong AI visibility. But Wispr Flow's visibility in ChatGPT was cut in half following a ChatGPT update:

The same update appeared to cause ChatGPT to cite our article much more frequently. As a result, we significantly softened the blow of Wispr Flow's visibility drop:

  • Wispr is mentioned in 13% of ChatGPT answers that research and compare voice-to-text tools.

  • Product Hunt pages that mention Wispr are cited in 6.7% of those answers. Wispr Flow is mentioned in ~62% of these citations (not shown), meaning Product Hunt is contributing to around 32% of Wispr Flow's ChatGPT visibility.

It's important to point out that we don't know how much of this is causal, i.e. what Wispr Flow's visibility would look like without these citations.
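The 32% figure follows directly from the numbers above. The percentages come from Gauge; the arithmetic below is ours, and as noted it measures co-occurrence, not causation:

```python
# Share of ChatGPT answers (researching voice-to-text tools) that mention Wispr Flow.
wispr_mention_rate = 0.13
# Share of those answers that cite a Product Hunt page mentioning Wispr.
ph_citation_rate = 0.067
# Share of those Product Hunt citations whose answers also mention Wispr Flow.
mention_given_citation = 0.62

# Fraction of Wispr Flow's ChatGPT visibility that co-occurs with a PH citation.
ph_share = ph_citation_rate * mention_given_citation / wispr_mention_rate
print(f"{ph_share:.0%}")  # prints 32%
```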

Superwhisper

@superwhisper is another excellent product, with less AI visibility than Wispr. Once again, Product Hunt plays a critical role in Superwhisper being visible in LLM search.

For some reason, Google AI Overview mentions Superwhisper more often (in 5.5% of answers) than ChatGPT does (1.6%). In both cases, Product Hunt drives a meaningful share of that visibility, but the effect is more pronounced in Google AI Overview.

We shipped a meaningful change to our page around Jan 22 (see below). After this change, Superwhisper is mentioned in answers that cite our page around 1.6% of the time.

Notably, our visibility comes almost entirely from one URL (a second URL is cited 0.1% of the time). In comparison, the other unbiased sources (Reddit and YouTube) have dozens of URLs contributing to visibility:

And we appear to be contributing twice as much to Superwhisper's AI visibility as the best-performing Reddit thread or YouTube link (not shown).

This signals that LLMs consider Product Hunt pages to have high signal, authenticity, and authority.

How to AEO

So what are the lessons we learned from optimizing Product Hunt pages for LLM visibility? We feel we've barely scratched the surface with this small, targeted experiment, but we've discovered that we seem to have the leverage needed to move the needle.

Page content

We have done a lot of experimentation on what we show on category pages.

Prior to revamping the AI Dictation Apps page, there were essentially no citations. (Unfortunately, our Gauge data doesn't go back far enough to see this cleanly.) Changing from the old version to the new "roundup" page created a baseline citation rate of around 0.4% across all models.

Recently, we’ve begun sourcing frequently asked questions from community content. When we shipped this, the ChatGPT citation rate 10x'd.

We initially attributed a citation jump to the FAQ addition. In retrospect, the timing aligns more closely with a ChatGPT model update (see below), making the causal link unclear.

However, other models did begin to cite the page more often. For instance, Google AI Overview citations roughly doubled after introducing the FAQ.

While the impact is much noisier, the same feature applied to other category pages increased search impressions by nearly 200%. This is evidence that Q&A content will increase AI visibility beyond our dictation app category page.
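The article doesn't describe how the FAQ content is marked up, but a common way to make Q&A content machine-readable is schema.org FAQPage structured data, embedded in the page as JSON-LD. A hedged sketch with invented example questions (this is not Product Hunt's actual markup):

```python
import json

# Illustrative FAQPage structured data (schema.org vocabulary).
# The question and answer text below are invented for illustration,
# not sourced from Product Hunt's real category pages.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the best free AI dictation app?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Community reviews on this page compare the most "
                        "frequently recommended free options.",
            },
        },
    ],
}

# This JSON would be embedded in the page inside a
# <script type="application/ld+json"> tag for crawlers to pick up.
print(json.dumps(faq, indent=2))
```

Structured markup like this gives crawlers an unambiguous question/answer pairing instead of forcing them to infer it from page layout.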

SEO optimization

Under the hood, LLMs use web search to research on the user's behalf. LLM search queries are different from human Google queries, but there are similarities. For AI dictation apps, we realized that most humans look for "speech-to-text software," not "AI dictation apps." We changed the title of the category page from "The best AI dictation apps" to "The best AI dictation and speech-to-text software" (on January 7). This tripled our category page citation rate overnight.

So, SEO fundamentals are a key precursor to AI visibility. This should surprise nobody, but this minor change is a great anecdote highlighting the importance of SEO fundamentals.

Hard-won lessons

LLM games

ChatGPT and other AI bots can still be easily gamed. Old-timers will remember the days of keyword stuffing to game Google search results. We are in the early, easily-gamed phase of LLM search, similar to early SEO. A common tactic right now is mass-producing authoritative-sounding listicles where the publisher names their own product as “best” across multiple categories. LLMs scrape and confidently cite this content.

This dynamic rewards self-promotion over user signal. We believe that authentic, user content is the best way to inform product selection decisions. We also believe that it is in the best interest of OpenAI, Google, and Anthropic to address this gaming to serve their users.

AI bots receive frequent updates, and for some of these updates we see Product Hunt's citation rate rise as some unbiased (Zapier, Reddit) or biased (Wisprflow.ai) sources fall, while other biased sources (speechify.com) jump significantly.

LLM differences

Somewhat unsurprisingly, there is a lot of variability between chat agents (ChatGPT vs. Gemini vs. Claude, etc.), and each agent implements web search differently. Our AI Dictation Apps category page is almost never cited by Microsoft Copilot, which uses Bing for web search, or by Perplexity, which we were unknowingly blocking due to a Cloudflare<>Perplexity feud.

Our opinion is to focus on the chatbots where your users actually search. Visibility is model-specific.

Where are we heading

We see all of the hard work that makers put into their launches. After launch, Product Hunt users leave genuine reviews and toss around ideas in discussions. This information is tremendously helpful for those doing product research on ChatGPT and other AI chatbots.

We view AI visibility as a new distribution layer. Our goal is to ensure that authentic community signal on Product Hunt is systematically surfaced in AI product research workflows.


Replies

Dead Head Studio

I recently created a launch page but have been trying to figure out the most optimal approach to get the word out. From the terrible marketing I have deployed so far, it appears the product is liked, but I have not had much put into outreach. Thanks for the tips!

Deepak Gupta
Really solid case study, @andrew_g_stewart. The finding that terminology alignment (speech-to-text vs. AI dictation) tripled citation rate overnight is something we see consistently across B2B SaaS companies too. One pattern worth adding from what we have observed working on AI search optimization at GrackerAI: the gap between "mentioned by the LLM" and "recommended by the LLM" is massive. A lot of companies celebrate getting cited, but the real business impact comes from being in the consideration set when the AI frames a response as a recommendation vs. just listing options. The language models treat review-backed mentions very differently from informational ones, which is exactly why PH community signal is so valuable here. Yea, biased sources gaming citations. We have tracked cases where a single self-published "best of"/"top" listicle outranks dozens of authentic review sources in LLM responses. The encouraging thing is that with each model update, the weighting seems to shift toward platforms with genuine user signals. PH is well-positioned for that long game. Have you seen any difference in citation behavior when products have active discussion threads vs. just reviews? In my experience, conversational content (like PH forums) tends to get pulled into LLM responses more than structured review pages.
Philipp

Thanks for sharing your insights. For sure it is important to get cited by LLMs. However, I am asking myself if these are as valuable as former Google rankings. Are people actually visiting the page, or are they staying within the chat app and never visiting the site? This might have a big impact on website owners and monetisation potential. What does your data say? Did people actually visit your site after they saw you in the chat?

Vadim Ermolin

That's a very valuable read! I notice how Product Hunt and Reddit can boost your visibility in LLMs.

However, can you please "fix the reviews" part, cause I already had 3 customers for my product who left the reviews here, and none of them are visible? The support team states they are visible, but I must be blind or something.

I plan to run an Email campaign inviting my 1500+ users for a review on Product Hunt, just want to be sure it will work!

Abdul Rehman

Really appreciate you sharing the behind-the-scenes thinking here. Treating AI visibility like SEO feels like the right mental model for 2026. Love seeing PH adapt early 🚀

Joe

Citation rate as KPI. They are treating AI assistants as distribution intermediaries, and they’re optimizing content structure, terminology, FAQ schema, and page authority to increase retrieval probability.

Joe

And what’s wild is how small changes made big impact. Changing “AI dictation apps” to “AI dictation and speech-to-text software” tripled citation rate overnight.

Yukendiran Jayachandiran

@andrew_g_stewart The "terminology drives retrieval" finding has a second-order effect worth highlighting.

The gap between how makers describe their products and how users query AI assistants is significantly wider than in traditional search. A maker writes "intelligent data extraction pipeline with schema inference." A user asks ChatGPT "how do I get product prices from any website without writing code."

In traditional SEO, Google understood synonyms and related terms well enough to bridge this gap. LLMs are more literal about matching domain vocabulary. If your product description never uses the exact phrasing people actually type into ChatGPT, you will not get cited regardless of product quality.

For makers, the highest-leverage AEO move might not be structured schema or FAQ pages. It might be doing real user interviews, documenting the exact natural language people use to describe the problem, and using those phrases verbatim in your product listing and documentation.

I have been auditing my own PH listing with this lens and realized almost every line was written in builder-speak, not buyer-speak. Suspect most makers here have the same blind spot.

swati paliwal

The case study nails the AI visibility angle, but the bigger story is what happens after discovery.

Yes, LLM citations are the new distribution layer. But users know AI is “searching for them.” They don’t stop there. They validate. Users are smart!

A few things I’m seeing across the ecosystem:

• Secondary research is real. After an LLM mentions a product, users click into Reddit, Product Hunt, and community threads & not just the discovered product homepage.
• Community citations get more trust-clicks. A Reddit discussion or PH review is 100% more credible than a polished landing page.
• Website traffic is going down. Even when visibility increases, direct site visits don’t always scale proportionally.

So the winning strategy isn’t just “optimize pages for LLM parsing.”

It’s:

  1. Structured, AI-readable content

  2. Strong presence in community ecosystems

  3. Real human signals (reviews, discussions, comparisons)

AI drives discovery and communities drive decisions.

Ruxandra Mazilu

Congrats and thank you for sharing your work! Super interesting insights.

If you had to pick, what would be the main thing you would advise new founders to prioritize regarding their AI visibility on Product Hunt?

(also, my context - I've just started preparing the launch page for PostGod)