Control group: a two-hour roadmap review meeting. Six people in a room (virtual). We debated features. We argued about timelines. We discussed dependencies. We left feeling productive.
Test group: We fed the same roadmap into Claude. No slides. No politics. No one trying to protect their pet project. Just the raw plan. The prompt: "Analyze this roadmap. Identify the three most likely failure points. Use first principles reasoning. Assume we will follow your recommendations without ego. If you need more data, ask for it."
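If you want to run the same test yourself, here's a minimal sketch using the Anthropic Python SDK. The model name, roadmap file path, and token limit are placeholders, not the exact setup described here:

```python
# Minimal sketch: send a raw roadmap document to Claude with the
# first-principles failure-analysis prompt above.
# Assumptions: the roadmap lives in roadmap.md, the model name and token
# limit are placeholders, and ANTHROPIC_API_KEY is set in the environment.
import anthropic

PROMPT = (
    "Analyze this roadmap. Identify the three most likely failure points. "
    "Use first principles reasoning. Assume we will follow your "
    "recommendations without ego. If you need more data, ask for it."
)

with open("roadmap.md") as f:
    roadmap = f.read()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=2000,
    messages=[{"role": "user", "content": f"{PROMPT}\n\n{roadmap}"}],
)
print(response.content[0].text)
```

No slides, no framing, no pre-alignment. The raw plan and the prompt.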
Everyone is panicking about the March 2026 Core Update. It started rolling out on March 27 and will take up to two weeks to complete. The spam update hit just three days earlier and finished in 19.5 hours, the fastest spam update on record.
But here's what the data actually says.
JetDigitalPro analyzed 600,000 web pages across the update period. The correlation between AI usage and ranking penalties was 0.011, effectively zero. Google isn't penalizing AI content. It's penalizing low-value content that happens to be AI-generated.
Websites relying on mass-produced AI output without human oversight saw traffic drops of 60-80%. Affiliate sites were hit hardest: 71% saw negative impacts.
Hey PH Community! We've been heads-down building. Four new things in the works. I want to know which one matters most to you.
RASE v1.0: App Store Intelligence
Tracks how your mobile app appears in AI answers (ChatGPT, Perplexity) and in store search. If you build apps, this tells you where you're visible and where you're invisible.
We built a model to generate 1,000 questions that people actually ask. Not random prompts. We scraped 50,000 real user queries from search logs, forum threads, and support tickets across 12 industries. We clustered them by intent and generated 1,000 representative questions.
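For the curious, here's roughly how that intent-clustering step can work. This is a simplified sketch, not the production pipeline: it assumes the scraped queries sit in a plain text file, uses sentence-transformers embeddings plus k-means, and takes the query closest to each cluster centroid as the representative question.

```python
# Simplified sketch of intent clustering: embed queries, cluster them,
# and take the query nearest each centroid as a representative question.
# Assumptions: queries.txt has one query per line, sentence-transformers
# and scikit-learn are installed, and 1,000 clusters are requested.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

with open("queries.txt") as f:
    queries = [line.strip() for line in f if line.strip()]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
embeddings = model.encode(queries, normalize_embeddings=True)

kmeans = KMeans(n_clusters=1000, n_init="auto", random_state=0).fit(embeddings)

# For each cluster, pick the query closest to the centroid.
representatives = []
for c in range(kmeans.n_clusters):
    members = np.where(kmeans.labels_ == c)[0]
    if len(members) == 0:
        continue
    dists = np.linalg.norm(embeddings[members] - kmeans.cluster_centers_[c], axis=1)
    representatives.append(queries[members[np.argmin(dists)]])

print(len(representatives), "representative questions")
```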
We asked those same 1,000 questions to 5 AI models: ChatGPT (GPT-4), Gemini (Ultra), Perplexity (Pro), Claude (4.5 Sonnet), and Llama (3). We ran the experiment daily for 30 days. We tracked every citation at the source level.
The goal: measure citation overlap. How often do these models cite the same source for the same question?
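Overlap here means: for a given question, how similar are the sets of sources each pair of models cites? A minimal sketch of that metric (Jaccard similarity over cited domains, averaged across questions) looks like this. The in-memory data layout is illustrative, not the actual storage format.

```python
# Minimal sketch of citation overlap: Jaccard similarity between the sets
# of domains two models cite for the same question, averaged over all
# questions both models answered. Example data below is invented.
from itertools import combinations

# citations[model][question_id] -> set of cited domains (illustrative data)
citations = {
    "chatgpt":    {"q1": {"nytimes.com", "wired.com"}, "q2": {"g2.com"}},
    "perplexity": {"q1": {"wired.com", "reddit.com"},  "q2": {"g2.com", "capterra.com"}},
    "gemini":     {"q1": {"wired.com"},                "q2": {"capterra.com"}},
}

def jaccard(a: set, b: set) -> float:
    """Share of sources the two answers have in common."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def mean_overlap(model_a: str, model_b: str) -> float:
    """Average Jaccard overlap across all questions both models answered."""
    shared = set(citations[model_a]) & set(citations[model_b])
    scores = [jaccard(citations[model_a][q], citations[model_b][q]) for q in shared]
    return sum(scores) / len(scores) if scores else 0.0

for a, b in combinations(citations, 2):
    print(f"{a} vs {b}: {mean_overlap(a, b):.2f}")
```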
The code works. The design sings. Customers who find you, love you.
But here's the problem: AI will never just know.
Unlike Google, which crawls everything and figures it out eventually, AI learns from patterns. And if your product doesn't fit those patterns, you simply don't exist.
For 20 years, SEO was a human game. You wrote for people, optimized for Google's crawlers, and built backlinks by convincing other humans to link to you. The inputs were human. The outputs were human.
GEO is different. You're optimizing for language models that extract and synthesize. The inputs are structured data, schema markup, comparison tables. The outputs are citations, not clicks.
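To make "structured data" concrete, this is the kind of machine-readable markup GEO cares about: a hedged sketch that emits a schema.org SoftwareApplication JSON-LD block from Python. The product details are invented placeholders, and which fields any given model actually weighs is still an open question.

```python
# Sketch: emitting schema.org JSON-LD, the kind of structured data that
# crawlers and language models can extract directly. All product details
# below are invented placeholders.
import json

schema = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "ExampleApp",  # placeholder product name
    "applicationCategory": "BusinessApplication",
    "operatingSystem": "iOS, Android",
    "offers": {"@type": "Offer", "price": "29.00", "priceCurrency": "USD"},
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.7",
        "ratingCount": "312",
    },
}

# Drop this into the page head as a JSON-LD script tag.
print(f'<script type="application/ld+json">{json.dumps(schema, indent=2)}</script>')
```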