SEO used to be human-driven. GEO is model-driven. Do humans still matter?
For 20 years, SEO was a human game.
You wrote for people, optimized for Google's crawlers, and built backlinks by convincing other humans to link to you.
The inputs were human. The outputs were human.
GEO is different. You're optimizing for language models that extract and synthesize. The inputs are structured data, schema markup, comparison tables. The outputs are citations, not clicks.
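To make "structured data" concrete, here is a minimal sketch of the kind of JSON-LD schema markup GEO work revolves around. It's written in Python for illustration; every field value below is a placeholder, not a real page or a Rankfender tool.

    import json

    # Illustrative only: a minimal schema.org Article object, the kind of
    # structured data generative engines parse when deciding what to
    # extract and cite. All values here are placeholders.
    article_schema = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": "Do humans still matter in GEO?",
        "author": {"@type": "Person", "name": "Jane Doe"},
        "datePublished": "2026-01-15",
    }

    # In practice this JSON is embedded in the page inside a
    # <script type="application/ld+json"> tag.
    print(json.dumps(article_schema, indent=2))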
So where does the human fit now?
What the data says about AI's performance:
A 2026 Ahrefs study of 1 million search results with AI Overviews found that 87.8% of cited content contained AI-generated material. Only 8.6% was purely human-written. The machines are already talking to each other.
At the same time, Andrej Karpathy (an OpenAI founding member) set an autonomous agent loose on a GPT-2 training script for 126 overnight experiments. The agent found 20 additive improvements that cut training time by 11%, in a codebase Karpathy himself had already optimized. In 17 hours it rediscovered RMSNorm and tied embeddings; it took the human research community 8 years to formalize those innovations.
AI is objectively good at optimization loops. It doesn't get tired. It doesn't have ego. It just runs.
Where AI still fails without humans:
James Allen, a technical SEO specialist writing for MarTech, points out that AI "can't distinguish between empirical data and subjective opinion" when pulling from the open web. It doesn't know what's real. It only knows what's written.
Our own analysis at Rankfender tracked 473 pieces of AI-generated content. The top 10% by citations were AI-generated but edited by humans. They averaged 6.1 citations each. Fully automated pieces averaged 1.2.
The difference? Humans added something AI couldn't. A screenshot from a real dashboard. A quote from a customer support ticket. A specific data point from an internal survey. Content with these "original experience signals" got cited 2.8x more often.
The trust gap:
Google's E-E-A-T framework now explicitly prioritizes "Experience" as a trust signal. Yotpo's 2026 GEO guide notes that "90% of young adults respond more favorably to imperfections that signal reality than to polished, corporate perfection." AI generates polished. Humans generate imperfect. The market prefers imperfect.
The same guide reports that brands cited in AI Overviews see 35% more organic clicks and 91% more paid clicks than those that aren't. But the traffic that converts is the traffic that verifies. AI summaries give users the what. Humans provide the why.
The economic reality:
The ClickUp 2026 SEO report found that 86% of enterprise SEO teams report stronger results when humans shape AI use. Technical SEO specialists still average $97,500 annually. But purely manual content roles face downward pressure.
The workforce is splitting.
One path: manage AI tools. The other: get left behind.
What this means:
AI optimizes for extraction. Humans optimize for connection. The two aren't the same.
AI can write cleaner structure. It can generate 100 pages in the time you outline one. It can analyze citation patterns across 7 models and tell you exactly where your gaps are.
But AI can't sit in a customer support thread and feel why people are confused. It can't look at a pricing page and know that "Enterprise" is the wrong word because your customers call it "Pro." It can't tell the difference between "technically correct" and "actually helpful."
The role of the SEO is shifting. The technical work is increasingly automated. The creative work is more valuable than ever: understanding the customer, finding the unique angle, knowing what to say that no one else is saying.
What I'm curious about:
In your work, where is AI taking over? And where do you find yourself adding value that AI still can't touch?
Imed Radhouani
Founder & CTO – Rankfender
Evidence-based product development


