Imed Radhouani

What's a startup trend that you secretly think is overrated?

I will go first.

Here are five trends that I think are overrated, based on watching hundreds of startups build, scale, and sometimes fail.

1. AI replacing all customer support.

The hype says AI agents will handle 90% of tickets. The reality is different.

AI is great for "where is my order?" and terrible for "my account is locked and I have a deadline in two hours." The most expensive support issues still need a human. The cost of getting it wrong is a lost customer. The cost of a human is a salary.

The winning teams use AI for tier-one support. They route the hard stuff to humans fast. They do not try to automate their way out of a conversation that actually matters.

2. Going all-in on short-form video.

Every brand is told they need TikTok, Reels, and Shorts. The data tells a different story.

Conversion rates from short-form video are low unless you are selling low-consideration products. For B2B, for SaaS, for high-ticket services, a well-written case study still outperforms a dancing founder.

The winning teams use short-form video for awareness. They do not expect it to close deals. They measure it by reach, not revenue.

3. Remote-first without structure.

Remote work is great. Remote work with no overlap, no documentation, and no async communication culture is a slow death.

The trend says "trust your team to figure it out." The reality is that teams need coordination. The ones that succeed have more process, not less. They just hide it well.

The winning teams have written async updates, scheduled overlap hours, and clear decision-making rules. They are not less structured than office teams. They are more structured.

4. AI-generated content at scale.

The trend is to generate 100 blog posts per week. The data says something else.

90% of that content gets zero citations, zero backlinks, and zero readers. The teams winning with AI content are not publishing more. They are publishing the same amount but using AI to research, outline, and polish. The human still does the thinking.

The winning teams use AI to go faster, not to go louder. They publish less. Each piece has original data, a real example, or a point of view. That is what gets cited.

5. Obsessing over the first hire.

Founders spend months looking for the perfect first engineer, first marketer, first salesperson. The trend says "hire slow, fire fast."

The reality is that speed matters more than perfection. The first hire will leave. The second hire will replace them. The third hire will stay. Over-optimizing the first hire is a form of procrastination.

The winning teams hire for curiosity and hunger, not pedigree. They move fast. They correct fast. They learn what they actually need by hiring someone and seeing what breaks.

What about you?

What is a startup trend that you secretly think is overrated? Not the obvious ones. The one that everyone seems to believe but you are not so sure about.

Imed Radhouani
Founder & CTO – Rankfender
rankfender.com


Replies

Paul McCarron

I'd agree with all of those except the first-hire one. Startups cannot afford to waste time, effort, and money on the wrong person.

Imed Radhouani

@paul_mccarron1 That is fair. The cost of a bad hire is brutal. Not just the money. The months of lost momentum. The energy spent managing someone who is not working out. The ripple effect on the team.

The nuance is that waiting for the perfect hire also has a cost. The work does not stop. The problems do not pause. The founders end up doing the job themselves, burning out, and then hiring whoever is available because they are desperate.

The teams that succeed seem to have a fast "no." They hire quickly but also cut quickly. The mistake is not the bad hire. It is keeping the bad hire for six months.

What is your threshold for cutting someone loose? How many weeks do you wait before you know?

Samir Asadov

Mine: "AI-generated financial models will replace deal teams."

The narrative says AI agents will spit out merger models, project finance models, and LBOs in minutes and analyst headcount goes to zero. The reality, after building a lot of these on live renewable energy deals: AI is excellent at formula execution, schedule plumbing, and formatting cleanup — the lowest-judgment 60 percent of the work. It is bad at the choices that actually change the answer.

Things AI does not do well but that the model still hinges on: deciding whether a long-term contract counts as revenue or financing, picking a DSCR target for a merchant power project versus a contracted one, sizing the DSRA for a tail with no PPA, choosing an exit multiple range a credit committee can defend, deciding whether to use normalised or as-reported EBITDA for a specific bidder's accretion math. Those are not formula problems. They are deal-context problems where the upstream assumption choice matters more than any output number.

The teams winning with AI in modeling are not the ones that "had AI build the model." They are the ones using AI to compress the documentation, audit trail, and sensitivity packaging — so the human spends 100 percent of their time on the 5 assumptions that move the answer instead of on the 500 cells that don't.

Your customer support frame applies here too: AI is great for tier-one work, terrible at the judgment-heavy 20 percent that decides whether the deal lives or dies. The winning teams route the hard stuff to humans fast.

Imed Radhouani

@samir_asadov That is a perfect example of the pattern. The first 80% of the work looks solvable by AI. The last 20% is where the real decision lives. And that 20% is not harder. It is different. It is about context, judgment, and risk.

The list you gave is the real work. Revenue vs financing classification. DSCR target by project type. DSRA sizing without a PPA. Exit multiple ranges that survive a committee. Those are not formula gaps. They are experience gaps.

The teams that win use AI to handle the plumbing. They use humans to handle the decisions. The AI does not replace the analyst. It makes the analyst faster. The analyst still picks the assumptions. The analyst still defends the numbers. The analyst still gets the blame when the deal goes wrong.

The "500 cells that do not matter" line is the key. Most of the work is formatting and plumbing. That is what AI should do. The human should only touch the cells that change the answer.

What is the one assumption you have seen move a deal more than all the others combined?

Maliik

The AI content one is the one I keep seeing play out. There's a whole "publish 100 SEO posts per month" cottage industry and everyone who tries it ends up with a domain full of rewritten Wikipedia articles that Google eventually notices.

I'd add one to the list: "building in public" as a growth strategy. The idea that sharing your journey equals free marketing. In practice, the only people watching are other founders who are also building, not buying. It's great for accountability and community, but it doesn't feel like the acquisition channel people treat it as.

Elissa Craig

Being overly reliant on AI, especially for support!

I work for a startup, like many of us, and the number of negative experiences we've been able to turn around simply because we had human support (and the customers were THRILLED they were actual humans) is insane. AI has its place, but I really do believe support issues should be handled by a human, especially for global products and brands.