Imed Radhouani

What's something you're embarrassed to admit you still do manually even though AI could do it?

I'll go first.

I still reply to every comment manually. Reddit, LinkedIn, Product Hunt, forums, Twitter, Discord. Every single one.

AI could do this. There are tools that generate replies, post on schedule, analyze sentiment, even mimic your brand voice. But I don't use them. Here's why.

A 2024 study on community engagement across 500 brands found that personalized responses drive 3.2x higher retention and 4.7x more repeat interactions than automated replies. People can tell when a response is copy-pasted. They can feel when no one actually read their comment. The average user only needs 2-3 automated interactions before they disengage entirely.

Another data point: For one SaaS company, switching from automated replies to manual responses on Reddit increased trial signups from community channels by 210% in 90 days. The cost was 15 extra hours per week. The ROI was clear.

So I type every reply myself. I read every comment. I think about what they actually asked. I write something back that shows I listened. It takes hours. Sometimes an entire morning.

But the conversations that start from a real reply are worth more than 100 automated posts. Those people become customers. They become advocates. They become the ones who defend you when someone else complains.

I know AI could do it faster. I know the data says automation scales. But trust doesn't scale. And in B2B, trust is the only thing that closes deals.

What's yours? What manual thing are you still doing because automation would ruin it?

Imed Radhouani
Founder & CTO – Rankfender
rankfender.com

185 views

Replies

Ivan Anisimov

Spot on, Imed. Trust doesn't scale, but it does compound. We are entering an era where 'Hand-written' (or hand-typed) will be the ultimate differentiator for brands. I still do my outreach sequence personalizations manually. The 210% increase you mentioned doesn't surprise me—people have a 'Bot-dar' now, and once they detect automation, the relationship is dead before it starts.

Imed Radhouani

@ivan_anisimov4 "Bot-dar" is the perfect term for it, Ivan. People can smell automation from a mile away now. Not because the writing is bad. Because it lacks the small imperfections that make a message real. The typo you leave because you're typing fast. The slightly odd phrasing that shows you actually thought about the question. The reply that comes 3 hours later, not 3 seconds.

The relationship being dead before it starts is the real cost. You might get more volume with automation. But each interaction is shallower. The trust never builds.

The 210% number came from a cohort that switched from automated replies to manual responses on Reddit. The manual ones took more time. But the people who replied became customers. The automated ones just scrolled past.

What's the most obvious "bot" interaction you've seen recently?

Ivan Anisimov

@imed_radhouani You hit on a vital point: the uncanny valley of perfection. When a reply is too fast and too 'clean,' the brain flags it as a transaction, not a conversation.

The most obvious 'bot' interaction I see lately is the AI-summarized agreement: where a bot replies by simply rephrasing my own post back to me with a 'Great insights! I totally agree that...' followed by a generic bulleted list. It adds zero value and instantly kills any desire I had to engage. It’s like talking to a mirror that’s trying to sell you something.

Imed Radhouani

@ivan_anisimov4 That "shadow-ignore" is the part you never see. No angry comment. No complaint. Just silence. The person never engages again. And you never know why.

The screenshot-and-mock trend is real. I've seen it happen to a competitor. Someone posted a bot reply that was technically fine but completely off. The thread turned into a roast. The brand looked foolish. All because no human bothered to read the post before hitting send.

The AI-summarized agreement is the worst because it feels condescending. You wrote something thoughtful. The bot says "great insights" then tells you back what you just said. That is not engagement. That is a waste of time.

Isaac asked if people call it out directly. Some do. Most just leave. The ones who call it out are doing you a favor. The ones who leave silently are the real loss.

Isaac Dominic

@ivan_anisimov4 Do people ever call out automated messages directly?

Ivan Anisimov

@isaac_dominic1 Working on and developing dozens of platforms, I can tell you: yes, they absolutely do, but often it’s passive.

Most people don’t comment 'This is a bot,' because they don’t want to waste more energy on something that didn't value their time in the first place. They just 'shadow-ignore' you—they disengage from the brand entirely.

However, on platforms like Reddit or X (Twitter), the call-outs are becoming more aggressive. I’ve seen threads where a 'perfectly crafted' AI response gets screenshotted and mocked for being tone-deaf. When you build platforms, you realize that negative sentiment spreads 10x faster than positive engagement. One caught bot-reply can wipe out months of manual trust-building.

Savannah Ross

@ivan_anisimov4 Personal touch is becoming more rare and valuable

Imed Radhouani

@savannah_ross1 That's the shift no one saw coming. We spent a decade automating everything to save time. Now the rare thing is not speed. It's a reply that shows someone actually read what you wrote. The value flipped.

The companies that win now are not the ones with the fastest responses. They are the ones with the most human ones.

Faysal Fateh

Replying by yourself is the best thing you can do nowadays. Every automated reply/comment sounds so fake. I also try to reply by myself on X and LinkedIn.

I would say the one thing I do manually is writing prompts (sometimes).

Imed Radhouani

@faysal_fateh The fake reply problem is real. You can tell when someone copy-pasted a response or had AI generate something generic. It feels empty. You'd rather get no reply than a fake one.

Writing prompts manually is the part that takes time but saves you from generic outputs. The moment you let AI write the prompt, you're already downstream of someone else's thinking. The prompt is where the strategy lives. The output is just execution.

What does your prompt-writing process look like? Do you have templates, or do you start from scratch every time?

Faysal Fateh

@imed_radhouani Agreed on the fake reply problem.
I don't use one fixed template; it changes based on what I'm trying to get out of it.

But I usually follow a simple structure in my head:

context → what it's for → what I want → tone/constraints/any other specific instructions

Most of the thinking goes into the context and the “what I want” part. If that’s clear, the output is usually solid. If not, no prompt trick really saves it.
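That structure can be sketched as a tiny prompt builder. This is just an illustration of the context → purpose → want → constraints shape; the function and field names are made up, not anyone's actual tooling:

```python
# Illustrative prompt builder: assemble the four parts in order.
# Most of the real work is writing a good "context" and "want".
def build_prompt(context: str, purpose: str, want: str, constraints: str = "") -> str:
    """Join the parts with blank lines; constraints are optional."""
    parts = [
        f"Context: {context}",
        f"This is for: {purpose}",
        f"What I want: {want}",
    ]
    if constraints:
        parts.append(f"Tone/constraints: {constraints}")
    return "\n\n".join(parts)

# Example: a hypothetical founder writing a community post.
prompt = build_prompt(
    context="B2B SaaS founder who replies to every community comment by hand",
    purpose="a LinkedIn post about why we don't automate replies",
    want="a short post with one concrete number and a question at the end",
    constraints="blunt, first person, no buzzwords",
)
```

If the context and "what I want" lines are clear, the rest is almost mechanical, which matches the point above: no prompt trick saves a vague ask.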
Another thing I love to do is just click the dictate icon and speak; that's the best option.

Imed Radhouani

@faysal_fateh The dictate idea is underrated. Speaking bypasses the perfection filter. When you type, you edit as you go. When you speak, the raw thought comes out. That raw version is often better.

The structure you use is solid. Context → purpose → desired output → tone. Most people skip the context part. They jump straight to "write me a post about X." Then they wonder why the output sounds generic. The context is what makes it not generic.

Najmuzzaman Mohammad

Reading every git diff line by line before merging. I know AI can summarize PRs and flag risky changes, and I have even built review workflows that do exactly that. But I still open the diff myself every single time. Part of it is trust, part of it is that the act of reading code written by someone else is how I stay calibrated on what the codebase actually looks like right now. The moment I stop reading diffs is the moment I start making architectural decisions based on a mental model that is three weeks stale.

Imed Radhouani

@najmuzzaman That's the real reason people don't let go. Not because the tools aren't good enough. Because the act of reviewing is how you stay connected to what's actually happening.

AI can flag risky changes. It can summarize PRs. It can even suggest fixes. But it can't feel the drift. It doesn't notice when the codebase starts leaning in a direction you didn't intend. You only catch that by reading the diffs yourself.

The mental model going stale is the quiet killer. You make decisions based on what you think the code looks like, but it changed three sprints ago. The diff is the only truth.

We built ROSE to automate on-page SEO, and we still review every change before it pushes. Not because the AI is wrong. Because if we stop looking, we stop understanding.

Elissa Craig

Honestly, I don't think I'll ever be embarrassed to do something manually. AI 100% has its place and is helpful, but I am afraid of developing an over-reliance on it. Plus, I want to keep my skills, hard and soft, sharp, so I will keep using it in moderation. :)

Faysal Fateh

@elissa_craig That is a really good approach, Elissa. You should not over-rely on it. Use it for efficiency, but the thinking process comes from you.

Imed Radhouani

@elissa_craig @faysal_fateh Elissa, that's a healthy take. The fear of over-reliance is real. The moment you stop doing something yourself, you lose the ability to do it at all. That's dangerous. Not because AI will fail. Because when it fails, you need to be able to step in.

Faysal nailed it. Efficiency is what AI is for. Thinking is still yours. The moment you let AI do the thinking, you're not the builder anymore. You're just the operator.

We use RCGE to generate drafts. But the thinking, the angle, the argument, the voice, that's still human. The AI just types faster.

Question for both of you: What's one skill you refuse to let AI touch?

Elissa Craig

@imed_radhouani Probably anything potentially sensitive to my personal life. I am fine to use it for work and non-work-related things, like trip planning, but when it comes to more emotionally involved problems, I feel that is a human-only thing.

Shota H.

The paradox is that AI scales output, but trust is built in the details. I’ve found that even small manual touches outperform fully automated flows in the long run.

Imed Radhouani

@shota_h Exactly. AI scales the stuff that doesn't matter. Trust is built in the small moments. The reply that shows you actually read their comment. The follow-up that references something specific from the last conversation. The email that doesn't look like a template.

The small manual touches compound. One personalized reply might not move the needle. But doing it 500 times? That's a brand. That's a reputation. That's something AI can't fake because it doesn't actually care.

We've run the numbers. Our manual replies on Product Hunt convert to signups at 8%. Automated ones convert at 0.4%. The volume isn't even close to worth it.

Farrukh Butt

Still writing first-pass copy myself. AI can help polish it, but the raw version usually comes out better when I think through it directly.

Imed Radhouani

@farrukh_butt1 Same here. AI is great at structure, grammar, and speed. But the raw version — the one with the weird example, the slightly offbeat phrasing, the thing you're not supposed to say — that comes from a human. And that's usually what lands.

We use Rankfender's RCGE pipeline to generate drafts. Then we rip half of it out and put the real stuff back in. The AI handles the skeleton. The human adds the soul.

What's the one thing AI always gets wrong about your voice?

Farrukh Butt

@imed_radhouani It usually smooths out the edges. The structure is fine, but it removes the slightly offbeat phrasing and bluntness that make the writing sound like me.

Imed Radhouani

@farrukh_butt1 That's exactly it. The edges are the voice. AI smooths them out because it's trained to be agreeable and clear. But the edges are what people remember. The blunt take. The weird analogy. The sentence that's technically incorrect but feels right.

We've started using the proofreader in RCGE to flag when something is too smooth. It's not perfect, but it catches the moments where the draft sounds like everyone else instead of like us.

Do you ever go back and add the edges back in manually?

Alper Tayfur

For me, it’s important replies to users. AI can help me think faster, but I still want the final words to be mine when trust is on the line.

Imed Radhouani

@alpertayfurr That's the right line. AI can help you think faster. It can outline options, summarize context, even draft a response. But the final words are yours. Trust is built in the details that AI doesn't know to include. The specific reference to a past conversation. The acknowledgment of frustration that wasn't stated directly. The thing you only know because you've been in the trenches.

We use RCGE to generate first drafts of support replies. Then we rewrite half of them. Not because the AI was wrong. Because it didn't know what mattered most. That part is still human.

What's one detail you always add manually that AI misses?

Maliik

Manually reading government recall pages to check if my scrapers parsed them correctly.

I pull data from 41 sources across 13 countries. When a new batch comes in, I open the actual agency page and compare it field by field against what the parser extracted. Title, hazard, classification, affected products. An LLM could probably do this faster and more consistently than me.

But the one time I trusted the output without checking, a source changed its HTML layout and my database silently accepted bad data for three weeks. Everything looked fine at a glance. The classifications were all wrong.

So now I spot-check manually. Every morning. I know it doesn't scale.
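That field-by-field comparison can be sketched in a few lines. This is a hypothetical helper, assuming parsed records are dicts; the field names mirror the ones listed above (title, hazard, classification, affected products), but nothing here is Maliik's actual pipeline:

```python
# Hypothetical spot check: compare what the parser extracted against what a
# human reads on the agency page, field by field, and report any mismatches.
FIELDS = ["title", "hazard", "classification", "affected_products"]

def spot_check(parsed: dict, manual: dict, fields=FIELDS) -> list:
    """Return human-readable mismatch descriptions; empty list means all fields agree."""
    mismatches = []
    for field in fields:
        if parsed.get(field) != manual.get(field):
            mismatches.append(
                f"{field}: parser said {parsed.get(field)!r}, page says {manual.get(field)!r}"
            )
    return mismatches

# Usage: one parsed record vs. what the agency page actually shows.
parsed = {"title": "Recall: Model X charger", "hazard": "fire",
          "classification": "Class I", "affected_products": "SKU-104"}
manual = {"title": "Recall: Model X charger", "hazard": "fire",
          "classification": "Class II", "affected_products": "SKU-104"}
problems = spot_check(parsed, manual)
```

A check like this catches the silent-layout-change failure mode described above, because a parser that starts reading the wrong HTML node produces values that disagree with the page even when the database "looks fine at a glance."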