Hitesh


How do you understand the difference between interest and intent?

Two conversations. Same week.

First founder said, "Really interesting product. Love what you're building."

Great energy. Smart questions. Strong validation.

We never heard back.

What if your 95%+ retention hides a 60-day sales cycle?

From the outside, it looks simple.
Strong retention. Happy customers. Steady growth.

What most people don't see: our average deal takes ~60 days to close.

Some move faster. Many don't.
And that changes how you run GTM entirely.

Long sales cycles stretch everything:

Introducing the Flexprice MCP Server.

You shouldn't need to open five dashboards just to change pricing.

Now you don t.

Plug Cursor, Claude Code, VS Code, Gemini, Windsurf, or any MCP-compatible client directly into your Flexprice workspace and prompt your billing infrastructure like it's code.
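For context on how that wiring usually works: MCP-compatible clients like Cursor and Claude Code register servers through a JSON config. A sketch of what that might look like here (the package name and environment variable are assumptions for illustration, not Flexprice's documented setup):

```json
{
  "mcpServers": {
    "flexprice": {
      "command": "npx",
      "args": ["-y", "@flexprice/mcp-server"],
      "env": { "FLEXPRICE_API_KEY": "<your-api-key>" }
    }
  }
}
```

Once registered, the client can call the server's tools in plain language: "pause billing for customer X", "show me last month's overage events".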

What’s one metric you trust more than likes and signups?

Startup land rewards motion.
Announcements, launches, funding headlines, feature drops: it all looks like acceleration.

But visible activity isn't the same as real progress.

Shipping fast doesn't mean you're building the right thing.
Raising capital doesn't mean you found product-market fit.
Talking about scale doesn't mean you solved anything painful.

A lot of ecosystems reward velocity because it's easy to measure.
Markets reward outcomes because they're impossible to fake.

What does “good marketing” even mean in 2026, when everyone can ship and everyone can post?

Emergent isn't just doing marketing. They're making it feel inevitable.

They picked a moment with attention gravity (India AI Impact Summit in Delhi), then stacked surfaces that create "I keep seeing them" energy:

  • Billboards across the city + Economic Times print ads

  • A narrative number big enough to force curiosity: $100M ARR run-rate in 8 months

  • Credibility signals and proximity without being subtle

  • And a product unlock right after: now on mobile, build from your phone

The genius is they're not explaining the product.
They're engineering belief: "this is the platform, everyone's building, you're late."

Can you really do outcome-based pricing if you can’t measure outcomes?

Last week I met a Voice AI company. We barely talked product. The real heat was pricing: not "how much?" but "what exactly are we charging for?"

They don't want per-minute, per-seat, or per-API-call anymore. They want per resolved call, per booking, per qualified lead, per deflection.

Sounds clean. Until you try to define "resolved."
Who validates it?
What if their CRM says something else?
What if attribution breaks?

At that point, the metric becomes the product. And the infrastructure behind that metric becomes the business model.
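To make the "metric becomes the product" point concrete, here's a minimal sketch of per-outcome billing, with made-up names and rates. The key design choice is that an outcome is only billable when two independent sources agree, which is exactly where the validation and attribution questions above live:

```python
from dataclasses import dataclass

# Illustrative rate, not a real price.
PRICE_PER_RESOLVED = 2.50

@dataclass
class CallEvent:
    call_id: str
    agent_marked_resolved: bool  # what the AI vendor's system claims
    crm_status: str              # what the customer's CRM says, e.g. "closed"

def is_billable(event: CallEvent) -> bool:
    """An outcome only counts when vendor and customer records agree."""
    return event.agent_marked_resolved and event.crm_status == "closed"

def invoice(events: list[CallEvent]) -> float:
    """Charge only for validated outcomes; disputed events bill nothing."""
    return PRICE_PER_RESOLVED * sum(is_billable(e) for e in events)

events = [
    CallEvent("a1", True, "closed"),    # both agree: billable
    CallEvent("a2", True, "reopened"),  # vendor says yes, CRM says no: disputed
    CallEvent("a3", False, "closed"),   # never claimed as resolved
]
print(invoice(events))  # 2.5
```

Every disputed event ("a2" above) is revenue that depends on whose system of record wins, which is why the infrastructure behind the metric ends up being the business model.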

Are credits becoming the default pricing language for AI products?

Subscription pricing struggles when value is variable.
Pure usage pricing is accurate, but messy to explain, messy to predict, and easy to hate when the bill surprises you.

Credit-based pricing sits in the middle:

  • Simple for customers: "I bought 10,000 credits"

  • Flexible for teams: bundle tokens, GPU time, storage, calls into one unit

  • Better for finance: prepaid revenue, clearer burn, fewer billing shocks

  • Better for product: you can experiment with packaging without rebuilding billing every time

The bigger trend is this:
We're moving from pricing as a plan to pricing as a runtime.
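The "one unit" idea above can be sketched in a few lines. All rates and resource names here are invented for illustration; the point is that heterogeneous usage collapses into a single prepaid balance:

```python
# Hypothetical conversion table: credits per unit of each metered resource.
CREDIT_RATES = {
    "llm_1k_tokens": 1.0,   # 1 credit per 1,000 model tokens
    "gpu_seconds": 0.5,     # half a credit per GPU-second
    "api_calls": 0.25,      # quarter credit per API call
}

def to_credits(usage: dict) -> float:
    """Collapse heterogeneous usage into one credit total."""
    return sum(CREDIT_RATES[resource] * qty for resource, qty in usage.items())

def remaining(balance: float, usage: dict) -> float:
    """Prepaid model: deduct metered usage from a purchased balance."""
    return balance - to_credits(usage)

usage = {"llm_1k_tokens": 120, "gpu_seconds": 30, "api_calls": 50}
print(to_credits(usage))        # 120 + 15 + 12.5 = 147.5
print(remaining(10_000, usage)) # 9852.5
```

Repackaging then means editing `CREDIT_RATES`, not rebuilding billing, which is the "pricing as a runtime" shift in miniature.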

Why does running one outbound motion feel like orchestrating four different systems?

Every Monday, this is my GTM reality:

  • One tool for prospect discovery + enrichment.

  • One for basic LinkedIn workflows.

  • Another just for LinkedIn messaging.

  • And a separate one for email sequences.

Same list. Same campaign. Different dashboards.

If I want to remove one company, I remove it everywhere.
If I pause outreach, I double-check multiple tools to make sure nothing accidentally goes out.

Is ambition contagious or is burnout?

Spend enough time around driven builders, and your standards rise. You want to ship faster. Do more. Stay ahead.

That part is powerful.

But here's what I've been noticing about myself:

I treat growth as urgent.
I treat health as optional.
Deadlines feel fixed.
Sleep feels flexible.
Momentum feels critical.
Recovery feels negotiable.

Are we confusing chaos with creativity?

Vibes are powerful. They spark ideas fast and give you momentum before overthinking takes over.
But vibes without structure just create noise.

That's where prompt engineering matters.
It's the bridge between inspiration and execution. It turns abstract intent into concrete instruction.

It's what turns "I want something cool" into:

  • Here's the outcome

  • Here's the user

  • Here are the constraints

  • Here are the edge cases

What if the outbound channel you're betting on is the wrong one for your market?

I've been talking to founders across different stages and ICPs, and here's what's surprising: there's no consensus anymore.
1. Cold email is crushing it for some teams and completely dead for others.
2. LinkedIn DMs are either goldmines or ghost towns.
3. And somehow, cold calls are quietly working for a subset of B2B companies.

It feels like the best practice playbooks don't account for how much this varies by your specific ICP, deal size, and market maturity.

So I'm curious about your experience, not what you think should work, but what's actually generating pipeline for you right now. Is it cold emails? Calls? LinkedIn outreach? Or have you found success with a completely different motion?

Would love to hear what's working in your world. What outbound channel is moving the needle for you?

When you launched on Product Hunt, how did you pick your category?

Most founders treat categories like labels.
Product Hunt treats them like distribution.

Categories weren't added to classify products.
They were added because one global feed stopped working.
Too much noise. Too little intent.

Your category decides:

  • who sees you

  • how you're evaluated

  • the quality of feedback you get

How many AI tools do you know, but can’t actually use?

I realized I was stuck in AI FOMO.
Bought multiple courses. Knew every tool by name.
Hadn't built a single working automation.

So I stopped and asked one question:
"What repetitive task can I hand off to AI today?"

Not after another course. Not after learning more. Today.

That shift mattered.

YC RFS 2026: here’s the breakdown that actually matters

A lot of people read YC RFS Spring 2026 as a trend list.
It's not. It's a signal of where work inside companies is quietly breaking.

Here s how this shows up in real teams:

Product teams
YC references @Cursor, but the opportunity isn't coding faster.
It's helping PMs synthesize interviews, metrics, and feedback to decide what to build next.

Finance and hedge funds
Firms like Renaissance, Bridgewater, and D.E. Shaw won by systematising decisions.
AI-native hedge funds push this further with continuous, machine-driven strategies.

Why is defining relevance still the hardest part of building AI features?

As more teams build AI agents, search, and personalized feeds, one problem keeps surfacing.
Not generation.
Not model quality.

It's retrieval and ranking: deciding what information should show up, and in what order.

Most teams solve this by stitching together systems. Vector search for meaning. Keyword search for precision. Custom logic for business rules. Over time, relevance logic spreads everywhere and becomes hard to change.

@Shaped approaches this differently.

Can Product Hunt actually bring in customers after launch day?

It did for us.
3 customers came to @Flexprice last week. No ads, no cold DMs. Just conversations.

Most people treat Product Hunt as a one-day spike.
I treat it like a community of builders.

We launched Flexprice last year and learned (the hard way) what works here and what doesn't.
So now I keep it simple:

  • I support makers launching on Product Hunt for free

  • I give honest product feedback as a real user

  • I help with launch strategy when useful

Why do so many outbound efforts stall even when the ICP looks “correct” on paper?

We kept hearing "get your ICP right."
But what we learned is that who you reach out to first matters just as much as who eventually decides.

In most companies, there isn't one ICP. There's a sequence.

  • Someone experiences the problem daily.

  • Someone else prioritizes it.

  • Another person signs off on it.

If you jump straight to the top, you often lack context.
If you stay too low, momentum dies.

When did we forget to celebrate before we connect?

Watched a launch yesterday. By morning, the founder's DMs were full of pitches from other builders. No questions about the product. Just "here's what I'm working on."

Look, networking is part of this. We all need it. But we're skipping a step.
Launch day used to mean something. Try the product. Ask real questions. Then connect.

Now we've optimized so hard for efficiency that we skip straight to pitching.
@Mastra hit #3 yesterday despite this. But think about what that says: quality products have to fight through noise just to get noticed.

Here's my take: we're not wrong to network. We're just moving too fast.

Why does Cursor keep winning on Product Hunt?

I looked into a few of their launches, and what stood out wasn't a secret hack.
It was how little they tried to launch.

Their tagline isn't hype.
It's literal: "Write, edit, and chat about your code with AI."

No buzzwords. No promises. Just what it does.

The pattern is simple:

Is using AI for literature reviews unethical, or are we asking the wrong question?

This debate often gets framed as "Should researchers use AI for literature reviews?"

I think the real question is different.

Is it ethical to spend hundreds of researcher hours on mechanical work when that time could be spent advancing actual knowledge?

Think about a researcher spending an entire weekend searching papers, skimming irrelevant abstracts, copying citations, and fixing references. That's not insight or discovery. That's overhead.