Baltazar Torres

Rising freshman at Babson building Probado Testing :)


Hey Product Hunt!

I’m Baltazar, a rising freshman at Babson College and the founder of Probado — a platform built to help early-stage founders get honest, structured, and affordable feedback on their MVPs.

What is Probado?

Probado connects startups with vetted testers who are paid to give high-quality feedback—not just surface-level reactions.

But it’s more than a simple testing service.

We’re using AI not only to summarize tester insights, but also to:

• Highlight trends across multiple testers

• Offer actionable improvement suggestions

• Compare your product against industry benchmarks

• Recommend tweaks to boost usability, clarity, or conversions

Whether you’re launching a SaaS tool, mobile app, or landing page, Probado helps validate your idea quickly and affordably.

Why I’m here:

I’m building publicly and learning as I go. My biggest current challenge?

Finding consistent, high-quality testers who go beyond just clicking around.

Founders: how do you find people who actually care and contribute meaningfully?

Builders: how do you scale quality feedback loops early?

Progress so far:

We’ve built out our first version manually and now have around 40 vetted testers ready to work with startups. The full website is launching in just a couple of weeks! We’ve been onboarding testers, refining our feedback process, and collaborating closely with early founders to make sure Probado is truly valuable from day one.

Open to all:

If you’ve got advice, feedback, or questions—drop them here!

I’m always down to jam with anyone in the early-stage world.

Let’s build smarter products together 🙌


Replies

Oleksiy K

This is a really solid direction, especially the focus on structured feedback instead of random opinions.

One thing we’ve seen working with early-stage founders is that the biggest problem isn’t just finding testers — it’s getting context-aware feedback. People click around, but they don’t always understand the product’s goal, so their feedback becomes noisy.

Your idea of combining vetted testers + AI summarization + benchmarking could solve that, especially if you can surface patterns like:

  • where users get confused in the first 30–60 seconds

  • which flows consistently fail across testers

  • what actually blocks conversion vs. just “feels off”

A thought: you might want to segment testers more deeply (by experience level, domain, even mindset). Feedback from a founder-type user vs. a casual user can be very different — both useful, but for different decisions.

From our side at Mobiwolf, when we help teams build MVPs, the difference between “some feedback” and “actionable feedback” often determines whether the next iteration actually improves the product or just adds noise.

Curious — how are you planning to ensure tester quality as you scale? That seems like the hardest part long-term.