All activity
Justkellers left a comment
Very interesting. Measuring soft skills in a realistic way is always hard because people modify their behavior when they know they are being observed (i.e., the Hawthorne Effect). I wonder whether being audited by an AI rather than a person would defuse that effect.

Vantage in Google Labs: Practice & assess future-ready skills with AI-simulated team
Justkellers left a comment
This is really cool. For running locally, are the lightweight transcription models usually sufficient? I'm concerned about tying up too much memory while still getting reasonable accuracy.
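To make the question concrete, here's the kind of lightweight setup I have in mind (a minimal sketch assuming faster-whisper with an int8-quantized "small" model; I don't know what OpenWispr actually uses under the hood):

    # Minimal sketch: small Whisper model, int8 quantization, running fully on CPU.
    # The "small" model plus int8 keeps the memory footprint modest while staying usable.
    from faster_whisper import WhisperModel

    model = WhisperModel("small", device="cpu", compute_type="int8")
    segments, info = model.transcribe("meeting.wav", beam_size=5)

    print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
    for segment in segments:
        print(f"[{segment.start:6.1f}s -> {segment.end:6.1f}s] {segment.text.strip()}")

In my experience the tradeoff is mostly model size vs. accuracy on accented or noisy audio, so curious where that line sits for most people here.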

OpenWispr: 100% local open source AI speech-to-text model
Populous helps builders test product ideas, messaging, landing pages, pitch decks, and GTM bets against simulated customer populations. Define your audience, set the research mission, and get directional customer signal in minutes.

Populous: Customer signal when you can’t wait
Justkellers left a comment
We started Populous from a simple frustration: builders can now move incredibly fast, but customer research still moves on the old timeline. AI has made it easier than ever to build, prototype, write copy, launch pages, and test ideas. But when it comes time to answer “who actually cares?”, “which message lands?”, “where will users hesitate?”, or “is this worth building?”, teams still wait on...

Populous: Customer signal when you can’t wait
Justkellers left a comment
I agree, but I'd frame it differently: the bottleneck is signal. What do people want? Which messages resonate? And what will actually change user behavior?
When everything is easy to build, taste becomes the bottleneck
Ray Ren · Join the discussion
Justkellers left a comment
I use a combination of Replit, ChatGPT, and Cursor. I have also found that ChatGPT Codex now offers much higher limits than Claude; that's clearly their play to gain market share. One question: what is everyone using to test or validate their prototypes?
