We ran a red teaming test before launch—here’s what surprised us (and what we’re still debating)
Before launching our AI assistant, we worked with a red teaming vendor (let’s call them “L”) to check how safe our product really was. We expected a few corner cases or prompt injection attempts. What we got was a pretty eye-opening report: infinite output loops, system prompt leaks, injection attacks that bypassed moderation, and even scenarios where malicious content could be inserted by...