All activity
Ammar Jleft a comment
Sharing a few LLM security resources we built while testing AI APIs We've been working on PromptBrake β an automated scanner that runs security tests against LLM-powered API endpoints. Along the way, we ended up building a few standalone tools that might be useful even outside of it: LLM Security Checklist Builder β a practical release checklist covering prompt injection, tool permissions, data...

PromptBrakeFind AI vulnerabilities before hackers do
Ammar Jleft a comment
Sharing a few LLM security resources we built while testing AI APIs We've been working on PromptBrake β an automated scanner that runs security tests against LLM-powered API endpoints. Along the way, we ended up building a few standalone tools that might be useful even outside of it: LLM Security Checklist Builder β a practical release checklist covering prompt injection, tool permissions, data...

PromptBrakeFind AI vulnerabilities before attackers exploit them.
Ammar Jleft a comment
Iβm with you β I think people trust AI agents too much too fast. I treat them more like untrusted systems than assistants. Anything sensitive or irreversible (money, credentials, private data) stays off-limits. What worries me most isnβt obvious failures β itβs edge cases like prompt injection or tool misuse that slip through. Curious β are you setting hard boundaries, or relying more on...
Expanded test suite β 12 tests, 60+ real-world attacks: prompt injection, jailbreaks, data leaks, unsafe tool use, output bypasses.
Smarter analyzer β fewer false alarms. Refusals mentioning sensitive terms no longer trigger false fails, so you can trust every PASS, WARN, and FAIL verdict. Baseline diff β compare any two scans to see regressions, fixes, and still-risky issues between releases. Simpler scan setup β connect any LLM endpoint (OpenAI, Claude, Gemini, custom) in under a minute.

PromptBrakeFind AI vulnerabilities before attackers exploit them.
Ammar Jleft a comment
Howdy Product Hunt π β Iβm Ammar, maker of PromptBrake. We launched here before, but something felt off. It took a while to realize the issue: βAI securityβ was too vague to be useful. What teams actually needed was much simpler β does the LLM endpoint weβre about to ship break in obvious ways? Thatβs what PromptBrake is now focused on. It runs 60+ real attack scenarios directly against your...

PromptBrakeFind AI vulnerabilities before attackers exploit them.
Ammar Jleft a comment
If you want to see how this looks in practice, hereβs a short case study showing a real before/after scan and remediation flow: https://promptbrake.com/case-study

PromptBrakeFind AI vulnerabilities before hackers do
Most AI security testing takes weeks and needs experts. We made it stupid simple! Paste your endpoint. We attack it with 60+ real exploits (prompt injection, data leaks, jailbreaks). In a couple of minutes = full security report in plain English. Works for solo devs to enterprise teams. OpenAI, Claude, and Gemini supported. API keys are never stored. Catch vulnerabilities before they catch you.

PromptBrakeFind AI vulnerabilities before hackers do
Ammar Jleft a comment
Hi ProductHunt! π I'm Ammar, creator of PromptBrake. I built this because I kept watching teams (including mine) ship AI features while secretly hoping nobody would try to break them. The problem? OWASP docs felt like reading a PhD thesis. Most of us just... shipped and prayed. I literally lost sleep over this. PromptBrake is what I needed back then: Drop in your AI endpoint (OpenAI, Claude,...

PromptBrakeFind AI vulnerabilities before hackers do
