Ammar J

PromptBrake - Find AI vulnerabilities before attackers exploit them.

by
Expanded test suite — 12 tests, 60+ real-world attacks: prompt injection, jailbreaks, data leaks, unsafe tool use, output bypasses. Smarter analyzer — fewer false alarms. Refusals mentioning sensitive terms no longer trigger false fails, so you can trust every PASS, WARN, and FAIL verdict. Baseline diff — compare any two scans to see regressions, fixes, and still-risky issues between releases. Simpler scan setup — connect any LLM endpoint (OpenAI, Claude, Gemini, custom) in under a minute.

Add a comment

Replies

Best
Ammar J
Maker
📌

Howdy Product Hunt 👋 — I’m Ammar, maker of PromptBrake.

We launched here before, but something felt off. It took a while to realize the issue: “AI security” was too vague to be useful. What teams actually needed was much simpler — does the LLM endpoint we’re about to ship break in obvious ways? That’s what PromptBrake is now focused on.

It runs 60+ real attack scenarios directly against your API (prompt injection, data leakage, unsafe tool behavior) and shows what breaks, why it breaks, and how to fix it before you ship.

The goal isn’t just a score — it’s clarity before release. You can try a demo first with no setup, then test your real endpoint and get results, evidence, and remediation guidance.

One thing I care about: we don’t store your API keys, and your data isn’t sent to another LLM.

It’s intentionally narrow — not a pentest, not monitoring — just a fast, repeatable pre-release check for AI endpoints.

Curious — how are you testing your AI features today?

Ammar J
Maker

Sharing a few LLM security resources we built while testing AI APIs

We've been working on PromptBrake — an automated scanner that runs security tests against LLM-powered API endpoints. Along the way, we ended up building a few standalone tools that might be useful even outside of it:

  • LLM Security Checklist Builder — a practical release checklist covering prompt injection, tool permissions, data exposure, and output controls

  • Prompt Injection Payload Generator — generates direct, indirect, and multi-turn injection payloads you can adapt for testing your own endpoint.

  • OWASP LLM Test Case Mapper — translates OWASP LLM Top 10 risks into concrete test ideas with ownership guidance

All three are free and don't require an account: promptbrake.com/free-tools

We built these to give back to the community that's been sharing knowledge in this space. LLM security is still early, and a lot of teams aren't sure what they might be missing — figured it's better to make this kind of stuff accessible rather than gate it.

Curious how others here are approaching this — do you have a repeatable process before shipping LLM features, or is it still mostly ad hoc?