How are you dealing with vibe coding security risks in AI-generated code?

by Ana

I’ve been using a lot of AI-generated code lately, and while it definitely speeds things up, security feels like a weak spot.

I’ve run into issues like missing auth, exposed endpoints, and weak configs: problems that AI doesn’t really flag unless you explicitly ask.

Curious how others are handling this:

  • Do you rely more on manual reviews or tools?

  • Any workflows that consistently catch vulnerabilities?

  • Have you faced any real incidents because of AI-generated code?

I was reading about vibe coding security risks and it pretty much aligns with what I’ve been seeing.

Would love to hear your approach 👇


Replies

Deangelo Hinkle

This is something I have been thinking about a lot. I usually rely on manual review after generating code, but I am not fully confident I catch everything. It feels like speed improves but responsibility increases at the same time.

Lakeesha Weatherwax

@deangelo_hinkle I ran into a small issue once where an endpoint was left exposed because I didn’t double check properly 😅 nothing serious happened, but it made me more careful. Now I review everything line by line before using it

Yosun Negi

@deangelo_hinkle @lakeesha_weatherwax I have not faced a real incident yet, but I can see how it could happen easily. Especially with configs and permissions, small mistakes can lead to bigger problems.

John Marg

From my experience using Candor Data Platform, I avoid relying on AI-generated code blindly. Every output goes through review, validation, and testing before being used.

Having structured workflows and clear visibility into data and logic helps identify potential risks early, especially in areas like input handling, dependencies, and access control.

Monk Mode

Security was the #1 concern when I built TokenBar (a menu bar app that handles API keys for 20+ AI providers). Here is what I did:

1. Everything runs locally. No cloud, no backend, no accounts. Your API keys never leave your Mac. This eliminates the entire class of server-side vulnerabilities and data breaches.

2. macOS Keychain for secrets. API keys are stored in the system keychain, not in plain text config files. Claude initially generated code that stored keys in UserDefaults (bad!). I caught that in review and moved it to Keychain.

3. Manual review of all AI-generated networking code. Claude sometimes generates code that logs request headers (which contain auth tokens) or does not validate SSL certificates properly. Every HTTP call in my app was hand-reviewed.

4. No unnecessary permissions. The app only needs network access to call provider APIs. No file system access, no camera, no contacts. Sandboxed via App Sandbox.
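Point 3 above (AI-generated networking code that logs request headers containing auth tokens) is the kind of issue a review typically fixes by masking secrets before anything reaches a log. A minimal sketch of that guard, in Python for illustration since TokenBar's actual code isn't shown here; the header list and function name are assumptions:

```python
# Illustrative sketch, not TokenBar's code: mask secret-bearing HTTP
# headers before logging a request. The set of sensitive header names
# is an assumption; extend it for your own providers.
SENSITIVE_HEADERS = {"authorization", "x-api-key", "cookie", "proxy-authorization"}

def redact_headers(headers: dict) -> dict:
    """Return a copy of `headers` safe for logging: values of
    secret-bearing headers are replaced with '***'."""
    return {
        name: ("***" if name.lower() in SENSITIVE_HEADERS else value)
        for name, value in headers.items()
    }
```

A reviewer would then check that every log call goes through a filter like this rather than dumping raw headers.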

The meta lesson: AI-generated code is not inherently less secure than human code. But it tends to take shortcuts that a security-conscious developer would not. The fix is review, not avoidance.

Mert Kılıçkaplan

This is one of the biggest questions I keep coming back to while building with vibe coding:

Am I actually building a secure app?

Even though I am a product manager with basic coding knowledge, and I understand the general architecture and logic of app development, I do not feel I have ever gone deep enough into security.

So I built a process for myself:

Every time I add a new feature, I ask the language model I am using to review that feature for security issues, and then review the entire project afterward.

After that, I use another tool or language model to run a second security review.

In this way, I try to cross-check everything, sometimes repeating the process multiple times with two or even three different tools and models.
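The cross-checking described above can be sketched as a simple quorum over findings: run the same review through several models or tools and keep only the issues that more than one of them reports. This is an illustrative sketch, not Mert's actual workflow; the function and data shapes are assumptions:

```python
# Hypothetical sketch of a multi-model security-review cross-check:
# each reviewer (an LLM or scanner) returns a set of issue identifiers,
# and only findings confirmed by at least `quorum` reviewers are kept.
from collections import Counter

def cross_check(findings_per_reviewer: dict, quorum: int = 2) -> set:
    """Return findings reported by at least `quorum` independent reviewers."""
    counts = Counter(
        finding
        for findings in findings_per_reviewer.values()
        for finding in findings
    )
    return {finding for finding, n in counts.items() if n >= quorum}

reviews = {
    "model_a": {"missing-auth-check", "verbose-error-messages"},
    "model_b": {"missing-auth-check"},
    "scanner": {"verbose-error-messages", "outdated-dependency"},
}
confirmed = cross_check(reviews)
# -> {"missing-auth-check", "verbose-error-messages"}
```

The single-reviewer findings are not discarded in practice; they just get a lower-priority second look, which matches the "repeat with two or three tools" idea.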

I am still not sure whether that is actually enough.

As a solo vibe coder, this is one of the areas I feel most unsure about.