Hey PH — Park, product engineer building a security layer for AI coding tools
Hey everyone 👋
I’m Park, a product engineer currently building something around AI code security.
I’ve been using tools like Cursor and Claude a lot, and honestly, the speed is insane.
But after a while, I started noticing something:
The code often looks correct on the surface, but if you slow down and inspect it, there are subtle issues hidden inside: things that are easy to miss if you trust the output too quickly.
That got me thinking that maybe the real problem isn’t just “bad code”, but how we interact with AI-generated code.
So I started building a small project that sits between the IDE and the model and checks outputs in real time.
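To make that idea concrete, here's the rough shape of what a check layer like that could look like. This is just an illustrative sketch, not the actual project: the rule names and regex patterns are made up for the example, and a real tool would need far more than a few regexes.

```python
import re

# Illustrative risk patterns only -- a real checker would use proper
# static analysis, not a handful of regexes.
RISK_PATTERNS = {
    "hardcoded-secret": re.compile(r"""(?i)(api[_-]?key|secret|password)\s*=\s*["'][^"']+["']"""),
    "shell-injection": re.compile(r"subprocess\.(run|call|Popen)\([^)]*shell\s*=\s*True"),
    "eval-of-input": re.compile(r"\beval\("),
}

def check_output(code: str) -> list[dict]:
    """Scan model output line by line and return any findings."""
    findings = []
    for line_no, line in enumerate(code.splitlines(), start=1):
        for rule, pattern in RISK_PATTERNS.items():
            if pattern.search(line):
                findings.append({"rule": rule, "line": line_no, "text": line.strip()})
    return findings

snippet = 'password = "hunter2"\nsubprocess.run(cmd, shell=True)'
for f in check_output(snippet):
    print(f"{f['rule']} at line {f['line']}: {f['text']}")
```

The interesting part isn't the rules themselves, it's where this runs: between the IDE and the model, so issues surface before you accept the suggestion rather than after it lands in your codebase.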
Still early, but I’d love to connect with others working on similar problems in AI dev tooling.
What are you building these days?