I know Claude is typically used for code, but I'm lying in bed right now so I can't check with Cursor.
But look at it another way: AI is trained on what we give it, so if it's spitting out insecure code, that's because we're feeding it insecure code. At least in my experience, software engineers aren't typically trained in secure coding practices, so neither is the AI model.
AI code analyzers will likely be a good defense against these problems, if they're actually used. We already have static analysis tools that do similar work, and the compilers in some languages give clear warnings. I think this is just the natural evolution of technology: new technology brings new problems, and there's a constant back and forth of discovering and solving them.
The biggest issue, in my opinion, isn't AI-generated code but AI itself. Just look at anyone practicing prompt injection: wiring an AI up to execute functions in your program is about the worst thing you can do for security. AIs can be persuaded to do the wrong thing, and I'm not convinced that's a solvable problem given our current understanding of these systems.
Not secure, but it will help create and validate products much faster. We built an MVP that is 80% written by AI.
Is it secure? Not very secure... but if it never gets used, who cares? 😁
Replies
I just asked ChatGPT to give me code to register a user and the code it gave me has a plain to see SQL injection vuln. https://chatgpt.com/share/67e76f96-84e0-8008-b2d8-70912c1c8381
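For illustration, here's a minimal Python/sqlite3 sketch of that kind of vulnerability — user input interpolated straight into the SQL string — next to the parameterized fix. The table and function names are mine for the example, not taken from the linked chat:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")

def register_user_unsafe(username, password_hash):
    # VULNERABLE: user input is concatenated directly into the SQL text,
    # so attacker-controlled input can rewrite the query itself.
    query = (
        f"INSERT INTO users (username, password_hash) "
        f"VALUES ('{username}', '{password_hash}')"
    )
    conn.execute(query)

def register_user_safe(username, password_hash):
    # SAFE: placeholders keep input as data; the driver never treats it as SQL.
    conn.execute(
        "INSERT INTO users (username, password_hash) VALUES (?, ?)",
        (username, password_hash),
    )

# A username containing a quote works fine with the parameterized version,
# but would break (or be exploited through) the string-interpolated one.
register_user_safe("o'brien", "fakehash123")
```

Static analyzers and linters flag the interpolated version precisely because the query text depends on runtime input, which is why that class of tooling catches a lot of AI-generated code like this.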