AI-generated code is here to stay. But are we any less secure for it?

Coding in 2025 is not about laboring over fragments or spending long hours debugging. That, at least, is the prevailing mood. AI-generated code is expected to make up most of the code in future products, and it has already become an essential part of the modern developer’s toolkit. Dubbed “vibe coding,” generating code with tools like GitHub Copilot, Amazon CodeWhisperer, and ChatGPT is becoming the norm rather than the exception, cutting build times and boosting efficiency. But does the convenience of AI-generated code hide a darker threat? Is generative AI adding vulnerabilities to the security architecture, or can developers “vibe code” their way to security?
“Vulnerabilities in AI-generated code are one of the least discussed topics today,” said Sanket Saurav, founder of DeepSource. “There is still a lot of code generated by platforms such as Copilot or ChatGPT that never gets scrutinized by a human, and security vulnerabilities can spell catastrophic disaster for the companies affected.”
Saurav cited the 2020 SolarWinds hack as the kind of “extinction-level event” a company can face if it does not put the right security guardrails around AI-generated code. “Static analysis can identify unsafe code patterns and poor coding practices,” Saurav said.
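To make that concrete, here is a minimal, hypothetical Python illustration (the function names are our own, not DeepSource’s) of the sort of pattern a static analyzer such as Bandit flags: building a shell command out of untrusted input.

```python
import subprocess

# Insecure: interpolating untrusted input into a shell command invites
# command injection; analyzers such as Bandit flag shell=True (rule B602).
def archive_logs_insecure(user_path: str) -> None:
    subprocess.call(f"tar czf backup.tar.gz {user_path}", shell=True)

# Safer: pass arguments as a list so no shell ever parses the input.
def archive_logs_safer(user_path: str) -> None:
    subprocess.run(["tar", "czf", "backup.tar.gz", user_path], check=True)
```

A scanner run in CI (for Bandit, `bandit -r src/`) catches the first variant automatically, regardless of whether a human or a model wrote it.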
Attacks through libraries
Security threats to AI-generated code can take creative forms, and libraries are one target. In programming, a library is a useful piece of reusable code that developers rely on to save writing time.
Libraries often solve routine programming tasks, such as managing database interactions, and spare programmers from writing everything from scratch.
One such threat to libraries is “hallucination,” in which the AI generates code that references fictional libraries that do not exist. A newer attack on AI-generated code is called “slopsquatting,” in which attackers publish a malicious package under one of those hallucinated names, so a developer who installs the suggested dependency pulls the attacker’s code straight into the project.
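A first line of defense is verifying that every suggested dependency actually exists on the official registry before installing it. Below is an illustrative Python sketch (our own, not from the article’s sources) that checks a requirements.txt against PyPI’s public JSON API. Note that an existence check alone cannot catch slopsquatting, since the malicious package does exist; package age and download statistics deserve scrutiny too.

```python
import sys
import urllib.error
import urllib.request

def package_exists(name: str) -> bool:
    """Return True if `name` is published on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown to PyPI: a possible hallucination
            return False
        raise

def check_requirements(path: str) -> None:
    with open(path) as fh:
        for line in fh:
            # Naive parse: keep the name before any version specifier.
            name = line.split("==")[0].split(">=")[0].strip()
            if not name or name.startswith("#"):
                continue
            if not package_exists(name):
                print(f"WARNING: {name!r} not found on PyPI -- possible hallucination")

if __name__ == "__main__":
    check_requirements(sys.argv[1] if len(sys.argv) > 1 else "requirements.txt")
```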
Addressing these threats may require more mindfulness than the term “vibe coding” suggests. Rafael Khoury, a professor at the University of Quebec, has been closely following developments in AI-generated code security and is convinced that new techniques will improve it.
In a 2023 paper, Khoury investigated what happened when ChatGPT was asked to produce code without any further context or information, an approach that led to insecure code. Those were ChatGPT’s early days, and Khoury is now optimistic about the path forward. “Since then, a lot of research has been done, and the future lies in finding strategies for using LLMs that lead to better results,” Khoury said. “Security is getting better, but we are not at the point where we can give a direct prompt and get secure code back.”
Khoury went on to describe a promising study in which code is generated and then sent to a tool that analyzes it for vulnerabilities. The method the tool uses is called Finding Line Anomalies with Generative AI, or FLAG for short.
“These tools emit flags that might, for example, identify a vulnerability at line 24. The developer can then send the code back to the LLM and ask it to investigate and fix the issue,” he said.
Khoury suggested that this kind of back-and-forth may be crucial to fixing vulnerable code. “This study shows that with five iterations, you can reduce the vulnerabilities to zero.”
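The loop itself is simple to sketch. The Python outline below is our own reading of that generate-scan-repair workflow, with the LLM client and the line-level analyzer passed in as placeholder callables rather than any specific tool:

```python
from typing import Callable

# (line_number, description), e.g. (24, "possible SQL injection")
Flag = tuple[int, str]

def repair_loop(
    generate: Callable[[str], str],
    scan: Callable[[str], list[Flag]],
    fix: Callable[[str, list[Flag]], str],
    prompt: str,
    max_rounds: int = 5,  # the study above found five iterations could suffice
) -> str:
    """Generate code, then alternate line-level scanning and LLM repair."""
    code = generate(prompt)         # initial LLM output
    for _ in range(max_rounds):
        flags = scan(code)          # FLAG-style line anomalies
        if not flags:
            break                   # nothing left to flag
        code = fix(code, flags)     # ask the LLM to address each flagged line
    return code
```

Keeping the analyzer separate from the generator is the important design choice here: the model that wrote the code never gets to certify it.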
That said, the flagging method is not without its problems, especially since it can produce both false positives and false negatives. There are also limits on the length of code that LLMs can generate, and stitching fragments together can add yet another layer of risk.
Keeping humans in the loop
Some players in the vibe coding space suggest splitting up the code and making sure humans stay front and center for the most important edits to the codebase. “When writing code, think in terms of commits,” said Windsurf’s Hou.
He added: “Break a large project into the smaller pieces that would normally be commits or pull requests. Have the agent build at a smaller scale, one isolated function at a time. That ensures the code output is well tested and well understood.”
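One way to put that advice into practice is to gate each AI-generated unit behind its own tests before it lands in a commit. The sketch below assumes pytest is available; the file paths and the function name are our own invention:

```python
import subprocess

def accept_if_tests_pass(module_path: str, test_path: str) -> bool:
    """Keep one AI-generated function only if its own unit tests pass.

    Mirrors the one-isolated-function-per-commit workflow: the agent's
    output is written to module_path, and it is committed only when the
    matching tests in test_path succeed.
    """
    result = subprocess.run(
        ["python", "-m", "pytest", test_path, "-q"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(f"Rejecting {module_path}:\n{result.stdout}")
        return False
    return True

# Example: accept_if_tests_pass("src/parser.py", "tests/test_parser.py")
```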
At the time of writing, Windsurf (under its previous name, Codeium) was responsible for more than 5 billion lines of AI-generated code. Hou said the most pressing question the team grappled with was whether developers actually understood the process.
“AI is able to make a lot of edits across many files at the same time, so how do we make sure the developer is actually reading and reviewing what’s going on instead of blindly accepting everything?” Hou asked, adding that the team has invested heavily in Windsurf’s UX, “in a bunch of intuitive ways to audit the AI’s work and keep the human fully in the loop.”
That is why, as vibe coding goes mainstream, the humans in the loop must stay alert to its vulnerabilities. From hallucinated libraries to slopsquatting, the threats are real, but so are the solutions.
Emerging tools and practices, from static analysis to iterative refinement methods such as FLAG to thoughtful UX design, show that security and speed do not have to be mutually exclusive.
The key is to keep developers involved, informed, and in control. With the right guardrails and a “trust but verify” mentality, AI-assisted coding can be both revolutionary and responsible.