You’ve Got a Friend in Me: How AI Coding Tools and Security-Aware Developers Can Work Together Safely
Code created by AI can be rife with vulnerabilities and flaws, but AI-assisted coding tools can still be helpful if paired with security-aware human developers.
While 2023 saw a groundswell of activity and an enormous surge in popularity for artificial intelligence’s latest iteration, generative AI, it is clear that this transformative technology is only just getting started. Most experts predict that 2024 will see it reach even greater heights.
And given that writing code in multiple languages was one of the first demonstrations of generative AI’s capabilities, it’s no surprise that recent surveys have found that up to 93% of development teams are using AI tools in their workflows. Only recently have those teams started to experience the negative side effects of that choice, as AI-generated code can be riddled with exploitable vulnerabilities.
In fact, AI coding tools, as they currently stand, are not capable of reliably recognizing vulnerable code, much less avoiding those same errors when generating new code. In a comprehensive study conducted by the University of Maryland, UC Berkeley, Google, and others, researchers evaluated 11 AI models from four different families, tasking them with uncovering vulnerabilities in code generated by AI tools. With both high false positive rates and low accuracy in detecting vulnerabilities, the researchers concluded that AI simply was not ready to take on that role. And it may never be: even after the researchers tried to improve the models by feeding them many examples of secure code, the models still failed at unacceptably high rates.
In another study, conducted at the University of Quebec, researchers asked the popular generative AI chatbot ChatGPT to generate 21 different programs and applications in a variety of languages. While all of the applications coded by the AI worked as intended, only five were secure. The rest contained dangerous vulnerabilities that attackers could have used to compromise any organization that deployed them.
Human and AI Teams, Unite!
Those who study AI development, myself included, should not be too surprised by those studies, or by the many others that have reached similar results. Generative AIs create new content by drawing on large language models, which are essentially enormous stores of compiled human knowledge that the AI can quickly navigate and parse to produce new responses and content. The problem is that those large language models capture both the good and bad aspects of everything humans have created.
In terms of code, the result is that plenty of vulnerabilities and bad practices are baked into those large language models. Every vulnerability class in the OWASP Top 10 is represented there, along with thousands of others. If a human ever made a mistake and coded a vulnerable application, chances are that the mistake is included in the data that AIs now draw from, and there is a good chance an AI will reproduce it when generating complex new applications.
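As one concrete illustration of the kind of pattern a model can absorb and repeat, fast, unsalted password hashing still litters older public code. The function below is a hypothetical sketch of that anti-pattern, written for illustration; it is not drawn from any of the cited studies.

```python
import hashlib

# An outdated pattern that pervades old public code, and therefore the
# data that AI models learn from: hashing passwords with a fast,
# unsalted algorithm. MD5 is broken for this purpose, but a model that
# has seen the pattern thousands of times may happily reproduce it.
def hash_password_weak(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()
```

A security-aware reviewer would reject this in favor of a deliberately slow, salted scheme such as bcrypt or Argon2, which is exactly the kind of judgment the model cannot be trusted to apply on its own.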
So, what is the solution? Can we still get the speed advantages that AI coding offers while also generating secure code? Yes, but only when humans and AI team up, with each one doing what they do best.
For all its amazing feats, AI does not really understand context. You can ask it to program, for example, the infrastructure for an online store, but it really does not know what you are trying to sell, how to show the right products to the right visitors, or any of the complexities of your backend supply chain. Writing code that takes all of that into consideration is much more suited to human developers, whose main mission is to write programs that help with or solve some kind of business problem. Humans, especially ones who have worked in their respective industries for a long time, are very good at solving problems like that. But even the best programmers sometimes need a refresher when it comes to the nuts and bolts of complex implementations.
For example, in the recent Stack Overflow Developer Survey, 63% of the developers surveyed said they spend 30 minutes or more every day searching for answers about how to implement some aspect of an application or program they are building. That so-called cognitive switching, where a human has to move from a creative process to a more rote or research-heavy one, eats up a lot of otherwise productive time every day.
But that is where AI tools are best equipped to lend a virtual hand. If a human developer working on an overall program or application, like the aforementioned online store, gets stuck on some aspect of the implementation, an AI tool can help. For example, an AI tool can write the code to connect a product database to a supply chain application extremely quickly, without needing to understand the broader context of the work or the overall mission of the project. It is also much less likely to introduce vulnerabilities when it is only tasked with small, well-defined pieces of the code.
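The database-to-supply-chain glue described above often reduces to exactly this kind of small, context-free function. Here is a sketch of what an AI assistant might produce for such a task; the record shapes and field names are assumptions made for illustration:

```python
# A narrow, self-contained task that suits an AI assistant: annotate
# each product record with its supplier record, matched by ID. No
# business context is needed to write or verify this correctly.
def link_products_to_suppliers(products, suppliers):
    """Return copies of products, each with a 'supplier' field
    holding the matching supplier record (or None if unmatched)."""
    by_id = {s["supplier_id"]: s for s in suppliers}
    return [{**p, "supplier": by_id.get(p["supplier_id"])} for p in products]
```

Because the function's inputs, outputs, and purpose are fully specified, a developer can review it in seconds, which is the division of labor the rest of this article argues for.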
Trust, but Verify
Pairing human developers with AI tools, allowing each to play to their strengths, is a rewarding strategy that can speed up the coding process while limiting some of the dangers posed by AI-induced vulnerabilities in code. Tasking AI with smaller elements of the code creation process is less risky, but vulnerabilities can still pop up.
That is why it’s critical for developers to have security training. If a human understands both the overall context of the project they are working on and best security practices, then they can quickly verify that any part of the code they asked the AI to write is correct and free from vulnerabilities or other problems like business logic flaws. A lot of time will still be saved because the human developer does not have to look up how to create the part of the code generated by the AI, and can simply check to make sure that it’s secure, fixing any flaws that might have crept into it.
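The verification step described above can be as simple as spotting string-built SQL and swapping in a parameterized query. The sketch below shows a plausible before-and-after; the schema and function names are hypothetical, not taken from any study:

```python
import sqlite3

# What an AI assistant might plausibly generate: the query string is
# built directly from user input, a textbook injection flaw.
def find_user_unsafe(conn, name):
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

# The security-aware developer's fix: let the database driver bind
# the value through a parameterized query.
def find_user_safe(conn, name):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With a payload like `' OR '1'='1`, the first version returns every row in the table while the second returns none. The developer did not have to write the query from scratch, only to recognize and close the hole.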
AI tools are impressive and can greatly speed up the development process. However, they are not able to reliably write secure code, and recent studies suggest they may never achieve that goal. Even so, they can be incredibly useful in software development, so long as they are paired with skilled, highly trained, and security-aware developers.
Opinions expressed by DZone contributors are their own.