How Developers Can Work With Generative AI Securely
Four tips to help development teams strike a balance between the improved productivity that generative AI brings and the risks it poses to code security.
If you work in software development, or indeed anywhere in the technology industry, you have undoubtedly been part of discussions about, read headlines on, or even trialed a platform for generative artificial intelligence (AI). Put simply, this new and quickly evolving technology is everywhere.
Yet alongside the exciting promise of greater productivity from AI code generation tools (GitHub argues the increase in developer productivity due to AI could boost global GDP by over $1.5 trillion), there is also increased risk. One concern is code quality: AI models can produce complex code that is difficult both to understand and to explain.
There is also uncertainty around IP ownership, as conversations about the intellectual property rights, ownership, and copyright of AI-generated code are still ongoing. Guidance will become clearer as the technology evolves, but this will take time. For now, if you work with AI-generated code produced by a model trained on open-source software, failing to adhere to that software's license requirements may well constitute a copyright violation.
Finally, AI-generated code can inadvertently contain vulnerabilities. If a model has been trained on insecure code, it will reproduce insecure code in its output. Simply put: garbage in, garbage out.
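To make the risk concrete, here is a minimal sketch (the table, function names, and data are hypothetical) of the kind of insecure idiom a generative tool can reproduce from its training data, alongside the parameterized fix a reviewer should expect:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, username TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com', 'alice')")

# Insecure idiom often reproduced by code assistants: user input is
# interpolated straight into the SQL text, opening the door to injection
# (try username = "' OR '1'='1").
def find_user_insecure(username):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

# The fix a reviewer should insist on: a parameterized query keeps user
# input out of the SQL text entirely.
def find_user_safe(username):
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchone()

print(find_user_safe("alice"))  # (1, 'alice@example.com')
```

Both functions look equally coherent at a glance, which is exactly why output like the first one slips past developers who assume generated code is safe by default.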
Putting Security First
So, what can developers do to make the most of generative AI without compromising security?
- See generative AI as a junior coding partner: Developers should approach generative AI coding tools expecting lower-quality code that may contain vulnerabilities, and review its output as they would a junior colleague's.
- Stay vigilant with AI prompts: Revealing confidential information via an AI prompt is a significant privacy risk, and there is currently limited understanding of how these services truly handle customer data. Keep secrets, credentials, and proprietary code out of prompts (see the redaction sketch after this list).
- Integrate more code reviews: As with traditionally written code, code reviews are an essential part of the software development lifecycle (SDLC). Reviewing the security and quality of AI-generated code is crucial, because it can look coherent on the surface while proving neither correct nor secure under testing.
- Embrace continuous training: Because reviewing and testing AI-generated code is so critical, the developers behind the prompts and the delivery of the end product, app, or service need a solid grounding in secure coding. They need training in how to recognize and address vulnerabilities, and since the threat landscape evolves so rapidly, that training must be delivered continuously to empower everyone across the SDLC.
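On the prompt-hygiene point above, one lightweight safeguard is to redact anything that looks like a credential before a prompt leaves your environment. The sketch below is a minimal illustration rather than a substitute for a vetted secret scanner or data loss prevention tool; the patterns, names, and sample text are all assumptions:

```python
import re

# Illustrative patterns for secrets that should never appear in a prompt;
# a real deployment would rely on a maintained secret-scanning ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"
               r"[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),
]

def redact(prompt: str) -> str:
    """Replace anything that looks like a credential before the prompt
    is sent to an external generative AI service."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

snippet = 'Fix this config: api_key = "sk-12345" and db password: hunter2'
print(redact(snippet))
# -> Fix this config: [REDACTED] and db [REDACTED]
```

A filter like this does not make a third-party service trustworthy, but it reduces the blast radius when a developer pastes a config file or stack trace into a prompt without thinking.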
With these guidelines, it is certainly possible to strike a balance between the improved productivity that generative AI can enable and the risks it can pose to code security and quality. At the foundation of that balance, however, must be continuous, programmatic secure coding training for human developers, so that generative AI becomes a useful tool rather than a source of insecure code.