How to Improve Security in AI-Assisted Software Development
Now it’s clear that the artificial intelligence (AI) genie is out of the bottle for good. That extends to software development: a GitHub survey shows that 92% of US developers already use AI coding tools both inside and outside of work. Respondents say AI technology helps them improve their skills (57%), increase productivity (53%), focus on building and creating rather than on repetitive tasks (51%), and avoid burnout (41%).
It’s safe to say that AI-assisted development will become the norm in the near future. Organizations must develop policies and best practices to manage it effectively, just as they do with cloud deployments, bring your own device (BYOD), and other workplace technology trends. But that oversight is still a work in progress. For example, many developers are engaging in so-called “shadow AI”: using these tools without the knowledge or approval of their organization’s IT department or management.
These managers include chief information security officers (CISOs), who are responsible for establishing safeguards so that developers understand which AI tools and practices are acceptable and which are not. CISOs need to lead the transition from the uncertainty of shadow AI to a more visible, controlled, and managed bring-your-own-AI (BYOAI) environment.
Now is the time to act: recent academic and industry research reveals a precarious state of affairs. Some 44% of organizations are concerned about the risks associated with AI-generated code, according to the 2024 State of Cloud Native Security Report (PDF). Research from Snyk shows that 56% of software and security team members say insecure AI recommendations are common. Four out of five developers bypass security policies to use AI (i.e., shadow AI), yet only one in ten scans most of their code, often because scanning adds cycles to code reviews and slows the overall workflow.
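That tension is addressable: scanning can be scoped to just the code that changed, so it adds little review overhead. Below is a minimal sketch in Python, assuming a git repository with a main branch and the open-source Bandit scanner (pip install bandit); the script is illustrative, not a full CI gate.

```python
#!/usr/bin/env python3
"""Minimal sketch: scan only the Python files changed against a base branch.

Assumes a git checkout with a `main` branch and Bandit installed.
Illustrative only; a real pipeline would handle more languages and tools.
"""
import subprocess
import sys


def changed_python_files(base: str = "main") -> list[str]:
    # List files that differ from the base branch, excluding deletions,
    # and keep only Python sources.
    out = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=d", base, "--"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]


def main() -> int:
    files = changed_python_files()
    if not files:
        print("No changed Python files to scan.")
        return 0
    # Bandit exits non-zero when it finds issues, which fails the check.
    return subprocess.run(["bandit", "-q", *files]).returncode


if __name__ == "__main__":
    sys.exit(main())
```

Wired into a pre-commit hook or pull request check, a scoped scan like this keeps security review roughly proportional to the size of the change rather than the size of the codebase.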
In a Stanford University study, researchers found that only 3% of developers who used AI assistants wrote secure products, compared to 21% of developers who didn’t use AI. And 36% of developers who used AI wrote products vulnerable to SQL injection attacks, versus 7% of those who didn’t.
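The SQL injection gap is easy to picture. In the minimal Python sketch below (the users table and lookup functions are hypothetical), the first pattern splices user input straight into the query string, which is exactly the kind of code an assistant may happily suggest; the second uses a parameterized query, which treats the input strictly as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")


def find_user_unsafe(name: str):
    # Vulnerable: the input is spliced into the SQL string, so a value
    # like "x' OR '1'='1" rewrites the query and returns every row.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()


def find_user_safe(name: str):
    # Safe: the placeholder binds the input as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```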
Adopting a well-conceived and well-executed BYOAI strategy will go a long way toward helping CISOs meet these challenges while developers leverage AI tools to write code quickly. Through close collaboration between security and development teams, CISOs will no longer stand outside the coding environment, in the dark about who is using what. They will foster a culture in which developers recognize that they cannot blindly trust AI, since doing so invites a raft of problems later on. Many teams already know the pain of “working backwards” to fix poor code and security that wasn’t addressed in the first place; AI security awareness should make that lesson even more apparent to developers going forward.
So how does a CISO achieve this state? By combining the following practices and perspectives:
Build visibility. The surest way to eliminate shadow AI is to remove AI from the shadows, right? CISOs need to understand which tools development teams are using, which they aren’t, and why. That way, they have a clear picture of where code is coming from and whether AI involvement is introducing cyber risk.
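What does building visibility look like in practice? One small starting point is an inventory of AI coding assistants on developer machines. The Python sketch below assumes the VS Code code CLI is on the PATH; the extension IDs are an illustrative sample, not an authoritative list:

```python
#!/usr/bin/env python3
"""Minimal sketch: inventory AI coding assistants installed in VS Code.

Assumes the `code` CLI is available. The extension IDs below are an
example set, not an exhaustive or authoritative catalog.
"""
import subprocess

# Known AI assistant extension IDs (illustrative; extend for your environment).
AI_EXTENSIONS = {
    "github.copilot",
    "github.copilot-chat",
    "codeium.codeium",
    "tabnine.tabnine-vscode",
}


def installed_ai_extensions() -> list[str]:
    # `code --list-extensions` prints one extension ID per line.
    out = subprocess.run(
        ["code", "--list-extensions"],
        capture_output=True, text=True, check=True,
    ).stdout
    installed = {line.strip().lower() for line in out.splitlines() if line.strip()}
    return sorted(installed & AI_EXTENSIONS)


if __name__ == "__main__":
    found = installed_ai_extensions()
    if found:
        print("AI coding extensions detected:")
        for ext in found:
            print(f"  - {ext}")
    else:
        print("No known AI coding extensions detected.")
```

A real program would aggregate results across endpoints (for example, through existing device management tooling) and cover editors beyond VS Code, but even a rough inventory replaces guesswork with data.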
Achieve a balance between security and productivity. CISOs cannot and should not prevent teams from finding their own tools. Instead, they must strike a delicate balance between productivity and security: allowing relevant AI-related activities within defined limits, provided those activities achieve production goals with minimal, or at least acceptable, risk.
In other words, rather than adopting a “department of no” mentality, CISOs should create guidelines and an acceptance process for their development teams with a mindset that says, “We appreciate you discovering new AI solutions that will enable you to create software more efficiently. We just want to make sure your solutions don’t cause security issues that ultimately hinder productivity. So let’s work on this together.”
Measure. Again, in the spirit of collaboration, the CISO should work with the coding team to develop key performance indicators (KPIs) that measure software productivity and reliability/security. KPIs should answer questions like, “How much are we producing with AI? How fast are we producing? Is the security of our processes getting better or worse?”
Remember, these are not “security” KPIs. They are “organizational” KPIs that must align with company strategy and goals. Ideally, developers will see KPIs as something that informs them rather than burdens them, recognizing that KPIs can help them achieve “more/faster/better” while keeping risk factors in check.
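As a concrete illustration, here is a minimal Python sketch of how such shared KPIs might be computed; the metric names, fields, and sample figures are all hypothetical:

```python
from dataclasses import dataclass


@dataclass
class SprintMetrics:
    """Hypothetical per-sprint figures a security and dev team might track."""
    merged_prs: int             # total pull requests merged
    ai_assisted_prs: int        # PRs flagged as AI-assisted
    scanned_prs: int            # PRs that passed an automated security scan
    vulns_found: int            # findings raised by scanning tools
    vulns_fixed_pre_merge: int  # findings resolved before merge


def kpis(m: SprintMetrics) -> dict[str, float]:
    # Shared, organizational KPIs: adoption, coverage, and risk trend.
    total = max(m.merged_prs, 1)  # guard against empty sprints
    return {
        "ai_adoption_rate": m.ai_assisted_prs / total,
        "scan_coverage": m.scanned_prs / total,
        "pre_merge_fix_rate": m.vulns_fixed_pre_merge / max(m.vulns_found, 1),
    }


print(kpis(SprintMetrics(merged_prs=40, ai_assisted_prs=26,
                         scanned_prs=31, vulns_found=9,
                         vulns_fixed_pre_merge=7)))
```

The point is that the same numbers serve both audiences: adoption and coverage rates speak to how much and how fast teams are producing with AI, while the pre-merge fix rate tracks whether process security is getting better or worse.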
Development teams may be more open to a “security first” partnership than CISOs expect. In fact, these team members say they prioritize security reviews alongside code reviews when adopting AI coding tools. They also believe collaboration leads to cleaner, more secure code.
CISOs should therefore move quickly to advance AI visibility and KPI initiatives that support the “just right” balance enabling optimal security and productivity outcomes. After all, the genie never goes back into the bottle; what matters now is ensuring it does its best work without introducing unnecessary risk.