Why Do We Encourage Poor Coding Patterns?
Poor coding patterns can lead to security problems. So why do we walk into that trap?
For what feels like an eternity at this point, we’ve discussed “shifting left” in the SDLC, taking into account security best practices from the start of software development. DevSecOps was a great leap forward, in no small part because of the emphasis on shared responsibility for security, and the power of a security-aware developer to thwart common vulnerabilities as they write code.
We have also known — again, for eons — that the type of secure code training chosen to engage and upskill developers makes all the difference. Low-effort solutions motivated solely by regulatory compliance do not build up the bright security minds of the future, and most security awareness professionals have worked that out. Dynamic, contextually relevant learning is best, but it’s critical that the nuances within are understood.
If we’re going to have a fighting chance against threat actors — and they always have a head start on an organization — developers need a holistic training environment, with layered learning that continually builds skills steeped in best practices.
Developer-Driven Defensive Security Measures Are Not an Automatic Win
Our ethos revolves around the developer being central to a preventative security strategy, from the code level up. That’s a given, and security-skilled developers provide the easiest path to thwarting the types of common security bugs that stem from poor coding patterns (like those uncovered as part of the Log4Shell exploit, one recent, devastating example).
However, the defensive techniques that we can engage to upskill developers do vary, even if they can rightly exist in the same training bucket.
For example, imagine you were told how to bake a cake using only directions based on what not to do. “Don’t overbake it” and “don’t forget the eggs” leave it open to interpretation, with huge potential for mistakes and an end result fit for Nailed It!. The same is true for defensive security education: what not to do is a very limited part of the conversation and offers no practical advice on how to truly act with a defensive mindset. You can tell developers, “don’t misconfigure that API,” but with no understanding of what constitutes correct and secure configuration, there is a lot of room for error.
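To make that concrete, here is a minimal, hypothetical ASP.NET Core sketch of what “correct and secure configuration” can look like in practice; the policy name, allowed origin, and endpoint are illustrative assumptions rather than code from a real project:

// Hypothetical sketch: "don't misconfigure that API" versus explicit, secure configuration.
// The policy name, allowed origin, and endpoint below are illustrative assumptions.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddCors(options =>
{
    // The anti-pattern the warning refers to, left here only as a comment:
    // an allow-everything policy that silently removes cross-origin protections.
    // options.AddPolicy("Anything", p => p.AllowAnyOrigin().AllowAnyHeader().AllowAnyMethod());

    // The secure configuration spells out exactly which origins, headers,
    // and methods this API is meant to serve, and nothing more.
    options.AddPolicy("TrustedClients", p => p
        .WithOrigins("https://app.example.com")
        .WithHeaders("Content-Type", "Authorization")
        .WithMethods("GET", "POST"));
});

var app = builder.Build();
app.UseCors("TrustedClients");
app.MapGet("/api/alerts", () => Results.Ok(Array.Empty<string>()));
app.Run();

The specific CORS policy is not the point; the point is that developers need to know what the positive, explicit configuration looks like, not just the warning.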
Developers won’t have a positive impact on vulnerability reduction without a foundational understanding of how the vulnerabilities work, why they are dangerous, what patterns cause them, and what design or coding patterns fix them in a context that makes sense in their world. A dynamic, holistic approach allows layers of knowledge to give a full picture of what it means to code securely, defend a codebase, and stand up as a security-aware developer. And yes, part of that layered learning should be dedicated to offense and understanding the mindset of an attacker; this is critical to hone lateral thinking skills, which are invaluable in threat modeling and defensive strategy.
Reinforcing Poor Coding Patterns Is a Pitfall We Can’t Ignore
An unfortunate reality of some developer learning methods is that the “defensive” part (even when the training is structured around offensive techniques) can reinforce bad habits, even when the resulting code technically passes a security check.
Production of high-quality code should be the baseline in all software development, but the definition of “quality” still appears to be up for debate. The reality is that insecure code cannot be viewed as quality code, even if it is otherwise functional and beautiful. The kicker is that secure code isn’t inherently high quality, either. In other words, poor coding patterns can fix a security problem, but in doing so, introduce another one, or potentially break the software entirely.
Let’s take a look at an example: a vulnerable endpoint with broken authentication, a poor-quality fix, and the best-practice, most secure version:
using System.Collections.Generic;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace BadFixesAPI.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class AlertsController : ControllerBase
    {
        private readonly DatabaseContext context = new DatabaseContext();

        // The three actions below show the same endpoint at three levels of
        // protection; they are given distinct routes here so the controller
        // compiles and routes unambiguously.

        // 1. Does not ensure that the user is authenticated
        [HttpGet("unprotected", Name = "GetAlertsUnprotected")]
        public IEnumerable<Alert> Get()
        {
            return context.GetAlerts();
        }

        // 2. Ensures that the user is authenticated, but does not check any roles
        [HttpGet("bad-fix", Name = "GetAlertsBadFix")]
        [Authorize]
        public IEnumerable<Alert> GetBadFix()
        {
            return context.GetAlerts();
        }

        // 3. Ensures that the user is authenticated, AND that they have the "Administrator" role
        [HttpGet("good-fix", Name = "GetAlertsGoodFix")]
        [Authorize(Roles = "Administrator")]
        public IEnumerable<Alert> GetGoodFix()
        {
            return context.GetAlerts();
        }
    }
}
In the first snippet, there is no check to verify that the user is authenticated, which is about as unsafe as it gets. The second is better in that it performs an authentication check, but it never examines the caller’s assigned roles or whether their permissions are high enough for the information requested. The third checks both that the user is authenticated and that they hold the “Administrator” role. In an era where least-privilege access control should be the norm, it is critical that roles are set up and checked so that information is only accessible on a need-to-know basis.
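Teams that want to go a step further than hard-coding a role name in an attribute can keep the least-privilege rule in one place with policy-based authorization. This is a hypothetical sketch extending the controller above; the “CanReadAlerts” policy and the roles it requires are assumptions for illustration:

// Hypothetical sketch: centralizing the least-privilege rule as a named policy.
// In Program.cs (minimal hosting model), register the policy:
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("CanReadAlerts", policy =>
        policy.RequireRole("Administrator", "SecurityAnalyst"));
});

// In the controller, the action references the policy rather than a role string,
// so the access rule can evolve without touching every endpoint.
[HttpGet("policy-fix", Name = "GetAlertsWithPolicy")]
[Authorize(Policy = "CanReadAlerts")]
public IEnumerable<Alert> GetWithPolicy()
{
    return context.GetAlerts();
}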
The highest priority for developers is to build features. Security is not intentionally pushed to the back burner, but developers don’t necessarily have the skills to avoid the poor coding patterns that lead to security bugs, and the benchmark of a good engineer rarely includes secure coding prowess. We indirectly encourage those bad habits if the features are impressive enough, and it’s this mindset that has to change. The problem is that the way some learning pathways encourage hands-on code remediation can also reinforce code that is secure but of substandard quality. When the assessment is a binary “yes, this is secure / no, this is not secure,” rather than a deeper look at whether the fix is truly the best way to resolve the bug and maintain the integrity of the software, the devils in the detail go unnoticed; the sketch below shows one such case.
Without taking developers through the entire process for a complete view of secure coding, this approach perpetuates the same issues it’s trying to solve. Imagine if we all got our licenses based solely on our ability to drive a vehicle to a destination; a pass mark even though we ran red lights, drove through a hedge, and narrowly missed a pedestrian crossing the street to arrive there. We completed the goal, but the journey we took to get there matters most.
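To make the binary trap concrete, here is a hedged sketch of a remediation that a simple “secure / not secure” check might wave through, next to the fix that is both secure and maintainable. The repository class, the Alerts table, and the use of the Microsoft.Data.SqlClient package are assumptions for illustration, not code from this article:

// Hypothetical sketch: a "secure enough to pass" fix versus a quality fix.
// The repository, connection handling, and Alerts table are illustrative assumptions.
using Microsoft.Data.SqlClient;

public class AlertRepository
{
    private readonly string connectionString;

    public AlertRepository(string connectionString) =>
        this.connectionString = connectionString;

    // May pass a binary "is it injectable?" check, but it is a poor pattern:
    // it relies on hand-rolled escaping and still builds SQL from user input.
    public SqlCommand BuildBadFix(string owner)
    {
        var escaped = owner.Replace("'", "''");
        return new SqlCommand(
            $"SELECT * FROM Alerts WHERE Owner = '{escaped}'",
            new SqlConnection(connectionString));
    }

    // The quality fix: parameterization keeps data out of the query text entirely.
    public SqlCommand BuildGoodFix(string owner)
    {
        var command = new SqlCommand(
            "SELECT * FROM Alerts WHERE Owner = @owner",
            new SqlConnection(connectionString));
        command.Parameters.AddWithValue("@owner", owner);
        return command;
    }
}

The first version may block the textbook attack string, but it leaves the next maintainer one refactor away from reintroducing the bug; the second removes the class of problem entirely.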
Developers Need To Be Enabled to Care More About Creating Secure Software
The modern developer has to keep a lot of plates spinning, and it’s no surprise they find security training a bore, especially when it’s not implemented with their workday in mind and takes them away from their deadlines and priorities with little benefit. It’s also completely unfair to change their KPIs to include an emphasis on secure coding when they don’t have the skills built up from regular, right-fit learning opportunities and supplementary tooling. However, the importance of secure software development cannot be overstated, and getting developers on side with this is crucial.
As a former developer, I know that we generally want to do a great job, and that being seen as a cut above others in terms of quality output is very motivating. Incentivizing developers to engage with continuous security skill-building is a no-brainer, and they should be rewarded for recognizing the importance of code-level security. Security champion programs, bug bounties, and hackathons can be great opportunities to build a positive security culture, and those who roll up their sleeves and get involved should get the loot.