When we talk about progress, typically, digital advancement is at the forefront of the conversation. We want everything better, faster, more convenient, more powerful, and we want to do it for less money, time, and risk. For the most part, these “impossible” objectives are eventually met; it might take several years and multiple versions (and a team of developers who might start a coup if they’re asked to switch gears on feature design one more freaking time), but every day, code is out there changing the world. However, with great software expansion comes great responsibility, and the reality is, we’re simply not ready to deal with it from a security perspective. Software development is no longer an island, and when we account for all aspects of software-powered risk - everything from the cloud, embedded systems in appliances and vehicles, our critical infrastructure, not to mention the APIs that connect it all - the attack surface is borderless and out of control. We can’t expect a magical time where each line of code is meticulously checked by seasoned security experts - that skills gap is not closing any time soon - but we can, as an industry, adopt a more holistic approach to code-level security.

Let’s explore how we can corral that infinite attack surface with the tools at hand:
Perfect security is not sustainable, but neither is putting on a blindfold and pretending everything is blue skies. We already know that organizations knowingly ship vulnerable code, and clearly, this is a calculated risk based on time to market with new features and products. Security at speed is a challenge, especially in places where DevSecOps isn’t the standard development methodology. However, we only need to look at the recent Log4Shell exploit to see how a relatively small security issue in code can open the door to a successful attack, and how the consequences of that calculated risk of shipping lower-quality code can be far greater than projected.
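Log4Shell belonged to a broad class of bugs in which a logging or templating layer *interprets* attacker-controlled input instead of treating it as inert data. The Python sketch below is an illustrative analog of that class, not Log4j itself; all names and the “secret” are hypothetical. Handing untrusted input to `str.format` as the template lets an attacker reach into object internals, while interpolating it as data is safe.

```python
# Illustrative sketch of the Log4Shell bug *class* (not Log4j itself):
# a formatting layer interprets untrusted input rather than treating
# it as plain data. All names and values here are hypothetical.

class Config:
    api_key = "hunter2"  # hypothetical secret living in the process

def log_unsafe(template: str, cfg: Config) -> str:
    # BAD: the attacker controls the template, so format syntax like
    # {cfg.api_key} is evaluated and can exfiltrate secrets.
    return template.format(cfg=cfg)

def log_safe(message: str) -> str:
    # GOOD: untrusted input is interpolated as data; any braces it
    # contains are never interpreted as format syntax.
    return "user said: {!r}".format(message)

attacker_input = "{cfg.api_key}"
print(log_unsafe(attacker_input, Config()))  # leaks: hunter2
print(log_safe(attacker_input))              # braces stay inert
```

The fix in both this sketch and the real Log4Shell patch is the same principle: never let input from outside the trust boundary select behavior in an interpreter, whether that interpreter is a format string, a template engine, or a JNDI lookup.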
An alarming number of costly data breaches are caused by poorly configured cloud storage environments, and the potential for sensitive data exposure resulting from access control errors continues to haunt security teams in most organizations. In 2019, Fortune 500 company First American Financial Corp. found this out the hard way. An authentication error - one that was relatively straightforward to remediate - led to the exposure of over 800 million records, including bank statements, mortgage contracts, and photo IDs. Their document links required no user identification or login, rendering them accessible to anyone with a web browser. Worse still, they were logged with sequential numbers, meaning a simple change of number in the link exposed a new data record. This security issue was identified internally before it was exposed in the media; however, the failure to categorize it as a high-risk security issue and report it to senior management for urgent remediation caused a fallout that is still being navigated today.

There is a reason that broken access control now sits at the very top of the OWASP Top 10: it’s as common as dirt, and developers need verified security awareness and practical skills to navigate best practices around authentication and privileges in their own builds, ensuring checks and measures are in place to protect against sensitive data exposure. The nature of APIs makes them especially relevant and tricky; they are very chatty with other applications by design, and development teams should have visibility across all potential access points. After all, they can’t take into consideration unknown variables and use cases in their quest to provide safer software.
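The First American failure combined two mistakes: guessable sequential identifiers, and no authorization check on the link itself. A minimal sketch of the defensive pattern, assuming a hypothetical in-memory document store (the function names and data are illustrative): issue unguessable document IDs, and verify ownership on every fetch rather than trusting possession of a URL.

```python
import secrets

# Hypothetical in-memory document store keyed by unguessable IDs.
_documents = {}

def store_document(owner: str, contents: str) -> str:
    # Unguessable, non-sequential ID: simply changing a number in the
    # link (enumeration) no longer yields another record.
    doc_id = secrets.token_urlsafe(16)
    _documents[doc_id] = {"owner": owner, "contents": contents}
    return doc_id

def fetch_document(doc_id: str, requester: str) -> str:
    # Broken access control fix: knowing the link is not enough; the
    # requester must be authorized for this specific record.
    doc = _documents.get(doc_id)
    if doc is None or doc["owner"] != requester:
        raise PermissionError("not found or not authorized")
    return doc["contents"]
```

Even with unguessable IDs, the ownership check is the load-bearing control here: a leaked or shared link still fails closed for anyone but the record’s owner.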
It makes sense that a large component of a security program is dedicated to incident response and reaction, but many organizations are missing out on valuable risk minimization by not utilizing all the resources available to prevent a security incident in the first place.

Sure, there are comprehensive stacks of security tooling that assist in uncovering problematic bugs, but almost 50% of companies admitted to shipping code they knew was vulnerable. Time constraints, the complexity of toolsets, and a lack of trained experts to respond to reporting all contribute to what has essentially become a calculated risk, but the fact that code needs to be secured in the cloud, in applications, in API functionality, in embedded systems, in libraries, and across an ever-broadening landscape of technology means we will always be one step behind with the current approach.

Security bugs are a human-caused problem, and we can’t expect robots to do all the fixing for us. If your development cohort is not being effectively upskilled - not just with a yearly seminar, but with proper educational building blocks - then you are always at risk of accepting low-quality code as standard, and the security risk that goes with it.
Developers are rarely assessed on their secure coding abilities, and it’s not their priority (nor is it a KPI in a lot of cases). They cannot be the fall guys for poor security practices if they’re not shown a better path or told it is a measurement of their success. Too often, though, there is an assumption within organizations that the guidance provided has been effective in preparing the engineering team to mitigate common security risks. Depending on their training and their awareness of security best practices, they may not be prepared to act as that desirable first line of defense (and stop endless injection flaws clogging up pentest reports). The ideal state is that learning pathways of increasing complexity are completed, with the resulting skills verified to ensure they actually work for the developer in the real world. However, this requires a cultural standard where developers are considered from the beginning, and correctly enabled. If we as an industry are going out into the wilderness to defend this vast landscape of code we’ve created ourselves, we’ll need all the help we can get… and there is more right in front of us than we realize.
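Those “endless injection flaws clogging up pentest reports” usually come down to one habit: building queries by string concatenation. A short sketch using Python’s standard-library `sqlite3` (the schema and data are illustrative, not from the original text): the parameterized version sends user input as a bound value, so it can never rewrite the query itself.

```python
import sqlite3

# Illustrative schema and rows for demonstration purposes only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name: str):
    # BAD: user input is spliced into the SQL text, so an input like
    # "' OR '1'='1" changes the query's meaning and dumps every row.
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # GOOD: the ? placeholder binds the input as data; no value of
    # `name` can alter the SQL structure.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # returns every row in the table
print(find_user_safe(payload))    # returns nothing: no such user
```

This is the kind of verifiable, hands-on skill the learning pathways above should drill: not “injection is bad” as trivia, but the reflex to reach for the parameterized form every time.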