QA is often expected to catch some percentage of bugs before they reach users, which makes it our obligation to test for as many different kinds of bugs as we can. Most of us find it uncontroversial to pursue a wide variety of test cases, spanning from the expected to the unexpected, and possibly even the malicious.

But how many of us are comfortable sitting across from an auditor and talking about security testing? It’s a situation I’ve encountered multiple times in my industry, and the discomfort is particularly real if the company doesn’t have solid security requirements, or doesn’t apply them to every project. In our case, QA needed to talk to information security to make sure we had it right.

Our information security team had a problem, too: How could they ensure that security requirements were met? Penetration testers, red teams, and dynamic scanning are all powerful ways to find flaws in your software, but they are often applied post-release, when the software is already available to users, legitimate and otherwise. Static analysis tools offer easier options for pre-production testing, and for those of us in regulated industries, certification can go some distance toward raising the bar for an attacker, provided the items certified are sensible ones.

All of those are good practices, but we found they didn’t go far enough. Many of our security bugs were appearing post-release, causing angst for the testers as well as frustration for the business. We had a huge opportunity to add value.

This is where I found myself a few years ago: with a fresh security bug, a business case in hand for why I should be given time to focus on finding more of them, and very little knowledge of how to do it efficiently. We already had risk-based testing (RBT) practices in place, but at the time I had no idea how to isolate security risks. It was time to build skills. I took classes, did research, and talked to infosec. They taught me the basics of threat modeling.

Testers already know that if you only test where you think problems might be, you miss subtle flaws that can become critical in production. Typical RBT practices do not include security concerns and often take no more than a cursory look at the project’s architecture. The point of RBT is to test the highest priorities first, so if a practice isn’t catching the highest priorities at all, it is seriously flawed. Adding a threat model gives a much more complete picture and can totally reshuffle the priorities of testing.
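To make that reshuffling concrete, here is a minimal sketch of the scoring involved, using a simple likelihood-times-impact product. The features and numbers are hypothetical, invented only for illustration, but they show how threat-model findings can leapfrog everything in the initial matrix:

```python
# A minimal sketch of risk scoring, with hypothetical features and
# 1-5 likelihood/impact values chosen only to illustrate the point.

from dataclasses import dataclass


@dataclass
class Risk:
    area: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)
    source: str      # where the risk was identified

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


# The initial RBT matrix, built from feature knowledge and gut feel.
risks = [
    Risk("login form validation", 4, 3, "RBT matrix"),
    Risk("database query correctness", 3, 3, "RBT matrix"),
    Risk("report printing", 2, 2, "RBT matrix"),
]

# Threat-model findings arrive later and can reshuffle the whole list.
risks += [
    Risk("sensitive customer data exposure", 3, 5, "threat model"),
    Risk("SQL injection in report query", 4, 5, "threat model"),
]

for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.area}  [{risk.source}]")
```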

We started by obtaining a data flow diagram for the system under test. For the purposes of this article, let’s use a sample web application, including a login, a query to a database, and the ability to print the generated report. If I were to think of the top priority based on that alone, I’d probably reach for the login first, motivated by experience with buggy authentication systems, followed by the database query.
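To keep the example concrete, here is one minimal way to write that data flow down as a list of elements and flows. Every name here is invented for the sample application, not taken from a real system:

```python
# A hypothetical encoding of the sample application's data flow
# diagram: external entities, processes, data stores, and the flows
# between them. All names are invented for the example.

entities = ["User"]
processes = ["Login", "Report Generator", "Print Service"]
data_stores = ["Customer DB"]

# (source, destination, data carried) for each arrow in the diagram.
flows = [
    ("User", "Login", "credentials"),
    ("Login", "Report Generator", "session token"),
    ("Report Generator", "Customer DB", "SQL query"),
    ("Customer DB", "Report Generator", "customer records"),
    ("Report Generator", "Print Service", "rendered report"),
]

print("Elements:", ", ".join(entities + processes + data_stores))
for src, dst, data in flows:
    print(f"  {src} -> {dst}: {data}")
```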

Applying a threat modeling methodology—it doesn’t matter which you use, so long as it works for your system and your company—showed that while we may have chosen the login functionality as our highest risk based on gut feel and experience, the actual highest priority was protection of sensitive customer data. In addition, the highest-priority items were all found by the threat model, not the initial RBT matrix.
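To show the shape of the exercise, here is a sketch of a STRIDE-style pass, one common methodology (and not necessarily the one we used), over the elements of the sample diagram. The findings are illustrative only:

```python
# A sketch of a STRIDE-style pass over the sample application's data
# flow diagram. STRIDE is one common methodology, not necessarily the
# one we used; the findings below are illustrative, not a real model.

STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

# Hypothetical findings, keyed by element or flow from the diagram.
threats = {
    "Login": {
        "Spoofing": "stolen credentials replayed against the login form",
    },
    "Report Generator -> Customer DB": {
        "Tampering": "SQL injection through report query parameters",
    },
    "Customer DB": {
        "Information disclosure": "sensitive customer data exposed by an over-broad query",
    },
}

# Walk every element through every category so the gaps stay visible.
for element, findings in threats.items():
    for category in STRIDE:
        note = findings.get(category, "(nothing identified yet)")
        print(f"{element:35} {category:25} {note}")
```

Running every element through every category, rather than only the ones that feel risky, is exactly what surfaced the customer-data threats our gut-feel ranking had missed.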

We needed a way to prioritize those threats even further in order to make the best use of limited resources. This was a different sort of shifting left: not moving QA further up the development chain, but introducing security testing earlier than it would otherwise begin.

The answer was found in robust security standards. My industry has strong regulations spelling out exactly what we need to do, but even in the absence of regulation, lists such as the OWASP Top Ten and CWE Top 25 provide a catalog of vulnerabilities to be assessed and removed from software. We found that each point of a standard and each item on a list generated at least one test case. I developed generic test cases from those standards to give other testers easy access to the requirements, and we have begun to implement them.
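To give a flavor of what such generic test cases can look like, here is a sketch of two of them in Python, patterned on standard items like injection and session management. The base URL, endpoints, and parameter names are hypothetical placeholders, not our actual requirements; it assumes pytest and the requests library:

```python
# A sketch of two generic test cases derived from standard items
# (injection and session management, in OWASP Top Ten terms). The
# base URL, endpoints, and parameters are hypothetical placeholders;
# each project substitutes its own. Requires pytest and requests.

import requests

BASE_URL = "https://example.test"  # placeholder system under test


def test_report_query_rejects_sql_injection():
    """A classic injection probe must not change the query's meaning."""
    payload = {"customer": "x' OR '1'='1"}
    resp = requests.get(f"{BASE_URL}/report", params=payload, timeout=10)
    # The service should reject the input or return nothing, never a
    # full data dump or a database error that leaks internals.
    assert resp.status_code in (400, 422) or resp.json() == []


def test_login_sets_secure_session_cookie():
    """Session cookies should carry the Secure and HttpOnly flags."""
    resp = requests.post(
        f"{BASE_URL}/login",
        data={"user": "tester", "password": "not-a-real-secret"},
        timeout=10,
    )
    cookie = resp.headers.get("Set-Cookie", "")
    assert "Secure" in cookie and "HttpOnly" in cookie
```

Each project then binds the placeholders to its own endpoints and pulls in whichever standard items match its risk profile, which is what makes the generic cases reusable across teams.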

Of course, we can’t ignore our functional tests in order to focus strictly on security! Security testing adds a lot of test cases, and they typically take both expertise and time. I rapidly found that one person could not support every project, and that the other testers on the team did not have the training to do the basics without guidance.

Our solution was once again collaboration. Our information security team had begun a security champion program, and I doubled as the champion for my QA team. Other people took up the role in other parts of the organization, and that created a new path for testers to get vital information. Not only could we direct questions to the security champions, but the creation of a second QA security role also took on a lot of the load.

We closed the feedback loop by taking information back to infosec. QA already knew how many bugs had security impacts; now we shared their resolutions, and in return we received reports from the scans infosec was already running.

What we saw was that after a few months of work, teams that implemented security testing processes had fewer bugs, of lower severity, than those that did not. Not only that, but they had fewer security-related production incidents than teams with no deliberate security testing practices.

I suspect these difficulties are why security testing within QA remains an uncommon practice. With the effects we’ve seen, I can confidently say that while this is a difficult path to follow, it leads to good results.

This article was originally published by Sylvia Killinen on StickyMinds.
