This Wednesday, as we kicked off the BrowserStack Champions program with a fireside chat among program participants, something in the conversation about all things going on at work pushed an invisible button in me. We were talking about security testing as if it were something separate and new. At work, we have a separate responsibility for security, and I have come to experience over the years that many people assume and expect testers to know little of security. Those who are testers love to box security testing apart from functional testing, and when asked for security testing, think only in terms of penetration testing. Those who are not testers love to make space for security by hiring specialists in that space, and by the Shirky Principle, the specialists will preserve the problem to which they are a solution.
Security is important. But like other aspects of quality, it is too important to leave to specialists alone. And the way we lump it all under one term, "security" or "security testing", is in my experience harmful to our intentions of doing better in this space.
Like all testing, security work is about *risks*. Across all testing, what is at stake when we take a risk can differ. Saying we risk money is too simplistic. We risk:
- other people's discretionary money, until we take corrective action
- our own discretionary money, until we take corrective action
- money, lives, and human suffering, where corrective actions don't exist
We interpret standards for proposals of controls and practices that the whole might entail. This alone, by the way, can be a full-time person's work. So we make choices on which standards to follow, and choices on how detailed our interpretation is.
We coordinate reactions to new emerging information, including both external and internal communication.
We monitor the ecosystem to know when a reaction from us is needed.
We understand legal implications, as well as the reasons for treating privacy as its own consideration, since it carries a high risk in the third category: irreversible impacts.
And finally, we may do some penetration testing. Usually its purpose is less to find problems than to say we tried. In addition, through a bug bounty program, we may organize a marketplace for legally hunting our bugs, so that bugs with high implications are sold to us rather than to undesired actors.
So you see, talking about security testing isn't helpful. We need more words rather than fewer. And we need to remove the conflation of assuming all security problems are important, just as much as we need to remove the conflation of assuming all functional problems aren't.