In Cem Kaner's BBST Foundations course, there is a discussion about the ratio of testers to developers, along with a good number of articles pointing out that there is no consistent way of counting who is a tester, what tasks such a person is supposed to do, and that many activities are sometimes considered testing and sometimes not.
Build-and-release responsibilities are probably an easy example of a grey area, where at least I see a lot of differences between organizations. Test automation has testing in its name, but it is another grey area. Really, we shouldn't even single out "tester" and "developer" work, but rather discuss who will eventually do which task.
I'm not sure about my sources in detail (it could be the BBST materials), but I know I picked up this idea from Cem Kaner years ago and have applied it in my work since: if you bring your first-ever tester into your organization, quality might go down. The reason is that people start assuming testing belongs to the tester, now that there is one. I remember Cem pointing out that you might need more than one tester when you start building a team. And this advice dates back years.
I was inspired to write about adding testers from this tweet:
When @TomasRihaSE 's org added testers, quality went down #devops #sastsq1
— Ulrika Malmgren (@Ulrikama) February 19, 2015
I had the completely opposite experience 2.5 years back, when I joined my current organization. When my organization added a tester, quality went up. A tweet is very much a lossy medium, and the reasons for quality going down could be manifold. But the first explanation that comes to my mind is that no one managed the risk of testing being perceived as tester work.
When I joined my organization, I put significant effort into emphasizing regularly, in my communications, that whatever I would do as testing would not in any way take away from the developers' workload. I helped my teams' developers find time for their own testing (and fixing) that did not exist before, or at least that is how they perceived it. I regularly asked the product managers who had previously done acceptance testing to keep doing it, making specific requests to ensure they would not leave it undone just because I had joined. There was one of me, and almost 20 developers across the two products I worked on.
We recruited a remote tester, but that still made only two of us to 20 developers. If the ratio had been higher, getting my message across about the risk of testing being perceived as something only testers do would have been more difficult.
Lesson learned: if you add testers without managing the risks, quality can easily go down while you think you're investing more in it. People just tend to think that way. I still manage this risk every day, and I see occasional symptoms of some developers relying on me to catch what they miss instead of testing themselves.