This week I was a guest on a podcast; we found one another on Polywork. It's a development podcast, we talked about testing, and the podcaster is an absolute delight to talk to. The end result will air sometime in December.
One question he asked left me thinking. Paraphrasing, he asked:
"When do you start testing?"
I have been thinking about it since. I was thinking about it today while listening to TestFlix, the lovely talk by Seema Prabhu, and the vivid discussion her talk created about the sense of a wall between implementing and testing. And I am still thinking about it.
The reason I think about this is that there is no single recipe I follow.
I start testing when I join a project, and I test with whatever I have there at that moment. Joining a new product early on makes the testing I do look very different from joining a legacy product in support mode. But in both cases, I start with learning the product hands-on, asking myself: "why would anyone want to use this?"
After getting established in a project, I usually find myself working with a continuous flow of features and changes. It would be easy to say I start testing features as we hear about them, and that I keep testing them until they are retired. More concretely, I often take part in the handoff of a feature request, clarifying it together with acceptance criteria that we write down and the product owner reviews to ensure they match their idea. But not always, because I don't need to work on every feature, as long as we never leave anyone alone carrying the responsibility of (breaking) changes.
When we figure out the feature, it would be easy to say that, as a tester, I am part of architecture discussions. More honestly, I am invited to be part of architecture discussions, but particularly recently I have felt that the learning and ownership that needs to happen in that space benefits from my absence. My lovely team then gives a few-sentence summary that makes me feel like I got everything from their three hours - well, everything I needed anyway, in that moment. Sometimes my participating as a tester is great, but not always.
When the first changes are merged, before they are integrated with the rest of the system, I can start testing them. And sometimes I do. Yet more often, I don't. I look at the unit tests and engage with questions about them, and I may not execute anything. And sometimes I don't look at the change at all when it is first made.
When a series of changes becomes "feature complete", I can start testing it. And sometimes I don't. It's not like I am the only one testing it. I choose to look when it feels like there is a gap I could help identify. But I don't test all the features as they become feature complete; sometimes I test some of them only after they have been released.
Recently, I have started testing features we are planning for next year's roadmap. I test to make sure we are promising at a level that is realistic and allows us to exceed expectations. As a tester here, I test before a developer has even heard of the features.
In the past, I specialized in vendor management and contracts. I learned that I can test early, but reporting the results of my testing early can double the price of a fixed-price contract without adding value. Early conversations about risks are delicate, and contracts have interesting features that require a very special kind of testing.
When people ask when I start testing, they seem to be looking for a recipe, a process. But my recipe varies. I seek the work that serves my team best within the capabilities I have at that moment in time. I work with the knowledge that testing is too important to be left only to testers, but that does not mean testers don't add value to it. To find that value, we need to accept that it is not always located in the same place.
Instead of asking when I start testing, I feel like reminding people that I never stop. And when you don't stop, you also don't start.
When do YOU start testing?