I have now been with a new team for two months, and my work with the team is starting to take a recognizable shape. I'm around, and I hold space for testing. I define, with examples, where the testing my team does has not provided all the results for quality we might wish for. And I refine, again with examples in addition to rules, what we are agreeing to build. Every now and then, I fix problems I run into, with the rule of thumb of weighing reporting time against fixing time, and taking the fix upon myself when the two are roughly in proportion.
Let's talk about what these concepts mean in practice.
Holding space for testing
Right now, as far as official communication goes, my team does not have a tester. They are in the process of hiring one. I am around temporarily, with other responsibilities in addition to showing up for them. The team owns testing and executes their idea of what it looks like: automating programmer intent on three levels, and showing up as brilliant colleagues so that no one is alone with the responsibility of change.
For this service, I listen more than I talk. More often I talk with the raise of an eyebrow or another facial expression. When I talk, I talk in questions, even in cases where I think I hold the answer. I use my questions to share the questions we could all have.
I listen to more than words - I listen to actions, and I listen to the coherence of words and actions. By listening, I notice learning, and I notice patterns of what is easy and what is hard. That information is input for other services.
Just by being around, I remind people that testing exists without doing anything about it. I see programmers looking at me, saying "you'd want me to test this", explaining how proud they are of the way they configured fast feedback, and I provide value by showing up whole-heartedly to share the joy. I understand. And I am delighted by their success.
Learning to do less to achieve more has been one of the hardest skills I have been working on. It's not about me. It's about the human system, it's about the results, it's about us all together.
Defining lack of results with examples
When I turn to testing the verb, I start with (contemporary) exploratory testing. I might look at what I see when I explore by changing developer intent tests on the unit/integration/e2e levels, or I might look at what I see when I explore the different user interfaces: the GUI, the APIs, the config files, the 3rd-party integrations, the environments. It's a user who turns off the computer the system runs on. It's a user who reconfigures and reboots. It's a user who builds their own integration, programming on the APIs. The visible user interface isn't the only access point for active actors in the system.
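To make the first of these concrete, here is a minimal sketch of what changing a developer intent test to explore can look like. Everything in it - the parse_quantity function, its behavior, the inputs - is invented for illustration; the point is taking the example the programmer wrote and stretching it with inputs the original intent did not cover.

```python
import pytest

def parse_quantity(raw: str) -> int:
    """Stand-in for the real code under test, invented for this sketch."""
    return int(raw)

def test_parse_quantity_developer_intent():
    # The example the programmer wrote while building the feature:
    # one input, the one they had in mind.
    assert parse_quantity("3") == 3

# Exploring by changing the developer intent test: same level, same access
# point, but inputs chosen to ask questions the original example does not.
@pytest.mark.parametrize("raw", ["0", "-1", "003", " 3 ", "3.5", "", "three"])
def test_parse_quantity_explored(raw):
    # A rule I believe should hold: a quantity is never negative.
    # A failing case here is not automatically a bug; it is an example
    # to drop into the conversation and watch for reactions.
    assert parse_quantity(raw) >= 0
```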
I go and find some of what others may have missed. If I can drop an example while development isn't complete, I can mention it in passing, changing the outcome. If I get to do ensemble programming, I do a lot of this. I don't (yet) with my current team. I drop examples in dailies, on our team's channel, and I watch for reactions. When the time is right, defined by the fuzzy criterion of my limits of tracking, I turn the examples into written bug reports for those that still need addressing.
I put significant effort into using the examples as training for future avoidance, rather than just as bugs we need to address. And with continuous work on building my ideas of what results we may be missing, I help my team leak as little uncertainty from changes and features as I can.
In recent weeks, I have sensed management concern over things taking longer than first expected, but we are now building the baseline of not leaking. Estimating under conditions of uncertainty creates a game where we incentivize building what we agreed on over what we learned we need, and awareness of cost (in time) needs to become a continuous process.
Refining what we build with rules and examples
I also do that "shifted left" testing and work with the product owner and the team on what we start working on. When the product owner writes their acceptance criteria, I write acceptance criteria too; I don't merely review theirs. I write my own as a model of what is in my head, and I compare my model to their model. I improve theirs by combining the two models. Seeing omissions by creating multiple models from different perspectives seems more effective than believing I have some magic "tester mindset" that makes me good at reviewing other people's materials.
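As a sketch of what rules and examples can look like once they become executable (the domain, the threshold and the numbers below are all invented for illustration): a rule stated once, the product owner's examples, and my examples written from my own model before we compare the two.

```python
import pytest

FREE_SHIPPING_THRESHOLD = 100  # the rule as written: "orders over 100 ship free"

def shipping_cost(order_total: float) -> float:
    """Stand-in implementation of the rule, invented for this sketch."""
    return 0.0 if order_total > FREE_SHIPPING_THRESHOLD else 5.90

# The product owner's examples, as they modelled the rule.
@pytest.mark.parametrize("total,expected", [(150, 0.0), (20, 5.90)])
def test_shipping_po_examples(total, expected):
    assert shipping_cost(total) == expected

# My examples, written from my own model before comparing the two.
# The boundary is where the models tend to disagree: is exactly 100 free?
@pytest.mark.parametrize("total,expected", [(100, 5.90), (100.01, 0.0), (0, 5.90)])
def test_shipping_my_examples(total, expected):
    assert shipping_cost(total) == expected
```

The boundary case is where two models most often disagree, and surfacing that disagreement before implementing is the kind of finesse worth adding.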
Right now my day-to-day includes significant effort in figuring out the balance between specifying before implementing and adding details we are missing. I'm learning again that the only model of rules and examples that turns into code is the one the developers hold, and as a tester (or as a PO), I am only adding finesse to that model. Too much finesse is overwhelming. And I really want to avoid testersplaining features.
Finding new rules and examples is expected, and some of the best work I can do in this area is to help us stay true and honest about which insights are new - to manage the expectations of time and effort.
Fixing problems
Finally, I fix problems. I create pull requests with fixes that the developers review. They are like bug reports, but taken a step further into fixing the problem.
I'm sure I do other things too, but these seemed like the services I have seen myself provide recently.