Friday, April 23, 2021

Learning without Impact on Results

As I talk about exploratory testing, I find myself intertwining three words: design, execution, learning. Recently I have come to realize that I am often missing a fourth one: results. I take results for granted.

When I frame testing, I often talk about the idea that learning that gives us a 1% improvement means we can do what we did before while investing roughly 4 minutes less of a working day for the same results. There is no learning without impact on results.
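To make the arithmetic behind that claim concrete, here is a minimal sketch. The 400-minute productive day is my assumption to make the numbers line up; the post itself does not state the baseline.

```python
# Sketch of the 1% claim. The ~400-minute productive day (about
# 6.5 hours) is an assumed baseline, not stated in the post.
productive_minutes_per_day = 400
improvement = 0.01  # learning that makes us 1% more efficient

minutes_saved = productive_minutes_per_day * improvement
print(minutes_saved)  # 4.0 minutes less for the same results
```

With a different baseline the savings scale accordingly; the point is that learning shows up as measurably less time for the same results.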

Except there is. I recently talked to a fellow tester who asked me a question that revealed more about my thoughts than I had realized:

Aren't we all learning all the time?

This made me think of a past colleague who told me that the reason I find it hard to explain what exploratory testing is, is that the way I learn as an exploratory tester happens in a different dimension than our usual words capture.

So let's try an example - watching a tester at work. 

They were testing a feature for many weeks. The task in Jira stayed open, collecting the little dots tasks collect when they don't move forward in the process. The daily reports included mentions of obstacles, testing, preparing for testing, and figuring things out. The feature wasn't a little one; relevant work was ongoing on the developer side, showing up as messages on a small conversation channel asking about unblocking dependent tasks and figuring out troubles. The tester was on the channel, but never visible.

Finally the work came to a conclusion, and the task remained open only for finalizing test automation. At last a pull request emerged: 20 lines of code. It was produced by writing 2000 lines of code, or rather 200 lines of code written 10 times over. Distilled learning, tightly packaged.

Reflecting on the investment, the company got four results for quite a number of hours of work:

  • One fixed bug
  • One regression test
  • A tested feature
  • A tester with 'learning'

Yet I found myself dissatisfied with the result. The work had the appearance of design, execution and learning intertwined, used automation to explore and to document, and could be a model example of what I mean by contemporary exploratory testing. What I missed was the balance between learning that makes the tester better and learning that makes the product and organization better.

The visible external results were one fixed bug and one regression test, in an environment that appears to force communication to be visible. There were no tester-initiated early discussions leading to aha-effects that improve the product before it is coded. Limited reports on suspicious behaviors. Suspect coverage of a tested feature no one could analyze or add to. And eventually, late discoveries of issues in the area by people coming after, making the point that remaining alone was a wrong choice.

Sometimes when we think of 'learning', we overemphasize knowledge acquisition over the actionable results of that learning. It is as if our work as testers were to know it all, but to do nothing with that knowledge beyond knowing it. We collect information in ourselves, and lock it up. And that is not sufficient.

Instead, the actionable results matter. The learning in exploratory testing is supposed to improve the information we provide, and the colleagues who also learn through our distilled learning.