Friday, November 18, 2011

Falling into a trap - talking of Exploratory Testing

I attended the public defense of Juha Itkonen's doctoral dissertation today. His dissertation, titled "Empirical studies on exploratory software testing", is a topic particularly close to my heart, since it was the topic I started with but failed to keep striving for in the academic sense. Juha started in the same research group after me, took up the relevant topic, and made every effort to research and publish what he could in the limited timeframe (of nearly 10 years).

When Juha's opponent started, he went first for the definition of exploratory testing and for distinguishing exploratory testing from non-exploratory testing. This, I find, is a trap, and an easy one to fall into.

Juha explained some of the basic aspects of what he has summarized about exploratory testing, but struggled to articulate the difference between exploratory and non-exploratory. He pointed out that learning is essential in exploratory testing, as is changing direction based on what is learned. And the opponent pointed out that typically people who test based on test cases also learn, and add new tests as needed based on what they learned. Juha went on to explain that if you automate a script and remove the human aspect, that is clearly not exploratory. And the opponent pointed out that a human could look at the results and learn.

What Juha did not address in his defense, or in the papers I browsed through, is the non-linear order of the tasks. He still works from the older definition of simultaneous activities, and thus, in my opinion, misses the point of the tester being in control of what happens first, what comes after, what is repeated, and how long each activity endures to achieve a goal.

A relevant challenge (to me) is that all testing is exploratory. There is always a degree of freedom, tester control, and learning when you do testing. Making an experienced tester follow a script, even if such a script existed, is not something that happens - they add, remove, repeat, and do whatever needs to be done, including hiding the fact that they explore. All real-world approaches are exploratory testing to a degree.

I still have a problem to solve: if all testing is exploratory, what words should I use to describe the wasteful, document-oriented approaches that take a lot of effort while creating little value? There are plenty of real-world examples where we plan too much, and what we plan is the wrong thing, since we do the planning at the time we know the least. What's even worse, due to our belief in planning, we reserve too little time for the hands-on testing work and related learning, and with the schedule pressure, we fail to learn even when the opportunity supposedly is there.

Low-quality exploratory testing is still exploratory testing. Getting to the value comes from skill. With skill, you are likely to get better results with some planning and enough exploration-while-testing than by skipping the planning. And yet, even with skill, you could spend too much time on planning and preparing, because of the logistics of how the software arrives to you.

Ending with an idea from today's opponent: the great explorers (Amundsen at the south pole - an example coming from a Norwegian professor) planned a lot. What made the difference between those who were successful and those who were not was not whether they planned, but that they planned holistically, thinking of all kinds of aspects, and keeping their minds open for learning along the way - making better choices for the situation at hand rather than sticking to the plan, like carrying back stone samples at the cost of one's life.