Saturday, April 17, 2021

Start with the Requirements

There's a whole lot of personal frustration in watching a group of testers take requirements, take designs of how the software is to be built, design test cases, automate some of them, execute others manually, take 10x more time than I think they should, AND leak relevant problems to the next stages in a way that makes you question whether they were ever in the process at all.

Your test cases don't give me comfort when your results say that the work that really matters - information on the quality of the product - is lacking. Your results include the neatly written steps that describe exactly what you did, so that someone else with basic knowledge of the product can take them up next time for purposes of regression. Yet that does not comfort me. And since the existence of those tests makes the next rounds of testing even less effective for results, you are only passing forward things that make us worse, not better.

Over the years, I have joined organizations like this. I've cried over the organization's lack of understanding of what good testing looks like, and wondered whether I would be able to rescue the lovely colleagues doing bad work just because they think that is what is asked of them.

Exploratory testing is not a technique that you apply on top of this to fix it by doing "sessions". It is the mindset you intertwine to turn the mechanistic flow of requirements to designs to test cases to executions, manual and automated, into a process of thinking, learning and RESULTS.

Sometimes the organization I land in is overstructured and underperforming. If the company is successful, usually it is only testing that is overstructured and underperforming, and we just don't notice bad testing when great development exists.

Sometimes the organization I land in has really bad quality, and testing is "important" as a way of patching it. Exploratory testing may be a cheaper way of patching.

Sometimes the organization I land in has excellent quality and exploratory testing is what it should be - finding things that require thinking, connections and insight. 

It still comes down to costs and results, not labels and right use of words. And it always starts with requirements. 



Friday, April 16, 2021

The dynamics of quadrants models

If you sat through a set of business management classes, chances are you've heard your teacher joke about quadrants being *the* model. I didn't get the joke back then, but it is kind of hilarious now how humankind puts everything in two dimensions and works on everything based on that.

The quadrants popularized by Gartner in market research are famous enough to hold the title "magic" and have their own Wikipedia page.

Some of my favorite ways to talk about management, like Kim Scott's Radical Candor, are founded on a quadrant model.

The field of testing - Agile Testing in particular - has created its own popular quadrant model, the Agile Testing Quadrants.

Here's the thing: I believe the Agile Testing Quadrants is a particularly bad model. It was created by great people, but it gives me grief:

  • It does not fit into either of the two canonical quadrant model types of "move up and right" or "balance". 
  • It misplaces exploratory testing in a significant way.

Actually, it is that simple.

The Canonical Quadrant Models

It's like DIY for quadrants. All you need is two dimensions that you want to illustrate. For each dimension, you need descriptive labels for the opposite ends, like hot - cold and technology - business. See what I did there? I just placed technology and business at opposing ends in a world where they are merging for those who are successful. Selecting the dimensions is a work of art: how can we drive a division into two that would be helpful, and for quadrants, in two dimensions?
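To make the DIY concrete, here is a minimal sketch in Python (my own illustration, not from any quadrant tooling) using the example dimensions above; the scoring scheme is hypothetical:

```python
# DIY quadrants: two dimensions, labels for the opposite ends of each,
# and every item lands in exactly one of four cells.

def quadrant(x: float, y: float,
             x_labels=("technology", "business"),
             y_labels=("cold", "hot")) -> str:
    """Classify a point on two chosen dimensions into one of four cells.

    Scores run from -1.0 to 1.0; the sign picks the end of each dimension.
    """
    horizontal = x_labels[0] if x < 0 else x_labels[1]
    vertical = y_labels[0] if y < 0 else y_labels[1]
    return f"{horizontal} / {vertical}"

# Once the dimensions are chosen, everything gets a neat box:
print(quadrant(-0.3, 0.8))  # technology / hot
print(quadrant(0.6, -0.2))  # business / cold
```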


The rest is history. Now we can categorize everything into a neat box, which takes us to our second level of canonical quadrant models: what they communicate.

You get to choose between two things your model communicates. It's either Move Up and Right or Balance.

Move Up and Right is the canonical model of the Magic Quadrants, as well as Kim Scott's Radical Candor. Who would not want to be high on Completeness of Vision and Ability to Execute when it comes to positioning your product in a market, or move towards Challenging Directly while Caring Personally to apply Radical Candor? Move Up and Right is the canonical format that sets a path on the two most important dimensions.

Move Up and Right says you want to be in the top-right quadrant. It communicates that you move forward and you move up - a proper aspirational message.

The second canonical model for quadrants is Balance. This format communicates a balanced classification. For these quadrants, each area is of the same size and importance. Forgetting one, or focusing too much on another, would be BAD(tm).


Each area holds things that differ on the two dimensions of choice. But even when they are different, the Balance is what matters.

Fixing Agile Testing Quadrants

We discussed earlier that I have two problems with the Agile Testing Quadrants. It isn't a real model of balance, and it misrepresents exploratory testing. What would fixing it look like, then?

To support your imagination, I made the corrections on top of the model itself.

First, the corner clouds must go. Calling Q1 things like unit tests "automated" when they are handcrafted pieces of art is an abomination. They document the developer's intent, and there is no way a computer can pull out the developer's intent. Calling Q3 things "manual" is equally off in a world where we look at what exactly our users are doing with automated data collection in production. Calling Q4 things "tools" is equally off, as automated performance benchmarking and security monitoring are everyday activities. That leaves Q2, which was muddled with a mix already. Let's just drop the clouds and get our feet on the ground.

Second, let's place exploratory testing where it always has been. Either it's not in the picture at all (as in most organizations calling themselves Agile these days), or if it is in the picture, it is right in the middle of it all. It's an approach that drives how we design and execute tests in a learning loop.

That still leaves the problem of Balance, which comes down to choosing the dimensions. Do these dimensions create a balance to guide our strategies? I would suggest not.

I leave the better dimensions as a challenge for you, dear reader. What two dimensions would bring "magic" to transforming your organization's testing efforts?

Monday, April 5, 2021

Learning from Failures

As I opened Twitter this morning and saw the announcement of a new conference called FailQonf, I started realizing I have strong feelings about failure.

Earlier conversations with a past colleague could have tipped me off, as he keeps pointing out that big success comes out of many small successes, not failures, while I keep insisting that learning requires both success and failure.

Regardless, failure is fascinating.

When we play a win-lose game, one must lose for the other to win.

A failure for one is a small bump in the road for others.

In telling failure stories, we often imagine we had a choice to do things differently when we didn't.

A major failure for some is a small consequence for others. 

Power dynamics play a role in failure and its future impacts, perhaps even more than success. 

Presenting a failure in a useful way is difficult. 

Missing an obvious bug. Taking the wrong risk. Trusting the wrong people. 

Failures and successes are concepts we can only talk about in hindsight, and I have a feeling that isn't helping us look forward.

We need to talk about experiments and learning, not success and failure. 

And since I always fail to say what I should be saying, let's add a quote:


 

Saturday, April 3, 2021

Exploratory Testing the Verb and the Noun

I find myself repeating a conversation that starts with one phrase:

"All testing is exploratory" 

With all the practice, I have become better at explaining how I make sense of that claim and how it can be true and false for me at the same time.

If testing is a verb - 'Exploratory Testing the Verb' - all testing is exploratory. In doing testing in the moment, we are learning. If we had a test case with exact steps, we would probably still look around and see things that weren't mentioned explicitly.

If testing is a noun - 'Exploratory Testing the Noun' - not all testing is exploratory. Organizations do a great job of removing agency with roles and responsibilities, expectations, and processes. The end result is that we get less value for the time invested.

I realized this way of thinking was helping me make sense of things through a conversation with Curtis, known as CowboyTester on Twitter. Curtis is wonderful and brilliant, and I feel privileged to have met him in person at a conference somewhere in the USA.

Exploratory Testing the Noun matters a lot these days. For you to understand exactly how it matters, we need to go back to discussing the roots of exploratory testing. 

Back in the days of the first mentions of exploratory testing by Cem Kaner in 1984, the separation from all other testing came from the observation that some companies heavily separated, by their process, the design and execution of tests. Exploratory testing was coined to describe a skilled, multidisciplinary style of testing where design and execution are intertwined, or as they said in the early days, "simultaneous". The way testing was intertwined for rehearsed practitioners of exploratory testing made it appear simultaneous, as it is typical to watch someone doing exploratory testing work at the same time on things in the moment and in the long term, on details and the big picture, and on design and execution. The emphasis was on agency - the doer of the activity being in control of the pace to enable learning.

Exploratory testing was the Noun, not the Verb. It was the framing of testing so that agency leading to learning and more impactful results was in place. 

For me, exploratory testing is about organizing the work of testing so that agency remains, and encouraging learning that changes the results. 

When we do Specification by Example (or Behavior Driven Development, which seems to be winning out in phrase popularity), I see that we most often do it in a way I don't consider exploratory testing. We stick to our agreed examples and focus on execution (implementation) over enabling the link between design and execution where every execution changes the designs. We give up on the relentlessness of learning, and live within the box. And we do that by accident, not by intention.
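As a minimal sketch of that box (my own illustration in pytest style; the lookup function and examples are hypothetical), consider tests pinned to the agreed examples versus a test grown in the design-execution loop:

```python
import pytest

# A tiny stand-in system under test (hypothetical).
COUNTRIES = {"fi": "Finland", "se": "Sweden"}

def lookup_country(code: str) -> str:
    return COUNTRIES.get(code.lower(), "unknown")

# The agreed examples from refinement. Executing only these keeps us
# inside the box we drew before we learned anything.
AGREED_EXAMPLES = [("fi", "Finland"), ("se", "Sweden")]

@pytest.mark.parametrize("code,expected", AGREED_EXAMPLES)
def test_agreed_examples(code, expected):
    assert lookup_country(code) == expected

# An exploratory loop intertwines design and execution: a surprise noticed
# while testing (what happens with upper case, or an unknown code?) becomes
# a new designed test in the same session, not a ticket for a later phase.
def test_learned_while_testing():
    assert lookup_country("XX") == "unknown"  # added after noticing case handling
```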

When we split work in backlog refinement sessions, we set up our examples and tasks. The tasks often separate design and execution because it looks better to an untrained eye. But in closing those tasks to fit the popular notion of a continuous stream, we create the separation of design and execution that removes exploratory testing.

When we ask for test cases for compliance, linked to the requirements, we create the separation of design and execution that removes exploratory testing.

When supporting a new tester, we don't support the growth of their agency by pairing with them and ensuring they have both design and execution responsibility; we hand them tasks that someone else designed for them.

Exploratory testing the noun creates various degrees of impactful results. My call is for turning up the degrees of impactful, and it requires us to recognize the ways of setting up work for the agency we need in learning.




Friday, April 2, 2021

Learning for more impactful testing

I wrote a description of exploratory testing and showed it to a new colleague for feedback. In addition to fixing my English grammar (appreciated!), she pointed out that when I wrote about learning while testing, I did not emphasize enough that the learning is really supposed to change the results to be more impactful.

We all learn, all the time, with pretty much all we do. I have seen so many people take a go at exploratory testing the very same application, and what my colleague pointed out really resonated: it's not just learning, it's learning that changes how you test.

Combining many observations into one, I get to watch learning that does not change how you test. People look at the application, and when asked for ideas on what they would do to test it, the baseline is essentially the same, and the question after each test of what they learned produces reports of observations, not actions on those observations. "I learned that works." "I learned that does not work." "I learned I can do that." "I learned I should not do that." These pieces include a seed that needs to grow significantly before the learning of a previous test shows up in the next tests.

It's not just that we do design and execution simultaneously, but that we weave them together into something unique to the tester doing the testing. The tester sets the pace, and years and years of learning speed up the pace so that it appears as if we are thinking in multiple dimensions all at the same time.

The testing I did a year ago still helps me with the testing I do today. I build patterns over long times, over various applications, and over multiple organizations offering a space in which my work happens. I learn to be more impactful already during the very first tests, but continue growing from the intellectual push testing gives. 

We don't have the answer key to the results we should provide. We need to generate our answer keys, and our skills to assess completeness of those answer keys we create in the moment. Test by test. Interaction by interaction. Release by release. Tuning in, listening, paying attention. Always weaving the rubric that helps us do better, one test at a time. 

The Conflated Exploratory Testing

Last night I spoke at the TSQA meetup and received ample compensation in the form of the inspiration people in that session enabled through conversations. Showing up for a talk can feel like broadcasting, and that gives me the chance to sort out my thoughts on a different topic every single time, but when people show up for conversation after, it leaves me buzzing.

Multiple conversations we ended up having were on the theme of how conflated exploratory testing is, and how we so easily end up in conversations that lead nowhere when we try to figure it out. The honest response from me is that it has meant different things in different communities, and it must be confusing for people who haven't traversed the stages of how we talk about it.

So, with a half-serious tongue in cheek, I set out to put its stages down in this note. Thinking of the good old divisive yet formative "schools of testing" work, I'm pretty sure we can find schools of exploratory testing. What I would hope to find, though, is a group of people who would join in describing the reasons why things are the way they are with admiration and appreciation of others, instead of ending up with the one school to rule them all with everything positive attached to it.

Here are the stages, each still existing in the world I think I am seeing:

  • Contemporary Exploratory Testing
  • Agile Technique Exploratory Testing
  • Deprecated Exploratory Testing
  • Session-based Exploratory Testing
  • ISTQB Technique Exploratory Testing
  • The Product Company Exploratory Testing

As I am writing this post, I realize I want to sort this thinking out better, and I am starting to work on comparison slides. So with this one, I will leave it as an early listing, making a note of yesterday's inspirations:

  • The recruiting story, where people show up telling how they schedule an exploratory testing session at the end, only to find it cancelled as other activities take over. This low-priority framing of sessions is unfortunately common with Agile Technique Exploratory Testing.
  • The middle-era domination by managing-with-sessions story, where session-based test management hijacked the concept, becoming the defining criterion for exploratory testing to survive in organizational frames not founded on trust (see the sketch after this list).
  • The common course-providers-forcing-it-into-a-technique story, where people learned to splash some exploratory testing on top to fix humanity instead of seeking power from it.
  • The unilateral deprecation story, where the terrible twins' marketing gimmicks shifted conversations to testing in a particular namespace to create the possibility of coherent terms in a bubble.
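To ground what "managing with sessions" looked like, here is a minimal sketch (my own reading of the session-based framing; the field names are hypothetical, not an official schema) of the structure that became the defining criterion:

```python
from dataclasses import dataclass, field

# Session-based test management wraps exploratory work in an organizational
# frame: a chartered mission, a timebox, and a debrief that makes the work
# legible for reporting.
@dataclass
class TestSession:
    charter: str              # the mission, e.g. "Explore login error handling"
    timebox_minutes: int = 90 # fixed-length session
    notes: list[str] = field(default_factory=list)
    bugs: list[str] = field(default_factory=list)

    def debrief(self) -> str:
        # The debrief is what the organizational frame consumes.
        return f"{self.charter}: {len(self.bugs)} bugs, {len(self.notes)} notes"

session = TestSession(charter="Explore login error handling")
session.notes.append("error text differs between browsers")
print(session.debrief())
```

The frame itself is not the problem; making it the criterion for exploratory testing to exist at all is.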
I believe we are far from done in understanding how to talk about exploratory testing amongst doers and towards enablers like managers, or how to create the organizational frames that enable this style of learning.