Thursday, April 29, 2021

I don't trust your testing and how to change that

Over the years, I have been senior to a lot of testers. I've been in the position to refuse to give them work. I've been in the position to select what their primary focus in teams becomes. I've been in the position to influence whether they keep working on the same things at all. 

Being in a position of this kind of influence does not mean you have to be their manager. It means you need to have the ear of the managers. So from that position, inspired by the conversations we had in the Exploratory Testing Academy's Ask Anyone Anything session, I am going to give you advice on how to work to be appreciated by someone like me. (Note: it means NOTHING really, but it is a thought exercise.)

Let's face it: not all testers are that great. We all like to think we are, yet we don't spend enough time thinking about what makes us great. 

So let's assume I don't trust your testing and you want to change that - a tongue-in-cheek how-to guide. 

1. Talk about your results

I know I can search for your bugs in Jira, and no matter how much folks like me explain that not everything is there, in reality a lot of it is there in organizations that believe work does not exist without a card in Jira. Don't make me go and search. 

Talk about your results:

  • the information you have found and shared with the developers ("bugs")
  • the information you have found and could teach others ("lesson learned", "problems solved")
  • the changes in others you see your questions are creating
  • the automation you wrote to test, the automation you rewrote to test, the automation you left behind for regression purposes
  • the coverage you created without finding problems
Small streams, continuously, consistently. 

2. Create documentation in a light-weight manner

You can lose my trust with both too much and too little. Create enough, and especially, create the right kind. Some of it is for you, some of it is for your team. 

If you like tracking your work with detailed test cases, I don't mind. But if creating test cases does not intertwine with testing, and if it keeps your hands off the application more than on it, I will grow suspicious. Results matter - make your results tell the story that what you create for yourself makes sense for you. 

If others like using your detailed test cases, I will also make you responsible for their results. I have watched good testers go bad with other people's test cases. So if you need detail and make detail available for others, I expect you to care that their results don't suffer. Most likely I'd be watching how you promote your documentation to others, and be inclined towards suspicion. 

If you don't create documentation, it is just as bad. The scout rule says you need to leave things easier for those who come after you and I expect you to provide enough for that. 

3. Don't get caught being forgetful

A good tester needs to track information and tasks on a lot of different levels. If you keep forgetting things, and you don't have your own system of recalling them, this raises a concern. 

I don't want to be the one who reminds you to do your work. Once is fine; if the reminders become repeated, you should recognize you have something to work on. Like prioritizing your work to manageable amounts. Or offloading things into notes that help *you*. You are responsible for your system, and if you are not, trust will soon fade away. 

If you create a list of things to test, test them. If you don't test them, tell me about it. And never, ever tell me that you did not know how, did not ask for help, and let bugs slip just because you didn't work your way through a rough patch. 

4. Do your own QA of testing

After you have tested, follow through on what others find. Don't beat yourself up, but show you make an effort to learn and reflect. Have someone else test with you or after you. Later, watch what the customers report. Make an effort to know.

5. Co-create with me or pretty much anyone

Like most people, I have a self-inflated sense of self and I like the work I have done. Ask me to do the work with you - co-create idea lists, plans, priorities, whatnot - and inserting a bit of me into what really is yours will most likely make me like what you do a lot. 

Pro tip: this works on actual managers too. And probably on people who are not me. You also might gain something by tapping into the people around you, and just seeing you draw on others makes you appear active - because you are! 

6. Mind the cost

Your days at work cost money. Make sure your cost and your results are in balance. Manage the cost - you make choices on how much time you give to a particular thing. Be thoughtful and focus on risks. 

Time on something isn't just a direct cost of the time. It is also time away from something. In the time I spent writing this article, I could have:
  • Had a fruitless argument over terminology with someone on the internet
  • Set up a weather API and tried mocking it
  • Read Formulation from the BDD Books series, which I have been planning to get to all week
  • Played a fun game of Don't Starve Together
  • ....
We make choices to exclude things from our lives all the time. And I will have my perspective on *your* choices of cost vs. results. 

All in all, I think we are all on a learning journey, and not alone. Make it show and allow yourself to improve and grow. 

Sunday, April 25, 2021

The English Native Speakers Are Missing Out

I read an interview this week. Already a few years old, the interview asked Mervi Hyvönen, then about to retire from Kela, the Finnish Social Insurance Institution, about her career. 

Mervi started working with testing in 1973. The first software she tested used punch cards as the input mechanism for the program and the data. Before retiring, she had tested software in the same organization across five different decades.

The really interesting piece in the article was her description of their approach to testing: Kela has long been known as a place that holds space for great exploratory testing. She describes how they were doing exploratory testing before the first messages from across the seas reached Finland and gave it a new name. 

Those of us aware of the history of exploratory testing may remember the term was coined by Cem Kaner in 1984, and mentioned in the first edition of his Testing Computer Software in 1988. He did not invent it, he coined it - he saw others in addition to his own organizations doing it, and he gave it a name. It went down in testing history as the way product companies in Silicon Valley tested. It was also the way the pioneering companies in Finland tested. And I am pretty sure Finland is not the only corner of the world erased from the history in this case.

When conferences seek great speakers, non-native English speakers are often secondary. 

When we read research, the shared research language is English. And we know that a lot of scientific work of the past was written in German, and that the Russian scientific community still publishes heavily in their native language. 

When I chose to start speaking and later blogging, I started first with Finnish. Finns learn better when the material is in their own language. But authoring in two languages was taking a toll on my content, and I later shifted. 

When the Agile Manifesto came about, I already had a few years of researching "lightweight software development methods" behind me. 

When Exploratory Testing as a term started to show up in Finland, I had already been doing it. 

The appearance of information in the world is very English-language centric, and it creates a significant bias. 

I don't want Mervi, the pioneer of Exploratory Testing in Finland, to be forgotten. Her organization grew to hundreds of testers, and those testers took testing as they knew it forward to other organizations. Local companies creating professionals who moved around made testing here what it is today - not reading a few articles in English, but applying the ideas in Finnish. 

You all just don't know about it as long as the only language you read is English. As long as the only conversations you have are in English. And even when we don't know something exists, it very well might. 

Friday, April 23, 2021

Learning without Impact on Results

As I talk about exploratory testing, I find myself intertwining three words: design, execution, learning. Recently I have come to realize that I am often missing a fourth one: results. I take results for granted.

When I frame testing, I often talk about the idea that learning that gives us a 1% improvement would mean we are able to do what we did before while investing 4 minutes less for the same results. There is no learning without impact on results. 
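The arithmetic behind that framing is simple enough to spell out. A minimal back-of-the-envelope sketch, assuming a roughly 400-minute effective working day (the day length is my assumption for illustration, not stated above):

```python
# Back-of-the-envelope check of the "1% improvement = 4 minutes" framing.
# The 400-minute effective working day is an assumption for illustration.
effective_day_minutes = 400
improvement = 0.01  # a 1% improvement in how we test

minutes_saved = effective_day_minutes * improvement
print(minutes_saved)  # 4.0 minutes per day, for the same results
```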

Except there is. I recently talked to a fellow tester who asked me a question that revealed more about my thoughts than I had realized:

Aren't we all learning all the time?

This made me think of a past colleague who told me that the reason I find it hard to explain what exploratory testing is, is that the way I learn as an exploratory tester is learning in a different dimension than our usual words entail. 

So let's try an example - watching a tester at work. 

They were testing a feature for many weeks. The task in Jira was open, collecting the little dots tasks collect when they don't move forward in the process. The daily reports included mentions of obstacles, testing, preparing for testing, figuring things out. The feature wasn't a little one; it had relevant work ongoing on the developer side, showing up as messages on a small conversation channel asking about unblocking dependent tasks and figuring out troubles. The tester was on the channel, but never visible. 

Finally, the work came to a conclusion, with the task still open for finalizing test automation. A pull request emerged: 20 lines of code. Produced by producing 2000 lines of code - or rather, 200 lines of code produced 10 times over. Distilled learning, tightly packaged. 

Reflecting on the investment, the company got four results for quite a number of hours of work: 

  • One fixed bug
  • One regression test
  • A tested feature
  • A tester with 'learning'
Yet I would find myself dissatisfied with the result. The work had the appearance of design, execution and learning intertwined, used automation to explore and to document, and could be a model example of what I mean by contemporary exploratory testing. What I missed was the balance between learning that makes the tester better, and learning that makes the product and organization better. 

The visible external results were one fixed bug and one regression test, in an environment that appears to force communication into visibility. No tester-initiated early discussions leading to aha effects improving the product before it was coded. Limited reports on suspicious behaviors. Suspicious coverage of a tested feature that no one could analyze or add to. And eventually, late discoveries of issues in the area by people coming after, making the point that remaining alone was the wrong choice.

Sometimes when we think of 'learning', we overemphasize knowledge acquisition over the actionable results of the learning. It is as if our work as testers were to know it all, but do nothing else with it than know it. We collect information in ourselves, and lock it up. And that is not sufficient. 

Instead, the actionable results matter. The learning in exploratory testing is supposed to improve the information, and our colleagues, who also learn through our distilled learning. 

Saturday, April 17, 2021

Start with the Requirements

There's a whole lot of personal frustration in watching a group of testers take requirements, take designs of how the software is to be built, design test cases, automate some of them, execute others manually, take 10x more time than I think they should, AND leak relevant problems to the next stages in a way that makes you question whether they were ever there in the process. 

Your test cases don't give me comfort when your results say that the work that really matters - information on the quality of the product - is lacking. Your results include the neatly written steps that describe exactly what you did, so that someone else with basic information about the product can take them up next time for purposes of regression. Yet that does not comfort me. And since the existence of those tests makes the next rounds of testing even less effective for results, you are only passing forward things that make us worse, not better.

Over the years, I have joined organizations like this. I've cried for the lack of understanding in the organization on what good testing looks like, and wondered if I will be able to rescue the lovely colleagues doing bad work just because they think that is what is asked of them.

Exploratory testing is not a technique that you apply on top of this to fix it by doing "sessions". It is the mindset you intertwine to turn the mechanistic requirements-to-designs-to-test-cases-to-executions process, manual and automated, into a process of thinking, learning and RESULTS. 

Sometimes the organization I land in is overstructured and underperforming. If the company is successful, usually it is only testing that is overstructured and underperforming, and we just don't notice bad testing when great development exists. 

Sometimes the organization I land in has really bad quality, and testing is "important" as a way of patching it. Exploratory testing may be a cheaper way of patching. 

Sometimes the organization I land in has excellent quality and exploratory testing is what it should be - finding things that require thinking, connections and insight. 

It still comes down to costs and results, not labels and right use of words. And it always starts with requirements. 

Friday, April 16, 2021

The dynamics of quadrants models

If you sat through a set of business management classes, chances are you've heard your teacher joke about quadrants being *the* model. I didn't get the joke back then, but it is kind of hilarious now how humankind puts everything in two dimensions and works on everything based on that. 

The quadrants popularized by Gartner in market research are famous enough to hold the title "magic" and have their own Wikipedia page. 

Some of my favorite ways to talk about management, like Kim Scott's Radical Candor, are founded on a quadrant model. 

The field of testing - Agile Testing in particular - has created its own popular quadrant model, the Agile Testing Quadrants.  

Here's the thing: I believe the agile testing quadrants model is a particularly bad one. It was created by great people, but it gives me grief:

  • It does not fit into either of the two canonical quadrant model types of "move up and right" or "balance". 
  • It misplaces exploratory testing in a significant way
Actually, it is that simple. 

The Canonical Quadrant Models

It's like the DIY of quadrants. All you need is two dimensions that you want to illustrate. For each dimension, you need descriptive labels for the opposite ends, like hot - cold and technology - business. See what I did there? I just placed technology and business at opposing ends in a world where they are merging for those who are successful. Selecting the dimensions is a work of art - how can we drive a division into two that would be helpful, and for quadrants, in two dimensions? 
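The recipe is simple enough to sketch as code. A tongue-in-cheek toy classifier, using the example labels from the text (hot/cold, technology/business) - the names are purely illustrative:

```python
# Toy DIY quadrant: two dimensions, a label for each opposite end,
# and every item lands in exactly one of four boxes.
def quadrant(hot: bool, technology: bool) -> str:
    vertical = "hot" if hot else "cold"
    horizontal = "technology" if technology else "business"
    return f"{vertical}/{horizontal}"

print(quadrant(hot=True, technology=False))   # hot/business
print(quadrant(hot=False, technology=True))   # cold/technology
```

Four boxes, and everything fits somewhere - which is exactly the seduction of the format.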

The rest is history. Now we can categorize everything into a neat box. Which takes us to the second level of canonical quadrant models - what they communicate.

You get to choose between two things your model communicates. It's either Move Up and Right or Balance.

Move Up and Right is the canonical model of the Magic Quadrants, as well as Kim Scott's Radical Candor. Who would not want to be high on Completeness of Vision and Ability to Execute when positioning their product in a market, or to move towards Challenging Directly while Caring Personally to apply Radical Candor? The Move Up and Right is the canonical format that sets a path on the two most important dimensions. 

Move Up and Right says you want to be in Q3. It communicates that you move right and you move up - a properly aspirational message. 

The second canonical model for quadrants is Balance. This format communicates a balanced classification. For quadrants, each area is of the same size and importance. Forgetting one, or focusing too much on another, would be BAD(tm).

Each area would have things that are different in two dimensions of choice. But even when they are different, the Balance is what matters. 

Fixing Agile Testing Quadrants

We discussed earlier that I have two problems with the agile testing quadrants. It isn't a real model of balance, and it misrepresents exploratory testing. What would fixing it look like, then?

To support your imagination, I made the corrections on top of the model itself. 

First, the corner clouds must go. Calling Q1 things like unit tests automated when they are handcrafted pieces of art is an abomination. They document the developer's intent, and there is no way a computer can pull out the developer's intent. Calling Q3 things manual is equally off in a world where we look at what exactly our users are doing with automated data collection in production. Calling Q4 things tools is equally off, as automated performance benchmarking and security monitoring are everyday activities there. That leaves Q2, which was muddled with a mix already. Let's just drop the clouds and get our feet on the ground. 

Second, let's place exploratory testing where it always has been. Either it's not in the picture (like in most organizations calling themselves Agile these days), or if it is in the picture, it is right in the middle of it all. It's an approach that drives how we design and execute tests in a learning loop. 

That still leaves the problem of Balance which comes down to choosing the dimensions. Do these dimensions create a balance to guide our strategies? I would suggest not. 

I leave the better dimensions as a challenge for you, dear reader. What two dimensions would bring "magic" to transforming your organization's testing efforts?  

Monday, April 5, 2021

Learning from Failures

As I opened Twitter this morning and saw the announcement of a new conference called FailQonf, I started realizing I have strong feelings about failure. 

Earlier conversations with a past colleague could have tipped me off, as he keeps pointing out that big success comes out of many small successes, not failures, while I keep insisting that learning requires both success and failure. 

Regardless, failure is fascinating.

When we play a win-lose -game, one must lose for the other to win. 

A failure for one is a small bump in the road for others.

In telling failure stories, we often imagine we had a choice to do things differently when we didn't. 

A major failure for some is a small consequence for others. 

Power dynamics play a role in failure and its future impacts, perhaps even more than success. 

Presenting a failure in a useful way is difficult. 

Missing an obvious bug. Taking the wrong risk. Trusting the wrong people. 

Failures and successes are concepts we can only talk about in hindsight, and I have a feeling that isn't helping us look forward. 

We need to talk about experiments and learning, not success and failure. 

And since I always fail to say what I should be saying, let's add a quote:


Saturday, April 3, 2021

Exploratory Testing the Verb and the Noun

 I find myself repeating a conversation that starts with one phrase:

"All testing is exploratory" 

With all the practice, I have become better at explaining how I make sense of that claim, and how it can be true and false for me at the same time. 

If testing is a verb - 'Exploratory testing the Verb' - all testing is exploratory. In doing testing in the moment, we are learning. If we had a test case with exact steps, we would probably still look around to see a thing that wasn't mentioned explicitly. 

If testing is a noun - 'Exploratory testing the Noun' - not all testing is exploratory. Organizations do a great job at removing agency with roles and responsibilities, expectations, and processes. The end result is that we get less value for our time invested. 

I realized this way of thinking was helping me make sense of things in a conversation with Curtis, known as CowboyTester on Twitter. Curtis is wonderful and brilliant, and I feel privileged to have met him in person at a conference somewhere in the USA. 

Exploratory Testing the Noun matters a lot these days. For you to understand exactly how it matters, we need to go back to discussing the roots of exploratory testing. 

Back in the days of the first mentions of exploratory testing by Cem Kaner in 1984, the separation from all other testing came from the observation that some companies heavily separated, by their process, the design and execution of tests. Exploratory testing was coined to describe a skilled, multidisciplinary style of testing where design and execution are intertwined, or like they said in the early days, "simultaneous". The way testing was intertwined for rehearsed practitioners of exploratory testing made it appear simultaneous, as it is typical to watch someone doing exploratory testing work at the same time on things in the moment and long term, on details and the big picture, on design and execution. The emphasis was on agency - the doer of the activity being in control of the pace to enable learning. 

Exploratory testing was the Noun, not the Verb. It was the framing of testing so that agency leading to learning and more impactful results was in place. 

For me, exploratory testing is about organizing the work of testing so that agency remains, and encouraging learning that changes the results. 

When we do Specification by Example (or Behavior Driven Development, which seems to be winning out in phrase popularity), I see that we do it most often in a way I don't consider exploratory testing. We stick to our agreed examples and focus on execution (implementation) over enabling the link between design and execution where every execution changes the designs. We give up on the relentlessness of learning, and live within the box. And we do that by accident, not by intention. 

When we split work in backlog refinement sessions, we set up our examples and tasks. And the tasks often separate design and execution, because it looks better to an untrained eye. But in closing those tasks to fit the popular notion of a continuous stream, we create the separation of design and execution that removes exploratory testing. 

When we require test cases for compliance, to be linked to the requirements, we create the separation of design and execution that removes exploratory testing. 

When supporting a new tester, we don't support the growth of their agency by pairing with them and ensuring they have design and execution responsibility; we hand them tasks that someone else designed for them. 

Exploratory testing the noun creates various degrees of impactful results. My call is for turning up the degree of impact, and it requires us to recognize the ways of setting up work for the agency we need in learning.

Friday, April 2, 2021

Learning for more impactful testing

I wrote a description of exploratory testing and showed it to a new colleague for feedback. In addition to fixing my English grammar (appreciated!), she pointed out that when I wrote on learning while testing, I did not emphasize enough that the learning is really supposed to change the results to be more impactful.

We all learn, all the time, with pretty much all we do. I have seen so many people take a go at exploratory testing the very same application, and what my colleague pointed out really resonated: it's not just learning, it's learning that changes how you test.

Combining many observations into one, I get to watch learning that does not change how you test. They look at the application, and when asked for ideas on what they would do to test it, the baseline is essentially the same, and the question after each test on what they learned produces reports of observations, not actions on those observations. "I learned that works", "I learned that does not work". "I learned I can do that", "I learned I should not do that". These pieces include a seed that needs to grow significantly before the learning of the previous test shows up in the next tests. 

It's not that we do design and execution simultaneously, but that we weave them together into something unique to the tester doing the testing. The tester sets the pace, and years and years of learning speed up the pace so that it appears as if we are thinking in multiple dimensions all at the same time.  

The testing I did a year ago still helps me with the testing I do today. I build patterns over long times, over various applications, and over multiple organizations offering a space in which my work happens. I learn to be more impactful already during the very first tests, but continue growing from the intellectual push testing gives. 

We don't have the answer key to the results we should provide. We need to generate our answer keys, and our skills to assess completeness of those answer keys we create in the moment. Test by test. Interaction by interaction. Release by release. Tuning in, listening, paying attention. Always weaving the rubric that helps us do better, one test at a time. 

The Conflated Exploratory Testing

Last night I spoke at the TSQA meetup and received ample compensation in the inspiration people in that session enabled through conversations. Showing up for a talk can feel like broadcasting, and that gives me the chance of sorting out my thoughts on a different topic every single time, but when people show up for conversation afterwards, that leaves me buzzing. 

Multiple conversations we ended up having were on the theme of how conflated exploratory testing is, and how we so easily end up with conversations that lead nowhere when we try to figure it out. The honest response from me is that it has meant different things in different communities, and it must be confusing for people who haven't traversed through the stages of how we talk about it. 

So, half seriously and tongue in cheek, I set out to put its stages down in this note. Thinking of the good old divisive yet formative "schools of testing" work, I'm pretty sure we can find schools of exploratory testing. What I would hope to find, though, is a group of people who would join in describing the reasons why things are the way they are, with admiration and appreciation of others, instead of ending up with one school to rule them all with everything positive attached to it. 

Here are the stages, each still existing in the world I think I am seeing:

  • Contemporary Exploratory Testing
  • Agile Technique Exploratory Testing
  • Deprecated Exploratory Testing
  • Session-based Exploratory Testing
  • ISTQB Technique Exploratory Testing
  • The Product Company Exploratory Testing

As I am writing this post, I realize I want to sort this thinking out better, and I am starting to work on comparison slides. So with this one, I will leave it as an early listing, making a note of yesterday's inspirations:

  • The recruiting story, where people show up telling how they schedule an exploratory testing session at the end, only to find it cancelled as other activities take over. The low-priority task of session framing, unfortunately common with the Agile Technique Exploratory Testing. 
  • The middle-era domination-by-managing-with-sessions story, where session-based test management hijacked the concept, becoming the defining criterion for exploratory testing to survive in organizational frames not founded on trust. 
  • The common course providers forcing it into a technique story, where people learned to splash some on top to fix humanity instead of seeking power from it.
  • The unilateral deprecation story, where terrible twins' marketing gimmicks shifted conversations to testing in a particular namespace to create the possibility of coherent terms in a bubble. 
I believe we are far from done on understanding how to talk about exploratory testing amongst doers and towards enablers like managers, or how to create organizational frames that enable this style of learning.