Saturday, April 17, 2021

Start with the Requirements

There's a whole lot of personal frustration in watching a group of testers take requirements, take designs of how the software is to be built, design test cases, automate some of them, execute others manually, take 10x more time than I think they should, AND leak relevant problems to the next stages in a way that makes you question if they were ever there in the process. 

Your test cases don't give me comfort when your results say that the work that really matters - information on the quality of the product - is lacking. Your results include the neatly written steps that describe exactly what you did, so that someone else with basic information about the product can take them up next time for purposes of regression. Yet that does not comfort me. And since the existence of those tests makes the next rounds of testing even less effective for results, you are only passing forward things that make us worse, not better.

Over the years, I have joined organizations like this. I've cried over the lack of understanding in the organization of what good testing looks like, and wondered whether I would be able to rescue the lovely colleagues doing bad work just because they think that is what is asked of them.

Exploratory testing is not a technique that you apply on top of this to fix it by doing "sessions". It is the mindset you intertwine to turn the mechanistic chain of requirements to designs to test cases to executions, manual and automated, into a process of thinking, learning and RESULTS. 

Sometimes the organization I land in is overstructured and underperforming. If the company is successful, usually it is only testing that is overstructured and underperforming, and we just don't notice bad testing when great development exists. 

Sometimes the organization I land in has really bad quality and testing is "important" as patching it. Exploratory testing may be a cheaper way of patching. 

Sometimes the organization I land in has excellent quality and exploratory testing is what it should be - finding things that require thinking, connections and insight. 

It still comes down to costs and results, not labels and right use of words. And it always starts with requirements. 



Friday, April 16, 2021

The dynamics of quadrants models

If you sat through a set of business management classes, chances are you've heard your teacher joke about quadrants being *the* model. I didn't get the joke back then, but it is kind of hilarious now how humankind puts everything into two dimensions and works on everything based on that. 

The quadrants popularized by Gartner in market research are famous enough to hold the title "magic" and have their own Wikipedia page. 

Some of my favorite ways to talk about management, like Kim Scott's Radical Candor, are founded on a quadrants model. 

The field of testing - Agile Testing in particular - has created its own popular quadrant model, the Agile Testing Quadrants.  

Here's the thing: I believe the Agile Testing Quadrants are a particularly bad model. They were created by great people, but the model gives me grief:

  • It does not fit into either of the two canonical quadrant model types of "move up and right" or "balance". 
  • It misplaces exploratory testing in a significant way
Actually, it is that simple. 

The Canonical Quadrant Models

It's like DIY for quadrants. All you need is two dimensions that you want to illustrate. For each dimension, you need descriptive labels for the opposite ends, like hot - cold and technology - business. See what I did there? I just placed technology and business at opposing ends in a world where they are merging for those who are successful. Selecting the dimensions is a work of art - how can we drive a division into two that would be helpful, and for quadrants, in two dimensions? 
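If it helps to see how mechanical the recipe is, here is a toy sketch in Python: two dimensions scored between opposite ends, and anything you score lands in exactly one of four boxes. The end labels are borrowed from the agile testing quadrants purely for illustration, and the scores are made up.

```python
# A toy version of the quadrant DIY recipe: two dimensions with labeled ends,
# and every scored item falls into exactly one of four boxes.
def quadrant(x: float, y: float) -> str:
    """x and y are scores between 0 and 1 on the two chosen dimensions."""
    horizontal = "technology-facing" if x >= 0.5 else "business-facing"
    vertical = "critique the product" if y >= 0.5 else "support the team"
    return f"{horizontal} / {vertical}"

print(quadrant(0.2, 0.8))  # business-facing / critique the product
print(quadrant(0.9, 0.1))  # technology-facing / support the team
```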


The rest is history. Now we can categorize everything into a neat box. Which takes us to the second level of canonical quadrant models - what they communicate.

You get to choose between two things your model communicates. It's either Move Up and Right  or Balance.

Move Up and Right is the canonical model of the Magic Quadrants, as well as Kim Scott's Radical Candor. Who would not want to be high on Completeness of Vision and Ability to Execute when positioning a product in a market, or move towards Challenging Directly while Caring Personally to apply Radical Candor? Move Up and Right is the canonical format that sets a path along the two most important dimensions. 

Move Up and Right says you want to be in Q3. It communicates that you move right and you move up - a properly aspirational message. 

The second canonical model for quadrants is Balance. This format communicates a balanced classification. For quadrants, each area is of the same size and importance. Forgetting one, or focusing too much on another, would be BAD(tm).    


Each area would have things that are different in the two dimensions of choice. But even when they are different, the Balance is what matters. 

Fixing Agile Testing Quadrants

We discussed earlier that I have two problems with agile testing quadrants. It isn't a real model of balance and it misrepresents exploratory testing. What would fixing it look like then?

To support your imagination, I made the corrections on top of the model itself. 

First, the corner clouds must go. Calling Q1 things like unit tests "automated" when they are handcrafted pieces of art is an abomination. They document the developer's intent, and there is no way a computer can pull out the developer's intent. Calling Q3 things "manual" is equally off in a world where we look at what exactly our users are doing with automated data collection in production. Calling Q4 things "tools" is equally off, as its automated performance benchmarking and security monitoring are everyday activities. That leaves Q2, which was already muddled with a mix. Let's just drop the clouds and get our feet on the ground. 

Second, let's place exploratory testing where it always has been. Either it's not in the picture (like in most organizations calling themselves Agile these days) or if it is in the picture, it is right in the middle of it all. It's an approach that drives how we design and execute tests in a learning loop. 

That still leaves the problem of Balance which comes down to choosing the dimensions. Do these dimensions create a balance to guide our strategies? I would suggest not. 

I leave the better dimensions as a challenge for you, dear reader. What two dimensions would bring "magic" to transforming your organization's testing efforts?  

Monday, April 5, 2021

Learning from Failures

As I opened Twitter this morning and saw the announcement of a new conference called FailQonf, I started realizing I have strong feelings about failure. 

Earlier conversations with a past colleague could have tipped me off to this, as he keeps pointing out that big success comes out of many small successes, not failures, and I keep insisting that learning requires both success and failure. 

Regardless, failure is fascinating.

When we play a win-lose game, one must lose for the other to win. 

A failure for one is a small bump in the road for others.

In telling failure stories, we often imagine we had a choice of doing things differently when we don't. 

A major failure for some is a small consequence for others. 

Power dynamics play a role in failure and its future impacts, perhaps even more than success. 

Presenting a failure in a useful way is difficult. 

Missing an obvious bug. Taking the wrong risk. Trusting the wrong people. 

Failures and successes are concepts we can only talk about in hindsight, and I have a feeling that isn't helping us look forward. 

We need to talk about experiments and learning, not success and failure. 

And since I always fail to say what I should be saying, let's add a quote:


 

Saturday, April 3, 2021

Exploratory Testing the Verb and the Noun

 I find myself repeating a conversation that starts with one phrase:

"All testing is exploratory" 

With all the practice, I have become better at explaining how I make sense of that claim and how it can be true and false for me at the same time. 

If testing is a verb - 'Exploratory testing the Verb' - all testing is exploratory. In doing testing in the moment, we are learning. If we had a test case with exact steps, we would probably still look around to see a thing that wasn't mentioned explicitly. 

If testing is a noun - 'Exploratory testing the Noun' - not all testing is exploratory. Organizations do a great job at removing agency with roles and responsibilities, expectations, and processes. The end result is that we get less value for our time invested. 

I realized this way of thinking was helping me make sense of things from a conversation with Curtis, known as the CowboyTester on Twitter. Curtis is wonderful and brilliant, and I feel privileged to have met him in person at a conference somewhere in the USA. 

Exploratory Testing the Noun matters a lot these days. For you to understand exactly how it matters, we need to go back to discussing the roots of exploratory testing. 

Back in the days of the first mentions of exploratory testing by Cem Kaner in 1984, the separation from all other testing came from the observation that some companies heavily separated, by their process, the design and execution of tests. Exploratory testing was coined to describe a skilled, multidisciplinary style of testing where design and execution are intertwined, or like they said in the early days, "simultaneous". The way testing was intertwined in the hands of rehearsed practitioners of exploratory testing made it appear simultaneous, as it is typical to watch someone doing exploratory testing work at the same time on things in the moment and long term, details and big picture, and design and execution. The emphasis was on agency - the doer of the activity being in control of the pace to enable learning. 

Exploratory testing was the Noun, not the Verb. It was the framing of testing so that agency leading to learning and more impactful results was in place. 

For me, exploratory testing is about organizing the work of testing so that agency remains, and encouraging learning that changes the results. 

When we do Specification by Example (or Behavior Driven Development, which seems to be winning in phrase popularity), I see that we do it most often in a way I don't consider exploratory testing. We stick to our agreed examples and focus on execution (implementation) over enabling the link between design and execution where every execution changes the designs. We give up on the relentlessness of learning, and live within the box. And we do that by accident, not by intention. 

When we split work in backlog refinement sessions, we set up our examples and tasks. And the tasks often separate design and execution, because it looks better to an untrained eye. But in closing those tasks to fit the popular notion of a continuous stream, we create the separation of design and execution that removes exploratory testing. 

When we ask for test cases for compliance, linked back to the requirements, we create the separation of design and execution that removes exploratory testing. 

When supporting a new tester, we don't support the growth of their agency by pairing with them and ensuring they have both design and execution responsibility; we hand them tasks that someone else designed for them. 

Exploratory testing the noun creates various degrees of impactful results. My call is for turning up the degree of impact, and it requires us to recognize the ways of setting up work for the agency we need in learning.




Friday, April 2, 2021

Learning for more impactful testing

I wrote a description of exploratory testing and showed it to a new colleague for feedback. In addition to fixing my English grammar (appreciated!), she pointed out that when I wrote on learning while testing, I did not emphasize enough that the learning is really supposed to change the results to be more impactful.

We all learn, all the time, with pretty much all we do. I have seen so many people take a go at exploratory testing the very same application, and what my colleague pointed out really resonated: it's not just learning, it's learning that changes how you test.

Combining many observations into one, I get to watch learning that does not change how you test. They look at the application, and when asked for ideas on what they would do to test it, the baseline is essentially the same, and the question after each test on what they learned produces reports of observations, not actions on those observations. "I learned that works", "I learned that does not work". "I learned I can do that", "I learned I should not do that". These pieces include a seed that needs to grow significantly before the learning of the previous test shows up in the next tests. 

It's not that we do design and execution simultaneously, but that we weave them together into something unique to the tester doing the testing. The tester sets the pace, and years and years of learning speed up the pace so that it appears as if we are thinking in multiple dimensions all at the same time.  

The testing I did a year ago still helps me with the testing I do today. I build patterns over long times, over various applications, and over multiple organizations offering a space in which my work happens. I learn to be more impactful already during the very first tests, but continue growing from the intellectual push testing gives. 

We don't have the answer key to the results we should provide. We need to generate our answer keys, and our skills to assess completeness of those answer keys we create in the moment. Test by test. Interaction by interaction. Release by release. Tuning in, listening, paying attention. Always weaving the rubric that helps us do better, one test at a time. 

The Conflated Exploratory Testing

Last night I spoke at the TSQA meetup and received ample compensation in the inspiration that people in that session enabled through conversations. Showing up for a talk can feel like I'm broadcasting, and that gives me the chance to sort out my thoughts on a different topic every single time, but when people show up for conversation after, that leaves me buzzing. 

Multiple conversations we ended up having were on the theme of how conflated exploratory testing is, and how easily we end up with conversations that lead nowhere when we try to figure it out. The honest response from me is that it has meant different things in different communities, and it must be confusing for people who haven't traversed the stages of how we talk about it. 

So, with a half-serious tongue in cheek, I set out to put the stages of it in this note. Thinking of the good old divisive yet formative "schools of testing" work, I'm pretty sure we can find schools of exploratory testing. What I would hope to find, though, is a group of people that would join in describing the reasons why things are the way they are with admiration and appreciation of others, instead of ending up with the one school to rule them all with everything positive attached to it. 

Here are the stages, each still existing in the world I think I am seeing:

  • Contemporary Exploratory Testing
  • Agile Technique Exploratory Testing
  • Deprecated Exploratory Testing
  • Session-based Exploratory Testing
  • ISTQB Technique Exploratory Testing
  • The Product Company Exploratory Testing

As I am writing this post, I realize I want to sort this thinking out better, and I am starting to work on comparison slides. So with this one, I will leave it as an early listing, making a note of yesterday's inspirations:

  • The recruiting story, where people show up telling how they schedule an exploratory testing session at the end, only to find it cancelled with other activities taking over. The low-priority task of session framing, unfortunately common with the Agile Technique Exploratory Testing. 
  • The middle-era domination by managing with sessions story, where session-based test management hijacked the concept, becoming the defining criterion for exploratory testing to survive in organizational frames not founded on trust. 
  • The common course providers forcing it into a technique story, where people learned to splash some on top to fix humanity instead of seeking power from it.
  • The unilateral deprecation story, where terrible twins marketing gimmicks shifted conversations to testing in a particular namespace to create a possibility of coherent terms in a bubble. 
I believe we are far from done on understanding how to talk about exploratory testing amongst doers and towards enablers like managers, or how to create organizational frames that enable this style of learning.  



Wednesday, March 24, 2021

Reality Check: What is Possible in Tester Recruitment?

I delivered a talk at Breakpoint 2021 on Contemporary Exploratory Testing. The industry at large still keeps pitting manual testing against automated testing as if those were our options, when we should reframe all the work as exploratory testing, in which manual work creates the automation to do the testing work we find relevant to do. 

You can't explore well without automating. You can't automate well without exploring. 

Intertwining discovery and documenting with automation - at the level of minutes passing as we explore - we enable discovering with a wider net using that automation. 

A little later in the same day, Erika Chestnut delivered her session on Manual Testing Is Not Dead... Just the Definition. A brilliant, high-energy message about a Director of QA socializing the value of testing, centering the non-automated parts to make space for all the important work that needs doing to make testing an equal perspective at all tables, for the software and the services around the software - a systems perspective. Erika was talking about my work and hitting all the right notes except for one: why would doing this require it to be manual or, like Erika reframed it, "humanual"? 

When I talk about contemporary exploratory testing including automation as a way of documenting, extending, alerting, and focusing us on particular types of details, I frame it in a "humanual" way. Great automation requires thinking humans, and not all code written for testing purposes needs to end up "automated", running in a CI pipeline. 


The little difference in our framing is in what types of people we are about to recruit to fulfill our visions of what testing ends up being in our organizations, as leaders of the craft. My framing asks for people who want to learn programming and collaborate tightly on code in addition to doing all the things people find to do when they can't program. Erika's framing makes it clear there will be roles where programming isn't part of the required skills we will recruit for. 

This leads me to a reality check.

I have already trained great contemporary exploratory testers, who center the "humanual" thinking but are capable of contributing on code and finding a balance between all necessary activities we need to get done in teams. 

I have already failed to recruit experienced testers who would be great contemporary exploratory testers, generally because they come with either a "humanual" or an "automated" bias, designing testing to fit their own skillset over the entire team's skillset.

If you struggle with automation, it hogs all your time. If you struggle with ideas that generate information without automation, automation is a tempting escape. But if you can do both and balance both, wouldn't that be what we want to build? But is that realistically possible or is the availability of testers really still driving us all to fill both boxes separately? 

Or maybe, just maybe, we have a lost generation of testers who are hard to train out of the habits they already have, even though I was able to make that move at around 20 years of experience. The core of my move was not about needing time for the "humanual" part first; it was getting over my fear of activating a new skill, programming, that I worried would hog all my time - only to learn that the fear did not have to become reality. 





Saturday, March 13, 2021

Two Little Ideas for Facilitation of Online Ensembles

Today was a proud moment for me: I got to watch Irja Straus deliver a remote Ensemble Testing experience at Test Craft Camp 2021. We had paired on testing the application together, worked on facilitation ideas, and while at all of that, developed what I want to think of as a solid friendship. 

The setup of Irja's Ensemble today was 12 participants, fitting into a 45-minute session. Only a subset of participants got to try their personal action as the main navigator and the driver, but there were a lot of voices making suggestions for the main navigator. The group did a great job under Irja's guidance, definitely delivering on the idea that a group of people working on the same task in the same (virtual) space, at the same time, on the same computer is a great way of testing well. 

Watching the session unfold, I learned two things I would add to my palette of facilitation ways.

1. Renaming the roles

Before the session, I had proposed to Irja that we try the kid-focused terminology for ensembling. Instead of saying driver (the person on the keyboard), we would say 'hands'. Instead of saying navigator (the person making decisions), we would say 'brains'. 

Irja had adapted this further, emphasizing we had a 'major brain' and 'little brains', incorporating the whole ensemble into the work of navigation while still explaining that when there were multiple choices on where the group went, the call is with one person at a time. 

Watching the dynamic, I would now use three roles:

  • hands
  • brain
  • voice
You hear voices, but you don't do what the voices say unless it is aligned with the brain. 

2. Popcorn role assignment

Before the session, I had proposed to Irja a very traditional rotation model where the brain becomes the hands and then rotates off to the voices in the ensemble.

Irja added an element of volunteering from the group into the roles. When the first people were volunteers, the session started with a nice energy and modeled volunteering for the rest of them. The time limitation ensured that the same people did not repeat volunteering into the same roles. One person first volunteering as hands and later volunteering as brains threw the preplanned rotation off a bit, but also generated a new way of facilitating it.

Have people volunteer, with the idea that we want to go through everyone as both brains and hands, but the order in which each person plays each part is up to the ensemble. Since physically moving people to the keyboard was not required, a popcorn assignment works well. 

 


Tuesday, February 23, 2021

End to End Testing as a Test Strategist

For the last 10 months, I've worked on a project that assigned me end to end testing responsibility. As I joined the project, I had little clue what the work would end up being. I trusted that, like all other work, things would sort themselves out as I actively learned about them. 

Armed with the idea of job crafting ('make the job you have the job you want') I knew what I wanted: 

  • principal engineer position with space for hands-on work
  • making an impact, being responsible
  • working with people, not alone
With a uniquely open mandate, I figured out my role would be a mix of three things:
  1. tester within a team
  2. test manager within a project delivering a system
  3. test coach within the organization 
The learnings have been fascinating. 

My past experience of growing teams into being responsible for the system rather than their own components was helpful, and resulted in seeing both the confusion, and the importance of that confusion, when one team member chooses a scope of work bigger than the team's. 

I had collected more answers to why than many people with a longer history in the company, focusing on connections. The connections enabled me to prioritize the testing I did, and to ask others for testing I wasn't going to do myself. 

The way I chose to do end to end testing differed from what my predecessor had chosen it to be for them, as I executed a vision of getting the system end-to-end tested, but in small pieces.

I focused on enabling continuous delivery where I could, taking pride in the work where my team was able to release their part of the system 26 times in 2020. 

I focused on enabling layering of testing, having someone else test things that were already tested. It proved valuable often, and enabled an approach to testing where we are stronger together. The layered approach allowed me to experience true smoke testing (finding hardware problems in incremental system testing).

The layered approach also helped me understand where my presence was needed most for the success of the project, and move my tester presence from one team in the project to another half-way through. I believe this mobility made a difference in scale as I could help critical parts from within. 

I came to appreciate conversations as part of figuring out who would test things. I saw amazing exploratory testing emerge, done by developers following hunches about critical areas of risk from those conversations. 

Instead of taking my end to end testing responsibility as a call to test, I took it as a call to facilitate. While I have used every single interface the system includes (and that is many!), every interface has someone who knows it deeper and has local executable documentation - test automation - that captures their concerns as well as many of mine. 

Looking back, it has been an effort to do end to end testing as a test strategist. You can't strategize about testing without testing. My whole approach to finding the layers is founded on the idea that I test alongside the others. Not just the whole team, but all of the teams, and in a system of size, that is some ground to cover. 


Sunday, February 14, 2021

Three routes to fixing problems test cases hide

I'm a big fan of exploratory testing, which often means I have reservations about test cases - or at least ideas on how to interpret test cases in a way that does not require such an intensive investment in writing things down, that helps us write down the things that are needed, and that keeps us from thinking of test cases - or any documentation for that matter - as representing all the testing we are doing. 

Today I wanted to share three experiences from my career from three different organizations on how we tweaked our test cases to fix a problem all three organizations shared: using a lot of time for testing, but leaking significant bugs we believed we should be able to find. 

Organization 1: Business Acceptance Testing

The first example is from an organization where I managed business acceptance testing. I was working with two different projects, moving the business acceptance testing phase from a months-long endeavor to something that would fit in 30 days. One of my projects had a history of writing detailed test cases, the other had a history of not really using test cases. In getting the timeframe condensed, understanding what we had in mind to do and being able to reprioritize was essential. 

For both projects, we used Quality Center (HP's ALM solution was called that back then). Both projects started with test data in mind, and that is what we used as a starting point for our tests. We selected our test data against a set of criteria, and wrote the criteria down in the test case title, summarizing the business need for that particular data type. And as test steps, we used Quality Center's concept of test templates - a reusable set of steps that described, at a high level, the processes the two teams were running, the same for every single test case. 

Thus our test cases were titles, with template test checklists to help us analyze and reprioritize our work. Same-looking tests could take a day in the first week and 15 minutes later in the cycle. The test case looked the same, but we used it differently, to explore. 
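To show how light that structure was, here is a minimal sketch of it as Python data rather than Quality Center artifacts; the data criteria and template steps are invented placeholders, not the real ones we used.

```python
# Sketch: test case = a title describing the test data criteria,
# plus one reusable template checklist shared by every case.
# All criteria and steps below are invented for illustration.
PROCESS_TEMPLATE = [
    "Create the case with the selected test data",
    "Run it through the end-to-end business process",
    "Verify outputs, letters and ledger entries",
    "Check the case in follow-up reporting",
]

TEST_CASE_TITLES = [
    "Private customer, foreign address, paper invoicing",
    "Corporate customer, multiple contracts, e-invoicing",
    "Customer with overdue payments entering collections",
]

# Render the worksheet: every title gets the same checklist to explore against.
for title in TEST_CASE_TITLES:
    print(title)
    for step in PROCESS_TEMPLATE:
        print(f"  [ ] {step}")
```

The unit we analyzed and reprioritized was the title; the steps stayed a reminder checklist, not a script.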

On one of the two projects, they had a history of writing test cases where the steps also described the detail, and they were concerned that giving those up might mean they forget to cover something, as information about changes isn't easy to pass around for a whole group doing acceptance testing. So we split our weeks into the first two, where we used the "old style" detailed tests, and the latter two, where we used the new style. We found all the problems during the latter two weeks, but in general, the software contractor had done a really great job with their testing, and the numbers of bugs we had to deal with were record low. 

Organization 2: Product testing with Reluctant Developers

The second example is from an organization I joined as their first and only test specialist. With their project manager's leadership, they had figured out a way of writing test cases into Word documents, one for each major area of the product. Tracking that the test cases were completed was central to the way they tested amongst the group of developers. Automation, on unit or system level, was not yet a thing for them. 

As I joined, the project manager wanted me to start creating test case documents like they had, improving them, and had ideas of how many test cases they would expect me to complete every day. 

Sampling one of the existing test specifications, it had 39 pages, 46 test cases, and 3 pieces of relevant information that I could not have figured out from commonly available knowledge without reading the text. 

I made a deal with the project manager to write down structured notes while I tested, and we got to a place where I was trusted with testing, reluctant developers were trusted to test with me, and the test cases went away. Instead we used checklists of features to remind us of what could be checked, designing tests in the moment with regard to what the changes to the system were. 

Organization 3: Product testing with certification requirements

The third example is from an organization with a history of writing test cases that are traced back to requirements. Test cases are stepwise instructions. 

The change I introduced was to have two kinds of test cases: [Scenario] and [Feature]. Scenarios are what we use to test with, and they leave a lot of room for what exactly needs to be verified. The same test could take a week or an hour. For Scenarios, the steps are features as a checklist - which features are part of that user journey. When we feel we need a reminder of how to see a basic, sunny-day scenario of a feature, to remember where testing starts from, that is where Feature tests come in. The guideline was to write down only what wasn't obvious and keep instructions concise. There can be a Feature test without any steps at all. Steps are optional. 
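For illustration, here is a minimal sketch of the shape these two kinds of test cases took; the scenario, feature names and notes are made up, not taken from the actual certification-facing test base.

```python
# Sketch: [Scenario] tests list features as a checklist of what the journey touches;
# [Feature] tests carry only the non-obvious how-to, and steps are optional.
# All names and notes are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    kind: str                 # "[Scenario]" or "[Feature]"
    title: str
    steps: List[str] = field(default_factory=list)  # optional; empty is fine

test_base = [
    TestCase("[Scenario]", "New customer orders and receives a report",
             steps=["Account creation", "Order placement",
                    "Report generation", "Email notification"]),
    TestCase("[Feature]", "Report generation",
             steps=["Non-obvious: reports render only after the nightly data job"]),
    TestCase("[Feature]", "Email notification"),   # no steps at all
]

for case in test_base:
    print(case.kind, case.title, f"({len(case.steps)} checklist items)")
```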

Clearly, the test cases don't describe all the testing that takes place. But they describe seeing that what we promised would be there is there, and they help us remember and pass on the information of how to see a feature in action. 

The Problems Test Cases Hide

Test cases can lead people into thinking that when they've done what they designed to do - the test cases - they are done testing. But testing does not work that way. The ways software can fail are versatile and surprising. And we care about results - information and bugs - over the documentation. 

Too much openness does not suit most of us. But too much prescription suits us even worse. And if prescription is something we want to invest in, automation is a great way of documenting in a prescribed manner. 


Saturday, February 13, 2021

Faking testing and getting caught

A lot of my work is about making sense of the testing we do, and figuring out the quality of testing. The management challenge with testing is that after they come to terms with investing in it, it isn't obvious if the investment is worth it. Surely, the teams do testing. But do they do it so that it provides the results we expect? Faking testing isn't particularly hard, and the worst part is that a lot of the existing processes encourage faking testing over testing that provides results. With an important launch, or an important group of customers we're building a business with, we don't want to be surprised by the scale or type of issues we have never even discussed before. 

I find myself in a hunt for two aspects of quality in testing: effectiveness and efficiency. 

Effectiveness is the idea that no matter what testing we do, does it give us the layers we need to not be caught red-handed with our customers? To be effective at testing, we have two main approaches:

  • Build well - bad testing won't make quality worse
  • Test well -  when issues emerge, finding them would be good
A good way to be effective at testing is to bring more eyes to the problem. Don't ask only the developer-developer pair to test; add a third person you believe will take testing to heart and prioritize it. But why settle for three people if you could have ten, or 10 000? Grow the group. From unit testing to system testing, to end-to-end testing of systems, to pilot customers, to beta customers, to real customers in production with continuous releases. That will tell you of the effectiveness of the earlier stages, and make you effective overall. 

This effectiveness consideration is the most important work I do, and sometimes - I would even say often - I catch people faking testing in this activity. Most often people don't even realize they are faking testing; they approach testing with the wrong skills and focus, not providing the result that would be reasonable to expect for the investment in that layer. 

Faking testing, no matter how unintentional, looks like we do a lot - and we do - but we get nothing out. And we notice only when there is a need for testing well, because building well alone did not give us sufficient results. The later stages - hopefully designed into the overall testing close in time to the faked testing, rather than discovering the quality of testing at delivery - reveal bugs in scales and types that we would reasonably expect to have been found if we had done the earlier stages well. 

While effectiveness is the key, after effectiveness is in place I look at efficiency. There is only so much we should have to invest in testing on the various layers; testing should increase our productivity and not slow us down, and we might be investing in a box of testing thinking it covers more boxes than it should for the return the investment gives us. 

These puzzles often fill my days, and I try to help people learn exploratory testing to get away from faking testing. They don't want to fake it; they just do what they think they should.


Friday, February 12, 2021

In Search of a Test Automation Strategy

The world of software as I know it has changed for me. I no longer join projects in preparation for a testing phase that happens at the end, but I am around from when I'm around until I am no longer around, building testing that survives when I am gone.

Back in the day of a testing phase at the end of a project, test strategy used to be the ideas you prepared in order to work through that challenging phase. It gave the tests you would do a framing, guiding design. It usually ended up written down in a test plan under a heading of approach, and it was one of the most difficult things to write in a way that was specific to what went down in that particular project.

With agile, iterations, and testing turning continuous, figuring out test strategy did not get easier. But the ideas guiding test design turned into something that was around for longer, and in use for longer. I talked about what ideas stuck with me at DEWT5 in 2015, and the same ideas guide my testing to this day. 


Since then, I'm working even more on the strategy we share, and on visualizing it to nudge it forward. The strategy in action in a new team can be dug out of the team by asking the team to visualize their testing activities. 
The strategy I set does not matter if it does not turn into action with the team. We now move versatile groups of people across different roles and interests. 

This week gave me a chance to revisit my ways on the theme of test automation strategy. I have never written one. I have read many, and I would not write any of those. But it made me stop and think about the ideas that guide my test automation design right now. These are the ideas that I brainstormed:
  • Start with the end in mind
    • Release time with minimal eyes on system. Rely on TA (test automation) on the release decision. 
    • TA keeps track of what we know so that it remains known when we change things
  • Incremental, incomplete, learning
    • Work towards flow of TA value - small streams become a significant pool over time. Moving continuously towards better matters, not starting well or perfect.
    • Something imperfect but executable is better than great ideas and aspirations. Refactor to reveal patterns.
  • Timing
    • Feedback nightly, feedback on each change. 
    • Maintain ability to run TA on every version supported for customers
  • Early agreement
    • Design automation visibility and control interfaces at epic kickoffs
  • Scope
    • For each epic (feature), add the positive case to TA. Target one. More is allowed but don't overstretch.
    • Unit and software integration tests cover cruft of functionality. TA is for system level scenarios including hardware (as it is embedded for us). 
    • Not only regression TA, also data, environments, reliability, security and performance in automation. 
    • Acceptance tests for interfacing teams monitor expected dependencies.
    • Save the data. Build on the data. But first learn to run it. 
  • People
    • Invest in skilled TA developers through learning and collaboration
    • Require developers to maintain automation for breaking changes
    • To facilitate GUI selectors, GUI devs create first test with keywords
    • Allow for a "domain testing expert" who only contributes in pull request reviews on TA
  • Practices
    • Suites and tags give two dimensions to select tests; use tags for readiness (sketched below)
    • Seek to identify reusable data sets and oracles
    • Reuse of keywords supported through reviews and refactoring time
I guess this is as close to a test automation strategy as I'm about to get. 
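To make the suites-and-tags idea above concrete, here is a minimal sketch using pytest markers as a stand-in for the tagging mechanism; the tag names (nightly, ready, wip) and the tests themselves are invented examples, not our actual setup.

```python
# Sketch: suites (directories/modules) and tags (markers) give two independent
# dimensions for selecting tests; a readiness tag keeps unfinished tests out of
# release-gating runs. Tag names and test content are invented for illustration.
import pytest

@pytest.mark.nightly   # timing: part of the nightly run
@pytest.mark.ready     # readiness: stable enough to feed the release decision
def test_positive_order_flow():
    """The one positive case added to TA when the epic (feature) closed."""
    order = {"items": 2, "status": "placed"}
    assert order["status"] == "placed"

@pytest.mark.wip       # still selectable on demand, excluded from release runs
def test_order_cancellation_flow():
    ...
```

Running something like `pytest tests/system -m "nightly and ready"` then selects by suite (the path) and by tag (the markers) at once, and `-m "not wip"` keeps unfinished tests out of a release-gating run; custom markers like these would also need registering in pytest.ini to avoid warnings.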


Thursday, February 11, 2021

Requirements Traceability

I'm all for understanding the products we build: what they do, and why we think doing those things might be relevant. The language I choose for requirements is the language of discovery. Instead of us requiring something, we discover needs and constraints, and make agreements we visualize in structures and writings to create a common understanding between all of us in our organizations and our customers. Instead of setting a requirement, I like to think of us setting a hypothesis of something being useful, and testing that. And since I like to approach it as uncertain, I can easily embrace the idea that we learn and it changes. I don't need the concept of managing change to requirements; it is already built in with incremental delivery.

Thus, there are very few things that annoy me to the brink of wanting to change jobs, but this is one of those: requirements traceability between the magical requirement and the test case.

In organizations that belong to the school of requirements, this is the heartbeat that conflicts with my heartbeat, set on releases. Releases include the capabilities at that time, an understanding of what capabilities we had, and the idea of replacing those capabilities systematically with a newer set of capabilities. Arrhythmia is an irregularity in heartbeat, and it serves as a description of how I feel in the conflict of these two worlds that I need to learn to fit together without giving up on what I have grown to see as important. 

In search of congruence, I put significant effort into doing things I consider valuable. I have never considered it valuable to take a requirement written down in a tool, create a test case written down in a tool, and link those two together. I don't write test cases - I test directly against the requirements, and creating this other document feels like wasted time. Also, I don't believe that the requirements we create are the best representation of the understanding that leads us to the information I seek to find in testing, and often following someone else's structure takes away from my ability to contribute information that adjusts that structure for the benefit of all stakeholders. 

So, requirements traceability not only wastes my time in creating material I consider useless, but it also makes it harder to create the results I expect to create. Over my career, I have needed to set many organizations straight onto a good path of testing that provides results - organizations that started off with a requirements-centric straitjacket, creating testers I would recommend letting go. 

So I push through once more with what I will do:

  • Given a requirement, I will confirm it in an exploratory testing session, but only document that with the closing of the epic, at the point in time we first introduce it into a release in the making
  • I will work to include it in test automation, and keep a version of test automation around that matches that release while it is supported. I will not offer a link back to requirements from specific automation cases. 
  • When using a feature is hard to figure out, I will write feature-specific instructions to document what I have learned while testing
  • I will create whatever material supports continuous, growing testing, without linking it to requirements.
  • I will care not only for my own work now, but for the work that comes after I am long gone
I recognize we are seeking these benefits from the mechanism the industry standards push:

  • managing expectations on what the product does and does not do
  • enabling support when product is in maintenance by
    • recognizing defects (as being against specification) from adaptation requests
    • retesting connected requirements when changes are needed
  • ensuring we don't miss someone's perspective in delivery by making 'proof' of testing on all perspectives - connecting two ends of a process allows for being systematic
  • making testing accountable for monitoring delivery scope on empirical level
  • having an up-to-date description of configuration of each delivered version
  • replacing old product with new could be easier
  • tracking project against requirements to deliverables giving sense of control to project manager 



Monday, January 25, 2021

Having Testers Makes Quality Worse

I'm a professional tester. I have been one for a quarter of a century. I know from my results and my feedback that I have been useful for my organizations. But the longer I've worked in these types of roles, the more I understand that while a tester may be a solution, a tester may also be a problem in a people system. And whether we, testers, become problems has very little to do with us, and a lot to do with the other people around us. 

Today, I want to dig a little deeper into an idea that I have been sitting on for most of the last year. 

First of all, let's start with what my career of a quarter of a century is really about. While I am a professional tester, a significant part of my professional skill is knowing when to refuse to do testing alone, and only do it while pairing to grow the people around me. Another significant part is to walk away from teams that grow better when they have no testers, and join teams that are in places where they can grow with a fully contributing tester. My career is about *testing*, not *being a tester*, and that difference is a significant one. I believe, paraphrasing Elizabeth Hendrickson, that testing is too important to be left to just testers. It belongs to everyone in software development. Sometimes it is useful to allow space for people to concentrate on it, and sometimes the people who get to concentrate on it are called testers. 

Growing the Understanding Over Multiple Jobs 

With this blog post, I want to look back over the last three jobs I have held. They all have one thing in common: I work in product development, with lovely, brilliant but imperfect software developers who have been soaked in a culture describing what the organization expects of them with regards to testing. Yet the situations in the teams I have worked in could not be much more different.

The first position of the three was one where I was hired as the first tester the organization had. They had 20+ years of software development experience and a very successful software business, and they had managed to do this without a tester. They were seeking one as they had identified, based on internal product management feedback, that there could be enough work for someone, so that product management did not have to do all of their testing. Also, they had a metric of big visible error screens per login, which hinted they really needed someone to tell them how to reproduce problems customers were seeing but not complaining about. 

For the years I spent with the organization, I could see they needed different things from me at different times. I was a hands-on tester for them for the first year, reporting all those cases the customers didn't complain about but were experiencing, and the developers fixed them. They needed me to catch up with the understanding of what feedback they had been missing. But as they learned what they had not known, they learned to know it, and through pairing (and stealth pairing), the developers built rapport with their own testing discipline. Finally, we got to a place where I did my best to avoid reporting bugs and instead worked with developers to improve their understanding of how problems emerged. In the end, as I was leaving, they did not need me - they needed a voodoo doll of me they could look at and go "you'd want me to test this, you'd want me to try different values, you'd want me to try more times". And while I had been the person who holds space for good work to happen, it was evident it was my time to see other teams. Since it was always one of me and 20 of those lovely developers, we worked out the idea of not expecting me to do all the testing from the start. 

The second position was one that I rejoined after being away for a decade. While I had been away, the team had found themselves in a position where there was a good test automation discipline, and they had hired people they vaguely called testers to write system-level automation that was a full-time job to keep maintained and moving forward. I found myself building bridges from the test ideas I had from exploring to the test automation, and building a culture of fix-and-forget with zero bugs on backlogs. The developers were soaking in every piece of feedback, and fixing things so effortlessly that in hindsight I feel like there weren't problems to address. The culture of managers staying out of the way and senior colleagues wanting to make space for great quality, combined with documenting with test automation and extending every test idea over different environments, could have been run with developers alone. The challenge I mostly addressed was one of communication - with our code being part of the system, understanding where which testing would take place wasn't always easy or straightforward. 

The third position is the one I hold now. I joined one team first, only to learn that while they could use me for a while, their long term would be better if I walked away and left them to try out their lessons without a tester. I joined another team a few weeks ago, and see a similar pattern of needing full-time people tending the test automation system, even when the whole team pitches in. My work, again, appears to be about addressing communication, and understanding where the testing of various pieces and perspectives resides. I test to learn, to understand, and to distribute my lessons. I now have multiple teams without testers, and some of those do brilliantly. The pattern emerging seems to be one of ownership, and that isn't always an easy one with systems of scale in a multi-team setting. 

Was I getting somewhere?

I've been hired by organizations as a tester, and by choosing a pattern of operating with the idea that testing belongs to us all, I've come to be appreciated for results. I've been a tester, a test manager, and a test coach depending on what the day needs me to be, and promptly addressed the risks of assuming tester = testing. I've seen most developers do well with testing, and I've seen many developers do better with testing than professional testers who focus on testing. I've seen how having a tester, and making space for a tester to do testing, makes quality worse. But I have also seen testers who truly bring added value, in understanding what we are building, understanding the customer and the domain, and thinking in terms of systems over components. And increasingly, I've seen tester/developers who specialize in programming test systems be critical in sharing a good baseline practice with their teams, and find new labels like SDET, eventually dropping the T while still continuing on the test side of systems. 

There is no one clear recipe for testers and developers. As a recruiting manager, you need to understand where your team is now, and where a tester could take them. But it always comes with a risk: having testers makes quality worse if developers let go of testing they are doing now. 



Sunday, January 3, 2021

Contemporary Exploratory Testing

We all know what testing looks like: it's when we hunt information, using the application, its interfaces, and all existing information as our external imagination to go ever deeper in empirical understanding of what is true and what is an illusion. It involves a person or multiple people, using programming to get to places in time, but it is all framed in this quest of someone learning what more we could try knowing. 

Yet when organizations set up "testing", they ask for resources, pay limited attention to skills, focus on plans, covering requirements, writing and executing test cases, and cutting down on testing when project schedules fall short. 

When we say "all testing is exploratory", we have the individual tester on their good day in mind. I call this exploratory testing, the verb. The act of testing is inherently exploratory when people are involved. No matter how strict a test case you gave, how much you told not to leave that path described, the human mind wonders and the human fingers make mistakes revealing more than the test intended. 

The organizational frame however makes a huge difference on how that inherently exploratory testing takes place, and how much of its wings are cut. I call this exploratory testing, the noun. Organizations often set up frames for testing that are far from exploratory, and get results that are far from what we would mean by results of exploratory testing. 

Exploratory testing (verb AND noun) is the focus of my learning and teaching. I want to create excellent products with excellent testing, and I feel that the 36-year-old coining of the term needs a revisit from its watered-down interpretations. Thus I sometimes add still one more word to explain what I am aiming for: contemporary exploratory testing.

Contemporary is about today. I use that word to get away from the ideas of ISTQB and agile testing where exploratory testing was considered a technique, a thing you do for unknown unknowns on top of all the other testing recipes. For me, for the last 25 years of my career with it, it has been an approach, a foundation of agency, learning and systems thinking that frames my choices of using all the other recipes. And I believe that is what it is at its roots, so calling my approach contemporary is saying I want to take a step away from some of the popular notions of it being the idea of just spending time with an application to find bugs.

With this new year, I welcome you to join my exploration of how to understand and teach contemporary exploratory testing. I have many subprojects on it:

  • Exploratory Testing Podcast. I just published my first episode yesterday, and will continue to do so on a monthly cadence. 
  • Exploratory Testing Academy. I work with  Parveen Khan (UK), Angela Riggs (US), Irja Straus (Croatia) and Mirja Pyhäjärvi (Finland) to create free testing video courses and a series of paid facilitated courses with focus on learning by doing testing under various constraints for various systems under test. 
  • Exploratory Testing Book. I am finishing the book I started so that you don't have to read all the individual articles, and to get a timeline of where I am now compared to what I wrote earlier.
  • Exploratory Testing Slack. I want to bring together people that are figuring out contemporary exploratory testing. I take it more as a forum of practitioners than a forum of consultants. 
  • Exploratory Testing Twitter account. I use this for promotions, because the one piece of advice I was given on marketing (collect people's emails) is the one piece of advice I don't want to use. I want pull over push, even if it is against all things marketing. 
I have a full-time job at a company as a tester. My job pays well, better than the average developer's, and I have my company-specific goals there. I do this all because I feel the one thing I should do better is scale. I make my work available for free to support scale. I have things I could use extra money on, but I leave the payment part for the community, based on value. You can pay (but don't have to) for my book in progress. You can pay by buying me coffee. Since I am Finnish, you can never donate me money - only pay for my services. But I want that to be optional, and work against paywalls in my own little way. 

If out of this I create more great testers, more testing trainers, more love of testing in both professional testers and programmers - my aspirations are fulfilled.