Friday, May 7, 2021

Let down by the Realworld app

This week, I finally made space in my calendar to pair up with Hamid Riaz on starting to create a new exploratory testing course centered around yet another test target: the Realworld App. Drawn in by the promise of the "mother of all demo apps" and the lovely appearance of a programmer community that had contributed example frontends and backends for the demo app in so many technology stacks, I had high expectations. I wanted to create a course that would teach the parts of exploratory testing that are hard and that we only get to in good agile teams that have their quality game in shape.

Instead, I was reminded again that demo apps without an active agile team, no matter how good the original developer may have been, never get to the point of having their quality game in shape. 

The first example implementation I picked up from the list was an end-to-end testing demo setup, only to learn it had been years since it was last updated (sigh), and that the most basic of the instructions on how to get it running relied half on docker (good), half on the local OS - and the local OS was expected to be anything but what I had: Windows. While I have worked my way through setting up all too many projects that expect Mac/Linux to build on my work Windows machine, I did not feel up for the task today. So, next option. 

Next I picked up a Node.js implementation - no dockerizing involved, but I could add that to make it more isolated for anyone testing after me on the course. At this point Hamid joined me. 

Without too much effort, we got through installing Mongo and all the dependencies needed. We installed Postman, imported the tests and eventually also the environment variables provided, and ran the tests - only to note that some of the functionality that was supposed to be there was no longer there. The years between the last changes and the latest version of MongoDB seemed to do the trick of making the application fail, and we were not told which version the code expected. 
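
Had we dockerized this setup ourselves, pinning the database version would have avoided exactly this failure. A minimal docker-compose sketch of what I have in mind - note that the Mongo version, the port, and the environment variable names here are my assumptions for illustration, not read from the repository:

    # docker-compose.yml - a sketch, assuming a Node.js app with a Dockerfile
    version: "3.8"
    services:
      mongo:
        # pin the MongoDB version the code was written against (assumed here)
        image: mongo:3.6
        ports:
          - "27017:27017"
      api:
        # build the Node.js app from a local Dockerfile (assumed to exist)
        build: .
        environment:
          # connection string variable name is an assumption - check the app's config
          - MONGODB_URI=mongodb://mongo:27017/conduit
        ports:
          - "3000:3000"
        depends_on:
          - mongo

With something like this in place, anyone testing after me on the course would run against the same database version I did, instead of whatever the latest MongoDB happens to be.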

After the pairing, I summed it up on Twitter: 

I should have known then. Software that is not maintained is not a worthwhile target for realworld testing. 

When office work the next day added to the inspiration - a reminder that results of testing don't stay valid for long even when you don't change anything, let alone when you do - I concluded: 

My search still continues for a worthwhile test target for teaching testing - one where testing does not succeed merely through the sloppiness of the org that created the target. 

 

Thursday, May 6, 2021

Pink Tax on Access to Agile Heroes

It is a year of celebration, 20 years since the Agile Manifesto. We see people coming together to discuss and present the time and events leading up to it, reflect on the time and events after it, and aspire to futures that allow for better than we ever had. 

Today, one of those events popped up on my Twitter timeline. 

I was cautiously excited about the idea of hearing from some of the early agile heroes who were around but not at Snowbird. Until I clicked on the link and realized that access to my heroes, so rarely available, is a paid event - while I have heard the perspective of the others, those at Snowbird, amplified in large-scale free online events almost a little too much this year. 

I have two problems with this from *Agile Alliance* in particular. First of all, by paywalling this early group, they limit the public's access to them - and their access was already limited by not being at Snowbird. 

Second, asking this money from people like myself who really want to hear from their agile heroes is a form of pink tax. The pink tax refers to the broad tendency for products marketed specifically toward women to be more expensive than those marketed toward men.
You know, the idea that things that are for women make good business by being more expensive. Because women are not a minority - we are half of the world - and we want things that are not made for the default of people: men. And I do deeply crave to hear that people like me, my heroes, were around when I was around, even if they are missing from the visible history. 

Being a woman, you get to pay more for physical items with the excuse of production costs. You get to pay more for virtual items in games.

Agile Alliance could do better at promoting access to the early heroes who did not make it to Snowbird. 


Please note: I don't say that the people speaking are *women*. I say I am a woman. I did not check how the people on that list identify. I know only some of them. 

Thursday, April 29, 2021

I don't trust your testing and how to change that

Over the years, I have been senior to a lot of testers. I've been in the position to refuse to give them work. I have been in the position to select what their primary focus in teams becomes. I've been in a position to influence that they no longer work on the same thing. 

Being in a position of this kind of influence does not mean you have to be their manager. It means you need to have the ear of the managers. So from that position, inspired by the conversations we had in the Exploratory Testing Academy Ask Anyone Anything session, I am going to give you advice on how to work to be appreciated by someone like me. (Note: it means NOTHING really, but it makes a thought exercise.)

Let's face it: not all testers are that great. We all like to think we are, yet we don't spend enough time thinking about what makes us great. 

So let's assume I don't trust your testing and you want to change that - a tongue-in-cheek guide on how to. 

1. Talk about your results

I know I can search for your bugs in Jira, and no matter how much folks like me explain that not everything is there, in reality a lot of it is there in organizations that believe work does not exist without a card in Jira. Don't make me go and search. 

Talk about your results:

  • the information you have found and shared with the developers ("bugs")
  • the information you have found and could teach others ("lesson learned", "problems solved")
  • the changes in others you see your questions are creating
  • the automation you wrote to test, the automation you rewrote to test, the automation you left behind for regression purposes
  • the coverage you created without finding problems
Small streams, continuously, consistently. 

2. Create documentation in a light-weight manner

You can lose my trust with both too much and too little. Create enough, and especially, create the right kind. Some of it is for you, some of it is for your team. 

If you like tracking your work with detailed test cases, I don't mind. But if creating test cases does not intertwine with testing, and if it keeps your hands off the application more than on it, I will grow suspicious. Results matter - make your results tell the story; what you create for yourself needs to make sense only to you. 

If others like using your detailed test cases, I will also make you responsible for their results. I have watched good testers go bad with other people's test cases. So if you need detail and make detail available for others, I expect you to care that their results don't suffer. Most likely I'd be looking out for how you promote your documentation to others, and be inclined towards suspicion. 

If you don't create documentation, it is just as bad. The scout rule says you need to leave things easier for those who come after you, and I expect you to provide enough for that. 

3. Don't get caught being forgetful

A good tester needs to track a lot of information and tasks on many different levels. If you keep forgetting things and you don't have your own system for recalling them, this raises a concern. 

I don't want to be the one reminding you to do your work. Once is fine; if I'm reminding you repeatedly, you should recognize you have something to work on. Like prioritizing your work to manageable amounts. Or offloading things into notes that help *you*. You are responsible for your system, and if you are not, trust will soon fade away. 

If you create a list of things to test, test them. If you don't test them, tell about it. And never ever tell me that you did not know how to, did not ask for help, and let bugs slip just because you didn't work your way through a rough patch. 

4. Do your own QA of testing

After you have tested, follow through on what others find. Don't beat yourself up, but show you make an effort to learn and reflect. Have someone else test with you or after you. Watch later what the customers report. Make an effort to know.

5. Co-create with me or pretty much anyone

Like most people, I have a self-inflated sense of self and I like the work I have done. Ask me to do the work with you - co-create idea lists, plans, priorities, whatnot - and inserting a bit of me into what really is yours will most likely make me like what you do a lot. 

Pro tip: this works on actual managers too. And probably on people who are not me. You also might gain something by tapping into the people around you, and just seeing you draw on others makes you appear active - because you are! 

6. Mind the cost

Your days at work cost money. Make sure what you cost and what you deliver are balanced. Manage the cost - you make choices on how much time you give to a particular thing. Be thoughtful and focus on risks. 

Time on something isn't just the direct cost of that time. It is also time away from something else. In the time I spent writing this article, I could have:
  • Had a fruitless argument over terminology with someone on the internet
  • Set up a weather API and tried mocking it
  • Read the BDD Books' Formulation that I have been planning on getting to all week
  • Played a fun game of Don't Starve Together
  • ....
We make choices to exclude things from our lives all the time. And I will have my perspective on *your* choices of cost vs. results. 


All in all, I think we are all on a learning journey, and not alone. Make it show and allow yourself to improve and grow. 



Sunday, April 25, 2021

The English Native Speakers Are Missing Out

I read an interview this week. Already a few years old, the interview focused on asking Mervi Hyvönen, then about to retire from Kela, the Finnish Social Insurance Institution, about her career. 

Mervi started working with testing in 1973. The first software she tested used punch cards as the input mechanism for both the program and the data. Before retiring, she had tested software in the same organization across five different decades.

The really interesting piece in the article was her description of their approach to testing: Kela has long been known to hold space for great exploratory testing. She describes how they were doing exploratory testing before the first messages from across the seas reached Finland and gave it a new name. 

Those of us aware of the history of exploratory testing may remember the term was coined by Cem Kaner in 1984 and mentioned in the first edition of his Testing Computer Software in 1988. He did not invent it, he coined it - he saw others in addition to his own organizations doing it, and he gave it a name. It went down in testing history as the way product companies in Silicon Valley tested. It was also the way the pioneering companies in Finland tested. And I am pretty sure Finland is not the only corner of the world erased from history in this case.

When conferences seek great speakers, non-native English speakers are often secondary. 

When we read research, the shared research language is English. And we know that a lot of scientific work of the past was written in German, and that the Russian scientific community still publishes heavily in their native language. 

When I chose to start speaking, and later blogging, I started first with Finnish. Finns learn better when the material is in their own language. But authoring in two languages was taking a toll on my content, and I later shifted. 

When the Agile Manifesto came about, I already had a few years of researching "lightweight software development methods" behind me. 

When exploratory testing as a term started to show up in Finland, I had already been doing it. 

The appearance of information in the world is very English-language-centric, and it creates a significant bias. 

I don't want Mervi, the pioneer of exploratory testing in Finland, to be forgotten. Her organization grew to hundreds of testers, and those testers took testing as they knew it forward to other organizations. Local companies creating professionals who moved around made testing here what it is today - not reading a few articles in English, but applying the ideas in Finnish. 

You all just don't know about it as long as the only language you read is English. As long as the only conversations you have are in English. And even when we don't know something exists, it very well might. 




Friday, April 23, 2021

Learning without Impact on Results

As I talk about exploratory testing, I find myself intertwining three words: design, execution, learning. Recently I have come to realize that I am often missing a fourth one: results. I take results for granted.

When I frame testing, I often talk about the idea that learning in a way that gives us a 1% improvement would mean we are able to do what we did before while investing 4 minutes less of a working day for the same results. There is no learning without impact on results. 

Except there is. I recently talked to a fellow tester who asked me a question that revealed more about my thoughts than I had realized:

Aren't we all learning all the time?

This made me think of a past colleague who told me that the reason I find it hard to explain what exploratory testing is, is that the way I learn as an exploratory tester happens in a different dimension than our usual words entail. 

So let's try an example - watching a tester at work. 

They were testing a feature for many weeks. The task in Jira was open, collecting the little dots tasks collect when they don't move forward in the process. The daily reports included mentions of obstacles, testing, preparing for testing, figuring things out. The feature wasn't a little one, and it had relevant work ongoing on the developer side, showing up as messages on a small conversation channel asking about unblocking dependent tasks and figuring out troubles. The tester was on the channel, but never visible. 

Finally, the work came to a conclusion, with the task still open for finalizing test automation. Then a pull request emerged. 20 lines of code. Produced by writing 2000 lines of code - or rather, 200 lines of code written 10 times over. Distilled learning, tightly packaged. 

Reflecting on the investment, the company got four results for quite a number of hours of work: 

  • One fixed bug
  • One regression test
  • A tested feature
  • A tester with 'learning'
Yet I would find myself dissatisfied with the result. The work had the appearance of design, execution and learning intertwined, used automation to explore and to document, and could be a model example of what I mean by contemporary exploratory testing. What I missed out on was the balance between learning making the tester better, and learning making the product and organization better. 

The visible external results were one fixed bug and one regression test, in an environment that appears to force communication into the visible. No tester-initiated early discussions leading to aha effects improving the product before it was coded. Limited reports on suspicious behaviors. Suspicious coverage of a tested feature no one could analyze or add to. And eventually, late discoveries of issues in the area by the people coming after, making the point that remaining alone was a wrong choice.

Sometimes when we think of 'learning', we overemphasize knowledge acquisition over the actionable results of the learning. It is as if our work as testers were to know it all, but do nothing else with it than know it. We collect information in ourselves, and lock it up. And that is not sufficient. 

Instead, the actionable results matter. The learning in exploratory testing is supposed to improve the information we provide and our colleagues, who also learn through our distilled learning. 


Saturday, April 17, 2021

Start with the Requirements

There's a whole lot of personal frustration in watching a group of testers take requirements, take designs of how the software is to be built, design test cases, automate some of them, execute others manually, take 10x more time than I think they should, AND leak relevant problems to the next stages in a way that makes you question if they were ever there in the process. 

Your test cases don't give me comfort when your results say that the work that really matters - information on the quality of the product - is lacking. Your results include the neatly written steps that describe exactly what you did, so that someone else with basic information about the product can take them up next time for purposes of regression. Yet that does not comfort me. And since the existence of those tests makes the next rounds of testing even less effective for results, you are only passing forward things that make us worse, not better.

Over the years, I have joined organizations like this. I've cried over the lack of understanding in the organization of what good testing looks like, and wondered if I would be able to rescue the lovely colleagues doing bad work just because they think that is what is asked of them.

Exploratory testing is not a technique that you apply on top of this to fix it by doing "sessions". It is the mindset you intertwine in, to turn the mechanistic flow of requirements to designs to test cases to executions, manual and automated, into a process of thinking, learning and RESULTS. 

Sometimes the organization I land in is overstructured and underperforming. If the company is successful, usually it is only testing that is overstructured and underperforming, and we just don't notice bad testing when great development exists. 

Sometimes the organization I land in has really bad quality, and testing is "important" as a way of patching it. Exploratory testing may be a cheaper way of patching. 

Sometimes the organization I land in has excellent quality and exploratory testing is what it should be - finding things that require thinking, connections and insight. 

It still comes down to costs and results, not labels and the right use of words. And it always starts with the requirements. 



Friday, April 16, 2021

The dynamics of quadrant models

If you sat through a set of business management classes, chances are you've heard your teacher joke about quadrants being *the* model. I didn't get the joke back then, but it is kind of hilarious now how humankind puts everything into two dimensions and works on everything based on that. 

The quadrants popularized by Gartner in market research are famous enough to hold the title "magic" and have their own Wikipedia page. 

Some of my favorite ways to talk about management, like Kim Scott's Radical Candor, are founded on a quadrant model. 

The field of testing - Agile Testing in particular - has created its own popular quadrant model, the Agile Testing Quadrants.  

Here's the thing: I believe the agile testing quadrants are a particularly bad model. It's a model created by great people, but it gives me grief:

  • It does not fit into either of the two canonical quadrant model types of "move up and right" or "balance". 
  • It misplaces exploratory testing in a significant way
Actually, it is that simple. 

The Canonical Quadrant Models

It's like DIY for quadrants. All you need is two dimensions that you want to illustrate. For each dimension, you need some descriptive labels for the opposite ends, like hot - cold and technology - business. See what I did there? I just placed technology and business at opposing ends in a world where they are merging for those who are successful. Selecting the dimensions is a work of art - how can we drive a division into two that would be helpful, and for quadrants, in two dimensions? 


The rest is history. Now we can categorize everything into a neat box. Which takes us to our second level of canonical quadrant models - what they communicate.

You get to choose between two things your model communicates. It's either Move Up and Right or Balance.

Move Up and Right is the canonical model of the Magic Quadrants, as well as Kim Scott's Radical Candor. Who would not want to be high on Completeness of Vision and Ability to Execute when it comes to positioning your product in a market, or move towards Challenging Directly while Caring Personally to apply Radical Candor? Move Up and Right is the canonical format that sets a path on the two most important dimensions. 

Move Up and Right says you want to be in Q3. It communicates that you move right and you move up - a proper aspirational message. 

The second canonical model for quadrants is Balance. This format communicates a balanced classification. For the quadrants, each area is of the same size and importance. Forgetting one, or focusing too much on another, would be BAD(tm).


Each area would have things that are different in the two dimensions of choice. But even when they are different, the Balance is what matters. 

Fixing Agile Testing Quadrants

We discussed earlier that I have two problems with the agile testing quadrants. It isn't a real model of balance, and it misrepresents exploratory testing. What would fixing it look like, then?

To support your imagination, I made the corrections on top of the model itself. 

First, the corner clouds must go. Calling Q1 things like unit tests "automated" when they are handcrafted pieces of art is an abomination. They document the developer's intent, and there is no way a computer can pull out the developer's intent. Calling Q3 things "manual" is equally off in a world where we look at what exactly our users are doing with automated data collection in production. Calling Q4 things "tools" is equally off, as automated performance benchmarking and security monitoring are everyday activities. That leaves Q2, which was muddled with a mix already. Let's just drop the clouds and get our feet on the ground. 

Second, let's place exploratory testing where it always has been. Either it's not in the picture (like in most organizations calling themselves Agile these days), or if it is in the picture, it is right in the middle of it all. It's an approach that drives how we design and execute tests in a learning loop. 

That still leaves the problem of Balance, which comes down to choosing the dimensions. Do these dimensions create a balance to guide our strategies? I would suggest not. 

I leave the better dimensions as a challenge for you, dear reader. What two dimensions would bring "magic" to transforming your organization's testing efforts?