Saturday, May 29, 2021

Scale - Teaching Exploratory Testing

At work, I hold three roles within my position as a principal test engineer: a tester, a test project facilitator and a test competence facilitator. I have 37.5 hours a week to invest in all three, and while my productivity has soared compared to the early days of my career, the days feel just as short now as they ever did. 

Some of my hobbies resemble my work, but they reach outside the organization: speaking, writing, organizing, teaching, reading. I deliver an occasional paid training and paid keynote. I write my book. I try to record my podcast with limited success. I structure my thoughts into talks and articles.

At this point of my life and career, I chose scale as my theme of the year. Scale of the impact I make at the work I care for. Scale of teaching forward what I have learned. Enabling others at scale. 

With close to 450 sessions delivered for audiences outside my work, more people know me than I recognize. It is all too common that someone tells me when we first met while I can place them only from the pairing we did in the last month. It's not that I forget people; it's that I never came to remember everyone in the first place. 

As I reflect on my goal of scale, I come to an idea of what scale would look like for my teaching. It would look like me teaching a network of teachers who teach forward. I have already taken some steps towards this: all my course material is licensed Creative Commons Attribution, allowing you all to use it for your business, even to make a business of your own. 

I make time to teach free courses every now and then, as during the Exploratory Testing Week. I make time to teach commercial courses every now and then as my side job, but my availability is limited, as I love my day job and the assignment that lets me have a say in transforming quality and testing. 

I need other people, willing to teach, to whom I would teach my exercises and materials, and who would adjust them to what they want to take forward. I have a lot of theory and example material as slides. As with Exploratory Testing Foundations, I make them available at Exploratory Testing Academy. But I also have a lot of experiential exercises, where facilitating and framing the exercise is where the value for learning is best created. 

Would you want to learn to teach experiential testing exercises? 

I envision a series of sessions where we would first experience the exercise as participants, and then turn the roles around: first looking at what facilitating such an exercise means, and then practicing facilitation while I support, watch and give feedback afterwards. I haven't done this yet, so we could discover together what works. 

You could learn to teach different testing experiences with different applications. I use:

  • a Textbox
  • E-Primer
  • Weather App
  • Gilded Rose
  • Zippopotamus
  • Dark Function Editor
  • Freemind
  • Xmind
  • Conduit
  • ApprovalTests
  • RobotFramework
I also have exercises on understanding your tester personality, agile incremental test planning, test strategy, test retrospectives for releases and features, business value, and many, many more I would be happy to pass on. 

Interested? Let me know. You might also tell me what you'd like to start from, because the exercises I have created since 2001, when I started teaching on the side, would fill a few years to go through alongside a regular job. For prioritizing my time, I ask you to consider my goal - scale. Could you help me with that? If your answer is yes, I'm going to trust you and dedicate some time to help you learn this. Send me an email: maaret(at)

In case it isn't already clear, I am not looking to invoice anyone for teaching them. I will volunteer my time for free within the constraints of what I can make available. I want more people in the world to experience experiential learning, and I want to make an impact myself. 

Friday, May 28, 2021

A New Experiential Exploratory Testing Exercise on focusing feedback on your own application

Over the years of first learning testing and then teaching testing to strengthen what I have learned, I have come to appreciate an experiential style of learning. Instead of lecturing on how testing is like detective work, where we are supposed to find all the information others around us are missing about quality, I prefer that we do testing to learn this.

For this purpose, I have created many starting points for us to experience testing together. I find that the selection of the application we test guides the lessons we are able to pull out. Every application has something in common with the others, but each also forces thinking that is different in some relevant way. Today I wanted to talk about one of these experiential exercises, to give you an example of a favorite. 

Then again, what makes a favorite? The exercises I have added most recently get my attention. This week I added an exercise on targeting our testing at our own application when there's our part and a 3rd party API. 

I tested solo on Saturday. I paired with Irja Straus from Croatia on Monday. And I paired with Hamid Riad from Sweden on Wednesday. And as always with new targets, while I had a rough idea of what I would want to teach, hands-on exploratory testing taught me things that surprised me. 

When I tested solo, I set up the application. As with most open source applications, the setup wasn't straightforward, and before I could run it on my machine with a good conscience, I had to update many JavaScript dependencies to get to a comfortable state without critical security warnings. Similarly, I set up the mocks I needed for the exercise and solved issues around CORS. CORS is a browser security mechanism that is supposed to protect us a bit from bad implementations, and it had apparently been introduced after the application was created. 

What I thought would be a solo exploratory testing session soon turned into a session of updating documentation, fixing dependencies and fixing the project in general just to get it running. I finished my tests by playing barely enough to know the API had some intelligence and was not just a REST frontend to a database. An interesting 3rd party API bodes well for my exercise on moving the exploratory tester's focus *away* from the interesting API and into our own application. 

When I paired with Irja, we got to the focus on our application very easily. From a simple demo, we moved to using the mocks to test, and really left the API alone for most of the session, except for some comparison results and a deep look into the promises of functionality the API specification had to offer. We learned about value dependencies, what worked and what didn't. We ended up reading code to learn what 'works as implemented' means for this application - including a call to the API with a typo, and application code that reimplements logic the API would encapsulate. Starting from knowing nothing about the application, I learned what data we would need to take control of as we explored, using the API to force our front-end to show us functionality that matched the data. We found domain concepts that weren't implemented, problems with what was implemented, and reported our discoveries. 

When I paired with Hamid, we first fell for the lure of the interesting API, playing with values from the user interface and testing the 3rd party API more than our own application. Even though I first thought that was not what I wanted out of this exploratory testing exercise, I soon learned it was exactly what I wanted. With each of us building on the ideas of the other, we learned how the API deals with mistaken inputs and what input validation our application should have. The time we spent on the 3rd party API paid back when we then turned our focus to our application. While the previous session gave me experience with the API specification and set my expectations on it, this session found interesting but different problems. We also fixed some bugs while at it. 

I’m planning to test the exercise as I now envision it at the Socrates UK Digital open space conference next week. This time I would run it as an ensemble testing exercise, see where the group goes with the constraint and the facilitation, and, as at the end of every experiential session, pull out the lessons the group can share. 

I have other exercises that teach other things. I spent two full training days this week facilitating people through various exercises as I taught my exploratory testing work course. I’m a big fan of experiential learning, and of the idea that in general we learn testing by doing testing. We can inspire and motivate, make sense and clarify models with the material around the exercises. Yet discovering your own lessons - my intended core lessons and some surprises - is immensely valuable. 

It’s the kind of teaching I want to figure out how to scale. I can scale talks on theory and experiences. But I have yet to find out how to scale experiential teaching of the lessons I have created.

Sunday, May 16, 2021

Testers Need to Read Code

Moving from organization to organization, and team to team, I find that there is always a communication gap between testers and developers. The gap does not mean a lack of respect, or not getting along. It means that there is information one party holds that the other could use but doesn't get. 

Year after year, I have watched organizations advise *developers* to write guidance for testers into Jira tickets, to link documentation and to provide demos as part of the tiny handoffs. I have watched the community of testers advise *testers* to ask for that documentation and those demos, and to ask questions. 

Yet I find myself looking at a gaping hole in a lot of teams and organizations around the role of *code* in these handoffs. 

Cutting to the chase: I believe all testers need to start reading code.

And when I say all testers, I don't say this lightly. I have worked extensively with business testers doing acceptance testing from the point of view of domain expertise. I have worked extensively with career testers working side by side with developers. I have worked extensively with testers specializing in the creation of test systems. And I've heard this from them all: time is limited, and we need to make choices about where we spend it. 

First, let's talk about code: reading and writing of it. 

Learning Programming Through Osmosis (talk, 2016)

We keep forgetting that none of the programmers we have in teams knows everything about programming. They usually know enough to make progress by googling the things they need to complete the task at hand. And they know more tomorrow than they did today - every task we complete grows us a little more. 

Thus when I call for *all testers* to *read code*, I don't assume fluency from day one. I don't assume we all see the same things. But starting on that journey of reading creates skills we need to do the tester work better. 

Think of it as if you were peeling layers:

  • Start with CHANGE in mind - change matters for prioritizing the testing you do
    • Read commit messages and pay attention to the component that was changed. 
    • Read contents for size, to start building your own idea of the relationship between size, content and impact on testing
    • Read names of the things that change, instead of the details that change, to target your testing to concepts
    • Ask some questions about what you see is changing
    • Combine this new source of information with your hands on experience testing the versions with the changes and build a model that supports your prioritization
  • Deepen with IMPORTANCE in mind - know which parts of implementation matter
    • Read folder structures to understand the models built in code
    • Read to notice frequent change over stable function
    • Read to notice many contributors over one
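The CHANGE and IMPORTANCE layers above map onto everyday read-only git commands. Here is a minimal sketch of that mapping; the repository, file names and commit messages below are invented for illustration, and on a real project you would run only the reading commands against your team's actual repo.

```shell
# Build a tiny throwaway repo so the reading commands have something to show.
# Everything below the history setup is pure reading - no changes to the code.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "reader@example.com"
git config user.name "Reader"

# Invented history standing in for a real project's commits
mkdir -p src/login
echo "validate(user)"    > src/login/validate.txt
git add . && git commit -qm "login: add input validation"
echo "validate(session)" > src/login/validate.txt
git add . && git commit -qm "login: validation needs session context"

# Layer 1, CHANGE: read commit messages and the components touched
git log --oneline
# Read contents for size: files changed, lines added and removed
git show --stat HEAD
# Read names of the things that change, instead of the diffs themselves
git show --name-only --format= HEAD

# Layer 2, IMPORTANCE: notice frequent change (commits per file)...
git log --format= --name-only | grep -v '^$' | sort | uniq -c | sort -rn
# ...and many contributors over one
git shortlog -sn HEAD
```

None of these commands require writing a line of code; they only read the history, which is the point of starting with reading.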
Reading code does not mean reviewing code. You may eventually want to review code and leave your "approved" or "needs work" notes, but it is by no means a starting requirement. 

The starting requirement is that instead of trusting secondary, agreed sources for information about what you have at hand for testing, you use code as a source of information. Mostly, things don't change without the code changing *somewhere in the ecosystem*, and seeing the change helps you target testing.

It is not sufficient to test by discovering all the promises the software fulfills and mechanically going through them again and again, inspired by what the Jira ticket requests as a change. Reading code gives you an added layer. 

It's a three layer prioritization problem:
  • understand width of what it does for all types of stakeholders
  • understand the business to narrow it down to important 
  • understand the tech to localize the search
Like with learning to read and write, we start with reading first. Some of us never learn to write that best-selling book. 

What I ask for is not a big thing: you could try 15 minutes a day, starting by asking where to go to find this information. 

Friday, May 7, 2021

Let down by the Realworld app

This week, I finally got to making space in my calendar to pair up with Hamid Riaz on starting to create a new exploratory testing course centered around yet another test target: the Realworld App. Drawn in by the promise of the "mother of all demo apps" and the lovely appearance of a programmer community that had contributed example frontends and backends in so many sets of technology, I had high expectations. I wanted to create a course that would teach the parts of exploratory testing that are hard, the parts we only get to in good agile teams that have their quality game in shape.

Instead, I was reminded again that demo apps without the active agile team, no matter how good the original developer may have been, never get to the point of having their quality game in shape. 

The first example implementation I picked up from the list was an end-to-end testing demo setup, only to learn it had been years since it was last updated (sigh), and the most basic instructions on how to get it running relied on half docker (good), half local OS, where the local OS was expected to be anything but what I had - Windows. While I have worked my way through setting up all too many projects that expect Mac/Linux to build on my Windows work machine, I did not feel up for the task today. So, next option. 

Next I picked up a Node.js implementation, no dockerizing involved, but I could add that to make it more isolated for anyone testing after me on the course. At this point Hamid joined me. 

Without too much effort, we got through installing MongoDB and all the dependencies needed. We installed Postman, imported the tests and eventually also the environment variables provided, and ran the tests, only to note that some of the functionality that was supposed to be there was no longer there - the years between the last changes and the latest MongoDB seemed to do the trick of making the application fail, and nothing told us which version the code expected. 

After the pairing, I summed up on Twitter: 

I should have known then. Software that is not maintained is not a worthwhile target for realworld testing. 

When office work the next day added to the inspiration, reminding me that results of testing don't stay valid for long even when you don't change anything, let alone when you do, I concluded: 

My search still continues for a worthwhile test target for teaching testing that does not succeed merely through the sloppiness of the org that created the target. 


Thursday, May 6, 2021

Pink Tax on Access to Agile Heroes

It is a year of celebration: 20 years since the Agile Manifesto. We see people coming together to discuss and present the time and events leading up to it, reflect on the time and events after it, and aspire to futures that allow for better than we ever had. 

Today, one of those events popped up on my Twitter timeline. 

I was cautiously excited about the idea of hearing from some of the early agile heroes who were around but not at Snowbird. Until I clicked on the link, to realize that access to my heroes, so rarely available, is a paid event, while I have heard the perspective of those who were in Snowbird amplified in large-scale free online events almost a little too much this year. 

I have two problems with this from *Agile Alliance* in particular. First of all, by paywalling this early group, they limit the public's access to it - and that access was already limited by the group not being at Snowbird. 

Second, asking this money from people like myself, who really want to hear from their agile heroes, is a form of pink tax. The pink tax refers to the broad tendency for products marketed specifically toward women to be more expensive than equivalent products marketed toward men.
You know, the idea that things for women make good business by being more expensive. Because women are not a minority; we are half of the world, and we want things that are not made for the default of people: men. And I do deeply crave to hear that people like me, my heroes, were around when I was around, even if they are missing from the visible history. 

Being a woman, you get to pay more for physical items with the excuse of production costs. You get to pay more for virtual items in games.

Agile Alliance could do better at promoting access to the early heroes who did not make it to Snowbird. 

Please note: I don't say that the people speaking are *women*. I say I am a woman. I did not check how the people on that list identify. I know only some of them.